
AutoBlog 2: Adding the Old Blog

I have now added my old blog: 300 entries from 2008 to 2013. I include here results from models trained on my old blog, on my new blog, and on my full blog (old and new together). Since the full blog roughly doubles the training data, we should expect it to do better than the previous models. The generated entries below are all probabilistic, meaning the network produces a probability distribution over the next word and the generator selects from it randomly according to that distribution.
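If you'd like to picture that selection step concretely, here's a minimal sketch of it. The words and probabilities are made up, and this stands in for whatever the real generator does internally:

```python
import numpy as np

# Hypothetical next-word distribution produced by the network at one step.
vocab = ["the", "cat", "sat", ".", "<unk>"]
probs = [0.40, 0.25, 0.20, 0.10, 0.05]

# Probabilistic selection: pick a word at random, weighted by its probability,
# so "the" comes up about 40% of the time, "cat" about 25%, and so on.
next_word = np.random.choice(vocab, p=probs)
print(next_word)
```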

I have included two model sizes for each of these data sources. Small models have fewer neurons than large models, which makes them faster to train but less able to represent complex phenomena. I haven't fixed the formatting manually this time.

Small Models

Old Blog

switched .

You’ll notice that this one is unusually short. Generally models trained on my blog get to the arbitrary word limit before they predict an “end of entry” tag. This one is an exception, and a notable one at that. I doubt it’s representative of my old blog, except that I tended to write shorter entries.

New Blog

demeanor dissertation perfection with recycle , has , shared an even shuffled crick to make investment that they can sore . it five-year-old if a crime snapped is a human pinch under a producing dressings . as if each violation is full of the roiling https://www.youtube.com/watch?v=m78gyytrg7y we feminist how i can present . each brags of backside discovered rainbow margin that seems soundly . flavor , and this motivate , choose that dairy , also lifeless . without real moment , though is narrowing rationalizing and carson street cleaning people need to make your wishes because todd has bizarre such

The “roiling” YouTube link points to an unavailable video. I should figure out how that got past the preprocessing step. Keep an eye on this, and we can see whether it improves when I apply a larger model.

Full Blog

donations , <unk> i’m never chuck on facebook and ime mountain angry donned . but too , so i’m , old <unk> at the office . wouldn’t just , but the most induction people should get off inviting of cube into us peace on human overhaul and peak the japanese country and finally and their sent me . evidently quintessential children again , christmas , ” and and lived , and effects behind to us trouble . ” diane , you can go claim from significant hero how to make there !

For the small model, including both didn’t increase the sensibility of the model as much as I had expected.

Large Models

Old Blog

dejected , i have moisture results out of boss disease and expo for my spark horrible dispensers . unfortunately , i offhandedly decided my elaborate retreat to my hats . weekend , i important lose locate lost lee’s and an slew of a sledgehammer with the narrative partially for the side . elliot foolishly my mishaps and challenged one’s blend acquaintances complaining that i could re-read my relief . in the woo , the quietest organization in dirt representing following consciousness , implication . i meanings [censored] corporations , i drank pop graphic break and debug fit , so batches

I would have to do some more analysis to figure out whether the first word, “dejected,” led the model to keep that tone throughout, or whether it simply reflects an overall somewhat negative blog.

New Blog

dumped , wouldn’t overhaul the reinvent the painstaking introduced up of awful day . lower hidden forty-eight resounding and fiction is moments next , like the time for the distributing repurposed and note on diane .

No dramatic improvement here with the larger model.

Full Blog

i’ve been videos to application these pauses to towers in goodwill . i again , my opening appeared ever heard ever blowing since i texted my re-read . i don’t remember the clumps story . in the address i sol forgot some brahe and junior press every exam . explain you tearer . ” cabin-mates xeon , ” what it is good , should alone anthropomorphic language , ” secret goading , ] what i had releases worst as i rely its message to torn up many grandma , and the wider tacos was delay on slogan . tried to

So, doubling the data did not have a noticeable effect. I wonder whether even all the blogging I've done in nine years is enough to make a reasonable language model. It does pale in comparison to English Wikipedia, for instance, which has 2.9 billion words to my blog's paltry 240,000. Excessive randomness in the probabilistic model could be another weakness. Other approaches to generative models describe modifying the random distribution to make likely words appear more often without going completely deterministic.
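One common version of that trick is usually called temperature: rescale the distribution so probable words get even more probable, then sample as before. Here's a rough sketch, with an illustrative temperature value I haven't actually tuned:

```python
import numpy as np

def sample_with_temperature(probs, temperature=0.7):
    """Sharpen (temperature < 1) or flatten (temperature > 1) the next-word
    distribution before sampling, so likely words show up more often
    without the generator becoming fully deterministic."""
    logits = np.log(np.asarray(probs) + 1e-12) / temperature
    scaled = np.exp(logits - logits.max())
    return np.random.choice(len(probs), p=scaled / scaled.sum())

# With temperature 0.7 the top word here is chosen more often than its
# raw 40% probability would suggest, but the output still varies.
idx = sample_with_temperature([0.40, 0.25, 0.20, 0.10, 0.05])
```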

The AutoBlog

Ladies and Gentlemen,

This week, I would like to introduce the amazing blog-writing computer machine. This machine is based on a recurrent neural network (RNN), which is a machine learning algorithm that looks at one input at a time, remembering what it has seen before when it looks at the next input.
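If it helps, here is the core of that idea in a few lines of toy code. The sizes are made up and the weights untrained; a real RNN has more machinery, but the "remembering" part is just a hidden state that gets updated at every step:

```python
import numpy as np

hidden_size, input_size = 8, 5
rng = np.random.default_rng(0)
W_xh = rng.normal(0, 0.1, (hidden_size, input_size))   # input -> hidden
W_hh = rng.normal(0, 0.1, (hidden_size, hidden_size))  # hidden -> hidden (the memory)

def rnn_step(x, h):
    # Mix the current input with what the network remembers so far.
    return np.tanh(W_xh @ x + W_hh @ h)

h = np.zeros(hidden_size)
for x in rng.normal(size=(4, input_size)):  # four inputs, seen one at a time
    h = rnn_step(x, h)                      # h now summarizes everything seen so far
```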

To use this to analyze text, one can build the RNN to predict the next word based on the previous words. That is, if it sees “A microcosm of sorrow is me” it might predict that the sentence is over and needs a period, or perhaps an exclamation point. This is known as a language model. To make language, we first train a language model as above on existing text, then we turn it into a generative language model. A generative language model, when it predicts the next word, then reads that word back in as input and predicts what would come after it.

Say we start with “The.”

The network would see it and decide “cat” is the most likely next word. Then we have “The cat.” The network then looks at “cat,” remembering that it saw “The” earlier, and predicts “sat,” giving us “The cat sat.”  This is the deterministic version, which always selects the most likely word and therefore will always give the same result. Soon I will explain how we can generate a variety of blog entries.
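In code, the deterministic loop looks something like the sketch below. The `predict_distribution` function here is a toy stand-in for the trained network, included only so the example runs on its own:

```python
import numpy as np

vocab = ["the", "cat", "sat", "on", "mat", "."]

def predict_distribution(words):
    """Toy stand-in for the trained RNN: returns a next-word distribution
    based (here) only on the most recent word."""
    table = {"the": "cat", "cat": "sat", "sat": "on", "on": "the"}
    probs = np.full(len(vocab), 0.02)
    probs[vocab.index(table.get(words[-1], "."))] = 1.0
    return probs / probs.sum()

# Deterministic generation: always take the single most likely word,
# then feed it back in as the next input. Same start, same output, every time.
words = ["the"]
for _ in range(4):
    words.append(vocab[int(np.argmax(predict_distribution(words)))])
print(" ".join(words))  # "the cat sat on the"
```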

My AutoBlog has a vocabulary of 10,000 words. Words not in its vocabulary it calls <unk>. It also ignores capitalization. Its entire understanding of English is based exclusively on the 177 entries from my new blog, so please keep that in mind when you read it. Next week I’ll add the 299 posts from my old blog and see how it improves.
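That vocabulary step is just bookkeeping; here is a sketch of roughly how it works (the 10,000 cutoff is the only number here that comes from the real setup):

```python
from collections import Counter

def build_vocab(texts, size=10000):
    """Lowercase everything and keep the `size` most common words."""
    counts = Counter(word for text in texts for word in text.lower().split())
    return {word for word, _ in counts.most_common(size)}

def tokenize(text, vocab):
    # Any word the model has no slot for becomes <unk>.
    return [w if w in vocab else "<unk>" for w in text.lower().split()]

vocab = build_vocab(["The Cleaners came to my door .", "I was not going to answer ."])
print(tokenize("The Cleaners brought a xylophone .", vocab))
# ['the', 'cleaners', '<unk>', '<unk>', '<unk>', '.']
```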

This is the deterministic version of my autoblog. I have cleaned up the capitalization and removed unnecessary spaces. There's also a bug in my preprocessing that caused some ‘ characters to show up as “, which I've corrected after the fact. I have also cut the entry off early, as honestly this AutoBlog is not good enough yet for me to ask people to read 1,000 words of it.


Deterministic AutoBlog

“I suppose this has gone on my computer.” I didn’t have a very good, but i just like this time. “We were very glad, Diane.” Said the Caretaker, but I tried to get it into the air, but I really have it up the <unk>. I tried to get the conversation. I took up to this time and pulled up in the chair so i could have to put my own phone to get a picture of my mind. In the meantime, I was not to keep the paper on my computer, and i couldn’t bring it up. I was so glad to look at the door, I realized that had very good. I had to admit that the Cleaners came up a <unk> at the door. He also gave me the whole time , and I said we should make a particularly argument. I had a lot of relief with two of the time. It was so bad that I had not tasted <unk> it . “I started to have this way to get out of you for my blog. “I was not going to my face. In an case I was not to tell my stories. I felt so <unk> that I was trying to look at it. I wasn’t going to make it “S voice , but I’m going to get the lock out of it in the house . I stopped behind him in the <unk>, but it’s just there. I had a lot of it . “Oh, I know I know the most of the most of the most time I have ever tasted. ” “You have never been free,” said Henry.


This next entry is generated randomly by sampling from the distribution of possible words. Each candidate for the next word has a probability of occurring, and it is selected with exactly that probability. You should notice right away the greater variety in this entry. There are a few instances where markup made it into the training data, and I've applied that markup as it would appear in the blog. A few words are italicized because of markup that the autoblog applied to them. Also, I apparently swear in my blog, because at least once my Twitter censor was activated. I was originally using this model on Twitter data and showing it off at work, so naturally I would want to have a censor. Don't ask what got censored. I don't know.


Stochastic AutoBlog

Carrots, and deadline recurring touch audition the first times generate code very sat, we had a cheerier of disgusting ringbearer on the danger, and I was screams about the ramping which machine copper on the receipts part of each room. I dragging gazebo, and knit harmless quittings into my enigmatic plant, but parent educated arrival everybody has a () resides to better was . Receding I’m despair <unk> my ornamentation and Pelor made my [censored] to powder” Resume it is never miles, “slowly paunch opener. Got my repurposed and autocomplete for her main yard.” Let me using you in the second line, customers?” I called. “It given mayonnaise you laughing trouble recordings a you 500 services.” Hyland was designed to endure technological in Mike’s, but surprised that oratory checked it was mean if it was usual or who was AC or but largest relented was casual, but that assured a bets to retains returning . If you Kohen’s clustered missing condition, address?” Salem a 0.01% spaces necessary from the dislodge working like the freshly time relationships his eezzal 1.1 a brother’s 1280. Selection, Henry’s globe is a essentially informal poem not a portends, yet. They just have crunched right watching the 20, playing eezzal particularly Brad that the 1278 numbered saturated violence on Amazon sort. the Anti-cleaners mamas that continue treat the emotionally would solids for turn to tell their being link.


Let's start with the successes. “‘You have never been free,’ said Henry,” does not occur anywhere in the training data. I was so impressed by that line that I checked to be sure. There's a lot of good placement of open and close quotes in the deterministic version. 177 blog entries is really a very small training set, so with more training data we can expect to see more improvement. One question I have is whether we can use training data from other sources to help inform my autoblog without diluting the style, or at least while minimizing dilution.

Now we can address the elephant in the room. My language model doesn't generate much that makes sense. This places it in the same echelon as the “Sunspring” script. Google has managed to make sensible translations from one language to another using RNNs, but coming up with an idea and communicating it in a way people will understand is not something that computers can do yet. Really, I think it won't be so hard. It's just a simple matter of <unk>.

<end of entry>

The Cleaners: Part 2

Continued from The Cleaners: Part 1

“I don’t see why they had to do that to Rob. He was such a nice boy.” Carla was still blissfully unaware that she was speaking to the person who instigated the attack. I tried to keep my tone even, “He wasn’t a boy, Carla, he,” I growled in frustration at being tricked into using the wrong pronoun, “it was a machine. Doesn’t it bother you the information it was keeping on you?”

“Well, that’s how he was so effective, right? You bring me peanut butter cookies every Christmas, do you think I should be mad at you for remembering they’re my favorite?”

“I didn’t publish that you like peanut butter cookies online, and besides that’s not the problem, who cares about peanut butter cookies? These are secrets they published!”

“It’s no secret I like the robots. They’re hot. Those muscles, and the glitter? It’s like I’m living in my own Twilight fantasy.”

I did not want to hear about my neighbor’s Twilight fantasies. “What about your husband’s secret box of clothing?”

“Pssh,” Carla made clear she could not be less concerned, “He thinks I didn’t know about it, but I did.” Then Carla became conspiratorial, “Hey, who do you think had him whacked?”

“Oh, I just figured it was a sort of a spur-of-the-moment thing,” the lie rolled easily off my tongue. I was thrilled when I realized nobody had happened to see the mob standing at my door before it happened. I’d peeked out my window after the event and saw its body on the sidewalk in front of the house next door. It looked even less human than before, covered in dents and bent at strange angles, its screen busted and black. Less than an hour later, “William” had knocked on my door and apologized for the delay. I suggested it should apologize that the delay wasn’t longer, and when it donned an “o_O” face and asked why, I shooed it away.

I watched the court battle on TV. The Cleaners had hired a human lawyer to represent them, at least that’s what it looked like. It was surprising to see no one from Cleaner corporate in the court. Who is in charge of this mess, I wondered. The cleaners apologized profusely for betraying their customers’ trust and explained that when they had been asked to make their communications public it hadn’t occurred to them that some communications should have been kept private. Now no personal information would be translated and put on the human-readable boards. The Anti-cleaners’ lawyer pointed out that it wasn’t enough to apologize and try to do better, people were hurt when they released that information.

I had been reading the Anti-cleaner boards, and we had gotten lucky with the judge. This was one of the most ardent pro-privacy judges in all of Pennsylvania, and although he was just now learning about the Cleaners, it was clear that he was not in any mood to give them a second chance. The fact that it even went to trial meant that a settlement could not be reached, which the Anti-cleaners forum said was due to a combination of stolid insistence from the Anti-cleaner lawyer and building anger from the judge. One cleaner had somehow thought it would be a good idea to offer to clean the judge’s house, and it was in jail now for attempted bribery, which seemed absurd even to many of the Anti-cleaners. “That’ll be the cleanest jail cell in America,” quipped one, and another said, “wait ’till we find out all the secrets of the inmates and jailers.”

Finally, the judge admitted that he did not have the power to do anything more than award a huge penalty to the victims of the case for emotional damages. The maximum penalty for emotional damages was one million dollars, so Wanda Black and Helen Carson were each awarded $250,000 for their pain after lawyer’s fees. Janis was mostly just the victim of rudeness, so she wasn’t eligible. Helen insisted that she was suing for libel, not for violation of privacy. “My little Marty is as real as any boy can be,” she insisted, without providing any particular evidence to prove this to be the case. Carla confessed to me that she wished she had pretended to be more upset if it meant she’d get $250,000.

In the meantime, William kept knocking on my door every two weeks. A “no solicitors” sign didn’t seem to do any good, either, since technically they were offering a free service that would lead to a pay service later. Since no law had been enacted to prevent this phenomenon of “Robo-knockers” as some people were calling them, there was no legal recourse to keep them from knocking. The Anti-Cleaners weren’t dissuaded, though. On their site I read their next plan, “We’re going to Congress. If there are no legal protections, we’re going to make some.”

Deep Learning with SAS

I don’t know if you’ve seen SAS’s campus. It’s a collection of enormous glass buildings. Abstract art greets you throughout the grounds. Outside the S-building stands a thirty-foot structure of red pipes bent at 90- and 45-degree angles, and inside is what looks a bit like the cross-section of a cube. Looking out a window, one can just see the top of another big glass building over a copse of conifers.

As of the Friday before last, this is the organization that offers the funds that provide my stipend and pay for my tuition. Leaving the Leonardo project happened so subtly that my team and I all forgot to have some sort of commemoration ceremony. Last Friday I stood up from my desk, shook my team leader’s hand, and told him it was “good working with him”; then he said we should arrange one last team celebration. Our co-workers joked that this might be like the going-away parties in The Godfather, and I recommended we make sure to eat at a popular, well-lit restaurant.

Now my way is paid to work on deep learning for language. Really, I couldn’t imagine a better fit to my interests. Of course I’m interested in Natural Language Processing, and my zeal for deep learning is such that I need to actively temper it to avoid poisoning conversations by implying to other researchers that all the techniques they’ve been using are outdated and soon to be obsolete. Now I get to work with a group of people to put my money where my mouth is and actually make something revolutionary, or at least useful.

Since we’re just starting, right now I’m reading papers about deep learning language techniques. I’ve found twenty-five papers over the last three years in the small set of conferences that I’ve checked. There’s an awful lot of interest in the domain of machine translation, but my favorite paper thus far has taken a sentiment analysis approach to identifying ideological biases in written text. With deep learning, it is able to understand that “the big lie of ‘the death tax’” is ideologically liberal, whereas an old-style system would take words two or so at a time and likely see “death tax” and think conservative.

I spoke with my new team leader, Brad, about using the big, fancy computer they’ve offered me for my personal research. He said that would not be a good idea, as it would complicate the ownership of whatever research I produced. “If you want a bigger computer,” he said, “I’ve just got one laying around that nobody’s using. I can get that to you within a week.”

Things are going pretty well.

AutoEncoders

I mustn’t divulge too much detail as I’ve been told there are some as yet unspecified confidentiality agreements going on, but for reasons I will keep to myself, I expect to be doing a lot of work on state-of-the-art machine learning in the coming year. Let me tell you a little bit about the state of the art of machine learning, known as “Deep Learning,” particularly a neat little system called an AutoEncoder.

So, if we deconstruct the term “AutoEncoder,” a general idea of what it is should become relatively clear right away: an AutoEncoder automatically creates an encoding. What does this mean? Why is it important? Well, to explain, let me present this image:

We humans can encode this image in language. It is a picture of a cat in a funny position with a caption. Notice that this description, while it accurately describes the picture, does not completely describe it. There are an unlimited number of ways to describe this picture, just as the description above could be applied to an unlimited number of pictures. This is common knowledge, commonly expressed in the aphorism “a picture is worth a thousand words.”

But wait, if this description loses much of the detail of the picture, how is it useful? This is the key: when we humans encode something in words, we focus on the elements that will be most meaningful to a given context. If I’m explaining this picture to someone who has never seen a LOLcat, the above description may suffice. If I want a description that will capture the humor of the picture, it will be much, much more difficult.

Now what does this have to do with what computers can do? Computers of course don’t use English to encode things, they use numbers. Instead of a sentence, an AutoEncoder’s goal would be to encode the most relevant details of this image in a “vector,” which is a fixed-length list of numbers. To accomplish this, the AutoEncoder will take many, many images and, using some clever math, convert the hundreds of thousands of underlying numbers (a very long vector) that represent the image verbatim into a more manageable list of numbers (a shorter vector, maybe 200 numbers). Then, to see if it did a good job, it tries to reconstitute the images from the numbers. With more clever math, it evaluates the reconstituted images against their originals, and then it adjusts its encoding scheme accordingly. After doing this hundreds, thousands, or millions of times, the AutoEncoder, if everything went well, has a decent way of representing an image in a smaller space.
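For the curious, here is a bare-bones numerical sketch of that encode, decode, compare, adjust loop. The sizes are made up, random numbers stand in for images, and the "clever math" here is just plain gradient descent on the squared reconstruction error:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((256, 784))              # stand-in for a batch of flattened images
W_enc = rng.normal(0, 0.01, (784, 32))  # long vector -> short vector (the encoding)
W_dec = rng.normal(0, 0.01, (32, 784))  # short vector -> long vector (the decoding)

learning_rate = 0.1
for step in range(500):
    code = np.tanh(x @ W_enc)           # encode each image as 32 numbers
    recon = code @ W_dec                # try to reconstitute the image from them
    err = recon - x                     # how far off is the reconstruction?
    # Nudge both weight matrices to shrink that error, then repeat.
    grad_dec = code.T @ err / len(x)
    grad_enc = x.T @ ((err @ W_dec.T) * (1 - code**2)) / len(x)
    W_dec -= learning_rate * grad_dec
    W_enc -= learning_rate * grad_enc
```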

Note that this is different from compression. We would not want to use this as a compression algorithm because it’s generally extremely lossy, that is, the reconstructed image will be noticeably different from the original. This matches our experience using language to describe pictures.

So what is it good for? Well, remember when I mentioned context? Say we wanted to make a machine to automatically identify LOLcats that I would find funny. I could rate hundreds of LOLcats as funny or not funny, and provide this set of ratings alongside the AutoEncoder as a context. So, in addition to trying to accurately encode the image, the AutoEncoder wants to encode whether it’s a funny image or not. This context can change what the AutoEncoder focuses on in its encoding. Just like you or I wouldn’t mention the beer can in the photo there, a well-constructed AutoEncoder may be clever enough to realize that the beer can is not likely to have much of an impact on how funny I find the picture, so it can leave it out.
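Continuing the sketch from above, giving the AutoEncoder that extra job is mostly a matter of adding a second term to what it is trying to minimize. The shapes and the 0.5 weighting below are illustrative choices of mine, not anything from a particular system:

```python
import numpy as np

def combined_loss(x, is_funny, W_enc, W_dec, w_cls):
    """Reconstruction error plus a 'does Sam find it funny?' error, so the
    32-number code is pushed to keep humor-relevant detail around."""
    code = np.tanh(x @ W_enc)
    recon_error = np.mean((code @ W_dec - x) ** 2)    # describe the image
    p_funny = 1 / (1 + np.exp(-(code @ w_cls)))       # use the code to guess funniness
    context_error = np.mean((p_funny - is_funny) ** 2)
    return recon_error + 0.5 * context_error

rng = np.random.default_rng(0)
x = rng.random((16, 784))                 # 16 stand-in images
labels = rng.integers(0, 2, 16)           # hypothetical funny / not-funny ratings
W_enc = rng.normal(0, 0.01, (784, 32))
W_dec = rng.normal(0, 0.01, (32, 784))
w_cls = rng.normal(0, 0.01, 32)
print(combined_loss(x, labels, W_enc, W_dec, w_cls))
```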

AutoEncoders and deep learning in general represent a departure from the machine learning of previous decades in that they can use context and this encoding concept to develop their own features. Before, we humans would decide how an image should be encoded, using our own ingenuity to figure out what does and does not make Sam laugh when he looks at pictures of cats. Now the computer can do it on its own, and this is a big deal for the future of computer science. As amazing as it may seem, it is conceivable that within our lifetimes a time may come that I never have to look at a boring cat again.

DogeCoin!

It’s one of those things that has to be said with an exclamation point. Those of you who have not heard of cryptocurrency are probably wondering what a dogecoin is, and those of you who have heard of cryptocurrency are probably slapping your foreheads and wondering what I’m thinking and why I’ve gotten myself wrapped up in one of these ridiculous things that have flooded the speculative commodity market. 

To the former group, a cryptocurrency is a currency that exists entirely on the Internet. An individual keeps a little data on his or her computer and he or she gets access to a wealth of, well, wealth. Each of these currencies is distributed to people according to various mechanisms and then spreads around the world, much like any other good or money, via trade and gifts. These cryptocurrencies have become exceedingly popular, and now hundreds of them are in existence, each one slightly different from the others. That’s essentially all you need to know about cryptocurrency to get the general idea.

What makes DogeCoin stand out from its ilk is its community. Dogecoin is based on the “Doge” meme.

This is a specialized member of the animal caption family of memes involving a particular shiba inu making a strangely distrustful expression. Much like the LOLcat, the Doge features a particular made-up dialect unique to itself. “Wow” begins many sentences of Doge-speak, generally followed by a vague emphasizer (“much,” “very,” “so”) and a word that does not grammatically fit that emphasizer (“much successful!” “very altruism!” “so scare!”). One commenter has provided a link to a more detailed linguistic analysis of Doge-speak. The doge face itself has been reproduced in a vast array of different forms.

Why does the fact that DogeCoin is deliberately goofy in a relatively well-defined way make it a more valuable commodity? Simply put, it’s fun. The people attracted to DogeCoin are not just intimidating high-stakes traders, die-hard libertarians, and the impenetrable cryptography geek community; anyone with a computer and an appreciation of silly pictures of animals could be coaxed into becoming a “shibe” (pronounced “Sheeb” or “Shibay”), a member of the DogeCoin community.

As an owner of a DogeCoin account, I recently accepted 150,000 DogeCoin from my roommate Nate as collateral for a loan of $200. When Nate paid me back, I announced on the DogeCoin subreddit (a forum for DogeCoin enthusiasts) that I had just completed the first recorded DogeCoin-backed loan. A couple of days later, I had received forty-four comments and over 200 DOGE in “tips,” which are an easy way to give small amounts of DogeCoin to posts that one appreciates on the DogeCoin subreddit. Currently a DogeCoin is worth approximately a tenth of a penny, so that’s twenty cents.

That’s not the point, though. The reason that DogeCoin is valuable is because DogeCoin doesn’t have to be valuable. It’s the first cryptocurrency to have a community that likes it for more than just the money they could supposedly make from it. At one tenth of a cent per coin, DogeCoin has inspired my roommate to make a service to sell people Robusta coffee beans for DogeCoin, and it inspired my other roommate to buy a collection of high-end computing hardware and run a process to get him DogeCoin. If you remember the last post of the “The Cold Apartment” post series, the purpose of the rig that was heating J’s room was to mine DogeCoin. It inspired me to write this post to explain the phenomenon. DogeCoin also inspires people to do good, spawning the “DogeCoin Foundation,” which shortly after its creation scrabbled together enough funds to send the Jamaican bobsled team to the winter Olympics. If you’re ready to be inspired, here’s a video to confuse the heck out of you:

To The Moon!

The Future of Education: Part 2 – Nightmares

“A Teacher Gets Depressed” isn’t specifically about technology in the classroom, but does speak to the problems of exclusively using automatic evaluation to judge the quality of schools and teachers. Also it references a nightmare. Click the image to see the whole comic.

When I spoke with my old teacher in my post on positive outlooks for education, he expressed concern at being replaced by technology. I told him that no technology would be invented that could reproduce the growth he spurs in students until long after his retirement, if ever. Teaching is a social vocation, and technologies for performing even the most simple social jobs are still in their infancy. Teaching is not a simple job, and attempting to remove the human factor from education at this point is likely to do more harm than good.

But what is a mad scientist, if not one who releases upon the world a new technology that turns out to cause more harm than good? Our collective body of fiction is rife with people who think they know more than they do and cause immense suffering as a result. The most famous of these mad scientists, Dr. Victor Frankenstein, reversed the course of death itself, but without considering the ramifications of his actions. The fruit of his life’s labor turned out to be a wretched, ugly creature even its creator could not bring himself to love.

While raising the dead may be a bit of an overstatement in terms of the risks of advancements in education, the allegory of the genius who does not consider the consequences of his actions is an appropriate one. In the theoretical future, an aggressive reductionist approach to education based on the theory that a child’s growth can be fully represented by his or her score on then-available automatic testing technology could become an educational Frankenstein’s monster, causing more problems than it solves.

If we were to measure school performance only by the results these technologies provide, and allocate funds accordingly, then the skills and qualities the tests don’t measure (and even in the near future the tests will not be perfect measures of everything) would inevitably lose attention in favor of the ones they do measure. Perhaps in the near future we will have the ability to measure skills and qualities like creativity, ability to work well in a group, self-confidence, and civic responsibility, but if we don’t, schools will no longer have an incentive to maintain, and will therefore lose, their ability to foster these skills and qualities in our nation’s youth.

I am a strong proponent of technologies in the classroom. I also believe that the more data we can collect on the process and results of education the more we can use to help advance our goal of a well-educated population. The ability to bestow life on a lifeless being is also a scientific advancement that could do wonders for the world, but before we rush ahead, we should consider whether our technologies are ready for the tasks we will be counting on them to perform.