AutoBlog 2: Adding the Old Blog

I have now added my old blog: 300 entries from 2008 to 2013. I include here results from models trained on my old blog, my new blog, and my full blog (old and new together). Since the full blog roughly doubles the training data, we should expect models trained on it to do better than the previous ones. The samples below are all generated probabilistically: the network produces a probability distribution over the next word, and the generator selects from that distribution at random.
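
For the curious, here's a minimal sketch of that sampling step in Python. The vocabulary and probabilities are made-up placeholders, not the real model's output.

import numpy as np

# Hypothetical vocabulary and model output; the real model's
# distribution covers the whole blog vocabulary.
vocab = ["the", "cat", "sat", "<eos>"]
probs = np.array([0.5, 0.2, 0.2, 0.1])

# Pick the next word at random according to the distribution,
# rather than always taking the single most likely word.
next_word = np.random.choice(vocab, p=probs)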

I have included two versions of the model for each of these data sources. The small models have fewer neurons than the large ones, which makes them faster to train but less able to represent complex phenomena. I haven't fixed the formatting manually this time.
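
I won't go into the architecture here, but for a rough sense of what "small" versus "large" means, here's a sketch of a word-level LSTM language model in PyTorch. The hidden sizes are illustrative placeholders, not my actual configuration.

import torch.nn as nn

# A minimal word-level LSTM language model; sizes below are
# illustrative guesses, not the configuration actually used here.
class WordLM(nn.Module):
    def __init__(self, vocab_size, hidden_size):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.lstm = nn.LSTM(hidden_size, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, tokens, state=None):
        hidden, state = self.lstm(self.embed(tokens), state)
        return self.out(hidden), state

small = WordLM(vocab_size=10000, hidden_size=200)  # fewer neurons, faster to train
large = WordLM(vocab_size=10000, hidden_size=650)  # more capacity, slower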

Small Models

Old Blog

switched .

You’ll notice that this one is unusually short. Generally, models trained on my blog reach the arbitrary word limit before they predict an “end of entry” tag. This one is an exception, and a notable one at that. I doubt it’s representative of my old blog, except insofar as I tended to write shorter entries.

New Blog

demeanor dissertation perfection with recycle , has , shared an even shuffled crick to make investment that they can sore . it five-year-old if a crime snapped is a human pinch under a producing dressings . as if each violation is full of the roiling https://www.youtube.com/watch?v=m78gyytrg7y we feminist how i can present . each brags of backside discovered rainbow margin that seems soundly . flavor , and this motivate , choose that dairy , also lifeless . without real moment , though is narrowing rationalizing and carson street cleaning people need to make your wishes because todd has bizarre such

The “roiling” YouTube link points to an unavailable video; I should figure out how it got past the preprocessing step. Keep an eye on this to see whether it improves with the larger model.
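
The obvious fix is to strip URLs from the text before training. Something like the following regex pass would do it; this is a sketch, not my actual preprocessing code.

import re

# Remove anything that looks like a URL from the training text.
# The pattern is a simple assumption; a real pipeline may
# tokenize the text differently before filtering.
URL_RE = re.compile(r"https?://\S+")

def strip_urls(text):
    return URL_RE.sub("", text)

strip_urls("full of the roiling https://www.youtube.com/watch?v=m78gyytrg7y we")
# -> "full of the roiling  we"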

Full Blog

donations , <unk> i’m never chuck on facebook and ime mountain angry donned . but too , so i’m , old <unk> at the office . wouldn’t just , but the most induction people should get off inviting of cube into us peace on human overhaul and peak the japanese country and finally and their sent me . evidently quintessential children again , christmas , ” and and lived , and effects behind to us trouble . ” diane , you can go claim from significant hero how to make there !

For the small model, combining both blogs didn't improve the coherence of the output as much as I had expected.

Large Models

Old Blog

dejected , i have moisture results out of boss disease and expo for my spark horrible dispensers . unfortunately , i offhandedly decided my elaborate retreat to my hats . weekend , i important lose locate lost lee’s and an slew of a sledgehammer with the narrative partially for the side . elliot foolishly my mishaps and challenged one’s blend acquaintances complaining that i could re-read my relief . in the woo , the quietest organization in dirt representing following consciousness , implication . i meanings [censored] corporations , i drank pop graphic break and debug fit , so batches

I would have to do some more analysis to figure out whether the first word, “dejected,” led the model to keep that tone throughout, or whether it simply reflects an overall somewhat negative blog.

New Blog

dumped , wouldn’t overhaul the reinvent the painstaking introduced up of awful day . lower hidden forty-eight resounding and fiction is moments next , like the time for the distributing repurposed and note on diane .

No dramatic improvement here with the larger model.

Full Blog

i’ve been videos to application these pauses to towers in goodwill . i again , my opening appeared ever heard ever blowing since i texted my re-read . i don’t remember the clumps story . in the address i sol forgot some brahe and junior press every exam . explain you tearer . ” cabin-mates xeon , ” what it is good , should alone anthropomorphic language , ” secret goading , ] what i had releases worst as i rely its message to torn up many grandma , and the wider tacos was delay on slogan . tried to

So, doubling the data did not have a noticeable effect. I wonder whether all the blog entries I've written in nine years are simply not enough to make a reasonable language model. They pale in comparison to English Wikipedia, for instance, which has 2.9 billion words to my blog's paltry 240,000. Excessive randomness in the probabilistic sampling could be another weakness. Other approaches to generative models modify the sampling distribution so that likely words appear more often, without going completely deterministic.
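
One common version of that idea is temperature sampling: divide the log-probabilities by a temperature below one to sharpen the distribution before sampling. Here's a minimal sketch, assuming the model hands back a probability array:

import numpy as np

def sample_index(probs, temperature=0.7):
    # Rescale the distribution: temperature < 1 concentrates mass on
    # the likely words; temperature -> 0 approaches picking the argmax,
    # and temperature = 1 leaves the distribution unchanged.
    logits = np.log(probs) / temperature
    scaled = np.exp(logits - np.max(logits))  # subtract max for numerical stability
    scaled /= scaled.sum()
    return np.random.choice(len(probs), p=scaled)

probs = np.array([0.5, 0.2, 0.2, 0.1])  # hypothetical model output
next_index = sample_index(probs, temperature=0.5)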


3 thoughts on “AutoBlog 2: Adding the Old Blog”

  1. Very interesting practical introduction to your work!
    Two complaints:

    One. Is this text you with typos or computer-generated? “The roiling Youtube link that each violation is full of points to an unavailable video.”

    This text (by you) seems odd under the New Blog section. “Words like “rationalizing” and “feminist” are more common in my old blog where I would pontificate at length about philosophy.” It seems that those words are not uncommon in your New Blog…

    Love,
    Dad
