
The Penniless Researcher

An old creative writing teacher of mine recently posted on Facebook that Iggy Pop, the famous musician, could no longer support himself with his work. He blamed this on consumers, referring to a “give-me-stuff-but-I-won’t-pay-for-it culture.” This struck me as an unfair analysis, so, along with some other readers of his Facebook posts, I looked more deeply into the issue.

The first thing that we found was that Iggy Pop has a net worth of $12 million. The issue here might be more along the lines of managing one’s money rather than not actually having enough money. The general point remained, though. Even if Iggy Pop is not actually as poor as he makes himself out to be, many artists are. Next, I tried to think of a solution that would offer artists a living wage while not taking art away from those who could not afford the prices it used to fetch before digital distribution.

The first answer was obvious: the radical divide between the rich and the poor is to blame. The middle class is the greatest consumer of affordably-priced art. If each of the six billionaire Waltons – heirs to the Wal-Mart empire – buys a book, that’s six books sold. If middle-class people holding the equivalent of the Waltons’ wealth each bought a book instead, the arithmetic is left to the reader, save to say that it’s a lot more books sold.

On another thought path, what if we could encourage art by subsidizing it? It turns out we do, with the National Endowment for the Arts (NEA), but it gets so little funding that ordinary folks like me don’t even know it exists. This led me to think: what would science be like if the National Science Foundation (NSF) were gutted like the NEA?

There would still be lucrative industry jobs, just as artists can have good careers as commercial artists, and there would be a few scientists who manage to develop something amazing, patent it, and become vastly wealthy, just like Iggy Pop. But then there would be the rest of the scientists, studying things with no direct benefit to any corporation. These scientists would likely be much like the struggling artists of today, barely making ends meet, telling themselves again and again that it’s all about “loving your work” while the roof of their cardboard-box house/personal lab caves in on them from the rain. The public would still benefit from their work. Maybe they’d get a private donation or two – enough that they could afford a new box: a big refrigerator box where they can lie down at night, and some plastic wrap to keep it from getting soggy and falling apart. Newly dry, and safe inside strong, reinforced cardboard, they’d think what a gift it is to spend every day doing what they love.

But I digress. My old creative writing teacher and I agreed that more money to the NEA could help get new artists off the ground and encourage our nation’s creativity without shutting out the less wealthy consumers. I suggested that he write a letter to his representative to make this happen, and he said that although he lived in DC and didn’t have a national representative, he had already written several to various local representatives, crediting his letters and those of others with keeping the arts program open at one of his local schools. “Oh,” I said with a start, “you’re way ahead of me.”

 

AutoEncoders

I mustn’t divulge too much detail, as I’ve been told there are some as-yet-unspecified confidentiality agreements in play, but for reasons I will keep to myself, I expect to be doing a lot of work on state-of-the-art machine learning in the coming year. Let me tell you a little bit about that state of the art, known as “Deep Learning,” and in particular about a neat little system called an AutoEncoder.

So, if we deconstruct the term “AutoEncoder,” a general idea of what it is should become relatively clear right away: an AutoEncoder automatically creates an encoding. What does this mean? Why is it important? Well, to explain, let me present this image:

We humans can encode this image in language. It is a picture of a cat in a funny position with a caption. Notice that this description, while it accurately describes the picture, does not completely describe it. There are an unlimited number of ways to describe this picture, just as the description above could be applied to an unlimited number of pictures. This is common knowledge, commonly expressed in the aphorism “a picture is worth a thousand words.”

But wait, if this description loses much of the detail of the picture, how is it useful? This is the key: when we humans encode something in words, we focus on the elements that will be most meaningful to a given context. If I’m explaining this picture to someone who has never seen a LOLcat, the above description may suffice. If I want a description that will capture the humor of the picture, it will be much, much more difficult.

Now what does this have to do with what computers can do? Computers, of course, don’t use English to encode things; they use numbers. Instead of a sentence, an AutoEncoder’s goal would be to encode the most relevant details of this image in a “vector,” which is a fixed-length list of numbers. To accomplish this, the AutoEncoder will take many, many images and, using some clever math, convert the hundreds of thousands of underlying numbers that represent each image verbatim (a very long vector) into a more manageable list of numbers (a shorter vector, maybe 200 numbers). Then, to see if it did a good job, it tries to reconstitute the images from those numbers. With more clever math, it evaluates the reconstituted images against their originals, and then it adjusts its encoding scheme accordingly. After doing this hundreds, thousands, or millions of times, the AutoEncoder, if everything went well, has a decent way of representing an image in a smaller space.
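For the technically curious, here’s roughly what that loop looks like in code. This is only a minimal sketch of the general idea, written in PyTorch by my own choice; the random stand-in “images” exist purely so the snippet runs end to end, and none of this is the confidential system I alluded to above.

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, n_pixels=784, n_code=200):
        super().__init__()
        # Encoder: the long verbatim vector -> a short 200-number code
        self.encoder = nn.Sequential(nn.Linear(n_pixels, n_code), nn.Sigmoid())
        # Decoder: the short code -> a reconstituted image
        self.decoder = nn.Sequential(nn.Linear(n_code, n_pixels), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = AutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()  # the "clever math" comparing the copy to the original

# Stand-in data: 10 batches of 32 random 784-pixel "images"
image_batches = [torch.rand(32, 784) for _ in range(10)]

for epoch in range(100):                        # many, many passes
    for images in image_batches:
        reconstruction = model(images)
        loss = loss_fn(reconstruction, images)  # how badly did we do?
        optimizer.zero_grad()
        loss.backward()                         # adjust the encoding scheme
        optimizer.step()                        # ...accordingly
```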

Note that this is different from compression. We would not want to use this as a compression algorithm because it’s generally extremely lossy, that is, the reconstructed image will be noticeably different from the original. This matches our experience using language to describe pictures.

So what is it good for? Well, remember when I mentioned context? Say we wanted to make a machine that automatically identifies LOLcats that I would find funny. I could rate hundreds of LOLcats as funny or not funny and provide this set of ratings to the AutoEncoder as a context. So, in addition to trying to accurately encode the image, the AutoEncoder now also wants to encode whether it’s a funny image or not. This context can change what the AutoEncoder focuses on in its vector. Just like you or I wouldn’t mention the beer can in the photo there, a well-constructed AutoEncoder may be clever enough to realize that the beer can is not likely to have much of an impact on how funny I find the picture, so it can leave it out.
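Continuing the sketch above, one common way to wire in such a context – again my own illustration, not a description of any particular system – is to bolt a small “funny or not” predictor onto the code and train both objectives at once. Because the same 200 numbers have to serve both jobs, they get nudged toward the humor-relevant details:

```python
class RatedAutoEncoder(AutoEncoder):
    def __init__(self, n_pixels=784, n_code=200):
        super().__init__(n_pixels, n_code)
        self.funny_head = nn.Linear(n_code, 1)  # predicts a funniness score

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), torch.sigmoid(self.funny_head(code))

model = RatedAutoEncoder()
bce = nn.BCELoss()

# Stand-in batch: 32 random "images" and my 0-or-1 funny ratings for them
images = torch.rand(32, 784)
my_ratings = torch.randint(0, 2, (32, 1)).float()

reconstruction, predicted_funny = model(images)
# One combined objective: reconstruct well AND predict my ratings well
loss = loss_fn(reconstruction, images) + bce(predicted_funny, my_ratings)
loss.backward()  # gradients flow through both goals into the shared code
```

In practice one would weight the two loss terms against each other, since how much the encoding caters to the ratings versus the reconstruction is a tuning knob.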

AutoEncoders, and deep learning in general, represent a departure from the machine learning of previous decades in that they can use context and this encoding concept to develop their own features. Before, we humans would decide how an image should be encoded, using our own ingenuity to figure out what does and does not make Sam laugh when he looks at pictures of cats. Now the computer can do it on its own, and this is a big deal for the future of computer science. As amazing as it may seem, it is conceivable that within our lifetimes a time may come when I never have to look at a boring cat again.

Whatever I Say

I often get mixed feedback when I attempt to discuss my work on this blog. Sometimes someone will praise my knowledge and communication skills, but other times people will say something along the lines of “this post was incomprehensible jargon, but nevertheless surprisingly pleasant to read.” Notice how nice my commenters have been thus far. That’s because they’ve pretty much all been relatives and close friends. WordPress, on the other hand, automatically recommends my blog to strangers based on guesses about shared interests. I’ve already gotten a follower whom I’ve never met. Hi, “Opinionated Man”!

This changes the game. I figure that with just one post a week, I should be able to make my posts consistently interesting and comprehensible, at least to the people who self-select to be in my audience. As the most interesting thing I do these days is my research (it really is very interesting, if I can get anyone to understand it), I may begin to discuss more technical topics. I could also write more about food, since that’s another relatively interesting thing I do. While food is more inherently relatable, communicating science to non-scientists is something I find particularly inspiring.

I’m not making any promises. This blog, as I like to say, is about whatever I say it’s about, so maybe next week it’ll be a description of the ever-proliferating ways I eat a single batch of bean soup, maybe it’ll be an approachable explanation of neural networks, or maybe I’ll just say something about my grandmother’s eightieth birthday and relate an anecdote from my Thanksgiving trip to Maine. It’ll have to be an interesting anecdote, though.