Thursday, August 1, 2013

Singularities happen all the time

We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding. -- Vernor Vinge, 1983
The technological singularity is supposed to occur when we develop "true" or "strong" AI.  Beyond that point, we are told, everything will be different, in the most conveniently vague ways.  Perhaps society will run on communism, or anarcho-capitalism, or something we don't have a name for (in other words, whatever the author happens to think would be ideal).  We are told that the resulting society will be totally incomprehensible to those of us still living in the present day.

My reaction to all this can be summed up in two words: "So what?"

When Charles Babbage wrote about the Analytical Engine, could he have foreseen the internet?  For that matter, could he have foreseen the processing power of a single modern CPU?  Would his contemporaries be able to comprehend the world we live in today?

"Singularity," so far as I can tell, appears to mean "a point in history beyond which predictions are highly unreliable or entirely inaccurate."  It seems to me that almost anything we refer to as a "revolution" should fit this bill.  The scientific revolution, the industrial revolution, all the way back to the neolithic revolution.  All of these revolutions wrought enormous change and upheaval, and none of them would seem to have admitted many accurate predictions.  It is unclear to me how the alleged singularity is supposed to differ from them.

Yet many believers get extremely worked up about it.  Certainly Vinge, above, was rather excited at the prospect.  I must admit, the notion of strong AI isn't exactly boring to me, but I fail to understand the broader obsession.  Many of the people supposedly championing this idea have proceeded to predict how it will turn out, despite its (alleged) inherent unpredictability.

The other issue, of course, is that we've known for years now that Moore's law just isn't going to cut it.  AI is not a matter of throwing lots of cycles at non-AI-hard problems and the computer somehow "waking up."  Intelligence does not need to be designed, as evidenced by human evolution, but nor will it appear ex nihilo.  If not explicitly designed, it must be selected for.  Humans are not smart because we have a large lump of gray matter in our heads.  We are smart because that gray matter is organized in a very particular way.  You will not get an AI out of print "hello world!", no matter how many cores you run it on.  Machine learning may be helpful here, but it is not a silver bullet.
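To put that joke in concrete terms, here is a minimal (and deliberately silly) Python sketch: "hello world" farmed out across eight worker processes is still just "hello world".  The process count and the function name are arbitrary illustrations, not anything from a real AI system; the point is that adding cores scales the work, not the organization.

    # "hello world" run on many cores is still "hello world".
    # Nothing about the parallelism changes what the program does.
    from multiprocessing import Pool

    def hello(i):
        return "hello world!"  # no amount of parallelism makes this think

    if __name__ == "__main__":
        with Pool(processes=8) as pool:        # pretend these are 8 "cores"
            results = pool.map(hello, range(8))
        print("\n".join(results))              # eight greetings, zero insight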

We're nowhere near a functioning strong AI.  For that matter, we're nowhere near defining the term "strong AI" in a widely accepted way.  There are certain setups which would probably qualify under any reasonable definition (such as whole brain emulation), but those tend to be the most complex and least feasible designs (no one is going to simulate the physics of individual neurons if they can avoid it, but it's unclear how much "resolution" we can safely give up), and, for that matter, the least interesting ones (yes, a brain in a computer is just as smart as a brain in a human, but who cares?).

So far as I can tell, almost all features of the AI are up for grabs.  Should it pass the Turing test?  What about an IQ test?  Are any of these conditions sufficient, or is it necessity all the way down?  What if we're just "teaching to the test?"  Maybe we're just making a program good at passing tests and bad at actually thinking for itself (and what does it mean for the program to "think for itself," anyway?  I've certainly never had occasion to ask the Python interpreter its opinion on executing a given piece of code).  How can tests alone distinguish between "good at tests" and "smart?"  And if we're not to use tests, how can we apply machine learning to any of this (or do we propose to hand-code this AI)?  I've never seen a proper definition answering these questions.
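In the spirit of that aside about the Python interpreter, a small sketch (the strings here are made up purely for illustration): the interpreter will compile and execute whatever well-formed code it is handed, which is about as far from "thinking for itself" as it gets.

    # The interpreter passes no judgment: hand it any well-formed code and it
    # compiles and runs it, with no "opinion" about whether doing so is wise,
    # interesting, or intelligent.
    source = "print('executing, as instructed, with no opinion on the matter')"
    program = compile(source, "<no-opinion>", "exec")
    exec(program)  # runs unconditionally; nothing here resembles deliberation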

I don't doubt that we will eventually develop something which most of us will probably agree qualifies as strong AI.  I don't doubt that some rather interesting, even world-shaking, consequences will result.  I do doubt that it will happen any time in the immediate future.  I do doubt that it will resemble the utopia many authors seem to anticipate.  And I most sincerely doubt the value of telling everyone we're on the verge of developing a magical AI that will make everything perfect when we've no idea what such an AI would even look like.
