Wednesday, July 6, 2011

An Honest Reassessment of the Singularity

Ideas inevitably backfire because our initial enthusiasm and rush to achieve them overlook the complexities and the darker aspects of reality. For many years the transhumanists have been selling the snake oil of "the singularity" with a doctrinal purity that glosses over the grimy details of real-world shortcomings, short circuits, and darker motives.

Here is a nice post by Brad DeLong on his blog Grasping Reality with Both Hands that looks at the singularity and discovers that we've already had several experiences of it:
Charlie Stross does not fear (or anticipate) the Singularity:
Three arguments against the singularity: I periodically get email from folks who, having read "Accelerando", assume I am some kind of fire-breathing extropian zealot who believes in the imminence of the singularity, the uploading of the libertarians, and the rapture of the nerds. I find this mildly distressing, and so I think it's time to set the record straight and say what I really think....

First: super-intelligent AI is unlikely... you get there... by way of human-equivalent AI, and human-equivalent AI is unlikely.... [H]uman intelligence... an emergent phenomenon of human physiology... only survived the filtering effect of evolution by enhancing human survival fitness in some way. Enhancements to primate evolutionary fitness are not much use to a machine, or to people who want to extract useful payback (in the shape of work) from a machine.... We may want machines that can recognize and respond to our motivations and needs, but we're likely to leave out the annoying bits, like needing to sleep for roughly 30% of the time, being lazy or emotionally unstable, and having motivations of its own.... We want computers that recognize our language and motivations and can take hints, rather than requiring instructions enumerated in mind-numbingly tedious detail. But whether we want them to be conscious and volitional is another question entirely. I don't want my self-driving car to argue with me about where we want to go today. I don't want my robot housekeeper to spend all its time in front of the TV watching contact sports or music videos. And I certainly don't want to be sued for maintenance by an abandoned software development project....

Uploading... is not obviously impossible.... Our form of conscious intelligence emerged from our evolutionary heritage, which in turn was shaped by our biological environment. We are not evolved for existence as disembodied intelligences, as "brains in a vat", and we ignore E. O. Wilson's Biophilia Hypothesis at our peril; I strongly suspect that the hardest part of mind uploading won't be the mind part, but the body and its interactions with its surroundings.

Moving on to the Simulation Argument: I can't disprove that, either... it offers a deity-free afterlife, as long as the ethical issues... are ignored.... [I]t would make a good free-form framework for a postmodern high-tech religion. Unfortunately it seems to be unfalsifiable, at least by the inmates (us)....

This is my take on the singularity: we're not going to see a hard take-off, or a slow take-off, or any kind of AI-mediated exponential outburst. What we're going to see is increasingly solicitous machines defining our environment — machines that sense and respond to our needs "intelligently". But it will be the intelligence of the serving hand rather than the commanding brain, and we're only at risk of disaster if we harbour self-destructive impulses. We may eventually see mind uploading... but beyond giving us an opportunity to run Nozick's experience machine thought experiment for real, I'm not sure we'd be able to make effective use of it — our hard-wired biophilia will keep dragging us back to the real world, or to simulations indistinguishable from it. Finally, the simulation hypothesis... suggests that if we are already living in a cyberspatial history simulation (and not a philosopher's hedonic thought experiment) we might not be able to apprehend the underlying "true" reality.... Any way you cut these three ideas, they don't provide much in the way of referent points for building a good life... it's unwise to live on the assumption that they're coming down the pipeline within my lifetime.

I'm done with computational theology: I think I need a drink!
Me? I think about four moments:
  1. Pre-linguistic homo sapiens, who: (a) knew about fifty people; (b) hunted, gathered, cooperated, fought, and raised children; and (c) learned stuff only by watching what others did and what happened to them.

  2. Linguistic hunter-gatherer homo sapiens, who: (a) knew perhaps fifty people--but had heard stories about up to 500 more; (b) hunted, gathered, cooperated, fought, and raised children but also bargained, allied, and promised; and (c) learned a huge amount of what had happened outside of his or her sight and hearing by talking and listening--was an anthology intelligence with the memory and experience not of one but of a hundred.

  3. Agricultural metal-working chariot-driving reading-and-writing homo sapiens--at least the upper classes, the Atreids and the Chryseids--who: (a) farmed, herded (or took stuff from the farmers and herders), wrote down stories, claimed to have special knowledge of gods, claimed to be descended from gods and have the right to rule, trained as technologically-advanced chariot-driving specialists not just in human-on-animal hunting violence but in coercive violence, kept track of accounts, built or ordered the construction of the Lion Gate at Mycenae, etc.

  4. Modern post-industrial high-mass-consumption web-surfing humanity--or at least the first-world upper middle classes--whose lives we know very well.
All of these strike me as partial singularities: in each case, about three-quarters of life is more or less the same as it had been earlier--and one-quarter is transformed utterly.
I like DeLong's take on this: we've seen this movie before, and it isn't the showstopper the promoters promised. Sure, it is exciting, but not in a way that anybody passing through the portal from then to now would really appreciate.

The part that DeLong doesn't play with is how snake oil salesmen only talk about the revitalizing and curative powers of their potion in a bottle. They don't mention the side-effects. They never point out that to get the miracle cure your teeth will fall out, your face muscles will go slack, and you will drool.

The message is simple: the future will be a lot like now. And if there are benefits to be had, the rich and the powerful, the criminal and the impatient, will beat you to them, open Pandora's box, and discover the surprises within. If it is truly better, then they will pull a Clarence Thomas: climb up through the singularity and pull up the ladder after them, leaving us to watch as they take the mothership to a place far beyond where we in our miserable grubby lives will ever go.

I don't know how the future will unfold, but I know it will be more complex, more nuanced, more surprising, and more disappointing than any hawker of the singularity has ever presented.