The Singularity May Not Be Near, but You Have Yet to Convince Me

I would like to take a moment to respond briefly to Michael Shermer's fascinating article, "In the Year 9595, why the Singularity is not Near, but Hope Springs Eternal," in Scientific American, January 2012.

Michael Shermer's rebuff of singularitarianism is witty and interesting, but ultimately unconvincing. He mocks those who make predictions as "soothsayers," but he seems to ignore the power of trends to predict future technological performance, often quite accurately. His "baloney-detection alarm" may go off, but he provides no counter-data. I prefer a data-driven approach myself, and this article, witty as it was, was certainly not data-driven. If the data says that this generation is most likely special, then it most likely is, Copernican principle or no.

Of course, trends don't always continue. But if they don't in this case, I will be extremely interested in knowing why not, and this is the question Shermer never really addresses. For example, many people have argued that computational performance trends will not continue because we will hit the quantum limits of Moore's Law by 2015-2025. That would at least be a reasonable, though ultimately flawed, argument to make. But Shermer doesn't even do that. If he had, he might have given me more to refute. I might then have been able to point out that the human brain is a proof by example that it is possible to perform about 10^19 CPS on about 20 watts, in a few cubic feet of space, IF one is willing to change the architecture of the computer system from today's von Neumann designs to something far more massively parallel. In other words, Moore's Law may stumble to a halt, but there is plenty of room for computational improvement after its death. Of course, progress down this new route may well follow a different trend, improving at a different speed. Progress may even slow for a time after the death of Moore's Law while we face up to the fact that we must switch directions before improvement can continue. But even if that is the case, since we are set to pass the upper bound for performing a full neural simulation of the human brain by 2025, Moore's Law will ultimately fail too late to stop the creation of the hardware needed for true AI.
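To make that 2025 figure concrete, here is a minimal back-of-envelope sketch of the kind of trend extrapolation I mean. The starting performance, the doubling time, and the 10^19 operations-per-second target are all assumptions plugged in for illustration, not measured data:

    import math

    # Back-of-envelope extrapolation of the supercomputer performance trend.
    # All three inputs below are assumptions for illustration:
    #   - the fastest machine circa 2012 sustains ~1e16 FLOPS
    #   - peak supercomputer performance doubles roughly every 1.2 years
    #   - ~1e19 operations/second is the upper bound for a full
    #     neural-level brain simulation (the figure used in the text)

    current_year = 2012
    current_flops = 1e16          # assumed sustained FLOPS of today's fastest machine
    doubling_time_years = 1.2     # assumed trend
    target_flops = 1e19           # assumed brain-simulation upper bound

    doublings_needed = math.log2(target_flops / current_flops)
    years_needed = doublings_needed * doubling_time_years

    print(f"Doublings needed: {doublings_needed:.1f}")
    print(f"Crossover year:   ~{current_year + years_needed:.0f}")
    # -> about 10 doublings, i.e. a crossover around 2024,
    #    consistent with the ~2025 figure above.

Change the assumed doubling time to two years and the crossover slips to roughly 2032; the conclusion is sensitive to the trend data, which is exactly why counter-data, not wit, is what a rebuttal needs.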

Shermer did provide one data point: knowing the wiring diagram of the nervous system of Caenorhabditis elegans, and having sufficiently powerful hardware to perform a full neural simulation, has so far not led to a working brain simulation of Caenorhabditis elegans. This is indeed an interesting data point. However, we appear to be making exponential progress at understanding the behavior of individual neurons, AND exponential progress at understanding the wiring diagrams of the brains of ever more complex organisms. Both. I argue that ONCE we thoroughly understand the behavior of individual neurons (their many types, connections, and plasticity), THEN knowing the wiring diagram of a brain of ANY complexity is enough for full brain simulation. This is important to point out, because some singularitarians seem to think that once we have the wiring diagrams for human brains, and once we have the raw computational power, fully human-level AI is inevitable. This is not true by any means. If computing trends continue, our supercomputers will have the power to run a full neural-level brain simulation by about 2025. But that doesn't mean we will have the human connection diagram by then, nor does it mean we will have cracked the neuron by then. However, we ARE making exponential progress on both fronts, so if we don't have these things by 2025, I would guess that we will have them by 2045, give or take 25 years either way. There is a lot of uncertainty there, mostly because we don't yet know exactly how complex the problem will be (the neural-level modeling, I mean; we have a decent idea about the other). Furthermore, the power to do the simulations will feed back into our neural understanding: we can plug one neural model into the simulation, see how it runs, compare the result to a real brain scan, tweak the model... rinse... repeat, as in the toy sketch below.
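Here is that rinse-and-repeat loop as a toy Python sketch. The "neuron model" is a single leaky integrator and the "brain scan" is a synthetic trace generated in the script itself, so every name and number here is a hypothetical stand-in; the point is only the shape of the simulate-compare-tweak cycle:

    import numpy as np

    # Toy version of the tweak-and-compare loop: simulate a simple leaky
    # integrator, compare its voltage trace to a "recorded" trace, and
    # adjust one parameter to reduce the mismatch. The model, the
    # recording, and the fitting rule are all hypothetical stand-ins.

    dt, steps = 0.001, 1000          # 1 ms steps, 1 second of simulated time
    current = 1.5                    # constant input current (arbitrary units)

    def simulate(tau):
        """Leaky integrator: dV/dt = (-V + I) / tau."""
        v = np.zeros(steps)
        for t in range(1, steps):
            v[t] = v[t - 1] + dt * (-v[t - 1] + current) / tau
        return v

    # Pretend this came from a real recording (true tau = 0.05 s).
    recorded = simulate(0.05)

    # Rinse and repeat: tweak tau, rerun, keep whichever fits better.
    tau, step = 0.2, 0.05
    for _ in range(50):
        err = np.mean((simulate(tau) - recorded) ** 2)
        for candidate in (tau - step, tau + step):
            if candidate > 0 and np.mean((simulate(candidate) - recorded) ** 2) < err:
                tau = candidate
                break
        else:
            step /= 2                # no improvement: refine the search

    print(f"Recovered tau ~ {tau:.3f} s (true value 0.05 s)")

Scaled up from one parameter to billions, and from a synthetic trace to real scan data, this is the feedback loop I expect between simulation power and neural understanding.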

Essentially, I think that Shermer's skepticism is unfounded, or, at least, if it is founded, he failed miserably to explain why it is founded, or to be convincing in any meaningful way.

Groundhog Day and the Meaning of Life

Yesterday was Groundhog Day, the holiday where everyone waits with bated breath for a rodent to decide if it saw its shadow, and ther...