Wednesday, November 26, 2014

Ferguson, Uncertainty, and a Way to Move Forward

I have been silent about the Ferguson matter so far, but I think it's time that I try to articulate a few of my thoughts.

Before I begin, I want to get something out of the way first. No matter what happened between Darren Wilson and Michael Brown, there is no doubt in my mind that race is an issue in our law enforcement. The reactions of people across the country to this event very clearly demonstrate that fact. People of color in our society simply do not feel protected by the police forces that surround them; rather, they feel threatened by them. And this is a situation that simply must change if we want to create an ethical, equitable, prosperous, and peaceful society moving forward. That fact will remain true regardless of what actually happened in this single instance between Darren Wilson and Michael Brown.

With regard to those specifics, many people have taken to the internet to tell us exactly what did happen that night, and why they think they know what "really happened". But that is not what I will do. The simple fact is that if Darren Wilson's account of events that night is accurate, then the right decision was reached, and he was innocent of any serious wrongdoing. However, if some of the other eyewitnesses' accounts of the events of that night are accurate, then Darren Wilson murdered Michael Brown in cold blood, and a very serious miscarriage of justice has taken place. And there are extraordinarily compelling reasons not to believe either Darren Wilson's account or that of the other eyewitnesses. Several of the eyewitness testimonies were later refuted by the forensic evidence (for example, testimonies about Michael Brown having been "shot in the back" simply do not match the forensic evidence). The simple fact is that the testimony of witnesses is by far the least reliable source of evidence imaginable. Many innocent people have been sent to prison based upon eyewitness testimony, only to later be exonerated by evidence, such as DNA, that simply does not make the sorts of mistakes that eyewitnesses do. And that fact means that both the testimony of Darren Wilson and that of the others who saw the event are ultimately unreliable.

The result of this is that I simply do not know whether Darren Wilson murdered Michael Brown or not. And I believe that the certainty with which some others (on both sides) have approached this situation is largely unwarranted. So, what am I here to tell you? If I am not here to tell you who to believe, who is right, or whether justice was done, then what am I here to say? I am here to say that I don't know who is right, or what happened, but I do know how we can be sure what happened next time, and how we can make a repeat of this far less likely. And that answer is surprisingly simple.

Every police officer should be required to wear a body camera while on duty and while interacting with the public. Every time. Every police officer. Everywhere. Always. When this happens, we will never again be forced to say that we don't know for sure what happened. We won't have to wonder whether justice was done when a police officer is not charged in a shooting death. When police officers are innocent of wrongdoing, that will be demonstrated by the camera. When they are guilty of wrongdoing, that too will be shown by the camera. The camera protects both the officer from false accusations and the public from police abuse. And while knowing what happened after the fact is important, it is perhaps even more important that cameras can actually prevent incidents from ever happening in the first place. Both abuse by police and bad behavior by those they interact with will go down, because both parties will know that they are being recorded, and that the truth of what they are doing will be known. People simply behave differently when they know that they are being watched. And evidence suggests that the use of police cameras can drastically reduce both incidents of police use of force (by up to 50%) and complaints against officers.

Now, this will not solve all our problems with police abuse in this country. And it certainly won't solve all our problems with race in this country either. But it is a start. And we simply must begin somewhere.

This is an idea whose time has come. Let's make it happen.

Tuesday, February 19, 2013

Roadblocks to the Singularity?


Book review of "Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100" by Michio Kaku.

This was a fun book, but I believe that he is wildly over-pessimistic in his predictions concerning strong AI.

He proposes six "roadblocks to the singularity," which I would like to respond to in turn. He writes:

“No one knows when robots may become as smart as humans. But personally, I would put the date close to the end of the century for several reasons."

"First, the dazzling advances in computer technology have been due to Moore's law. These advances will begin to slow down and might even stop around 2020-2025, so it is not clear if we can reliably calculate the speed of computers beyond that…"

I believe that there are three reasons that this is incorrect:

1. We don't know whether the new paradigms that could replace Moore's law will allow faster or slower continued growth, so things could get better, not worse... it depends.

2. Parallel computing could allow increased performance even if there is no immediate successor to Moore's law. For example, power consumption per flop continues to drop exponentially (https://picasaweb.google.com/jlcarroll/Economy#5613003695780064354), and the brain is a proof by example that it can get down to around 20 watts for 10^19 cps. If that trend alone continues, then supercomputers will continue to increase in performance even if Moore's Law comes to a screeching halt (https://picasaweb.google.com/jlcarroll/Economy#5618931828657988610).

3. If you look at my graph, supercomputers will achieve 10^19 cps (the upper bound of the computing power of the brain) just after his 2020 deadline, so the end of Moore's Law will come too late to stop the creation of strong AI (https://picasaweb.google.com/jlcarroll/Economy#5620432299391054642). A back-of-envelope version of this extrapolation is sketched below.
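To make the arithmetic behind point 3 concrete, here is a minimal sketch of that extrapolation. The starting point, doubling time, and brain upper bound are illustrative round numbers consistent with the post, not a fit to real benchmark data:

```python
# Hypothetical extrapolation: years until supercomputers reach the
# post's upper-bound estimate for the brain (10^19 cps).
import math

cps_now = 1e16          # assumed supercomputer scale circa this post
doubling_years = 1.2    # assumed performance doubling time
target = 1e19           # upper bound on the brain's computing power

years = doubling_years * math.log2(target / cps_now)
print(f"~{years:.0f} years to reach 10^19 cps")  # ~12 years, i.e., the early 2020s
```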

"Second, even if a computer can calculate at fantastic speeds like 10^16 calculations per second, this does not necessarily mean that it is smarter than us…"

"Even if computers begin to match the computing speed of the brain, they will still lack the necessary software and programming to make everything work. Matching the computing speed of the brain is just the humble beginning."

To which I respond: That is strictly true, but if we have enough computing power for one computer to simulate the other (in this case the human brain), then that computer will indeed be as "intelligent" as the other. The only question is whether such a simulation will be possible, and if so, when... more on that later.

"Third, even if intelligent robots are possible, it is not clear if a robot can make a copy of itself that is smarter than the original.…John von Neumann…pioneered the question of determining the minimum number of assumptions before a machine could create a copy of itself. However, he never addressed the question of whether a robot can make a copy of itself that is smarter than it…"

"Certainly, a robot might be able to create a copy of itself with more memory and processing ability by simply upgrading and adding more chips. But does this mean the copy is smarter, or just faster…"

It can be trivially shown that a computer/robot can indeed create a new computer/program/robot that is smarter than itself. The fact that von Neumann didn't do it doesn't make it any less trivial. How do you do it?

Let's start with human examples, then discuss hardware improvements (assuming that there are no software improvements beyond the brain simulation algorithm), and finally we will discuss whether a computer can make software improvements to its own algorithm.

Humans easily create programs that are "smarter" than the programmer. For example, I can easily write a checkers program that plays checkers better than I do. So if you consider intelligence as a multi-dimensional thing, it is clearly possible for an agent to create a new algorithm that is smarter than itself in one or more dimensions of intelligence; the checkers program is a proof by example (I do it all the time).
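A full checkers engine is too long to include here, so here is a minimal sketch of the same point using tic-tac-toe: exhaustive minimax search plays the game perfectly no matter how well (or badly) its author plays. All the names here are illustrative:

```python
# Minimax plays tic-tac-toe perfectly, regardless of the skill of the
# programmer who wrote it. The board is a list of 9 cells: 'X', 'O', or ' '.

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) from `player`'s perspective: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w is not None:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None                        # board full: draw
    opponent = 'O' if player == 'X' else 'X'
    best_score, best_move = -2, None
    for m in moves:
        board[m] = player                     # try the move
        score = -minimax(board, opponent)[0]  # opponent's gain is our loss
        board[m] = ' '                        # undo it
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move

print(minimax(list(' ' * 9), 'X'))  # (0, 0): perfect play from the start is a draw
```

The program searches the entire game tree, something no human player does; scaled up with pruning and evaluation heuristics, the same idea gives checkers and chess programs that beat their authors.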

Next, hardware improvements:

Michio Kaku admits that a robot can create a copy of itself with more memory and processing ability, but he doubts that such a computer should be called "more intelligent." However, if the robot built a copy of itself with twice the parallel processing power, and if that computer was intelligent by virtue of running a simulation of the brain, then it would indeed be more intelligent than before. Why? Simple. It can now simulate two brains at the same time, or it can simulate one brain twice as fast (getting twice as much work done on a problem per unit of thinking time). No one doubts that two people collaborating on a problem do a better job than one, or that one person who spends twice as much time on a problem gets more done.

This can happen because we are simulating the brain with chips that switch in about 10^-9 seconds, while the human brain fires each neuron in about 10^-3 seconds. That means that at first, each processor will be simulating multiple neurons. If you then have twice as many processors, each chip gets to simulate fewer neurons, and voila, you can now speed up the simulation considerably, or run multiple simulations at the same time. This doesn't scale up perfectly, nor does it scale up forever; it will work until we hit limits around where we have one processor for each neuron or synapse (depending), each one running at 10^-9 seconds, and then you may hit something of an impenetrable wall to making the simulation of a single brain go faster. But that still gives us a factor of about 10^6 improvement beyond human-level intelligence before we hit that wall. Furthermore, after that wall you can still simulate more brains and have them collaborate, each one working on a different part of the problem. That type of scaling should continue roughly forever. Unfortunately, twenty people are not always twice as good at solving a problem as ten, so this type of improvement may eventually hit diminishing returns. Yet after that limit is reached, it would be possible to put each brain simulation to work on a completely different problem, essentially doing two different things at once. The exact limits to this type of scaling appear to be a very long way off, and all of this assumes that somewhere along the way, we won't find a faster way to do what the brain does, or a better means of allowing separate minds to collaborate and cooperate in parallel.
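The 10^6 figure is just the ratio of the two time scales above; a one-line check, using the post's round numbers as assumptions:

```python
# Speed-up headroom: how much faster than real time a one-processor-per-neuron
# simulation could in principle run, given the post's round numbers.
import math

neuron_step = 1e-3  # seconds between neuron firings (~1 kHz)
chip_step = 1e-9    # seconds per chip operation (~1 GHz)
print(math.log10(neuron_step / chip_step))  # 6.0 -> a factor of ~10^6
```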

It seems to me that we must admit that an AI brain simulation that can do nothing more than add processors to itself is indeed "more intelligent" by any reasonable description of the term.

Now for the issue of software improvements:

Can a simple software program make a copy of itself that is "smarter" than itself? It is trivial to show that this is true. ALL machine learning algorithms "improve" on themselves over time. If such an algorithm copied its state at one point in time, and then copied its state at a later point in time, it would have created a copy of itself that was smarter than its previous incarnation.
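A minimal sketch of that claim, using a toy perceptron on made-up data (the task, model, and numbers are all illustrative assumptions):

```python
# An online learner snapshotted before and after training: the later
# "copy of itself" classifies better than the earlier one.
import random
random.seed(0)

# Toy task: separate points above vs. below the line x + y = 1.
data = [(random.random(), random.random()) for _ in range(500)]
label = lambda p: 1 if p[0] + p[1] > 1 else -1

def predict(w, p):
    return 1 if w[0] * p[0] + w[1] * p[1] + w[2] > 0 else -1

def accuracy(w):
    return sum(predict(w, p) == label(p) for p in data) / len(data)

w = [0.0, 0.0, 0.0]
snapshot_early = list(w)          # copy of the program's state at time t
for p in data:                    # one pass of perceptron updates
    y = label(p)
    if predict(w, p) != y:
        w[0] += y * p[0]; w[1] += y * p[1]; w[2] += y
snapshot_late = list(w)           # copy of its state at time t+1

print(accuracy(snapshot_early), accuracy(snapshot_late))  # the later copy wins
```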

But it gets better than this. It is possible to build a meta-learning algorithm that "learns to learn," meaning that the AI isn't just better at each problem because it has incorporated more data, but becomes better at the fundamental problem of how to incorporate data over time. Or it is possible to use the computer to run a genetic algorithm that improves its own algorithms, etc. There are thousands of ways in which one piece of software can create another that is "smarter than itself." Studying this issue is an entire sub-field of machine learning that goes under the titles "Meta-Learning," "Transfer Learning," and "Learning to Learn." My master's thesis was on this issue, and it is unfortunate that Michio Kaku appears to be completely ignorant of this entire field, or he would not have raised such a silly concern. (In his defense, he is a physicist, not a computer scientist; however, if he is going to write about someone else's field, he could at least have consulted someone in that field who could have explained to him that he was not making any sense.)
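As a tiny illustration of improvement at the meta level, here is a sketch of a (1+1) evolution strategy whose genome includes its own mutation step size, so the search tunes the very parameter that controls how it searches. The objective and constants are toy assumptions:

```python
# Self-adaptive search: the mutation step size evolves along with the
# solution, a miniature example of "learning to learn."
import random
random.seed(1)

def fitness(x):
    return -(x - 3.7) ** 2            # toy objective with its peak at x = 3.7

x, step = 0.0, 1.0
for _ in range(2000):
    new_step = step * random.choice([0.8, 1.25])  # mutate the mutation size too
    new_x = x + random.gauss(0, new_step)
    if fitness(new_x) > fitness(x):               # keep fitter child and its step
        x, step = new_x, new_step

print(round(x, 3), round(step, 4))  # x near 3.7; step tends to shrink as it converges
```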

"...fourth, although hardware may progress exponentially, software may not…"

“Engineering progress often grows exponentially… [but] if we look at the history of basic research, from Newton to Einstein to the present day, we see that punctuated equilibrium more accurately describes the way in which progress is made."

The data seem to indicate that this is not true. If anything, software performance and complexity are growing faster than hardware performance (see http://bits.blogs.nytimes.com/2011/03/07/software-progress-beats-moores-law/).

Furthermore, although I will grant him the idea of punctuated equilibrium, if you step back and view the trends from a distance, it often becomes clear that punctuated equilibrium is nothing more than the steps of a larger exponential trend. Most importantly, software builds on software. Each layer, from machine language, to assembly language, to compilers, to modern interpreters, to complex and reusable object libraries, to the current work being done by some of my colleagues on statistical programming languages, provides abstractions hiding the complexities of the lower levels from those programming at the upper levels. This trend appears to be continuing, and it is this "building" effect that produces exponential progress.

"Fifth, … the research for reverse engineering the brain, the staggering cost and sheer size of the project will probably delay it into the middle of this century. And then making sense of all this data may take many more decades, pushing the final reverse engineering of the brain to late in this century."

Yes, it will be complex, and yes, it will be expensive (his two central complaints). But he admits elsewhere in his book that it could clearly be done quite rapidly, the only roadblock being the money it would cost. He then makes the absurd claim that there is less perceived "benefit" to be derived from such a simulation, so people won't invest the capital needed to create that simulation. I believe that this is short-sighted. There are many commercial applications for each step along the road to this simulation, and they will only grow as we get closer (see http://www.youtube.com/watch?v=_rPH1Abuu9M). In fact, I believe that there are more potential economic benefits for this work than perhaps for any other in human history. Surely someone else besides me will see the potential, and the funding will flow.

There are many projects working on completing this monumental task, and several propose a timeline of around 12 years (incidentally, that is about when my projection of supercomputer power crosses the upper bound for running this simulation). See: http://www.dailymail.co.uk/sciencetech/article-1387537/Team-Frankenstein-launch-bid-build-human-brain-decade.html#ixzz1Rp7JEF4R and http://www.youtube.com/watch?v=_rPH1Abuu9M.

Our tools for this task are improving exponentially. The computer power needed to perform this simulation is growing exponentially, our brain scan resolution is growing exponentially (http://www.singularity.com/charts/page159.html), as is the time resolution of these scans (http://www.singularity.com/charts/page160.html).

He does raise another concern related to this one, which I should address:

“Back in 1986, scientists were able to map completely the location of all the nervous system of the tiny worm C. elegans. This was initially heralded as a breakthrough that would allow us to decode the mystery of the brain. But knowing the precise location of its 302 nerve cells and 6,000 chemical synapses did not produce any new understanding of how this worm functions, even decades later. In the same way, it will take many decades, even after the human brain is finally reverse engineered, to understand how all the parts work and fit together. If the human brain is finally reverse engineered and completely decoded by the end of the century, then we will have taken a giant step in creating humanlike robots.” (Michio Kaku “Physics of the Future, How Science will Shape Human Destiny and our Daily Lives by the year 2100” p. 94-95).

We have already mapped the brain connections of several very simple animals, but we are currently unable to turn those maps into an intelligent working simulation. So it would appear that our hardware creates the potential for brain simulation long before our software catches up and is actually capable of performing the simulation. This is the root of his concern.

However, there are a finite number of types of nerve cells and hormonal interactions that take place in the brain. Once we understand their behavior better and create the algorithm for simulating them, it is only a matter of scale and of creating the larger, more complex neural map. In other words, there is a gap between simulating individual neural behavior and mapping neural connections. Apparently, we cannot yet simulate a single neuron's interactions appropriately, and so knowing the network of connections for these neurons in C. elegans is not as helpful as it at first might sound. We will not be able to truly simulate the worm's brain until we solve this problem, and we will not be able to truly simulate the human brain until we solve this problem. But once we can simulate these finite types of neurons correctly, we will be able to accurately simulate the worm's brain, and the human brain as well, once a neural map is created (and once our computers become sufficiently powerful).
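To give a flavor of what "simulating a single neuron" means at the simplest level, here is a sketch of a leaky integrate-and-fire model, the most basic standard abstraction (real models such as Hodgkin-Huxley are far richer; the parameters are illustrative round numbers, not measured values):

```python
# Leaky integrate-and-fire neuron: membrane voltage charges toward its
# input, and a spike fires whenever it crosses threshold.
dt = 1e-4                       # simulation step: 0.1 ms
tau = 20e-3                     # membrane time constant: 20 ms
v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0
input_current = 1.2             # constant drive (arbitrary units, above threshold)

v, spikes = v_rest, []
for step in range(int(0.1 / dt)):                  # simulate 100 ms
    v += dt / tau * (v_rest - v + input_current)   # leaky integration
    if v >= v_thresh:                              # threshold crossing: spike
        spikes.append(step * dt)
        v = v_reset
print(len(spikes), "spikes in 100 ms")
```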

It is my opinion that we will solve this individual neural simulation problem long before we fully map the human brain; I believe the mapping will take much longer. Why do I believe that we will be able to crack the behavior of neurons so soon? Because, first, the complexity of this algorithm is limited by the human genome and its associated expression mechanisms, and second, current progress in this area is quite promising, and it appears that we are quite close.

I actually believe that simulating the brain is a much harder problem than do extreme optimists such as Ray Kurzweil, who thinks we will be doing this sort of simulation around 2019 (based on the idea that a functional simulation is less complex than the full simulation, which we won't be able to do until 2023 at the earliest). On the other hand, I believe that we will need to do the full simulation first, and then explore it for quite some time before we understand how it works. But that only pushes things back to 2050 at the latest. Michio Kaku's assertion that 2100 will come and go and strong AI will still be years away seems a bit silly to me.

Michio Kaku's sixth and final argument against the singularity is:

"Sixth, there probably won't be a 'big bang,' when machines suddenly become conscious…there is a spectrum of consciousness. Machines will slowly climb up this scale."

This isn't really a roadblock to the singularity. Every Singularitarian that I know agrees with this. None of them believe that some magic moment will hit and everything will change. They believe that change will accelerate until we won't be able to keep up without merging with our technology and transcending our biology. So this "roadblock" is inaccurately named, and rather irrelevant.

Monday, August 6, 2012

Consciousness, Information, and the Interpretation Problem

One of the greatest mysteries of modern science involves understanding the nature of consciousness. There are currently many competing theories for the origins of consciousness. Although a clear solution is not yet in sight, there are nevertheless many things that I believe we can say about the problem now.

Two competing popular theories of consciousness are Material Property Dualism (MPD) and Functional Property Dualism (FPD). Both FPD and MPD are varieties of Property Dualism, which claims that something "more" than the structure and dynamics of physics is needed to explain consciousness. MPD claims that consciousness is inescapably tied to the matter that makes up our brains, while FPD claims that it is only the functional properties of our minds that are important, and that a simulation of a brain on a computer would thus have the same subjective experiences as the biological brain. Personally, I belong to neither of these camps, but instead subscribe to a form of Representational Functionalism (RF), which claims that nothing more than the structure and dynamics of physics is needed for consciousness. Nevertheless, I think there is value in comparing the arguments for MPD with those for FPD, since I believe there are compelling reasons to prefer FPD over MPD if one is forced to choose between these two theories.

A major criticism of the computational model of consciousness raised by Material Property Dualists, against both FPD and RF, is that all information must be "interpreted" before it can mean anything, or have qualia or consciousness, while Functional Property Dualists claim that it is the functional properties of the system that carry the dual properties that lead to consciousness. MPD posits that only matter can carry the "property" of consciousness, a "dual" property beyond its causal properties, which form the structure and dynamics of physics. It is easy to understand why they feel this way. After all, why would nothing more than a bunch of ones and zeros have any subjective experience, no matter how much complexity is contained in their organization? This is all very intuitively pleasing. And a similar argument seems to show that the structure and dynamics of physics alone shouldn't lead to experience either. It should lead to all the behavior we have, including our claims to experience, but (these people argue) one can imagine all that structure and dynamics taking place like the wheels of a clock, completely absent any subjective experience. RF, which I prefer, responds to this criticism by claiming that nothing "more" than the structure and dynamics of physics is actually needed, even though it intuitively feels like something more is needed (our intuitions are wrong). In contrast, FPD gets around this argument by admitting that something "more" is indeed needed, but ties the "more" to the information/functional properties of the system instead of to the matter. Supporters of MPD usually respond by claiming that the information in the functional system needs something to help "interpret" it correctly, and that the ones and zeros by themselves are simply random bits of information, with no proper interpretation, and thus with no experience. Thus, they claim that the "more" must reside in the fundamental properties of matter in some way.

On the surface, these arguments seem to be quite compelling. However, the Maxwell's Demon thought experiment indicates something strange. We know from relativity that we can turn matter into energy and vice versa. But now we also know that we can convert INFORMATION into matter or energy and vice versa. This has important implications for the whole consciousness debate between MPD and FPD.
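For concreteness, the standard quantitative link here (the Landauer bound, which is what resolutions of Maxwell's Demon turn on) puts a minimum energy price on erasing information; this is a textbook result, not anything specific to this post:

$$ E_{\min} = k_B T \ln 2 \approx 3 \times 10^{-21} \text{ J per bit at room temperature} $$

That is, information is not a free-floating abstraction; destroying a bit has an unavoidable thermodynamic cost.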

Why does this matter? Because it seems to mean that the universe is made of something fundamental, namely matter/energy/information, and that these three things are nothing more than three different manifestations of the same fundamental entity, much as water, steam, and ice are all different manifestations of the same substance. Thus, if matter can carry some "fundamental" interpretation that allows consciousness (as MPD claims), then so can information (as FPD claims). And there is no reason to suppose that matter is any "better" at carrying this fundamental property that allows for the interpretation of experience than is information. Therefore, inasmuch as the single objection to FPD (proper interpretation) has been removed, and there are compelling reasons to prefer FPD over MPD (we haven't found any evidence of specific materials that perform this function in the brain, and David Chalmers's "fading qualia" and "dancing qualia" thought experiments STRONGLY indicate that consciousness must be found in the functional aspects of the brain, not in its material properties), it seems that we should all now prefer FPD over MPD.

This observation doesn't say much about the continuing debate between Representational Functionalism (RF) and FPD, (where I strongly prefer RF for reasons related to the epiphenomenalism argument). However, it does indicate that MPD should largely be removed from consideration as a potential solution to the problem of consciousness. The remaining debate must largely be between Functional Property Dualism and Representational Functionalism. 

Tuesday, April 24, 2012

The Essentials of Epistemology

The Essentials of Epistemology (the study of how to discover the truth):

Now we See Through a Glass, Darkly
How can we tell the difference between truth and error, between fact and fiction, between conspiracy fact and conspiracy theory? Naturally, the best solution is to become an expert. But you can't even become an expert without first learning what information to trust, and what information to ignore. Furthermore, it is impossible for everyone to be experts on everything. So we all eventually have to trust the advice of others. But whose advice should we trust? These questions are becoming increasingly important in the world of the internet, where everyone has been given a printing press, and where false (and even dangerous) ideas spread like wildfire. 

I do research in statistics, which is the study of how to determine truth from data and make rational decisions based upon this data. So I naturally have some opinions on the matter. But explaining these principles to others outside the field of statistics can be a challenge. And it is not reasonable to expect everyone to get a degree in statistics before they try to sort through the difference between truth and error. 

So, how should the average person determine truth from error? I believe that there really are a few principles that (if followed) will lead you to the truth far more often than any other approach. They are: 

1. Simplicity: The simplest theory is to be preferred over the more complex theory, all other things being equal (which has bearing on #4 as well).

2. Data over Dogmatism (be willing to change your mind): Let the data speak for itself as much as possible; don't assume that the answer must match some ideological or dogmatic assumption. Be as dispassionate and rational as possible. Allow your opinions to change when the data contradicts your initial opinions.

3. Avoid Confirmation Bias: People naturally tend to seek out data that confirms their initial opinions (this is called confirmation bias), while reacting to new data that contradicts their original belief with even more fervent adherence to that belief (the backfire effect). This means that the luck of the draw for what you believed first has an unfair advantage. Therefore, be sure to actively look for data that contradicts your initial opinions, and do your very best, as hard as it can be, to treat such new data fairly. (Conservatives should watch PBS, CNN, and the BBC, while liberals should watch Fox News, and even *gasp* Mormons should read so-called "anti-Mormon" or atheist opinions on occasion.) It is only after you have read and understood the opinions of those who disagree with you that you can be confident that you are actually right.

4. Avoid Conspiracy Theories: Although real conspiracies do exist (usually very small ones), assuming that all data counter to your initial belief is due to a vast conspiracy to hide the truth is problematic: first, it vastly oversimplifies the reality that each person has their own (often contradictory) goals and motivations, which generally limits real conspiracies in size and scope; second, most people want to do good, and those who do evil usually do it because they have convinced themselves that it is the right thing to do; and third (and most important), it creates a situation where your opinion (right or wrong) can never be contradicted by evidence, no matter how strong or otherwise convincing. This is an especially dangerous flavor of confirmation bias (see #3 above).

5. Trust the Experts: Because we can't personally experience everything, we must trust the opinions of others who have experienced things that we have not. Assuming that white elephants don't exist because you haven't seen one is foolish if others have. Thus, a large percentage of our understanding of the world around us must be based upon the witness and opinions of others. Our task is often to determine which witnesses to believe, and which opinions to trust. When deciding which experts to believe, always assign more weight to the opinions of people who know more about the subject than to people who know less. This means that we should trust and respect the experts in their fields. Look for issues and ideas where there is a strong consensus among the experts in a given field. And be careful of internet sources. Some 60-year-old guy blogging in his underwear from his parents' basement does not a reliable expert make.

6. Look for General, Non-Unanimous Consensus: Understand that there will always be a few dissenting voices on every issue. Don't assume that because you found a PhD physicist who thinks the earth is flat, there is "controversy" among the scientific community on the issue, that there is no scientific consensus on the matter, that the idea that the earth is a sphere is "only a theory," or that we should "teach the controversy" on this matter. Instead, look for general agreement among some large majority of the experts.

7. Avoid Anecdotes and Emotional Stories: Anecdotes are almost entirely useless, since there will always be an anecdote or two that seems to support any possible position that one could conceivably hold. Unfortunately, these anecdotal stories often have vast emotional impact, but that doesn't mean that they are right. Instead, look for broad, statistically significant studies to find truth. They are less emotionally convincing, but they are far more likely to be right (the toy simulation below makes this concrete)!
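Here is a minimal sketch of point 7, with made-up recovery rates chosen purely for illustration: even when a treatment clearly works on average, vivid counter-anecdotes are guaranteed to exist.

```python
# Why anecdotes mislead: a strongly effective treatment still produces
# roughly a thousand failure stories in a group of ten thousand.
import random
random.seed(42)

treated = [random.random() < 0.9 for _ in range(10_000)]    # assume 90% recover
untreated = [random.random() < 0.6 for _ in range(10_000)]  # assume 60% recover

print(f"recovery (treated):   {sum(treated) / len(treated):.1%}")
print(f"recovery (untreated): {sum(untreated) / len(untreated):.1%}")
print(f"treated patients who did not recover: {treated.count(False)}")
```

Each of those failures is a true, emotionally compelling story, and each says almost nothing about whether the treatment works; only the aggregate comparison does.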

If you follow these 7 rules, you will undoubtedly be wrong on occasion. But you will be wrong less often than if you don't, and you will be willing to change your mind rapidly when more information showing that you were wrong becomes available. If you do this, you will be unlikely to be led astray, and you will be far more likely to discover things as they really are, really were, and really will be.

So, how does that play out?

In the vaccination debate, it means that we should pay more attention to reputable doctors than to Jenny McCarthy when deciding whether to vaccinate. It means that while the emotional story of the child who died after being vaccinated may be more emotionally moving, it should not be as intellectually convincing as a broad-based statistical study. And it means that we shouldn't be surprised to find a few MDs in both camps. But we should look at the broad consensus, and realize that the vast majority of reputable doctors favor vaccination. Therefore we should favor vaccination. There is no vast conspiracy to make vaccination look successful when it is not. Sure, the pharmaceutical companies have the motivation to do this, but the vast array of doctors who care about their patients don't. At least not in a way that could cause this level of near-universal support.

How would these principles play out in the global warming debate? I am sure you can immediately see the answer. How about evolution? Again, is the answer obvious? How about economics? There things are a bit more muddled, but that itself is a conclusion, namely that there is consensus on some important points, but disagreement on some others.

What about the truthers? The birthers? Opposition to GMOs? And the list goes on and on and on.

(For more thoughts on the 'wisdom of the crowd,' the 'marketplace of ideas' and the problems the internet has produced in these things, see my paper: "Amplifying the Wisdom of the Crowd, Building and Measuring for Expert and Moral Consensus" by myself and Brent Allsop.)

Tuesday, January 17, 2012

The Singularity May not be Near, but You Have Yet to Convince Me

I would like to take a moment to respond briefly to Michael Shermer's fascinating article: "In the Year 9595, why the Singularity is not Near, but Hope Springs Eternal" in Scientific American, January 2012.

Michael Shermer's rebuff of singularitarianism is witty and interesting, but ultimately unconvincing. He mocks those who make predictions as "soothsayers," but he seems to be ignoring the power of trends to accurately predict future technological performance. His "baloney-detection alarm" may go off, but he provides no counter-data. I prefer a data-driven approach myself, and this article, witty as it was, was certainly not data-driven. If the data says that this generation is most likely special, then it most likely is, Copernican principle or no.

Of course, trends don't always continue. But if they don't in this case, I will be extremely interested in knowing why not, and it was this question that was never really addressed by Michael Shermer. For example, many people have argued that computational performance trends will not continue because we will hit the quantum limits of Moore's Law by 2015-2025. At least that would be a reasonable, although ultimately flawed, argument to make. But Shermer doesn't even do that. If he had, then he might have given me more to refute. For example, then I might have been able to discuss the fact that the human brain is a proof by example that it is possible to perform about 10^19 CPS for about 20 watts, in a few cubic feet of space, IF one is willing to change the architecture of the computer system from today's von Neumann architecture to something more massively parallel. This means that Moore's Law may stumble to a halt, but there is plenty of room for computational improvement after its death. Of course, progress down this new route may well follow a different trend, improving at a different speed. Progress may even slow for a time after the death of Moore's Law while we face up to the fact that we must switch directions before improvement can continue. But even if this is the case, since we are set to pass the upper bound for performing a full neural simulation of the human brain by 2025, Moore's Law will ultimately fail too late to stop the creation of the hardware needed for true AI.

Shermer did provide one data point, namely that knowing the wiring diagram of the nervous system of Caenorhabditis elegans, and having sufficiently powerful hardware to perform a full neural simulation, has so far not led to a working brain simulation of Caenorhabditis elegans. This is indeed an interesting data point. However, we appear to be making exponential progress at understanding the behavior of individual neurons, AND exponential progress at understanding the wiring diagrams of the brains of ever more complex organisms. Both. I argue that ONCE we thoroughly understand the behavior of individual neurons (and their many types/kinds, connections, and plasticity), THEN knowing the wiring diagram of ANY complexity is enough for full brain simulation. This is important to point out, because some singularitarians seem to think that once we have the wiring diagrams for human brains, and once we have the computational raw power, fully human-level AI is inevitable. This is not true by any means. If computer trends continue, we will have the computational power to run a full neural-level brain simulation by about 2025 in our supercomputers. But that doesn't mean that we will have the human connection diagram by that time, nor does it mean that we will have cracked the neuron by that time. However, we ARE making exponential progress on both fronts, so if we don't have these things by 2025, I would guess that we will have them by 2045... give or take 25 years or so either way. There is a lot of uncertainty there, mostly because we don't yet know exactly how complex the problem will be (the neural-level modeling, I mean; we have a decent idea about the other). Furthermore, the power to do the simulations will feed back on our neural understanding. We can plug one neural model into the simulation, see how it runs, then compare to a real brain scan, and then tweak the simulation... rinse... repeat... etc.
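That last loop is ordinary parameter fitting, and a toy sketch makes it concrete. The "model" and the "scan" below are stand-ins invented for illustration; the point is only the simulate-compare-tweak-repeat structure:

```python
# Simulate, compare to the "scan," tweak, repeat: generic model fitting.
import random
random.seed(0)

def simulate(params, t):
    """Stand-in neural model: output rate from two tunable parameters."""
    gain, threshold = params
    return max(0.0, gain * t - threshold)

# Synthetic "brain scan" generated from the true (unknown to us) parameters.
target = [simulate((2.0, 0.5), t / 10) for t in range(10)]

def error(p):
    return sum((simulate(p, t / 10) - y) ** 2 for t, y in enumerate(target))

params = (1.0, 0.0)                         # initial guess at the model
for _ in range(5000):                       # tweak: random local search
    candidate = (params[0] + random.gauss(0, 0.05),
                 params[1] + random.gauss(0, 0.05))
    if error(candidate) < error(params):    # compare against the scan
        params = candidate                  # keep the better model... repeat
print(tuple(round(p, 2) for p in params))   # ends near the true (2.0, 0.5)
```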

Essentially, I think that this man's skepticism is unfounded; or at least, if it is founded, he failed miserably to explain why, or to be convincing in any meaningful way.

Thursday, August 18, 2011

James Carroll's Review of The Greatest Show on Earth: The Evidence for Evolution

My rating: 3 of 5 stars

This book was only so-so, for two main reasons.

First, his evidence:

It simply wasn't that good. And it's not because there isn't good evidence for evolution. But the whole way through this book I kept thinking things like: "I could produce better evidence than that!", "Why didn't he mention this or that?", and "Why is he talking about this? It's a huge digression." The evidence he provided is actually overwhelmingly convincing; the problem is that there is even better evidence out there that he didn't talk about (or that he only mentions in passing). I did learn about a few lines of evidence that I didn't know about before, so in that sense it was worth the time I spent. I just wish that he had done a better job, since I completely agree with him that we badly need this sort of a text today.

Second, his atheism:

Dawkins is a staunch atheist. Now, Dawkins claims that his primary purpose is to provide the evidence for evolution in order to save those who have been deluded by those he calls the "history deniers." That is his term for those creationists who deny the fact that evolution happened in order to cling to Biblical inerrancy. But if that was his goal, then he should have left his atheism on its shelf, at least for the duration of this text. In fact, in the introduction, he claims that this is what he is going to do. However, it appears that Dawkins is so enamored of his atheistic position that he is incapable of doing so, and I fear that it chased away the very people he was trying so hard to reach.

I vastly preferred "Why Evolution is True" by Jerry Coyne (http://www.amazon.com/Why-Evolution-True-Jerry-Coyne/dp/0670020532). That book is what this book should have been. If you are looking for a good book on the evidence for evolution, Coyne's book is the one I would suggest instead.


Tuesday, August 9, 2011

Book Review, Stephen Hawking, The Grand Design

My rating: 4 of 5 stars

"Traditionally these are questions for philosophy, but philosophy is dead. Philosophy has not kept up with modern developments in science, particularly physics. Scientists have become the bearers of the torch of discovery in our quest for knowledge." Stephen Hawking

I couldn't agree with that statement more. Some of his other conclusions in the book from which the quote was taken, "The Grand Design"... not so much.

But in what way is philosophy dead? Clearly the love of wisdom is not dead, but it may well be that the field of liberal-arts philosophy is indeed dead (or at least losing relevance and productivity).

To attempt to discover the real truth requires more than sitting and thinking; it requires observation, and then modeling, which requires math. This means that today the mathematicians and physicists are doing the real legwork of philosophy, while the liberal-arts philosophers are, for the most part, spinning their wheels.

There is a feeling that theology and philosophy should be the ones asking the questions about morality, theology, meaning, and God, while science should keep its distance. But I believe that you can't ask these questions correctly without a firm grounding in the observations of science and physics, which MUST inform any inquiry into philosophy, or even theology. As Einstein said, science without religion is lame (in the sense of not having the power to move things forward), while religion without science is blind (in the sense of moving forward, but not seeing where it is really going).

Therefore, I find that I simply cannot agree with those who say that science should leave such theological matters to religion. In my view, Stephen Hawking has every right to venture into the field of theology, and to bravely see what implications his understanding of the laws of physics has for his understanding of God. This is a useful and potentially very productive undertaking.

Let us take some of Hawking's conclusions in this book as an example, and see how science can inform theology:

1. In the beginning, the universe was very small, thus the rules of quantum mechanics hold, and things like the universe can (and indeed will) appear out of nothing without violating those rules, so long as the universe eventually cancels itself out, just as virtual particles usually do.

2. The universe has an equal amount of positive and negative energy, and so is a cosmic free lunch, and can (and will) appear out of nothing (essentially, the universe cancels itself out, much like virtual particles do; see the sketch after this list).

3. But the universe was also hugely massive, so it followed not only the rules of quantum mechanics but also the rules of relativity, which say that mass bends space and time. At the point where you have enough mass to make a black hole (as we clearly had in the early universe), time itself stops. So there IS no time before the big bang: time curves back upon itself and comes to a single closed point, creating a beginning not only of the universe but of time itself, and thus a beginning to the chain of causation. The chain of causation (where the causes come before the results) comes to an end at the big bang, which necessarily had no cause, because there was no time before the big bang for that cause to act in.
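The "free lunch" bookkeeping in point 2 can be illustrated with a standard back-of-envelope argument (a common heuristic, not Hawking's own derivation): the positive mass-energy of matter is offset by its negative gravitational potential energy, and the two roughly cancel when the universe is near its own Schwarzschild scale:

$$ E_{\text{total}} \approx M c^2 - \frac{G M^2}{R} \approx 0 \quad \text{when} \quad R \sim \frac{G M}{c^2} $$

On this accounting, creating a universe need not cost any net energy at all.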

His conclusion? There is no God in the platonic sense of the "prime mover" or "first cause," because 1. we don't need Him to explain how and why the universe could come into being (quantum mechanics does that), and 2. there could be no creator of the universe, because there was no time before the universe was created for Him to act in. Essentially, God could not "cause" the universe, because relativity guarantees that there was no time in which He could act to initiate such a cause, and after the big bang bangs, we don't need Him to explain the progression of the universe from that point on (the laws of nature do that).

Whether or not you agree with these conclusions (which I do not), it is clear that a firm understanding of the issues surrounding quantum mechanics should indeed necessarily inform our theology. Even if his reasoning here is flawed, that is the way science works. It is necessary for someone to make these sorts of inferences, so that science can move forward and either prove or disprove this theory.

So, why don't I come to the same conclusions as Hawking? His reasoning appears rather solid at first glance. However, relativity and quantum mechanics are notorious for their inability to play nicely together, and there are a myriad of potential theories that have been proposed in an attempt to produce a good theory of quantum gravity. He is here espousing one of these theories; granted, it is the one that is (so far) the most mathematically robust, but it is by no means the only solution to this problem. For example, some theories of quantized time predict a big bounce instead of a big bang, in which case there was indeed time before the big bang. Another competing theory predicts that two of the membranes predicted by M-theory collided, producing the big bang; again, this is a theory that predicts time before the big bang. Still other theories predict that there are other dimensions of time, outside of our own. It is also unclear to some whether quantum fluctuations can create virtual particles without space or time in which to create them, which could cast doubt on whether a quantum fluctuation alone could create the universe from no-where and no-when. For example, Sean Carroll proposes that each universe is born from parent universes (see From Eternity to Here: The Quest for the Ultimate Theory of Time), in which case time would indeed exist before the big bang. The possibilities are nearly endless. And, most importantly, we have yet to find observations that can clearly differentiate between many of these competing theories. Essentially, we have no observationally verified theory of quantum gravity, which is necessary before we can make any real predictions about how the universe behaved in those early moments that are so essential to Hawking's arguments.

So, if we take this into consideration, we can rephrase Stephen Hawking's brilliant deduction differently: IF we accept THIS theory of quantum gravity, together with its predictions about quantum fluctuations and the beginning of time, THEN the universe necessarily had no cause within our dimension of time, and thus there is no God that exists solely within our universe's dimension of time. I believe that this is a valid deduction, and, to some extent, it should inform our understanding of God. It is only unfortunate that he didn't state his conclusions with this level of cautiousness. Instead, he is far more confident in his conclusions than is warranted by the data, and he leaves out the many "if"s that should have preceded them. This was perhaps my only serious disagreement with the book.

And what of my own conclusions about God? That is not really what this review is about, but to be short:

Theologians in my chosen branch of Christianity have often said that God does not just predict the future; he quite literally sees it. For this to be the case, God must, of necessity, exist outside of our dimension of time, and likely outside of our dimensions of space as well. I find the fact that science is now predicting a universe of multiple dimensions and multiple universes (some with different laws of physics), and is finding that God cannot exist only within our dimension of time and still create the universe, to be quite faith-promoting, since that is in line with what I believed all along.

Stephen Hawking would likely take issue with my interpretation of his work, but hey, that is what Science is all about, and we should be grateful to Stephen Hawking for so clearly expressing this brilliant deduction.