With recent well-publicized developments like Siri, Watson (the computer that can win at Jeopardy!), and the Google Car, many people may have gotten the impression that there has been some sort of breakthrough in Artificial Intelligence (AI). We still have much to learn about how people are able to do what they do, but we know enough to know that computers don’t work the same way we do.
In the 1940s, some people took to calling the computer an “electronic brain”. Fortunately, the computer scientists of the time realized that this raised unrealistic expectations, and stressed that the machines do not think; they merely compute. They saw the importance of keeping the hype from getting out ahead of the reality.
In the 1950s, AI researchers went looking for a challenging task that could be used to gauge a computer’s intelligence. Playing chess became one of the most popular goals. It seemed clear that winning at chess required perception, strategy, and insight. But the chess-playing programs they wrote didn’t outsmart their opponents; they were simply so fast that they could evaluate more possible moves, and look further ahead, than any human. So even though IBM’s Deep Blue could beat a grandmaster, it didn’t really teach us much, except that in some limited and well-defined problem domains, massive computational power, along with lots of clever specialized programming, can overcome the computer’s inherent limitations.
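The brute-force lookahead described above can be sketched with a minimax search over a toy game tree. This is only an illustration of the principle, not how Deep Blue actually worked: real chess engines add alpha-beta pruning, opening books, and elaborate hand-tuned evaluation functions, and the function and variable names here are invented for the example.

```python
def minimax(node, maximizing):
    """Exhaustively score a game tree given as nested lists of leaf values.

    A leaf (number) is the evaluation of a final position; an inner list
    holds the positions reachable in one move. The machine doesn't
    "understand" the game -- it just scores every line of play to the horizon.
    """
    if isinstance(node, (int, float)):  # leaf: a position's evaluation score
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A tiny two-ply tree: our move leads to one of two positions,
# from which the opponent picks the reply worst for us.
tree = [[3, 5], [2, 9]]
print(minimax(tree, maximizing=True))  # 3
```

The point the paragraph makes shows up directly: the program wins by evaluating more branches faster, not by any insight into the position.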
The key technology behind the Google car is not a smart computer but LIDAR: a scanning laser that constantly measures the distance to the nearest solid object in all directions. This seems to work quite well. But people may not realize that the car doesn’t understand much about what the laser is hitting. Researchers presume that it doesn’t need to; it just needs to avoid colliding with anything. Other autonomous vehicle developers are trying to dispense with the LIDAR and rely on video cameras, digital image recognition, and radar. The jury is still out on how well this will work. Either way, the intelligence is still in the human programmer, not in the computer. In some situations, robocars may drive better than people do. But in others, their inherent lack of understanding could produce unexpected results. The point is not that robocars won’t work, but that we shouldn’t expect them to behave like human drivers.
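The idea that the car needs only range data, not understanding, can be sketched in a few lines: given a 360-degree scan of distances, check whether anything solid sits too close within a cone around the direction of travel. The function name, cone width, and safety distance here are all invented for illustration; a real perception stack is vastly more elaborate.

```python
def clear_to_proceed(ranges_m, heading_deg, cone_deg=20, min_range_m=5.0):
    """Return False if any LIDAR return within a cone around the heading
    is closer than the safety distance.

    ranges_m[i] is the measured distance in meters at bearing i degrees
    (0-359). The check knows nothing about WHAT is out there -- a wall,
    a cyclist, a plastic bag -- only that something reflected the laser.
    """
    for bearing, dist in enumerate(ranges_m):
        # smallest angular difference between this beam and the heading,
        # handling wraparound at 0/360
        delta = abs((bearing - heading_deg + 180) % 360 - 180)
        if delta <= cone_deg and dist < min_range_m:
            return False  # something solid is too close: brake or steer away
    return True

scan = [100.0] * 360      # open road in every direction...
scan[2] = 3.0             # ...except an object 3 m away, nearly dead ahead
print(clear_to_proceed(scan, heading_deg=0))  # False
```

Avoiding collisions this way requires no model of the world at all, which is exactly why it works well in the easy cases and can surprise us in the hard ones.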
What these systems do might be more properly called “simulated intelligence”. And no matter how well programmers can get them to work, they just do what they’re programmed to do. Computers still don’t “understand” anything, and they don’t care about anything. They are not intelligent, and no one can say when or if they ever will be.
Even the most sanguine advocates of smart cars acknowledge that legal impediments are likely to delay, if not completely derail, their introduction. Some people believe that as computers get bigger and faster, they will just naturally become smarter. But computers today are no smarter than they were sixty years ago, even though they are now millions of times bigger and faster. Other people have faith that the vast resources being thrown at robocar development will inevitably overcome any technological obstacles. But in sixty years of AI research, this has never yet been the case. Money may not help if what you need is a conceptual breakthrough. And professions of interest from car companies should be taken with a grain of salt. They have a long history of exhibiting “concept cars” that they never had any intention of manufacturing.
We may or may not see commercially available self-driving cars in the years to come. But what is certain is that Automated Transit Network (ATN) technology is here today, and it’s intrinsically simpler, safer, cheaper, and more reliable than robocars can ever be. ATNs combine the benefits of automatic driving with advantages in land use, urban design, energy efficiency, and non-stop service that can’t be matched by any car, smart or otherwise.