{The Wall Street Journal, 8-May-90, p. B1}
As futuristic robots go, Genghis isn't much to look at. The six-legged, foot-long contraption could pass as a sixth-grader's wire sculpture of an ant. The researchers who built it whimsically designed the thing to chase people - like its namesake, Mr. Khan.
As an attack robot, though, Genghis isn't quite up to speed. It lurches slowly across a floor at the Massachusetts Institute of Technology and almost crashes into a wall. Still, its designers are very proud of what their creation can do - for unlike most robots, Genghis doesn't have a brain.
At least not in the ordinary sense of a centralized control program. Instead, Genghis has a network of small, simple control programs, each devoted to a single function such as lifting a leg. They work independently but are connected so that they interact, something like bees in a hive. From their collective action, walking "emerges" - as a store of honey emerges from bees' interactions.
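The flavor of the design can be put in code. The sketch below, in Python, is a loose illustration rather than anything taken from Genghis itself: each of six legs gets its own tiny agent with one local rule - swing forward only while both neighboring legs are planted. No agent knows anything about walking, yet the rule guarantees that at least three feet always support the body, and a stepping rhythm emerges from the interactions.

    # Hedged sketch: six independent leg agents, no central controller.
    # The rule and all names are illustrative assumptions, not Genghis' code.
    LEGS = 6

    class LegAgent:
        def __init__(self, leg_id):
            self.leg_id = leg_id
            self.down = True                       # every leg starts planted

        def neighbors_down(self, legs):
            left = legs[(self.leg_id - 1) % LEGS]
            right = legs[(self.leg_id + 1) % LEGS]
            return left.down and right.down

        def step(self, legs):
            # Local rule: swing only while both neighbors support the body;
            # otherwise plant and push. No two adjacent legs are ever up at
            # once, so at least three feet always touch the ground.
            if self.down and self.neighbors_down(legs):
                self.down = False                  # lift and swing forward
            else:
                self.down = True                   # plant and push back

    legs = [LegAgent(i) for i in range(LEGS)]
    for tick in range(8):
        for agent in legs:                         # agents act one by one
            agent.step(legs)
        print(tick, ["down" if a.down else "up" for a in legs])

On the very first tick this rule happens to lift legs 0, 2 and 4 together - one of the two "tripods" described below - before wandering through other stable stepping patterns.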
Interacting Networks
Genghis may lurch, but researchers at MIT's artificial intelligence laboratory are using it to mount a frontal assault on the conventional wisdom in artificial intelligence, the attempt to mimic human thinking with computers. Since the 1950s, AI research has been dominated by big, complex systems aimed at copying higher thought processes, such as the ability to understand this sentence. Despite some successes, the approach so far has emulated human intelligence about as well as early aircraft flew by flapping birdlike wings.
Now at MIT and some other places, AI researchers are trying a radically different approach. Their idea, called "bottom-up" AI, is to build interacting networks of many relatively simple devices, or "agents." The agents might be a network of little software functionaries running on off-the-shelf chips, as in Genghis. Or they could be separate, interacting machines - the MIT researchers envision "robot insect societies" that clean up oil spills, build dams, explore other planets or pounce on dust balls under the radiator.
Such swarms of cheap little robots "would be sort of like parallel processors," fast, new computers that perform many calculations at once, says Anita Flynn, one of the MIT team. "Only they would be doing the mechanical work."
If set up right - a big if, critics say - these networks of agents can give rise to surprisingly complex and flexible collective action. When hit by a transient power outage, Genghis collapses and flails its legs helplessly; the ability to coordinate them has been erased from the agents inside the robot. Soon, however, the flailings begin to look more coordinated. Within a couple of minutes, the agents gradually "learn" again how to make Genghis stand up and go. The process uncannily resembles an accelerated version of a six-legged infant learning to walk.
What happens is that "the agents learn how to avoid receiving negative feedback," which they get from sensors on Genghis' belly when it falls down, says Pattie Maes, an MIT visitor from Belgium who recently added the learning ability to Genghis' repertoire.
In a sense, the agents program themselves to walk, trying different actions and learning from the sensors - and from communication among themselves - which movements work and which don't. For example, Genghis learns to keep at least three legs on the ground at all times, adopting a pattern of movement seen in insects and known to entomologists as the alternating tripod gait.
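Her scheme can be pictured in miniature, too. In the hedged sketch below - the update rule and constants are illustrative assumptions, not Ms. Maes's algorithm - each leg agent learns which of two alternating "swing groups" to join. A simulated belly sensor delivers negative feedback whenever a stride leaves fewer than three legs standing or swings two adjacent legs at once, and the agents reinforce only the choices that drew no complaint. The only assignments that never trip the sensor are the two alternating tripods, so that is what the agents settle into.

    import random

    # Hedged sketch: learning to walk by avoiding negative feedback.
    # Not Ms. Maes's actual algorithm; rules and constants are assumptions.
    LEGS, ALPHA = 6, 0.05
    p = [0.5] * LEGS          # each agent's probability of joining group "A"

    def falls(lifted):
        # "Belly sensor": fires if fewer than 3 legs stand, or if two
        # adjacent legs swing at the same time.
        if sum(lifted) > 3:
            return True
        return any(lifted[i] and lifted[(i + 1) % LEGS] for i in range(LEGS))

    for trial in range(20000):
        choice = [random.random() < p[i] for i in range(LEGS)]
        # One full stride: group A swings while B stands, then the reverse.
        fell = falls(choice) or falls([not c for c in choice])
        if not fell:
            # Reinforce the choices that produced no negative feedback.
            for i in range(LEGS):
                p[i] += ALPHA * ((1.0 if choice[i] else 0.0) - p[i])

    print(["%.2f" % x for x in p])  # drifts toward 1,0,1,0,1,0 or 0,1,0,1,0,1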
A conventional robot could handle such challenges as learning to walk only if they were foreseen by its designers and precisely accounted for in its master control program. But foreseeing all the problems that crop up in real-world situations is almost impossible. Thus, robots, and conventional AI systems in general, work best in tightly constrained environments where surprises are minimized. Examples include assembly-line robots and chess-playing programs. Similarly, AI programs once touted as potential rivals of human experts haven't lived up to such expectations because they're often fooled by real-world complexities.
The bottom-up approach won't necessarily overcome such problems. But it shows promise at cracking them, partly because it "decentralizes things so you don't have a big program that breaks" when something unexpected happens, says MIT's Marvin Minsky, considered one of AI's leading lights. Mr. Minsky isn't directly involved with MIT's new robots, but his theory that the mind is a society of agents has helped inspire them.
In any case, bottom-up ideas, which have been simmering for decades in AI research, are now bubbling over and making a splash. One reason is neural networks, a variation on the bottom-up theme in which agents are rough likenesses of the brain's neurons. Such networks are showing promise in computer vision, speech recognition and many other uses.
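The building block of such a network is simple enough to show. An artificial neuron merely weighs its inputs, sums them and fires past a threshold; in the toy example below the weights are chosen by hand for illustration, where a real network would learn them from training examples.

    # One artificial neuron: weighted sum of inputs, then a threshold.
    def neuron(inputs, weights, bias):
        activation = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1 if activation > 0 else 0

    # Hand-picked weights that make the neuron compute a logical AND.
    weights, bias = [1.0, 1.0], -1.5
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", neuron([a, b], weights, bias))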
Danny Hillis, founder of Thinking Machines Corp., a Cambridge, Mass., computer company, is pursuing another bottom-up idea: artificial evolution. The method automatically creates programs for certain tasks by randomly generating and combining hundreds of possible programs in one of Thinking Machines' computers. Each is tested for its ability to perform the task; only the fittest are allowed to survive and be recombined to form a new generation of programs. Eventually, programs emerge from this computational swamp that are comparable to those a human might write, says Mr. Hillis.
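The recipe is easy to caricature in a few lines. In the deliberately toy sketch below, the candidate "programs" are plain bit strings and the "task" is matching a fixed target - assumptions made for illustration, not anything from Thinking Machines - but the generate, test, survive and recombine loop is the same in spirit.

    import random

    # Hedged sketch of artificial evolution on a toy task.
    TARGET = [1] * 20                   # the "task": match this bit string
    POP, GENS, MUT = 100, 60, 0.02      # illustrative constants

    def fitness(prog):
        # How well a candidate program performs the task.
        return sum(1 for a, b in zip(prog, TARGET) if a == b)

    def crossover(mom, dad):
        # Recombine two surviving programs at a random cut point.
        cut = random.randrange(len(mom))
        return mom[:cut] + dad[cut:]

    def mutate(prog):
        # Occasionally flip a bit, keeping the population varied.
        return [1 - bit if random.random() < MUT else bit for bit in prog]

    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP)]
    for gen in range(GENS):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: POP // 5]     # only the fittest survive
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(POP - len(survivors))]
        pop = survivors + children

    print("best fitness:", max(map(fitness, pop)), "of", len(TARGET))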