With a cochlear implant, a mechanical device has taken over one of the brain's functions. Silicon has been substituted for carbon. It is a little different from a heart pacemaker, which merely stimulates the cardiac muscle to contract: the cochlear implant is connected directly to the brain, and its software determines what is heard. The conspiracy crowd may get a little agitated by that, because it means the software developer determines what is being heard. Is it ethical to use cochlear implants? Most people do not have a problem with them. Although the wearer may depend on a computer for part of his brain processing, Michael Chorost has written that although he is now a cyborg, his cochlear implant has made him more human,8 allowing him to be more social and participate in a community. People with normal hearing do not think of the cochlear implant as an enhancement; they think of it as a therapeutic intervention. One ethical question that arises is, What if in the future such implants or other devices allow you to have superhuman hearing? What if such an implant allows one to hear frequencies the human ear cannot? Is that OK too? Would hearing more frequencies provide a survival advantage? Would you be less of a person or less successful if everyone around you had one and you didn't? Will you have to upgrade to silicon to survive? These are the questions we are going to be facing, and they don't concern only sensory enhancements.
Artificial Retinas.
Progress toward retinal implants has been slower. Two questions remain unanswered: How many electrodes will a retinal implant need to provide useful vision? And how much sight counts as useful? Is being able to navigate enough, or must one be able to see well enough to read? Experimental retinal implants that have been tested in humans have only sixteen electrodes, and the vision they provide is only spots of light. A second implant, not yet ready for human testing, has sixty-four. No one knows how many electrodes will be necessary to provide adequate vision. It may well be that hundreds or thousands will be needed, and their development will depend on continuing advances in nanotechnology and the miniaturization of electrode arrays. Rodney Brooks, a leader in the robotics world, sees the possibility of retinal implants being adapted for night vision, infrared vision, or ultraviolet vision.9 One day you may be able to trade in one good eye for one of these implants to enhance your vision beyond that of natural humans.
Locked-In Syndrome.
One of the most terrifying brain injuries a person can sustain is a lesion to the ventral part of the pons in the brain stem. People with such a lesion are awake, conscious, and intelligent but can't move any skeletal muscles, which also means they can't talk, eat, or drink. This is known as locked-in syndrome. The lucky ones, if you can call them that, can voluntarily blink or move their eyes, and this is how they communicate. Lou Gehrig's disease (amyotrophic lateral sclerosis, or ALS) can also result in this syndrome. Phil Kennedy, a neurologist at Emory University, came up with a technology he felt could help these people. After successful trials in rats and monkeys, he was given the OK to try it in humans.
In 1998, for the first time, Kennedy implanted an electrode made up of a tiny hollow glass cone attached to two gold wires. The electrode is coated with neurotrophic factor, which encourages brain cells to grow into the tube and hold it stable in the brain. The electrode is implanted in the region of the motor cortex that controls the left hand and picks up the electrical impulses the brain generates. The patient imagines moving his left hand, and the electrode picks up the electrical impulse that this thought produces. The electrical impulse travels down the two wires, which are connected to an amplifier and an FM transmitter outside the skull but under the scalp. The transmitter signals to a receiver external to the scalp. These signals are routed to the patient's computer, interpreted and converted by software, and end by moving the cursor on the computer screen. Kennedy's first patients were able, after extensive training, to imagine moving their left hand and thereby move the cursor on the computer screen!10, 11 This was and still is truly amazing. He had captured electrical impulses generated by thinking about a movement and translated them into the movement of a computer cursor. It requires huge processing power.12 A myriad of neural signals must be sorted through to remove "noise," the remaining electrical activity must be digitized, and decoding algorithms must process the neural activity into a command signal-all in a few milliseconds. The result is a command that the computer can respond to.
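In outline, the chain just described (filter out noise, detect spikes, decode a command) can be sketched in a few lines of Python. This is a minimal illustration of the concept, not Kennedy's actual software; the sampling rate, threshold, and cursor gain below are invented for the example.

```python
# Minimal sketch of a spike-decoding pipeline: raw electrode voltage in,
# cursor command out. Illustrative only; all numbers here are invented.
import numpy as np

def decode_cursor_step(raw_signal, sample_rate_hz=10_000, threshold_sd=4.0):
    """Turn a short stretch of raw electrode voltage into a cursor velocity."""
    # 1. Remove slow drift ("noise") with a simple running-mean subtraction.
    centered = raw_signal - np.convolve(raw_signal, np.ones(50) / 50, mode="same")
    # 2. Detect spikes: samples exceeding N standard deviations of background.
    spikes = int(np.sum(centered > threshold_sd * np.std(centered)))
    # 3. Decode: map firing rate to horizontal cursor velocity (Kennedy's
    #    first patients could move the cursor only horizontally).
    firing_rate_hz = spikes / (len(raw_signal) / sample_rate_hz)
    return 0.1 * firing_rate_hz  # pixels per update; gain is arbitrary
```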
This is all based upon an implant that can survive in the salty sea-like environment inside the body without corroding, transmit electrical signals without producing toxic by-products, and remain cool enough to avoid cooking the nearby neurons. This was not an easy assignment. This is an incredible first step, which actually, of course, was not the first step but one based on hundreds of thousands of other steps. And one electrode doesn't provide a lot of information. It took the patient months to learn how to use it, and the cursor could only move horizontally, but the concept worked. There are several groups approaching this drawing board from different angles.13 This type of device is known as a brain-computer interface (BCI). Unlike the cochlear implant, which is supplying sensory input information to the brain, BCIs work on the output from the brain. They pick up electrical potentials generated in the brain as a by-product of neuronal activity and translate the neuronal signals into electrical impulses that can control the computer cursor-or, in the future, other devices.
BASIC-SCIENCE BREAKTHROUGHS.
In 1991, Peter Fromherz of the Max Planck Institute in Germany succeeded in developing a neuron-silicon junction, between an insulated transistor and a Retzius cell of a leech,14 and this was the beginning of actual brain-computer interfaces. The problem that had to be surmounted was that although computers and brains both work electrically, their charge carriers are different. It's roughly like trying to hook up your gas stove to an electric line. Electrons carry the charge in the solid silicon of the chip, and ions (atoms or molecules that have gained or lost an electron) do the job in liquid water for the biological brain. Semiconductor chips also have to be protected from corrosion in the body's saltwater environment, as anyone who has ever worked or lived by the ocean knows. Fromherz's "intellectual and technological challenge" was to join these different systems directly at the level of electronic and ionic signals.15 This technology more recently allowed another lab to implant a different system, called the BrainGate system, developed by John P. Donoghue at Brown University using a neural implant developed by Richard Normann at the University of Utah. The implant, known as the Utah electrode array, was originally designed for the visual cortex, but Donoghue thought it would work as well in the motor cortex. In 2004, an implant with ninety-six electrodes was surgically inserted into Matthew Nagle, a quadriplegic patient who had been stabbed in the neck at a Fourth of July celebration three years before while coming to the aid of a friend. Since Nagle had been quadriplegic for a few years, no one knew whether the part of his brain that controlled his motor system would still respond or whether it would have atrophied from disuse. However, he began to respond right away.
It was also easier to use than Kennedy's implant. Nagle didn't need several months of training before he was able to control it. Just by thinking about it, he was able to open simulated e-mail and draw an approximately circular figure on the computer screen using a paint program. He could adjust the volume, channel, and power on his television, and play video games, such as Pong. After a few trials, he was also able to open and close a robotic prosthetic hand by just looking at the hand, and he used a simple multijointed robotic limb to grasp an object and transport it from one location to another.16 This was not done easily or smoothly, but it was possible. Obviously this is huge. Anything that gives such people any degree of control over their environment is momentous. The system still has many bugs to be worked out. When the patient wants to use the system, a cable that leads to the bulky external processing equipment must be attached to a connector on his skull. Each time it is turned on, a technician has to recalibrate the system. And, of course, the electrode array in the brain is no small potatoes. The risk of infection is ever present, as are the probability of scar tissue eventually causing the implant to lose function, the risk of causing more damage with insertion or movement of the array, and its possible malfunction.
How can a chip with only ninety-six electrodes code for the movement of an arm? The idea that recording the firing of just a few neurons could accomplish a motor activity came from Apostolos Georgopoulos, a neurophysiologist currently at the University of Minnesota. He had observed that an individual nerve cell performs more than one function: A single neuron fires for more than one direction of movement, but has a preferred direction of movement.17 It turned out that the frequency at which it fired signaled the direction of the muscle's movement: firing more frequently meant movement in one direction; less frequently, another-a bit like a Morse code of the brain. Georgopoulos found that through a vector analysis (not everyone has forgotten their high school trig class) of each neuron's firing frequency and preferred direction, he could accurately predict the direction of muscular movement.18 He also suggested that recording only a few neurons, between 100 and 150, would produce fairly accurate predictions of movement in three-dimensional space.19 This made using a small electrode panel feasible for recording neuronal intentions.
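Georgopoulos's insight lends itself to a toy computation: each neuron "votes" for its preferred direction, weighted by how far its firing rate rises above baseline, and the weighted votes are summed into a single predicted direction. The tuning directions and firing rates below are made up for the example.

```python
# Toy population-vector decoder in the spirit of Georgopoulos: weight each
# neuron's preferred direction by its firing rate above baseline, then sum.
import numpy as np

def population_vector_deg(preferred_deg, rates_hz, baseline_hz):
    angles = np.radians(preferred_deg)
    unit_vecs = np.column_stack([np.cos(angles), np.sin(angles)])
    weights = np.asarray(rates_hz) - np.asarray(baseline_hz)  # rate above baseline
    x, y = weights @ unit_vecs                                # the summed "votes"
    return np.degrees(np.arctan2(y, x)) % 360

# Three neurons tuned to 0, 120, and 240 degrees; the first fires hardest,
# so the decoded direction comes out close to 0 degrees.
print(population_vector_deg([0, 120, 240], [40, 15, 12], [10, 10, 10]))
```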
For a locked-in patient, or a paralyzed patient, more autonomy would include feeding himself and being able to get a glass of water without calling for assistance. Controlling a robotic arm to perform these tasks would be great. However, there are still many limiting factors to these systems. Without enumerating all the bugs, one obvious factor is that they are open-loop systems: Information goes out, but none comes back in. In order for a person to be able to control a prosthetic arm to drink a cup of coffee or feed himself at his own pace, sensory information needs to be sent back to the brain to prevent the proverbial slip 'twixt cup and lip. Anyone who has done the Mr. Small skit knows about this problem.*
The input problem is a complicated business. No one quite knows all the ins and outs of how proprioception works. In addition, there is the need for sensory information, such as how firmly one is grasping a cup, its weight, temperature, and whether it is following a smooth trajectory to the mouth. There is hope that if this information can be programmed into a prosthetic arm, perhaps the real arm could be programmed and directed too. The arm would have its nerves connected to chips that receive signals from the implants in the brain directing its movement, but also incoming sensory signals would be decoded by the chip and sent to the brain to give it feedback. In this way, the implant would serve as a bridge to bypass the severed nerves.
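The difference the feedback makes can be seen in a toy control loop: a grip command goes out, a force reading comes back in, and the command is corrected until the cup is held firmly but not crushed. Every function and number here is a made-up stand-in, purely to show the closed-loop idea.

```python
# Toy closed-loop grip control: the sensory reading feeds back to correct
# the outgoing command, which an open-loop system cannot do.

def hand_force_newtons(command):
    """Hypothetical stand-in for the prosthetic hand's response."""
    return 2.0 * command

def closed_loop_grip(target_n=5.0, gain=0.1, steps=50):
    command = 0.0
    for _ in range(steps):
        measured = hand_force_newtons(command)  # sensory signal coming back
        error = target_n - measured             # what an open loop never sees
        command += gain * error                 # correct the outgoing command
    return command, hand_force_newtons(command)

print(closed_loop_grip())  # settles at a grip force of about 5 newtons
```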
The human arm, however, which we take for granted as we reach for a cup of java or twist a little pasta onto a fork-that whole shoulder-elbow-wrist-hand with all its fingers and its network of bones, nerves, tendons, muscles, and ligaments-is immensely complicated. Muscles are flexing and extending together, being stimulated and inhibited, twisting and adjusting their movement constantly, all at varying velocities, all with sensory, proprioceptive, cognitive, and pain feedback telling the brain the muscles' position, force, stretch, and velocity. The sensory system is actually sending back to the brain about ten times the information the motor system is sending out. The current implants are obviously still quite crude, but they are being improved every year, reduced in size and given more capacity, just as personal computers have gotten smaller and faster with more memory. But the idea works. Neurons in your brain can grow onto a computer chip and transfer neuronal signals to it. There can be silicon replacement parts for the brain.
Richard Andersen, a professor of neuroscience at Caltech, has another idea. He thinks that instead of using the motor cortex as the site to capture neuronal firings, it would be better and easier to go back up to a higher cortical area where the visual feedback is processed and the planning for the movement is made: the parietal cortex.20 The posterior parietal cortex is situated between the sensory and the motor regions and serves as a bridge from sensation to action. His lab has found that an anatomical map of plans exists within this area, with one part devoted to planning eye movements and another part to planning arm movements.21, 22 The action plans in the arm-movement area exist in a cognitive form, specifying the goal of the intended movement rather than particular signals for all the biomechanical movements. The parietal lobe says, "Get that piece of chocolate into my mouth," but does not detail all the motions that are necessary: "First extend the shoulder joint, by flexing the blah blah blah...." All these detailed movements are encoded in the motor cortex. Andersen and his colleagues are working on a neural prosthesis for paralyzed patients that records the electrical activity of nerve cells in the posterior parietal cortex. Such an implant would interpret and transmit the patient's intentions: "Get the coffee to my mouth." They think this will be much easier for software programmers. The neural signals are decoded using computer algorithms and converted into electrical control signals to operate external devices such as a robot arm, an autonomous vehicle, or a computer. The robotic arm or vehicle would simply receive the input as a goal-chocolate in mouth-leaving the determination of how to accomplish the goal to other systems, such as smart robotic controllers. Smart robots? We'll get there soon. This bypasses the need for a closed-loop system. The system also needs relatively few neurons to send a signal.23
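A rough sketch of this division of labor might look like the following, where the implant's decoder only names the goal and a separate controller expands it into movements. The goal labels, template matching, and canned plans are all assumptions for illustration, not Andersen's algorithms.

```python
# Goal-based decoding in miniature: classify WHAT the user intends from
# parietal activity, and let a "smart" controller decide HOW to do it.
import numpy as np

GOALS = ["rest", "grasp_cup", "bring_to_mouth"]

def decode_goal(firing_rates, goal_templates):
    """Pick the goal whose stored firing pattern best matches current activity."""
    scores = [np.dot(firing_rates, t) /
              (np.linalg.norm(firing_rates) * np.linalg.norm(t))
              for t in goal_templates]
    return GOALS[int(np.argmax(scores))]

def robot_controller(goal):
    # Trajectory planning lives out here, not in the brain signal.
    plans = {"rest": [],
             "grasp_cup": ["reach", "close_hand"],
             "bring_to_mouth": ["reach", "close_hand", "lift", "tilt"]}
    return plans[goal]
```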
Brain surgery, implants, infection-can't they figure out something that doesn't require going inside the head? Can't they use EEGs? Jonathan Wolpaw, chief of the Laboratory of Nervous System Disorders of the New York State Department of Health and State University of New York, thinks so. He has been working on this problem for the last twenty years. When he first began, he had to figure out if the idea of using brain waves captured externally was possible. He made a headset with a series of external electrodes positioned over the motor cortex, where neurons fire to initiate movement. These neurons give off weak electrical signals that the electrodes pick up. Getting useful signals from "a few amplitudes of scalp-recorded EEG rhythms that reflect in a noisy and degraded fashion the combined activity of many millions of neurons and synapses"24 was difficult. After several years, he was able to show that people could learn to control their brain waves to move a computer cursor. The software for this system has been many years in development. The headset electrodes pick up the signals, and because the strength of the signals varies from person to person, and from one part of the cortex to another, the software is constantly surveying the different electrodes for the strongest signals, giving those the greatest influence in the decision-making process as to which way a cursor should move.
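In miniature, that constant surveying of electrodes might look like the following: channels whose amplitudes track the user's intent earn more influence over the cursor. The correlation-based weighting here is an illustrative assumption, not Wolpaw's published algorithm.

```python
# Sketch of adaptive electrode weighting: give the strongest, most reliable
# EEG channels the greatest say in which way the cursor moves.
import numpy as np

def update_weights(amplitudes, targets):
    """amplitudes: (n_trials, n_channels) rhythm amplitudes per trial.
    targets: (n_trials,) intended cursor directions, e.g. -1 or +1."""
    n_channels = amplitudes.shape[1]
    weights = np.zeros(n_channels)
    for ch in range(n_channels):
        # Channels whose amplitude correlates with intent get more influence.
        weights[ch] = np.corrcoef(amplitudes[:, ch], targets)[0, 1]
    return weights / (np.abs(weights).sum() + 1e-12)

def cursor_step(current_amplitudes, weights):
    # Weighted vote across electrodes decides the cursor's direction.
    return 1 if current_amplitudes @ weights > 0 else -1
```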
Scott Hamel, one of the subjects who test Wolpaw's system, says it is easiest to use when he is fully relaxed. If he tries too hard, has other things on his mind, or gets frustrated and tense, things don't go as well.4 Too many neurons are competing for attention. Wolpaw and his group, and others who have taken up the challenge, have found that "a variety of different brain signals, recorded in a variety of different ways and analyzed with a variety of different algorithms, can support some degree of real-time communication and control."25 However, there is a big problem, and it is not just with externally controlled BCIs. It is also true of the implants. Even in controlled conditions, the results are variable. Users are better on some days than others, and performance can vary widely even within a single session and from trial to trial. Cursor movements are slow and jerky, described by some as ataxic.24 Wolpaw thinks this problem is going to persist unless researchers take into account the fact that BCIs ask the brain to do something entirely new.
This becomes clear if you look at what the brain normally does to produce movement and how it normally does it. The job of the central nervous system (CNS) is to convert sensory inputs into appropriate motor outputs. This job of creating motor outputs is a concerted effort of the entire CNS from the cerebral cortex to the spinal cord. No single area is wholly responsible for an action. Whether you walk, talk, high jump, or bronco bust, there is a collaboration among areas, from the sensory neurons up the spinal cord to the brain stem and eventually to the cortex and back down through the basal ganglia, thalamic nuclei, cerebellum, brain-stem nuclei, and spinal cord to the interneurons and motor neurons. And even though the motor action is smooth and consistent from one time to the next, the activity in all those different brain areas may not be. However, when a BCI is being used, it is a whole new ball game. Motor actions, which are normally produced by spinal motor neurons, are now being produced by the neurons that normally just contribute to the control of the motor neurons. Now they are putting on the whole show. They have to do their own job and assume the role normally performed by spinal motor neurons; their activity becomes the final product, the output, of the entire CNS. They are doing it all.
The brain has some plasticity, but there are limits. Wolpaw makes the point that BCIs provide new output pathways for the brain, but the brain has to learn them. The brain has to change the way it normally functions. He thinks that in order to make BCIs perform better, researchers have to make it easier for the brain to implement these new output pathways. An output pathway can either control a process or select a goal. He also thinks that outputting a goal is easier. Just tell the software the goal, and let it do all the work. Wolpaw is walking into Andersen's camp.
This technology has not been overlooked by the business world. Several companies have come up with their own versions, developed for playing computer games. One company, Emotiv, has a sixteen-sensor strap-on headset that it claims reads emotions, thoughts, and facial expressions. According to the company, it is the first brain-computer interface that can detect human conscious thoughts and nonconscious emotions. Its current gaming application allows 3-D characters to reflect the player's expressions: You wink, it winks; you smile, it smiles. It also allows the manipulation of virtual objects using the player's thoughts.
Another company, NeuroSky, has come up with a single-electrode device that it claims will read emotions, with software that translates them into commands to control a game. Other companies are adapting NeuroSky's technology for use in cell-phone headsets and MP3 players. The sensor will sense your emotional state and pick music that is compatible with it: no downer songs while you are feeling fine, and, for those slow-to-wake-up folks, no heavy metal until after 11:00 A.M. Just exactly what is being recorded and used is, of course, not being revealed by either company.
Aiding Faulty Memories with Silicon.
Another problem begging for a solution has to do with the increasing elderly population: memory loss. The normal slow loss of memory is annoying enough even without the devastating problem of Alzheimer's disease. Although the neuronal implants that we have discussed deal with sensory or motor functions, other researchers are concerned with restoring lost higher-level thought processes. Theodore Berger at USC has been interested in memory and the hippocampus for years, and more recently he has been working toward creating a prosthesis that will perform the service that Alzheimer's disease plays havoc with: the transfer of information from immediate memory to long-term memory. The hippocampus has a star role in the formation of new memories about experienced events, as evidenced by the fact that damage to the hippocampus usually results in profound difficulties in forming new memories and also affects retrieval of memories formed prior to the damage. Procedural memory, such as learning how to play an instrument, doesn't appear to be part of the hippocampus's job description, for it is not affected by hippocampal damage.
The hippocampus is located deep in the brain and is evolutionarily old, which means that it is present in less-evolved animals. Its connections, however, are less complicated than those of other parts of the brain, and this makes Berger's goal a tad (and only a tad) easier. Just what the damaged cells in the hippocampus did is still a matter of conjecture, but that doesn't slow down Berger and his big plan to develop a chip for people with this type of memory loss. He doesn't think he needs to know exactly what the cells did. He thinks all he has to do is provide a bridge between the input of cells on one side of the damaged region and the output of cells on the other.
Not that that is a walk in the park. He has to figure out, from an electrical input pattern, what the output pattern should be. For instance, let's say that you are a telegraph operator who translates Morse code from one language to another. The problem is, you don't know or understand either of the languages. You receive a code tapped out in Romanian and then have to translate it and tap it out in Swedish. You have no dictionaries or codebooks to help you; you just have to figure it out. That is what his job has been like, but harder. It has taken several years and the help of researchers from many different disciplines. In Berger's system, the damaged CNS neurons would be replaced with silicon neurons that mimic their biological function. The silicon neurons would receive electrical activity as inputs from, and send it as outputs to, regions of the brain with which the damaged region previously was connected. This prosthesis would replace the computational function of the damaged brain region and restore the transmission of that computational result to other regions of the nervous system.26 So far his tests on rats and monkeys have "worked extremely well," but tests on humans are still a few years away.4
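The telegraph analogy describes a pure input-output learning problem: recover the mapping from paired examples alone, with no codebook. Here is a minimal sketch of that idea, using randomly generated stand-in data rather than real hippocampal recordings, and a plain least-squares fit rather than Berger's nonlinear models.

```python
# Learn an unknown input->output mapping purely from paired observations,
# then use the learned mapping as a "bridge" across the damaged region.
import numpy as np

rng = np.random.default_rng(0)

# Pretend we recorded 500 paired input/output firing patterns (32 cells on
# each side) from intact tissue. The hidden rule is nature's code, unseen.
inputs = rng.random((500, 32))
hidden_rule = rng.standard_normal((32, 32)) / 32
outputs = inputs @ hidden_rule          # we only ever see these pairs

# Fit the mapping from the pairs alone (least squares, no "codebook").
learned_rule, *_ = np.linalg.lstsq(inputs, outputs, rcond=None)

def silicon_bridge(input_pattern):
    """Stand in for the damaged cells: predict the output pattern."""
    return input_pattern @ learned_rule
```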
Caveats and Concerns.
Futurists like Ray Kurzweil envision this technology being able to do far more. He foresees enhancement chips: chips that will increase your intelligence, chips that will increase your memory, chips that can have information downloaded into them. Learn French, Japanese, Farsi? No problem, just download it. Do advanced calculus? Download it. Increase your memory? Sure, just get another five-terabyte chip implanted. Mary Fisher Polito, a friend who occasionally suffers from a "senior moment" memory lapse, says, "I hope they hurry up with those chips. I could use some more RAM now." Kurzweil also envisions the world being populated with such intelligent people that the major problems facing us will be easily solved. "Greenhouse gases? Oh, I know how to fix that. Famine? Who's hungry? There have been no reports of hunger for the last fifty years. War? That is so retro." But then, Chris von Ruedon, one of my students, points out, "It's often the most intelligent people who cause such problems." Others are concerned about such scenarios as: "Honey, I know that we were saving this money for a vacation, but maybe we should get the twins neural chips instead. It is hard for them in school when so many of the other kids have them and are so much smarter. I know you wanted them to stay natural, but they just can't keep up, and their friends think they are odd." Artifact-driven evolution!
But in a sense, the story of human evolution has been artifact-driven ever since the first stone ax was chipped, and perhaps even earlier. Merlin Donald, a cognitive neuroscientist at Case Western Reserve University, thinks that although humanity is greatly concerned about changes in the physical ecology of the external world, we should be paying more attention to what has been going on inside our heads. Information storage and transfer went from the internally stored memory and experience of a single individual to being internally stored and transferred by many individuals as storytellers, to external memory storage on papyrus, then to books and libraries, then to computers and the Internet. He thinks that there have been equally massive changes in the cognitive ecology, due to the advent of these huge banks of external memory storage, and we are not done yet. He predicts that this runaway proliferation of information will probably set our future direction as a species.27 Perhaps the next step in this evolution of information storage will be to store it internally again, this time with the help of implanted silicon: just another tool.
Or not. The idea that we are messin' with our innards is disturbing to many. And just what would we do with expanded intelligence? Are we going to use it for solving problems, or will it just allow us to have longer Christmas card lists and bigger social groups? If we spend 90 percent of our time talking about each other, will we solve the world's problems or just have more stories to tell? But there is another major problem with Kurzweil's scenario: No one knows what it is that the brain is doing that makes a person intelligent. Just having a lot of information available doesn't necessarily make a person more intelligent. And being intelligent does not necessarily make a person wise. As David Gelernter, a computer scientist at Yale, wonders, "What are people well informed about in the information age?...Video games?" He isn't impressed; in fact, he seems to think people are less informed.28 So what about intelligence? What were those smart robots all about?
SMART ROBOTS?
My desires in a personal robot are rather mundane. I just want it to do all the things I don't want to do. I want it to get the mail, hand me any personal handwritten letters and invitations, and take everything else and deal with it. I want it to check my e-mail and throw out all the spam and pay my bills. I want it to keep track of finances, fund my retirement, do the taxes, and hand me a net profit at the end of the year. I want it to clean the house (including the windows), and it might as well do all the car maintenance. Ditto with weeding, trapping gophers, and...well, it might as well do the cooking, too, except when I want to. I would like my robot to look like Sophia Loren in Divorce Italian Style, not R2D2. I may have trouble with that one, because my wife wants Johnny Depp doing all the chores. Maybe R2D2 isn't such a bad idea. As I said, my needs are mundane. I can do all these things, but I'd rather spend my time doing something else. For disabled persons who cannot do any of these things, a personalized robot would allow far more autonomy than they have.
The thing is, this may not be so far off, or at least some of it, and that would be great. But maybe, if we aren't careful, the smart robot won't be grumbling about cat hair as it is cleaning the floor. It may be discussing quantum physics or, worse yet, its "feelings." And if it is intelligent, will it still do all our chores? Just like you and your kids, won't it figure out a way not to do them? That would mean it would have desires. Once it has feelings, will we feel guilty about making it do all the scut work, and start cleaning up before the robot comes in, and apologizing for the mess? Once it is conscious, will we have to go to court to get it decommissioned so we can get the latest model? Will a robot have rights? As Clynes and Kline pointed out in their original description of a cyborg in space, "The purpose of the Cyborg...is to provide an organizational system in which [such] robot-like problems are taken care of automatically and unconsciously, leaving man free to explore, create, think, and feel."7 Without my actually merging physically with silicon, without actually becoming a cyborg, a separate silicon assistant could just as easily give me more time to explore, create, think, and feel (and, I might add, gain weight). So I am going to be careful which model I order. I do not want a robot with emotions. I don't want to feel guilty that my robot is vacuuming while I am out on the deck in the sun eating a now mandatory calorie-reduced lunch and thinking deep thoughts, like maybe I should get up and weed.
How close are we to my idea of a personal robot? If you haven't been keeping up with what is going on in the world of robotics, you will be amazed. There are currently robots doing plenty of the jobs that are repetitive and/or require precision, from automobile assembly to surgery. For now, the domain of robots is the three Ds-dull, dangerous, or dirty. The dirty category includes toxic waste cleanups. Surgery is none of those three; it is just being done on a microscopic level. Pack Bots that weigh eighteen kilograms are currently being used as emergency and military robots. They can negotiate rough terrain and obstacles such as rocks, logs, rubble, and debris; they can survive a drop of two meters onto a concrete surface and land upright; and they can function in water up to two meters deep. They can perform search and rescue, and disarm bombs. They are being used to detect roadside bombs and reconnoiter caves. However, these robots do not look like your dream of a handsome search-and-rescue guy (like my brother-in-law) as you are lying at the base of some cliff you foolishly tried to climb. They look like something your kid would build with an erector set.
There are also unmanned robotic aircraft. A robot has driven most of the way across the United States. Driving in an urban setting is still the most difficult test and has yet to be perfected. The Urban Challenge, a sixty-mile competition for autonomous vehicles sponsored by the Defense Advanced Research Projects Agency (DARPA), was held in November 2007. Vehicles had to be able to negotiate city streets, intersections, and the parking lot, including finding a spot, parking legally, and then leaving the lot without a fender bender, while avoiding shopping carts and other random objects. This is not remote control. These are cars controlled by software, driving on their own. It may not be too long before computer programs will drive all cars. We will recline, read the paper, munch a doughnut (I'll take jelly), and drink a latte on the way to work.
But so far, on the home-cleaning front, all we have is a floor cleaner and vacuum cleaner that looks like a CD player, and a lawn mower. But what these robots have, and what my dream does not have, are wheels. No robot yet can move through the room like Sophia Loren or Johnny Depp. Half the neurons in the human brain are at work in the cerebellum. Part of their job is motivating, not in the sense of "come on, you can do it," but in the sense of Chuck Berry and Maybelline in the Coupe de Ville motivatin' up the hill-that is, timing and coordinating muscles and skills.
Developing a robot with animal-like motion is incredibly difficult and has yet to be accomplished, but engineers at Shadow Robot Company in England, under founder Richard Greenhill, think they are getting close. Since 1987, they have been working to build a bipedal robot. Greenhill says, "The need for anthropomorphism in domestic robotics is classically illustrated by the problem of staircases. It is not feasible to alter houses or to remove the staircases. It is possible to design robots with stair-climbing attachments, but these are usually weak spots in the design. Providing a robot with the same locomotive structures as a human will ensure that it can certainly operate in any environment a human can operate in."29 They are getting there, and along the way they have developed many innovations, one of them being the Shadow Hand, a state-of-the-art robotic hand that can do twenty-four of the twenty-five movements a human hand can perform. It has forty "air muscles," another of their inventions. The Shadow Hand has touch sensors on its fingertips and can pick up a coin. Many other laboratories are working on other aspects of the anthropomorphic robot. David Hanson, at the University of Texas, has made a substance he calls Flubber, which is very much like human skin and allows lifelike facial expressions.* So it is possible to have a robotic Johnny Depp sitting in your living room, but he isn't up to doing the tango yet.
Japan Takes the Lead.
Japan is a hot spot for robotic research. The Japanese have a problem that they are hoping robots will help solve. Japan has the lowest birth rate in the world, and 21 percent of the population is over sixty-five, the highest proportion of elderly in any nation. The population actually started declining in 2005, when deaths began to exceed births. The government discourages immigration; the population is over 99 percent pure Japanese. Any economist will tell you this is a problem. There aren't enough young people to do all the work; shortages are already being felt in many areas, including nursing. So if the Japanese don't want to increase immigration, then they are going to have to figure out another way to take care of their elders. They are looking to robotics.
At Waseda University, researchers have been working on creating facial expressions and upper-body movements that correlate with the emotions of fear, anger, surprise, joy, disgust, sadness, and, because it is Japan, a Zen-like neutral state. Their robot has been created with sensors: It can hear, smell, see, and touch. They are studying how senses translate into emotions and want to develop a mathematical model for this.30 Their robot will then react to external stimuli with humanlike emotions. It is also programmed with instinctual drives and needs. Its needs are driven by appetite (energy consumption), the need for security (if it senses it is in a dangerous situation, it will withdraw), and the need for exploration in a new environment. (I will not order one of these.) The Waseda engineers have also made a talking bot that has lungs, vocal cords, articulators, a tongue, lips, a jaw, a nasal cavity, and a soft palate. It can reproduce a humanlike voice with a pitch control mechanism. They have even built a robot that plays the flute.
At Meiji University, designers have set their sights on making a conscious robot. It may be that from this intersection of robotic technology, computer technology, and the desire to make humanlike robots, a greater understanding of human brain processing will emerge. Building a robot to act and think as a human does means testing the theories of brain processing with software and seeing if the result corresponds to what the human brain is actually doing. As Cynthia Breazeal, who leads a group at MIT, points out, "While many researchers have proposed models of specific components of social referencing, these models and theories are rarely integrated with one another into a coherent, testable instance of the full behavior. A computational implementation allows researchers to bring together these disparate models into a functioning whole."31 Tohru Suzuki, Keita Inaba, and Junichi Takeno lament that no one yet has presented a good integrated model to explain consciousness. Yak yak yak, but how do you actually hook it all up? So instead of shrugging their shoulders, they went about making their own model and then built a robot using this design.
Actually they built two, and you will see why. They believe that consciousness arises from the consistency of cognition and behavior.32 What does that remind you of? How about mirror neurons, those same neurons that fire both when you cogitate a behavior and when you perform it? You can't get more consistent than that. Next they turn to a theory by Merlin Donald: that the ability to imitate motor action is the foundation of communication, language, the human level of consciousness, and human culture in general. This is known as mimesis theory. Donald has been thinking a lot about the origins of language, and he just does not see it happening without fine motor skills, and in particular the ability to self-program motor skills. After all, language and gesture require the refined movements of muscles. And while other animal species have genetically determined, rigid types of behavior, human language is not rigid but flexible. Thus the motor skills required for language must also be flexible. There just had to be voluntary, flexible control of muscles before language could develop. He sees this flexibility coming from one of the fundamentals of motor skill: procedural learning. To vary or refine a motor movement, one needs to rehearse the action, observe its consequences, remember them, and then alter what needs to be altered. Donald calls this a rehearsal loop, something we are all familiar with. He notes that other animals do not do this; they do not initiate and rehearse actions entirely on their own for the purpose of refining their skill.33 Your dog is not practicing shaking hands all day while you are at the office. Donald thinks that this rehearsal-loop ability is uniquely human and forms the basis for all human culture, including language.
So, Suzuki and pals drew up a plan for a robot that had consistency of behavior and cognition. They built two, to see if they would show imitative behavior. One robot was programmed to make some specific movements, and the second robot copied them! Imitative behavior implies that the robot can distinguish itself from another robot: It is self-aware. They believe that this is the first step on the road to consciousness. Unlike other designs but like many models of human consciousness, this one had feedback loops for both internal and external information. External information (somatic sensation) feedback is needed for a robot to imitate and learn. The external result of action must come back to the interior in order to modify it if need be: Action must be connected to cognition. Internal feedback loops are what connect the cognition to the action. However, these robots don't look like what I'm pretty sure you are visualizing. They look like something that a mechanic would pull out from under the hood of a Mercedes and charge an arm and a leg to replace.
Meanwhile, Back at MIT.