What You Need to Know About Mind-Controlled Technology

Controlling an object or a video game with your mind sounds like something out of a science fiction movie, but gadgets that translate brain waves into commands that control a computer are already a reality. Mind-controlled technology uses a brain-computer interface to establish a pathway of communication between the user’s brain and an external device. It has the potential to augment or even repair patients’ damaged hearing, sight, or movement. EEG sensors have been incorporated into gaming systems that enable a player to control what happens onscreen with a headset, EEG-controlled exoskeletons translate users’ brain signals into movements, and implanted electrodes enable patients to control bionic limbs.

The mind-controlled technology that researchers are working on today got its start in the 1920s, when researchers discovered the electrical activity of the human brain and developed electroencephalography (EEG), the practice of recording that electrical activity along the scalp. They found that neurons convey information via electrical “spikes,” which can be recorded with a thin metal wire, or electrode. By 1969, a researcher named Eberhard Fetz had connected a single neuron in a monkey’s brain to a dial the animal could see. The monkey learned to make the neuron fire faster to move the dial and earn a reward, and while Fetz didn’t realize it at the time, he had created the first brain-machine interface.

About 30 years ago, physiologists began recording from many neurons at once in animals and discovered that while the entire motor cortex lights up with electrical signals when an animal moves, each individual neuron tends to fire fastest in connection with certain movements. If you record signals from enough neurons, you can get a rough idea of the motion a person is making or intends to make. Researchers developed algorithms to reconstruct movements from motor cortex neurons, and in the 1980s Apostolos Georgopoulos found a relationship between the electrical responses of single neurons and the direction in which the animals moved their arms. Since the mid-1990s, researchers have been able to capture complex motor cortex signals recorded from groups of neurons and use them to control electronic devices, building the brain-computer interfaces that enable what we’d call mind-controlled technology.
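To give a sense of how that kind of decoding works, here is a minimal, hypothetical sketch of the population-vector idea associated with Georgopoulos’s findings: each simulated neuron fires fastest for a “preferred” direction, and summing each neuron’s preferred direction weighted by its firing rate recovers a rough estimate of the intended movement. All of the numbers and neuron properties below are invented for illustration, not taken from any study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population of 100 motor-cortex neurons, each with a random
# "preferred direction" in the 2-D plane (the classic cosine-tuning model).
n_neurons = 100
preferred = rng.uniform(0, 2 * np.pi, n_neurons)   # preferred angles (radians)
baseline, depth = 20.0, 15.0                        # firing-rate parameters (spikes/s)

def firing_rates(movement_angle):
    """Cosine-tuned rates: each neuron fires fastest when the movement
    matches its preferred direction, plus measurement noise."""
    rates = baseline + depth * np.cos(movement_angle - preferred)
    return rates + rng.normal(0, 3.0, n_neurons)

def population_vector(rates):
    """Weight each neuron's preferred direction by its baseline-subtracted
    rate and sum; the resulting vector points roughly where the arm moved."""
    weights = rates - baseline
    x = np.sum(weights * np.cos(preferred))
    y = np.sum(weights * np.sin(preferred))
    return np.arctan2(y, x)

true_angle = np.deg2rad(60)   # intended movement direction
decoded = population_vector(firing_rates(true_angle))
print(f"intended: 60.0 deg, decoded: {np.rad2deg(decoded):.1f} deg")
```

Run it a few times and the decoded angle lands close to the intended one; with more neurons, the estimate gets steadier, which is the basic reason recording from larger populations matters.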

While EEG has emerged as a promising way for paralyzed patients to control devices like computers or wheelchairs (by wearing a cap and undergoing training to learn to steer the device by imagining moving a part of their body, or by triggering commands with specific mental tasks), MIT’s Technology Review reported in 2010 that some researchers have noted that EEG has limited accuracy and can detect only a limited number of commands. Maintaining those mental exercises while trying to maneuver a wheelchair around a complex environment can be tiring, and the concentration required creates noisier signals that are more difficult for a computer to interpret. So some researchers are experimenting with shared control, which combines brain control with artificial intelligence to help turn crude brain signals into more complicated commands. With shared control, patients wouldn’t need to continuously instruct a wheelchair to move forward; they would only need to think the command once, and the software would take over from there.
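The division of labor behind shared control can be illustrated with a toy sketch: the user supplies only a coarse, occasional command decoded from brain signals, and the wheelchair’s software handles the continuous low-level driving. The command names, sensor readings, and control loop below are invented for illustration and are not based on any particular research system.

```python
import random

def decode_brain_command():
    """Stand-in for a BCI decoder: returns an occasional coarse command.
    In a real system this would come from classified EEG signals."""
    return random.choice(["forward", None, None, None])   # usually no new command

def read_obstacle_distance():
    """Stand-in for the wheelchair's range sensors (metres)."""
    return random.uniform(0.2, 3.0)

def shared_control_loop(steps=10):
    goal = None
    for t in range(steps):
        command = decode_brain_command()
        if command:                        # the user thinks the command once...
            goal = command
        if goal == "forward":              # ...and the software keeps executing it
            if read_obstacle_distance() < 0.5:
                print(f"t={t}: obstacle ahead, steering around it autonomously")
            else:
                print(f"t={t}: driving forward")
        else:
            print(f"t={t}: idle, waiting for a command")

shared_control_loop()
```

The point of the design is that the noisy, effortful part (the brain signal) is sampled rarely, while the tireless part (the software) runs every cycle.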

Last year, MIT’s Technology Review reported on a study in which a paralyzed woman used her mind to control a robotic arm. Jan Scheuermann, a woman diagnosed with a disease called spinocerebellar degeneration, underwent brain surgery in which doctors used an air gun to fire two beds of silicon needles, known as Utah electrode arrays, into her motor cortex, the slim strip of brain that runs over the top of the head toward the jaw and controls voluntary movement. The implants enable her to be plugged into a robotic arm at the University of Pittsburgh, which she controls with her mind and uses to move blocks, stack cones, or give high fives.

The Utah electrode array records from small populations of neurons to provide signals for a brain-computer interface. In a Utah array, signals are picked up only at the tip of each electrode, which limits how much information can be obtained at one time. Even so, the 192 electrodes on Scheuermann’s implants have recorded more than 270 neurons simultaneously, the most ever measured at once from a human brain.

The researchers on Scheuermann’s case demonstrated her abilities with the Action Research Arm Test, using the same kit of wooden blocks, marbles, and cups that doctors use to evaluate hand dexterity. She scored 17 out of 57, about as well as someone with a severe stroke; without the robotic arm, she would have scored zero. But some shortcomings of the technology have become apparent, and controlling the arm has grown harder as the implants stop recording. The brain is a hostile environment for electronics, movements of the array can build up scar tissue, and over time fewer neurons can be detected.

Scheuermann is one of about 15 to 20 paralyzed patients in long-term studies of implants that can convey information from the brain to a computer. Nine others have undergone similar tests in a related study, called BrainGate, and four “locked-in” patients, who are unable to move or speak, have regained some ability to communicate thanks to a different kind of electrode developed by a company called Neural Signals.

In 2011, the United States Food and Drug Administration said it would loosen its rules for testing “truly pioneering technologies” like brain-machine interfaces, and more researchers have undertaken human experiments. Researchers at Caltech want to give a patient “autonomous control over the Google Android tablet operating system,” and a team at Ohio State University, in collaboration with research and development firm Battelle, intends to use a patient’s brain signals to control stimulators attached to his arm in a process Battelle describes as “reanimating a paralyzed limb under voluntary control by the participant’s thoughts.”

These studies rely on the fact that recording the electrical activity of a few dozen cells in the brain can provide a fairly accurate picture of how someone intends to move a limb, but much of the technology is still experimental. John Donoghue, a Brown University neuroscientist who leads the BrainGate study, compares today’s brain-machine interfaces to the first pacemakers, which relied on carts of electronics and used wires punched through the skin into the heart; some were even hand-cranked. “When you don’t know what is going on, you keep as much as possible on the outside and as little as possible on the inside,” Donoghue explains. Today’s pacemakers are self-contained, powered by a long-lasting battery, and installed in a doctor’s office, and Donoghue says brain-machine interfaces are beginning a similar trajectory.

Over the years, scientists have built better and better decoders, the software that interprets neuronal signals, which has let them experiment with more ambitious control schemes. A key remaining challenge is creating an interface that will keep working for 20 years; solving that problem would enable thousands of patients to control wheelchairs, computer cursors, or even their own limbs. Researchers are working to develop ultrathin electrodes, make implants more compatible with the human body, and create sheets of flexible electronics that could wrap around the top of the brain.
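As a rough illustration of what a “decoder” does, the sketch below fits a simple linear map from simulated firing rates to a 2-D cursor velocity using ridge regression on a calibration dataset. Real decoders (Kalman filters, neural networks, and so on) are considerably more sophisticated; the data here is synthetic and the setup is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic calibration data: firing rates of 96 "neurons" recorded while the
# cursor moved with known 2-D velocities.
n_samples, n_neurons = 500, 96
true_weights = rng.normal(0, 1, (n_neurons, 2))             # hidden rate-to-velocity map
rates = rng.poisson(10, (n_samples, n_neurons)).astype(float)
velocity = rates @ true_weights + rng.normal(0, 5, (n_samples, 2))

# Fit a ridge-regression decoder: W = (X^T X + lambda*I)^-1 X^T Y
lam = 1.0
X, Y = rates, velocity
W = np.linalg.solve(X.T @ X + lam * np.eye(n_neurons), X.T @ Y)

# Decode the intended cursor velocity from a new burst of activity.
new_rates = rng.poisson(10, n_neurons).astype(float)
print("decoded velocity:", new_rates @ W)
```

In practice the hard part is not the fit itself but keeping it accurate as the recorded neurons drift and drop out, which is exactly the longevity problem described above.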

New medical devices will need to be safe, useful, and economically viable, requirements that brain-machine interfaces don’t currently meet, and it’s not yet clear exactly what form a potential product should take. The product most researchers have in mind is a technology that would make life easier for quadriplegics, but there are only about 40,000 patients in the U.S. with complete quadriplegia, and fewer with advanced ALS. Still, some think the technology may have wider applications, such as helping to rehabilitate stroke patients, and some recording technologies could be useful for understanding psychiatric diseases like depression or obsessive-compulsive disorder.

It’s possible that improving brain-computer interfaces will involve improving not just the technology, but the brains of the people using it. In September, a study conducted by researchers at the University of Minnesota found that people who practice yoga and meditation long-term can learn to control a computer with their minds more quickly and more efficiently than people with little or no such experience. As Science Daily reported at the time, the study involved 36 participants: 24 who had little or no yoga or meditation experience, and 12 who had at least one year of experience practicing yoga or meditation at least twice a week, an hour at a time.

Both groups of participants were new to systems that use the brain to control a computer, and both participated in three two-hour experiments in which they wore a “high tech, non-invasive” cap that picked up brain activity. They were asked to move a computer cursor across a screen by imagining left or right hand movements. The researchers found that the participants with yoga or meditation experience were twice as likely to complete the brain-computer interface task by the end of 30 trials, and learned three times faster than their counterparts in the left-right cursor movement experiments.
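The left-versus-right cursor task typically exploits the fact that imagining a hand movement suppresses the 8–12 Hz “mu” rhythm over the opposite hemisphere’s motor cortex. The sketch below shows that basic idea on synthetic data: compare mu-band power at electrodes over the left and right motor cortex (conventionally labeled C3 and C4) and pick the side with the relatively weaker rhythm. It is a deliberate simplification of what the study’s actual system would do, and the signals are simulated.

```python
import numpy as np

fs = 250                                    # sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)               # one 2-second EEG trial

def band_power(signal, low=8, high=12):
    """Power in the mu band (8-12 Hz), computed via the FFT."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    return spectrum[(freqs >= low) & (freqs <= high)].sum()

def classify_trial(c3, c4):
    """Imagined right-hand movement suppresses the mu rhythm over the left
    motor cortex (C3), and vice versa, so the quieter side wins."""
    return "right hand" if band_power(c3) < band_power(c4) else "left hand"

# Synthetic trial simulating imagined right-hand movement:
rng = np.random.default_rng(2)
noise = lambda: rng.normal(0, 1, len(t))
c3 = 0.3 * np.sin(2 * np.pi * 10 * t) + noise()   # suppressed mu rhythm at C3
c4 = 1.5 * np.sin(2 * np.pi * 10 * t) + noise()   # strong mu rhythm at C4
print(classify_trial(c3, c4))                      # prints "right hand"
```

Learning to produce that suppression reliably on demand is the skill participants have to train, which is where the attentional control built up by meditation or yoga plausibly helps.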

The lead researcher on the study was Bin He, a biomedical engineering professor in the University of Minnesota’s College of Science and Engineering and director of the University’s Institute for Engineering in Medicine, who drew international attention in 2013 when members of his research team controlled a flying robot with their minds. However, he and his team have found that not everyone can learn to control a computer with their brain so easily, and many participants were unsuccessful even after multiple attempts.

The next step for He and his research team is to study a group of participants over time who are practicing yoga or meditation for the first time, in order to see if their ability to control the brain-computer interface improves. “Our ultimate goal is to help people who are paralyzed or have brain diseases regain mobility and independence,” He said. “We need to look at all possibilities to improve the number of people who could benefit from our research.”
