Controlling Robots with Your Brain
Manager: Jean-Que M. Dar
Author: Adam Conner-Simons
Contributor: IP Precise
Imagine robots that could understand and act on what a person is thinking, without first having to be taught or told how to think. Such robots would be able to process a human's intent directly and carry it out.
Spearheaded by a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Boston University, efforts are underway to make this concept a reality, allowing people to “correct robot mistakes instantly with nothing more than their brains”.
Currently, the system can identify whether a person notices an error as a robot performs an object-sorting task, using data from an electroencephalography (EEG) monitor, which records brain activity.
In the current study, the team used a humanoid robot named “Baxter” from Rethink Robotics.
At present, the system only manages “relatively simple binary-choice activities”. However, the researchers believe robots could eventually function in far more intuitive ways.
According to Daniela Rus, Director of CSAIL, being able to instantly give a robot a command, without saying a word or performing any action such as tapping a knob, would improve our ability to supervise factory robots and other technologies yet to be invented.
A feedback system developed at MIT enables human operators to correct a robot's choice in real-time using only brain signals.
Intuitive human-robot interaction
In the past, humans have had to think in a prescribed way that computers could recognize. This training process has downsides, as the act of regulating one’s thoughts can be strenuous.
Whenever our brain notices a mistake, it generates “error-related potentials” (ErrPs). Rus' team focused on ErrPs because they wanted an intuitive experience: instead of the operator adapting to the machine, the machine adapts to the operator, and the operator no longer has to regulate their thoughts; all they need to do is mentally agree or disagree. As the robot indicates which choice it plans to make, the system uses ErrPs to determine whether the human agrees with that decision.
There are still developments to be made, such as fine-tuning the system, since ErrP signals are very faint, and detecting “secondary errors”, which may occur when the system doesn’t notice the human’s original correction.
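The correction loop described above can be sketched in simplified form. This is an illustrative example only, not MIT's actual system: the function names, the classifier threshold, and the idea of reducing ErrP detection to a single confidence score are all hypothetical simplifications of what a real EEG pipeline would do.

```python
# Hypothetical sketch of an ErrP-based binary-choice correction loop.
# A real system would classify raw EEG epochs; here we assume a
# classifier has already produced a confidence score in [0, 1].

ERRP_THRESHOLD = 0.5  # hypothetical classifier confidence cutoff


def detect_errp(score: float) -> bool:
    """Return True if the (assumed) classifier score indicates an ErrP."""
    return score > ERRP_THRESHOLD


def corrected_choice(initial_choice: str, primary_score: float,
                     secondary_score: float) -> str:
    """Correct a robot's binary choice using ErrP detections.

    The robot announces `initial_choice`; if a primary ErrP is detected,
    it switches to the other option. If a *secondary* ErrP then appears
    (the human disagrees with the switch too), it reverts.
    """
    options = {"left", "right"}
    choice = initial_choice
    if detect_errp(primary_score):          # human disagrees with the plan
        (choice,) = options - {choice}      # flip to the other option
        if detect_errp(secondary_score):    # human disagrees with the flip
            (choice,) = options - {choice}  # revert to the original
    return choice
```

For example, a strong primary ErrP with no secondary ErrP flips the robot's choice, while a secondary ErrP signals that the first detection was itself a mistake and the robot should revert.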
CSAIL research scientist Stephanie Gil believes that future systems could extend to more complex multiple-choice tasks. According to BU Ph.D. candidate Salazar-Gomez, this innovation could also prove useful for people who can't communicate verbally.
A professor of computer science at the University of Freiburg believes this work could have a truly great impact on the future of human-robot collaboration. Given how difficult it is to translate human language into a meaningful signal for robots, it could lead to effective tools for brain-controlled robots and prostheses.
The project was funded, in part, by Boeing and the National Science Foundation. To learn more, contact IPPrecise.