
Thursday, 23 February 2017

Brain–Computer Interface Allows Speediest Typing to Date

Via Scientific American -- https://www.scientificamerican.com/article/brain-computer-interface-allows-speediest-typing-to-date/
A new interface system allowed three paralyzed individuals to type words up to four times faster than the speed that had been demonstrated in earlier studies
A participant enrolled by Stanford University in the BrainGate clinical trial uses the brain-computer interface to type by controlling a computer cursor with her thoughts. Credit: Courtesy Stanford University
Ten years ago Dennis Degray’s life changed forever when he slipped and fell while taking out the trash in the rain. He landed on his chin, causing a severe spinal cord injury that left him paralyzed below the neck. Now he’s the star participant in an investigative trial of a system that aims to help people with paralysis type words using only their thoughts.
The promise of brain–computer interfaces (BCIs) for restoring function to people with disabilities has driven researchers for decades, yet few devices are ready for widespread practical use. Several obstacles exist, depending on the application. For typing, however, one important barrier has been reaching speeds sufficient to justify adopting the technology, which usually involves surgery. A study published Tuesday in eLife reports the results of a system that enabled three participants—Degray and two people with amyotrophic lateral sclerosis (ALS, or Lou Gehrig's disease, a neurodegenerative disease that causes progressive paralysis)—to type at the fastest speeds yet achieved using a BCI—speeds that bring the technology within reach of being practically useful. “We're approaching half of what, for example, I could probably type on a cell phone,” says neurosurgeon and co-senior author Jaimie Henderson of Stanford University.
The researchers measured performance using three tasks. To demonstrate performance in the most natural scenario possible, one participant was assessed in a “free typing” task, where she just answered questions using the device. But typing speeds are conventionally measured using copy typing, which involves typing out set phrases, so all three participants were also assessed this way. The woman who performed the free-typing task achieved a rate faster than six words per minute, the other ALS patient managed nearly three and Degray achieved almost eight. The group reported comparable results in a Nature Medicine study in 2015, but these were achieved using software that exploited the statistics of English to predict subsequent letters. No such software was employed in this study.
The drawback of copy typing is that performance can vary with the specific phrases and keyboard layouts used. To get a measure independent of these factors, the third task involved selecting squares on a six-by-six grid as they lit up randomly. This gets closer to quantifying the maximum speed at which the system can output information, and is easily converted into a digital “bits per second” measure. The team used this range of tasks, without predictive software, because one of the study’s central aims was to develop standardized measures. “We need to establish measures so that—in spite of potential variability between people, methods and researchers—we can really say, ‘clearly this new advance led to higher performance,’ because we have systematic ways of comparing that,” says co-lead author Chethan Pandarinath, then a postdoctoral fellow at Stanford. “It's critical for moving this technology forward.”
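The article doesn’t spell out how the “bits per second” figure is computed, but a common convention in BCI work for this kind of grid task is an achieved bitrate: each net correct selection among N targets carries log2(N - 1) bits, divided by the elapsed time. Below is a minimal sketch of that calculation with made-up numbers; the eLife study’s exact scoring may differ.

```python
import math

def achieved_bitrate(num_targets, correct, incorrect, seconds):
    """Rough 'achieved bitrate' estimate for a target-selection task.

    Assumes the common convention: log2(N - 1) bits per net correct
    selection, divided by elapsed time. The study's exact formula may differ.
    """
    bits_per_selection = math.log2(num_targets - 1)
    net_selections = max(correct - incorrect, 0)
    return bits_per_selection * net_selections / seconds

# Hypothetical example: a 6-by-6 grid (36 targets), 40 correct and
# 2 incorrect selections in one minute -> roughly 3.2 bits per second
print(achieved_bitrate(36, 40, 2, 60.0))
```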
The two ALS patients achieved 2.2 and 1.4 bits per second, respectively, more than doubling previous records (held by these same participants in a previous study from this group). Degray achieved 3.7 bits per second, which is four times faster than the previous best speed. “This is a pretty large leap in performance in comparison to previous clinical studies of BCIs,” Pandarinath says.
Other researchers agree these are state-of-the-art results. “This is the fastest typing anyone has shown with a BCI,” says biomedical engineer Jennifer Collinger of the University of Pittsburgh, who was not involved in the study. “It's on par with technologies like eye-trackers, but there are groups those technologies don’t work for, such as people who are ‘locked-in.’” These speeds also approach what ALS patients questioned in a survey said they would want from a BCI device. “You're getting to the point where performance is good enough that users would actually want to have it,” Collinger says.
Participants had either one or two tiny (one-sixth-inch) electrode arrays implanted on the surfaces of their brains. These “intracortical” implants contain 96 microelectrodes that penetrate one to 1.5 millimeters into parts of the motor cortex that control arm movements. Two of the surgeries were performed by Henderson, who co-directs Stanford’s Neural Prosthetics Translational Laboratory with the study’s senior co-author, bioengineer Krishna Shenoy. The neural signals recorded by the electrodes are transmitted via a cable to a computer where algorithms developed in Shenoy's lab decode the participant's intentions and translate the signals into movements of a computer cursor. The Stanford team is part of a multi-institute consortium called BrainGate, which includes Massachusetts General Hospital and Brown University, among others.
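The article doesn’t describe the decoding algorithms themselves. As a rough illustration of the idea, the sketch below maps binned spike counts from a 96-channel array to a 2-D cursor velocity using a linear readout plus smoothing; the weights, bin width and smoothing constant are placeholders for illustration, not details of the Stanford system.

```python
import numpy as np

class LinearVelocityDecoder:
    """Toy illustration of an intracortical cursor decoder.

    Binned spike counts (one value per electrode) are mapped to a 2-D
    cursor velocity by a linear readout, then low-pass filtered. In a real
    system the weights would be fit during a calibration block; here they
    are placeholders.
    """

    def __init__(self, weights, bias, smoothing=0.8):
        self.W = weights            # shape (2, n_channels)
        self.b = bias               # shape (2,)
        self.smoothing = smoothing  # assumed smoothing constant
        self.velocity = np.zeros(2)

    def step(self, spike_counts, dt=0.02):
        """Update the cursor from one 20 ms bin of spike counts."""
        raw = self.W @ spike_counts + self.b
        self.velocity = self.smoothing * self.velocity + (1 - self.smoothing) * raw
        return self.velocity * dt   # cursor displacement for this bin

# Hypothetical use with a single 96-channel array
rng = np.random.default_rng(0)
decoder = LinearVelocityDecoder(rng.normal(size=(2, 96)) * 0.01, np.zeros(2))
print(decoder.step(rng.poisson(2.0, size=96)))
```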
Other methods of interfacing with the brain via electrodes include those put on the scalp for electroencephalography (EEG) and ones placed under the skull on the brain’s surface, known as electrocorticography (ECoG). The advantage of intracortical implants is that they can pick out activity from single cells whereas the other methods capture the average activity of thousands of neurons. “This performance is 10 times better than anything you would get from EEG or ECoG, [which don’t] contain enough information to do this kind of task at this level,” says neurobiologist Andrew Schwartz, at Pitt, who was not involved in the study. Movement and scarring reduce signal quality over roughly the first two years after implantation, but what remains is still useful—“much better than you get with any other technique,” he says.
The biggest drawback, currently, is having wires coming out of people's heads and attached to cables, which is cumbersome and carries risks. “The future is making these devices wireless,” Pandarinath says. “We're not there yet with people but we’re probably closer to five than 10 years away, and that’s a critical step [toward] a device that you could send somebody home with and be less worried about potential risks like infection.” The devices would need wireless power but several groups are already working on this. “Most of the technology is basically there,” Schwartz says. “You can do that inductively using coils—like wirelessly charging your cell phone in a cradle with coils on either side.”
The team attributes the improvements to better systems engineering and decoding algorithms. “Performing repeated computations rapidly is critical in a real-time control system,” Pandarinath says. In a study published last year and led by Stanford bioengineer Paul Nuyujukian, the researchers trained two macaque monkeys to perform a task similar to the grid exercise used in this study. The animals typed sentences by selecting characters on a screen as they changed color (although they wouldn’t have understood what the words meant). When the team added a separate algorithm to detect the monkeys’ intention to stop, their best speed increased by two words per minute.
This “discrete click decoder” was also used in the current study. “We've basically created a ‘point and click’ interface here, like a mouse. That’s a good interface for things like modern smartphones or tablets,” Pandarinath says, “which would open a whole new realm of function beyond communication: surfing the Web, playing music, all sorts of things able-bodied people take for granted.”
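The article describes the click decoder only at a high level. One simple way to picture it, continuing the toy decoder above, is a classifier that watches the same binned spike counts for a sustained “select” pattern and only fires after several consecutive bins, to suppress spurious selections. Everything below (weights, threshold, hold time) is an illustrative assumption, not the study’s method.

```python
import numpy as np

class ClickDetector:
    """Illustrative discrete 'click' decoder layered on a cursor decoder.

    A linear score of the binned spike counts is compared to a threshold;
    a click is emitted only after the score stays above threshold for
    `hold_bins` consecutive bins. Weights, threshold and hold time are
    assumptions for this sketch, not values from the study.
    """

    def __init__(self, weights, threshold=1.0, hold_bins=5):
        self.w = weights
        self.threshold = threshold
        self.hold_bins = hold_bins
        self._streak = 0

    def step(self, spike_counts):
        score = float(self.w @ spike_counts)
        self._streak = self._streak + 1 if score > self.threshold else 0
        if self._streak >= self.hold_bins:
            self._streak = 0
            return True   # register a selection ("click")
        return False

# Hypothetical use: feed the same 20 ms bins that drive the cursor decoder
rng = np.random.default_rng(1)
clicker = ClickDetector(rng.normal(size=96) * 0.05)
clicks = sum(clicker.step(rng.poisson(2.0, size=96)) for _ in range(100))
print(clicks)
```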
The Stanford team is already investigating wireless technology, and has ambitious long-term goals for the project. “The vision we hope to achieve someday would be to be able to plug a wireless receiver into any computer and use it using your brain,” Henderson says. “One of our main goals is to allow 24 hours a day, seven days a week, 365 days a year control of a standard computer interface using only brain signals.”

Microsoft app helps people with ALS speak using just their eyes

https://www.newscientist.com/article/2121579-microsoft-app-helps-people-with-als-speak-using-just-their-eyes/

A smartphone held up in front of a person gazing to the right, showing the GazeSpeak app on the phone screen
The eyes say it all
GazeSpeak, Enable Team, Microsoft Research
It can be difficult to communicate when you can only move your eyes, as is often the case for people with ALS (also known as motor neurone disease). Microsoft researchers have developed an app to make talking with your eyes easier, called GazeSpeak.
GazeSpeak runs on a smartphone and uses artificial intelligence to convert eye movements into speech, so a conversation partner can understand what is being said in real time.
The app runs on the listener’s device. They point their smartphone at the speaker as if they are taking a photo. A sticker on the back of the phone, visible to the speaker, shows a grid with letters grouped into four boxes corresponding to looking left, right, up and down. As the speaker gives different eye signals, GazeSpeak registers them as letters.
“For example, to say the word ‘task’ they first look down to select the group containing ‘t’, then up to select the group containing ‘a’, and so on,” says Xiaoyi Zhang, who developed GazeSpeak whilst he was an intern at Microsoft.
GazeSpeak selects the appropriate letter from each group by predicting the word the speaker wants to say based on the most common English words, similar to predictive text messaging. The speaker indicates they have finished a word by winking or looking straight ahead for two seconds. The system also takes into account added lists of words, like names or places that the speaker is likely to use. The top four word predictions are shown onscreen, and the top one is read aloud.
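Mechanically this is similar to T9-style disambiguation: each eye gesture narrows the next letter to one of four groups, and the word list is filtered to words consistent with the whole gesture sequence, then ranked by frequency. The sketch below uses a made-up grouping (chosen so that ‘t’ falls in the “down” group and ‘a’ in the “up” group, matching the example above) and a tiny toy dictionary; the real app’s groups, dictionary and ranking will differ.

```python
# Toy GazeSpeak-style disambiguation. Letter groups, word list and
# frequencies are made up for illustration; only the mechanism matches
# the description in the article.
GROUPS = {
    "up":    set("abcdef"),
    "right": set("ghijklm"),
    "down":  set("nopqrst"),
    "left":  set("uvwxyz"),
}

WORD_FREQ = {"task": 120, "toad": 40, "send": 90, "that": 300}  # toy dictionary

def letter_group(ch):
    """Return which gaze direction selects the group containing this letter."""
    for direction, letters in GROUPS.items():
        if ch in letters:
            return direction
    return None

def predict(gestures, word_freq=WORD_FREQ, top_n=4):
    """Most frequent words consistent with the sequence of eye gestures."""
    candidates = [
        word for word in word_freq
        if len(word) == len(gestures)
        and all(letter_group(c) == g for c, g in zip(word, gestures))
    ]
    return sorted(candidates, key=word_freq.get, reverse=True)[:top_n]

# 'task': t -> down, a -> up, s -> down, k -> right
print(predict(["down", "up", "down", "right"]))  # ['task'] with this toy dictionary
```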
“We’re using computer vision to recognise the eye gestures, and AI to do the word prediction,” says Meredith Morris at Microsoft Research in Redmond, Washington.
The app is designed for people with motor disabilities like ALS, because eye movement can become the only way for people with these conditions to communicate. ALS progressively damages nerve cells, affecting a person’s ability to speak, swallow and eventually breathe. The eye muscles are often some of the last to be affected.

“People can become really frustrated when trying to communicate, so if this app can make things easier that’s a really good thing,” says Matthew Hollis from the Motor Neurone Disease Association.
There are currently limited options for people with ALS to communicate. The most common is to use boards displaying letters in different groups, with a person tracking the speaker’s eye movements as they select letters. But it can take a long time for someone to learn how to interpret these eye movements effectively.
GazeSpeak proved much faster to use in an experiment with 20 people trying both the app and the low-tech boards. Completing a sentence with GazeSpeak took 78 seconds on average, compared with 123 seconds using the boards. The people in the tests did not have ALS, but the team also got feedback on the technology from some people with ALS and their interpreters. One person who tried the device typed a test sentence in just 62 seconds and said he thought it would be even quicker in a real-life situation, as his interpreter can more easily predict what he is likely to say.
“I love the phone technology; I just think that would be so slick,” said one of the interpreters.
Other systems currently use software to track eye movements with infrared cameras. But these are often expensive and bulky, and infrared cameras don’t work very well in sunlight. The GazeSpeak app is portable and comparatively cheap, as it only requires an iOS device, like an iPhone or iPad, with the app installed.
Microsoft will present the app at the Conference on Human Factors in Computing Systems in Colorado in May. The researchers say it will be available on the Apple App Store before the conference, and the source code will be made freely available so that other people can help to improve it.