People will be able to control devices and ask queries without speaking, using a wearable device that researchers say can effectively read their minds when they use their internal voice.
Electrodes attached to the skin let the device transcribe words that users verbalise internally but do not say out loud. It has been named AlterEgo.
“Our idea was: could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?” said Arnav Kapur, who led the team that developed the wearable at MIT’s Media Lab.
The device, first presented at the Association for Computing Machinery’s Intelligent User Interface conference in Tokyo, has been described by Kapur as an “intelligence-augmentation”, or IA, device. It is worn clamped over the top of the ear, with attachments resting on the user’s jaw and chin. When a user verbalises internally, subtle neuromuscular signals are triggered, and these are picked up by four electrodes set under the white plastic device where it touches the skin. Artificial intelligence within the device matches the signals to particular words as the user says them inside their head, and the words are then fed into a computer.
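As a rough illustration of the pipeline described above (electrodes capture neuromuscular signals, a short per-user calibration is run, and a model matches signals to words), here is a minimal sketch. Everything in it is hypothetical: the vocabulary, the feature extraction, and the nearest-centroid matcher are illustrative stand-ins, not the AlterEgo system's actual method.

```python
# Illustrative sketch only: four skin-surface electrodes produce signal
# windows, a short per-user calibration records examples of each word,
# and new windows are labelled by the closest stored pattern. All names
# and numbers here are assumptions, not details from the AlterEgo project.
import numpy as np

VOCAB = ["call", "open", "next", "stop"]  # hypothetical silent-speech vocabulary

def extract_features(window: np.ndarray) -> np.ndarray:
    """Reduce a (4 electrodes x N samples) window to a small feature
    vector: per-channel mean absolute amplitude and variance."""
    return np.concatenate([np.abs(window).mean(axis=1), window.var(axis=1)])

class NearestCentroidWordMatcher:
    """Toy signal-to-word matcher: stores one average feature vector per
    word during calibration (standing in for the ~15-minute per-user
    customisation), then labels new windows by the nearest centroid."""

    def fit(self, windows: list[np.ndarray], labels: list[str]) -> None:
        feats = np.array([extract_features(w) for w in windows])
        self.centroids = {
            word: feats[[lab == word for lab in labels]].mean(axis=0)
            for word in set(labels)
        }

    def predict(self, window: np.ndarray) -> str:
        f = extract_features(window)
        return min(self.centroids, key=lambda w: np.linalg.norm(self.centroids[w] - f))

# Calibration on synthetic data standing in for electrode recordings:
rng = np.random.default_rng(0)
train = [rng.normal(i, 1.0, size=(4, 250)) for i, _ in enumerate(VOCAB) for _ in range(20)]
labels = [word for word in VOCAB for _ in range(20)]
matcher = NearestCentroidWordMatcher()
matcher.fit(train, labels)
print(matcher.predict(rng.normal(2, 1.0, size=(4, 250))))  # likely "next"
```

A real system would of course work on genuine neuromuscular recordings with a far stronger model; the point of the sketch is only the calibrate-then-match structure the article describes.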
The computer responds through the device using a bone conduction speaker, which plays sound into the user’s ear without an earphone needing to be inserted. This lets users continue to hear the rest of the world even while speaking words inside their heads. The aim is an outwardly silent computer interface that only the wearer of the AlterEgo device can speak to and hear.
“We basically can’t live without our cellphones, our digital devices. But at the moment, the use of those devices is very disruptive,” said Pattie Maes, a professor of media arts and sciences at MIT. “If I want to look something up that’s relevant to a conversation I’m having, I have to find my phone and type in the passcode and open an app and type in some search keyword, and the whole thing requires that I completely shift attention from my environment and the people that I’m with to the phone itself.”
Maes and her students, including Kapur, have been experimenting with new form factors and interfaces that offer knowledge and services similar to a smartphone’s without the intrusive disruption those devices currently impose.
In a trial involving 10 people, the AlterEgo device took about 15 minutes to customise to each person and achieved an average transcription accuracy of 92%. That is a few points below the 95%-plus accuracy of Google’s voice transcription service, which works through a traditional microphone, but Kapur says the system’s accuracy will improve over time.
(Source: www.theguardian.com)