The possibilities for artificial intelligence (AI) seem limitless.
As these machines grow more complex, we will inevitably have to interface with them in ways that are more intuitive than traditional input methods allow.
Now, a team of engineers and scientists is exploring new ways of creating a virtual-reality brain-computer interface that can be worn on the head.
“It’s kind of a first step towards something we call a brain-interface for the real world,” said Professor Robert Bremner, of the University of California, San Diego.
In a study published in Nature Nanotechnology, Bremner and his team demonstrate the feasibility of creating an AI that can automatically learn to respond to its environment, using only its own cognitive capabilities.
In this example, they showed that an avatar can be used to control an audio system that sends sound from the headset to a speaker attached to a smartphone.
This system would need to be able to communicate with other avatars to interact with a computer, or with a virtual world, for example.
The avatar would also need to know how to perform complex actions, such as walking into an open door or turning a knob.
This is an example of the kind of interaction that could be achieved with the help of an AI-controlled prosthetic, but Bremner's team also wanted to show that an AI could be trained into a useful, self-aware system.
The brain-body interface can be controlled using an input device such as a smartphone, which will then respond with sounds.
This method allows the avatar to use the same hardware as the real person to interact and move around in virtual environments.
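The paper's implementation details aren't given here, but the described loop — an input device such as a smartphone sending events, and the system replying with sounds — can be sketched as a simple event dispatcher. All names below (the event labels, sound files, and `respond_to_event`) are illustrative assumptions, not taken from the study.

```python
# Hypothetical sketch: map input-device events to audio responses.
# Event names and sound identifiers are invented for illustration.

SOUND_MAP = {
    "tap": "chime.wav",     # short confirmation tone
    "swipe": "whoosh.wav",  # movement feedback
    "hold": "hum.wav",      # sustained interaction
}

def respond_to_event(event: str) -> str:
    """Return the sound the system should play for a given input event."""
    return SOUND_MAP.get(event, "default.wav")

# Example: a stream of events arriving from a smartphone-style input device.
events = ["tap", "swipe", "unknown-gesture"]
responses = [respond_to_event(e) for e in events]
print(responses)  # ['chime.wav', 'whoosh.wav', 'default.wav']
```

The fallback sound for unrecognised events keeps the loop robust to input the system was never trained on.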
The system is capable of performing complex tasks that humans cannot.
Bremner's team demonstrated this process in a small system they designed and built using the Google Brain project.
The team also designed an experiment to show how an avatar could interact with an audio-visual system that communicates with a smartphone via Bluetooth.
They demonstrated a simple experiment where an avatar was presented with an array of sound images, each of which required a certain amount of attention.
The participants would have to determine which of the sounds the avatar chose to listen to.
The results showed that the avatar would respond more efficiently to the sound images than to any other sound.
In addition to using the brain to make decisions about which sound to listen for, the system also has to learn to distinguish between these sounds.
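The study's training procedure isn't spelled out here, but "learning to distinguish between these sounds" is, at its simplest, a classification problem. Below is a minimal nearest-centroid sketch over two made-up acoustic features (roughly, pitch and loudness); the feature values and class labels are assumptions for illustration, not the authors' method.

```python
# Minimal nearest-centroid classifier for telling sound classes apart.
# Feature vectors (pitch_hz, loudness) and labels are invented for illustration.

def centroid(points):
    """Component-wise mean of a list of equal-length feature tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(samples):
    """samples: label -> list of feature tuples. Returns label -> centroid."""
    return {label: centroid(pts) for label, pts in samples.items()}

def classify(model, x):
    """Assign x to the label whose centroid is nearest (squared distance)."""
    return min(model, key=lambda lbl: sum((a - b) ** 2 for a, b in zip(model[lbl], x)))

training = {
    "speech": [(200.0, 0.6), (220.0, 0.5), (180.0, 0.7)],
    "alarm":  [(900.0, 0.9), (950.0, 0.8), (880.0, 1.0)],
}
model = train(training)
print(classify(model, (210.0, 0.6)))  # speech
print(classify(model, (920.0, 0.9)))  # alarm
```

A real system would extract such features from raw audio rather than receive them directly, but the decision rule — group known examples, then assign new input to the closest group — is the same.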
The researchers showed that when an avatar learns to recognise sounds, it could also use this knowledge to learn how to interact in the real environment.
In the end, they used this information to create a virtual avatar that could learn how and when to use a specific sound.
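Learning "how and when to use a specific sound" resembles a reinforcement-learning problem: try a sound, observe whether it worked, and gradually prefer the sounds that earned reward. The epsilon-greedy bandit below is a hedged sketch of that idea; the sound names and the toy reward rule are invented, not drawn from the study.

```python
import random

# Hypothetical epsilon-greedy learner choosing which sound to emit.
# Sound names and the reward rule are invented for illustration.

random.seed(0)
sounds = ["beep", "chirp", "buzz"]
value = {s: 0.0 for s in sounds}   # running estimate of each sound's usefulness
count = {s: 0 for s in sounds}
EPSILON = 0.1                      # fraction of trials spent exploring

def reward(sound: str) -> float:
    # Toy environment: "chirp" is the sound that actually works here.
    return 1.0 if sound == "chirp" else 0.0

for _ in range(500):
    if random.random() < EPSILON:
        choice = random.choice(sounds)               # explore a random sound
    else:
        choice = max(sounds, key=value.__getitem__)  # exploit current best
    r = reward(choice)
    count[choice] += 1
    value[choice] += (r - value[choice]) / count[choice]  # incremental mean

print(max(sounds, key=value.__getitem__))  # chirp
```

After a few hundred trials, the estimate for the rewarded sound dominates, and the learner reliably emits it — the same try-and-adjust pattern the avatar would need, just at toy scale.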
This was a significant step forward in understanding how an AI system can learn to interact effectively with real humans.
The scientists have now demonstrated that they can build a more sophisticated system that could be used by a living person to communicate effectively with a remote environment.
It will also be important to see how this AI system will be able to learn to recognise and react to sounds from other people and to navigate a virtual environment.
Bremner’s group is currently working on a different form of the same type of AI, called a ‘brain-machine’ or brain-aware interface.
It could be a way to help people with vision, speech, and other cognitive disabilities to interact more effectively in the virtual world.
In future, Bremner’s group hopes to develop a more complex system that will be more intelligent and more capable of recognising and responding to sounds in the physical world.