The rise of artificial intelligence is one of the world’s most influential and talked-about technological advancements. Its rapidly growing capabilities have embedded it into everyday life: it now sits in our living rooms and, some say, threatens our jobs.
Although AI allows machines to operate with some degree of human-like intelligence, the one thing humans have always had over machines is the capacity to show emotions in response to their circumstances. But what if AI could be used to let machines and technologies recognise emotions automatically?
New research from Brunel University London, together with Iran’s University of Bonab and Islamic Azad University, has used signals from EEGs – the test that measures the brain’s electrical activity – and artificial intelligence to create an automatic emotion recognition computer model that classifies emotions with an accuracy of more than 98%.
By focusing on training data and algorithms, computers can be taught to process information in much the same way as a human brain. This branch of artificial intelligence and computer science is known as machine learning, where computers are taught to imitate the way that humans learn.
Dr Sebelan Danishvar, a research fellow at Brunel, said: “A generative adversarial network, known as a GAN, is an important algorithm used in machine learning that enables computers to mimic how the human brain functions. The emotional state of a person can be detected using physiological signals such as EEG. Because EEG signals are derived directly from the central nervous system, they have a strong association with different emotions.
“Through the use of GANs, computers learn how to carry out tasks after seeing examples and training data. They can then generate new data, which enables them to gradually improve in accuracy.”
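To make the idea of a GAN concrete, here is a minimal, purely illustrative sketch in NumPy – a toy one-dimensional GAN, not the researchers’ model. The “real” data is a stand-in Gaussian feature (the distribution, network shapes, and learning rate are all assumptions for illustration): a generator learns to produce samples that a discriminator cannot tell apart from the real ones.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: a 1-D Gaussian standing in for an EEG-derived feature.
def sample_real(n):
    return rng.normal(loc=2.0, scale=0.5, size=(n, 1))

# Generator G(z) = a*z + b maps noise to a fake sample;
# discriminator D(x) = sigmoid(w*x + c) scores how "real" a sample looks.
a, b = 1.0, 0.0
w, c = 0.1, 0.0
lr = 0.05

for step in range(2000):
    z = rng.normal(size=(64, 1))
    fake = a * z + b
    real = sample_real(64)

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake)).
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator: gradient ascent on log D(G(z)) (non-saturating objective).
    d_fake = sigmoid(w * (a * z + b) + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

# After training, the generator's output mean should have drifted
# from its starting value of 0 toward the real mean of 2.
gen_mean = float(np.mean(a * rng.normal(size=(1000, 1)) + b))
```

The adversarial back-and-forth is the key design point: each side’s improvement forces the other to improve, which is what lets a GAN “generate new data” that increasingly resembles its training examples.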
The new study, published in the journal Electronics, used music to stimulate the emotions of 11 volunteers, all aged between 18 and 32.
The participants were instructed to abstain from alcohol, drugs, caffeine and energy drinks for 48 hours before the experiment, and none of them had any depressive disorders.
During the study, the volunteers were each given ten pieces of music to listen to through headphones. Happy music was used to induce positive emotions, and sad music was used to induce negative emotions.
While listening to the music, participants were connected to an EEG device, and the EEG signals were used to recognise their emotions.
In preparation for the study, the researchers developed a GAN algorithm using an existing database of EEG signals. The database held data on emotions induced by musical stimulation, and this was used as their model against the real EEG signals.
As expected, the music elicited positive and negative emotions according to the pieces played, and the results showed a high similarity between the real EEG signals and the signals modelled by the GAN algorithm. This indicates that the GAN was effective at generating EEG data.
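The paper reports similarity between real and GAN-generated EEG signals; as a purely illustrative sketch (the traces and the similarity measure below are assumptions for demonstration, not the study’s actual data or method), one simple way to quantify how alike two signals are is the Pearson correlation between them:

```python
import numpy as np

# Hypothetical stand-ins: one "real" EEG trace (a 10 Hz oscillation,
# roughly alpha-band) and one "generated" trace that is a noisy copy of it.
t = np.linspace(0.0, 1.0, 256)
real_signal = np.sin(2 * np.pi * 10 * t)
rng = np.random.default_rng(1)
generated_signal = real_signal + rng.normal(scale=0.1, size=t.shape)

# Pearson correlation: close to 1 means the two traces are highly similar.
similarity = float(np.corrcoef(real_signal, generated_signal)[0, 1])
```

A correlation near 1 for well-matched traces is the kind of “high similarity” outcome described above, though real evaluations of generated EEG typically use richer measures than a single correlation.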
Dr Danishvar said: “The results show that the proposed method is 98.2% accurate at distinguishing between positive and negative emotions. When compared with previous studies, the proposed model performed well and could be used in future brain–computer interface applications. This includes a robot’s ability to discern human emotional states and to interact with people accordingly.
“For instance, robotic devices could be used in hospitals to cheer up patients before major operations and to prepare them psychologically.
“Future research should explore additional emotional responses in our GAN, such as anger and disgust, to make the model and its applications even more useful.”
Nadine Palmer, Media Relations
+44 ()1895 267090