
Deep-learning AI can determine what song is playing in your head


California-based researchers are showing off the latest version of their mind-reading artificial intelligence (AI) algorithm. As reported by Digital Trends, the deep-learning AI can read a person’s brain activity in order to identify the song playing from a device – or just in that person’s head.

Apps like Shazam employ similar machine learning that lets them identify a song by listening to it. But this AI operates on a wholly different level: it needs no audio at all.

Researchers from the University of California, Berkeley (UC Berkeley) started working on their AI in 2014. Study author Brian Pasley and his teammates attached electrodes to the heads of volunteers and measured brain activity while the participants were speaking.

After finding out the connection between brain activity and speech, they combined their accumulated brain data with a deep-learning algorithm. The AI then proceeded to turn the thoughts of a human being into digitally-synthesized speech with some accuracy.

In 2018, the UC Berkeley team brought their AI to the next level of mind-reading. The improved deep-learning AI demonstrated 50 percent greater accuracy than its predecessor and was better able to read the brain activity of a pianist and predict what sounds the musician was thinking of. (Related: Creepy: New AI can READ YOUR MIND by decoding your brain signals … kiss your personal privacy goodbye.)

The similarities and differences between real and imaginary sounds in your head

Study author Pasley explained that auditory perception was the act of listening to music, speech, and other sounds. Earlier studies have shown that certain parts of the auditory cortex of the brain are responsible for breaking down the sounds into acoustic frequencies, such as high tones or low tones.
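The frequency decomposition the auditory cortex performs is loosely analogous to a Fourier transform, which separates a mixed signal into its component tones. The toy sketch below (illustrative only; the signal, sample rate, and frequencies are made up, not from the study) shows a low 50 Hz tone and a quieter high 200 Hz tone being recovered from their mixture:

```python
import math

# Toy analogy for frequency decomposition: mix a low tone and a high tone,
# then measure how strongly each candidate frequency is present.
# All values here are illustrative assumptions, not the study's data.

RATE = 800  # samples per second (toy value)
N = 800     # one second of signal

# A "low tone" at 50 Hz mixed with a quieter "high tone" at 200 Hz.
signal = [math.sin(2 * math.pi * 50 * t / RATE) +
          0.5 * math.sin(2 * math.pi * 200 * t / RATE)
          for t in range(N)]

def dft_magnitude(x, freq):
    """Magnitude of one DFT bin: how strongly `freq` Hz is present in x."""
    re = sum(v * math.cos(2 * math.pi * freq * t / RATE) for t, v in enumerate(x))
    im = sum(v * math.sin(2 * math.pi * freq * t / RATE) for t, v in enumerate(x))
    return math.hypot(re, im) / len(x)

for f in (50, 100, 200):
    print(f, round(dft_magnitude(signal, f), 2))
# The 50 Hz and 200 Hz components stand out; 100 Hz, which was never
# mixed in, measures near zero.
```

The same principle, applied to neural recordings rather than audio, is what lets researchers relate activity in frequency-tuned brain regions to the tones a person hears.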


He and his team observed those brain areas to see whether they broke down imagined sounds in much the same way they processed actual sounds. Examples of imagined sounds include internally verbalizing in one’s own voice or imagining music filling a silent room.

They reported finding a large overlap between the parts of the brain that handled real sounds and the parts that translated imagined sounds. At the same time, they also found significant contrasts.

“By building a machine learning model of the neural representation of imagined sound, we used the model to guess with reasonable accuracy what sound was imagined at each instant in time,” Pasley said.

Deep-learning AI algorithm intended to give a voice to paralyzed patients who cannot speak

In the first phase of their experiment, the UC Berkeley researchers attached electrodes to a pianist’s head and recorded his brain activity while he performed several musical pieces on an electric keyboard. They could then match the volunteer’s brain patterns to the musical notes.

During the second half, they repeated the process, except that the keyboard was muted. Instead, they asked the musician to imagine the notes he was playing at each moment.

This way, they were able to train their music-predicting AI algorithm to guess the imaginary note playing in the participant’s head.
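The two-phase procedure described above amounts to supervised learning: labeled trials from the audible phase train a decoder, which is then applied to trials from the imagined phase. The sketch below is a deliberately simplified stand-in (synthetic "brain activity" features and a nearest-centroid classifier, assumptions of this illustration rather than the study's actual pipeline):

```python
import random

# Hypothetical sketch of the note-decoding setup. Each trial is a feature
# vector summarizing brain activity while a note was played (or imagined).
# The features and classifier here are illustrative, not the study's code.

random.seed(0)

NOTES = ["C4", "E4", "G4"]

def synthetic_trial(note):
    """Fake 'brain activity': one noisy feature channel per note."""
    center = NOTES.index(note)
    return [random.gauss(1.0 if i == center else 0.0, 0.2)
            for i in range(len(NOTES))]

# Phase 1: audible playing -- collect labeled training trials.
train = [(synthetic_trial(n), n) for n in NOTES for _ in range(20)]

# Fit a nearest-centroid decoder: average the feature vectors per note.
centroids = {}
for note in NOTES:
    vecs = [x for x, n in train if n == note]
    centroids[note] = [sum(col) / len(vecs) for col in zip(*vecs)]

def predict(features):
    """Phase 2: decode an (imagined) trial by its nearest centroid."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda n: dist(centroids[n]))

imagined = synthetic_trial("E4")  # an imagined-note trial, same feature space
print(predict(imagined))
```

The actual study used far richer neural features and deep-learning models, but the train-on-heard, decode-the-imagined structure is the same.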

Pasley said that the ultimate objective of their research was to create deep-learning AI algorithms for a speech device. The prosthetic would be used as a means of communication for patients who suffered from paralysis that deprived them of speech.

“We are quite far from realizing that goal, but this study represents an important step forward. It demonstrates that the neural signal during auditory imagery is sufficiently robust and precise for use in machine learning algorithms that can predict acoustic signals from measured brain activity,” Pasley claimed.

Find out more about your mind at Mind.news.

Sources include:

DigitalTrends.com

Academic.OUP.com
