Imagine a computer system that can visualize your thoughts, even your secret ones; artificial intelligence now makes this possible. Recent advances in hardware have re-energized the technology: it has become more accurate and authentic, producing better sound, sharper visualizations, and a richer understanding of location. Powerful processors let computers make decisions, plan outputs, and avoid repeating mistakes by learning from them. Four scientists at Kyoto University conducted an exceptional experiment that exceeded global expectations about this dream becoming reality.
Tomoyasu Horikawa, Guohua Shen, Yukiyasu Kamitani, and Kei Majima recently published their results in a paper on the scientific preprint platform bioRxiv. They conducted their experiment at the ATR Computational Neuroscience Laboratories. Their study centers on decoding human thoughts with an artificially intelligent system, one smart enough to mirror a human mind and show what it is thinking. Scientists have now succeeded in developing the next phase of AI machines, one that draws on human psychology and carries this technology from science fiction into plain scientific fact.
The video below demonstrates Tomoyasu Horikawa's work.
Machine learning has been used with magnetic resonance imaging (MRI) for many years. It helped generate a visual rendering of human thinking in the form of binary images or geometric shapes, though the images obtained were only simple shapes or pixels. However, another hypothesis holds that the human brain thinks in unique patterns of image creation, at different levels of complexity, combining both simple and complex components.
The idea of processing human thoughts is not new: in videos from the 1950s, doctors put contraptions on patients' heads to try to decipher the thoughts running through their minds. The British serial Quatermass and the Pit depicts a similar technique for translating aliens' thoughts.
Scientists from the United States, China, and Japan are now turning this fantasy into reality. They use functional magnetic resonance imaging (fMRI) to measure brain activity more precisely, with deep neural networks replicating how the human brain processes information.
The Emergence of Artificial People with Machine Learning Systems
Machines are gaining a kind of consciousness, creating a direct relationship between humans and machines. The day is coming when robots will not only talk but laugh, fight, and even be cruel, thanks to advances in technology. Some research laboratories are busy building systems that can sense their surrounding environment and respond to it. Such systems could sense the thoughts running through people's minds and visualize them for others, which would be invaluable for extraordinary people who cannot speak.
Alan Turing said:
"If a machine behaves as intelligently as a human being, it is as intelligent as a human being."
Is It Telepathy or Something More Advanced?
The whole world is excited about the news of AI reading minds, but the reality is not quite what it appears. Machine learning still cannot reliably detect what we think, feel, desire, wish, or need at a specific moment. The science aims to decipher the images that subjects view over a period of time. To do so, machine learning uses a visual-field reconstruction algorithm to render thoughts visually.
In recent years, scientists have striven to create systems that can judge images based on their shapes, characters, and letters as a subject's mind views them.
Kyoto University Research by Four Scientists
Advances in artificial intelligence now use deep neural networks to decode human thoughts. The researchers aimed to develop a method that not only deciphers images but also regenerates them. The four scientists at Kyoto University used Deep Neural Networks (DNNs) as a proxy for the brain and shared their findings on bioRxiv. These networks help decode the images in a person's mind and produce a visual prediction that resembles what the person is thinking.
More generally, the system should be able to visualize an image it was never trained on after decoding the corresponding thoughts; it can, in effect, see or convert human thoughts into images.
How Can AI Visualize Our Thinking Process?
The basis of this system is a deep scan of the human brain. For this purpose, the scientists chose functional MRI (fMRI), which probes the brain more deeply than traditional MRI's monitoring of brain activity: its distinguishing ability is tracking blood flow in the brain alongside brainwaves.
The system takes data from the scan and interprets what the subject was thinking when the data was recorded. The actual decoding occurs in a complex neural network that converts the data into image format. But humans have to train the system first: they teach the machine how the brain's thinking works and how to derive a solution from it. The system draws its information from blood flow, tracking the path, direction, and speed with which blood moves through the brain.
The system then starts producing images from the information it gathers by tracking blood flow. The process is only possible with multiple layers of deep neural networks: the DNN processes the information in image form, and the result is passed to a Deep Generator Network (DGN) algorithm, which finally creates the image with high precision and accuracy. Without the DGN, we cannot obtain high-resolution or clear images of a given thought. The DGN captures faces, eyes, and textual patterns and produces high-quality visual cues; with an efficient DGN algorithm, the reported efficiency can reach 99%, yielding closely matching decoded images.
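As a rough illustration of this pipeline (not the authors' actual code), the sketch below uses small linear stand-ins for each stage: a ridge-regression decoder that maps fMRI voxel activity to "DNN" features, and a gradient-descent loop that searches the "DGN" generator's latent space for an image whose features match the decoded ones. All names, dimensions, and training data here are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear stand-ins for the real components (all names are hypothetical):
#   decoder : fMRI voxels -> DNN feature vector (learned by ridge regression)
#   F       : "DNN" feature extractor, image pixels -> features (fixed)
#   G       : "DGN" generator, latent vector -> image pixels (fixed)
n_voxels, n_feat, n_pix, n_latent = 200, 50, 64, 16
F = rng.normal(size=(n_feat, n_pix))
G = rng.normal(size=(n_pix, n_latent))
M = F @ G                                     # features of G(z) are M @ z

# Simulated training set: feature vectors with noisy fMRI responses.
W_true = rng.normal(size=(n_voxels, n_feat))
feats_train = rng.normal(size=(500, n_feat))
fmri_train = feats_train @ W_true.T + 0.1 * rng.normal(size=(500, n_voxels))

# Step 1: ridge regression learns to predict DNN features from fMRI activity.
lam = 1.0
A = fmri_train.T @ fmri_train + lam * np.eye(n_voxels)
decoder = np.linalg.solve(A, fmri_train.T @ feats_train)   # (voxels, feats)

# Step 2: reconstruct. Decode features from a new scan, then run gradient
# descent on the latent z until the generated image's features match them.
z_true = rng.normal(size=n_latent)            # latent of the "seen" image
fmri_test = (M @ z_true) @ W_true.T + 0.1 * rng.normal(size=n_voxels)
target = fmri_test @ decoder                  # decoded feature vector
z = np.zeros(n_latent)
step = 1.0 / np.linalg.norm(M, 2) ** 2        # safe step size (1/Lipschitz)
for _ in range(500):
    z -= step * (M.T @ (M @ z - target))      # grad of 0.5*||M z - target||^2
image = G @ z                                 # reconstructed "image"
err = np.linalg.norm(M @ z - target) / np.linalg.norm(target)
```

The key idea survives the simplification: the image is never read out of the brain directly; instead the generator is steered until its output "looks the same" to the feature extractor as the decoded brain activity.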
Methodology of the Experiment: Deep Image Reconstruction
In simple words, the system consists of two necessary steps:
1. A subject sees an image, and the same image is shown to the AI system so that it can learn to recreate it.
2. The subject's brain then thinks about that image, and the AI recreates the same image in real time.
The scientists selected three subjects and showed them natural images, artificial geometric shapes, and other objects of varying character. They repeated these exposures at varying intervals for almost ten weeks and recorded the subjects' brainwaves. In the first experiment, they recorded the subject's brain activity while viewing a particular object out of 25. In the second, they hid the image and measured brain activity while asking the subject to think about it.
In both scenarios, they recorded the image created by the artificial intelligence and compared the two, with staggering results. The filtered data served as a template of sorts. They then used a decoder to identify the images via fMRI: scanning brain activity and reverse-engineering an image from the information in the subject's thoughts. Deep learning networks were trained to interpret the brainwaves and decode their information into an output.
AI is fast because it has the power to make guesses about a particular problem, asking and answering its own questions at a rapid pace.
AI Visualizing Static Thoughts
The results were outstanding: the computer reconstructed the image, although with limited accuracy. The images in both experiments were fuzzy, and more so in the second experiment, because it is difficult for a brain to remember an image exactly as it is. The work enabled scientists to decode "hierarchical" images with various colors, pixels, and formations; the system can, for instance, predict a picture of a bird, a plant, or an animal. The technology keeps moving toward refinement for better implementation of the process.
Statement of Kamitani About the Project
Kamitani was excited about the team's hard work and its mind-blowing results. He told CNBC Make It:
"We have been studying methods to reconstruct or recreate an image a person sees just by looking at its brain activity. Our previous method was to suppose that an image consists of pixels or simple shapes. But our brain processes visual information hierarchically extracting different levels of features or components of different complexities."
"These neural networks or AI models can be used as a proxy for the human brain's hierarchical structure," he added.
AI Decoding Videos You’re Watching from Your Brain Activity
Another group of scientists published their research, "Neural Encoding and Decoding with Deep Learning for Dynamic Natural Vision," in Cerebral Cortex. They developed deep learning algorithms to scan the human brain and decode information from its working principles.
Their experiment involved three women who watched videos from hundreds of unique categories for several hours while the scientists recorded their brain activity. The videos featured birds, airplanes, and other such objects. The scientists used a functional MRI machine to measure activity signals in the visual cortex. As the women watched the clips, the deep learning algorithms predicted activity that closely matched the actual brain activity. Dozens of brain areas respond to a specific thought; the artificial system mimics the brain's natural working strategies and shows visible results.
The experiment also helped the scientists recognize which parts of the cortex are involved in thought processing. They trained an artificial neural network to process the recorded images and render them visually. The network then decoded the participants' brainwaves and produced results with accuracy exceeding 50%. The network is even strong enough to render a visual impression of the thoughts of a person whose brainwaves it has never scanned, with an impressive accuracy of 25%.
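Accuracy figures like these are typically produced by an identification test: the model's predicted response for each clip is matched against the actual recorded responses, and a clip counts as correct when it matches its own recording best. The sketch below, with entirely synthetic data, shows one common way such a score is computed; it is an illustration of the metric's shape, not the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: for each of 40 clips, the "actual" brain-response
# pattern and the network's predicted response (true signal plus noise).
n_clips, n_dim = 40, 100
actual = rng.normal(size=(n_clips, n_dim))
predicted = actual + 0.8 * rng.normal(size=(n_clips, n_dim))

def identification_accuracy(pred, act):
    # A clip counts as identified when its prediction correlates more
    # strongly with its own actual response than with any other clip's.
    p = (pred - pred.mean(1, keepdims=True)) / pred.std(1, keepdims=True)
    a = (act - act.mean(1, keepdims=True)) / act.std(1, keepdims=True)
    corr = p @ a.T / p.shape[1]              # clip-by-clip correlation matrix
    return float((corr.argmax(axis=1) == np.arange(len(pred))).mean())

acc = identification_accuracy(predicted, actual)
chance = 1.0 / n_clips                       # guessing would give 2.5%
```

This framing explains why "50% accuracy" is far better than it sounds: chance level depends on how many candidates the decoder must choose among, not on a coin flip.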
Recent Advancements in AI and Artificial Thinking Strategy
Beyond the Japanese scientists and universities, others are also striving to do something extraordinary. The entrepreneur Bryan Johnson is working on chips to implant in brains to improve their functionality. A former Google X-er, Mary Lou Jepsen, is endeavoring to produce a hat that could enable telepathy: a way of sharing thoughts and communicating with others without using the human senses.
Noah Goodman is a professor at Stanford University working at the intersection of psychology and computer science. An expert in human reasoning and language, he thinks:
"Humans are the most intelligent system we know. So, I study human cognition, and put on an engineering hat and ask, 'how can I build one of those?'"
An episode of Black Mirror shows the phenomenon of memory transmission, with memories visualized on TV screens so that everyone can watch the transmission and comment on it.
Experiment by Carnegie Mellon University’s Scientist
Scientists in the USA claim to have devised a unique methodology that comes closer to real mind reading. Their technique advances the Japanese scientists' strategy, as it can remember and respond to phrases, detecting deeper thoughts such as "the rain turned into a flood and ruined the whole area." The scientists say the technology is efficient enough to understand such complexities, identifying places, people, and actions to complete its predictions. Remarkably, after learning from the first 239 phrases, it predicts the 240th phrase from a mental trigger with 87% accuracy.
The researcher Marcel Just, who worked on this project, shared his thoughts:
"Our method overcomes fMRI's unfortunate property to smear together the signals emanating from brain events that occur close together in time, like the reading of two successive words in a sentence."
With this technology, it is now possible to identify human thoughts more precisely, even across several complex concepts.
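The 239-train, 240th-test protocol described above can be sketched with a toy linear model: each sentence gets a semantic feature vector, a linear map from features to brain activity is fit on 239 sentences, and the held-out scan is matched to whichever sentence's predicted activity it resembles most. Every number and the linear assumption below are illustrative, not the researchers' actual model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: each sentence has a semantic feature vector, and brain
# activity is (noisily) a linear function of those features.
n_sent, n_sem, n_vox = 240, 30, 120
sem = rng.normal(size=(n_sent, n_sem))
B = rng.normal(size=(n_sem, n_vox))
brain = sem @ B + 0.3 * rng.normal(size=(n_sent, n_vox))

# Leave one sentence out, fit a linear map on the other 239 (least squares),
# then ask which sentence's predicted activity best matches the held-out scan.
hold = 239
train = np.delete(np.arange(n_sent), hold)
W, *_ = np.linalg.lstsq(sem[train], brain[train], rcond=None)
pred_all = sem @ W                        # predicted activity for all 240
d = np.linalg.norm(pred_all - brain[hold], axis=1)
identified = int(d.argmin())              # index of the best-matching sentence
```

With this design, the 87% figure reported above corresponds to how often `identified` equals the held-out index across all 240 leave-one-out folds.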
Experiment by the University of Oregon
The Oregon team used the same approach of scanning brain activity and having AI recreate the corresponding images. The neuroscientist Brice Kuhl said in an interview with Brian Resnick at Vox:
"We can take someone's memory, which is typically something internal and private, and we can pull it out from their brains."
They conducted this experiment in 2016, when the machine-learning system could produce only very low-quality pictures. The researchers selected 23 participants and showed them over 1,000 color photos of random faces while they were hooked up to an fMRI machine that detected changes in blood flow. They measured the brain's neurological activity and then had the AI read the fMRI data to interpret the brain's response to these images. The researchers assigned 300 numbers representing different facial features, giving the AI a code by which to detect them.
In this way, they trained the AI to correlate neurological brain activity with a face's physical features, teaching the machine to reconstruct a face from two distinct regions of the brain:
The angular gyrus (ANG): involved in processing language, numbers, memory, and spatial awareness.
The occipitotemporal cortex (OTC): involved in processing visual cues.
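The training step can be sketched as a regression problem: learn to read the 300-number face code back out of the voxel pattern. The sketch below assumes, purely as a toy stand-in, that voxel responses are noisy linear functions of the face code; the ridge-regression decoder and all dimensions are illustrative, not the study's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical stand-in: each face is summarized by a 300-number code
# (eigenface-style components), and voxel responses are noisy linear
# functions of that code.
n_faces, n_code, n_vox = 1000, 300, 500
codes = rng.normal(size=(n_faces, n_code))
V = rng.normal(size=(n_code, n_vox)) / np.sqrt(n_code)   # code -> voxels
voxels = codes @ V + 0.2 * rng.normal(size=(n_faces, n_vox))

# Ridge regression: learn to decode the 300-number face code from voxels.
lam = 10.0
A = voxels.T @ voxels + lam * np.eye(n_vox)
W = np.linalg.solve(A, voxels.T @ codes)     # decoder, (n_vox, n_code)

# Decode the code of an unseen face from its (simulated) brain response.
new_code = rng.normal(size=n_code)
new_vox = new_code @ V + 0.2 * rng.normal(size=n_vox)
decoded = new_vox @ W
r = np.corrcoef(decoded, new_code)[0, 1]     # similarity to the true code
```

In the real study the decoded 300-number code would then be rendered back into a face image; here the correlation `r` simply stands in for how faithfully the code was recovered.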
Taking the Technology from Research to Real-life Applications
Scientists are also using an electronic mesh method consisting of a network of microscopic flexible circuits. The mesh is placed in the brain, shielded from the actual nerve cells; for now, researchers are testing the method on animals. Elon Musk's company Neuralink is striving hard to link the human brain and machine through a technique called Neural Lace, in which researchers install microelectrodes within the brain's cellular infrastructure.
Research Work of Russian Corporation and Moscow Institute
A Russian corporation, Neurobotics, and the Moscow Institute of Physics and Technology (MIPT) have endeavored to find a way to render a person's thoughts as images. Their system mimics thinking by processing it in real time, a step toward advanced brain-controlled post-stroke rehabilitation devices.
Moreover, this technique can help neurobiologists treat cognitive disorders once the brain's encoding information is obtained. Two systems have been used for this purpose: functional MRI for extracting images from the brain, and implants for analyzing signals directly from neurons. Both, however, have limitations for clinical and real-life applications.
MIPT and Neurobotics developed a brain-computer interface based on artificial neural networks and electroencephalography (EEG). In this technique, the researchers noninvasively place electrodes on the scalp to record brainwaves, and the system uses the EEG to recreate, in real time, the images a person is watching by analyzing brain activity.
Vladimir Konyshev, head of the Neurorobotics Lab at MIPT, stated:
"We are working on the Assistive Technologies project of Neuronet of the National Technology Initiative, which focuses on the brain-computer interface that enables post-stroke patients to control an exoskeleton arm for neurorehabilitation purposes. Paralyzed patients can drive an electric wheelchair, for instance. The goal of AI is to increase the accuracy of neural control even for healthy individuals."
First Phase of the Experiment
In the first step, the neuroscientists had healthy subjects watch 20 minutes of 10-second YouTube video fragments. They selected five categories of videos: waterfalls, human faces, abstract shapes, moving objects, and motorsports, the last including clips of motorbikes, car racing, water scooters, and snowmobiles. The EEG showed that each category produced a distinct pattern of brainwaves, which the scientists recorded and used to correlate a specific brain response with a specific video category.
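The correlation of EEG patterns with video categories amounts to a classification problem. As a minimal sketch, assuming each category evokes a characteristic EEG pattern plus noise (entirely synthetic data, not the lab's actual classifier), a nearest-centroid rule already captures the idea:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical stand-in: each of the 5 video categories evokes a
# characteristic EEG feature pattern; individual recorded epochs are
# that pattern plus noise.
n_cat, n_feat, n_per = 5, 40, 30
templates = rng.normal(size=(n_cat, n_feat))
X = np.repeat(templates, n_per, axis=0) + 0.7 * rng.normal(size=(n_cat * n_per, n_feat))
y = np.repeat(np.arange(n_cat), n_per)

# Nearest-centroid classifier: average the training epochs per category,
# then label a new epoch by its closest category average.
centroids = np.stack([X[y == c].mean(0) for c in range(n_cat)])

def classify(epoch):
    return int(np.linalg.norm(centroids - epoch, axis=1).argmin())

# An unseen epoch drawn from category 2 (say, "waterfalls").
test_epoch = templates[2] + 0.7 * rng.normal(size=n_feat)
pred = classify(test_epoch)
```

Averaging many noisy epochs per category is what makes the per-category "brainwave pattern" stand out from the noise.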
Second Phase of the Experiment
In this phase, they selected three of the five categories at random. The neurobiologists then developed two artificial neural networks:
1. One to generate particular images from noise.
2. One to generate similar noise from the recorded EEG.
The two networks worked together to turn the EEG signal into a real image. The scientists then repeated the experiment with unseen videos of the same categories and recorded the results. The trial thrived, generating similar images with 90% accuracy.
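The division of labor between the two networks can be sketched with linear stand-ins: a fixed "generator" that turns a latent noise vector into pixels, and a learned mapper that sends EEG features into that latent space. Every name, dimension, and the linear assumption below are hypothetical illustrations, not the MIPT/Neurobotics architecture.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical linear stand-ins for the two networks described above:
#   G      : image generator, latent "noise" vector -> image pixels (fixed)
#   mapper : EEG features -> the generator's latent space (learned)
n_eeg, n_latent, n_pix = 60, 10, 100
G = rng.normal(size=(n_latent, n_pix))
A = rng.normal(size=(n_latent, n_eeg))       # how latents drive the EEG (toy)

# Paired training data: latent of each watched frame and the EEG it evoked.
n_train = 300
Z = rng.normal(size=(n_train, n_latent))
E = Z @ A + 0.2 * rng.normal(size=(n_train, n_eeg))

# Learn the EEG -> latent mapper by least squares, then compose with G.
mapper, *_ = np.linalg.lstsq(E, Z, rcond=None)

def eeg_to_image(eeg):
    # EEG -> latent -> reconstructed image
    return eeg @ mapper @ G

# Reconstruct a frame from an unseen EEG epoch and compare to the truth.
z_new = rng.normal(size=n_latent)
e_new = z_new @ A + 0.2 * rng.normal(size=n_eeg)
r = np.corrcoef(eeg_to_image(e_new), z_new @ G)[0, 1]
```

The design choice mirrors the description above: the hard image-synthesis problem is solved once by the generator, so the EEG only has to be mapped into a small latent code rather than into raw pixels.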
The paper's co-author, a programmer at Neurobotics and a junior researcher at MIPT, shared his findings:
"The electroencephalogram is a collection of brain signals recorded from the scalp. Researchers think that studying brain processes via EEG is like figuring out a steam engine's internal structure by analyzing the smoke left behind a steam train. We didn't expect that it contains sufficient information even partially to reconstruct an image observed by a person. Although it turned out to be quite possible."
Ethical Challenges and Ending Words
One day the technology may become so advanced that it connects our brains to projectors and lets us share thoughts in real time, saving the time spent preparing presentations for colleagues or bosses. For all its advantages, this rising technology also poses threats, not only at the individual level but at the city, country, and government level; some even fear it could lead to a third world war, though that is beyond the scope of this article. There are still flaws in the mind-reading capability of this machine learning mechanism, yet the possibilities of AI have no boundaries, and further advances in artificial intelligence keep opening new doors to communication.
Professor Quatermass used such a technique to read the minds of Martians. Alongside huge benefits come many fears: the technology could, for instance, yield killing machines operating at the speed of thought. Governments should enforce strict rules so the technique is used to better human beings, not to harm them. In this advanced new world, gene editing is another procedure being explored to integrate computers with humans, enabling real-time information exchange from brainwaves to the computer screen. We should observe all ethical and moral standards required to preserve the integrity of humanity.
With time, this technology will begin to see our thoughts remotely, and it will become more refined in the coming years. Researchers are already working on cost-effective neural interfaces that eliminate the need for implantation.
Potential Approaches and Applications
With improvements in the quality of the images produced, this technique could prove an excellent way to open several hidden doors:
- Images of imagination: people could see what appeared in their dreams while sleeping.
- Doctors could treat psychiatric patients far better by seeing their thoughts and fears, learning what runs through their minds and keeps them irritated and exhausted.
- No need to type or record your voice; just connect to the artificially intelligent system and talk to others by thinking. The AI will deliver your message either by typing out your thoughts or by producing an image of them.
- In the medical field, artificial intelligence also helps detect improvement in disabilities. It picks up electrical signals from a paraplegic patient, helping doctors recognize even a slight movement in the patient's legs; a person with a spinal cord injury could likewise drive a motorized wheelchair.
- AI brain implants are the next evolution in mind-reading technology.
- We all have ideas we struggle to explain to a boss, friend, family member, or loved one. AI can help by generating an image of the thoughts and ideas in your brain so you can explain them better.
- Deliver your thoughts without speaking through Brain-Machine Interfaces (BMIs), which enable real-time information transfer without speech.
- It often happens that someone born with a disability has extraordinary talents. This technology could capture the thoughts of those who cannot speak but carry innovative ideas in their minds.
- AI coupled with human thought processing can help eliminate the electrodes once implanted in people's heads to determine their thinking; today AI also speeds up multilingual translation in real time.
- The technology could be used to scan the minds of criminals, eliminating the question-and-answer session, since images would reveal their past.
- It also helps with shopping for objects you know little about, even ones whose names you don't know: just recall the object in your mind and get excellent results on your computer.
- Soon, AI systems will be installed in our bodies and become part of our neuromechanical biological systems.
- It also helps obtain tangible information free from verbal communication.
- It will merge humans and electronics, eroding the difference between the cell and the electron.
- Noninvasive brain-activity detection enables human-robot interaction as well as applications in human health care.