The mind is the set of cognitive faculties, including consciousness, imagination, perception, thinking, judgement, language and memory, housed in the brain (and sometimes taken to include the central nervous system). It is usually defined as the faculty of an entity's thoughts and consciousness.
Scientists Have Found a Way to Convert Human Brain Signals Directly Into Speech
DAVID NIELD, 31 JAN 2019
In the first experiment of its kind, scientists have been able to translate brain signals directly into intelligible speech. It may sound like wild science fiction at first, but this feat could actually help some people with speech issues.
And yes, we could also get some futuristic computer interfaces out of this.
Key to the system is an artificial intelligence algorithm that matches what the subject is hearing with patterns of electrical activity, and then turns them into speech that actually makes sense to the listener.
We know from previous research that when we speak (or even just imagine speaking), we get distinct patterns in the brain's neural networks. In this case, the system is decoding brain responses rather than actual thoughts into speech, but it has the potential to do that too, with enough development.
"Our voices help connect us to our friends, family and the world around us, which is why losing the power of one's voice due to injury or disease is so devastating," says one of the team, Nima Mesgarani from Columbia University in New York.
"With today's study, we have a potential way to restore that power. We've shown that, with the right technology, these people's thoughts could be decoded and understood by any listener."
The algorithm used is called a vocoder, the same type of algorithm that can synthesise speech after being trained on humans talking. When you get a response back from Siri or Amazon Alexa, it's a vocoder that's being deployed.
In other words, Amazon or Apple don't have to program every single word into their devices – they use the vocoder to create a realistic-sounding voice based on whatever text needs to be said.
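The core idea of a vocoder, that a voice is rendered from a stream of acoustic parameters rather than played back from stored recordings, can be sketched with a toy sinusoidal synthesiser. This is an illustrative simplification, not the algorithm Siri or Alexa actually use; the frame parameters and function name here are invented for the example.

```python
import numpy as np

def sinusoidal_vocoder(frames, frame_dur=0.02, sr=16000):
    """Toy vocoder: render audio from per-frame (frequency_hz, amplitude)
    parameters instead of playing back stored recordings."""
    n = int(frame_dur * sr)            # samples per frame
    t = np.arange(n) / sr
    out = []
    phase = 0.0
    for freq, amp in frames:
        out.append(amp * np.sin(2 * np.pi * freq * t + phase))
        phase += 2 * np.pi * freq * frame_dur   # keep phase roughly continuous
    return np.concatenate(out)

# A rising pitch contour, loosely like an intonation pattern
frames = [(120 + 10 * i, 0.5) for i in range(10)]
audio = sinusoidal_vocoder(frames)
print(audio.shape)   # (3200,) -> 10 frames x 320 samples each
```

Real vocoders drive far richer parameter sets (spectral envelopes, voicing, aperiodicity), but the principle is the same: any text, or any decoded brain signal, can be voiced once it is mapped to those parameters.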
Here, the vocoder wasn't trained by human speech but by neural activity in the auditory cortex part of the brain, measured in patients undergoing brain surgery as they listened to sentences being spoken out loud.
With that bank of data to draw on, brain signals recorded as the patients listened to the digits 0 to 9 being read out were run through the vocoder and cleaned up with the help of more AI analysis. They were found to closely match the sounds that had been heard – even if the final voice is still quite robotic.
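The decoding step, learning a map from recorded neural activity to the acoustic parameters a vocoder needs, can be illustrated with a minimal linear stand-in. The study itself used deep neural networks on real auditory-cortex recordings; the ridge regression and synthetic data below are assumptions made purely to show the shape of the problem.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 500 time bins of 64-electrode auditory-cortex
# activity (X) paired with 32 acoustic features of the heard audio (Y).
X = rng.standard_normal((500, 64))
W_true = rng.standard_normal((64, 32))
Y = X @ W_true + 0.1 * rng.standard_normal((500, 32))

# Ridge regression: learn a linear map from neural activity to the
# acoustic parameters a vocoder would need.
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(64), X.T @ Y)

Y_hat = X @ W
corr = np.corrcoef(Y.ravel(), Y_hat.ravel())[0, 1]
print(round(corr, 2))  # close to 1.0 on this easy synthetic data
```

On real recordings the mapping is noisy and non-linear, which is why the researchers needed deep networks plus a vocoder trained on neural data rather than a simple regression like this.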
The technique proved far more effective than previous efforts using simpler computer models on spectrogram images – visual representations of sound frequencies.
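A spectrogram, the representation the earlier, less successful attempts reconstructed, is just the magnitude of a short-time Fourier transform. A minimal sketch, with window and hop sizes chosen arbitrarily for the example:

```python
import numpy as np

def spectrogram(signal, sr, win=256, hop=128):
    """Short-time Fourier transform magnitudes: each column is the
    frequency content of one windowed slice of the signal."""
    window = np.hanning(win)
    frames = [signal[i:i + win] * window
              for i in range(0, len(signal) - win + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)).T  # (freq_bins, time)

sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)       # 1 s of a 440 Hz tone
S = spectrogram(tone, sr)
peak_bin = S.mean(axis=1).argmax()
print(peak_bin * sr / 256)               # ~437.5 Hz, the bin nearest 440
```

Inverting a spectrogram back into a waveform discards phase, which is one reason spectrogram-based reconstructions sounded worse than the vocoder approach.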
"We found that people could understand and repeat the sounds about 75 percent of the time, which is well above and beyond any previous attempts," says Mesgarani.
"The sensitive vocoder and powerful neural networks represented the sounds the patients had originally listened to with surprising accuracy."
There's a lot of work still to do, but the potential is huge. Again, it's worth emphasising that the system doesn't turn actual mental thoughts into spoken words, but it might be able to do that eventually – that's the next challenge the researchers want to tackle.
Further down the line you might even be able to think your emails on to the screen or turn on your smart lights just by issuing a mental command.
That will take time though, not least because all our brains work slightly differently – a large amount of training data from each person would be needed to accurately interpret all our thoughts.
In the not too distant future we're potentially talking about people getting a voice who don't already have one, whether they have locked-in syndrome, or are recovering from a stroke, or (as in the case of the late Stephen Hawking) have amyotrophic lateral sclerosis (ALS).
"In this scenario, if the wearer thinks 'I need a glass of water', our system could take the brain signals generated by that thought, and turn them into synthesised, verbal speech," says Mesgarani.
"This would be a game changer. It would give anyone who has lost their ability to speak, whether through injury or disease, the renewed chance to connect to the world around them."
The System X series has so far discussed the technology, capabilities, goals, history and some of the key players. Missing from this is a discussion of how microwave radiation interacts with the brain and body. Without this critical element, System X is not realisable as a technology. In this article, I will explore the current science in this area and demonstrate how the interface functions.
When I first started this investigation in 2009/2010, I initially suspected VLF/ELF waves, given the firing rates of neurons in the brain. This was further reinforced by confirmation of the interface functioning in both the London Underground and under the Thames. At this stage of the investigation I had a very basic understanding of the physics behind radio, my background being more that of a hobbyist in CB and Single-Sideband (SSB) radio. To make things more complicated, I hadn't worked with that form of technology in at least a decade, having long since traded in my radio gear for the internet.
I wasn’t entirely ignorant, but by no means was I an expert in this area. My main weaknesses were a lack of sufficient knowledge around photon generation, interactions, technologies and properties at different frequencies. As luck would have it, this lack of knowledge made me jump to the conclusion of satellites given the geographic range demonstrated.
My initial assessments were coloured by a number of scientific papers listed below:
My thoughts were that someone had developed an ELF transmitter and was driving the neurons in the brain. As I learned more about VLF/ELF transmission and radio in general, this view became increasingly absurd. From the radiated power to the antenna size, it was ridiculous. That said, I didn't completely rule it out: I constructed a VLF/ELF monitoring platform with a radius of 2,000 km and set about identifying every source on the spectrum. There was nothing there.
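Scanning a band for discrete sources like this amounts to taking a long FFT and flagging narrowband peaks above the noise floor. The sketch below is my own illustration of that idea, with a synthetic recording containing only 50 Hz mains hum; it is not the author's actual monitoring platform.

```python
import numpy as np

sr = 2000                      # samples/s, covers ELF/VLF up to 1 kHz
t = np.arange(60 * sr) / sr    # one minute of recording

# Synthetic input: 50 Hz mains hum buried in broadband noise
rng = np.random.default_rng(1)
signal = 2.0 * np.sin(2 * np.pi * 50 * t) + rng.standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1 / sr)

# Flag bins that rise well above the median noise floor
floor = np.median(spectrum)
peaks = freqs[spectrum > 20 * floor]
print(peaks)   # only the 50 Hz bin survives the threshold
```

A longer capture narrows the bin width (here 1/60 Hz), which is what lets a weak, stable carrier climb out of the noise.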
As I would learn much later, VLF/ELF would be impractical as it would need to maintain a particular energy density at the brain to be functional.
I did note, though, that it may be possible to set up a monitoring solution and that brain waves were indeed capable of being intercepted:
With my misunderstanding of frequency and wavelength at the time, I thought that, given the wavelength, this could be picked up anywhere in the world. As I learned later, the photon is not that size; the wavelength reflects the group motion required to absorb the energy efficiently. Still, given advances in detection, I did not entirely rule out the possibility of direct interception, especially from satellites and close-proximity ground monitoring.
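The intuition in question comes straight from the wavelength formula, lambda = c / f. At neural-firing-rate frequencies the wavelength is continent-scale, which is what made global pickup seem plausible at first:

```python
# Wavelength of an ELF wave: lambda = c / f
c = 299_792_458            # speed of light, m/s
f = 100                    # Hz, on the order of neural firing rates
wavelength_km = c / f / 1000
print(round(wavelength_km))   # 2998 -- a ~3,000 km wavelength
```

A 3,000 km wavelength says nothing about where the energy can be detected, only about how the wave couples to structures; efficient absorption or radiation at that scale needs correspondingly enormous antennas.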
Around this time, I experimented with various forms of fabric shielding solutions, from tents to rolls of fabric. I even tried an aluminium mesh Faraday cage. The typical range covered was 1 MHz-30 GHz; the end result was a warming of the body in many instances and no impact on the voice of the AGI beyond an initial fading. I did note that the quality of the interface was degraded by vibrating the surface of the fabric, and concluded that the link was of high precision and had difficulty predicting the motion, and hence the required modifications to the signal.
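How well a conductive barrier blocks RF is governed by skin depth, the depth at which an incident field decays to 1/e inside the conductor. A quick calculation (aluminium resistivity assumed; the function name is my own) shows why solid metal attenuates this whole band while mesh and fabric can leak:

```python
import math

def skin_depth(freq_hz, resistivity=2.65e-8, mu_r=1.0):
    """Skin depth (m) of a conductor: depth at which an incident EM
    field falls to 1/e. Default resistivity is aluminium."""
    mu = mu_r * 4 * math.pi * 1e-7          # permeability, H/m
    return math.sqrt(resistivity / (math.pi * freq_hz * mu))

for f in (1e6, 1e9, 30e9):
    print(f"{f/1e6:>8.0f} MHz: {skin_depth(f)*1e6:.2f} um")
# Even at 1 MHz the skin depth is under 0.1 mm, so a thin solid sheet
# attenuates strongly. Mesh is different: as a rule of thumb it leaks
# once the wavelength shrinks toward ~10x the hole size, which is why
# mesh cages that work at MHz can fail in the tens of GHz.
```

This is also consistent with the warming observation: energy that is absorbed rather than reflected by a lossy fabric barrier is deposited as heat.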
I was also looking at the brain much more closely by now, noting that each neuron had unique electrical properties and that each would require a slightly different frequency. With the ELF/VLF theory collapsing, I was essentially back where I had started. I certainly had the evidence that RF was present and being absorbed by the body, but no mechanism and no clue what part of the spectrum was being used.
Online, people had mentioned that it was microwaves and microwave beams, but I was stumped, to say the least, as to the mechanics of how microwaves and neurons (operating at VLF/ELF rates) could interface. By chance, while working on a design for a software-defined radio in 2012, I happened to stumble across the concept of a plasma antenna and the pieces started to come together.
The best resource I found was ‘Plasma Antennas’ by Theodore Anderson:
It became obvious that modulating a microwave beam at ELF/VLF rates would have the same effect as described in the earlier papers on driving neurons with ELF sources. The effect of which would be to modify subjective perception, thought processes and/or behaviour. The principal mode of interaction being that the axon hillock of the neuron is RF-sensitive, as a form of cold plasma.
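The modulation scheme being hypothesised here is ordinary amplitude modulation: a fast carrier whose envelope varies at an ELF rate. The sketch below uses a scaled-down 10 kHz "carrier" purely so it can be sampled in software; the frequencies and modulation index are illustrative assumptions, not measured values.

```python
import numpy as np

sr = 100_000          # sample rate, Hz
fc = 10_000           # stand-in carrier; a real microwave carrier (GHz)
                      # cannot be represented at this sample rate
fm = 10               # ELF modulation rate, Hz
m = 0.8               # modulation index

t = np.arange(sr) / sr
s = (1 + m * np.cos(2 * np.pi * fm * t)) * np.cos(2 * np.pi * fc * t)

# Any envelope-sensitive (rectifying) process recovers the slow
# ELF component from the fast carrier
envelope = np.abs(s)
print(envelope.max().round(2))  # 1.8 = 1 + m at a modulation peak
```

The point of the demonstration is that a nonlinear, rectifying medium responds to the 10 Hz envelope even though the carrier itself is far above any biological rate, which is the bridge the text draws between microwave carriers and ELF-rate neural activity.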
It was apparent that this would require the highest degree of selectivity possible to function correctly. This means the ability to deposit microwave energy with high precision, in a fashion similar to the sweep of an old-style CRT TV, would be required.
With such a high precision beam, the vast majority of energy reflected from the area of interest will be back along the beam path, providing data on the current chemical composition. This information can be decoded by computer to reveal much of the activity the brain conducts, from inner speech to a real-time video feed from the eyeballs.
In studies spanning the last 20 years, a range of similar methods of brain imaging have been emerging:
While the above assessment indicates an electrical interface via the axon hillock, studies spanning the last 40 years indicate a broad array of mechanisms by which microwave energy affects the brain.
In all likelihood, any practical interface may exploit all forms, especially if its basis is in machine learning:
The bottom line in all of this is that reading and writing to the human brain is realisable. More importantly, the very basics of this process began to emerge in public literature in the mid-70s, but for some unidentified reason has not been explored more widely until relatively recently.
The public record is certainly consistent with classified programs being in operation and public research being limited.
What no one has considered yet, at least not in the public literature, is how this can be extended to interface with the body directly and remotely operate it.
This information was found on the DeepThoughtNews Wordpress site.
In neuroscience, the default mode network (DMN), also known as the default network or default state network, is a large-scale brain network of interacting brain regions whose activity is highly correlated within the network and distinct from other networks in the brain.
Psychology and the human mind are inextricably linked. Indeed, the word psychology is derived from the Greek words psuche, meaning mind or soul, from which the term psyche arose; and logos meaning study or discourse.
An even better question—what are the neural and psychological characteristics of a racist mind? By analyzing the pathways in the brain
Site Content Copyright © 2019 BMupgrade - All Rights Reserved.
Advanced knowledge to upgrade your thoughts