Thought identification
Thought identification refers to the empirically demonstrated use of technology to, in some limited sense, read people's minds. Recent research using neuroimaging has provided early demonstrations of the technology's potential to recognize high-order patterns of brain activity, which in some cases yields meaningful (and controversial) information to investigators.
Barbara Sahakian, a professor of neuropsychology, qualifies: "A lot of neuroscientists in the field are very cautious and say we can't talk about reading individuals' minds, and right now that is very true, but we're moving ahead so rapidly, it's not going to be that long before we will be able to tell whether someone's making up a story, or whether someone intended to do a crime with a certain degree of certainty."[1]
Examples
Identifying thoughts
When humans think of an object, such as a screwdriver, many different areas of the brain activate. This is because what we call memory is actually a set of associations distributed throughout the brain: using the screwdriver, seeing the screwdriver, and so on. Psychologist Marcel Just and his colleague Tom Mitchell have used fMRI brain scans to teach a computer to identify the various parts of the brain associated with specific thoughts.[2]
This work also yielded a discovery: the brain activity evoked by similar thoughts is surprisingly similar across different people. To illustrate this, Just and Mitchell used their computer to predict, based on nothing but fMRI data, which of several images a volunteer was thinking about. The computer was 100% accurate, but so far the system only distinguishes between 10 images.[2]
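At its core, this kind of thought identification is a pattern-classification problem: the voxel activations from a scan are treated as a feature vector, and a classifier is trained to map them to object labels. Below is a minimal sketch in Python using scikit-learn, with synthetic data standing in for real fMRI recordings; the array sizes and the choice of a linear classifier are illustrative assumptions, not details of Just and Mitchell's method.

    # Minimal sketch of fMRI-based thought classification (synthetic data).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_trials, n_voxels, n_classes = 120, 500, 10   # illustrative sizes only

    # Pretend each object class evokes a distinct (noisy) activation pattern.
    class_patterns = rng.normal(size=(n_classes, n_voxels))
    labels = rng.integers(0, n_classes, size=n_trials)
    scans = class_patterns[labels] + rng.normal(scale=2.0, size=(n_trials, n_voxels))

    # Train a linear classifier to predict which object a scan corresponds to.
    clf = LogisticRegression(max_iter=1000)
    scores = cross_val_score(clf, scans, labels, cv=5)
    print("cross-validated accuracy:", scores.mean())

With real data, the scans would be preprocessed voxel activations and the labels the objects shown or imagined during each trial.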
Psychologist John-Dylan Haynes explains that fMRI can also be used to identify recognition in the brain. He gives the example of a criminal being interrogated about whether he recognizes the scene of the crime or murder weapons.[2] Just and Mitchell also claim they are beginning to be able to identify kindness, hypocrisy, and love in the brain.[2]
In 2010, IBM applied for a patent on a method for extracting mental images of human faces from the brain. The proposed method uses a feedback loop based on measurements of the fusiform gyrus, an area of the brain that activates in proportion to the degree of facial recognition.[3]
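The patent application thus describes a feedback loop: a candidate image is repeatedly adjusted, and the measured fusiform gyrus activation serves as the score to be maximized. The sketch below illustrates that general idea with a simple hill-climbing loop; the recognition_score function is a hypothetical stand-in for a brain measurement, and nothing here is taken from IBM's actual method.

    # Sketch of a recognition-driven feedback loop (hypothetical stand-in for a
    # brain measurement; the actual proposal relies on fusiform gyrus activation).
    import numpy as np

    rng = np.random.default_rng(1)
    target_face = rng.normal(size=64)      # the remembered face (unknown to the system)

    def recognition_score(candidate):
        # Stand-in for measured activation: higher when the candidate
        # resembles the remembered face.
        return -np.linalg.norm(candidate - target_face)

    candidate = np.zeros(64)
    for step in range(2000):
        proposal = candidate + rng.normal(scale=0.1, size=64)   # perturb the image
        if recognition_score(proposal) > recognition_score(candidate):
            candidate = proposal            # keep changes that raise recognition

    print("final similarity score:", recognition_score(candidate))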
In 2011, a team led by Shinji Nishimoto used brain recordings alone to partially reconstruct what volunteers were seeing. The researchers applied a new model of how the brain processes information about moving objects while volunteers watched clips from several videos. An algorithm then searched through thousands of hours of external YouTube footage (none of it identical to the videos the volunteers watched) to select the clips most similar to what each volunteer had seen.[4][5] The authors have uploaded demos comparing the watched and the computer-estimated videos.[6]
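The selection step can be understood as encoding-model-based matching: a fitted model predicts the brain response each candidate clip would evoke, and the clips whose predicted responses best match the measured response are retained. A rough sketch follows, with random numbers standing in for the motion-energy features and fitted model weights used in the actual study.

    # Sketch of selecting candidate clips whose predicted brain responses best
    # match a measured response (features and model weights are illustrative).
    import numpy as np

    rng = np.random.default_rng(2)
    n_clips, n_features, n_voxels = 5000, 200, 300    # illustrative sizes

    clip_features = rng.normal(size=(n_clips, n_features))     # motion-energy-like features
    encoding_weights = rng.normal(size=(n_features, n_voxels)) # stand-in fitted encoding model

    predicted_responses = clip_features @ encoding_weights     # predicted activity per clip

    # Measured response to the (unknown) watched clip; here simulated from clip 0.
    measured = predicted_responses[0] + rng.normal(scale=5.0, size=n_voxels)

    # Rank candidate clips by correlation between predicted and measured responses.
    def corr(a, b):
        return np.corrcoef(a, b)[0, 1]

    similarity = np.array([corr(p, measured) for p in predicted_responses])
    best_clips = np.argsort(similarity)[::-1][:10]
    print("top candidate clips:", best_clips)

In the published work, the top-ranked clips were then combined to produce the blurry reconstructed videos shown in the demos.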
In 2000, U.S. News and World Report printed an interview with John Norseen, in which he stated that he had submitted a research and development plan to the Pentagon for identifying a terrorist's mental profile. Norseen was a lead researcher at Lockheed Martin working on a project called BioFusion under contract with the Department of Defense. One year later, in SIGNAL Magazine, Norseen gave an update on his research: "By looking at the collective data, we know that when this person thinks of the number nine or says the number nine, this is how it appears in the brain, providing a fingerprint, or what we call a brainprint," Norseen offers. "We are at the point where this database has been developed enough that we can use a single electrode or something like an airport security system where there is a dome above your head to get enough information that we can know the number you're thinking," he adds.[7][8]
Predicting intentions
In 2008, researchers were able to predict with 60% accuracy whether a subject was going to push a button with their left or right hand. This is notable not just because the accuracy is better than chance, but because the scientists were able to make these predictions up to 10 seconds before the subject acted, well before the subject felt they had decided.[9] This result is even more striking in light of other research suggesting that the decision to move, and possibly the ability to cancel that movement at the last second,[10] may be the result of unconscious processing.[11]
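Conceptually, such prediction is again a classification task, applied to activity recorded at different lead times before the movement. The sketch below illustrates the idea on synthetic data; the weak early signal, and hence the modest early accuracy, is built into the simulation as an assumption rather than derived from the study's data.

    # Sketch of decoding an upcoming left/right choice from activity recorded
    # at different lead times before the movement (synthetic data).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    n_trials, n_voxels = 200, 100
    choice = rng.integers(0, 2, size=n_trials)        # 0 = left, 1 = right

    for seconds_before in (10, 5, 0):
        signal_strength = 0.1 if seconds_before > 0 else 1.0   # weak predictive signal early on
        activity = rng.normal(size=(n_trials, n_voxels))
        activity[:, 0] += signal_strength * (2 * choice - 1)   # choice leaks into one voxel
        acc = cross_val_score(LogisticRegression(max_iter=1000), activity, choice, cv=5).mean()
        print(f"{seconds_before}s before movement: accuracy {acc:.2f}")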
John-Dylan Haynes has also demonstrated that fMRI can be used to identify whether a volunteer is about to add or subtract two numbers in their head.[2]
Brain as input device
Emotiv Systems, an Australian electronics company, has demonstrated a headset that can be trained to recognize a user's thought patterns for different commands. Tan Le demonstrated the headset's ability to manipulate virtual objects on screen and discussed various future applications for such brain-computer interface devices, from powering wheelchairs to replacing the mouse and keyboard.[12]
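A headset of this kind typically works as a pipeline that classifies short epochs of EEG-derived features and dispatches the predicted class to an application command. The sketch below illustrates that mapping; the command names, feature sizes, and nearest-neighbour classifier are illustrative assumptions, not Emotiv's actual implementation.

    # Sketch of mapping classified EEG-like feature epochs to interface commands
    # (command names and feature shapes are illustrative assumptions).
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(4)
    commands = ["push", "pull", "rotate", "lift"]

    # Pretend we collected training epochs of band-power features for each command.
    train_X = np.vstack([rng.normal(loc=i, size=(30, 16)) for i in range(len(commands))])
    train_y = np.repeat(np.arange(len(commands)), 30)

    clf = KNeighborsClassifier(n_neighbors=5).fit(train_X, train_y)

    def handle_epoch(features):
        command = commands[clf.predict(features.reshape(1, -1))[0]]
        print("executing command:", command)   # an application would act on this

    handle_epoch(rng.normal(loc=2, size=16))   # most likely maps to "rotate"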
Decoding brain activity to reconstruct words
On January 31, 2012, Brian Pasley and colleagues at the University of California, Berkeley published a paper in PLoS Biology in which subjects' internal neural processing of auditory information was decoded and reconstructed as sound on a computer, by gathering and analyzing electrical signals recorded directly from the subjects' brains.[13] The research team conducted their studies on the superior temporal gyrus, a region of the brain involved in the higher-order neural processing that makes semantic sense of auditory information.[14] The team used a computational model to analyze the parts of the brain that might be involved in neural firing while processing auditory signals, and used this model to identify the brain activity evoked when subjects were presented with recordings of individual words.[15] The model was then used to reconstruct some of the words back into sound from the subjects' neural activity alone. However, the reconstructed sounds were of poor quality and could be recognized only when the audio waveforms of the reconstructed sound were visually matched against those of the original sound presented to the subjects.[15] Nevertheless, this research marks a step toward more precise identification of neural activity in cognition.
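One way to realize such a decoder is to fit a linear map from neural features to spectrogram frames, reconstruct held-out spectrograms from brain activity alone, and then recover an approximate waveform from the reconstructed spectrogram. The sketch below uses entirely synthetic data and ridge regression as a stand-in for the study's decoding model.

    # Sketch of spectrogram-based decoding: fit a linear map from neural features
    # to spectrogram frames, then reconstruct held-out frames from brain activity
    # alone (all data synthetic; the study used superior temporal gyrus recordings).
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(5)
    n_frames, n_electrodes, n_freq_bins = 1000, 64, 32

    spectrogram = np.abs(rng.normal(size=(n_frames, n_freq_bins)))         # "heard" audio
    mixing = rng.normal(size=(n_freq_bins, n_electrodes))
    neural = spectrogram @ mixing + rng.normal(scale=0.5, size=(n_frames, n_electrodes))

    # Train on the first 800 frames, reconstruct the rest from neural data alone.
    decoder = Ridge(alpha=1.0).fit(neural[:800], spectrogram[:800])
    reconstructed = decoder.predict(neural[800:])

    corr = np.corrcoef(reconstructed.ravel(), spectrogram[800:].ravel())[0, 1]
    print("reconstruction correlation:", round(corr, 3))
    # A waveform could then be approximated from the reconstructed spectrogram
    # (e.g., with an iterative phase-recovery method), typically at low fidelity.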
Ethical issues
With brain scanning technology becoming increasingly accurate, experts predict important debates over how and when it should be used. One potential area of application is criminal law. Haynes explains that simply refusing to use brain scans on suspects also prevents the wrongly accused from proving their innocence.[1]
References
- ^ a b The Guardian, "The brain scan that can read people's intentions"
- ^ a b c d e 60 Minutes "Technology that can read your mind"
- ^ IBM Patent Application: Retrieving mental images of faces from the human brain
- ^ Nishimoto, Shinji; Vu, An T.; Naselaris, Thomas; Benjamini, Yuval; Yu, Bin; Gallant, Jack L. (2011), "Reconstructing Visual Experiences from Brain Activity Evoked by Natural Movies", Current Biology, 21 (19): 1641–1646, doi:10.1016/j.cub.2011.08.031
- ^ Scientific American Blog, Breakthrough Could Enable Others to Watch Your Dreams and Memories [Video], Philip Yam
- ^ Nishimoto et al. uploaded video, "Nishimoto.etal.2011.3Subjects.mpeg" on Youtube
- ^ "U.S. News and World Report". ["http://www.usnews.com/usnews/culture/articles/000103/archive_033992.htm" ""Buck Rogers, meet John Norseen""] Check
|url=
value (help). - ^ "SIGNAL Magazine". ["http://www.afcea.org/signal/archives/content/Oct01/" ""Decoding Minds, Foiling Adversaries""] Check
|url=
value (help). - ^ Attention: This template ({{cite pmid}}) is deprecated. To cite the publication identified by PMID 18408715, please use {{cite journal}} with
|pmid=18408715
instead. - ^ Kühn, S., & Brass, M. (2009). Retrospective construction of the judgment of free choice.Consciousness and Cognition, 18, 12-21.
- ^ Matsuhashi, M., & Hallett, M. (2008). The timing of the conscious intention to move. European Journal of Neuroscience, 28, 2344–2351.
- ^ Tan Le: A headset that reads your brainwaves
- ^ Pasley BN, David SV, Mesgarani N, Flinker A, Shamma SA, et al. (2012) Reconstructing Speech from Human Auditory Cortex. PLoS Biol 10(1): e1001251. doi:10.1371/journal.pbio.1001251
- ^ "Science decodes 'internal voices'", BBC News, 31 January 2012
- ^ a b "Secrets of the inner voice unlocked", 1 February 2012