
Stanford’s Inner Speech BCI Shows Patients Want More Than Accuracy

Imagined words, decoded in real time, point to a more natural path for restoring communication


A Stanford research group has developed a brain-computer interface that can decode inner speech (sentences imagined silently in the mind) and translate it into text in real time. Published in Cell, the study involved four participants with paralysis from ALS and stroke, some of whom had lost nearly all ability to speak. Unlike previous systems that required users to strain their vocal muscles in “attempted speech,” the new approach allowed them to communicate simply by imagining words, with vocabularies of up to 125,000 words.


The work comes from Erin Kunz et al. at the BrainGate consortium, long regarded as a leader in neural interface research. Participants reported that inner speech decoding was both faster and less tiring, offering a communication mode closer to natural thought. While mainstream coverage of the breakthrough has focused on the prospect of “mind reading,” the real story is clinical: for patients who find attempted speech exhausting or impossible, this development offers a far more usable and dignified pathway to restored communication.


Inside the Study: How Neural Arrays Captured Silent Sentences

In the Cell paper, the team analyzed neural activity recorded by intracortical microelectrode arrays implanted in the motor cortex of four participants. Inner speech, attempted speech, and passive listening were all found to share a common neural code, with inner speech emerging as a scaled-down version of motor commands that never crosses the activation threshold for articulation. By training recurrent neural network decoders on these signals, the team achieved real-time decoding of silently imagined sentences, with word error rates between 24% and 54% depending on vocabulary size.
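For readers who want a concrete picture of that pipeline: binned neural features feed a recurrent network that emits per-timestep phoneme probabilities, typically trained with a sequence loss such as CTC so no frame-level alignment labels are needed. The sketch below is a generic stand-in under those assumptions, not the authors' code; the channel count, phoneme set, and hyperparameters are illustrative.

```python
# Minimal sketch of an RNN phoneme decoder over binned neural features.
# Shapes and hyperparameters are hypothetical; the paper's exact pipeline
# (feature preprocessing, layer sizes, language-model rescoring) differs.
import torch
import torch.nn as nn

N_CHANNELS = 256   # assumed: spike features per electrode, per 20 ms bin
N_PHONEMES = 41    # assumed: English phonemes + silence + CTC blank

class InnerSpeechDecoder(nn.Module):
    def __init__(self, hidden=512, layers=5):
        super().__init__()
        self.rnn = nn.GRU(N_CHANNELS, hidden, num_layers=layers, batch_first=True)
        self.head = nn.Linear(hidden, N_PHONEMES)

    def forward(self, x):  # x: (batch, time_bins, N_CHANNELS)
        h, _ = self.rnn(x)
        return self.head(h).log_softmax(dim=-1)  # per-bin phoneme log-probs

# Training step with CTC loss, which aligns variable-length phoneme
# targets to the neural time series without frame-level labels.
model = InnerSpeechDecoder()
ctc = nn.CTCLoss(blank=N_PHONEMES - 1)
x = torch.randn(8, 200, N_CHANNELS)             # 8 trials of 200 time bins
targets = torch.randint(0, N_PHONEMES - 1, (8, 30))
log_probs = model(x).transpose(0, 1)            # CTC expects (time, batch, classes)
loss = ctc(log_probs, targets,
           input_lengths=torch.full((8,), 200),
           target_lengths=torch.full((8,), 30))
loss.backward()
```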


Critically, the researchers tested vocabularies far beyond toy datasets. In addition to a controlled 50-word set, they evaluated performance on a 125,000-word lexicon drawn from the Switchboard corpus, demonstrating that large-vocabulary inner speech decoding is technically feasible. Participants included individuals with severe dysarthria from ALS and a pontine stroke, as well as one anarthric patient dependent on a ventilator. All had chronically implanted Utah arrays in ventral and mid-precentral gyrus regions, confirming these “speech hotspots” as key substrates for decoding.
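As a point of reference, the word error rates quoted in these results are the word-level edit distance (substitutions, insertions, deletions) between decoded and reference sentences, normalized by reference length. A minimal sketch of the metric; the function name and example are illustrative, not from the paper:

```python
# Word error rate (WER): word-level edit distance divided by reference length.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# e.g. wer("i want a drink of water", "i want a glass of water") -> ~0.167
```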


The authors also probed uninstructed inner speech, showing that elements of covert verbal rehearsal could be decoded during tasks such as sequence recall and silent counting. These experiments underscore both the power and the potential risks of the approach: aspects of inner monologue can surface in neural recordings even when not explicitly cued. To address the ethical concerns this raises, the researchers identified a reliable neural signal that separates attempted speech from inner speech. Building on this signal, they added safeguards such as “imagery-silenced” training labels and keyword-based unlocking, which blocked unintended outputs while preserving performance for intended communication.
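To make the unlocking idea concrete, one way such a gate could work in software is to discard decoded text until an imagined unlock phrase appears, then re-lock on an explicit lock phrase or after a stretch of inactivity. The sketch below is hypothetical; the phrases, timeout, and interface are assumptions, not the study's implementation.

```python
# Hypothetical gate on decoder output for keyword-based unlocking.
import time

UNLOCK_PHRASE = "open sesame"   # hypothetical imagined keyword
LOCK_PHRASE = "close sesame"    # hypothetical imagined keyword

class OutputGate:
    def __init__(self, idle_timeout_s=60.0):
        self.unlocked = False
        self.last_output = 0.0
        self.idle_timeout_s = idle_timeout_s

    def filter(self, decoded_text: str) -> str | None:
        now = time.monotonic()
        # Re-lock automatically after a period without output.
        if self.unlocked and now - self.last_output > self.idle_timeout_s:
            self.unlocked = False
        text = decoded_text.lower()
        if not self.unlocked:
            if UNLOCK_PHRASE in text:
                self.unlocked = True
                self.last_output = now  # start the idle timer fresh
            return None                 # suppress all output while locked
        if LOCK_PHRASE in text:
            self.unlocked = False
            return None
        self.last_output = now
        return decoded_text             # pass intended speech through
```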


Keyword-based unlocking (credit: Kunz et al. 2025)

Why Patients Preferred Inner Speech Over Attempted Speech

For people with ALS or brainstem stroke, even the best attempted-speech BCIs demand effort. Users must tense their orofacial muscles or force weak vocalizations, which causes fatigue and compromises breath control and comfort. While these systems have achieved impressive communication rates, the physical strain and conspicuous outward movements limit their practicality in everyday life. As one participant in the Stanford study described, producing attempted speech was not only exhausting but also socially awkward, since it often resulted in audible but unintelligible sounds.


By contrast, inner-speech decoding offered a qualitatively different experience. Participants preferred it for its lower physical load, faster pace, and more discreet appearance. In essence, the system allowed them to “speak silently”, a mode of communication that felt more natural and sustainable over time. For clinicians, this distinction matters: the success of BCIs will not be judged only by accuracy metrics, but by whether patients can integrate them into daily routines without constant fatigue or frustration.


Can Startups Afford to Ignore Inner Speech?

The Stanford results arrive at a moment when industry is converging on speech restoration as the flagship use case for invasive BCIs. Paradromics has positioned its Connexus implant as a communication neuroprosthesis, while Synchron is advancing an endovascular interface for cursor typing and text. Precision Neuroscience is betting on high-density electrocorticography to unlock speech decoding, and Neuralink has signaled similar ambitions in ALS. Yet in every case, the emphasis has been on attempted speech.


Inner speech has largely been treated as either too weak to decode reliably or too fraught with privacy concerns to pursue. This study challenges both assumptions, showing that imagined sentences can be decoded in real time with vocabularies orders of magnitude larger than typical research sets, and that safeguards can be built in to protect mental privacy. By demonstrating strategies like imagery-silenced training and keyword unlocking, the researchers set a precedent: ethical design cannot be an afterthought once products reach market, but must be built into decoding pipelines from the start.


Stanford's microelectrode array (credit: Stanford)

For startups, the message is once again clear: user adoption will not hinge only on data rates or decoding accuracy, but on whether devices feel effortless, trustworthy, and livable. Inner-speech BCIs may not be a commercial priority yet, but for patients, they could be the difference between a technology that is clinically impressive and one that is truly transformative.


