
Non-invasive brain-computer interfaces have traditionally been easy to dismiss. In a field shaped by surgical ambition, they are often framed as lower-fidelity systems with limited long-term potential. But a new group of companies is showing that better machine learning models, larger datasets, and tighter hardware-software integration can push non-invasive approaches further than previously assumed. Synaptrix Labs sits squarely in that camp, building from EEG-based decoding toward assistive control and a broad clinical thesis.
Aryan Govil founded Synaptrix with a conviction that the main bottleneck in non-invasive BCIs is what can be extracted from noisy neural data once modern AI is applied to it. The company, founded in 2023 by Govil and fellow NYU grad Eric Yao, is developing its Neuralis platform around that premise. It has opted for wheelchair control as its first major use case, while not shying away from a broader ambition to make non-invasive brain-based interactions usable across everyday care settings.

That thesis has started to draw wider attention. Mark Cuban invested in the company in late 2025, and Synaptrix is now running participant recruitment in New York to collect real-world data and test its system in everyday conditions. We discussed why Govil believes non-invasive BCIs still have far more headroom than the market gives them, why wheelchair control made sense as a first proving ground, and how Synaptrix is building both the hardware and data stack needed to make an EEG interface work at scale.
What is Synaptrix’s core thesis, and why did you decide this was the right area to build in?
The core thesis is that you do not need to go invasive to build high-performance brain-computer interfaces. You need better data and much better models. The brain is an incredibly rich dynamical system, and EEG, despite being noisy, still carries a great deal of usable signal if you know how to extract it.
What has been missing is not hardware as much as the ability to model that signal at scale. Once you treat neural activity as a high-dimensional system, and apply the same kind of deep learning and data scaling that transformed other domains, you begin to unlock real capability.
I believed this was the right area to build in because the upside is massive: restoring mobility and communication for people who need it most. The timing also finally makes sense, with modern compute and a clearer path to building large brain models that can generalize beyond lab settings.
Why did wheelchair control make sense as the first use case?
Wheelchair control felt right because it is both deeply meaningful and technically tractable. You have a clear closed-loop system where intent maps cleanly to directional commands, so you can validate decoding performance in a real, embodied setting without needing perfect language-level precision.
At the same time, the impact is immediate. For someone who cannot move, even basic navigation is life-changing. It also forces us to solve the hard parts early: robustness, low latency, safety, and real-world reliability, instead of staying in a demo environment.
If we can get this right, it becomes a strong foundation for expanding into communication, prosthetics, and more complex interfaces for diagnostics and healthcare, all through a single unified device rather than something narrower like Augmental's MouthPad.
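The closed-loop idea Govil describes, where decoded intent maps to a small set of directional commands, can be sketched in a few lines. Everything here is illustrative: the class names, the confidence threshold, and the fail-safe policy are assumptions for the sake of the example, not Synaptrix's actual interface.

```python
# Hypothetical sketch of intent-to-command mapping for wheelchair control.
# A motor-imagery decoder emits a predicted class plus a confidence score;
# this layer translates that into a command, failing safe to STOP whenever
# the prediction is ambiguous -- one simple way to encode the safety and
# reliability constraints mentioned above.

COMMANDS = {
    "imagine_left_hand": "TURN_LEFT",
    "imagine_right_hand": "TURN_RIGHT",
    "imagine_both_feet": "FORWARD",
    "rest": "STOP",
}

CONFIDENCE_THRESHOLD = 0.80  # illustrative value; below this, do not move


def decode_to_command(predicted_class: str, confidence: float) -> str:
    """Translate one decoder output into a wheelchair command.

    Low-confidence or unrecognized predictions map to STOP, so ambiguous
    neural activity never produces motion.
    """
    if confidence < CONFIDENCE_THRESHOLD:
        return "STOP"
    return COMMANDS.get(predicted_class, "STOP")
```

In a real system this mapping would sit inside a latency-bounded control loop with additional smoothing and obstacle-level safety checks, but the core structure, a small discrete command set gated by decoder confidence, is what makes wheelchair control a tractable first validation target.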
You have said the brain could become healthcare’s primary interface. What do you mean by that, and where do you see the earliest practical uses?
The brain can become the universal control layer for both reading and writing human biology, not just for interacting with devices. Today, we think of healthcare as fragmented across organs, specialties, and symptoms, but the brain is the central interface through which all of that is coordinated, perceived, and ultimately regulated.
If you can reliably decode and, more importantly, encode signals into the brain, you start to collapse that entire stack into a single unified interface. In the near term, that shows up in assistive use cases like communication and mobility. Over a longer horizon, it evolves into closed-loop systems that can monitor neural and physiological state continuously and intervene in real time.
Within this century, I think this becomes a platform through which you can treat neurological disorders, modulate immune responses, correct dysfunction in other organ systems, and even rewrite maladaptive patterns at the source. Instead of treating disease downstream, you are interfacing directly with the control system of the body: reading intent, writing corrections, and maintaining homeostasis through software.
At that point, healthcare stops being reactive and episodic. It becomes continuous, personalized, and fundamentally integrated into how we operate as humans.
Beyond visibility, what did Mark Cuban’s investment change for the company?
Mark’s investment mattered far beyond visibility because it aligned us with someone who genuinely cares about making healthcare more accessible and affordable at scale. We see brain-computer interfaces as a strong companion to that mission: a way to break down barriers in mobility, communication, and long-term care without requiring invasive procedures or prohibitively expensive systems.
In a field often dominated by implants, why do you see non-invasive BCI as the more important path?
Non-invasive is the only path that scales to billions of people, not thousands. Ultimately, this is more a data and software problem than a surgical one. As models improve and datasets grow, we will extract far more signal from the brain than people expect, without ever opening the skull.
The winning system will be the one that can deploy everywhere, learn continuously, and get better with every user.
Why was it important to build both the hardware and software stack in-house?
We are building a tightly coupled system in which signal quality, hardware design, and models all co-evolve. Owning the full stack lets us optimize every layer rather than inherit someone else’s constraints. Vertical integration also lets you control the entire product experience.
EEG still faces signal quality and robustness challenges. Where are the main bottlenecks, and how is Synaptrix approaching them?
I cannot say too much here, but we are approaching this by treating EEG as a high-dimensional dynamical system and training large-scale models that learn to separate true cortical intent from everything else, effectively denoising and stabilizing the signal in a learned way.
As you scale data and compute, the system starts to generalize across users and environments. That is what ultimately unlocks robustness in the real world.
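Synaptrix's learned denoising approach is not public, but for context, EEG research has long used simple structural cleanup steps before any modeling. The sketch below shows one classical example, common average referencing, purely as an illustration of what "separating cortical signal from everything else" means at its most basic; it is not Synaptrix's method.

```python
# Illustrative only: common average referencing, a classical EEG cleanup
# step. Subtracting the across-channel mean at each time sample removes
# components shared by every electrode (broad artifacts, reference drift),
# leaving more channel-specific activity for a downstream model to learn from.


def common_average_reference(eeg: list[list[float]]) -> list[list[float]]:
    """Re-reference an EEG recording.

    eeg[t][c] is the voltage of channel c at time sample t. Returns the
    same array with the per-sample channel mean subtracted.
    """
    cleaned = []
    for sample in eeg:
        mean = sum(sample) / len(sample)
        cleaned.append([v - mean for v in sample])
    return cleaned
```

Learned approaches of the kind Govil alludes to replace hand-designed steps like this with models trained end to end on large datasets, which is what allows them to adapt across users and environments rather than relying on fixed assumptions about the noise.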
What is the current study designed to test, and what would a strong outcome look like at this stage?
This study is about stress-testing our system in the real world, collecting large-scale, high-quality neural data while people use the interface to control cursors, navigate, and perform motor imagery tasks.
We are looking to understand how well our models generalize across different individuals, how stable the decoding is over time, and how quickly users can achieve reliable control. A strong outcome for us is consistent, low-latency intent decoding across a diverse set of participants, with minimal calibration, where people can actually use the system intuitively rather than feel like they are fighting it.
About the study: Synaptrix is running a research study in New York City to help develop next-generation neural control technology, and you can take part. Each session lasts about 90 minutes and pays $50, with the option to participate in multiple sessions.
During the study, you will wear a research headset and vividly imagine specific movements while your brain activity is recorded. Participation helps advance technology aimed at restoring mobility and communication for people living with paralysis.