
Sam Hosovsky on Building for the First 100 Million BCI Users

The BCI field is picking up momentum. At Stanford, researchers recently showed it is possible to decode attempted speech with vocabularies in the hundreds of thousands and error rates below 5%. At UCLA, new work with EEG and AI copilots suggests that less invasive approaches may also enable practical interfaces. Meanwhile, companies like Neuralink, Precision Neuroscience, and Synchron are pushing their devices closer to real-world use, whether through high-bandwidth implants or vascular stentrodes already in clinical trials.


As hardware capabilities expand, the question shifts from whether we can record rich neural signals to how best to translate those signals into everyday function. Restoring fluent communication, natural movement, and, above all, a sense of agency remains the true test of value for end-users.


This is where uCat comes in. Founded by Sam Hosovsky, the company positions itself as a “layer-two” software platform for BCIs. uCat focuses on speech decoding and building virtual environments where people with paralysis can interact through naturalistic avatars, a bridge that may one day extend from VR into physical robotics. I spoke with Sam about the direction of BCIs, the role of speech and VR, and what it might take to reach the first hundred million users.


You’ve been close to some of the leading BCI efforts. How do you see the field right now?

The position I’m in is a peculiar one. uCat is a kind of layer-two company attached to high-bandwidth brain-computer interfaces. We’re purely a software company that treats the motor cortex as the highest-resolution input device, something that can control robotics, computers, or even a personal avatar.

Sam Hosovsky

Since we center ourselves around users, I’ve had access to a lot of these high-bandwidth BCI companies: Neuralink, Precision Neuroscience, Paradromics. So I’m not “in the field” in the classic sense, but I see what’s feasible and what isn’t. Their implants can record with high spatial and temporal resolution, streaming around 200 megabits per second from hundreds of neurons, even down to single-cell activity.


Right now, access is still limited, with only a few clinical trial participants. However, the first indications are clear: BCIs can help people with paralysis regain the ability to move and communicate. The challenge is linking that high-dimensional neural activity to high-dimensional motor behavior. That’s a very difficult machine-learning problem, and it’s where things are getting interesting.


Where does uCat fit into this landscape?

We put people who are paralyzed in VR, or we show them a 3D animation of the movements we want them to mimic, and it feels like their hands or arms are moving. Because people with paralysis can’t produce real movement, there’s no ground truth to train on; the mimicked animations give us precise labels for what they imagine doing. That labeled data is what we help companies like Neuralink with when commercializing.
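To make that labeling loop concrete, here is a minimal sketch of how a cue-and-record session could pair each VR animation with the neural window it labels. This is purely illustrative, not uCat’s pipeline; the names (LabeledTrial, play_animation, stream.window) are hypothetical stand-ins.

    import time
    from dataclasses import dataclass

    @dataclass
    class LabeledTrial:
        cue: str          # the movement shown in VR, e.g. "reach_left"
        t_start: float    # cue onset (seconds, shared clock)
        t_end: float      # cue offset
        neural: list      # neural samples recorded during the cue window

    def collect_trial(cue: str, stream, play_animation) -> LabeledTrial:
        """Show one 3D movement cue and capture the neural window it labels."""
        t0 = time.monotonic()
        play_animation(cue)        # avatar demonstrates the movement to mimic
        t1 = time.monotonic()
        # The imagined attempt is time-locked to the cue, so the cue itself
        # serves as the ground-truth label for this window of neural activity.
        return LabeledTrial(cue, t0, t1, stream.window(t0, t1))

The point of the sketch is that the cue itself is the label: because the imagined attempt is time-locked to the animation, no overt movement is needed to supervise the decoder.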


We finished a prototype and built a demo. It’s open-source for training exercises, so motor-BCI researchers can use it. We also have something proprietary which I can’t share. We’re now looking for our first collaborators to take it into clinical trials.


Right now, the software stack focuses on speech. Since we don’t yet have a live BCI partner, we simulate input by using speech-to-text: we speak words, send them to Eleven Labs, and get text tokens back as if they came from an implant. Those tokens are then fed into VR. This gives us a fully working prototype that can later be swapped for actual BCI signals during trials.
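As a sketch of that swap-in design, the VR layer can be written against a generic token stream, so the speech-to-text stage is later replaced by a real neural decoder with no other changes. The function names here are hypothetical, and the actual Eleven Labs API details are not shown.

    from typing import Callable, Iterable

    def run_input_loop(
        capture_audio: Callable[[], bytes],        # microphone, during prototyping
        decode: Callable[[bytes], Iterable[str]],  # speech-to-text now, BCI decoder later
        emit_token: Callable[[str], None],         # hands tokens to the VR layer
    ) -> None:
        """Feed decoded text tokens into VR, agnostic of where they came from."""
        while True:
            audio = capture_audio()
            for token in decode(audio):
                emit_token(token)  # VR treats this exactly like implant output

During prototyping, decode wraps the speech-to-text service; in a clinical trial, the same loop runs with the implant’s speech decoder plugged in, leaving the VR side untouched.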



Why are you focusing on speech, and what progress have you seen from the research side?

My journey into BCIs started in 2021, at a journal club with David Moses from Chang Lab at UCSF, one of the few labs in the world doing naturalistic, high-speed speech decoding with large vocabularies. Back then, they showed decoding a tiny 50-word vocabulary at about 15 words per minute, with an error every four words or so.


Even then, it felt like the scientific question was answered, and the rest was engineering. Since then, demonstrations have jumped to about 65 words per minute for people with anarthria, vocabularies in the hundreds of thousands, and error rates below 10%. That convinced me that speech will lead the way for high-bandwidth BCIs.


How does VR come into play, especially around restoring agency?

The proprietary part of uCat comes from the fact that paralysis is different for everyone. Some people have residual movement in one arm, some still have access to speech, some don’t. We’re building an avatar that takes input from the body however we can get it (computer vision, motion tracking, wearables) and then supplements that with information from the implant.


The result is a virtual replica, an extension of your body, that moves naturally and looks like any other user in social VR. That’s what we’re working on and licensing as our proprietary tech.
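One way to picture that per-user fusion is a simple priority rule over input sources. The sources come from the interview, but the rule and names below are illustrative assumptions, not uCat’s licensed tech.

    from typing import Optional

    # Prefer directly measured movement; fall back to implant-decoded intent.
    SOURCE_PRIORITY = ["motion_tracking", "computer_vision", "wearable", "implant"]

    def pose_for_joint(joint: str, readings: dict) -> Optional[list]:
        """Pick the highest-priority source that has data for this joint."""
        for source in SOURCE_PRIORITY:
            reading = readings.get(source, {}).get(joint)
            if reading is not None:
                return reading  # e.g. a joint rotation for the avatar rig
        return None             # no signal: leave the joint to idle animation

A rule like this lets the avatar degrade gracefully: whatever the user can still physically do drives the replica directly, and the implant fills in only the missing degrees of freedom.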


The point is expression, to give people back agency. Agency is the overarching thing people lose with paralysis. Once you lose agency, you lose independence, you lose privacy, you can’t even go to the toilet without assistance. You lose everything you once thought you could do. VR doesn’t extend to the physical world, but it does restore a sense of control. Even if you’re fully paralyzed, you can work, be a student or a teacher, create art, collaborate globally, even be a dancer or an artist in 3D.


I’m moving to Japan next month, and I’m learning Japanese in VRChat with both Japanese and English speakers. It’s more efficient than local classes. The social side is huge too; paralysis often strips away old friendships, but in VR, your friends can interact with you almost indistinguishably from before. And even if you don’t buy the metaverse vision, VR is a stepping stone to teleoperated robots, which is what many people want BCIs for.


You’ve talked about the “first 100 million BCI users.” How realistic is that, and who are they?

I gave a talk at the World BCI Forum about this. First, I looked at the miniaturization of the devices, their biocompatibility, and performance: if those trends continue, you don’t have to restrict implants to the most severe paralysis. You can extend them to less severe cases.


About 1.7% of the U.S. population lives with some form of paralysis. Scaled globally, that’s roughly 136 million people (1.7% of about eight billion). If devices become safe and accepted, normalized the way breast implants are, adoption for efficacy reasons could get you to that first hundred million. By then, virtual worlds will be photorealistic. People could live in VR doing everyday activities. A platform like uCat’s could host those users and make BCIs mainstream.


Sam Hosovsky giving a talk on uCat and BCIs.

Recently, there’s been a new wave of attention and money in the BCI space. What do you make of it?

It’s still a small area. Maybe $10 billion has gone into neurotech device startups over two decades, and around a dozen high-bandwidth BCI companies have taken in about $2.5 billion of that. Go-to-market can take five to ten years, and there’s a ‘valley of death’ between academic funding and a viable VC opportunity.


That’s why eccentric billionaires moving into the space are encouraging. Hopefully, we’ll see something like the ‘Musk effect’ of 2016 with Neuralink, but now a ‘Sam Altman effect’ in 2025, unlocking more capital to close that gap.


What’s your personal mission, and would you ever join one of the big BCI teams directly?

Before I could speak English, I could speak Java. I was a software kid fascinated by what computer-generated worlds could do. The underlying question for me was: how difficult is it to engineer a brain? You can think of it as a finite state automaton with emergent behavior.


Over time, I became less naive, more humble, and started working with computational neuroscience folks. I realized BCIs would be the ‘ChatGPT moment’ for neurotech: if we want useful models of the brain, we first need a way to interface with it that helps many people.


As for joining a big company, most aren’t very end-user-centric. They are spinouts from engineering or neurosurgery labs, so product design for the end user isn’t emphasized. Once a team says, “We’ve locked the specs. Here’s the signal about whole-body biomechanics. What’s most valuable to do with it?” I’d be happy to contribute. Until then, they probably wouldn’t let me work on it, so I’m doing it privately.


Looking ahead, when do you expect the first users to try uCat?

For research users, the prototype is ready and documented, and our team is prepared. There’s no reason it shouldn’t be in use by the end of this year, by researchers or partner companies.


For public users, it depends on who makes it to market first. Synchron is the furthest along on approvals, but to call it high-bandwidth is a stretch: 16 channels inside the vasculature, attenuated by the vessel wall. You can extract binary signals, clicks, but not the ~80 degrees of freedom you’d want to embody a full avatar. That said, they’re doing a great job for the sector overall, and even providing more than an eye tracker can is valuable progress.


I actually think subdural ECoG might get there before fully intracortical. Neurosoft is exciting: electrodes so soft they can be inserted through a 14-mm burr hole and then expand to cover almost 10×10 cm of cortex. Because they’re elastic, they move with the brain and don’t trigger long-term foreign-body reactions. If they play their cards right, maybe by 2035 we’ll see the first real set of public users. That’s what I hope is going to happen.


Closing Thoughts

Talking to Sam, it’s clear that uCat goes beyond decoding signals, translating neural activity into real experiences of communication, expression, and agency. Whether through speech restoration, virtual avatars, or one day robotic embodiment, Sam’s mission is to ensure BCIs don’t stop at recording data but succeed in giving people their independence back.


That mission is rooted in his own journey, from writing code before he could speak English, to asking whether a brain could be “engineered,” to realizing that true progress will only come when technology serves the end user. It’s a perspective shaped less by neurosurgery or chip design, and more by years of building software and thinking about how people actually interact with tech.


For Sam, restoring agency isn’t a side effect of BCIs; it’s the point. And that, he believes, is how the path to the first hundred million users begins.

