Photograph by Alex Welsh

CTRL-Labs is working on physical sensors that can decode your brain's commands and send them to a computer. Here, an engineer runs a demo of one of their products at their offices in New York City.


How Your Brain May One Day Control Your Computer

If the founders of CTRL-Labs have their way, you won't type commands to a machine. You'll think them.


The mouse that runs out of battery life. The trackpad that gets stuck, unresponsive to a swiping finger. The "sent from my iPhone, please forgive typos" disclaimer.

You might consider these annoyances inevitable byproducts of an otherwise evolved digital age. Thomas Reardon sees them as proof that our relationship with computers is fundamentally broken.

Reardon, a neuroscientist who created one of the earliest portals to the Internet—Microsoft's Internet Explorer—envisions a world where machines take direct orders from our brains.

His New York City-based startup, CTRL-Labs, is working on an external body sensor that can operate a computer by decoding thoughts. Reardon talked to National Geographic about his work on a brain-machine interface and its role in the future of computing.

The concept of a brain-machine interface involves decoding messages from motor neurons. Can you explain what that means?

The neurons that represent your intentions—your volition in the world—are your motor neurons.


"We’re trying to give people control," Reardon says of the brain-machine interface his company is building.

When your brain wants to go and do something in the world, it turns muscles on and off. This is almost a depressing statement, but there’s nothing else your brain can actually do. There’s lots of things that you perceive and understand in the world. But in terms of action, all that your brain can do is turn muscles on and off. That’s done via motor neurons, and those are the neurons that we interface with in our work.

How does it actually work? Where do you put the sensors?

We don’t go into a lot of detail, because there’s some very active intellectual property work on our part right now. [But] our company doesn’t care where the sensors are on your body. One place happens to be on your wrist. It could be around your neck, inside your ear, your ankle. None of that really matters to us. What we care about is having access to the electrical signals that motor neurons generate as they connect to muscle.

When you think about all of the different ways that you move in the world, all the different ways that motor neurons turn things on and off, the most densely innervated part of your body is your hands. We experience all the things we do with our hands as skillful, and in particular, adaptive. Something as simple as picking up a glass of water in front of you and raising it to your lips is an incredibly, incredibly skillful task. It’s not for free. Your brain dedicates a ton of effort and a ton of computing, if you will, to that task—something that you just take totally for granted.

And then how does that translate to improving how we interact with computers today?

Today you do all of that interaction with a device in between you, whether it’s a mouse or a keyboard or a joystick. It’s only by removing the device and directly decoding the nerve that we can break through to a new kind of interaction between humans and machines.

You can input tons and tons of information into your brain, and you can analyze it in real time. You are a phenomenal processing machine. Where you are restricted is when you actually want to output something. That’s because you have to do it through muscles, which are biomechanical. That’s where everything gets gummed up, and that’s what we’re trying to solve.


When you started this in 2015, what did you see in this space that wasn't being done already?

All of the brain-machine interface work to date has really focused on clinical populations: people who have nerve pathologies, things like ALS or any number of muscular dystrophies. Our big idea was, well, wait a second, most of the work that’s been done to date has been addressed to people who lack functioning motor systems. So what would happen if you actually have a functioning motor system? How would you approach the brain-machine interface problem then? That was the kind of founding animus of the company: Rather than trying to work around the motor neuron system, let’s actually work with it.

There are so many potential applications for this. Are there one or two that are particularly compelling to you?

The first thing we want to fix is text on phones. From my perspective, in 2007, the iPhone came out, and as an evolved species, we regressed. We did not learn how to communicate better. We just constrained our communication, because typing and doing text on the phone is such a nightmare. I want it to feel better than it does when you’re at a keyboard, even if you’re a really good typist—on a phone. I want to reimagine those experiences so the kinds of regressions, steps backwards, that we took in the last 10 years, we can start to move forward a bit.

I want to see computing break through to a different level of facility and value for people. That might sound abstract, but I really imagine things like, why do you use a keyboard to enter text into a machine? Why can’t I write a message to my wife from my pocket, with my hand still in my pocket?

Think of the way you try to select text today when you’re trying to edit. Think about the way you use a mouse and how slow that task is. What if you were typing and selecting text simultaneously, without having to move your hand over to the mouse? All these things become much richer and much more natural once you move away from devices and decode the actual nerves. There literally is no interaction with a machine today—computer, robot, et cetera—that this technology doesn’t ultimately completely turn upside down.

Some would say we are already too intertwined with machines. Does anything give you pause about the direction we're going?

We took a couple of massive steps backwards with the mobile Internet. [Given] always-available information in our faces with a painfully restricted ability to go interact with all of those things, I think humanity steps backwards. We’ve lost control. It’s like the machines started programming us more and more.

The reason we called the company CTRL-Labs is because we’re trying to give people control in terms of moving it forward, changing the promise of interactions between people and machines. Let’s turn the machine back into a tool. I want us programming machines in the deepest possible sense.

This conversation has been edited for length and clarity.