
Brain cell / computer chip hybrid learns to play video games. Mind blown!

Melbourne scientists connected human neurons to a computer chip and taught them video games. The result could reshape AI, energy use and consciousness.



(Clusters of neurons growing on a microscopic computer chip)

(first published on my substack where you can get #NerdNews, marvellous maths and general geekery)


Melbourne bio-startup teaches neurons to play Doom, among other things, reopening the machine consciousness debate.

 

One of the joys of composing #NerdNews is exposure to interesting stories from around the world. This is probably the most mind-blowing AI story I've seen in a long time.

 

Right here in Melbourne, scientists have taught human brain cells to play a video game.

 

Not only that. The brain cells were trained by a computer chip.

 

Take a breath, sit down. I’ll walk you through this.

 

Let’s start with some basics.

 

3 ways of learning.

 

Learning 101. Humans train computers.

 

Imagine a scenario where a human being was training a computer to play a game like noughts and crosses. They'd write a program or an algorithm. They'd watch the computer play.


When the computer made a good, or bad, move, the programmer could adjust the code and improve the computer's play.

 

That's how a human could teach a computer to play noughts and crosses.

 

Learning 201. Computers train themselves.

 

Now imagine a program that played a game, looked at the result and then went back and modified itself depending on whether it won or lost.

 

That’s what’s meant when you hear terms like reinforcement learning, one branch of machine learning.
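To make that concrete, here’s a toy reinforcement-learning loop in Python. The “game”, its two moves and their hidden win rates are all invented for illustration; the point is the shape of the loop: play, observe the result, adjust, repeat.

```python
import random

# Toy reinforcement-learning loop. The two "moves" and their hidden
# win rates are invented purely for illustration.
random.seed(1)
win_rate = {"A": 0.2, "B": 0.8}   # hidden from the learner
value = {"A": 0.0, "B": 0.0}      # the program's learned estimates

for _ in range(2000):
    # Mostly exploit the best-looking move, occasionally explore.
    if random.random() < 0.1:
        move = random.choice(list(value))
    else:
        move = max(value, key=value.get)
    won = random.random() < win_rate[move]
    # "Modify itself depending on whether it won or lost":
    value[move] += 0.05 * ((1.0 if won else 0.0) - value[move])

# value["B"] ends up well above value["A"]: the program has taught
# itself that B is the stronger move, with no human adjusting the code.
```

No programmer steps in mid-game here; the only human contribution was setting up the loop, which is the essential difference between Learning 101 and Learning 201.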

 

The most famous example was AlphaZero from DeepMind, which played not one, not two, but millions of games of chess against parallel versions of itself.

 

In just four hours it went from only understanding the rules of the game to being better than any program humans had ever written.

 

The computer taught itself to play.

 

Learning 301. Computers and neurons train each other.

 

What we're seeing here is stage three. A hybrid loop where software and living neurons influence each other in real time.

 

What exactly did Melbourne's Cortical Labs do?

 

They attached a group of cells, neurons like those in our own brains, to a high-density electrode chip that could both record activity from the neurons and send electrical signals back to them.

 

Now these neurons naturally fire between one another. It's what neurons do. But every time they fire, the computer chip can send a charge back in.

 

And if it sent a charge at a particular time in a particular way, that would reinforce the firing that had just happened.

 

Imagine the neurons are connected to a chip that's trying to play a game like noughts and crosses. If the natural firing of the neurons produces a good move — the first move in the middle, for example — the chip would feed back and strengthen that, making that neural pathway more likely to fire.

 

Say the neurons make a bad move, placing the first move in a corner. The chip doesn't fire back, and that pathway weakens.
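As a loose software analogy of that reinforce-on-good, withhold-on-bad loop (every name here is invented, and real wet-lab protocols are far messier than this), imagine each candidate opening move as a "pathway" with a firing weight:

```python
import random

# Loose software analogy for the feedback loop described above.
# Each "pathway" is a candidate opening move with a firing weight;
# the simulated chip strengthens a pathway when its move is good
# and lets it fade when it isn't. All names are invented.
random.seed(0)
weights = {"centre": 1.0, "corner": 1.0, "edge": 1.0}

def fire(weights):
    """Pick a move with probability proportional to pathway weight."""
    moves = list(weights)
    total = sum(weights.values())
    return random.choices(moves, [weights[m] / total for m in moves])[0]

def chip_rewards(move):
    """The chip 'fires back' only for the strong opening."""
    return move == "centre"

for _ in range(500):
    move = fire(weights)
    if chip_rewards(move):
        weights[move] *= 1.05   # reinforcing charge: pathway strengthens
    else:
        weights[move] *= 0.99   # no charge: pathway slowly fades

# By now the 'centre' pathway dominates, so the system
# almost always opens in the middle.
```

The crucial difference in the real experiment: the "weights" aren't numbers in a program, they're the actual connection strengths between living neurons.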

 

Over time, the interaction between the neurons and the computer chip produces a system that plays the game well.

 

The computer chip will have “trained” the neurons.

 

A certain pong.

 

In peer-reviewed research published in 2022, Cortical showed these neuron-chip systems could learn to play a simplified version of the classic video game Pong.

 

Yes, you heard me. Brain cells on a computer chip, playing a video game.

 

Each chip used more than 800,000 human neurons grown across the electrode array.

 

People were impressed, but some responded, in jest, with the old computer programmer’s response to any mooted advance:

 

“Yeah, but can it run Doom?” (an ancient programming geek burn).

 

Boom boom boom shake the Doom!

 

Cynical geeks asked, and it seems Cortical has delivered.

 

Just last month came a demonstration of the DishBrain platform playing Doom. Certain corners of the internet melted.

 

Sure, at a rudimentary level, but nonetheless a hybrid neuron-chip system playing a complex video game.

 

This time the demonstration used only about a quarter as many neurons as the Pong system, roughly 200,000 cells.

 

Strikingly, the training took mere weeks.

 

That speed was possible because Cortical Labs has now developed a software interface that lets developers control these neuron-chip systems using the popular programming language Python.

 

Using that interface, an independent developer named Sean Cole managed to get the neurons interacting with Doom in around a week.

 

That’s a dramatic contrast with the original Pong work, which represented years of painstaking scientific effort to build and train the system.

 

Even more striking, the Doom experiment was created by someone with relatively little experience working directly with biological neural systems. No offence Sean!

 

The resulting system didn’t play Doom at anywhere near human level. But it did perform better than a randomly firing player, suggesting the neurons were learning meaningful patterns in how to interact with the game.

 

“What’s exciting here is not just that a biological system can play Doom, but that it can cope with complexity, uncertainty, and real-time decision-making. That’s much closer to the kinds of challenges future biological or hybrid computers will need to handle.”— Andrew Adamatzky, University of the West of England

 

So is ChatGPT a gamer now?

 

Before we go any further, it’s worth noting that this is not the kind of AI most people think of when they hear terms like ChatGPT or large language models.

 

Those systems are built from vast neural networks running on GPUs and trained on oceans of text.

 

What Cortical Labs are doing here is something very different. This is closer to reinforcement learning and control systems, but with the learning happening partly inside living neurons rather than purely inside software.

 

These cells could, among other applications, eventually power thinking humanoid robots, ultra-low-energy AI systems, and biological controllers for prosthetics or robotic limbs.

 

Did someone say “ultra-low-energy AI systems”?

 

One fascinating application could be cutting energy consumption.

 

We've spoken before in #NerdNews about the voracious demands these AI systems place on electricity and water.

 

It's only early days, but Cortical Labs claim these hybrid neuron-silicon systems could operate at one hundredth, perhaps even one thousandth, the power consumption of some conventional AI hardware.

 

The human brain runs on about 20 watts, roughly the energy in a blueberry muffin each day. Modern GPTs devour vastly more power. Could hybrid neuron-silicon computers become the face of green AI?
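The muffin comparison checks out on the back of an envelope, assuming the brain's roughly 20-watt draw and a typical muffin of around 400 kilocalories:

```python
# Sanity-check the blueberry muffin figure, assuming the brain's
# ~20 W draw and a typical ~400 kcal muffin.
brain_watts = 20
seconds_per_day = 24 * 60 * 60
joules_per_day = brain_watts * seconds_per_day   # 1,728,000 J
kcal_per_day = joules_per_day / 4184             # ~413 kcal

# ~413 kcal per day: in the same ballpark as one blueberry muffin.
```

A single high-end GPU, by contrast, can draw hundreds of watts on its own, and large models run on thousands of them.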

 

One Hard Problem.

 

Earlier we talked about three ways machines might learn. Humans training computers. Computers training themselves. And now computers and neurons learning together.

 

“We’re looking to harness these cells for intelligence.”— Brett Kagan, Chief Scientific Officer, Cortical Labs

 

Which leads to one of the coolest questions in this whole field.

 

When, if ever, will machines become sentient? Maybe even conscious?

 

It’s a tough question. We don’t even really understand consciousness in humans yet.

 

But broadly speaking there are three camps.

 

The first camp says yes. With enough complexity and enough connections, machines will eventually develop something like thoughts, emotions, maybe even a sense of self.

 

The second camp says we simply don’t know. Consciousness might emerge, or it might not.

 

And the third camp, a minority, still flies the flag for wetware. That consciousness requires biology, and silicon alone will never give rise to it. That the physical substrate matters.

 

Personally, I’m in that third camp.

 

Only time will tell.

 

But experiments like this raise an intriguing possibility.

 


What if the first faint glimmers of machine consciousness don’t appear in pure software at all?

 

What if they emerge in systems that combine living neurons and silicon. Wetware and hardware learning together.

 

Something part brain. Something part machine.

 

Think about that next time you think about the nature of thinking.

 

 

 

Hey, I'm now also on Substack.

 


 
 
 
