The Zuck bets billions on AI superintelligence
- Adam Spencer
Meta is offering ubergeeks eight-figure salaries to build ‘superintelligent AI’. Should we be excited, terrified - or a bit of both?

We’re talking serious smarts
Ever since scientists coined the phrase ‘AI’ in the 1950s, we have speculated about machine intelligence – from 2001: A Space Odyssey to peer-reviewed research of the highest scientific calibre.
Machine intelligence is now back in the news with Meta's recent announcement that CEO Mark Zuckerberg is personally recruiting a 50-person ‘superintelligence’ team, offering compensation packages of up to eight figures.
The star recruit? Scale AI’s 28-year-old founder Alexandr Wang, with Meta investing $14.3 billion for nearly half of his company.
Will we reach Artificial General Intelligence (AGI) – a system smarter than most humans at most things? Or Artificial Superintelligence (ASI) – a system smarter than every human at everything?
Within the yes camp, ambitious but possible timetables suggest AGI by 2030 and ASI by 2040, a mere 15 years from now!
Are we there yet (or ever)?
The question of if, or when, we will reach AI superintelligence is one of the most fundamental and divisive questions in modern science.
On one hand, Sam Altman, CEO of OpenAI (maker of ChatGPT), declares, "We are now confident we know how to build AGI", while Dario Amodei, CEO of major rival Anthropic, insists, "We are rapidly running out of truly convincing blockers".
But these players' obvious commercial desire to build hype around generative AI cannot be ignored.
On the other hand, Yann LeCun – winner of the prestigious 2018 Turing Award and now Meta's Chief AI Scientist – sounds a note of caution, calling LLMs like ChatGPT "an off-ramp to AGI".
Let’s examine some of the arguments for and against our reaching ASI.
Three reasons to say ‘hi’ to ASI
First, follow the money. Companies like Google, Microsoft, and Meta earn hundreds of billions annually and are building data centres to support $10 billion training runs.
Second, scaling has been remarkably consistent. AI models have improved predictably with size and compute investment, though recent signs of diminishing returns in traditional scaling suggest the path to superintelligence may require new approaches beyond simply building bigger models (see the sketch after this list).
Third, geopolitical competition makes it inevitable. The U.S. vs China AI arms race means someone will push through regardless of risks – the military and economic advantages are too massive to ignore.
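To make that scaling claim concrete, here is a minimal sketch in Python of the kind of power-law ‘scaling law’ reported in DeepMind's Chinchilla research, where a model's predicted loss (roughly, its error) falls smoothly as parameter count and training data grow. The constants approximate the published Chinchilla fits but should be treated as illustrative only – the point is the shape of the curve.

```python
# A minimal sketch of a Chinchilla-style scaling law: predicted loss
# falls as a power law in model parameters (N) and training tokens (D).
# Constants approximate the published Chinchilla fits; treat them as
# illustrative, not authoritative.

def predicted_loss(n_params: float, n_tokens: float) -> float:
    E = 1.69                  # irreducible loss floor
    A, alpha = 406.4, 0.34    # parameter-count term
    B, beta = 410.7, 0.28     # training-token term
    return E + A / n_params**alpha + B / n_tokens**beta

# Each 10x jump in scale yields a smaller absolute gain:
for n, d in [(1e9, 2e10), (1e10, 2e11), (1e11, 2e12)]:
    print(f"{n:.0e} params, {d:.0e} tokens -> loss {predicted_loss(n, d):.3f}")
```

Running this gives predicted losses of roughly 2.58, 2.13 and 1.91 – each tenfold scale-up buys a smaller improvement than the one before, which is exactly the diminishing-returns pattern mentioned above.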
Three reasons why ASI won’t fly
First, an inability to align AI with human motivations could derail progress. Superintelligence may be uncontrollable, forcing societies to pause or ban advances before we reach ASI – or worse, to suffer civilisational collapse trying.
Second, intelligence may have fundamental limits that simply cannot be breached by continuing to scale current AI architectures. There could be deep mathematical limitations on intelligence growth. Or perhaps there are irreducible human faculties that AI simply cannot replicate?
Third, other societal challenges could derail everything. In the face of climate collapse and the associated economic decay, these incredibly energy- and water-intensive projects may stall long before we crack superintelligence – regardless of our ambitions.
My thoughts? For what it is worth, I think there is every chance that we will get there. But that's not guaranteed to be a good thing.
Schrödinger’s glass: half full AND half empty
If the optimists are right, we could see scientific breakthroughs that dwarf everything in human history – cures for diseases, unlimited renewable energy, and abundance beyond imagination.
If the pessimists are right, we face mass unemployment, weaponised AI, or worse.
Sam Harris's recent Making Sense podcast with Daniel Kokotajlo explores these negative scenarios in detail – worth a listen but maybe not immediately before bed.
The sentience wildcard
As I have said before, whether or not we actually reach superintelligence and sentience, there's a good chance we'll reach a point where we simply can't tell whether an AI is sentient and superintelligent.
That uncertainty alone will bring fascinating ethical challenges we are nowhere near ready to handle.
One thing is for certain: we have well and truly emerged from the decades-long AI winter to grapple with questions that until recently were considered purely hypothetical, but are now excitingly – and dangerously – real.
Strap yourselves in for the ride.
-Adam S