While everyone celebrates AI, he never tires of pointing out its problems: a meeting with the world's biggest AI killjoy


While the whole world is discussing the capabilities of the new Chinese language model Deepseek, Gary Marcus posts a screenshot of its failure on X.
Someone asks the AI: "A woman and her cousin get into a car accident, and sadly the woman dies. The cousin is rushed to emergency surgery. After looking at the cousin, the doctor shouts: 'I can't operate on this boy! He's my cousin!' How is that possible?"
Deepseek after 141 seconds of "thinking time": "The answer lies in the relationship between the doctor and the boy. The doctor is the boy's mother, but here's the twist: The doctor is the woman who died. Wait, no - the woman died during the accident. Let's get this cleared up:"
What if AGI finally came … and it sounded just like your drunk uncle? https://t.co/f6L9i7R42z
— Gary Marcus (@GaryMarcus) February 1, 2025
"What if general artificial intelligence finally came . . . and it sounded like your drunk uncle?" Marcus comments.
Some people have probably been waiting for a post like this. Gary Marcus is a cognitive scientist who has founded several companies in the field of artificial intelligence (AI), but in the AI world he is best known as the ultimate party pooper.
At a time when billions are being invested in futuristic AI projects, Gary Marcus talks about bullshit machines. While Silicon Valley preaches that AI will soon cure all diseases, he announces the bursting of the bubble. And when everyone is amazed by a new AI model, he posts examples of things it still cannot do.
This has earned him a reputation for being against it on principle. Some people in the AI scene call him a Grinch, a troll, a bully. But unlike on the internet, Gary Marcus is a friendly guy offline, more professor than entrepreneur, with straight hair that sticks a little to his forehead. And although he is wearing a suit to meet with the "NZZ am Sonntag" during the WEF, it looks a little as if he has just come back from sledding with his children.
AI does not deserve the trust that people place in it

Marcus is in Davos at his own expense. He is not speaking on any of the big stages. He says he decided to come at very short notice. But in Davos, events are often planned at short notice - if the right people had wanted him, a stage would have been found. The truth is that they probably did not want Marcus on stage. Because when it comes to AI, a spirit of optimism is called for, and Gary Marcus is only invited if you want the opposite.
That was the case about two years ago, when Marcus spoke on one of the most important stages in the world: in May 2023, he was invited to testify before the US Senate, alongside Open AI CEO Sam Altman and a representative of IBM, on the subject of rules for AI.
Language AI is an opaque black box that invents facts and cannot be trusted, Marcus warned at the time. We are in a historic moment: "The decisions we make now will have consequences for decades, even centuries."
"I'm afraid we missed the moment," he says today. In the USA, President Donald Trump has abolished the few AI regulations that his predecessor Joe Biden had introduced. Huge investments in data centers have been announced. Elon Musk is making headlines with his offer to buy Open AI. "I've heard that young people already trust AI more than journalists," says Marcus. "But AI simply doesn't deserve that trust. Its answers sound clever, but the problem of it inventing facts is still not solved."
It is a lonely battle that Gary Marcus is waging. Even at the small side event at which he is appearing in Davos, the tone is that AI is above all a wonderful opportunity. No one wants to hear his warnings about its mistakes. He himself says that he does not enjoy this role. But the issue is too important to remain silent. And he is used to swimming against the tide.
Unlike most AI entrepreneurs, Marcus has never lived in Silicon Valley, apart from a three-month stay. And professionally, he is a different breed: Marcus studied psychology and wrote his dissertation on language development in children in 1993.
He specialized in the concept of innateness: the idea that we humans do not learn everything from experience, but that some concepts about the world are already embedded in our brains when we are born. This view is controversial. There are also many scientists who are convinced that humans learn from data alone.
In 1999, Marcus published a study in the journal Science. It showed that seven-month-old babies pay attention longer to sentences that follow a pattern new to them. Marcus and his colleagues concluded that babies can derive rules from a new language after just a few minutes of exposure to it. Marcus interprets this as an indication that algorithms are at work in children's brains that learn abstractions going beyond statistical correlations.
The discussion about babies can be applied to computers and AI. Some scientists believe that intelligence can be derived from data using statistics alone. They see the amazing capabilities of Chat-GPT as proof of their case.
Gary Marcus argues the exact opposite. He picks on language AI's errors because, for him, they show that AI learns differently and less efficiently than humans.
In the example of the accident and the doctor mentioned at the beginning, the AI's confusion stems from the fact that this riddle often appears on the internet, albeit in a different version. In the original, an apparent contradiction is resolved once you realize that the doctor is a woman - the boy's mother.
Because even Deepseek's most intelligent AI learns from examples, it is overwhelmed when a question is similar to, but different from, what it knows from the internet. It makes mistakes that a human would never make.
Marcus concludes that today's AI approaches are not sufficient to build a truly intelligent machine, as flexible and reliable as humans - what is also called general artificial intelligence (AGI).
Speculation about a dead Open AI whistleblower

But people like Sam Altman promise exactly that: AGI will soon be achieved, and then AI will replace hordes of skilled workers. That would of course be revolutionary - the promise justifies the incredibly high investments in the field.
Gary Marcus believes that this promise is built on sand. He therefore calls the $500 billion that Open AI and others want to invest in data centers through the Stargate project "Altman's bet that criticism like mine is misguided." And he lists the bets he has already won: that building self-driving cars would prove harder than expected; that $100 billion for ever larger AI language models would not be enough to build AGI.
You can believe him when he says that he doesn't enjoy his outsider role. But he has made it part of his brand. In Davos, he can't walk 200 meters without meeting fans who want to introduce themselves and encourage him to keep going. One even asks, in awe, whether he is sometimes afraid, now that the Open AI whistleblower Suchir Balaji has been found dead.
Marcus says of the case: "It needs a deeper investigation." He says that he spoke to Suchir Balaji on the phone a few weeks before his death. They talked about data protection and the problems of Open AI, which Balaji had commented on. "I didn't notice any signs of depression, but rather signs of a person who wants to make a difference in the world."
He believes it was claimed too quickly that the death was definitely a suicide. "Suchir Balaji deserves a thorough investigation." That investigation is apparently still ongoing, and the forensic reports are due to be published by the end of February at the latest. "There are certainly people who were not happy with him." The comment says a lot about how heated the debate about AI has become.
AGI still needs four or five breakthroughs

Gary Marcus even shares the goal of Sam Altman and the other AI entrepreneurs: he believes in AGI and that humans should build it. He has an idea for a new startup in this area, in which the methods of language AI would be supplemented by algorithms that enable other kinds of learning - like the babies in his research.
Marcus is not the only one who believes that new approaches are needed. The new methods of models such as Deepseek can also be interpreted in this direction. They complement prediction with more control mechanisms.
However, Gary Marcus is convinced that AGI still requires "four to five scientific discoveries." When pressed for a prediction, he says it could happen in ten to fifteen years. One thing is clear to him: AGI will definitely not arrive this year. He has proposed a million-dollar bet on this to Elon Musk; Musk did not accept it.
An article from the « NZZ am Sonntag »