Interview: Cognitive Scientist Gary Marcus’ Lifelong Disillusionment with A.I.

Gary Marcus speaking at Web Summit Vancouver 2025 on May 27. Don Mackinnon/AFP via Getty Images

For someone who’s spent nearly his entire life working in artificial intelligence, Gary Marcus—researcher, author and cognitive scientist—isn’t exactly a cheerleader for the technology. In fact, he’s one of generative A.I.’s most outspoken skeptics, frequently questioning both its current capabilities and its potential to cause harm. “After 40 years of doing this, I’ve never gotten over the sense that we’re not really doing it right,” Marcus told Observer.

That hasn’t stopped him from engaging deeply with the field. Marcus studied under psychologist Steven Pinker, founded two machine learning startups and briefly led A.I. initiatives at Uber. A professor emeritus at New York University, he even graduated high school early after building a Latin-to-English translator.

At the crux of his disillusionment is the field’s current obsession with large language models (LLMs) and the neural networks that power them. Marcus argues these systems are fundamentally limited and should be supplemented with symbolic A.I.—a more traditional, logic-based approach rooted in rules and reasoning. “I think we’re very early in the history of A.I.,” said Marcus, who has also called for regulation to address the technology’s emerging risks.

Observer caught up with the prominent skeptic at Web Summit Vancouver this week to discuss all things A.I. The following conversation has been edited for length and clarity.

Observer: How did you first become aware of and interested in A.I.? 

Gary Marcus: I was ten. It’s a very specific moment. I learned how to program on basically a simulation of a computer that was on paper. It was part of my after-school program for gifted kids, and that night I explained to the media how it worked. So, my media career and my A.I. career began that day.

How did you tell the media?

It was a show called Ray’s Way, which I didn’t know too much about, but they thought it’d be fun to have a human interest story. They nominated me to do the explanation, you know, cute kid explaining what a computer is. And I saw it on TV that night. There’s no archival footage that we can find, but I did actually see it on my father’s little black and white TV later that night.

When did you feel the most optimistic about A.I.? 

I’ve never been fully optimistic about it. I don’t think we’ve made as much progress as I’d hoped we would. I would have guessed that, 40 years after I started looking at this, a lot of A.I. problems would have been solved. We have a lot of tools now that are very useful. But the bigger questions, like how do you represent and acquire knowledge? You still don’t have really good answers to those. So I’ve never reached a point of being really satisfied. I always want it to go better.

After working in a field for so long that didn’t receive recognition for years and years, how did you perceive the ChatGPT moment with A.I. breaking into the mainstream? 

I started writing about it right away in my Substack, and I said immediately that it’s going to excite people but it has limits. I wrote an essay on Christmas Day 2022, right after ChatGPT came out, called “What to Expect When You’re Expecting… GPT-4,” and I made a series of predictions based on my understanding of how these things work and all the work I’ve done before. I said everybody’s going to be really excited at first, but they’re going to become disillusioned. It’s going to hallucinate, it’s going to make dumb errors.

You go back, and everything that I said was true—not only of GPT-4, but of every model that’s come since. They all still hallucinate. They all still make stupid errors. You still can’t trust them. They’re all kind of useful for misinformation, which is something I said.

You’ve suggested that large language models (LLMs) aren’t the right way to pursue A.I. What would be the ideal form of pursuing it, and is anyone doing that?

I wrote a paper in 2020 called “The Next Decade in A.I.” that described what I thought we should do, and what I still think we should do. It starts with neuro-symbolic A.I., which combines neural networks with classical A.I. approaches. There’s been a kind of historical division within the field; we need a reconciliation.

AlphaFold is actually an example of neuro-symbolic A.I. We still don’t have a systematic way to do it; that’s just a start, a stepping stone that we need to take first. We also need to have ways for systems to build cognitive models of the world as they see things happen.

When do you think artificial general intelligence (A.G.I.) will be achieved? 

I would say definitely not this decade. Maybe the next decade and maybe longer. Science is hard. We don’t know the answers that long before we get there. You can think of a historical analogy: in the early 20th century, everybody wanted to figure out the molecular basis of genes and they all pursued the wrong hypothesis. They all thought the genes were some kind of protein, and they looked at lots of proteins and they were all wrong. And then it was only in the late ’40s, when Oswald Avery did the right experiment to rule out proteins, that things moved.

It was only a few years later that [James] Watson and [Francis] Crick, with Rosalind Franklin’s data, figured out the structure of DNA. Then we really were off to the races. It was like 30 years wandering the desert. When people have a bad idea, it’s hard to predict when they will move past that bad idea.

As someone who’s spoken to the U.S. Senate on A.I. oversight and recently wrote about the budget bill’s proposed 10-year moratorium on state A.I. regulation, what are you thinking about the state of A.I. regulation right now in the U.S.? 

It’s completely fallen apart. It’s totally depressing. I spoke to the Senate in May of 2023 and we had strong bipartisan support and the support of Sam Altman to have serious A.I. regulation, and nothing happened. Altman has changed his position. The Senate has become much less friendly. It is a huge step back and things don’t look good right now. I think bad things will happen as a function of A.I.—cybercrime, discrimination in employment, etc. etc. There’s very little protection for U.S. citizens right now.

You’ve previously proposed a regulatory process for A.I. that would operate similarly to how the U.S. Food and Drug Administration (FDA) approves drugs. Is that still a solution you support?

I still absolutely think we should have it. And I think that even more so because we’ve seen things like some of the most recent models maybe being able to help people make biological weapons. Is that okay, is it worth the gain? Society should have a vote, the government should have a vote, on whether you release something that is going to materially increase the chance of some serious harm in society. It’s insane that the companies can do what they want without any checks and balances.

Lastly, as a long-time academic, how do you feel about federal funding cuts to science in the U.S. potentially impacting the country’s innovation in A.I. going forward?

Best thing that ever happened to China. I mean, it’s really bad for the United States. We have historically, for many years, been basically the leader in science, and we’re just destroying that for no good reason.


