
Why We Should Treat AI Experts Like Chatbots: Confident, but Not Always Correct

  • Writer: Sofia Ng
  • Sep 15
  • 2 min read

One of the reasons I enjoy reading New Scientist is that it doesn’t shy away from questioning bold claims, even when they come from the biggest names. Philip Ball’s article (2 July 2025) did just that, calling out the growing problem of AI experts believing their own hype.


[Image: A small animal standing on a stage in front of many animals.]

The piece opens with Demis Hassabis, CEO of Google DeepMind and Nobel laureate, declaring on 60 Minutes that with the help of AI like AlphaFold, “the end of all disease is within reach, maybe within the next decade or so.”

If you raised an eyebrow at that, you’re not alone. Drug discovery experts like Derek Lowe (who has spent decades in medicinal chemistry) responded with disbelief. To people actually working on the problem, such sweeping claims aren’t just optimistic, they’re unrealistic.


The Problem of Overconfidence

We’ve seen this pattern before:

  • Elon Musk talking about Martian colonies.

  • Sam Altman forecasting AGI just around the corner.

  • Geoffrey Hinton suggesting radiologists should stop training because AI would replace them.

In each case, the authority of the speaker made the statements sound credible. But as Ball argues, this authority often rests on skin-deep understanding outside their core expertise. The result? Hype that shapes media headlines, government policy, and even career choices, without any grounding in reality.


Experts Who Sound Like Their Machines

Here’s the irony: some AI experts now mirror the behaviour of the very systems they build. Like large language models, they can generate eloquent, confident-sounding claims that don’t stand up to scrutiny.

Take Daniel Kokotajlo, a former OpenAI researcher, who recently described AIs as “lying” and “knowing” that their statements were false. The anthropomorphic language makes for a dramatic soundbite, but it risks misleading the public about what LLMs actually are (statistical systems, not scheming agents).

The danger here isn’t just exaggeration, it’s misplaced trust. If leading voices convince governments or industries that AI will do the “heavy lifting,” complacency follows.


Why fund radiology training if Hinton says AI will replace radiologists?


Why invest in disease prevention if Hassabis suggests AI will cure it all?


Why This Matters for Business & Automation

For those of us working with AI in automation and workplace systems, the lesson is clear: treat confident claims with healthy scepticism. Just because a system, or its creator, sounds certain doesn’t mean the claim is accurate.

This is especially relevant in automation projects. Whether you’re adopting Power Platform tools, integrating AI into workflows, or considering an LLM for customer support, the risk of overestimating capabilities is real. Overselling AI leads to poor planning, unmet expectations, and ultimately, mistrust in the technology.


My Takeaway

Ball ends with a provocative suggestion: perhaps we should treat expert pronouncements the same way we treat chatbot answers: plausible, often useful, but always in need of fact-checking.

I think that’s a healthy stance. AI hype can be exciting and easy to get swept up in, but progress doesn’t come from grand pronouncements. It comes from the steady, sometimes unglamorous work of testing, validating, and building systems that deliver real-world value.


So, whenever an “AI prophet” makes a bold prediction, maybe the best response is a polite nod followed by careful fact-checking.


© 2023 by Ava Technology Solutions. Proudly created with Wix.com
