
When Speed Trumps Truth: The AI Hallucination Problem and Why Trust Needs a New Home


In a sign of the times, Deloitte is set to partially refund the Australian federal government for a $440,000 report riddled with errors—errors that, it turns out, were the result of relying on generative AI to help write it.

Yes, you read that right: one of the world’s largest consultancy firms used artificial intelligence to draft a report on Australia’s welfare compliance system and delivered a product that was filled with inaccuracies, fictional references, and fabricated citations.

The Department of Employment and Workplace Relations (DEWR), which commissioned the review, has confirmed that Deloitte will repay the final instalment of the contract—an implicit admission that the work didn’t meet basic standards. Meanwhile, a senator bluntly described the problem not as artificial intelligence failure, but as a “human intelligence problem.” Ouch.


The Mirage of Machine Intelligence


Generative AI—particularly large language models (LLMs)—has dazzled the world with its fluency and speed. But speed isn’t everything. When you ask an LLM to generate a government report—or any document where factual accuracy and source integrity matter—you risk getting what AI researchers politely call “hallucinations.”

Let’s be clear: these aren’t just typos or formatting issues. We’re talking about completely made-up citations, invented studies, and statements that look polished and professional but crumble under scrutiny.

In the Deloitte case, academic watchdog Dr. Christopher Rudge revealed that the AI-generated content didn’t just fudge a few facts—it created an entire illusion of credibility. Revisions to the report included replacing fake references… with more fake references. As Dr. Rudge put it, “the original claim made in the body of the report wasn’t based on any one particular evidentiary source.” In other words, it looked official but was effectively foundationless.


LLMs: Great for Speed, Not for Truth—and Certainly Not for Trust

There’s no denying the power of LLMs to churn out human-like text in seconds. For brainstorming, drafting, summarizing, or even writing code, they’re near-magical tools. But when the stakes are high—government policy, healthcare decisions, legal arguments—speed is not the metric that matters most. Truth is.

And trust? That’s even harder to automate.

What Deloitte’s misstep reveals is a growing and dangerous trend: organizations treating LLMs as authoritative sources, when in fact they are excellent at sounding right, but not built to be right.

Beyond Speed: A Niche for Truth and Trust

This is exactly the kind of gap that more targeted solutions—like TruthTech—are built to address.

In fields like law, journalism, public policy, science, or sustainability reporting, trust isn’t optional. It’s the baseline. And when the cost of getting it wrong is reputational—or even legal—relying on tools that “sound right” rather than are right is a risk few can afford.

LLMs are impressive. I use them myself when writing in a second language or drafting ideas quickly. But let’s not confuse fast output with sound thinking. Delegating the thinking process to generative AI is where things start to fall apart. As the Deloitte case shows, you may gain speed—but lose credibility.

TruthTech doesn’t try to replace human reasoning. It doesn’t promise creativity, or fluency, or all the bells and whistles. Its goal is simpler: to support humans with traceable, verifiable information—so the thinking can happen on solid ground.

Because in the end, it’s not about how smart your tools look—it’s about whether people can trust what you produce.

If you work in publishing, journalism, academia, law, advocacy, or corporate reporting—and truth and trust are non-negotiable—this is where targeted, transparent tech makes the difference.

Don’t let speed undermine substance. Your credibility is worth more. Learn about TruthTech
