Posted Tuesday, 2 December 2025
Scitech CEO John Chappell wrote a piece for WAtoday reflecting on the government's new AI Plan and a recent experience he had with AI and conspiracy theories, advocating for a “trust, but verify” approach.
Ronald Reagan, former US President, famously used the phrase “trust, but verify” when negotiating a nuclear arms deal with the Soviet Union. I was surprised this week when my preferred AI chat tool used the same phrase with me to describe the reliability of its answers.
While AI has many benefits – including recently helping me get to the bottom of a conspiracy theory – there are plenty of risks associated with its use.
On Tuesday, the Australian Government released its National AI Plan, with actions to capture the opportunities, share the benefits and keep Australians safe. It recognises that AI will strengthen our economic capability and help us remain globally competitive.
Mitigating the potential risks of AI is also a core focus – as the plan states, “we cannot seize the innovation and economic opportunities of AI if people do not trust it.” The Plan proposes a flexible and responsive approach, updating existing laws to address issues relating to privacy, copyright, consumer rights, and criminal abuse as they arise.
It also calls for more support and training in AI and digital literacy. Part of this training must include education on the shortfalls of AI and reinforce the need for critical thinking. Even with updates to legal frameworks proposed by the Plan, the accuracy of AI hinges on the discernment of the user.
This fact was highlighted to me last week, when I used AI to explore a conspiracy theory. It helped confirm my understanding of the science and shortcomings of the conspiracy argument. But, of course, when we use AI, it gets to know how we think and what we believe. So, I asked how it would have answered if it knew me to be a believer in this particular conspiracy. It said it would have answered the same, but with some nuance to soften the blow when pointing out facts that didn’t accord with my beliefs.
I then asked AI about its role in spreading misinformation and disinformation. It told me that when asked a question, it will present the strongest case it can to assist the person asking. It conceded that in these cases, the responses will sound confident and well-reasoned, even if wrong. A more subtle concern is that even when AI systems try to be balanced, they may inadvertently create false equivalences. By presenting “both sides” of questions where the evidence is quite lopsided, they can give fringe positions more credibility than they merit.
When we add AI hallucinations to the equation, where AI provides made-up references and data sources, the picture becomes more troubling. Why does this happen? Because AI systems are fundamentally trained to produce plausible text, not true text.
The uncomfortable truth is that AI doesn’t change the burden on the user to verify the information provided. It just raises the stakes by increasing the volume and polish of potentially unreliable content.
It’s important that Australia has a plan for AI and I applaud the Commonwealth Government for developing the National AI Plan and establishing the Australian AI Safety Institute. Like many of my colleagues, I have become increasingly reliant on AI for tasks such as research and document editing. I have also had my own moments of being caught out when AI has given me unreliable information.
As part of the Australian Government’s renewed investment and legal frameworks around AI, we need to make sure we are empowering Australians with the digital literacy and critical thinking skills to catch out AI hallucinations, check facts, and identify bias. Remember, trust, but verify.
As AI put it to me at the end of our conversation, “critical thinking remains non-automatable.”
This article was first published on WAtoday.