
Matt Kuperholz doesn’t deal in hypotheticals. With more than 25 years working in artificial intelligence – and nearly twice that in risk and safety – he’s seen firsthand what happens when powerful tools like AI are deployed without strategy, governance, or human-centred design. Speaking to safety leaders at the OHS Leaders Summit, Gold Coast, Kuperholz challenged the room to reframe how they think about AI, and more importantly, how they use it.
Exponential Disruption, Human Pace
Kuperholz opened by reminding the room that we are living on the steep part of an exponential curve – one that most organisations, and most people, still struggle to grasp.
“The biggest failing of humankind,” he said, quoting physicist Albert Bartlett, “is our inability to understand the exponential function.”
He pointed to Moore’s Law, the silicon chip, and even the birthday card that plays a song – a device with more computing power than the entire world had in the 1970s.
“Digital technology doubles, and doubles again. But culture – and especially safety culture – isn’t built to change at that pace.”
And yet change is happening, whether we’re ready or not.
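His doubling claim is easy to make concrete. A minimal sketch (the two-year doubling period is the classic Moore's Law cadence; the time horizons are round numbers chosen for illustration, not figures from the talk):

```python
# Illustration only: compound growth under a fixed doubling period.
# A two-year doubling period is the classic Moore's Law cadence.

def growth_factor(years: float, doubling_period_years: float = 2.0) -> float:
    """Total multiplier after `years` of doubling every `doubling_period_years`."""
    return 2 ** (years / doubling_period_years)

for years in (10, 20, 50):
    print(f"{years:>2} years -> x{growth_factor(years):,.0f}")
# 10 years -> x32
# 20 years -> x1,024
# 50 years -> x33,554,432
```

Linear intuition expects 50 years to buy five times what 10 years buys; the exponential delivers a million times more. That gap is exactly what Bartlett was pointing at.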
What AI Is and What It Isn’t
Rather than get lost in buzzwords, Kuperholz offered a pragmatic framework for understanding AI: it’s not magic. It’s a goal-seeking system with feedback. That could be a plant, an algorithm, or a robot.
“The AI we’re talking about today doesn’t think. It doesn’t understand. It predicts.”
Large language models like GPT, he explained, are essentially scaled-up regression engines. They predict the next word from statistical patterns in vast amounts of text – prediction, not comprehension.
“It’s more than a stochastic parrot, but it’s still an alien intelligence. It doesn’t know what it’s saying.”
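Kuperholz didn't unpack the mechanics on stage, but his "prediction, not comprehension" point can be pictured with a toy next-word model that only counts which word followed which in a scrap of text – a drastic simplification of what an LLM does at scale, using an invented corpus:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count word pairs in a tiny invented corpus,
# then "generate" by emitting the most frequent follower. Real LLMs learn
# far richer statistics, but the principle is the one described here:
# the model scores likely continuations; it does not understand them.
corpus = "report the hazard . report the incident . report the hazard today .".split()

followers = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word][nxt] += 1

def predict_next(word: str) -> str:
    return followers[word].most_common(1)[0][0]

print(predict_next("report"))  # 'the'
print(predict_next("the"))     # 'hazard' (seen twice, vs 'incident' once)
```

Nothing in those counts knows what a hazard is, yet the output looks fluent. Scaled up by billions of parameters, the same basic mechanism produces the fluency – and the hallucinations – Kuperholz warned about.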
This makes it powerful, but also risky. You wouldn’t trust a hallucinating expert – so you can’t deploy an LLM without the right controls, especially in high-stakes environments like safety.
The Real Value Is in Action
AI, Kuperholz emphasised, is not about novelty or convenience. It’s a tool. And like any tool, it must be used with purpose.
“You don’t use a hammer just because you’ve got one. You use it to build something. Same with AI.”
For safety professionals, that means using AI to identify risks earlier, to triage information more intelligently, and to enable better decision-making – not just faster reporting.
He shared examples from mining, logistics, retail, and even horse racing. AI models have been used to:
- Predict fatigue from time-stamped GPS and payroll data (see the sketch after this list).
- Identify manual handling risks in drilling operations.
- Optimise rostering based on injury patterns.
- Detect psychosocial hazards from patterns in timesheets, performance reviews, and wellbeing data.
- Track changes in horse locomotion to prevent injury.
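Kuperholz didn't walk through implementations, so the following is only a hedged sketch of the first item: a toy fatigue-risk classifier built with scikit-learn's logistic regression. The features, values, and labels are all invented stand-ins for what would really be engineered from time-stamped GPS and payroll data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented feature matrix: [hours_worked_last_24h, hours_rest_before_shift].
# In practice these would be derived from GPS pings and payroll records;
# every value and label below is synthetic.
X = np.array([
    [8.0, 12.0],
    [9.0, 10.0],
    [12.0, 6.0],
    [14.0, 5.0],
    [10.0, 9.0],
    [13.0, 4.0],
])
y = np.array([0, 0, 1, 1, 0, 1])  # 1 = a fatigue-related incident followed

model = LogisticRegression().fit(X, y)

# Score a new roster entry: long shift, short rest -> elevated risk.
risk = model.predict_proba([[13.5, 5.0]])[0, 1]
print(f"estimated fatigue risk: {risk:.0%}")
```

A real deployment would need far more data, validation against actual outcomes, and the governance questions Kuperholz raises below – the sketch only shows where roster data could plug into a predictive model.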
“But even if your AI answers every question perfectly,” Kuperholz cautioned, “you still have to act on it. Insight alone doesn’t improve safety. Action does.”
Ethics, Bias, and the Weight of Responsibility
Kuperholz didn’t shy away from the harder questions. AI, he said, is inherently biased – because the world it learns from is biased. That’s not a reason to avoid it, but a reason to build safeguards.
“Every model is biased. The question is: what bias are we prepared to accept, and what are we going to do about it?”
He outlined key questions for AI governance in safety:
- Is it legal?
- Is it ethical?
- Can we explain how it works?
- Is it auditable?
- Is it safe and controllable?
“Technology isn’t good or bad – but it’s also not neutral. It reflects the choices we make.”
Avoiding the Echo Chamber
A final warning: don’t let your AI become an island. If your system only learns from your data, you risk missing broader patterns – or worse, repeating the same blind spots.
“If you find a critical safety insight, aren’t you beholden to share it?”
For Kuperholz, safety isn’t just a competitive advantage – it’s a shared responsibility. That means industry collaboration, shared learnings, and ethical leadership.
From Readiness to Maturity
So how do you start, or keep going, when AI is evolving faster than most organisations can react?
Kuperholz’s advice: anchor to what doesn’t change. Start with ethics. Educate the line. Speak a common language. Build processes that outlast platforms. Focus on real business value. And always – always – design for action.
“A good process doesn’t start with the tech. It starts with the problem, looks at the data, validates the results, and deploys with intent.”
Because in the end, AI in safety isn’t about prediction. It’s about protection.