By Soumoshree Mukherjee
Editor’s note: This article is based on insights from a podcast series. The views expressed in the podcast reflect the speakers’ perspectives and do not necessarily represent those of this publication. Readers are encouraged to explore the full podcast for additional context.
On a recent episode of the “Regulating AI” podcast, host Sanjay Puri sat down with Professor Edward Santow, Co-Director of the Human Technology Institute (HTI) at the University of Technology Sydney, to unpack the complex relationship between artificial intelligence, governance, and human rights.
Santow, a former Australian Human Rights Commissioner, didn’t begin his career in technology. “I was a human rights lawyer working here in Sydney and I didn’t have a particular background in technology and certainly not in artificial intelligence,” he explained. But a troubling pattern in cases involving young people of color exposed him to the darker side of algorithmic decision-making. His organization offered free legal services to young people facing frequent police checks for minor offenses, and almost all of them had dark skin.
The discovery of the Suspect Target Management Plan, an algorithmic policing tool, was a turning point. The system disproportionately flagged Aboriginal and Torres Strait Islander youth, despite them comprising less than three percent of New South Wales’ population. “What really opened my eyes was,” Santow recalled, “how a system… was designed to be more data-driven, can go terribly wrong and go terribly wrong at scale.”
That early experience, more than a decade ago, set him on a path to explore both the risks and opportunities of AI. Santow acknowledges the “fever dream” quality of AI: while it can entrench bias and injustice, it also has the power to expand inclusion, particularly for people with disabilities. “… We really want to supercharge the positive uses of AI,” he said, “AI that can really make our world better and more inclusive while being clear-eyed about what the risks and threats are and combat those.”
Today, the global AI race is dominated by three blocs: the EU, with its risk-based regulation; the U.S., with its push for rapid innovation; and China, with its centralized control. Australia, Santow argued, must carve out its own niche. “… We tend not to be overly constricted by ideology. We’re really practical people,” he noted, adding that Australia should lean on its democratic values and reputation for fairness.
One example of this pragmatic approach is Australia’s voluntary AI safety standard, which Santow helped develop. Unlike vague ethics principles, the standard offers practical tools for businesses to classify AI systems as high, medium, or low risk and take concrete steps to mitigate harm. “What we were hearing particularly from the private sector was that industry wanted practical guidance, no more fluffy principles,” Santow explained.
Corporate governance is another key area. He criticized the “AI guru model,” where responsibility sits with a single specialist. Instead, he recommends a chief AI officer who acts as an ambassador across departments, ensuring that boards, executives, and frontline staff understand AI’s implications.
For Santow, effective governance means listening not only to regulators but also to workers. From nurses coping with false alarms in hospitals to retail staff navigating new systems, employees often hold vital insights into how AI should be integrated.
As Australia shapes its role in the global AI landscape, Santow’s message is clear: innovation and regulation are not opposing forces. “We don’t need to make that trade-off,” he stressed. “Instead, what we need to do is be more competent. We need to make sure that when we are adopting and developing AI, that we do it in a way that is effective, that is well considered.”
