“The increased availability of AI systems has led to greater interest in a wide range of applications—however, no technology should be blindly trusted,” said Dr. Rumman Chowdhury, ex-Twitter AI ethics lead.
As artificial intelligence systems grow more powerful and accessible, questions around their security, reliability, and ethical use are taking center stage. At this year’s RSA Conference in San Francisco, a unique event aims to confront those concerns head-on — by pitting human intuition against machine logic.
Humane Intelligence, a nonprofit known for its hands-on evaluations of AI’s societal impact, is teaming up with cybersecurity firm HydroX AI to host “Human vs. Machine,” an interactive red teaming session on April 28 at the Moscone Center.
The two-hour event, scheduled from 1:10 to 3:10 p.m. Pacific time, invites participants to test the limits of large language models (LLMs) through 20 structured challenges. These will probe for vulnerabilities across topics such as hallucinations, political sensitivity, bomb threats, and prompt injections—issues that continue to spark global debate about the safety and ethics of generative AI.
In technical terms, “red teaming” is a cybersecurity exercise that simulates real-world attacks to test an organization’s security with a goal to identify vulnerabilities and improve security.
“Red teaming is one of the most effective ways to make AI safer. But it only works when we open up the process to diverse perspectives,” said Dr. Rumman Chowdhury, CEO and co-founder of Humane Intelligence. “This session is designed not just to test systems, but to teach people how to think adversarially and responsibly about AI.”
READ: HP acquires parts of AI startup Humane, retires AI Pin (February 19, 2025)
The event is part of RSAC’s Learning Labs, known for offering practical, hands-on training in emerging areas of tech and cybersecurity. Unlike traditional competitions, this red teaming session emphasizes collaborative learning over scoring points. Experts from Humane Intelligence, HydroX AI, and AIA will kick off the session with briefings on adversarial testing workflows and best practices in AI security.
Victor Bian, COO of HydroX AI, emphasized the urgency of developing real-world skills.
“We aim to make our RSAC Learning Lab a great opportunity for professionals to gain practical, hands-on skills in AI security,” said Bian. “By showcasing how humans and machines can work together to uncover vulnerabilities, we’re equipping attendees with actionable strategies to protect and strengthen AI systems.”
This session builds on Humane Intelligence’s growing leadership in the red teaming space, following their involvement in high-profile events like DEFCON 31’s generative AI testing challenge. As AI technologies continue to permeate every sector—from healthcare to finance—so too does the need for scalable and inclusive evaluation frameworks.
READ: California lawmaker Ted Lieu discusses AI regulation and legislative efforts (April 7, 2025)
Dr. Chowdhury, who previously held lead roles at Twitter as well as the U.S. government as the Science Envoy for AI, offered an exclusive comment to The American Bazaar, underscoring the importance of human judgment in an AI-driven world.
“The increased availability of AI systems has led to greater interest in a wide range of applications—however, no technology should be blindly trusted,” she said. “The development of better test and evaluation systems, both AI and human-driven, can provide better insights into risks. Our event — ‘Human vs Machine’— tests the nuances of what AI can help automate, but where human oversight is still needed. While AI is helpful, contextual awareness can only be provided by people.”
As lawmakers, technologists, and civil society grapple with the pace of AI development, events like “Human vs. Machine” aim to equip stakeholders with the tools—and mindset—needed to secure an increasingly intelligent future.

