

Invited Talk in Workshop: ICLR 2025 Workshop on Human-AI Coevolution

A Retrospective & Forward-Looking View of AI Safety & Security

Zhuo Li


Abstract:

This presentation offers both a reflective analysis of AI safety and security developments to date and an exploration of emerging challenges on the horizon. Drawing on extensive experience across major technology platforms, the speaker will examine how approaches to AI safety have evolved in response to technological advancements, regulatory frameworks, and shifting public expectations.

The first segment will highlight key milestones in the development of AI safety protocols, including lessons learned from implementation across diverse global contexts. I will share insights from real-world scenarios that illustrate both successful strategies and critical gaps in current practices.

The second segment will turn toward future considerations, with particular attention to emerging threat models, governance frameworks, and technical solutions. As AI systems become increasingly integrated into critical infrastructure and decision-making processes, the presentation will address how organizations can proactively build safety and security measures from the ground up rather than as afterthoughts.

The talk will conclude with practical recommendations for stakeholders across the AI ecosystem, including developers, executives, policymakers, and end users. These actionable insights reflect the work being pioneered at HydroX AI, where enabling AI safety and building safe AI is our core mission. At HydroX AI's innovation hub, we blend creativity with security through state-of-the-art tools and expert guidance designed to empower responsible AI development.
