General-purpose AI systems are now deployed to billions of users, yet they pose risks spanning bias, fraud, privacy violations, copyright infringement, CBRN threats, NCII, and more. Assessing these risks requires independent, community-driven evaluations, audits and red teaming, and responsible disclosure.
Our workshop on the future of third-party AI evaluation brings together experts to discuss:
Third-party evaluations, red teaming, and disclosure in the real world
Designing technical infrastructure for identifying and reporting flaws in AI systems
The legal and policy infrastructure for a healthy AI evaluation ecosystem