Session: Hack-Proof AI: Building Security Through Collaboration

Securing AI applications and services is a complex and fragmented endeavor. Security practitioners face constantly shifting standards and inconsistent, siloed threat guidance, making the task of assessing and mitigating AI-driven risks daunting even for the most experienced organizations.

The Coalition for Secure AI (CoSAI) unites diverse stakeholders from industry and academia to tackle the critical challenges of implementing secure AI systems. Its founding members include Amazon, Anthropic, Cisco, Cohere, GenLab, Google, IBM, Intel, Microsoft, NVIDIA, OpenAI, PayPal, and Wiz. Our mission is to drive the democratization, collaboration, and operational guidance needed to secure AI: providing accessible frameworks and architecture guidance to enhance security, and promoting uniform standards.

This interactive talk will delve into CoSAI’s strategic approach to building a secure AI toolkit for security practitioners. We will explore critical workstreams designed to address current gaps in AI security:

  • Software Supply Chain Security for AI Systems
    • Enhancing AI security by providing guidance on evaluating the provenance of AI models, managing third-party risks, and assessing the full lifecycle of AI applications.
    • Expanding on established security frameworks such as SSDF (Secure Software Development Framework) and SLSA (Supply-chain Levels for Software Artifacts) to adapt them for both AI and traditional software.
  • Preparing Defenders for a Changing Security Landscape
    • Developing a framework that helps defenders keep pace with emerging AI models and address the cybersecurity threats arising from current and necessary AI investments.
  • AI Risk Governance
    • Creating a comprehensive risk and controls taxonomy, checklist, and scorecard to guide practitioners in readiness assessments, management, monitoring, and reporting on the security of their AI operations.
  • Securing and Adopting Secure Systems
    • Researching and developing secure design patterns for AI-based agent systems, including updated AI agent threat models, conceptual high-level secure design patterns, and best practices for role infrastructure design, integration, and user-based roles.

This session will also explore CoSAI’s role in advancing the secure use of AI, particularly in key sectors like government and infrastructure. Attendees will have the opportunity to engage with founding members and discover how they can contribute to CoSAI’s goals. As an open-source initiative, CoSAI welcomes technical input from everyone. We invite you to support the group in creating a secure AI toolkit that benefits developers and organizations globally.

Presenters: