Why We’re Launching a Trusted AI Safety Knowledge Certification Program
Published 04/26/2025
Written by Anna Campbell Schorr, Training Program Director, Cloud Security Alliance.
Over the years, we’ve witnessed security paradigms evolve—from the early days of perimeter defense, to the rise of Zero Trust, and most recently, the challenges introduced by Artificial Intelligence (AI). AI is rapidly becoming a cornerstone of the enterprise landscape: according to The State of AI and Security Survey Report by the Cloud Security Alliance (CSA), 69% of organizations are already using AI products, with another 29% planning to adopt them. This transformation is driving demand for new skills, frameworks, and mindsets—such as applying ethical principles to AI behavior, reengineering workflows to harness AI's potential, and critically evaluating generative AI outputs for transparency, accuracy, fairness, bias, and potential harm.
To meet this need, we’re proud to introduce Trusted AI Safety Knowledge, a certification program jointly developed by CSA and Northeastern University (NU). This new credential is designed to equip professionals with the expertise to navigate the full AI lifecycle—responsibly building, securing, deploying, maintaining, and ultimately decommissioning AI systems.
This marks the next chapter in CSA’s commitment to advancing industry competence—building on the momentum of our Zero Trust initiative and the successful launch of the Certificate of Competence in Zero Trust (CCZT). More than a milestone, this initiative is the beginning of a long-term effort to ensure that AI practitioners are equipped not just with the right tools, but with the right mindset for the road ahead.
Why Now?
AI is no longer a research topic—it’s a foundational technology transforming every industry. But with its promise comes risk. The same algorithms that can detect cancer or optimize logistics can also perpetuate bias, leak sensitive data, or act in ways that are difficult to explain or control.
To address these challenges, we must consider both AI safety and AI security—closely related but distinct domains, both essential for the responsible development of AI. AI safety focuses on ensuring systems behave ethically, reliably, and in alignment with human values. AI security protects these systems from threats such as attacks, misuse, and unauthorized access. Together, they are critical to building trustworthy, resilient AI and must be integrated across the entire AI lifecycle.
Through the CSA AI Safety Initiative, we understand that AI safety, security, and responsibility must go hand in hand. Organizations need leaders who understand the full spectrum: from implementing technical guardrails to navigating ethical dilemmas and evolving regulations. This is why CSA and NU are creating the Trusted AI Safety Knowledge program.
A New Kind of Certification
The Trusted AI Safety Knowledge program is a commitment to building AI systems that are safe, secure, and responsible. The program will deliver a modular training path, with topics ranging from AI architecture and lifecycle risks to ethics, governance, and cloud security in AI environments. Whether learners pursue an initial certificate of knowledge or a professional-level certification, they’ll gain practical tools to assess risk, implement controls, and drive responsible AI adoption.
We’re proud to be partnering with Northeastern University, whose Institute for Experiential AI and Experiential Digital Global Education (EDGE) team bring deep expertise in responsible AI and applied, real-world learning. This collaboration ensures that our certification not only reflects academic rigor but is built for today’s fast-paced, high-stakes AI environment.
For the Pioneers
This program is designed for practitioners already on the front lines of designing, developing, and deploying AI systems, such as AI developers, IT and security professionals, auditors, and governance leads who recognize that “move fast and break things” is no longer an option. The Trusted AI Safety Knowledge program is for those who believe that innovation and responsibility must be tied together.
Just as Zero Trust challenged the assumption of implicit access, our approach to AI is rooted in a similar principle: we must question, verify, and align—at every step of the AI lifecycle. That’s the core of what the Trusted AI Safety Knowledge program is all about. AI safety, like security, isn’t a single layer or afterthought—it must be architected into the system from the start to ensure safe, secure, and responsible outcomes.
“AI will be the most transformative technology of our lifetimes, bringing with it both tremendous promise and significant peril. Through collaborative partnerships like this, we can collectively reduce the risk of these technologies being misused by taking steps necessary to educate and instill best practices when managing the full lifecycle of AI capabilities, ensuring—most importantly—that they are designed, developed, and deployed to be safe and secure.”
- Jen Easterly, Former Director, Cybersecurity and Infrastructure Security Agency
How Can I Get Involved?
Learn more about this innovative certification program by reading the Program Overview.
While the certification is currently in development, this is your opportunity to get involved early. Take an active role in development by signing up to be a beta tester and gaining early access to the training modules. Register your interest and stay informed by submitting your information here.
I’m incredibly grateful to our team, our partners at NU, and the many volunteers who have contributed and will contribute to this effort. As with every CSA initiative, this is just the beginning. We hope you’ll join us on the journey.