Working Group

AI Technology and Risk

Explore the latest AI tech, predict risks, and ensure innovation meets security in the realm of AI.
The AI Technology and Risk Committee is focused on staying abreast of the latest technological advancements in AI while simultaneously identifying, understanding, and forecasting associated risks, threats, and vulnerabilities. This technical committee aims to act as both a knowledge hub and a proactive risk management entity, bridging the gap between innovation and security in the realm of AI.

Working Group Leadership

Josh Buker

Research Analyst, CSA

Working Group Co-Chairs

Mark Yanalitis
Chris Kirschke

Cloud Portfolio Information Security Officer at Albertsons Companies

Security leader with more than 20 years of experience across Financial Services, Streaming, Retail, and IT Services, with a heavy focus on Cloud, DevSecOps, and Threat Modeling. Advises multiple security startups on product strategy, alliances, and integrations. Sits on multiple customer advisory boards, helping to drive security product roadmaps, integrations, and feature development. Avid hockey player, backpacker, and wine collector in his spare t...

Publications in Review | Open Until
Agentic AI Red Teaming Guide | Apr 27, 2025
AI Consensus Assessments Initiative Questionnaire (AI-CAIQ) | Apr 28, 2025
Secure Agentic System Design - A Trait-Based Approach | May 15, 2025
Managing Privileged Access in a Cloud-First World | May 23, 2025
Who can join?

Anyone can join a working group, whether you have years of experience or simply want to observe as a fly on the wall.

What is the time commitment?

The time commitment for this group varies depending on the project. You can spend 15 minutes helping review a publication that's nearly finished, or help author a publication from start to finish.

Virtual Meetings

Attend our next meeting. You can just listen in to decide if this group is a good fit for you, or you can choose to actively participate. During these calls we discuss current projects, as well as share ideas for new projects. This is a good way to meet the other members of the group. You can view all research meetings here.

No scheduled meetings for this working group in the next 60 days.

See Full Calendar for this Working Group

Open Peer Reviews

Peer reviews allow security professionals from around the world to provide feedback on CSA research before it is published.

Learn how to participate in a peer review here.

Agentic AI Red Teaming Guide

Open Until: 04/27/2025

Red teaming for Agentic AI requires a specialized approach due to several critical factors. Agentic AI systems demand more ...

AI Consensus Assessments Initiative Questionnaire (AI-CAIQ)

Open Until: 04/28/2025

The AI Consensus Assessment Initiative Questionnaire (AI-CAIQ) is an extension of the Cloud Security Allia...

Secure Agentic System Design - A Trait-Based Approach

Open Until: 05/15/2025

This paper addresses the security challenges unique to agentic AI systems. As AI transitions from passive tools to autonomo...

Managing Privileged Access in a Cloud-First World

Open Until: 05/23/2025

Managing privileged access has become increasingly critical due to the complexity and ubiquity of distributed IT environmen...

Premier AI Safety Ambassadors

Premier AI Safety Ambassadors play a leading role in promoting AI safety within their organization, advocating for responsible AI practices and promoting pragmatic solutions to manage AI risks. Contact sales@cloudsecurityalliance.org to learn how your organization could participate and take a seat at the forefront of AI safety best practices.