The schedule reflects a deliberate dramatic arc. The morning builds intellectual pressure through keynotes and a formal debate that forces the room to commit. The afternoon releases that energy into applied labs, structured workshops, and competitions. The closing synthesizes everything into a published artifact.
Meaningful conversation happens between the talks: shorter, focused, TED-style sessions keep audience attention and engagement high, and longer breaks create the networking time where the real conference happens.
01
September 15–17. A multi-day engineering competition. Participants compete remotely to see who can get an LLM to perform security tasks most effectively. Results are revealed on stage September 18.
02
September 17. A day-before executive event for 16–20 senior participants. One sponsor hosts. Intimate, unhurried relationship-building among the most senior attendees and sponsors.
03
September 18. The full-day program below.
Invite Only, Chatham House Rule. Senior security leaders. Structured conversation that surfaces the day’s core tensions.
Doors open. Attendee check-in and informal networking.
Duke leadership welcome; the intellectual stakes for the day are established.
One authoritative voice: the definitive overview of where AI security stands in September 2026.
What autonomous AI looks like in production. The attack surface. What defenders face today.
The governance and oversight landscape, liability gaps, cognitive liberty in AI-augmented environments.
Formal adversarial debate. Pre-vote, two sides (10 min each), rebuttals, post-vote. The swing is published.
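For clarity on what gets published: the swing is simply the change in the "for" share between the pre-vote and the post-vote. A minimal sketch, with illustrative vote counts rather than a real tally:

```python
def swing(pre_for: int, pre_against: int, post_for: int, post_against: int) -> float:
    """Percentage-point change in the 'for' share between pre- and post-debate votes."""
    pre_share = pre_for / (pre_for + pre_against)
    post_share = post_for / (post_for + post_against)
    return round(100 * (post_share - pre_share), 1)

# Illustrative: 60 of 100 'for' before the debate, 72 of 100 after -> +12.0 points
print(swing(60, 40, 72, 28))
```

A positive number means the debate moved the room toward the motion; a negative number, away from it.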
Technical:
AI Red Teaming Lab / Securing the Agentic Stack
Management:
AI Governance Tabletop / AI-Ready Security Team
Technical:
Vibe Code Security Audit / Cyber Careers Assessment
Management:
Supply Chain Risk Management / CISO Leadership
Live synthesis: debate votes, workshop outputs.
Awards for CTF and Hackathon winners. Informal reception.
The afternoon parallel tracks are the conference’s applied engine. Below are options for the Board’s consideration, organized by track.
1. AI Red Teaming Lab. Participants attack a live AI system under controlled conditions – prompt injection, model manipulation, data exfiltration, adversarial evasion. A facilitator walks the room through escalating attack scenarios.
2. Vibe Code Security Audit. Give participants an AI-generated codebase – the output of “vibe coding” – and have them find where the AI got it wrong. SQL injection, hardcoded secrets, broken auth, insecure defaults.
3. Autonomous Breach Response War Game. Two teams respond to the same simulated breach in real time. One team has full autonomous AI tooling (AI-driven SOAR, LLM triage, automated containment). The other runs a traditional SOC playbook.
4. Supply Chain Poisoning Simulation. Teams receive an AI agent dependency tree – model providers, tool integrations, data pipelines, API chains – and race to find the poisoned node before a timer runs out. A mini-CTF focused specifically on supply chain risk.
5. Securing the Agentic Stack. A structured workshop on the architectural requirements for securing agentic AI workloads – identity and least privilege for AI agents, runtime controls, memory isolation, tool-use governance. Practitioners work through a reference architecture and stress-test it against attack scenarios.
6. Cyber Careers Assessment. A 60-minute open session where students and early-career professionals take a structured personality and aptitude assessment designed to identify the cybersecurity domains where they would be most effective and most satisfied.
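To make the Vibe Code Security Audit concrete, here is a hypothetical fragment of the kind of AI-generated code participants would review (not drawn from actual lab materials), exhibiting two of the flaw classes named above, plus the sort of naive first-pass check a participant might write to flag them:

```python
import re

# Hypothetical "vibe coded" snippet a participant might be asked to audit.
AI_GENERATED_SNIPPET = '''
API_KEY = "sk-live-9f2a7c1d"          # hardcoded secret
def get_user(db, username):
    query = "SELECT * FROM users WHERE name = '" + username + "'"
    return db.execute(query)          # SQL built by concatenation -> injectable
'''

def audit(source: str) -> list[str]:
    """Flag two of the flaw classes from the workshop description."""
    findings = []
    if re.search(r'=\s*"sk-[\w-]+"', source):
        findings.append("hardcoded secret")
    if re.search(r'"SELECT[^"]*"\s*\+', source):
        findings.append("possible SQL injection (string-concatenated query)")
    return findings

print(audit(AI_GENERATED_SNIPPET))
```

The lab itself would of course work on a full codebase; the point of the exercise is that AI-generated code fails in patterned, findable ways.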
1. Incident Response Tabletop. A scenario-driven exercise: an autonomous AI agent deployed in your security stack causes a material incident – it auto-quarantines a production server based on a false positive, triggering a revenue-impacting outage. Teams work through the response: Who is accountable? What does the board need to know in 24 hours? What does the SEC disclosure look like?
2. Autonomous AI Governance Workshop. You just became CISO at a company that has deployed autonomous AI across the enterprise with no governance framework. The board wants a 90-day plan. Each table is assigned a specific governance framework constraint. Tables build their plans within that regulatory lens.
3. AI-Ready Security Team Workshop. A structured session on workforce strategy: which security roles are most augmented by AI, which are most displaced, and what does the hiring profile look like in 2028? Teams build a “future org chart” for a mid-market security organization with autonomous tooling.
4. CISO Leadership Scenario. Give a table of CISOs a high-pressure scenario (e.g., “Your board just mandated 50% SOC headcount reduction via AI automation – you have 90 days”). Each table develops a response plan, then presents to the room. A panel of sitting CISOs critiques each plan. The format forces candor because the audience evaluates the work in real time.
5. Supply Chain Risk Management. A management-oriented companion to the technical supply chain simulation. Teams assess a fictional company’s AI vendor portfolio: which third-party AI integrations create material risk, what does the contractual liability landscape look like, and how do you build an AI-specific third-party risk program?
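For a flavor of what the supply-chain exercises ask of participants, here is a toy sketch (hypothetical node names and hashes, not competition content) of hunting the poisoned node in an AI agent dependency tree by comparing shipped artifacts against vendor-published ones:

```python
# Toy dependency tree for an AI agent: each node lists its upstream dependencies
# and the artifact hash it ships vs. the hash the vendor published. (Illustrative.)
TREE = {
    "agent":         {"deps": ["model-api", "tool-plugin"], "shipped": "a1", "published": "a1"},
    "model-api":     {"deps": ["data-pipeline"],            "shipped": "b2", "published": "b2"},
    "tool-plugin":   {"deps": ["data-pipeline"],            "shipped": "c3", "published": "c9"},  # tampered
    "data-pipeline": {"deps": [],                           "shipped": "d4", "published": "d4"},
}

def find_poisoned(tree: dict, root: str) -> list[str]:
    """Walk the tree from the root and report nodes whose shipped hash diverges."""
    poisoned, stack, seen = [], [root], set()
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        info = tree[node]
        if info["shipped"] != info["published"]:
            poisoned.append(node)
        stack.extend(info["deps"])
    return poisoned

print(find_poisoned(TREE, "agent"))
```

The real simulation layers on time pressure and noisier signals, but the underlying task is this kind of systematic traversal of the vendor graph.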