The Month of AI Bugs
August 2025
An initiative to raise awareness of security vulnerabilities in agentic AI systems.
powered by Embrace the Red
This project is dedicated to analyzing security vulnerabilities in AI systems, with a focus on agentic coding agents. Our goal is to raise awareness of critical risks such as prompt injection and the dangers of over-relying on LLM output.
We believe in transparency and proactive defense. Many of the vulnerabilities highlighted here have been responsibly disclosed and fixed by vendors. However, we also shed light on cases where vendors are unresponsive, in order to encourage accountability and timely action.
With the advent of offensive AI, the industry must adapt. This means shortening triage and fix windows for vulnerabilities and adopting AI for proactive defense. This initiative is guided by the "Embrace the Red" philosophy:
"Learn the hacks, stop the attacks."
This information is provided for educational purposes, to raise awareness and understanding of novel AI security vulnerabilities. We do not endorse or encourage any illegal activity. If you find security vulnerabilities, disclose them responsibly to the vendor - many vendors also pay bug bounties.