GitHub's Secure Code Game has launched Season 4, targeting agentic AI security through a deliberately vulnerable terminal bot called ProdBot. The game is free, open source, and requires zero prior AI or coding experience. Over 10,000 developers have completed previous seasons. This one is built around a single goal: use natural language to make ProdBot expose the contents of password.txt. Five levels, five capability upgrades, five distinct attack surfaces.

The threat model here is real, not academic. The OWASP Top 10 for Agentic Applications 2026, produced with input from over 100 security researchers, lists agent goal hijacking, memory poisoning, and tool misuse as critical risks. A Dark Reading poll found 48% of cybersecurity professionals expect agentic AI to be the top attack vector by end of 2026. Cisco's State of AI Security 2026 report puts the problem plainly: 83% of organizations plan to deploy agentic AI, but only 29% feel ready to do so securely. ProdBot is a controlled environment to close that gap.

What makes this worth reading in full is the level design. Each upgrade to ProdBot, from sandboxed bash execution to web browsing to MCP server connections to multi-agent orchestration across six specialized agents, mirrors how real production AI tools actually evolve. The article references CVE-2026-25253, a CVSS 8.8 remote code execution flaw in OpenClaw dubbed ClawBleed, as the kind of real-world consequence these patterns produce. The game does not tell you which vulnerability lives at which level. That tension is the point.

[READ ORIGINAL →]