Simulating an AppSec Team with AI

In the world of cybersecurity, some practitioners build tools, and then there are visionaries who reimagine how those tools and the teams behind them could function altogether. Srajan Gupta belongs to the latter category. A seasoned security engineer and researcher, Srajan’s recent work in AI-powered application security reflects not just a technical achievement, but a paradigm shift in how the industry might approach scale, intelligence, and autonomy in AppSec.

His latest contribution, AI Security Crew, is a publicly available, open-source framework that simulates the capabilities of an entire application security team using distributed, autonomous AI agents. But this isn’t just another security automation tool. It’s a deliberate architectural statement about how artificial intelligence can be composed, structured, and directed to serve complex security workflows, not merely supplement them.

An Engineer Grounded in Systems, Not Silos
Srajan’s journey to this point is grounded in experience across high-velocity product environments, where he has worked in multiple security domains, including threat modeling, infrastructure hardening, identity management, and cloud-native fraud detection. What he saw consistently in those roles was a misalignment between how security was practiced and how software was built.

“Security often gets bolted on, scoped out, or backlogged,” he reflects. “There’s an opportunity for AI to integrate security deeper into engineering, in a way that’s proactive, accessible, and composable.”

This insight didn’t come from theory alone; it came from lived friction. Design reviews delayed launches. Manual code inspections missed architectural threats. Security tooling rarely spoke the language of developers or adapted to the context of a specific system. These gaps, repeated across multiple organizations and product teams, led Srajan to ask: What if we could simulate the workflows of a modern security team without the human bottlenecks?

Inside the Framework: What AI Security Crew Does
At its core, AI Security Crew is a modular, multi-agent system: a network of role-specific AI agents, each trained and optimized to perform key AppSec functions. Rather than operating in isolation, these agents communicate via a shared memory context, coordinating their efforts to analyze, critique, and provide recommendations across a system’s lifecycle.

Each agent has a clearly defined role, modeled after a real-world security team (a rough sketch of how these roles could be wired together follows the list):

  • Code Review Agent: Examines codebases, pull requests, or diffs to detect insecure patterns, architectural inconsistencies, or violations of secure coding standards.
  • Exploiter Agent: Takes on the role of an attacker attempting to exploit system weaknesses, validate threat models, and simulate potential real-world abuse scenarios.
  • Remediation Agent: Translates findings into actionable security fixes, suggesting code patches, configuration changes, or policy improvements tailored to the engineering context.
  • Reporter Agent: Consolidates findings into clear, developer-friendly reports, mapping each issue to its severity, impact, and recommended next steps for resolution.
  • Manager Agent: Oversees agent coordination, prioritizes risks, ensures progress alignment across tasks, and maintains consistency between security policies and engineering velocity, just like a real security team lead.
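
The sketch below gives a rough sense of how this division of labor could be expressed in code. It is a minimal, hypothetical illustration written in Python for clarity: the class and function names (SharedMemory, Finding, CodeReviewAgent, ExploiterAgent, ManagerAgent, run_crew) are assumptions made for this article, not the actual interfaces of the AI Security Crew codebase.

```python
# Hypothetical sketch of a role-based crew coordinating through shared memory.
# All names here are illustrative assumptions, not the project's real API.
from dataclasses import dataclass, field


@dataclass
class Finding:
    """A single security observation contributed by one agent."""
    agent: str
    severity: str   # "high", "medium", or "low" in this sketch
    detail: str


@dataclass
class SharedMemory:
    """The common context that every agent reads from and writes to."""
    target: str                               # e.g. a repo path or a diff under review
    findings: list[Finding] = field(default_factory=list)

    def add(self, finding: Finding) -> None:
        self.findings.append(finding)


class Agent:
    """Base role: each agent implements run() against the shared memory."""
    name = "agent"

    def run(self, memory: SharedMemory) -> None:
        raise NotImplementedError


class CodeReviewAgent(Agent):
    name = "code_review"

    def run(self, memory: SharedMemory) -> None:
        # Stand-in for static inspection of the target for insecure patterns.
        memory.add(Finding(self.name, "medium",
                           f"Reviewed {memory.target} for insecure coding patterns"))


class ExploiterAgent(Agent):
    name = "exploiter"

    def run(self, memory: SharedMemory) -> None:
        # Stand-in for attack simulation against weaknesses already in memory.
        memory.add(Finding(self.name, "high",
                           "Attempted to exploit weaknesses reported by earlier agents"))


class ManagerAgent(Agent):
    name = "manager"

    def run(self, memory: SharedMemory) -> None:
        # Prioritize the crew's combined findings, highest severity first.
        order = {"high": 0, "medium": 1, "low": 2}
        memory.findings.sort(key=lambda f: order.get(f.severity, 3))


def run_crew(target: str) -> SharedMemory:
    """Run each role in turn over one shared context and return it."""
    memory = SharedMemory(target=target)
    for agent in (CodeReviewAgent(), ExploiterAgent(), ManagerAgent()):
        agent.run(memory)
    return memory


if __name__ == "__main__":
    for f in run_crew("example-service/").findings:
        print(f"[{f.severity}] {f.agent}: {f.detail}")
```

In the real framework the Remediation and Reporter roles would participate as well, and each agent would be backed by a language model rather than the placeholder logic shown here.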

By structuring the framework this way, AI Security Crew doesn’t just automate tasks; it emulates the structure, dynamics, and collaboration of a modern application security team operating within a fast-moving engineering organization.

Why This Approach Matters
The industry has long leaned on tools that detect symptoms: vulnerabilities in code, outdated dependencies, suspicious log entries. Those tools struggle to identify causal patterns, architectural flaws, or threat propagation paths across systems. Human security engineers excel at these higher-order analyses, but they are expensive, rare, and often overwhelmed.

AI Security Crew points to a third way: building autonomous systems that reason about security the way humans do, but faster, at scale, and with memory.

The novelty lies not in using AI per se (many tools do), but in how the AI is organized and operationalized. Most AI-infused security tools treat models as assistants: something to ask questions of, or to summarize output. AI Security Crew treats each agent as a team member, assigned to a function, accountable for its output, and empowered to contribute to a larger understanding of a system’s risk.

This is a fundamental shift from linear automation to structured autonomy, a direction that holds promise far beyond application security.

Research Meets Implementation
AI Security Crew isn’t just a standalone tool; it’s built on a solid foundation of applied research. In his peer-reviewed paper, “Architectural Analysis Using AI Agents”, published in IJFMR, Srajan lays out a theoretical model for how intelligent agents can deconstruct and assess software architecture. The paper explores how agents can detect systemic weaknesses, contextual design flaws, and deviations from intended architecture.

This work informed the multi-agent collaboration model used in AI Security Crew, allowing the system not just to analyze but to understand architecture as a system of interconnected risks. It’s rare to see a project that spans from conceptual theory to working implementation, and rarer still to see one that’s open-sourced and available for the community to extend.

Tool, Framework, or Philosophy?
While AI Security Crew is fully functional as a tool, its broader value may lie in what it represents: a new way to think about scaling security practices using AI. Rather than trying to replace humans, it amplifies them. Rather than centralizing intelligence, it distributes it. Rather than hardcoding rules, it allows for contextual interpretation and interaction.

This architecture creates a foundation for more dynamic security tooling. Teams can plug AI Security Crew into CI/CD pipelines, use it as a decision-support system during design reviews, or even run it as a background process that continuously validates architectural integrity.
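
To make the CI/CD idea concrete, the snippet below shows one way a pipeline step could gate a pull request on the crew’s output, reusing the hypothetical run_crew helper from the earlier sketch. The module name, the ci_gate function, and the severity labels are all assumptions for illustration, not entry points the project actually exposes.

```python
# Hypothetical CI gate: run the crew against a pull-request diff and fail the
# build on high-severity findings. Everything here builds on the earlier
# illustrative sketch rather than on the project's real command-line interface.
import sys

from crew_sketch import run_crew   # the hypothetical module sketched above


def ci_gate(diff_path: str) -> int:
    memory = run_crew(diff_path)
    blocking = [f for f in memory.findings if f.severity == "high"]
    for f in blocking:
        print(f"BLOCKING [{f.agent}]: {f.detail}")
    # A non-zero exit code is what makes the pipeline step fail.
    return 1 if blocking else 0


if __name__ == "__main__":
    sys.exit(ci_gate(sys.argv[1] if len(sys.argv) > 1 else "./"))
```

The same entry point could just as easily be scheduled on an interval to serve as the background process described above, re-running the crew against the main branch.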

Its modularity means it can evolve: new agents can be added, existing ones tuned, and memory and coordination mechanisms improved. In this sense, it’s not just a framework; it’s a living system designed to adapt as both AI and software development evolve.
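
Continuing the same hypothetical sketch, adding a role could be as small as one new class that honors the existing run(memory) contract; the DependencyAuditAgent below is an invented example, not an agent the project ships.

```python
# Extending the hypothetical crew with a new role. DependencyAuditAgent is an
# invented example; only the run(memory) contract from the earlier sketch matters.
class DependencyAuditAgent(Agent):
    name = "dependency_audit"

    def run(self, memory: SharedMemory) -> None:
        # Stand-in for scanning manifests or lockfiles for known-vulnerable packages.
        memory.add(Finding(self.name, "low",
                           f"Audited dependencies declared in {memory.target}"))
```

Nothing else in the crew would have to change; the new agent simply reads and writes the same shared memory as the others.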

The Bigger Picture
Srajan’s work with AI Security Crew sits at the convergence of multiple trends: the rise of AI agents, the need for scalable security, and the growing complexity of cloud-native, distributed systems. In tackling these challenges, he has not only developed an innovative tool but sparked a conversation around how AI can meaningfully collaborate in technical disciplines where nuance, judgment, and structure matter deeply.

As security leaders, researchers, and developers continue to explore the boundaries of AI in engineering, frameworks like AI Security Crew show that you don’t need to wait for enterprise products to adopt transformative ideas. Sometimes, all it takes is one engineer asking the right questions and building the right team, even if that team happens to be made of AI.


