AI-powered static code analysis tool combining CodeQL, AST analysis, and LLM reasoning.
```
vulnerability_analyzer/
├── tools/     # LangChain Tools (thin wrappers)
├── agents/    # Autonomous vulnerability analysis agent
├── core/      # Pure business logic (LangChain-independent)
├── prompts/   # LLM prompt templates
├── output/    # Generated reports and visualizations
└── tests/     # Sample vulnerable code for testing
```
- Separation of Concerns: Core logic is independent of LangChain
- Autonomous Agent: ReAct-based agent dynamically chooses analysis strategy
- Hybrid Approach: Static analysis (CodeQL) + LLM context understanding
- Tool-Based Architecture: The agent orchestrates multiple specialized tools (see the sketch after this list)
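To illustrate this layering, a tool can be a thin adapter over LangChain-free logic. The sketch below is hypothetical, not the project's actual API: it uses Python's standard `ast` module as a stand-in for the core parser, and the `parse_ast` tool name is invented for the example.

```python
# Hypothetical sketch of a thin tool wrapper; names are illustrative.
import ast

from langchain_core.tools import tool


@tool
def parse_ast(file_path: str) -> str:
    """Parse a Python source file and return a dump of its AST."""
    # The tool layer only adapts existing logic for the agent; the
    # parsing itself carries no LangChain dependency.
    with open(file_path, "r", encoding="utf-8") as f:
        tree = ast.parse(f.read())
    return ast.dump(tree, indent=2)
```

Keeping each tool this thin is what lets the core logic be tested and reused without LangChain installed.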
- Primary: Detect vulnerabilities that static analyzers miss (context-aware)
- Secondary: Filter false positives from static analysis
- Tertiary: Provide actionable remediation suggestions
Copy .env.example to .env and update the values to match your local CodeQL
installation:
```bash
cp .env.example .env
```
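For illustration only, a filled-in `.env` might look like the snippet below; the key names here are hypothetical, and the authoritative list lives in `.env.example`:

```bash
# Hypothetical keys and values; check .env.example for the real ones.
OPENAI_API_KEY=sk-...
CODEQL_PATH=/usr/local/bin/codeql
```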
- Project structure setup
- AST parser implementation
- CFG generator implementation
- CodeQL integration (Docker-based)
- LangChain agent design (ReAct pattern)
- LLM prompt engineering
- Support for CodeQL-free analysis (CFG-only mode)
- Visualization and reporting
See AGENT_SETUP.md for complete setup instructions.
```python
from langchain_openai import ChatOpenAI
from agents import VulnerabilityAnalysisAgent

llm = ChatOpenAI(model="gpt-4", temperature=0)
agent = VulnerabilityAnalysisAgent(llm=llm, verbose=True)
result = agent.analyze_file("file.js")
```

To run without CodeQL (CFG-only mode):

```python
agent = VulnerabilityAnalysisAgent(
    llm=llm,
    use_codeql=False  # No Docker required!
)
result = agent.analyze_file("file.js")
```

📖 See USING_WITHOUT_CODEQL.md for details on CFG-only analysis.
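The exact report schema is not shown here; as a sketch, assuming `analyze_file` returns a dict with a `findings` list (an assumption, not the documented interface), the results could be consumed like this:

```python
# Hypothetical result handling; field names are assumptions.
for finding in result.get("findings", []):
    print(f"[{finding['severity']}] {finding['description']}")
    print(f"  Suggested fix: {finding['remediation']}")
```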