Multi-agent AI platform implementing Active Inference for autonomous, mathematically principled intelligent systems
Experience FreeAgentics immediately without any configuration:
git clone https://github.com/greenisagoodcolor/FreeAgentics.git
cd FreeAgentics
make install
make dev
🎯 That's it! The system will start up and guide you through each step:
Step 1: Installation (2-3 minutes)
$ make install
You'll see:
- ✅ Python virtual environment created
- ✅ Python dependencies installed (FastAPI, PyMDP, etc.)
- ✅ Node modules installed (Next.js, React, etc.)
- ✅ Development tools configured
Step 2: Start Development Environment (30 seconds)
$ make dev
You'll see:
- 🔥 Backend starting on http://localhost:8000
- ⚛️ Frontend starting on http://localhost:3000
- ✅ WebSocket connections established
- 📊 In-memory database ready
Step 3: Access the Application
Open http://localhost:3000 in your browser. You should see:
- 🎨 Clean UI with dark theme
- 💬 Prompt bar at the bottom
- 📊 Empty metrics panel (no agents yet)
- 🌐 Empty knowledge graph visualization
Step 4: Create Your First Agent
Type in the prompt bar: "Create an agent to explore the environment"
You'll observe:
- Conversation starts - Two agents (Advocate & Analyst) appear
- Real-time updates - Messages stream as agents discuss
- Agent creation - A new explorer agent appears in the grid
- Knowledge graph updates - Nodes and connections form
Step 5: Explore Core Features
- Multi-Agent Chat: Watch the Advocate and Analyst discuss your request
- Grid World: See your explorer agent move around the environment
- Knowledge Graph: Click nodes to see agent beliefs and relationships
- Metrics Panel: Monitor agent performance and system health
Demo Features Ready:
- ✅ Create and manage Active Inference agents
- ✅ Watch agents explore the grid world
- ✅ View the knowledge graph build in real-time
- ✅ Test the conversation interface
- ✅ Explore all UI components
For real OpenAI responses and persistent data:
Step 1: Configure API Keys
cp .env.example .env
Step 2: Edit .env file
# Add your OpenAI API key:
OPENAI_API_KEY=sk-your-key-here
# Optional: Add PostgreSQL for persistence
DATABASE_URL=postgresql://user:password@localhost/freeagentics
Step 3: Restart with Real Providers
make dev
You'll notice:
- 🤖 Real AI responses instead of mock data
- 💾 Persistent database (if configured)
- 🧠 Actual LLM-generated agent behaviors
- 📈 More sophisticated knowledge graph growth
Example Prompts to Try:
- Basic Agent Creation (Demo Mode): "Create an agent to explore the environment" - Expected: Explorer agent appears and starts moving
- Business Planning (Requires API Key): "Help me create a sustainable business plan" - Expected: Agents discuss and analyze business strategies
- Theoretical Discussion (Best with API Key): "Have two agents discuss active inference theory" - Expected: Deep conversation about mathematical principles
- Complex Task (Requires API Key): "Design a multi-agent system for climate monitoring" - Expected: Multiple specialized agents created with specific roles
- ✅ WebSocket Connections: Real-time bidirectional communication
- ✅ Agent Conversations: Natural dialogue between AI agents
- ✅ Knowledge Graph Growth: Nodes and edges forming as agents interact
- ✅ Grid World Actions: Agents moving and exploring autonomously
- ✅ Belief Updates: Agent mental states evolving based on observations
- ✅ Goal-Directed Behavior: Agents following user-specified objectives
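To watch these events outside the browser, here is a minimal sketch using the `websockets` package against the demo endpoint (/api/v1/ws/demo). The exact message schema is an assumption, so raw payloads are printed as a fallback:

```python
# Sketch: listen to the demo WebSocket and print events as they arrive.
# Assumes `pip install websockets`; the message schema is an assumption.
import asyncio
import json

import websockets


async def watch_events(url: str = "ws://localhost:8000/api/v1/ws/demo") -> None:
    async with websockets.connect(url) as ws:
        async for raw in ws:
            try:
                event = json.loads(raw)
                print(event.get("type", "event"), event)
            except json.JSONDecodeError:
                print("non-JSON message:", raw)


if __name__ == "__main__":
    asyncio.run(watch_events())
```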
Issue: Port conflicts
make kill-ports # Kill processes on ports 3000 and 8000
make dev # Restart
Issue: Dependencies missing
make clean # Remove all dependencies
make install # Fresh install
make dev # Start again
Issue: WebSocket errors
make status # Check service health
# Look for "WebSocket: Connected" status
Issue: Not sure what's happening
make logs # View backend logs
# Check for error messages or warnings
FreeAgentics creates AI agents using Active Inference - a mathematical framework from cognitive science. Unlike chatbots or scripted AI, our agents make decisions by minimizing free energy, leading to emergent, intelligent behavior.
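For reference, the quantity being minimized is the variational free energy, which in the standard Active Inference formulation is

F = E_q(s)[ ln q(s) - ln p(o, s) ]

where q(s) is the agent's approximate posterior over hidden states and p(o, s) is its generative model of observations and states; lower F means the agent's beliefs better explain and predict what it observes.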
FreeAgentics implements a complete cognitive architecture where:
- Natural language goals are converted to GMN specifications via LLMs
- GMN specs create PyMDP Active Inference agents
- PyMDP agents take actions and update their beliefs
- Agent actions update the knowledge graph
- Knowledge graph provides context for the next LLM generation
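A minimal sketch of one pass through this loop, assuming the inferactively-pymdp package; the gmn_to_matrices and update_knowledge_graph helpers below are hypothetical placeholders, not FreeAgentics APIs:

```python
# Sketch of one perception-action cycle, assuming inferactively-pymdp
# (pip install inferactively-pymdp). Helper functions are hypothetical
# stand-ins for the project's own GMN and knowledge-graph components.
import numpy as np
from pymdp import utils
from pymdp.agent import Agent


def gmn_to_matrices(gmn_spec: dict):
    """Hypothetical: translate a GMN spec into PyMDP generative-model arrays.
    Here we just build a random model of a fixed size for illustration."""
    num_obs, num_states, num_controls = [4], [4], [2]
    A = utils.random_A_matrix(num_obs, num_states)        # observation model p(o|s)
    B = utils.random_B_matrix(num_states, num_controls)   # transition model p(s'|s,u)
    return A, B


def update_knowledge_graph(beliefs, action) -> None:
    """Hypothetical: push beliefs and actions into the knowledge graph."""
    print("KG update - beliefs:", np.round(beliefs[0], 2), "action:", action)


gmn_spec = {"goal": "explore the environment"}  # would come from the LLM step
A, B = gmn_to_matrices(gmn_spec)
agent = Agent(A=A, B=B)

observation = [0]                      # index of the observed outcome
qs = agent.infer_states(observation)   # belief update (state inference)
agent.infer_policies()                 # evaluate policies via expected free energy
action = agent.sample_action()         # pick an action
update_knowledge_graph(qs, action)
```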
- Multi-Agent Conversations: Watch AI agents discuss and collaborate in real-time
- Active Inference: Agents use PyMDP for probabilistic reasoning and decision-making
- Knowledge Graph: Live visualization of agent beliefs, goals, and relationships
- GMN Generation: Convert natural language into formal agent specifications
- Real-time Updates: WebSocket integration for instant feedback
- Zero Setup Demo: Experience everything without API keys or configuration
- Python 3.9+
- Node.js 18+
- Git
make install # Install all dependencies
make dev # Start development servers
make test # Run tests
make stop # Stop all servers
make status # Check environment status
make clean # Clean build artifacts
make reset # Full reset (removes dependencies)
Demo Mode (No API Key):
curl -X POST "http://localhost:8000/api/v1/prompts/demo" \
-H "Content-Type: application/json" \
-d '{
"prompt": "Create an agent that explores unknown environments",
"agent_name": "Explorer"
}'
With LLM (Requires API Key):
curl -X POST "http://localhost:8000/api/v1/prompts" \
-H "Content-Type: application/json" \
-d '{
"prompt": "Create an agent that balances exploration and exploitation",
"agent_name": "Optimizer",
"llm_provider": "openai"
}'
Start a conversation between multiple Active Inference agents:
curl -X POST "http://localhost:8000/api/v1/agent-conversations" \
-H "Content-Type: application/json" \
-d '{
"prompt": "Discuss strategies for sustainable energy",
"agent_count": 3,
"conversation_turns": 5
}'
The knowledge graph automatically updates as agents interact:
curl "http://localhost:8000/api/knowledge-graph"- Goal Prompt → LLM generates GMN specification
- GMN → Creates PyMDP Active Inference model
- PyMDP → Agent takes actions based on beliefs
- Actions → Update knowledge graph
- Knowledge Graph → Provides context for next iteration
This creates a continuous learning loop where agents become more intelligent over time!
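The same endpoints can be driven from Python; a minimal sketch using the requests library and the demo endpoints shown above (the response structure is an assumption, so full JSON is printed):

```python
# Sketch: exercise the demo API from Python with the requests library.
# Endpoints are those documented above; response fields are assumptions.
import requests

BASE = "http://localhost:8000"

# Create an agent from a natural-language goal (demo mode, no API key).
resp = requests.post(
    f"{BASE}/api/v1/prompts/demo",
    json={
        "prompt": "Create an agent that explores unknown environments",
        "agent_name": "Explorer",
    },
    timeout=30,
)
resp.raise_for_status()
print("prompt response:", resp.json())

# Fetch the current knowledge graph after the agent has acted.
kg = requests.get(f"{BASE}/api/knowledge-graph", timeout=30)
kg.raise_for_status()
print("knowledge graph:", kg.json())
```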
FreeAgentics automatically detects when no configuration is provided and switches to demo mode:
- SQLite in-memory database - No installation needed
- Demo WebSocket endpoint - Auto-connects to /api/v1/ws/demo
- Mock LLM providers - Realistic AI responses without API keys
- In-memory caching - No Redis required
- Auto-generated auth tokens - Skip complex authentication setup
- Real-time WebSocket - Full functionality including live updates
Copy the comprehensive example file and customize as needed:
cp .env.example .env
# Edit .env with your preferences
Key Settings:
# For real AI responses
OPENAI_API_KEY=sk-your-key-here
# For persistent data
DATABASE_URL=postgresql://user:pass@host:port/database
# For production caching
REDIS_URL=redis://localhost:6379/0
The .env.example file includes detailed documentation for all 100+ available settings.
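If you are scripting against the backend locally, you can confirm your .env values are picked up with a quick check; a sketch assuming python-dotenv (not part of the project tooling):

```python
# Sketch: confirm local .env settings are readable, assuming python-dotenv
# (pip install python-dotenv). Variable names match the keys shown above.
import os

from dotenv import load_dotenv

load_dotenv()  # reads .env from the current directory

print("OpenAI key set:", bool(os.getenv("OPENAI_API_KEY")))
print("Database URL:", os.getenv("DATABASE_URL", "(unset: demo in-memory DB)"))
print("Redis URL:", os.getenv("REDIS_URL", "(unset: in-memory cache)"))
```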
For production with vector storage:
# Install PostgreSQL and the pgvector extension
# Ubuntu/Debian (the pgvector package is versioned, e.g. postgresql-16-pgvector):
sudo apt install postgresql postgresql-contrib postgresql-16-pgvector
# Create the database and enable the extension in it:
sudo -u postgres createdb freeagentics
sudo -u postgres psql -d freeagentics -c "CREATE EXTENSION IF NOT EXISTS vector;"
# Set DATABASE_URL in .env:
DATABASE_URL=postgresql://username:password@localhost:5432/freeagentics
Note: SQLite works fine for development and small deployments.
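To confirm the configured DATABASE_URL is reachable before starting the backend, a quick connectivity check with SQLAlchemy (a sketch, not part of the project tooling):

```python
# Sketch: verify DATABASE_URL connectivity with SQLAlchemy
# (pip install sqlalchemy psycopg2-binary). Falls back to SQLite if unset.
import os

from sqlalchemy import create_engine, text

url = os.environ.get("DATABASE_URL", "sqlite:///:memory:")
engine = create_engine(url)

with engine.connect() as conn:
    conn.execute(text("SELECT 1"))
    print("Database reachable:", url)
```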
/
├── agents/ # Active Inference agents
├── api/ # FastAPI backend
├── web/ # Next.js frontend
├── inference/ # PyMDP integration
├── database/ # SQLAlchemy models
└── tests/ # Test suite
make status # Check environment and service status
make kill-ports # Stop conflicting processes
make clean # Remove build artifacts
make install # Reinstall dependencies
make dev # Start fresh
- Connection refused: Check NEXT_PUBLIC_WS_URL in .env (leave empty for demo mode)
- Authentication errors: Demo mode doesn't require auth. For dev mode, ensure a valid JWT token
- Connection drops: Check the browser console; enable debug logging with ENABLE_WEBSOCKET_LOGGING=true
- Testing WebSocket:
wscat -c ws://localhost:8000/api/v1/ws/demo
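If wscat isn't installed, the same connectivity check can be done from Python; a sketch using the `websockets` package:

```python
# Sketch: quick connectivity check for the demo WebSocket endpoint,
# assuming `pip install websockets`. Mirrors the wscat command above.
import asyncio

import websockets


async def check(url: str = "ws://localhost:8000/api/v1/ws/demo") -> None:
    try:
        async with websockets.connect(url, open_timeout=5) as ws:
            print("Connected:", url)
    except Exception as exc:  # connection refused, timeout, handshake errors
        print("WebSocket check failed:", exc)


asyncio.run(check())
```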
See WebSocket API Documentation and WebSocket Testing Guide for detailed debugging.
Service Won't Start:
# Check if ports are in use
make kill-ports && make dev
# Verify dependencies
make status
# Full reset if needed
make reset && make install && make dev
Frontend Not Loading:
- Ensure backend is running: http://localhost:8000/health
- Check frontend port: usually http://localhost:3000
- Look for port conflicts in terminal output
API/Database Errors:
- Demo mode should work without any setup
- If using custom config, verify .env file settings
- Check logs in terminal for specific error messages
Performance Issues:
- Demo mode uses in-memory database (data resets on restart)
- For persistent data, set DATABASE_URL in .env file
- Reduce MAX_AGENTS_PER_USER in .env if needed
- Check make status output
- Look for error messages in terminal
- Verify http://localhost:8000/health returns OK
- Try demo mode first (no configuration needed)
MIT License - see LICENSE file for details.