The LunarTech AI Interviewer System is a real-time, AI-powered interview platform that conducts mock interviews for Data Science positions. The system uses LiveKit for real-time communication, Google's Gemini AI for natural language processing, and provides comprehensive analysis and transcript generation.
The system consists of two main components:
The main entry point that sets up the LiveKit session, creates the interview room, and orchestrates the interview process.
- Loads environment variables for API keys and configuration
- Sets up logging for debugging
- Configures LiveKit WebSocket URL, API key, and secret
```env
LIVEKIT_WS_URL=ws://localhost:7880      # LiveKit WebSocket URL
LIVEKIT_API_KEY=devkey                  # LiveKit API key
LIVEKIT_API_SECRET=secret               # LiveKit API secret
LIVEKIT_ROOM_NAME=interview-room-{uuid} # Optional room name
LIVEKIT_URL=http://localhost:3000       # Frontend URL for joining
GOOGLE_API_KEY=your_google_api_key      # Google AI API key
TAVILY_API_KEY=your_tavily_api_key      # Optional: for web search
```

Contains a predefined Data Science job description including:
- Required skills (Python/R, ML, SQL, etc.)
- Experience requirements (2+ years)
- Salary range ($95,000-$130,000)
- Benefits and work arrangements
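Configuration loading can be sketched with `os.getenv`; the helper name and the exact defaults below are illustrative, not taken from the codebase (the real entrypoint would call `load_dotenv()` first so a local `.env` file populates the environment):

```python
import os

def load_livekit_config() -> dict:
    """Illustrative helper: read LiveKit settings from the environment.

    Assumes load_dotenv() has already been called so .env values
    are present in os.environ.
    """
    return {
        "ws_url": os.getenv("LIVEKIT_WS_URL", "ws://localhost:7880"),
        "api_key": os.getenv("LIVEKIT_API_KEY", "devkey"),
        "api_secret": os.getenv("LIVEKIT_API_SECRET", "secret"),
        # Room name is optional; the system can generate a unique one.
        "room_name": os.getenv("LIVEKIT_ROOM_NAME"),
    }
```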
```python
async def entrypoint(ctx: agents.JobContext):
    # Creates a LiveKit room with:
    # - 10-minute empty timeout
    # - Maximum 2 participants (interviewer + candidate)
    # - Unique room name generation
```

- LLM: Google Gemini 2.5 Flash with native audio dialog
- Voice: "Puck" voice model
- STT: Google Speech-to-Text with latest_long model
- Temperature: 0.7 for balanced creativity
The system captures conversation in real-time using two event handlers:
- User Input Transcription: Captures candidate responses
- Conversation Item Added: Captures interviewer responses
Each entry includes:
- Timestamp
- Speaker role (user/assistant)
- Transcribed text
- Speaker ID
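A transcript entry can be modeled as a small record; the dataclass below is a sketch whose field names are assumptions inferred from the fields listed above, not the codebase's actual schema:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class TranscriptEntry:
    """One line of the interview transcript (illustrative schema)."""
    timestamp: str   # ISO-8601 UTC timestamp
    role: str        # "user" (candidate) or "assistant" (interviewer)
    text: str        # transcribed speech
    speaker_id: str  # participant identity in the room

    @classmethod
    def now(cls, role: str, text: str, speaker_id: str) -> "TranscriptEntry":
        # Stamp the entry with the current UTC time.
        return cls(datetime.now(timezone.utc).isoformat(), role, text, speaker_id)

    def to_dict(self) -> dict:
        return asdict(self)
```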
- Room creation and participant invitation
- Agent initialization with job description
- Structured question sequence:
- Name and background
- Interest in position
- Experience with data science/ML/AI
- Career goals
- Availability
- Follow-up questions based on responses
- Interview conclusion via the `end_interview` function
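The structured question sequence above could be driven by a simple tracker; the question wording paraphrases the list and the class itself is illustrative, not the system's implementation:

```python
from typing import Optional

class QuestionSequence:
    """Illustrative tracker for the structured interview questions."""

    QUESTIONS = [
        "Could you tell me your name and a bit about your background?",
        "What interests you about this Data Science position?",
        "What experience do you have with data science, ML, or AI?",
        "What are your career goals?",
        "What is your availability?",
    ]

    def __init__(self):
        self.asked: list = []  # mirrors the questions_asked tracking

    def next_question(self) -> Optional[str]:
        """Return the next unasked question, or None when the list is exhausted."""
        for q in self.QUESTIONS:
            if q not in self.asked:
                self.asked.append(q)
                return q
        return None
```

Follow-up questions would then be generated by the LLM based on the candidate's answers, rather than drawn from this fixed list.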
Implements the core AI interviewer agent that conducts interviews, manages conversation flow, and generates comprehensive analysis.
```python
def __init__(self, *args, **kwargs):
    # Flexible constructor supporting:
    # - InterviewAgent(name, jd)
    # - InterviewAgent(jd=jd)
    # - InterviewAgent(name=name, jd=jd)
```

Key attributes:
- `name`: Candidate name (default: "Candidate")
- `jd`: Job description (default: "Undefined Position")
- `interview_start_time`: UTC timestamp of interview start
- `interview_transcript`: Raw conversation log
- `questions_asked`: Tracking of asked questions
- `is_interview_completed`: Interview completion status
- `interview_summary`: Structured summary data
- Triggered when agent enters the room
- Records interview start time in UTC
- Logs all messages to internal transcript
- Maintains conversation history with timestamps
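Recording the start time in UTC makes the interview duration in the metadata straightforward to compute; a minimal sketch (the helper class is illustrative):

```python
from datetime import datetime, timezone
from typing import Optional

class InterviewClock:
    """Illustrative helper for UTC start-time and duration tracking."""

    def __init__(self):
        self.start_time: Optional[datetime] = None

    def on_enter(self) -> None:
        # Called when the agent joins the room.
        self.start_time = datetime.now(timezone.utc)

    def duration_seconds(self) -> float:
        # Zero until the interview has actually started.
        if self.start_time is None:
            return 0.0
        return (datetime.now(timezone.utc) - self.start_time).total_seconds()
```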
Purpose: Concludes the interview and generates comprehensive documentation
Process:
- Saves interview data to JSON and text files
- Triggers AI analysis of the conversation
- Provides natural conclusion message
- Schedules graceful session shutdown
Output Files:
- `interview_{name}_{timestamp}_transcript.json`: Complete interview data
- `interview_{name}_{timestamp}_summary.txt`: Human-readable summary
Purpose: Handles file generation and data persistence
Generated Files:
- JSON Transcript: Structured data including:
  - Candidate information
  - Interview metadata (duration, status)
  - Complete conversation transcript
  - Summary notes
- Text Summary: Human-readable format with:
  - Interview overview
  - Chronological transcript
  - Interviewer notes
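File generation can be sketched as follows; the filename pattern matches the one described above, while the helper name, JSON keys, and summary layout are assumptions:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def save_interview_files(name: str, transcript: list, notes: str,
                         out_dir: Path = Path(".")) -> tuple:
    """Write the JSON transcript and text summary (illustrative)."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d_%H%M%S")
    json_path = out_dir / f"interview_{name}_{stamp}_transcript.json"
    txt_path = out_dir / f"interview_{name}_{stamp}_summary.txt"

    # Structured data for downstream processing.
    json_path.write_text(json.dumps({
        "candidate": name,
        "entry_count": len(transcript),
        "transcript": transcript,
        "notes": notes,
    }, indent=2))

    # Human-readable chronological log.
    lines = [f"Interview summary for {name}", ""]
    lines += [f"[{e['timestamp']}] {e['role']}: {e['text']}" for e in transcript]
    lines += ["", f"Notes: {notes}"]
    txt_path.write_text("\n".join(lines))
    return json_path, txt_path
```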
Purpose: Advanced AI analysis using Google Gemini
Analysis Components:
- Candidate Assessment:
  - Interest level (Low/Medium/High)
  - Readiness for role (Not Ready/Somewhat Ready/Ready/Very Ready)
  - Experience level (Junior/Mid-level/Senior)
- Skills Analysis:
  - Technical skills mentioned
  - Soft skills demonstrated
- Evaluation:
  - Key strengths summary
  - Areas for improvement
  - Overall assessment and recommendation
  - Notable quotes extraction
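The analysis step might look like the sketch below. The LLM client is injected, so the actual Gemini call (method name, model) remains a hedged assumption; the JSON-extraction logic handles the common case where the model wraps its reply in a markdown code fence:

```python
import json

def analyze_interview(client, transcript_text: str, job_description: str) -> dict:
    """Send the transcript to an LLM and parse its JSON reply (illustrative).

    `client` is assumed to expose generate_content(prompt) -> str; any
    object with that method works, which also makes this testable offline.
    """
    prompt = (
        "You are evaluating a Data Science interview.\n"
        f"Job description:\n{job_description}\n\n"
        f"Transcript:\n{transcript_text}\n\n"
        "Reply with JSON containing candidate_name, interest_level, readiness, "
        "experience_level, technical_skills, soft_skills, key_strengths, "
        "areas_for_improvement, overall_assessment, notable_quotes."
    )
    reply = client.generate_content(prompt)

    # Models often wrap JSON in a markdown code fence; strip it before parsing.
    cleaned = reply.strip()
    if cleaned.startswith("```"):
        cleaned = cleaned.split("\n", 1)[1].rsplit("```", 1)[0]
    return json.loads(cleaned)
```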
AI Analysis Output:
- `interview_{name}_{timestamp}_AI_ANALYSIS.json`: Structured AI analysis
- `interview_{name}_{timestamp}_AI_ANALYSIS.txt`: Enhanced human-readable report
Analysis Prompt Structure: The AI receives the complete transcript and job description, then extracts:
```json
{
  "candidate_name": "string",
  "interest_level": "string",
  "readiness": "string",
  "experience_level": "string",
  "technical_skills": ["array"],
  "soft_skills": ["array"],
  "key_strengths": "string",
  "areas_for_improvement": "string",
  "overall_assessment": "string",
  "notable_quotes": ["array"]
}
```

Purpose: Optional web search capability using Tavily API
Features:
- Real-time information lookup during interviews
- Fact-checking capabilities
- Industry-specific information retrieval
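A thin wrapper over the search backend keeps this capability optional; the sketch below injects the search callable, so the tavily-python specifics (and the assumed result shape with `title`/`content` keys) stay out of the core logic:

```python
def lookup_fact(search_fn, query: str, max_results: int = 3) -> list:
    """Illustrative wrapper around a web-search callable.

    In the real system, search_fn would be backed by tavily-python's
    client; here it is any callable returning a list of result dicts
    with "title" and "content" keys (an assumed shape).
    """
    results = search_fn(query)[:max_results]
    return [f"{r['title']}: {r['content']}" for r in results]
```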
Environment Setup → Room Creation → Agent Initialization → Session Start
Welcome Message → Structured Questions → Follow-up Questions → Natural Conversation
End Interview Function → Data Saving → AI Analysis → File Generation → Session Cleanup
Basic Transcript → AI Analysis → Enhanced Reports → File Persistence
- LiveKit: Real-time communication platform
- Google Gemini: AI language model for conversation and analysis
- Google STT: Speech-to-text conversion
- Tavily: Web search API (optional)
- `livekit`: Real-time communication
- `google-genai`: Google AI integration
- `tavily-python`: Web search capabilities
- `asyncio`: Asynchronous programming
- `json`: Data serialization
- `datetime`: Timestamp management
- `dotenv`: Environment variable management
- `interview_{name}_{timestamp}_transcript.json`
  - Complete interview metadata
  - Raw conversation transcript
  - Interview summary data
- `interview_{name}_{timestamp}_summary.txt`
  - Human-readable interview summary
  - Chronological conversation log
  - Basic interviewer notes
- `interview_{name}_{timestamp}_AI_ANALYSIS.json`
  - Complete AI analysis data
  - Structured candidate assessment
  - Enhanced metadata
- `interview_{name}_{timestamp}_AI_ANALYSIS.txt`
  - Professional interview report
  - AI-generated insights
  - Structured assessment sections
  - Complete transcript with enhanced formatting
- `interview_analysis_failed_{timestamp}.txt`
  - Generated when AI analysis fails
  - Contains error details and timestamp
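The fallback path can be sketched like this; the filename pattern matches the one above, while the helper and the file's layout are illustrative:

```python
from datetime import datetime, timezone
from pathlib import Path

def write_analysis_failure(error: Exception, out_dir: Path = Path(".")) -> Path:
    """Record why AI analysis failed so the basic transcript still stands."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d_%H%M%S")
    path = out_dir / f"interview_analysis_failed_{stamp}.txt"
    path.write_text(
        f"AI analysis failed at {datetime.now(timezone.utc).isoformat()}\n"
        f"Error: {type(error).__name__}: {error}\n"
    )
    return path
```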
Create a .env file with required API keys and configuration.
```shell
pip install livekit-agents google-generativeai tavily-python python-dotenv
```

Ensure the LiveKit server is running and accessible.
The system generates join URLs for browser-based interview participation.
- AI analysis failure doesn't prevent basic transcript generation
- Missing API keys result in feature-specific warnings
- Session cleanup prevents resource leaks
- Comprehensive logging throughout the system
- Error tracking with stack traces
- Fallback file generation for failed operations
- Environment variable storage for sensitive data
- No hardcoded credentials in source code
- Local file storage for interview data
- UTC timestamps for consistency
- Structured data formats for easy processing
- Dynamic job description loading
- Position-specific question sets
- Industry-tailored analysis
- Sentiment analysis
- Communication pattern recognition
- Performance benchmarking
- ATS (Applicant Tracking System) integration
- Calendar scheduling
- Email notification system
- Live coaching suggestions
- Real-time performance metrics
- Dynamic question adaptation
```python
# Create interviewer agent
interviewer = InterviewAgent(jd=job_description)

# Start interview session
session = AgentSession(llm=model, stt=speech_to_text)
await session.start(room=room, agent=interviewer)

# With a specific candidate name
interviewer = InterviewAgent(name="John Doe", jd=job_description)

# End interview with custom notes
await interviewer.end_interview("Excellent technical skills, needs soft skill development")
```

This documentation provides a comprehensive overview of the LunarTech AI Interviewer System, covering both the technical implementation and practical usage aspects of the codebase.