A comprehensive tool for evaluating code submissions in hackathons, coding competitions, and educational assessments.
- Multi-language Support: Evaluates code in Python, JavaScript/TypeScript, Java, C/C++, Go, Rust, and more
- Comprehensive Analysis:
  - Code quality and structure
  - Documentation completeness
  - Functionality implementation
  - Innovation and creativity
- Configurable Scoring: Customizable weights and thresholds
- Detailed Reports: JSON output with actionable feedback
- CLI Interface: Easy to integrate into automated workflows
Basic Evaluation:

```bash
python main.py /path/to/submission
```

With Custom Config:

```bash
python main.py /path/to/submission --config config.json
```

Save Results:

```bash
python main.py /path/to/submission --output results.json --verbose
```
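To drive the evaluator from another Python script instead of the shell, a thin wrapper around the CLI is enough. The sketch below uses only the documented flags; the structure of the resulting JSON depends on your configuration, so inspect `results.json` before relying on specific keys.

```python
import json
import subprocess
from pathlib import Path

def evaluate(submission_dir: str, output_path: str = "results.json") -> dict:
    """Run the evaluator CLI on one submission and return the parsed JSON report.

    Note: the CLI encodes the score band in its exit code (see the exit codes
    section below), so a non-zero exit does not necessarily mean a failure.
    """
    proc = subprocess.run(["python", "main.py", submission_dir, "--output", output_path])
    if proc.returncode == 3:  # documented evaluation-error code
        raise RuntimeError(f"Evaluation failed for {submission_dir}")
    return json.loads(Path(output_path).read_text())

if __name__ == "__main__":
    report = evaluate("/path/to/submission")
    print(report)  # inspect the actual JSON schema before relying on key names
```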
Evaluation criteria:

Code Quality:
- File structure and organization
- Code complexity and readability
- Naming conventions
- Comments and inline documentation
- Error handling

Documentation:
- README completeness
- Setup/installation instructions
- Additional documentation files
- Inline code documentation

Functionality:
- Code executability
- Test coverage
- Configuration management
- Dependency management

Innovation:
- Creative problem-solving approaches
- Additional features beyond requirements
- Use of advanced technologies/patterns
Rating thresholds:
- Excellent: 90-100%
- Good: 75-89%
- Satisfactory: 60-74%
- Needs Improvement: Below 60%
Customize evaluation criteria in `config.json`:

```json
{
  "weights": {
    "code_quality": 0.3,
    "documentation": 0.25,
    "functionality": 0.3,
    "innovation": 0.15
  },
  "thresholds": {
    "excellent": 90,
    "good": 75,
    "satisfactory": 60,
    "needs_improvement": 40
  }
}
```

Command-line options:

```
usage: main.py [-h] [--config CONFIG] [--output OUTPUT]
               [--problem-statement PROBLEM_STATEMENT] [--verbose]
               submission_path

positional arguments:
  submission_path       Path to the submission directory to evaluate

options:
  -h, --help            show this help message and exit
  --config CONFIG, -c CONFIG
                        Path to configuration file (JSON format)
  --output OUTPUT, -o OUTPUT
                        Output file for evaluation results (JSON format)
  --problem-statement PROBLEM_STATEMENT, -p PROBLEM_STATEMENT
                        Path to problem statement or PRD file
  --verbose, -v         Enable verbose output
```
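Conceptually, the weights combine the per-dimension scores into the overall score, and the thresholds map that score to a rating. The following is only an illustrative sketch using the example `config.json` above; the tool's actual aggregation logic may differ.

```python
def overall_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted combination of per-dimension scores on a 0-100 scale."""
    return sum(scores[name] * weights[name] for name in weights)

def rating(score: float, thresholds: dict[str, float]) -> str:
    """Return the highest rating band whose threshold the score meets."""
    for label in ("excellent", "good", "satisfactory"):
        if score >= thresholds[label]:
            return label
    return "needs_improvement"

# Illustrative values taken from the example config above; scores are made up.
weights = {"code_quality": 0.3, "documentation": 0.25, "functionality": 0.3, "innovation": 0.15}
thresholds = {"excellent": 90, "good": 75, "satisfactory": 60, "needs_improvement": 40}
scores = {"code_quality": 85.0, "documentation": 70.0, "functionality": 90.0, "innovation": 60.0}

total = overall_score(scores, weights)
print(total)                        # 79.0
print(rating(total, thresholds))    # "good"
```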
Sample evaluation report:

```
============================================================
AI JURY EVALUATION REPORT
============================================================
Submission: /path/to/submission
Timestamp: 2024-01-15T10:30:45.123456

OVERALL SCORE: 78.5/100.0 (78.5%)

DETAILED SCORES:
  Code Quality: 82.0/100
  Documentation: 75.0/100
  Functionality: 80.0/100
  Innovation: 70.0/100
```
CI integration (GitHub Actions step):

```yaml
- name: Evaluate Submission
  run: |
    python ai-jury/main.py ./submission --output evaluation.json
```

Batch evaluation:

```bash
#!/bin/bash
for submission in submissions/*/; do
  python main.py "$submission" --output "results/$(basename $submission).json"
done
```
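The same loop can be written in Python if you also want to aggregate results into a simple leaderboard. This is a sketch only: the `overall_score` key is an assumed name, so adapt it to the actual structure of your JSON output.

```python
import json
import subprocess
from pathlib import Path

results_dir = Path("results")
results_dir.mkdir(exist_ok=True)

leaderboard = []
for submission in sorted(Path("submissions").iterdir()):
    if not submission.is_dir():
        continue
    out_file = results_dir / f"{submission.name}.json"
    subprocess.run(["python", "main.py", str(submission), "--output", str(out_file)])
    data = json.loads(out_file.read_text())
    # "overall_score" is a hypothetical key name -- check your results.json schema.
    leaderboard.append((submission.name, data.get("overall_score", 0.0)))

for name, score in sorted(leaderboard, key=lambda item: item[1], reverse=True):
    print(f"{name}: {score}")
```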
Supported languages and file extensions:
- Python: `.py`
- JavaScript/TypeScript: `.js`, `.ts`
- Java: `.java`
- C/C++: `.c`, `.cpp`
- Go: `.go`
- Rust: `.rs`
- Ruby: `.rb`
- PHP: `.php`
- C#: `.cs`
- Kotlin: `.kt`
- Swift: `.swift`
Exit codes:
- 0: Excellent submission (≥70%)
- 1: Good submission (50-69%)
- 2: Needs improvement (<50%)
- 3: Evaluation error
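In an automated pipeline, the exit code alone can gate a submission. A minimal sketch (treating codes 0 and 1 as passing is just an example policy):

```python
import subprocess
import sys

# Exit codes 0 and 1 indicate acceptable submissions, 2 means the submission
# needs improvement, and 3 signals an evaluation error (see the list above).
proc = subprocess.run(["python", "main.py", sys.argv[1], "--output", "results.json"])
sys.exit(0 if proc.returncode in (0, 1) else 1)
```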
MIT License - see LICENSE file for details.
To contribute:
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
For issues and feature requests, please open an issue on the project repository.