
AI Jury 🤖⚖️

A comprehensive tool for evaluating code submissions in hackathons, coding competitions, and educational assessments.

Features

  • Multi-language Support: Evaluates code in Python, JavaScript/TypeScript, Java, C/C++, Go, Rust, and more
  • Comprehensive Analysis:
    • Code quality and structure
    • Documentation completeness
    • Functionality implementation
    • Innovation and creativity
  • Configurable Scoring: Customizable weights and thresholds
  • Detailed Reports: JSON output with actionable feedback
  • CLI Interface: Easy to integrate into automated workflows

Quick Start

  1. Basic Evaluation:

    python main.py /path/to/submission
  2. With Custom Config:

    python main.py /path/to/submission --config config.json
  3. Save Results:

    python main.py /path/to/submission --output results.json --verbose

Evaluation Criteria

Code Quality (30% weight)

  • File structure and organization
  • Code complexity and readability
  • Naming conventions
  • Comments and inline documentation
  • Error handling

Documentation (25% weight)

  • README completeness
  • Setup/installation instructions
  • Additional documentation files
  • Inline code documentation

Functionality (30% weight)

  • Code executability
  • Test coverage
  • Configuration management
  • Dependency management

Innovation (15% weight)

  • Creative problem-solving approaches
  • Additional features beyond requirements
  • Use of advanced technologies/patterns

Scoring System

  • Excellent: 90-100%
  • Good: 75-89%
  • Satisfactory: 60-74%
  • Needs Improvement: Below 60%

Configuration

Customize evaluation criteria in config.json:

{
  "weights": {
    "code_quality": 0.3,
    "documentation": 0.25,
    "functionality": 0.3,
    "innovation": 0.15
  },
  "thresholds": {
    "excellent": 90,
    "good": 75,
    "satisfactory": 60,
    "needs_improvement": 40
  }
}
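
The tool's internal scoring code isn't reproduced here, but conceptually the overall score is the weighted sum of the four per-criterion scores, graded against the configured thresholds. The sketch below is a minimal illustration of that idea under those assumptions; function and key names are illustrative, not the project's API.

import json

def overall_score(scores: dict, config: dict) -> float:
    """Combine per-criterion scores (0-100) into a weighted overall score."""
    weights = config["weights"]
    return sum(scores[name] * weight for name, weight in weights.items())

def grade(score: float, config: dict) -> str:
    """Map a 0-100 score onto the configured grade bands."""
    thresholds = config["thresholds"]
    if score >= thresholds["excellent"]:
        return "Excellent"
    if score >= thresholds["good"]:
        return "Good"
    if score >= thresholds["satisfactory"]:
        return "Satisfactory"
    return "Needs Improvement"

config = json.load(open("config.json"))
scores = {"code_quality": 82.0, "documentation": 75.0,
          "functionality": 80.0, "innovation": 70.0}
total = overall_score(scores, config)
# 0.3*82 + 0.25*75 + 0.3*80 + 0.15*70 = 77.85
print(f"{total:.1f}/100 -> {grade(total, config)}")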

Command Line Options

usage: main.py [-h] [--config CONFIG] [--output OUTPUT] 
               [--problem-statement PROBLEM_STATEMENT] [--verbose]
               submission_path

positional arguments:
  submission_path       Path to the submission directory to evaluate

options:
  -h, --help            show this help message and exit
  --config CONFIG, -c CONFIG
                        Path to configuration file (JSON format)
  --output OUTPUT, -o OUTPUT
                        Output file for evaluation results (JSON format)
  --problem-statement PROBLEM_STATEMENT, -p PROBLEM_STATEMENT
                        Path to problem statement or PRD file
  --verbose, -v         Enable verbose output

Example Output

============================================================
AI JURY EVALUATION REPORT
============================================================
Submission: /path/to/submission
Timestamp: 2024-01-15T10:30:45.123456

OVERALL SCORE: 78.5/100.0 (78.5%)

DETAILED SCORES:
  Code Quality:    82.0/100
  Documentation:   75.0/100
  Functionality:   80.0/100
  Innovation:      70.0/100

Integration Examples

GitHub Actions

- name: Evaluate Submission
  run: |
    python ai-jury/main.py ./submission --output evaluation.json

Batch Processing

#!/bin/bash
# Evaluate every submission directory and write one report per entry.
mkdir -p results
for submission in submissions/*/; do
    python main.py "$submission" --output "results/$(basename "$submission").json"
done

Supported File Types

  • Python: .py
  • JavaScript/TypeScript: .js, .ts
  • Java: .java
  • C/C++: .c, .cpp
  • Go: .go
  • Rust: .rs
  • Ruby: .rb
  • PHP: .php
  • C#: .cs
  • Kotlin: .kt
  • Swift: .swift
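
Language detection presumably keys off these file extensions. The sketch below is an illustrative extension-to-language mapping under that assumption; it is not the tool's actual lookup table.

from pathlib import Path

# Illustrative extension-to-language map (assumed, not the project's own table).
LANGUAGE_BY_EXTENSION = {
    ".py": "Python", ".js": "JavaScript", ".ts": "TypeScript",
    ".java": "Java", ".c": "C", ".cpp": "C++", ".go": "Go",
    ".rs": "Rust", ".rb": "Ruby", ".php": "PHP", ".cs": "C#",
    ".kt": "Kotlin", ".swift": "Swift",
}

def detect_languages(submission_path: str) -> set[str]:
    """Collect the languages present in a submission directory."""
    return {
        LANGUAGE_BY_EXTENSION[p.suffix]
        for p in Path(submission_path).rglob("*")
        if p.suffix in LANGUAGE_BY_EXTENSION
    }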

Exit Codes

  • 0: Excellent submission (≥70%)
  • 1: Good submission (50-69%)
  • 2: Needs improvement (<50%)
  • 3: Evaluation error
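
Because the exit code encodes the grade band, automated pipelines can branch on the result without parsing the JSON report. A minimal sketch (the label mapping simply mirrors the table above; the CI policy at the end is an example, not part of the tool):

import subprocess
import sys

# Run the evaluator and branch on its exit code (see the table above).
EXIT_LABELS = {0: "excellent", 1: "good", 2: "needs improvement", 3: "evaluation error"}

result = subprocess.run(
    [sys.executable, "main.py", "./submission", "--output", "evaluation.json"]
)
label = EXIT_LABELS.get(result.returncode, "unknown")
print(f"AI Jury finished with exit code {result.returncode} ({label})")

# Example policy: fail the CI job for anything below the 'good' band.
if result.returncode >= 2:
    sys.exit(1)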

License

MIT License - see LICENSE file for details.

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests if applicable
  5. Submit a pull request

Support

For issues and feature requests, please open an issue on the project repository.
