Charleshzhang/inference


MLPerf Inference is a benchmark suite for measuring how fast systems can run trained models in a variety of deployment scenarios.

Please see the MLPerf Inference benchmark paper for a detailed description of the benchmarks along with the motivation and guiding principles behind the benchmark suite.

About

Reference implementations of inference benchmarks


Languages

  • Python 73.4%
  • C++ 17.4%
  • Jupyter Notebook 3.1%
  • Shell 3.0%
  • CSS 2.1%
  • Dockerfile 0.5%
  • Other 0.5%