
Evaluation Setup #38

@alon-albalak

Description

We need a few items to evaluate models on this benchmark:

  1. Run a PRM on each example in the benchmark and save the predictions
  2. Compare the predictions with the ground truth and compute a score (e.g., accuracy); see the sketch after this list
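As a rough sketch of step 2, assuming predictions are saved as JSONL records with hypothetical `prediction` and `label` fields (neither name is fixed yet), the scoring step could look something like:

```python
import json

def score_predictions(path: str, threshold: float = 0.5) -> float:
    """Compare saved PRM predictions against ground-truth labels and return accuracy."""
    correct, total = 0, 0
    with open(path) as f:
        for line in f:
            record = json.loads(line)
            # Binarize a continuous PRM score; generative models could store 0/1 directly.
            predicted = int(float(record["prediction"]) >= threshold)
            correct += int(predicted == int(record["label"]))
            total += 1
    return correct / total if total else 0.0
```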

For step 1, we really need two separate frameworks (a rough interface sketch follows this list):

  1. Discriminative models (models that directly output a score)
  2. Generative models (models that are sampled from to generate the final decision)
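One way to keep both frameworks behind a common interface is sketched below. The `predict` and `generate` methods on the wrapped models are assumptions for illustration, not anything we have in the repo yet, and the yes/no prompt and majority-vote aggregation for the generative case are placeholders:

```python
from abc import ABC, abstractmethod


class PRMEvaluator(ABC):
    """Common interface: score each step of a candidate solution."""

    @abstractmethod
    def score_steps(self, problem: str, steps: list[str]) -> list[float]:
        ...


class DiscriminativePRM(PRMEvaluator):
    """Wraps a model that directly outputs a score for each step."""

    def __init__(self, model):
        self.model = model  # assumed to expose predict(problem, step) -> float

    def score_steps(self, problem: str, steps: list[str]) -> list[float]:
        return [self.model.predict(problem, step) for step in steps]


class GenerativePRM(PRMEvaluator):
    """Samples yes/no judgments from a generative model and aggregates them."""

    def __init__(self, model, num_samples: int = 8):
        self.model = model  # assumed to expose generate(prompt) -> str
        self.num_samples = num_samples

    def score_steps(self, problem: str, steps: list[str]) -> list[float]:
        scores = []
        for step in steps:
            prompt = (
                f"Problem: {problem}\nStep: {step}\n"
                "Is this step correct? Answer yes or no."
            )
            samples = [self.model.generate(prompt) for _ in range(self.num_samples)]
            # The fraction of sampled decisions that say "yes" becomes the step score.
            scores.append(
                sum(s.strip().lower().startswith("yes") for s in samples) / len(samples)
            )
        return scores
```

Either evaluator could then be run over the benchmark and its per-example predictions written to disk for the scoring step above.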
