Reasoning Gym is a community-created Python library of procedural dataset generators and algorithmically verifiable reasoning environments for training reasoning models with reinforcement learning (RL). The goal is to generate virtually infinite training data with adjustable complexity.
It currently provides more than 100 tasks over many domains, including but not limited to algebra, arithmetic, computation, cognition, geometry, graph theory, logic, and many common games.
Some tasks have a single correct answer, while others, such as Rubik's Cube and Countdown, have many correct solutions. To support this, we provide a standard interface for procedurally verifying solutions.
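For example, a multi-solution task such as Countdown is scored through the same interface (a minimal sketch; the `countdown` dataset identifier and the placeholder expression below are assumptions, not taken from the quick-start example further down):

```python
import reasoning_gym

# Countdown entries admit many valid solutions; score_answer verifies whatever
# expression the model proposes instead of comparing against a single string.
data = reasoning_gym.create_dataset("countdown", size=3, seed=7)
for entry in data:
    print(entry["question"])
    # "1 + 2 * 3" is only a placeholder answer for illustration
    print("score:", data.score_answer(answer="1 + 2 * 3", entry=entry))
```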
In GALLERY.md, you can find example outputs of all datasets available in `reasoning-gym`.
The `reasoning-gym` package requires Python >= 3.10.
Install the latest published package from PyPI via `pip`:

```
pip install reasoning-gym
```
Note that this project is currently under active development, and the version published on PyPI may be a few days behind `main`.
For development setup, see CONTRIBUTING.md.
```python
import reasoning_gym

data = reasoning_gym.create_dataset('leg_counting', size=10, seed=42)
for i, x in enumerate(data):
    print(f'{i}: q="{x["question"]}", a="{x["answer"]}"')
    print('metadata:', x['metadata'])
    # use the dataset's `score_answer` method for algorithmic verification
    assert data.score_answer(answer=x['answer'], entry=x) == 1.0
```
Output:
0: q="How many legs are there in total if you have 1 sea slug, 1 deer?", a="4"
metadata: {'animals': {'sea slug': 1, 'deer': 1}, 'total_legs': 4}
1: q="How many legs are there in total if you have 2 sheeps, 2 dogs?", a="16"
metadata: {'animals': {'sheep': 2, 'dog': 2}, 'total_legs': 16}
2: q="How many legs are there in total if you have 1 crab, 2 lobsters, 1 human, 1 cow, 1 bee?", a="42"
...
Instructions for running the evaluation scripts are provided in eval/README.md.
Evaluation results of different reasoning models will be tracked in the reasoning-gym-eval repo.
The `training/` directory has full details of the training runs we carried out with RG for the paper. In our experiments, we utilise custom Dataset code to dynamically create RG samples at runtime, and to access the RG scoring function for use as a training reward. See `training/README.md` to reproduce our runs.
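As a rough illustration of that pattern (a sketch only, not the code in `training/`; the wrapper class and its method names are hypothetical, and it assumes RG dataset objects support `len()` and indexing):

```python
import reasoning_gym

class RGRewardDataset:
    """Hypothetical wrapper: generates RG samples at runtime and reuses the
    dataset's own verifier as the RL reward function."""

    def __init__(self, name: str, size: int, seed: int, **config):
        # extra keyword arguments configure task parameters/difficulty
        self.data = reasoning_gym.create_dataset(name, size=size, seed=seed, **config)

    def __len__(self) -> int:
        return len(self.data)

    def __getitem__(self, idx: int) -> dict:
        entry = self.data[idx]
        return {"prompt": entry["question"], "entry": entry}

    def reward(self, response: str, entry: dict) -> float:
        # score_answer returns 1.0 for a correct answer, lower otherwise
        return self.data.score_answer(answer=response, entry=entry)

# usage sketch
ds = RGRewardDataset("leg_counting", size=1000, seed=0)
sample = ds[0]
print(ds.reward("4", sample["entry"]))
```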
For a more plug-and-play experience, it may be easier to build a dataset ahead of time. See `scripts/hf_dataset/` for a simple script allowing generation of RG data and conversion to a HuggingFace dataset. To use the script, define your dataset configurations in the YAML file. You can find a list of tasks and configurable parameters in the dataset gallery. Then run `save_hf_dataset.py` with the desired arguments.
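Alternatively, if you would rather stay in Python than edit the YAML, a dataset with the same row layout can be built by hand (a sketch; it assumes the HuggingFace `datasets` package is installed and stores metadata as a plain column, whereas the real script may serialise it differently):

```python
import reasoning_gym
from datasets import Dataset

rg_data = reasoning_gym.create_dataset("leg_counting", size=1000, seed=0)

# mirror the question/answer/metadata row layout described below;
# if the nested metadata causes schema issues, store it as a JSON string instead
rows = [
    {"question": x["question"], "answer": x["answer"], "metadata": x["metadata"]}
    for x in rg_data
]

hf_dataset = Dataset.from_list(rows)
hf_dataset.save_to_disk("rg_leg_counting")  # or hf_dataset.push_to_hub(...)
```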
`save_hf_dataset.py` saves each dataset entry as a row with `question`, `answer`, and `metadata` columns. The RG scoring functions expect the entry object from each row along with the model response in order to compute reward values. Calling the scoring function is therefore simple:
```python
from reasoning_gym import get_score_answer_fn

for entry in dataset:
    model_response = generate_response(entry["question"])
    rg_score_fn = get_score_answer_fn(entry["metadata"]["source_dataset"])
    score = rg_score_fn(model_response, entry)
    # do something with the score...
```
Please see CONTRIBUTING.md.
If you have ideas for dataset generators, please create an issue here or contact us in the `#reasoning-gym` channel of the GPU-Mode Discord server.
The following is a list of awesome projects building on top of Reasoning Gym:
- Verifiers: Reinforcement Learning with LLMs in Verifiable Environments
- (NVIDIA) ProRL: Prolonged Reinforcement Learning Expands Reasoning Boundaries in Large Language Models
- Atropos - Nous Research's LLM RL Gym
If you use this library in your research, please cite the paper:
```bibtex
@misc{stojanovski2025reasoninggymreasoningenvironments,
      title={REASONING GYM: Reasoning Environments for Reinforcement Learning with Verifiable Rewards},
      author={Zafir Stojanovski and Oliver Stanley and Joe Sharratt and Richard Jones and Abdulhakeem Adefioye and Jean Kaddour and Andreas Köpf},
      year={2025},
      eprint={2505.24760},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2505.24760},
}
```