We present two libraries to help the broader community customize their language models: `tinker` and `tinker-cookbook`.

`tinker` is a training SDK for researchers and developers to fine-tune language models. You send API requests to us, and we handle the complexities of distributed training. `tinker-cookbook` includes realistic examples of fine-tuning language models. It builds on the Tinker API and provides common abstractions for fine-tuning language models.
- Obtain a Tinker API token and export it as the environment variable `TINKER_API_KEY`. You will only be able to do this after you have access to Tinker: sign up for the waitlist at thinkingmachines.ai/tinker. Once you have access, you can create an API key from your console at tinker-console.thinkingmachines.ai.
- Install the Tinker Python client via `pip install tinker`.
- We recommend installing `tinker-cookbook` in a virtual environment, created with either `conda` or `uv`. For running most examples, you can install it via `pip install -e .`. A quick sanity check of your setup is sketched below.
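The following is a minimal sanity check, assuming `tinker` is installed and `TINKER_API_KEY` is exported; it is an illustrative sketch rather than part of the library:

```python
import os

import tinker

# Fail early with a clear message if the API key has not been exported.
assert "TINKER_API_KEY" in os.environ, "Export TINKER_API_KEY before using Tinker."

# ServiceClient is the entry point used throughout this README; constructing it
# confirms that the client library is importable and picks up your credentials.
service_client = tinker.ServiceClient()
print("Created Tinker service client:", service_client)
```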
Refer to the docs to start with the basics. Here we introduce a few Tinker primitives, the basic components for fine-tuning LLMs:
```python
import tinker

# Connect to the Tinker service and create a LoRA training client for a base model
service_client = tinker.ServiceClient()
training_client = service_client.create_lora_training_client(
    base_model="meta-llama/Llama-3.2-1B", rank=32,
)

# Compute gradients on a batch of data, then apply an optimizer step
training_client.forward_backward(...)
training_client.optim_step(...)

# Checkpoint the training state, and restore it later to resume a run
training_client.save_state(...)
training_client.load_state(...)

# Export the current weights and sample from the fine-tuned model
sampling_client = training_client.save_weights_and_get_sampling_client(name="my_model")
sampling_client.sample(...)
```
See tinker_cookbook/recipes/sl_loop.py and tinker_cookbook/recipes/rl_loop.py for minimal examples of using these primitives to fine-tune LLMs.
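To give a feel for how these primitives compose, here is a rough sketch of a supervised training loop. The `types.Datum`/`types.ModelInput` construction, the `"cross_entropy"` loss name, `types.AdamParams`, `get_tokenizer()`, and the future-style `.result()` calls are assumptions made for illustration; defer to `tinker_cookbook/recipes/sl_loop.py` and the docs for the exact interface.

```python
import tinker
from tinker import types  # assumed location of Datum / ModelInput / AdamParams

service_client = tinker.ServiceClient()
training_client = service_client.create_lora_training_client(
    base_model="meta-llama/Llama-3.2-1B", rank=32,
)
tokenizer = training_client.get_tokenizer()  # assumed helper for the base model's tokenizer


def make_example(text: str):
    # Assumed Datum layout: next-token prediction with uniform loss weights.
    tokens = tokenizer.encode(text)
    return types.Datum(
        model_input=types.ModelInput.from_ints(tokens[:-1]),
        loss_fn_inputs={"weights": [1.0] * (len(tokens) - 1), "target_tokens": tokens[1:]},
    )


dataset = ["The capital of France is Paris.", "2 + 2 = 4."]
for step in range(10):
    batch = [make_example(text) for text in dataset]
    # Queue the gradient computation and the optimizer update for this step.
    fwd_bwd_future = training_client.forward_backward(batch, loss_fn="cross_entropy")
    optim_future = training_client.optim_step(types.AdamParams(learning_rate=1e-4))
    fwd_bwd_future.result()  # block until the step has been applied
    optim_future.result()

# Export the fine-tuned weights and get a client for sampling from them.
sampling_client = training_client.save_weights_and_get_sampling_client(name="my_sl_model")
```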
Besides these primitives, we also offer the Tinker Cookbook (a.k.a. this repo), a library offering a wide range of abstractions to help you customize training environments.
`tinker_cookbook/recipes/sl_basic.py` and `tinker_cookbook/recipes/rl_basic.py` contain minimal examples of configuring supervised learning and reinforcement learning. We also include a wide range of more sophisticated examples in the `tinker_cookbook/recipes/` folder:
- Math reasoning: improve LLM reasoning capability by rewarding it for answering math questions correctly.
- Preference learning: showcase a three-stage RLHF pipeline: 1) supervised fine-tuning, 2) learning a reward model, 3) RL against the reward model.
- Tool use: train LLMs to better use retrieval tools to answer questions more accurately.
- Prompt distillation: internalize long and complex instructions into LLMs.
- Multi-Agent: optimize LLMs to play against another LLM or themselves.
Each of these examples lives in its own subfolder, and its README.md walks you through the key implementation details, the commands to run it, and the expected performance.
The Tinker Cookbook includes several utilities. Here's a quick overview:
- `renderers` converts tokens from/to structured chat message objects (a rough sketch follows this list)
- `hyperparam_utils` helps calculate hyperparameters suitable for LoRA fine-tuning
- `evaluation` provides abstractions for evaluating Tinker models, and `inspect_evaluation` shows how to integrate with Inspect AI to make evaluating on standard benchmarks easy
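As a rough illustration of where `renderers` fits, here is a sketch of the round trip from chat messages to tokens and back. The method names in the comments are hypothetical placeholders rather than the actual `tinker_cookbook.renderers` API; check the module for the real interface.

```python
# Hypothetical sketch: the renderer method names in the comments below are
# placeholders, not the real tinker_cookbook.renderers API.
messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "What is the capital of France?"},
]

# A renderer turns structured messages like these into the token sequence the
# base model expects, and parses sampled tokens back into a message, e.g.:
#   prompt_tokens = renderer.build_generation_prompt(messages)
#   response = sampling_client.sample(prompt=prompt_tokens, ...)
#   reply = renderer.parse_response(response)
```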
This project is built in the spirit of open science and collaborative development. We believe that the best tools emerge through community involvement and shared learning.
We welcome PR contributions after our private beta is over. If you have any feedback, please email us at [email protected].