This repository contains code for our paper, "Toward In-Context Teaching: Adapting Examples to Students' Misconceptions". If you use this code, please cite:
@inproceedings{ross2024incontext,
    title = "Toward In-Context Teaching: Adapting Examples to Students' Misconceptions",
    author = "Alexis Ross and Jacob Andreas",
    booktitle = "ACL 2024",
    publisher = "Association for Computational Linguistics",
    year = "2024",
    url = "https://arxiv.org/abs/2405.04495",
}
Clone the repository.
git clone https://github.com/alexisjihyeross/adaptive_teaching
cd adaptive_teaching
Create a Conda environment.
conda create -n pedagogy python=3.7
Activate the environment.
conda activate pedagogy
Install the requirements.
pip3 install -r requirements.txt
Set Environment Variables
Experiments with GPT-based models require setting OPENAI environment variables.
export OPENAI_API_KEY={KEY}
export OPENAI_ORGANIZATION={ORGANIZATION}
export PYTHONPATH=./
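To fail fast when credentials are missing, you can check the variables from Python before launching long-running jobs. This is a minimal, hypothetical sanity check, not part of the repository:

import os

# Hypothetical helper: abort early if OpenAI credentials are unset.
for var in ("OPENAI_API_KEY", "OPENAI_ORGANIZATION"):
    if not os.environ.get(var):
        raise EnvironmentError(f"{var} is not set; GPT-based experiments will fail.")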
Run Experiments
The scripts below contain code for evaluating AToM, GPT4-based teachers, and other baselines with the synthetic learners in the AdapT evaluation framework.
Functions: scripts/run_functions.sh
Fractions: scripts/run_fractions.sh
Verbs: scripts/run_verbs.sh
For example, to run experiments for functions, you could use the following command:
bash scripts/run_functions.sh
The code defaults to logging with wandb. Set the WANDB_PROJECT variable in these scripts to determine which wandb project results are logged to.
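Note that wandb also reads the project name from the WANDB_PROJECT environment variable when none is passed to wandb.init(). An illustrative sketch of that behavior (not the repository's actual logging code; the project name and metric below are placeholders):

import os
import wandb

# wandb.init() falls back to the WANDB_PROJECT environment variable
# when no project argument is given.
os.environ.setdefault("WANDB_PROJECT", "pedagogy")  # placeholder project name
run = wandb.init()
run.log({"student_accuracy": 0.0})  # placeholder metric
run.finish()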
View Results
You can use the following command to download results from wandb:
python src/analyze.py --entity ${ENTITY} --project ${PROJECT}
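If you prefer to pull run data directly rather than through src/analyze.py, wandb's public API supports this. A minimal sketch (the entity and project names are placeholders):

import pandas as pd
import wandb

api = wandb.Api()
runs = api.runs("my-entity/my-project")  # replace with your entity/project

rows = []
for run in runs:
    # run.config holds hyperparameters; run.summary holds final logged metrics.
    rows.append({"name": run.name, **run.config, **run.summary._json_dict})

df = pd.DataFrame(rows)
print(df.head())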
Run Human Experiments
The script scripts/run_human.sh runs a server for human experiments.
By default, it runs the experiments in the paper: 22 experimental conditions (11 target concepts, 2 student types), 5 seeds each, for 3 different teachers: Random, AToM, and GPT4.
Results are saved locally to results/human/experiments.
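To inspect what a run produced, a minimal directory listing suffices (the file layout inside results/human/experiments is an assumption):

from pathlib import Path

# List every file saved under the human-experiments results directory.
for path in sorted(Path("results/human/experiments").rglob("*")):
    if path.is_file():
        print(path)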