
PARCO


Code repository for "PARCO: Learning Parallel Autoregressive Policies for Efficient Multi-Agent Combinatorial Optimization"

Figure: autoregressive (AR) vs. parallel autoregressive (PAR) decoding

Figure: overview of the PARCO model

🚀 Usage

Installation

We use uv for fast installation and dependency management:

uv venv
source .venv/bin/activate
uv sync --all-extras

To download the data and checkpoints from HuggingFace automatically, you can use:

python scripts/download_hf.py
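
Alternatively, the files can be fetched manually with the huggingface_hub library. The snippet below is a minimal sketch: the repo_id values are placeholders, and the actual dataset and checkpoint repository IDs are the ones used by scripts/download_hf.py.

# Manual download sketch; the repo IDs below are placeholders, not the official ones
from huggingface_hub import snapshot_download

snapshot_download(repo_id="<parco-dataset-id>", repo_type="dataset", local_dir="data/")
snapshot_download(repo_id="<parco-checkpoints-id>", repo_type="model", local_dir="checkpoints/")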

Quickstart Notebooks

We provide example notebooks for each problem that can be trained in under two minutes on consumer hardware; you can find them in the examples/ folder.
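
As a rough illustration of what such a quickstart looks like, here is a minimal training sketch. It assumes an RL4CO-style training loop; the parco import paths and class names (HCVRPEnv, PARCOModule) are hypothetical placeholders, so please refer to the notebooks for the actual API.

# Minimal training sketch (class names are illustrative placeholders; see the notebooks for the real API)
from rl4co.utils.trainer import RL4COTrainer  # PARCO builds on the RL4CO framework
from parco.envs import HCVRPEnv               # hypothetical import path
from parco.models import PARCOModule          # hypothetical import path

env = HCVRPEnv(num_loc=60, num_agents=3)         # small instances for a quick run
model = PARCOModule(env)                         # policy + RL algorithm wrapper
trainer = RL4COTrainer(max_epochs=1, devices=1)  # a single epoch is enough for a smoke test
trainer.fit(model)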

Train your own model

You can train your own model using the train.py script. For example, to train a model for the HCVRP problem, you can run:

python train.py experiment=hcvrp

You can change the experiment parameter to omdcpdp or ffsp to train a model for the OMDCPDP or FFSP problem, respectively.
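
Additional options can be overridden on the command line. For example, assuming the configs follow the usual RL4CO-style Hydra layout (the exact keys may differ in the configs/ folder, so treat this as illustrative):

python train.py experiment=omdcpdp seed=1234 trainer.max_epochs=10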

Note on legacy FFSP code: our initial FFSP implementation was not yet integrated with RL4CO, so we keep it in the parco/tasks/ffsp_old folder in case you still want to use it.

Testing

You may run the test.py script to evaluate the model, e.g. with greedy decoding:

python test.py --problem hcvrp --decode_type greedy --batch_size 128

(Note: we measure runtime on single instances, i.e., batch size 1, but a larger batch size makes the overall evaluation faster.) Alternatively, you can evaluate with sampling:

python test.py --problem hcvrp --decode_type sampling --batch_size 1 --sample_size 1280

Other scripts

  • Data generation: you can re-generate the datasets manually (reproducibly, via fixed random seeds) with python scripts/generate_data.py.

  • OR-Tools: we additionally include a script to solve the problems with OR-Tools via python scripts/run_ortools.py; a sketch of such a baseline is shown after this list.
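
To give an idea of what an OR-Tools routing baseline involves, here is a generic sketch on a toy capacitated instance. This is illustrative only and not the repository's run_ortools.py; the instance data is made up.

# Generic OR-Tools routing sketch on a toy capacitated instance (illustrative only)
from ortools.constraint_solver import pywrapcp, routing_enums_pb2

distance_matrix = [      # depot (node 0) plus 4 customers
    [0, 2, 9, 10, 7],
    [2, 0, 6, 4, 3],
    [9, 6, 0, 8, 5],
    [10, 4, 8, 0, 6],
    [7, 3, 5, 6, 0],
]
demands = [0, 1, 1, 2, 4]
vehicle_capacities = [5, 5]
num_vehicles, depot = 2, 0

manager = pywrapcp.RoutingIndexManager(len(distance_matrix), num_vehicles, depot)
routing = pywrapcp.RoutingModel(manager)

def distance_cb(from_index, to_index):
    # Convert routing indices to node indices and look up the distance
    return distance_matrix[manager.IndexToNode(from_index)][manager.IndexToNode(to_index)]

transit_idx = routing.RegisterTransitCallback(distance_cb)
routing.SetArcCostEvaluatorOfAllVehicles(transit_idx)

def demand_cb(from_index):
    return demands[manager.IndexToNode(from_index)]

demand_idx = routing.RegisterUnaryTransitCallback(demand_cb)
routing.AddDimensionWithVehicleCapacity(demand_idx, 0, vehicle_capacities, True, "Capacity")

params = pywrapcp.DefaultRoutingSearchParameters()
params.first_solution_strategy = routing_enums_pb2.FirstSolutionStrategy.PATH_CHEAPEST_ARC
solution = routing.SolveWithParameters(params)
print("objective:", solution.ObjectiveValue())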

🤩 Citation

If you find PARCO valuable for your research or applied projects, please consider citing our paper:

@article{berto2024parco,
    title={{PARCO: Learning Parallel Autoregressive Policies for Efficient Multi-Agent Combinatorial Optimization}},
    author={Federico Berto and Chuanbo Hua and Laurin Luttmann and Jiwoo Son and Junyoung Park and Kyuree Ahn and Changhyun Kwon and Lin Xie and Jinkyoo Park},
    year={2024},
    journal={arXiv preprint arXiv:2409.03811},
    note={\url{https://github.com/ai4co/parco}}
}

We would also be happy if you cite the RL4CO framework, which we used to build PARCO:

@article{berto2024rl4co,
    title={{RL4CO: an Extensive Reinforcement Learning for Combinatorial Optimization Benchmark}},
    author={Federico Berto and Chuanbo Hua and Junyoung Park and Laurin Luttmann and Yining Ma and Fanchen Bu and Jiarui Wang and Haoran Ye and Minsu Kim and Sanghyeok Choi and Nayeli Gast Zepeda and Andr\'e Hottung and Jianan Zhou and Jieyi Bi and Yu Hu and Fei Liu and Hyeonah Kim and Jiwoo Son and Haeyeon Kim and Davide Angioni and Wouter Kool and Zhiguang Cao and Jie Zhang and Kijung Shin and Cathy Wu and Sungsoo Ahn and Guojie Song and Changhyun Kwon and Lin Xie and Jinkyoo Park},
    year={2024},
    journal={arXiv preprint arXiv:2306.17100},
    note={\url{https://github.com/ai4co/rl4co}}
}
