A framework for modelling game theory problems and applying reinforcement learning to optimise solutions. It is designed to help model problems involving many players. It is a rather low-level modelling framework, and in many cases a Python equivalent would be more convenient; however, this one may be helpful if you want the Rust compiler to help you develop reliable code.
- amfiteatr_core (github) - crate with core traits and generic implementations, without reinforcement learning.
- amfiteatr_rl (github) - crate extending the core features with an interface and simple implementations of reinforcement learning (using neural networks backed by Torch via tch).
- amfiteatr_net_ext (github) - currently an early proof of concept for using TCP sockets to provide communication between entities in a game model.
- amfiteatr_classic (github) - crate providing structures for simulating classic game theory games (like the prisoner's dilemma).
- amfiteatr_examples - repository with some examples of using the library. Hopefully it will be expanded in the future.
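For a typical project you would pull in the core crate and, optionally, the reinforcement learning extension as dependencies. Below is a minimal sketch of a `Cargo.toml` dependency section; the version requirements are placeholders, so check crates.io for the current releases.

```toml
[dependencies]
# Core traits and generic implementations (no reinforcement learning).
amfiteatr_core = "*"   # placeholder version requirement; check crates.io
# Reinforcement learning interface and simple implementations (needs libtorch via tch).
amfiteatr_rl = "*"     # placeholder version requirement; check crates.io
```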
Since version 1.90, Rust by default uses the lld linker (rust-lld) on x86_64 Linux, which cannot be used with libtorch.
To use libtorch one needs to use ld; therefore the crates contain .cargo/config.toml files to force the usage of ld.
If you include these crates in your projects, you may need to do the same at the workspace level, or at the global level (~/.cargo/config.toml).
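As a hedged example, a .cargo/config.toml along the following lines should opt out of rust-lld and fall back to the system ld. The `-C linker-features=-lld` opt-out is the one documented for the Rust 1.90 linker change; check the config files shipped in the crates themselves for the exact flags they use.

```toml
# Example .cargo/config.toml (workspace level or ~/.cargo/config.toml).
# Opts out of the rust-lld default so the system ld is used instead,
# which is required when linking against libtorch.
[target.x86_64-unknown-linux-gnu]
rustflags = ["-C", "linker-features=-lld"]
```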
This is my education and research project. Many elements will change or vanish in the future, and breaking changes may occur often. I will be adding features and documentation over time, and I will try to simplify interfaces that currently seem inconvenient.
TL;DR It is at an early and unstable stage.
Currently I am developing some projects using this library that show its current possibilities:
- brydz_model - simulation and reinforcement learning model for the contract bridge card game. Can be used as an example of implementing a 4-player game.
- brydz_dd - early project of a double dummy solver for contract bridge (analysis of optimal play with all cards face up). Warning: it uses alpha-beta algorithm variants, and at the current level of optimisation it cannot be used to solve full 52-card problems.