beast

Behavioral analysis via self-supervised pretraining of transformers

beast is a package for pretraining vision transformers on unlabeled data to provide backbones for downstream tasks like pose estimation, action segmentation, and neural encoding.

Installation

Step 1: Install ffmpeg

First, check whether ffmpeg is already installed by running the following in a terminal:

ffmpeg -version

If not, install it (the command below is for Ubuntu/Debian):

sudo apt install ffmpeg

Step 2: Create a conda environment

First, install Anaconda.

Next, create and activate a conda environment:

conda create --yes --name beast python=3.10
conda activate beast

Step 3: Download the repo from GitHub and install

Move to your home directory (or wherever you would like to download the code) and install:

cd ~
git clone https://github.com/paninski-lab/beast
cd beast
pip install -e .

Usage

beast comes with a simple command line interface. To get more information, run

beast -h

Extract frames

Extract frames from a directory of videos to use as training data for beast.

beast extract --input <video_dir> --output <output_dir> [options]

Type "beast extract -h" in the terminal for details on the options.

Train a model

You will need to specify a config path; see the configs directory for examples.

beast train --config <config_path> [options]

Type "beast train -h" in the terminal for details on the options.

Run inference

Inference on a single video or a directory of videos:

beast predict --model <model_dir> --input <video_path> [options]

Inference on (possibly nested) directories of images:

beast predict --model <model_dir> --input <image_dir> [options]

Type "beast predict -h" in the terminal for details on the options.
