In our paper SkeletonDiffusion, nonisotropic diffusion is performed by extracting correlations from the adjacency matrix of the human skeleton. If you are working on a problem described by an adjacency matrix, or if the correlations between the components of your problem (for instance, human body joints) are available, you can try training your diffusion model with our nonisotropic Gaussian diffusion implementation.
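
Below is a minimal sketch of the general idea: correlated (nonisotropic) Gaussian noise derived from a skeleton adjacency matrix. The covariance construction (blending the symmetrically normalized adjacency with the identity, weighted by a hypothetical `alpha`) is an illustrative assumption, not necessarily the exact covariance used in SkeletonDiffusion; see the paper for the precise choice.

```python
import torch

def covariance_from_adjacency(adj: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    # Illustrative assumption: blend the normalized adjacency with the identity
    # so neighboring joints are correlated and the matrix stays positive definite.
    n = adj.shape[0]
    adj = adj + torch.eye(n)                       # add self-loops
    d_inv_sqrt = torch.diag(adj.sum(dim=-1).rsqrt())
    adj_norm = d_inv_sqrt @ adj @ d_inv_sqrt       # eigenvalues in (-1, 1]
    return alpha * torch.eye(n) + (1 - alpha) * adj_norm

# Toy kinematic chain with 4 joints: 0-1-2-3.
adj = torch.zeros(4, 4)
for i, j in [(0, 1), (1, 2), (2, 3)]:
    adj[i, j] = adj[j, i] = 1.0

sigma = covariance_from_adjacency(adj)
chol = torch.linalg.cholesky(sigma)                # sigma = chol @ chol.T
z = torch.randn(4, 3)                              # isotropic noise for 3D joints
eps = chol @ z                                     # noise correlated across joints
print(eps.shape)                                   # torch.Size([4, 3])
```

Since `eps = chol @ z` has covariance `chol @ chol.T = sigma`, joints that are adjacent in the skeleton receive correlated noise.
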
We follow the same dataset creation pipeline as https://github.com/BarqueroGerman/BeLFusion.

Download the *SMPL+H G* files for **22 datasets**: ACCAD, BMLhandball, BMLmovi, BMLrub, CMU, DanceDB, DFaust, EKUT, EyesJapanDataset, GRAB, HDM05, HUMAN4D, HumanEva, KIT, MoSh, PosePrior (MPI_Limits), SFU, SOMA, SSM, TCDHands, TotalCapture, and Transitions. Then, move the **tar.bz2** files to `./datasets/raw/AMASS` (DO NOT extract them).
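
As a quick sanity check, the snippet below (a hedged sketch: it assumes each archive's filename contains the corresponding dataset name, with PosePrior distributed as MPI_Limits) verifies that all 22 archives are in place:

```python
from pathlib import Path

# The 22 AMASS subsets listed above (PosePrior is distributed as MPI_Limits).
expected = [
    "ACCAD", "BMLhandball", "BMLmovi", "BMLrub", "CMU", "DanceDB", "DFaust",
    "EKUT", "EyesJapanDataset", "GRAB", "HDM05", "HUMAN4D", "HumanEva", "KIT",
    "MoSh", "MPI_Limits", "SFU", "SOMA", "SSM", "TCDHands", "TotalCapture",
    "Transitions",
]
archives = [p.name for p in Path("./datasets/raw/AMASS").glob("*.tar.bz2")]
missing = [d for d in expected if not any(d in name for name in archives)]
print("missing archives:", missing or "none")
```
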
Now, download the 'DMPLs for AMASS' from [here](https://smpl.is.tue.mpg.de), and the 'Extended SMPL+H model' from [here](https://mano.is.tue.mpg.de/). Move both extracted folders (dmpls, smplh) to `./datasets/annotations/AMASS/bodymodels`. Then, run:

To train our Nonisotropic diffusion on top of the previously trained latent space and autoencoder, you will need a 48GB GPU (A40).

### About Training Time
The diffusion part of the training is quite slow, due to the necessity of encoding and decoding latent embeddings via the recurrent autoencoder. If you want to reduce the training time, you can train less performant models by relaxing the diffusion training objective (see Appendix E.4 and the results for AMASS below).

| Model | Training Time (AMASS) | APD $\uparrow$ | CMD $\downarrow$ | str mean $\downarrow$ | str RMSE $\downarrow$ |
| --- | --- | --- | --- | --- | --- |
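
For intuition, here is a minimal sketch of the relaxed objective (assumed semantics, with hypothetical helper names `denoise` and `decode` that are not the codebase's actual API): draw _k_ candidate predictions and backpropagate the loss only through the best one, where "best" can be measured either on the decoded motions or, as in _k=50 latent argmin_, directly in latent space.

```python
import torch

def relaxed_loss(denoise, decode, x_t, target_latent, target_motion,
                 k: int = 50, similarity_space: str = "latent_space"):
    # Hypothetical sketch: draw k stochastic candidates and keep the best loss.
    losses = []
    for _ in range(k):
        pred_latent = denoise(x_t)  # one stochastic candidate (fresh noise each call)
        if similarity_space == "latent_space":
            losses.append(((pred_latent - target_latent) ** 2).mean())
        else:  # compare the decoded motions instead
            losses.append(((decode(pred_latent) - target_motion) ** 2).mean())
    # Gradients flow only through the argmin candidate; k=1 disables the relaxation.
    return torch.stack(losses).min()
```
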
To train the model with _k=1_ (without loss relaxation), append ```model.train_pick_best_sample_among_k=1``` to your training arguments:
```bash
python train_diffusion.py model.train_pick_best_sample_among_k=1 <your training arguments>
```
To train the model by choosing the sample to backpropagate the loss through in latent space with _k=50_ (_k=50 latent argmin_), append ```model.similarity_space=latent_space``` to your training arguments:
```bash
python train_diffusion.py model.similarity_space=latent_space <your training arguments>
```

To resume training from an experiment repository and a saved checkpoint, you can run the corresponding train script and append a few arguments:
```bash
python train_<model>.py if_resume_training=True load=True output_log_path=<path to experiment repository> load_path=<path to .pt checkpoint> <your other arguments>
```
For an example checkpoint _./output/hmp/amass/diffusion/June30_11-35-08/checkpoints/checkpoint_144.pt_, you would run:
```bash
python train_diffusion.py if_resume_training=True load=True output_log_path=./output/hmp/amass/diffusion/June30_11-35-08 load_path=./output/hmp/amass/diffusion/June30_11-35-08/checkpoints/checkpoint_144.pt <your other training arguments of the previous call>
```
### Running our Implementation as Isotropic
Our Nonisotropic implementation also supports isotropic diffusion (our _isotropic_ ablations of Table 7). This may be useful to you if you want to use our codebase for other projects and want to reduce the number of classes and the overall complexity.

To run our nonisotropic diffusion as isotropic with a suitable choice of covariance matrix:
```bash
python train_diffusion.py model=skeleton_diffusion_run_code_as_isotropic <your training arguments>
```
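
The intuition behind this setting: with the covariance fixed to the identity, the Cholesky factor is also the identity, so the correlated-noise sampling sketched earlier degenerates to plain isotropic Gaussian noise. A quick check:

```python
import torch

sigma = torch.eye(4)                     # identity covariance: the isotropic choice
chol = torch.linalg.cholesky(sigma)      # the Cholesky factor of I is I
z = torch.randn(4, 3)
assert torch.allclose(chol @ z, z)       # no mixing across joints
```
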
To run the isotropic diffusion codebase as in BeLFusion or lucidrains:
```bash
python train_diffusion.py model=isotropic_diffusion <your training arguments>
```
Given the same random initialization and environment, both training runs return exactly the same weights.
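
If you want to verify this equivalence yourself, one possible check (a sketch with hypothetical checkpoint paths; the exact checkpoint layout may differ in your runs) is to compare the saved weights of the two runs tensor by tensor:

```python
import torch

# Hypothetical paths: point these at the final checkpoints of the two runs.
ckpt_a = torch.load("<run_as_isotropic>/checkpoints/checkpoint_final.pt", map_location="cpu")
ckpt_b = torch.load("<run_isotropic>/checkpoints/checkpoint_final.pt", map_location="cpu")

# If the checkpoint wraps the weights (e.g., under a "model" key), unwrap it first.
sd_a = ckpt_a.get("model", ckpt_a)
sd_b = ckpt_b.get("model", ckpt_b)

identical = sd_a.keys() == sd_b.keys() and all(
    torch.equal(sd_a[k], sd_b[k]) for k in sd_a
)
print("identical weights:", identical)
```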