Commit bbcf2a8

[docs] Add pipelines to table (huggingface#9282)
update pipelines
1 parent 4cfb216 commit bbcf2a8

1 file changed: +17, -16 lines

docs/source/en/api/pipelines/overview.md

@@ -30,63 +30,64 @@ The table below lists all the pipelines currently available in 🤗 Diffusers an
 
 | Pipeline | Tasks |
 |---|---|
-| [AltDiffusion](alt_diffusion) | image2image |
+| [aMUSEd](amused) | text2image |
 | [AnimateDiff](animatediff) | text2video |
 | [Attend-and-Excite](attend_and_excite) | text2image |
-| [Audio Diffusion](audio_diffusion) | image2audio |
 | [AudioLDM](audioldm) | text2audio |
 | [AudioLDM2](audioldm2) | text2audio |
+| [AuraFlow](auraflow) | text2image |
 | [BLIP Diffusion](blip_diffusion) | text2image |
+| [CogVideoX](cogvideox) | text2video |
 | [Consistency Models](consistency_models) | unconditional image generation |
 | [ControlNet](controlnet) | text2image, image2image, inpainting |
+| [ControlNet with Flux.1](controlnet_flux) | text2image |
+| [ControlNet with Hunyuan-DiT](controlnet_hunyuandit) | text2image |
+| [ControlNet with Stable Diffusion 3](controlnet_sd3) | text2image |
 | [ControlNet with Stable Diffusion XL](controlnet_sdxl) | text2image |
 | [ControlNet-XS](controlnetxs) | text2image |
 | [ControlNet-XS with Stable Diffusion XL](controlnetxs_sdxl) | text2image |
-| [Cycle Diffusion](cycle_diffusion) | image2image |
 | [Dance Diffusion](dance_diffusion) | unconditional audio generation |
 | [DDIM](ddim) | unconditional image generation |
 | [DDPM](ddpm) | unconditional image generation |
 | [DeepFloyd IF](deepfloyd_if) | text2image, image2image, inpainting, super-resolution |
 | [DiffEdit](diffedit) | inpainting |
 | [DiT](dit) | text2image |
-| [GLIGEN](stable_diffusion/gligen) | text2image |
+| [Flux](flux) | text2image |
+| [Hunyuan-DiT](hunyuandit) | text2image |
+| [I2VGen-XL](i2vgenxl) | text2video |
 | [InstructPix2Pix](pix2pix) | image editing |
 | [Kandinsky 2.1](kandinsky) | text2image, image2image, inpainting, interpolation |
 | [Kandinsky 2.2](kandinsky_v22) | text2image, image2image, inpainting |
 | [Kandinsky 3](kandinsky3) | text2image, image2image |
+| [Kolors](kolors) | text2image |
 | [Latent Consistency Models](latent_consistency_models) | text2image |
 | [Latent Diffusion](latent_diffusion) | text2image, super-resolution |
-| [LDM3D](stable_diffusion/ldm3d_diffusion) | text2image, text-to-3D, text-to-pano, upscaling |
+| [Latte](latte) | text2image |
 | [LEDITS++](ledits_pp) | image editing |
+| [Lumina-T2X](lumina) | text2image |
+| [Marigold](marigold) | depth |
 | [MultiDiffusion](panorama) | text2image |
 | [MusicLDM](musicldm) | text2audio |
+| [PAG](pag) | text2image |
 | [Paint by Example](paint_by_example) | inpainting |
-| [ParaDiGMS](paradigms) | text2image |
-| [Pix2Pix Zero](pix2pix_zero) | image editing |
+| [PIA](pia) | image2video |
 | [PixArt-α](pixart) | text2image |
-| [PNDM](pndm) | unconditional image generation |
-| [RePaint](repaint) | inpainting |
-| [Score SDE VE](score_sde_ve) | unconditional image generation |
+| [PixArt-Σ](pixart_sigma) | text2image |
 | [Self-Attention Guidance](self_attention_guidance) | text2image |
 | [Semantic Guidance](semantic_stable_diffusion) | text2image |
 | [Shap-E](shap_e) | text-to-3D, image-to-3D |
-| [Spectrogram Diffusion](spectrogram_diffusion) | |
 | [Stable Audio](stable_audio) | text2audio |
+| [Stable Cascade](stable_cascade) | text2image |
 | [Stable Diffusion](stable_diffusion/overview) | text2image, image2image, depth2image, inpainting, image variation, latent upscaler, super-resolution |
-| [Stable Diffusion Model Editing](model_editing) | model editing |
 | [Stable Diffusion XL](stable_diffusion/stable_diffusion_xl) | text2image, image2image, inpainting |
 | [Stable Diffusion XL Turbo](stable_diffusion/sdxl_turbo) | text2image, image2image, inpainting |
 | [Stable unCLIP](stable_unclip) | text2image, image variation |
-| [Stochastic Karras VE](stochastic_karras_ve) | unconditional image generation |
 | [T2I-Adapter](stable_diffusion/adapter) | text2image |
 | [Text2Video](text_to_video) | text2video, video2video |
 | [Text2Video-Zero](text_to_video_zero) | text2video |
 | [unCLIP](unclip) | text2image, image variation |
-| [Unconditional Latent Diffusion](latent_diffusion_uncond) | unconditional image generation |
 | [UniDiffuser](unidiffuser) | text2image, image2text, image variation, text variation, unconditional image generation, unconditional audio generation |
 | [Value-guided planning](value_guided_sampling) | value guided sampling |
-| [Versatile Diffusion](versatile_diffusion) | text2image, image variation |
-| [VQ Diffusion](vq_diffusion) | text2image |
 | [Wuerstchen](wuerstchen) | text2image |
 
 ## DiffusionPipeline
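
Any pipeline in the table above can be loaded through the generic `DiffusionPipeline.from_pretrained` entry point, which resolves the correct pipeline class from the checkpoint. A minimal sketch, assuming an SDXL text2image checkpoint and a CUDA device (the checkpoint ID and device are illustrative, not part of this commit):

```python
# Minimal sketch: load one of the pipelines listed above via the generic
# DiffusionPipeline entry point. The checkpoint ID and device are assumptions.
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # any Hub checkpoint for a supported pipeline
    torch_dtype=torch.float16,
)
pipeline.to("cuda")

# Run the task the pipeline supports (text2image here) and save the result.
image = pipeline("An astronaut riding a horse on Mars").images[0]
image.save("astronaut.png")
```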
