In order to get started, we recommend taking a look at two notebooks:

If you want to run the code yourself 💻, you can try out:
- [Text-to-Image Latent Diffusion](https://huggingface.co/CompVis/ldm-text2im-large-256)
```
# !pip install diffusers transformers
from diffusers import DiffusionPipeline

model_id = "CompVis/ldm-text2im-large-256"

# load model and scheduler
ldm = DiffusionPipeline.from_pretrained(model_id)

# run pipeline in inference (sample random noise and denoise)
prompt = "A painting of a squirrel eating a burger"
images = ldm([prompt], num_inference_steps=50, eta=0.3, guidance_scale=6)["sample"]

# save images
for idx, image in enumerate(images):
    image.save(f"squirrel-{idx}.png")
```
- [Unconditional Diffusion with discrete scheduler](https://huggingface.co/google/ddpm-celebahq-256)
```
# !pip install diffusers
from diffusers import DDPMPipeline, DDIMPipeline, PNDMPipeline

model_id = "google/ddpm-celebahq-256"

# load model and scheduler
ddpm = DDPMPipeline.from_pretrained(model_id)  # you can replace DDPMPipeline with DDIMPipeline or PNDMPipeline for faster inference

# run pipeline in inference (sample random noise and denoise)
image = ddpm()["sample"]

# save image
image[0].save("ddpm_generated_image.png")
```
- [Unconditional Latent Diffusion](https://huggingface.co/CompVis/ldm-celebahq-256)
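A minimal sketch for this checkpoint, assuming it loads through the same generic `DiffusionPipeline` API as the text-to-image example above; the step count and output filename are illustrative choices:
```
# !pip install diffusers
from diffusers import DiffusionPipeline

model_id = "CompVis/ldm-celebahq-256"

# load model and scheduler
pipeline = DiffusionPipeline.from_pretrained(model_id)

# run pipeline in inference (sample random noise and denoise)
# num_inference_steps=200 is an illustrative choice, not a documented default
image = pipeline(num_inference_steps=200)["sample"]

# save image
image[0].save("ldm_generated_image.png")
```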
- [Unconditional Diffusion with continuous scheduler](https://huggingface.co/google/ncsnpp-ffhq-1024)
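A minimal sketch, assuming this score-based (SDE-VE) checkpoint also loads via the generic `DiffusionPipeline`; sampling at 1024x1024 resolution is slow on CPU, and the output filename is illustrative:
```
# !pip install diffusers
from diffusers import DiffusionPipeline

model_id = "google/ncsnpp-ffhq-1024"

# load model and its continuous-time scheduler
sde_ve = DiffusionPipeline.from_pretrained(model_id)

# run pipeline in inference (sample random noise and denoise)
image = sde_ve()["sample"]

# save image
image[0].save("sde_ve_generated_image.png")
```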

If you just want to play around with some web demos, you can try out the following 🚀 Spaces: