@@ -134,19 +134,19 @@ image = refiner(prompt=prompt, num_inference_steps=n_steps, denoising_start=high

Let's have a look at the image

-![lion_ref](https://huggingface.co/datasets/huggingface/documentation-images/blob/main/diffusers/lion_refined.png)
+| Original Image | Ensemble of Expert Denoisers |
+|---|---|
+| ![lion_base](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/lion_base.png) | ![lion_ref](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/lion_refined.png) |

If we had just run the base model for the same 40 steps, the image would arguably have been less detailed (e.g. the lion's eyes and nose):

-![lion_base](https://huggingface.co/datasets/huggingface/documentation-images/blob/main/diffusers/lion_base.png)
-
<Tip>

The ensemble-of-experts method works well on all available schedulers!

</Tip>

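The base/refiner handoff is controlled by a single fraction: the base pipeline stops at `denoising_end=high_noise_frac` and the refiner resumes at `denoising_start=high_noise_frac`. A minimal sketch of the resulting step split, assuming 40 total steps and a fraction of 0.8 (both values are illustrative, and the exact timestep cut inside the pipelines may differ slightly):

```python
# Illustrative arithmetic only: how the denoising fraction splits the
# total inference steps between the base model and the refiner.
n_steps = 40           # total inference steps (example value)
high_noise_frac = 0.8  # fraction of denoising handled by the base model (example value)

base_steps = int(n_steps * high_noise_frac)  # steps run by the base model
refiner_steps = n_steps - base_steps         # remaining steps run by the refiner

print(base_steps, refiner_steps)  # -> 32 8
```

So with this split, the base model removes most of the high-frequency noise and the refiner only spends its steps on low-noise detail.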
-#### Refining the image output from fully denoised base image
+#### 2.) Refining the image output from the fully-denoised base image

In standard [`StableDiffusionImg2ImgPipeline`]-fashion, the fully-denoised image generated by the base model
can be further improved using the [refiner checkpoint](https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-0.9).
@@ -179,6 +179,10 @@ image = pipe(prompt=prompt, output_type="latent" if use_refiner else "pil").imag
image = refiner(prompt=prompt, image=image[None, :]).images[0]
```

+| Original Image | Refined Image |
+|---|---|
+| ![](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/sd_xl/init_image.png) | ![](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/sd_xl/refined_image.png) |
+
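The `image[None, :]` indexing above adds a leading batch dimension before the latent is handed to the refiner. A minimal sketch of that reshaping, with numpy standing in for a torch latent tensor (the `(4, 128, 128)` shape is illustrative):

```python
import numpy as np

# Stand-in for a single SDXL latent of shape (channels, height, width)
latent = np.zeros((4, 128, 128))

# Indexing with None (np.newaxis) prepends a batch axis,
# giving (batch, channels, height, width)
batched = latent[None, :]

print(latent.shape, batched.shape)  # -> (4, 128, 128) (1, 4, 128, 128)
```

The same indexing works identically on a `torch.Tensor`, which is why the snippet above can pass a single latent to a pipeline that expects a batch.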
### Image-to-image

```py
@@ -197,10 +201,6 @@ prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt, image=init_image).images[0]
```

-| Original Image | Refined Image |
-|---|---|
-| ![](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/sd_xl/init_image.png) | ![](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/sd_xl/refined_image.png) |
-
### Loading single file checkpoints / original file format

By making use of [`~diffusers.loaders.FromSingleFileMixin.from_single_file`] you can also load the
@@ -210,13 +210,13 @@ original file format into `diffusers`:
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline
import torch

-pipe = StableDiffusionXLPipeline.from_pretrained(
-    "stabilityai/stable-diffusion-xl-base-0.9", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
+pipe = StableDiffusionXLPipeline.from_single_file(
+    "./sd_xl_base_0.9.safetensors", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
pipe.to("cuda")

-refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
-    "stabilityai/stable-diffusion-xl-refiner-0.9", torch_dtype=torch.float16, use_safetensors=True, variant="fp16"
+refiner = StableDiffusionXLImg2ImgPipeline.from_single_file(
+    "./sd_xl_refiner_0.9.safetensors", torch_dtype=torch.float16, use_safetensors=True, variant="fp16"
)
refiner.to("cuda")
```
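Unlike `from_pretrained`, `from_single_file` takes a path to a local checkpoint file rather than a Hub repo id, so a wrong path only surfaces at load time. A small hypothetical helper (`check_checkpoint_path` is not part of `diffusers`) that fails fast before constructing the pipeline:

```python
from pathlib import Path

def check_checkpoint_path(path_str):
    # Hypothetical pre-flight check before calling from_single_file:
    # single-file checkpoints are typically .safetensors or .ckpt files.
    p = Path(path_str)
    if p.suffix not in {".safetensors", ".ckpt"}:
        raise ValueError(f"unexpected checkpoint extension: {p.suffix!r}")
    return p

print(check_checkpoint_path("./sd_xl_base_0.9.safetensors"))
```

You could additionally test `p.is_file()` in a real script; it is omitted here so the sketch runs without the checkpoint present.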