[fix] multi t2i adapter set total_downscale_factor #4621
Conversation
The documentation is not available anymore as the PR was closed or merged.
Force-pushed from 9d8f9bd to 082ad8c (Compare)
expected_slice = np.array([0.4902, 0.5539, 0.4317, 0.4682, 0.6190, 0.4351, 0.5018, 0.5046, 0.4772])
assert np.abs(image_slice.flatten() - expected_slice).max() < 5e-3

def test_inference_batch_consistent(
Unfortunately, I believe this is currently the best way to correctly batch the list of images in the adapter test,
i.e. [image, image] -> [[image], [image]] instead of [image, image] -> [[image, image]].
Modifying the original mixin would touch too many existing tests.
Ok for me!
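As a hedged illustration of the batching behavior described in the comment above (the helper name below is hypothetical, not part of the actual test mixin):

# Hypothetical helper illustrating the intended per-adapter batching:
# [image_a, image_b] -> [[image_a], [image_b]]   (one list per adapter)
# rather than        -> [[image_a, image_b]]     (everything merged into one list)
def batch_multi_adapter_images(images):
    return [[img] for img in images]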
if xs.shape[1] % self.num_adapter != 0:
    raise ValueError(
        f"Expecting multi-adapter's input to have a number of channels that can be evenly divided "
        f"by num_adapter: {xs.shape[1]} % {self.num_adapter} != 0"
    )
x_list = torch.chunk(xs, self.num_adapter, dim=1)
Previously the pipeline was squashing all the different images for the different adapters into one tensor and then re-splitting here. Sorry, I should have caught this in code review
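For context, a rough sketch of the channel-wise round trip being discussed: per-adapter images concatenated along the channel dimension and split back apart with torch.chunk. The shapes below are illustrative only.

import torch

# Illustrative shapes: 2 adapters, each expecting a 3-channel 64x64 conditioning image.
num_adapter = 2
per_adapter = [torch.randn(1, 3, 64, 64) for _ in range(num_adapter)]

xs = torch.cat(per_adapter, dim=1)            # (1, 6, 64, 64): images squashed on the channel dim
x_list = torch.chunk(xs, num_adapter, dim=1)  # two (1, 3, 64, 64) tensors, one per adapter
assert all(torch.equal(a, b) for a, b in zip(x_list, per_adapter))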
is_multi_adapter = isinstance(self.adapter, MultiAdapter)
if is_multi_adapter:
    adapter_input = [_preprocess_adapter_image(img, height, width).to(device) for img in image]
    n, c, h, w = adapter_input[0].shape
    adapter_input = torch.stack([x.reshape([n * c, h, w]) for x in adapter_input])
if isinstance(self.adapter, MultiAdapter):
    if not isinstance(image, list):
        raise ValueError(
            "MultiAdapter is enabled, but `image` is not a list. Please pass a list of images to `image`."
        )

    if len(image) != len(self.adapter.adapters):
        raise ValueError(
            f"MultiAdapter requires passing the same number of images as adapters. Given {len(image)} images and {len(self.adapter.adapters)} adapters."
        )

    adapter_input = []

    for one_image in image:
        one_image = _preprocess_adapter_image(one_image, height, width)
        one_image = one_image.to(device=device, dtype=self.adapter.dtype)
        adapter_input.append(one_image)
Removed the squashing of all images into one tensor; each image is now just processed independently.
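A hedged usage sketch of what this means for callers: with MultiAdapter, `image` is a plain list with one conditioning image per adapter, and each entry is preprocessed independently. The model IDs below are examples only, and the placeholder images stand in for real keypose/depth maps.

from PIL import Image
from diffusers import MultiAdapter, StableDiffusionAdapterPipeline, T2IAdapter

# Placeholder conditioning images; in practice these would be keypose and depth maps.
keypose_image = Image.new("RGB", (512, 512))
depth_image = Image.new("RGB", (512, 512))

adapters = MultiAdapter(
    [
        T2IAdapter.from_pretrained("TencentARC/t2iadapter_keypose_sd14v1"),
        T2IAdapter.from_pretrained("TencentARC/t2iadapter_depth_sd14v1"),
    ]
)
pipe = StableDiffusionAdapterPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", adapter=adapters
)

# One image per adapter, passed as a list; no manual stacking into a single tensor.
result = pipe(prompt="a photo of a house", image=[keypose_image, depth_image])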
if isinstance(self.adapter, MultiAdapter):
    if not isinstance(image, list):
        raise ValueError(
            "MultiAdapter is enabled, but `image` is not a list. Please pass a list of images to `image`."
        )

    if len(image) != len(self.adapter.adapters):
        raise ValueError(
            f"MultiAdapter requires passing the same number of images as adapters. Given {len(image)} images and {len(self.adapter.adapters)} adapters."
        )
Should these checks be moved to check_inputs(), maybe?
yeah def
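A minimal sketch of where these checks could live inside check_inputs(); the surrounding signature is trimmed and assumed, not the final implementation.

# Hypothetical placement sketch inside the pipeline class; other parameters and
# validations of the real check_inputs() are omitted here.
def check_inputs(self, prompt, image, height, width):
    if isinstance(self.adapter, MultiAdapter):
        if not isinstance(image, list):
            raise ValueError(
                "MultiAdapter is enabled, but `image` is not a list. Please pass a list of images to `image`."
            )
        if len(image) != len(self.adapter.adapters):
            raise ValueError(
                f"MultiAdapter requires passing the same number of images as adapters. "
                f"Given {len(image)} images and {len(self.adapter.adapters)} adapters."
            )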
# We do not support saving pipelines with multiple adapters. The multiple adapters should be saved as their
# own independent pipelines

def test_save_load_local(self):
    ...

def test_save_load_optional_components(self):
    ...
We do support saving multi ControlNets, though. Is there a significant difference in the saving and loading semantics between a ControlNet and a T2I adapter?
Can we open an issue to add this feature in case someone from the community wants to give it a try?
sure thing!
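Until that feature exists, one possible reading of the workaround described above, as a hedged sketch: persist each adapter separately and rebuild the MultiAdapter at load time. The local paths and example checkpoints below are placeholders.

from diffusers import MultiAdapter, T2IAdapter

# Save each adapter on its own (example checkpoints; local paths are placeholders).
keypose_adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_keypose_sd14v1")
depth_adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_depth_sd14v1")
keypose_adapter.save_pretrained("./saved_adapters/keypose")
depth_adapter.save_pretrained("./saved_adapters/depth")

# Later, rebuild the MultiAdapter from the independently saved adapters.
multi_adapter = MultiAdapter(
    [
        T2IAdapter.from_pretrained("./saved_adapters/keypose"),
        T2IAdapter.from_pretrained("./saved_adapters/depth"),
    ]
)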
Looking clean and sharp!
Nice!
Force-pushed from 082ad8c to 389d75e (Compare)
* [fix] multi t2i adapter set total_downscale_factor
* move image checks into check inputs
* remove copied from
re: #4427