onnx inpainting error #917

Closed · pythoninoffice opened this issue Oct 20, 2022 · 6 comments
Labels: bug (Something isn't working)

pythoninoffice commented Oct 20, 2022

Describe the bug

With the latest code, I was able to convert the SD1.4 checkpoint into onnx and successfully run txt2img and img2img using the new onnx pipelines. However, the onnx inpainting pipeline isn't working.

Thank you!

Reproduction

from diffusers import OnnxStableDiffusionInpaintPipeline
import io, requests
import PIL.Image  # explicit submodule import; a bare "import PIL" does not reliably expose PIL.Image

def download_image(url):
    response = requests.get(url)
    return PIL.Image.open(io.BytesIO(response.content)).convert("RGB")

img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"

init_image = download_image(img_url).resize((512, 512))
mask_image = download_image(mask_url).resize((512, 512))

prompt = "a cat sitting on a bench"
denoiseStrength = 0.8
steps = 25
scale = 7.5

pipe = OnnxStableDiffusionInpaintPipeline.from_pretrained("./onnx", provider="DmlExecutionProvider")
image = pipe(prompt, image=init_image, mask_image=mask_image,
             strength=denoiseStrength, num_inference_steps=steps,
             guidance_scale=scale).images[0]
image.save("inp.png")

Logs

2022-10-19 21:49:48.9222990 [W:onnxruntime:, inference_session.cc:490 onnxruntime::InferenceSession::RegisterExecutionProvider] Having memory pattern enabled is not supported while using the DML Execution Provider. So disabling it for this session since it uses the DML Execution Provider.
2022-10-19 21:49:53.8425385 [W:onnxruntime:, inference_session.cc:490 onnxruntime::InferenceSession::RegisterExecutionProvider] Having memory pattern enabled is not supported while using the DML Execution Provider. So disabling it for this session since it uses the DML Execution Provider.
2022-10-19 21:49:54.7589366 [W:onnxruntime:, inference_session.cc:490 onnxruntime::InferenceSession::RegisterExecutionProvider] Having memory pattern enabled is not supported while using the DML Execution Provider. So disabling it for this session since it uses the DML Execution Provider.
2022-10-19 21:49:56.2920566 [W:onnxruntime:, inference_session.cc:490 onnxruntime::InferenceSession::RegisterExecutionProvider] Having memory pattern enabled is not supported while using the DML Execution Provider. So disabling it for this session since it uses the DML Execution Provider.
2022-10-19 21:49:57.3330294 [W:onnxruntime:, inference_session.cc:490 onnxruntime::InferenceSession::RegisterExecutionProvider] Having memory pattern enabled is not supported while using the DML Execution Provider. So disabling it for this session since it uses the DML Execution Provider.
  0%|                                                                                                                                                                 | 0/26 [00:00<?, ?it/s]2022-10-19 21:50:01.5112469 [E:onnxruntime:, sequential_executor.cc:369 onnxruntime::SequentialExecutor::Execute] Non-zero status code returned while running Conv node. Name:'Conv_168' Status Message: D:\a\_work\1\s\onnxruntime\core\providers\dml\DmlExecutionProvider\src\MLOperatorAuthorImpl.cpp(1866)\onnxruntime_pybind11_state.pyd!00007FFBF0CDA4CA: (caller: 00007FFBF0CDBACF) Exception(3) tid(4a5c) 80070057 The parameter is incorrect.

  0%|                                                                                                                                                                 | 0/26 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "E:\PythonInOffice\amd_sd_img2img\inp.py", line 22, in <module>
    image = pipe(prompt, image=init_image, mask_image=mask_image,
  File "E:\PythonInOffice\amd_sd_img2img\diffuers_venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "E:\PythonInOffice\amd_sd_img2img\diffusers\src\diffusers\pipelines\stable_diffusion\pipeline_onnx_stable_diffusion_inpaint.py", line 352, in __call__
    noise_pred = self.unet(
  File "E:\PythonInOffice\amd_sd_img2img\diffusers\src\diffusers\onnx_utils.py", line 46, in __call__
    return self.model.run(None, inputs)
  File "E:\PythonInOffice\amd_sd_img2img\diffuers_venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 200, in run
    return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Conv node. Name:'Conv_168' Status Message: D:\a\_work\1\s\onnxruntime\core\providers\dml\DmlExecutionProvider\src\MLOperatorAuthorImpl.cpp(1866)\onnxruntime_pybind11_state.pyd!00007FFBF0CDA4CA: (caller: 00007FFBF0CDBACF) Exception(3) tid(4a5c) 80070057 The parameter is incorrect.

System Info

diffusers version: 2a0c823

pythoninoffice added the bug label Oct 20, 2022
patrickvonplaten (Contributor) commented

cc @anton-l could you take this one? :-)

anton-l (Member) commented Oct 21, 2022

Hi @pythoninoffice! With the release of https://huggingface.co/spaces/runwayml/stable-diffusion-inpainting we've decided to make the onnx inpainting pipeline work with finetuned checkpoints only (as the non-finetuned SD1.4 checkpoint wasn't nearly as good in terms of inpainting quality).

The way to load those weights is:

from diffusers import OnnxStableDiffusionInpaintPipeline

pipeline = OnnxStableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", revision="onnx", provider="CPUExecutionProvider"
)

The pytorch-to-onnx conversion script also supports custom finetuned checkpoints trained in a similar fashion, i.e. with 9 input channels for the UNet (4 noisy-latent channels + 1 mask channel + 4 masked-image latent channels).
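
If you want to check whether a given checkpoint has the inpainting-style UNet before converting it, a minimal sketch along these lines should work (this is only a sketch, not from the thread; it assumes the standard UNet2DConditionModel loading API, and the repo IDs are the ones discussed in this issue):

from diffusers import UNet2DConditionModel

# An inpainting-finetuned checkpoint ships a UNet that expects 9 input channels;
# a plain text-to-image checkpoint expects 4, which is the shape mismatch behind
# the Conv error in the log above.
# Loading may require `huggingface-cli login` if the repo is gated.
inpaint_unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-inpainting", subfolder="unet"
)
print(inpaint_unet.config.in_channels)  # expected: 9

txt2img_unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)
print(txt2img_unet.config.in_channels)  # expected: 4

A checkpoint reporting 4 here needs either inpainting finetuning or the legacy pytorch pipeline mentioned below.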

Alternatively, you can still use the pytorch inpainting pipeline that handles non-finetuned SD checkpoints: StableDiffusionInpaintPipelineLegacy.
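
For reference, a rough sketch of that legacy route (a sketch only, not from this thread; in the diffusers release current at the time the call argument was named init_image, which later releases renamed to image):

import io, requests
from PIL import Image
from diffusers import StableDiffusionInpaintPipelineLegacy

# Same test image and mask as in the reproduction snippet above.
img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"

def download_image(url):
    return Image.open(io.BytesIO(requests.get(url).content)).convert("RGB")

init_image = download_image(img_url).resize((512, 512))
mask_image = download_image(mask_url).resize((512, 512))

# The legacy pipeline runs through pytorch (no onnx/DML), but it accepts plain
# non-finetuned SD checkpoints such as CompVis/stable-diffusion-v1-4.
# May require `huggingface-cli login` if the repo is gated.
pipe = StableDiffusionInpaintPipelineLegacy.from_pretrained("CompVis/stable-diffusion-v1-4")

image = pipe(
    prompt="a cat sitting on a bench",
    init_image=init_image,   # renamed to `image` in later diffusers releases
    mask_image=mask_image,
    strength=0.8,
    num_inference_steps=25,
    guidance_scale=7.5,
).images[0]
image.save("inp_legacy.png")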

pythoninoffice (Author) commented Oct 21, 2022

Hi @anton-l, appreciate your response! I can confirm the onnx inpainting pipeline works with the runwayml/stable-diffusion-inpainting checkpoint, which seems to be based on the SD v1.2 model according to the model card/description.

I also downloaded https://huggingface.co/runwayml/stable-diffusion-v1-5, but unfortunately it doesn't work with the new pipeline either (same error). So it seems like SD1.2 is the only version compatible with onnx inpainting?

Do you know if there's any way to convert a non-finetuned checkpoint into a finetuned one? I'm pretty new to diffusers, so please bear with me if the question doesn't make sense. Thank you!

patrickvonplaten (Contributor) commented

Gently ping @anton-l

anton-l (Member) commented Oct 27, 2022

@pythoninoffice in the current release only runwayml/stable-diffusion-inpainting is compatible with onnx inpainting, since it needs a finetuned model.
As for models that are not finetuned for inpainting, both runwayml/stable-diffusion-v1-5 and the CompVis/stable-diffusion-v1-N checkpoints are compatible with the pytorch pipeline StableDiffusionInpaintPipelineLegacy, but it doesn't have an onnx counterpart.
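
Putting the two comments above together, a minimal sketch of the onnx route that is expected to work on the original DirectML setup (the checkpoint, revision, and provider strings come from this thread; init_image and mask_image are assumed to be prepared as in the reproduction snippet, and the output filename is arbitrary):

from diffusers import OnnxStableDiffusionInpaintPipeline

# Only the inpainting-finetuned checkpoint has the 9-channel UNet the onnx
# pipeline expects, so this loads runwayml/stable-diffusion-inpainting rather
# than a locally converted SD1.4/1.5 folder.
pipe = OnnxStableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    revision="onnx",
    provider="DmlExecutionProvider",  # "CPUExecutionProvider" also works, as shown above
)

image = pipe(
    prompt="a cat sitting on a bench",
    image=init_image,        # prepared as in the reproduction snippet
    mask_image=mask_image,
    num_inference_steps=25,
    guidance_scale=7.5,
).images[0]
image.save("inp_onnx.png")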

anton-l closed this as completed Nov 7, 2022
pythoninoffice (Author) commented

@anton-l Thanks for the answer, and apologies for my slow response!
A quick follow-up question: is there any way to convert an existing model into a finetuned one, so that we can use it with OnnxStableDiffusionInpaintPipeline? Thanks!

PhaneeshB pushed a commit to nod-ai/diffusers that referenced this issue Mar 1, 2023