Video generation with stable diffusion #1962


Closed · 2 tasks done
feizc opened this issue Jan 10, 2023 · 14 comments

Comments

@feizc

feizc commented Jan 10, 2023

Model/Pipeline/Scheduler description

Hey,

Thanks for sharing.

Please check out my modified version of video generation with stable diffusion:
https://github.com/feizc/Video-Stable-Diffusion

Open source status

  • The model implementation is available
  • The model weights are available (Only relevant if addition is not a scheduler).

Provide useful links for the implementation

https://github.com/feizc/Video-Stable-Diffusion

@patrickvonplaten
Contributor

I think this could make a cool community pipeline. If anybody is interested in opening a PR for a community pipeline: #841

@aandyw
Contributor

aandyw commented Jan 18, 2023

I'd be interested in taking this up if no one has taken it yet.

@patrickvonplaten
Contributor

This would be very nice @pie31415 😍

@basab-gupta

I hope it's not too late, but I would love to hop onto this as well if it's okay?

@aandyw
Contributor

aandyw commented Feb 3, 2023

@basab-gupta Sorry, been having some trouble adapting the pipeline. I'll try to get a draft up soon.

@aandyw
Contributor

aandyw commented Feb 9, 2023

@feizc @patrickvonplaten Pipeline implemented. Let me know if you have any feedback or suggestions for the implementation.

@aandyw
Contributor

aandyw commented Feb 9, 2023

> I hope it's not too late, but I would love to hop onto this as well if it's okay?

Would you like to take over this pipeline implementation? Not sure if I'll have enough time to figure out how to rework everything.

@patrickvonplaten
Contributor

@basab-gupta in case you have time, feel free to give this implementation a try :-)

@zhouliang-yu

I have a question related to video generation:
is there an off-the-shelf video generation model that can do this:
given a text prompt and the first frame of a video, the model generates the future frames?
For example, given a picture of a kitchen and the text prompt "make me a chicken soup", the model takes the visual and text signals and generates a video of making chicken soup, based on the first frame we provided.

@silvererudite

@patrickvonplaten does the new addition to diffusers here https://huggingface.co/docs/diffusers/main/en/api/pipelines/text_to_video solve the needs of this issue? If there's any other way to contribute, I'd love to know.

@patrickvonplaten
Contributor

Hey @silvererudite ,

Yes, I think the new text-to-video model is probably a bit more powerful than the one proposed here. But there are lots of other ways to contribute! Could you maybe check https://github.com/huggingface/diffusers/blob/main/CONTRIBUTING.md? :-)

@aandyw
Contributor

aandyw commented Mar 23, 2023

@patrickvonplaten Should this issue be closed then if there is already an existing pipeline?

@patrickvonplaten
Contributor

Yes, I'll close it - hope that's ok/understandable for everybody!

@a-r-r-o-w
Member

Marking as closed as multiple video models are supported now :)
