May we ask for GPU support?
#1 by fffiloni - opened
This Whisper to Stable Diffusion space blew up yesterday evening on Twitter, and it keeps going, with support from many people: @osanseviero, @akhaliq, @abidlabs, and Emad Mostaque himself 🔥
Inference time is currently about 10-15 minutes, which is very frustrating for an audio-to-image demo.
I think it would benefit from GPU support. What do you think, @victor, @radames?
Let me know if you also think it would be the right move, and whether I need to tweak some lines of code before any change.
Could I run this on my local GPU?
@addicted-bv yes, you can try it :)
What would need to be changed to run it on a local GPU?
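In case it helps, here is a minimal sketch of what a local-GPU version could look like, assuming the space uses openai-whisper for transcription and diffusers' `StableDiffusionPipeline` for generation; the model IDs and the `audio_to_image` helper below are illustrative, not this space's actual code:

```python
# Minimal sketch, not this space's actual code: move both models onto a
# local GPU, assuming openai-whisper + diffusers are the building blocks.
import torch
import whisper
from diffusers import StableDiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load Whisper on the local GPU (the "base" checkpoint is an arbitrary choice).
asr_model = whisper.load_model("base", device=device)

# Load Stable Diffusion in half precision so it fits on typical consumer GPUs.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative model ID
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)

def audio_to_image(audio_path: str):
    # Transcribe the audio clip into a text prompt.
    prompt = asr_model.transcribe(audio_path)["text"]
    # Generate an image from the transcribed prompt.
    image = pipe(prompt).images[0]
    return prompt, image

# Example usage:
# prompt, image = audio_to_image("speech.wav")
# image.save("out.png")
```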
We do not need a GPU anymore, since I wired this space to the official SD space for image generation :)
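(In case it helps anyone reading this: wiring to another Space usually looks roughly like the sketch below, using Gradio's `Interface.load`; the Space path is an assumption based on the description above, not copied from this repo.)

```python
import gradio as gr

# Load the official Stable Diffusion Space as a callable; generation then
# runs on that Space's hardware rather than on this space's CPU.
# The Space path below is an assumption, not copied from this repo.
remote_sd = gr.Interface.load("spaces/stabilityai/stable-diffusion")

# The loaded interface can be called like a function, with inputs/outputs
# matching the remote demo's components, e.g.:
# images = remote_sd("an astronaut riding a horse, oil painting")
```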
fffiloni changed discussion status to closed