add pipeline
After you merge this, you can call the pipeline using the following code:
from transformers import pipeline
pipe = pipeline(model="vikhyatk/moondream1", trust_remote_code=True)
pipe("Image_path", question="question")
You can also test it before merging using the following code:
from transformers import pipeline
pipe = pipeline(model="vikhyatk/moondream1", revision="refs/pr/6", trust_remote_code=True)
pipe("Image_path", question="question")
Or you can pass a PIL image instead of a path:
from PIL import Image
from transformers import pipeline
pipe = pipeline(model="vikhyatk/moondream1", revision="refs/pr/6", trust_remote_code=True)
im = Image.open("Image_path")
pipe(im, question="question")
How do you get streaming text from the pipeline?
I'll try to adapt the pipeline to support text streaming soon.
Until then, I'll leave this blog post here for anyone who wants to learn about building custom architectures and wants to help with the text streaming feature: https://huggingface.co./blog/not-lain/custom-architectures-with-huggingface
The transformers-stream-generator library works great for pure transformers, but when I looked at the source code it was not straightforward at all! Perhaps it makes a difficult thing under the hood appear easy to the likes of myself.
https://pypi.org/project/transformers-stream-generator/0.0.4/
https://github.com/sujitvasanth/streaming-LLM-chat
The second link is my really simple streaming implementation using the stream-generator library.
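The core idea behind these streaming libraries can be sketched without any model at all: a background thread produces tokens and pushes them onto a queue while the caller iterates over them as they arrive. The sketch below is illustrative only; `fake_generate` and the hard-coded token list are stand-ins for a real model's `generate` call, not part of any library.

```python
# Minimal producer/consumer sketch of token streaming.
# A worker thread pushes tokens onto a queue; the caller
# consumes them as they arrive instead of waiting for the
# full generation to finish.
import threading
import queue


class TokenStreamer:
    """Yields items as a background producer generates them."""

    _END = object()  # sentinel marking the end of the stream

    def __init__(self):
        self._queue = queue.Queue()

    def put(self, token):
        self._queue.put(token)

    def end(self):
        self._queue.put(self._END)

    def __iter__(self):
        while True:
            item = self._queue.get()  # blocks until the next token arrives
            if item is self._END:
                break
            yield item


def fake_generate(streamer, tokens):
    # Stand-in for a model's generate() loop emitting tokens one by one.
    for t in tokens:
        streamer.put(t)
    streamer.end()


streamer = TokenStreamer()
thread = threading.Thread(
    target=fake_generate, args=(streamer, ["Hello", " ", "world"])
)
thread.start()

# Consume tokens as they arrive; in a real app you would print each one.
text = "".join(tok for tok in streamer)
thread.join()
print(text)  # -> Hello world
```

This is essentially the pattern a streaming generator wraps around the model's generation loop, which is why adapting it to a custom pipeline takes some work.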
@not-lain have you posted your pipeline changes on his GitHub? https://github.com/vikhyat/moondream I raised it as an issue there to hopefully get it done.
No, I have not, but thanks for raising that issue. I really loved the model, which is why I added a pipeline method to it for easier access.
All the code in this pull request is open source, and I will leave you guys to handle the rest.
There's a weird bug I'm running into with the tokenizer in my deploy script; I'll have this merged once I get that resolved! Sorry for the delay.