Dear Developers: a few basic questions!

#1
by jiomelove - opened

Dear Developers:

Thank you to the BAAI team for open-sourcing the Bunny model. I've been actively exploring it over the past few days. I have a few questions about deploying the model, and I hope the BAAI technical team can answer them. Either way, I'm very grateful!

First question: what GPU resources are required to run the various versions of the model, for example the full-parameter Bunny-v1_0-3B and the bunny-phi-2-siglip-lora version? Could you provide a comparison list for clarification, with the officially recommended GPU models and VRAM sizes?

Second question: can the controller, Web-UI server, and model worker be launched with a single bash command? Currently, it seems that three separate bash commands need to be executed to start the controller, the Web UI, and model inference. I assume this design is intended for a "microservices architecture" or "distributed system architecture". Is my understanding correct? If we deploy using Docker containers and manage them with Kubernetes, could an official post be provided explaining the standard deployment process in more detail?
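For context on the second question, the three launch commands can in principle be wrapped in one script. Below is a minimal sketch, assuming the LLaVA-style serve modules that Bunny's repository appears to follow; the exact module paths, ports, flag names, and model path here are illustrative assumptions, not confirmed by the maintainers.

```shell
#!/usr/bin/env bash
# Hypothetical single-command launcher for the three Bunny serve processes.
# Module names, ports, and the model path are assumptions for illustration.
set -e

MODEL_PATH="BAAI/Bunny-v1_0-3B"   # assumed model identifier
CONTROLLER_PORT=10000
WEB_PORT=7860

# Stop all background processes when the script exits (e.g. Ctrl-C)
trap 'kill $(jobs -p) 2>/dev/null' EXIT

# 1. Controller: coordinates workers
python -m bunny.serve.controller --host 0.0.0.0 --port "$CONTROLLER_PORT" &

# 2. Model worker: loads the model and registers with the controller
python -m bunny.serve.model_worker \
    --host 0.0.0.0 \
    --controller "http://localhost:$CONTROLLER_PORT" \
    --model-path "$MODEL_PATH" &

# 3. Gradio web server: the user-facing Web UI
python -m bunny.serve.gradio_web_server \
    --controller "http://localhost:$CONTROLLER_PORT" --port "$WEB_PORT" &

# Block until all three processes exit
wait
```

The split into three processes does match the microservices pattern you describe: the controller and workers can be scaled or restarted independently, which is also why a Docker/Kubernetes deployment would typically give each its own container rather than merging them.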

by Isaac Wei Ran
Guangzhou, China, 7th March 2024
Beijing Academy of Artificial Intelligence org

Closing this, as the same issue is discussed on GitHub.

BoyaWu10 changed discussion status to closed
