Issue: using the API in production
#51
by fffff123 · opened
For large-scale production use of bge-m3, which of the model's own methods should be used for vectorization — for example, to embed a single text versus a batch of texts? The vectorization method shown in the demo doesn't seem very practical for this.
The `encode` function of `BGEM3FlagModel` supports batch inference on multiple GPUs. To further accelerate inference, you can use the TEI tool: https://github.com/huggingface/text-embeddings-inference .
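As a sketch of how this might look in production (the `BGEM3FlagModel` usage below follows the FlagEmbedding README; the `batch_size` and `max_length` values are illustrative, not recommendations):

```python
from typing import Iterable, Iterator, List

def batched(texts: Iterable[str], batch_size: int) -> Iterator[List[str]]:
    """Yield successive chunks of `texts`, each at most `batch_size` long."""
    batch: List[str] = []
    for text in texts:
        batch.append(text)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # flush the final, possibly smaller chunk
        yield batch

# In production you would feed each chunk to the model, e.g.:
#
#   from FlagEmbedding import BGEM3FlagModel
#   model = BGEM3FlagModel("BAAI/bge-m3", use_fp16=True)
#   for chunk in batched(corpus, batch_size=32):
#       dense = model.encode(chunk, batch_size=32, max_length=8192)["dense_vecs"]
#
# Note that `encode` already accepts a list and batches internally via its
# own `batch_size` argument; an outer chunking loop like this mainly bounds
# memory when the corpus is too large to hold in RAM at once.
```

A single text can be embedded the same way by passing a one-element list.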
where specifically?