https://huggingface.co./thenlper/gte-small with ONNX weights to be compatible with Transformers.js.
Usage (Transformers.js)
If you haven't already, you can install the Transformers.js JavaScript library from NPM using:
```bash
npm i @xenova/transformers
```
You can then use the model to compute embeddings like this:
```js
import { pipeline, cos_sim } from '@xenova/transformers';

// Create a feature-extraction pipeline
const extractor = await pipeline('feature-extraction', 'Xenova/gte-small');

// Compute sentence embeddings (mean pooling + normalization)
const sentences = ['That is a happy person', 'That is a very happy person'];
const output = await extractor(sentences, { pooling: 'mean', normalize: true });
console.log(output);
// Tensor {
//   dims: [ 2, 384 ],
//   type: 'float32',
//   data: Float32Array(768) [ -0.053555335849523544, 0.00843878649175167, ... ],
//   size: 768
// }

// Compute cosine similarity between the two embeddings
console.log(cos_sim(output[0].data, output[1].data));
// 0.9798319649182318
```
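Since the embeddings above were computed with `normalize: true`, they are unit-length, and cosine similarity reduces to a plain dot product. A minimal sketch of what `cos_sim` computes in that case (the `dot` helper is our own illustration, not part of the library):

```js
// Dot product of two equal-length vectors; for unit-length (normalized)
// embeddings this equals their cosine similarity.
function dot(a, b) {
  let sum = 0;
  for (let i = 0; i < a.length; ++i) sum += a[i] * b[i];
  return sum;
}

console.log(dot(output[0].data, output[1].data)); // ≈ 0.9798, matching cos_sim
```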
You can convert this Tensor to a nested JavaScript array using `.tolist()`:
```js
console.log(output.tolist());
// [
//   [ -0.053555335849523544, 0.00843878649175167, 0.06234041228890419, ... ],
//   [ -0.049980051815509796, 0.03879701718688011, 0.07510733604431152, ... ]
// ]
```
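If you only need a single embedding as a plain array (for example, to store in a vector database), you can also convert one row directly; a minimal sketch, reusing the `output` tensor from above:

```js
// Embedding of the first sentence as a plain number[] (384 values)
const embedding = Array.from(output[0].data);
console.log(embedding.length); // 384
```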
By default, an 8-bit quantized version of the model is used, but you can choose to use the full-precision (fp32) version by specifying `{ quantized: false }` in the `pipeline` function:
```js
const extractor = await pipeline('feature-extraction', 'Xenova/gte-small', { quantized: false });
```
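The full-precision weights are roughly four times larger than the 8-bit quantized ones, so expect a bigger download and higher memory use in exchange for slightly more faithful embeddings.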
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using 🤗 Optimum and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
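As a rough sketch of that conversion step, using Optimum's CLI exporter (the output directory name here is arbitrary, and you would pass your own model id instead of `thenlper/gte-small`):

```bash
# Export the checkpoint to ONNX with 🤗 Optimum's CLI exporter;
# place the resulting .onnx files in an `onnx/` subfolder of your repo.
optimum-cli export onnx --model thenlper/gte-small gte-small-onnx/
```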