jupyterjazz committed on
Commit
a8ff6b6
1 Parent(s): 8ea62b5

Update README.md

Files changed (1)
  1. README.md +6 -6
README.md CHANGED
@@ -201,7 +201,7 @@ embeddings = F.normalize(embeddings, p=2, dim=1)
 </p>
 </details>
 
-1. The easiest way to starting using `jina-embeddings-v3` is to use Jina AI's [Embeddings API](https://jina.ai/embeddings/).
+1. The easiest way to start using `jina-embeddings-v3` is Jina AI's [Embeddings API](https://jina.ai/embeddings/).
 2. Alternatively, you can use `jina-embeddings-v3` directly via transformers package.
 
 ```python
@@ -220,9 +220,9 @@ texts = [
     'Folge dem weißen Kaninchen.'  # German
 ]
 
-# When calling the `encode` function, you can choose a task_type based on the use case:
+# When calling the `encode` function, you can choose a `task_type` based on the use case:
 # 'retrieval.query', 'retrieval.passage', 'separation', 'classification', 'text-matching'
-# Alternatively, you can choose not to pass a task_type, and no specific LoRA adapter will be used.
+# Alternatively, you can choose not to pass a `task_type`, and no specific LoRA adapter will be used.
 embeddings = model.encode(texts, task_type='text-matching')
 
 # Compute similarities
@@ -230,7 +230,7 @@ print(embeddings[0] @ embeddings[1].T)
 ```
 
 By default, the model supports a maximum sequence length of 8192 tokens.
-However, if you want to truncate your input texts to a shorter length, you can pass the `max_length` parameter to the encode function:
+However, if you want to truncate your input texts to a shorter length, you can pass the `max_length` parameter to the `encode` function:
 ```python
 embeddings = model.encode(
     ['Very long ... document'],
@@ -238,8 +238,8 @@ embeddings = model.encode(
 )
 ```
 
-In case you want to use Matryoshka embeddings and switch to a different embedding dimension,
-you can adjust the embedding dimension by passing the `truncate_dim` parameter to the encode function:
+In case you want to use **Matryoshka embeddings** and switch to a different dimension,
+you can adjust it by passing the `truncate_dim` parameter to the `encode` function:
 ```python
 embeddings = model.encode(
     ['Sample text'],
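The hunks above touch on two ideas worth unpacking: similarity of L2-normalized embeddings reduces to a dot product (the README's `embeddings[0] @ embeddings[1].T`), and `truncate_dim` works because Matryoshka embeddings pack most of the signal into the leading dimensions, so a vector can be cut short and re-normalized. A minimal NumPy sketch of the truncate-and-renormalize idea, using toy random vectors and a hypothetical `truncate` helper rather than the model's actual implementation:

```python
import numpy as np

def truncate(embedding: np.ndarray, dim: int) -> np.ndarray:
    """Keep the first `dim` components and L2-normalize again,
    mirroring what a Matryoshka-style `truncate_dim` does."""
    shortened = embedding[:dim]
    return shortened / np.linalg.norm(shortened)

# Toy stand-ins for two normalized 8-dimensional embeddings
rng = np.random.default_rng(0)
a = rng.normal(size=8); a /= np.linalg.norm(a)
b = rng.normal(size=8); b /= np.linalg.norm(b)

# Cosine similarity of unit vectors is just a dot product
full_sim = a @ b

# The same comparison at a reduced dimension
short_sim = truncate(a, 4) @ truncate(b, 4)

print(full_sim, short_sim)
```

With real Matryoshka-trained embeddings the truncated similarity tracks the full-dimension one closely; with random toy vectors, as here, it merely illustrates the mechanics.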