ptrdvn committed
Commit 020c4a4 · verified · 1 parent: 20a6460

Update README.md

Files changed (1): README.md (+19 -0)
README.md CHANGED
````diff
@@ -166,6 +166,9 @@ We include scripts to do this in vLLM, LMDeploy, and OpenAI (hosted for free on
 
 Install [vLLM](https://github.com/vllm-project/vllm/) using `pip install vllm`.
 
+<details>
+<summary>Show vLLM code</summary>
+
 ```python
 from vllm import LLM, SamplingParams
 import numpy as np
@@ -208,10 +211,18 @@ print(expected_vals)
 # [6.66570732 1.86686378 1.01102923]
 ```
 
+</details>
+
+
+
+
 ### LMDeploy
 
 Install [LMDeploy](https://github.com/InternLM/lmdeploy) using `pip install lmdeploy`.
 
+<details>
+<summary>Show LMDeploy code</summary>
+
 ```python
 # Un-comment this if running in a Jupyter notebook, Colab etc.
 # import nest_asyncio
@@ -266,10 +277,16 @@ print(expected_vals)
 # [6.66415229 1.84342025 1.01133205]
 ```
 
+</details>
+
+
 ### OpenAI (Hosted on Huggingface)
 
 Install [openai](https://github.com/openai/openai-python) using `pip install openai`.
 
+<details>
+<summary>Show OpenAI + Huggingface Inference code</summary>
+
 ```python
 from openai import OpenAI
 import numpy as np
@@ -325,6 +342,8 @@ print(expected_vals)
 # [6.64866580, 1.85144404, 1.010719508]
 ```
 
+</details>
+
 # Evaluation
 
 We perform an evaluation on 9 datasets from the [BEIR benchmark](https://github.com/beir-cellar/beir) that none of the evaluated models have been trained upon (to our knowledge).
````
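
The diff above only shows the edges of each code block it wraps in `<details>` tags. Judging from the visible imports and the `expected_vals` printouts, all three scripts appear to score query-document pairs by generating a single token and converting the logprobs of the candidate score tokens into an expected value. Below is a minimal sketch of that pattern for the vLLM path; the model ID, prompt template, and digit-token filter are placeholders and assumptions, not the repository's actual values.

```python
from vllm import LLM, SamplingParams
import numpy as np

# Placeholder model ID -- substitute the model this README belongs to.
llm = LLM(model="your-org/your-reranker")

# One generated token plus its top logprobs is enough to score a pair:
# the probability mass over the digit tokens carries the ranking signal.
sampling_params = SamplingParams(temperature=0.0, max_tokens=1, logprobs=14)

# Hypothetical prompt format -- the real template lives in the full README.
prompts = [
    "<<<Query>>>\nHow tall is Mount Fuji?\n\n"
    "<<<Context>>>\nMount Fuji is 3,776 m tall.\n\nScore (1-7):",
]

outputs = llm.generate(prompts, sampling_params)

expected_vals = []
for output in outputs:
    # Top logprobs of the first (and only) generated token.
    top_logprobs = output.outputs[0].logprobs[0]
    scores, probs = [], []
    for logprob in top_logprobs.values():
        token = (logprob.decoded_token or "").strip()
        if token.isdigit():  # keep only the "1".."7" score tokens
            scores.append(int(token))
            probs.append(np.exp(logprob.logprob))
    probs = np.array(probs) / np.sum(probs)  # renormalize over kept tokens
    expected_vals.append(float(np.dot(scores, probs)))

print(expected_vals)
```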
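
The LMDeploy section is collapsed the same way. Under the same assumptions, with the same placeholder model ID and prompt, a sketch using `lmdeploy.pipeline` with `GenerationConfig(logprobs=...)` could look like this; the `nest_asyncio` comment in the diff suggests the real script is also meant to run inside notebooks.

```python
from lmdeploy import pipeline, GenerationConfig
from transformers import AutoTokenizer
import numpy as np

# Placeholder model ID -- substitute the model this README belongs to.
model_id = "your-org/your-reranker"

pipe = pipeline(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Greedy decoding of a single token, returning its top logprobs.
gen_config = GenerationConfig(max_new_tokens=1, top_k=1, logprobs=14)

# Hypothetical prompt format -- the real template lives in the full README.
prompts = [
    "<<<Query>>>\nHow tall is Mount Fuji?\n\n"
    "<<<Context>>>\nMount Fuji is 3,776 m tall.\n\nScore (1-7):",
]

responses = pipe(prompts, gen_config=gen_config)

expected_vals = []
for response in responses:
    # response.logprobs holds one {token_id: logprob} dict per generated token.
    scores, probs = [], []
    for token_id, logprob in response.logprobs[0].items():
        token = tokenizer.decode([token_id]).strip()
        if token.isdigit():  # keep only the "1".."7" score tokens
            scores.append(int(token))
            probs.append(np.exp(logprob))
    probs = np.array(probs) / np.sum(probs)
    expected_vals.append(float(np.dot(scores, probs)))

print(expected_vals)
```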
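
For the hosted path, the section title says the model is queried through the OpenAI client against a Hugging Face deployment. The base URL below is the Hugging Face serverless Inference API's OpenAI-compatible route and is an assumption, as are the model name and prompt; check the full README for the real values.

```python
from openai import OpenAI
import numpy as np

client = OpenAI(
    # Assumed endpoint: Hugging Face's OpenAI-compatible serverless route.
    base_url="https://api-inference.huggingface.co/v1/",
    api_key="hf_...",  # your Hugging Face access token
)

# Hypothetical prompt format -- the real template lives in the full README.
prompt = (
    "<<<Query>>>\nHow tall is Mount Fuji?\n\n"
    "<<<Context>>>\nMount Fuji is 3,776 m tall.\n\nScore (1-7):"
)

response = client.chat.completions.create(
    model="your-org/your-reranker",  # placeholder model ID
    messages=[{"role": "user", "content": prompt}],
    max_tokens=1,
    temperature=0.0,
    logprobs=True,
    top_logprobs=5,
)

# Expected value over the digit tokens of the single generated token.
scores, probs = [], []
for entry in response.choices[0].logprobs.content[0].top_logprobs:
    token = entry.token.strip()
    if token.isdigit():
        scores.append(int(token))
        probs.append(np.exp(entry.logprob))
probs = np.array(probs) / np.sum(probs)
print(float(np.dot(scores, probs)))
```

Whatever the exact implementations, the three printed `expected_vals` in the diff ([6.6657, 1.8669, 1.0110], [6.6642, 1.8434, 1.0113], and [6.6487, 1.8514, 1.0107]) agree to within a few hundredths, which suggests the three backends can be used interchangeably for ranking.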