dfurman committed
Commit a3b9a02
1 Parent(s): eece6e0

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -61,7 +61,7 @@ LLaMA is a foundational model, and as such, it should not be used for downstream
  ```
  ### GPU Inference in fp16
 
- This requires a GPU with at least xxGB of VRAM.
+ This requires a GPU with at least 15GB of VRAM.
 
  ### First, Load the Model
 
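For context on the step the edited line introduces ("GPU Inference in fp16" followed by "First, Load the Model"), below is a minimal sketch of loading a causal LM in fp16 with Hugging Face transformers. The model identifier is a placeholder assumption, not taken from this commit, and `device_map="auto"` assumes the `accelerate` package is installed.

```python
# Minimal sketch: load a causal LM in fp16 on GPU with Hugging Face transformers.
# The model id below is a placeholder; substitute the checkpoint this README documents.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/your-llama-checkpoint"  # placeholder, not from this repo

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 weights, roughly half the VRAM of fp32
    device_map="auto",          # place layers on the available GPU(s); requires accelerate
)

# Quick generation check
inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Loading in fp16 is what makes the ~15GB VRAM figure in the diff plausible for a 7B-parameter-class model (about 2 bytes per parameter plus activation overhead).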