Update README.md
README.md (changed):

````diff
@@ -61,7 +61,7 @@ LLaMA is a foundational model, and as such, it should not be used for downstream
 ```
 ### GPU Inference in fp16
 
-This requires a GPU with at least
+This requires a GPU with at least 15GB of VRAM.
 
 ### First, Load the Model
 
````
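The "at least 15GB of VRAM" figure is consistent with back-of-the-envelope fp16 memory math. A minimal sketch, assuming the 7B-parameter LLaMA variant (the parameter count is an assumption; the diff does not name which model size the README section covers):

```python
# Rough VRAM estimate for fp16 inference: 2 bytes per parameter for the
# weights, plus headroom for activations, KV cache, and CUDA overhead.
# The 7B parameter count is an assumption, not stated in the diff.
params = 7_000_000_000
bytes_per_param = 2  # fp16 = 16 bits = 2 bytes
weights_gb = params * bytes_per_param / 1e9
print(f"fp16 weights alone: {weights_gb:.0f} GB")  # prints "fp16 weights alone: 14 GB"
```

The weights alone come to about 14 GB, so "at least 15GB" leaves only a small margin for runtime overhead.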