---
license: other
inference: false
---

# Quantised GGMLs of alpaca-lora-65B

These GGML files were quantised from the merged, unquantised HF repo of [chansung's alpaca-lora-65B](https://huggingface.co/chansung/alpaca-lora-65b).

# Original model card not provided

No model card was provided in [chansung's original repository](https://huggingface.co/chansung/alpaca-lora-65b).

Based on the name, I assume this is the result of fine-tuning on the original GPT-3.5-generated Alpaca dataset. It is unknown whether the original Stanford data was used or the [cleaned tloen/alpaca-lora variant](https://github.com/tloen/alpaca-lora).
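GGML files of this kind are typically run with [llama.cpp](https://github.com/ggerganov/llama.cpp). A minimal sketch of an invocation, assuming llama.cpp is already compiled and a quantised file has been downloaded (the filename and prompt below are hypothetical placeholders, not guaranteed names from this repo):

```shell
# Sketch: run a quantised GGML file with llama.cpp's main binary.
# The model filename is a placeholder; substitute the file you downloaded.
./main -m alpaca-lora-65B.ggml.q4_0.bin \
       -p "Below is an instruction that describes a task. Write a response." \
       -n 256
```

`-m` selects the model file, `-p` supplies the prompt, and `-n` caps the number of tokens generated; a 65B model at 4-bit quantisation still needs roughly 40 GB of RAM, so check your hardware before running.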