Update README.md
README.md
## Introduction
AMD-Llama-135m is a language model trained on AMD Instinct MI250 accelerators. Because it is based on the Llama2 model architecture, it can be loaded directly as `LlamaForCausalLM` with Hugging Face Transformers. It also uses the same tokenizer as Llama2, which lets it serve as a draft model for speculative decoding with Llama2 and CodeLlama.
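As a minimal sketch of the two uses described above, the snippet below loads the model as a `LlamaForCausalLM` and passes it as the `assistant_model` to `generate()`, which is how Transformers exposes speculative (assisted) decoding. The repo ids here are assumptions based on the usual Hub naming, and the target model is only illustrative:

```python
from transformers import AutoTokenizer, LlamaForCausalLM

# Repo ids are assumed; substitute the actual Hub paths if they differ.
draft = LlamaForCausalLM.from_pretrained("amd/AMD-Llama-135m")
target = LlamaForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

inputs = tokenizer("def fibonacci(n):", return_tensors="pt")

# assistant_model enables assisted/speculative decoding: the small draft
# model proposes tokens that the larger target model then verifies.
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because both models share the Llama2 tokenizer, the draft model's proposed token ids are directly valid for the target model, which is the precondition for this form of speculative decoding.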
## Model Details