unclecode committed
Commit eb82804
1 Parent(s): 30da40d

Update README.md

Files changed (1)
  1. README.md +61 -1
README.md CHANGED
@@ -7,10 +7,70 @@ tags:
  - transformers
  - unsloth
  - llama
- - gguf
+ - trl
  base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
  ---

+ ---
+ tags:
+ - function calling
+ - tool use
+ - llama
+ - llama3
+ - tinyllama
+ - instruct-tuned
+ - 4-bit quantization
+ - gguf
+ license: apache-2.0
+ ---
+
+ # Function Calling and Tool Use LLaMA Models
+
+ This repository contains two main versions of LLaMA models fine-tuned for function calling and tool use:
+
+ 1. A fine-tuned version of the `LLama3-8b-instruct` model
+ 2. `tinyllama`, a smaller model version
+
+ For each version, the following variants are available (a minimal loading sketch follows this list):
+
+ - 16-bit model
+ - 4-bit quantized model
+ - GGUF format for use with llama.cpp
+
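To make the GGUF variant concrete, here is a minimal sketch assuming `llama-cpp-python` and `huggingface_hub` are installed. The repo id is taken from the "Other Models" list below; the exact `.gguf` filename inside that repo is an assumption and should be checked against the repo's file list, and the function-calling prompt format is covered in the Colab notebook linked under Usage.

```python
# Hedged sketch: download the Q4_K_M GGUF file and run it with llama-cpp-python.
# The .gguf filename below is an assumption; list the repo files to confirm it.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="unclecode/llama3-function-call-Q4_K_M_GGFU-240424",  # from the "Other Models" list
    filename="llama3-function-call-Q4_K_M.gguf",                  # assumed filename
)

# Load the quantized model with a modest context window.
llm = Llama(model_path=model_path, n_ctx=4096)

# Simple chat-style call; tool declarations would go in the system message.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is the weather in Paris today?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```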
+ ## Dataset
+
+ The models were fine-tuned using a modified version of the `ilacai/glaive-function-calling-v2-sharegpt` dataset, which can be found at [unclecode/glaive-function-calling-llama3](https://huggingface.co/datasets/unclecode/glaive-function-calling-llama3).
+
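As a small illustration, the modified dataset can be inspected with the `datasets` library; the split name and record layout below are assumptions, so check the dataset card.

```python
# Hedged sketch: peek at the fine-tuning data with the `datasets` library.
# The "train" split and column structure are assumptions; see the dataset card.
from datasets import load_dataset

ds = load_dataset("unclecode/glaive-function-calling-llama3", split="train")
print(ds)     # row count and column names
print(ds[0])  # one raw example
```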
+ ## Usage
+
+ To learn how to use these models, refer to the Colab notebook: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://tinyurl.com/ucfllm)
+
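The notebook is the authoritative reference. As a rough sketch only, the 16-bit checkpoint from the "Other Models" list below can be loaded with plain `transformers` as shown here; the tool-declaration system prompt is a placeholder assumption rather than the model's documented format.

```python
# Hedged usage sketch with transformers; the Colab notebook linked above is authoritative.
# The way tools are declared in the system prompt here is an assumption, not the documented format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "unclecode/llama3-function-call-16bit-240424"  # from the "Other Models" list below
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires accelerate
)

messages = [
    {"role": "system", "content": "You can call get_weather(city: str). Reply with a function call when appropriate."},
    {"role": "user", "content": "What's the weather like in Paris?"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```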
+ This is the first version of the models; work is in progress to further train them for multi-tool detection and native tool binding support.
+
+ ## Library and Tools Support
+
+ A library is being developed to manage tools and add tool support for major LLMs, regardless of their built-in capabilities. You can find examples and contribute to the library at the following repository:
+
+ [https://github.com/unclecode/fllm](https://github.com/unclecode/fllm)
+
+ Please open an issue in that repository for bug reports or collaboration requests.
+
+ ## Other Models
+
+ Here are links to other related models:
+
+ - [unclecode/llama3-function-call-lora-adapter-240424](https://huggingface.co/unclecode/llama3-function-call-lora-adapter-240424)
+ - [unclecode/llama3-function-call-16bit-240424](https://huggingface.co/unclecode/llama3-function-call-16bit-240424)
+ - [unclecode/llama3-function-call-4bit-240424](https://huggingface.co/unclecode/llama3-function-call-4bit-240424)
+ - [unclecode/llama3-function-call-Q4_K_M_GGFU-240424](https://huggingface.co/unclecode/llama3-function-call-Q4_K_M_GGFU-240424)
+ - [unclecode/tinyllama-function-call-lora-adapter-250424](https://huggingface.co/unclecode/tinyllama-function-call-lora-adapter-250424)
+ - [unclecode/tinyllama-function-call-16bit-250424](https://huggingface.co/unclecode/tinyllama-function-call-16bit-250424)
+ - [unclecode/tinyllama-function-call-Q4_K_M_GGFU-250424](https://huggingface.co/unclecode/tinyllama-function-call-Q4_K_M_GGFU-250424)
+
+ ## License
+
+ These models are released under the Apache 2.0 license.
+
  # Uploaded model

  - **Developed by:** unclecode