
GGUF quantization made by Richard Erkhov.

GitHub

Discord

Request more models

Octopus-v2 - GGUF

Name                      Quant method   Size
Octopus-v2.Q2_K.gguf      Q2_K           1.08GB
Octopus-v2.IQ3_XS.gguf    IQ3_XS         1.16GB
Octopus-v2.IQ3_S.gguf     IQ3_S          1.2GB
Octopus-v2.Q3_K_S.gguf    Q3_K_S         1.2GB
Octopus-v2.IQ3_M.gguf     IQ3_M          1.22GB
Octopus-v2.Q3_K.gguf      Q3_K           1.29GB
Octopus-v2.Q3_K_M.gguf    Q3_K_M         1.29GB
Octopus-v2.Q3_K_L.gguf    Q3_K_L         1.36GB
Octopus-v2.IQ4_XS.gguf    IQ4_XS         1.4GB
Octopus-v2.Q4_0.gguf      Q4_0           1.44GB
Octopus-v2.IQ4_NL.gguf    IQ4_NL         1.45GB
Octopus-v2.Q4_K_S.gguf    Q4_K_S         1.45GB
Octopus-v2.Q4_K.gguf      Q4_K           1.52GB
Octopus-v2.Q4_K_M.gguf    Q4_K_M         1.52GB
Octopus-v2.Q4_1.gguf      Q4_1           1.56GB
Octopus-v2.Q5_0.gguf      Q5_0           1.68GB
Octopus-v2.Q5_K_S.gguf    Q5_K_S         1.68GB
Octopus-v2.Q5_K.gguf      Q5_K           1.71GB
Octopus-v2.Q5_K_M.gguf    Q5_K_M         1.71GB
Octopus-v2.Q5_1.gguf      Q5_1           1.79GB
Octopus-v2.Q6_K.gguf      Q6_K           1.92GB
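
These quantized files target llama.cpp-compatible runtimes. As a minimal sketch of local usage, assuming the llama-cpp-python package and a locally downloaded Q4_K_M file (neither is prescribed by this repo):

from llama_cpp import Llama

# Assumes Octopus-v2.Q4_K_M.gguf has already been downloaded from this repo.
llm = Llama(model_path="Octopus-v2.Q4_K_M.gguf", n_ctx=2048)

prompt = (
    "Below is the query from the users, please call the correct function "
    "and generate the parameters to call the function.\n\n"
    "Query: Take a selfie for me with front camera \n\nResponse:"
)
# temperature=0.0 keeps decoding effectively greedy, matching the original
# card's do_sample=False setting.
out = llm(prompt, max_tokens=256, temperature=0.0)
print(out["choices"][0]["text"])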
Original model description:
---
license: apache-2.0
base_model: google/gemma-2b
model-index:
- name: Octopus-V2-2B
  results: []
tags:
- function calling
- on-device language model
- android
inference: false
space: false
spaces: false
language:
- en
---

Octopus V2: On-device language model for super agent

Nexa AI Product | ArXiv | Video Demo


Introducing Octopus-V2-2B

Octopus-V2-2B, an advanced open-source language model with 2 billion parameters, represents Nexa AI's research breakthrough in the application of large language models (LLMs) for function calling, specifically tailored for Android APIs. Unlike Retrieval-Augmented Generation (RAG) methods, which require detailed descriptions of potential function arguments—sometimes needing up to tens of thousands of input tokens—Octopus-V2-2B introduces a unique functional token strategy for both its training and inference stages. This approach not only allows it to achieve performance levels comparable to GPT-4 but also significantly enhances its inference speed beyond that of RAG-based methods, making it especially beneficial for edge computing devices.
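
To make the functional token strategy concrete, here is an illustrative sketch of mapping a functional-token response back to a callable API. The token names and response format below are hypothetical, not the model's actual vocabulary:

import re

# Hypothetical token-to-function mapping: each Android API is assigned one
# special token, so the model selects a function by emitting a single token
# rather than spelling out its full name and schema.
FUNCTION_TOKENS = {
    "<nexa_0>": "take_a_photo",
    "<nexa_1>": "get_trending_news",
}

def dispatch(response):
    """Parse a response like "<nexa_0>('front')" into (function_name, raw_args)."""
    match = re.match(r"(<nexa_\d+>)\((.*)\)", response.strip())
    if match is None:
        raise ValueError(f"no functional token found in: {response!r}")
    token, raw_args = match.groups()
    return FUNCTION_TOKENS[token], raw_args

print(dispatch("<nexa_0>('front')"))  # ('take_a_photo', "'front'")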

📱 On-device Applications: Octopus-V2-2B is engineered to operate seamlessly on Android devices, extending its utility across a wide range of applications, from Android system management to the orchestration of multiple devices.

🚀 Inference Speed: When benchmarked, Octopus-V2-2B demonstrates remarkable inference speed, outperforming the "Llama7B + RAG solution" by a factor of 36x on a single A100 GPU. Furthermore, compared to GPT-4-turbo (gpt-4-0125-preview), which relies on clusters of A100/H100 GPUs, Octopus-V2-2B is 168% faster. This efficiency is attributed to our functional token design.

🐙 Accuracy: Octopus-V2-2B not only excels in speed but also in accuracy, surpassing the "Llama7B + RAG solution" in function call accuracy by 31%. It achieves a function call accuracy comparable to GPT-4 and RAG + GPT-3.5, with scores ranging between 98% and 100% across benchmark datasets.

💪 Function Calling Capabilities: Octopus-V2-2B is capable of generating individual, nested, and parallel function calls across a variety of complex scenarios, as illustrated in the sketch below.
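
For intuition, here are hypothetical examples of the three call shapes. The function names are invented stand-ins for Android APIs, not verbatim model output:

# Invented stand-ins for Android APIs, used only to make the shapes concrete.
def take_a_photo(camera): return f"photo taken with {camera} camera"
def get_contact(name): return f"contact:{name}"
def send_message(recipient, body): return f"message sent to {recipient}"
def set_volume(level): return f"volume={level}"
def enable_do_not_disturb(on): return f"dnd={on}"

# Individual: a single self-contained call.
take_a_photo(camera='front')

# Nested: one call's argument is produced by another call.
send_message(recipient=get_contact('Alice'), body='On my way')

# Parallel: several independent calls generated for one query.
set_volume(level=0)
enable_do_not_disturb(True)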

Example Use Cases


You can run the model on a GPU using the following code.

import time

import torch
from transformers import AutoTokenizer, GemmaForCausalLM

model_id = "NexaAIDev/Octopus-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = GemmaForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

def inference(input_text):
    """Run greedy decoding and return the generated text plus wall-clock latency."""
    start_time = time.time()
    inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
    input_length = inputs["input_ids"].shape[1]
    outputs = model.generate(
        input_ids=inputs["input_ids"],
        max_length=1024,
        do_sample=False,  # greedy decoding: function calls should be deterministic
    )
    # Strip the prompt tokens so only the newly generated text is decoded.
    generated_sequence = outputs[:, input_length:].tolist()
    res = tokenizer.decode(generated_sequence[0])
    return {"output": res, "latency": time.time() - start_time}

input_text = "Take a selfie for me with front camera"
nexa_query = (
    "Below is the query from the users, please call the correct function "
    f"and generate the parameters to call the function.\n\nQuery: {input_text} \n\nResponse:"
)
print("nexa model result:\n", inference(nexa_query))

Evaluation

The benchmark results can be viewed in this Excel sheet, which has been manually verified. All queries in the benchmark test were sampled by Gemini.


Note: each benchmark query includes all parameters needed by the target function. Queries are expected to include all required parameters during inference as well.

Training Data

We wrote 20 Android API descriptions to train the models; see this file for details. The Android API implementations for our demos and our training data will be published later. Below is one example of an Android API description:

def get_trending_news(category=None, region='US', language='en', max_results=5):
    """
    Fetches trending news articles based on category, region, and language.

    Parameters:
    - category (str, optional): News category to filter by, by default use None for all categories. Optional to provide.
    - region (str, optional): ISO 3166-1 alpha-2 country code for region-specific news, by default, uses 'US'. Optional to provide.
    - language (str, optional): ISO 639-1 language code for article language, by default uses 'en'. Optional to provide.
    - max_results (int, optional): Maximum number of articles to return, by default, uses 5. Optional to provide.

    Returns:
    - list[str]: A list of strings, each representing an article. Each string contains the article's heading and URL.
    """

License

This model was trained on commercially viable data and is under the Nexa AI community disclaimer.

References

We thank the Google Gemma team for their amazing models!

@misc{gemma-2023-open-models,
  author = {{Gemma Team, Google DeepMind}},
  title = {Gemma: Open Models Based on Gemini Research and Technology},
  url = {https://goo.gle/GemmaReport},  
  year = {2023},
}

Citation

@misc{chen2024octopus,
      title={Octopus v2: On-device language model for super agent}, 
      author={Wei Chen and Zhiyuan Li},
      year={2024},
      eprint={2404.01744},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

Contact

Please contact us with any issues or comments!
