
Llama-3.1-Omni-FinAI-8B Model Card

Model Overview (Built with Llama)

Llama-3.1-Omni-FinAI-8B is a large language model optimized as a foundation for finance-specific fine-tuning. Built on the Llama 3.1 8B architecture, it was further pre-trained on 143 billion tokens of high-quality financial text, providing a base for fine-tuning on specialized financial analysis tasks.

Model Details

  • Base Model: Llama-3.1-8B-Instruct
  • Training Data:
    • SEC 10-K, 10-Q, and 8-K filings
    • Reuters News data (RCV1, TRC2)
  • Finance-specific papers from arXiv
    • Financial discussions from Reddit
    • Wikipedia
  • Primary Use Case: Serving as a base model for finance-specific fine-tuning, allowing users to build on Llama-3.1-Omni-FinAI-8B's foundational financial language understanding (a minimal loading sketch follows this list).
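As a minimal sketch (not part of the model card itself), the model can be loaded with Hugging Face Transformers as a starting point for fine-tuning or a quick inference check. The prompt and generation settings below are illustrative only; as a continued pre-trained base model, raw generations may need task-specific fine-tuning to be useful.

```python
# Minimal, hedged sketch: load Llama-3.1-Omni-FinAI-8B for a quick inference check.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ichanchiu/Llama-3.1-Omni-FinAI-8B"  # repository id for this model

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # BF16, matching the released weights
    device_map="auto",           # requires the accelerate package
)

# Illustrative prompt only; the card does not prescribe a prompt format.
prompt = "The main drivers of revenue growth this quarter were"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```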

Use Cases

Llama-3.1-Omni-FinAI-8B is designed as a base model for finance-specific fine-tuning tasks, supporting applications such as the following (a fine-tuning sketch appears after the list):

  • Sentiment Analysis
  • Stock Movement Prediction
  • Instruction-based Question Answering (QA)
  • Summarization
  • Predictive Financial Analysis
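The card does not prescribe a fine-tuning recipe. As one hedged illustration, a parameter-efficient (LoRA) fine-tune for financial sentiment analysis could look like the sketch below; LoRA, the toy dataset, the prompt format, and all hyperparameters are placeholders chosen for the example, not part of this model's documentation.

```python
# Hedged illustration only: LoRA fine-tuning for financial sentiment analysis.
import torch
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_id = "ichanchiu/Llama-3.1-Omni-FinAI-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Attach low-rank adapters to the attention projections so only a small
# fraction of parameters is trained.
model = get_peft_model(
    model,
    LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
               task_type="CAUSAL_LM"),
)

# Toy labelled examples; replace with a real financial sentiment dataset.
raw = Dataset.from_dict({
    "sentence": [
        "Quarterly revenue rose 12% year over year.",
        "The company issued a profit warning for Q3.",
    ],
    "label": ["positive", "negative"],
})

def tokenize(example):
    text = f"Sentence: {example['sentence']}\nSentiment: {example['label']}"
    return tokenizer(text, truncation=True, max_length=256)

train_dataset = raw.map(tokenize, remove_columns=raw.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finai-sentiment-lora",
                           per_device_train_batch_size=1,
                           num_train_epochs=1,
                           learning_rate=2e-4,
                           bf16=True),
    train_dataset=train_dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("finai-sentiment-lora")  # saves only the adapter weights
```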

Training Process

Llama-3.1-Omni-FinAI-8B was trained using the NVIDIA NeMo framework on 64 H100 GPUs. The diverse financial corpus described above is intended to provide a robust foundation for fine-tuning in finance-related applications.

Limitations

This model is a pre-trained base intended for finance-specific fine-tuning and typically requires additional fine-tuning before use in specialized applications. Given its 8B-parameter size, substantial computational resources are recommended for deployment.
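As one hedged way to lower the deployment footprint (not described on the card), the model can be loaded with 4-bit quantization via bitsandbytes; expect some quality trade-off versus the full-precision BF16 weights.

```python
# Hedged sketch: 4-bit quantized loading to reduce GPU memory at inference time.
# Quantization is not part of the model card and may affect output quality.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "ichanchiu/Llama-3.1-Omni-FinAI-8B"
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_type="nf4",
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",
)
```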

License

This model is licensed under the Llama 3.1 Community License.

Citation

If you use the Llama-3.1-Omni-FinAI-8B model, please cite as follows:

Chiu, I-Chan and Hung, Mao-Wei and Chen, Zih-Ching and Chiu, Jun-wei and Lin, Yang-Hsien and Lee, Cheng-Kuang and Huang, Eddie TC and See, Simon, Omni-FinAI: Unlocking Financial Disclosure Insights (October 30, 2024). Available at SSRN: https://ssrn.com/abstract=5004298
