0xroyce committed
Commit 12736a3 · verified · 1 Parent(s): 7f0ee16

Update README.md

Files changed (1): README.md (+8, -6)
README.md CHANGED

@@ -14,9 +14,11 @@ model_creator: 0xroyce
 model_type: LLaMA
 ---
 
-# 0xroyce/Plutus-Meta-Llama-3.1-8B-Instruct-bnb-4bit
+# Plutus-Meta-Llama-3.1-8B-Instruct-bnb-4bit
 
-0xroyce/Plutus-Meta-Llama-3.1-8B-Instruct-bnb-4bit is a fine-tuned version of the LLaMA-3.1-8B model, specifically optimized for tasks related to finance, economics, trading, psychology, and social engineering. This model leverages the LLaMA architecture and employs 4-bit quantization to deliver high performance in resource-constrained environments while maintaining accuracy and relevance in natural language processing tasks.
+Plutus-Meta-Llama-3.1-8B-Instruct-bnb-4bit is a fine-tuned version of the LLaMA-3.1-8B model, specifically optimized for tasks related to finance, economics, trading, psychology, and social engineering. This model leverages the LLaMA architecture and employs 4-bit quantization to deliver high performance in resource-constrained environments while maintaining accuracy and relevance in natural language processing tasks.
+
+![Plutus Banner](https://iili.io/djQmWzu.webp)
 
 ## Model Details
 
@@ -29,7 +31,7 @@ model_type: LLaMA
 
 ## Training
 
-0xroyce/Plutus-Meta-Llama-3.1-8B-Instruct-bnb-4bit was fine-tuned on the [**"Financial, Economic, and Psychological Analysis Texts"** dataset](https://huggingface.co/datasets/0xroyce/Plutus), which is a comprehensive collection of 85 influential books out of a planned 398. This dataset covers key areas such as:
+Plutus-Meta-Llama-3.1-8B-Instruct-bnb-4bit was fine-tuned on the [**"Financial, Economic, and Psychological Analysis Texts"** dataset](https://huggingface.co/datasets/0xroyce/Plutus), which is a comprehensive collection of 85 influential books out of a planned 398. This dataset covers key areas such as:
 
 - **Finance and Investment**: Including stock market analysis, value investing, and exchange-traded funds (ETFs).
 - **Trading Strategies**: Focused on technical analysis, options trading, and algorithmic trading methods.
@@ -50,11 +52,11 @@ This model is well-suited for a variety of natural language processing tasks wit
 
 ## Performance
 
-While specific benchmark scores for 0xroyce/Plutus-Meta-Llama-3.1-8B-Instruct-bnb-4bit are not provided, the model is designed to offer competitive performance within its parameter range, particularly for tasks involving financial, economic, and security-related data. The 4-bit quantization offers a balance between model size and computational efficiency, making it ideal for deployment in resource-limited settings.
+While specific benchmark scores for Plutus-Meta-Llama-3.1-8B-Instruct-bnb-4bit are not provided, the model is designed to offer competitive performance within its parameter range, particularly for tasks involving financial, economic, and security-related data. The 4-bit quantization offers a balance between model size and computational efficiency, making it ideal for deployment in resource-limited settings.
 
 ## Limitations
 
-Despite its strengths, the 0xroyce/Plutus-Meta-Llama-3.1-8B-Instruct-bnb-4bit model has some limitations:
+Despite its strengths, the Plutus-Meta-Llama-3.1-8B-Instruct-bnb-4bit model has some limitations:
 
 - **Domain-Specific Biases**: The model may generate biased content depending on the input, especially within specialized financial, psychological, or cybersecurity domains.
 - **Inference Speed**: Although optimized with 4-bit quantization, real-time application latency may still be an issue depending on the deployment environment.
@@ -79,7 +81,7 @@ print(tokenizer.decode(output[0], skip_special_tokens=True))
 
 ## Ethical Considerations
 
-The 0xroyce/Plutus-Meta-Llama-3.1-8B-Instruct-bnb-4bit model, like other large language models, can generate biased or potentially harmful content. Users are advised to implement content filtering and moderation when deploying this model in public-facing applications. Further fine-tuning is also encouraged to align the model with specific ethical guidelines or domain-specific requirements.
+The Plutus-Meta-Llama-3.1-8B-Instruct-bnb-4bit model, like other large language models, can generate biased or potentially harmful content. Users are advised to implement content filtering and moderation when deploying this model in public-facing applications. Further fine-tuning is also encouraged to align the model with specific ethical guidelines or domain-specific requirements.
 
 ## Citation
 
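The README's usage hunk above only reveals its final `print(tokenizer.decode(...))` line, so here is a minimal sketch of how the 4-bit model described in the updated card might be loaded and queried with `transformers` and `bitsandbytes`. The prompt, generation settings, and the explicit `BitsAndBytesConfig` are illustrative assumptions, not taken from the commit.

```python
# Sketch (not from the commit): load the pre-quantized 4-bit Plutus model and run one prompt.
# Assumes transformers, accelerate, and bitsandbytes are installed and a CUDA GPU is available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "0xroyce/Plutus-Meta-Llama-3.1-8B-Instruct-bnb-4bit"

# Explicit 4-bit (NF4) settings; the repo already ships bnb-4bit weights,
# so this config mainly documents the intended quantization (an assumption here).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Illustrative finance-flavoured prompt and generation settings.
prompt = "Explain the trade-offs between value investing and momentum trading in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The 4-bit loading path is what makes the model usable in the resource-constrained settings the card mentions; on a single consumer GPU the quantized 8B weights fit in roughly 6 GB of VRAM, whereas full-precision weights would not.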