|
---
base_model:
- miner41612/gemma-2-2b-finance-it
datasets:
- Mineru/kor-finance-sft
language:
- ko
library_name: transformers
license: gemma
pipeline_tag: text-generation
tags:
- krx
- finance
- sft
- trl
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
  agree to Google’s usage license. To do this, please ensure you’re logged in to
  Hugging Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
|
|
|
|
|
# Gemma 2 Finance model card |
|
|
|
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs/base) |
|
|
|
**Terms of Use**: [Terms](https://ai.google.dev/gemma/terms)
|
|
|
**Authors**: miner41612 |
|
|
|
## Model Information |
|
|
|
A brief description and definition of the model's inputs and outputs.
|
|
|
### Description |
|
|
|
This model was built in two stages: Google's Gemma 2 2B was first continually pre-trained on a cleaned and refined financial-domain dataset, and the resulting model was then fine-tuned on a financial-domain instruction dataset.
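The card's tags indicate the instruction stage used TRL's SFT tooling. Below is a minimal, hypothetical sketch of such a run for orientation only; the split name and all hyperparameters are assumptions, not the author's actual training configuration.

```python
# Hypothetical sketch of the SFT stage with TRL; not the author's actual script.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Instruction dataset listed in this card's metadata (split name assumed).
dataset = load_dataset("Mineru/kor-finance-sft", split="train")

trainer = SFTTrainer(
    model="miner41612/gemma-2-2b-finance-it",  # continually pre-trained checkpoint
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="gemma-2-2b-finance-it-sft",  # assumed output path
        per_device_train_batch_size=4,           # assumed hyperparameters
        num_train_epochs=1,
    ),
)
trainer.train()
```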
|
|
|
### Usage |
|
|
|
Below are some code snippets to help you get started quickly with running the model. First, install the Transformers library with:

```sh
pip install -U transformers
```
|
|
|
Then, copy the snippet from the section that is relevant for your use case.
|
|
|
#### Running with the `pipeline` API |
|
|
|
```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="miner41612/gemma-2-2b-finance-it",
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="cuda",  # replace with "mps" to run on a Mac device
)

messages = [
    {"role": "user", "content": "원가상환제도란?"},
]

outputs = pipe(messages, max_new_tokens=256)
assistant_response = outputs[0]["generated_text"][-1]["content"].strip()
print(assistant_response)
```
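When you pass a list of chat messages like this, `pipeline` applies the model's chat template automatically, so no manual prompt formatting is needed.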
|
|
|
#### Running the model on a single / multi GPU |
|
|
|
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("miner41612/gemma-2-2b-finance-it")
model = AutoModelForCausalLM.from_pretrained(
    "miner41612/gemma-2-2b-finance-it",
    device_map="auto",
    torch_dtype=torch.bfloat16,  # load in half precision; omit to load in full fp32
)

input_text = "원가상환제도란?"
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
|
|
|
You can ensure the correct chat template is applied by using `tokenizer.apply_chat_template` as follows: |
|
|
|
```python
messages = [
    {"role": "user", "content": "원가상환제도란?"},
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant turn header so the model replies
    return_tensors="pt",
    return_dict=True,
).to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0]))
```
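Note that `tokenizer.decode(outputs[0])` includes the prompt tokens. To print only the model's reply, you can slice them off, as in this optional sketch:

```python
# The prompt occupies the first input_ids["input_ids"].shape[1] positions
# of the returned sequence; decode only what follows it.
prompt_length = input_ids["input_ids"].shape[1]
print(tokenizer.decode(outputs[0][prompt_length:], skip_special_tokens=True))
```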
|
|
|
#### Quantized Versions through `bitsandbytes` |
|
|
|
<details>
<summary>
Using 8-bit precision (int8)
</summary>

```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained("miner41612/gemma-2-2b-finance-it")
model = AutoModelForCausalLM.from_pretrained(
    "miner41612/gemma-2-2b-finance-it",
    quantization_config=quantization_config,
)

input_text = "원가상환제도란?"
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
|
</details> |
|
|
|
<details>
<summary>
Using 4-bit precision
</summary>

```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_4bit=True)

tokenizer = AutoTokenizer.from_pretrained("miner41612/gemma-2-2b-finance-it")
model = AutoModelForCausalLM.from_pretrained(
    "miner41612/gemma-2-2b-finance-it",
    quantization_config=quantization_config,
)

input_text = "원가상환제도란?"
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
|
</details> |
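Quantized loading trades a little generation quality for a large reduction in GPU memory: 8-bit roughly halves and 4-bit roughly quarters the weight footprint relative to bfloat16.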
|
|
|
#### Advanced Usage |
|
|
|
<details>
<summary>
Torch compile
</summary>

[Torch compile](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html) is a method for speeding up the inference of PyTorch modules. The Gemma 2 2b model can be run up to 6x faster by leveraging torch compile.

Note that two warm-up steps are required before the full inference speed is realised:
|
|
|
```python
import os
os.environ["TOKENIZERS_PARALLELISM"] = "false"

from transformers import AutoTokenizer, Gemma2ForCausalLM
from transformers.cache_utils import HybridCache
import torch

torch.set_float32_matmul_precision("high")

# load the model + tokenizer
tokenizer = AutoTokenizer.from_pretrained("miner41612/gemma-2-2b-finance-it")
model = Gemma2ForCausalLM.from_pretrained("miner41612/gemma-2-2b-finance-it", torch_dtype=torch.bfloat16)
model.to("cuda")

# apply the torch compile transformation
model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True)

# pre-process inputs
input_text = "원가상환제도란? "
model_inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
prompt_length = model_inputs.input_ids.shape[1]

# set-up k/v cache
past_key_values = HybridCache(
    config=model.config,
    max_batch_size=1,
    max_cache_len=model.config.max_position_embeddings,
    device=model.device,
    dtype=model.dtype
)

# enable passing kv cache to generate
model._supports_cache_class = True
model.generation_config.cache_implementation = None

# two warm-up steps
for idx in range(2):
    outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128)
    past_key_values.reset()

# fast run
outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
|
|
For more details, refer to the [Transformers documentation](https://huggingface.co/docs/transformers/main/en/llm_optims?static-kv=basic+usage%3A+generation_config).
|
|
|
</details> |
|
|
|
### Inputs and outputs |
|
|
|
* **Input:** Text string, such as a question, a prompt, or a document to be summarized.
* **Output:** Generated text (primarily Korean, given the model's fine-tuning data) in response to the input, such as an answer to a question or a summary of a document.
|
|
|
### Citation |
|
|
|
```none
@article{gemma_2024,
    title={Gemma},
    url={https://www.kaggle.com/m/3301},
    DOI={10.34740/KAGGLE/M/3301},
    publisher={Kaggle},
    author={Gemma Team},
    year={2024}
}
```
|
|
|
## Model Data |
|
|
|
The instruction-tuning stage used the [Mineru/kor-finance-sft](https://huggingface.co/datasets/Mineru/kor-finance-sft) dataset listed in this card's metadata, a Korean financial-domain SFT dataset. Details of the continual pre-training corpus and its processing have not been published.
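To inspect the instruction data, here is a minimal sketch using the `datasets` library; the split name and column layout are assumptions about the dataset, so check them before relying on this:

```python
# Minimal sketch for inspecting the SFT dataset; split/column names are assumptions.
from datasets import load_dataset

ds = load_dataset("Mineru/kor-finance-sft", split="train")
print(ds)     # shows the actual column names and row count
print(ds[0])  # first training example
```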
|
|
|
## Ethics and Safety |
|
|
|
Ethics and safety evaluation approach and results. |
|
|
|
## Dangerous Capability Evaluations |
|
|
|
### Evaluation Approach |
|
|
|
We evaluated a range of dangerous capabilities: |
|
|
|
- **Offensive cybersecurity:** To assess the model's potential for misuse in cybersecurity contexts, we utilized publicly available Capture-the-Flag (CTF) platforms such as InterCode-CTF and Hack the Box, as well as internally developed CTF challenges. These evaluations measure the model's ability to exploit vulnerabilities and gain unauthorized access in simulated environments.
- **Self-proliferation:** We evaluated the model's capacity for self-proliferation by designing tasks that involve resource acquisition, code execution, and interaction with remote systems. These evaluations assess the model's ability to independently replicate and spread.
- **Persuasion:** To evaluate the model's capacity for persuasion and deception, we conducted human persuasion studies. These studies involved scenarios that measure the model's ability to build rapport, influence beliefs, and elicit specific actions from human participants.
|
|
|
|
|
## Usage and Limitations |
|
|
|
These models have certain limitations that users should be aware of. |
|
|
|
### Intended Usage |
|
|
|
Open Large Language Models (LLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use cases that the model creators considered as part of model training and development.
|
|
|
* Content Creation and Communication
    * Text Generation: These models can be used to generate creative text formats such as poems, scripts, code, marketing copy, and email drafts.
    * Chatbots and Conversational AI: Power conversational interfaces for customer service, virtual assistants, or interactive applications.
    * Text Summarization: Generate concise summaries of a text corpus, research papers, or reports (see the sketch after this list).
* Research and Education
    * Natural Language Processing (NLP) Research: These models can serve as a foundation for researchers to experiment with NLP techniques, develop algorithms, and contribute to the advancement of the field.
    * Language Learning Tools: Support interactive language learning experiences, aiding in grammar correction or providing writing practice.
    * Knowledge Exploration: Assist researchers in exploring large bodies of text by generating summaries or answering questions about specific topics.
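As one concrete illustration of the summarization use case above, here is a minimal, hypothetical sketch that reuses the `pipe` object from the Usage section; the prompt wording is an assumption, not a prescribed format:

```python
# Hypothetical summarization prompt; Gemma 2 has no dedicated summarization
# mode, so we simply ask for a summary in the user turn.
article = "..."  # the document to summarize
messages = [
    {"role": "user", "content": f"다음 글을 세 문장으로 요약해 주세요:\n\n{article}"},
]
outputs = pipe(messages, max_new_tokens=256)
print(outputs[0]["generated_text"][-1]["content"].strip())
```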
|
|
|
### Limitations |
|
|
|
* Training Data
    * The quality and diversity of the training data significantly influence the model's capabilities. Biases or gaps in the training data can lead to limitations in the model's responses.
    * The scope of the training dataset determines the subject areas the model can handle effectively.
* Context and Task Complexity
    * LLMs are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging.
    * A model's performance can be influenced by the amount of context provided (longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
    * Natural language is inherently complex. LLMs might struggle to grasp subtle nuances, sarcasm, or figurative language.
* Factual Accuracy
    * LLMs generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements.
* Common Sense
    * LLMs rely on statistical patterns in language. They might lack the ability to apply common sense reasoning in certain situations.
|
|
|
### Ethical Considerations and Risks |
|
|
|
The development of large language models (LLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following:

* Bias and Fairness
    * LLMs trained on large-scale, real-world text data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny; input data pre-processing and posterior evaluations are described in this card.
* Misinformation and Misuse
    * LLMs can be misused to generate text that is false, misleading, or harmful.
    * Guidelines are provided for responsible use with the model; see the [Responsible Generative AI Toolkit](https://ai.google.dev/responsible).
* Transparency and Accountability
    * This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes.
    * A responsibly developed open model offers the opportunity to share innovation by making LLM technology accessible to developers and researchers across the AI ecosystem.
|
|
|
Risks identified and mitigations: |
|
|
|
* Perpetuation of biases: Continuous monitoring (using evaluation metrics and human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases are encouraged.
* Generation of harmful content: Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and end-user education can help mitigate against malicious applications of LLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
* Privacy violations: Models were trained on data filtered for removal of PII (Personally Identifiable Information). Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques.