|
--- |
|
library_name: transformers |
|
license: llama3.2 |
|
base_model: |
|
- meta-llama/Llama-3.2-3B |
|
- lianghsun/Llama-3.2-Taiwan-3B |
|
datasets: |
|
- lianghsun/tw-emergency-medicine-bench |
|
- lianghsun/tw-legal-nlp |
|
- lianghsun/tw-legal-synthetic-qa |
|
- lianghsun/tw-law-article-qa |
|
- lianghsun/tw-judgment-qa |
|
- lianghsun/tw-bar-examination-2020-chat |
|
- lianghsun/tw-structured-law-article |
|
- lianghsun/tw-judgment-gist-chat |
|
- lianghsun/tw-contract-review-chat |
|
- lianghsun/reasoning-base-20k-chat |
|
- lianghsun/vulnerability-mitigation-qa-zh_tw |
|
- benchang1110/Belle-Taide |
|
- rombodawg/Everything_Instruct_Multilingual |
|
tags: |
|
- legal |
|
- TW |
|
- Taiwan |
|
- ROC |
|
- llama-factory |
|
- zh-tw |
|
model-index: |
|
- name: Llama-3.2-Taiwan-Legal-3B-Instruct |
|
results: |
|
- task: |
|
type: text-generation |
|
dataset: |
|
name: lianghsun/tw-legal-benchmark-v1 |
|
type: lianghsun/tw-legal-benchmark-v1 |
|
metrics: |
|
- name: tw-legal-benchmark-v1 |
|
type: tw-legal-benchmark-v1 |
|
value: 22.01 |
|
new_version: lianghsun/Llama-3.2-Taiwan-Legal-3B-Instruct-v2024.11.13 |
|
language: |
|
- zh |
|
pipeline_tag: text-generation |
|
metrics: |
|
- accuracy |
|
--- |
|
|
|
|
|
|
# Model Card for lianghsun/Llama-3.2-Taiwan-Legal-3B-Instruct
|
|
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/618dc56cbc345ca7bf95f3cd/W6-UDg0_cNm4WJVlR9tiD.png) |
|
|
|
Fine-tuned from the [lianghsun/Llama-3.2-Taiwan-3B](https://huggingface.co./lianghsun/Llama-3.2-Taiwan-3B) model on datasets of statutes, court judgments, and related legal materials from the Republic of China (Taiwan).
|
|
|
## Model Update History |
|
|
|
| Update Date | Model Version | Key Changes |
|-------------|---------------|-------------|
| 2024-11-13 | v2024.11.13 | Fine-tuned version **v2024.11.13** of [lianghsun/Llama-3.2-Taiwan-3B](https://huggingface.co./lianghsun/Llama-3.2-Taiwan-3B). This instruct version begins experimental integration of non-Chinese instructions to improve the model's robustness and reduce the risk of overfitting. |
| 2024-11-06 | v2024.11.6 | Starting with this release, fine-tuning is based on the foundation model [lianghsun/Llama-3.2-Taiwan-3B](https://huggingface.co./lianghsun/Llama-3.2-Taiwan-3B) **v2024.10.27**, and versioning now follows the *YYYY-mm-dd* format. |
| 2024-10-17 | v1.1.0 (v2024.10.17) | (Model collapsed 💥) Experimental fine-tuning of **v1.0.0** with added statute data from the Republic of China (Taiwan). |
| 2024-10-10 | v1.0.0 (v2024.10.10) | Full model training completed, but missing statute data for the Republic of China (Taiwan). |
| 2024-09-27 | v0.1.0 (v2024.09.27) | Model v0.1.0 released; training stopped after 3 epochs due to a lack of compute resources. |
|
|
|
|
|
## Model Details |
|
|
|
### Model Description |
|
|
|
Based on [lianghsun/Llama-3.2-Taiwan-3B](https://huggingface.co./lianghsun/Llama-3.2-Taiwan-3B), this fine-tune uses datasets of statutes and court judgments from the Republic of China (Taiwan) to strengthen the model's expertise and practical ability in the legal domain. The datasets cover the structure of statutory articles, the format of court judgments, and the legal language and terminology commonly used in court, and include several legal data-science tasks, enabling the model to understand and handle questions about the Taiwanese legal system more accurately. With this fine-tuning, the model can better assist legal professionals and provide more precise responses and suggestions within the framework of Taiwanese law.
|
|
|
- **Developed by:** [Huang Liang Hsun](https://www.linkedin.com/in/lianghsunhuang)

- **Model type:** LlamaForCausalLM

- **Language(s) (NLP):** Primarily Traditional Chinese (zh-tw), fine-tuned on the legal terminology and court judgments of the Republic of China (Taiwan).

- **License:** [llama3.2](https://huggingface.co./meta-llama/Llama-3.2-1B/blob/main/LICENSE.txt)

- **Finetuned from model:** [lianghsun/Llama-3.2-Taiwan-3B](https://huggingface.co./lianghsun/Llama-3.2-Taiwan-3B)
|
|
|
### Model Sources |
|
|
|
- **Repository:** [lianghsun/Llama-3.2-Taiwan-Legal-3B-Instruct](https://huggingface.co./lianghsun/Llama-3.2-Taiwan-Legal-3B-Instruct) |
|
- **Demo:** (WIP) |
|
|
|
## Uses |
|
|
|
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> |
|
|
|
### Direct Use |
|
|
|
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> |
|
|
|
The model can be used as-is to understand and generate Traditional Chinese legal text, making it suitable for applications that handle questions about Taiwanese law. Out of the box, it can provide legal information, clarify statutory provisions, and generate professionally worded legal responses. Direct uses include, but are not limited to, legal information lookup, legal text summarization, and basic dialogue about statutory articles.
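
As a minimal sketch of direct use (assuming the `transformers` version listed at the end of this card), the model can be queried through the text-generation pipeline; the sample question is illustrative only:

```python
# Direct-use sketch with the transformers text-generation pipeline.
# The sample question ("Briefly explain Article 184 of the ROC Civil Code")
# is illustrative only.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="lianghsun/Llama-3.2-Taiwan-Legal-3B-Instruct",
    device_map="auto",
)

messages = [{"role": "user", "content": "請簡要說明中華民國民法第184條的內容。"}]
result = pipe(messages, max_new_tokens=256)
print(result[0]["generated_text"][-1]["content"])
```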
|
|
|
### Downstream Use |
|
|
|
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> |
|
|
|
With further fine-tuning, the model can serve more specific legal tasks such as automated judgment analysis, legal named-entity recognition (NER), statute-number normalization, and assistance with legal compliance review. It can be integrated into legal data-science applications or LegalTech (legal technology) systems to help legal professionals and businesses work more efficiently; a prompt-construction sketch for one such task follows.
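
The sketch below shows one way a downstream judgment-gist summarization flow might build its prompt with the model's chat template; the system prompt and placeholder judgment text are illustrative assumptions, not part of the original training setup:

```python
# Hedged sketch of a downstream task: summarizing the gist of a judgment.
# The system prompt and the placeholder judgment text are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lianghsun/Llama-3.2-Taiwan-Legal-3B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    # System: "You are a Taiwanese legal assistant; summarize the gist of the judgment below."
    {"role": "system", "content": "你是台灣法律助理,請摘要下列判決書的要旨。"},
    {"role": "user", "content": "<judgment text here>"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```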
|
|
|
### Out-of-Scope Use |
|
|
|
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> |
|
|
|
The model is not intended for generation tasks outside the legal domain, and it should not be used to produce potentially misleading or incorrect legal advice, especially without professional review. Avoid unauthorized or unlawful uses, such as generating contentious or biased legal recommendations.
|
|
|
## Bias, Risks, and Limitations |
|
|
|
When generating statutory provisions or judgment content, the model may fabricate articles or judgments that do not exist; this is one of its inherent limitations. Users should check generated content carefully and must not treat model output as legal authority. In practice, compare the model's output against reliable legal opinions and sources to confirm its accuracy, legality, and applicability.
|
|
|
### Recommendations |
|
|
|
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> |
|
|
|
Although the model has been fine-tuned on legal text, the limited volume of that text and the fact that the base model is a small language model (SLM) mean its capabilities are bounded. Users should keep the following risks and limitations in mind:
|
|
|
1. **Bias risk:**

   The model may reflect biases latent in its training data. Because legal text is domain-specific, the model may be more familiar with certain statutes, articles, or precedents and weaker elsewhere. Its output may be biased especially when it handles uncommon legal questions or new regulations it was never trained on.
|
|
|
2. **Technical limitations:**

   Although the model handles most legal text, it may fail to produce precise answers for provisions that are structurally very complex or linguistically ambiguous. Do not rely on its output alone; for legal decision-making in particular, additional professional review is advised.
|
|
|
3. **Legal liability:**

   The model is not a professional legal adviser, so its responses must not be treated as sound legal advice. Apply the model with judgment and an appropriate professional background, and avoid over-relying on it in critical decisions.
|
|
|
4. **Misuse risk:**

   Using the model improperly to produce wrong or misleading legal advice can harm individuals or businesses. Apply the model cautiously in compliance and other legal tasks, and keep reviewing and correcting its output.
|
|
|
To reduce these risks, double-check model output before acting on it, especially in contexts involving legal decisions. At this stage the model is offered to support large-language-model research in LegalTech; it does not replace the professional advice of legal practitioners.
|
|
|
## How to Get Started with the Model |
|
|
|
<!-- Use the code below to get started with the model. --> |
|
|
|
### Using vLLM |
|
|
|
To serve this model with the [vLLM Docker image](https://docs.vllm.ai/en/latest/serving/deploying_with_docker.html), run:
|
```
docker run --runtime nvidia --gpus all \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HUGGING_FACE_HUB_TOKEN=<secret>" \
  -p 8000:8000 \
  --ipc=host \
  vllm/vllm-openai:latest \
  --model lianghsun/Llama-3.2-Taiwan-Legal-3B-Instruct
```
|
|
|
Note: to serve a different checkpoint version, add `--revision <tag_name>`:
|
```
docker run --runtime nvidia --gpus all \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HUGGING_FACE_HUB_TOKEN=<secret>" \
  -p 8000:8000 \
  --ipc=host \
  vllm/vllm-openai:latest \
  --model lianghsun/Llama-3.2-Taiwan-Legal-3B-Instruct --revision <tag_name>
```
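
The vLLM container exposes an OpenAI-compatible API on port 8000. A minimal client sketch follows; the sample question and the `api_key` value are illustrative placeholders:

```python
# Minimal client sketch for the OpenAI-compatible endpoint served by vLLM.
# Assumes the container above is running locally; the question is illustrative.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="lianghsun/Llama-3.2-Taiwan-Legal-3B-Instruct",
    messages=[
        # "What is unjust enrichment?" (Traditional Chinese)
        {"role": "user", "content": "什麼是不當得利?"}
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)
```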
|
|
|
## Training Details |
|
|
|
### Training Data (for v2024.11.13) |
|
|
|
- [lianghsun/tw-legal-nlp](https://huggingface.co./datasets/lianghsun/tw-legal-nlp) |
|
- [lianghsun/tw-legal-synthetic-qa](https://huggingface.co./datasets/lianghsun/tw-legal-synthetic-qa) |
|
- [lianghsun/tw-law-article-qa](https://huggingface.co./datasets/lianghsun/tw-law-article-qa) |
|
- [lianghsun/tw-judgment-qa](https://huggingface.co./datasets/lianghsun/tw-judgment-qa) |
|
- [lianghsun/tw-bar-examination-2020-chat](https://huggingface.co./datasets/lianghsun/tw-bar-examination-2020-chat) |
|
- [lianghsun/tw-emergency-medicine-bench](https://huggingface.co./datasets/lianghsun/tw-emergency-medicine-bench) |
|
- [lianghsun/tw-structured-law-article](https://huggingface.co./datasets/lianghsun/tw-structured-law-article) |
|
- [lianghsun/tw-judgment-gist-chat](https://huggingface.co./datasets/lianghsun/tw-judgment-gist-chat) |
|
- [lianghsun/vulnerability-mitigation-qa-zh_tw](https://huggingface.co./datasets/lianghsun/vulnerability-mitigation-qa-zh_tw) |
|
- [lianghsun/tw-legal-qa-chat](https://huggingface.co./datasets/lianghsun/tw-legal-qa-chat) |
|
- [lianghsun/reasoning-base-20k-chat](https://huggingface.co./datasets/lianghsun/reasoning-base-20k-chat) |
|
- [lianghsun/tw-contract-review-chat](https://huggingface.co./datasets/lianghsun/tw-contract-review-chat) |
|
- [rombodawg/Everything_Instruct_Multilingual](https://huggingface.co./datasets/rombodawg/Everything_Instruct_Multilingual) |
|
- [benchang1110/Belle-Taide](https://huggingface.co./datasets/benchang1110/Belle-Taide) |
|
|
|
### Training procedure |
|
|
|
#### Preprocessing |
|
|
|
Since v2024.11.6, this model has used [lianghsun/Llama-3.2-Taiwan-3B](https://huggingface.co./lianghsun/Llama-3.2-Taiwan-3B) as its foundation model.
The tokenizer remains identical to the original [meta-llama/Llama-3.2-3B](https://huggingface.co./meta-llama/Llama-3.2-3B); expanding the Chinese vocabulary will be considered in future releases.
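
A quick way to confirm the unchanged tokenizer (assuming you have access to both repositories, including the gated meta-llama one) is to compare vocabularies:

```python
# Sketch: verify the fine-tuned model still uses the original Llama-3.2-3B
# tokenizer. Access to the gated meta-llama repository is assumed.
from transformers import AutoTokenizer

tok_ft = AutoTokenizer.from_pretrained("lianghsun/Llama-3.2-Taiwan-Legal-3B-Instruct")
tok_base = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-3B")
print(tok_ft.get_vocab() == tok_base.get_vocab())  # expected: True
```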
|
|
|
#### Training hyperparameters (for v2024.11.13) |
|
|
|
The following hyperparameters were used during training: |
|
|
|
- **learning_rate:** (initial lr) 5e-5 |
|
- **train_batch_size:** 20 |
|
- **eval_batch_size:** Not specified |
|
- **seed:** 42 |
|
- **distributed_type:** single-node |
|
- **num_devices:** 8 |
|
- **gradient_accumulation_steps:** 16 |
|
- **total_train_batch_size:** 1,280 (train_batch_size * gradient_accumulation_steps * num_devices) |
|
- **optimizer:** adamw_torch_fused |
|
- **lr_scheduler_type:** cosine |
|
- **lr_scheduler_warmup_steps:** 100 |
|
- **num_epochs:** 3 |
|
- **grad_norm:** 1.1764454343711086 |
|
- **global_step:** 65 |
|
|
|
### Speeds, Sizes, Times (for v2024.11.13) |
|
|
|
- **Duration**: 30m 19s |
|
- **Train runtime**: 30m 19s |
|
- **Train samples per second**: 1.1764454343711086 |
|
- **Train steps per second**: 0.036 |
|
- **Total training FLOPs**: 89,423,735,685,120 |
|
- **Train loss**: 0.7657 |
|
|
|
## Evaluation |
|
|
|
<!-- This section describes the evaluation protocols and provides the results. --> |
|
|
|
### Testing Data, Factors & Metrics |
|
|
|
#### Testing Data |
|
|
|
<!-- This should link to a Dataset Card if possible. --> |
|
|
|
**Note**: ..(WIP).. |
|
|
|
#### Factors |
|
|
|
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> |
|
|
|
**Note**: ..(WIP).. |
|
|
|
#### Metrics |
|
|
|
<!-- These are the evaluation metrics being used, ideally with a description of why. --> |
|
|
|
**Note**: ..(WIP).. |
|
|
|
### Results |
|
|
|
**Note**: ..(WIP).. |
|
|
|
#### Summary |
|
|
|
**Note**: ..(WIP).. |
|
|
|
## Model Examination |
|
|
|
<!-- Relevant interpretability work for the model goes here --> |
|
|
|
### Statutory Article Responses
|
|
|
**Note**: ..(WIP).. |
|
|
|
### Judgment Content
|
|
|
**Note**: ..(WIP).. |
|
|
|
### Legal NLP Tasks
|
|
|
**Note**: ..(WIP).. |
|
|
|
## Environmental Impact (for v2024.11.13) |
|
|
|
- **Hardware Type:** 8 x NVIDIA A100 40GB |
|
- **Hours used:** ~0.5 hours (30m 19s)
|
- **Cloud Provider:** N/A |
|
- **Compute Region:** N/A |
|
- **Carbon Emitted:** N/A |
|
|
|
## Technical Specifications |
|
|
|
### Model Architecture and Objective |
|
|
|
This model is based on [lianghsun/Llama-3.2-Taiwan-3B](https://huggingface.co./lianghsun/Llama-3.2-Taiwan-3B) and uses an autoregressive Transformer architecture for language modeling. Its primary objective is to improve the understanding and generation of Taiwanese legal text, especially the professional handling of court judgments and statutory articles. Fine-tuned on a purpose-built corpus of legal texts, the model can answer legal questions more precisely and offer relevant suggestions.
|
|
|
### Compute Infrastructure |
|
|
|
#### Hardware (for v2024.11.6) |
|
|
|
- 8 x NVIDIA A100 40GB |
|
|
|
#### Software |
|
|
|
- Fine-tuning was performed with the [hiyouga/LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) framework.
|
|
|
## Citation |
|
|
|
None.
|
|
|
## Glossary |
|
|
|
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> |
|
None.
|
|
|
## More Information |
|
|
|
### Compute Resources

Although we prepared many datasets for the legal domain of the Republic of China (Taiwan), limited compute meant that **not all of them could be included in training** (yes, we trained on only a subset, keeping what we judged to be the most fundamental legal texts), so the model has not yet reached its best possible performance. The current checkpoint therefore reflects a resource-constrained run. If you would like to sponsor compute, please contact me; I believe that fine-tuning on the remaining prepared but unused legal corpora would let the model reach top performance in the Traditional Chinese legal domain.
|
|
|
### Ongoing Updates

Given further resources, this model will be updated from time to time; see the **Model Update History** section for the latest news.
|
|
|
## Model Card Authors |
|
|
|
[Huang Liang Hsun](https://www.linkedin.com/in/lianghsunhuang) |
|
|
|
## Model Card Contact |
|
|
|
[Huang Liang Hsun](https://www.linkedin.com/in/lianghsunhuang) |
|
|
|
### Framework versions |
|
|
|
- Transformers 4.45.2 |
|
- Pytorch 2.4.1+cu121 |
|
- Datasets 2.21.0 |
|
- Tokenizers 0.20.0 |