DEVCamiloSepulveda committed
Commit f59ef75 · verified · 1 Parent(s): c1c0469

Upload folder using huggingface_hub

README.md ADDED
@@ -0,0 +1,87 @@
---
license: llama3.2
language:
- en
base_model: meta-llama/Llama-3.2-1B
pipeline_tag: text-classification
library_name: peft
tags:
- regression
- story-point-estimation
- software-engineering
datasets:
- mulestudio
metrics:
- mae
- mdae
model-index:
- name: llama-3.2-1b-story-point-estimation
  results:
  - task:
      type: regression
      name: Story Point Estimation
    dataset:
      name: mulestudio Dataset
      type: mulestudio
      split: test
    metrics:
    - type: mae
      value: 3.877
      name: Mean Absolute Error (MAE)
    - type: mdae
      value: 2.781
      name: Median Absolute Error (MdAE)
---
# LLAMA 3 Story Point Estimator - mulestudio

This model is fine-tuned on issue descriptions from the mulestudio project and evaluated on mulestudio for story point estimation.

## Model Details
- Base Model: LLAMA 3.2 1B
- Training Project: mulestudio
- Test Project: mulestudio
- Task: Story Point Estimation (Regression)
- Architecture: PEFT (LoRA)
- Tokenizer: SentencePiece

- Input: Issue titles
- Output: Story point estimate (continuous value)

## Usage
```python
import torch
from transformers import AutoModelForSequenceClassification, XLNetTokenizer
from peft import PeftConfig, PeftModel

# Load the PEFT (LoRA) adapter config
config = PeftConfig.from_pretrained("DEVCamiloSepulveda/7-LLAMA3SP-mulestudio")

# Load tokenizer and base model
tokenizer = XLNetTokenizer('spm_tokenizer.model', padding_side='right')  # SentencePiece model shipped with this repository
base_model = AutoModelForSequenceClassification.from_pretrained(
    config.base_model_name_or_path,
    num_labels=1,
    torch_dtype=torch.float16,
    device_map='auto'
)
model = PeftModel.from_pretrained(base_model, "DEVCamiloSepulveda/7-LLAMA3SP-mulestudio")

# Prepare input text
text = "Your issue description here"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=20, padding="max_length")
inputs = {k: v.to(base_model.device) for k, v in inputs.items()}

# Get prediction (the single regression logit is the estimated story points)
with torch.no_grad():
    outputs = model(**inputs)
story_points = outputs.logits.item()
```
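The snippet above reads `spm_tokenizer.model` from the local working directory. If you are not running inside a clone of this repository, one way to fetch the file first is shown below (a minimal sketch using `huggingface_hub`; the variable name is illustrative):

```python
from huggingface_hub import hf_hub_download

# Download the SentencePiece model file shipped in this repository
spm_path = hf_hub_download(
    repo_id="DEVCamiloSepulveda/7-LLAMA3SP-mulestudio",
    filename="spm_tokenizer.model",
)
# Then: tokenizer = XLNetTokenizer(spm_path, padding_side='right')
```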

## Training Details
- Fine-tuning method: LoRA (Low-Rank Adaptation)
- Sequence length: 20 tokens
- Best training epoch: 0 / 20 epochs
- Batch size: 32
- Training time: 15.513 seconds
- Mean Absolute Error (MAE): 3.877
- Median Absolute Error (MdAE): 2.781
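The two reported metrics are plain absolute-error aggregates over the test split. A minimal NumPy sketch of how they can be computed (illustrative, not the original evaluation script):

```python
import numpy as np

def mean_absolute_error(y_true, y_pred):
    # MAE: average absolute difference between true and predicted story points
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def median_absolute_error(y_true, y_pred):
    # MdAE: median absolute difference, less sensitive to outlier issues
    return float(np.median(np.abs(np.asarray(y_true) - np.asarray(y_pred))))
```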
### Framework versions

- PEFT 0.14.0
adapter_config.json ADDED
@@ -0,0 +1,37 @@
{
  "alpha_pattern": {},
  "auto_mapping": null,
  "base_model_name_or_path": "meta-llama/Llama-3.2-1B",
  "bias": "none",
  "eva_config": null,
  "exclude_modules": null,
  "fan_in_fan_out": false,
  "inference_mode": true,
  "init_lora_weights": true,
  "layer_replication": null,
  "layers_pattern": null,
  "layers_to_transform": null,
  "loftq_config": {},
  "lora_alpha": 16,
  "lora_bias": false,
  "lora_dropout": 0.1,
  "megatron_config": null,
  "megatron_core": "megatron.core",
  "modules_to_save": [
    "classifier",
    "score"
  ],
  "peft_type": "LORA",
  "r": 8,
  "rank_pattern": {},
  "revision": null,
  "target_modules": [
    "v_proj",
    "k_proj",
    "q_proj",
    "o_proj"
  ],
  "task_type": "SEQ_CLS",
  "use_dora": false,
  "use_rslora": false
}
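For readers who want to set up a comparable adapter themselves, the JSON above corresponds roughly to the following `peft.LoraConfig` (a sketch under the assumption of PEFT 0.14.x; it is not the original training script, and fields that are pure metadata are omitted):

```python
from peft import LoraConfig, TaskType

# Roughly mirrors adapter_config.json: rank 8, alpha 16, dropout 0.1,
# LoRA on the attention projections, classification/regression head kept trainable
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    modules_to_save=["classifier", "score"],
)
```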
adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0d3274ed40bb190c27e2b791e634b5d6f87df914c8991620b347becf7be4c1bb
size 6840816
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5b3a4bb17f63f08b5d181796bfd9177ccaac773b14f55eb7a8d067c0d26dc488
size 1560270490
spm_tokenizer.model ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:01efe174cfad718fe7365005a35085426f618e57a1a1a5a92d36465f448087d6
size 917083
spm_tokenizer.vocab ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff