Ikeofai committed
Commit dbb9e77
1 Parent(s): dbe3485

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -9,6 +9,7 @@ base_model: mistralai/Mistral-7B-Instruct-v0.2
 model-index:
 - name: learn-python-easy-v2
   results: []
+pipeline_tag: question-answering
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -16,7 +17,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # learn-python-easy-v2
 
-This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the None dataset.
+This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on a small dataset of 205 question-and-answer pairs about the Python programming language, built for fine-tuning experimentation.
 It achieves the following results on the evaluation set:
 - Loss: 0.7009
 
@@ -26,11 +27,10 @@ More information needed
 
 ## Intended uses & limitations
 
-More information needed
+This model is intended for experimentation with fine-tuning large language models; it can be optimised for better outputs with more training examples.
 
 ## Training and evaluation data
 
-More information needed
 
 ## Training procedure
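
For readers who want to try the checkpoint described in this diff, a minimal usage sketch follows. It assumes the model is published as `Ikeofai/learn-python-easy-v2` (inferred from the committer and model name, not stated in the diff) and that the repository holds full merged weights rather than only a PEFT adapter. Despite the `question-answering` pipeline tag, an instruct-style Mistral checkpoint is normally queried through chat-formatted text generation.

```python
# Sketch only: the repo id below is an assumption inferred from the commit
# author and model name; it is not confirmed anywhere in the diff above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Ikeofai/learn-python-easy-v2"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Mistral-Instruct checkpoints expect chat-formatted prompts ([INST] ... [/INST]).
messages = [{"role": "user", "content": "What does a Python list comprehension do?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate an answer and strip the prompt tokens from the decoded output.
output_ids = model.generate(input_ids, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```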