kevinkawchak committed
Commit
dc9482f
1 Parent(s): 8f9d494

Update README.md

Files changed (1):
  1. README.md +13 -0
README.md CHANGED
@@ -21,6 +21,19 @@ base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
  - **Dataset Identification:** Molecule-oriented Instructions
  - **Dataset Function:** description_guided_molecule_design

+ [Cover Image](https://drive.google.com/file/d/1J-spZMzLlPxkqfMrPxvtMZiD2_hfcGyr/view?usp=sharing) <br>
+
+ A 4-bit quantization of Meta-Llama-3-8B-Instruct was used to reduce training memory requirements when fine-tuning on the zjunlp/Mol-Instructions dataset (1-2). In addition, the minimum LoRA rank was used to reduce the overall size of the resulting models. Specifically, the Molecule-oriented Instructions "description guided molecule design" task was implemented to answer general questions and general biochemistry questions. General questions were answered with high accuracy, while biochemistry-related questions returned SELFIES structures with limited accuracy (an illustrative query sketch appears after the references below).
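+
+ A minimal sketch of this setup, assuming the standard Unsloth API; the exact rank and loading arguments are not stated on this card, so `r = 8` (the smallest value suggested in Unsloth's notebooks) and the other parameters below are assumptions:
+
+ ```python
+ from unsloth import FastLanguageModel
+
+ # Load the 4-bit quantized Llama-3-8B-Instruct (1) to reduce training memory.
+ model, tokenizer = FastLanguageModel.from_pretrained(
+     model_name="unsloth/llama-3-8b-Instruct-bnb-4bit",
+     max_seq_length=2048,  # assumed context window for fine-tuning
+     load_in_4bit=True,
+ )
+
+ # Attach low-rank adapters; a small rank keeps the saved adapters compact.
+ model = FastLanguageModel.get_peft_model(
+     model,
+     r=8,  # the card says the minimum rank was used; the exact value is assumed
+     target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
+                     "gate_proj", "up_proj", "down_proj"],
+     lora_alpha=16,
+     lora_dropout=0,
+     bias="none",
+     use_gradient_checkpointing="unsloth",
+     random_state=3407,
+ )
+ ```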
+
+ The notebook used PyTorch and Hugging Face libraries with Unsloth's llama-3-8b-Instruct-bnb-4bit quantized model. Training loss decreased steadily from 1.97 to 0.73 over 60 steps. Additional testing of the appropriate level of compression and of hyperparameter adjustments for accurate SELFIES chemical structure outputs remains relevant, as shown in the GitHub notebook (3). 16-bit and reduced 4-bit versions of the merged model were uploaded to Hugging Face (4-5).
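+
+ A hedged sketch of the training and export steps, assuming TRL's `SFTTrainer` and Unsloth's merged-save helpers; only `max_steps = 60` and the repository names (4-5) come from this card, while the dataset field name and the remaining hyperparameters are assumptions:
+
+ ```python
+ from trl import SFTTrainer
+ from transformers import TrainingArguments
+ from datasets import load_dataset
+
+ # description_guided_molecule_design split of Mol-Instructions (2); examples
+ # must first be formatted into a single "text" field (assumed here).
+ dataset = load_dataset(
+     "zjunlp/Mol-Instructions",
+     "Molecule-oriented Instructions",
+     split="description_guided_molecule_design",
+     trust_remote_code=True,
+ )
+
+ trainer = SFTTrainer(
+     model=model,        # the LoRA-wrapped model from the sketch above
+     tokenizer=tokenizer,
+     train_dataset=dataset,
+     dataset_text_field="text",
+     max_seq_length=2048,
+     args=TrainingArguments(
+         per_device_train_batch_size=2,  # assumed Unsloth-notebook defaults
+         gradient_accumulation_steps=4,
+         warmup_steps=5,
+         max_steps=60,   # the card reports loss 1.97 -> 0.73 over 60 steps
+         learning_rate=2e-4,
+         logging_steps=1,
+         optim="adamw_8bit",
+         output_dir="outputs",
+     ),
+ )
+ trainer.train()
+
+ # Merge the adapters and save at both precisions, matching the uploads (4-5).
+ model.save_pretrained_merged("Meta-Llama-3-8B-Instruct-LoRA-Mol16", tokenizer,
+                              save_method="merged_16bit")
+ model.save_pretrained_merged("Meta-Llama-3-8B-Instruct-LoRA-Mol04", tokenizer,
+                              save_method="merged_4bit")  # newer Unsloth may require "merged_4bit_forced"
+ ```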
+
+ References:
+ 1) unsloth: https://huggingface.co/unsloth/llama-3-8b-Instruct-bnb-4bit
+ 2) zjunlp: https://huggingface.co/datasets/zjunlp/Mol-Instructions
+ 3) GitHub: https://github.com/kevinkawchak/Medical-Quantum-Machine-Learning/blob/main/Code/Drug%20Discovery/Meta-Llama-3/Meta-Llama-3-8B-Instruct-Mol.ipynb
+ 4) Hugging Face: https://huggingface.co/kevinkawchak/Meta-Llama-3-8B-Instruct-LoRA-Mol16
+ 5) Hugging Face: https://huggingface.co/kevinkawchak/Meta-Llama-3-8B-Instruct-LoRA-Mol04
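+
+ For reference, a sketch of querying the fine-tuned model in the description-guided molecule design format; the prompt wording is illustrative rather than copied from the dataset (2):
+
+ ```python
+ from unsloth import FastLanguageModel
+
+ FastLanguageModel.for_inference(model)  # enable Unsloth's faster generation path
+
+ # Illustrative design request; the card notes responses are SELFIES strings,
+ # returned with limited accuracy for biochemistry questions.
+ messages = [{"role": "user",
+              "content": "Could you give me a molecule that matches this "
+                         "description? The molecule is a monocarboxylic acid "
+                         "that is a metabolite of caffeine."}]
+ input_ids = tokenizer.apply_chat_template(
+     messages, add_generation_prompt=True, return_tensors="pt"
+ ).to(model.device)
+
+ output = model.generate(input_ids=input_ids, max_new_tokens=256)
+ print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
+ ```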
+
  This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
  [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)