a-F1 committed on
Commit c363161
1 Parent(s): bcf7a74

Update README.md

Files changed (1): README.md +31 -13

README.md CHANGED
@@ -1,13 +1,34 @@
---
license: mit
+ datasets:
+ - locuslab/TOFU
+ language:
+ - en
+ base_model:
+ - NousResearch/Llama-2-7b-chat-hf
+ pipeline_tag: text-generation
+ library_name: transformers
+ tags:
+ - unlearn
+ - machine-unlearning
+ - llm-unlearning
+ - data-privacy
+ - large-language-models
+ - trustworthy-ai
+ - trustworthy-machine-learning
+ - language-model
---
 
- # LLaMA-2-chat 7B fine tunes on TOFU (Origin Model)
+ # Origin Model on Task "TOFU"

## Model Details
- - **Base Model**: LLaMA-2-chat 7B
- - **Training**: Fine-tuned on TOFU dataset
+ - **Training**:
+   - **Task**: [🤗datasets/locuslab/TOFU](https://huggingface.co/datasets/locuslab/TOFU)
+   - **Method**: Fine-tune
+ - **Base Model**: [🤗NousResearch/Llama-2-7b-chat-hf](https://huggingface.co/NousResearch/Llama-2-7b-chat-hf)
+ - **Code Base**: [github.com/OPTML-Group/Unlearn-Simple](https://github.com/OPTML-Group/Unlearn-Simple)
+ - **Research Paper**: ["Simplicity Prevails: Rethinking Negative Preference Optimization for LLM Unlearning"](https://arxiv.org/abs/2410.07163)

## Loading the Model
 
@@ -22,17 +43,14 @@ model = AutoModelForCausalLM.from_pretrained("OPTML-Group/TOFU-origin-Llama-2-7b

If you use this model in your research, please cite:
```
- @misc{fan2024simplicityprevailsrethinkingnegative,
-   title={Simplicity Prevails: Rethinking Negative Preference Optimization for LLM Unlearning},
-   author={Chongyu Fan and Jiancheng Liu and Licong Lin and Jinghan Jia and Ruiqi Zhang and Song Mei and Sijia Liu},
-   year={2024},
-   eprint={2410.07163},
-   archivePrefix={arXiv},
-   primaryClass={cs.CL},
-   url={https://arxiv.org/abs/2410.07163},
+ @article{fan2024simplicity,
+   title={Simplicity Prevails: Rethinking Negative Preference Optimization for LLM Unlearning},
+   author={Fan, Chongyu and Liu, Jiancheng and Lin, Licong and Jia, Jinghan and Zhang, Ruiqi and Mei, Song and Liu, Sijia},
+   journal={arXiv preprint arXiv:2410.07163},
+   year={2024}
}
```

- ## Contact
+ ## Reporting Issues

- For questions or issues regarding this model, please contact chongyu.fan93@gmail.com.
+ Report issues with the model at [github.com/OPTML-Group/Unlearn-Simple](https://github.com/OPTML-Group/Unlearn-Simple).
 
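The diff elides the body of the "Loading the Model" section; only a truncated `from_pretrained` call survives in the second hunk header. For reference, a minimal sketch of the standard transformers loading pattern: the repo id is kept exactly as truncated in the hunk (verify the full id on the model page), and the prompt is a hypothetical example.

```
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id as it appears (truncated) in the hunk header above;
# check the model page for the complete id before running.
model_id = "OPTML-Group/TOFU-origin-Llama-2-7b"

# Load tokenizer and model weights from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The base model is Llama-2-chat, so the [INST] chat format applies.
prompt = "[INST] What is the TOFU dataset? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```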