Seongyun committed
Commit 02ec638
1 Parent(s): e29438d

Update README.md

Files changed (1)
  1. README.md +4 -4
README.md CHANGED
@@ -29,16 +29,16 @@ pipeline_tag: text-generation
 Janus is a model trained using [Mistral-7B-v0.2](https://huggingface.co/mistral-community/Mistral-7B-v0.2) as its base model. Janus has been trained on [Multifaceted Collection](https://huggingface.co/datasets/kaist-ai/Multifaceted-Collection-SFT), a preference dataset containing 196k unique system messages for aligning LLMs to diverse human preferences. Janus not only excels at generating personalized responses that cater to various human preferences but is also adept at producing responses that are generally preferred for being helpful and harmless.
 
 # Model Details
-Janus-ORPO is a model created by applying ORPO to Mistral-7B-v0.2 using the Multifaceted-Collection-ORPO.
+Janus-ORPO-7B is a model created by applying ORPO to [Mistral-7B-v0.2](https://huggingface.co/mistral-community/Mistral-7B-v0.2) using the [Multifaceted-Collection-ORPO](https://huggingface.co/datasets/kaist-ai/Multifaceted-Collection-ORPO).
 ## Model Description
 
 - **Model type:** Language model
 - **Language(s) (NLP):** English
 - **License:** Apache 2.0
-- **Related Models:** [Janus-66k-7B]() [Janus-DPO-7B](), [Janus-7B](), [Janus-RM-7B]()
-- **Training Datasets**: [Multifaceted-Collection-SFT](https://huggingface.co/datasets/kaist-ai/Multifaceted-Collection-SFT)
+- **Related Models:** [Janus-DPO-7B](https://huggingface.co/kaist-ai/janus-dpo-7b), [Janus-7B](https://huggingface.co/kaist-ai/janus-7b), [Janus-RM-7B](https://huggingface.co/kaist-ai/janus-rm-7b)
+- **Training Datasets**: [Multifaceted-Collection-ORPO](https://huggingface.co/datasets/kaist-ai/Multifaceted-Collection-ORPO)
 - **Resources for more information:**
-  - [Research paper]()
+  - [Research paper](https://arxiv.org/abs/2405.17977)
   - [GitHub Repo](https://github.com/kaistAI/Janus)
 
 # Usage
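
For context on the new Model Details line: ORPO fine-tunes the base model directly on preference pairs (chosen vs. rejected responses), with no separate reward model or reference model. Below is a minimal sketch of that recipe using TRL's `ORPOTrainer`; the dataset column names, hyperparameters, and single-process setup are illustrative assumptions, not the actual Janus training configuration (see the GitHub repo for that).

```python
# Minimal ORPO sketch with TRL, assuming the preference dataset exposes
# "prompt", "chosen", and "rejected" columns (an assumption, not confirmed here).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "mistral-community/Mistral-7B-v0.2"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

dataset = load_dataset("kaist-ai/Multifaceted-Collection-ORPO", split="train")

args = ORPOConfig(
    output_dir="janus-orpo-7b",
    beta=0.1,                        # ORPO odds-ratio weight; illustrative value
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    learning_rate=5e-6,
    num_train_epochs=1,
    max_length=4096,
)

trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    tokenizer=tokenizer,             # named processing_class in newer TRL releases
)
trainer.train()
```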
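
The hunk ends at the # Usage heading, whose body lies outside this diff. As a rough illustration of the system-message personalization described above, here is a minimal loading-and-generation sketch; the repo id `kaist-ai/janus-orpo-7b` mirrors the related-model links and is an assumption, and the model card's actual prompt format takes precedence over the generic chat-template flow used here.

```python
# Minimal sketch, not the official usage snippet: load a Janus checkpoint and
# condition generation on a personalized system message.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kaist-ai/janus-orpo-7b"  # assumed id, inferred from the related-model links
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    # Janus is trained to follow fine-grained system messages like this one.
    {"role": "system", "content": "You are an assistant who answers concisely, "
                                  "prefers concrete examples, and avoids jargon."},
    {"role": "user", "content": "What does preference alignment mean for an LLM?"},
]

# Assumes the tokenizer ships a chat template that accepts a system role;
# otherwise build the prompt by hand following the model card.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```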