Saxo committed
Commit faa44fb · verified · 1 Parent(s): 0e231e1

Update README.md

Files changed (1):
  1. README.md +61 -37
README.md CHANGED
@@ -1,47 +1,71 @@
  ---
- base_model:
- - pankajmathur/orca_mini_v8_0_70b
- - huihui-ai/Llama-3.3-70B-Instruct-abliterated
  library_name: transformers
- tags:
- - mergekit
- - merge
-
  ---
- # Linkbricks-Horizon-AI-Avengers-V1-70B
-
- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
- ## Merge Details
- ### Merge Method
-
- This model was merged using the SLERP merge method.
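SLERP (spherical linear interpolation) blends two checkpoints by interpolating each pair of weight tensors along the arc between them rather than along a straight line. A simplified numpy sketch of the idea (illustrative only; mergekit's actual implementation handles more edge cases):

```python
# Simplified SLERP between two weight tensors; not mergekit's exact code.
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Interpolate from a (t=0) to b (t=1) along the arc between them."""
    a_dir = a / (np.linalg.norm(a) + eps)   # directions only, for the angle
    b_dir = b / (np.linalg.norm(b) + eps)
    dot = np.clip(np.sum(a_dir * b_dir), -1.0, 1.0)
    theta = np.arccos(dot)                  # angle between the two tensors
    if theta < eps:                         # nearly parallel: plain lerp
        return (1.0 - t) * a + t * b
    return (np.sin((1.0 - t) * theta) * a + np.sin(t * theta) * b) / np.sin(theta)
```

In the config below, `t` is the interpolation weight, varied by layer depth for the `self_attn` and `mlp` tensors, with 0.8 as the default for everything else.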
 
 
- ### Models Merged
-
- The following models were included in the merge:
- * [pankajmathur/orca_mini_v8_0_70b](https://huggingface.co/pankajmathur/orca_mini_v8_0_70b)
- * [huihui-ai/Llama-3.3-70B-Instruct-abliterated](https://huggingface.co/huihui-ai/Llama-3.3-70B-Instruct-abliterated)
-
- ### Configuration
-
- The following YAML configuration was used to produce this model:
-
- ```yaml
- base_model: pankajmathur/orca_mini_v8_0_70b
- dtype: bfloat16
- merge_method: slerp
- parameters:
-   t:
-     - filter: self_attn
-       value: [0.0, 0.5, 0.3, 0.7, 1.0]
-     - filter: mlp
-       value: [1.0, 0.5, 0.7, 0.3, 0.0]
-     - value: 0.8
- slices:
- - sources:
-   - layer_range: [0, 80]
-     model: pankajmathur/orca_mini_v8_0_70b
-   - layer_range: [0, 80]
-     model: huihui-ai/Llama-3.3-70B-Instruct-abliterated
- ```
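For reference, a config like this is executed with mergekit; below is a sketch using its Python API (paths and options are illustrative, not the author's actual setup; `mergekit-yaml` is the equivalent CLI):

```python
# Sketch: running the SLERP config above with mergekit's Python API.
# Equivalent CLI: mergekit-yaml slerp-config.yaml ./merged-model --cuda
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_YML = "slerp-config.yaml"   # the YAML above, saved to disk (illustrative path)
OUTPUT_PATH = "./merged-model"     # where merged weights are written (illustrative path)

with open(CONFIG_YML, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    OUTPUT_PATH,
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU when available
        copy_tokenizer=True,             # carry the base model's tokenizer over
    ),
)
```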
 
  ---
  library_name: transformers
+ license: apache-2.0
+ base_model: meta-llama/Llama-3.3-70B-Instruct
+ datasets:
+ - Saxo/ko_cn_translation_tech_social_science_linkbricks_single_dataset
+ - Saxo/ko_jp_translation_tech_social_science_linkbricks_single_dataset
+ - Saxo/en_ko_translation_tech_science_linkbricks_single_dataset_with_prompt_text_huggingface
+ - Saxo/en_ko_translation_social_science_linkbricks_single_dataset_with_prompt_text_huggingface
+ - Saxo/ko_aspect_sentiment_sns_mall_sentiment_linkbricks_single_dataset_with_prompt_text_huggingface
+ - Saxo/ko_summarization_linkbricks_single_dataset_with_prompt_text_huggingface
+ - Saxo/OpenOrca_cleaned_kor_linkbricks_single_dataset_with_prompt_text_huggingface
+ - Saxo/ko_government_qa_total_linkbricks_single_dataset_with_prompt_text_huggingface_sampled
+ - Saxo/ko-news-corpus-1
+ - Saxo/ko-news-corpus-2
+ - Saxo/ko-news-corpus-3
+ - Saxo/ko-news-corpus-4
+ - Saxo/ko-news-corpus-5
+ - Saxo/ko-news-corpus-6
+ - Saxo/ko-news-corpus-7
+ - Saxo/ko-news-corpus-8
+ - Saxo/ko-news-corpus-9
+ - maywell/ko_Ultrafeedback_binarized
+ - youjunhyeok/ko-orca-pair-and-ultrafeedback-dpo
+ - lilacai/glaive-function-calling-v2-sharegpt
+ - kuotient/gsm8k-ko
+ language:
+ - ko
+ - en
+ - ja
+ - zh
+ pipeline_tag: text-generation
  ---
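The fine-tuning datasets listed in the metadata are public Hub datasets; any of them can be inspected with the `datasets` library (a sketch; splits and columns vary per dataset):

```python
# Sketch: loading one of the listed fine-tuning datasets from the Hugging Face Hub.
from datasets import load_dataset

ds = load_dataset("kuotient/gsm8k-ko")  # Korean GSM8K, from the metadata above
print(ds)                               # shows available splits and features
```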
 
 
 
+ # Model Card for Model ID

+ <div align="center">
+ <img src="http://www.linkbricks.com/wp-content/uploads/2024/11/fulllogo.png" />
+ </div>
+ A multilingual-boosted language model finetuned by Yunsung Ji (Saxo), a data scientist and director at Linkbricks, a company specializing in AI and big data analytics.<br>
+ Starting from the meta-llama/Llama-3.3-70B-Instruct base model, about 35% of the parameters were trained through an SFT->DPO->ORPO->MERGE pipeline on 8x H100-80G GPUs.<br>
+ Building on 80 million news and wiki documents from various language regions, it was trained with task-specific Japanese-Korean-Chinese-English cross-training data plus math and logical-reasoning data, so it handles cross-lingual augmentation across Korean, Japanese, Chinese, and English as well as complex logic and math problems.<br>
+ -The tokenizer is the base model's, with no vocabulary expansion<br>
+ -Strengthened for high-dimensional analysis of customer reviews and social posts, as well as coding, writing, math, and logical reasoning<br>
+ -Supports Function Calling and Tool Calling<br>
+ -Trained with DeepSpeed Stage 3, rsLoRA, and the BAdam Layer Mode<br>
+ -"transformers_version": "4.46.3"<br>
+ <br><br>
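A quick-start sketch for loading the model with transformers (the repo id below is a placeholder for this model's actual Hub id, and generation settings are illustrative):

```python
# Sketch: loading the model for text generation with Hugging Face transformers.
# "Saxo/<this-model>" is a placeholder; substitute the actual Hub repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Saxo/<this-model>"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # a 70B model needs multiple GPUs or offloading
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful multilingual assistant."},
    {"role": "user", "content": "한국어로 자기소개를 해줘."},  # "Introduce yourself in Korean."
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```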
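Since the card advertises Function/Tool Calling, here is a sketch of tool use through the chat template (the `tools=` argument requires transformers >= 4.42, within the pinned 4.46.3; the tool itself is a made-up example, and `tokenizer`/`model` are as loaded above):

```python
# Sketch: tool calling via the chat template; the function below is illustrative.
# transformers derives a JSON schema from the type hints and docstring.
def get_current_temperature(location: str) -> float:
    """Get the current temperature at a location.

    Args:
        location: The city to get the temperature for, e.g. "Seoul"
    """
    return 22.0  # stub; a real tool would call a weather API

messages = [{"role": "user", "content": "What's the temperature in Seoul right now?"}]
inputs = tokenizer.apply_chat_template(
    messages,
    tools=[get_current_temperature],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Expect a JSON tool call naming get_current_temperature with a location argument.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```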
+ <a href="https://www.linkbricks.com">www.linkbricks.com</a>, <a href="https://www.linkbricks.vc">www.linkbricks.vc</a>