KoichiYasuoka committed
Commit 9309bc0
1 Parent(s): 6633861

model improved

Files changed (3)
  1. README.md +1 -1
  2. maker.py +3 -3
  3. pytorch_model.bin +1 -1
README.md CHANGED
@@ -18,7 +18,7 @@ widget:
 
 ## Model Description
 
-This is a RoBERTa model pre-trained on Thai Wikipedia texts for POS-tagging and dependency-parsing (using `goeswith` for subwords), derived from [roberta-base-thai-spm](https://huggingface.co/KoichiYasuoka/roberta-base-thai-spm).
+This is a RoBERTa model pre-trained on Thai Wikipedia texts for POS-tagging and dependency-parsing (using `goeswith` for subwords), derived from [roberta-base-thai-spm-upos](https://huggingface.co/KoichiYasuoka/roberta-base-thai-spm-upos).
 
 ## How to Use
 
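The model card's own "How to Use" snippet lies outside this hunk. As a rough, unofficial illustration of what the updated checkpoint exposes (a token classifier whose labels pair a UPOS tag with a dependency relation, with `goeswith` marking non-initial subword pieces), here is a minimal sketch; the Thai example sentence and the decoding loop are illustrative, not the card's full UD parsing pipeline:

```python
# Minimal sketch (not the model card's snippet): load the ud-goeswith
# checkpoint as a token classifier and print the predicted label per piece.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

mdl = "KoichiYasuoka/roberta-base-thai-spm-ud-goeswith"
tkz = AutoTokenizer.from_pretrained(mdl)
mod = AutoModelForTokenClassification.from_pretrained(mdl)

txt = "หลายหัวดีกว่าหัวเดียว"  # example Thai sentence (roughly, "many heads are better than one")
enc = tkz(txt, return_tensors="pt")
with torch.no_grad():
    pred = mod(**enc).logits.argmax(dim=-1)[0]
for tok, p in zip(tkz.convert_ids_to_tokens(enc["input_ids"][0]), pred):
    print(tok, mod.config.id2label[int(p)])  # UPOS tag plus deprel, or goeswith
```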
maker.py CHANGED
@@ -1,5 +1,5 @@
 #! /usr/bin/python3
-src="KoichiYasuoka/roberta-base-thai-spm"
+src="KoichiYasuoka/roberta-base-thai-spm-upos"
 tgt="KoichiYasuoka/roberta-base-thai-spm-ud-goeswith"
 url="https://github.com/KoichiYasuoka/spaCy-Thai"
 import os
@@ -47,9 +47,9 @@ trainDS=UDgoeswithDataset("train.conllu",tkz)
 devDS=UDgoeswithDataset("dev.conllu",tkz)
 testDS=UDgoeswithDataset("test.conllu",tkz)
 lid=trainDS(devDS,testDS)
-cfg=AutoConfig.from_pretrained(src,num_labels=len(lid),label2id=lid,id2label={i:l for l,i in lid.items()})
+cfg=AutoConfig.from_pretrained(src,num_labels=len(lid),label2id=lid,id2label={i:l for l,i in lid.items()},ignore_mismatched_sizes=True)
 arg=TrainingArguments(num_train_epochs=3,per_device_train_batch_size=32,output_dir="/tmp",overwrite_output_dir=True,save_total_limit=2,evaluation_strategy="epoch",learning_rate=5e-05,warmup_ratio=0.1)
-trn=Trainer(args=arg,data_collator=DataCollatorForTokenClassification(tkz),model=AutoModelForTokenClassification.from_pretrained(src,config=cfg),train_dataset=trainDS,eval_dataset=devDS)
+trn=Trainer(args=arg,data_collator=DataCollatorForTokenClassification(tkz),model=AutoModelForTokenClassification.from_pretrained(src,config=cfg,ignore_mismatched_sizes=True),train_dataset=trainDS,eval_dataset=devDS)
 trn.train()
 trn.save_model(tgt)
 tkz.save_pretrained(tgt)
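A note on the `ignore_mismatched_sizes=True` additions (my reading of the change, not stated in the commit message): the new `src` checkpoint `roberta-base-thai-spm-upos` already ships a token-classification head sized for its own UPOS label set, so loading it under the larger `goeswith` label inventory would otherwise abort with a shape-mismatch error; the flag tells `from_pretrained` to discard the mismatched classifier weights and re-initialize them while keeping the encoder. A standalone sketch of that pattern, using a stand-in label set:

```python
# Standalone sketch of re-heading a token-classification checkpoint whose
# saved classifier size differs from the new label set (labels are stand-ins).
from transformers import AutoConfig, AutoModelForTokenClassification

src = "KoichiYasuoka/roberta-base-thai-spm-upos"           # has a UPOS-sized head
lid = {"NOUN|root": 0, "VERB|root": 1, "NOUN|goeswith": 2}  # hypothetical labels

cfg = AutoConfig.from_pretrained(src, num_labels=len(lid), label2id=lid,
                                 id2label={i: l for l, i in lid.items()})
# Without ignore_mismatched_sizes=True this raises a size-mismatch error,
# because the checkpoint's classifier layer has a different output dimension;
# with it, the classifier is freshly initialized and only the encoder loads.
mdl = AutoModelForTokenClassification.from_pretrained(src, config=cfg,
                                                      ignore_mismatched_sizes=True)
print(mdl.config.num_labels)  # 3
```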
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:462c5881ab1ccf3649f9a1211c4fed453fcd674df5691a6705ccb53e041d0c99
+oid sha256:a5f35c331fce3daa2bb486b7752ab8e3d7142a1989096f5915f5d7ad7a3c977f
 size 351840497