---
datasets:
- fajrikoto/id_liputan6
language:
- id
base_model:
- cahya/bert2bert-indonesian-summarization
---
# Fine-Tuned BERT2BERT Summarization Model

This model is fine-tuned from the original [BERT2BERT Indonesian Summarization](https://huggingface.co./cahya/bert2bert-indonesian-summarization) model.
## Fine-Tuning Dataset

- **Dataset**: [Liputan6_ID](https://huggingface.co./datasets/fajrikoto/id_liputan6)
- **Task**: Summarization

The model was fine-tuned on the [Liputan6_ID](https://huggingface.co./datasets/fajrikoto/id_liputan6) dataset, a collection of Indonesian news articles, and is optimized for summarizing news text in that domain.
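For reference, the dataset can be loaded with the `datasets` library. This is a minimal sketch, assuming the raw Liputan6 files have been downloaded manually as described on the dataset card; the `canonical` config name, the `data_dir` requirement, and the `clean_article`/`clean_summary` field names are taken from that card:

```python
from datasets import load_dataset

# Assumes the raw Liputan6 archive has been obtained manually
# (per the dataset card) and extracted to ./liputan6_data
dataset = load_dataset("fajrikoto/id_liputan6", "canonical", data_dir="./liputan6_data")

example = dataset["train"][0]
print(example["clean_article"][:200])  # source news article (truncated)
print(example["clean_summary"])        # reference summary
```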
## Code Sample
```python
from transformers import BertTokenizer, EncoderDecoderModel

tokenizer = BertTokenizer.from_pretrained("rowjak/bert-indonesian-news-summarization")
tokenizer.bos_token = tokenizer.cls_token
tokenizer.eos_token = tokenizer.sep_token
model = EncoderDecoderModel.from_pretrained("rowjak/bert-indonesian-news-summarization")

ARTICLE = ""  # paste the Indonesian news article to summarize here

# generate summary (beam search combined with sampling)
input_ids = tokenizer.encode(ARTICLE, return_tensors="pt")
summary_ids = model.generate(
    input_ids,
    max_length=150,
    num_beams=10,
    repetition_penalty=2.5,
    length_penalty=1.0,
    early_stopping=True,
    no_repeat_ngram_size=2,
    use_cache=True,
    do_sample=True,
    temperature=0.8,
    top_k=50,
    top_p=0.95,
)

summary_text = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(summary_text)
```
Output:

```
---
```
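Note that the sample above combines beam search (`num_beams=10`) with sampling (`do_sample=True`), so the summary varies between runs. If reproducible output is preferred, one minimal variation is to disable sampling and rely on plain beam search:

```python
# Deterministic variant: plain beam search, no sampling
summary_ids = model.generate(
    input_ids,
    max_length=150,
    num_beams=10,
    repetition_penalty=2.5,
    length_penalty=1.0,
    no_repeat_ngram_size=2,
    early_stopping=True,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```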