Model Card for the test version of instructionBERT for Bertology
A minimalistic instruction-tuned model built on an already well-analysed, pretrained encoder such as BERT. This lets us research Bertology with instruction-tuned models, look at the attention patterns, and investigate what happens to BERT embeddings during fine-tuning.
The training code is released in the instructionBERT repository. We used the Hugging Face API for warm-starting an encoder-decoder model with BertGeneration for this purpose.
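For orientation, such a warm start can be sketched roughly as follows. This is a minimal sketch, assuming both encoder and decoder are initialised from "bert-base-uncased"; the exact setup in the repository may differ.

from transformers import BertGenerationEncoder, BertGenerationDecoder, EncoderDecoderModel
# warm-start the encoder from the pretrained BERT checkpoint
# BERT's [CLS] token (id 101) serves as BOS and [SEP] (id 102) as EOS
encoder = BertGenerationEncoder.from_pretrained("bert-base-uncased", bos_token_id=101, eos_token_id=102)
# warm-start the decoder from the same checkpoint, adding cross-attention and a causal decoder mask
decoder = BertGenerationDecoder.from_pretrained(
    "bert-base-uncased", add_cross_attention=True, is_decoder=True, bos_token_id=101, eos_token_id=102
)
# combine both into a single seq2seq model ready for instruction fine-tuning
bert2bert = EncoderDecoderModel(encoder=encoder, decoder=decoder)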
Run the model with a longer output
from transformers import AutoTokenizer, EncoderDecoderModel
# load the fine-tuned seq2seq model and corresponding tokenizer
model_name = "Bachstelze/instructionBERT"
model = EncoderDecoderModel.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
input_text = "Write a poem about love, peace and pancake."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
# generate up to 200 new tokens for a longer output
output_ids = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(output_ids[0]))
Training parameters
- base model: "bert-base-uncased"
- trained for 1 epoch
- batch size of 16
- 20000 warm-up steps
- learning rate of 0.0001 (see the sketch below for how these values map to Hugging Face training arguments)
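As a rough illustration, not the exact released training script, these hyperparameters would map to Hugging Face Seq2SeqTrainingArguments like this; the output directory name is only a placeholder.

from transformers import Seq2SeqTrainingArguments

# hypothetical mapping of the listed hyperparameters; the actual training code may differ
training_args = Seq2SeqTrainingArguments(
    output_dir="instructionBERT-checkpoints",  # placeholder output directory
    num_train_epochs=1,
    per_device_train_batch_size=16,
    warmup_steps=20000,
    learning_rate=1e-4,
)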
Purpose of instructionBERT
InstructionBERT is intended for research purposes. The model-generated text should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing these models in their applications.