# tinyllama-tarot-v1
This model is a fine-tuned version of TinyLlama/TinyLlama-1.1B-Chat-v1.0.
## Model description
This is a language model that generates tarot-style readings. It was trained to answer questions on topics such as love, career, and general life, using tarot cards as the basis of its predictions: given a selection of cards, the model produces a reading for the question asked. The cards themselves are available in the accompanying tarot dataset.
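A minimal inference sketch is shown below. It assumes the adapter repository id `barissglc/tinyllama-tarot-v1` on top of the base model, and a Zephyr-style chat prompt (the base model's template); the exact prompt format used during fine-tuning is not documented in this card, so `build_prompt` is a hypothetical helper. The heavyweight part is gated behind an environment variable because it downloads ~1.1B-parameter weights.

```python
import os

BASE = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
ADAPTER = "barissglc/tinyllama-tarot-v1"

def build_prompt(question: str, cards: list) -> str:
    # Hypothetical prompt shape following the base model's Zephyr-style
    # chat template; the format actually used in training may differ.
    card_list = ", ".join(cards)
    return f"<|user|>\nCards: {card_list}. {question}</s>\n<|assistant|>\n"

if os.environ.get("RUN_TAROT_DEMO"):  # heavyweight: downloads the model weights
    import torch
    from peft import PeftModel
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(BASE)
    model = AutoModelForCausalLM.from_pretrained(
        BASE, torch_dtype=torch.float16, device_map="auto"
    )
    # Attach the fine-tuned LoRA adapter to the frozen base model.
    model = PeftModel.from_pretrained(model, ADAPTER)

    prompt = build_prompt(
        "What does my career hold?", ["The Sun", "The Tower", "Ace of Cups"]
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
    # Decode only the newly generated tokens, skipping the prompt.
    print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```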
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
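The hyperparameters above can be collected into a plain dictionary, using the field names these values would take in `transformers.TrainingArguments` (a sketch for reference, not the exact training script). Note that the reported total train batch size is not an independent setting: it is the product of the per-device batch size and the gradient accumulation steps.

```python
# Hyperparameters as reported in the card; key names follow
# transformers.TrainingArguments conventions (assumed mapping).
hparams = {
    "learning_rate": 2e-4,
    "per_device_train_batch_size": 4,
    "per_device_eval_batch_size": 8,
    "seed": 42,
    "gradient_accumulation_steps": 8,
    "lr_scheduler_type": "cosine",
    "max_steps": 250,
    "fp16": True,  # "Native AMP" mixed-precision training
}

# Effective (total) train batch size = per-device batch * accumulation steps.
effective_batch = (
    hparams["per_device_train_batch_size"] * hparams["gradient_accumulation_steps"]
)
print(effective_batch)  # 32, matching total_train_batch_size above
```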
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2