
ChatML format. The dataset contains about 1,400 entries ranging from 8k to 16k tokens, split three ways between long-context multi-turn chat, long-context summarization, and writing analysis. Full fine-tune using a linear RoPE scale factor of 2.0, trained for five epochs with a learning rate of 1e-5.
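The ChatML turn format mentioned above wraps each message in `<|im_start|>`/`<|im_end|>` markers. A minimal formatting sketch (the role names and example messages are illustrative, not taken from the training data):

```python
def to_chatml(messages):
    """Format a list of {role, content} dicts as a ChatML prompt string."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    # A trailing assistant header cues the model to generate its reply.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the document below."},
])
print(prompt)
```

With Hugging Face transformers, a linear RoPE scale of the kind described above is typically expressed in the model config as `rope_scaling={"type": "linear", "factor": 2.0}`, which stretches the base context window by the given factor.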


Model tree for openerotica/Llama-3-lima-nsfw-16k-test

Quantizations: 1 model