Kexer models

Kexer models are a collection of open-source generative text models fine-tuned on the Kotlin Exercises dataset. This repository contains the fine-tuned Deepseek-coder-1.3B model in the Hugging Face Transformers format.

How to use

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load pre-trained model and tokenizer
model_name = 'JetBrains/deepseek-coder-1.3B-kexer'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).to('cuda')

# Create and encode input
input_text = """\
This function takes an integer n and returns factorial of a number:
fun factorial(n: Int): Int {\
"""
input_ids = tokenizer.encode(
    input_text, return_tensors='pt'
).to('cuda')

# Generate
output = model.generate(
    input_ids, max_length=60, num_return_sequences=1, 
    early_stopping=True, pad_token_id=tokenizer.eos_token_id,
)

# Decode output
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)

As with the base model, we can use fill-in-the-middle (FIM) generation. To do this, the following prompt format must be used:

'<|fim▁begin|>' + prefix + '<|fim▁hole|>' + suffix + '<|fim▁end|>'
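
For example, reusing the tokenizer and model loaded above, we can ask the model to fill in a function body. This is a minimal sketch: the special tokens come from the base DeepSeek-coder tokenizer, and the Kotlin function is purely illustrative.

# Build a FIM prompt: the model generates the code that belongs
# at the <|fim▁hole|> position, between the prefix and the suffix.
prefix = 'fun add(a: Int, b: Int): Int {\n'
suffix = '\n}'
fim_input = '<|fim▁begin|>' + prefix + '<|fim▁hole|>' + suffix + '<|fim▁end|>'

input_ids = tokenizer.encode(fim_input, return_tensors='pt').to('cuda')
output = model.generate(
    input_ids, max_length=60, pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))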

Training setup

The model was trained on one A100 GPU with the following hyperparameters:

Hyperparameter      Value
warmup              10%
max_lr              1e-4
scheduler           linear
total_batch_size    256 (~130K tokens per step)
num_epochs          4
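
The exact training code is not public until the report is released. As a rough illustration only, here is how the table above could map onto a standard Hugging Face TrainingArguments configuration; the per-device batch size and gradient-accumulation split are assumptions, only their product (256) comes from the table.

from transformers import TrainingArguments

# Illustrative mapping of the hyperparameter table onto Trainer arguments.
training_args = TrainingArguments(
    output_dir='kexer-finetune',
    learning_rate=1e-4,             # max_lr
    lr_scheduler_type='linear',     # scheduler
    warmup_ratio=0.1,               # 10% warmup
    num_train_epochs=4,             # num_epochs
    per_device_train_batch_size=32, # assumption: 32 * 8 accumulation steps
    gradient_accumulation_steps=8,  # gives the total batch size of 256
    bf16=True,                      # the released weights are BF16
)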

More details about fine-tuning can be found in the technical report (coming soon!).

Fine-tuning data

For tuning this model, we used 15K examples from the synthetically generated Kotlin Exercises dataset. Every example follows the HumanEval format. In total, the dataset contains about 3.5M tokens.
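
To inspect the data yourself, you can load it with the datasets library. This is a sketch that assumes the dataset is published on the Hugging Face Hub under JetBrains/KExercises; adjust the ID if it differs.

from datasets import load_dataset

# Assumed Hub ID for the Kotlin Exercises dataset.
dataset = load_dataset('JetBrains/KExercises', split='train')
print(dataset)     # ~15K examples, ~3.5M tokens in total
print(dataset[0])  # each example follows the HumanEval format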

Evaluation

For evaluation, we used the Kotlin HumanEval dataset, which contains all 161 tasks from HumanEval translated into Kotlin by human experts. You can find more details about the pre-processing necessary to obtain our results, including the code for running the evaluation, on the dataset's page.
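
For reference, pass rates of this kind are usually computed with the unbiased pass@k estimator from the original HumanEval paper. Below is a minimal sketch of that estimator, not necessarily the exact evaluation code used here.

import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    # Unbiased pass@k estimator from the HumanEval paper:
    # n = samples generated per task, c = samples that passed the tests.
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# The reported pass rate is the average pass@1 over all 161 tasks,
# expressed as a percentage.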

Here are the results of our evaluation:

Model name                  Kotlin HumanEval pass rate (%)
Deepseek-coder-1.3B         26.71
Deepseek-coder-1.3B-Kexer   36.65

Ethical considerations and limitations

Deepseek-coder-1.3B-Kexer is a new technology that carries risks with use. The testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Deepseek-coder-1.3B-Kexer's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. The model was fine-tuned on a specific data format (Kotlin tasks), and deviation from this format can also lead to inaccurate or undesirable responses to user queries. Therefore, before deploying any applications of Deepseek-coder-1.3B-Kexer, developers should perform safety testing and tuning tailored to their specific applications of the model.
