david4096/gemma-2b-finetune-graph-genome

The model david4096/gemma-2b-finetune-graph-genome was converted to MLX format from mlx-community/quantized-gemma-2b-it using mlx-lm version 0.16.0.
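
The card does not include the conversion step itself; for reference, mlx-lm exposes a convert helper, and a minimal sketch of that step looks like the following (the output directory is only illustrative, and no quantization is applied since the source checkpoint is already quantized):

from mlx_lm import convert

# Pull the source checkpoint from the Hugging Face Hub and write MLX-format
# weights to a local directory (the path here is only an example).
convert(
    "mlx-community/quantized-gemma-2b-it",
    mlx_path="mlx_gemma-2b-finetune-graph-genome",
)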

It was fine-tuned on text from roughly 670 bioRxiv papers that contain the phrase "graph genome".
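
The card does not say which fine-tuning procedure was used. As one illustrative path, not necessarily the one taken here, mlx-lm ships a LoRA trainer that accepts a directory of JSONL files with a "text" field per record, which fits a corpus of paper text; the data path and iteration count below are placeholders:

# ./graph_genome_papers is a placeholder directory containing train.jsonl and
# valid.jsonl, with one {"text": "..."} record per line.
python -m mlx_lm.lora \
    --model mlx-community/quantized-gemma-2b-it \
    --train \
    --data ./graph_genome_papers \
    --iters 600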

Use with mlx

pip install mlx-lm

from mlx_lm import load, generate

# Load the fine-tuned model and tokenizer from the Hugging Face Hub.
model, tokenizer = load("david4096/gemma-2b-finetune-graph-genome-lg")

# Generate a completion; verbose=True prints the output and generation stats.
response = generate(model, tokenizer, prompt="hello", verbose=True)
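
Because the base checkpoint is an instruction-tuned Gemma model, prompts generally behave better when wrapped in the tokenizer's chat template. The snippet below is a sketch assuming the tokenizer ships a chat template; the question and max_tokens value are only examples:

from mlx_lm import load, generate

model, tokenizer = load("david4096/gemma-2b-finetune-graph-genome-lg")

# Wrap a domain question in the chat template before generating.
messages = [{"role": "user", "content": "What is a graph genome?"}]
if tokenizer.chat_template is not None:
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
else:
    prompt = messages[0]["content"]

response = generate(model, tokenizer, prompt=prompt, verbose=True, max_tokens=256)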