Tulu-MathLingo-8B
The Tulu-MathLingo-8B model is a fine-tuned version of meta-llama/Llama-3.1-8B, optimized for solving mathematical word problems and reasoning tasks in English and the Tulu language. The model integrates advanced language understanding and reasoning capabilities with a focus on providing solutions to math-related queries.
| File Name | Size | Description | Upload Status |
|---|---|---|---|
| .gitattributes | 1.57 kB | Configures LFS tracking for large files. | Updated |
| README.md | 292 Bytes | Basic details about the uploaded model. | Updated |
| config.json | 988 Bytes | Contains model architecture and metadata. | Uploaded |
| generation_config.json | 241 Bytes | Parameters for text generation (e.g., length, temperature). | Uploaded |
| model-00001-of-00004.safetensors | 4.98 GB | Part 1 of model weights. | Uploaded (LFS) |
| model-00002-of-00004.safetensors | 5 GB | Part 2 of model weights. | Uploaded (LFS) |
| model-00003-of-00004.safetensors | 4.92 GB | Part 3 of model weights. | Uploaded (LFS) |
| model-00004-of-00004.safetensors | 1.17 GB | Part 4 of model weights. | Uploaded (LFS) |
| model.safetensors.index.json | 25.4 kB | Index file for multi-part model weights. | Uploaded |
| special_tokens_map.json | 462 Bytes | Maps special tokens (e.g., `<PAD>`, `<EOS>`). | Uploaded |
| tokenizer.json | 17.2 MB | Full tokenizer configuration. | Uploaded (LFS) |
| tokenizer_config.json | 57.6 kB | Metadata for tokenizer usage. | Uploaded |
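Because the weights are sharded into four files, `model.safetensors.index.json` records which shard stores each tensor. A minimal sketch of inspecting it (assumes the `huggingface_hub` package is installed; only the small index file is downloaded, not the multi-GB shards):

```python
import json
from collections import Counter

from huggingface_hub import hf_hub_download

# Fetch only the 25.4 kB index file
index_path = hf_hub_download(
    repo_id="prithivMLmods/Tulu-MathLingo-8B",
    filename="model.safetensors.index.json",
)

with open(index_path) as f:
    index = json.load(f)

# weight_map maps each tensor name to the shard file that stores it
shard_counts = Counter(index["weight_map"].values())
for shard, n_tensors in sorted(shard_counts.items()):
    print(f"{shard}: {n_tensors} tensors")
```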
- Multilingual Math Reasoning: solves mathematical word problems and reasoning tasks in English and the Tulu language.
- Text Generation: produces step-by-step answers to math-related queries.
- Fine-Tuned Specializations: adapted from meta-llama/Llama-3.1-8B specifically for mathematical reasoning.
- Special Token Mapping: handles special tokens such as `<PAD>` and `<EOS>` effectively (see the snippet after this list).
- Secure and Efficient Storage: weights are stored in the safetensors format.
- Large Parameter Size: 8 billion parameters, split across four weight shards.
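A quick way to confirm how the tokens from `special_tokens_map.json` are exposed (a minimal sketch; the exact token strings depend on the uploaded tokenizer config):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("prithivMLmods/Tulu-MathLingo-8B")

# Special tokens declared in special_tokens_map.json
print(tokenizer.special_tokens_map)
print("EOS:", tokenizer.eos_token, tokenizer.eos_token_id)
print("PAD:", tokenizer.pad_token, tokenizer.pad_token_id)
```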
- Base Model: meta-llama/Llama-3.1-8B
- Fine-Tuned: prithivMLmods/Tulu-MathLingo-8B
- Dataset:
- Model Size: 8 billion parameters (verified in the sketch below)
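The parameter count can be checked without downloading the roughly 16 GB of weights by instantiating the architecture on PyTorch's meta device (a sketch; requires torch 2.0 or later):

```python
import torch
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("prithivMLmods/Tulu-MathLingo-8B")

# Build the architecture on the meta device: no weights are downloaded or allocated
with torch.device("meta"):
    meta_model = AutoModelForCausalLM.from_config(config)

print(f"{meta_model.num_parameters() / 1e9:.2f}B parameters")
```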
- Mathematical Word Problems: solving and explaining word problems like the train-speed example below.
- Conversational AI for Math: powering math tutoring assistants (see the chat-template sketch after this list).
- Multilingual Support: answering math queries in both English and Tulu.
- Education Tools: supporting learning platforms that need worked solutions.
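For the conversational use case, a chat-style prompt can be built with the tokenizer's chat template (a sketch that assumes this fine-tune kept a Llama-3.1-style chat template; the messages are illustrative):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("prithivMLmods/Tulu-MathLingo-8B")

messages = [
    {"role": "system", "content": "You are a patient math tutor."},
    {"role": "user", "content": "A rectangle is 3 cm by 5 cm. What is its area?"},
]

# Render the conversation into the model's prompt format
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```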
Run the model with the transformers library:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Tulu-MathLingo-8B"

# Load the tokenizer and the model in half precision (fp16)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)

# Pose a math word problem
query = "If a train travels 60 miles in 2 hours, what is its average speed?"
inputs = tokenizer(query, return_tensors="pt")

# Generate and decode the answer
outputs = model.generate(**inputs, max_length=100)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print("Answer:", response)
```
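The defaults shipped in `generation_config.json` (length, temperature, and so on) can be inspected with `GenerationConfig` and overridden per call; a minimal sketch:

```python
from transformers import GenerationConfig

# Load the defaults stored in generation_config.json
gen_config = GenerationConfig.from_pretrained("prithivMLmods/Tulu-MathLingo-8B")
print(gen_config)

# Reusing `model` and `inputs` from the example above, individual
# settings can be overridden at call time:
outputs = model.generate(**inputs, generation_config=gen_config, max_new_tokens=100)
```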
- Hardware: a GPU with roughly 16 GB of memory to hold the fp16 weights (see the arithmetic check below).
- Optimization: uses half precision (fp16) for a reduced memory footprint.
- Base Model: meta-llama/Llama-3.1-8B
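As a back-of-the-envelope check on the hardware requirement (pure arithmetic; ignores activations and the KV cache):

```python
# 8 billion parameters stored at 2 bytes each in fp16
params = 8e9
bytes_per_param = 2
print(f"~{params * bytes_per_param / 1024**3:.1f} GiB of weights")  # ~14.9 GiB
```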