---
license: apache-2.0
language:
  - ar
  - en
task_categories:
  - text-classification
  - token-classification
  - table-question-answering
  - question-answering
  - zero-shot-classification
  - translation
  - summarization
  - conversational
  - text-generation
  - text2text-generation
  - fill-mask
  - sentence-similarity
metrics:
  - accuracy
  - f1
  - bertscore
  - bleu
  - bleurt
  - brier_score
  - code_eval
  - character
tags:
  - chemistry
  - biology
  - finance
  - legal
  - music
  - code
  - art
  - climate
  - medical
  - text-classification
  - emotion
  - endpoints-template
pretty_name: Zalmati
datasets:
  - PetraAI/PetraAI
  - emotion
  - bigcode/the-stack-v2
  - microsoft/orca-math-word-problems-200k
  - HuggingFaceTB/cosmopedia
  - fka/awesome-chatgpt-prompts
  - Cohere/wikipedia-2023-11-embed-multilingual-v3
---

# Zalmati

## Overview

Zalmati is a powerful multilingual language model trained on the massive and diverse PetraAI dataset. It can handle a wide range of natural language processing tasks including text classification, emotion analysis, question answering, translation, summarization, text generation and more across multiple domains like chemistry, biology, finance, legal, and medicine. With its support for Arabic and English, Zalmati provides cutting-edge AI capabilities to users in the Arabic-speaking world.

## Model Architecture

Zalmati is based on the transformer architecture and was pretrained on the 1M-10M sample range of the PetraAI dataset using masked language modeling. It leverages the latest advances in large language models and transfer learning to achieve state-of-the-art performance on various NLP benchmarks.
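The masked language modeling objective mentioned above can be sketched in a few lines of plain Python. The 15% masking rate and the `[MASK]` token follow the BERT-style convention and are assumptions for illustration, not published details of Zalmati's pretraining:

```python
import random

def mask_tokens(tokens, mask_token="[MASK]", mask_prob=0.15, seed=1):
    """BERT-style masking sketch: hide a fraction of tokens so the model
    learns to predict them from the surrounding context."""
    rng = random.Random(seed)
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            masked.append(mask_token)
            labels.append(tok)   # the model is trained to recover this token
        else:
            masked.append(tok)
            labels.append(None)  # no loss is computed at unmasked positions
    return masked, labels

tokens = "zalmati is trained on the petraai dataset".split()
masked, labels = mask_tokens(tokens)
print(masked)
```

During pretraining, the loss is computed only at the masked positions, which is what the `labels` list records here.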

## Intended Use

Zalmati can be used for a multitude of language understanding and generation tasks across different domains. Some example use cases:

- Text classification for topics, emotions, etc.
- Text summarization for legal/financial documents
- Question answering for knowledge bases
- Code generation and translation
- Sentiment analysis for Arabic social media
- Creative writing and story generation

The model outputs should be reviewed and filtered as needed based on the specific use case.
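Reviewing and filtering outputs can start as a simple post-processing pass over generated text. The blocklist below is a hypothetical placeholder; a real deployment would substitute its own policy or a dedicated safety classifier:

```python
def filter_output(text, blocklist=("offensive_term",), placeholder="[filtered]"):
    """Minimal post-processing sketch: replace blocklisted terms in a model
    output before showing it to users. This only illustrates the review
    step; production systems typically layer on a safety classifier."""
    for term in blocklist:
        text = text.replace(term, placeholder)
    return text

print(filter_output("an offensive_term appeared here"))
```

A filter like this runs on the decoded string, so it composes with any of the use cases listed above regardless of the task.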

## Limitations and Risks

Like all language models, Zalmati may reflect biases present in its training data. It should not be used for any high-stakes decision making without careful testing and monitoring. The model may also make factual mistakes or generate offensive/unsafe content that requires filtering.

For development/research purposes only. Not for clinical use. Please review the license for terms.

## How to Use

You can use Zalmati with the Hugging Face Transformers library:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("PetraAI/Zalmati")
model = AutoModelForSeq2SeqLM.from_pretrained("PetraAI/Zalmati")

input_text = "Translate the following Arabic text to English: السلام عليكم"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids

outputs = model.generate(input_ids)
translated = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(translated)  # "Peace be upon you"
```

## Citation

```bibtex
@article{PetraAI2022ZalmatiModel,
  title={Zalmati: A Powerful Multilingual Language Model for Arabic and English},
  author={First Last and First Last},
  journal={arXiv},
  year={2022}
}
```