Model Card for SaulLM-54B-Instruct


Note: This model is a research artifact and should be considered as such.

Model Details

Model Description

SaulLM-54B-Instruct is a state-of-the-art language model specifically designed for the legal domain. It was developed through a collaboration between Equall and MICS at CentraleSupélec (Université Paris-Saclay) and aims to contribute to the advancement of LLMs specialized for legal work.

  • Developed by: Equall and MICS of CentraleSupélec (Université Paris-Saclay)
  • Model type: A 54-billion-parameter model pretrained and finetuned for legal tasks, leveraging data from US and European legal databases.
  • Language(s) (NLP): English
  • License: MIT
  • Finetuned from model: A base model developed by Equall through continued pretraining of Mixtral models.

Intended Uses & Limitations

Intended Uses

SaulLM-54B-Instruct is intended to support further research and be adapted for various legal use cases.
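As a sketch of how the model might be adapted for such use cases, the snippet below loads it with the Hugging Face `transformers` library and generates a completion. The helper name `generate_legal_completion` and the generation settings are illustrative, not part of this card; running it requires enough GPU memory for a ~54B-parameter model.

```python
def generate_legal_completion(
    prompt: str,
    model_id: str = "Equall/SaulLM-54B-Instruct",
    max_new_tokens: int = 256,
) -> str:
    """Generate a completion from SaulLM-54B-Instruct for a legal prompt.

    Imports are deferred so this sketch can be read or imported without
    torch/transformers installed; calling it requires substantial GPU memory.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,  # weights are distributed in BF16
        device_map="auto",           # shard across available GPUs
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    # Decode only the newly generated tokens, not the prompt.
    new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

Outputs should be treated as research material, not legal advice, per the limitations below.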

Limitations

The information provided by the model is for informational purposes only and should not be interpreted as legal advice. Also, because SaulLM-54B-Instruct was trained with a focus on US and European legal systems, it may not perform as well on legal systems outside of those jurisdictions.

Bias, Risks, and Ethical Considerations

Bias and Risks

Despite efforts to mitigate bias, SaulLM-54B may still exhibit biases inherent in its training data or otherwise produce inaccurate responses. The model was trained on information up to a fixed cutoff and cannot account for recent legal developments. Users should be cautious and critically evaluate its outputs, especially in sensitive legal matters. The responsibility for decisions based on this information rests with the user, not the model or its developers. Users are encouraged to consult qualified legal professionals where legal advice is needed.

Ethical Considerations

Users must use SaulLM-54B responsibly, ensuring that the model is not misused in a way that violates the law or infringes on the rights of others. Among other things, the model may not be used to generate harmful content, spread misinformation, or violate privacy or intellectual property rights.

Technical Details

Training Data

SaulLM-54B was trained on a rich dataset comprising European and US legal texts, court rulings, and legislative documents.

Citation

To reference SaulLM-54B in your work, please cite the following paper:

@misc{colombo2024saullm54bsaullm141bscaling,
      title={SaulLM-54B & SaulLM-141B: Scaling Up Domain Adaptation for the Legal Domain}, 
      author={Pierre Colombo and Telmo Pires and Malik Boudiaf and Rui Melo and Dominic Culver and Sofia Morgado and Etienne Malaboeuf and Gabriel Hautreux and Johanne Charpentier and Michael Desa},
      year={2024},
      eprint={2407.19584},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2407.19584}, 
}
