Hebrew-Gemma-11B-V2

An updated version of Hebrew-Gemma-11B, trained longer and with several bug fixes.

Base Models:

Instruct Models:

Hebrew-Gemma-11B is an open-source Large Language Model (LLM): a Hebrew/English pretrained generative text model with 11 billion parameters, based on the Gemma-7B architecture from Google.

It is a continued pretraining of gemma-7b, extended to a larger scale and trained on 3B additional tokens of both English and Hebrew text data.
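
For illustration, continued pretraining of a causal LM can be sketched with the Hugging Face Trainer. This is a minimal sketch, not the recipe used for this model: the corpus file, sequence length, and hyperparameters are placeholders, and the step that extends gemma-7b to 11B parameters is omitted.

from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b")

# Placeholder corpus: any plain-text file with one document per line.
dataset = load_dataset("text", data_files={"train": "hebrew_english_corpus.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="continued-pretrain",
        per_device_train_batch_size=1,
        num_train_epochs=1,
        bf16=True,
    ),
    train_dataset=tokenized,
    # mlm=False gives the standard next-token (causal LM) objective.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()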

The resulting model, Hebrew-Gemma-11B, is a general-purpose language model suitable for a wide range of natural language processing tasks, with a focus on Hebrew language understanding and generation.

Terms of Use

As an extension of Gemma-7B, this model is subject to Google's original license and terms of use.

Gemma-7B original Terms of Use: Terms

Usage

Below are some code snippets to help you quickly get started running the model.

First, make sure to pip install -U transformers, then copy the snippet from the section relevant to your use case.

Running on CPU

from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model (defaults to CPU, full precision)
tokenizer = AutoTokenizer.from_pretrained("yam-peleg/Hebrew-Gemma-11B-V2")
model = AutoModelForCausalLM.from_pretrained("yam-peleg/Hebrew-Gemma-11B-V2")

# Hebrew prompt: "Hello! How are you today?"
input_text = "שלום! מה שלומך היום?"
input_ids = tokenizer(input_text, return_tensors="pt")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))

Running on GPU

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("yam-peleg/Hebrew-Gemma-11B-V2")
# device_map="auto" places the model weights on the available GPU(s)
model = AutoModelForCausalLM.from_pretrained("yam-peleg/Hebrew-Gemma-11B-V2", device_map="auto")

# Hebrew prompt: "Hello! How are you today?"
input_text = "שלום! מה שלומך היום?"
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
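
Loading in the default float32 precision takes about 4 bytes per parameter (roughly 42 GB for the weights alone). Since the published weights are stored in BF16, you can roughly halve that by passing torch_dtype; a small variant of the snippet above:

import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "yam-peleg/Hebrew-Gemma-11B-V2",
    device_map="auto",
    torch_dtype=torch.bfloat16,  # half-precision weights, ~2 bytes per parameter
)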

Running with 4-Bit precision

from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

tokenizer = AutoTokenizer.from_pretrained("yam-peleg/Hebrew-Gemma-11B-V2")
# 4-bit quantization via bitsandbytes (requires: pip install bitsandbytes accelerate)
model = AutoModelForCausalLM.from_pretrained(
    "yam-peleg/Hebrew-Gemma-11B-V2",
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
)

# Hebrew prompt: "Hello! How are you today?"
input_text = "שלום! מה שלומך היום?"
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
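
All of the snippets above call generate() with default settings, which produce only a short continuation. A sketch of common generation parameters follows; the values are illustrative, not tuned recommendations for this model:

outputs = model.generate(
    **input_ids,
    max_new_tokens=128,  # allow a longer completion (default is short)
    do_sample=True,      # sample instead of greedy decoding
    temperature=0.7,     # illustrative value, not tuned for this model
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))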

Benchmark Results

  • Coming Soon!

Notice

Hebrew-Gemma-11B-V2 is a pretrained base model and therefore does not have any moderation mechanisms.

Authors

  • Trained by Yam Peleg.
  • In collaboration with Jonathan Rouach and Arjeo, inc.