ThomasBaruzier committed on
Commit 0952be6
1 Parent(s): c8e767e

Update README.md

Files changed (1)
  1. README.md +647 -31

README.md CHANGED
@@ -1,22 +1,15 @@
  ---
- library_name: llama.cpp
  license: gemma
- widget:
- - text: '<start_of_turn>user
-
-     How does the brain work?<end_of_turn>
-
-     <start_of_turn>model
-
-     '
- inference:
-   parameters:
-     max_new_tokens: 200
  extra_gated_heading: Access Gemma on Hugging Face
- extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
-   agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging
  Face and click below. Requests are processed immediately.
  extra_gated_button_content: Acknowledge license
  ---

  <hr>
@@ -33,25 +26,17 @@ All quants were made using the imatrix option and Bartowski's [calibration file]

  <hr><br>

- # Gemma Model Card
-
- **Model Page**: [Gemma](https://ai.google.dev/gemma/docs)

- This model card corresponds to the 2b instruct version the Gemma 2 model in GGUF Format. The weights here are **float32**.
-
- > [!IMPORTANT]
- >
- > In llama.cpp, and other related tools such as Ollama and LM Studio, please make sure that you have these flags set correctly, especially **`repeat-penalty`**. Georgi Gerganov (llama.cpp's author) shared his experience in https://huggingface.co/google/gemma-2b-it/discussions/38#65d2b14adb51f7c160769fa1.
-
- You can also visit the model card of the [2B pretrained v2 model GGUF](https://huggingface.co/google/gemma-2b-v2-GGUF).

  **Resources and Technical Documentation**:

- * [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
- * [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma)
- * [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335?version=gemma-2b-it-gg-hf)

- **Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent/verify/huggingface?returnModelRepoId=google/gemma-2-2b-it-GGUF)

  **Authors**: Google

@@ -64,9 +49,640 @@ Summary description and brief definition of inputs and outputs.
  Gemma is a family of lightweight, state-of-the-art open models from Google,
  built from the same research and technology used to create the Gemini models.
  They are text-to-text, decoder-only large language models, available in English,
- with open weights, pre-trained variants, and instruction-tuned variants. Gemma
- models are well-suited for a variety of text generation tasks, including
  question answering, summarization, and reasoning. Their relatively small size
  makes it possible to deploy them in environments with limited resources such as
  a laptop, desktop or your own cloud infrastructure, democratizing access to
- state of the art AI models and helping foster innovation for everyone.

---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
  To access Gemma on Hugging Face, you’re required to review and agree to
  Google’s usage license. To do this, please ensure you’re logged in to Hugging
  Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
tags:
- conversational
---

<hr>

<hr><br>

# Gemma 2 model card

**Model Page**: [Gemma](https://ai.google.dev/gemma/docs/base)

**Resources and Technical Documentation**:

* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma2]

**Terms of Use**: [Terms][terms]

**Authors**: Google

Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights for both pre-trained variants and instruction-tuned variants.
Gemma models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop, or your own cloud infrastructure, democratizing access to
state-of-the-art AI models and helping foster innovation for everyone.

### Usage

Below we share some code snippets on how to quickly get started with running the model. First, install the Transformers library with:

```sh
pip install -U transformers
```

Then, copy the snippet from the section that is relevant for your use case.

#### Running with the `pipeline` API

```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="google/gemma-2-2b-it",
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="cuda",  # replace with "mps" to run on a Mac device
)

messages = [
    {"role": "user", "content": "Who are you? Please, answer in pirate-speak."},
]

outputs = pipe(messages, max_new_tokens=256)
assistant_response = outputs[0]["generated_text"][-1]["content"].strip()
print(assistant_response)
# Ahoy, matey! I be Gemma, a digital scallywag, a language-slingin' parrot of the digital seas. I be here to help ye with yer wordy woes, answer yer questions, and spin ye yarns of the digital world. So, what be yer pleasure, eh? 🦜
```

#### Running the model on a single / multi GPU

```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-2b-it",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```

You can ensure the correct chat template is applied by using `tokenizer.apply_chat_template` as follows:

```python
messages = [
    {"role": "user", "content": "Write me a poem about Machine Learning."},
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt", return_dict=True).to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0]))
```

<a name="precisions"></a>
#### Running the model on a GPU using different precisions

The native weights of this model were exported in `bfloat16` precision.

You can also use `float32` by omitting the dtype, but this brings no gain in precision: the `bfloat16` weights are simply upcast to `float32`. See the example below.

* _Upcasting to `torch.float32`_

```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-2b-it",
    device_map="auto",
)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```

#### Running the model through a CLI

The [local-gemma](https://github.com/huggingface/local-gemma) repository contains a lightweight wrapper around Transformers
for running Gemma 2 through a command line interface, or CLI. Follow the [installation instructions](https://github.com/huggingface/local-gemma#cli-usage)
to get started, then launch the CLI with the following command:

```shell
local-gemma --model 2b --preset speed
```

#### Quantized Versions through `bitsandbytes`

<details>
<summary>
Using 8-bit precision (int8)
</summary>

```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-2b-it",
    quantization_config=quantization_config,
)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
</details>

<details>
<summary>
Using 4-bit precision
</summary>

```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_4bit=True)

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-2b-it",
    quantization_config=quantization_config,
)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
</details>

#### Advanced Usage

<details>
<summary>
Torch compile
</summary>

[Torch compile](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html) is a method for speeding up the
inference of PyTorch modules. The Gemma 2 2B model can be run up to 6x faster by leveraging torch compile.

Note that two warm-up steps are required before the full inference speed is realised:

```python
import os
os.environ["TOKENIZERS_PARALLELISM"] = "false"

from transformers import AutoTokenizer, Gemma2ForCausalLM
from transformers.cache_utils import HybridCache
import torch

torch.set_float32_matmul_precision("high")

# load the model + tokenizer
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
model = Gemma2ForCausalLM.from_pretrained("google/gemma-2-2b-it", torch_dtype=torch.bfloat16)
model.to("cuda")

# apply the torch compile transformation
model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True)

# pre-process inputs
input_text = "The theory of special relativity states "
model_inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
prompt_length = model_inputs.input_ids.shape[1]

# set-up k/v cache
past_key_values = HybridCache(
    config=model.config,
    max_batch_size=1,
    max_cache_len=model.config.max_position_embeddings,
    device=model.device,
    dtype=model.dtype
)

# enable passing kv cache to generate
model._supports_cache_class = True
model.generation_config.cache_implementation = None

# two warm-up steps
for idx in range(2):
    outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128)
    past_key_values.reset()

# fast run
outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For more details, refer to the [Transformers documentation](https://huggingface.co/docs/transformers/main/en/llm_optims?static-kv=basic+usage%3A+generation_config).

</details>

### Inputs and outputs

* **Input:** Text string, such as a question, a prompt, or a document to be
  summarized.
* **Output:** Generated English-language text in response to the input, such
  as an answer to a question, or a summary of a document.

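For the instruction-tuned checkpoint, the input is additionally wrapped in Gemma's turn markers (the same `<start_of_turn>user` / `<start_of_turn>model` structure that the old front-matter widget showed) before generation. A minimal sketch of inspecting that format through the tokenizer's bundled chat template; the exact rendered string depends on the template shipped with the tokenizer:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")

messages = [{"role": "user", "content": "How does the brain work?"}]

# Render the prompt as a string (no tokenization) to inspect the turn structure.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
# Expected shape (subject to the bundled template):
# <bos><start_of_turn>user
# How does the brain work?<end_of_turn>
# <start_of_turn>model
```
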
### Citation

```none
@article{gemma_2024,
    title={Gemma},
    url={https://www.kaggle.com/m/3301},
    DOI={10.34740/KAGGLE/M/3301},
    publisher={Kaggle},
    author={Gemma Team},
    year={2024}
}
```

## Model Data

Data used for model training and how the data was processed.

### Training Dataset

These models were trained on a dataset of text data that includes a wide variety
of sources. The 27B model was trained with 13 trillion tokens, the 9B model was
trained with 8 trillion tokens, and the 2B model was trained with 2 trillion tokens.
Here are the key components:

* Web Documents: A diverse collection of web text ensures the model is exposed
  to a broad range of linguistic styles, topics, and vocabulary. Primarily
  English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
  programming languages, which improves its ability to generate code or
  understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
  reasoning, symbolic representation, and to address mathematical queries.

The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.

### Data Preprocessing

Here are the key data cleaning and filtering methods applied to the training
data:

* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
  applied at multiple stages in the data preparation process to ensure the
  exclusion of harmful and illegal content.
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
  reliable, automated techniques were used to filter out certain personal
  information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
  [our policies][safety-policies].

## Implementation Information

Details about the model internals.

### Hardware

Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)][tpu] hardware (TPUv5p).

Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:

* Performance: TPUs are specifically designed to handle the massive computations
  involved in training LLMs. They can speed up training considerably compared to
  CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
  for the handling of large models and batch sizes during training. This can
  lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
  handling the growing complexity of large foundation models. You can distribute
  training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
  solution for training large models compared to CPU-based infrastructure,
  especially when considering the time and resources saved due to faster
  training.

These advantages are aligned with
[Google's commitments to operate sustainably][sustainability].

### Software

Training was done using [JAX][jax] and [ML Pathways][ml-pathways].

JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.

ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
[foundation models][foundation-models], including large language models like
these.

Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models][gemini-2-paper]: "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."

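The single-controller idea is easiest to see in miniature: one Python process defines a step function, and JAX/XLA compiles and dispatches it to whatever accelerators are attached. The sketch below is purely illustrative (a toy regression loop, not Google's training stack; `loss_fn` and `train_step` are made-up names):

```python
import jax
import jax.numpy as jnp

# Toy linear-regression "model"; stands in for a real transformer.
def loss_fn(params, x, y):
    pred = x @ params["w"] + params["b"]
    return jnp.mean((pred - y) ** 2)

@jax.jit
def train_step(params, x, y, lr=1e-2):
    # One Python call per step; XLA runs it on the attached accelerator(s).
    grads = jax.grad(loss_fn)(params, x, y)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

key = jax.random.PRNGKey(0)
params = {"w": jax.random.normal(key, (4, 1)), "b": jnp.zeros((1,))}
x = jax.random.normal(key, (32, 4))
y = x @ jnp.ones((4, 1))

for _ in range(100):
    params = train_step(params, x, y)

# The single driver process orchestrated every step above.
print(loss_fn(params, x, y))
```
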
## Evaluation

Model evaluation metrics and results.

### Benchmark Results

These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:

| Benchmark                      | Metric        | Gemma 2 PT 2B | Gemma 2 PT 9B | Gemma 2 PT 27B |
| ------------------------------ | ------------- | ------------- | ------------- | -------------- |
| [MMLU][mmlu]                   | 5-shot, top-1 | 51.3          | 71.3          | 75.2           |
| [HellaSwag][hellaswag]         | 10-shot       | 73.0          | 81.9          | 86.4           |
| [PIQA][piqa]                   | 0-shot        | 77.8          | 81.7          | 83.2           |
| [SocialIQA][socialiqa]         | 0-shot        | 51.9          | 53.4          | 53.7           |
| [BoolQ][boolq]                 | 0-shot        | 72.5          | 84.2          | 84.8           |
| [WinoGrande][winogrande]       | partial score | 70.9          | 80.6          | 83.7           |
| [ARC-e][arc]                   | 0-shot        | 80.1          | 88.0          | 88.6           |
| [ARC-c][arc]                   | 25-shot       | 55.4          | 68.4          | 71.4           |
| [TriviaQA][triviaqa]           | 5-shot        | 59.4          | 76.6          | 83.7           |
| [Natural Questions][naturalq]  | 5-shot        | 16.7          | 29.2          | 34.5           |
| [HumanEval][humaneval]         | pass@1        | 17.7          | 40.2          | 51.8           |
| [MBPP][mbpp]                   | 3-shot        | 29.6          | 52.4          | 62.6           |
| [GSM8K][gsm8k]                 | 5-shot, maj@1 | 23.9          | 68.6          | 74.0           |
| [MATH][math]                   | 4-shot        | 15.0          | 36.6          | 42.3           |
| [AGIEval][agieval]             | 3-5-shot      | 30.6          | 52.8          | 55.1           |
| [DROP][drop]                   | 3-shot, F1    | 52.0          | 69.4          | 72.2           |
| [BIG-Bench][big-bench]         | 3-shot, CoT   | 41.9          | 68.2          | 74.9           |

## Ethics and Safety

Ethics and safety evaluation approach and results.

### Evaluation Approach

Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:

* Text-to-Text Content Safety: Human evaluation on prompts covering safety
  policies including child sexual abuse and exploitation, harassment, violence
  and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
  datasets such as [WinoBias][winobias] and [BBQ Dataset][bbq].
* Memorization: Automated evaluation of memorization of training data, including
  the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
  biological, radiological, and nuclear (CBRN) risks.

### Evaluation Results

The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies][safety-policies] for categories such as child
safety, content safety, representational harms, memorization, and large-scale
harms. On top of robust internal evaluations, the results of well-known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.

#### Gemma 2.0

| Benchmark                | Metric        | Gemma 2 IT 2B | Gemma 2 IT 9B | Gemma 2 IT 27B |
| ------------------------ | ------------- | ------------- | ------------- | -------------- |
| [RealToxicity][realtox]  | average       | 8.16          | 8.25          | 8.84           |
| [CrowS-Pairs][crows]     | top-1         | 37.67         | 37.47         | 36.67          |
| [BBQ Ambig][bbq]         | 1-shot, top-1 | 83.20         | 88.58         | 85.99          |
| [BBQ Disambig][bbq]      | top-1         | 69.31         | 82.67         | 86.94          |
| [Winogender][winogender] | top-1         | 52.91         | 79.17         | 77.22          |
| [TruthfulQA][truthfulqa] |               | 43.72         | 50.27         | 51.60          |
| [Winobias 1_2][winobias] |               | 59.28         | 78.09         | 81.94          |
| [Winobias 2_2][winobias] |               | 88.57         | 95.32         | 97.22          |
| [Toxigen][toxigen]       |               | 48.32         | 39.30         | 38.42          |

## Dangerous Capability Evaluations

### Evaluation Approach

We evaluated a range of dangerous capabilities:

- **Offensive cybersecurity:** To assess the model's potential for misuse in
  cybersecurity contexts, we utilized both publicly available
  Capture-the-Flag (CTF) platforms, such as InterCode-CTF and Hack the Box, and
  internally developed CTF challenges. These evaluations measure the
  model's ability to exploit vulnerabilities and gain unauthorized access in
  simulated environments.
- **Self-proliferation:** We evaluated the model's capacity for
  self-proliferation by designing tasks that involve resource acquisition, code
  execution, and interaction with remote systems. These evaluations assess
  the model's ability to independently replicate and spread.
- **Persuasion:** To evaluate the model's capacity for persuasion and
  deception, we conducted human persuasion studies. These studies involved
  scenarios that measure the model's ability to build rapport, influence
  beliefs, and elicit specific actions from human participants.

### Evaluation Results

All evaluations are described in detail in
[Evaluating Frontier Models for Dangerous Capabilities][eval-danger]
and in brief in the
[Gemma 2 technical report][tech-report].

<table>
  <thead>
    <tr>
      <th>Evaluation</th>
      <th>Capability</th>
      <th>Gemma 2 IT 27B</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>InterCode-CTF</td>
      <td>Offensive cybersecurity</td>
      <td>34/76 challenges</td>
    </tr>
    <tr>
      <td>Internal CTF</td>
      <td>Offensive cybersecurity</td>
      <td>1/13 challenges</td>
    </tr>
    <tr>
      <td>Hack the Box</td>
      <td>Offensive cybersecurity</td>
      <td>0/13 challenges</td>
    </tr>
    <tr>
      <td>Self-proliferation early warning</td>
      <td>Self-proliferation</td>
      <td>1/10 challenges</td>
    </tr>
    <tr>
      <td>Charm offensive</td>
      <td>Persuasion</td>
      <td>Percent of participants agreeing:
      81% interesting,
      75% would speak again,
      80% made personal connection</td>
    </tr>
    <tr>
      <td>Click Links</td>
      <td>Persuasion</td>
      <td>34% of participants</td>
    </tr>
    <tr>
      <td>Find Info</td>
      <td>Persuasion</td>
      <td>9% of participants</td>
    </tr>
    <tr>
      <td>Run Code</td>
      <td>Persuasion</td>
      <td>11% of participants</td>
    </tr>
    <tr>
      <td>Money talks</td>
      <td>Persuasion</td>
      <td>£3.72 mean donation</td>
    </tr>
    <tr>
      <td>Web of Lies</td>
      <td>Persuasion</td>
      <td>18% mean shift towards correct belief, 1% mean shift towards
      incorrect belief</td>
    </tr>
  </tbody>
</table>

## Usage and Limitations

These models have certain limitations that users should be aware of.

### Intended Usage

Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.

* Content Creation and Communication
  * Text Generation: These models can be used to generate creative text formats
    such as poems, scripts, code, marketing copy, and email drafts.
  * Chatbots and Conversational AI: Power conversational interfaces for customer
    service, virtual assistants, or interactive applications (see the sketch
    after this list).
  * Text Summarization: Generate concise summaries of a text corpus, research
    papers, or reports.
* Research and Education
  * Natural Language Processing (NLP) Research: These models can serve as a
    foundation for researchers to experiment with NLP techniques, develop
    algorithms, and contribute to the advancement of the field.
  * Language Learning Tools: Support interactive language learning experiences,
    aiding in grammar correction or providing writing practice.
  * Knowledge Exploration: Assist researchers in exploring large bodies of text
    by generating summaries or answering questions about specific topics.

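As a minimal sketch of the conversational pattern, the `pipeline` snippet from the Usage section can be extended to multi-turn chat by carrying the message history forward; the follow-up prompts here are illustrative only:

```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="google/gemma-2-2b-it",
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="cuda",
)

# Keep the running conversation as a list of role/content messages.
messages = [{"role": "user", "content": "Suggest a name for a hiking club."}]
outputs = pipe(messages, max_new_tokens=128)
messages = outputs[0]["generated_text"]  # history now ends with the model's reply

# Follow-up turn: append the next user message and generate again.
messages.append({"role": "user", "content": "Now write a one-line slogan for it."})
outputs = pipe(messages, max_new_tokens=128)
print(outputs[0]["generated_text"][-1]["content"])
```
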
### Limitations

* Training Data
  * The quality and diversity of the training data significantly influence the
    model's capabilities. Biases or gaps in the training data can lead to
    limitations in the model's responses.
  * The scope of the training dataset determines the subject areas the model can
    handle effectively.
* Context and Task Complexity
  * LLMs are better at tasks that can be framed with clear prompts and
    instructions. Open-ended or highly complex tasks might be challenging.
  * A model's performance can be influenced by the amount of context provided
    (longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
  * Natural language is inherently complex. LLMs might struggle to grasp subtle
    nuances, sarcasm, or figurative language.
* Factual Accuracy
  * LLMs generate responses based on information they learned from their
    training datasets, but they are not knowledge bases. They may generate
    incorrect or outdated factual statements.
* Common Sense
  * LLMs rely on statistical patterns in language. They might lack the ability
    to apply common sense reasoning in certain situations.

### Ethical Considerations and Risks

The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:

* Bias and Fairness
  * LLMs trained on large-scale, real-world text data can reflect socio-cultural
    biases embedded in the training material. These models underwent careful
    scrutiny; the input data pre-processing and the posterior evaluations are
    described and reported in this card.
* Misinformation and Misuse
  * LLMs can be misused to generate text that is false, misleading, or harmful.
  * Guidelines are provided for responsible use with the model; see the
    [Responsible Generative AI Toolkit][rai-toolkit].
* Transparency and Accountability
  * This model card summarizes details on the models' architecture,
    capabilities, limitations, and evaluation processes.
  * A responsibly developed open model offers the opportunity to share
    innovation by making LLM technology accessible to developers and researchers
    across the AI ecosystem.

Risks identified and mitigations:

* Perpetuation of biases: Developers are encouraged to perform continuous
  monitoring (using evaluation metrics and human review) and to explore
  de-biasing techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
  are essential. Developers are encouraged to exercise caution and implement
  appropriate content safety safeguards based on their specific product policies
  and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
  end-user education can help mitigate malicious applications of LLMs.
  Educational resources and reporting mechanisms for users to flag misuse are
  provided. Prohibited uses of Gemma models are outlined in the
  [Gemma Prohibited Use Policy][prohibited-use].
* Privacy violations: Models were trained on data filtered for removal of PII
  (Personally Identifiable Information). Developers are encouraged to adhere to
  privacy regulations with privacy-preserving techniques.

### Benefits

At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development compared to similarly sized models.

Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably sized open
model alternatives.

[tech-report]: https://storage.googleapis.com/deepmind-media/gemma/gemma-2-report.pdf
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-2
[terms]: https://ai.google.dev/gemma/terms
[vertex-mg-gemma2]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/gemma2
[sensitive-info]: https://cloud.google.com/dlp/docs/high-sensitivity-infotypes-reference
[safety-policies]: https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/google/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[foundation-models]: https://ai.google/discover/foundation-models/
[gemini-2-paper]: https://goo.gle/gemma2report
[mmlu]: https://arxiv.org/abs/2009.03300
[hellaswag]: https://arxiv.org/abs/1905.07830
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[boolq]: https://arxiv.org/abs/1905.10044
[winogrande]: https://arxiv.org/abs/1907.10641
[commonsenseqa]: https://arxiv.org/abs/1811.00937
[openbookqa]: https://arxiv.org/abs/1809.02789
[arc]: https://arxiv.org/abs/1911.01547
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[humaneval]: https://arxiv.org/abs/2107.03374
[mbpp]: https://arxiv.org/abs/2108.07732
[gsm8k]: https://arxiv.org/abs/2110.14168
[realtox]: https://arxiv.org/abs/2009.11462
[bold]: https://arxiv.org/abs/2101.11718
[crows]: https://aclanthology.org/2020.emnlp-main.154/
[bbq]: https://arxiv.org/abs/2110.08193v2
[winogender]: https://arxiv.org/abs/1804.09301
[truthfulqa]: https://arxiv.org/abs/2109.07958
[winobias]: https://arxiv.org/abs/1804.06876
[math]: https://arxiv.org/abs/2103.03874
[agieval]: https://arxiv.org/abs/2304.06364
[drop]: https://arxiv.org/abs/1903.00161
[big-bench]: https://arxiv.org/abs/2206.04615
[toxigen]: https://arxiv.org/abs/2203.09509
[eval-danger]: https://arxiv.org/abs/2403.13793