Gemma Model Card

TensorFlow Lite

Model Page: Gemma

A TensorFlow Lite implementation of the Gemma 2B instruction-tuned model, provided in 4-bit and 8-bit quantized variants for both GPU and CPU execution and optimized for on-device machine learning. Gemma 2B is a 2B-parameter base model that has been instruction-tuned to respond to prompts in a conversational manner.
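As a rough illustration of working with the TensorFlow Lite artifacts, the sketch below loads a .tflite file with the standard TensorFlow Lite interpreter and prints its input and output tensor metadata. The filename is an assumption (use whichever CPU/GPU, 4-bit/8-bit variant you download from this repository), and on-device inference for a generative model is normally driven by a higher-level runtime rather than raw interpreter calls.

```python
# A minimal sketch, assuming a local copy of one of the TFLite files from
# this repository. The filename below is a placeholder, not the actual
# file name published in the repo.
import tensorflow as tf

MODEL_PATH = "gemma-1.1-2b-it.tflite"  # assumed local path

# Load the flatbuffer and allocate tensors.
interpreter = tf.lite.Interpreter(model_path=MODEL_PATH)
interpreter.allocate_tensors()

# Inspect what the model expects and produces.
for detail in interpreter.get_input_details():
    print("input:", detail["name"], detail["shape"], detail["dtype"])
for detail in interpreter.get_output_details():
    print("output:", detail["name"], detail["shape"], detail["dtype"])
```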

This is Gemma 1.1 2B (IT), an update over the original instruction-tuned Gemma release.
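Because the model is instruction-tuned for conversational use, prompts are typically wrapped in Gemma's chat turn markers before tokenization. The sketch below shows that turn format as documented for Gemma instruction-tuned models; the helper function name and example question are illustrative only.

```python
# A minimal sketch of Gemma's instruction-tuned turn format; the helper
# name and the example question are illustrative, not part of this repo.
def format_prompt(user_message: str) -> str:
    """Wrap a single user message in Gemma's chat turn markers."""
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

print(format_prompt("What is TensorFlow Lite?"))
```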

Resources and Technical Documentation:

Terms of Use: Terms

Authors: Google
