Is this the same as Gemini Nano?

#25
by nonetrix - opened

If I remember correctly, Gemini Nano is meant to be a much smaller model that can run locally on devices; I think Google specifically mentioned their Pixel phones. Is this a version of that model? Is this the model that will be used in Pixel phones? Or is it a different or updated version meant to replace Gemini Nano? It seems like a model that could easily run on the Pixel phones with more memory. If it's not Gemini Nano, do you plan to release Gemini Nano under a similar license and perhaps let us, at least in an official capacity, run it on other devices? Thanks for releasing this model regardless; I hope to see more releases.

Google org

Hello! No, the Gemma models are not the same as Gemini Nano -- the Nano models are targeted directly at Android and on-device usage. The Nano models are the most capable for a wide range of mobile-focused use cases, while Gemma is targeted at the broader AI developer community so you can build and experiment easily, including inference and fine-tuning across a wide variety of ML frameworks and hardware platforms -- laptop, desktop, Hugging Face, Google Cloud, etc. Excited to see what you can build with it, and thanks for the question!

trisfromgoogle changed discussion status to closed

Hello

There are Gemini Nano quantized weights floating around now. You can also try it out with Chrome Dev here: https://kharms.ai/nano
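
For anyone who wants to poke at it from the DevTools console instead of that page, here's a minimal sketch against Chrome's experimental Prompt API. The `window.ai.createTextSession` / `session.prompt` shape matches the early Chrome Dev builds and has changed across Chrome releases, so treat the names below as assumptions:

```ts
// Minimal sketch: prompting the on-device model via Chrome's experimental
// Prompt API. Assumes the early window.ai surface from Chrome Dev builds
// (behind chrome://flags plus the "Optimization Guide On Device Model"
// component); the API names have changed between Chrome versions.
async function askNano(prompt: string): Promise<string> {
  const ai = (window as any).ai; // not typed in lib.dom.d.ts yet
  if (!ai?.createTextSession) {
    throw new Error("Prompt API not available in this browser");
  }
  const session = await ai.createTextSession();
  try {
    return await session.prompt(prompt);
  } finally {
    session.destroy?.(); // free the on-device session if supported
  }
}

askNano("What is Gemini Nano?").then(console.log).catch(console.error);
```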

It still has the usual small-model issues, though.

Phi 3.1 is better :3
The fact it can run on phones is awesome, though; I'd love to see Microsoft get Phi running on mobile devices too. The SLM race is going to be interesting to watch.

I do think it would perform better on-device on Pixels; it's still in development on desktop.

  • I don't know if the Nano in Chrome is the 1.8B or the 3.25B parameter model.

I'd love to see a larger model distilled from Gemini for desktops with better performance. Nano is really fast even on my 2080, but being able to run 7B parameter models elsewhere makes Nano a second choice. A 7B parameter multimodal model built into Chrome would be nice.

It's still useful though. E.g.:
"convert this:
"IQ1_S": 1.56,
"IQ2_XXS": 2.06,
"IQ2_XS": 2.31,
"IQ2_S": 2.5,
"IQ2_M": 2.7,
"IQ3_XXS": 3.06,
"IQ3_XS": 3.3,
"IQ3_S": 3.5,
"IQ3_M": 3.7,
"IQ4_XS": 4.25,
"IQ4_NL": 4.5,
Into this format: IQ1_S (1.56BPW), IQ2_XXS (2.06 BPW),"
Model response:

IQ1_S (1.56BPW), IQ2_XXS (2.06 BPW), IQ2_XS (2.31 BPW), IQ2_S (2.5 BPW), IQ2_M (2.7 BPW), IQ3_XXS (3.06 BPW), IQ3_XS (3.3 BPW), IQ3_S (3.5 BPW), IQ3_M (3.7 BPW), IQ4_XS (4.25 BPW), IQ4_NL (4.5 BPW)
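
(For a task this mechanical you don't strictly need a model at all; a few lines of TypeScript do the same conversion deterministically. A sketch, using the quant table from the prompt above:)

```ts
// The same conversion done deterministically: turn the quant -> BPW map
// into "NAME (X BPW)" pairs joined with commas.
const quants: Record<string, number> = {
  IQ1_S: 1.56, IQ2_XXS: 2.06, IQ2_XS: 2.31, IQ2_S: 2.5, IQ2_M: 2.7,
  IQ3_XXS: 3.06, IQ3_XS: 3.3, IQ3_S: 3.5, IQ3_M: 3.7,
  IQ4_XS: 4.25, IQ4_NL: 4.5,
};

const formatted = Object.entries(quants)
  .map(([name, bpw]) => `${name} (${bpw} BPW)`)
  .join(", ");

console.log(formatted);
// IQ1_S (1.56 BPW), IQ2_XXS (2.06 BPW), ..., IQ4_NL (4.5 BPW)
```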

Unofficial, I assume. I'm not sure why Google doesn't release Gemini Nano officially as open weights. People are going to get their hands on it anyway through unofficial means outside of Pixel phones etc., and Google has already released what seem like more powerful models as open weights, especially Gemma 27B. It just seems like duplication of effort; my suggestion would be to release the weights officially so it isn't a legal grey area, though Google is of course free to do whatever it wants.

Also, speaking of Gemma: nice work on 27B. It finally dethroned Mixtral 8x7B for me, since I wanted a mix of speed and smarts at the same time. Great models :-)

Edit: Oh wait, never mind, I think I heard about this. It's integrated into Chrome, so I assume it's fine, unlike what I first assumed. But my point still stands: if there is demand, people will probably get these models running elsewhere with or without Google's involvement. I'm also curious whether this is in open-source Chromium; I assume not, but I hope it's configurable to support other model architectures. Gemma 27B inside this would be great because I have 64 GB of RAM. If I were building this feature I'd probably just bundle llama.cpp with the browser, but I assume it's custom.

From my experience, the implementation in Chrome seems to be roughly the same speed as Phi-3-mini running in transformers.js with WebGPU, so I think it would be possible to load other models in place of Nano, if not officially then with a community tool.
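
For reference, here's roughly what that transformers.js setup looks like. This sketch assumes transformers.js v3 (the first release with WebGPU support, published as `@huggingface/transformers`) and the `Xenova/Phi-3-mini-4k-instruct` ONNX checkpoint, so double-check both names against current releases:

```ts
// Sketch: text generation with transformers.js on WebGPU. Assumes
// transformers.js v3+ ("@huggingface/transformers") and the ONNX
// Phi-3-mini conversion; model id and dtype option are assumptions.
import { pipeline } from "@huggingface/transformers";

const generator = await pipeline(
  "text-generation",
  "Xenova/Phi-3-mini-4k-instruct",
  { device: "webgpu", dtype: "q4" } // 4-bit weights to fit browser memory
);

const out = await generator("List three small language models.", {
  max_new_tokens: 64,
});
console.log(out); // [{ generated_text: "..." }]
```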

And yeah, Brave Nightly lacks the "On Device Model" component required. The Chromium release I could find also lacks it.
That Chromium release is from 13 minutes ago too 🥲

As for bigger open-source Gemini models, I only want them for the vision portion of the model; or a vision Gemma model would be nice. There isn't much competition from large companies in the open-source vision-model space, which is disappointing, especially now that models have enough native context that one image doesn't fill it entirely.

I'd be excited for Meta's Chameleon if it were supported more widely. It supports visual question answering, image captioning, text generation, and image generation in one model.

Edit: I'm hopeful Mozilla will add local AI to their browser. They have added AI with providers you can choose from, and it's opt-in, unlike every other browser with AI. I would hope they add Mozilla-Ocho/llamafile as an option, or maybe a WebGPU backend? They support open-source AI, so I don't see why they wouldn't. A sketch of the llamafile route is below.
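
llamafile would be a natural fit since it already bundles the llama.cpp server with an OpenAI-compatible API, so a browser (or extension) could in principle just POST to it. A sketch; the port 8080 default and placeholder model name are assumptions that depend on how the llamafile was started:

```ts
// Sketch: querying a local llamafile through its OpenAI-compatible
// endpoint. Port 8080 is the llama.cpp server default; adjust to match
// the flags the llamafile was started with.
async function askLlamafile(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:8080/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "local", // llamafile serves one model; the name is a placeholder
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}

askLlamafile("Hello from the browser").then(console.log);
```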
