surprising - that model is incredible for its size.

#2
by mirek190 - opened

It is multilingual, quite coherent in conversation, answers like a human, and is quite good at simpler math and reasoning, like:

  • Explain step by step: 25-4*2+3=?
  • If I have 3 apples today and yesterday I ate one apple, how many apples do I have today?
  • Alfons is faster than Bert. Bert is faster than Claudia. Is Claudia faster than Alfons?
  • How many days are between 12-12-1971 and 18-4-2024? (quite close to the proper answer - the closer a model gets to the exact answer, the better it is at reasoning and math; a quick check is sketched right after this list)
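
Both of the objective questions above are easy to verify. A minimal check, using only the Python standard library (nothing model-specific, just the expected answers):

```python
from datetime import date

# 25 - 4*2 + 3: multiplication first, then addition/subtraction left to right
print(25 - 4 * 2 + 3)  # 25 - 8 + 3 = 20

# Days between 12-12-1971 and 18-4-2024 (dates read as DD-MM-YYYY)
print((date(2024, 4, 18) - date(1971, 12, 12)).days)  # 19121
```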

but more complex math with reasoning is too much for that model.

  • I have 10 apples. I find 3 gold coins in the bottom of a river. The river runs near a big city that has something to do with what I can spend the coins on. I then lose 4 apples but gain a gold coin. Three birds run into my path and drop 6 apples each. I play an online game and win 6 gold coins but I have to share them equally with my 2 teammates. I buy apples for all the coins I have. The price of an apple is 0.5 coins. How many apples do I have? And where is the river?

The above is just too much for that tiny model.
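
For reference, a worked answer to the riddle (under the common reading that the 6 won coins are split three ways, 2 each; the "where is the river" part has no determinable answer - it is a distractor). A small sketch:

```python
apples = 10
coins = 3                    # found at the bottom of the river
apples -= 4                  # lose 4 apples
coins += 1                   # gain a gold coin
apples += 3 * 6              # three birds drop 6 apples each
coins += 6 // 3              # 6 coins won online, shared equally with 2 teammates
apples += int(coins / 0.5)   # spend all 6 coins on apples at 0.5 coins each
print(apples)                # 36
```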

For a 2B, the model is very impressive - almost unbelievable.

Now we need to find an 8B model that is way better than it. Perhaps not even Gemma 2 9B is way better - maybe just a little. Interestingly, they keep making better small models, but there is no excellent 8B or 15B model yet. By now we should have 8B models way better than ChatGPT 3.5 Turbo and close to ChatGPT 4. 😮

The model is really impressive. Yet, its speed is lower than that of any 7B model on my M1 Mac.

UPD: Inference speed is slow only in the SERVER mode...

BTW: I suppose the model is so good due to proper distillation.

UPD: And wow! The model is incredible in general knowledge and language capabilities and can compete with 7-8B models!

YES, it's like a miracle - not even 4b but 2b! ... I never suspected such a small model could be so good. It is worse than gemma 2 9b, but not by much. I wonder how good a 9b model can really be...
8 months ago I thought mistral 0.2 7b was the ceiling for small models...

I wonder if it has to do with the training tokens: 2T tokens for a 2B model is an insane ratio (roughly 1000 tokens per parameter). If those 2T are of the highest quality, that could explain how it's punching so damn high above its weight class.

Screenshot 2024-08-02 181709.png

A class of its own ;)

Ask the question: "The father of my father is called Peter. He has a sister called Nancy. What is the family relationship between me and Nancy?"

Here, the model went crazy! 😂😂 (The expected answer: Peter is my grandfather, so his sister Nancy is my great-aunt.)

Yes, gemma 2 2b is a bit too stupid for such reasoning, but the 9b version solves it easily.

gemma 2 2b - fail but was close to a proper answer ;)

Screenshot 2024-08-04 221235.png

gemma 2 9b - meh ... too easy

Screenshot 2024-08-04 221146.png

Yes, Gemma 2 9B is one of the smartest. I wish it were even smarter, but we know future ones will be. Thanks. 😃🙏👍

BUT WAIT!

Can you finetune a Gemma 2 2B to know stuff like Gemma 2 9B?

Can you augment its knowledge by adding responses from the higher end Gemmas?
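
For concreteness, a rough sketch of what "adding responses from the higher end Gemmas" could look like: generate answers with gemma-2-9b-it as the teacher and save them as a fine-tuning dataset for the 2B. This is only an illustration of the idea (the prompts and file name are placeholders), not a recipe known to work:

```python
# Build a small "distillation" dataset from gemma-2-9b-it responses.
# Assumes transformers is installed and there is enough memory for the 9B model.
import json
from transformers import pipeline

teacher = pipeline(
    "text-generation",
    model="google/gemma-2-9b-it",
    device_map="auto",
)

prompts = [
    "The father of my father is called Peter. He has a sister called Nancy. "
    "What is the family relationship between me and Nancy?",
    # ...more prompts covering the knowledge you want the 2B to pick up
]

with open("gemma9b_responses.jsonl", "w") as f:
    for prompt in prompts:
        out = teacher(
            [{"role": "user", "content": prompt}],
            max_new_tokens=256,
        )
        answer = out[0]["generated_text"][-1]["content"]
        f.write(json.dumps({"prompt": prompt, "response": answer}) + "\n")
```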

I think this won't work the way you expect. You're trying to make a smaller model behave exactly like a bigger model. If that were possible, we could make an 8B model reach the same level as ChatGPT. We need other new technologies to create better small models, and the next generation is always smarter than the older one. I think in the future we will discover ways to make an 8B model even better than the current ChatGPT. For now, these models are only curiosities. We can't trust their answers: they are good for chatting and role playing, but not for reasoning, and they hallucinate. 🙏👍

It sucks to find out that fine-tuning cannot improve a 2B any further, even after augmenting it with a lot of extra information gathered from various versions of ChatGPT, Gemini, or other HF models.

The good point here is that the model is multilingual. It could be an excellent offline translator, and frankly it doesn't translate badly. However, due to the low number of parameters it sometimes predicts wrong words. Maybe it's possible to fine-tune this model for translation tasks; given its size, that should be doable on a consumer GPU or Google Colab. I have only 8GB on my Mac M1, so this is the only model that I can run with a 4096 context size. Also, it would be perfect to extend the context window somehow.
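
A minimal sketch of what such a translation fine-tune could look like on a consumer GPU or Colab: load the 2B in 4-bit and attach LoRA adapters with peft, so only a small fraction of the weights is trained. Dataset handling and the training loop are omitted; the model ID and hyperparameters are just illustrative:

```python
# Prepare gemma-2-2b-it for a LoRA translation fine-tune.
# Assumes transformers, peft and bitsandbytes are installed and a CUDA GPU (e.g. Colab).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_id = "google/gemma-2-2b-it"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable

# From here, train on source/translation pairs with e.g. trl's SFTTrainer
# or a plain transformers Trainer.
```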

Also, I noticed that it works very fast only with llama-cli, not with llama-server. Maybe someone knows the answer? It relates not only to this model but to all models running with llama.cpp.

Also, I noticed that it works very fast only with llama-cli, not with llama-server.

What about llama-cpp-python?
This way you can write your own frontend.
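
A minimal sketch of that route (the GGUF path is an assumption - point it at your local gemma-2-2b file):

```python
# Run the GGUF in-process with llama-cpp-python instead of llama-server.
from llama_cpp import Llama

llm = Llama(
    model_path="./gemma-2-2b-it-Q4_K_M.gguf",  # assumed local path
    n_ctx=4096,        # the 4096 context mentioned above
    n_gpu_layers=-1,   # offload everything to Metal/GPU if available
)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Translate to French: The river runs near a big city."}],
    max_tokens=128,
)
print(result["choices"][0]["message"]["content"])
```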

...so it's better to use llama-server or ollama - both provide an API.
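
And for the server route, a minimal sketch of calling llama-server's OpenAI-compatible endpoint, assuming it is running locally on the default port 8080:

```python
# Query a locally running llama-server (e.g. `llama-server -m model.gguf`)
# through its OpenAI-compatible chat endpoint.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [{"role": "user", "content": "Explain step by step: 25-4*2+3=?"}],
        "max_tokens": 256,
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```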

Fine-tune the 2b model even more?
I'm already surprised it has reasoning capability at all, and the fact that it's even multilingual is even more astonishing... even its coding is not too bad!
It's at the level of 33B LLM coders from 8 months ago.

The tendency is that they become better with time, not worse. Yesterday I saw an ad for a pocket translator and voice-to-text device for deaf people, which can write down what the other person is saying even in the noisiest environments. This is amazing 💥🙏

It will be amazing if it works offline ;)
If it uses an API and the internet, then... meh. Such projects already exist on GitHub.

It's at the level of 33B LLM coders from 8 months ago.

Coming soon:

  • 1B model as good as GPT-3.5
  • 7B model as good as GPT-4
  • 10B model as good as GPT-4o

It's a dream that will become reality. Everybody will have an assistant/friend on the phone who can talk and listen, take care of the whole agenda, solve problems, do anything. We won't have only one; we'll have many. 🙏👍💥❤️

I'm dropping a new type of Gemma 2: SYSTEM PROMPT ENABLED VERSIONS!

Check: SystemGemma2

Can anyone test and GGUF those?

Modified tokenizers have also been uploaded separately.
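
For anyone who wants to try GGUF-ing them, a rough sketch using llama.cpp's conversion script and quantizer (script and binary names as in recent llama.cpp builds; the local paths are assumptions):

```python
# Convert the SystemGemma2 weights (with the modified tokenizer) to GGUF and
# quantize them, assuming a local llama.cpp checkout built with llama-quantize.
import subprocess

model_dir = "path/to/SystemGemma2-2b"        # local HF-format checkout (assumed path)
f16_gguf = "systemgemma2-2b-f16.gguf"
q4_gguf = "systemgemma2-2b-Q4_K_M.gguf"

# 1. HF safetensors -> GGUF (f16)
subprocess.run(
    ["python", "convert_hf_to_gguf.py", model_dir, "--outfile", f16_gguf, "--outtype", "f16"],
    check=True,
)

# 2. Quantize to Q4_K_M for small-memory machines
subprocess.run(["./llama-quantize", f16_gguf, q4_gguf, "Q4_K_M"], check=True)
```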
