kotol

Activity Feed

gv-hf's activity
merve
posted an update 9 days ago
Google just released PaliGemma 2 Mix: new versatile instruction-tuned vision language models 🔥

> Three new models: 3B, 10B, 28B, at resolutions 224 and 448 💙
> Can do vision language tasks with open-ended prompts, understand documents, and segment or detect anything 🤯

Read more: https://huggingface.co./blog/paligemma2mix
Try the demo: google/paligemma2-10b-mix
All models are in this collection: google/paligemma-2-mix-67ac6a251aaf3ee73679dcc4
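
If you want to try it locally, here is a minimal transformers sketch, assuming one of the mix checkpoints from the collection above (the exact checkpoint id, e.g. google/paligemma2-10b-mix-448, and the prompt are assumptions; check the model cards):

```python
# Minimal inference sketch for a PaliGemma 2 Mix checkpoint (checkpoint id assumed; see the collection above).
import torch
from PIL import Image
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

model_id = "google/paligemma2-10b-mix-448"  # assumption: pick a real id from the collection
model = PaliGemmaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("example.jpg")  # any local image

# Open-ended prompt; the mix models also take task prefixes such as "detect ..." or "segment ..."
prompt = "describe en"
inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)

with torch.inference_mode():
    output = model.generate(**inputs, max_new_tokens=64)

# Decode only the newly generated tokens, skipping the prompt
generated = output[0][inputs["input_ids"].shape[-1]:]
print(processor.decode(generated, skip_special_tokens=True))
```
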
merve
posted an update 14 days ago
Your weekly recap of open AI is here, and it's packed with models! merve/feb-14-releases-67af876b404cc27c6d837767

👀 Multimodal
> OpenGVLab released InternVideo 2.5 Chat models, new video LMs with long context
> AIDC released the Ovis2 model family along with the Ovis dataset, new vision LMs in different sizes (1B, 2B, 4B, 8B, 16B, 34B) with video and OCR support
> ColQwenStella-2b is a multilingual visual retrieval model that is SOTA for its size
> Hoags-2B-Exp is a new multilingual vision LM with contextual reasoning and long-context video understanding

💬 LLMs
A lot of math models!
> Open-R1 team released OpenR1-Math-220k, a large-scale math reasoning dataset, along with OpenR1-Qwen-7B, a Qwen2.5-Math fine-tune trained on it (see the loading sketch after this list)
> Nomic AI released a new Nomic Embed multilingual retrieval model, a MoE with ~500M total parameters and 305M active parameters, outperforming other models
> DeepScaleR-1.5B-Preview is a new DeepSeek-R1-Distill fine-tune using distributed RL on math
> LIMO is a new fine-tune of Qwen2.5-32B-Instruct on Math
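
If you want to peek at the math dataset itself, a quick datasets sketch (the repo id is taken from the post; the "default" config and "train" split are assumptions, check the dataset card):

```python
# Load and inspect OpenR1-Math-220k (config/split names are assumptions; see the dataset card).
from datasets import load_dataset

ds = load_dataset("open-r1/OpenR1-Math-220k", "default", split="train")

print(ds)               # row count and schema
print(ds.column_names)  # inspect the available fields before training on it
```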

🗣️ Audio
> Zonos-v0.1 is a new family of text-to-speech models; the release includes the models themselves along with speaker embeddings

🖼️ Vision and Image Generation
> We have ported Apple's DepthPro to transformers for your convenience! (loading sketch after this list)
> illustrious-xl-v1.0 is a new illustration generation model
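
For the DepthPro port, a minimal depth-estimation sketch (the checkpoint id is an assumption; search the Hub for the official transformers weights):

```python
# Monocular depth estimation with the DepthPro port in transformers.
# "apple/DepthPro-hf" is an assumed checkpoint id; replace it with the official one from the Hub.
from transformers import pipeline

depth = pipeline("depth-estimation", model="apple/DepthPro-hf")
result = depth("example.jpg")           # local path or image URL

result["depth"].save("depth_map.png")   # PIL image of the predicted depth map
print(result["predicted_depth"].shape)  # raw depth tensor
```
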
merve
posted an update 21 days ago
Interesting releases in open AI this week, let's recap 🤠 merve/feb-7-releases-67a5f7d7f172d8bfe0dd66f4

🤖 Robotics
> Pi0, the first open-source foundation vision-language-action model, was released in LeRobot (Apache 2.0)

💬 LLMs
> Groundbreaking: s1 is a simpler approach to test-time scaling; the release comes with the small s1K dataset of 1k question-reasoning-trace pairs (from Gemini Thinking Exp), used to fine-tune Qwen2.5-32B-Instruct into s1-32B, which outperforms o1-preview on math 🤯 s1-32B and s1K are out!
> Adyen released DABstep, a new benchmark along with its leaderboard demo for agents doing data analysis
> Krutrim released Krutrim-2 instruct, a new 12B model based on NeMo12B trained and aligned on Indic languages, a new multilingual sentence embedding model (based on STSB-XLM-R), and a translation model for Indic languages

👀 Multimodal
> PKU released Align-DS-V, a model aligned using their new technique called LLF for all modalities (image-text-audio), along with the dataset Align Anything
> OLA-7B is a new any-to-any model by Tencent that can take text, image, video, and audio with a context window of 32k tokens and output text and speech in English and Chinese
> Krutrim released Chitrarth, a new vision language model for Indic languages and English

🖼️ Vision
> BiRefNet_HR is a new higher resolution BiRefNet for background removal

🗣️ Audio
> kyutai released Hibiki, a real-time speech-to-speech translation model 🤯 available for French-to-English translation
> Krutrim released Dhwani, a new STT model for Indic languages
> They also released a new dataset for STT-TTS

🖼️ Image Generation
> Lumina released Lumina-Image-2.0, a 2B-parameter flow-based DiT for text-to-image generation
> Tencent released Hunyuan3D-2, a 3D asset generation model based on DiT and Hunyuan3D-Paint
> boreal-hl-v1 is a new boring photorealistic image generation LoRA based on Hunyuan
Xenova
posted an update 21 days ago
We did it. Kokoro TTS (v1.0) can now run 100% locally in your browser w/ WebGPU acceleration. Real-time text-to-speech without a server. ⚡️

Generate 10 seconds of speech in ~1 second for $0.

What will you build? 🔥
webml-community/kokoro-webgpu

The most difficult part was getting the model running in the first place, but the next steps are simple:
✂️ Implement sentence splitting, allowing for streamed responses (see the sketch below)
🌍 Multilingual support (only phonemization left)

Who wants to help?
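
The sentence-splitting step is conceptually simple: chunk the text at sentence boundaries and synthesize chunk by chunk, so playback can start before the whole text is done. A minimal sketch of that idea in Python, with a hypothetical synthesize() standing in for the actual Kokoro call and a deliberately naive splitter:

```python
# Sketch of sentence splitting for streamed TTS responses.
# synthesize() is a hypothetical placeholder for the real Kokoro call; the regex splitter
# is intentionally naive and not what the project will actually ship.
import re
from typing import Iterator


def split_sentences(text: str) -> list[str]:
    """Naive split on sentence-ending punctuation followed by whitespace."""
    parts = re.split(r"(?<=[.!?])\s+", text.strip())
    return [p for p in parts if p]


def synthesize(sentence: str) -> bytes:
    """Hypothetical placeholder: return audio bytes for one sentence."""
    return b"\x00" * 4800  # silence; replace with the real TTS engine call


def stream_speech(text: str) -> Iterator[bytes]:
    """Yield audio sentence by sentence so playback can begin before synthesis finishes."""
    for sentence in split_sentences(text):
        yield synthesize(sentence)


if __name__ == "__main__":
    demo = "Hello there. This is a streamed response! Each sentence becomes one audio chunk."
    for i, chunk in enumerate(stream_speech(demo)):
        print(f"chunk {i}: {len(chunk)} bytes")  # feed each chunk to the audio player as it arrives
```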