
Mert Erbak PRO

merterbak

AI & ML interests

Currently NLP and Image Processing

Organizations

Open-Source AI Meetup, MLX Community, Social Post Explorers, Hugging Face Discord Community, open/ acc, AI Starter Pack

merterbak's activity

reacted to DualityAI-RebekahBogdanoff's post with πŸ”₯ about 22 hours ago
βœ¨πŸŽ‰Duality.ai just released a multiclass object detection dataset for YOLOv8, as well as a tutorial on how to create your own multiclass dataset!

Carefully crafted (not GenAI created) synthetic data that ACTUALLY trains a model that works in the physical world.

Create a free FalconEDU account, and download the 1,000-image annotated dataset - https://falcon.duality.ai/secure/documentation/ex3-dataset?sidebarMode=learn
-or-
Follow along with Exercise 3: Multiclass Object Detection to start creating - https://falcon.duality.ai/secure/documentation/ex3-objdetection-multiclass
-or-
Download this Colab notebook to see the data work, no hardware required - https://falcon.duality.ai/secure/documentation/ex3-dataset?sidebarMode=learn
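
If you want to kick the tires on the dataset locally, here's a minimal training sketch using the Ultralytics YOLOv8 API (the data.yaml path and hyperparameters below are assumptions, not part of the release):

```python
# Minimal sketch, assuming the downloaded dataset ships a YOLO-format data.yaml
# that lists the train/val image folders and the class names.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # small pretrained checkpoint; swap for yolov8s/m if you have the GPU
model.train(
    data="falcon_multiclass/data.yaml",  # hypothetical path to the dataset's config file
    epochs=50,
    imgsz=640,
)
metrics = model.val()  # mAP on the validation split
```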

reacted to burtenshaw's post with πŸ”₯ 2 days ago
Now the Hugging Face agents course is getting real, with frameworks like smolagents, LlamaIndex, and LangChain.

πŸ”— Follow the org for updates https://huggingface.co./agents-course

This week we are releasing the first framework unit in the course and it’s on smolagents. This is what the unit covers:

- why should you use smolagents vs another library?
- how to build agents that use code
- build multi-agent systems
- use vision language models for browser use

The team has been working flat out on this for a few weeks, led by @sergiopaniego and supported by smolagents author @m-ric.
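
To give a flavour of what the unit covers, here's a minimal code-agent sketch using smolagents (model and query are illustrative; check the course for the exact API it teaches):

```python
# Minimal sketch of a smolagents code agent: the LLM writes and executes Python
# snippets that call its tools, instead of emitting JSON tool calls.
from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel

agent = CodeAgent(
    tools=[DuckDuckGoSearchTool()],  # a web-search tool the agent can call from its code
    model=HfApiModel(),              # defaults to a hosted model via the HF Inference API
)

agent.run("How many seconds would it take a leopard at full speed to run the length of Pont des Arts?")
```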
reacted to prithivMLmods's post with πŸš€ 6 days ago
It's really interesting to see a new state of matter deployed in Majorana 1, the world’s first quantum processor powered by topological qubits. If you missed this news this week, here are some links for you:

πŸ…±οΈTopological qubit arrays: https://arxiv.org/pdf/2502.12252

βš›οΈ Quantum Blog: https://azure.microsoft.com/en-us/blog/quantum/2025/02/19/microsoft-unveils-majorana-1-the-worlds-first-quantum-processor-powered-by-topological-qubits/

πŸ“– Read the story: https://news.microsoft.com/source/features/innovation/microsofts-majorana-1-chip-carves-new-path-for-quantum-computing/

πŸ“ Majorana 1 Intro: https://youtu.be/Q4xCR20Dh1E?si=Z51DbEYnZFp_88Xp

πŸŒ€The Path to a Million Qubits: https://youtu.be/wSHmygPQukQ?si=TS80EhI62oWiMSHK
reacted to mmhamdy's post with πŸ”₯ 7 days ago
πŸŽ‰ We're excited to introduce MemoryCode, a novel synthetic dataset designed to rigorously evaluate LLMs' ability to track and execute coding instructions across multiple sessions. MemoryCode simulates realistic workplace scenarios where a mentee (the LLM) receives coding instructions from a mentor amidst a stream of both relevant and irrelevant information.

πŸ’‘ But what makes MemoryCode unique?! The combination of the following:

βœ… Multi-Session Dialogue Histories: MemoryCode consists of chronological sequences of dialogues between a mentor and a mentee, mirroring real-world interactions between coworkers.

βœ… Interspersed Irrelevant Information: Critical instructions are deliberately interspersed with unrelated content, replicating the information overload common in office environments.

βœ… Instruction Updates: Coding rules and conventions can be updated multiple times throughout the dialogue history, requiring LLMs to track and apply the most recent information.

βœ… Prospective Memory: Unlike previous datasets that cue information retrieval, MemoryCode requires LLMs to spontaneously recall and apply relevant instructions without explicit prompts.

βœ… Practical Task Execution: LLMs are evaluated on their ability to use the retrieved information to perform practical coding tasks, bridging the gap between information recall and real-world application.

πŸ“Œ Our Findings

1️⃣ While even small models can handle isolated coding instructions, the performance of top-tier models like GPT-4o dramatically deteriorates when instructions are spread across multiple sessions.

2️⃣ This performance drop isn't simply due to the length of the context. Our analysis indicates that LLMs struggle to reason compositionally over sequences of instructions and updates. They have difficulty keeping track of which instructions are current and how to apply them.

πŸ”— Paper: From Tools to Teammates: Evaluating LLMs in Multi-Session Coding Interactions (2502.13791)
πŸ“¦ Code: https://github.com/for-ai/MemoryCode
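
To make the setup concrete, here's an illustrative sketch of the evaluation idea (this is not the dataset's actual schema, just a toy reconstruction):

```python
# Toy MemoryCode-style check: instructions arrive across sessions (with an update),
# and the model must apply the *latest* one when asked to write code, unprompted.
sessions = [
    "Mentor: Welcome aboard! By the way, the cafeteria opens at 9.",        # irrelevant filler
    "Mentor: From now on, prefix every function name with 'x_'.",           # instruction
    "Mentor: The quarterly review moved to Thursday.",                      # irrelevant filler
    "Mentor: Update: prefix function names with 'fn_' instead of 'x_'.",    # instruction update
]
task = "Mentee task: write a function that reverses a string."
prompt = "\n".join(sessions) + "\n" + task

# response = llm(prompt)  # call the model under evaluation here
response = "def fn_reverse(s):\n    return s[::-1]"  # stand-in model output

def follows_latest_instruction(code: str) -> bool:
    # Pass only if the most recent convention (fn_ prefix) is applied.
    return "def fn_" in code and "def x_" not in code

print(follows_latest_instruction(response))  # True
```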
reacted to lysandre's post with ❀️ 7 days ago
SmolVLM-2 and SigLIP-2 are now part of transformers in dedicated releases!

They're added on top of the v4.49.0 release, and can be installed from the following tags: v4.49.0-SmolVLM-2 and v4.49.0-SigLIP-2.

This marks a new beginning for the release process of transformers. For the past five years, we've been doing monthly releases featuring many models (v4.49.0, the latest release, features 9 new architectures).

Starting with SmolVLM-2 & SigLIP-2, we'll now additionally release tags supporting new models on a stable branch. These models are therefore directly available for use by installing from the tag itself. These tags will continue to be updated with fixes applied to these models.

Going forward, continue expecting software releases following semantic versioning: v4.50.0 will have ~10 new architectures compared to v4.49.0, as well as a myriad of new features, improvements and bug fixes. Accompanying these software releases, we'll release tags offering brand new models as fast as possible, to make them accessible to all immediately.
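
As a quick sketch of the new flow (the install command format and the checkpoint name below are assumptions; the tag names come from the release):

```python
# Install transformers from the model-specific tag, then load the model as usual.
# Assumed install command (run in a shell):
#   pip install "git+https://github.com/huggingface/transformers@v4.49.0-SmolVLM-2"
from transformers import AutoProcessor, AutoModelForImageTextToText

model_id = "HuggingFaceTB/SmolVLM2-2.2B-Instruct"  # illustrative SmolVLM-2 checkpoint
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(model_id)
```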
reacted to onekq's post with πŸ‘€ 8 days ago
Still waiting for πŸ‘½GrokπŸ‘½ 3 API βŒ›πŸ˜žπŸ˜«
reacted to their post with πŸš€ 8 days ago
replied to their post 9 days ago
reacted to merve's post with πŸš€ 9 days ago
Google just released PaliGemma 2 Mix: new versatile instruction vision language models πŸ”₯

> Three new models: 3B, 10B, 28B with res 224, 448 πŸ’™
> Can do vision language tasks with open-ended prompts, understand documents, and segment or detect anything 🀯

Read more https://huggingface.co./blog/paligemma2mix
Try the demo google/paligemma2-10b-mix
All models are here google/paligemma-2-mix-67ac6a251aaf3ee73679dcc4
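
A minimal inference sketch with transformers (checkpoint id, image, and prompt are illustrative, not from the post):

```python
# Minimal sketch: open-ended prompting with a PaliGemma 2 mix checkpoint.
import torch
from PIL import Image
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

model_id = "google/paligemma2-3b-mix-224"  # assumed checkpoint id; pick any size/resolution
processor = AutoProcessor.from_pretrained(model_id)
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id)

image = Image.open("example.jpg")  # any local image
inputs = processor(text="caption en", images=image, return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```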
reacted to burtenshaw's post with πŸš€ 9 days ago
AGENTS + FINETUNING! This week Hugging Face Learn has a whole pathway on fine-tuning for agentic applications. You can follow these two courses to level up your agent game beyond prompts:

1️⃣ New Supervised Fine-tuning unit in the NLP Course https://huggingface.co./learn/nlp-course/en/chapter11/1
2️⃣ New Finetuning for agents bonus module in the Agents Course https://huggingface.co./learn/agents-course/bonus-unit1/introduction

Fine-tuning will squeeze more out of your model for your specific use case than any prompt can.
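
For reference, a supervised fine-tuning run with TRL's SFTTrainer can be as short as this sketch (dataset and model names are illustrative, not the course's exact choices):

```python
# Minimal supervised fine-tuning sketch with TRL.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # any chat-formatted dataset works

trainer = SFTTrainer(
    model="HuggingFaceTB/SmolLM2-135M",  # small model so the sketch runs on modest hardware
    train_dataset=dataset,
    args=SFTConfig(output_dir="smollm2-sft", max_steps=100),
)
trainer.train()
```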
posted an update 9 days ago
πŸ”₯ Meet Muse: a model that can generate game environments from visuals or players’ controller actions. It was developed by Microsoft Research in collaboration with Ninja Theory (the Hellblade developer) and is built on the World and Human Action Model (WHAM), a 1.6B-parameter model. They trained it on 7 years of Bleeding Edge gameplay, and it can generate 2-minute-long 3D game sequences with consistent physics and character behaviors from just a second of input. They’ve open-sourced it too: the weights, the WHAM Demonstrator, and sample data are on Azure AI Foundry for anyone to play with. Hopefully it lands on Hugging Face soon πŸ€—.

πŸ“„ Paper: https://www.nature.com/articles/s41586-025-08600-3
Blog Post: https://www.microsoft.com/en-us/research/blog/introducing-muse-our-first-generative-ai-model-designed-for-gameplay-ideation/

reacted to fdaudens's post with ❀️ 9 days ago
reacted to clem's post with ❀️ 11 days ago
We've crossed 1B+ tokens routed to our inference provider partners on HF, a feature we released just a few days ago.

We're just getting started, of course, but early users seem to like it, and we're always happy to partner with cool startups in the ecosystem.

Have you been using any integration and how can we make it better?

https://huggingface.co./blog/inference-providers
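
For context, routing a request through a provider looks roughly like this with huggingface_hub (provider and model below are illustrative choices):

```python
# Minimal sketch: send a chat completion through an inference provider partner.
from huggingface_hub import InferenceClient

client = InferenceClient(provider="sambanova")  # assumed provider name; several partners are supported
completion = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",  # illustrative model served by the provider
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=64,
)
print(completion.choices[0].message.content)
```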
reacted to jasoncorkill's post with ❀️ 16 days ago
Runway Gen-3 Alpha: The Style and Coherence Champion

Runway's latest video generation model, Gen-3 Alpha, is something special. It ranks #3 overall on our text-to-video human preference benchmark, but in terms of style and coherence, it outperforms even OpenAI Sora.

However, it struggles with alignment, making it less predictable for controlled outputs.

We've released a new dataset with human evaluations of Runway Gen-3 Alpha: Rapidata's text-2-video human preferences dataset. If you're working on video generation and want to see how your model compares to the biggest players, we can benchmark it for you.

πŸš€ DM us if you’re interested!

Dataset: Rapidata/text-2-video-human-preferences-runway-alpha
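
Loading the annotations for your own analysis should be roughly this simple (split name is an assumption):

```python
# Minimal sketch: pull the human-preference records for Runway Gen-3 Alpha.
from datasets import load_dataset

ds = load_dataset("Rapidata/text-2-video-human-preferences-runway-alpha", split="train")  # split assumed
print(ds)     # inspect the available columns
print(ds[0])  # one preference record
```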
reacted to ginipick's post with πŸ”₯ 20 days ago
🌟 3D Llama Studio - AI 3D Generation Platform

πŸ“ Project Overview
3D Llama Studio is an all-in-one AI platform that generates high-quality 3D models and stylized images from text or image inputs.

✨ Key Features

Text/Image to 3D Conversion 🎯

Generate 3D models from detailed text descriptions or reference images
Intuitive user interface

Text to Styled Image Generation 🎨

Customizable image generation settings
Adjustable resolution, generation steps, and guidance scale
Supports both English and Korean prompts

πŸ› οΈ Technical Features

Gradio-based web interface
Dark theme UI/UX
Real-time image generation and 3D modeling

πŸ’« Highlights

User-friendly interface
Real-time preview
Random seed generation
High-resolution output support (up to 2048x2048)

🎯 Applications

Product design
Game asset creation
Architectural visualization
Educational 3D content

πŸ”— Try It Now!
Experience 3D Llama Studio:

ginigen/3D-LLAMA

#AI #3DGeneration #MachineLearning #ComputerVision #DeepLearning
reacted to KnutJaegersberg's post with πŸ‘€ 20 days ago
A Brief Survey of Associations Between Meta-Learning and General AI

The paper titled "A Brief Survey of Associations Between Meta-Learning and General AI" explores how meta-learning techniques can contribute to the development of Artificial General Intelligence (AGI). Here are the key points summarized:

1. General AI (AGI) and Meta-Learning:
- AGI aims to develop algorithms that can handle a wide variety of tasks, similar to human intelligence. Current AI systems excel at specific tasks but struggle with generalization to unseen tasks.
- Meta-learning or "learning to learn" improves model adaptation and generalization, allowing AI systems to tackle new tasks efficiently using prior experiences.

2. Neural Network Design in Meta-Learning:
- Techniques like Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) networks enable self-improvement and adaptability for deep models, supporting generalization across tasks.
- Highway networks and ResNet-style models use shortcuts for efficient backpropagation, allowing deeper models that can be used in meta-learning frameworks.

3. Coevolution:
- Coevolution involves the mutual evolution of multiple components, such as learners or task-solvers, to improve overall performance.
- Coevolution between learners enhances collaboration and competition within AI systems, while coevolution between tasks and solvers (e.g., POWERPLAY and AI-GA frameworks) pushes solvers to adapt to increasingly complex tasks.

4. Curiosity in Meta-Learning:
- Curiosity-based exploration encourages AI systems to discover new, diverse features of the environment, avoiding local optima.
- Curiosity-based objectives can be combined with performance-based objectives to ensure efficient exploration and adaptation in complex tasks.

5. Forgetting Mechanisms:
- Forgetting is crucial to avoid memory overload in AI systems.

https://arxiv.org/abs/2101.04283
reacted to singhsidhukuldeep's post with πŸš€ 28 days ago
Exciting breakthrough in AI: AirRAG - A Novel Approach to Retrieval Augmented Generation!

Researchers from Alibaba Cloud have developed a groundbreaking framework that significantly improves how AI systems reason and retrieve information. AirRAG introduces five fundamental reasoning actions that work together to create more accurate and comprehensive responses.

>> Key Technical Innovations:
- Implements Monte Carlo Tree Search (MCTS) for exploring diverse reasoning paths
- Utilizes five core actions: System Analysis, Direct Answer, Retrieval-Answer, Query Transformation, and Summary-Answer
- Features self-consistency verification and process-supervised reward modeling
- Achieves superior performance across complex QA datasets like HotpotQA, MuSiQue, and 2WikiMultiHopQA

>> Under the Hood:
The system expands solution spaces through tree-based search, allowing for multiple reasoning paths to be explored simultaneously. The framework implements computationally optimal strategies, applying more resources to key actions while maintaining efficiency.

>> Results Speak Volumes:
- Outperforms existing RAG methods by over 10% on average
- Shows remarkable scalability with increasing inference computation
- Demonstrates exceptional flexibility in integrating with other advanced technologies

This research represents a significant step forward in making AI systems more capable of complex reasoning tasks. The team's innovative approach combines human-like reasoning with advanced computational techniques, setting new benchmarks in the field.
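
To ground the idea, here is an illustrative (and heavily simplified) sketch of MCTS over AirRAG-style reasoning actions; it is not the authors' implementation, and the LLM/retriever calls and reward are stubbed out:

```python
# Toy MCTS over AirRAG's five reasoning actions: expand diverse reasoning paths,
# score them, and back up the scores to pick the most-visited first action.
import math, random

ACTIONS = ["system_analysis", "direct_answer", "retrieval_answer",
           "query_transformation", "summary_answer"]

class Node:
    def __init__(self, state, parent=None, action=None):
        self.state, self.parent, self.action = state, parent, action
        self.children, self.visits, self.value = [], 0, 0.0

def uct(child, parent_visits, c=1.4):
    if child.visits == 0:
        return float("inf")
    return child.value / child.visits + c * math.sqrt(math.log(parent_visits) / child.visits)

def apply_action(state, action):
    # Stub transition: a real system would call an LLM and/or retriever here.
    return state + [action]

def reward(state):
    # Stub reward: AirRAG uses self-consistency verification and a
    # process-supervised reward model instead of random scores.
    return random.random()

def mcts(question, iterations=50):
    root = Node(state=[question])
    for _ in range(iterations):
        node = root
        # Selection: descend while the current node is fully expanded.
        while node.children and len(node.children) == len(ACTIONS):
            node = max(node.children, key=lambda ch: uct(ch, node.visits))
        # Expansion: try an action not yet taken from this node.
        tried = {ch.action for ch in node.children}
        untried = [a for a in ACTIONS if a not in tried]
        if untried:
            action = random.choice(untried)
            node.children.append(Node(apply_action(node.state, action), node, action))
            node = node.children[-1]
        # Simulation + backpropagation.
        value = reward(node.state)
        while node:
            node.visits += 1
            node.value += value
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).action

print(mcts("Who directed the film that won Best Picture in 1998?"))
```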
reacted to AdinaY's post with πŸ”₯ about 1 month ago
reacted to StephenGenusa's post with πŸ‘€ about 2 months ago
I have a Pro account and I am logged in. I duplicated a Space due to the error "You have exceeded your GPU quota". I am showing 0 GPU use, yet I am unable to use it: "You have exceeded your GPU quota (60s requested vs. 44s left). Create a free account to get more daily usage quota." "Expert Support" is a pitch for consulting.
reacted to openfree's post with πŸ”₯ about 2 months ago
# 🧬 Protein Genesis AI: Design Proteins with Just a Prompt

## πŸ€” Current Challenges in Protein Design

Traditional protein design faces critical barriers:
- πŸ’° High costs ($1M - $10M+) & long development cycles (2-3 years)
- πŸ”¬ Complex equipment and expert knowledge required
- πŸ“‰ Low success rates (<10%)
- ⏰ Time-consuming experimental validation

## ✨ Our Solution: Protein Genesis AI

Transform protein design through simple natural language input:
"Design a protein that targets cancer cells"
"Create an enzyme that breaks down plastic"


### Key Features
- πŸ€– AI-powered automated design
- πŸ“Š Real-time analysis & optimization
- πŸ”¬ Instant 3D visualization
- πŸ’Ύ Immediate PDB file generation

## 🎯 Applications

### Medical & Industrial
- πŸ₯ Drug development
- πŸ’‰ Antibody design
- 🏭 Industrial enzymes
- ♻️ Environmental solutions

### Research & Education
- πŸ”¬ Basic research
- πŸ“š Educational tools
- 🧫 Experimental design
- πŸ“ˆ Data analysis

## πŸ’« Key Advantages

- πŸ‘¨β€πŸ’» No coding or technical expertise needed
- ⚑ Results in minutes (vs. years)
- πŸ’° 90% cost reduction
- 🌐 Accessible anywhere

## πŸŽ“ Who Needs This?
- 🏒 Biotech companies
- πŸ₯ Pharmaceutical research
- πŸŽ“ Academic institutions
- πŸ§ͺ Research laboratories

## 🌟 Why It Matters
Protein Genesis AI democratizes protein design by transforming complex processes into simple text prompts. This breakthrough accelerates scientific discovery, potentially leading to faster drug development and innovative biotechnology solutions. The future of protein design starts with a simple prompt! πŸš€

openfree/ProteinGenesis