
Ed Addario PRO

eaddario

AI & ML interests

None yet

Recent Activity

updated a model 1 day ago
eaddario/Llama-Guard-3-8B-GGUF
reacted to albertvillanova's post with 🔥 3 days ago

Organizations

None yet

eaddario's activity

reacted to clem's post with ❤️ about 15 hours ago
reacted to albertvillanova's post with 🔥 3 days ago
🚀 Big news for AI agents! With the latest release of smolagents, you can now securely execute Python code in sandboxed Docker or E2B environments. 🦾🔒

Here's why this is a game-changer for agent-based systems: 🧵👇

1️⃣ Security First 🔐
Running AI agents in unrestricted Python environments is risky! With sandboxing, your agents are isolated, preventing unintended file access, network abuse, or system modifications.

2️⃣ Deterministic & Reproducible Runs 📦
By running agents in containerized environments, you ensure that every execution happens in a controlled and predictable setting—no more environment mismatches or dependency issues!

3️⃣ Resource Control & Limits 🚦
Docker and E2B allow you to enforce CPU, memory, and execution time limits, so rogue or inefficient agents don’t spiral out of control.

4️⃣ Safer Code Execution in Production 🏭
Deploy AI agents confidently, knowing that any generated code runs in an ephemeral, isolated environment, protecting your host machine and infrastructure.

5️⃣ Easy to Integrate 🛠️
With smolagents, you can simply configure your agent to use Docker or E2B as its execution backend—no need for complex security setups!

6️⃣ Perfect for Autonomous AI Agents 🤖
If your AI agents generate and execute code dynamically, this is a must-have to avoid security pitfalls while enabling advanced automation.

⚡ Get started now: https://github.com/huggingface/smolagents

What will you build with smolagents? Let us know! 🚀💡
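A minimal sketch of what switching to a sandboxed backend can look like, assuming the `executor_type` argument exposed by recent smolagents releases (argument names may differ between versions, so treat this as an illustration rather than the definitive API):

```python
# Sketch: run a CodeAgent inside a sandboxed executor instead of the
# local Python interpreter. Assumes a recent smolagents release that
# exposes `executor_type`; check the project README for your version.
from smolagents import CodeAgent, HfApiModel

model = HfApiModel()  # defaults to a hosted instruct model on the Hub

# "docker" runs generated code in a local container; "e2b" uses the
# hosted E2B sandbox (requires an E2B API key in your environment).
agent = CodeAgent(
    tools=[],
    model=model,
    executor_type="docker",
)

agent.run("Compute the 20th Fibonacci number and print it.")
```

Because every run starts from a fresh container, the reproducibility and resource-limit benefits described above come from the Docker/E2B sandbox itself rather than from anything the agent code has to do.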
reacted to clem's post with 🔥 3 days ago
Super happy to welcome Nvidia as our latest enterprise hub customer. They have almost 2,000 team members using Hugging Face, and close to 20,000 followers of their org. Can't wait to see what they'll open-source for all of us in the coming months!

Nvidia's org: https://huggingface.co./nvidia
Enterprise hub: https://huggingface.co./enterprise
posted an update 3 days ago
Squeezing out tensor bits, part III and final (for now 😉)

(For context please see: https://huggingface.co./posts/eaddario/832567461491467)

I have just finished uploading eaddario/Hammer2.1-7b-GGUF and eaddario/Dolphin3.0-Mistral-24B-GGUF.

While I was able to get a reduction of just over 7% with Hammer2.1-7b, the larger Dolphin3.0-Mistral-24B proved a much tougher nut to crack (only 3%).

I have an idea as to why this was the case, which I'll test with QwQ-32B, but it will be a while before I can find the time.
replied to their post 3 days ago

Thank you @UICO, but at the moment it's less a technique and more a mix of brute force, educated guesses, trial and error, and the occasional bit of luck. I will tackle QwQ-32B next, as it will help me validate an idea (see my next post).

replied to their post 3 days ago

The process is a bit all over the place at the moment. Some steps are automated, others are manual, and a few are trial and error. As I streamline it, I will publish my findings in a How-To guide, along with the tools I'm using.

reacted to singhsidhukuldeep's post with 👍 6 days ago
Exciting New Tool for Knowledge Graph Extraction from Plain Text!

I just came across a groundbreaking new tool called KGGen that's solving a major challenge in the AI world - the scarcity of high-quality knowledge graph data.

KGGen is an open-source Python package that leverages language models to extract knowledge graphs (KGs) from plain text. What makes it special is its innovative approach to clustering related entities, which significantly reduces sparsity in the extracted KGs.

The technical approach is fascinating:

1. KGGen uses a multi-stage process involving an LLM (GPT-4o in their implementation) to extract entities and relations from source text
2. It aggregates graphs across sources to reduce redundancy
3. Most importantly, it applies iterative LM-based clustering to refine the raw graph

The clustering stage is particularly innovative - it identifies which nodes and edges refer to the same underlying entities or concepts. This normalizes variations in tense, plurality, stemming, and capitalization (e.g., "labors" clustered with "labor").

The researchers, from Stanford and the University of Toronto, also introduced MINE (Measure of Information in Nodes and Edges), the first benchmark for evaluating KG extractors. When tested against existing methods like OpenIE and GraphRAG, KGGen outperformed them by up to 18%.

For anyone working with knowledge graphs, RAG systems, or KG embeddings, this tool addresses the fundamental challenge of data scarcity that's been holding back progress in graph-based foundation models.

The package is available via pip install kg-gen, making it accessible to everyone. This could be a game-changer for knowledge graph applications!
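To make the clustering idea concrete, here is a deliberately crude, rule-based sketch of the kind of normalization described above. KGGen does this step with a language model in the loop, so the function below is only a string-level approximation (lowercasing plus naive plural stripping) and is not part of the kg-gen API:

```python
# Illustrative only: a crude, rule-based stand-in for KGGen's LM-based
# entity clustering, which merges nodes that refer to the same concept.
from collections import defaultdict

def normalize(entity: str) -> str:
    """Map surface variants (case, simple plurals) to one canonical key."""
    key = entity.strip().lower()
    if key.endswith("s") and len(key) > 3:  # naive plural handling
        key = key[:-1]
    return key

def cluster_entities(entities: list[str]) -> dict[str, list[str]]:
    """Group entity strings that normalize to the same key."""
    clusters = defaultdict(list)
    for entity in entities:
        clusters[normalize(entity)].append(entity)
    return dict(clusters)

print(cluster_entities(["Labor", "labors", "labor", "Capital"]))
# {'labor': ['Labor', 'labors', 'labor'], 'capital': ['Capital']}
```

In KGGen the equivalent decision (do these nodes or edges denote the same underlying thing?) is made by prompting a language model, which handles far more than these string-level cases.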
posted an update 6 days ago
replied to their post 6 days ago

In this case, the Q2_K refers to the quantization of the embedding layer applied to each version of the model, rather than the overall quantization used. For example, the DeepSeek-R1-Distill-Qwen-7B-Q4_K_M model would have its embedding layer quantized at Q2_K instead of the usual Q4_K.

Once all the quantized versions are generated, I then produce Perplexity, KL Divergence, ARC, HellaSwag, MMLU, TruthfulQA and WinoGrande scores for each version using the test datasets documented in the model card.
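For anyone curious how such a per-tensor override might be produced, here is a sketch using llama.cpp's llama-quantize, which accepts per-tensor type flags such as --token-embedding-type. This is only my reading of the description above, not necessarily the exact recipe used for these models, and the flag spellings and paths should be checked against your llama.cpp build:

```python
# Sketch (assumptions flagged below): quantize to Q4_K_M overall while
# forcing the token-embedding tensor down to Q2_K, as described above.
# Paths and flag spellings are assumptions; run `llama-quantize --help`
# on your llama.cpp build to confirm them.
import subprocess

subprocess.run(
    [
        "./llama-quantize",
        "--token-embedding-type", "q2_K",  # embedding layer at Q2_K
        "model-F16.gguf",                  # full-precision input (hypothetical path)
        "model-Q4_K_M.gguf",               # quantized output (hypothetical path)
        "Q4_K_M",                          # overall quantization type
    ],
    check=True,
)
```

Perplexity and KL-divergence scores against the full-precision model can then be gathered with llama.cpp's llama-perplexity tool (it has dedicated KL-divergence options), alongside the benchmark suites listed above.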

New activity in eaddario/imatrix-calibration 8 days ago