
Ed Addario

eaddario

AI & ML interests

None yet

Recent Activity

updated a model 1 day ago
eaddario/Llama-Guard-3-8B-GGUF
reacted to albertvillanova's post with 🔥 3 days ago
🚀 Big news for AI agents! With the latest release of smolagents, you can now securely execute Python code in sandboxed Docker or E2B environments. 🦾🔒

Here's why this is a game-changer for agent-based systems: 🧵👇

1️⃣ Security First 🔐 Running AI agents in unrestricted Python environments is risky! With sandboxing, your agents are isolated, preventing unintended file access, network abuse, or system modifications.

2️⃣ Deterministic & Reproducible Runs 📦 By running agents in containerized environments, you ensure that every execution happens in a controlled and predictable setting—no more environment mismatches or dependency issues!

3️⃣ Resource Control & Limits 🚦 Docker and E2B allow you to enforce CPU, memory, and execution time limits, so rogue or inefficient agents don't spiral out of control.

4️⃣ Safer Code Execution in Production 🏭 Deploy AI agents confidently, knowing that any generated code runs in an ephemeral, isolated environment, protecting your host machine and infrastructure.

5️⃣ Easy to Integrate 🛠️ With smolagents, you can simply configure your agent to use Docker or E2B as its execution backend—no need for complex security setups!

6️⃣ Perfect for Autonomous AI Agents 🤖 If your AI agents generate and execute code dynamically, this is a must-have to avoid security pitfalls while enabling advanced automation.

⚡ Get started now: https://github.com/huggingface/smolagents

What will you build with smolagents? Let us know! 🚀💡
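The isolation-plus-limits idea behind points 1️⃣ and 3️⃣ can be sketched with nothing but the Python standard library. This is a toy stand-in, not smolagents' actual Docker/E2B backend: a separate interpreter process gives crude isolation and a wall-clock limit, but none of the filesystem or network restrictions a real sandbox provides. The `run_untrusted` helper is hypothetical, introduced here for illustration only.

```python
import subprocess
import sys

def run_untrusted(code: str, timeout_s: float = 5.0) -> str:
    """Run a snippet in a separate Python process with a wall-clock limit.

    Toy illustration only: a subprocess gives process-level isolation and
    a timeout, but not the filesystem/network sandboxing of Docker or E2B.
    """
    result = subprocess.run(
        [sys.executable, "-I", "-c", code],  # -I: isolated mode, no user site dirs
        capture_output=True,
        text=True,
        timeout=timeout_s,  # raises subprocess.TimeoutExpired if exceeded
    )
    if result.returncode != 0:
        raise RuntimeError(result.stderr.strip())
    return result.stdout.strip()

print(run_untrusted("print(sum(range(10)))"))  # well-behaved snippet completes
```

A runaway snippet such as `while True: pass` is killed once `timeout_s` elapses, which is the same behavior a container-level execution limit enforces, just at the process level.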

Organizations

None yet

Posts 3

Squeezing out tensor bits, part III and final (for now 😉)

(For context please see: https://huggingface.co./posts/eaddario/832567461491467)

I have just finished uploading eaddario/Hammer2.1-7b-GGUF and eaddario/Dolphin3.0-Mistral-24B-GGUF.

While I was able to get a reduction of just over 7% with Hammer2.1-7b, the larger Dolphin3.0-Mistral-24B proved a tougher nut to crack (only 3%).
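For reference, the percentages above are just relative file-size savings. A trivial sketch of that arithmetic, using hypothetical byte counts rather than the actual GGUF file sizes:

```python
def size_reduction_pct(baseline_bytes: int, optimized_bytes: int) -> float:
    """Percent saved relative to the baseline model file."""
    return 100.0 * (baseline_bytes - optimized_bytes) / baseline_bytes

# Hypothetical sizes, purely for illustration (not the real model files):
baseline = 8_000_000_000   # 8 GB baseline quant
optimized = 7_400_000_000  # 7.4 GB after squeezing tensor bits
print(f"{size_reduction_pct(baseline, optimized):.1f}%")  # 7.5%
```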

I have an idea as to why this was the case, which I'll test with QwQ-32B, but it will be a while before I can find the time.