Arcee AI

Enterprise
company
Verified

AI & ML interests

None defined yet.

Recent Activity

arcee-ai's activity

freddyaboulton
posted an update 3 days ago
Getting WebRTC and WebSockets right in Python is very tricky. If you've tried to wrap an LLM in a real-time audio layer, then you know what I'm talking about.

That's where FastRTC comes in! It makes WebRTC and WebSocket streams super easy, with minimal code and overhead.

Check out our org: hf.co/fastrtc
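
For context, here is a minimal echo sketch in the spirit of FastRTC's documented quickstart; the Stream/ReplyOnPause names follow its docs, but treat the exact signatures as assumptions that may vary between versions:

```python
# Minimal FastRTC-style echo stream (sketch; API details assumed from the docs).
import numpy as np
from fastrtc import ReplyOnPause, Stream

def echo(audio: tuple[int, np.ndarray]):
    # Receive (sample_rate, samples) chunks and stream them straight back.
    yield audio

# WebRTC handling, pause detection, and the demo UI come from the library.
stream = Stream(ReplyOnPause(echo), modality="audio", mode="send-receive")
stream.ui.launch()
```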
bartowski
posted an update about 2 months ago
Switching to author_model-name

I posted a poll on Twitter, and others have also expressed interest in me adopting the convention of including the author name in the model path when I upload.

It has a couple of advantages. First and foremost, of course, it makes clear who uploaded the original model (did Qwen upload Qwen2.6? Or did someone fine-tune Qwen2.5 and name it 2.6 for fun?).

The second is that it avoids collisions: if multiple people upload a model under the same name and I try to quant them both, I would normally collide with myself and be unable to upload both.

I'll be implementing the change next week; there are just two final details I'm unsure about:

First, should the files also inherit the author's name?

Second, what should I do when the author name + model name pushes us past the character limit?

I haven't yet decided how to handle either case, so feedback is welcome, but I'm also just providing this as a "heads up".
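
To make the proposal concrete, here is a purely hypothetical sketch of the new naming scheme and one possible way to handle the length question; the character cap and the truncation strategy are illustrative assumptions, not a decision that has been made:

```python
# Hypothetical illustration of the proposed author_model-name convention.
MAX_REPO_NAME_LEN = 96  # assumed placeholder, not a confirmed Hub limit

def quant_repo_name(author: str, model: str, suffix: str = "GGUF") -> str:
    """Build 'author_model-suffix', trimming the model part if the name gets too long."""
    name = f"{author}_{model}-{suffix}"
    if len(name) > MAX_REPO_NAME_LEN:
        overflow = len(name) - MAX_REPO_NAME_LEN
        name = f"{author}_{model[:-overflow]}-{suffix}"
    return name

print(quant_repo_name("Qwen", "Qwen2.5-7B-Instruct"))
# -> "Qwen_Qwen2.5-7B-Instruct-GGUF" instead of the old "Qwen2.5-7B-Instruct-GGUF"
```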
freddyaboulton
posted an update 2 months ago
freddyaboulton
posted an update 2 months ago
freddyaboulton
posted an update 3 months ago
Version 0.0.21 of gradio-pdf now properly loads Chinese characters!
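
As a quick illustration, here is a minimal way to drop the component into a Gradio app; this is a sketch assuming the gradio_pdf package exposes a PDF component as in its published examples:

```python
# Minimal gradio-pdf demo (sketch); requires gradio and gradio-pdf >= 0.0.21.
import gradio as gr
from gradio_pdf import PDF

with gr.Blocks() as demo:
    # Renders the uploaded PDF in the browser, including Chinese text as of 0.0.21.
    PDF(label="Upload a PDF")

demo.launch()
```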
freddyaboulton
posted an update 3 months ago
Hello Llama 3.2! 🗣️🦙

Build a Siri-like coding assistant that responds to "Hello Llama" in 100 lines of Python! All with Gradio and WebRTC 😎

freddyaboulton/hey-llama-code-editor
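
The wake-word part of such an assistant boils down to checking each transcribed chunk for the trigger phrase; a hypothetical helper (names are illustrative, not taken from the actual Space) might look like this:

```python
# Hypothetical wake-phrase check; the real Space's implementation may differ.
WAKE_PHRASE = "hello llama"

def extract_command(transcript: str) -> str | None:
    """Return the text after the wake phrase, or None if the phrase wasn't heard."""
    text = transcript.lower().strip()
    if text.startswith(WAKE_PHRASE):
        return transcript.strip()[len(WAKE_PHRASE):].strip(" ,.")
    return None

print(extract_command("Hello Llama, write a bubble sort"))  # -> "write a bubble sort"
print(extract_command("unrelated chatter"))                 # -> None
```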
freddyaboulton
posted an update 3 months ago
bartowski
posted an update 3 months ago
Looks like Q4_0_N_M file types are going away

Before you panic: there's a new "preferred" method, which is online repacking (I prefer the term on-the-fly). If you download Q4_0 and your setup can benefit from repacking the weights into interleaved rows (what Q4_0_4_4 was doing), it will do that automatically and give you similar performance (minor losses, I think, due to using intrinsics instead of assembly, but intrinsics are more maintainable).

You can see the reference PR here:

https://github.com/ggerganov/llama.cpp/pull/10446

So if you update your llama.cpp past that point, you won't be able to run Q4_0_4_4 (unless they add backwards compatibility back), but Q4_0 should run at the same speeds (though it may currently be bugged on some platforms).

As such, I'll stop making those newer model formats soon, probably by the end of this week unless something changes, but you should be safe to download the Q4_0 quants and use those!

IQ4_NL also supports repacking, though not in as many shapes yet, but it should get a respectable speed-up on ARM chips; the PR for that can be found here: https://github.com/ggerganov/llama.cpp/pull/10541

Remember, these are not meant for Apple silicon, since those use the GPU and don't benefit from the repacking of weights.
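
In practice this means you just grab the plain Q4_0 file and let a recent llama.cpp build worry about the layout. Here is a sketch using huggingface_hub and llama-cpp-python, with placeholder repo and file names:

```python
# Download a plain Q4_0 GGUF and load it; repo/file names below are placeholders.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="your-org/your-model-GGUF",  # placeholder repo id
    filename="your-model-Q4_0.gguf",     # plain Q4_0, no _4_4/_4_8/_8_8 variant
)

# On builds that include the repacking change, Q4_0 weights are rearranged into
# interleaved rows at load time when the CPU (e.g. ARM) benefits from it.
llm = Llama(model_path=model_path, n_ctx=2048)
out = llm("Q: What does on-the-fly repacking do? A:", max_tokens=64)
print(out["choices"][0]["text"])
```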
bartowski
posted an update 3 months ago
Old Mixtral model quants may be broken!

Recently Slaren over on llama.cpp refactored the model loader - in a way that's super awesome and very powerful - but it broke support for "split tensor MoE models", which applies to older Mixtral models.

You may have seen my upload of one such older Mixtral model, jondurbin/bagel-dpo-8x7b-v0.2, and with the newest changes it seems to be able to run without issue.

If you happen to run into issues with any other old mixtral models, drop a link here and I'll try to remake them with the new changes so that we can continue enjoying them :)
abhishek
posted an update 3 months ago
🎉 SUPER BLACK FRIDAY DEAL 🎉

Train almost any model on a variety of tasks - LLM fine-tuning, text classification/regression, summarization, question answering, image classification/regression, object detection, tabular data, etc. - for FREE using AutoTrain locally. 🔥
https://github.com/huggingface/autotrain-advanced