Mohamed Rashad PRO

MohamedRashad

AI & ML interests

Computer Vision, Robotics, Natural Language Processing

Recent Activity

Organizations

Navid AI

MohamedRashad's activity

reacted to alielfilali01's post with 🤗 9 days ago
Unpopular opinion: Open Source takes courage!

Not everyone is brave enough to release what they have done (the way they've done it) into the wild to be judged!
It really requires a high level of "knowing wth you are doing"! It's kind of a superpower!

Cheers to the heroes here who see this!
reacted to their post with 🔥 11 days ago
posted an update 11 days ago
reacted to their post with 🚀 26 days ago
posted an update 26 days ago
posted an update about 1 month ago
reacted to their post with 👀 3 months ago
posted an update 3 months ago
reacted to reach-vb's post with 🧠 3 months ago
Less than two days ago, Kyutai Labs open sourced Moshi, a ~7.6B on-device speech-to-speech foundation model, and Mimi, a SoTA streaming speech codec! 🔥

The release includes:

1. Moshiko & Moshika - Moshi finetuned on synthetic data (CC-BY license) (kyutai/moshi-v01-release-66eaeaf3302bef6bd9ad7acd)
2. Mimi - Streaming audio codec, processes 24 kHz audio down to a 12.5 Hz representation with a bandwidth of 1.1 kbps (CC-BY license) (kyutai/mimi)
3. Model checkpoints & inference codebase written in Rust (Candle), PyTorch & MLX (Apache license) (https://github.com/kyutai-labs/moshi)

How does Moshi work?

1. Moshi processes two audio streams: one for itself and one for the user, with the user's stream coming from audio input and Moshi's stream generated by the model.

2. Along with these audio streams, Moshi predicts text tokens for its speech, enhancing its generation quality.

3. The model uses a small Depth Transformer for codebook dependencies and a large 7B parameter Temporal Transformer for temporal dependencies (a rough sketch follows this list).

4. The theoretical latency is 160ms, with a practical latency of around 200ms on an L4 GPU.
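
A hypothetical sketch of that temporal/depth split, inferred only from the description above; module names, layer counts, and the frame/codebook handling are all guesses for illustration (and causal masking is omitted), not Kyutai's actual code:

```python
# Hypothetical sketch of the temporal/depth split described above -- module
# names, sizes, and the frame handling are all guesses, not Kyutai's code.
import torch
import torch.nn as nn

class MoshiLikeSketch(nn.Module):
    def __init__(self, d_model=512, n_codebooks=8, codebook_vocab=2048):
        super().__init__()
        self.n_codebooks = n_codebooks
        # Large Temporal Transformer: one position per 12.5 Hz audio frame,
        # attending over the joint history of both audio streams (+ text).
        self.temporal = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True),
            num_layers=6,
        )
        # Small Depth Transformer: one position per codebook within a frame,
        # modeling dependencies between that frame's codebooks.
        self.depth = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.code_head = nn.Linear(d_model, codebook_vocab)

    def forward(self, frames):  # frames: (batch, time, d_model)
        h = self.temporal(frames)  # temporal dependencies across frames
        # One slot per codebook per frame, refined jointly by the depth model.
        per_code = h.unsqueeze(2).expand(-1, -1, self.n_codebooks, -1)
        b, t, k, d = per_code.shape
        refined = self.depth(per_code.reshape(b * t, k, d))
        return self.code_head(refined).view(b, t, k, -1)  # codebook logits

print(MoshiLikeSketch()(torch.randn(1, 10, 512)).shape)  # (1, 10, 8, 2048)
```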

Model size & inference:

Moshiko/ka are 7.69B param models:

* bf16: ~16GB VRAM
* 8-bit: ~8GB VRAM
* 4-bit: ~4GB VRAM

You can run inference via Candle 🦀, PyTorch and MLX - based on your hardware.
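
As a sanity check on those numbers, a quick back-of-the-envelope weight-only memory estimate (real usage is higher once the KV cache and activations are included):

```python
# Back-of-the-envelope weight-only memory; real usage adds KV cache,
# activations, and framework overhead.
params = 7.69e9
for name, bytes_per_param in [("bf16", 2), ("8-bit", 1), ("4-bit", 0.5)]:
    print(f"{name}: ~{params * bytes_per_param / 1e9:.1f} GB")
# bf16: ~15.4 GB, 8-bit: ~7.7 GB, 4-bit: ~3.8 GB -- matching ~16/8/4 above
```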

The Kyutai team (@adefossez, @lmz, and co) are cracked AF; they're bringing some serious firepower to the open source/science AI scene. Looking forward to what's next! 🐐
reacted to their post with ❤️ 3 months ago
For all the Muslims out there who are interested in the Quran and its tafsir (explanations): this humble dataset consists of 84 different books of tafsir covering nearly all the ayat in the Quran:
MohamedRashad/Quran-Tafseer

I hope it helps someone build something nice and useful with it ^_^
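
For example, loading it with 🤗 datasets (the dataset id is from the post; the split name below is an assumption, so check the dataset card for the actual schema):

```python
from datasets import load_dataset

# Dataset id is from the post; the split name is an assumption -- inspect
# ds.column_names / the dataset card for the actual schema.
ds = load_dataset("MohamedRashad/Quran-Tafseer", split="train")
print(ds)     # features and row count
print(ds[0])  # a single ayah/tafsir record
```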
posted an update 3 months ago
reacted to rwightman's post with ❤️ 4 months ago
The timm leaderboard timm/leaderboard has been updated with the ability to select different hardware benchmark sets: RTX4090, RTX3090, and two different CPUs, along with some NCHW/NHWC layout and torch.compile (dynamo) variations.

Also worth pointing out, there are three rather newish 'test' models that you'll see at the top of any samples/sec comparison:
* test_vit ( timm/test_vit.r160_in1k)
* test_efficientnet ( timm/test_efficientnet.r160_in1k)
* test_byobnet ( timm/test_byobnet.r160_in1k, a mix of resnet, darknet, effnet/regnet like blocks)

They are < 0.5M params, insanely fast, and originally intended for unit testing w/ real weights. They have awful ImageNet top-1; it's rare for anyone to bother training a model this small on ImageNet (the classifier is roughly 30-70% of the param count!). However, they are FAST on very limited hardware and you can fine-tune them well on small data. Could be the model you're looking for?
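
For instance, a quick setup with one of them (the model id comes from the post; num_classes=10 and the 160x160 input size, inferred from the "r160" tag, stand in for whatever your small dataset needs):

```python
import timm
import torch

# Load the tiny test model with pretrained weights; num_classes=10 stands in
# for whatever your small fine-tuning dataset needs.
model = timm.create_model("test_vit.r160_in1k", pretrained=True, num_classes=10)
print(sum(p.numel() for p in model.parameters()))  # well under 0.5M params

x = torch.randn(1, 3, 160, 160)  # "r160" models take 160x160 inputs
print(model(x).shape)            # torch.Size([1, 10])
```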
replied to rwightman's post 4 months ago
reacted to rwightman's post with 🔥 4 months ago
The latest timm validation & test set results are now viewable in a leaderboard space: timm/leaderboard

As of yesterday, I updated all of the results for the ImageNet, ImageNet-ReaL, ImageNet-V2, ImageNet-R, ImageNet-A, and Sketch sets. The csv files can be found in the GH repo https://github.com/huggingface/pytorch-image-models/tree/main/results

Unfortunately, the latest benchmark csv files are not yet up to date; gaps between the dataset results and the throughput/flop numbers impact the plots.
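
Those csv files can be pulled straight from the repo, e.g. with pandas (the exact filename and the "top1" column are assumptions; browse the results folder for the full list):

```python
import pandas as pd

# Filename and column name are assumptions -- browse the results/ folder
# in the repo for the full list of csv files.
url = ("https://raw.githubusercontent.com/huggingface/"
       "pytorch-image-models/main/results/results-imagenet.csv")
df = pd.read_csv(url)
print(df.sort_values("top1", ascending=False).head())
```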

h/t to @MohamedRashad for making the first timm leaderboard.
reacted to vilarin's post with 🔥 5 months ago
Black Forest Labs, BASED! 👏
FLUX.1 is delightful, with good instruction following.
FLUX.1 dev (black-forest-labs/FLUX.1-dev) is a 12B-parameter distilled model, second only to Black Forest Labs' state-of-the-art FLUX.1 pro. 🙀

Update 🤙 Official demo:
black-forest-labs/FLUX.1-dev
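
A minimal way to try it locally with 🧨 diffusers (assumes a recent diffusers with FluxPipeline, enough GPU memory for a 12B model, and that you've accepted the license on the gated repo; the prompt and sampler settings are just placeholders):

```python
import torch
from diffusers import FluxPipeline

# FLUX.1-dev is a gated repo -- accept the license on its model page first.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # helps fit the 12B model on smaller GPUs

image = pipe(
    "a photo of a forest at dawn",
    guidance_scale=3.5,
    num_inference_steps=50,
).images[0]
image.save("flux_dev.png")
```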
reacted to their post with ❤️ 6 months ago
posted an update 6 months ago
reacted to their post with 🔥 6 months ago
posted an update 6 months ago
posted an update 6 months ago