
Ali El Filali

alielfilali01

AI & ML interests

AI psychometrician? | NLP (mainly for Arabic) | other interests include reinforcement learning and cognitive science

Recent Activity

updated a dataset about 8 hours ago
OALL/requests
liked a dataset 1 day ago
atlasia/TerjamaBench
updated a dataset 1 day ago
inceptionai/requests-dataset

Organizations

Gradio-Themes-Party, Arabic Machine Learning, BigLAM: BigScience Libraries, Archives and Museums, Stable Diffusion Dreambooth Concepts Library, Blog-explorers, ASAS AI, Nt3awnou, Qwen, Mixed Arabic Datasets, ZeroGPU Explorers, 2A2I Legacy Models & Datasets, AtlasIA, 2A2I, MLX Community, Open Arabic LLM Leaderboard, Social Post Explorers, C4AI Community, Dev Mode Explorers, Chinese LLMs on Hugging Face, ThinkAI, KABOUR, Hugging Face Discord Community, llmc, Arabic Translation Prompt Engineering, Inception, Dataset Tools, ml-fw-prerelease, Data Is Better Together Contributor, Donut Earthers 🍩, QudraTech, 3C3H

Posts 29

The 3C3H AraGen Leaderboard today welcomes deepseek-ai/DeepSeek-V3 and 12 other models (including the late gpt-3.5 💀) to the ranking of the best LLMs in Arabic!


Observations:
- DeepSeek-V3 ranked 3rd and is the only open model among the top 5!

- A 14B open model (Qwen/Qwen2.5-14B-Instruct) outperforms gpt-3.5-turbo-0125 (from last year). This shows how far we have come in advancing and supporting Arabic within the LLM ecosystem!

- Contrary to what is observed on likelihood-accuracy leaderboards (like OALL/Open-Arabic-LLM-Leaderboard), further fine-tuned models like maldv/Qwentile2.5-32B-Instruct actually performed worse than the original model Qwen/Qwen2.5-32B-Instruct.
It's worth noting that the decrease is statistically insignificant, which implies that, at best, out-of-domain fine-tuning does not really hurt the capabilities the model acquired during pretraining.
Previous work has addressed this (fine-tuning vs. pretraining), but more investigation is required (any PhDs here? This could be your research question...)
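To make the "statistically insignificant" point concrete: given per-item scores for two models on the same benchmark items, a paired bootstrap test estimates whether the observed mean difference could plausibly be noise. This is a minimal illustration of the idea, not the leaderboard's actual methodology; all names and parameters here are my own.

```python
import random

def paired_bootstrap_pvalue(scores_a, scores_b, n_resamples=10_000, seed=0):
    """Two-sided p-value for the mean per-item score difference
    between two models evaluated on the same benchmark items."""
    rng = random.Random(seed)
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    n = len(diffs)
    observed = sum(diffs) / n
    extreme = 0
    for _ in range(n_resamples):
        # Resample per-item differences, centered at zero (the null hypothesis
        # that the two models perform equally well on average)
        sample = [diffs[rng.randrange(n)] - observed for _ in range(n)]
        if abs(sum(sample) / n) >= abs(observed):
            extreme += 1
    return extreme / n_resamples
```

A large p-value (say, above 0.05) means the score gap between the fine-tuned and original model is indistinguishable from sampling noise on that benchmark.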


Check out the latest rankings: inceptionai/AraGen-Leaderboard
~75% on the challenging GPQA with only 40M parameters 🔥🥳

GREAT ACHIEVEMENT! Or is it?

This new work, "Data Laundering: Artificially Boosting Benchmark Results through Knowledge Distillation", takes the mystery out of many models whose results I personally suspected, especially on leaderboards other than the English one, like the Open Arabic LLM Leaderboard OALL/Open-Arabic-LLM-Leaderboard.

The authors of this work first trained a model on the GPQA data, which, unsurprisingly, led to the model achieving 100% performance.

Afterward, they trained what they referred to as a 'legitimate' model on legitimate data (MedMCQA). However, they introduced a distillation loss from the earlier, 'cheated' model.

What they discovered was fascinating: the knowledge of GPQA leaked through this distillation loss, even though the legitimate model was never explicitly trained on GPQA during this stage.

This raises important questions about the careful use of distillation in model training, especially when the training data is opaque. As they demonstrated, it’s apparently possible to (intentionally or unintentionally) leak test data through this method.
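The leakage mechanism described above can be sketched as a standard distillation objective: the student's loss mixes cross-entropy on legitimate labels with a KL term toward the teacher's (temperature-softened) distribution, and it is that second term through which the teacher's benchmark knowledge can flow. This is my own pure-Python illustration of the generic technique, not the paper's code; all names and defaults are assumptions.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, label,
                      alpha=0.5, temperature=2.0):
    """Mix hard-label cross-entropy with a soft distillation term."""
    # Hard-label loss on the legitimate training data (e.g. MedMCQA)
    ce = -math.log(softmax(student_logits)[label])
    # Soft-label loss: KL(teacher || student) at temperature T.
    # If the teacher was trained on the benchmark itself, its knowledge
    # of the test set leaks into the student through this term.
    p_t = softmax(teacher_logits, temperature)
    p_s = softmax(student_logits, temperature)
    kd = sum(p * math.log(p / q) for p, q in zip(p_t, p_s)) * temperature ** 2
    return alpha * ce + (1 - alpha) * kd
```

The student never sees the benchmark examples directly; it only matches the teacher's output distribution, which is exactly why the contamination is hard to detect from the training data alone.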

Find out more: Data Laundering: Artificially Boosting Benchmark Results through Knowledge Distillation (2412.15255)