Samuel Lima Braz

samuellimabraz

AI & ML interests

None yet

Recent Activity

liked a dataset 1 day ago
getomni-ai/ocr-benchmark
liked a model 3 days ago
h2oai/h2ovl-mississippi-2b
liked a dataset 8 days ago
tech4humans/signature-detection

Organizations

Tech4Humans, Hugging Face Discord Community

samuellimabraz's activity

upvoted an article 22 days ago

Vision Language Models Explained • 279
upvoted an article 23 days ago

Open-source DeepResearch – Freeing our search agents • 1.11k
upvoted an article 29 days ago
upvoted an article 30 days ago

KV Caching Explained: Optimizing Transformer Inference Efficiency
By not-lain • 33
upvoted 2 articles about 1 month ago

FineWeb2-C: Help Build Better Language Models in Your Language
By davanstrien and 5 others • 18
New activity in tech4humans/signature-detection about 1 month ago
posted an update about 1 month ago • 422
I wrote an article on Parameter-Efficient Fine-Tuning (PEFT), exploring techniques for efficiently fine-tuning LLMs, their implementations, and their variations.

The study is based on the article "Scaling Down to Scale Up: A Guide to Parameter-Efficient Fine-Tuning" and the PEFT library integrated with Hugging Face's Transformers (a minimal usage sketch follows the method list below).

Article: https://huggingface.co./blog/samuellimabraz/peft-methods
Notebook: https://colab.research.google.com/drive/1B9RsKLMa8SwTxLsxRT8g9OedK10zfBEP?usp=sharing
Collection: samuellimabraz/service-summary-6793ccfe774073328ea9f8df

Analyzed methods:
- Adapters: Soft Prompts (Prompt Tuning, Prefix Tuning, P-tuning), IA³.
- Reparameterization: LoRA, QLoRA, LoHa, LoKr, X-LoRA, Intrinsic SAID, and variations of initializations (PiSSA, OLoRA, rsLoRA, DoRA).
- Selective Tuning: BitFit, DiffPruning, FAR, FishMask.
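
For a quick taste of what the PEFT library looks like in practice, here is a minimal sketch of applying LoRA to a causal language model. The base model, target modules, and hyperparameters are illustrative assumptions, not values taken from the article:

```python
# A minimal sketch of a LoRA setup with the PEFT library.
# The base model, target modules, and hyperparameters below are
# illustrative assumptions, not values from the article.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor applied to the update
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

# Wrap the frozen base model; only the small LoRA matrices are trainable.
peft_model = get_peft_model(base_model, lora_config)
peft_model.print_trainable_parameters()
```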

I'm just starting out in generative AI; I have more experience with computer vision and robotics. Just sharing here 🤗