Bruno Henrique PRO

Bruno

AI & ML interests

None yet

Recent Activity

updated a collection 11 days ago
QA_PTBr
liked a Space 11 days ago
Qwen/Qwen2.5-Turbo-1M-Demo
liked a Space 11 days ago
Qwen/QwQ-32B-preview

Organizations

Spaces-explorers, 🤗 Course Team AI Law Assistant, Training Transformers Together, CVPR Demo Track, Gradio-Themes-Party, Gradio-Blocks-Party, Webhooks Explorers (BETA), EuroPython 2022, ICML 2022, Musika

Bruno's activity

reacted to fdaudens's post with ❤️❤️ about 1 month ago
My new favorite bookmark: AnyChat. The ultimate AI Swiss Army knife that lets you switch between ChatGPT, Gemini, Claude, LLaMA, Grok & more—all in one place!

Really cool work by @akhaliq

akhaliq/anychat
reacted to singhsidhukuldeep's post with 👀 about 2 months ago
Exciting Research Alert: Revolutionizing Dense Passage Retrieval with Entailment Tuning!

The good folks at HKUST have developed a novel approach that significantly improves information retrieval by leveraging natural language inference.

The entailment tuning approach consists of several key steps to enhance dense passage retrieval performance.

Data Preparation
- Convert questions into existence claims using rule-based transformations.
- Combine retrieval data with NLI data from SNLI and MNLI datasets.
- Unify the format of both data types using a consistent prompting framework.
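
A rough sketch of how that unification might look in Python; the existence-claim template and the field names here are assumptions for illustration, not the authors' actual preprocessing code.

```python
# Illustrative data-preparation sketch: rule-based question-to-claim rewriting
# and a unified (premise, hypothesis) format for retrieval + NLI examples.
# The claim template and dict keys are placeholders, not the paper's code.

def question_to_existence_claim(question: str) -> str:
    """Rule-based rewrite of a question into an existence claim (placeholder template)."""
    return "There exists a passage that answers: " + question.strip().rstrip("?") + "."

def to_premise_hypothesis(example: dict) -> dict:
    """Unify retrieval pairs and SNLI/MNLI pairs into one (premise, hypothesis) format."""
    if "passage" in example:                              # retrieval-style example
        return {"premise": example["passage"],
                "hypothesis": question_to_existence_claim(example["question"])}
    return {"premise": example["premise"],                # NLI-style example
            "hypothesis": example["hypothesis"]}

print(to_premise_hypothesis({"question": "Where is the Eiffel Tower?",
                             "passage": "The Eiffel Tower stands in Paris."}))
```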

Entailment Tuning Process
- Initialize the model using pre-trained language models like BERT or RoBERTa.
- Apply aggressive masking (β=0.8) specifically to the hypothesis components while preserving premise information.
- Train the model to predict the masked hypothesis tokens from the premise content.
- Run the training for 10 epochs using 8 GPUs, taking approximately 1.5-3.5 hours.
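
A minimal sketch of that masking objective, assuming a BERT-style encoder where token_type_ids mark the hypothesis segment; the checkpoint and the example pair are placeholders, not the paper's training code.

```python
# Entailment-tuning objective sketch: mask hypothesis tokens with rate
# beta = 0.8 and train a masked-LM head to recover them from the premise.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")      # assumed backbone
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
BETA = 0.8  # masking rate applied only to the hypothesis

def build_batch(premise: str, hypothesis: str):
    enc = tok(premise, hypothesis, return_tensors="pt", truncation=True)
    labels = enc["input_ids"].clone()
    hyp = enc["token_type_ids"].bool()        # segment 1 = hypothesis in BERT inputs
    special = torch.tensor(
        [tok.get_special_tokens_mask(ids, already_has_special_tokens=True)
         for ids in enc["input_ids"].tolist()]
    ).bool()
    masked = hyp & ~special & (torch.rand(labels.shape) < BETA)
    labels[~masked] = -100                    # loss only on masked hypothesis tokens
    enc["input_ids"][masked] = tok.mask_token_id
    enc["labels"] = labels
    return enc

batch = build_batch(
    "The Eiffel Tower is a wrought-iron tower on the Champ de Mars in Paris.",
    "There exists a passage that answers: where is the Eiffel Tower.",
)
loss = model(**batch).loss  # predict the masked hypothesis tokens from the premise
```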

Training Arguments for Entailment Tuning (Yes! They Shared Them)
- Use a learning rate of 2e-5 with 100 warmup steps.
- Set batch size to 128.
- Apply weight decay of 0.01.
- Utilize the Adam optimizer with beta values (0.9, 0.999).
- Maintain maximum gradient norm at 1.0.
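
Those hyperparameters map directly onto Hugging Face TrainingArguments; the output directory and the per-device split of the batch across 8 GPUs are assumptions.

```python
# The shared hyperparameters expressed as TrainingArguments
# (output_dir and the rest of the training loop are placeholders).
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="entailment-tuning",       # placeholder path
    num_train_epochs=10,
    per_device_train_batch_size=16,       # 16 per device x 8 GPUs = batch size 128
    learning_rate=2e-5,
    warmup_steps=100,
    weight_decay=0.01,
    adam_beta1=0.9,
    adam_beta2=0.999,
    max_grad_norm=1.0,
)
```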

Deployment
- Index passages using FAISS for efficient retrieval.
- Shard vector store across multiple GPUs.
- Enable sub-millisecond retrieval of the top-100 passages per query.
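
A small FAISS sketch of that serving setup; the embedding dimension and the stand-in vectors are assumptions, and the GPU sharding requires the faiss-gpu build.

```python
# Deployment sketch: index passage embeddings with FAISS, shard the index
# across all visible GPUs, and retrieve the top-100 passages per query.
import faiss
import numpy as np

dim = 768                                                        # encoder output size (assumed)
passage_vecs = np.random.rand(100_000, dim).astype("float32")    # stand-in passage embeddings
query_vecs = np.random.rand(4, dim).astype("float32")            # stand-in query embeddings

cpu_index = faiss.IndexFlatIP(dim)       # exact inner-product search
cpu_index.add(passage_vecs)

co = faiss.GpuMultipleClonerOptions()
co.shard = True                          # shard (rather than replicate) across GPUs
gpu_index = faiss.index_cpu_to_all_gpus(cpu_index, co)

scores, ids = gpu_index.search(query_vecs, 100)   # top-100 passages per query
```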

Integration with Existing Systems
- Insert entailment tuning between pre-training and fine-tuning stages.
- Maintain compatibility with current dense retrieval methods.
- Preserve existing contrastive learning approaches during fine-tuning.

Simple, intuitive, and effective!

This advancement significantly improves the quality of retrieved passages for question-answering systems and retrieval-augmented generation tasks.
liked a Space 2 months ago