arxiv:2502.19577

Tell me why: Visual foundation models as self-explainable classifiers

Published on Feb 26
Submitted by hturbe on Mar 3
Abstract

Visual foundation models (VFMs) have become increasingly popular due to their state-of-the-art performance. However, interpretability remains crucial for critical applications. To this end, self-explainable models (SEMs) aim to provide interpretable classifiers that decompose predictions into a weighted sum of interpretable concepts. Despite their promise, recent studies have shown that these explanations often lack faithfulness. In this work, we combine VFMs with a novel prototypical architecture and specialized training objectives. By training only a lightweight head (approximately 1M parameters) on top of frozen VFMs, our approach (ProtoFM) offers an efficient and interpretable solution. Evaluations demonstrate that our approach achieves competitive classification performance while outperforming existing models across a range of interpretability metrics derived from the literature. Code is available at https://github.com/hturbe/proto-fm.
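
To make the setup concrete, here is a minimal, hypothetical PyTorch sketch of the kind of head the abstract describes: a small prototypical module trained on frozen VFM patch features, with class scores formed as a weighted sum of per-concept similarities. The class name, dimensions, and the max-pooled cosine-similarity design are illustrative assumptions, not ProtoFM's actual implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypicalHead(nn.Module):
    """Hypothetical sketch of a prototypical classification head trained
    on top of frozen VFM patch features. Names, dimensions, and the
    max-pooled cosine-similarity design are illustrative assumptions."""

    def __init__(self, feat_dim=768, proj_dim=256, num_prototypes=100, num_classes=10):
        super().__init__()
        self.proj = nn.Linear(feat_dim, proj_dim)              # lightweight projection
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, proj_dim))
        self.classifier = nn.Linear(num_prototypes, num_classes, bias=False)

    def forward(self, patch_feats):
        # patch_feats: (B, N, feat_dim) patch tokens from a frozen backbone
        z = F.normalize(self.proj(patch_feats), dim=-1)        # (B, N, proj_dim)
        p = F.normalize(self.prototypes, dim=-1)               # (P, proj_dim)
        sim = z @ p.t()                                        # (B, N, P) cosine similarities
        pooled, _ = sim.max(dim=1)                             # best-matching patch per prototype
        logits = self.classifier(pooled)                       # weighted sum of concept activations
        return logits, sim                                     # sim localizes each concept spatially


# Only the head is trained; the VFM backbone stays frozen.
head = PrototypicalHead()
feats = torch.randn(2, 196, 768)  # e.g. 14x14 patch tokens from a ViT backbone
logits, similarities = head(feats)
```

With parameters of this rough size (a 768-to-256 projection, a few hundred prototypes, and a linear classifier), the trainable head stays well under a few million parameters, which is consistent with the ~1M figure quoted in the abstract.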

Community

Paper author and submitter

In this work, we present ProtoFM, a novel approach that enhances interpretability in visual classification tasks. Our method integrates Visual Foundation Models (VFMs) with a prototypical architecture and specialized training objectives, enabling more meaningful and faithful explanations. Because only a lightweight head (~1M parameters) is trained on top of frozen VFMs, ProtoFM adds minimal computational overhead.

Extensive evaluations demonstrate that ProtoFM not only maintains competitive classification performance but also significantly enhances interpretability, as validated through a comprehensive set of quantitative metrics.
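
The post does not list the metrics used, but a common family in the interpretability literature is deletion-style faithfulness tests, which mask the patches an explanation ranks as most relevant and check how quickly the class score drops. A minimal, generic sketch follows; the function name, zero-masking scheme, and model signature are illustrative assumptions, not the paper's protocol.

```python
import torch

@torch.no_grad()
def deletion_curve(model, patch_feats, relevance, num_steps=10):
    """Illustrative deletion-style faithfulness check (not one of the
    paper's specific metrics). Patches are masked in decreasing order of
    explanation relevance; a faithful explanation makes the class score
    drop quickly. `model` maps (1, N, D) features to logits; `relevance`
    is a length-N relevance vector for a single image."""
    order = relevance.argsort(descending=True)
    n = patch_feats.shape[1]
    target = model(patch_feats).argmax(dim=-1).item()  # originally predicted class
    scores = []
    for step in range(num_steps + 1):
        k = int(n * step / num_steps)
        masked = patch_feats.clone()
        masked[:, order[:k]] = 0.0                     # zero out the top-k relevant patches
        prob = model(masked).softmax(dim=-1)[0, target]
        scores.append(prob.item())
    return scores                                      # lower area under this curve = more faithful
```

With the prototypical head sketched above, `model` would be `lambda x: head(x)[0]`, since that head returns both logits and the similarity map.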



Models citing this paper 0

Datasets citing this paper 0

Spaces citing this paper 0

No model, dataset, or Space links this paper yet. Cite arxiv.org/abs/2502.19577 in a README.md to link it from this page.

Collections including this paper 3