Fine-Grained Prediction of Reading Comprehension from Eye Movements
Abstract
Can human reading comprehension be assessed from eye movements in reading? In this work, we address this longstanding question using large-scale eye-tracking data over textual materials that are geared towards behavioral analyses of reading comprehension. We focus on the fine-grained and largely unaddressed task of predicting reading comprehension from eye movements at the level of a single question over a passage. We tackle this task using three new multimodal language models, as well as a battery of prior models from the literature. We evaluate the models' ability to generalize to new textual items, new participants, and the combination of both, in two different reading regimes: ordinary reading and information seeking. The evaluations suggest that although the task is highly challenging, eye movements contain useful signals for fine-grained prediction of reading comprehension. Code and data will be made publicly available.
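The abstract describes three generalization regimes: evaluating on new textual items, on new participants, and on the combination of both. The sketch below illustrates one way such splits could be constructed from per-trial data; the data layout, field names, and split logic are illustrative assumptions, not the authors' code or dataset format.

```python
# Hypothetical sketch of the three generalization splits mentioned in the abstract:
# (i) new textual items, (ii) new participants, (iii) both unseen.
# Toy data and names are assumptions for illustration only.
import random
from collections import Counter

random.seed(0)

# Toy trials: one row per (participant, question item) with a binary
# correctness label for the comprehension question.
participants = [f"p{i}" for i in range(20)]
items = [f"q{j}" for j in range(30)]
trials = [
    {"participant": p, "item": q, "correct": random.random() < 0.7}
    for p in participants
    for q in items
]

# Hold out a subset of participants and a subset of items for testing.
held_out_participants = set(random.sample(participants, 5))
held_out_items = set(random.sample(items, 8))

def assign_split(trial):
    """Assign each trial to the train set or one of three test regimes."""
    new_p = trial["participant"] in held_out_participants
    new_i = trial["item"] in held_out_items
    if new_p and new_i:
        return "test_new_item_and_participant"
    if new_p:
        return "test_new_participant"
    if new_i:
        return "test_new_item"
    return "train"

print(Counter(assign_split(t) for t in trials))
```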
Community
The following similar papers were recommended by the Semantic Scholar API:
- Decoding Reading Goals from Eye Movements (2024)
- Seeing Eye to AI: Human Alignment via Gaze-Based Response Rewards for Large Language Models (2024)
- Linear Recency Bias During Training Improves Transformers' Fit to Reading Times (2024)
- GazeGenie: Enhancing Multi-Line Reading Research with an Innovative User-Friendly Tool (2024)
- Large-scale cloze evaluation reveals that token prediction tasks are neither lexically nor semantically aligned (2024)