The Geometry of Tokens in Internal Representations of Large Language Models
Abstract
We investigate the relationship between the geometry of token embeddings and their role in next token prediction within transformer models. A key aspect of this connection is the notion of the empirical measure, which encodes the distribution of token point clouds across transformer layers and drives the evolution of token representations in the mean-field interacting picture. We use metrics such as intrinsic dimension, neighborhood overlap, and cosine similarity to observationally probe these empirical measures across layers. To validate our approach, we compare these metrics against a dataset in which the tokens are shuffled, disrupting the syntactic and semantic structure. Our findings reveal a correlation between the geometric properties of token embeddings and the cross-entropy loss of next token predictions: prompts with higher loss values have tokens represented in higher-dimensional spaces.
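The three probes named in the abstract admit compact implementations. The following is a minimal sketch (not the authors' code) of how intrinsic dimension (via the TwoNN estimator of Facco et al.), neighborhood overlap between consecutive layers, and mean cosine similarity could be computed on a layer's token point cloud, assuming representations arrive as NumPy arrays of shape (num_tokens, hidden_dim); all function names here are illustrative.

```python
import numpy as np
from scipy.spatial.distance import cdist

def twonn_intrinsic_dimension(X):
    """TwoNN estimator: infer intrinsic dimension from the ratio of each
    point's second- to first-nearest-neighbor distance."""
    D = cdist(X, X)
    np.fill_diagonal(D, np.inf)
    sorted_D = np.sort(D, axis=1)
    mu = sorted_D[:, 1] / sorted_D[:, 0]          # ratio r2 / r1 per token
    mu = mu[np.isfinite(mu) & (mu > 1)]           # guard against duplicate points
    # Maximum-likelihood estimate: mu follows a Pareto law with exponent d
    return len(mu) / np.sum(np.log(mu))

def neighborhood_overlap(X, Y, k=10):
    """Fraction of each token's k nearest neighbors preserved between two
    layers' representations X and Y."""
    def knn(Z):
        D = cdist(Z, Z)
        np.fill_diagonal(D, np.inf)
        return np.argsort(D, axis=1)[:, :k]
    nx, ny = knn(X), knn(Y)
    return np.mean([len(set(a) & set(b)) / k for a, b in zip(nx, ny)])

def mean_cosine_similarity(X):
    """Average pairwise cosine similarity of the token point cloud."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    S = Xn @ Xn.T
    iu = np.triu_indices_from(S, k=1)             # upper triangle, no diagonal
    return S[iu].mean()
```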
Community
Code: https://github.com/ritareasciencepark/token_geometry
TLDR: The geometry of token embeddings across transformer layers (intrinsic dimension, neighborhood overlap, cosine similarity) correlates with the cross-entropy loss of next token prediction: prompts with higher loss have tokens represented in higher-dimensional spaces.
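To make the measurement loop concrete, here is a hedged sketch of how per-layer token point clouds, the prompt-level cross-entropy loss, and the shuffled-token control could be obtained with Hugging Face Transformers. The model name is a placeholder and the `probe` helper is illustrative, not the linked repository's actual interface; the returned point clouds feed the metrics sketched above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; substitute the LLM under study
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

def probe(prompt, shuffle=False):
    """Return per-layer token point clouds and the next-token CE loss."""
    ids = tok(prompt, return_tensors="pt").input_ids
    if shuffle:
        # Control condition: permute tokens to destroy syntactic and
        # semantic order while keeping the token statistics fixed.
        ids = ids[:, torch.randperm(ids.shape[1])]
    with torch.no_grad():
        out = model(ids, labels=ids, output_hidden_states=True)
    # out.hidden_states is a tuple of (1, seq_len, hidden_dim) tensors,
    # one per layer (plus the embedding layer).
    clouds = [h[0].numpy() for h in out.hidden_states]
    return clouds, out.loss.item()

clouds, loss = probe("The geometry of tokens evolves across layers.")
clouds_shuf, loss_shuf = probe("The geometry of tokens evolves across layers.",
                               shuffle=True)
```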
The following papers were recommended by the Semantic Scholar API:
- Does Representation Matter? Exploring Intermediate Layers in Large Language Models (2024)
- Uncovering Uncertainty in Transformer Inference (2024)
- NormXLogit: The Head-on-Top Never Lies (2024)
- Token Prepending: A Training-Free Approach for Eliciting Better Sentence Embeddings from LLMs (2024)
- Quantifying Positional Biases in Text Embedding Models (2024)
- Mixture of Hidden-Dimensions Transformer (2024)
- Enhancing Lexicon-Based Text Embeddings with Large Language Models (2025)