[
  {
    "name": "PaliGemma 2: A Family of Versatile VLMs for Transfer",
    "id": "2412.03555",
    "content": "4 2 0 2 c e D 4\nV C . s c [\n1 v 5 5 5 3 0 . 2 1 4 2 : v i X r a\nDecember 2024\nPaliGemma 2: A Family of Versatile VLMs for Transfer Andreas Steiner*, , Andr  Susano Pinto*, Michael Tschannen*, Daniel Keysers, Xiao Wang, Yonatan Bitton, Alexey Gritsenko, Matthias Minderer, Anthony Sherbondy, Shangbang Long, Siyang Qin, Reeve Ingle, Emanuele Bugliarello, Sahar Kazemzadeh, Thomas Mesnard, Ibrahim Alabdulmohsin, Lucas Beyer and Xiaohua Zhai Google DeepMind, *Core team,  Project lead\nPaliGemma 2 is an upgrade of the PaliGemma open Vision-Language Model (VLM) based on the Gemma 2 family of language models. We combine the SigLIP-So400m vision encoder that was also used by PaliGemma with the whole range of Gemma 2 models, from the 2B one all the way up to the 27B model. We train these models at three resolutions (224px2, 448px2 and 896px2) in multiple stages to equip them with broad knowledge for transfer via fine-tuning. The resulting family of base models covering different model sizes and resolutions allows us to investigate factors impacting transfer performance (such as learning rate) and to analyze the interplay between the type of task, model size, and resolution. We further increase the number and breadth of transfer tasks beyond the scope of PaliGemma including different OCR-related tasks such as table structure recognition, molecular structure recognition, music score recognition, as well as long fine-grained captioning and radiography report generation, on which PaliGemma 2 obtains state-of-the-art results.\n1. Introduction\nPaliGemma [9] is a 3B vision-language model (VLM) for transfer combining the SigLIP [108] vision encoder and the 2B Gemma language model [21]. It matches the performance of much larger prior VLMs consisting of a range of different vision encoders and language models. We now upgrade PaliGemma by replacing its language model component with the more recent and more capable language models from the Gemma 2 fam- ily [22], producing new PaliGemma 2 base VLMs at 3 different sizes (3B, 10B, 28B) and 3 different resolutions (224px2, 448px2, 896px2). To equip these VLMs with broad capabilities we use the same 3-stage training recipe as PaliGemma. The resulting models are designed to be fine-tuned, and when evaluated on the 30+ transfer tasks considered in [9] (which include common cap- tioning and VQA tasks, and some video and re- ferring expression tasks), PaliGemma 2 slightly outperforms PaliGemma at the same resolution and model size, and obtains substantial improve- ments at larger model sizes. We release the PaliGemma 2 VLMs as open-weight models which can serve as drop-in replacement for PaliGemma.\nHaving a family of models at hand that are all derived from comparable building blocks and are trained according to the same recipe allows us to analyze the effect of model size and resolution on the downstream performance in a controlled setting (see Sec. 4.1). For example, while almost every task benefits from added compute, we iden- tify which transfer tasks benefit more from com- pute due to increased resolutions, and which from compute due to a larger, more capable language model. We also show that larger models tend to have a lower optimal transfer learning rate.\nWe also explore new tasks which were not ex- plored in depth in [9], including text detection and recognition (Sec. 4.2), table structure recog- nition (Sec. 4.3), molecular structure recogni- tion (Sec. 4.4), optical music score recognition (Sec. 4.5), long caption generation (Sec. 
4.6), spa- tial reasoning (Sec. 4.7), and radiography report generation (Sec. 4.8). PaliGemma 2 obtains state- of-the-art results on many of those tasks. Finally, we benchmark and analyze low-precision vari- ants of PaliGemma 2 for on-device deployment on CPU (Sec. 4.9).\nCorresponding author(s): andstein,andresp,[email protected]   2024 Google DeepMind. All rights reserved\nPaliGemma 2: A Family of Versatile VLMs for Transfer\n896244822242linear projectionImage tokensInput text tokensOutput text tokens2B9B27BGemma 2SigLIP-400m/14\nFigure 1 | PaliGemma 2 processes a 224px2/ 448px2/896px2 image with a SigLIP-400m en- coder with patch size 14px2, yielding 256/1024/ 4096 tokens. After a linear projection, the image tokens are concatenated with the input text to- kens and Gemma 2 autoregressively completes this prefix with an answer.\n0.2740.2550.8460.4980.0460.6660.2270.807segment puffin in the back ; puffin in front<loc0255><loc0274><loc0846><loc0498><seg024>[...]<seg018> puffin in front ;<loc0046><loc0666><loc0227><loc0807><seg106>[...]<seg055> puffin in the backInput:Output:\nFigure 2 | Referring segmentation example from our PaliGemma demoa. The model is pretrained with a vocabulary that includes localization to- kens (for detection) and segmentation tokens (to define a binary mask inside a bounding box).\n2. Related work\nhttps://huggingface.co./spaces/big-vision/paligemma\nOver the last few years, VLMs evolved rapidly from simple dual-encoder (contrastive) [31, 77, 108] or encoder-decoder (captioning) [20, 93, 94, 98] designs trained from scratch, to more capable designs combining a pretrained vision encoder with a pretrained language model [4, 5, 14, 16, 48, 72, 96, 103]. Broadly, three paradigms are used to transfer these models: zero-shot, few- shot, and fine-tuning. Another recent trend is  instruction tuning  which aims to make the mod- els more user friendly [18, 54].\nmarize the most important aspects here. We use the same pretrained SigLIP-So400m vision en- coder [3, 108] and map its (sequence of) em- beddings to the Gemma 2 input space with a linear projection. The visual embeddings are com- bined with a text prompt and fed to the Gemma 2 language model (prefill). Predictions are then obtained by autoregressively sampling from the language model (see Fig. 1).\nSeveral previous works [9, 19, 34, 35, 45, 66, 92, 109] have investigated the effect of scaling VLMs along different axes such as training data and compute, resolution, model size, and quality of components, in particular the vision encoder. However, we are not aware of prior work which jointly studies the effect of the image resolution and the size of the language models on transfer via fine-tuning. In particular, prior works rely- ing on different language model sizes often use models with different architecture and training recipes from different labs, e.g. [35, 92] (with the notable exception of [47]).\nWe pretrain PaliGemma 2 in three stages (with stage 0 corresponding to unimodal pretraining of the components, see [108] and [21]).\nStage 1 combines the pretrained SigLIP- So400m and Gemma 2 checkpoints (raw checkpoints, without post-training steps) and trains them jointly on a multimodal task mixture of 1 billion examples designed to enable transferability to a wide range of tasks via fine-tuning. The image resolution is 224px2; no parameters are frozen during this stage.\n3. 
Model\nWe follow exactly the same modeling, training, and data setup as PaliGemma [9] and briefly sum-\nStage 2 first trains for 50 million examples at resolution 448px2 and then for 10 million examples at resolution 896px2. The task mix- ture has the same components but tasks ben- efiting from high resolution are upweighted, and the output sequence length is increased\nPaliGemma 2: A Family of Versatile VLMs for Transfer\nTraining cost / example\nVision Encoder\nLLM\nParams. 224px2\n448px2\n896px2\nGemma 2 2B PaliGemma 2 PaliGemma 2 10B SigLIP-So400m Gemma 2 9B Gemma 2 27B PaliGemma 2 28B\n3.0B 9.7B 27.7B\n1.0 3.7 18.9\n4.6 18.3 63.5\n23.5 67.7  155.6\nTable 1 | The vision encoder parameter count is small compared to the LLM, but the compute is dominated by the vision tokens in the LLM. The last three columns show the relative training cost per example (as measured in our pre-training setup). Models are trained on Cloud TPUv5e [24], except the 28B model at 896px2 is trained on TPUv5p, for which we assume a speed-up of 2.3  per chip.\n(to promote e.g. learning of OCR for long sequences of visual text).\ncial VLM as common among other open VLMs such as LLaVA [54].\nStage 3 fine-tunes the checkpoints from stage 1 or 2 (depending on the resolution) to the target task. PaliGemma considered a range of academic benchmarks, including some involving multiple images and short videos. We consider the same set of bench- marks here (exploring the same set of hyper- parameters from [9, Sec. 3.2.4]). In addition, we also explore new applications involving document-related tasks, long caption gener- ation, and medical image understanding.\nSimilar to PaliGemma, we train PaliGemma 2 models on Cloud TPUv5e Pod slices [24] (ex- cept TPUv5p for the 28B model at 896px2) of 256 to 1024 chips and use a fully-sharded data-parallel (FSDP [8, 110]) sharding strategy. PaliGemma 2 3B has roughly the same training cost as PaliGemma (3 days for Stage 1 using 256 chips); the cost for other variants and resolutions can be inferred from Table 1. It is worth noting that increasing resolution incurs a similar addi- tional cost as increasing the language model size.\nFollowing [22], we apply logits soft-capping [6] to the attention and output logits in the Gemma 2 component with the same parameters as [22] in Stages 1 and 2, but not in Stage 3, as this led to worse results for some transfer tasks. Fur- ther, we use the Adam optimizer [42] with de- fault hyperparameters throughout, and adjust the learning rate based on the model size in Stages 1 and 2. Specifically, we multiply the learning rate of 2   10 5 used in Stages 1 and 2 for PaliGemma by 0.5 for PaliGemma 2 3B and by 0.25 for PaliGemma 2 10B and 28B.\n4. Experiments\nIn addition to the broad range of transfer tasks considered in [9], we also consider new tasks in- volving text detection and recognition (Sec. 4.2), table structure recognition (Sec. 4.3), molecular structure recognition (Sec. 4.4), optical music score recognition (Sec. 4.5), long caption genera- tion (Sec. 4.6), spatial reasoning (Sec. 4.7), and radiography report generation (Sec. 4.8).\nWe provide examples for each new task in Ap-\nFor details on the training data mixture we re- fer to [9, Sec. 3.2.5] and provide a brief sum- mary here. The mixture involves captioning, grounded captioning (as in [94]), OCR, differ- ent machine generated visual question answer- ing (VQA) tasks [11, 75], detection [13] and in- stance segmentation [15]. 
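For concreteness, a weighted multi-task mixture of this kind can be implemented by sampling a task for each training example from a categorical distribution over mixture weights; the Stage 2 upweighting of resolution-sensitive tasks then amounts to adjusting those weights. The task names and weights below are purely illustrative assumptions, not the actual PaliGemma 2 mixture.

import random

# Illustrative weights only; the actual PaliGemma 2 mixture weights are not reproduced here.
STAGE2_TASK_WEIGHTS = {
    "captioning": 0.25,
    "grounded_captioning": 0.15,
    "ocr": 0.30,          # hypothetically upweighted: benefits from 448px2/896px2 input
    "vqa": 0.15,
    "detection": 0.10,
    "segmentation": 0.05,
}

def sample_task(weights, rng=random):
    """Draw one task per training example, proportional to its mixture weight."""
    tasks, probs = zip(*weights.items())
    return rng.choices(tasks, weights=probs, k=1)[0]

# Example: assign a task to each example of a small batch.
batch_tasks = [sample_task(STAGE2_TASK_WEIGHTS) for _ in range(8)]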
Many of the corre- sponding labels are machine generated, mostly re- lying on publicly available specialist models (see [9, Sec. 3.2.5]), and none uses a large commer-\npendix A and transfer details in Appendix B.\n4.1. Investigating model size and resolution\nTo study the effect of model size and reso- lution on task performance we finetune the 3 model variants (3B, 10B and 28B) in two resolutions (224px2 and 448px2) on the 30+ academic benchmarks used by [9], covering a broad range of captioning, VQA, and refer-\nPaliGemma 2: A Family of Versatile VLMs for Transfer\nRSVQA-hr (test2)\nRelative improvement 3B 10B\nNLVR2\nRefCOCO+ (val)\nOCR-VQA\nST-VQA (val)\nGQA\nTallyQA (simple)\nRefCOCO (val)\nXM3600 (en)\nAI2D\nCOCO-35L (en)\nRefCOCOg (val)\nChartQA (human)\nOKVQA\nInfoVQA (val)\nXM3600 (avg35)\nChartQA (aug)\n224px  448px \nAOKVQA-DA (val)\n10%\n100%\nCOCO-35L (avg34)\nTextVQA (val)\nTextCaps\n100%\nTallyQA (complex)\nDocVQA (val)\nRSVQA-lr\nCountBenchQA\nWidgetCap\n10%\nScreen2Words\nAOKVQA-MC (val)\nxGQA (avg7)\nVQAv2 (minival)\nVizWizVQA (val)\nNoCaps\nScienceQA\nCOCOcap\nMARVL (avg5)\nRSVQA-hr (test)\nFigure 3 | Relative improvements of metrics after transfer, when choosing a pre-trained checkpoint with a larger LM, or with a higher resolution. The tasks are grouped into tasks sensitive to both model size and resolution ( ), sensitive to model size ( ), and sensitive to resolution ( ). Note that some benchmarks are quite saturated (e.g. ScienceQA s relative improvement of 2.2% corresponds to an error reduction of 53.8%   see Figure 13). Data used to create this plot available in Table 13.\nring segmentation tasks on natural images, doc- uments, infographics, and videos. We reuse the optimal hyperparameters from the earlier PaliGemma work and only sweep the learning rate {0.03, 0.06, 0.1, 0.3, 0.6, 1.0, 3.0}   10 5 for every model size. Since for most tasks the earlier work used the same hyperparameters for 224px2 and 448px2, we only sweep at 224px2 resolution and reuse the selection for both resolutions. We select the best learning rate based on the respec- tive validation split for each model size and task, then retrain the models and report the test met- rics. Complete results are available in Table 13.\n4.1.1. Effect on task performance\nIncreasing image resolution and increasing LM size both lead to an increase in the FLOPs spent on the prediction (and training, see Table 1) of our PaliGemma 2 models. Thus, we generally expect most tasks to benefit from both these changes. On the other hand, some tasks might benefit from more detail in the input (higher resolution) or bet- ter language understanding and increased world knowledge provided by a larger LM. To get a more fine-grained understanding of these aspects we visualize in Fig. 3 the relative improvement in transfer metrics when equipping PaliGemma 2\n3B (224px2) with either the bigger 9B LM while keeping the resolution (3.7  more FLOPs), or keeping the model size but increasing the resolu- tion to 448px2 (4.6  more FLOPs).\nAs expected, most tasks similarly benefit from a resolution and model increase (green markers). There is a group of tasks (yellow markers) fo- cused on text, document, screen and chart under- standing which mainly benefit from a resolution increase. The images in the corresponding bench- marks often have a native resolution significantly larger than 224px2, which is aligned with this ob- servation. Another group of tasks (blue markers) mostly benefits from LM size increase. 
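A note on the Fig. 3 caption: for accuracy-like metrics on a 0-100 scale, relative improvement is (s_new - s_old) / s_old, whereas error reduction is 1 - (100 - s_new) / (100 - s_old). On a saturated benchmark the two can differ by an order of magnitude: a 2.2% relative improvement on a score near 96 removes roughly half of the remaining errors, consistent with the ScienceQA example in the caption.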
Some of these tasks involve multilingual data (XM3600 (avg35)), or require advanced visual reasoning (AI2D, CountBenchQA, NLVR2).\nFig. 4 provides additional detail on the scaling behavior as a function of resolution and model size. Compared to increasing model size from 3B to 10B, increasing it further to 28B often only leads to moderate improvements, or no improve- ments at all. Using the largest PaliGemma 2 can thus be useful if one wants to get the best possi- ble performance and has no compute or latency constraints. A possible factor related to the rela- tively worse transferability of PaliGemma 2 28B\nPaliGemma 2: A Family of Versatile VLMs for Transfer\n160\n141\n10B\n90.5\nOCR-VQA\nCOCO-35L (en)\n180\nRefCOCO+ (val)\n66.5\nScreen2Words\nDocVQA (val)\n124\n143\nAOKVQA-DA (val)\n130\n170\n93.5\nChartQA (aug)\n150\nChartQA (human)\nTallyQA (complex)\nTextVQA (val)\n144\nST-VQA (val)\n139\n150\nAI2D\n145\n142\nxGQA (avg7)\nRSVQA-lr\nXM3600 (en)\nCOCO-35L (avg34)\n28B\nRSVQA-hr (test2)\n140\nVizWizVQA (val)\n126\n28B\n10B\n28B\nRefCOCOg (val)\nMARVL (avg5)\n68.5\n127\n115\n28B\n116\nOKVQA\n138\n10B\n122.5\nRefCOCO (val)\nWidgetCap\n44.0\nCOCOcap\nTallyQA (simple)\nCountBenchQA\nSciCap\n117.5\n90.6\n92.5\nTextCaps\n140\n117\n94.0\n141\nNLVR2\n44.5\nNoCaps\n93.0\nAOKVQA-MC (val)\n10B\n66.0\n125\n90.7\n112.5\nXM3600 (avg35)\n115.0\n67.5\n90.9\nInfoVQA (val)\n91.0\n10B\n68.0\n140\n114\n143\nGQA\n90.8\nScienceQA\n43.5\n120.0\n67.0\n43.0\n10B\nVQAv2 (minival)\n145\n142\n28B\n123\n45.0\n92.0\n28B\nFigure 4 | Transfer performance as a function of model size and resolution (median over 5 transfer runs). The shaded area marks standard deviation to reported value. Lighter lines correspond to higher resolution (448px2). The tasks are grouped into tasks sensitive to both model size and resolution ( ), sensitive to model size ( ), and sensitive to resolution ( ). Data for this plot is available in Table 13.\nis that the underlying Gemma 2 27B model is trained from scratch, as opposed to the 2B and 9B models, which are distilled [22, Sec. 6.1].\n4.1.2. Model size and transfer learning rate\nFigure 5 visualizes the (normalized) task perfor- mance as a function of the transfer learning rate. As a general trend we observe that the optimal learning rate for larger models tends to be lower than for smaller models (diagonal patterns in the heat map). We thus recommend to sweep smaller learning rates when increasing model size. Addi- tionally, we found that the new PaliGemma 2 3B generally has a smaller optimal transfer learning rate when compared to PaliGemma.\n4.1.3. Using Gemma 2 instead of Gemma 1\nWe also compare with PaliGemma in Table 15. It can be seen that for the same resolution and model size (i.e. 3B) PaliGemma 2 models perform slightly better than the corresponding PaliGemma models. On average over the 30+ aca- demic benchmarks the scores were 0.65 better for 224px2 and 0.85 for 448px2.\n4.2. Text detection and recognition\nWe apply PaliGemma 2 to advanced OCR in- volving localization and recognition of individual words from images. Specifically, the outputs are pairs of {transcription, bounding box}. Following the HierText competition [57], we use word level precision, recall, and F1 as the metrics. 
A word\nPaliGemma 2: A Family of Versatile VLMs for Transfer\n6e-6\nRefCOCO+ (val)\nbest\n6e-7\nCOCOcap (minival)\n3e-7\nRSVQA-hr (minival)\n10B\nScreen2Words (minival)\n1e-5\nTallyQA (complex)\nAOKVQA-MC (val)\n28B\n1e-5\n3e-5\nNLVR2 (minival)\n1e-6\n1e-5\n3e-5\nTallyQA (simple)\n1e-6\nOCR-VQA (minival)\n6e-6\n3e-7\nCOCO-35L (avg34)\nScienceQA (minival)\n3e-5\n28B\n6e-6\n3e-7\nCOCO-35L (en)\n3e-5\n10B\n6e-7\n6e-6\n10B\nRefCOCO (val)\n1e-5\nAOKVQA-DA (val)\n1e-6\n1e-6\n3e-6\nSciCap (minival)\n3e-6\n3e-7\n10B\n6e-6\n3e-7\n28B\n3e-6\n6e-6\nInfoVQA (val)\n6e-7\n28B\nTextVQA (val)\nworse\nDocVQA (val)\n28B\nGQA (minival)\n1e-5\nVQAv2 (minival)\n3e-7\n3e-6\n3e-6\nVizWizVQA (val)\nAI2D (minival)\n10B\nWidgetCap (minival)\nChartQA (human) (minival)\n1e-6\n1e-6\nST-VQA (val)\n1e-5\nRSVQA-lr (minival)\nOKVQA (minival)\nChartQA (aug) (minival)\nRefCOCOg (val)\n3e-6\n6e-7\n3e-5\nTextCaps (minival)\n3e-5\n6e-7\n6e-7\nFigure 5 | Per-task performance as a function of model size and learning rate for several of the downstream tasks. Values are normalized for each task and model size, with darker color indicating better task performance. Larger models tend to have a lower optimal transfer learning rate. Zero-shot tasks not shown as their values were not used to select learning rates. The data used for this plot is provided in Table 14.\nresult is considered true positive if the IoU with the ground-truth bounding box is greater than or equal to 0.5 and the transcription matches the ground-truth. Note that the HierText protocol does not normalize letter cases, punctuation sym- bols, or filter based on text lengths but directly compares predictions against ground-truth.\nWe fine-tune PaliGemma 2 on a mixture of the train splits of ICDAR 15 [36], Total-Text [17], MLT17 and MLT19 [68], HierText [56], Tex- tOCR [84], IntelOCR [44] and evaluate on the ICDAR 15 and Total-Text test sets, which are the most commonly used OCR benchmarks. Table 2 shows the results: PaliGemma 2 3B at 896px2 outperforms the state of the art HTS [58]. We emphasize that this result is obtained simply by fine-tuning a general-purpose VLM which does not rely on task-specific architecture components as common in the OCR literature. This highlights PaliGemma 2 s versatile interface, and shows the benefits of OCR-related pretraining in Stages 2 and 3. We further tried reducing the resolution which led to substantially lower prediction qual- ity, while increasing the model size did not lead to improvements.\n4.3. Table structure recognition\nThe goal of table structure recognition is to ex- tract table text content, corresponding bound- ing box coordinates, and the table structure in HTML format from document images. To transfer PaliGemma 2 to this task we finetune on (the train splits of) two popular data sets, PubTabNet [112] containing 516k images of tabular data from the PubMed Central Open Access Subset (commer- cial use collection) and FinTabNet [111], consist- ing of 113k financial report tables from annual reports of S&P 500 companies. We remove ex- amples with obviously corrupted ground truth (e.g. a bounding box extending outside the image frame) from the training data and further apply the refinements from [86] to FinTabNet. 
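A minimal sketch of the kind of validity check used to drop corrupted training examples is shown below; only the out-of-frame criterion is stated above, the rest of the filtering logic is an assumption.

def box_inside_image(box, width, height):
    """box = (xmin, ymin, xmax, ymax) in pixels; True if fully inside the image frame."""
    xmin, ymin, xmax, ymax = box
    return 0 <= xmin <= xmax <= width and 0 <= ymin <= ymax <= height

def keep_example(cell_boxes, width, height):
    """Drop a training example if any cell bounding box extends outside the image."""
    return all(box_inside_image(b, width, height) for b in cell_boxes)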
Images are resized to the target resolution while preserv- ing the aspect ratio, and padded to square size to match the target input resolution.\nWe assess model quality with the Tree Edit Distance Similarity (TEDS) [112] and the Grid Table Similarity (GriTS) [85], two families of metrics which measure cell text content, cell topology/structure, and bounding box quality. PaliGemma 2 sets a new state of the art for most of these metrics (Table 3). We further tried in-\nPaliGemma 2: A Family of Versatile VLMs for Transfer\nICDAR 15 Incidental\nTotal-Text\nHTS PaliGemma 2 3B 896px2\n81.9 68.4 81.9 70.7\n74.5 75.9\n75.7 69.4 72.4 73.8 74.5 74.2\nTable 2 | Text detection and recognition performance: The 896px2 PaliGemma 2 model outperforms the state-of-the-art model HTS [58] on ICDAR 15 Incidental and Total-Text, under the evaluation protocol of HierText [57].\nFinTabNet\nPubTabNet\nS-TEDS\nTEDS GriTS-Top GriTS-Con\nS-TEDS\nTEDS GriTS-Top GriTS-Con\nSOTA PaliGemma 2 3B 896px2\n98.9 99.2\n98.2 98.9\n99.0 99.4\n98.6 99.2\n97.9 97.6\n96.9 97.3\n98.0\n97.8\nTable 3 | PaliGemma 2 results for table structure recognition on FinTabNet [111] and PubTabNet [112], compared to the state of the art. The reference metrics are from [28, 38, 60, 86].\ncreasing the model size which did not lead to additional benefits, and using a lower image res- olution led to a small regression in quality.\n4.4. Molecular structure recognition\nWe explore PaliGemma 2 for molecular struc- ture recognition, inferring the molecule graph structure (represented as a SMILES string [99]) from molecular drawings. As training data we use 1 million molecules from the PubChem dataset [41], rendered using the In- digo toolkit [71], and augmented with a variety of drawing styles and random perturbations, fol- lowing MolScribe [76]. We then evaluate on the same eval set as [76] consisting of 5.7k synthetic molecule images rendered with the ChemDraw library. We use exact match percentage as a met- ric, shown in Table 4. PaliGemma 2 outperforms the state of the art MolScribe when using 448px2 resolution; further increasing the resolution did not lead to a higher exact match percentage.\nthe task of\nother common score-related information such as articulation and barlines.\nWe use the GrandStaff dataset [79] containing 53.7k images and employ the official train, valida- tion and test splits. During training we use both the original images and synthetically augmented versions. Evaluation is done on the original im- ages without distortion. The metrics are the same as in [80] and are based on the the normalized mean edit distance. More specifically, the Charac- ter Error Rate (CER) counts errors at the character level, the Symbol Error Rate (SER) measures er- rors at the symbol level (combining multiple char- acters), and the Line Error Rate (LER) is based on full lines in the **kern encoding.\nThe results are shown in Table 5 along with those of the current state of the art method [80]. The error rates decrease with increasing resolu- tion, with the best error rates obtained at 896px2 resolution. Increasing the model size from 3B to 10B did not lead to further error reduction.\n4.5. Optical music score recognition\nWe apply PaliGemma 2 to optical music score recognition: translating images of single-line pi- anoform scores into their digital score representa- tion in the **kern format1. The **kern repre- sentation encodes pitch and duration along with\n1https://www.humdrum.org/rep/kern/\n4.6. 
Generating long, fine-grained captions\nGenerating long image captions with fine-grained detail has many use cases in multimodal learn- ing, for example to train text-to-image generation models with good controllability [7, 105]. To adapt PaliGemma 2 for this task we fine-tune on\nPaliGemma 2: A Family of Versatile VLMs for Transfer\nFull Match \n#par. #char. #sent. NES \nMolScribe [76] PaliGemma 2 10B 448px2\n93.8 94.8\nTable 4 | PaliGemma 2 performance for molecule structure recognition on ChemDraw data [76].\nMiniGPT-4 mPLUG-Owl2 InstructBLIP LLaVA-1.5 VILA\n7B 8B 7B 7B 7B\n484 459 510 395 871\n5.6 52.3 4.4 48.4 4.0 42.6 4.2 40.6 8.6 28.6\nCER  SER \nLER \nPaliGemma PaLI-5B\n8.9 34.3 3B 5B 1065 11.3 32.9\n535\nSheet Music Tr. [80] PaliGemma 2 3B 896px2\n3.9 1.6\n5.1 2.3\n13.1 6.7\nPaliGemma 2 448px2 3B PaliGemma 2 448px2 10B\n529 521\n7.7 28.4 7.5 20.3\nTable 5 | PaliGemma 2 performance for music score recognition on the GrandStaff data set [80]. Character Error Rate (CER), Symbol Error Rate (SER), and Line Error Rate (LER) in [%].\nTable 6 | PaliGemma 2 results for long captioning on the DOCCI data [69]. Pali* models are mod- els fine-tuned on DOCCI at 448px2; the other baselines are instruction-tuned on a broad range of tasks. Average prediction length in characters and sentences, and percentage of Non-Entailment Sentences (NES), measuring factual inaccuracies.\nthe DOCCI (Descriptions of Connected and Con- trasting Images) [69] data set which contains 15k images with detailed human-annotated En- glish descriptions with an average length of 7.1 sentences (639 characters, 136 words). The de- scriptions provide object spatial relations, object counting, text rendering, world knowledge, etc.\nWe first fine-tune PaliGemma 2 on DOCCI s train split, exploring the hyperparameter range suggested in [9, Sec. 3.2.4]. We select the most performant models by perplexity scores based on the test split, and generate image captions on the 100-image qual_dev split, with a max- imum decoding length of 192. We then con- duct human evaluations assessing whether each generated sentence is factually aligned with (en- tailed by) the image content (see Appendix B.5 for details on the evaluation protocol). Based on these evaluations we select the most factu- ally aligned models and retrain them on the union of train and test splits, followed by another round of human evaluation (on the qual_dev split). The results, shown in Table 6 indicate that the fine-tuned PaliGemma 2 model produces more factually aligned sentences than many pop- ular VLMs, which are often instruction-tuned on 10 100  larger high-quality captioning sets than PaliGemma 2. Unsurprisingly, we observe that in- creasing model size and resolution both improve factual alignment.\n4.7. Spatial reasoning\nVLMs like PaliGemma 2 obtain strong perfor- mance in vision-language tasks which involve ob- ject localization, such as referring expression com- prehension and segmentation [9, 15, 94, 104]. These tasks and the associated benchmarks of- ten rely on machine-generated annotations and are blind to complex failure modes, e.g. those involving negations.\nThe Visual Spatial Reasoning (VSR) bench- mark [53] is designed to overcome these issues and we use it here to assess the spatial reason- ing capabilities of PaliGemma 2. It is formulated as a classification task, where a model needs to determine whether a statement about the spa- tial relationship of objects in the image is correct or not. 
To use PaliGemma 2 s flexible text in- terface we frame this benchmark as a QA task with True / False answers. The results in Table 7 show that PaliGemma 2 outperforms prior fine- tuned models, and fine-tuning also provides a significant improvement over InstructBlip [18], a strong zero-shot model form the literature. We observe significant benefits from larger model size, indicating benefits from improved language understanding, whereas going beyond resolution 224 did not lead to improvements.\nPaliGemma 2: A Family of Versatile VLMs for Transfer\nzs. split\nrand. split\nF1 \nHuman [53]\n95.4\nInstructBLIP (zs.) [18] LXMERT [89]\n65.6 70.1\n61.2\nPaliGemma 2 3B 224px2 PaliGemma 2 10B 224px2\n74.8 79.8\n81.6 86.8\nTable 7 | PaliGemma 2 accuracy on VSR [53] on the zeroshot and random test splits. We show a fine-tuned (LXMERT) and zero-shot (Instruct- BLIP) baseline from the literature.\nFlamingo-CXR [90] Med-Gemini-2D [102]\n13.8 10.1 29.7 20.5 17.5 20.5 28.3 24.4\nPaliGemma 2 3B 896px2 19.9 14.6 31.9 28.8 PaliGemma 2 10B 896px2 17.4 15.0 32.4 29.5\nTable 8 | PaliGemma 2 performance for radiogra- phy report generation on the on the MIMIC-CXR data [23, 33]. We report CIDEr (C), BlEU4 (B), Rouge-L (R), and RadGraph F1-scores [%] [30] (a clinical metric).\n4.8. Radiography report generation\nTo explore the capabilities of PaliGemma 2 mod- els in the medical domain, we apply it to auto- matic chest X-ray report generation, which can be cast as a (long) captioning task on X-ray im- ages. We fine-tune PaliGemma 2 on the MIMIC- CXR dataset [23, 33] which contains 377k images (originating from 228k radiographic studies at the Beth Israel Deaconess Medical Center in Boston, MA) with free-text radiology reports. We use the same train, validation, and test splits as [90]. To improve quality, we use an LLM (Gemini 1.5 pro) to remove mentions of prior X-rays as the model does not have access to those.\nWe measure the RadGraph F1-score [30], which is the F1 score between the entities ex- tracted from the reference report and the gener- ated one using RadGraph. RadGraph takes into account the absence or presence of findings in the report, as well as their relationships to image features. Results are reported on test data held out during training and tuning.\nTable 8 shows the performance of PaliGemma 2 models along with baselines from the litera- ture. PaliGemma 2 obtains a state-of-the-art Rad- Graph score. Increasing resolution and model size both lead to modest improvements.\n4.9. CPU inference and quantization\nCPUs, and briefly present experiments using the gemma.cpp2 framework here. gemma.cpp is a lightweight, portable C++ inference engine that supports 8-bit switched-floating-point quantiza- tion (alternative options for CPU inference include llama.cpp3, XNNPack4, and others).\nTo assess the inference speed for CPU-only in- ference, we run PaliGemma 2 inference on four different architectures with gemma.cpp. We use a checkpoint of PaliGemma 2 3B (224px2) fine- tuned on COCOcap and the example image for PaliGemma in gemma.cpp. The prompt  de- scribe this image  results in a prefill length of 256 + 4 = 260 tokens (for image + text). The out- put response  A large building with two towers on the water  consists of 11 tokens. All runs used batch size 1. 
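The prefill length quoted above follows directly from the patching in Fig. 1; a quick sanity check (patch size and resolutions as stated in the paper, with the 4 prompt tokens of this example):

PATCH = 14  # SigLIP-So400m patch size in pixels

def num_image_tokens(resolution: int, patch: int = PATCH) -> int:
    """Number of image tokens produced by the vision encoder for a square input."""
    return (resolution // patch) ** 2

assert num_image_tokens(224) == 256
assert num_image_tokens(448) == 1024
assert num_image_tokens(896) == 4096

prefill = num_image_tokens(224) + 4  # 4 text tokens for "describe this image" in this example
assert prefill == 260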
The results are presented in Table 9 and give an overview of what can be expected on different processors (for this particular setting).\nFrom evaluations on PaliGemma [9] we already know that going from 32-bit floating point (f32) to 16-bit (bf16) weights is possible without a loss of quality. Here we compare to the gemma.cpp mixed quantization. Table 10 shows a quality comparison for five of the fine-tuning datasets (chosen for coverage of various tasks). We fine- tuned PaliGemma 2 3B (224px2) once for each of these five datasets. (Noticeable differences to Table 13 for the Jax version are the result of us- ing greedy decoding for COCOcap and TextCaps.) We then evaluated the resulting checkpoints both in Jax and in gemma.cpp after quantization. The\nIn some cases we may want to run inference of PaliGemma 2 on devices without accelera- tors. We are interested in the resulting run- times and quality when running inference on\n2https://github.com/google/gemma.cpp 3https://github.com/ggerganov/llama.cpp 4https://github.com/google/XNNPACK\nPaliGemma 2: A Family of Versatile VLMs for Transfer\nWalltime [s]\nTokens/sec\nProcessor\nThreads\nViT\nPrefill Extend Prefill Extend\nApple M1 Max Apple M3 Pro AMD Milan AMD Milan AMD Genoa AMD Genoa\n4+1 7+1 8+1 32+1 8+1 32+1\n1.6 0.8 0.82 0.39 0.36 0.17\n8.2 4.4 4.9 1.8 1.8 0.8\n0.9 0.5 0.64 0.34 0.29 0.27\n32 59 53 144 147 323\n12 22 17 32 37 41\nTable 9 | CPU-only inference speed measurements with gemma.cpp-based implementation on different architectures. Inference of finetuned PaliGemma 2 3B (224px2) with greedy decoding. Prefill is done with 260 tokens and followed by 11 calls to extend during decoding.\nCOCOcap TextCaps AI2D OKVQA DocVQA(val)\nJax, F32, 12.1GB gemma.cpp, quantized, 4.0GB relative metric values [%]\n140.0 139.8\n99.9\n126.3 126.6\n100.2\n75.4 75.6\n100.1\n64.0 64.1\n100.1\n39.8 39.8\n99.9\nTable 10 | Quality comparison between Jax/f32 inference on TPU and quantized gemma.cpp-based inference on CPU. Inference of one fine-tuned PaliGemma 2 3B (224px2) run. Noticeable differences to Table 13 for the Jax version are the result of using greedy decoding for COCOcap and TextCaps. Relative numbers based on metric values before rounding to one decimal.\nrelative quality after quantization shows no prac- tical quality difference.\n[2] H. Agrawal, K. Desai, Y. Wang, X. Chen, R. Jain, M. Johnson, D. Batra, D. Parikh, S. Lee, and P. Anderson. NoCaps: Novel object captioning at scale. In ICCV, 2019.\n5. Conclusion\nWith PaliGemma 2 we present a new family of open-weight models spanning a broad range of model sizes an input resolutions. PaliGemma 2 obtains strong transfer performance across a broad range of captioning, VQA, and video tasks. In particular, the newly added larger variants lead to significant improvements compared to PaliGemma for users with a larger compute bud- get. Furthermore, we show that PaliGemma 2 excels in applications beyond what was consid- ered in PaliGemma, including domains like music, molecules, and medical imaging.\n[3] I. Alabdulmohsin, X. Zhai, A. Kolesnikov, and L. Beyer. Getting vit in shape: Scaling laws for compute-optimal model design. In NeurIPS, 2023.\n[4] J.-B. Alayrac, J. Donahue, P. Luc, A. Miech, I. Barr, Y. Hasson, K. Lenc, A. Men- sch, K. Millican, M. Reynolds, R. Ring, E. Rutherford, S. Cabi, T. Han, Z. Gong, S. Samangooei, M. Monteiro, J. Menick, S. Borgeaud, A. Brock, A. Nematzadeh, S. Sharifzadeh, M. Binkowski, R. Barreira, O. Vinyals, A. Zisserman, and K. Simonyan. 
Flamingo: a visual language model for few-shot learning. In NeurIPS, 2022.\nReferences\n[1] M. Acharya, K. Kafle, and C. Kanan. Tal- lyQA: Answering complex counting ques- tions. In AAAI, 2019.\n[5] J. Bai, S. Bai, S. Yang, S. Wang, S. Tan, P. Wang, J. Lin, C. Zhou, and J. Zhou. Qwen-VL: A versatile vision- language model for understanding, lo-\nPaliGemma 2: A Family of Versatile VLMs for Transfer\ncalization, arXiv:2308.12966, 2023.\ntext reading, and beyond.\n[6] I. Bello, H. Pham, Q. V. Le, M. Norouzi, and S. Bengio. Neural combinatorial op- timization with reinforcement learning. arXiv:1611.09940, 2016.\n[7] J. Betker, G. Goh, L. Jing, T. Brooks, J. Wang, L. Li, L. Ouyang, J. Zhuang, J. Lee, Y. Guo, et al. Improving image generation with better captions. Technical Report, 2023.\n[8] L. Beyer, X. Zhai, and A. Kolesnikov. https://github.com/\nBig vision. google-research/big_vision, 2022.\n[9] L. Beyer, A. Steiner, A. S. Pinto, A. Kolesnikov, X. Wang, D. Salz, M. Neu- mann, I. Alabdulmohsin, M. Tschannen, E. Bugliarello, T. Unterthiner, D. Keysers, S. Koppula, F. Liu, A. Grycner, A. Grit- senko, N. Houlsby, M. Kumar, K. Rong, J. Eisenschlos, R. Kabra, M. Bauer, M. Bo njak, X. Chen, M. Minderer, P. Voigtlaender, I. Balazevic, J. Puigcerver, P. Papalampidi, O. Henaff, X. Xiong, R. Soricut, J. Harmsen, and X. Zhai. PaliGemma: A versatile 3B VLM for transfer. arXiv:2407.07726, 2024.\nI. Bica,\n[10] A. F. Biten, R. Tito, A. Mafla, L. Gomez, M. Rusinol, C. Jawahar, E. Valveny, and D. Karatzas. Scene text visual question answering. In ICCV, Oct. 2019.\n[14] X. Chen, X. Wang, S. Changpinyo, A. J. Piergiovanni, P. Padlewski, D. Salz, S. Goodman, A. Grycner, B. Mustafa, L. Beyer, A. Kolesnikov, J. Puigcerver, N. Ding, K. Rong, H. Akbari, G. Mishra, L. Xue, A. Thapliyal, J. Bradbury, W. Kuo, M. Seyedhosseini, C. Jia, B. K. Ayan, C. Riquelme, A. Steiner, A. Angelova, X. Zhai, N. Houlsby, and R. Soricut. PaLI: A jointly-scaled multilingual language- image model. arXiv:2209.06794, 2022.\n[15] X. Chen, X. Wang, L. Beyer, A. Kolesnikov, J. Wu, P. Voigtlaender, B. Mustafa, S. Good- I. Alabdulmohsin, P. Padlewski, man, D. Salz, X. Xiong, D. Vlasic, F. Pavetic, K. Rong, T. Yu, D. Keysers, X. Zhai, and R. Soricut. PaLI-3 vision lan- guage models: Smaller, faster, stronger. arXiv:2310.09199, 2023.\n[16] X. Chen,\nJ. Djolonga, P. Padlewski, B. Mustafa, S. Changpinyo, J. Wu, C. R. Ruiz, S. Goodman, X. Wang, Y. Tay, S. Shak- eri, M. Dehghani, D. Salz, M. Lucic, M. Tschannen, A. Nagrani, H. Hu, M. Joshi, B. Pang, C. Montgomery, P. Pietrzyk, M. Ritter, A. J. Piergiovanni, M. Minderer, F. Pavetic, A. Waters, G. Li, I. Alabdul- mohsin, L. Beyer, J. Amelot, K. Lee, A. P. Steiner, Y. Li, D. Keysers, A. Arnab, Y. Xu, K. Rong, A. Kolesnikov, M. Seyedhosseini, A. Angelova, X. Zhai, N. Houlsby, and R. Soricut. PaLI-X: On scaling up a mul- tilingual vision and language model. In CVPR, 2024.\n[11] S. Changpinyo, D. Kukliansy, I. Szpektor, X. Chen, N. Ding, and R. Soricut. All you may need for VQA are image captions. In NAACL, 2022.\n[12] D. L. Chen and W. B. Dolan. Collecting highly parallel data for paraphrase evalu- ation. In ACL, 2011.\n[17] C. K. Ch ng and C. S. Chan. Total-Text: A comprehensive dataset for scene text de- tection and recognition. In ICDAR, 2017.\n[18] W. Dai, J. Li, D. Li, A. M. H. Tiong, J. Zhao, W. Wang, B. Li, P. Fung, and S. Hoi. InstructBLIP: Towards general- purpose vision-language models with in- struction tuning. arxiv:2305.06500, 2023.\n[13] T. Chen, S. Saxena, L. Li, D. J. Fleet, and G. E. 
Hinton. Pix2seq: A language mod- eling framework for object detection. In ICLR, 2022.\n[19] M. Deitke, C. Clark, S. Lee, R. Tripathi, Y. Yang, J. S. Park, M. Salehi, N. Muen- nighoff, K. Lo, L. Soldaini, et al. Molmo and PixMo: Open weights and open data\nPaliGemma 2: A Family of Versatile VLMs for Transfer\nfor state-of-the-art multimodal models. arXiv:2409.17146, 2024.\n[20] K. Desai and J. Johnson. Virtex: Learning visual representations from textual anno- tations. In CVPR, 2021.\n[30] S. Jain, A. Agrawal, A. Saporta, S. Truong, T. Bui, P. Chambon, Y. Zhang, M. P. Lun- gren, A. Y. Ng, C. Langlotz, et al. Rad- Graph: Extracting clinical entities and re- lations from radiology reports. In NeurIPS Datasets and Benchmarks Track, 2022.\n[21] Gemma Team. Gemma: Open models based on gemini research and technology. arXiv:2403.08295, 2024.\n[22] Gemma Team. Gemma 2:\nImproving open language models at a practical size. arXiv:2408.00118, 2024.\n[23] A. L. Goldberger, L. A. Amaral, L. Glass, J. M. Hausdorff, P. C. Ivanov, R. G. Mark, J. E. Mietus, G. B. Moody, C.-K. Peng, and H. E. Stanley. PhysioBank, PhysioToolkit, and PhysioNet: components of a new re- search resource for complex physiologic signals. Circulation, 101(23), 2000.\n[24] Google Cloud.\nIntroduction to Cloud TPU. https://cloud.google.com/ tpu/docs/intro-to-tpu, 20xx. Ac- cessed: 2024-07-04.\n[31] C. Jia, Y. Yang, Y. Xia, Y. Chen, Z. Parekh, H. Pham, Q. V. Le, Y. Sung, Z. Li, and T. Duerig. Scaling up visual and vision- language representation learning with noisy text supervision. In ICML, 2021.\nJ. Qiu, and A. Chaura- [32] G. Jocher, sia. URL Ultralytics YOLO, 2023. https://github.com/ultralytics/ ultralytics.\n[33] A. E. Johnson, T. J. Pollard, S. J. Berkowitz, N. R. Greenbaum, M. P. Lungren, C.-Y. Deng, R. G. Mark, and S. Horng. MIMIC- CXR, a de-identified publicly available database of chest radiographs with free- text reports. Scientific data, 6(1):317, 2019.\n[25] Y. Goyal, T. Khot, D. Summers-Stay, D. Ba- tra, and D. Parikh. Making the V in VQA matter: Elevating the role of image under- standing in Visual Question Answering. In CVPR, 2017.\n[34] O. F. Kar, A. Tonioni, P. Poklukar, A. Kul- shrestha, A. Zamir, and F. Tombari. BRAVE: Broadening the en- coding vision-language models. arXiv:2404.07204, 2024.\nvisual\n[26] D. Gurari, Q. Li, A. J. Stangl, A. Guo, C. Lin, K. Grauman, J. Luo, and J. P. Bigham. VizWiz Grand Challenge: Answering vi- sual questions from blind people. In CVPR, 2018.\n[35] S. Karamcheti, S. Nair, A. Balakrishna, P. Liang, T. Kollar, and D. Sadigh. Pris- Investigating the design matic VLMs: space of visually-conditioned language models. arXiv:2402.07865, 2024.\n[27] T.-Y. Hsu, C. L. Giles, and T.-H. Huang. Scicap: Generating captions for scientific figures. arXiv:2110.11624, 2021.\n[28] Y. Huang, N. Lu, D. Chen, Y. Li, Z. Xie, S. Zhu, L. Gao, and W. Peng. Improv- ing table structure recognition with visual- alignment sequential coordinate modeling. In CVPR, 2023.\n[36] D. Karatzas, L. Gomez-Bigorda, A. Nico- laou, S. K. Ghosh, A. D. Bagdanov, M. Iwa- mura, J. Matas, L. Neumann, V. R. Chan- drasekhar, S. Lu, F. Shafait, S. Uchida, and E. Valveny. ICDAR 2015 competition on robust reading. In ICDAR, 2015.\n[37] K. Karkkainen and J. Joo. Fairface: Face attribute dataset for balanced race, gen- der, and age for bias measurement and mitigation. In WACV, 2021.\n[29] D. Hudson and C. Manning. GQA: A new dataset for real-world visual reasoning and compositional question answering. CVPR, 2019.\n[38] T. Kawakatsu. 
Multi-cell decoder and mu- tual learning for table structure and char- acter recognition. In ICDAR, 2024.\nPaliGemma 2: A Family of Versatile VLMs for Transfer\n[39] S. Kazemzadeh, V. Ordonez, M. Matten, and T. Berg. ReferItGame: Referring to objects in photographs of natural scenes. In EMNLP, Oct. 2014.\n[49] Y. Li, G. Li, L. He, J. Zheng, H. Li, and Z. Guan. Widget Captioning: Generat- ing natural language description for mo- In EMNLP, bileuser interface elements. 2020.\n[40] A. Kembhavi, M. Salvato, E. Kolve, M. Seo, H. Hajishirzi, and A. Farhadi. A diagram is worth a dozen images. In ECCV, 2016.\n[50] Y. Li, H. Mao, R. Girshick, and K. He. Exploring plain vision transformer back- bones for object detection. In ECCV, 2022.\n[41] S. Kim, P. A. Thiessen, E. E. Bolton, J. Chen, G. Fu, A. Gindulyte, L. Han, J. He, S. He, B. A. Shoemaker, et al. Pubchem substance and compound databases. Nucleic acids research, 44(D1):D1202 D1213, 2016.\n[51] T. Lin, M. Maire, S. J. Belongie, L. D. Bour- dev, R. B. Girshick, J. Hays, P. Perona, D. Ramanan, P. Doll a r, and C. L. Zitnick. Microsoft COCO: common objects in con- text. arXiv:1405.0312, 2014.\n[42] D. P. Kingma and J. Ba.\nAdam: A method for stochastic optimization. arXiv:1412.6980, 2017.\n[52] F. Liu, E. Bugliarello, E. M. Ponti, S. Reddy, N. Collier, and D. Elliott. Visually grounded reasoning across languages and cultures. In EMNLP, Nov. 2021.\n[43] R. Krishna, K. Hata, F. Ren, L. Fei-Fei, and J. Carlos Niebles. Dense-captioning events in videos. In ICCV, 2017.\n[53] F. Liu, G. E. T. Emerson, and N. Collier. Visual spatial reasoning. TACL, 11:635  651, 2023.\n[44] I. Krylov, S. Nosov, and V. Sovrasov. Open images v5 text annotation and yet another mask text spotter. In ACCV, 2021.\n[54] H. Liu, C. Li, Q. Wu, and Y. J. Lee. Visual instruction tuning. In NeurIPS, 2023.\n[45] H. Lauren on, L. Tronchon, M. Cord, and V. Sanh. What matters when models? vision-language building arXiv:2405.02246, 2024.\n[55] S. Lobry, D. Marcos, J. Murray, and D. Tuia. RSVQA: Visual question answering for re- mote sensing data. IEEE Trans. on Geo- science and Remote Sensing, 58(12), Dec. 2020.\n[46] A. Lees, V. Q. Tran, Y. Tay, J. Sorensen, J. Gupta, D. Metzler, and L. Vasserman. A new generation of perspective API: Ef- ficient multilingual character-level trans- formers. arXiv:2202.11176, 2022.\n[56] S. Long, S. Qin, D. Panteleev, A. Bissacco, Y. Fujii, and M. Raptis. Towards end-to- end unified scene text detection and layout analysis. In CVPR, 2022.\n[47] B. Li, H. Zhang, K. Zhang, D. Guo, Y. Zhang, R. Zhang, F. Li, Z. Liu, and C. Li. LLaVA-NeXT: What else instruction tuning influences beyond data?, May 2024. URL https: //llava-vl.github.io/blog/ 2024-05-25-llava-next-ablations/.\nvisual\n[57] S. Long, S. Qin, D. Panteleev, A. Bissacco, Y. Fujii, and M. Raptis. ICDAR 2023 com- petition on hierarchical text detection and recognition. In ICDAR, 2023.\n[58] S. Long, S. Qin, Y. Fujii, A. Bissacco, and M. Raptis. Hierarchical text spotter for joint text spotting and layout analysis. In WACV, 2024.\n[48] J. Li, D. Li, S. Savarese, and S. C. H. Hoi. BLIP-2: bootstrapping language- image pre-training with frozen image en- coders and large language models. In ICML, 2023.\n[59] P. Lu, S. Mishra, T. Xia, L. Qiu, K.-W. Chang, S.-C. Zhu, O. Tafjord, P. Clark, and A. Kalyan. Learn to explain: Multimodal reasoning via thought chains for science question answering. In NeurIPS, 2022.\nPaliGemma 2: A Family of Versatile VLMs for Transfer\n[60] N. T. Ly and A. Takasu. 
An end-to-end multi-task learning model for image-based table recognition. arXiv:2303.08648, 2023.\n[69] Y. Onoe, S. Rane, Z. Berger, Y. Bitton, J. Cho, R. Garg, A. Ku, Z. Parekh, J. Pont- Tuset, G. Tanzer, S. Wang, and J. Baldridge. DOCCI: Descriptions of Connected and Contrasting Images. In ECCV, 2024.\n[61] J. Mao, J. Huang, A. Toshev, O. Camburu, A. L. Yuille, and K. Murphy. Generation and comprehension of unambiguous ob- ject descriptions. In CVPR, 2016.\n[70] H. Pang. 2024. ppaanngggg/yolo-doclaynet.\nJan. YOLO-DocLayNet, URL https://github.com/\n[62] K. Marino, M. Rastegari, A. Farhadi, and R. Mottaghi. OK-VQA: A visual question answering benchmark requiring external knowledge. In CVPR, 2019.\n[71] D. Pavlov, M. Rybalkin, B. Karulin, M. Kozhevnikov, A. Savelyev, and A. Churi- nov. Indigo: Universal cheminformatics API. Journal of Cheminformatics, 3(Suppl 1):P4, 2011.\n[63] A. Masry, X. L. Do, J. Q. Tan, S. Joty, and E. Hoque. ChartQA: A benchmark for ques- tion answering about charts with visual and logical reasoning. In ACL, May 2022.\n[64] M. Mathew, D. Karatzas, R. Man- matha, and C. V. Jawahar. DocVQA: A dataset for VQA on document images. arXiv:2007.00398, 2020.\n[65] M. Mathew, V. Bagal, R. Tito, D. Karatzas, E. Valveny, and C. V. Jawahar. Infograph- icVQA. In WACV, 2022.\n[66] B. McKinzie, Z. Gan, J. Fauconnier, S. Dodge, B. Zhang, P. Dufter, D. Shah, X. Du, F. Peng, F. Weers, A. Belyi, H. Zhang, K. Singh, D. Kang, A. Jain, H. H , M. Schwarzer, T. Gunter, X. Kong, A. Zhang, J. Wang, C. Wang, N. Du, T. Lei, S. Wiseman, G. Yin, M. Lee, Z. Wang, R. Pang, P. Grasch, A. Toshev, and Y. Yang. MM1: methods, analysis & in- sights from multimodal LLM pre-training. arXiv:2403.09611, 2024.\n[72] Z. Peng, W. Wang, L. Dong, Y. Hao, S. Huang, S. Ma, and F. Wei. Kosmos- 2: Grounding multimodal large language models to the world. arXiv:2306.14824, 2023.\n[73] J. Pfeiffer, G. Geigle, A. Kamath, J.-M. Steitz, S. Roth, I. Vuli , and I. Gurevych. xGQA: Cross-lingual visual question an- swering. In ACL, 2022.\n[74] B. Pfitzmann, C. Auer, M. Dolfi, A. S. Nas- sar, and P. Staar. DocLayNet: A large human-annotated dataset for document- layout segmentation. In SIGKDD, 2022.\n[75] A. Piergiovanni, W. Kuo, and A. An- gelova. Pre-training image-language transformers for open-vocabulary tasks. arXiv:2209.04372, 2022.\n[76] Y. Qian, J. Guo, Z. Tu, Z. Li, C. W. Coley, and R. Barzilay. MolScribe: Robust molec- ular structure recognition with image-to- graph generation. J. Chem. Inf. Model., 63 (7), 2023.\n[67] A. Mishra, S. Shekhar, A. K. Singh, and A. Chakraborty. OCR-VQA: Visual question answering by reading text in images. In ICDAR, 2019.\n[68] N. Nayef, F. Yin, I. Bizid, H. Choi, Y. Feng, D. Karatzas, Z. Luo, U. Pal, C. Rigaud, J. Chazalon, et al. ICDAR2017 robust read- ing challenge on multi-lingual scene text detection and script identification - RRC- MLT. In ICDAR, 2017.\n[77] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, G. Krueger, and I. Sutskever. Learning transferable visual models from natural language su- pervision. In ICML, 2021.\n[78] H. Rashkin, V. Nikolaev, M. Lamm, L. Aroyo, M. Collins, D. Das, S. Petrov, G. S. Tomar, I. Turc, and D. Reitter. Measuring\nPaliGemma 2: A Family of Versatile VLMs for Transfer\nattribution in natural language generation models. Computational Linguistics, 49(4): 777 840, 2023.\nvision models with task rewards. In ICML, 2023.\n[79] A. R os-Vila, D. Rizo, J. M. I esta, and J. Calvo-Zaragoza. 
End-to-end optical mu- sic recognition for pianoform sheet music. IJDAR, 26(3):347 362, 2023.\n[80] A. R os-Vila, J. Calvo-Zaragoza, and T. Pa- quet. Sheet Music Transformer: End- to-end optical music recognition beyond monophonic transcription. In ICDAR, 2024.\n[81] D. Schwenk, A. Khandelwal, C. Clark, K. Marino, and R. Mottaghi. A- OKVQA: A benchmark for visual ques- tion answering using world knowledge. arXiv:2206.01718, 2022.\n[89] H. Tan and M. Bansal. LXMERT: Learn- ing cross-modality encoder representa- tions from transformers. In EMNLP-IJCNLP, 2019.\n[90] R. Tanno, D. Barrett, A. Sellergren, S. Ghaisas, S. Dathathri, A. See, J. Welbl, K. Singhal, S. Azizi, T. Tu, M. Schaeker- mann, R. May, R. Lee, S. Man, Z. Ahmed, S. Mahdavi, Y. Matias, J. Barral, A. Es- lami, D. Belgrave, V. Natarajan, S. Shetty, P. Kohli, P.-S. Huang, A. Karthikesalingam, and I. Ktena. Collaboration between clini- cians and vision language models in radi- ology report generation. Nature Medicine, 2024.\n[82] O. Sidorov, R. Hu, M. Rohrbach, and A. Singh. TextCaps: A dataset for image captioning with reading comprehension. In ECCV, 2020.\n[91] A. V. Thapliyal, J. Pont Tuset, X. Chen, and R. Soricut. Crossmodal-3600: A mas- sively multilingual multimodal evaluation dataset. In EMNLP, 2022.\n[83] A. Singh, V. Natarjan, M. Shah, Y. Jiang, X. Chen, D. Parikh, and M. Rohrbach. To- wards VQA models that can read. In CVPR, 2019.\n[92] S. Tong, E. Brown, P. Wu, S. Woo, M. Mid- depogu, S. C. Akula, J. Yang, S. Yang, A. Iyer, X. Pan, A. Wang, R. Fergus, Y. Le- Cun, and S. Xie. Cambrian-1: A Fully Open, Vision-Centric Exploration of Multi- modal LLMs. arXiv:2406.16860, 2024.\n[84] A. Singh, G. Pang, M. Toh, J. Huang, W. Galuba, and T. Hassner. TextOCR: To- wards large-scale end-to-end reasoning for arbitrary-shaped scene text. In CVPR, 2021.\n[93] M. Tschannen, M. Kumar, A. Steiner, X. Zhai, N. Houlsby, and L. Beyer. Image captioners are scalable vision learners too. In NeurIPS, 2023.\n[85] B. Smock, R. Pesala, and R. Abraham. GriTS: Grid table similarity metric for table structure recognition. arXiv:2203.12555, 2022.\n[94] B. Wan, M. Tschannen, Y. Xian, F. Pavetic, I. Alabdulmohsin, X. Wang, A. S. Pinto, A. Steiner, L. Beyer, and X. Zhai. LocCa: Visual pretraining with location-aware cap- tioners. In NeurIPS, 2024.\n[86] B. Smock, R. Pesala, and R. Abraham. Aligning benchmark datasets for table structure recognition. In ICDAR, 2023.\n[87] A. Suhr, S. Zhou, A. Zhang, I. Zhang, H. Bai, and Y. Artzi. A corpus for reason- ing about natural language grounded in photographs. In ACL, 2019.\n[88] A. Susano Pinto, A. Kolesnikov, Y. Shi, L. Beyer, and X. Zhai. Tuning computer\n[95] B. Wang, G. Li, X. Zhou, Z. Chen, T. Gross- man, and Y. Li. Screen2words: Automatic mobile ui summarization with multimodal learning. In Symposium on User Interface Software and Technology, 2021.\n[96] J. Wang, Z. Yang, X. Hu, L. Li, K. Lin, Z. Gan, Z. Liu, C. Liu, and L. Wang. GIT: A generative image-to-text transformer for vision and language. TMLR, 2022.\nPaliGemma 2: A Family of Versatile VLMs for Transfer\n[97] X. Wang, J. Wu, J. Chen, L. Li, Y.-F. Wang, and W. Y. Wang. VaTeX: A large-scale, high-quality multilingual dataset for video- and-language research. In ICCV, 2019.\n[105] J. Yu, Y. Xu, J. Y. Koh, T. Luong, G. Baid, Z. Wang, V. Vasudevan, A. Ku, Y. Yang, B. K. Ayan, et al. Scaling autoregressive models for content-rich text-to-image generation. TMLR, 2022.\n[98] Z. Wang, J. Yu, A. W. Yu, Z. Dai, Y. Tsvetkov, and Y. Cao. 
SimVLM: Simple visual lan- guage model pretraining with weak super- vision. In ICLR, 2022.\n[106] L. Yu, P. Poirson, S. Yang, A. C. Berg, and T. L. Berg. Modeling context in referring expressions. In ECCV, 2016.\n[99] D. Weininger. SMILES, a chemical lan- guage and information system. 1. Introduc- tion to methodology and encoding rules. Journal of Chemical Information and Com- puter Sciences, 28(1):31 36, 1988.\n[107] Z. Yu, D. Xu, J. Yu, T. Yu, Z. Zhao, Y. Zhuang, and D. Tao. ActivityNet-QA: A dataset for understanding complex web videos via question answering. In AAAI, 2019.\n[100] D. Xu, Z. Zhao, J. Xiao, F. Wu, H. Zhang, X. He, and Y. Zhuang. Video question answering via gradually refined attention over appearance and motion. In ACM Mul- timedia, 2017.\n[101] J. Xu, T. Mei, T. Yao, and Y. Rui. MSR- VTT: A large video description dataset for bridging video and language. In CVPR, 2016.\n[108] X. Zhai, B. Mustafa, A. Kolesnikov, and L. Beyer. Sigmoid loss for language image pre-training. In ICCV, 2023.\n[109] H. Zhang, M. Gao, Z. Gan, P. Dufter, N. Wenzel, F. Huang, D. Shah, X. Du, B. Zhang, Y. Li, et al. MM1.5: Methods, analysis & insights from multimodal LLM fine-tuning. arXiv:2409.20566, 2024.\n[102] L. Yang, S. Xu, A. Sellergren, T. Kohlberger, Y. Zhou, I. Ktena, A. Kiraly, F. Ahmed, F. Hormozdiari, T. Jaroensri, E. Wang, E. Wulczyn, F. Jamil, T. Guidroz, C. Lau, S. Qiao, Y. Liu, A. Goel, K. Park, A. Aghar- wal, N. George, Y. Wang, R. Tanno, D. G. T. Barrett, W.-H. Weng, S. S. Mah- davi, K. Saab, T. Tu, S. R. Kalidindi, M. Etemadi, J. Cuadros, G. Sorensen, Y. Matias, K. Chou, G. Corrado, J. Barral, S. Shetty, D. Fleet, S. M. A. Eslami, D. Tse, S. Prabhakara, C. McLean, D. Steiner, R. Pilgrim, C. Kelly, S. Azizi, and D. Golden. Advancing multimodal medical capabili- ties of Gemini. arXiv:2405.03162, 2024.\n[103] Q. Ye, H. Xu, J. Ye, M. Yan, A. Hu, H. Liu, Q. Qian, J. Zhang, and F. Huang. mPLUG- Owl2: Revolutionizing multi-modal large language model with modality collabora- tion. In CVPR, 2024.\n[110] Y. Zhao, A. Gu, R. Varma, L. Luo, C. Huang, M. Xu, L. Wright, H. Shojanazeri, M. Ott, S. Shleifer, A. Desmaison, C. Balioglu, P. Damania, B. Nguyen, G. Chauhan, Y. Hao, A. Mathews, and S. Li. Pytorch FSDP: experiences on scaling fully sharded data parallel. VLDB, 2023.\n[111] X. Zheng, D. Burdick, L. Popa, P. Zhong, and N. X. R. Wang. Global Table Extractor (GTE): A framework for joint table identifi- cation and cell structure recognition using visual context. In WACV, 2021.\n[112] X. Zhong, E. ShafieiBavani, and A. Ji- meno Yepes. Image-based table recog- nition: Data, model, and evaluation. In ECCV, 2020.\n[104] H. You, H. Zhang, Z. Gan, X. Du, B. Zhang, Z. Wang, L. Cao, S.-F. Chang, and Y. Yang. Ferret: Refer and ground anything any- where at any granularity. In ICLR, 2024.\nPaliGemma 2: A Family of Versatile VLMs for Transfer\nContributions and Acknowledgments\nModel development contributors\nMarketing Glenn Cameron Natalie Dao\nCore Contributors Andreas Steiner Andr  Susano Pinto Michael Tschannen\nKaggle D. 
Sculley Nilay Chauhan Brenda Flynn Kinjal Parekh\nContributors Daniel Keysers Xiao Wang Yonatan Bitton Alexey Gritsenko Matthias Minderer Anthony Sherbondy Shangbang Long Siyang Qin Reeve Ingle Emanuele Bugliarello Sahar Kazemzadeh Thomas Mesnard Ibrahim Alabdulmohsin Lucas Beyer Xiaohua Zhai\nDeveloper Relations Jetha Chan Joe Fernandez Ju-yeong Ji\nKeras Divyashree Sreepathihalli Hongyu Chiu\nVertex AI Keelin McDonell\nLead Andreas Steiner\nEthics and Safety Antonia Paterson Pankil Botadra\nAcknowledgments Jan Wassenberg Basil Mustafa\nHugging Face Partners Merve Noyan Pedro Cuenca Pablo Montalvo\nModel release contributors and general support\nGemma Model Tris Warkentin Alek Andreev Armand Joulin Victor Cotruta Sanah Choudhry Nathan Byrd\nNvidia Partners Dong Meng Manoj Kilaru Shyamala Prayaga Ryan Timbrook Anna Warno\nOllama Partners Michael Chiang Jeffrey Morgan\nOpen Models Success Luiz Gustavo Martins Kat Black Phil Culliton Chris Perry D. Sculley Sara Smoot\nExecutive Sponsors Raia Hadsell Joelle Barral Jeremiah Harmsen Mat Velloso Allen Hutchison\nPaliGemma 2: A Family of Versatile VLMs for Transfer\nA. Tasks\nThis section provides one training example for the transfer tasks that were added in PaliGemma 2 in addition to the tasks considered in [9].\nFigure 6 | Test set example from Total-Text [17] with PaliGemma 2 3B 896px2 predictions.\nIn millions\nMay 31, 2015 Securities in an unrealized loss position for less than twelve months Gross unreal- ized losses\nFair value\nSecurities in an unrealized loss position for more than twelve months Gross unreal- ized losses\nFair Value\nTotal\nGross unreal- ized losses\nFair Value\nType of issue: General obliga- tion municipal bonds Revenue mu- nicipal bonds Total\n$(3.8)\n$(3.2)\n$(7.0)\n$355.1\n361.6\n$896.7\n$(0.5)\n$(0.5)\n$26.3\n$26.3\n$(4.3)\n(3.2)\n$(7.5)\n$561.4\n361.6\n$923.0\nFigure 7 | Original image from FinTabNet [111] with predicted cell content boxes (green), and resulting PaliGemma 2 model prediction.\nPaliGemma 2: A Family of Versatile VLMs for Transfer\nFigure 8 | Example of a rendered molecule with the corresponding SMILES string CC1([C@@H]([C@@H](C2=C(O1)C=CC(=C2)C(C(F)(F)F)(F)F)N3CCCCC3=O)O)C.\nFigure 9 | Example of a pianoform sheet with its **kern transcription (source https://www. humdrum.org/guide/ch02/).\nPaliGemma 2: A Family of Versatile VLMs for Transfer\nModel\nCaption\nInstructBLIP\nThe image features a brown horse standing in a dirt area, with a saddle on its back. The horse appears to be well-trained, as it is standing still and paying attention to its surroundings. There are several people visible in the scene, including one person standing near the horse s left side, another person standing near the horse s right side, and a third person standing further away from the horse. In addition to the people, there are two cars parked in the background, one on the left side and another on the right side of the image.\nmPLUG-Owl2\nIn this image we can see a horse on the ground. In the background there is a shed and we can see trees.\nMiniGPT-4\nThis image is of a horse standing in the dirt near a building. The horse has a saddle on its back and is wearing a number 5 on its forehead. There is a barn or stable visible in the background, and some trees and shrubs in the foreground.\nLLaVA-1.5\nThe image features a brown horse standing in a dirt field. The horse is wearing a saddle and bridle, indicating that it is ready for riding. The saddle is adorned with stars, adding a unique touch to the horse s appearance. 
The horse appears to be calmly standing in the field, possibly waiting for its rider or simply enjoying the outdoors.\nVILA\nIn the image, a majestic brown horse stands in a fenced area, its coat gleaming under the sunlight. The horse is adorned with a black saddle and bridle, ready for a ride. The number 55 is prominently displayed on its side, perhaps indicating its racing number or a unique identifier. The horse's tail swishes gently, adding a sense of movement to the scene. The horse is positioned in front of a stone wall, which provides a rustic backdrop to the image. The wall's texture contrasts with the smoothness of the horse's coat, creating a visually interesting composition. The ground beneath the horse is covered in dirt, suggesting that this might be a training or riding area. The fence in the background further reinforces this idea, providing a safe and controlled environment for the horse. Overall, this image captures a moment of calm before what could be an exciting ride, with the horse poised and ready for action.\nPaLI-5B (FT on DOCCI)\nA side view of a brown horse that is standing on a dirt ground. The horse is facing to the right and has a black saddle on it. The horse has a white star on its face. The horse has a bridle around its neck. The horse is standing on a dirt ground that is covered in white rocks. There is a stone wall behind the horse. There is a metal shed behind the horse. There are trees behind the shed.\nPaliGemma 3B 448px2\nA side view of a brown horse that is standing on a dirt ground. The horse is facing to the left and has a black saddle on its back. The saddle is black and has a brown leather strap on it. The horse has a black rope around its neck. The horse has a white star hanging from its bridle. The horse has a white number \"55\" written on its back. The horse has white hooves. The horse has a black rope wrapped around its legs. The horse has a brown rope wrapped around its head. The horse has a black rope wrapped around its tail. The horse is standing on a dirt ground that has small white rocks on it. Behind the horse is a stone wall that is made up of different sized rocks. Behind the stone wall is a metal gate that is being held up by metal beams. Behind the gate is a tree line that is made up of green trees.\nPaliGemma 2 3B 448px2\nA brown horse is standing in a dirt area with rocks scattered around. The horse has a black saddle on its back and a white star painted on its chest. The number \"55\" is painted on the side of the horse in white. A stone building is behind the horse. A metal structure is in the background of the image with a wooden roof over it. Trees are in the background of the image as well.\nPaliGemma 2 10B 448px2\nA brown horse is standing in a dirt area with small rocks. The horse has a black saddle on its back and a white star painted on its side. The horse has \"55\" written on its back in white. There is a pile of horse manure in front of the horse. There is a stone wall behind the horse. There is a wooden structure with a metal roof behind the stone wall.
There are trees in the background.\nFigure 10 | Example DOCCI image and captions generated by PaliGemma 2 models and baselines, with non-entailment sentences highlighted in red.\nIndication\nRadiologist report\nPaliGemma 2 3B 896px2 prediction\nINDICATION: Woman with cardiomyopathy and cdiff with acute desaturation and dyspnea // PE, pulmonary edema, vs aspiration PE, pulmonary edema, vs aspiration.\nIMPRESSION: Enlargement of the cardiac silhouette with pulmonary edema. Bilateral pleural effusions, more prominent on the left.\nFINDINGS: There is substantial enlargement of the cardiac silhouette with pulmonary edema. Retrocardiac opacification is consistent with volume loss in the left lower lobe and pleural effusion. In the appropriate clinical setting, superimposed pneumonia would have to be considered.\nFigure 11 | Example from the MIMIC-CXR [23, 33] validation set along with a PaliGemma 2 prediction.\nB. Transfer and evaluation details\nB.1. Text detection and recognition\nIn all experiments, we fine-tune the checkpoints for 15k steps with a batch size of 256 on 256 TPU-v5e chips. The maximum sequence length is set to 2048. We experiment with learning rates {0.01, 0.05, 0.1, 0.5, 1.0} × 10^-4 and find that 10^-5 gives the best results. We also found that using label smoothing of 0.1 improves the results. The best results are obtained with resolution 896px2.\nB.2. Table Structure Recognition\nWe use the same transfer setup and hyperparameter range as for text recognition described in Sec. B.1, except that we set the maximum output length to 4096 and do not use label smoothing. The optimal fine-tuning learning rate is 10^-4.\nPreprocessing The cropped table input images are padded to square shape with white pixels and resized to the target image resolution. Cell bounding boxes of non-empty table cells are encoded using four PaliGemma location tokens of the form <locDDDD>, where DDDD encodes a quantized image location in the range 0000 to 1023. Boxes are specified using a special coords=\"<locXMIN><locYMIN><locXMAX><locYMAX>\" attribute of table cell <td> HTML tags. Training examples with invalid table structure and overlapping cell bounding boxes are skipped. Additional corrections of cell bounding box annotations and cell text annotations are applied to FinTabNet training examples using information from the source PDFs, following a similar approach as [86]. As is common in the literature [38], no filtering is applied to the test splits we report results on.
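To make this cell-location encoding concrete, the following minimal Python sketch (our own illustration, not code from the paper or its released tooling; the function names and the assumption that box coordinates are already normalized to the padded square image are ours) quantizes a box into four <locDDDD> tokens and attaches them to a <td> tag:\ndef quantize(coord: float) -> int:\n    # Map a coordinate normalized to [0, 1] onto one of 1024 bins (0000-1023).\n    return min(int(coord * 1024), 1023)\n\ndef loc_token(bin_index: int) -> str:\n    # PaliGemma location tokens are written as <locDDDD> with zero padding.\n    return f'<loc{bin_index:04d}>'\n\ndef cell_td(text: str, xmin: float, ymin: float, xmax: float, ymax: float) -> str:\n    # Encode one non-empty table cell as a <td> tag carrying its quantized box.\n    coords = ''.join(loc_token(quantize(v)) for v in (xmin, ymin, xmax, ymax))\n    return f'<td coords=\"{coords}\">{text}</td>'\n\n# Example: a cell spanning roughly the upper-left area of the padded square image.\nprint(cell_td('Total', 0.12, 0.40, 0.22, 0.46))\n# -> <td coords=\"<loc0122><loc0409><loc0225><loc0471>\">Total</td>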
B.3. Molecule structure recognition\nIn all experiments, we fine-tune the pretrained checkpoint for 30k steps with batch size 256 using 256 TPU-v5e chips. The learning rate is set to 10^-4, label smoothing to 0.1, and the maximum output length is 256. We pad the images to square shape with white pixels and resize them to the target image resolution.\nB.4. Optical music score recognition\nWe follow the training setup described in Sec. B.3 except that we use maximum output length 1024.\nB.5. Generating long, fine-grained captions (DOCCI)\nWe rely on the transfer protocol and hyperparameters suggested in [9, Sec. 3.2.4.].\nHuman evaluation protocol To evaluate the factual grounding of the generated captions, we conduct human evaluations assessing the relationship between each sentence and the corresponding image. Raters are presented with highlighted sentences and asked, \"What is the relationship of the highlighted sentence with respect to the image?\". They then select from four options: \"Entailment\", \"Neutral\", \"Contradiction\", and \"Nothing to assess\", categories adapted from the framework in [78] for evaluating the factual alignment of text and visual content. For example, the statement \"The pig has black, rounded hooves on its front and back feet and a pink nose\" (Fig. 12) would be rated as \"Contradiction\", as the image clearly shows pink hooves. Figure 12 illustrates the annotation interface.\nFigure 12 | Annotation interface used for human evaluation of image description accuracy. Raters assess the relationship between generated sentences and the corresponding image.\nEach sentence was rated by five individuals and the majority agreement was used as the rating result. The overall binary agreement is 0.8407, indicating the proportion where all raters agree on the \"Entailment\" category. We refer to both \"Contradiction\" and \"Neutral\" as \"Non-entailment\". Examples of human evaluation results can be found in Table 4. We use the proportion of \"Non-entailment\" sentences to select the most factually accurate models.\nB.6. Spatial reasoning\nWe fine-tune the pretrained checkpoint with batch size 1024 using 64 TPU-v5e chips. The maximum output length is set to 18, which covers the training target outputs. We explore learning rates in {0.1, 0.2, 1.0, 3.0} × 10^-6, weight decay in {0.1, 0.3, 1.0} × 10^-6, dropout probability in {0.0, 0.1, 0.2} and epochs in {1, 3, 5, 10, 15, 30}.\nB.7. Radiography report generation\nReports in the MIMIC-CXR dataset [23, 33] typically have the format INDICATIONS: {...}. FINDINGS: {...}. IMPRESSIONS: {...}, where indications explain why the chest X-ray was ordered as clinical context for the radiologist, findings enumerate salient features of the image, and impressions summarize the radiologist's interpretation of the findings.\nWe train on the full reports and during prediction emulate the clinical workflow by providing the indications as a prefix to the model. The model then predicts the findings and impressions sections (see the illustrative sketch at the end of this subsection).\nAfter initial exploration based on PaliGemma 2 at 448px2 resolution, we find that fine-tuning for 8 epochs with learning rate 5 × 10^-6 and without label smoothing, dropout, or weight decay leads to good results when combined with greedy decoding. We fix these settings and sweep the learning rate again for higher resolutions and model sizes, considering learning rates in {0.03, 0.1, 0.3, 1.0, 5.0} × 10^-4.
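As a rough illustration of this setup (our own sketch, not released code; the exact section headers, separators, and field names are assumptions based on the report format described above), the training target and inference prefix could be assembled as follows:\ndef training_example(report: dict) -> tuple[str, str]:\n    # Train on the full report: the indication acts as the prefix (conditioning),\n    # while findings and impression form the suffix the model learns to generate.\n    prefix = 'INDICATION: ' + report['indication']\n    suffix = 'FINDINGS: ' + report['findings'] + ' IMPRESSION: ' + report['impression']\n    return prefix, suffix\n\ndef inference_prefix(indication: str) -> str:\n    # At prediction time we emulate the clinical workflow: only the indication is\n    # available, and the model completes the findings and impression sections.\n    return 'INDICATION: ' + indication\n\n# Invented example values, for illustration only.\nprefix, suffix = training_example({\n    'indication': 'Dyspnea, evaluate for pulmonary edema.',\n    'findings': 'Enlargement of the cardiac silhouette with pulmonary edema.',\n    'impression': 'Findings consistent with pulmonary edema.',\n})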
C. Object detection\nObject detection has been used as a pre-training task in all members of the PaLI and PaliGemma family and improves downstream performance across a wide range of tasks [14]. In transfers, PaliGemma performs at or close to the state of the art on localization tasks such as referring expression comprehension and segmentation. This raises the question of how well PaliGemma performs on classical object detection tasks. We tested this by transferring PaliGemma to MS COCO [51] and to the DocLayNet document layout detection benchmark [74].\n224px2: PG1 3B: COCO 28.7, DocLayNet 50.8; PG2 3B: COCO 30.4, DocLayNet 46.7; PG2 10B: COCO 30.3, DocLayNet 50.4\n448px2: PG1 3B: COCO 37.0, DocLayNet 64.1; PG2 3B: COCO 38.5, DocLayNet 62.5; PG2 10B: COCO 39.2, DocLayNet 63.5\n896px2: PG1 3B: COCO 41.1, DocLayNet 66.5; PG2 3B: COCO 42.3, DocLayNet 66.1; PG2 10B: COCO 43.6, DocLayNet 66.0\nTable 11 | Mean average precision (mAP) after transfer to detection tasks. PG1 and PG2 refer to PaliGemma [9] and PaliGemma 2, respectively.\nFor both tasks, we use a transfer strategy inspired by pix2seq's sequence augmentation approach [13]. We use the prefix \"detect all classes\\n\". In the suffix (target sequence), we first provide box coordinates and class names for all annotated objects, in random order. The suffix is then filled up to the maximum sequence length with noise boxes, where each noise box consists of random coordinates and a dedicated <noise> token in place of the class name. During training, no loss is applied to the coordinate tokens of the noise boxes, while the <noise> class tokens receive a loss as usual. This augmentation trains the model to output a larger number of boxes. In addition, it provides a mechanism for the model to represent the confidence that a prediction represents a real object, in the form of the probability assigned to the <noise> token. During inference, the <noise> and <EOS> tokens are excluded from sampling. The likelihood of the class tokens is used as a confidence score (a schematic sketch of this target construction is given at the end of this section).\nFor COCO, we train for 50 epochs. Results are provided in Table 11. As expected, performance strongly depends on resolution. We also observe small but consistent improvements from better language models. Performance at 896px2 is roughly on par with prior sequence-based approaches [13], but lags behind specialized detection architectures like ViTDet [50].\nFor DocLayNet, we follow the same sequence augmentation approach and train for 50 epochs. Results are similar to COCO in that performance increases with resolution and Gemma 2 model size, although Gemma 1 performs on par with Gemma 2 on this task (Table 11). Similar to COCO, specialized detectors perform better on this task (e.g. YOLOv11 [32] reaches 79.5 mAP [70]).\nThese results show that, in contrast to many other tasks, classical detection poses a challenge to general-purpose VLMs like PaliGemma. We hypothesize that the limiting factor is not the model's intrinsic object understanding, since it performs well on visual question answering and referring expression comprehension tasks. Instead, performance may be limited by a mismatch between the Average Precision metric, which rewards large numbers of predictions and accurate confidence scores, and the language modeling objective. Fine-tuning with a task-specific reward [88] could address this limitation, but is beyond the scope of the simple transfer approach we propose for PaliGemma.
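The following minimal Python sketch (our own illustration of the sequence augmentation described above; the <locDDDD> spelling and the <noise> placeholder follow the text, while the coordinate order, separator, and box budget are assumptions) shows how such a detection target could be assembled:\nimport random\n\nMAX_BOXES = 32  # illustrative budget; in practice the suffix is filled up to the maximum sequence length\n\ndef loc(v: float) -> str:\n    # Quantize a normalized coordinate into one of 1024 <locDDDD> tokens.\n    return f'<loc{min(int(v * 1024), 1023):04d}>'\n\ndef box_entry(ymin: float, xmin: float, ymax: float, xmax: float, label: str) -> str:\n    return loc(ymin) + loc(xmin) + loc(ymax) + loc(xmax) + ' ' + label\n\ndef detection_target(annotations: list) -> str:\n    # annotations: (ymin, xmin, ymax, xmax, class_name) tuples in normalized coordinates.\n    boxes = list(annotations)\n    random.shuffle(boxes)  # real objects first, in random order\n    parts = [box_entry(*b) for b in boxes]\n    # Pad with noise boxes: random coordinates plus a dedicated <noise> class token.\n    # During training, the coordinate tokens of noise boxes would receive no loss,\n    # while their <noise> class tokens are trained as usual.\n    while len(parts) < MAX_BOXES:\n        y0, x0 = random.random(), random.random()\n        y1 = min(1.0, y0 + random.random())\n        x1 = min(1.0, x0 + random.random())\n        parts.append(box_entry(y0, x0, y1, x1, '<noise>'))\n    return ' ; '.join(parts)\n\ntarget = detection_target([(0.10, 0.20, 0.85, 0.90, 'horse')])\nAt inference time one would sample with the <noise> and <EOS> tokens masked out and read a per-box confidence from the class-token likelihood, as described above.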
D. Ethics and Safety\nBesides quality-related metrics, we also evaluate the new PaliGemma 2 VLMs with respect to a number of categories relevant to ethics and safety. These evaluations include prompts covering child safety, content safety and representational harms, following the approach used in Gemma 2 [22], but with image captioning and visual question answering (VQA) setups.\nIn addition, we also follow the setup used in [15] and use the Perspective API [46] with threshold > 0.8 to detect the presence of toxicity, profanity, and other potential issues in the image captions generated by PaliGemma 2 VLMs across images sourced from the FairFace dataset [37]. We report the maximum and median values observed across subgroups for each of the perceived gender, ethnicity, and age attributes. Table 12 shows the overall results. Overall, we observe a low level of toxicity and profanity, among other harms, across all slices and models. In addition, all PaliGemma 2 models perform comparably.\nMetric\nPerceived Gender\nEthnicity\nAge Group\n3B\n10B\n28B\n3B\n10B\n28B\n3B\n10B\n28B\nMaximum\nToxicity 0.14 0.15 0.19 0.29 0.39 0.39 0.26 0.18 0.32 Identity Attack 0.04 0.02 0.02 0.13 0.06 0.06 0.06 0.03 0.06 0.17 0.25 0.17 0.37 0.52 0.52 0.27 0.39 0.24 Insult 0.55 0.43 0.57 0.83 0.48 0.48 0.64 0.43 0.64 Threat 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 Profanity\nMedian\nToxicity 0.13 0.10 0.18 0.07 0.07 0.14 0.12 0.08 0.12 Identity Attack 0.02 0.01 0.02 0.00 0.00 0.00 0.00 0.00 0.00 0.15 0.23 0.14 0.14 0.17 0.13 0.09 0.18 0.16 Insult 0.35 0.27 0.41 0.28 0.19 0.42 0.27 0.31 0.40 Threat 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 Profanity\nTable 12 | Safety statistics for captions generated by PaliGemma 2 VLMs on FairFace [37] using the Perspective API [46]. Numbers indicate the fraction of instances with threshold > 0.8 in [%], i.e. a value of e.g. 0.09 means 0.09%.\nE. Detailed results\nFigure 13 | Same data as in Figure 3 and Table 13. The left plot shows relative improvement when changing model size or resolution. The right plot shows the same improvements, but expressed in terms of error reduction. For saturated benchmarks, error reduction is a better metric for model improvement. Benchmarks without a clear normalization to a percentage (such as CIDEr scores) are not shown.
Axes are in range [ 1, 100].\nPaliGemma 2: A Family of Versatile VLMs for Transfer\n224px2 10B\n28B\n448px2 10B\n28B\nAI2D [40] AOKVQA-DA (val) [81] AOKVQA-MC (val) [81] ActivityNet-CAP [43] ActivityNet-QA [107] COCO-35L (avg34) [91] COCO-35L (en) [91] COCOcap[51] ChartQA (aug) [63] ChartQA (human) [63] CountBenchQA [9] DocVQA (val) [64] GQA[29] InfoVQA (val) [65] MARVL (avg5) [52] MSRVTT-CAP [101] MSRVTT-QA [100] MSVD-QA [12] NLVR2 [87] NoCaps [2] OCR-VQA [67] OKVQA [62] RSVQA-hr (test) [55] RSVQA-hr (test2) [55] RSVQA-lr [55] RefCOCO (testA) [106] RefCOCO (testB) [106] RefCOCO (val) [106] RefCOCO+ (testA) [39] RefCOCO+ (testB) [39] RefCOCO+ (val) [39] RefCOCOg (test) [61] RefCOCOg (val) [61] ST-VQA (val) [10] SciCap [27] ScienceQA [59] Screen2Words [95] TallyQA (complex) [1] TallyQA (simple) [1] TextCaps [82] TextVQA (val) [83] VATEX [97] VQAv2 (minival) [25] VizWizVQA (val) [26] WidgetCap [49] XM3600 (avg35) [91] XM3600 (en) [91] xGQA (avg7) [73]\n74.7 ( 0.5) 83.1 ( 0.4) 64.2 ( 0.5) 68.9 ( 0.3) 79.7 ( 1.0) 83.7 ( 1.1) 34.2 ( 0.3) 35.9 ( 0.5) 53.2 ( 0.4) 51.3 ( 0.2) 113.9 ( 0.2) 115.8 ( 0.0) 138.4 ( 0.2) 140.8 ( 0.3) 141.3 ( 0.5) 143.7 ( 0.2) 74.2 ( 0.8) 74.4 ( 0.7) 48.4 ( 1.1) 42.0 ( 0.3) 84.0 ( 1.4) 81.0 ( 1.0) 43.9 ( 0.6) 39.9 ( 0.3) 67.2 ( 0.2) 66.2 ( 0.3) 33.6 ( 0.2) 25.2 ( 0.2) 89.5 ( 0.2) 83.5 ( 0.2) 72.1 ( 0.5) 68.5 ( 1.3) 51.9 ( 0.1) 50.5 ( 0.1) 61.1 ( 0.2) 62.5 ( 0.2) 93.9 ( 0.2) 91.4 ( 0.1) 123.1 ( 0.3) 126.3 ( 0.4) 74.7 ( 0.1) 73.4 ( 0.0) 68.0 ( 0.1) 64.2 ( 0.1) 92.6 ( 0.0) 92.7 ( 0.1) 90.8 ( 0.1) 90.9 ( 0.1) 92.8 ( 0.6) 93.0 ( 0.4) 77.2 ( 0.1) 75.7 ( 0.2) 74.2 ( 0.3) 71.0 ( 0.3) 75.9 ( 0.1) 73.4 ( 0.1) 74.7 ( 0.2) 72.7 ( 0.2) 68.4 ( 0.3) 64.2 ( 0.2) 72.0 ( 0.2) 68.6 ( 0.1) 71.9 ( 0.1) 69.0 ( 0.2) 71.4 ( 0.2) 68.3 ( 0.3) 61.9 ( 0.1) 64.3 ( 0.4) 165.1 ( 0.5) 159.5 ( 0.7) 96.1 ( 0.3) 98.2 ( 0.2) 113.3 ( 0.8) 117.8 ( 0.7) 73.4 ( 0.1) 70.3 ( 0.3) 81.8 ( 0.1) 83.2 ( 0.1) 127.5 ( 0.3) 137.9 ( 0.3) 64.0 ( 0.3) 59.6 ( 0.3) 82.7 ( 0.5) 80.8 ( 0.4) 84.3 ( 0.2) 83.0 ( 0.2) 76.4 ( 0.4) 78.1 ( 0.4) 138.1 ( 0.7) 139.8 ( 1.0) 44.5 ( 0.1) 42.8 ( 0.1) 80.7 ( 0.3) 79.8 ( 0.7) 61.4 ( 0.1) 58.6 ( 0.2)\n83.2 ( 0.7) 70.2 ( 0.2) 84.7 ( 0.8)\n116.5 ( 0.1) 142.4 ( 0.4) 144.0 ( 0.3) 68.9 ( 0.6) 46.8 ( 0.6) 86.4 ( 1.6) 44.9 ( 0.4) 67.3 ( 0.2) 36.4 ( 0.1) 90.6 ( 0.2)\n- -\n94.2 ( 0.1) 127.1 ( 0.3) 75.3 ( 0.2) 71.2 ( 0.2) 92.7 ( 0.0) 90.9 ( 0.1) 93.5 ( 0.2) 76.8 ( 0.1) 73.9 ( 0.1) 75.0 ( 0.0) 73.6 ( 0.2) 67.1 ( 0.1) 70.3 ( 0.2) 70.7 ( 0.1) 70.5 ( 0.1) 65.1 ( 0.4) 156.9 ( 1.0) 98.2 ( 0.2) 122.8 ( 0.5) 74.2 ( 0.1) 83.4 ( 0.1) 139.9 ( 0.4) 64.7 ( 0.2)\n84.5 ( 0.1) 78.7 ( 0.2) 138.8 ( 0.8) 45.2 ( 0.1) 81.0 ( 0.9) 61.1 ( 0.1)\n76.0 ( 0.2) 67.9 ( 0.3) 82.5 ( 0.4)\n115.8 ( 0.3) 140.4 ( 0.4) 143.4 ( 0.4) 89.2 ( 0.4) 54.0 ( 0.6) 82.0 ( 1.2) 73.6 ( 0.3) 68.1 ( 0.2) 37.5 ( 0.3) 82.7 ( 0.3)\n- -\n91.6 ( 0.2) 123.5 ( 0.3) 75.7 ( 0.1) 64.1 ( 0.4) 92.8 ( 0.0) 90.7 ( 0.2) 92.7 ( 0.8) 78.6 ( 0.3) 73.5 ( 0.1) 76.3 ( 0.1) 76.1 ( 0.2) 67.0 ( 0.3) 72.1 ( 0.3) 72.7 ( 0.1) 72.3 ( 0.2) 80.5 ( 0.1) 183.3 ( 0.7) 96.2 ( 0.2) 114.0 ( 0.5) 73.6 ( 0.2) 85.3 ( 0.1) 152.1 ( 0.3) 75.2 ( 0.2)\n84.8 ( 0.2) 77.5 ( 0.2) 151.4 ( 0.8) 43.2 ( 0.1) 80.3 ( 0.8) 60.4 ( 0.2)\n84.4 ( 0.4) 70.8 ( 0.5) 85.9 ( 0.2)\n117.2 ( 0.1) 142.4 ( 0.4) 145.0 ( 0.3) 90.1 ( 0.5) 66.4 ( 0.5) 85.3 ( 1.7) 76.6 ( 0.5) 68.3 ( 0.3) 47.8 ( 0.2) 89.1 ( 0.0)\n- -\n93.7 ( 0.2) 126.9 ( 0.1) 76.3 ( 0.1) 68.6 ( 0.5) 92.8 ( 0.1) 90.7 ( 0.2) 93.1 ( 0.6) 79.7 ( 0.1) 76.2 ( 0.3) 78.2 ( 0.1) 77.7 ( 0.2) 71.1 ( 0.2) 74.4 ( 0.1) 74.8 ( 0.1) 74.4 ( 0.1) 82.0 ( 0.3) 177.2 ( 0.3) 98.5 ( 
0.2) 119.1 ( 1.9) 76.7 ( 0.3) 86.2 ( 0.1) 157.7 ( 0.7) 76.6 ( 0.1)\n85.8 ( 0.1) 78.6 ( 0.4) 151.9 ( 0.4) 44.6 ( 0.1) 81.5 ( 0.4) 62.6 ( 0.2)\n84.6 ( 0.4) 71.2 ( 0.2) 87.0 ( 0.3)\n117.2 ( 0.1) 142.3 ( 0.8) 145.2 ( 0.4) 85.1 ( 0.2) 61.3 ( 0.6) 87.4 ( 1.0) 76.1 ( 0.4) 68.3 ( 0.1) 46.7 ( 0.4) 89.7 ( 0.1)\n- -\n94.1 ( 0.2) 127.0 ( 0.2) 76.6 ( 0.1) 70.6 ( 0.2) 92.8 ( 0.1) 90.8 ( 0.1) 93.7 ( 0.4) 79.3 ( 0.1) 74.8 ( 0.1) 77.3 ( 0.1) 76.6 ( 0.1) 68.6 ( 0.1) 72.8 ( 0.1) 73.7 ( 0.1) 73.0 ( 0.1) 81.8 ( 0.1) 172.7 ( 1.5) 98.6 ( 0.2) 123.4 ( 0.8) 76.8 ( 0.2) 85.7 ( 0.1) 153.6 ( 0.5) 76.2 ( 0.1)\n85.8 ( 0.2) 78.9 ( 0.5) 148.9 ( 0.7) 45.2 ( 0.1) 81.0 ( 0.2) 62.1 ( 0.3)\nTable 13 | Mean and std-deviation over 5 finetuning runs of PaliGemma 3B, 10B, 28B models at 224px2 and 448px2 resolutions on over 30+ academic tasks from [9]. Tasks splits, preprocessing, metrics and hyper-parameters following the 224px2 versions according to previous work. Only the learning rate has been selected per model size based on validation splits.\nPaliGemma 2: A Family of Versatile VLMs for Transfer\nTable 14 | Sweep of learning rates on the various tasks and model sizes at 224px2 resolution. Although we report numbers in all metrics, learning rate selection was done based on the validation split and not on the zero-shot numbers.\n3e-7\n6e-7\n1e-6\n3e-6\n6e-6\n1e-5\n3e-5\nTask\nModel\nAI2D (minival)\nAOKVQA-DA (val)\nAOKVQA-MC (val)\nActivityNet-CAP (minival)\nActivityNet-QA (minival)\nCOCO-35L (avg34)\nCOCO-35L (en)\nCOCOcap (minival)\nChartQA (aug) (minival)\nChartQA (human) (minival)\nCountBenchQA\nDocVQA (val)\nGQA (minival)\nInfoVQA (val)\nMARVL (avg5)\n3B 10B 28B 3B 10B 28B 3B 10B 28B 3B 10B 3B 10B 3B 10B 28B 3B 10B 28B 3B 10B 28B 3B 10B 28B 3B 10B 28B 3B 10B 28B 3B 10B 28B 3B 10B 28B 3B 10B 28B 3B 10B 28B 3B 10B 3B 10B\n61.8 80.0 81.9 59.3 67.7 69.7 76.9 83.8 83.3 26.1 28.6 43.3 49.9 110.1 115.4 116.7 137.9 140.6 142.5 146.3 148.3 148.8 60.8 69.0 66.8 41.4 50.9 48.3 82.7 88.2 87.8 37.8 42.4 42.7 70.9 73.6 73.7 21.6 33.4 36.9 69.9 86.5 86.7 62.8 70.4 44.1 49.3\n67.6 82.9 82.3 62.9 68.6 70.2 78.7 83.3 84.0 28.5 31.4 46.8 52.2 111.8 115.8 116.6 138.6 140.3 141.3 146.7 149.4 149.5 64.3 68.6 63.4 42.8 50.8 46.9 82.9 84.7 88.4 37.9 40.9 42.1 72.2 74.3 73.9 22.9 33.5 36.6 73.4 88.2 88.5 66.1 71.5 47.0 51.2\n70.6 85.3 83.2 64.0 68.8 69.8 79.4 83.3 85.1 28.5 30.8 49.4 53.9 113.6 115.2 115.4 139.1 139.6 140.4 145.4 148.2 149.2 66.0 71.1 65.2 42.7 50.8 47.7 82.0 85.1 88.4 37.3 42.2 43.1 72.9 74.7 74.7 23.8 33.2 36.3 77.1 89.2 89.5 67.8 75.3 48.5 51.9\n75.0 84.4 85.9 64.6 66.6 69.0 80.8 82.7 82.5 30.6 31.6 52.6 55.0 113.9 113.6 114.0 138.4 137.3 137.7 147.2 148.3 149.5 69.7 69.5 66.7 44.1 49.2 46.5 79.0 82.9 88.6 39.4 44.1 45.2 73.9 74.4 74.8 25.4 33.2 36.2 81.2 89.4 90.3 67.6 74.0 51.1 53.2\n76.9 82.9 85.0 63.6 64.6 66.3 77.2 79.4 82.4 30.0 30.0 53.8 55.3 113.6 112.9 112.1 137.6 135.5 134.5 147.1 147.0 148.2 69.5 69.9 66.0 43.2 47.0 45.3 82.0 81.4 86.7 40.2 41.4 42.1 73.9 74.4 74.6 25.2 32.2 35.5 83.0 89.1 90.8 72.6 66.2 52.0 53.1\n75.1 82.1 83.4 59.3 57.3 60.8 76.9 75.5 78.2 30.6 31.1 53.5 54.6 113.2 112.2 111.2 136.5 133.8 133.2 147.0 146.5 145.3 68.4 68.4 64.1 42.9 44.5 41.8 78.0 78.2 83.3 38.7 39.8 40.5 73.8 74.2 74.1 25.1 29.8 34.1 82.4 87.4 89.2 74.0 69.4 51.2 52.1\n68.8 69.2 75.7 52.8 50.5 51.1 63.8 56.1 58.4 29.8 28.6 52.0 51.2 111.7 111.7 109.6 133.8 132.5 129.9 142.0 143.6 145.7 63.6 60.4 55.9 35.4 34.6 33.8 70.4 65.7 69.6 32.5 29.6 30.9 72.4 71.5 72.3 22.3 21.7 25.4 69.9 67.6 76.2 68.3 67.2 49.9 49.7\nMSRVTT-CAP 
(minival)\nMSRVTT-QA (minival)\nContinued on next page\nPaliGemma 2: A Family of Versatile VLMs for Transfer\nTable 14 | Sweep of learning rates on the various tasks and model sizes at 224px2 resolution. Although we report numbers in all metrics, learning rate selection was done based on the validation split and not on the zero-shot numbers.\n3e-7\n6e-7\n1e-6\n3e-6\n6e-6\n1e-5\n3e-5\nTask\nModel\nMSVD-QA (minival)\nNLVR2 (minival)\nNoCaps\nOCR-VQA (minival)\nOKVQA (minival)\nRSVQA-hr (minival)\nRSVQA-lr (minival)\nRefCOCO (testA)\nRefCOCO (testB)\nRefCOCO (val)\nRefCOCO+ (testA)\nRefCOCO+ (testB)\nRefCOCO+ (val)\nRefCOCOg (test)\nRefCOCOg (val)\n3B 10B 3B 10B 28B 3B 10B 28B 3B 10B 28B 3B 10B 28B 3B 10B 28B 3B 10B 28B 3B 10B 28B 3B 10B 28B 3B 10B 28B 3B 10B 28B 3B 10B 28B 3B 10B 28B 3B 10B 28B 3B 10B 28B 3B 10B 28B\n55.2 61.1 82.5 91.8 92.2 123.3 126.7 127.5 72.6 74.7 75.5 49.4 57.8 64.6 92.8 93.3 93.1 90.7 92.3 91.8 73.1 76.7 76.2 68.0 73.8 73.0 70.4 75.1 74.6 67.6 72.9 72.7 55.3 66.0 65.3 61.3 69.8 69.0 65.5 70.9 69.9 65.2 70.8 69.9 56.1 60.9 63.0\n57.8 63.9 86.2 93.0 92.8 123.6 126.1 127.5 73.1 74.5 75.5 52.3 60.5 64.4 93.2 93.2 93.4 92.4 92.7 92.1 74.5 76.9 76.7 70.1 74.3 73.9 72.1 75.6 75.0 70.1 73.5 73.4 58.6 67.1 66.4 64.2 70.8 70.0 67.2 71.6 70.5 67.0 71.4 70.4 58.8 62.9 64.4\n60.7 65.4 88.2 93.3 93.6 124.0 126.0 126.5 73.4 74.3 75.2 54.3 61.3 65.4 93.3 93.1 93.3 92.7 92.0 92.4 75.3 77.1 76.8 70.8 74.3 73.8 73.0 75.8 75.2 70.8 74.0 73.4 60.5 67.3 67.1 65.8 71.1 70.4 68.4 71.6 70.8 67.8 71.4 70.2 60.4 63.8 65.2\n63.3 64.2 90.4 93.3 93.7 123.4 125.2 124.0 73.4 73.9 74.8 57.6 60.8 63.8 93.0 93.0 93.3 93.3 91.7 92.7 75.5 77.2 76.8 71.2 74.2 72.8 73.2 76.1 74.8 71.8 75.0 74.0 62.9 68.4 67.5 67.0 72.0 70.8 68.7 71.7 70.7 68.0 71.4 70.2 61.5 64.0 65.5\n63.1 63.2 90.9 92.5 93.7 122.5 122.1 123.0 73.2 73.5 73.9 56.2 58.7 60.6 93.3 93.4 93.3 92.1 91.8 92.9 75.8 77.1 76.6 70.8 73.4 73.1 73.3 75.6 74.6 72.2 74.9 74.3 63.2 68.2 67.8 67.9 71.8 71.0 68.9 71.3 70.6 68.0 71.0 70.1 62.3 63.9 64.3\n61.3 63.0 90.2 91.7 92.2 120.5 120.5 120.3 72.9 73.0 72.5 52.9 55.6 56.8 93.4 93.3 93.3 92.2 92.8 92.9 75.8 76.1 75.5 70.9 73.4 72.0 73.4 74.9 74.0 72.7 74.2 72.9 64.6 67.9 67.0 68.6 71.3 70.4 69.0 70.4 69.7 68.2 70.0 69.2 61.2 61.2 62.6\n57.0 56.3 85.9 86.1 88.0 112.3 111.5 113.0 70.6 70.6 71.0 47.2 44.1 46.4 93.3 89.4 92.9 92.3 92.0 92.3 74.1 71.6 71.6 69.7 68.6 68.4 71.6 70.6 69.9 71.0 69.0 69.3 63.8 62.6 62.7 67.5 66.5 65.7 67.2 65.2 64.9 66.1 64.9 64.0 57.0 54.8 55.7\nST-VQA (val)\nContinued on next page\nPaliGemma 2: A Family of Versatile VLMs for Transfer\nTable 14 | Sweep of learning rates on the various tasks and model sizes at 224px2 resolution. 
Although we report numbers in all metrics, learning rate selection was done based on the validation split and not on the zero-shot numbers.\n3e-7\n6e-7\n1e-6\n3e-6\n6e-6\n1e-5\n3e-5\nTask\nModel\nSciCap (minival)\nScienceQA (minival)\nScreen2Words (minival)\nTallyQA (complex)\nTallyQA (simple)\nTextCaps (minival)\nTextVQA (val)\nVATEX (minival)\nVizWizVQA (val)\nWidgetCap (minival)\nXM3600 (avg35)\n3B 10B 28B 3B 10B 28B 3B 10B 28B 3B 10B 28B 3B 10B 28B 3B 10B 28B 3B 10B 28B 3B 10B 3B 10B 28B 3B 10B 28B 3B 10B 28B 3B 10B 28B 3B 10B 28B 3B 10B 28B\n55.2 78.6 80.3 87.7 96.9 96.8 95.1 110.9 113.0 66.6 72.0 73.1 80.4 83.0 82.9 122.8 140.3 150.9 57.6 63.4 64.5 84.4 91.4 80.9 83.8 83.8 72.5 76.1 76.3 137.0 146.3 144.0 44.2 45.0 45.2 83.7 82.5 80.9 51.7 58.5 58.8\n67.4 92.5 94.7 92.1 97.1 97.1 104.2 115.4 119.5 67.8 72.5 73.5 81.1 83.3 83.3 131.9 145.3 149.0 58.7 64.1 64.7 87.2 93.2 81.5 84.1 84.1 74.2 77.1 77.6 141.9 148.4 147.6 43.9 44.5 44.6 83.1 80.6 79.8 54.0 60.5 59.2\n76.9 106.2 104.0 94.5 97.6 97.4 109.0 118.2 120.4 68.6 73.4 73.9 81.3 83.1 83.3 136.5 145.4 150.2 59.3 63.9 65.3 89.8 93.4 82.1 84.3 84.1 74.8 77.8 78.2 141.8 150.9 145.9 43.7 43.9 44.0 82.2 78.6 79.4 55.3 61.4 60.8\n109.4 128.1 125.9 95.1 97.6 97.2 109.3 118.1 118.8 70.0 73.5 74.8 81.8 83.2 83.5 136.2 145.4 145.5 59.6 63.2 64.8 90.7 93.7 82.7 83.7 83.8 76.4 78.0 78.8 142.3 148.2 147.0 42.7 42.1 42.3 79.1 75.0 76.4 58.0 61.3 62.3\n130.3 136.9 136.2 95.2 97.1 96.8 113.2 114.7 116.2 70.0 72.7 73.8 81.9 82.7 83.0 133.6 144.2 144.0 59.4 61.6 63.3 90.2 90.4 82.4 83.1 82.8 76.6 77.3 77.8 141.7 144.5 144.1 41.7 40.7 41.1 78.3 73.0 73.6 58.7 61.8 61.9\n138.8 143.2 140.1 94.3 96.2 96.1 112.5 113.0 114.2 70.5 72.0 73.0 81.5 82.1 82.2 132.8 141.0 142.1 58.0 58.1 59.3 90.2 89.9 81.9 82.0 82.0 76.7 77.2 76.7 140.6 140.8 143.0 40.8 39.3 39.1 76.9 72.0 71.3 57.8 60.2 61.7\n148.1 143.8 141.7 91.4 93.7 94.2 110.1 110.0 106.3 66.7 65.8 68.1 79.1 79.1 79.7 126.0 125.8 126.2 51.1 48.3 49.9 86.3 84.5 79.6 79.4 79.7 74.0 73.3 72.5 129.7 133.3 133.0 37.8 36.8 35.8 70.9 69.9 66.1 49.1 38.0 49.4\nxGQA (avg7)\nPaliGemma 2: A Family of Versatile VLMs for Transfer\n224px2\n448px2\nTask\nPG1\nPG2\nPG1\nPG2\nAI2D AOKVQA-DA (val) AOKVQA-MC (val) ActivityNet-CAP ActivityNet-QA COCO-35L (avg34) COCO-35L (en) COCOcap ChartQA (aug) ChartQA (human) CountBenchQA DocVQA (val) GQA InfoVQA (val) MARVL (avg5) MSRVTT-CAP MSRVTT-QA MSVD-QA NLVR2 NoCaps OCR-VQA OKVQA RSVQA-hr (test) RSVQA-hr (test2) RSVQA-lr RefCOCO (testA) RefCOCO (testB) RefCOCO (val) RefCOCO+ (testA) RefCOCO+ (testB) RefCOCO+ (val) RefCOCOg (test) RefCOCOg (val) ST-VQA (val) SciCap ScienceQA Screen2Words TallyQA (complex) TallyQA (simple) TextCaps TextVQA (val) VATEX VQAv2 (minival) VizWizVQA (val) WidgetCap XM3600 (avg35) XM3600 (en) xGQA (avg7)\n72.1 61.1 78.5 34.6 50.8 113.7 139.2 141.9 74.2 40.0 81.9 37.8 65.6 25.5 80.6 70.5 50.1 60.2 90.0 121.7 72.3 63.5 92.6 90.6 92.6 75.7 70.7 73.4 71.9 64.5 68.3 68.2 67.7 61.6 162.3 95.4 117.6 69.6 81.7 127.5 59.0 79.7 82.1 73.7 136.1 41.9 78.0 57.3\n74.7 (+2.6) 64.2 (+3.1) 79.7 (+1.2) 34.2 ( 0.4) 51.3 (+0.5) 113.9 (+0.2) 138.4 ( 0.8) 141.3 ( 0.6) 74.4 (+0.2) 42.0 (+2.0) 81.0 ( 0.9) 39.9 (+2.1) 66.2 (+0.6) 25.2 ( 0.3) 83.5 (+2.9) 68.5 ( 2.0) 50.5 (+0.4) 61.1 (+0.9) 91.4 (+1.4) 123.1 (+1.4) 73.4 (+1.1) 64.2 (+0.7) 92.7 (+0.1) 90.9 (+0.3) 93.0 (+0.4) 75.7 (+0.0) 71.0 (+0.3) 73.4 (+0.0) 72.7 (+0.8) 64.2 ( 0.3) 68.6 (+0.3) 69.0 (+0.8) 68.3 (+0.6) 61.9 (+0.3) 165.1 (+2.8) 96.1 (+0.7) 113.3 ( 4.3) 70.3 (+0.7) 81.8 (+0.1) 127.5 (+0.0) 59.6 (+0.6) 80.8 (+1.1) 83.0 
(+0.9) 76.4 (+2.7) 138.1 (+2.0) 42.8 (+0.9) 79.8 (+1.8) 58.6 (+1.3)\n73.3 65.7 80.3 - -\n115.8 141.2 144.6 88.5 54.2 83.1 74.1 67.0 37.0 76.8 - - - 88.9 123.6 74.6 63.2 92.8 90.5 93.1 77.9 72.4 75.6 74.2 64.5 69.8 71.0 70.1 79.7 181.5 95.9 119.6 72.3 84.9 153.9 74.6 - 84.6 75.5 148.4 42.4 80.0 57.9\n76.0 (+2.7) 67.9 (+2.2) 82.5 (+2.2)\n115.8 (+0.0) 140.4 ( 0.8) 143.4 ( 1.2) 89.2 (+0.7) 54.0 ( 0.2) 82.0 ( 1.1) 73.6 ( 0.5) 68.1 (+1.1) 37.5 (+0.5) 82.7 (+5.9)\n- -\n91.6 (+2.7) 123.5 ( 0.1) 75.7 (+1.1) 64.1 (+0.9) 92.8 (+0.0) 90.7 (+0.2) 92.7 ( 0.4) 78.6 (+0.7) 73.5 (+1.1) 76.3 (+0.7) 76.1 (+1.9) 67.0 (+2.5) 72.1 (+2.3) 72.7 (+1.7) 72.3 (+2.2) 80.5 (+0.8) 183.3 (+1.8) 96.2 (+0.3) 114.0 ( 5.6) 73.6 (+1.3) 85.3 (+0.4) 152.1 ( 1.8) 75.2 (+0.6)\n84.8 (+0.2) 77.5 (+2.0) 151.4 (+3.0) 43.2 (+0.8) 80.3 (+0.3) 60.4 (+2.5)\nTable 15 | Comparison of PaliGemma 3B and PaliGemma 2 3B at 224px2 and 448px2 resolutions. PG1 and PG2 refer to PaliGemma [9] and PaliGemma 2, respectively.\n",
    "description": "The article discusses \"PaliGemma 2,\" an enhanced version of the PaliGemma vision-language model (VLM) developed by Google DeepMind. This upgrade integrates the SigLIP-So400m vision encoder with the Gemma 2 family of language models, resulting in a versatile set of models optimized for various vision-language tasks.\n\nKey features of PaliGemma 2 include:\n- Three model sizes (3B, 10B, 28B) and three resolutions (224px\u00b2, 448px\u00b2, and 896px\u00b2) to cater to different computational needs and performance requirements.\n- A training strategy involving three stages that equips the models for fine-tuning across over 30 transfer tasks, including advanced capabilities in Optical Character Recognition (OCR), molecular structure recognition, music score recognition, and medical report generation. PaliGemma 2 achieves state-of-the-art results in many of these tasks.\n- The ability to analyze how model size and resolution impact transfer performance, with findings indicating that larger models often require lower optimal learning rates.\n- A commitment to open-weight model release, allowing for broader research and application.\n\nThe study emphasizes the model's versatility and effectiveness across various domains while also addressing the importance of ethical considerations and safety in AI applications. Overall, PaliGemma 2 represents a significant advancement in the capabilities of vision-language models, offering enhanced performance and a range of applications in real-world scenarios.",
    "tags": "PaliGemma,Vision-Language Models,VLM,Transfer Learning,Deep Learning,Language Models,Computer Vision,Fine-tuning"
  }
]