---
task_categories:
  - visual-question-answering
  - document-question-answering
language:
  - en
tags:
  - multimodal
  - benchmark
  - document-understanding
configs:
  - config_name: default
    data_files:
      - split: val
        path: val.parquet
---

# DocVQA 2026

*ICDAR 2026 Competition on Multimodal Reasoning over Documents in Multiple Domains*


Building upon previous DocVQA benchmarks, this evaluation dataset introduces challenging reasoning questions over a diverse collection of documents spanning eight domains: business reports, scientific papers, slides, posters, maps, comics, infographics, and engineering drawings.

By expanding coverage to new document domains and introducing richer question types, this benchmark seeks to push the boundaries of multimodal reasoning and promote the development of more general, robust document understanding models.

## Load & Inspect the Data

```python
from datasets import load_dataset

# 1. Load the dataset
dataset = load_dataset("VLR-CVC/DocVQA-2026", split="val")

# 2. Access a single sample (one document)
sample = dataset[5]

doc_id = sample["doc_id"]
category = sample["doc_category"]
print(f"Document ID: {doc_id} ({category})")

# 3. Access images
# 'document' is a list of PIL Images (one for each page)
images = sample["document"]
print(f"Number of pages: {len(images)}")
images[0].show()

# 4. Access questions and answers
questions = sample["questions"]
answers = sample["answers"]

# 5. Visualize Q&A pairs for a document
for q, a in zip(questions, answers):
    print("-" * 50)
    print(f"Question: {q['question']}")
    print(f"Answer: {a['answer']}")
    print("-" * 50)
```

## Structure of a Sample

<details>
<summary>Click to expand the JSON structure</summary>

```python
{
    'doc_id': 'comics_1',
    'doc_category': 'comics',
    'document': [
        <PIL.PngImagePlugin.PngImageFile image mode=RGB size=1240x1754 at 0x7F...>,
        ...
        <PIL.PngImagePlugin.PngImageFile image mode=RGB size=1240x1754 at 0x7F...>
    ],
    'questions': [
        {
            'question_id': 'comics_1_q1',
            'question': "How many times do people get in the head in Nyoka and the Witch Doctor's Madness?"
        }
    ],
    'answers': [
        {
            'question_id': 'comics_1_q1',
            'answer': '4'
        }
    ]
}
```

</details>
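
Note that `questions` and `answers` are parallel lists that share a `question_id`. Rather than relying on list order, it can be safer to join them explicitly. A minimal sketch, using only the fields shown in the structure above:

```python
from datasets import load_dataset

dataset = load_dataset("VLR-CVC/DocVQA-2026", split="val")
sample = dataset[0]

# Build a question_id -> answer lookup so each question is paired
# with its answer even if the two lists are ordered differently.
answer_by_id = {a["question_id"]: a["answer"] for a in sample["answers"]}

for q in sample["questions"]:
    print(f"[{q['question_id']}]")
    print(f"  Q: {q['question']}")
    print(f"  A: {answer_by_id.get(q['question_id'], '<missing>')}")
```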

## Results

*Figure 1: Performance comparison across domains.*

| Category | Gemini 3 Pro Preview | GPT-5.2 | Gemini 3 Flash Preview | GPT-5 Mini |
|---|---|---|---|---|
| Overall Accuracy | 0.375 | 0.350 | 0.3375 | 0.225 |
| Business Report | 0.400 | 0.600 | 0.200 | 0.300 |
| Comics | 0.300 | 0.200 | 0.400 | 0.100 |
| Engineering Drawing | 0.300 | 0.300 | 0.500 | 0.200 |
| Infographics | 0.700 | 0.600 | 0.500 | 0.500 |
| Maps | 0.000 | 0.200 | 0.000 | 0.100 |
| Science Paper | 0.300 | 0.400 | 0.500 | 0.100 |
| Science Poster | 0.300 | 0.000 | 0.200 | 0.000 |
| Slide | 0.700 | 0.500 | 0.400 | 0.500 |

**Evaluation Parameters:**

- **GPT Models:** "High thinking" enabled, temperature set to 1.0.
- **Gemini Models:** "High thinking" enabled, temperature set to 0.0.
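
The table above reports accuracy per category. The exact scoring protocol is not described in this README; purely as an illustration, per-category exact-match accuracy could be computed from a predictions mapping as follows (`predictions` and the `normalize` rule here are assumptions, not the official metric):

```python
from collections import defaultdict

def normalize(text: str) -> str:
    # Assumed normalization; the official protocol may differ.
    return text.strip().lower()

def per_category_accuracy(dataset, predictions):
    """predictions: dict mapping question_id -> predicted answer (hypothetical)."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for sample in dataset:
        category = sample["doc_category"]
        for ans in sample["answers"]:
            total[category] += 1
            pred = predictions.get(ans["question_id"], "")
            if normalize(pred) == normalize(ans["answer"]):
                correct[category] += 1
    return {cat: correct[cat] / total[cat] for cat in total}
```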

**API Constraints:**
> All models were evaluated via their respective APIs. If a sample fails because the input files are too large, the result counts as a failure. For example, the file-input limit for OpenAI models is 50 MB, and several comics in this dataset exceed that threshold.
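
Given that limit, it can help to estimate a document's payload before sending it to an API. A hedged sketch; re-encoding each page as PNG only approximates what a client would actually upload:

```python
import io
from datasets import load_dataset

MAX_PAYLOAD_BYTES = 50 * 1024 * 1024  # OpenAI file-input limit noted above

def payload_size_bytes(images) -> int:
    """Approximate the upload size by re-encoding each page as PNG."""
    total = 0
    for img in images:
        buf = io.BytesIO()
        img.save(buf, format="PNG")
        total += buf.getbuffer().nbytes
    return total

dataset = load_dataset("VLR-CVC/DocVQA-2026", split="val")
for sample in dataset:
    size = payload_size_bytes(sample["document"])
    if size > MAX_PAYLOAD_BYTES:
        print(f"{sample['doc_id']}: {size / 1e6:.1f} MB exceeds the limit")
```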

## Dataset Structure

The dataset consists of:

1. **Images:** High-resolution PNG renders of document pages, located in the `images/` directory.
2. **Annotations:** A Parquet file (`val.parquet`) containing the questions, answers, and references to the image paths (see the sketch below).
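
If you prefer the raw files over the `datasets` loader, the annotations can be read directly from the Parquet file. A minimal sketch, assuming the columns mirror the sample fields shown above:

```python
import pandas as pd

# Column names are assumed to mirror the sample structure above
# (doc_id, doc_category, questions, answers, image paths).
df = pd.read_parquet("val.parquet")

print(df.columns.tolist())                 # inspect the available fields
print(df["doc_category"].value_counts())   # documents per domain
```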