
Dataset Card for IndoorCrowd

Dataset Summary

IndoorCrowd is a multi-scene dataset designed for indoor human detection, instance segmentation, and multi-object tracking. It captures diverse challenges such as viewpoint variation, partial occlusion, and varying crowd density across four distinct campus locations (ACS-EC, ACS-EG, IE-Central, R-Central). Faces are explicitly blurred to preserve privacy, making it suitable for safe research into intelligent crowd management and behaviour tracking.

The dataset consists of 31 videos sampled at 5 FPS, totalling 9,913 frames.
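Downsampling a source video to a fixed 5 FPS typically means keeping every k-th frame. The sketch below illustrates the idea; the 30 FPS source rate in the example is an assumption, not a detail stated by the dataset card.

```python
def sample_indices(n_frames: int, src_fps: float, target_fps: float = 5.0):
    """Return the indices of frames to keep when downsampling to target_fps."""
    stride = max(1, round(src_fps / target_fps))
    return list(range(0, n_frames, stride))

# e.g. a hypothetical 30 FPS clip of 90 frames -> keep every 6th frame
indices = sample_indices(90, 30.0)
```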

Subsets

  1. Object Detection and Segmentation: 9,913 frames featuring bounding boxes and instance segmentation masks. Includes a rigorously annotated 620-frame pure-human control subset for foundation-model benchmarking.
  2. Multi-Object Tracking (MOT): A 2,552-frame tracking subset providing continuous identity tracks following the MOTChallenge format.
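MOTChallenge annotations are comma-separated text lines of the form `frame, id, bb_left, bb_top, bb_width, bb_height, conf, ...`. A minimal parser for the first seven fields (a generic sketch, not code shipped with this dataset):

```python
from typing import NamedTuple

class MotBox(NamedTuple):
    frame: int
    track_id: int
    left: float
    top: float
    width: float
    height: float
    conf: float

def parse_mot_line(line: str) -> MotBox:
    """Parse one comma-separated MOTChallenge annotation line (first 7 fields)."""
    f = line.strip().split(",")
    return MotBox(int(f[0]), int(f[1]), *map(float, f[2:7]))

# Trailing fields (class/visibility or world coordinates) are ignored here.
box = parse_mot_line("1,3,912.0,484.0,97.0,109.0,1.0,-1,-1,-1")
```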

Supported Tasks

  • object-detection: Detecting human bounding boxes (Baselines benchmarked: YOLOv8n, YOLOv26n, RT-DETR-L).
  • image-segmentation: Generating instance-level masks for people in crowded indoor geometries.
  • video-object-tracking: Maintaining human identity across consecutive frames via tracking algorithms (Baselines benchmarked: ByteTrack, BoT-SORT, OC-SORT).
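Detection benchmarks like the ones above typically score predictions by intersection-over-union (IoU) against ground-truth boxes. A minimal sketch, assuming boxes in COCO-style (x, y, w, h) form:

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x, y, w, h) tuples."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    # Overlap width/height, clamped at zero for disjoint boxes.
    iw = max(0.0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0.0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

score = iou((0, 0, 10, 10), (5, 0, 10, 10))  # half-overlapping boxes -> 1/3
```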

Dataset Creation

Curation Rationale

Outdoor datasets currently dominate crowd-analysis development. Indoor environments introduce a distinct set of challenges: camera-view obstructions (pillars, furniture), structural occlusions, near-to-distal scale variance, and abrupt density fluctuations.

Annotations

Annotations were produced using a semi-automated pipeline:

  1. Auto-labelling: Uses foundation models such as SAM3, GroundingSAM, and EfficientGroundingSAM to generate initial candidate masks and tracklets.
  2. Human Correction: Expert human reviewers used SAM 2.1 to manually delete false positives, append missing masks, correct identity switches, and linearly interpolate gaps, ensuring high-fidelity ground truth.
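The linear-interpolation step can be pictured as filling in a box for every missing frame between two annotated keyframes. This is a generic sketch of that idea, not the authors' pipeline code:

```python
def interpolate_boxes(frame_a, box_a, frame_b, box_b):
    """Linearly interpolate (x, y, w, h) boxes for frames strictly between two keyframes."""
    filled = {}
    span = frame_b - frame_a
    for f in range(frame_a + 1, frame_b):
        t = (f - frame_a) / span  # fraction of the way from keyframe A to B
        filled[f] = tuple(a + t * (b - a) for a, b in zip(box_a, box_b))
    return filled

# Fill frames 11-13 between keyframes at frames 10 and 14.
gaps = interpolate_boxes(10, (0, 0, 10, 10), 14, (8, 4, 10, 10))
```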

Data Splits

The dataset provides varied crowd density regimes:

  • ACS-EC: A dense multi-level atrium setting with small instance scales and high occlusion (79.3% dense frames).
  • ACS-EG: A narrow ground-level corridor with substantial person-scale variation along its length.
  • IE-Central: An intermediate seating/entrance hall environment.
  • R-Central: An overhead-view atrium with prominent structural columns causing regular occlusions.
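For analysis, a frame can be bucketed into a density regime by its per-frame instance count. The thresholds below are illustrative assumptions, not the paper's definition of "dense":

```python
def density_regime(n_people: int, sparse_max: int = 5, medium_max: int = 15) -> str:
    """Bucket a frame by person count (thresholds are assumed, not from the dataset)."""
    if n_people <= sparse_max:
        return "sparse"
    if n_people <= medium_max:
        return "medium"
    return "dense"
```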

Personal and Sensitive Information

All human faces in the raw footage have been strictly blurred by an automated de-identification pipeline prior to release. No audio, demographic attributes, or personal identifiers are collected.

Additional Information

Licensing Information

The dataset is released under a license restricting its use strictly to non-commercial computer vision research. It prohibits surveillance and any re-identification of individuals.

Citation

If you use IndoorCrowd, please cite:

@article{nae2026indoorcrowd,
  title   = {IndoorCrowd: A Multi-Scene Dataset for Human Detection, Segmentation, and Tracking with an Automated Annotation Pipeline},
  author  = {Nae, Sebastian-Ion and Moldoveanu, Radu and Ghita, Alexandra Stefania and Florea, Adina Magda},
  journal = {arXiv preprint arXiv:2604.02032},
  year    = {2026},
  url     = {https://arxiv.org/abs/2604.02032}
}

Hugging Face loading

Parquet shards are organized by config (subset) and scene (split).

| Hub config | Labels |
| --- | --- |
| obj_det_seg | SAM3 (default) |
| obj_det_seg_grounded_sam | Grounded SAM |
| obj_det_seg_efficient_grounded_sam | Efficient Grounded SAM |
| tracking | MOT boxes + track IDs (no masks) |

Splits are scene names: acs_ec, acs_eg, ie_central, r_central (not a single train split).

Each row includes image, type, scene, objects, and metadata. For detection/segmentation, objects contains bbox, category, area, id, score, track_id, and rle_mask (COCO RLE JSON per instance).

```python
from datasets import load_dataset
import json
from pycocotools import mask as mask_utils

# Load the detection/segmentation config for the ACS-EC scene.
ds = load_dataset("sebnae/IndoorCrowd", "obj_det_seg", split="acs_ec")
row = ds[0]

# Each instance's segmentation is stored as a COCO RLE JSON string.
rle = json.loads(row["objects"]["rle_mask"][0])
mask = mask_utils.decode(rle)  # binary HxW numpy array
```
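Once a mask is decoded to a binary array as above, a tight bounding box can be recovered with plain NumPy. This helper is an illustrative utility, not part of the dataset's API:

```python
import numpy as np

def mask_to_bbox(mask: np.ndarray):
    """Return (x, y, w, h) of the tight box around a binary HxW mask, or None if empty."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    x0, x1 = xs.min(), xs.max()
    y0, y1 = ys.min(), ys.max()
    return int(x0), int(y0), int(x1 - x0 + 1), int(y1 - y0 + 1)

# Synthetic 8x8 mask with a 3-row by 4-column foreground region.
m = np.zeros((8, 8), dtype=np.uint8)
m[2:5, 3:7] = 1
bbox = mask_to_bbox(m)  # (3, 2, 4, 3)
```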