THÖR-MAGNI: A Large-scale Indoor Motion Capture Recording of Human Movement and Interaction

  • Tim Schreiter (Creator)
  • Tiago Rodrigues de Almeida (Creator)
  • Yufei Zhu (Creator)
  • Eduardo Gutiérrez Maestro (Creator)
  • Andrey Rudenko (Creator)
  • Tomasz Kucner (Supervisor)
  • Martin Magnusson (Supervisor)
  • Luigi Palmieri (Contributor)
  • Kai O. Arras (Contributor)
  • Achim J. Lilienthal (Contributor)

Dataset

Description

THÖR-MAGNI is a novel dataset of accurate human and robot navigation and interaction in diverse indoor contexts, building on the protocol of the previous THÖR dataset. We provide position and head-orientation motion capture data, 3D LiDAR scans, and gaze tracking. In total, THÖR-MAGNI captures 3.5 hours of motion from 40 participants over 5 recording days. The data collection is designed around systematic variation of factors in the environment, to allow building cue-conditioned models of human motion and verifying hypotheses on factor impact. To that end, THÖR-MAGNI encompasses 5 scenarios, some of which have several conditions (i.e., we vary one factor):

  • Scenario 1 (conditions A and B): Participants move in groups and individually; the robot is a static obstacle; the environment contains 3 obstacles, with lane markings on the floor in condition B.
  • Scenario 2: Participants move in groups, individually, and transporting objects of variable difficulty (i.e., a bucket, boxes, and a poster stand); the robot is a static obstacle; the environment contains 3 obstacles.
  • Scenario 3 (conditions A and B): Participants move in groups, individually, and transporting objects of variable difficulty (i.e., a bucket, boxes, and a poster stand). We denote the roles as Visitors-Alone, Visitors-Group 2, Visitors-Group 3, Carrier-Bucket, Carrier-Box, and Carrier-Large Object. A teleoperated robot acts as a moving agent: in condition A, the robot moves with a differential drive; in condition B, the robot moves with an omni-directional drive. The environment contains 2 obstacles.
  • Scenario 4 (conditions A and B): All participants, denoted Visitors-Alone HRI, interacted with the teleoperated mobile robot. The robot interacted in two ways: in condition A (Verbal-Only), the Anthropomorphic Robot Mock Driver (ARMoD), a small humanoid NAO robot mounted on top of the mobile platform, used only speech to communicate the next goal point to the participant; in condition B, the ARMoD used speech, gestures, and robotic gaze to convey the same message. Free-space environment.
  • Scenario 5: Participants move alone (Visitors-Alone), and one participant, denoted Visitors-Alone HRI, transports objects and interacts with the robot. The ARMoD is remotely controlled by an experimenter and proactively offers help. Free-space environment.

Before working through the accompanying tutorials, make sure to download the data from Zenodo.
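Once downloaded, the trajectory files can be explored with standard tools. As a minimal sketch, the snippet below computes instantaneous speed from a position trace with pandas; note that the column names (`frame`, `agent_id`, `x`, `y`) and the capture rate are illustrative assumptions here, not the dataset's actual schema — consult the THÖR-MAGNI documentation for the real file layout.

```python
import numpy as np
import pandas as pd

# Hypothetical sample resembling a motion-capture position trace.
# Column names and units (metres) are assumptions for illustration only.
df = pd.DataFrame({
    "frame": [0, 1, 2, 3],
    "agent_id": ["Visitors-Alone"] * 4,
    "x": [0.00, 0.01, 0.02, 0.03],  # metres
    "y": [0.00, 0.00, 0.00, 0.00],
})

dt = 0.01  # assumed 100 Hz capture rate (seconds per frame)

# Frame-to-frame displacement divided by the time step gives speed in m/s.
speed = np.hypot(df["x"].diff(), df["y"].diff()) / dt
print(speed.dropna().round(2).tolist())  # → [1.0, 1.0, 1.0]
```

The same pattern extends naturally to grouping by `agent_id` before differencing, so that displacements are never computed across different participants.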
Date made available: 23 Jan 2023
Publisher: Zenodo

Dataset Licences

  • CC-BY-4.0
