GERD: Geometric event response data generation

arXiv:2412.03259v2

Abstract: Event-based vision sensors offer high time resolution, high dynamic range, and low power consumption, yet event-based vision models lag behind conventional frame-based vision methods. We argue that this gap is partly due to the lack of principled study of the transformation groups that govern event-based visual streams. Motivated by the role that geometric and group-theoretic methods have played in advancing computer vision, we present GERD: a simulator for generating event-based recordings of objects under precisely controlled affine, Galilean, and temporal scaling transformations. By providing ground-truth transformations at each timestep, GERD enables hypothesis-driven and controlled studies of geometric properties that are otherwise impossible to isolate in real-world datasets. The simulator supports three noise models and sub-pixel motion as a complement to real sensor datasets. We demonstrate its use in training and evaluating models with geometric guarantees and release GERD as an open tool, available at github.com/ncskth/gerd.
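To make the core idea concrete, the sketch below shows how synthetic events with per-timestep ground-truth transformations can be generated in principle: an object (here a Gaussian blob) is moved with a known sub-pixel velocity, and events are emitted wherever the intensity change between consecutive frames exceeds a contrast threshold. This is a minimal illustration of the general technique, not the GERD API; all function names, parameters, and the noiseless thresholding model are assumptions for the example.

```python
import numpy as np

def gaussian_blob(h, w, cx, cy, sigma=3.0):
    """Render a Gaussian intensity blob centred at (cx, cy)."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))

def events_from_motion(h=32, w=32, steps=20, vx=0.7, vy=0.3, threshold=0.1):
    """Generate events by thresholding intensity changes between frames of a
    blob translating with sub-pixel velocity (vx, vy) per timestep.

    Returns (events, transforms): each event is (t, x, y, polarity), and each
    transform records the ground-truth cumulative translation at timestep t.
    (Hypothetical interface for illustration only.)
    """
    events, transforms = [], []
    cx, cy = w / 4, h / 4                     # initial object position
    prev = gaussian_blob(h, w, cx, cy)
    for t in range(1, steps):
        cx, cy = cx + vx, cy + vy             # sub-pixel motion
        frame = gaussian_blob(h, w, cx, cy)
        diff = frame - prev                   # brightness change per pixel
        ys, xs = np.nonzero(np.abs(diff) > threshold)
        for x, y in zip(xs, ys):
            # Polarity +1 for brightness increase, -1 for decrease
            events.append((t, int(x), int(y), int(np.sign(diff[y, x]))))
        transforms.append((t, t * vx, t * vy))  # ground-truth translation
        prev = frame
    return events, transforms

events, transforms = events_from_motion()
```

Because the translation at every timestep is known exactly, pairs of event streams related by a controlled transformation can be used to test, for example, whether a model's representation is equivariant to that transformation — the kind of hypothesis-driven study the abstract describes.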
