D²iCE researchers co-organise the 4th Omnidirectional Computer Vision Workshop at CVPR'2023
Posted on June 30, 2023
D²iCE researchers Kaavya Rekanar and Ciarán Eising co-organised the 4th Omnidirectional Computer Vision (OmniCV) workshop, which took place on June 19th, 2023, as part of the prestigious Conference on Computer Vision and Pattern Recognition (CVPR) in Vancouver, Canada. The workshop delved into the latest developments and applications of omnidirectional imaging, showcasing its potential to solve real-world problems across various fields. With an emphasis on maximizing a camera's field of view, it sought to bridge the gap between research and commercial products, encouraging the advancement of algorithms and applications in this exciting imaging modality.

Omnidirectional imaging has gained significant interest in recent years, driven by the desire to capture as much content and context as possible within a single image. The advent of fisheye cameras in modern vehicles and the availability of commodity omnidirectional cameras from companies like Ricoh and Insta360 have further fuelled the popularity of this imaging technique.
Enabling Progress through Research and Technology
The Omnidirectional Computer Vision workshop sought to unite the foundational research supporting omnidirectional imaging with the development of commercial products leveraging this technology. By fostering collaboration and knowledge sharing, the workshop aimed to drive progress and inspire the creation of new algorithms and applications.
Insightful Talks and Presentations
The workshop featured esteemed speakers who shared their expertise and insights on various aspects of omnidirectional computer vision. Dr Robin Jenkin from NVIDIA discussed the image quality of fisheye cameras, highlighting the challenges and advancements in capturing high-quality images with these wide-angle lenses. Prof Wolfgang Förstner from Stuttgart University explored the techniques and algorithms used to recover geometry from omnidirectional camera systems. Prof Jongwoo Lim, a professor at Hanyang University and CEO of MultiplEYE Co., Ltd., shared valuable insights on omnidirectional depth estimation and visual SLAM using multiple ultra-wide field-of-view cameras. Prof Adrian Hilton from the University of Surrey discussed audio-visual 360° scene analysis for acoustic modelling and augmentation. Shubhankar Borse from Qualcomm stepped in as a last-minute replacement for Jacob Roll and gave an excellent talk on detection and segmentation in bird's-eye view (BEV) using near-field surround cameras.
Challenges to Foster Advancements
The workshop hosted three challenges to stimulate innovation in omnidirectional computer vision:
Multi-view 360 Layout Estimation Challenge: Sponsored by the Taiwan AI Center of Excellence, participants were tasked with developing models that leverage multi-view consistency from an ego-motion of a 360-camera in real-world-based scenes. The challenge provided extensive datasets for training, testing, and algorithm development.
Woodscape & Parallel Domain Challenge: This challenge was hosted in collaboration with Valeo and Parallel Domain. It focused on training a single model capable of performing optimally on both real and synthetic data for moving object detection. Collaboration with Parallel Domain facilitated addressing the domain adaptation aspect of moving object detection tasks.
Omnidirectional Drone Challenge: In collaboration with Spleenlab, the workshop organised a challenge centred around odometry and multi-sensor data analysis using drone data. Participants were provided with omnidirectional video footage captured by fisheye lenses, along with LiDAR and RTK GPS data. The challenge emphasised odometry and SLAM using 360° omnidirectional input in off-road scenarios.
The Best Paper Award went to “Graph-CoVis: GNN-Based Multi-View Panorama Global Pose Estimation”, accepted by Will Hutchcroft. The award was presented to Will by Shubhankar Borse of Qualcomm, the best paper award sponsor.