D²iCE at the Electronic Imaging Symposium in San Francisco

Posted on February 02, 2024 

The Electronic Imaging 2024 Symposium took place from January 21 to 25 in Burlingame, California, hosting a series of intertwined imaging science events that allowed attendees to expand their knowledge and networks through a rich symposium programme.

University of Limerick PhD students Daniel Jakab and Sushil Sharma presented at the Autonomous Vehicles and Machines (AVM) conference, chaired by its co-founder, UL Associate Professor Patrick Denny, and Intel AI Algorithms Lead Peter van Beek.

The AVM programme showcased innovations from organisations including NVIDIA, Imatest, Image Engineering, Purdue University, Stanford University, Oak Ridge National Laboratory (ORNL), Qualcomm, Texas Instruments, Iowa State University, Eindhoven University of Technology, Fraunhofer IPA, University of California, Korea University, University of Applied Sciences Upper Austria, RD Buy, Cizen Tech, Tokyo Institute of Technology and NTT Corporation, with plenaries from Google, Stanford and NASA.

Advancements in sensing, computing, image processing, and computer vision technologies are enabling unprecedented growth and interest in autonomous vehicles and intelligent machines, from self-driving cars to unmanned drones, to personal service robots. These new capabilities have the potential to fundamentally change the way people live, work, commute, and connect with each other, and will undoubtedly provoke entirely new applications and commercial opportunities for generations to come.

The main focus of AVM is perception, and perception begins with sensing. While imaging continues to be an essential emphasis in all EI conferences, AVM also embraces other sensing modalities important to autonomous navigation, including radar, LiDAR, and time-of-flight. Realization of autonomous systems also requires purpose-built processors (e.g., ISPs, vision processors, and DNN accelerators) as well as core image processing and computer vision algorithms, system design and architecture, simulation, and image/video quality. AVM topics sit at the intersection of these disciplines. AVM is the perception conference that bridges the imaging and vision communities, connecting the dots across the entire software and hardware stack for perception and helping people design globally optimized algorithms, processors, and systems: intelligent “eyes” for vehicles and machines.

In 2024, the conference sought high-quality papers featuring novel research in areas intersecting sensing, imaging, vision, and perception, with applications including, but not limited to, autonomous cars, ADAS (advanced driver assistance systems), drones, robots, and industrial automation. Due to high demand from AVM participants, the conference was particularly interested in topics related to new forms of sensors such as LiDAR and radar, multi-modal sensor fusion, validation of autonomous vehicles and the associated perception processors and algorithms, and the evolution of the Image Signal Processor (ISP) with new techniques such as CNN- and Transformer-based architectures. AVM welcomes both academic researchers and industrial experts to join the discussion. In addition to technical presentations, AVM includes open-forum discussions, moderated panel discussions, demonstrations, and exhibits.



Patrick chairing the Autonomous Vehicles and Machines Conference at the EI Symposium

Sushil presenting his research at the conference

Research Presented at Electronic Imaging


Optimizing Ego Vehicle Trajectory Prediction: The Graph Enhancement Approach


Authors: Sushil Sharma, Aryan Singh, Ganesh Sistu, Mark Halton and Ciaran Eising

Article: https://library.imaging.org/ei/articles/36/17/AVM-115


Sushil Sharma, a member of D2iCE, participated in the Electronic Imaging Conference held in San Francisco, California, USA, from January 21 to 25, 2024. The author explains: ‘In autonomous driving systems, accurately predicting the trajectory of an ego vehicle is crucial. Commonly, state-of-the-art methods employ Deep Neural Networks (DNNs) and sequential models using front-view images for trajectory prediction. However, these approaches face challenges with perspective issues affecting object features. Our work advocates for using Bird's Eye View (BEV) perspectives, offering unique advantages in capturing spatial relationships and object homogeneity. Leveraging Graph Neural Networks (GNNs) and positional encoding in a BEV, our approach achieves competitive performance compared to traditional DNN-based methods. Despite losing some detailed front-view information, we compensate by enriching BEV data through effective capture of object relationships in a scene represented as a graph.’
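
As a rough illustration of this BEV-graph idea (a sketch under stated assumptions, not the authors' published implementation), the snippet below treats each detected object in the BEV as a graph node, uses the objects' BEV coordinates as a simple positional encoding, runs one round of message passing, and regresses future ego waypoints. All names and dimensions here are hypothetical, including the convention that node 0 is the ego vehicle.

```python
import torch
import torch.nn as nn

class BEVGraphLayer(nn.Module):
    """One round of message passing over a fully connected BEV object graph."""
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(2 * dim, dim)
        self.upd = nn.GRUCell(dim, dim)

    def forward(self, x):
        # x: (N, dim) node features, one node per object detected in the BEV.
        n = x.size(0)
        src = x.unsqueeze(1).expand(n, n, -1)  # sender features
        dst = x.unsqueeze(0).expand(n, n, -1)  # receiver features
        msgs = torch.relu(self.msg(torch.cat([src, dst], dim=-1)))
        return self.upd(msgs.mean(dim=0), x)   # aggregate over senders, update nodes

class TrajectoryGNN(nn.Module):
    """Hypothetical BEV trajectory predictor: positional encoding + GNN + head."""
    def __init__(self, feat_dim=16, hidden=64, horizon=12):
        super().__init__()
        self.embed = nn.Linear(feat_dim + 2, hidden)  # +2: BEV (x, y) position
        self.gnn = BEVGraphLayer(hidden)
        self.head = nn.Linear(hidden, horizon * 2)    # (x, y) per future step
        self.horizon = horizon

    def forward(self, feats, pos):
        # feats: (N, feat_dim) per-object features; pos: (N, 2) BEV positions.
        # By convention in this sketch, node 0 is the ego vehicle.
        x = torch.relu(self.embed(torch.cat([feats, pos], dim=-1)))
        x = self.gnn(x)
        return self.head(x[0]).view(self.horizon, 2)

# Toy usage: 5 objects with random features/positions, 12 predicted ego waypoints.
model = TrajectoryGNN()
print(model(torch.randn(5, 16), torch.randn(5, 2)).shape)  # torch.Size([12, 2])
```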

Measuring Natural Scenes SFR of Automotive Fisheye Cameras

Authors: Daniel Jakab, Eoin Martino Grua, Brian Michael Deegan, Anthony Scanlan, Pepijn van de Ven and Ciaran Eising


Article: https://library.imaging.org/ei/articles/36/17/AVM-109


Daniel Jakab, a member of D2iCE, participated in the Electronic Imaging Conference held in San Francisco, California, USA, from January 21 to 25, 2024. The author explains: ‘The Modulation Transfer Function (MTF) is an important image quality metric typically used in the automotive domain. However, despite the fact that optical quality has an impact on the performance of computer vision in vehicle automation, this metric is unknown for many public datasets. Additionally, wide field-of-view (FOV) cameras have become increasingly popular, particularly for low-speed vehicle automation applications. To investigate image quality in datasets, this paper proposes an adaptation of the Natural Scenes Spatial Frequency Response (NS-SFR) algorithm to suit cameras with a wide field-of-view.’
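
To give a sense of what an SFR/MTF measurement involves, below is a minimal slanted-edge-style estimate in Python: the classical measurement that NS-SFR extends to edges found in natural scenes. This is an illustrative sketch, not the paper's adapted NS-SFR algorithm; it assumes a clean, near-vertical edge in the region of interest and uses only NumPy.

```python
import numpy as np

def sfr_from_edge(roi, oversample=4):
    """Estimate the SFR/MTF from a near-vertical slanted-edge region of interest.

    roi: 2-D grayscale array containing a dark-to-bright edge.
    Returns (frequencies in cycles/pixel, normalized SFR).
    """
    rows, cols = roi.shape
    # 1. Locate the edge in each row via the centroid of the horizontal gradient.
    grad = np.abs(np.diff(roi.astype(float), axis=1))
    centers = (grad * np.arange(grad.shape[1])).sum(axis=1) / grad.sum(axis=1)
    # 2. Fit a line to the per-row edge positions for sub-pixel accuracy.
    slope, intercept = np.polyfit(np.arange(rows), centers, 1)
    # 3. Project every pixel onto the edge-normal axis and bin the values into
    #    an oversampled edge spread function (ESF).
    dist = np.arange(cols)[None, :] - (slope * np.arange(rows) + intercept)[:, None]
    bins = np.round(dist * oversample).astype(int)
    bins -= bins.min()
    counts = np.bincount(bins.ravel())
    esf = np.bincount(bins.ravel(), weights=roi.ravel().astype(float))
    esf = esf / np.maximum(counts, 1)  # guard against empty bins
    # 4. Differentiate to the line spread function (LSF), window, and FFT.
    lsf = np.diff(esf) * np.hanning(len(esf) - 1)
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]
    freqs = np.fft.rfftfreq(len(lsf), d=1.0 / oversample)
    return freqs, mtf

# Toy usage: a synthetic, slightly slanted edge with mild blur.
x = np.arange(64)[None, :] - (32 + 0.05 * np.arange(64))[:, None]
edge = 1.0 / (1.0 + np.exp(-x / 1.5))       # smooth sigmoid edge profile
freqs, mtf = sfr_from_edge(edge)
print(freqs[np.argmax(mtf < 0.5)])          # approx. MTF50 in cycles/pixel
```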