D²iCE at the Irish Machine Vision and Image Processing (IMVIP) Conference

Posted on September 1, 2023 

The D²iCE team recently attended the Irish Machine Vision and Image Processing Conference 2023, held at the University of Galway from August 29th to September 1st. This is the flagship conference of the Irish Pattern Recognition and Classification Society (IPRCS). The event brought together experts and researchers in computer vision, offering a platform to showcase Irish research in the field and to exchange knowledge. D²iCE researchers presented six papers tackling diverse challenges in computer vision.

D²iCE’s strong presence at IMVIP 2023 highlights its commitment to advancing the field of computer vision. The six papers span a range of innovative solutions and insights, from medical image classification to camera calibration, autonomous vehicle safety, and visual question answering. These contributions underscore the team’s dedication to computer vision research and to finding practical applications for it.

Below are short descriptions of the papers presented.

The team at IMVIP (L-R): Mark Halton, Ciarán Eising, Ken Power, Kaavya Rekanar, Aryan Singh, Sushil Sharma, Arindam Das, and Ciarán Hogan 

Compact & Capable: Harnessing Graph Neural Networks and Edge Convolution for Medical Image Classification

Authors: Aryan Singh, Pepijn van de Ven, and Ciarán Eising

Award: The paper received the Jonathan Campbell Best Paper Award at the conference.

Summary: This research delves into the application of Graph Neural Networks (GNNs) for medical image classification. The authors introduced a model that combines GNNs and edge convolution. Impressively, this model achieved performance on par with state-of-the-art Deep Neural Networks (DNNs) but with 1000 times fewer parameters, significantly reducing training time and data requirements.

Full paper: https://arxiv.org/abs/2307.12790
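
For readers curious about what the edge-convolution idea looks like in practice, below is a minimal, plain-PyTorch sketch of edge convolution over a k-nearest-neighbour graph of image patch features. The shapes, layer sizes, and class names are illustrative assumptions, not the authors’ implementation.

```python
# Illustrative sketch only: edge convolution over a k-NN graph of patch
# features, in the spirit of a GNN + EdgeConv image classifier.
import torch
import torch.nn as nn

class EdgeConv(nn.Module):
    """EdgeConv: h_i = max_j MLP([x_i, x_j - x_i]) over the k nearest neighbours."""
    def __init__(self, in_dim, out_dim, k=8):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Linear(2 * in_dim, out_dim), nn.ReLU())

    def forward(self, x):                                   # x: (B, N, C) patch features
        idx = torch.cdist(x, x).topk(self.k + 1, largest=False).indices[..., 1:]  # (B, N, k)
        neighbours = torch.gather(
            x.unsqueeze(1).expand(-1, x.size(1), -1, -1), 2,
            idx.unsqueeze(-1).expand(-1, -1, -1, x.size(-1)))                      # (B, N, k, C)
        centre = x.unsqueeze(2).expand_as(neighbours)
        edge_feat = torch.cat([centre, neighbours - centre], dim=-1)               # (B, N, k, 2C)
        return self.mlp(edge_feat).max(dim=2).values                               # (B, N, out_dim)

class GraphImageClassifier(nn.Module):
    """Two EdgeConv layers, global mean pooling, linear classification head."""
    def __init__(self, in_dim=64, num_classes=2):
        super().__init__()
        self.conv1 = EdgeConv(in_dim, 128)
        self.conv2 = EdgeConv(128, 128)
        self.head = nn.Linear(128, num_classes)

    def forward(self, x):                                   # x: (B, N, in_dim)
        x = self.conv2(self.conv1(x))
        return self.head(x.mean(dim=1))                     # logits per image

if __name__ == "__main__":
    patches = torch.randn(4, 49, 64)                        # e.g. a 7x7 grid of patch embeddings
    logits = GraphImageClassifier()(patches)
    print(logits.shape)                                     # torch.Size([4, 2])
```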


The Impact of Glare on End of Production Line Camera Calibration Algorithms: A Brief Analysis

Authors: Payal Bhattacherjee, Anbuchezhiyan Selvaraju, Sudarshan Paul, Arindam Das, Ishan Vermani

Summary: Arindam Das presented this paper, which explores the impact of glare on camera calibration, particularly in end-of-production-line scenarios. The research identifies the specific challenges glare introduces and proposes solutions to mitigate them. By leveraging gradient filters and line detection algorithms, the study aims to improve the accuracy of the estimated calibration parameters. The paper also underscores the role of optimal lighting conditions and quality control in production line environments.

Full paper: https://drive.google.com/file/d/1_9ZnFobhkRQtNdaKClYFnOnby6acMWNL/view
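
As a rough illustration of the kind of pre-processing mentioned above, the sketch below applies a gradient (Sobel) filter and a probabilistic Hough line detector to a calibration-target frame and computes a crude glare indicator. The thresholds, file name, and glare heuristic are placeholders, not values from the paper.

```python
# Illustrative sketch: gradient filtering + line detection on a target image,
# with a simple saturated-and-flat-gradient heuristic as a glare indicator.
import cv2
import numpy as np

def detect_pattern_lines(path, min_line_length=80):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(path)

    # Gradient filter (Sobel) to emphasise target edges; glare regions tend to
    # saturate and lose gradient structure.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    magnitude = cv2.magnitude(gx, gy)

    # Probabilistic Hough transform to recover straight pattern lines.
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                            minLineLength=min_line_length, maxLineGap=10)

    # Crude glare indicator: fraction of near-saturated pixels with weak gradients.
    glare_mask = (gray > 240) & (magnitude < 10)
    return lines, float(glare_mask.mean())

if __name__ == "__main__":
    lines, glare_ratio = detect_pattern_lines("eol_target_frame.png")  # placeholder file
    n = 0 if lines is None else len(lines)
    print(f"{n} lines detected, glare ratio {glare_ratio:.3f}")
```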

Navigating Uncertainty: The Role of Short-Term Trajectory Prediction in Autonomous Vehicle Safety

Authors: Sushil Sharma, Ganesh Sistu, Lucie Yahiaoui, Arindam Das, Mark Halton, Ciarán Eising

Summary: In this paper, the authors emphasize the critical importance of precise and reliable trajectory predictions for the safety and efficiency of autonomous vehicles. They introduce a synthetic dataset that incorporates complex scenarios, including pedestrian crossings and vehicle overtaking, to evaluate an end-to-end prediction model’s robustness. The study’s findings shed light on the model’s ability to handle challenging corner cases, making it a valuable contribution to autonomous driving research.

Full paper: https://arxiv.org/abs/2307.05288
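
For illustration, here is a minimal sketch of a short-term trajectory predictor in PyTorch: past 2D positions in, a handful of future positions out. The architecture and horizons are illustrative assumptions, not the end-to-end model evaluated in the paper.

```python
# Illustrative sketch: encode a short history of 2D positions with an LSTM
# and regress a short future horizon in one shot.
import torch
import torch.nn as nn

class ShortTermTrajectoryPredictor(nn.Module):
    def __init__(self, hidden=64, future_steps=5):
        super().__init__()
        self.future_steps = future_steps
        self.encoder = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.decoder = nn.Linear(hidden, future_steps * 2)

    def forward(self, past_xy):                      # (B, past_steps, 2)
        _, (h, _) = self.encoder(past_xy)
        out = self.decoder(h[-1])                    # (B, future_steps * 2)
        return out.view(-1, self.future_steps, 2)    # (B, future_steps, 2)

if __name__ == "__main__":
    past = torch.randn(8, 10, 2)                     # 8 agents, 10 past positions each
    future = ShortTermTrajectoryPredictor()(past)
    print(future.shape)                              # torch.Size([8, 5, 2])
```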


Towards a Performance Analysis on Pre-trained Visual Question Answering Models for Autonomous Driving

Authors: Kaavya Rekanar, Ciarán Eising, Ganesh Sistu, Martin Hayes

Summary: This paper offers a preliminary analysis of three prominent Visual Question Answering (VQA) models in the context of driving scenarios. ViLBERT, ViLT, and LXMERT are evaluated based on the similarity of their responses to reference answers provided by computer vision experts. The research explores the use of transformers in multimodal architectures, showcasing their promising potential for improved question answering in autonomous driving.

Full paper: https://arxiv.org/abs/2307.09329
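
To make the evaluation idea concrete, the sketch below scores each model’s answer against an expert reference answer using a simple text-similarity ratio from Python’s standard library. The questions, answers, and choice of metric here are illustrative; the paper’s actual evaluation may differ.

```python
# Illustrative sketch: compare VQA model answers against an expert reference
# answer with a simple string-similarity ratio.
from difflib import SequenceMatcher

def answer_similarity(candidate: str, reference: str) -> float:
    """Similarity in [0, 1] between a model answer and the expert reference."""
    return SequenceMatcher(None, candidate.lower(), reference.lower()).ratio()

reference = "the pedestrian is crossing at the zebra crossing"   # placeholder reference
model_answers = {                                                # placeholder model outputs
    "ViLBERT": "a pedestrian crossing the road",
    "ViLT": "person walking",
    "LXMERT": "the pedestrian is crossing the street",
}

for model, answer in model_answers.items():
    print(f"{model}: {answer_similarity(answer, reference):.2f}")
```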


Self-Supervised Online Camera Calibration for Autonomous Driving Applications

Authors: Ciarán Hogan, Ganesh Sistu, Ciarán Eising

Summary: This paper proposes a novel deep-learning framework for self-supervised camera calibration. The framework learns the intrinsic and extrinsic camera parameters from unlabeled driving data, allowing the camera to be calibrated in real time without any physical targets or special planar surfaces. This could cut the cost of re-calibration for both manufacturer and consumer, since the vehicle can re-calibrate itself over its lifetime, which also improves the accuracy and safety of its ADAS features.

Full paper: https://arxiv.org/abs/2308.08495
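
To give a flavour of this kind of setup, the sketch below shows a small network head regressing pinhole intrinsics from an image, together with a differentiable projection that could sit inside a self-supervised reconstruction loss. Everything here (layer sizes, normalisation, dummy data) is an assumption for illustration, not the paper’s framework.

```python
# Illustrative sketch: regress normalised pinhole intrinsics from an image and
# project 3D points with them; a self-supervised loss would compare the
# resulting reprojections or synthesised views (not shown here).
import torch
import torch.nn as nn

class IntrinsicsHead(nn.Module):
    """Small CNN that regresses pinhole intrinsics (fx, fy, cx, cy) in pixel units."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, 4)

    def forward(self, img):
        b, _, h, w = img.shape
        x = self.features(img).flatten(1)
        fx, fy, cx, cy = torch.sigmoid(self.fc(x)).unbind(dim=1)  # normalised outputs
        K = torch.zeros(b, 3, 3, device=img.device)
        K[:, 0, 0] = fx * w
        K[:, 1, 1] = fy * h
        K[:, 0, 2] = cx * w
        K[:, 1, 2] = cy * h
        K[:, 2, 2] = 1.0
        return K

def project(points_cam, K):
    """Pinhole projection of camera-frame points (B, N, 3) to pixel coordinates (B, N, 2)."""
    uvw = torch.einsum('bij,bnj->bni', K, points_cam)
    return uvw[..., :2] / uvw[..., 2:3].clamp(min=1e-6)

if __name__ == "__main__":
    imgs = torch.randn(2, 3, 128, 256)         # dummy driving frames
    pts = torch.randn(2, 100, 3).abs() + 1.0   # dummy points in front of the camera
    K = IntrinsicsHead()(imgs)
    pix = project(pts, K)
    print(K.shape, pix.shape)                  # (2, 3, 3), (2, 100, 2)
```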

Hardware Accelerators in Autonomous Driving

Authors: Ken Power, Shailendra Deva, Ting Wang, Julius Li, Ciarán Eising

Summary: D²iCE researchers Ken Power and Ciarán Eising presented this work, which provides an overview of machine learning (ML) accelerators, with examples of their use for machine vision in autonomous vehicles. The paper offers recommendations for researchers and practitioners and highlights a trajectory for ongoing and future research in this emerging field.

Full paper: https://arxiv.org/abs/2308.06054