D²iCE ITS highlights at the Resilient and Robust Sensing for Automated Systems for Transportation Workshop (University of Warwick)

Posted on June 27, 2024 

On 19 June 2024, a team of D²iCE researchers (Daniel Jakab, Sushil Sharma, Seamie Hayes, Sanjay Kumar, Anam Manzoor, and Reenu Mohandas), led by Ciarán Eising, attended the 3rd Resilient and Robust Sensing for Automated Systems for Transportation workshop, organized by the Warwick Manufacturing Group (WMG) in association with the Automotive Electronic Systems Innovation Network (AESIN) at the University of Warwick, Coventry, United Kingdom.

Daniel Jakab showcased a poster titled “ARDÁN: Automated Reference-free Defocus Characterization for Automotive Near-field Cameras”. The poster presented novel research on the optical quality of public fisheye datasets, using a new method that obtains measurements directly from natural scenes.

Abstract: Measuring optical quality in automotive camera lenses is critical for safety. ARDÁN evaluates horizontal slanted edges according to the ISO 12233:2023 standard in four public datasets, using Region of Interest (ROI) selection and the mean MTF50 (the spatial frequency at which the Modulation Transfer Function falls to 50%) as the measure of optical quality, and Regional Mask to Lens Alignment (RMLA) to remove occlusion and vignetting.
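For readers curious how a slanted-edge MTF50 measurement works in principle, here is a minimal, self-contained sketch (not the ARDÁN implementation): given a 1-D edge spread function extracted from an ROI, it differentiates to a line spread function, takes a Fourier transform, and reads off the frequency where the MTF drops to 50%. All names and parameters are illustrative.

```python
import numpy as np

def mtf50_from_edge(esf: np.ndarray, pixel_pitch: float = 1.0) -> float:
    """Estimate MTF50 (cycles/pixel) from a 1-D edge spread function (ESF).

    A simplified slanted-edge calculation; ISO 12233 implementations
    additionally oversample the edge using its slant angle.
    """
    # Differentiate the ESF to get the line spread function (LSF).
    lsf = np.gradient(esf)
    # Window the profile to suppress noise at its ends.
    lsf = lsf * np.hanning(lsf.size)
    # The MTF is the normalised magnitude of the LSF's Fourier transform.
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch)
    # Find the first frequency where the MTF drops below 0.5 and
    # linearly interpolate between the bracketing samples.
    below = np.nonzero(mtf < 0.5)[0]
    if below.size == 0:
        return float(freqs[-1])  # MTF never falls to 0.5 in-band
    i = below[0]
    f0, f1, m0, m1 = freqs[i - 1], freqs[i], mtf[i - 1], mtf[i]
    return float(f0 + (0.5 - m0) * (f1 - f0) / (m1 - m0))
```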

Sushil Sharma presented a poster titled “BEVSeg2GTA: Joint Vehicle Segmentation and GNNs for Ego Vehicle Trajectory Prediction in BEV”.

Abstract: The BEVSeg2GTA framework improves ego vehicle trajectory prediction by combining an encoder-decoder transformer with a Graph Neural Network (GNN). It starts with a projection module that converts multi-camera views and map data into a Bird’s Eye View (BEV) perspective using an encoder-decoder transformer. The resulting segmented output is then passed to the GNN, which builds a graph of spatial information. This graph is used by a Spatio-Temporal Prediction Network (STPN) to predict the ego vehicle's future trajectory.
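As a rough illustration of the pipeline shape described above (segmented BEV grid → graph of spatial features → trajectory head), here is a hypothetical PyTorch sketch. Every module is a stand-in for its much larger counterpart; this is not the actual BEVSeg2GTA architecture.

```python
import torch
import torch.nn as nn

class BEVGraphTrajectory(nn.Module):
    """Illustrative pipeline: BEV segmentation -> graph features -> trajectory."""

    def __init__(self, bev_channels: int = 64, node_dim: int = 32, horizon: int = 12):
        super().__init__()
        self.horizon = horizon
        # Stand-in for the encoder-decoder transformer that produces
        # a segmented BEV grid from multi-camera views and map data.
        self.bev_segmenter = nn.Conv2d(bev_channels, node_dim, kernel_size=1)
        # Stand-in for the GNN update over grid-cell nodes.
        self.gnn = nn.Linear(node_dim, node_dim)
        # Stand-in for the Spatio-Temporal Prediction Network (STPN):
        # maps pooled graph features to future (x, y) waypoints.
        self.stpn = nn.Linear(node_dim, horizon * 2)

    def forward(self, bev_features: torch.Tensor) -> torch.Tensor:
        seg = self.bev_segmenter(bev_features)      # (B, node_dim, H, W)
        nodes = seg.flatten(2).transpose(1, 2)      # (B, H*W, node_dim) graph nodes
        nodes = torch.relu(self.gnn(nodes))         # per-node feature update
        pooled = nodes.mean(dim=1)                  # global graph readout
        return self.stpn(pooled).view(-1, self.horizon, 2)
```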

Anam Manzoor presented a poster titled “Deformable Convolution Based Road Scene Semantic Segmentation of Fisheye Images in Autonomous Driving”.

Abstract: The research focuses on advancing semantic segmentation for fisheye imagery in autonomous driving. The study compares Deformable Convolutional Neural Networks (DCNNs) with traditional architectures like Vanilla U-Net and Residual U-Net. Results highlight DCNNs' effectiveness in handling fisheye lens distortions, significantly improving segmentation accuracy. These findings provide foundational insights for enhancing the reliability of autonomous vehicle perception systems in navigating complex environments.
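The core idea of a deformable convolution is that the network learns per-pixel offsets for its sampling grid, letting the kernel bend with the strong radial distortion of fisheye lenses. Below is a minimal sketch using torchvision's DeformConv2d; the segmentation networks compared in the poster are, of course, far larger than this single block.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableBlock(nn.Module):
    """Minimal deformable-convolution block (illustrative only)."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # One (dx, dy) offset per 3x3 kernel tap: 2 * 3 * 3 = 18 channels.
        self.offset_pred = nn.Conv2d(in_ch, 18, kernel_size=3, padding=1)
        self.deform_conv = DeformConv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        offsets = self.offset_pred(x)    # where each kernel tap should sample
        return self.deform_conv(x, offsets)
```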

Seamie Hayes presented a poster titled “Velocity Driven Vision: Asynchronous Sensor Fusion Birds Eye View Models for Autonomous Vehicles”.  

Abstract: Fusing different sensor modalities can be a difficult task, particularly if they are asynchronous. Difficulties arise because the sensor modalities capture information at different times and at different positions in space, so they are neither spatially nor temporally aligned. This paper investigates the challenge of radar and LiDAR sensors being asynchronous relative to the camera sensors, for various time latencies, and resolves the issue of spatial and temporal alignment. Our approach to sensor asynchrony yields promising results, namely by utilising radar velocity information.
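The key insight, that radar's directly measured velocity can be used to propagate stale detections forward to the camera's capture time, can be sketched in a few lines. The following assumes a constant-velocity model and ignores ego motion; it illustrates the idea rather than the paper's method.

```python
import numpy as np

def compensate_radar_latency(points: np.ndarray,
                             velocities: np.ndarray,
                             latency_s: float) -> np.ndarray:
    """Propagate radar detections forward to the camera timestamp.

    points:     (N, 2) detection positions (x, y) in the ego frame, metres
    velocities: (N, 2) per-detection velocity vectors, metres/second
    latency_s:  time by which the radar scan lags the camera frame

    Constant-velocity assumption; ego motion is ignored for brevity.
    """
    return points + velocities * latency_s
```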

Sanjay Kumar presented a poster titled “Camera-Radar Fusion in Autonomous Vehicles for Perception Tasks”.

Abstract: In the field of autonomous driving, developing robust and accurate perception systems is essential for ensuring safety and efficiency. These systems integrate multiple sensors: cameras for high-resolution visuals, LiDAR for depth information (though it is less effective in adverse weather), and radar for reliable long-range detection in challenging conditions. The combination of these technologies enables comprehensive environmental perception, which is vital for complex driving tasks such as motion prediction, path planning, and automated control. However, integrating these diverse inputs involves complex challenges in sensor selection and in data fusion at various levels (early, intermediate, and late), which are key research areas for advancing autonomous vehicle capabilities.
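The three fusion levels mentioned above can be shown schematically. The tensors below are dummy placeholders; real systems fuse inside learned networks rather than with single operations like these.

```python
import torch

# Hypothetical feature tensors for one scene (shapes are placeholders).
cam_feat = torch.randn(1, 64, 32, 32)    # camera branch features
radar_feat = torch.randn(1, 64, 32, 32)  # radar branch features

# Early fusion: combine inputs/low-level features before a shared network.
early = torch.cat([cam_feat, radar_feat], dim=1)

# Intermediate fusion: merge mid-level features, e.g. by element-wise sum.
intermediate = cam_feat + radar_feat

# Late fusion: combine per-branch predictions (dummy class scores here).
cam_scores, radar_scores = torch.randn(1, 10), torch.randn(1, 10)
late = 0.5 * (cam_scores + radar_scores)
```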

Finally, Ciarán Eising took part in a panel discussion on “Sensing in the era of the Software-Defined Vehicle”, alongside fellow panelists Kashif Siddiq (Oxford RF Solutions Ltd) and Taufiq Rahman (National Research Council Canada / Conseil National de Recherches Canada). The discussion was led by Luca Cenciotti of JLR.