Intelligent Transportation Systems

The D²iCE Intelligent Transport Systems research domain covers many aspects of sensing and automation for driver assistance systems and self-driving cars. The domain is led by Ciarán Eising, who spent more than 12 years in industry developing sensing solutions for autonomous vehicles. Machine learning has already contributed meaningfully to high-quality vehicle automation and promises to continue doing so. We collaborate with many companies working in the automotive sensing space, ensuring our innovations have the best chance of making a real end-user impact.

Ongoing Projects

Impact of Camera Production Tolerances on Computer Vision

This project investigates the impact of camera production tolerances in surround-view cameras for autonomous driving applications.
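One way to build intuition for why production tolerances matter: even a fraction of a degree of mounting error shifts projected points by several pixels. The sketch below (a simplified pinhole model, not the project's actual fisheye pipeline; the focal length and tolerance values are illustrative assumptions) perturbs a camera's rotation and measures the resulting reprojection error:

```python
import numpy as np

def rot_x(deg):
    """Rotation matrix about the camera x-axis (degrees)."""
    t = np.radians(deg)
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0,  0],
                     [0, c, -s],
                     [0, s,  c]])

def project(point, f=1000.0):
    """Pinhole projection of a camera-frame point (z forward) to pixels."""
    x, y, z = point
    return np.array([f * x / z, f * y / z])

point = np.array([1.0, 0.5, 10.0])   # a point 10 m ahead of the camera

nominal = project(point)
# a 0.2 degree mounting error -- within plausible production tolerance
perturbed = project(rot_x(0.2) @ point)

error_px = np.linalg.norm(perturbed - nominal)
```

With these assumed numbers the 0.2° tilt already moves the point by a few pixels, which is significant for downstream geometry tasks such as depth estimation.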

RadNet - Automotive detection, tracking and prediction using radar data

Radar sensors work well in low light and adverse weather conditions, unlike many other automotive sensor modalities. This project uses radar data for pedestrian detection, tracking and prediction to help avoid road accidents.
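A common baseline for tracking radar detections, against which learned models are typically compared, is a constant-velocity Kalman filter. The sketch below is illustrative only (the frame interval and noise covariances are assumed values, not the project's settings):

```python
import numpy as np

dt = 0.1  # radar frame interval in seconds (assumed)

# Constant-velocity state [x, y, vx, vy]
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)  # radar supplies a position fix
Q = np.eye(4) * 0.01   # process noise (assumed)
R = np.eye(2) * 0.5    # measurement noise (assumed)

x = np.array([0.0, 0.0, 1.0, 0.0])  # pedestrian moving at 1 m/s in x
P = np.eye(4)

def predict(x, P):
    """Propagate the track forward one frame."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Fuse a new radar detection z into the track."""
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    return x + K @ y, (np.eye(4) - K @ H) @ P

x, P = predict(x, P)
x, P = update(x, P, np.array([0.11, 0.0]))
```

The predict step doubles as a short-horizon trajectory forecast: iterating it without updates extrapolates where the pedestrian will be in coming frames.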

Vision-based language tasks in Autonomous Driving

We aim to design a visual common-sense model that interacts with the end user in autonomous driving scenarios and answers queries about the driving decisions made by the vehicle.

MultiPoseNet

A robust multi-modal approach to pedestrian pose and activity recognition for autonomous driving.

Visual trajectory prediction for autonomous vehicles

This project aims to design an end-to-end trajectory-prediction network for autonomous vehicles that better handles complex scenarios such as corner cases.

Native surround view automotive fusion, mapping and training

Fisheye cameras are commonly deployed on vehicles in a surround-view configuration, with one camera each at the front and rear of the vehicle and one on each wing mirror. This project investigates fusion, mapping and training performed natively on these surround-view images.
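Fisheye optics are used here because a pinhole projection cannot cover the roughly 180° field of view these cameras need: the pinhole image radius r = f·tan(θ) diverges as the off-axis angle θ approaches 90°, while the equidistant fisheye model r = f·θ grows only linearly. A minimal comparison (the focal length is an assumed illustrative value, and real automotive lenses use more elaborate distortion models):

```python
import numpy as np

def pinhole_radius(theta, f=300.0):
    """Image radius under the pinhole model: r = f * tan(theta)."""
    return f * np.tan(theta)

def equidistant_radius(theta, f=300.0):
    """Image radius under the equidistant fisheye model: r = f * theta."""
    return f * theta

# At 80 degrees off-axis the pinhole radius has already exploded,
# while the fisheye radius remains modest -- hence fisheye lenses
# for wide-FOV surround-view sensing.
theta = np.radians(80.0)
r_pin = pinhole_radius(theta)
r_fish = equidistant_radius(theta)
```

The strong radial distortion this implies is exactly why "native" surround-view processing, rather than undistorting to a pinhole image first, is an interesting research direction.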