Projects
Monocular image-based 3D Object Detection
Boosting a monocular 3D object detector with auxiliary depth and uncertainty prediction
(ranked 1st and 3rd among published monocular methods on the KITTI BEV and 3D detection benchmarks, respectively, as of April 2021)
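The model itself is not reproduced here; as a rough sketch of the auxiliary-prediction idea, the snippet below adds a depth head that also outputs a log-uncertainty and is trained with an uncertainty-attenuated L1 loss (a common aleatoric-uncertainty formulation, not necessarily the one used in the project; module names, channel sizes, and shapes are illustrative assumptions):

```python
import torch
import torch.nn as nn

class AuxDepthHead(nn.Module):
    """Auxiliary head: predicts a depth map and its log-uncertainty
    from a shared detection feature (names/shapes are illustrative)."""
    def __init__(self, in_channels: int = 256):
        super().__init__()
        self.depth = nn.Sequential(
            nn.Conv2d(in_channels, in_channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, 1, 1))        # predicted depth
        self.log_sigma = nn.Sequential(
            nn.Conv2d(in_channels, in_channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, 1, 1))        # predicted log std (uncertainty)

    def forward(self, feat):
        return self.depth(feat), self.log_sigma(feat)

def uncertainty_attenuated_l1(pred_depth, log_sigma, gt_depth, valid_mask):
    """|d - d*| * exp(-s) + s, averaged over annotated pixels: large
    predicted uncertainty down-weights the error but is penalized itself."""
    loss = torch.abs(pred_depth - gt_depth) * torch.exp(-log_sigma) + log_sigma
    return loss[valid_mask].mean()
```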
Bird’s-eye-view (BEV) object detection via an Inverse Perspective Mapping (IPM) image
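IPM itself is a homography from the image plane to the ground plane; a minimal OpenCV sketch is shown below (the point correspondences, image path, and BEV resolution are placeholders, not project values):

```python
import cv2
import numpy as np

def ipm_warp(image, src_pts, dst_pts, bev_size=(400, 800)):
    """Warp a front-view image to a bird's-eye-view image via a
    ground-plane homography from 4 point correspondences."""
    H = cv2.getPerspectiveTransform(np.float32(src_pts), np.float32(dst_pts))
    return cv2.warpPerspective(image, H, bev_size)  # bev_size = (width, height)

# Placeholder correspondences: road-plane pixels in the image -> BEV pixels.
src = [(420, 720), (860, 720), (700, 450), (580, 450)]
dst = [(100, 800), (300, 800), (300, 0), (100, 0)]
bev = ipm_warp(cv2.imread("frame.png"), src, dst)   # "frame.png" is a placeholder path
```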
Sensor fusion-based 3D Object Detection
Low-level Range-Azimuth radar heatmap and monocular image fusion
Robust radar point cloud and monocular image fusion using gating mechanism
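The fusion architecture is not detailed here; as a generic illustration of a gating mechanism, the sketch below learns a per-location gate that decides how much of the (possibly noisy) radar feature to admit into the image feature (module names and layer choices are assumptions):

```python
import torch
import torch.nn as nn

class GatedRadarFusion(nn.Module):
    """Fuse image and radar feature maps with a learned spatial gate,
    so unreliable radar evidence can be suppressed per location."""
    def __init__(self, img_ch: int, radar_ch: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(img_ch + radar_ch, 1, kernel_size=3, padding=1),
            nn.Sigmoid())                            # 0 = ignore radar, 1 = trust radar
        self.radar_proj = nn.Conv2d(radar_ch, img_ch, kernel_size=1)

    def forward(self, img_feat, radar_feat):
        g = self.gate(torch.cat([img_feat, radar_feat], dim=1))
        return img_feat + g * self.radar_proj(radar_feat)
```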
Autonomous Driving Dataset Acquisition
The dataset consists of 7,520 frames and 86,078 annotations collected over more than 10 hours of driving at various times (day, night) and in various environments (urban, suburb, motorway). It includes data from the following sensors: 3D LiDAR (Ouster OS1-64), point-level radar (Continental ARS408), low-level radar (INRAS RadarBook2), camera (FLIR BlackFly), and DGPS with IMU (Novatel Flexpak6).
In collaboration with: Sangmin Sim (Low-level radar, data collection), Sihwan Hwang (data collection)
Traffic Light Detection
Learning-based detector + Rule-based classifier
1) Extract RoIs (traffic lights) using a one-stage object detector
2) Extract pixels of the lit bulb on the traffic light using histogram-based thresholding in HSV color space
2-1) Classify the bulb color against color templates using a clustering algorithm
2-2) Classify the left-turn signal with a rule-based kernel if the light is green
3) Remove false alarms with a moving-average filter
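A minimal sketch of steps 2), 2-1), and 3) above, assuming the detector already provides traffic-light RoIs; thresholds and hue templates are placeholders, and the nearest-template assignment below stands in for the clustering step:

```python
import cv2
import numpy as np
from collections import deque

# Hue templates in OpenCV's 0-179 hue range; values are illustrative.
COLOR_TEMPLATES = {"red": 0, "yellow": 28, "green": 65}

def hue_dist(a, b):
    d = abs(a - b)
    return min(d, 180 - d)                    # hue is circular

def classify_bulb_color(roi_bgr, v_thresh=200, s_thresh=100):
    """Step 2: threshold bright, saturated pixels in HSV.
    Step 2-1 (simplified): assign the mean hue of the lit bulb
    to the nearest color template."""
    hsv = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    lit = (v > v_thresh) & (s > s_thresh)
    if lit.sum() < 20:                        # too few lit pixels -> treat as off
        return "off"
    mean_hue = float(h[lit].astype(np.float32).mean())
    return min(COLOR_TEMPLATES, key=lambda c: hue_dist(COLOR_TEMPLATES[c], mean_hue))

class TemporalFilter:
    """Step 3: majority vote over recent frames to suppress
    single-frame false alarms."""
    def __init__(self, window=5):
        self.history = deque(maxlen=window)

    def update(self, color):
        self.history.append(color)
        return max(set(self.history), key=self.history.count)
```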
Sensor Calibration
Camera intrinsic calibration using a MATLAB toolbox
Camera-LiDAR extrinsic calibration using genetic-algorithm-based optimization
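The actual cost function and GA settings are project-specific; the sketch below only shows the general shape of such an approach: a small genetic algorithm searching the 6-DoF extrinsic parameters to minimize a user-supplied alignment cost (e.g., misalignment between projected LiDAR points and image edges). All names and hyperparameters are hypothetical.

```python
import numpy as np

def evolve_extrinsics(cost_fn, pop_size=50, generations=200, sigma=0.05, seed=0):
    """Minimal genetic algorithm over 6-DoF extrinsics
    (rx, ry, rz in radians; tx, ty, tz in meters).
    cost_fn(params) -> float is supplied by the user, e.g. the misalignment
    between projected LiDAR points and image edges."""
    rng = np.random.default_rng(seed)
    pop = rng.normal(0.0, 0.5, size=(pop_size, 6))       # initial candidate extrinsics
    for _ in range(generations):
        costs = np.array([cost_fn(p) for p in pop])
        elite = pop[np.argsort(costs)[: pop_size // 5]]  # keep the best 20%
        # Crossover: average two random elite parents, then mutate with Gaussian noise.
        parents = elite[rng.integers(0, len(elite), size=(pop_size, 2))]
        children = parents.mean(axis=1) + rng.normal(0.0, sigma, size=(pop_size, 6))
        pop = np.vstack([elite, children])[:pop_size]
    costs = np.array([cost_fn(p) for p in pop])
    return pop[np.argmin(costs)]                          # best extrinsic parameters found
```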