Purpose of this competition
SUBARU has developed EyeSight, its preventive safety technology, which uses a stereo camera.
By using the images from the left and right cameras of a stereo pair, objects can be recognized and their distances measured with a high degree of accuracy. In addition, technologies that combine stereo vision with AI to estimate object distances and velocities are becoming more common. Until now, EyeSight's algorithms have been designed for implementation on specific hardware. However, given the rapid development of these technologies, we need to consider a wider range of possibilities in the future, and we have decided to hold this contest in the spirit of open innovation.
In this competition, we provide a dataset of actual left and right images captured by EyeSight, together with the disparity maps generated from them. The real-world images include various low-visibility scenes, such as night and rain. Your challenge is to use this image dataset to create an algorithm that detects the velocity of objects and can be used in a real environment.
Description
You are required to create an algorithm that infers the velocity of the leading vehicle in each frame from the video image data and associated annotations.
Dataset
| | Video Data | Annotation |
|---|---|---|
| File Format | mp4, raw, png | json |
| Contents | - Right camera video images (10 fps)<br>- Left camera video images (10 fps)<br>- Disparity map (raw data and 2D image) | - Vehicle velocity<br>- Steering angle<br>- Velocity of the leading vehicle, inter-vehicle distance, etc. |
| Sample Size | Train data: 737 scenes<br>Test data: 239 scenes | Train data: 737 scenes<br>Test data: 239 scenes |
| Notes | - Each video contains about 100–200 frames in total.<br>- Scene IDs 533 and 601 have been deleted. | - The test data does not include the speed of, or distance to, the leading vehicle; its bounding-box coordinates are given only in the first frame.<br>- Scene IDs 163 and 209 have been deleted. |
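As a starting point, the disparity map relates directly to distance, and distance changes over consecutive frames relate to relative velocity. The sketch below illustrates this idea; the focal length (`FOCAL_PX`) and camera baseline (`BASELINE_M`) are placeholder values, not the actual EyeSight calibration, which you should take from the dataset documentation.

```python
import numpy as np

# Hypothetical stereo parameters -- NOT the real EyeSight calibration.
FOCAL_PX = 1400.0   # focal length in pixels (assumed)
BASELINE_M = 0.35   # distance between the two cameras in metres (assumed)
FPS = 10.0          # frame rate stated in the dataset table

def disparity_to_depth(disparity_px):
    """Convert a stereo disparity (pixels) to depth (metres): Z = f * B / d."""
    d = np.asarray(disparity_px, dtype=float)
    # Zero or negative disparity means no valid match: report infinite depth.
    return np.where(d > 0, FOCAL_PX * BASELINE_M / np.maximum(d, 1e-6), np.inf)

def relative_velocity(depth_prev_m, depth_curr_m, fps=FPS):
    """Relative velocity of the leading vehicle between two consecutive
    frames, in m/s; positive means the gap to the vehicle is widening."""
    return (depth_curr_m - depth_prev_m) * fps

# Example: the vehicle's median disparity drops from 50 px to 48 px,
# i.e. it is moving away from the ego vehicle.
z_prev = disparity_to_depth(50.0)
z_curr = disparity_to_depth(48.0)
print(relative_velocity(z_prev, z_curr))
```

In practice a per-frame disparity value would come from tracking the leading vehicle's rectangle (given in the first test frame) and taking a robust statistic, such as the median disparity, inside it; frame-to-frame differencing of raw depths is noisy, so some temporal smoothing is likely needed.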