Canadian Adverse Driving Conditions Dataset

Open-source dataset for autonomous driving in wintry weather.

Partner logos: University of Waterloo, WISE Lab, WatCAR, Scale, University of Toronto, TRAIL

Six sample scenes from the dataset
Overview

High quality data for adverse driving conditions

The CADC dataset aims to promote research to improve self-driving in adverse weather conditions. It is the first public dataset to focus on real-world driving data in snowy weather conditions.

It features:

  • 56,000 camera images

  • 7,000 LiDAR sweeps

  • 75 scenes of 50-100 frames each

  • 10 annotation classes (tallied in the code sketch after this list)

    • 28,194 Cars
    • 62,851 Pedestrians
    • 20,441 Trucks
    • 4,867 Buses
    • 4,808 Garbage Containers on Wheels
    • 3,205 Traffic Guidance Objects
    • 705 Bicycles
    • 638 Pedestrians With Objects
    • 75 Horse and Buggy
    • 26 Animals
  • Full sensor suite: 1 LiDAR, 8 Cameras, Post-processed GPS/IMU

  • Adverse weather driving conditions, including snow
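
The class histogram above can be recomputed directly from the annotation files. Below is a minimal sketch, assuming the development kit's 3d_ann.json layout (a list of frames, each with a "cuboids" list whose entries carry a "label" field); the directory path is illustrative.

```python
import json
from collections import Counter
from pathlib import Path

def count_labels(ann_path: str) -> Counter:
    """Tally cuboid labels across every frame of one annotated sequence."""
    with open(ann_path) as f:
        frames = json.load(f)  # one entry per annotated LiDAR frame
    counts = Counter()
    for frame in frames:
        for cuboid in frame["cuboids"]:
            counts[cuboid["label"]] += 1
    return counts

# Illustrative layout: one drive date containing several sequences.
totals = Counter()
for ann_file in Path("cadcd/2018_03_06").glob("*/3d_ann.json"):
    totals += count_labels(str(ann_file))
print(totals.most_common())
```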

About Autonomoose

The Autonomoose is an autonomous vehicle platform created as a joint effort between the Toronto Robotics and AI Laboratory (TRAIL) at the University of Toronto and the Waterloo Intelligent Systems Engineering Lab (WISE Lab) at the University of Waterloo. The platform has enabled us to test various software modules while driving autonomously on public roads.

Data Collection

Complex Driving Scenarios in Adverse Conditions

For this dataset, routes were chosen to cover varying levels of traffic and a variety of vehicle types, always during snowfall.

Sequences were selected from data collected within the Region of Waterloo, Canada.

Car Setup

Vehicle, Sensor and Camera Details

We collected data using the Autonomoose, a Lincoln MKZ Hybrid outfitted with a full suite of LiDAR, inertial, and vision sensors.

Please refer to the figure below for the sensor configuration of the Autonomoose.

  • 8 Wide-Angle Cameras

    • 10 Hz capture frequency
    • 1/1.8” CMOS sensor, 1280x1024 resolution
    • Images stored as PNG
  • 1 LiDAR

    • 10 Hz capture frequency
    • 32 channels
    • 200 m range
    • 360° horizontal FOV; 40° vertical FOV (-25° to +15°)
  • 1 Post-processed GPS and IMU

More on Autonomoose: The University of Waterloo's self-driving research platform
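
For quick inspection, each LiDAR sweep in the released data can be read as a flat binary of 32-bit floats. The sketch below loads one sweep into an N x 4 array of (x, y, z, intensity); the four-float point layout follows the development kit, and the file path is illustrative.

```python
import numpy as np

def load_lidar_sweep(bin_path: str) -> np.ndarray:
    """Read one LiDAR sweep as rows of (x, y, z, intensity), all float32."""
    points = np.fromfile(bin_path, dtype=np.float32)
    return points.reshape(-1, 4)

# Illustrative path inside an extracted sequence.
sweep = load_lidar_sweep("cadcd/2018_03_06/0001/labeled/lidar_points/data/0000000000.bin")
print(sweep.shape)  # (number of points, 4)
```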


Sensor Calibration

Data alignment between sensors and cameras

To achieve a high quality multi-sensor dataset, it is essential to calibrate the extrinsics and intrinsics of every sensor.

We express all sensor extrinsics relative to the ego frame, i.e., the midpoint of the rear vehicle axle.

The most relevant steps are described below:

  • LiDAR extrinsics

  • Camera extrinsics

  • Camera intrinsic calibration

  • IMU extrinsics
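
To make the role of these calibrations concrete, here is a minimal sketch that projects LiDAR points into one camera: points travel from the LiDAR frame through the ego frame into the camera frame via 4x4 homogeneous transforms, then map to pixels through the intrinsic matrix. The matrix names are placeholders; the real values come from the calibration files shipped with the dataset.

```python
import numpy as np

def project_lidar_to_camera(points_lidar: np.ndarray,
                            T_ego_lidar: np.ndarray,
                            T_ego_cam: np.ndarray,
                            K: np.ndarray) -> np.ndarray:
    """Project Nx3 LiDAR points to Nx2 pixel coordinates.

    T_ego_lidar: 4x4 pose of the LiDAR in the ego frame (rear-axle midpoint).
    T_ego_cam:   4x4 pose of the camera in the ego frame.
    K:           3x3 camera intrinsic matrix.
    """
    pts_h = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])
    T_cam_lidar = np.linalg.inv(T_ego_cam) @ T_ego_lidar  # LiDAR -> camera
    pts_cam = (T_cam_lidar @ pts_h.T)[:3]                 # 3xN in camera frame
    in_front = pts_cam[2] > 0                             # drop points behind the lens
    uvw = K @ pts_cam[:, in_front]
    return (uvw[:2] / uvw[2]).T                           # perspective divide -> pixels
```

Note that the wide-angle lenses introduce significant distortion, so a full pipeline would also apply each camera's distortion model; this sketch omits that step.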

Data Annotation

Complex Label Taxonomy

Scale’s data annotation platform combines human work and review with smart tools, statistical confidence checks and machine learning checks to ensure the quality of annotations.

The resulting accuracy is consistently higher than what a human or synthetic labeling approach can achieve independently as measured against seven rigorous quality areas for each annotation.

The CADC includes 3D bounding boxes for all 10 object classes, together with a rich set of class-specific attributes (illustrated in the sketch below). For detailed definitions of each class and example images, please see the annotation instructions.
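
As a concrete view of the taxonomy, the sketch below pulls each cuboid's label, pose, and attributes out of one annotated frame, again assuming the development kit's 3d_ann.json layout; the field names and path are illustrative.

```python
import json

with open("cadcd/2018_03_06/0001/3d_ann.json") as f:
    frames = json.load(f)

for cuboid in frames[0]["cuboids"]:      # cuboids in the first annotated frame
    center = cuboid["position"]          # x, y, z of the box centre, in metres
    size = cuboid["dimensions"]          # x, y, z extents of the box
    print(cuboid["label"],
          (center["x"], center["y"], center["z"]),
          (size["x"], size["y"], size["z"]),
          cuboid["yaw"],                 # heading about the vertical axis
          cuboid.get("attributes", {}))  # class-specific attributes
```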

Get Started with CADC Dataset

View our paper here and download the development kit.

If you use our dataset, please cite our paper.