Symphony Lake Dataset - Visual Localisation Benchmark

A subset of the Symphony dataset is available to evaluate long-term visual localisation. It provides images taken from the boat’s traversals of the lake shore under various seasonal and illumination conditions. A single traversal with a reference season-light condition (overcast winter with no foliage) is used to represent the scene. For this reference traversal, the dataset provides the 6DOF poses of the camera in the file `db.txt`.

The estimated query poses are evaluated through the long-term visual localisation evaluation server available at: https://www.visuallocalization.net/benchmark/

Using the Symphony Seasons Dataset

The Symphony Seasons dataset builds on top of the Symphony dataset created by Shane Griffith, Georges Chahine and Cédric Pradalier. The original dataset can be found here.

If you are using the Symphony Seasons dataset in a publication, please cite both of the following sources:

@article{griffith2017symphony,
  title={Symphony lake dataset},
  author={Griffith, Shane and Chahine, Georges and Pradalier, C{\'e}dric},
  journal={The International Journal of Robotics Research},
  volume={36},
  number={11},
  pages={1151--1158},
  year={2017},
  publisher={SAGE Publications Sage UK: London, England}
}

@inproceedings{pradalierPairs,
  title={Multi-session lake-shore monitoring in visually challenging conditions},
  author={Pradalier, C{\'e}dric and Aravecchia, St{\'e}phanie and Pomerleau, Fran{\c{c}}ois},
  booktitle={International Conference on Field and Service Robotics},
  year={2019}
}

Image Details

The Symphony dataset consists of visual traversals of the shore of the Symphony Lake in Metz, France. The 1.3 km shore is surveyed using a pan-tilt-zoom (PTZ) camera and a 2D LiDAR mounted on an unmanned surface vehicle. The query images to localise are listed in files named after the season-illumination condition of the images. For example, `autumn-dawn.txt` holds the list of the query images sampled under this condition. The set of query images is provided in the archive `img.tar.gz` (6.7 GB).

Camera Coordinate Systems

The models use the camera coordinate system typically used in computer vision. In this camera coordinate system, the camera is looking down the z-axis, with the x-axis pointing to the right and the y-axis pointing downwards. The coordinate (0, 0) corresponds to the top-left corner of an image.
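As a sketch, a 3D point expressed in this camera coordinate system projects to pixel coordinates as shown below. The focal lengths and principal point here are made-up values for illustration, not the dataset's actual intrinsics:

```python
# Computer-vision camera convention: x right, y down, z forward (optical
# axis); pixel (0, 0) is the top-left corner of the image.
def project(point_cam, fx, fy, cx, cy):
    """Project a point given in camera coordinates to pixel coordinates."""
    x, y, z = point_cam
    assert z > 0, "point must be in front of the camera (+z)"
    return (fx * x / z + cx, fy * y / z + cy)

# A point on the optical axis projects to the principal point (cx, cy).
u, v = project((0.0, 0.0, 5.0), fx=800.0, fy=800.0, cx=320.0, cy=240.0)
```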

For the evaluation of poses estimated on the Symphony Seasons dataset, you will need to provide pose estimates in this coordinate system.

Intrinsics

Please refer to the `intrinsics.txt` file for a description of how the camera intrinsics are specified. Note that non-linear distortion is present in the images; the `intrinsics.txt` file also contains information about this distortion.
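For illustration only, a common polynomial radial distortion model is sketched below. The actual distortion model and coefficient values for this dataset are those specified in `intrinsics.txt`; the coefficients `k1` and `k2` here are hypothetical:

```python
def distort(x, y, k1, k2):
    """Apply a two-coefficient radial distortion to normalised image
    coordinates (x, y). k1 and k2 are hypothetical example coefficients."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return (x * factor, y * factor)

# With zero coefficients, the mapping reduces to the identity.
xd, yd = distort(0.1, -0.2, 0.0, 0.0)
```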


Database List

The reference images are specified in the file `db.txt`. It holds one line per database image, specifying the image's relative path and the camera pose.

Here is an example:


140122/0002/0409.jpg 0.56572247 -0.46257553 0.43210496 0.52845745 42.87325061 0.79579649 -3.92683606 winter-overcast


The image name is `140122/0002/0409.jpg`, and the last entry is the season-illumination condition. The seven numbers after the image name are the camera pose: the first four are the components of a rotation quaternion, corresponding to a rotation R, and the last three are the camera center C. The rotation R corresponds to the first 3×3 subblock of the camera matrix (i.e. the rotation from world to camera coordinates), and the camera center C is related to the fourth column t of the camera matrix according to C = -R^T * t, where R^T denotes the transpose of R and t is the position of the world origin in camera coordinates.
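The example line above can be parsed, and the stored camera center converted to the translation t = -R * C, with a short sketch like the following (pure Python, no external dependencies):

```python
import math

def quat_to_R(qw, qx, qy, qz):
    """3x3 rotation matrix (world -> camera) from a unit quaternion."""
    n = math.sqrt(qw * qw + qx * qx + qy * qy + qz * qz)
    qw, qx, qy, qz = qw / n, qx / n, qy / n, qz / n
    return [
        [1 - 2 * (qy * qy + qz * qz), 2 * (qx * qy - qw * qz), 2 * (qx * qz + qw * qy)],
        [2 * (qx * qy + qw * qz), 1 - 2 * (qx * qx + qz * qz), 2 * (qy * qz - qw * qx)],
        [2 * (qx * qz - qw * qy), 2 * (qy * qz + qw * qx), 1 - 2 * (qx * qx + qy * qy)],
    ]

def parse_db_line(line):
    """Split a db.txt line into (name, quaternion, camera center, condition)."""
    fields = line.split()
    name, condition = fields[0], fields[8]
    q = tuple(float(f) for f in fields[1:5])  # qw qx qy qz
    C = tuple(float(f) for f in fields[5:8])  # camera center
    return name, q, C, condition

def center_to_translation(q, C):
    """Compute t = -R * C, the fourth column of the camera matrix."""
    R = quat_to_R(*q)
    return tuple(-sum(R[i][j] * C[j] for j in range(3)) for i in range(3))

line = ("140122/0002/0409.jpg 0.56572247 -0.46257553 0.43210496 0.52845745 "
        "42.87325061 0.79579649 -3.92683606 winter-overcast")
name, q, C, cond = parse_db_line(line)
t = center_to_translation(q, C)
```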

Query Images

A list of all query images for each condition is provided in the per-condition `.txt` files (e.g. `autumn-dawn.txt`). For the evaluation, a `.txt` file should be submitted containing a pose estimate for each query image, in the submission format specified below.

Please submit your results as a text file using the following file format. For each query image for which your method has estimated a pose, use a single line storing the result as `name.jpg qw qx qy qz tx ty tz`. Here, `name` corresponds to the file name of the image, `qw qx qy qz` represents the rotation from world to camera coordinates as a unit quaternion, and `tx ty tz` is the camera translation (not the camera position). An example of such a line is:


151214/0003/0772.jpg 0.58382477 -0.47737728 0.41569430 0.50838748 -46.02836611 0.07886557 -0.38916001

Please adhere to the following naming convention for the files that you submit:


Symphony_eval_[yourmethodname].txt

Here, `yourmethodname` is some name or identifier chosen by yourself. This name or identifier should be as unique as possible to avoid confusion with other methods. It will be used to display the results of your method.
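Putting the format and naming convention together, a submission file could be written with a sketch like the one below. The method name and pose estimate are placeholders for illustration, not real results:

```python
# Hypothetical estimates: image name -> (rotation quaternion, translation).
# The single entry below reuses the example line from this page.
estimates = {
    "151214/0003/0772.jpg": (
        (0.58382477, -0.47737728, 0.41569430, 0.50838748),  # qw qx qy qz
        (-46.02836611, 0.07886557, -0.38916001),            # tx ty tz
    ),
}

method = "mymethod"  # replace with your own unique identifier
filename = "Symphony_eval_{}.txt".format(method)
with open(filename, "w") as f:
    for name, (q, t) in estimates.items():
        values = " ".join("{:.8f}".format(v) for v in q + t)
        f.write("{} {}\n".format(name, values))
```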


**IMPORTANT**: The evaluation tools expect that the coordinate system in which the camera pose is expressed is the NVM coordinate system. If you are using the Bundler or .info coordinate system to estimate poses, you will need to convert poses to the NVM coordinate system before submission. A good sanity check to ensure that you are submitting poses in the correct format is to query with a reference image and then check whether the pose matches the reference pose defined in the NVM model after converting the stored camera position to a translation (as described above).

© 2019 – 2021 DREAM Lab – Georgia Institute of Technology Lorraine