With the rapid advancement of autonomous driving technology, vehicle-to-everything (V2X) communication has emerged as a key enabler for enhancing driving safety and efficiency. By allowing ego-vehicles to exchange real-time information with surrounding infrastructure and other road users, V2X communication extends perception capabilities beyond the line of sight and mitigates the limitations of onboard sensors. However, integrating multi-source sensor data from both ego-vehicles and infrastructure in a practical and efficient manner remains a challenging task, especially under constrained communication bandwidth.
This challenge aims to tackle the problem of end-to-end autonomous driving with V2X cooperation by leveraging both ego-vehicle and infrastructure sensor data. Specifically, we formulate the problem as a planning-centric optimization, where multi-view sensor inputs are fused to generate robust planning results. By addressing the complexities of sensor fusion and communication constraints, this challenge seeks to advance the state of the art in cooperative autonomous driving.

Challenge Pre-Registration Link: Google Form


Track1: Temporal Perception


Task Overview

The Cooperative Temporal Perception Challenge focuses on advancing V2X-enabled detection and multi-object tracking. Built on the open-source UniV2X framework, this challenge provides participants with pre-recorded multi-view sensor data from both ego-vehicles and infrastructure. The primary objective is to leverage V2X communication to enhance temporal perception under bandwidth-constrained conditions. Participants are required to process the provided data to generate high-quality detection and tracking results.
Task Input: Vehicle data (sequential images), roadside data (sequential images), and the corresponding timestamps and calibration files.
Task Output: 3D position, heading angle, and tracking ID for each detected object.
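For illustration, a minimal sketch of one possible per-frame output record is given below. The official submission schema is defined by the UniV2X evaluation tooling, so the field names and units here are assumptions, not the required format.

    from dataclasses import dataclass

    @dataclass
    class TrackedObject:
        """Illustrative Track 1 output record (assumed schema, not the official format)."""
        frame_token: str   # identifier of the ego-vehicle frame being annotated
        x: float           # 3D position (meters)
        y: float
        z: float
        heading: float     # heading angle (yaw) in radians
        track_id: int      # identity kept consistent across the image sequence

    # The same physical object in two consecutive frames shares its track_id,
    # which is what the tracking metric (AMOTA) relies on.
    results = [
        TrackedObject("frame_0001", 12.3, -4.1, 0.8, 1.57, track_id=7),
        TrackedObject("frame_0002", 12.9, -4.0, 0.8, 1.55, track_id=7),
    ]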

Evaluation

Participants' solutions will be assessed based on three metrics: mAP, AMOTA, and Transmission Cost. The final evaluation score for this track will be the weighted average of these three indicators after normalization, with weights of 0.4, 0.4, and 0.2, respectively.
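The sketch below illustrates the weighted combination. It assumes each metric has already been normalized to [0, 1] and oriented so that higher is better (i.e., transmission cost is inverted beforehand); the official normalization procedure may differ.

    def track1_score(map_norm: float, amota_norm: float, cost_norm: float) -> float:
        """Weighted average of normalized Track 1 metrics (illustrative).

        Assumes each input is already normalized to [0, 1] and oriented so that
        higher is better; in particular, transmission cost must be inverted first.
        """
        return 0.4 * map_norm + 0.4 * amota_norm + 0.2 * cost_norm

    # Example: track1_score(0.62, 0.55, 0.80) == 0.628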

Baseline and Data Illustration

UniV2X provides a baseline for cooperative temporal perception in autonomous driving systems.
The code is open-sourced on this page.
The dataset is based on V2X-Seq-SPD; more details are available here.

Track2: End-to-End AD


Task Overview

The Cooperative End-to-End Autonomous Driving Challenge aims to advance V2X-enabled end-to-end autonomous driving. Built on the open-source UniV2X framework, this challenge allows participants to develop and test end-to-end autonomous driving agents through V2X cooperation. The goal is to optimize driving policies that seamlessly integrate ego-vehicle and infrastructure sensor data, enabling adaptive planning in complex urban environments.
Task Input: Vehicle data (sequential images), roadside data (sequential images), and the corresponding timestamps and calibration files.
Task Output: planning results (ego-vehicle waypoints for the next 5 seconds).
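A minimal sketch of the planning output for one sample is shown below. The waypoint spacing (0.5 s, i.e., 10 waypoints over the 5-second horizon) and the ego-frame (x, y) convention are assumptions; the authoritative format follows the UniV2X codebase.

    import numpy as np

    # Illustrative Track 2 planning output for a single ego-vehicle sample.
    # Assumption: (x, y) waypoints in the ego frame, sampled every 0.5 s over
    # the future 5-second horizon; the real sampling rate and coordinate
    # convention are defined by the UniV2X evaluation tooling.
    HORIZON_S = 5.0
    STEP_S = 0.5
    timestamps = np.arange(STEP_S, HORIZON_S + 1e-6, STEP_S)   # 0.5 s ... 5.0 s

    planned_waypoints = np.array([
        [4.0 * t, 0.1 * t ** 2]    # toy trajectory: ~4 m/s forward, gentle drift
        for t in timestamps
    ])                             # shape (10, 2): one (x, y) per future timestep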

Evaluation

Participants' solutions will be assessed based on four metrics: L2 Error, Collision Rate, Off-road Rate, and Transmission Cost. The final evaluation score for this track will be the weighted average of these four indicators after normalization, with weights of 0.3, 0.3, 0.2, and 0.2, respectively.
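As with Track 1, the sketch below shows the weighted combination. All four raw metrics are lower-is-better, so the sketch assumes they have already been normalized to [0, 1] and inverted so that higher is better, which may not match the official normalization.

    def track2_score(l2_norm: float, collision_norm: float,
                     offroad_norm: float, cost_norm: float) -> float:
        """Weighted average of normalized Track 2 metrics (illustrative).

        Each input is assumed to be normalized to [0, 1] and already inverted,
        since raw L2 error, collision rate, off-road rate, and transmission
        cost are all lower-is-better.
        """
        return (0.3 * l2_norm + 0.3 * collision_norm
                + 0.2 * offroad_norm + 0.2 * cost_norm)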

Baseline and Codebase

UniV2X is the first cooperative end-to-end autonomous driving framework that seamlessly integrates all key driving modules across diverse driving perspectives into a unified network.
The code is open-sourced on this page.
The dataset is based on V2X-Seq-SPD; more details are available here.

Challenge Timeline


  • Submission instructions: 00:00 (GMT+8), April 1, 2025
  • Submission open: 00:00 (GMT+8), April 10, 2025
  • Release of test dataset: 00:00 (GMT+8), May 17, 2025
  • Submission deadline: 23:59 (GMT+8), May 24, 2025
  • Decision to authors: 20:00 (GMT+8), June 4, 2025

Award Setting


  • Outstanding Champion, USD 1,500
  • Honorable Runner-up, USD 1,000
  • Exceptional Merit Award, USD 500

Organizing Committee



Contact


  • Challenge Process: Ruiyang Hao, THU, email: hry20@tsinghua.org.cn
  • Track1 Support: Jiaru Zhong, BIT, email: zhongjiaru@bit.edu.cn
  • Track2 Support: Haibao Yu, HKU, email: yuhaibao94@gmail.com