With the rapid advancement of autonomous driving technology, vehicle-to-everything (V2X) communication has emerged as a key enabler for enhancing driving safety and efficiency. By allowing ego-vehicles to exchange real-time information with surrounding infrastructure and other road users, V2X communication extends perception capabilities beyond the line of sight and mitigates the limitations of onboard sensors. However, integrating multi-source sensor data from both ego-vehicles and infrastructure in a practical and efficient manner remains a challenging task, especially under constrained communication bandwidth.
This challenge aims to tackle the problem of end-to-end autonomous driving with V2X cooperation by leveraging both ego-vehicle and infrastructure sensor data. Specifically, we formulate the problem as a planning-centric optimization, where multi-view sensor inputs are fused to generate robust planning results. By addressing the complexities of sensor fusion and communication constraints, this challenge seeks to advance the state of the art in cooperative autonomous driving.

Challenge Pre-Registration Link: Google Form

Table of Contents

Track1: Temporal Perception


Task Overview

The Cooperative Temporal Perception Challenge focuses on advancing V2X-enabled detection and multi-object tracking. Built on the open-source UniV2X framework, this challenge provides participants with pre-recorded multi-view sensor data from both ego-vehicles and infrastructure. The primary objective is to leverage V2X communication to enhance temporal perception under bandwidth-constrained conditions. Participants are required to process the provided data to generate high-quality detection and tracking results.
Task Input: vehicle data (front-view image), infrastructure data (image), command, ego states, and calibration files.
Task Output: 3D bounding box information and tracking IDs.
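
To make the expected output concrete, here is a minimal, hypothetical sketch (in Python) of one per-frame prediction entry. The field names below are illustrative assumptions rather than the official submission schema (see the Submission Instruction section for the authoritative format); the point is that each predicted object carries 3D box information, the merged class "Car", a confidence score, and a tracking ID that stays consistent across frames.

    # Hypothetical per-object prediction entry; field names are assumptions,
    # not the official submission schema.
    example_prediction = {
        "sample_token": "000123",            # identifier of the evaluated frame (assumed)
        "translation": [12.3, -4.5, 0.8],    # 3D box center (x, y, z), meters
        "size": [4.6, 1.9, 1.6],             # box length, width, height, meters
        "rotation": [0.99, 0.0, 0.0, 0.12],  # box orientation as a quaternion
        "detection_name": "Car",             # merged class used for evaluation
        "detection_score": 0.87,             # confidence used for mAP
        "tracking_id": 42,                   # consistent ID across frames for AMOTA
    }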

Evaluation

Participants' solutions will be assessed based on two common metrics: mAP and AMOTA.
The final evaluation score for this track will be the weighted average of these two indicators, with weights of 0.5 and 0.5, respectively. (Since the values of these two indicators are relatively close, normalization will not be applied.)
Notes: We only evaluate the AP of the merged class "Car". (In the UniV2X repo's data processing pipeline, the original classes "Car", "Truck", "Van", and "Bus" are assigned to the merged class "Car".)
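
As a quick illustration of the scoring rule above, a minimal Python sketch (assuming mAP and AMOTA are already computed on the merged "Car" class and lie in [0, 1]; the example numbers are made up):

    def track1_score(mAP: float, AMOTA: float) -> float:
        """Final Track 1 score: equal-weighted average of mAP and AMOTA (no normalization)."""
        return 0.5 * mAP + 0.5 * AMOTA

    # Made-up example: mAP = 0.42, AMOTA = 0.38 -> final score = 0.40
    print(track1_score(0.42, 0.38))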

Baseline and Data Illustration

UniV2X provides a baseline for cooperative temporal perception in autonomous driving systems.
The code is open-sourced on this page.
The dataset is based on V2X-Seq-SPD; you can find more details here.

Track2: End-to-End AD


Task Overview

The Cooperative End-to-End Autonomous Driving Challenge aims to advance V2X-enabled end-to-end autonomous driving. Built on the open-source UniV2X framework, this challenge allows participants to develop and test end-to-end autonomous driving agents through V2X cooperation. The goal is to optimize driving policies that seamlessly integrate ego-vehicle and infrastructure sensor data, enabling adaptive planning in complex urban environments.
Task Input: vehicle data (front-view image), infrastructure data (image), command, current ego states, and calibration files.
Task Output: planning results (waypoints for the next 5 seconds).
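
For intuition, the planning output can be viewed as a short future trajectory in the ego frame. The sketch below assumes waypoints sampled every 0.5 s over the 5-second horizon (10 points); the sampling rate and coordinate convention are assumptions, so follow the official submission instructions for the required format.

    import numpy as np

    # Hypothetical planning output: 10 future (x, y) waypoints in the ego frame,
    # one every 0.5 s over a 5 s horizon (sampling rate is an assumption).
    planned_waypoints = np.zeros((10, 2), dtype=np.float32)

    # Under this assumption, waypoint k (0-based) corresponds to t = 0.5 * (k + 1),
    # so the points used for evaluation at 2.5 s, 3.5 s, and 4.5 s are:
    eval_points = planned_waypoints[[4, 6, 8]]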

Evaluation

Participants' solutions will be assessed based on three metrics: L2 Error, Collision Rate, and Off-road Rate. The final value for each metric is the average of the values measured at 2.5s, 3.5s, and 4.5s. The final evaluation for this track will be the weighted average of these three normalized metrics, with weights of 0.5, 0.25, and 0.25, respectively.
Normalization Notes: To ensure fair evaluation, all three metrics (L2 Error, Collision Rate, and Off-road Rate) will be normalized using min-max normalization. The reference values for each metric are: L2 Error: 3.5m; Collision Rate: 2%; Off-road Rate: 2.5%. The reference improvement ranges for each metric are: L2 Error: 1.0m; Collision Rate: 1.5%; Off-road Rate: 2.5%.

For a given team's result \(x\) on each metric, the normalized score is computed as follows:

\[ x' = \min\left(\frac{x_{\text{ref}} - x}{x_{\text{range}}}, 1.0\right) \]

where \(x\) is the team's result for the given metric, \(x_{\text{ref}}\) is the reference value for that metric, \(x_{\text{range}}\) is the reference improvement range for that metric, and \(x'\) is the normalized score, with higher values indicating better performance.
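
Putting the normalization and weighting together, here is a minimal Python sketch of how a final Track 2 score could be computed from the three metric values (each already averaged over 2.5s, 3.5s, and 4.5s); the reference values, improvement ranges, and weights are taken from the text above, and the example numbers are made up.

    # Reference value and improvement range per metric (from the normalization notes).
    REFERENCES = {
        "l2_error":       (3.5, 1.0),  # meters
        "collision_rate": (2.0, 1.5),  # percent
        "offroad_rate":   (2.5, 2.5),  # percent
    }
    WEIGHTS = {"l2_error": 0.5, "collision_rate": 0.25, "offroad_rate": 0.25}

    def normalize(metric: str, value: float) -> float:
        # x' = min((x_ref - x) / x_range, 1.0); lower raw values give higher scores.
        x_ref, x_range = REFERENCES[metric]
        return min((x_ref - value) / x_range, 1.0)

    def track2_score(l2_error: float, collision_rate: float, offroad_rate: float) -> float:
        # Weighted average of the three normalized metrics (weights 0.5 / 0.25 / 0.25).
        values = {"l2_error": l2_error, "collision_rate": collision_rate, "offroad_rate": offroad_rate}
        return sum(WEIGHTS[m] * normalize(m, values[m]) for m in WEIGHTS)

    # Made-up example: L2 error 2.8 m, collision rate 1.4%, off-road rate 1.5%
    # -> 0.5 * 0.7 + 0.25 * 0.4 + 0.25 * 0.4 = 0.55
    print(track2_score(2.8, 1.4, 1.5))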

Baseline and Codebase

UniV2X is the first cooperative end-to-end autonomous driving framework that seamlessly integrates all key driving modules across diverse driving perspectives into a unified network.
The code is open-sourced on this page.
The dataset is based on V2X-Seq-SPD; you can find more details here.

Update on Track 2 Submission

Based on community feedback and responses collected from the survey, and to ensure fairness and inclusiveness, final rankings and awards will be determined based on Track 1 results only, without considering Track 2 results. Track 2 submissions are optional; they will be displayed on the leaderboard but are not required.

Submission Instruction


Input/Output Requirements, Illustration Document Requirements, and the Submission Portal are shown on this page.

Challenge Timeline


  • Release of the test dataset: March 15, 2025 (superseded by the updated dates below)
  • Submission Instruction: March 15, 2025 (superseded by the updated dates below)
  • ✓ Submission Instruction: 00:00 (GMT+8), April 1, 2025
  • ✓ Submission open: 00:00 (GMT+8), April 10, 2025
  • ✓ Release of Test Dataset: 18:00 (GMT+8), May 17, 2025
  • Submission deadline: 23:59 (GMT+8), May 25, 2025 (extended; see below)
  • Submission deadline (extended): 23:59 (GMT+8), June 6, 2025
    Submissions received by this time will be considered for final ranking and awards. The leaderboard will remain publicly accessible and actively maintained after the deadline.
  • Decision to authors: 20:00 (GMT+8), June 8, 2025

Award Setting


  • Outstanding Champion, USD $1500
  • Honorable Runner-up, USD $1000
  • Exceptional Merit Award, USD $500

Organizing Committee



Contact


  • Challenge Process: Ruiyang Hao, THU, email: hry20@tsinghua.org.cn
  • Track1 Support: Jiaru Zhong, BIT, email: zhongjiaru@bit.edu.cn
  • Track2 Support: Haibao Yu, HKU, email: yuhaibao94@gmail.com