Guidelines for the evaluation of complex multi-agent test scenarios

Authors: Ana Isabel Garcia Guerra, Teng Sung Shiuan

License: CC BY-NC-ND 4.0

Abstract: To support the testing of AVs, CETRAN has created the guidelines for the evaluation of complex multi-agent test scenarios presented in this report. They provide a clear, structured way to evaluate complexity elements based on the corresponding difficulties an AV might encounter in Singapore traffic. This study aims to understand the sources of complexity that traffic hazards pose to AVs by breaking down the difficulties across three AV capabilities: perception, situational awareness, and decision-making. The guidelines consist of a list of elements that can serve in future as selection criteria for evaluating scenario complexity in support of AV behaviour assessment. They are intended as a guide to understanding the sources of complexity for AVs and can be used to challenge the risk-management ability of autonomous vehicles, whether in a scenario-based test approach or in traffic situations faced during on-road trials. The report applies the guidelines to evaluate the complexity of a set of five real events that occurred on Singapore roads, drawn from the Resembler web tool, a database of real human accidents and incidents. The CETRAN team also designed four scenarios for creation in simulation, applying the complexity elements defined in this work, to illustrate the difficulties an ADS could experience with such scenarios.

Submitted to arXiv on 17 May 2024