Large-Scale Simulation and Validation With CARLA

February 8, 2022

The CARLA simulator is a popular open-source simulation tool for advanced driver-assistance systems (ADAS) and autonomous vehicles (AVs). It allows development teams to run virtual tests and evaluate their prediction, planning, and control stacks. Because CARLA is open source, it is flexible and readily available for anyone to use. But core simulation tools alone do not include all of the features necessary to successfully deploy safe AV systems. In addition to running individual virtual tests, AV teams need to scale their simulations to thousands per day to avoid regressions in their stack. They also need to validate the AV stack against a multitude of system requirements and safety protocols. This is where Applied’s products can help.
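For readers who have not used CARLA before, the snippet below is a minimal sketch of what a single local virtual test looks like through CARLA’s Python API. The host, port, and vehicle blueprint are assumptions about a default local installation, not part of any Applied product.

```python
# Minimal sketch of a local CARLA test using the CARLA Python API.
# Assumes a CARLA server is already running on localhost:2000 and that
# the vehicle blueprint below is available in your build.
import carla

client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()

# Spawn an ego vehicle at the first available spawn point.
blueprint = world.get_blueprint_library().filter("vehicle.tesla.model3")[0]
spawn_point = world.get_map().get_spawn_points()[0]
ego = world.spawn_actor(blueprint, spawn_point)

# Hand control to CARLA's built-in autopilot for a short smoke test, then
# clean up. A real AV stack would attach sensors and drive the vehicle
# through its own planning and control modules instead.
ego.set_autopilot(True)
world.wait_for_tick()
ego.destroy()
```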


CARLA users can leverage Applied’s continuous integration (CI) and verification & validation (V&V) tools without using Applied’s core simulator Simian. Applied tools integrate with other simulators like CARLA and complement their functionality, allowing AV teams to scale and validate their AV development efficiently and successfully.


This blog post will explore how AV teams can use Applied’s CI and V&V tools, Orbis and Basis, together with CARLA to run large-scale simulations and validate their AV stack. It will outline a workflow that allows teams to manage their entire simulation and validation life cycle (Figure 1).


Figure 1: Workflow for simulation and validation with CARLA, Orbis, and Basis

Managing Requirements and Scenarios

To successfully verify and validate an AV stack, development teams must ensure that it meets specific safety requirements. Teams may use simulation tools that run hundreds of scenarios to assess and verify system safety, but they also need a solution to analyze performance and trace results back to each safety requirement.


With Applied’s V&V tool Basis, AV development teams can create and execute scenarios in CARLA, analyze test coverage and performance, and trace results back to safety requirements in a unified workflow. Basis supports the OpenSCENARIO (OSC) V1.1 and V2.0 open standards for scenario editing and management. This way, teams can create and edit OSC scenarios at scale in Basis (Figures 2a and 2b) and then execute those scenarios in CARLA (Figure 2c).

Figure 2a: Basis graphical user interface (GUI) to create and edit OSC V1.1 scenarios
Figure 2b: Basis GUI to create and edit OSC V2.0 scenarios
Figure 2c: Basis GUI to execute scenarios in CARLA
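Basis drives scenario execution in CARLA through its own integration, so no scripting is required. Purely as an illustration of the underlying step, the sketch below shows one way an exported OpenSCENARIO file can be run against a CARLA server using CARLA’s ScenarioRunner; the scenario file path is a placeholder.

```python
# Conceptual sketch: executing an OpenSCENARIO V1.x file against a running
# CARLA server with CARLA's ScenarioRunner. Basis performs this step through
# its own CARLA integration; the command below only illustrates the
# underlying execution, and the file path is a placeholder.
import subprocess

scenario_file = "scenarios/cut_in.xosc"  # hypothetical scenario exported from Basis

subprocess.run(
    [
        "python",
        "scenario_runner.py",          # part of the ScenarioRunner checkout
        "--openscenario", scenario_file,
        "--host", "localhost",
        "--port", "2000",
    ],
    check=True,
)
```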

Running Large-Scale Simulations in the Cloud

AV teams that experiment with or develop new features might find it beneficial to run individual simulations locally. When entire teams use simulation to validate an AV stack, however, they need to run not just one-off tests but hundreds or thousands of simulations on a single merge request to avoid regressions. Teams also need a way to scale these simulations with high performance and low latency to preserve their developer velocity. This can only be achieved by running simulations in the cloud.


AV teams can use CARLA together with Applied’s CI tool Orbis to run large-scale simulations. Orbis provides test automation that links easily to any AV team’s CI system. This way, Orbis can kick off simulations automatically whenever a code change occurs or run them at recurring intervals.
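The exact hookup lives in Orbis and the team’s CI configuration, but conceptually a merge-request-triggered regression job follows the pattern sketched below. Every name here is illustrative, not an Orbis or CARLA API, and in practice Orbis distributes these runs in the cloud rather than executing them serially on one machine.

```python
# Hypothetical sketch of a CI-triggered regression suite: enumerate the
# relevant scenarios, execute each one against CARLA, and fail the CI check
# if anything regressed. The helper and scenario directory are placeholders.
from pathlib import Path
import subprocess
import sys

def execute_in_carla(scenario: Path) -> bool:
    """Run one scenario with ScenarioRunner (as in the earlier sketch);
    for this sketch, a non-zero exit code is treated as a failure."""
    result = subprocess.run(
        ["python", "scenario_runner.py", "--openscenario", str(scenario)],
        capture_output=True,
    )
    return result.returncode == 0

def run_regression_suite(scenario_dir: str = "scenarios") -> int:
    scenarios = sorted(Path(scenario_dir).glob("*.xosc"))
    failures = [s.name for s in scenarios if not execute_in_carla(s)]
    if failures:
        print(f"{len(failures)} scenario(s) regressed: {failures}")
        return 1  # non-zero exit marks the merge request's CI check as failed
    print(f"All {len(scenarios)} scenarios passed.")
    return 0

if __name__ == "__main__":
    sys.exit(run_regression_suite())
```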


Even if teams aren’t using Simian, Orbis still provides a highly scalable Kubernetes backend. Its frontend is optimized to make rich data available to the user immediately. This way, teams can run simulations in CARLA and then play back results, view logs and plots, and analyze observer rules directly in Orbis (Figure 3).
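Observer rules themselves are defined and evaluated inside Orbis. Purely as a conceptual sketch of the kind of check such a rule encodes, consider a minimum-time-gap rule evaluated over an exported simulation log; the record fields below are assumptions about the log format, not an Orbis schema.

```python
# Conceptual sketch of an observer-style check over an exported simulation
# log. Orbis defines and evaluates observer rules inside the product; this
# stand-alone function only illustrates the idea. The record fields
# (time, speed_mps, distance_to_lead_m) are assumed, not an Orbis schema.
from dataclasses import dataclass
from typing import Iterable, List

@dataclass
class TickRecord:
    time: float                # simulation time in seconds
    speed_mps: float           # ego speed in meters per second
    distance_to_lead_m: float  # gap to the lead vehicle in meters

def check_following_distance(log: Iterable[TickRecord],
                             min_time_gap_s: float = 1.0) -> List[float]:
    """Return the timestamps at which the ego violates the minimum time gap."""
    violations = []
    for rec in log:
        # Time gap = distance to the lead vehicle divided by ego speed.
        if rec.speed_mps > 0.1 and rec.distance_to_lead_m / rec.speed_mps < min_time_gap_s:
            violations.append(rec.time)
    return violations
```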

Analyzing Performance and Coverage

When development teams work on new features or improvements to their current AV stack, they need to understand its overall performance (i.e., how well the software is performing on simulation tests) and how that performance has improved or regressed compared to previous versions. To decide which simulations to run next, teams also need to measure their AV stack’s coverage (i.e., how much of the possible scenario space is already accounted for).

Figure 3: Orbis shows the CI results of a CARLA simulation, including a playback UI, logs, plots, and red markers to indicate problematic incidents

Based on CARLA simulation results, Basis allows AV teams to analyze their stack’s performance and coverage (Figure 4). Teams can apply the same evaluation rules across scenarios and extract important safety, comfort, and performance metrics for rigorous analysis. They can then combine these performance and coverage analytics with results from real-world drives and track testing to build a comprehensive safety case.

Figure 4: Basis GUI to analyze AV stack performance based on CARLA simulation results
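Basis computes and visualizes these analytics natively. As a rough illustration of the two quantities described above, assuming each simulation result is tagged with its scenario parameters and a pass/fail verdict, performance and coverage can be thought of as follows.

```python
# Rough illustration of performance and coverage, assuming each simulation
# result carries its scenario parameters and a pass/fail verdict. Basis
# computes and visualizes these analytics itself; this is only a sketch.
from itertools import product
from typing import Dict, List

def pass_rate(results: List[Dict]) -> float:
    """Performance: fraction of executed simulations that passed."""
    return sum(r["passed"] for r in results) / len(results) if results else 0.0

def coverage(results: List[Dict], parameter_space: Dict[str, List]) -> float:
    """Coverage: fraction of the parameter grid exercised by at least one run."""
    executed = {tuple(r["params"][k] for k in parameter_space) for r in results}
    grid = set(product(*parameter_space.values()))
    return len(executed & grid) / len(grid)

# Example: a cut-in scenario swept over lead-vehicle speed and cut-in distance.
space = {"lead_speed_mps": [10, 15, 20], "cut_in_distance_m": [10, 20, 30]}
runs = [
    {"params": {"lead_speed_mps": 10, "cut_in_distance_m": 20}, "passed": True},
    {"params": {"lead_speed_mps": 20, "cut_in_distance_m": 10}, "passed": False},
]
print(pass_rate(runs), coverage(runs, space))  # -> 0.5 0.222...
```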

Conclusion

The proper workflows and tools can help AV teams successfully run large-scale simulations and validate their entire stack. Orbis and Basis support these workflows while integrating easily with the CARLA simulator. This way, teams can save hundreds of engineering hours otherwise spent catching regressions or waiting for simulations to finish running.


Schedule a product demo with our engineering team if you use CARLA or another simulation tool and would like to learn more about integrating Orbis or Basis into your workflow.