How Good Design Can Prevent Autonomous Vehicle Accidents Before They Happen

February 16, 2022

As Product Designers at Applied Intuition, it’s our role to understand the deeply complex workflows of our users—software engineers who develop autonomous vehicles (AVs). One of our users’ core challenges is ensuring that their vehicles will operate safely as new capabilities are added. This is why we built Basis—a tool that helps our users discover errors in their AV stack before they cause real-world problems. 

In this blog, we’ll test a demo AV stack on its ability to safely make an unprotected left turn. We will demonstrate the same step-by-step process our users follow in Basis when testing their AVs for safety. 

Along the way, you’ll find that designers at Applied have to balance the inherent complexity of the AV development process with simple user interface (UI) controls that span both two-dimensional (2D) and three-dimensional (3D) spaces.

Step 1: Create the Unprotected Left Turn Scenario

The first step in testing our demo AV’s ability to make an unprotected left turn is to create a scenario, a digital environment where we can place different objects and vehicles into the scene and specify their locations and behaviors (e.g., speed or lane changes) to test how our demo AV responds. The vehicle we are testing—in this case, our demo AV—is called the “ego” or “ego vehicle.”

To test our ego, we create a scenario with four oncoming vehicles in two separate lanes, each passing through the intersection at a different time (Figure 1).

Figure 1: An aerial view of the scenario shows the layout of vehicles.

For an added layer of complexity, we place two trucks stopped in the turn lane to partially obscure the ego’s view of the oncoming traffic (Figure 2).

Figure 2: A view from above the ego shows that two trucks obstruct the oncoming traffic.
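
Purely as an illustration of what a scenario boils down to, here is a hypothetical Python sketch of the setup in Figures 1 and 2. The names and fields are invented for this post; Basis exposes scenario creation through its visual editor, not this format:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """One vehicle in the scene, with an initial placement and behavior."""
    kind: str               # e.g., "car" or "truck"
    lane: str               # lane identifier on the map
    start_offset_m: float   # distance from the intersection at t=0
    speed_mps: float        # initial speed; 0.0 means stopped

@dataclass
class Scenario:
    map_name: str
    agents: list = field(default_factory=list)

# Four oncoming cars in two lanes, staggered so each crosses the
# intersection at a different time, plus two stopped trucks that
# partially occlude the ego's view of oncoming traffic.
scenario = Scenario(map_name="four_way_intersection")
for i, lane in enumerate(["oncoming_left", "oncoming_right"] * 2):
    scenario.agents.append(
        Agent(kind="car", lane=lane, start_offset_m=40.0 + 15.0 * i, speed_mps=12.0)
    )
for offset in (10.0, 20.0):
    scenario.agents.append(
        Agent(kind="truck", lane="turn_lane", start_offset_m=offset, speed_mps=0.0)
    )
```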

As a simple measure of safety, we decide that the ego fails the test if it causes a collision; otherwise, it passes. When we simulate the scenario we created, the ego doesn’t cause a collision. It thus passes the test, although it gets a little too close to one of the oncoming vehicles (Figure 3). More on that later.

Figure 3: Close call! The ego chooses to “shoot the gap” between oncoming traffic.

While the ego doesn’t cause a collision in this specific instance, what would happen in a slightly different variation of this scenario, where the oncoming traffic:

  • Traveled at faster speeds?
  • Was located farther away from or closer to the ego?
  • Accelerated before entering the intersection?

It would be impossible for one person to anticipate and create all the edge cases required to be confident that the ego can always make an unprotected left turn safely. So, instead of requiring users to create all these variations by hand, we’ve built a way for our users to automatically generate thousands of variations of a scenario in Basis (Figure 4).

Figure 4: Three separate tests (indicated by a white, red, and purple ego) overlaid at once. The ego’s behavior differs based on the scenario variation.
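
The three bullet points above correspond directly to scenario parameters. To see why hand-authoring doesn’t scale, consider a naive grid sweep over just those parameters (an illustrative sketch, not how Basis generates variations):

```python
import itertools

# Illustrative parameter ranges; in practice the user defines these.
speeds_mps = [8.0, 10.0, 12.0, 14.0, 16.0]   # oncoming traffic speed
gaps_m = [20.0, 30.0, 40.0, 50.0]            # distance between oncoming cars
accels_mps2 = [0.0, 1.0, 2.0]                # acceleration before the intersection

variations = [
    {"speed_mps": s, "gap_m": g, "accel_mps2": a}
    for s, g, a in itertools.product(speeds_mps, gaps_m, accels_mps2)
]
print(len(variations))  # 5 * 4 * 3 = 60 tests from just three coarse parameters
```

Even this toy grid yields 60 tests; with more parameters and finer steps, the count quickly reaches the thousands.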

Design insight: For our users who are earlier in their AV development, we found it critical to add the ability to precisely drag and drop objects into a scenario and define their behaviors. For users who work on more mature AV programs, we needed a more scalable solution. At Applied, we often provide solutions that satisfy users on both ends of the spectrum.

Step 2: Generate and Test Thousands of “Left Turn” Variations

Of course, not all of the thousands of automatically generated variations yield meaningful insights into the ego’s behavior. To help our users save valuable time and compute cost, our engineering team has developed algorithms that intelligently tweak scenario parameters to generate the most effective set of variations based on what the user wants to learn about their AV stack.

For our users, an explanation of how these algorithms work is less important than understanding the types of variations that will be generated as a result of their choice. So, instead of presenting users with a selection of algorithms, we present them with a choice for their desired output. From a dropdown menu, users can select whether they want to optimize tests for:

  • Finding failures (testing any situation where the ego fails, e.g., by causing a collision)
  • Achieving coverage (testing a diversity of situations)
  • Measuring sensitivity (testing how the ego will respond to a specific parameter, e.g., the speed of oncoming traffic)

To test our unprotected left turn scenario, we choose to generate variations that optimize for coverage.
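
We won’t describe the algorithms themselves here, but the difference between the three goals is easy to picture with a toy sketch of how each one might sample a single parameter (purely illustrative, not Basis’s implementation):

```python
import random

def sample_for_coverage(low, high, n):
    """Spread samples evenly across the whole range."""
    step = (high - low) / (n - 1)
    return [low + i * step for i in range(n)]

def sample_for_failures(low, high, n, failing_values):
    """Concentrate new samples around previously failing values."""
    if not failing_values:
        return [random.uniform(low, high) for _ in range(n)]
    center = sum(failing_values) / len(failing_values)
    sigma = (high - low) * 0.05
    return [min(high, max(low, random.gauss(center, sigma))) for _ in range(n)]

def sample_for_sensitivity(low, high, n):
    """Fine, uniform steps along one parameter, typically with all
    other parameters held fixed, so its effect can be isolated."""
    return sample_for_coverage(low, high, n)

# Coverage mode, as chosen for the left-turn scenario:
speeds = sample_for_coverage(8.0, 16.0, 9)  # [8.0, 9.0, ..., 16.0]
```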

Design insight: Because our users are highly technical, they often want to understand how the features they’re using actually work. However, an in-product explanation is not always helpful, given the nuance an accurate one requires. Oftentimes, we need to find the simplest possible explanation for our in-product UI and link to documentation that gives users a more detailed description, along with the ability to ask our team for clarification.

Step 3: Review Results and Identify Patterns of Failure

Once users have generated their scenario variations and the tests have finished running, they can view the results. The most basic way we’ve designed for users to review these results is a simple list, sorted by “pass” and “fail” outcomes. The list below shows the test results of our unprotected left turn scenario (Figure 5).

Figure 5: In this list of test results, users can click into each row to watch a visualization of the corresponding simulation test.

Lists are great for simplicity, but they aren’t as useful for identifying trends and patterns. This is why we’ve designed other interactive data visualizations. The default visualization is a heatmap of pass and fail results, where each cell corresponds to a specific test result (Figure 6).

Figure 6: In this heatmap visualization, green indicates that the ego passed the test (i.e., that it followed all the predefined rules, such as avoiding a collision). Red indicates that it failed.

We select one scenario parameter to map along each axis. Figure 6 shows the speed of oncoming traffic on the x-axis and the distance between each car on the y-axis. This uncovers a cluster of scenarios where the ego is failing (indicated by the red cells in the top right corner). It appears that the ego can’t safely make the unprotected left turn when the oncoming traffic is traveling at faster speeds and the oncoming vehicles are closer together. By clicking into the cells of the failing scenario variations, we can watch the ego causing a collision (Figure 7).

Figure 7: The ego causes a collision when the oncoming vehicles drive faster and closer together.
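
Given results keyed by the two chosen parameters, producing this kind of heatmap is mostly bookkeeping. A minimal sketch with numpy and matplotlib, using fabricated pass/fail data shaped like the cluster in Figure 6:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap

speeds = np.linspace(8.0, 16.0, 9)   # x-axis: oncoming traffic speed (m/s)
gaps = np.linspace(20.0, 50.0, 7)    # y-axis: distance between cars (m)

# results[(speed, gap)] is True for a pass. We fabricate the pattern
# described above: fast oncoming traffic with small gaps fails.
results = {(s, g): not (s > 13.0 and g < 30.0) for s in speeds for g in gaps}

grid = np.array([[results[(s, g)] for s in speeds] for g in gaps], dtype=float)
plt.pcolormesh(speeds, gaps, grid, cmap=ListedColormap(["red", "green"]),
               vmin=0, vmax=1, shading="nearest")
plt.xlabel("Oncoming traffic speed (m/s)")
plt.ylabel("Distance between cars (m)")
plt.show()
```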

Design insight: To get a holistic understanding of their AV behavior, our users often need to view metrics of discrete number values paired with a 3D visualization. Viewing one without the other is insufficient. Our UI ensures that while looking at number values, our users can easily view the corresponding visualization and vice versa.

Step 4: Dive Deep Into Specific Parameters

Given that a collision occurs in some of the scenario variations above, we also want to investigate the “near misses” that weren’t categorized as explicit failures. As mentioned at the beginning of this blog, the fact that the ego avoids a collision doesn’t mean it is turning safely.

Instead of visualizing the ego’s pass and fail rates, we explore the same parameters through a different metric: how close the ego gets to other vehicles over the duration of the scenario. In the heatmap below (Figure 8), purple cells indicate that the ego came closer than two meters (approx. 6.5 feet) to another vehicle, while yellow cells indicate that the ego kept a distance of more than two meters.

Figure 8: This heatmap visualization indicates that the ego often ends up too close to other vehicles.
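
The metric behind this view is simply the minimum separation between the ego and any other vehicle across the whole simulation. A hypothetical sketch of that computation from logged positions (the function and track format are invented for illustration):

```python
import math

SAFE_DISTANCE_M = 2.0  # the threshold separating yellow from purple cells

def min_separation_m(ego_track, other_tracks):
    """Smallest ego-to-vehicle distance over the scenario.

    ego_track: list of (x, y) ego positions, one per timestep.
    other_tracks: list of tracks with the same timestep alignment.
    """
    return min(
        math.dist(ego_pos, track[t])
        for t, ego_pos in enumerate(ego_track)
        for track in other_tracks
    )

def cell_color(ego_track, other_tracks):
    sep = min_separation_m(ego_track, other_tracks)
    return "purple" if sep < SAFE_DISTANCE_M else "yellow"
```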

As the sea of purple in the top right of Figure 8 indicates, there is a large number of scenario variations where the ego gets uncomfortably close to the oncoming traffic, even though not all of these instances result in a collision. Again, the most severe cases happen when the oncoming vehicles are traveling faster (x-axis maximum) with shorter distances between them (y-axis maximum). By clicking into one of the purple cells, we discover that even in cases where the ego waits for traffic to clear, it nudges too far into the oncoming lane before stopping (Figure 9).

Figure 9: Even in situations where the ego doesn’t cause a collision and stops before making an unsafe turn, it nudges too far into the oncoming lane. In this case, the ego is less than one meter away from an oncoming vehicle.

To give users an even more nuanced view of the parameter space, we’ve designed an easy way for them to toggle between the 2D heatmap visualization (Figure 8) and a 3D visualization (Figure 10). In 3D space, flat planes indicate that regardless of how circumstances change, the ego responds in the same, consistent manner. Slopes, on the other hand, indicate that the ego is sensitive to changes in parameters like other vehicles’ speed and distance.

Figure 10: This fully interactive 3D visualization indicates whether the ego is sensitive to changes in other vehicles’ speed and distance.

For our unprotected left turn scenario, a safe, reliable AV stack would result in a flat, yellow plane. This would indicate that regardless of the speed and gaps of oncoming traffic, the ego is always able to maintain a safe distance of four meters while turning. In Figure 10, we can see that this is not the case. Instead, there’s a steep descent into a purple valley, which highlights the threshold at which the ego starts to make particularly unsafe choices. The ego passes this threshold when oncoming traffic travels above 15 meters per second and the gap between oncoming vehicles is smaller than 30 meters (Figure 11).

Figure 11: Users can hover over areas of the 3D visualization to see specific values.
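
Conceptually, the 3D view renders the same grid of minimum-distance values as a surface, so plateaus and slopes stand out. A rough matplotlib sketch, with values fabricated to mimic the valley in Figure 10:

```python
import numpy as np
import matplotlib.pyplot as plt

speeds = np.linspace(8.0, 18.0, 30)   # oncoming traffic speed (m/s)
gaps = np.linspace(15.0, 50.0, 30)    # gap between oncoming vehicles (m)
S, G = np.meshgrid(speeds, gaps)

# Fabricated surface: a flat, safe plateau around four meters that
# drops into a valley once speed exceeds 15 m/s and the gap shrinks
# below 30 m, matching the thresholds described above.
Z = 4.0 - 3.5 / (1.0 + np.exp(-(S - 15.0))) / (1.0 + np.exp(G - 30.0))

ax = plt.figure().add_subplot(projection="3d")
ax.plot_surface(S, G, Z, cmap="viridis")
ax.set_xlabel("Speed (m/s)")
ax.set_ylabel("Gap (m)")
ax.set_zlabel("Min distance to traffic (m)")
plt.show()
```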

Design insight: At Applied, we aim to strike a balance between making common tasks as easy as possible for our users (e.g., through automation) and maintaining maximum flexibility to support multiple types of approaches and development workflows.

Step 5: Form a Hypothesis and Issue a Fix

Our simulation test results point to a broader issue in the ego’s decision-making process beyond just making unprotected left turns. In general, the ego inches forward inconsistently and commits to shooting the gap without adequately accounting for the speed of oncoming traffic.

Once our users have identified a pattern of behavior and created a hypothesis as to why the ego is behaving a certain way, they can make changes to their AV software and re-test it using the same set of scenario variations. By repeating this type of process multiple times using different optimization methods, scenarios, and software versions, our users are able to methodically increase the safety of their AVs.
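
Because the variation set stays fixed between runs, comparing software versions reduces to simple bookkeeping. A hypothetical sketch (the run_tests helper and version labels are invented for illustration):

```python
def compare_versions(variations, run_tests):
    """Run the identical variation set against two stack versions.

    run_tests(version, variations) -> dict mapping a variation id
    to True (pass) or False (fail).
    """
    before = run_tests("stack-v1", variations)
    after = run_tests("stack-v2", variations)
    fixed = [v for v in before if not before[v] and after[v]]
    regressed = [v for v in before if before[v] and not after[v]]
    return fixed, regressed
```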

Product Design at Applied Intuition

While this blog post describes only one example, it highlights the broader workflow that our users leverage to identify critical issues in their own AV stacks. For us as designers, it’s an immensely gratifying experience to deliver solutions that make a real impact for real users—especially in a space as complex and important as AV development.

If the challenges of designing across 2D and 3D space, diving into the complexity of autonomous systems, and working closely with motivated users sound exciting to you, our design team is hiring. Apply for our Product Designer role today!