
The pitfalls of mixing formal and simulation: Examples of the trouble


Driven by the need to objectively measure the progress of their verification efforts and the contributions of different verification techniques, IC designers have adopted coverage as a metric. However, what exactly is being measured differs depending on the underlying verification technology in use.

In Part 1 we outlined the textbook definitions of simulation and formal coverage, and then briefly introduced the risks inherent in arbitrarily merging these metrics together. In this article, we will use progressively more complex RTL device under test (DUT) code examples to illustrate the trouble that can occur when comparing the results from simulation and formal side-by-side.

Example 1: Basic illustration of simulation vs. formal code coverage

Recall that code coverage is simply the percentage of statements in a body of RTL code that have been executed during a test run. While it's important that the testbench can exercise all of the RTL code (dead code suggests a bug in the DUT), the design can still be missing important functionality, and/or the paths to key functions can violate the specification.
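As a simple illustration, consider the following hypothetical Verilog fragment (not from the article's DUT). If the tests only ever drive "en" high, the else-branch assignment is never executed and shows up as a statement coverage hole; yet nothing in the metric tells us whether the covered statements are functionally correct.

module cov_demo (
  input  wire clk,
  input  wire en,
  input  wire d,
  output reg  q
);
  always @(posedge clk) begin
    if (en)
      q <= d;    // executed whenever en=1: counted as covered
    else
      q <= 1'b0; // never executed if tests only drive en=1: a coverage hole
  end
endmodule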

The following simple design will be used to illustrate the different code coverage results that simulation and formal analyses deliver when run on the same DUT. This design has a single clock and reset. There is a 2-bit input: “sel”. There are three single bit outputs: “A”, “B”, and “C”; and each output can be active for only one clock cycle. The “sel” signal specifies which outputs to drive per the following truth table:

Table 1 This is how the “sel” signal specifies the outputs. Source: Siemens EDA
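The RTL itself appears in the coverage views of Figure 1. As a stand-in, a minimal sketch of what such a DUT might look like is shown below; the specific "sel" encoding and the one-cycle pulse scheme are assumptions for illustration, since the exact mapping is given by Table 1 in the original figure.

// Hypothetical sketch of the Example 1 DUT. The "sel" encoding and the
// pulse-generation scheme are assumptions; the real mapping is in Table 1.
module dut (
  input  wire       clk,
  input  wire       rst,
  input  wire [1:0] sel,
  output reg        A,
  output reg        B,
  output reg        C
);
  reg [1:0] sel_q; // previous value of "sel", used to create one-cycle pulses

  always @(posedge clk or posedge rst) begin
    if (rst) begin
      sel_q <= 2'b00;
      A <= 1'b0; B <= 1'b0; C <= 1'b0;
    end else begin
      sel_q <= sel;
      // Pulse each output for a single cycle when "sel" newly selects it
      A <= (sel == 2'b01) && (sel_q != 2'b01);
      B <= (sel == 2'b10) && (sel_q != 2'b10);
      C <= (sel == 2'b11) && (sel_q != 2'b11);
    end
  end
endmodule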

To simplify and focus the discussion of results, we will confine attention to output “B”. Obviously, in a real verification flow, all requirements and outputs would be tested.

Imagine there is a property for verifying the requirement that output “B” can be active for only one clock cycle. A simulation test is run to stimulate the “B” output, and formal is run on this property. Code coverage results for this circuit, written in Verilog, are shown below for simulation (left-hand-side of Figure 1 below) and formal (right-hand-side). Note that VHDL designs would exhibit similar coverage metrics in this example, as well as the subsequent examples in this article.
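The article does not show the property itself; one plausible way to express the one-cycle requirement in SVA (the name and exact formulation here are assumptions) is:

// Hypothetical property: once B rises, it must be low on the next clock,
// so B is never active for more than one consecutive cycle.
a_B_one_cycle: assert property (@(posedge clk) disable iff (rst) B |=> !B);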

Figure 1 In code coverage results from simulation (left) and formal (right), lines with green checkmarks and green shading are covered. Lines with red Xs and red shading are not covered. Source: Siemens EDA

Referring to the simulation results on the left of Figure 1, you can see that most of the design is covered, even though only one of the outputs was driven. The only items not covered relate to the "sel" input never being driven with "01" or "11". This reveals the generous nature of simulation code coverage: logic unrelated to what is being tested can be covered too. For example, every statement in an executed procedural block is marked covered. Why is this? In short, the simulation stimulus (input vectors) is driven into the inputs of the Verilog model by the simulation tool, with the analysis progressing from the inputs to the outputs of the design being verified.

Clearly, the formal code coverage results are quite different, reflecting the nature of formal analysis, which effectively works backwards from the given DUT output being verified using a SystemVerilog Assertions (SVA) property that specifies the behavior of the node(s) designated therein. Specifically, formal proof algorithms start by looking at a small amount of logic in the immediate fan-in cone of the output signal specified by the SVA property.

The property may be proven right at that point, and the analysis would be complete. But if the property isn’t proven in this first step, more logic is “pulled-in” just ahead of this area. If this is enough logic to obtain a proof, you are done. If not, the process continues until primary inputs and, potentially, “assumptions”—constraints—are used in the analysis. Again, you can think of the formal code coverage analysis as proceeding from the node of interest—output “B” in this example—toward the inputs of the design. Consequently, the only lines of code declared “covered” by the formal analysis are those that were needed to obtain the proof of the property; in this case those that directly control output “B”.

Given these very different approaches—and the subsequent very different results—it is critical to keep the simulation and formal results “parallel and separate” from each other when exporting this data to a common verification results database. Only in this way can users accurately assess progress and make use of downstream verification management techniques for tracking and test plan completion.

Example 2: Simulation vs. formal code coverage of a finite state machine

The simple finite state machine (FSM) shown in Figure 2 will be used for another illustration of the different code coverage results that simulation and formal analyses deliver when run on the same DUT.

Figure 2 State diagram of the example FSM illustrates different code coverage results. Source: Siemens EDA
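The Verilog model itself appears in the coverage views of Figure 3. As a stand-in, a hypothetical sketch of an FSM of this general shape is shown below; the state names, encoding, and "mout" decode are all assumptions for illustration.

// Hypothetical FSM sketch; state names, encoding, and the "mout" decode
// are assumptions for illustration only.
module fsm (
  input  wire       clk,
  input  wire       rst,
  input  wire [1:0] sel,
  output reg  [2:0] mout
);
  localparam IDLE = 2'd0, S1 = 2'd1, S2 = 2'd2, S3 = 2'd3;
  reg [1:0] state;

  // State-transition logic
  always @(posedge clk or posedge rst) begin
    if (rst) state <= IDLE;
    else case (state)
      IDLE: state <= (sel != 2'b00) ? S1 : IDLE;
      S1:   state <= S2;
      S2:   state <= S3;
      S3:   state <= IDLE;
    endcase
  end

  // Output decode: at most one bit of "mout" is set for any state value
  always @(*) begin
    case (state)
      S1:      mout = 3'b001;
      S2:      mout = 3'b010;
      S3:      mout = 3'b100;
      default: mout = 3'b000;
    endcase
  end
endmodule

Notice that in a sketch like this, the combinational "mout" decode alone guarantees that at most one bit is set, regardless of the value of "state"; that observation becomes important when interpreting the formal results below.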

The following property was run in simulation and formal against a Verilog model of this state machine:

a_mout_mutex: assert property (@(posedge clk) $onehot0(mout) );

The $onehot0(mout) expression checks that zero or one bit of "mout" is asserted on any given clock. Simulation was run with a single value of the "sel" signal, which exercised the output. The property passed in simulation and was proven in formal. The coverage for both is shown below in Figure 3.

Figure 3 In FSM code coverage results from simulation (left) and formal (right), lines with green checkmarks and green shading are covered. Lines with red Xs and red shading are not covered. Source: Siemens EDA

Referring to the simulation results, again you can see that most of the design is covered. Perhaps the uncovered lines of code suggest that more vectors or "unreachability" analysis is needed; but all the green in this result gives the impression that the FSM design is in good shape.

Clearly, the formal analysis results paint a different picture. In this case, formal analysis was focused on proving that the "mout" output is mutex, per the spec. Hence, formal used only the logic that was needed for the proof; in this case, none of the FSM state bits were needed, only some of the signal assignments. While the mutex requirement is a real requirement, its correctness doesn't rely on much of the design. As such, the FSM design itself obviously needs reexamination and further testing. From a formal verification perspective, more properties are needed, or a property that tests higher-order requirements, making fuller use of the FSM, should be applied.
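For instance, using the assumed state names from the earlier sketch, a higher-order property might check an actual transition from the state diagram (again, a hypothetical example):

// Hypothetical higher-order property (state names assumed): after S1, the
// FSM must reach S2 on the next clock, exercising the transition logic.
a_s1_to_s2: assert property (@(posedge clk) disable iff (rst)
                             (state == S1) |=> (state == S2));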

It should also be noted that the formal coverage shown here is based on the logic needed to prove the property, commonly referred to as a “proof core”. A similar result would be achieved using mutation coverage analysis, outlined in our upcoming Part 3 article.

A crude form of formal code coverage is based on the structural cone of influence (COI) of the property, which would show coverage from the property all the way back to the inputs, closer to what is shown above for the simulation result. Clearly, COI coverage is not rigorous enough for signoff, and hence is useful only very early in the verification process, when properties are first being added (and in many cases, this step can be skipped entirely).

Example 3: Closing code coverage holes

In this example, imagine the user is looking to close coverage by using formal analysis to fill a coverage “hole” found in simulation. For purposes of illustration, we will show an initial failed attempt at this exercise—then the corrected version.

First, consider the following simple design coded in Verilog, where the outputs are again mutex, based on an encoding of the three inputs. The simulation code coverage results for this design are almost perfect; only one hole remains.
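The actual Verilog appears in the coverage view of Figure 4. As a stand-in, here is a hypothetical sketch; the signal functions are assumptions, chosen so that the "a_bogus" property shown later is trivially provable.

// Hypothetical sketch of the Example 3 design; the signal functions are
// assumptions chosen for illustration.
module enc (
  input  wire clk,
  input  wire in1,
  input  wire in2,
  input  wire in3,
  output reg  A,
  output reg  B,
  output reg  C
);
  wire n1 = in1 | in2; // intermediate net on the path left uncovered in simulation

  always @(posedge clk) begin
    // Mutex outputs driven from an encoding of the three inputs
    A <=  in1 & ~in2 & ~in3;
    B <= ~in1 &  n1  & ~in3; // the "n1" path containing the coverage hole
    C <= ~in1 & ~in2 &  in3;
  end
endmodule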

Figure 4 Code coverage results from simulation showing only one remaining hole; lines with green shading are covered and lines with red shading are not covered. Source: Siemens EDA

Rather than spend time manipulating the parameters of the simulation test, the verification engineer elects to run a formal analysis to try to cover this hole. Hence, the following property is written:

a_bogus: assert property (@(posedge clk) in1 |-> n1 );

When run in formal, this property yields the results shown on the left of Figure 5.

Figure 5 Code coverage results with formal due to trivial property targeting only that code (shown on the left), and code coverage from a property that’s checking the design’s requirements (shown on the right). Source: Siemens EDA

As shown on the left of Figure 5, in the rush to plug the simulation hole in the "n1" signal path, the verification engineer used a trivial property in formal to plug the hole seen in simulation. While combining this result with simulation would give 100% coverage of this logic, the property is actually useless: it tests nothing, and it's not tied to a test plan or to the verification of any design requirement. Closing coverage this way is, to use a colloquialism, bogus.

A better approach is to write and apply a property tied to the test plan and the verification of design requirements. In this case the design requires the three outputs to be mutex, thus a more useful property, which checks the requirements, would be written as follows:

a_good: assert property (@(posedge clk) $onehot0({A,B,C}) );

This property, when proven in formal, yields the coverage shown on the right of Figure 5. Ideally, the simulation hole would be filled by running a test that exercised that logic; or, if that logic were deemed unreachable, no further work would be needed. The hole points to a weakness in the test plan or in the testbench for the verification engine being used. If it is planned, as part of the test plan and the requirements needing verification, that part of this design be verified with simulation and part with formal, then the above coverage from each engine could be safely merged to give an accurate and complete coverage picture for the overall verification status.

Beware of false victory

The above examples show that simple blending of simulation and formal coverage data on a one-to-one line/object/point basis can mislead you (and your managers) into thinking that you are done, when in reality there are unverified areas of your code and test plan.

But it can get even worse. In Part 3 of this series, we will demonstrate how even properly merged 100% code and functional coverage results can still tempt you to prematurely conclude that your verification effort is over and it's safe to declare victory. Specifically, with a simple FSM and supporting logic example, we will show just how easy it is to fall into this trap, and how to use mutation analysis to exhaustively prove that both your DUT and your verification plan itself are bug free. Plus, we will summarize recommendations for properly using simulation and formal coverage together.

Editor’s Note: The last article of this three-part series on the pitfalls of mixing formal and simulation coverage will demonstrate how even properly merged code and functional coverage don’t mean that verification effort is over. The first part outlined the risks of arbitrarily merging simulation and formal coverage.

Mark Eslinger is a product engineer in the IC Verification Systems division of Siemens EDA, where he specializes in assertion-based methods and formal verification.

 

Joe Hupcey III is product marketing manager for Siemens EDA’s Design & Verification Technologies formal product line of automated applications and advanced property checking.

 

Nicolae Tusinschi is a product manager for formal verification solutions at Siemens EDA.

 

 
