Formal equivalence verifiers for combinational circuits have seen
widespread acceptance and are today in active use throughout the
industry. In addition to many other techniques, such tools rely heavily
on BDD algorithms for establishing signal equivalence. However, building
monolithic BDDs is often not feasible for today's complex circuits.
Thus, a common approach is to apply divide-and-conquer strategies based
on cut-points. One of the main drawbacks of these algorithms is their
tendency to produce false negatives. As a result, significant effort
must usually be spent in trying to determine whether the failures are
indeed real or simply artifacts of the algorithm. In particular, if the
design is actually incorrect, many cut-point based algorithms perform
very poorly. In this paper we present a new algorithm that completely
removes the problem of false negatives by introducing normalized functions
instead of free variables at cut-points. This results in simpler and
more efficient counter-example generation for incorrect circuits.
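To make the false-negative problem concrete, the following minimal Python sketch uses two hypothetical toy circuits that both compute the function a but share an internal signal w = a AND b that a cut-point algorithm would select as a cut. Replacing w by a free variable yields a spurious counterexample; restricting the check to values w can actually take removes it. The brute-force range restriction here is only a stand-in for the effect of normalized functions, not the construction presented in this paper.

```python
from itertools import product

# Both toy circuits compute out = a, and share the internal signal w = a AND b,
# which a cut-point algorithm would pick as a cut.
def cut_g(a, b):                 # cut-point function: w = a AND b
    return a & b

def out1(w, a, b):               # circuit 1 above the cut: w OR (a AND NOT b)
    return w | (a & (1 - b))

def out2(w, a, b):               # circuit 2 above the cut: w OR (NOT w AND a)
    return w | ((1 - w) & a)

# Naive cut-point check: replace w by a FREE variable v.
spurious = [(v, a, b) for v, a, b in product((0, 1), repeat=3)
            if out1(v, a, b) != out2(v, a, b)]
print("free-variable mismatches:", spurious)
# -> [(0, 1, 1)]: a spurious counterexample, since w = a AND b forces v = 1
#    whenever a = b = 1.  This is the classic false negative.

# Range-aware check: only consider values of v that w can actually take
# (a brute-force stand-in for the effect of normalized functions).
real = [(a, b) for a, b in product((0, 1), repeat=2)
        if out1(cut_g(a, b), a, b) != out2(cut_g(a, b), a, b)]
print("range-aware mismatches:", real)   # -> []: the circuits agree
```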
Furthermore, the approach handles the propagation of input
assumptions to cut-points and is significantly more accurate in
finding cut-points. Although our algorithm may naively appear more
expensive than traditional cut-point algorithms, empirical data on
more than 900 complex signals from a recent microprocessor design
shows the opposite: for large and complex signal cones, our approach
proves more efficient.