Worst-case distance
In fabrication, yield is one of the most important measures. Already in the design phase, engineers try to maximize the yield by using simulation techniques and statistical models. Often the data follows the well-known bell-shaped normal distribution, and for such distributions there is a simple, direct relation between the design margin and the yield: if we express the specification margin in terms of the standard deviation sigma, we can immediately calculate the yield Y with respect to this specification. The concept of worst-case distance (WCD) extends this simple idea to more complex problems (non-normal distributions, multiple specifications, etc.). The WCD[1] is a metric originally applied in electronic design for yield optimization and design centering; nowadays it is also applied as a metric for quantifying electronic system and device robustness.[2]
For yield optimization in electronic circuit design, the WCD relates the following yield-influencing factors to each other:
- Statistical distribution of the design parameters, usually determined by the technology process used
- Operating range, i.e. the range of operating conditions the design has to work in
- Performance specification for the performance parameters
Although the strict mathematical formalism may be complex, in a simple interpretation the WCD is the distance between the nominal performance and the performance specification limit, divided by the maximum possible performance variation (i.e. the specification margin expressed in standard deviations), evaluated at the worst point of the space spanned by the operating range. Note: This interpretation is valid for normally (Gaussian) distributed variables and performances. For such distributions the specification margin of a design is almost intuitively related to the yield: a larger safety margin to the limit(s) puts the design more on the safe side, and the production will contain fewer failing samples. The actual advantage of the WCD is that it offers an elegant method to also treat non-normal and multivariate distributions while still offering a pictorial, intuitive understanding.
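More formally (the notation here is chosen for illustration; see Antreich et al. for the exact formulation), let $s$ be the vector of statistical parameters normalized to zero mean and unit variance, $\theta$ the operating conditions within the operating range $\Theta$, and let the specification require $f(s, \theta) \le f_{\max}$ for the performance $f$. The WCD is then the smallest distance (in sigma units) from the nominal point $s = 0$ to the fail region, taken at the worst operating condition:
$$\mathrm{WCD} = \min_{\theta \in \Theta} \; \min_{s:\, f(s,\theta) > f_{\max}} \lVert s \rVert$$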
Simplest non-trivial example
In the simplest non-trivial case there is only one normally distributed performance parameter $f$ with mean $\mu$ and standard deviation $\sigma$, and one single upper limit $USL$ for the performance specification.
The WCD then calculates to:
$$\mathrm{WCD} = \frac{USL - \mu}{\sigma}$$
In this example it is assumed that only statistical variances contribute to the observed performance variations, and that the performance parameter does not depend on the operating conditions. Once we have found the WCD, we can (approximately) calculate the yield from it by using the error function (which is related to the cumulative distribution function of the normal (Gaussian) distribution) or by using look-up tables (e.g. WCD = 3 is equivalent to Y = 99.87%).
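As a minimal sketch (the function name and the example values are illustrative, not taken from the cited papers), this WCD-to-yield conversion for the one-dimensional case can be written as:

```python
from math import erf, sqrt

def yield_from_wcd(wcd: float) -> float:
    """Yield = P(performance <= limit) = Phi(WCD), the standard normal CDF."""
    return 0.5 * (1.0 + erf(wcd / sqrt(2.0)))

for wcd in (1.0, 2.0, 3.0, 4.0):
    # WCD = 3 gives about 99.87 %, matching the look-up value quoted above.
    print(f"WCD = {wcd:.0f} sigma  ->  yield ~ {yield_from_wcd(wcd):.4%}")
```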
For a discussion of cases more complex than the above-mentioned example, see Antreich et al., 1994.
Relation to process capability index
In the above-mentioned one-dimensional example the WCD is closely related to the process capability index value:
- $C_{pk} = \frac{USL - \mu}{3\sigma} = \frac{\mathrm{WCD}}{3}$,
which is used in statistical process control and from which the process yield can be derived. Note: The Cpk is also defined for the case of both a lower and an upper specification limit, but for WCD we have to treat both specification limits separately (which is actually no real disadvantage).
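A hedged illustration of this relation (the helper name and the sample data are made up for this sketch): estimating the Cpk of measured samples against an upper limit and reading off the equivalent WCD.

```python
import statistics

def cpk_upper(samples, usl):
    """Cpk for a single upper specification limit: (USL - mean) / (3 * stdev)."""
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    return (usl - mu) / (3.0 * sigma)

samples = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3]  # made-up measurement data
cpk = cpk_upper(samples, usl=11.0)
print(f"Cpk = {cpk:.2f}  ->  equivalent WCD = {3.0 * cpk:.2f} sigma")
```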
Limitations of the WCD concept
If we run a WCD analysis on multiple specifications (e.g. for power consumption, speed, distortion, etc.), we will obtain at least as many WCDs as specifications, but usually the worst case (thus the lowest WCD) dominates the yield. However, the assumption that the lowest WCD accurately represents the total yield is violated in several difficult cases, e.g. with nonlinear specifications or with many strongly competing specifications.
Examples: For a specification like offset voltage Voffset < 30 mV, a normal distribution with mean = 0 and sigma = 10 mV gives a WCD of 3, which is equivalent to Y = 99.87%. However, for a specification like |Voffset| < 30 mV we would again get WCD = 3, but the yield loss is now twice as high, because the fail region is now split into two parts. As real-world designs can be very complex and highly nonlinear, there are also examples where the WCD can be much further off, e.g. in the case of an ADC or DAC with specifications on differential nonlinearity (DNL). Also for CMOS timing analysis a WCD analysis is very difficult.
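A short sketch reproducing the offset-voltage numbers above (standard library only; the values are those quoted in the text):

```python
from math import erf, sqrt

def phi(x):  # standard normal CDF
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

sigma, limit = 10e-3, 30e-3                                  # 10 mV, 30 mV
one_sided_yield = phi(limit / sigma)                         # Voffset  <  30 mV
two_sided_yield = phi(limit / sigma) - phi(-limit / sigma)   # |Voffset| < 30 mV

print(f"one-sided: yield = {one_sided_yield:.4%}, loss = {1 - one_sided_yield:.4%}")
print(f"two-sided: yield = {two_sided_yield:.4%}, loss = {1 - two_sided_yield:.4%}")
# The two-sided yield loss is twice the one-sided loss, although WCD = 3 in both cases.
```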
On the other hand: although the WCD might differ from the true yield, it can still be a very useful optimization criterion for improving a design. The WCD concept also explicitly defines the set of statistical parameter values that constitutes the worst case, which makes it a good starting point for an optimization.
However, a very important limitation lies in just finding the WCD point, i.e. the set of statistical variable values which hits the specification boundary, because even small real-world problems can have thousands (instead of one or two) of such variables, plus the condition variables like temperature, supply voltage, etc. This makes a slow brute-force search impractical, and very robust optimizers are needed to find the WCDs (e.g. even in the presence of local optima or split fail regions).
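A simplified sketch of such a WCD-point search, formulated as a constrained optimization on a toy performance model (perf and SPEC_MAX are made up for illustration; a real flow would evaluate a circuit simulator instead):

```python
import numpy as np
from scipy.optimize import minimize

def perf(s):
    # Toy nonlinear performance in normalized (sigma-scaled) statistical parameters
    return 1.0 + 0.8 * s[0] + 0.3 * s[1] + 0.2 * s[0] * s[1]

SPEC_MAX = 3.0  # upper specification limit (arbitrary value for this sketch)

# Find the point on the specification boundary closest to the nominal point s = 0;
# its Euclidean norm (in sigma units) is the worst-case distance.
res = minimize(
    lambda s: float(np.dot(s, s)),           # squared distance to the nominal point
    x0=np.array([1.0, 1.0]),                 # start away from the nominal point
    method="SLSQP",
    constraints={"type": "eq", "fun": lambda s: perf(s) - SPEC_MAX},
)
print("WCD point:", res.x, " WCD =", np.linalg.norm(res.x))
```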
Of course, even the concept of WCD is questionable to some degree; for instance, it does not cover what happens beyond the WCD. Surely a design is better if it does not completely break for "outliers" but remains at least functional (e.g. the amplification factor may drop below the specification limit, but the circuit still behaves at least as an amplifier and not, e.g., as an oscillator). So WCD is a helpful piece in the whole design flow and does not replace understanding.
In contrast, random Monte Carlo is a concept which comes with much less restrictive prerequisites. It works for any mix of any kind of variables, even with an infinite number of them or even with a random number of random variables. All advanced methods typically need to exploit extra assumptions to be faster; there is no free lunch. This is the reason why e.g. WCD can sometimes offer a huge speed-up, but sometimes fails hopelessly.
Alternative concepts
WCD allows yield problems to be simplified, but it is not the only way to do this. A simpler way is not to find the margin in terms of sigma in the space of statistical variables, but to evaluate the performance margin(s) itself, as the Cpk does. This worst-case performance margin (WPM) is much easier to obtain, but the problem here is usually that, although the statistical variables might be normally (Gaussian) distributed, the performances will often not follow that distribution type; usually it will be an unknown, more difficult distribution. For this reason the performance margin in terms of sigma is at best a relative criterion for yield optimization. This often leads to pure Monte-Carlo methods for solving the WPM problem, whereas WCD allows a more elegant mathematical treatment, only partially based on Monte-Carlo.
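A hedged sketch of this caveat, using a made-up skewed (lognormal) performance: the sigma-based margin implies a much higher yield than the samples actually show.

```python
from math import erf, sqrt
import numpy as np

rng = np.random.default_rng(0)
perf = rng.lognormal(mean=0.0, sigma=0.4, size=200_000)    # skewed, non-normal performance
usl = 3.0                                                  # upper specification limit

margin_sigma = (usl - perf.mean()) / perf.std()            # WPM-style margin in "sigma"
gaussian_yield = 0.5 * (1 + erf(margin_sigma / sqrt(2)))   # yield if the margin were Gaussian
empirical_yield = np.mean(perf < usl)                      # what the samples actually show

print(f"margin = {margin_sigma:.2f} sigma")
print(f"Gaussian-implied yield = {gaussian_yield:.5%}, empirical yield = {empirical_yield:.5%}")
```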
Random Monte Carlo becomes inefficient for estimating high yields if the distribution type is uncertain. One method to speed up MC is to use non-random sampling methods like Latin hypercube or low-discrepancy sampling; however, the speed-up is quite limited in real design problems. A promising newer technique is e.g. scaled-sigma sampling (SSS). With SSS there is a higher chance to hit the fail region, and more samples in that region lead to a more stable statistic, thus tighter confidence intervals. In contrast to importance sampling or WCD, SSS makes no assumptions on the shape of the fail boundary or the number of fail regions, so it is (relative to other methods) most efficient in cases with many variables, strong nonlinearity, and many difficult specifications.
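A very rough sketch of the scaled-sigma idea (illustrative only, not the published SSS algorithm; the fail criterion is made up): sampling with inflated sigma produces far more fails, and the fail fractions observed at several scales can then be extrapolated back to scale 1 (the extrapolation model is omitted here).

```python
import numpy as np

rng = np.random.default_rng(1)

def fails(x):
    # Toy fail criterion on two normalized statistical variables
    return x[:, 0] + 0.1 * x[:, 1] ** 2 > 4.5

n = 100_000
for scale in (1.0, 2.0, 3.0):
    samples = scale * rng.standard_normal((n, 2))   # sigma inflated by `scale`
    fail_fraction = fails(samples).mean()
    # At scale 1 essentially no fails show up in 1e5 samples; at scale 2-3 the
    # statistics become usable and can be extrapolated back to scale 1.
    print(f"sigma scale {scale:.0f}: observed fail fraction = {fail_fraction:.5f}")
```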
References
- ↑ Antreich, K.; Graeb, H. E.; Wieser, C. U. (1994). "Circuit analysis and optimization driven by worst-case distances". IEEE Trans. on CAD of Integrated Circuits and Systems 13 (1): 57–71.
- ↑ T Nirmaier, J Kirscher, Z Maksut, M Harrant, M Rafaila, G Pelz (2013). "Robustness Metrics for Automotive Power Microelectronics" (PDF). Design, Automation and Test in Europe, RIIF Workshop (Grenoble).