# A Note on the Reliability and Risk Based Optimization of Statistical Quality Control

The statistical quality control (QC) optimization of an analytical process can be translated into a probabilistic risk assessment problem that requires reliability analysis of the analytical system and estimation of the risk caused by measurement error. Reliability analysis of an analytical system should include a quantitative fault tree analysis to define the critical-failure modes and to estimate the critical-failure time and measurement error probability density functions and their dependencies. A critical failure of an analytical system in a clinical laboratory setting can initiate a hazard when the total measurement error of a patient's result exceeds the medically acceptable measurement error; an incorrect result can cause harmful medical decisions. The risk of a critical failure is associated with the probability that it will occur and with the time for which it will persist. A QC procedure detects a critical failure with a certain probability. The residual risk can be considered the risk of the measurement process assuming the application of a QC procedure. We can define risk measures based on the partial moments of the measurement error with reference to the medically acceptable measurement error, and then estimate both the risk before the application of QC and the residual risk assuming QC is applied.
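As a minimal sketch of such a partial-moment risk measure (the function name, the sample of errors, and the parameter values are illustrative, not the published definitions), the upper partial moment of the absolute measurement error with reference to the medically acceptable measurement error can be estimated from a sample of total errors:

```python
def upper_partial_moment(errors, tea, order=1):
    """Partial moment of the given order of the absolute measurement
    error beyond the medically acceptable measurement error `tea`,
    estimated from a sample of total measurement errors."""
    if order == 0:
        # zeroth-order partial moment: probability of an unacceptable result
        return sum(1.0 for e in errors if abs(e) > tea) / len(errors)
    excess = [max(abs(e) - tea, 0.0) ** order for e in errors]
    return sum(excess) / len(errors)


# Hypothetical sample of total measurement errors, tea = 2.0:
errors = [-0.5, 0.2, 1.8, -2.4, 0.9, 3.1]
p_unacceptable = upper_partial_moment(errors, 2.0, order=0)  # 2 of 6 results
mean_excess = upper_partial_moment(errors, 2.0, order=1)
```

The zeroth-order moment reduces to the probability of an unacceptable result, while higher orders weight larger exceedances more heavily.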

There is a certain financial cost associated with QC, including the cost of the control materials and their measurement and the cost of the repetitions caused by rejections. Therefore, an operational approach to optimal QC sampling planning could be based on minimizing the QC-related cost while keeping the residual risk acceptable.
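A simple sketch of such a cost measure, under the assumptions named in the text (control material and measurement costs, plus repetitions caused by rejections), might express the expected QC-related cost per unit time as a function of the sampling time interval; all parameter names and values below are illustrative:

```python
def qc_cost_rate(interval, n_controls, cost_per_control,
                 p_rejection, cost_per_repetition):
    """Expected QC-related cost per unit time for a given QC sampling
    time interval: the cost of measuring the control materials at each
    sampling, plus the expected cost of the repetitions caused by
    rejections of the QC procedure."""
    cost_per_sampling = (n_controls * cost_per_control
                         + p_rejection * cost_per_repetition)
    return cost_per_sampling / interval


# Illustrative values: 2 controls at 2.0 each, 5% rejection rate,
# repetition cost 40.0, sampling every 4 hours:
rate = qc_cost_rate(4.0, 2, 2.0, 0.05, 40.0)
```

The cost rate falls as the interval grows, which is what makes longer intervals attractive whenever the residual risk remains acceptable.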

To explore the estimation of the QC sampling time intervals using a residual risk measure, we have developed an algorithm that estimates the residual risk of any sampling time interval of a QC procedure applied to analytical systems with an arbitrary number of critical-failure modes, assuming any probability density function of critical-failure time and measurement error for each mode. Furthermore, it estimates the optimal QC sampling time intervals that minimize a QC related cost measure.
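A Monte Carlo sketch can illustrate the kind of quantity such an algorithm estimates; this is not the published symbolic algorithm, and the exponential failure times, the mode parameters, and the time horizon are all assumptions for illustration:

```python
import random


def residual_risk(interval, modes, horizon, n_sim=20000, seed=1):
    """Monte Carlo sketch of a residual risk measure: the expected
    fraction of the time horizon spent in an undetected critical
    failure, weighted by the probability that a result produced during
    that failure is medically unacceptable. Each mode is a tuple
    (failure_rate, p_detect, p_unacceptable); critical-failure times
    are assumed exponentially distributed."""
    rng = random.Random(seed)
    risk = 0.0
    for rate, p_detect, p_bad in modes:
        exposure = 0.0
        for _ in range(n_sim):
            t_fail = rng.expovariate(rate)
            if t_fail >= horizon:
                continue  # no critical failure within the horizon
            # first QC sampling after the failure occurs
            t_detect = (int(t_fail // interval) + 1) * interval
            # geometric number of samplings until the QC detects it
            while rng.random() > p_detect:
                t_detect += interval
            exposure += min(t_detect, horizon) - t_fail
        risk += p_bad * exposure / (n_sim * horizon)
    return risk
```

Shorter sampling intervals shorten the undetected persistence of a failure and therefore lower this residual risk, at the price of a higher QC-related cost.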

The algorithm we have developed offers insight into the relation among a QC procedure, the reliability of an analytical system, the risk of the measurement error, and the QC-related cost. Furthermore, it demonstrates a method for the rational estimation of the QC sampling time intervals of analytical systems with an arbitrary number of failure modes. Therefore, given the reliability analysis of an analytical system, the risk analysis of the measurement error, and a QC procedure, there is an optimal QC sampling time interval approach that can sustain an acceptable residual risk while minimizing the QC-related cost.
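The constrained optimization this describes can be sketched as a search over candidate intervals; the closed-form risk and cost functions below are deliberately simplistic illustrations (a single exponential failure mode, a failure persisting on average half an interval), not the published model:

```python
def optimal_interval(candidates, risk_fn, cost_fn, acceptable_risk):
    """Among candidate QC sampling time intervals, return the one with
    the minimum QC-related cost whose residual risk does not exceed
    the acceptable level, or None if no candidate is feasible."""
    feasible = [h for h in candidates if risk_fn(h) <= acceptable_risk]
    return min(feasible, key=cost_fn) if feasible else None


# Illustrative closed forms: failure rate lam, probability p_bad of an
# unacceptable result during a failure, fixed cost c per QC sampling.
lam, p_bad, c = 0.001, 0.5, 6.0
risk = lambda h: lam * h / 2 * p_bad   # residual risk grows with h
cost = lambda h: c / h                 # cost rate falls with h
best = optimal_interval([1, 2, 4, 8, 12, 24], risk, cost, 0.002)
```

Since cost decreases monotonically with the interval while residual risk increases, the optimum is the longest feasible interval.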

The needed quantitative fault tree analysis and the estimation of the critical-failure time probability density functions of modern analytical systems may be overly complex. It is possible, though, to derive at least their upper bounds, using techniques for handling uncertainty. A more complex issue is the estimation of the dependencies among the critical-failure time probability density functions, as well as among the respective measurement error probability density functions. Although the failure time probability density functions of some critical-failure modes may be independent (for example, the failure of an optical component of a photometric module and the purely mechanical failure of a sampling module), others will be dependent. There are techniques that can be used to estimate these dependencies.
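One elementary example of such an upper bound, assuming the fault tree is expressed with OR and AND gates over basic events, is the union (rare-event) bound, which holds whatever the dependencies among the events; the functions below are a minimal sketch, not a full fault tree engine:

```python
def or_gate_upper_bound(probs):
    """Union (rare-event) bound for an OR gate of a fault tree: the
    probability of the top event is at most the sum of the basic-event
    probabilities, regardless of their dependencies."""
    return min(sum(probs), 1.0)


def and_gate(probs):
    """AND gate probability under an independence assumption."""
    p = 1.0
    for q in probs:
        p *= q
    return p
```

The OR-gate bound is useful precisely because it needs no dependency information, while the AND gate shown here is only exact when the independence assumption holds.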

If the measurement error probability density functions are dependent, then multivariate distributions could be used, and the respective covariance matrices could be estimated.
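As a minimal sketch of the covariance estimation step (the data layout is an assumption: one error vector per observation, one component per dependent source of error):

```python
def covariance_matrix(samples):
    """Sample covariance matrix of jointly observed measurement
    errors. `samples` is a list of equal-length error vectors, one
    vector per observation."""
    n, k = len(samples), len(samples[0])
    means = [sum(s[j] for s in samples) / n for j in range(k)]
    return [[sum((s[i] - means[i]) * (s[j] - means[j])
                 for s in samples) / (n - 1)
             for j in range(k)] for i in range(k)]


# Hypothetical paired error observations from two dependent modes:
cov = covariance_matrix([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])
```

The estimated matrix could then parameterize a multivariate distribution of the measurement errors of the dependent failure modes.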

This is a large-scale procedure that could be accomplished by the industry. A database could then be established with reliability analysis data and continuously updated with failure data from the analytical systems in the field, operated by different operators in different environments. A substantial commitment is probably required for such an effort to succeed, giving priority to the safety of the patient.

For the rigorous QC design and estimation of the optimal QC sampling time intervals, a risk analysis must be performed to correlate the size of the measurement error with the risk it can cause. Then the medically acceptable analytical error, the risk function (which can be even a simple step or fuzzy function), and the acceptable risk and residual risk measures can be defined. Risk analysis is an extremely complex task too. It can be subjective or objective, quantitative or semi-quantitative, and should be accomplished by the medical profession. In the future, as the potential of data analysis increases exponentially, appropriate risk functions should be estimated using evidence-based methods.
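The step and fuzzy risk functions mentioned above can be sketched as follows; the breakpoints are hypothetical placeholders for values that would come from the medical risk analysis:

```python
def step_risk(error, tea):
    """Step risk function: a result is considered harmful if and only
    if the absolute measurement error exceeds the medically acceptable
    analytical error `tea`."""
    return 1.0 if abs(error) > tea else 0.0


def fuzzy_risk(error, tea_low, tea_high):
    """Fuzzy (piecewise linear) risk function: the risk rises linearly
    from 0 at `tea_low` to 1 at `tea_high` (illustrative breakpoints)."""
    e = abs(error)
    if e <= tea_low:
        return 0.0
    if e >= tea_high:
        return 1.0
    return (e - tea_low) / (tea_high - tea_low)
```

The fuzzy form avoids the abrupt all-or-nothing judgment of the step function, which may better reflect the gradual clinical impact of increasing measurement error.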

Therefore, to optimize the QC planning process, reliability analysis of the analytical system and risk analysis of the measurement error are needed. Then it is possible to rationally estimate the optimal QC sampling time intervals to sustain an acceptable residual risk with minimum QC related cost. Since 2009, we have developed a theoretical framework and a symbolic computation algorithm for the optimization of statistical quality control of an analytical process, based on the reliability of the analytical system and the risk of the analytical error^{1} (see HCSL publications on statistical QC, reliability, and risk).

Aristeidis T. Chatzimichail, M.D., Ph.D.

## References

1. Hatjimihail AT. Estimation of the optimal statistical quality control sampling time intervals using a residual risk measure. PLoS ONE 2009;4(6):e5770.