# A Note on the Reliability and Risk Based Optimization of Statistical Quality Control

The statistical quality control (QC) optimization of an analytical process can be translated into a probabilistic risk assessment problem that requires the reliability analysis of the analytical system and the estimation of the risk caused by the measurement error. The reliability analysis of an analytical system should include a quantitative fault tree analysis to define the critical-failure modes and to estimate the critical-failure time and measurement error probability density functions and their dependencies. In a clinical laboratory setting, a critical failure of an analytical system can initiate a hazard when the total measurement error of a patient's result exceeds the medically acceptable measurement error; such an incorrect result can lead to harmful medical decisions. The risk of a critical failure is associated with the probability that it will occur and with the time for which it will persist. The applied QC procedure detects a critical failure with a certain probability. The residual risk can be defined as the risk of the measurement process when the QC procedure is applied. We can define risk measures based on the partial moments of the measurement error with reference to the medically acceptable measurement error, and then estimate both the risk before the application of QC and the residual risk after QC is applied.
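A risk measure of this kind can be illustrated with a short sketch. The code below estimates a first-order upper partial moment of the measurement error with reference to the medically acceptable error (denoted `tea` here); the distributions, the 2-SD shift, and the value of `tea` are illustrative assumptions, not taken from the source.

```python
import random

def upper_partial_moment(errors, tea, order=1):
    """Order-n upper partial moment of the measurement error with
    reference to the medically acceptable error tea: the mean of
    (|e| - tea)^order over errors exceeding tea, zero otherwise."""
    return sum(max(abs(e) - tea, 0.0) ** order for e in errors) / len(errors)

random.seed(0)
tea = 3.0  # medically acceptable error, in SD units (illustrative)
# Risk before any failure: in-control process with zero bias, unit SD.
in_control = [random.gauss(0.0, 1.0) for _ in range(100_000)]
# Risk during an undetected critical failure causing a 2-SD shift.
shifted = [random.gauss(2.0, 1.0) for _ in range(100_000)]

risk_in_control = upper_partial_moment(in_control, tea)
risk_shifted = upper_partial_moment(shifted, tea)
```

The partial moment weights only the portion of each error beyond the acceptable limit, so the shifted (failed) process yields a markedly larger risk measure than the in-control process.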

There is a certain financial cost associated with QC, including the cost of the control materials and their measurement and the cost of the repetitions caused by rejections. Therefore, an operational approach to optimal QC sampling planning could be based on minimizing the QC-related cost while keeping the residual risk acceptable.
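The constrained minimization can be sketched with a toy model. All parameters below (costs, failure rate, detection probability, the linear risk model, and the acceptable-risk threshold) are hypothetical placeholders chosen for illustration only.

```python
def qc_cost_rate(interval, control_cost=5.0, repeat_cost=20.0,
                 false_reject_prob=0.01):
    """Expected QC-related cost per unit time: control materials and
    their measurement at every QC event, plus the expected cost of
    repetitions after a (false) rejection. Illustrative parameters."""
    return (control_cost + false_reject_prob * repeat_cost) / interval

def residual_risk(interval, failure_rate=0.001, detection_prob=0.9):
    """Toy residual-risk model: risk proportional to the mean time a
    critical failure persists undetected between QC events
    (hypothetical functional form)."""
    mean_undetected_time = interval / (2.0 * detection_prob)
    return failure_rate * mean_undetected_time

acceptable_risk = 0.0105
feasible = [i for i in range(1, 101) if residual_risk(i) <= acceptable_risk]
optimal_interval = max(feasible)  # cost rate decreases with the interval
```

Because the cost rate falls as the interval lengthens while the residual risk grows, the optimum is the longest interval that still satisfies the risk constraint.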

To explore the estimation of the QC sampling time intervals using a residual risk measure, we developed an algorithm that estimates the residual risk for any sampling time interval of QC procedures applied to analytical systems with an arbitrary number of critical-failure modes, assuming any probability density function of critical-failure time and measurement error for each mode. Furthermore, it can estimate the optimal QC sampling time intervals that minimize a QC-related cost measure.
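A Monte Carlo sketch conveys the idea, though the published algorithm uses symbolic computation rather than simulation. The failure-time distributions, error distributions, detection probability, and horizon below are all illustrative assumptions; the modes are treated as independent for simplicity.

```python
import random

def residual_risk_mc(interval, horizon, modes, detect_prob, tea,
                     n_runs=5000):
    """Monte Carlo sketch of residual risk: the expected fraction of
    the operating horizon spent reporting results whose measurement
    error exceeds the medically acceptable error tea, while a critical
    failure persists undetected despite QC every `interval` time units.
    `modes` is a list of (failure-time sampler, error sampler) pairs."""
    random.seed(1)
    exposed = 0.0
    for _ in range(n_runs):
        # The earliest failure among the modes sets the error regime.
        t_fail, err = min(((ft(), es) for ft, es in modes),
                          key=lambda p: p[0])
        if t_fail >= horizon:
            continue  # no critical failure within the horizon
        # Walk the QC events after the failure until one detects it.
        q = (int(t_fail // interval) + 1) * interval
        while q < horizon and random.random() >= detect_prob:
            q += interval
        t_detect = min(q, horizon)
        # Fraction of results in the exposed window exceeding tea.
        n_bad = sum(abs(err()) > tea for _ in range(20))
        exposed += (t_detect - t_fail) * (n_bad / 20)
    return exposed / (n_runs * horizon)

# Two hypothetical critical-failure modes: an exponentially distributed
# electronics failure causing a 4-SD shift, and a Weibull wear-out
# failure causing increased imprecision (illustrative parameters).
modes = [
    (lambda: random.expovariate(1 / 200), lambda: random.gauss(4.0, 1.0)),
    (lambda: random.weibullvariate(300, 2), lambda: random.gauss(0.0, 3.0)),
]
risk_short = residual_risk_mc(interval=10, horizon=100, modes=modes,
                              detect_prob=0.8, tea=3.0)
risk_long = residual_risk_mc(interval=50, horizon=100, modes=modes,
                             detect_prob=0.8, tea=3.0)
```

Shorter sampling intervals shrink the window during which a critical failure goes undetected, so `risk_short` comes out below `risk_long`.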

The algorithm we developed offers insight into the relationship among a QC procedure, the reliability of an analytical system, the risk of the measurement error, and the QC-related cost. Furthermore, it demonstrates a method for the rational estimation of the QC sampling time intervals of analytical systems with an arbitrary number of failure modes. Therefore, given the reliability analysis of an analytical system, the risk analysis of the measurement error, and a QC procedure, there is an optimal QC sampling time interval that can sustain an acceptable residual risk while minimizing the QC-related cost.

The required quantitative fault tree analysis and the estimation of the critical-failure time probability density functions of modern analytical systems may be very complex. It is possible, though, to derive at least upper bounds for them using techniques for handling uncertainty. A more complex issue is the estimation of the dependencies between the critical-failure time probability density functions, as well as between the respective measurement error probability density functions. Although the failure time probability density functions of some critical-failure modes of an analytical system may be independent (for example, the failure of an optical component of a photometric module and a purely mechanical failure of a sampling module), others will be dependent. There are techniques that can be used to estimate such dependencies.
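One standard upper-bound technique for a fault tree OR gate is Boole's inequality, which bounds the top-event probability by the sum of the basic-event probabilities regardless of their dependence; the component probabilities below are hypothetical.

```python
def or_gate_upper_bound(probs):
    """Boole's inequality: P(any basic event occurs) is at most the
    sum of the event probabilities, valid under any dependence."""
    return min(sum(probs), 1.0)

def or_gate_independent(probs):
    """Exact OR-gate probability when the basic events are independent."""
    p_none = 1.0
    for q in probs:
        p_none *= (1.0 - q)
    return 1.0 - p_none

# Hypothetical per-cycle failure probabilities of three components.
component_failure_probs = [0.01, 0.005, 0.02]
bound = or_gate_upper_bound(component_failure_probs)
exact = or_gate_independent(component_failure_probs)
```

For rare events the bound is tight (here 0.035 versus about 0.0347 under independence), which is why it is a practical surrogate when the dependence structure is unknown.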

If the measurement error probability density functions are dependent, multivariate distributions could be used and the respective covariance matrices estimated.
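For instance, dependent measurement errors of two modes could be modeled with a bivariate normal distribution; the sketch below draws correlated errors via the Cholesky factor of an assumed 2x2 covariance matrix (the standard deviations and correlation are illustrative).

```python
import math
import random

def correlated_errors(n, sd1, sd2, rho, seed=0):
    """Draw n pairs of measurement errors from a bivariate normal with
    standard deviations sd1, sd2 and correlation rho, using the 2x2
    Cholesky factor of the covariance matrix."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
        e1 = sd1 * z1
        e2 = sd2 * (rho * z1 + math.sqrt(1.0 - rho ** 2) * z2)
        pairs.append((e1, e2))
    return pairs

pairs = correlated_errors(50_000, sd1=1.0, sd2=2.0, rho=0.7)
# The sample correlation should be close to the specified rho.
m1 = sum(a for a, _ in pairs) / len(pairs)
m2 = sum(b for _, b in pairs) / len(pairs)
cov = sum((a - m1) * (b - m2) for a, b in pairs) / len(pairs)
v1 = sum((a - m1) ** 2 for a, _ in pairs) / len(pairs)
v2 = sum((b - m2) ** 2 for _, b in pairs) / len(pairs)
sample_rho = cov / math.sqrt(v1 * v2)
```

In practice the covariance matrix itself would have to be estimated from field failure and error data rather than assumed.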

This is a large-scale procedure that can be accomplished by the industry. A database could then be established with the reliability analysis data and continuously updated with failure data from the analytical systems in the field, operated by different operators in different environments. A substantial commitment is probably required for such an effort to succeed, giving priority to the safety of the patient.

For the rigorous design of QC and the estimation of the optimal QC sampling time intervals, a risk analysis must be performed to correlate the size of the measurement error with the risk it can cause. Then the medically acceptable analytical error, the risk function (which can be a simple step function or a fuzzy function), and the acceptable risk and residual risk measures can be defined. The risk analysis is also a very complex task. It can be subjective or objective, quantitative or semi-quantitative, and should be accomplished by the medical profession. In the future, as the potential of data analysis increases exponentially, appropriate risk functions should be estimated using evidence-based methods.
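The two risk-function shapes mentioned above can be sketched directly; the medically acceptable error `tea` and the transition `width` of the fuzzy variant are illustrative parameters.

```python
def step_risk(error, tea):
    """Step risk function: no risk while the measurement error is
    within the medically acceptable error tea, full risk beyond it."""
    return 0.0 if abs(error) <= tea else 1.0

def fuzzy_risk(error, tea, width):
    """Fuzzy (piecewise-linear) risk function: risk rises gradually
    from 0 at tea to 1 at tea + width, instead of jumping abruptly."""
    excess = abs(error) - tea
    return min(max(excess / width, 0.0), 1.0)
```

The fuzzy form acknowledges that the harm caused by a measurement error rarely switches on at a sharp threshold, while the step form matches the conventional pass/fail interpretation of an allowable-error limit.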

In conclusion, to optimize the QC planning process, a reliability analysis of the analytical system and a risk analysis of the measurement error are needed. It is then possible to rationally estimate the optimal QC sampling time intervals that sustain an acceptable residual risk with the minimum QC-related cost.

The theoretical framework and the symbolic computation algorithm for the optimization of the statistical quality control of an analytical process, based on the reliability of the analytical system and the risk of the analytical error, are described in: Hatjimihail AT. Estimation of the optimal statistical quality control sampling time intervals using a residual risk measure. PLoS ONE 2009; 4(6): e5770 (see HCSL publications on statistical QC, reliability, and risk).

Aristeidis T. Chatzimichail, M.D., Ph.D.,

ath@hcsl.com