A Note on the Reliability and Risk Based Optimization of Statistical Quality Control
Optimizing statistical quality control (QC) of an analytical process can be approached as a probabilistic risk assessment problem that involves the reliability analysis of the analytical system and the estimation of the risk caused by measurement error. Reliability analysis of an analytical system should encompass a quantitative fault tree analysis to identify the critical-failure modes and to estimate the probability density functions of critical-failure time and measurement error, as well as their dependencies. In a clinical laboratory setting, a critical failure of an analytical system can initiate a hazard when the total measurement error of a patient's result exceeds the medically acceptable measurement error, potentially leading to harmful medical decisions.
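For illustration, the quantitative core of a fault tree evaluation reduces to combining basic-event probabilities through AND and OR gates. The sketch below assumes independent basic events with known failure probabilities; the events and numbers are hypothetical and are not taken from any real analytical system.

```python
def or_gate(probs):
    """Top-event probability under an OR gate: the event occurs
    unless every (independent) basic event fails to occur."""
    p_none = 1.0
    for q in probs:
        p_none *= (1.0 - q)
    return 1.0 - p_none

def and_gate(probs):
    """Top-event probability under an AND gate: all (independent)
    basic events must occur."""
    p_all = 1.0
    for q in probs:
        p_all *= q
    return p_all

# Hypothetical critical failure: a pump fails, OR both a sensor
# and its backup controller fail.
p_critical = or_gate([0.01, and_gate([0.05, 0.10])])
```

Real fault trees also handle dependent events and repeated basic events, which this independence-based sketch deliberately omits.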
The risk of a critical failure is related to the probability of its occurrence and the duration of its persistence. A statistical QC procedure detects critical failures with a certain probability, and the residual risk can be considered the risk of the measurement process assuming the application of a QC procedure. Risk measures can be defined based on the partial moments of the measurement error relative to the medically acceptable measurement error. Subsequently, the risk before applying the QC procedure and the residual risk assuming its application can be estimated.
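As a minimal sketch of such a risk measure, the function below computes an upper partial moment of the measurement error beyond the medically acceptable error, assuming a normally distributed error. The normal assumption, the trapezoidal integration scheme, and all parameter names are illustrative choices, not the measures defined in the cited work.

```python
import math

def normal_pdf(x, mu, sigma):
    """Gaussian probability density."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def partial_moment_risk(mu, sigma, tea, order=1, n=20001, span=10.0):
    """Upper partial moment E[max(|e| - tea, 0)**order] of a
    N(mu, sigma**2) measurement error e beyond the medically
    acceptable error `tea` (hypothetical risk measure),
    computed by trapezoidal integration over +/- span sigmas."""
    lo, hi = mu - span * sigma, mu + span * sigma
    dx = (hi - lo) / (n - 1)
    total = 0.0
    for i in range(n):
        e = lo + i * dx
        excess = max(abs(e) - tea, 0.0) ** order
        weight = 0.5 if i in (0, n - 1) else 1.0
        total += weight * excess * normal_pdf(e, mu, sigma)
    return total * dx
```

With `order = 1` and `tea = 0` this reduces to the mean absolute error, which provides a quick sanity check, and a systematic bias (nonzero `mu`) increases the measure, as expected of a risk measure.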
QC procedures involve financial costs, including the cost of control materials, measurements, and repetitions due to rejections. An operational approach to optimal QC sampling planning could be based on minimizing QC-related costs while maintaining acceptable residual risk levels.
We have developed an algorithm that estimates the residual risk of any sampling time interval of a statistical QC procedure applied to analytical systems with an arbitrary number of critical-failure modes, assuming any probability density function of critical-failure time and measurement error for each mode. Additionally, it estimates the optimal QC sampling time intervals that minimize a QC-related cost measure.
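The published algorithm is not reproduced here, but a toy model conveys the kind of computation involved: assume a single critical-failure mode with a constant failure rate, a QC rule that detects an existing failure with a fixed probability at each run, and a residual-risk ceiling. Because the QC cost rate falls as the sampling interval grows, the longest interval that keeps residual risk acceptable is also the cheapest. All parameters below are hypothetical.

```python
def residual_risk(interval, failure_rate, detect_prob):
    """Expected fraction of time the system runs with an undetected
    critical failure (toy single-failure-mode model): failures occur
    at `failure_rate` per hour; the QC run closing each `interval`
    detects an existing failure with probability `detect_prob`, so a
    failure survives on average half an interval plus
    (1 - p)/p further intervals before detection."""
    mean_exposure = interval * (0.5 + (1.0 - detect_prob) / detect_prob)
    return failure_rate * mean_exposure

def qc_cost_per_hour(interval, cost_per_run):
    """QC-related cost rate: one control run per sampling interval."""
    return cost_per_run / interval

def optimal_interval(candidates, failure_rate, detect_prob, max_residual_risk):
    """Longest candidate interval whose residual risk stays acceptable;
    since cost per hour decreases with the interval, this is also the
    minimum-cost feasible interval."""
    feasible = [t for t in candidates
                if residual_risk(t, failure_rate, detect_prob) <= max_residual_risk]
    return max(feasible) if feasible else None
```

For example, with a failure rate of 0.001 per hour, 90% per-run detection probability, and a residual-risk ceiling of 1% of operating time, scanning hourly candidate intervals from 1 to 24 selects a 16-hour interval. A multi-failure-mode version would sum such exposure terms over modes, weighted by each mode's error distribution.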
Our algorithm provides insight into the relationship between a QC procedure, the reliability of an analytical system, the risk of measurement error, and QC-related costs. It also demonstrates a method for the rational estimation of QC sampling time intervals for analytical systems with an arbitrary number of critical-failure modes. Given the reliability analysis of an analytical system, the risk analysis of measurement error, and a statistical QC procedure, optimal QC sampling time intervals can be estimated that maintain acceptable residual risk while minimizing QC-related costs.
Quantitative fault tree analysis and estimation of critical-failure time probability density functions for modern analytical systems can be quite complex. However, upper bounds can be derived using techniques that address uncertainty. Estimating dependencies between critical-failure time probability density functions and their respective measurement error probability density functions presents an even greater challenge.
A reliability analysis of this scale could be undertaken by the industry, leading to the establishment of a database of reliability analysis data. This database could be continuously updated with failure data from analytical systems operated in the field by various operators in diverse environments. A significant commitment, focused on ensuring patient safety, would likely be necessary for such an endeavor to succeed.
For rigorous QC design and estimation of optimal QC sampling time intervals, a risk analysis correlating the size of the measurement error with the associated risk is essential. This allows for the definition of the medically acceptable measurement error, of risk functions (which could be simple step functions or fuzzy functions), and of acceptable risk and residual risk measures. Risk analysis itself is a highly complex task that can be subjective or objective, quantitative or semi-quantitative, and should be carried out by medical professionals. As data-analysis capabilities grow, evidence-based methods should increasingly be used to estimate appropriate risk functions.
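As an illustration of the two kinds of risk functions mentioned, the hypothetical sketch below maps the absolute measurement error to a risk in [0, 1], once as a simple step function at the medically acceptable error and once as a fuzzy, piecewise-linear ramp between two thresholds; the thresholds and the linear ramp are illustrative assumptions.

```python
def step_risk(error, tea):
    """Step risk function: full risk once the absolute measurement
    error exceeds the medically acceptable error `tea`, zero below."""
    return 1.0 if abs(error) > tea else 0.0

def fuzzy_risk(error, tea_low, tea_high):
    """Fuzzy (piecewise-linear) risk function: risk ramps from 0 at
    `tea_low` to 1 at `tea_high`, reflecting a graded rather than
    all-or-nothing clinical impact of the error."""
    e = abs(error)
    if e <= tea_low:
        return 0.0
    if e >= tea_high:
        return 1.0
    return (e - tea_low) / (tea_high - tea_low)
```

Either function can be combined with a measurement error density to yield a risk measure: the expected risk is the integral of the risk function weighted by that density, of which the partial-moment measures mentioned above are a special case.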
To optimize the QC planning process, reliability analysis of the analytical system and risk analysis of the measurement error are needed. Subsequently, the optimal QC sampling time intervals can be rationally estimated to maintain an acceptable residual risk with minimum QC-related costs. Since 2009, we have developed a theoretical framework and a symbolic computation algorithm for optimizing statistical quality control of an analytical process based on the reliability of the analytical system and the risk of analytical error [1] (see HCSL publications on statistical QC, reliability, and risk).
1. Hatjimihail AT. Estimation of the optimal statistical quality control sampling time intervals using a residual risk measure. PLoS ONE. 2009;4(6):e5770.