Notes
A Note on Reliability and Risk Based Optimization of Statistical Quality Control
Optimizing statistical quality control (QC) of an analytical process can be approached as a probabilistic risk assessment problem involving the reliability analysis of the analytical system and the estimation of the risk caused by measurement uncertainty [1]. The reliability analysis of an analytical system should include a quantitative fault tree analysis to identify the critical failure modes and estimate the probability density functions of the critical-failure times, as well as the probability density functions of measurement uncertainty and their dependencies. In a clinical laboratory setting, a critical failure of an analytical system can initiate a hazard when the total measurement uncertainty of a patient's result exceeds the medically acceptable measurement uncertainty, potentially leading to harmful medical decisions.
The risk of a critical failure is related to the probability of its occurrence and the duration of its persistence. A statistical QC procedure detects critical failures with a certain probability, and the residual risk can be considered the risk of the measurement process assuming the application of the QC procedure. Risk measures can be defined based on the partial moments of measurement uncertainty with reference to the medically acceptable measurement uncertainty. Subsequently, the risk before applying the QC procedure and the residual risk assuming its application can be estimated.
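For illustration only (the following is one common definition of a partial moment and is not a form prescribed here), an upper partial moment of order n of the measurement uncertainty u, taken with reference to the medically acceptable measurement uncertainty u_a and the probability density function f(u), could serve as such a risk measure:

```latex
% Illustrative risk measure: upper partial moment of order n of the measurement
% uncertainty u, with reference to the medically acceptable uncertainty u_a.
R_n(u_a) = \int_{u_a}^{\infty} (u - u_a)^{\,n} \, f(u) \, \mathrm{d}u
```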
QC procedures incur financial costs, including the cost of control materials, control measurements, and the repetition of measurements due to rejections. An operational approach to optimal QC sampling planning could minimize these costs while maintaining acceptable residual risk levels.
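Purely as a sketch, with illustrative symbols that are not taken from this note, the sampling planning problem can then be stated as a constrained optimization over the QC sampling time interval Δt:

```latex
% Illustrative formulation: choose the QC sampling time interval \Delta t that
% minimizes the QC-related cost rate C(\Delta t), subject to the residual risk
% R_{res}(\Delta t) remaining within an acceptable limit R_{acc}.
\min_{\Delta t > 0} \; C(\Delta t)
\quad \text{subject to} \quad
R_{\mathrm{res}}(\Delta t) \le R_{\mathrm{acc}}
```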
We have developed an algorithm that estimates the residual risk for any sampling time interval of a statistical QC procedure applied to analytical systems with an arbitrary number of critical failure modes, assuming any probability density function of critical-failure time and measurement uncertainty for each mode. Additionally, it estimates the optimal QC sampling time intervals that minimize a QC-related cost measure.
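The following minimal numerical sketch illustrates the idea only; it is not the published symbolic computation algorithm [2]. It assumes exponential critical-failure-time distributions, a fixed per-event detection probability for each failure mode, and a crude first-order residual-risk approximation; all names and parameter values (lam, p_det, qc_cost, risk_limit) are hypothetical.

```python
# Illustrative sketch only: residual risk and QC cost rate as functions of the
# QC sampling time interval, under simplifying assumptions (exponential
# critical-failure times, fixed per-event detection probabilities). This is
# not the published symbolic computation algorithm.
import numpy as np

def residual_risk(dt, lam, p_det):
    """Rough residual risk for a QC sampling time interval dt (hours)."""
    risk = 0.0
    for l, p in zip(lam, p_det):
        # Probability that a failure of this mode occurs within the interval.
        p_fail = 1.0 - np.exp(-l * dt)
        # A failure occurring within the interval is active for about half the
        # interval on average and escapes the next QC event with probability
        # (1 - p); persistence beyond that event is ignored in this toy model.
        risk += p_fail * 0.5 * (1.0 - p)
    return risk

def qc_cost_rate(dt, qc_cost):
    """QC-related cost per hour: one QC event (materials, measurements) per dt."""
    return qc_cost / dt

# Hypothetical parameters: two failure modes (rates per hour), detection
# probabilities of the QC rule, cost per QC event, acceptable residual risk.
lam = [1 / 500.0, 1 / 2000.0]
p_det = [0.90, 0.50]
qc_cost, risk_limit = 10.0, 0.002

# Grid search: since the cost rate decreases monotonically with the interval,
# the longest interval that still satisfies the residual-risk limit is optimal.
intervals = np.linspace(1.0, 48.0, 200)
feasible = [dt for dt in intervals if residual_risk(dt, lam, p_det) <= risk_limit]
if feasible:
    dt_opt = max(feasible)
    print(f"optimal interval ~ {dt_opt:.1f} h, "
          f"cost rate ~ {qc_cost_rate(dt_opt, qc_cost):.2f} per hour")
else:
    print("no interval within the grid meets the residual-risk limit")
```

In this toy model the optimum is simply the longest acceptable interval; the published algorithm instead admits arbitrary probability density functions of critical-failure time and measurement uncertainty for each failure mode.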
Our algorithm provides insight into the relationship among a QC procedure, the reliability of an analytical system, the risk of measurement uncertainty, and QC-related costs. It also demonstrates a method for the rational estimation of QC sampling time intervals for analytical systems with an arbitrary number of failure modes. Given the reliability analysis of an analytical system, a risk analysis of measurement uncertainty, and a statistical QC procedure, an optimal QC sampling time interval approach can maintain an acceptable residual risk while minimizing QC-related costs.
Quantitative fault tree analysis and the estimation of the critical-failure time probability density functions for modern analytical systems can be very complex. However, upper bounds can be derived using techniques that address uncertainty. Estimating the dependencies between the critical-failure time probability density functions and the respective probability density functions of measurement uncertainty is an even greater challenge.
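One standard source of such upper bounds, for the fault tree structure itself, is the union (rare-event) bound: for a coherent fault tree whose top event (the critical failure) is the union of its minimal cut sets C_1, ..., C_m, the top-event probability is bounded from above by the sum of the cut-set probabilities, and, with independent basic events, each cut-set probability is the product of its basic-event probabilities. This is offered only as an example of a bounding technique, not as the specific approach intended here.

```latex
% Union (rare-event) upper bound on the top-event probability of a coherent
% fault tree with minimal cut sets C_1, ..., C_m and independent basic events e.
P(\mathrm{top}) = P\!\left(\bigcup_{i=1}^{m} C_i\right) \le \sum_{i=1}^{m} P(C_i),
\qquad P(C_i) = \prod_{e \in C_i} P(e)
```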
The industry could undertake this large-scale effort, leading to the establishment of a database of reliability analysis data. This database could be continuously updated with failure data from analytical systems operated in the field by different users and in diverse environments. A significant commitment, with a focus on ensuring patient safety, would likely be necessary for such an endeavor to succeed.
For rigorous QC design and the estimation of optimal QC sampling time intervals, a risk analysis correlating the size of the measurement uncertainty with the associated risk is essential. This analysis should define the medically acceptable analytical uncertainty, the risk functions (even simple step or fuzzy functions), and the acceptable risk and residual risk measures. Risk analysis is a highly complex task that can be subjective or objective, quantitative or semi-quantitative, and should be carried out by medical professionals. As the potential for data analysis grows, evidence-based methods should increasingly be used to estimate appropriate risk functions.
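As simple illustrations of such risk functions (example forms only, not prescriptions of this note), a step function assigns no risk within the medically acceptable uncertainty u_a and full risk beyond it, while a fuzzy (sigmoid) function grades the transition with a steepness parameter k:

```latex
% Example risk functions of the measurement uncertainty u, relative to the
% medically acceptable uncertainty u_a.
r_{\mathrm{step}}(u) =
\begin{cases}
0, & |u| \le u_a \\
1, & |u| > u_a
\end{cases}
\qquad
r_{\mathrm{fuzzy}}(u) = \frac{1}{1 + e^{-k (|u| - u_a)}}, \quad k > 0
```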
Reliability analysis of the analytical system and risk analysis of the measurement uncertainty are needed to optimize the QC planning process. Subsequently, the optimal QC sampling time intervals can be rationally estimated to maintain an acceptable residual risk with minimum QC-related costs. Since 2009, we have developed a theoretical framework and a symbolic computation algorithm for optimizing the statistical quality control of an analytical process based on the reliability of the analytical system and the risk of measurement uncertainty [2].
Aristides T. Hatjimihail, M.D., Ph.D.,
ath@hcsl.com
Related Publications
1. Nichols JH, Altaie SS, Cooper G, Glavina P, Halim A-B, Hatjimihail AT, et al. Laboratory Quality Control Based on Risk Management. Approved Guideline. CLSI document EP23-A. Clinical and Laboratory Standards Institute; 2011:1-73.
2. Hatjimihail AT. Estimation of the optimal statistical quality control sampling time intervals using a residual risk measure. PLoS ONE. 2009;4(6):e5770.