Notes on HCSL Research

A Note on Design and Optimization of Statistical Quality Control

Statistical quality control (QC) procedures are essential for testing whether a process conforms to quality specifications (null hypothesis) or is out of control (alternative hypothesis). In this context, a type I error occurs when a true null hypothesis is rejected, resulting in a false rejection of the process run. The probability of a type I error is known as the probability of false rejection. A type II error occurs when a false null hypothesis is accepted, leading to a failure to detect significant changes in the probability density function of a process's quality characteristic. The probability of rejecting a false null hypothesis is equivalent to the probability of detecting nonconformity in the process.

The statistical QC procedure to be designed or optimized can be represented as:

Q = Q1(n1, X1) # Q2(n2, X2) # ... # Qq(nq, Xq)

Here, Qi(ni, Xi) denotes a statistical decision rule, ni is the size of the sample Si to which the rule is applied, and Xi is the vector of rule-specific parameters, including the decision limits. The symbol # denotes either the Boolean operator AND or the Boolean operator OR. When # denotes AND and n1 < n2 < ... < nq, that is, S1 ⊂ S2 ⊂ ... ⊂ Sq, the procedure Q is a q-sampling statistical QC procedure.

Each statistical decision rule is evaluated by calculating the respective statistic of the measured quality characteristic of the sample; the statistics used include a single value, the range, the mean, the standard deviation, the cumulative sum, the smoothed mean, and the smoothed standard deviation. The decision rule is true if the statistic falls outside the interval defined by the decision limits. The statistical QC procedure is then evaluated as a Boolean proposition; if it is true, the null hypothesis is rejected, the process is deemed out of control, and the run is rejected.
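
As an illustration, the following Python sketch evaluates a hypothetical two-rule procedure Q = Q1(2, X1) AND Q2(4, X2) on a sample of four standardized measurements; the rules, sample sizes, and decision limits are chosen for the example only and are not taken from the references.

import statistics

def rule_is_true(statistic_value, lower_limit, upper_limit):
    # A decision rule is true if its statistic falls outside the decision limits.
    return statistic_value < lower_limit or statistic_value > upper_limit

def qc_procedure(measurements):
    # Q1 (n1 = 2): the mean of the first two measurements against hypothetical mean limits.
    q1 = rule_is_true(statistics.mean(measurements[:2]), lower_limit=-2.0, upper_limit=2.0)
    # Q2 (n2 = 4): the range of all four measurements against a hypothetical range limit.
    q2 = rule_is_true(max(measurements) - min(measurements), lower_limit=0.0, upper_limit=4.0)
    # With # denoting AND, the run is rejected only if both rules are true.
    return q1 and q2

sample = [0.3, -1.7, 2.9, 0.8]  # hypothetical standardized QC measurements
print("reject run" if qc_procedure(sample) else "accept run")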

An optimum statistical QC procedure minimizes (or maximizes) a context-specific objective function, which depends on the probabilities of detecting nonconformity and false rejection. These probabilities rely on the parameters of the procedure Q and the probability density function of the measured quality characteristic of the process.

The probabilities of detecting nonconformity and false rejection can be estimated using simulation [1] or calculated using numerical methods [2].
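
For example, both probabilities of a simple single-rule procedure can be estimated by simulation, as in the following Python sketch; it assumes standardized Gaussian measurements, a hypothetical mean rule with decision limits ±1.5, and, purely for illustration, an out-of-control state shifted by two standard deviations.

import random
import statistics

def reject_run(sample):
    # A single hypothetical decision rule: reject if the sample mean
    # falls outside the decision limits -1.5 and +1.5.
    return not (-1.5 <= statistics.mean(sample) <= 1.5)

def estimate_rejection_probability(mean_shift, sample_size=4, runs=100_000):
    rejections = sum(
        reject_run([random.gauss(mean_shift, 1.0) for _ in range(sample_size)])
        for _ in range(runs)
    )
    return rejections / runs

# With no shift the process is in control, so the estimate approximates the
# probability of false rejection; with a shift it approximates the probability
# of detecting nonconformity.
print(estimate_rejection_probability(mean_shift=0.0))
print(estimate_rejection_probability(mean_shift=2.0))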

Generally, algebraic methods are unsuitable for optimizing statistical QC procedures. Enumerative methods can be cumbersome, especially with multi-rule procedures, due to the exponential increase in the parameter space.

Genetic algorithms (GAs) provide an attractive alternative, as they are robust search algorithms that can efficiently explore large parameter spaces, requiring only evaluations of the objective function rather than analytic knowledge of it. GAs are inspired by the molecular biology of the gene and the evolution of life. Their operators (crossover, mutation, and reproduction) are isomorphic with the synonymous biological processes. GAs have been employed to address a wide range of complex optimization problems. Furthermore, designing novel QC procedures is inherently more complex than optimizing predefined ones. Classifier systems and the genetic programming paradigm have demonstrated that GAs can be applied to tasks as intricate as program induction. Since 1993, we have successfully used GAs to optimize and to design novel statistical QC procedures [3].
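
The following Python sketch shows the idea on a deliberately simplified problem; it is not the method of reference [3]. A small GA tunes a single decision limit x of the hypothetical rule "reject if the absolute sample mean exceeds x" so as to maximize the estimated probability of detecting a two standard deviation shift while keeping the estimated probability of false rejection at or below 0.01.

import random
import statistics

def rejection_probability(limit, mean_shift, runs=1_000):
    # Monte Carlo estimate of the probability that a sample of four
    # measurements is rejected by the rule |mean| > limit.
    hits = sum(
        abs(statistics.mean([random.gauss(mean_shift, 1.0) for _ in range(4)])) > limit
        for _ in range(runs)
    )
    return hits / runs

def fitness(limit):
    # Detection probability, penalized to zero if the estimated probability
    # of false rejection exceeds the required 0.01.
    if rejection_probability(limit, mean_shift=0.0) > 0.01:
        return 0.0
    return rejection_probability(limit, mean_shift=2.0)

population = [random.uniform(0.5, 3.0) for _ in range(20)]  # candidate decision limits
for generation in range(20):
    parents = sorted(population, key=fitness, reverse=True)[:10]  # reproduction: keep the fittest
    children = []
    while len(children) < 10:
        a, b = random.sample(parents, 2)
        child = (a + b) / 2.0                 # crossover: blend two parents
        child += random.gauss(0.0, 0.05)      # mutation: small random perturbation
        children.append(child)
    population = parents + children

print(max(population, key=fitness))  # best decision limit found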

Artificial neural networks (NNs) are another class of adaptive computational algorithms, inspired by the structure and function of the brain. These algorithms excel in statistical data modeling and high-precision classification tasks. NNs consist of interconnected input, processing, and output nodes, where information flows from input to output through weighted, directed connections. During forward propagation, each node computes its output by applying an activation function to the sum of its weighted inputs and a bias term; this output is then propagated through the network. Backpropagation iteratively adjusts the weights and biases of the nodes to minimize an output-dependent loss function. Through repeated cycles of forward and backward propagation, NNs can be trained on appropriate datasets.
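
As a minimal illustration of these mechanics (not of any network used in our work), the following Python sketch trains a one-hidden-layer network with NumPy using a squared-error loss; the layer sizes, learning rate, and toy data are assumptions made only for the example.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))                             # toy inputs: 8 vectors of 4 features
y = (x.mean(axis=1, keepdims=True) > 0).astype(float)   # toy targets

W1, b1 = rng.normal(size=(4, 5)) * 0.5, np.zeros(5)     # hidden-layer weights and biases
W2, b2 = rng.normal(size=(5, 1)) * 0.5, np.zeros(1)     # output-node weights and bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(2000):
    # Forward propagation: each node applies an activation function to the
    # sum of its weighted inputs and a bias term.
    h = sigmoid(x @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    loss = np.mean((out - y) ** 2)

    # Backpropagation: gradients of the output-dependent loss are propagated
    # backwards and used to adjust the weights and biases iteratively.
    d_out = 2.0 * (out - y) / y.size * out * (1.0 - out)
    d_h = d_out @ W2.T * h * (1.0 - h)
    W2 -= 0.5 * (h.T @ d_out)
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (x.T @ d_h)
    b1 -= 0.5 * d_h.sum(axis=0)

print(loss)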

NNs have been extensively investigated for their utility in QC, yet their practical application requires at least five QC measurements, increasing associated costs. To address this limitation, we have explored using NNs with very small samples of QC measurements. Since 2022, we have successfully employed one-dimensional convolutional neural networks (1D-CNNs) on samples of two to four QC measurements to effectively enhance the detection of process nonconformity with quality specifications [4].
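
As a rough illustration only, a 1D-CNN for samples of four QC measurements could be sketched in PyTorch as follows; the architecture, the synthetic training data, and all hyperparameters are assumptions for the example and not the design of reference [4].

import torch
from torch import nn

# A tiny 1D-CNN: one convolutional layer over the four measurements,
# followed by a fully connected output node.
model = nn.Sequential(
    nn.Conv1d(in_channels=1, out_channels=8, kernel_size=2),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 3, 1),  # 3 positions remain after a kernel of size 2
)

# Synthetic training data: in-control samples from N(0, 1), out-of-control
# samples shifted by two standard deviations.
n = 1000
x = torch.cat([torch.randn(n, 1, 4), torch.randn(n, 1, 4) + 2.0])
y = torch.cat([torch.zeros(n, 1), torch.ones(n, 1)])

optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()
for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

# Estimated probability that a new sample of four measurements is out of control.
new_sample = torch.tensor([[[0.1, 2.3, 1.8, 2.6]]])
print(torch.sigmoid(model(new_sample)).item())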

Both GAs and NNs offer transformative potential for QC by reducing measurement costs and improving the efficiency of the QC process.

Aristeidis T. Chatzimichail, MD, PhD,
ath@hcsl.com

Rallou A. Chatzimichail, MEng, MSc, PhD,
rc@hcsl.com

Related Publications

1. Hatjimihail AT. A tool for the design and evaluation of alternative quality control procedures. Clin Chem. 1992;38:204-210.

2. Hatjimihail AT. Tool for Quality Control Design and Evaluation. Wolfram Demonstrations Project. Wolfram Research; 2010.

3. Hatjimihail AT. Genetic algorithms based design and optimization of statistical quality control procedures. Clin Chem. 1993;39:1972-1978.

4. Chatzimichail RA, Hatjimihail AT. Quality control using convolutional neural networks applied to samples of very small size. Stochastics and Quality Control. 2023;38(2):63-78.

Terms of Use

The material made freely available by Hellenic Complex Systems Laboratory is subject to its Terms of Use.