This curriculum spans the technical and organizational challenges of deploying statistical process control in complex, cross-functional environments, mirroring a multi-phase operational excellence initiative that involves process engineers, quality teams, and data analysts across manufacturing and regulated production settings.
Module 1: Defining Process Performance Metrics and Baselines
- Selecting primary versus secondary metrics based on operational ownership and data availability across departments.
- Establishing baseline performance using historical data while accounting for seasonal variation and known process changes.
- Resolving conflicts between operational teams on metric definitions, such as yield versus throughput in manufacturing lines.
- Implementing data validation rules to prevent inclusion of outlier events (e.g., equipment failure) in baseline calculations (see the sketch after this list).
- Aligning metric precision with measurement system capability, particularly when gage R&R results indicate marginal reliability.
- Determining the required sample size and data collection frequency to achieve statistically stable baselines without overburdening operations.
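As a concrete illustration of the validation and baseline points above, the minimal sketch below computes a baseline mean and a moving-range sigma estimate from fabricated daily yield data, with known equipment-failure days excluded by a validation mask. The data, failure days, and yield values are all invented for illustration; the d2 = 1.128 constant is the standard factor for a moving range of span 2.

```python
import numpy as np

# Fabricated daily yield data; a few days carry known equipment failures
rng = np.random.default_rng(0)
yield_pct = rng.normal(94.0, 1.2, size=120)
failure_days = [17, 58, 59]               # known special-cause events (assumed)
yield_pct[failure_days] -= 15.0           # failures depress yield sharply

mask = np.ones(yield_pct.size, dtype=bool)
mask[failure_days] = False                # validation rule: exclude flagged events

baseline_mean = yield_pct[mask].mean()
# Short-term sigma from the average moving range (d2 = 1.128 for span 2)
sigma_st = np.abs(np.diff(yield_pct[mask])).mean() / 1.128

print(f"baseline mean = {baseline_mean:.2f}%, short-term sigma = {sigma_st:.2f}")
```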
Module 2: Measurement System Analysis and Data Integrity
- Designing gage R&R studies that reflect actual production conditions, including multiple operators and shift patterns (a worked sketch follows this list).
- Addressing non-replicable measurements (e.g., destructive tests) by adapting ANOVA methods for nested or expanded studies.
- Classifying measurement errors as systematic versus random based on control chart patterns and correlation with process variables.
- Implementing calibration schedules that balance cost, regulatory requirements, and process sensitivity to drift.
- Handling attribute data in inspection processes through Kappa studies, especially when pass/fail decisions involve subjective judgment.
- Integrating MSA results into data governance policies to restrict use of unreliable data in performance dashboards.
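The sketch below works through a standard crossed gage R&R ANOVA (parts x operators with replicates) on simulated data, estimating variance components and the resulting %GRR. The effect sizes are assumptions chosen for illustration; a real study would use measured values.

```python
import numpy as np

# Simulated crossed study: p parts x o operators x n replicates
rng = np.random.default_rng(1)
p, o, n = 10, 3, 2
x = (50
     + rng.normal(0, 2.0, size=(p, 1, 1))      # part-to-part variation (assumed)
     + rng.normal(0, 0.3, size=(1, o, 1))      # operator bias (assumed)
     + rng.normal(0, 0.25, size=(p, o, n)))    # repeatability noise (assumed)

grand = x.mean()
pm, om, cell = x.mean(axis=(1, 2)), x.mean(axis=(0, 2)), x.mean(axis=2)
ss_p = o * n * ((pm - grand) ** 2).sum()
ss_o = p * n * ((om - grand) ** 2).sum()
ss_po = n * ((cell - pm[:, None] - om[None, :] + grand) ** 2).sum()
ss_e = ((x - cell[:, :, None]) ** 2).sum()

ms_p, ms_o = ss_p / (p - 1), ss_o / (o - 1)
ms_po, ms_e = ss_po / ((p - 1) * (o - 1)), ss_e / (p * o * (n - 1))

var_rpt = ms_e                                  # repeatability
var_po = max((ms_po - ms_e) / n, 0.0)           # operator x part interaction
var_o = max((ms_o - ms_po) / (p * n), 0.0)      # reproducibility (operator)
var_p = max((ms_p - ms_po) / (o * n), 0.0)      # part-to-part
grr = var_rpt + var_o + var_po
print(f"%GRR = {100 * np.sqrt(grr / (grr + var_p)):.1f}% of total variation")
```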
Module 3: Process Capability and Specification Limits
- Distinguishing between natural process limits and specification limits when customer requirements conflict with process stability.
- Calculating Pp/Ppk versus Cp/Cpk based on whether the process is in statistical control during the assessment period (illustrated in the sketch after this list).
- Negotiating revised specification limits with customers using capability data and risk-based justification.
- Handling unilateral tolerances in capability analysis, particularly in safety-critical or regulatory environments.
- Adjusting for non-normal data using transformations or non-parametric methods without masking underlying process issues.
- Documenting capability assumptions and limitations for audit purposes, especially in regulated industries like pharmaceuticals.
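A minimal sketch of the Cp/Cpk versus Pp/Ppk distinction, assuming fabricated fill-weight data and illustrative specification limits: within (short-term) sigma is estimated from the average moving range, overall (long-term) sigma from the sample standard deviation.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(10.02, 0.05, size=200)     # fabricated fill weights
lsl, usl = 9.85, 10.15                    # illustrative specification limits

sigma_overall = x.std(ddof=1)                       # long-term  -> Pp/Ppk
sigma_within = np.abs(np.diff(x)).mean() / 1.128    # short-term -> Cp/Cpk

def capability(mean, sigma):
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mean, mean - lsl) / (3 * sigma)  # nearer spec limit dominates
    return cp, cpk

cp, cpk = capability(x.mean(), sigma_within)
pp, ppk = capability(x.mean(), sigma_overall)
print(f"Cp={cp:.2f} Cpk={cpk:.2f}  Pp={pp:.2f} Ppk={ppk:.2f}")
# A unilateral (upper-only) tolerance reduces to Cpk = (usl - mean) / (3 * sigma)
```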
Module 4: Control Chart Selection and Implementation
- Choosing between X-bar/R, X-bar/S, I-MR, or attribute charts based on subgroup size and data type.
- Setting initial control limits from Phase I data and defining criteria for the transition to Phase II monitoring (see the sketch after this list).
- Responding to repeated out-of-control signals when root causes are operationally unavoidable (e.g., material batch changes).
- Managing false alarm rates by adjusting rules (e.g., Western Electric) based on process criticality and detection sensitivity.
- Integrating control charts into real-time SCADA or MES systems with automated alerting and escalation protocols.
- Handling missing data points due to equipment downtime or sensor failure without compromising chart validity.
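The following sketch sets I-chart limits from simulated Phase I data and then screens Phase II points against Western Electric rules 1 (a point beyond 3 sigma) and 2 (two of three consecutive points beyond 2 sigma on the same side). The data and the injected mean shift are fabricated.

```python
import numpy as np

rng = np.random.default_rng(3)
phase1 = rng.normal(25.0, 0.4, size=50)   # simulated stable Phase I data
phase2 = rng.normal(25.4, 0.4, size=30)   # Phase II data with an injected shift

center = phase1.mean()
sigma = np.abs(np.diff(phase1)).mean() / 1.128      # MR-based sigma (d2 = 1.128)
ucl, lcl = center + 3 * sigma, center - 3 * sigma

z = (phase2 - center) / sigma
rule1 = np.flatnonzero(np.abs(z) > 3)               # WE rule 1: beyond 3 sigma

# WE rule 2: two of three consecutive points beyond 2 sigma, same side
rule2 = [i for i in range(len(z))
         if (z[max(0, i - 2):i + 1] > 2).sum() >= 2
         or (z[max(0, i - 2):i + 1] < -2).sum() >= 2]

print("rule-1 signals at:", rule1, " rule-2 signals at:", rule2)
```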
Module 5: Root Cause Analysis Using Statistical Tools
- Selecting between ANOVA, regression, and DOE based on the number of suspected factors and feasibility of controlled experiments.
- Using multi-vari studies to isolate positional, cyclical, and temporal variation sources in high-volume processes.
- Interpreting interaction effects in factorial designs when process steps are interdependent or sequential.
- Applying logistic regression to model binary outcomes (e.g., pass/fail) when continuous response data are unavailable (see the sketch after this list).
- Validating root cause hypotheses with confirmation runs that account for process drift since initial analysis.
- Managing stakeholder resistance to statistical conclusions that contradict long-standing operational beliefs.
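As a hedged example of the logistic-regression bullet, the sketch below fits a pass/fail model with statsmodels on simulated data. The factor names (oven temperature, line speed) and effect sizes are assumptions chosen for illustration, not values from a real process.

```python
import numpy as np
import statsmodels.api as sm

# Simulated process data; factor names and effect sizes are assumptions
rng = np.random.default_rng(4)
temp = rng.normal(180, 5, size=300)       # oven temperature (hypothetical)
speed = rng.normal(1.2, 0.1, size=300)    # line speed (hypothetical)
log_odds = -0.15 * (temp - 180) + 3.0 * (speed - 1.2)
fail = (rng.random(300) < 1 / (1 + np.exp(-log_odds))).astype(float)

X = sm.add_constant(np.column_stack([temp, speed]))
model = sm.Logit(fail, X).fit(disp=0)
print(model.params)    # [const, temp, speed]; signs point to candidate causes
```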
Module 6: Design of Experiments for Process Optimization
- Defining factor ranges that are both statistically meaningful and operationally feasible within equipment constraints.
- Choosing between full factorial, fractional factorial, or response surface designs based on resource and time limitations.
- Blocking known sources of variation (e.g., shift, raw material lot) to isolate experimental effects.
- Handling hard-to-change factors by using split-plot designs and adjusting error term calculations.
- Optimizing multiple responses using desirability functions when trade-offs exist between quality and throughput (a sketch follows this list).
- Translating experimental results into standard operating procedures with clear control parameter settings.
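A minimal sketch of desirability-based multi-response optimization, assuming two hypothetical fitted models on coded factors: yield (larger is better) and cycle time (smaller is better) are combined through a geometric-mean desirability and maximized over a grid. The model coefficients and desirability bounds are invented.

```python
import numpy as np

# Hypothetical fitted models on coded factors x1, x2 (e.g., from a 2^2 design)
yield_hat = lambda x1, x2: 92 + 2.0 * x1 + 1.1 * x2 - 0.8 * x1 * x2   # %
cycle_hat = lambda x1, x2: 40 - 3.0 * x1 + 2.5 * x2                   # minutes

def d_larger(y, lo, hi):   # larger-is-better desirability on [lo, hi]
    return np.clip((y - lo) / (hi - lo), 0, 1)

def d_smaller(y, lo, hi):  # smaller-is-better desirability on [lo, hi]
    return np.clip((hi - y) / (hi - lo), 0, 1)

x1, x2 = np.meshgrid(np.linspace(-1, 1, 81), np.linspace(-1, 1, 81))
D = np.sqrt(d_larger(yield_hat(x1, x2), 90, 96)        # geometric mean of
            * d_smaller(cycle_hat(x1, x2), 35, 45))    # the two desirabilities
best = np.unravel_index(D.argmax(), D.shape)
print(f"best coded settings: x1={x1[best]:+.2f}, x2={x2[best]:+.2f}, D={D[best]:.2f}")
```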
Module 7: Sustaining Gains and Control Plan Development
- Assigning ownership of control chart monitoring and response actions to specific roles within shift operations.
- Integrating control plans into change management systems to assess impact of equipment, material, or personnel changes.
- Defining response plans for out-of-control conditions that escalate based on severity and frequency.
- Updating process capability and control limits after implemented improvements, with documented revalidation (see the sketch after this list).
- Using automated data collection systems to reduce manual entry errors in ongoing monitoring activities.
- Conducting periodic audits of control plan adherence and effectiveness during internal quality reviews.
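The sketch below shows one possible shape for a control-plan entry with assigned ownership, plus a revalidation step that refuses to update limits without a minimum run of post-improvement data. The dataclass fields and the 30-point threshold are illustrative assumptions, not a standard schema.

```python
from dataclasses import dataclass, replace
import numpy as np

@dataclass
class ControlPlanEntry:        # illustrative structure, not a standard schema
    characteristic: str
    owner: str                 # role accountable for monitoring and response
    center: float
    lcl: float
    ucl: float

def revalidate(entry, post_data, min_points=30):
    """Recompute I-chart limits only once enough post-change data exist."""
    if len(post_data) < min_points:
        raise ValueError("insufficient post-improvement data for revalidation")
    m = post_data.mean()
    sigma = np.abs(np.diff(post_data)).mean() / 1.128
    return replace(entry, center=m, lcl=m - 3 * sigma, ucl=m + 3 * sigma)

rng = np.random.default_rng(5)
entry = ControlPlanEntry("seal strength", "shift QA lead", 12.0, 10.5, 13.5)
print(revalidate(entry, rng.normal(12.6, 0.3, size=40)))
```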
Module 8: Advanced Topics in Non-Normal and Multivariate Processes
- Applying non-parametric control charts (e.g., run charts, CUSUM on ranks) when data transformation fails to normalize distributions.
- Using multivariate control charts (e.g., Hotelling's T²) to detect shifts in correlated process variables without inflating false alarm rates (see the sketch after this list).
- Interpreting contribution plots to identify root variables driving multivariate out-of-control signals.
- Modeling time-series data with autocorrelation using ARIMA-based control schemes instead of traditional Shewhart charts.
- Handling processes with multiple operating modes by implementing mode-specific control limits and baselines.
- Validating stability in low-volume or custom production environments using cumulative sum or Bayesian updating methods.
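Finally, a sketch of a Hotelling T² check on two correlated variables, using simulated Phase I data to estimate the mean vector and covariance, and the standard Phase II UCL for individual observations. The test point is chosen so each variable looks acceptable on its own univariate chart while the pair violates the correlation structure.

```python
import numpy as np
from scipy.stats import f

# Simulated Phase I data: two correlated variables (e.g., temp and pressure)
rng = np.random.default_rng(6)
phase1 = rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], size=100)

mu = phase1.mean(axis=0)
s_inv = np.linalg.inv(np.cov(phase1, rowvar=False))

def t2(x):
    d = x - mu
    return d @ s_inv @ d

# Phase II UCL for individual observations at alpha = 0.0027
p, m = 2, len(phase1)
ucl = p * (m + 1) * (m - 1) / (m * (m - p)) * f.ppf(1 - 0.0027, p, m - p)

# Each coordinate sits within +/- 3 sigma on its own, but the pair breaks
# the positive correlation, so T^2 should signal
x_new = np.array([1.5, -1.5])
print(f"T2 = {t2(x_new):.2f}, UCL = {ucl:.2f}")
```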