MSA 7A — Attributive MSA for Automated Test Systems

MSA 7A for Automated Test Systems (1 Automated Tester, 6 Runs)


MSA 7A is the variant of MSA 7 for automated test systems (automated testers). While MSA 7 examines agreement between multiple human inspectors, MSA 7A evaluates the repeatability and correctness of a single automated system that makes pass/fail decisions.


Overview

Purpose and Application

MSA 7A is used when an automated test system (automated tester) performs attributive assessments and its capability needs to be demonstrated. Typical use cases:

  • Camera-based test systems — Automated Optical Inspection (AOI)
  • Sorting systems — Automated pass/fail sorting
  • Inline test stations — Automated 100% inspection in manufacturing
  • Automated crack testing — Eddy current or ultrasonic test equipment
  • Robot-based testing — Automated surface inspection

Since an automated tester is not influenced by different operators, the analysis of comparability between inspectors is eliminated. Instead of 3 runs by each of 3 inspectors, 6 runs through the single system are used to test its repeatability.

Distinction from MSA 7

| Characteristic | MSA 7 | MSA 7A |
| --- | --- | --- |
| Application | Manual attributive inspection by persons | Automated inspection by automated tester |
| Inspectors | 3 inspectors (A, B, C) | 1 automated tester |
| Runs | 3 runs per inspector | 6 runs (A1–A6) |
| Data columns | 9 (A1–A3, B1–B3, C1–C3) + Reference | 6 (A1–A6) + Reference |
| Result categories | 8 (AA, BB, CC, PP, AR, BR, CR, PR) | 2 (AA, AR) |
| Comparability between inspectors | Is analyzed (PP, BB, CC) | Eliminated — only 1 tester |
| Focus | Repeatability + Comparability + Correctness | Repeatability + Correctness |

Typical Workflow

  1. Select parts (good, bad, and borderline parts)
  2. Establish reference assessment for each part
  3. Run all parts 6 times through the automated tester
  4. Enter assessments in my8data
  5. Start calculation
  6. Interpret results

Info: The 6 runs replace the 3 × 3 inspector runs of MSA 7. This increases the statistical validity of the repeatability analysis, even though only one "inspector" is involved.


Input

Configuration

The configuration of MSA 7A is designed for a single automated tester:

| Field | Description | Note |
| --- | --- | --- |
| Automated Tester | Designation of the automated test system | e.g., "AOI-Station 3" or "Sorting System LK-200" |
| Assessment Categories | The possible assessments | Typically binary: "OK" / "Not OK" (pass/fail) |
| Reference Assessment | Known correct assessment per part | Required for correctness analysis |

Info: The "Automated Tester" field replaces the three inspector fields of MSA 7. Enter the designation of the automated test system here.

Data Table

The data table has the following columns:

| Column | Description |
| --- | --- |
| Part | Designation or number of the test part |
| A1 | Assessment in 1st run |
| A2 | Assessment in 2nd run |
| A3 | Assessment in 3rd run |
| A4 | Assessment in 4th run |
| A5 | Assessment in 5th run |
| A6 | Assessment in 6th run |
| Reference | Reference assessment (known correct value) |

Each row corresponds to one part. For each run, enter the assessment made by the automated tester (e.g., 1 = OK, 0 = Not OK).

Tip: Use the dropdown selection in the cells to ensure consistent data entry. Alternatively, you can paste data from Excel using copy & paste.
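
For the examples in the Results section, it helps to picture this table in code. The sketch below is a minimal Python representation; the field names (`part`, `runs`, `reference`) and the 1/0 coding are assumptions for illustration, not my8data's internal format.

```python
# One row per part: the six assessments A1–A6 plus the reference
# (1 = OK, 0 = Not OK). Field names are illustrative only.
data = [
    {"part": "P-001", "runs": [1, 1, 1, 1, 1, 1], "reference": 1},  # clearly good
    {"part": "P-002", "runs": [0, 0, 0, 0, 0, 0], "reference": 0},  # clearly bad
    {"part": "P-003", "runs": [1, 0, 1, 1, 0, 1], "reference": 0},  # borderline part
]
```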

Recommendations for Part Selection

Part selection is critical for meaningful MSA 7A results:

  • Select parts that cover the entire assessment range — clearly good, clearly bad, and borderline parts.
  • Borderline parts are particularly important: they show how reliably the automated system operates at the decision boundary.
  • Avoid selecting only obviously good or bad parts.

Warning: Part selection without borderline cases leads to inflated agreement rates and falsely suggests better test capability than actually exists. This applies to automated testers just as much as to manual inspection.


Results

Result Categories

MSA 7A provides two result categories — significantly more compact than the eight categories of MSA 7:

| Category | Description | What is tested? |
| --- | --- | --- |
| AA — Repeatability | Agreement of the automated tester with itself across the 6 runs | Does the automated tester assess the same part the same way in each run? |
| AR — Correctness | Agreement of the automated tester with the reference assessment | Does the automated tester assess parts according to the reference standard? |
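
To make the two categories concrete, the sketch below computes percent agreement under one common attribute-MSA convention: a part counts as agreeing for AA only if all 6 runs match, and for AR only if all 6 runs match the reference. Whether my8data uses exactly this counting rule (rather than, say, pairwise comparisons) is an assumption here.

```python
def percent_agreement(rows):
    """AA/AR percent agreement over rows shaped like the `data` sketch above.

    AA: all 6 runs identical. AR: all 6 runs equal to the reference.
    """
    n = len(rows)
    aa = sum(1 for r in rows if len(set(r["runs"])) == 1)
    ar = sum(1 for r in rows if set(r["runs"]) == {r["reference"]})
    return 100 * aa / n, 100 * ar / n

rows = [
    {"part": "P-001", "runs": [1, 1, 1, 1, 1, 1], "reference": 1},
    {"part": "P-003", "runs": [1, 0, 1, 1, 0, 1], "reference": 0},
]
print(percent_agreement(rows))  # (50.0, 50.0)
```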

For each category, the following metrics are calculated:

| Metric | Description |
| --- | --- |
| Kappa (K) | Fleiss' Kappa value — measure of agreement (adjusted for chance) |
| Number Correct | Absolute number of correct agreements |
| Number Tested | Total number of comparisons tested |
| Percent Correct | Percentage agreement rate |
| CI Low / CI High | 95% confidence interval of the Kappa value |
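
The Kappa value follows the standard Fleiss formula, with the 6 runs playing the role of 6 "raters" per part. The sketch below implements that textbook formula; whether my8data applies exactly this variant, and how it derives the confidence interval, is not confirmed here.

```python
from collections import Counter

def fleiss_kappa(run_lists, categories=(0, 1)):
    """Fleiss' kappa for N parts, each assessed in n runs (n = 6 for MSA 7A)."""
    n = len(run_lists[0])                      # runs per part
    N = len(run_lists)                         # number of parts
    counts = [Counter(runs) for runs in run_lists]

    # Observed agreement: pairwise agreement within each part, averaged
    P_bar = sum((sum(c * c for c in cnt.values()) - n) / (n * (n - 1))
                for cnt in counts) / N

    # Expected agreement by chance, from the overall category proportions
    P_e = sum((sum(cnt[j] for cnt in counts) / (N * n)) ** 2
              for j in categories)

    return (P_bar - P_e) / (1 - P_e) if P_e < 1 else 1.0

print(round(fleiss_kappa([
    [1, 1, 1, 1, 1, 1],   # consistent good part
    [0, 0, 0, 0, 0, 0],   # consistent bad part
    [1, 0, 1, 1, 0, 1],   # inconsistent borderline part
]), 3))                    # 0.64
```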

Assessment Table

| Kappa Value | Assessment | Meaning |
| --- | --- | --- |
| ≥ 0.75 | Good (green) | Automated tester operates reliably |
| 0.40 – 0.74 | Conditionally acceptable (yellow) | Improvements recommended |
| < 0.40 | Not acceptable (red) | Automated tester is not capable |
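
Expressed in code, the banding is a simple threshold check. A minimal sketch mirroring the table above:

```python
def rate_kappa(kappa: float) -> str:
    """Map a Kappa value to the assessment bands from the table above."""
    if kappa >= 0.75:
        return "good (green)"
    if kappa >= 0.40:
        return "conditionally acceptable (yellow)"
    return "not acceptable (red)"

print(rate_kappa(0.64))  # conditionally acceptable (yellow)
```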

Interpretation of Results

AA — Repeatability:
- High Kappa value → The automated tester provides consistent results on repeat measurements.
- Low Kappa value → The automated tester assesses the same part differently in different runs. Possible causes: unstable sensors, fluctuating lighting, mechanical inaccuracies.

AR — Correctness:
- High Kappa value → The automated tester agrees with the reference assessment.
- Low Kappa value → The automated tester systematically deviates from the reference. Possible causes: incorrectly set thresholds, outdated test programs, contaminated sensors.

Tip: If repeatability (AA) is good but correctness (AR) is poor, the automated tester needs to be recalibrated or the decision threshold adjusted. If both values are poor, there is a fundamental problem with the measurement equipment.

Error Analysis

For poor results, systematically examine individual parts (a counting sketch follows the list):

  • False acceptance (missed defect): The automated tester assesses a bad part as good → Risk of defective shipment.
  • False rejection (false alarm): The automated tester assesses a good part as bad → Unnecessary scrap, higher costs.
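
As a minimal sketch, both error types can be tallied against the reference, assuming the row structure from the input section:

```python
def count_decision_errors(rows):
    """Count false acceptances and false rejections across all runs,
    for rows shaped like the `data` sketch in the input section."""
    false_accept = false_reject = 0
    for r in rows:
        for decision in r["runs"]:
            if r["reference"] == 0 and decision == 1:
                false_accept += 1   # bad part passed -> missed defect
            elif r["reference"] == 1 and decision == 0:
                false_reject += 1   # good part failed -> false alarm
    return false_accept, false_reject

# The borderline part from above: reference "Not OK", but passed in 4 of 6 runs
print(count_decision_errors(
    [{"part": "P-003", "runs": [1, 0, 1, 1, 0, 1], "reference": 0}]
))  # (4, 0)
```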

Important: If a result is "conditionally capable" or "not capable," consider the following corrective actions:
- Review and recalibrate test program and thresholds
- Clean sensors and check their condition
- Stabilize lighting and environmental conditions
- Update reference patterns
- Check mechanical positioning and clamping

Info: The results of MSA 7A can be saved, exported, and shared in my8data like all other analyses. Use the Excel export for documentation in your quality management system.

Try It Out Yourself

Create your own MSA, SPC and capability analyses with my8data — the web-based platform for quality management.

Register now