
Common 6 Sigma Implementation Errors to Avoid

In more than 20 years in manufacturing, I have seen numerous examples where the use of 6 Sigma techniques has paid dividends in controlling product quality and consistency. It is critical, however, to ensure that your measurement system accurately illuminates the variability you are trying to control in your process.

I once visited a baking facility where a key quality measure was the amount of topping applied to the product, expressed as a percentage of total finished product weight. So far so good: this is a common measure in the industry, and the plant’s QA team checked the incoming topping to gain confidence that its intensity of flavor was within a narrow and acceptable range.

Questions started cropping up when I observed the QA Tech measuring the application percentage. The Tech was instructed to take a certain number of units pulled prior to topping, and the same number of units after topping, and weigh them. The data was entered into a system that calculated the topping percentage.
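To make the arithmetic concrete, here is a minimal sketch of that calculation in Python. The plant’s actual system and formula were not shared with me, so this sketch assumes topping percentage is simply the weight gained by topping, expressed against the weight of the topped (finished) sample; the function name and weights are illustrative only.

    # Hypothetical sketch -- not the plant's actual system. Assumes topping % is the
    # weight gained by topping, expressed against the finished (topped) sample weight.
    def topping_percentage(pre_weights_g, post_weights_g):
        """Estimate topping % from equal-count samples pulled before and after topping."""
        pre_total = sum(pre_weights_g)    # total weight of the untopped sample, grams
        post_total = sum(post_weights_g)  # total weight of the topped sample, grams
        return 100.0 * (post_total - pre_total) / post_total

    # Illustrative example: ten ~50 g units before topping, ten ~55 g units after topping
    print(topping_percentage([50.0] * 10, [55.0] * 10))  # -> about 9.1%

Note that the two samples are different physical pieces, so the calculation only works if the average untopped piece weight is the same in both samples. That assumption is exactly where the trouble described below creeps in.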

I observed two issues, each of which illustrates a common failing in 6 Sigma implementation:

6 Sigma Common Failure #1: “I want our data to look good”

The QA Tech was well aware of the specification limits for pre-topping and post-topping weight. Techs tended to collect more product than the measurement required, and if a sample weighed too high or too low on the first weighing, they would swap units in the scale pan with some of the extras they had collected until the weight was in spec.

6 Sigma Common Failure #2: “Our measurement procedure didn’t consider all sources of variation”

The product was, by its very nature, irregular in size and shape because of the forming process. A couple more broken units in one sample or the other would artificially raise or lower the calculated topping percentage.
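To see how much damage piece-weight variation can do, consider the same hypothetical calculation with two broken, undersized units landing in the pre-topping sample. The weights below are made up for illustration, not plant data.

    # Same hypothetical calculation as above, repeated so this example stands alone.
    def topping_percentage(pre_weights_g, post_weights_g):
        return 100.0 * (sum(post_weights_g) - sum(pre_weights_g)) / sum(post_weights_g)

    intact_pre  = [50.0] * 10                 # ten intact untopped units
    broken_pre  = [50.0] * 8 + [30.0, 30.0]   # two broken, undersized units in the sample
    post_sample = [55.0] * 10                 # topped sample, all intact units

    print(topping_percentage(intact_pre, post_sample))  # -> about 9.1% ("true" application)
    print(topping_percentage(broken_pre, post_sample))  # -> about 16.4%, driven by piece size, not topping

Nothing about the topping applicator changed between those two numbers; the swing comes entirely from which pieces happened to be grabbed.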


The goal of a 6 Sigma program is to identify and react to variation before it reaches the customer; ensure your measurement system supports this goal.


The “I want our data to look good” failure usually stems from well-intentioned employees who have faith in the quality of the product they are making and want to show “success” in process measures; sometimes it stems from employees who simply don’t want to deal with the red tape involved in reporting an out-of-control or out-of-spec condition.

The “Our measurement procedure didn’t consider all sources of variation” failure likely stemmed from an operations manager with good intentions trying to keep the process simple for the benefit of his or her employees. No one stopped to consider that “pieces” might not be the best unit of measure; alternatives such as measuring the density of a sample ground up in a consistent manner, or tagging, topping, and weighing the same pieces that were weighed before topping, would have been far more robust. The result was data far less useful than it could have been, had the implementation team taken a few minutes to consult with, train, and coach the QA Techs on how to identify and remove the impact of piece-weight variation from the data.
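Here is a sketch of that paired alternative: tag specific pieces, weigh them before topping, then weigh the same pieces again after topping. Because each piece serves as its own baseline, irregular piece weight largely cancels out. Again, the function and weights are hypothetical, not the plant’s procedure.

    # Hypothetical paired measurement: the SAME tagged pieces are weighed before and after topping.
    def topping_percentage_paired(pre_weights_g, post_weights_g):
        gained = sum(post - pre for pre, post in zip(pre_weights_g, post_weights_g))
        return 100.0 * gained / sum(post_weights_g)

    tagged_pre  = [50.0, 48.5, 52.0, 30.0, 49.0]  # includes one badly broken piece
    tagged_post = [w / 0.90 for w in tagged_pre]  # pretend topping makes up ~10% of each finished piece

    print(topping_percentage_paired(tagged_pre, tagged_post))  # -> 10.0%, unaffected by the broken piece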

In both cases, the failure reflects an environment where the people taking and reacting to the measurement have not internalized the primary goal of a 6 Sigma implementation: to identify and react to poor quality and variability in the product before that product reaches the customer.

6 Sigma is a powerful tool. With proper forethought and guidance, even a limited, line- or item-targeted implementation can yield tremendous benefit. Need help? Contact us!

 

About the Author

Gunther Brinkman is an agile senior manager with extensive experience developing and continuously improving operational processes. Gunther brings with him documented success in turnaround and fast-growth environments for both publicly traded and private-equity-sponsored companies.