
An overview of theory and practice on process capability indices for quality assurance

Chien-Wei Wu a, W.L. Pearn b, Samuel Kotz c

a Department of Industrial Engineering and Systems Management, Feng Chia University, 100, Wenhwa Road, Seatwen, Taichung 40724, Taiwan
b Department of Industrial Engineering and Management, National Chiao Tung University, Taiwan
c School of Science and Engineering, George Washington University, Washington, DC, USA

Article info

Article history: Received 11 April 2007; Accepted 2 January 2009; Available online 10 December 2008

Keywords: Expected relative loss; Fraction non-conforming; Process capability indices; Process consistency; Process relative departure; Quality assurance

Abstract

Process capability indices (PCIs) Cp, Ca, Cpk, Cpm, and Cpmk have been developed in certain manufacturing industries as capability measures based on various criteria, including process consistency, process departure from a target, process yield, and process loss. It is noted in certain recent quality assurance and capability analysis works that the three indices Cpk, Cpm, and Cpmk provide the same lower bounds on the process yield. In this paper, we investigate the behavior of the actual process yield, in terms of the number of non-conformities (in ppm), for processes with fixed index values Cpk = Cpm = Cpmk possessing different degrees of process centering. We also extend Johnson's [1992. The relationship of Cpm to squared error loss. Journal of Quality Technology 24, 211–215] result formulating the relationship between the expected relative squared loss and PCIs, and carry out a comparison analysis among PCIs based on various criteria. The results illustrate some advantages of using the index Cpmk over the indices Cpk and Cpm in measuring process capability (yield and loss), since Cpmk always provides a better protection for the customers. Additionally, several extensions and applications to real-world problems are also discussed. The paper contains some material presented in the Kotz and Johnson [2002. Process capability indices—a review, 1992–2000. Journal of Quality Technology 34(1), 1–19] survey but from a different perspective. It also discusses the more recent developments during the years 2002–2006.

© 2009 Elsevier B.V. All rights reserved.

1. Introduction

Understanding the structure of a process and quantifying process performance are no doubt essential for successful quality improvement initiatives. Process capability analysis has become, in the course of some 20 years, an important and well-defined tool in applications of statistical process control (SPC) to a continuous improvement of quality and productivity. The relationship between the actual process performance and the specification limits (or tolerance) may be quantified using suitable process capability indices. Process capability indices (PCIs), in particular Cp, Ca, Cpk, Cpm and Cpmk, which provide numerical measures of whether or not a manufacturing process is capable of meeting a predetermined level of production tolerance, have received substantial attention in research activities as well as an increased usage in process assessments and purchasing decisions during the last two decades. By now (2006) there are several books (on different levels) cited in the references, which provide discussions of various PCIs. A number of authors have promoted the use of various process capability indices and examined (with a varying degree of completeness) their properties.


The first process capability index appearing in the engineering literature was presumably the simple "precision" index Cp (Juran, 1974; Sullivan, 1984, 1985; Kane, 1986). This index considers the overall process variability relative to the manufacturing tolerance as a measure of process precision (or product consistency).¹ Another index Ca, a function of the process mean and the specification limits, referred to as an "accuracy" index, is geared to measure the degree of process centering relative to the manufacturing tolerance (see, e.g., Pearn et al., 1998). This index is closely related to an earlier measure originally introduced in the Japanese literature (see Section 3). Formally:

C_p = \frac{USL - LSL}{6\sigma}, \qquad C_a = 1 - \frac{|\mu - m|}{d},   (1)

where μ is the process mean, σ is the process standard deviation, USL and LSL are the upper and the lower specification limits, d = (USL − LSL)/2 is the half specification width related to the manufacturing tolerance, and m = (USL + LSL)/2 is the midpoint between the upper and lower specification limits. Due to its simplicity, Cp cannot provide an assessment of process centering (targeting). The index Cpk, on the other hand, takes both the magnitude of process variance and the process departure from the midpoint m into consideration. It may be written as Cpk = Cp·Ca, a product of the two basic indices Cp and Ca. The standard definition is

C_{pk} = \min\left\{\frac{USL - \mu}{3\sigma}, \frac{\mu - LSL}{3\sigma}\right\} = \frac{d - |\mu - m|}{3\sigma}.   (1')

As alluded to above, the index Cpk was developed because Cp does not adequately deal with cases where the process mean μ is not centered (the mean does not equal the midpoint m). However, Cpk by itself still cannot provide an adequate measure of process centering. That is, a large value of Cpk does not provide information about the location of the mean in the tolerance interval [LSL, USL]. The Cp and Cpk indices are appropriate measures of progress for quality improvement situations when reduction of variability is the guiding factor and process yield is the primary measure of success. However, they are not related to the cost of failing to meet the customers' requirement of the target. A well-known pioneer in quality control, G. Taguchi, on the other hand, pays special attention to the loss in a product's worth when one of the product's characteristics deviates from the customers' ideal value T.

To take this factor into account, Hsiang and Taguchi (1985) introduced the index Cpm, which was also later proposed independently by Chan et al. (1988). The index is motivated by the idea of squared error loss, and this loss-based process capability index Cpm is sometimes called the Taguchi index. The index is geared towards measuring the ability of a process to cluster around the target, and reflects the degree of process targeting (centering). The index Cpm incorporates the variation of production items relative to the target value and the specification limits which are preset in a factory. The index Cpm is defined as

C_{pm} = \frac{USL - LSL}{6\sqrt{\sigma^2 + (\mu - T)^2}} = \frac{d}{3\tau},   (2)

where, as above, USL − LSL is the allowable tolerance range of the process, d = (USL − LSL)/2 is the half-interval length, and τ = [σ² + (μ − T)²]^{1/2} is a measure of the average product deviation from the target value T. The index Cpm can also be expressed as a function of the two basic indices Cp and Ca, explicitly Cpm = Cp/{1 + [3Cp(1 − Ca)]²}^{1/2}. The quantity τ² = E[(X − T)²] combines two variation components: (i) variation relative to the process mean (σ²) and (ii) deviation of the process mean from the target ((μ − T)²).

Pearn et al. (1992) proposed the process capability index Cpmk, which combines the features of the three earlier indices Cp, Cpk and Cpm. The index Cpmk (motivated by the structure of Cpk in (1')) alerts the user whenever the process variance increases and/or the process mean deviates from its target value. The index Cpmk has been referred to as the third-generation capability index, and is defined as

C_{pmk} = \min\left\{\frac{USL - \mu}{3\sqrt{\sigma^2 + (\mu - T)^2}}, \frac{\mu - LSL}{3\sqrt{\sigma^2 + (\mu - T)^2}}\right\} = \frac{d - |\mu - m|}{3\sqrt{\sigma^2 + (\mu - T)^2}}.   (3)

Comparing the pair of indices (Cpmk, Cpm), analogously to (Cpk, Cp), we obtain the relation Cpmk = Cpm·Ca = (Cpm·Cpk)/Cp. Consequently, Cpmk can be expressed as Cpmk = Cp·Ca/{1 + [3Cp(1 − Ca)]²}^{1/2} in terms of the "elementary indices". More recently, Vännman (1995) has proposed a superstructure Cp(u, v) = (d − u|μ − m|)/{3[σ² + v(μ − T)²]^{1/2}} of capability indices for processes based on the normal distribution, which includes Cp, Cpk, Cpm and Cpmk as particular cases. By setting u, v = 0 and 1, we obtain the four indices Cp(0, 0) = Cp, Cp(1, 0) = Cpk, Cp(0, 1) = Cpm, and Cp(1, 1) = Cpmk. These indices are effective tools for process capability analysis and quality assurance. Two basic process characteristics, the process location in relation to its target value and the process spread (i.e. the overall process variation), are combined to determine the formulas for these capability indices. The closer the process output is to the target value and the smaller the process spread, the more capable the process is. The process spread σ appears in the denominators of these indices, while closeness to the target enters through the numerator term u|μ − m| and/or the denominator term v(μ − T)². In other words, the larger the value of a PCI, the more capable is the process. In this paper, all derivations are carried out assuming that the process is in a state of statistical control and the characteristic under investigation arises from a normal distribution. Moreover, the target value is taken to be the midpoint of the specification limits, T = m (which is common in practical situations), unless stated otherwise.
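To make these definitions concrete, the following short Python sketch (ours, not part of the original paper; the parameter values are hypothetical) computes Cp, Ca, Cpk, Cpm and Cpmk from assumed process parameters and checks that Vännman's superstructure Cp(u, v) reproduces the four indices for (u, v) = (0, 0), (1, 0), (0, 1), (1, 1).

```python
import math

def pci(mu, sigma, lsl, usl, target=None):
    """Basic capability indices for a normal process; target defaults to the midpoint m."""
    d = (usl - lsl) / 2.0                      # half specification width
    m = (usl + lsl) / 2.0                      # midpoint of the specification interval
    t = m if target is None else target
    tau = math.sqrt(sigma**2 + (mu - t)**2)    # average deviation from the target
    return {"Cp": (usl - lsl) / (6.0 * sigma),
            "Ca": 1.0 - abs(mu - m) / d,
            "Cpk": (d - abs(mu - m)) / (3.0 * sigma),   # = Cp * Ca
            "Cpm": d / (3.0 * tau),
            "Cpmk": (d - abs(mu - m)) / (3.0 * tau)}    # = Cpm * Ca

def vannman_cp(u, v, mu, sigma, lsl, usl, target=None):
    """Vannman's superstructure Cp(u, v) = (d - u|mu - m|) / (3 sqrt(sigma^2 + v (mu - T)^2))."""
    d = (usl - lsl) / 2.0
    m = (usl + lsl) / 2.0
    t = m if target is None else target
    return (d - u * abs(mu - m)) / (3.0 * math.sqrt(sigma**2 + v * (mu - t)**2))

if __name__ == "__main__":
    mu, sigma, lsl, usl = 10.2, 0.5, 8.0, 12.0          # hypothetical process
    idx = pci(mu, sigma, lsl, usl)
    print(idx)
    for (u, v), name in {(0, 0): "Cp", (1, 0): "Cpk", (0, 1): "Cpm", (1, 1): "Cpmk"}.items():
        assert abs(vannman_cp(u, v, mu, sigma, lsl, usl) - idx[name]) < 1e-12
```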

During the last two decades many authors have promoted the use of various PCIs and examined them with a different degree of completeness.¹ These contributions include (in chronological order): Chan et al. (1988), Chou et al. (1990), Boyles (1991), Pearn et al. (1992), Kushler and Hurley (1992), Rodriguez (1992), Kotz and Johnson (1993), Vännman and Kotz (1995), Bothe (1997), Kotz and Lovelace (1998), Franklin (1999), Palmer and Tsui (1999), Wright (2000), Jessenberger and Weihs (2000), Pearn and Shu (2003), Vännman and Hubele (2003), Pearn and Wu (2005), Wu (2007) as well as references therein. Applications of these indices range over a great variety of situations and products, such as manufacturing of semiconductor products (Hoskins et al., 1988), head gimbals assembly for memory storage systems (Rado, 1989), jet-turbine engine components (Hubele et al., 1991), flip-chips and chip-on-board (Noguera and Nielsen, 1992), rubber edge (Pearn and Kotz, 1994), wood products (Lyth and Rabiej, 1995), aluminum electrolytic capacitors (Pearn and Chen, 1997a), audio-speaker drivers (Chen and Pearn, 1997), Pulux Surround (Pearn and Chang, 1998), liquid crystal display modules (Chen and Pearn, 2002), and couplers and wavelength division multiplexers (Wu and Pearn, 2005a).

Kotz and Johnson (2002) provided a compact survey (with interpretations and comments) of some 170 publications on PCIs during 1992–2000. Spiring et al. (2003) consolidated the research findings of process capability analysis and provided a bibliography of papers for the period 1990–2002. We shall attempt to describe, in an organized manner, the interconnection between the PCIs described above and (i) the process yield, (ii) the process loss, (iii) the process departure from target and (iv) the process variability. This may clarify the role of the index Cpmk, which is still the least understood by practitioners.

¹ We have not been able to discover any publications on Cp between …

2. The PCIs and process consistency

The general idea behind a PCI is to compare what the process "should do" with what the process is "actually doing" (Kotz and Lovelace, 1998). The specification interval should reflect the bounds on usability of the product, so that controlling the process will result in a high-quality product. What the process is "actually doing" refers primarily to process variability (the lower the variability, the lower is the proportion of items that falls outside the tolerance limits). Therefore, the quantity

Q = \frac{\text{process spread}}{\text{specification interval}} \times 100\% = \frac{6\sigma}{USL - LSL} \times 100\% = \frac{3\sigma}{d} \times 100\%   (4)

is used to quantify the percentage of the specification band utilized by the process. This quantity should be rendered as low as possible: the lower the value of the ratio, the lower the proportion of the specification interval utilized by the process data. For example, the value 1 indicates that the process variability (or spread) utilizes the whole width of the specification interval (tolerance band). For an on-target normally distributed process, this would result in about 0.27% (2700 parts per million (ppm)) non-conforming units. (Equivalently, the area outside the limits μ + 3σ and μ − 3σ of a normal N(μ, σ²) distribution is 0.27%.) A value of 0.75 means that the process spread uses 75% of the tolerance band. In fact, Q = 0.75 is equivalent to Cp = 1.33, which implies about 0.01% non-conforming units. Thus, it is desirable to have Q as small as possible. Indeed, large values of Q (particularly those greater than 1.00) would not be acceptable, since they indicate that the natural range of variation of the process does not fit within the tolerance band. The process spread relative to the specification interval (tolerance band) for the normal distribution is illustrated graphically in Fig. 1.

The ratio (4) can be rewritten as

Q = (1/C_p) \times 100\% = (C_a/C_{pk}) \times 100\% = \frac{1}{C_{pm}(1 + \xi^2)^{1/2}} \times 100\% = \frac{C_a}{C_{pmk}(1 + \xi^2)^{1/2}} \times 100\%,

where ξ = (μ − T)/σ. Therefore, when the process is centered (i.e. μ = T = m and hence ξ = 0, Ca = 1.0), all four indices provide the same bound on the process relative consistency (Q ≤ (1/C) × 100% with Cp = Cpk = Cpm = Cpmk = C).
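As a quick numerical check of these identities, the following sketch (ours; the parameter values are hypothetical) computes Q directly from 6σ/(USL − LSL) and through each of the four index-based expressions; all five numbers agree.

```python
import math

mu, sigma, lsl, usl = 10.4, 0.45, 8.0, 12.0    # hypothetical off-centre process
d, m = (usl - lsl) / 2.0, (usl + lsl) / 2.0
t = m                                          # T = m, as assumed in the paper
xi = (mu - t) / sigma

cp = (usl - lsl) / (6.0 * sigma)
ca = 1.0 - abs(mu - m) / d
cpk = cp * ca
cpm = cp / math.sqrt(1.0 + xi**2)
cpmk = cpm * ca

q_direct = 6.0 * sigma / (usl - lsl)           # Q as defined in (4), as a fraction
for q in (1.0 / cp,
          ca / cpk,
          1.0 / (cpm * math.sqrt(1.0 + xi**2)),
          ca / (cpmk * math.sqrt(1.0 + xi**2))):
    assert abs(q - q_direct) < 1e-12
print(f"Q = {100.0 * q_direct:.1f}% of the specification band")
```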

3. The PCIs and process relative departures

As mentioned before, neither the Cp nor the Cpk index alone is sufficient to tell us the whole story, since both indices have their individual drawbacks. By examining the relationship between them and using these indices as a pair, a substantial amount of information can be gleaned about the process (without the worrisome confounding of μ and σ in the Cpk case). The Cp and Cpk indices are specifically related by the process capability index k, i.e. Cpk = Cp(1 − k), which was one of the original Japanese indices. This index is defined as

k = \frac{|\mu - m|}{d} \qquad \left(\text{cf. } C_a = 1 - \frac{|\mu - m|}{d} \text{ in (1)}\right).   (4')

This index describes process capability in terms of the departure of the process mean μ from the center point m and provides a quantified measure of the degree of "off-centrality" of the process. For example, k = 0 indicates that the process is perfectly centered on target (μ = m); k = 1, on the other hand, shows that the process mean is located at one of the specification limits (far away from the center point). For 0 < k < 1, the process mean is located somewhere between the target and one of the specification limits. If k > 1, then μ falls outside the specification limits (i.e. μ > USL or μ < LSL), the process is severely off-centered, and immediate troubleshooting is necessary. The complement of k, Ca = 1 − k, measures the degree of process centering (the ability to cluster around the center), which was described before.

The index Cpm takes the proximity of the process mean μ to the target value T into account, thus being more sensitive to process "departures" than Cpk. Since the structure of Cpm is based on the average process loss relative to the manufacturing tolerance, it provides an upper bound on the average process loss. Furthermore, under the assumption that T = m, definition (2) can be rewritten as

C_{pm} = \frac{USL - LSL}{6\sqrt{\sigma^2 + (\mu - T)^2}} = \frac{C_p}{\sqrt{1 + \xi^2}},   (4a)

where now

\xi = \frac{\mu - m}{\sigma}.   (4b)

Chan et al. (1988) discussed this ratio and the sampling properties of an estimated Cpm; Boyles (1991) has provided an analysis of the index Cpm and its usefulness in measuring process centering. He observes that both Cpk and Cpm coincide with Cp when μ = m and decrease as μ departs from m. However, Cpk < 0 for μ > USL or μ < LSL, whereas Cpm of a process with |μ − m| > 0 is strictly bounded above by the Cp value of a process with σ = |μ − m| (see Eqs. (4a) and (4b)). Consequently,

C_{pm} < \frac{USL - LSL}{6|\mu - m|}.   (5)

The index Cpm approaches zero asymptotically as |μ − m| tends to infinity. On the other hand, while Cpk = (d − |μ − m|)/(3σ) (see Eq. (1')) increases without bound for fixed μ as σ tends to zero, Cpm is bounded above by Cpm < d/(3|μ − m|) (recall that d = (USL − LSL)/2). The right-hand side of this inequality is the limiting value of Cpm as σ tends to zero, and equals the Cp value of a process with σ = |μ − m|. It follows from (5) that a necessary condition for Cpm ≥ 1 is |μ − m| < d/3.

Kotz and Johnson (1999) examined the relations between Cp, Cpk and Cpm. Roughly speaking, for a fixed k value, the value of Cpm is greater than Cpk for small values of Cp, but is less than Cpk for larger values of Cp. In fact,

\frac{C_{pk}}{C_{pm}} = (1 - k)\sqrt{1 + \left(\frac{\mu - m}{\sigma}\right)^2} = (1 - k)\sqrt{1 + 9C_p^2 k^2}.

Thus, the relation Cpk > (<, =) Cpm holds according to whether (1 − k)(1 + 9Cp²k²)^{1/2} > (<, =) 1, i.e. according to whether Cp > (<, =) (1/3)(1 − k)^{-1}[(2 − k)/k]^{1/2}. Furthermore, the same authors (2002) noted that

\frac{C_{pk}}{C_{pm}} = \left(1 - \frac{1}{3C_p}\left|\frac{\mu - m}{\sigma}\right|\right)\sqrt{1 + \left(\frac{\mu - m}{\sigma}\right)^2} = \left(1 - \frac{|\xi|}{3C_p}\right)\sqrt{1 + \xi^2} < \left(1 - \frac{|\xi|}{3C_p}\right)\left(1 + \frac{1}{2}\xi^2\right),

using the bound \sqrt{1 + \xi^2} \le 1 + \xi^2/2. Hence the relation Cpk < Cpm is certainly valid if

\frac{1}{3C_p}|\xi| > \frac{1}{2}\xi^2, \quad \text{or, equivalently, when } k < \frac{2}{9C_p^2}.

3.1. In defense of the index Cpmk

Now the index Cpmk = (d − |μ − m|)/{3[σ² + (μ − T)²]^{1/2}} is constructed by combining the yield-based index Cpk and the loss-based index Cpm, thus taking into account both the process yield (meeting the manufacturing specifications) and the process loss (variation from the target). When the process mean μ departs from the target value T, the reduction of the value of Cpmk is more substantial than those of Cp, Cpk, and Cpm. Hence, the index Cpmk responds to the departure of the process mean μ from the target value T faster than the other three basic indices Cp, Cpk, and Cpm, while also being sensitive to changes in the process variation. We note that a process meeting the capability requirement "Cpk ≥ C" may not meet the requirement "Cpm ≥ C" and vice versa. The discrepancy between these two indices is due to the fact that the Cpk index measures primarily the process yield, while Cpm focuses to a large extent mainly on the process loss. However, if the process meets the capability requirement "Cpmk ≥ C", then a fortiori "Cpk ≥ C" and "Cpm ≥ C" are fulfilled (since Cpmk ≤ Cpk and Cpmk ≤ Cpm). In fact, the definition of Cpmk given by (3) can be rewritten as Cpmk = Cpk/{1 + [(μ − T)/σ]²}^{1/2} or Cpmk = (1 − |μ − m|/d)·Cpm (= (1 − k)·Cpm). The index Cpmk is smaller than Cpk for a process associated with a certain percentage of non-conforming product; however, one should not choose this index if process yield is the main interest. Cpmk (and usually Cpm) is much more sensitive than the other capability indices to the deviation of the process mean from m. In fact, when μ is equal to m, Cpmk = Cpk. If the mean of the process moves away from m, then Cpmk decreases more rapidly than Cpk does (although both are 0 when μ equals one of the specification limits). Conversely, when μ is brought closer to m, Cpmk increases much faster than Cpk does. Cpmk attains its maximum value when the process is centered. Viewing Cpmk as a mixture of Cpk and Cpm, Cpmk behaves "more like Cpm" if σ² is small, and "more like Cpk" if σ² is large (Jessenberger and Weihs, 2000).

In addition to the above advantages, Cpmk reveals more information about the location of the process mean μ. Given a Cpk index of 1.0, all we can say about μ is that it is somewhere between the LSL and the USL, i.e., m − d < μ < m + d or k < 1, where as above d equals (USL − LSL)/2 (cf. Eqs. (1) and (1')). As far as the Cpm index is concerned, it can be shown (Bothe, 1997) that the distance between μ and m must be less than d/(3Cpm). Therefore, given a Cpm index of 1.0, we know that m − d/3 < μ < m + d/3 or k < 1/3. This is a narrower interval than the one obtained for Cpk equal to 1.0. For the Cpmk index, it can be shown that the distance between μ and m is less than d/(3Cpmk + 1). Consequently, for a Cpmk index of 1.0, it follows that m − d/4 < μ < m + d/4 or k < 1/4, a narrower interval than the one obtained for Cpm = 1. Ranking the three indices (Cpk, Cpm, Cpmk) from the most sensitive to the least sensitive with respect to departures of the process mean from the target value, we thus have: (i) Cpmk, (ii) Cpm, and (iii) Cpk (see also Pearn and Kotz, 1994).
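The sensitivity ranking can be illustrated numerically. In the sketch below (ours; the specification limits and σ are hypothetical), the mean is moved away from the target and the three indices are printed; Cpmk falls fastest, Cpm next, and Cpk slowest.

```python
import math

lsl, usl, sigma = 8.0, 12.0, 0.5
d, m = (usl - lsl) / 2.0, (usl + lsl) / 2.0    # d = 2, target T = m = 10

print("  mu    Cpk    Cpm    Cpmk")
for shift in (0.0, 0.25, 0.5, 0.75, 1.0):
    mu = m + shift
    tau = math.sqrt(sigma**2 + (mu - m)**2)
    cpk = (d - abs(mu - m)) / (3.0 * sigma)
    cpm = d / (3.0 * tau)
    cpmk = (d - abs(mu - m)) / (3.0 * tau)
    print(f"{mu:5.2f}  {cpk:5.3f}  {cpm:5.3f}  {cpmk:5.3f}")
    # Cpmk <= Cpm and Cpmk <= Cpk always hold
    assert cpmk <= cpm + 1e-12 and cpmk <= cpk + 1e-12
```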

4. The PCIs and process yield

Process yield has for some time been the most common and standard criterion used in the manufacturing industries for judging process performance. Units are inspected according to specification limits placed on certain key product characteristics and are sorted into two categories: passed (conforming) or rejected (non-conforming). In the early days, prior to the mid-eighties of the 20th century, the fraction non-conforming for manufacturing processes was usually calculated by counting the number of non-conforming items in samples of 25 or 30 and then extrapolating the results. With the rapid advancement of manufacturing technology, suppliers began to require their products to be of high quality, involving a very low fraction of non-conformities. The fraction of non-conformities is usually less than 0.01%, and often measured in ppm. The traditional methods of figuring out the fraction non-conforming are no longer applicable, since any sample of a "reasonable size" may very likely contain no defective product items. Hence an alternative method of measuring the fraction of non-conforming is to use the capability indices discussed in Section 3, all of which are functions of an item's specification limits (the tolerance range) and the variation of the process producing the item.

If the proportion of conforming items is the primary concern, the most natural measure is the proportion itself, referred to as the yield, which is defined as

\text{Yield} = \int_{LSL}^{USL} dF(x),

where F(x) is the cumulative distribution function of the measured random characteristic X, and USL and LSL are, as before, the upper and the lower specification limits, respectively. The use of the yield as a quality measure implies that each rejected unit costs the producer an additional amount (to scrap or repair) while each passed unit involves no additional cost.

It is often assumed (not always correctly) that the product characteristic X follows the normal distribution N(μ, σ²). In this case the fraction of non-conforming (%NC) may be expressed as

\%NC = 1 - P(LSL \le X \le USL) = P(X < LSL) + P(X > USL) = \Phi\left(\frac{LSL - \mu}{\sigma}\right) + 1 - \Phi\left(\frac{USL - \mu}{\sigma}\right),

where Φ(·) is the cumulative distribution function (c.d.f.) of the standard normal distribution N(0, 1) (see Fig. 1). Since USL = m + d and LSL = m − d, we have

\%NC = \Phi\left(\frac{m - d - \mu}{\sigma}\right) + 1 - \Phi\left(\frac{m + d - \mu}{\sigma}\right) = \Phi\left(-\frac{d + (\mu - m)}{\sigma}\right) + \Phi\left(-\frac{d - (\mu - m)}{\sigma}\right).

Therefore,

P(NC) = \Phi\left(-\frac{1 + \delta}{\gamma}\right) + \Phi\left(-\frac{1 - \delta}{\gamma}\right)

(we identify here proportion with probability), where δ = (μ − m)/d and γ = σ/d. Furthermore, since %NC is an even function of δ, %NC may be rewritten as

\%NC = \Phi\left(-\frac{1 + |\delta|}{\gamma}\right) + \Phi\left(-\frac{1 - |\delta|}{\gamma}\right).   (6)

Noting that Ca = 1 − |δ| and Cp = 1/(3γ) (Eq. (1)), the above expression for %NC can also be expressed as

\%NC = \Phi[-3C_p C_a] + \Phi[-3C_p(2 - C_a)].
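The expression above is easy to evaluate numerically. The following sketch (ours; it assumes SciPy for the standard normal c.d.f.) computes %NC from Cp and Ca; for a perfectly centred process with Cp = 1 it returns about 2700 ppm, as quoted earlier.

```python
from scipy.stats import norm

def nonconforming_ppm(cp, ca):
    """%NC = Phi(-3 Cp Ca) + Phi(-3 Cp (2 - Ca)), expressed in parts per million."""
    frac = norm.cdf(-3.0 * cp * ca) + norm.cdf(-3.0 * cp * (2.0 - ca))
    return 1e6 * frac

print(nonconforming_ppm(1.0, 1.0))    # ~2699.8 ppm for an on-target process with Cp = 1
print(nonconforming_ppm(1.0, 0.9))    # more non-conformities for the same Cp when the mean is off-centre
```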

4.1. Yield assurance based on Cpk

The index Cpk has been regarded as a yield-based index. It provides bounds on the process yield, 2Φ(3Cpk) − 1 ≤ Yield < Φ(3Cpk), for a normally distributed process (Boyles, 1991). For a Cpk at the level of 1, one would expect that not more than 2700 ppm fall outside the specification limits (fraction of defectives). At a Cpk level of 1.33, the defect rate drops to 66 ppm. To achieve less than a 0.544 ppm defect rate, a Cpk level of 1.67 is required. At a Cpk level of 2.0, the likelihood of a defective part drops to a minuscule 2 parts per billion (ppb). Note the drastic decrease in the fraction of defectives as Cpk increases from 1 to 1 1/3, say.
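These defect rates follow directly from the upper bound 2Φ(−3Cpk). A minimal check (ours; it assumes SciPy):

```python
from scipy.stats import norm

# Upper bound on the fraction non-conforming, 2*Phi(-3*Cpk), expressed in ppm
for cpk in (1.00, 1.33, 1.67, 2.00):
    ppm = 2.0 * norm.cdf(-3.0 * cpk) * 1e6
    print(f"Cpk = {cpk:.2f}  ->  at most {ppm:.3f} ppm non-conforming")
```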

This bound may be established by noting that for a process with fixed Cpk the number of non-conformities (product items falling outside the specification interval [LSL, USL]) is bounded, but the actual number of non-conformities will vary depending upon the location of the process mean and the magnitude of the process variation. First we rewrite the definition of the index Cpk in terms of the standardized parameters δ = (μ − m)/d and γ = σ/d:

C_{pk} = \frac{d - |\mu - m|}{3\sigma} = \frac{1 - |(\mu - m)/d|}{3(\sigma/d)} = \frac{1 - |\delta|}{3\gamma} =
\begin{cases}
\dfrac{1 + \delta}{3\gamma}, & LSL \le \mu \le m,\\[4pt]
\dfrac{1 - \delta}{3\gamma}, & m < \mu \le USL.
\end{cases}   (7)

For a positive Cpk, Cpk > 0 (a natural situation), the exact expected fraction of non-conforming %NC can be expressed as a function of Cpk and Ca, or equivalently of Cpk and Cp, as follows:

\%NC = \Phi[-3C_{pk}] + \Phi[-3C_{pk}(2 - C_a)/C_a]   (8)

and

\%NC = \Phi[-3C_{pk}] + \Phi[-3(2C_p - C_{pk})].   (9)

It follows from (8) that when the process mean μ is located within the specification limits, i.e. 0 < Ca ≤ 1 or Cpk > 0, we have the bounds on %NC with Φ(−3Cpk) < %NC ≤ 2Φ(−3Cpk), since the standard normal c.d.f. Φ(·) is an increasing function. That is equivalent to the bounds on the process yield 2Φ(3Cpk) − 1 ≤ Yield < Φ(3Cpk).

For Ca = 1.0, the process is perfectly centered (μ = m). For Ca = 0, the process mean is at one of the specification limits (μ = USL or μ = LSL) (cf. the beginning of Section 3). For processes with fixed Cpk, the number of non-conformities attains its maximum for a centered process (Ca = 1.0), and %NC is reduced when the process mean departs from the center (namely, Ca decreases). Fig. 2 displays the surface plot of the actual number of non-conformities (in ppm) for 0.8 ≤ Cpk ≤ 1.5 and 0 < Ca ≤ 1. Fig. 3 plots the actual number of non-conformities (in ppm) for Cpk = 1.0, 1.1, 1.2 and 1.3, with 0 < Ca ≤ 1. Fig. 4 displays the surface plot of the actual number of non-conformities (in ppm) for 0.8 ≤ Cpk ≤ 1.5 and 0.8 ≤ Cp ≤ 1.5. Fig. 5 plots the actual number of non-conformities (in ppm) for Cpk = 1.0, 1.1, 1.2, and 1.3, with 0.8 ≤ Cp ≤ 1.5. Note that for Cpk ≥ 1.3, the curves in Figs. 3 and 5 are almost indistinguishable.

Fig. 2. Surface plot of NC for 0.8 ≤ Cpk ≤ 1.5 and 0 ≤ Ca ≤ 1.
Fig. 3. NC plots for Cpk = 1.0(0.1)1.3 with 0 < Ca ≤ 1 (from top to bottom).
Fig. 4. Surface plot of NC for 0.8 ≤ Cpk ≤ 1.5 and 0.8 ≤ Cp ≤ 1.5.
Fig. 5. NC plots for Cpk = 1.0(0.1)1.3 with 0.8 ≤ Cp ≤ 1.5 (from top to bottom).

4.2. Yield assurance based on Cpm

In the case when T = m, the definition of the Cpm index (2) can be rewritten as a function of the standardized parameters δ and γ as follows:

C_{pm} = \frac{d}{3\sqrt{\sigma^2 + (\mu - T)^2}} = \frac{1}{3\sqrt{\gamma^2 + \delta^2}}.

Hence,

\gamma^2 = \frac{1}{(3C_{pm})^2} - \delta^2 = \left(\frac{1}{3C_{pm}} + \delta\right)\left(\frac{1}{3C_{pm}} - \delta\right),

or, equivalently,

\gamma = \sqrt{\left(\frac{1}{3C_{pm}} + |\delta|\right)\left(\frac{1}{3C_{pm}} - |\delta|\right)},

which holds for 0 ≤ |δ| ≤ 1/(3Cpm), i.e., for 1 − 1/(3Cpm) ≤ Ca ≤ 1. We thus obtain the following explicit relationship between the exact expected proportion of NC and the indices Cpm and Ca, valid for 1 − 1/(3Cpm) ≤ Ca ≤ 1:

\%NC = \Phi\left[-\frac{2 - C_a}{\sqrt{1/(3C_{pm})^2 - (1 - C_a)^2}}\right] + \Phi\left[-\frac{C_a}{\sqrt{1/(3C_{pm})^2 - (1 - C_a)^2}}\right].   (10)

Relation (10) shows that for a perfectly centered process (i.e. Ca = 1), the fraction of non-conforming has an upper bound with %NC ≤ 2Φ(−3Cpm). Fig. 6 displays the surface plot of the actual number of non-conformities (in ppm) for 0.7 ≤ Cpm ≤ 1.3 and 0.75 ≤ Ca ≤ 1. Fig. 7 plots the actual number of non-conformities (in ppm) for Cpm = 1.0, 1.1, 1.2, and 1.3 (from top to bottom in the plot), with 0.6 ≤ Ca ≤ 1. We note that for Cpm > 1.3, the curves become close to each other.

4.3. Yield assurance based on Cpmk

Using a similar technique for deriving the formula of the exact expected proportion of non-conforming, and the relation Cpmk = Cpm·Ca, we obtain the following exact expected proportion of non-conforming in terms of Cpmk and Ca:

\%NC = \Phi\left[-\frac{2 - C_a}{\sqrt{(C_a/(3C_{pmk}))^2 - (1 - C_a)^2}}\right] + \Phi\left[-\frac{C_a}{\sqrt{(C_a/(3C_{pmk}))^2 - (1 - C_a)^2}}\right].   (11)

Similarly, for Ca = 1, the fraction of non-conforming has an upper bound with %NC ≤ 2Φ(−3Cpmk). Fig. 8 displays the surface plot of the actual number of non-conformities (in ppm) for 0.5 ≤ Cpmk ≤ 1.3 and 0.8 ≤ Ca ≤ 1. Fig. 9 plots the actual number of non-conformities (in ppm) for Cpmk = 1.0, 1.1, 1.2, and 1.3 (from top to bottom), with 0.75 ≤ Ca ≤ 1. We note that for Cpmk > 1.3, the corresponding curves are almost indistinguishable. For the reader's convenience we summarize in Table 1 the formulas for %NC in the various cases discussed above.

Fig. 6. Surface plot of NC for 0.7 ≤ Cpm ≤ 1.3 and 0.75 ≤ Ca ≤ 1.
Fig. 7. NC for Cpm = 1.0(0.1)1.3, with 0.6 ≤ Ca ≤ 1 (from top to bottom in the plot).
Fig. 8. Surface plot of NC for 0.5 ≤ Cpmk ≤ 1.3 and 0.8 ≤ Ca ≤ 1.
Fig. 9. NC for Cpmk = 1.0(0.1)1.3, with 0.75 ≤ Ca ≤ 1 (from top to bottom in the plot).

4.4. Yield comparison among PCIs

For a normally distributed process, the Cpk index provides a lower bound on the process yield, Yield ≥ 2Φ(3Cpk) − 1, or %NC ≤ 2Φ(−3Cpk), for LSL ≤ μ ≤ USL. Furthermore, based on the Cpm index, Ruczinski (1996) obtained a lower bound on the process yield as Yield ≥ 2Φ(3Cpm) − 1, or %NC ≤ 2Φ(−3Cpm), for Cpm > √3/3. Such a bound, however, had not been analytically justified for quality assurance purposes based on the Cpmk index. It was not clear, therefore, whether Cpmk is related to the process yield, since the relationship between Cpmk and the process yield (or proportion of non-conforming) was not available. Recently, Pearn and Lin (2005) provided a mathematical derivation of the upper bound formula for Cpmk on process yield, in terms of the number of non-conformities (in ppm), as

0 \le \%NC \le 2\Phi(-3C_{pmk}) \quad \text{for } C_{pmk} \ge \sqrt{2}/3.

Based on the yield analysis among the capability indices Cpk, Cpm, and Cpmk, the result illustrates that the three indices provide the same lower bounds on process yield for normally distributed processes, that is, Yield ≥ 2Φ(3Cpk) − 1 = 2Φ(3Cpm) − 1. For example, if it is given that Cpk = 1.00, we have information on the process yield only through the upper bound %NC ≤ 2699.796 ppm and no information on Ca. However, if Cpm = 1.00, we have information on the process yield through the upper bound %NC ≤ 2699.796 ppm and the process centering measure 0.667 ≤ Ca ≤ 1.00. Finally, for Cpmk = 1.00 we have the same upper bound on process yield, %NC ≤ 2699.796 ppm, and a narrower process centering measure 0.750 ≤ Ca ≤ 1.00. Figs. 10 and 11 plot the actual number of non-conformities (in ppm) for Cpk = Cpm = Cpmk = 1.00 and 1.50, with the bound 0 < Ca ≤ 1 for Cpk, the bound 1 − 1/(3Cpm) ≤ Ca ≤ 1 for Cpm, and the bound 1 − 1/(1 + 3Cpmk) ≤ Ca ≤ 1 for Cpmk. These results indicate an advantage of using the index Cpmk over the indices Cpk and Cpm when (together) measuring the process yield. Indeed, Cpmk provides a better protection for the customers in terms of the quality yield of the products. Table 1 displays the bounds on %NC and Ca for Cpk = Cpm = Cpmk = C, respectively (for C = 1.00, 1.33, 1.50, 1.67 and 2.00).

C Cpk Cpm Cpmk

Bounds on NC (ppm) Bounds on Ca Bounds on NC (ppm) Bounds on Ca Bounds on NC (ppm) Bounds on Ca

1.00 2699.796 0pCap1 2699.796 0.667pCap1 2699.796 0.750pCap1

1.33 66.334 0pCap1 66.334 0.750pCap1 66.334 0.800pCap1

1.50 6.795 0pCap1 6.795 0.778pCap1 6.795 0.818pCap1

1.67 0.554 0pCap1 0.554 0.800pCap1 0.554 0.833pCap1

2.00 0.002 0pCap1 0.002 0.833pCap1 0.002 0.857pCap1
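The entries of Table 1 can be reproduced (up to rounding of C) from the common bound 2Φ(−3C) together with the Ca ranges 1 − 1/(3Cpm) ≤ Ca ≤ 1 and 1 − 1/(1 + 3Cpmk) ≤ Ca ≤ 1. A short sketch (ours; it assumes SciPy):

```python
from scipy.stats import norm

print("  C     NC bound (ppm)   Ca range (Cpm)      Ca range (Cpmk)")
for c in (1.00, 1.33, 1.50, 1.67, 2.00):
    ppm = 2.0 * norm.cdf(-3.0 * c) * 1e6        # common bound on %NC for Cpk = Cpm = Cpmk = C
    ca_cpm = 1.0 - 1.0 / (3.0 * c)              # centering implied by Cpm = C
    ca_cpmk = 1.0 - 1.0 / (1.0 + 3.0 * c)       # narrower centering implied by Cpmk = C
    print(f"{c:5.2f}   {ppm:12.3f}     {ca_cpm:.3f} <= Ca <= 1     {ca_cpmk:.3f} <= Ca <= 1")
```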

Fig. 10. The actual number of NC curves for Cpk = 1.0 (top), Cpm = 1.0 (dash), and Cpmk = 1.0 (bottom) for various allowed values of Ca.
Fig. 11. The actual number of NC curves for Cpk = 1.5 (top), Cpm = 1.5 (dash), and Cpmk = 1.5 (bottom) for various allowed values of Ca.

In certain manufacturing industries, reducing the fraction of non-conformities or the expected proportion of non-conforming items is of primary concern and a guiding principle for quality improvement. In such cases, keeping the process centered (on-target) may not be a good strategy for maintaining adequate process capability, since the number of non-conformities reaches its maximum when the process is centered (i.e. Ca = 1.0) and %NC reduces when the process mean departs from the center (i.e. Ca decreases) for a fixed Cpk, Cpm or Cpmk (see Figs. 10 and 11). For other manufacturing industries, a reduction of the deviation from the target value serves as the guiding principle (e.g. Taguchi's quality philosophy, or certain modern quality improvement theories). In such cases, the efforts should not be focused entirely on reducing the fraction of non-conformities. Here keeping a process centered (on-target) would be considered satisfactory. Note that if μ happens to be far away from the target T (the corresponding Ca is small), then the process would not be viewed as capable even if σ is so small that %NC is small. (Recall the expressions for Ca and %NC.)

5. The PCIs and the expected relative loss

A disadvantage of a yield measure is that it does not distinguish at all among the products which fall inside the specification limits. Customers, however, do observe unit-to-unit differences in a characteristic, especially if the variance is large and/or the mean is offset from the target. With the increased importance of clustering around the target (rather than merely conforming to specification limits) and the utilization of loss functions, an alternative approach to PCIs is being developed. Instead of numbers or fractions of non-conforming items, various economic/production costs (or losses) offer opportunities for developing an improved assessment, monitoring, and comparison of process capability. Hsiang and Taguchi (1985) presented an approach to quality improvement in which reduction of the deviation from the target value serves as the guiding principle. According to this approach, any measured value x of a product characteristic X in general results in a loss to the consumer.

Proceeding along these lines, it was observed that the loss for each lot is often not necessarily the same, even though the lots have the same fraction of defectives. Hence, to implement Taguchi's loss criterion, the loss caused by the deviation from the target value is expressed as a quadratic function of the difference between the actual value and the target value, so as to distinguish between products by increasing the penalty as the departure from the target increases. The squared loss function for the product characteristic X in the symmetric case can be expressed as

\text{Loss}(X) = w(X - T)^2,

where, as above, T denotes the target value of X and w is a positive constant. (The choice of a quadratic function may be viewed as somewhat arbitrary and is no doubt influenced by over three hundred years of tradition of using mean square errors, etc., in statistical applications, augmented by certain optimality properties.) This implies that the loss is zero when the process outcome is on target and is positive for any deviation from the target. The expected loss can be evaluated as

E[\text{Loss}(X)] = w\int_{-\infty}^{\infty}(x - T)^2\,dF(x) = w[(\mu - T)^2 + \sigma^2],

where F(·) is the underlying c.d.f. of X.

A disadvantage of the expected loss lies in the difficulty of setting a standard for the proposed index, since it increases from zero to infinity. To overcome this drawback, Johnson (1992) defined the worth of the product, W(X), which can be expressed as a function of X with

W(X) = W_T - w(X - T)^2,

where W_T denotes the worth of the product when X is precisely on target. Defining Δ to be the distance of X from the target T at which the worth of the product is zero, we obtain 0 = W_T − wΔ², or Δ² = W_T/w. If the loss at the specification limits (either USL or LSL) is A₀ and the distance from the specification limits to the target T is d, then A₀ = Loss(USL) = Loss(LSL) = Loss(T − d) = wd². From the above we have A₀/W_T = d²/Δ², and the expected relative squared loss, say Le, can be written as

L_e = \frac{E[\text{Loss}(X)]}{W_T} = \frac{E[(X - T)^2]}{\Delta^2} = \frac{E[(X - T)^2]}{d^2}\left(\frac{A_0}{W_T}\right),   (12)

which provides a unitless measure of process performance in terms of the loss value of the product for industrial applications. The distributional and statistical properties of estimators of the loss index Le have been investigated in Johnson (1992) and Pearn et al. (2004a).

5.1. Loss assurance based on Cpk

Below we shall extend (without details) the derivation of the relationship between the expected relative squared loss and Cpk by rewriting the index Cpk in the form

C_{pk} = \frac{2dC_a}{6\sqrt{[\sigma^2 + (\mu - T)^2] - (\mu - T)^2}} = \frac{1}{3}\sqrt{\frac{A_0 C_a^2}{W_T L_e - A_0(1 - C_a)^2}}.   (13)

Consequently, the expected relative squared loss based on the Cpk index, denoted by LCpk, can be expressed as

L_{C_{pk}} = \frac{A_0}{W_T}(1 - C_a)^2 + \frac{1}{9}\frac{A_0 C_a^2}{W_T C_{pk}^2}.   (14)

By taking the derivative of LCpk with respect to Ca, we arrive at the following equation:

\frac{\partial L_{C_{pk}}}{\partial C_a} = \frac{2A_0}{W_T}\left[\left(\frac{9C_{pk}^2 + 1}{9C_{pk}^2}\right)C_a - 1\right].

Thus, the expected relative loss based on Cpk, LCpk, has the minimum value (A₀/W_T)/(1 + 9C²pk), attained when Ca = 9C²pk/(1 + 9C²pk). If Ca > 9C²pk/(1 + 9C²pk), then LCpk increases as Ca increases; conversely, LCpk increases as Ca decreases if Ca < 9C²pk/(1 + 9C²pk). For example, at a level of Cpk = 1.00, LCpk has the minimum loss A₀/(10W_T) at Ca = 0.9. Figs. 12 and 13 display the surface plot and the contour plot of the loss LCpk in terms of A₀/W_T for 0.5 ≤ Cpk ≤ 2.0 and 0 < Ca ≤ 1, respectively.

Fig. 12. The surface plot of LCpk for 0.5 ≤ Cpk ≤ 2.0 and 0 ≤ Ca ≤ 1.
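A quick numerical check of this minimizer (ours): for Cpk = 1 and A₀/W_T = 1, scanning LCpk over Ca locates the minimum near Ca = 0.9 with value 0.1, i.e. A₀/(10W_T).

```python
def loss_cpk(cpk, ca, a0_over_wt=1.0):
    """L_Cpk = (A0/WT) * (1 - Ca)^2 + (1/9) * (A0/WT) * Ca^2 / Cpk^2, from Eq. (14)."""
    return a0_over_wt * (1.0 - ca) ** 2 + a0_over_wt * ca ** 2 / (9.0 * cpk ** 2)

cpk = 1.0
grid = [i / 1000.0 for i in range(1001)]                  # Ca in [0, 1]
ca_star = min(grid, key=lambda ca: loss_cpk(cpk, ca))
print(ca_star, loss_cpk(cpk, ca_star))                    # ~0.9 and ~0.1 = A0/(10 WT)
assert abs(ca_star - 9.0 * cpk**2 / (1.0 + 9.0 * cpk**2)) < 1e-3
```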

5.2. Loss assurance based on Cpm

From the definition of the expected relative squared loss Le in (12), the relationship between Le and the Cpm index can easily be derived as

C_{pm} = \frac{USL - LSL}{6\sqrt{E[(X - T)^2]}} = \frac{1}{3}\sqrt{\frac{A_0}{W_T L_e}}.   (15)

(Recall that A₀ = Loss(LSL); see also the introduction to Section 5.)

Thus, the expected relative squared loss based on the Cpm index, LCpm, can be rewritten as

L_{C_{pm}} = \frac{1}{9}\frac{A_0}{W_T C_{pm}^2} \quad (\text{cf. (14)}).   (16)

From the expression for LCpm given by (16), we note that Cpm has the property of being a so-called larger-the-better index. Thus, small values of Cpm may be due to a high expected loss, resulting in a poorer process capability. In addition, LCpm is a constant, A₀/(9W_T C²pm), for all values of Ca. Figs. 14 and 15 display the surface and the contour plots of the expected relative loss LCpm for 0.5 ≤ Cpm ≤ 2.0 and 0 < Ca ≤ 1, respectively.

5.3. Loss assurance based on Cpmk

We shall briefly comment on the abilities of the index Cpmk to provide a loss assurance. The relationship between the expected relative squared loss Le and Cpmk can be expressed directly as follows:

C_{pmk} = C_{pm}C_a = \frac{1}{3}\sqrt{\frac{A_0 C_a^2}{W_T L_e}} \quad (\text{cf. (15)}).   (17)

Hence the expected relative squared loss based on Cpmk, LCpmk, can be rewritten as

L_{C_{pmk}} = \frac{1}{9}\frac{A_0 C_a^2}{W_T C_{pmk}^2}.   (18)

Expression (18) shows that LCpmk increases as Ca increases and reaches its maximum at Ca = 1.0. For example, at a level of Cpmk = 1.00, LCpmk has the maximum loss A₀/(9W_T), obtained at Ca = 1.0. Figs. 16 and 17 display the surface and the contour plots of the expected relative quadratic loss LCpmk for 0.5 ≤ Cpmk ≤ 2.0 and 0 < Ca ≤ 1, respectively.

5.4. Loss comparison among PCIs

We shall now compare the expected relative squared losses of the PCIs given in (14), (16) and (18). The following features of Le for e = Cpk, Cpm, and Cpmk are worth noting: (i) LCpm remains a constant, A₀/(9W_T C²pm), for all values of Ca; (ii) LCpk attains a minimum value when Ca = 9C²pk/(1 + 9C²pk); and (iii) LCpmk increases as Ca increases and reaches its maximum value at Ca = 1.0.

Fig. 14. The surface plot of LCpm for 0.5 ≤ Cpm ≤ 2.0 and 0 ≤ Ca ≤ 1.
Fig. 15. The contour plot of LCpm for 0.5 ≤ Cpm ≤ 2.0 and 0 ≤ Ca ≤ 1.

Suppose now that the four capability index values Cp, Cpk, Cpm and Cpmk are all set equal to C (a positive constant). Then the expected relative squared losses of the three indices (Cpk, Cpm and Cpmk) have the same value, A₀/(9W_T C²), when the process is centered (i.e. Ca = 1.0). Moreover, under specified capability index values Cpk = Cpm = Cpmk = C, the ratios of the expected relative squared losses for these PCIs satisfy:

\frac{L_{C_{pmk}}}{L_{C_{pm}}} = C_a^2,   (19)

\frac{L_{C_{pk}}}{L_{C_{pmk}}} = 1 + 9C^2(1 - C_a)^2/C_a^2,   (20)

\frac{L_{C_{pk}}}{L_{C_{pm}}} = C_a^2 + 9C^2(1 - C_a)^2.   (21)

(Note that these three ratios become 1 for Ca = 1.)

For the ratio of the expected relative squared losses between Cpmk and Cpm (Eq. (19)), one concludes that LCpmk ≤ LCpm, since the value of Ca is between 0 and 1 for LSL ≤ μ ≤ USL. For the ratio of the expected relative squared losses between Cpk and Cpmk (Eq. (20)), since LCpk/LCpmk > 1 in all cases except Ca = 1, we obtain that LCpmk ≤ LCpk for 0 < Ca ≤ 1. Finally, expression (21) shows that the ratio of the expected relative squared losses between Cpk and Cpm satisfies LCpm ≤ LCpk for Ca ≤ (9C² − 1)/(9C² + 1) and LCpm > LCpk for Ca > (9C² − 1)/(9C² + 1), C being the common value.

These results illustrate the advantage of using the index Cpmk over the indices Cpk and Cpm when measuring squared process loss, since Cpmk indeed provides better protection to the customers in terms of the quality loss of the products. Figs. 18 and 19 plot the expected relative losses with Cpk = Cpm = Cpmk = 1.00 and 1.50 for 0 < Ca ≤ 1.
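The comparison is easy to reproduce numerically. In the sketch below (ours), the three losses are evaluated for a common index value C = 1.0 over a range of Ca; the output shows LCpmk ≤ LCpm and LCpmk ≤ LCpk everywhere, with equality at Ca = 1.

```python
def losses(c, ca, a0_over_wt=1.0):
    """Expected relative squared losses (14), (16), (18) for Cpk = Cpm = Cpmk = C."""
    l_cpk = a0_over_wt * ((1.0 - ca) ** 2 + ca ** 2 / (9.0 * c ** 2))
    l_cpm = a0_over_wt / (9.0 * c ** 2)
    l_cpmk = a0_over_wt * ca ** 2 / (9.0 * c ** 2)
    return l_cpk, l_cpm, l_cpmk

c = 1.0
for ca in (0.6, 0.7, 0.8, 0.9, 1.0):
    l_cpk, l_cpm, l_cpmk = losses(c, ca)
    assert l_cpmk <= l_cpm + 1e-12 and l_cpmk <= l_cpk + 1e-12
    print(f"Ca = {ca:.1f}:  L_Cpk = {l_cpk:.4f}  L_Cpm = {l_cpm:.4f}  L_Cpmk = {l_cpmk:.4f}")
```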

Fig. 17. The contour plot of LCpmk for 0.5 ≤ Cpmk ≤ 2.0 and 0 ≤ Ca ≤ 1.
Fig. 18. The expected loss function curves for Cpk = 1.0 (top), Cpm = 1.0 (dash), and Cpmk = 1.0 (bottom) with various Ca.
Fig. 19. The expected loss function curves for Cpk = 1.5 (top), Cpm = 1.5 (dash), and Cpmk = 1.5 (bottom) with various Ca.

The use of various loss functions in quality assurance settings is becoming more widespread as the Taguchi approach becomes more prominent. Spiring (1993), Sun et al. (1996), and Spiring and Yeung (1998) have developed a class of loss functions that provide practitioners with a wide range of choices that can be used in depicting loss due to departures from the process target. Research efforts relating PCIs to loss properties would appear to offer opportunities that could potentially reduce (but not yet eliminate!) practitioners', managers', and researchers' concerns and discrepancies in the area of process capability. As was already alluded to above, theoretical statisticians and economists have for many years used the squared error loss function when making decisions or evaluating decision rules. English and Taylor (1993) investigated the loss imparted to society by examining the expected losses arising in non-normal populations. Gupta and Kotz (1997) attempted to relate the relative loss to a modified Cpm index, which they refer to as Cpq. However, for the most part there has been little research effort devoted to the area of loss and loss functions along the lines of assessing process capability. This may be due in part to criticisms of quadratic/squared error losses. Criticism of the quadratic loss function is widespread in the literature and includes statistical decision analysts (Box and Tiao, 1992; Berger, 1985) and quality assurance practitioners and researchers (Tribus and Szonyi, 1989; Leon and Wu, 1992), for the reasons of its failure to provide a quantifiable maximum loss (i.e., unbounded loss) and because the sizes of losses are severe for extreme deviations from the target. Pearn et al. (1992) and other researchers also pointed out that the squared error loss function is very often chosen due to the simplicity of the mathematical derivations rather than for its success in depicting actual process losses.

6. Extensions and applications

The theory and methodology of PCIs have been successfully applied to real world problems including cases with asymmetric tolerances, data collected as multiple subsamples, tool wear, gauge measurement error, supplier selection, multi-process product, multiple quality characteristics and so on.

6.1. Extensions to asymmetric tolerances

A process is said to have a symmetric tolerance if the target value T is set at the mid-point of the specification interval [LSL, USL], i.e. T = (USL + LSL)/2. In the manufacturing industry, cases with asymmetric tolerances (T ≠ m) often occur. From the customer's point of view, asymmetric tolerances indicate that deviations from the target are less tolerable in one direction than in the other (see, e.g., Boyles, 1994; Vännman, 1997; Wu and Tang, 1998). Nevertheless, asymmetric tolerances can also arise in situations where the tolerances are symmetric to start with, but the process distribution is skewed, following a non-normal distribution. To deal with this situation the data are usually transformed to achieve approximate normality. A prominent example of this approach is Chou et al. (1998), who used the well-known Johnson (1949) curves to transform non-normal process data. Other research focused on cases with asymmetric tolerances includes Choi and Owen (1990), Boyles (1994), Vännman (1997), Chen (1998), Pearn et al. (1999a, b), Chen et al. (1999), and more recently Jessenberger and Weihs (2000), Pearn et al. (2006) and Chang and Wu (2008).

Several generalizations of Cpk, including C*pk and C'pk, have been proposed to handle processes with asymmetric tolerances (see Kane, 1986; Franklin and Wasserman, 1992; Kushler and Hurley, 1992 for details). Unfortunately, these generalizations understate or overstate the process capability in many cases, depending on the position of μ relative to T. To remedy the situation, Pearn and Chen (1998) proposed the index C''pk, another generalization of Cpk, for processes with asymmetric tolerances. The motivation for the new index C''pk is based on the general criteria stipulated by Boyles (1994), Choi and Owen (1990) and Pearn et al. (1992) when analyzing and comparing the existing capability indices dealing with (a) process yield; (b) process centering; (c) other process characteristics. The generalization C''pk (the Pearn–Chen index) is formally defined as

C''_{pk} = \frac{d^{*} - A^{*}}{3\sigma},

where A* = max{d*(μ − T)/D_u, d*(T − μ)/D_l}, d* = min{D_u, D_l}, D_u = USL − T and D_l = T − LSL. Note that d*(μ − T)/D_u = [min{D_u, D_l}](μ − T)/(USL − T). Obviously, whenever T = m (a symmetric tolerance, so that D_u = D_l = d* = d), A* = |μ − m| and C''pk reduces to the index Cpk. The index C''pk attains its maximal value at μ = T, regardless of whether the preset specification tolerances are symmetric or not. For processes with asymmetric tolerances, the corresponding loss function is also asymmetric (with respect to T). The index C''pk takes into account the asymmetry of the loss function. Thus, given two processes E and F with μ_E > T and μ_F < T, satisfying (μ_E − T)/D_u = (T − μ_F)/D_l (i.e., processes E and F have equal "departure ratios"), the C''pk values for processes E and F are the same provided σ_E = σ_F. In addition, the index C''pk decreases when the mean μ shifts away from the target T in either direction. Actually, C''pk decreases faster when μ shifts away from T towards the closer specification limit than towards the farther specification limit. This is an advantage, since the index responds faster to a shift towards "the wrong side" of T than to a shift towards the middle of the specification interval. Pearn and Chen (1998) also provide a thorough comparison among the three indices Cpk, C'pk and C''pk. The estimation of the index C''pk, the PDF and CDF of its estimator, and a decision-making procedure based on C''pk can be found in Pearn and Chen (1998) and Pearn et al. (1999b).
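A minimal sketch of the Pearn–Chen index as defined above (ours; the parameter values are hypothetical). With a symmetric tolerance it coincides with Cpk; with the asymmetric tolerance below it peaks at μ = T and drops faster when the mean moves towards the closer specification limit.

```python
def cpk_double_prime(mu, sigma, lsl, usl, target):
    """Pearn-Chen index C''pk = (d* - A*) / (3 sigma) for asymmetric tolerances."""
    du, dl = usl - target, target - lsl
    d_star = min(du, dl)
    a_star = max(d_star * (mu - target) / du, d_star * (target - mu) / dl)
    return (d_star - a_star) / (3.0 * sigma)

# Asymmetric tolerance: T is closer to the upper specification limit
lsl, usl, target, sigma = 8.0, 12.0, 11.0, 0.3
for mu in (10.4, 10.7, 11.0, 11.3):
    print(mu, round(cpk_double_prime(mu, sigma, lsl, usl, target), 3))
```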

For cases with asymmetric tolerances, generalizations of Cpm and Cpmk can be developed along similar lines. Chen et al. (1999) and Pearn et al. (1999a) considered extensions of Cpm and Cpmk to handle processes with asymmetric tolerances. Under the normal distribution assumption, the explicit forms of the PDF and the CDF of the estimated indices C''pm and C''pmk with asymmetric tolerances are derived.

6.2. Extensions to multiple subsamples

The results obtained so far regarding the statistical properties of the estimated capability indices were based on a single sample. However, a common practice in process control is to estimate the PCIs using past "in-control" data from subsamples, especially when a daily-based or weekly-based production control plan is implemented for monitoring process stability. To use estimators based on several small subsamples and then interpret the results as if they were based on a single sample may generate incorrect conclusions, and vice versa. In order to use the past in-control data from subsamples to support decisions regarding process capability, the distribution of the estimated capability index based on multiple subsamples should be considered.

When using subsamples, Kirmani et al. (1991) investigated the distribution of estimators of Cp based on the sample standard deviations of the subsamples. Li et al. (1990) investigated the distribution of estimators of Cp and Cpk based on the ranges of the subsamples. Vännman and Hubele (2003) considered the indices in the superstructure class defined by Cp(u, v) and derived the distribution of the estimators of Cp(u, v) in the case when the estimators of the process parameters μ and σ are based on subsamples. Consider the case when the characteristic of the process is normally distributed and we have h subsamples, where the sample size of the ith subsample is n_i. For each i, i = 1, 2, ..., h, let x_{ij}, j = 1, 2, ..., n_i, be a random sample from a normal distribution with mean μ and variance σ², measuring the characteristic under consideration. Assume that the process is monitored using an X̄-chart together with an S-chart. For each subsample let x̄_i and s²_i denote the sample mean and the sample variance, respectively, of the ith sample, and let N denote the total number of observations, namely,

\bar{x}_i = \frac{1}{n_i}\sum_{j=1}^{n_i} x_{ij}, \qquad s_i^2 = \frac{1}{n_i - 1}\sum_{j=1}^{n_i}(x_{ij} - \bar{x}_i)^2, \qquad N = \sum_{i=1}^{h} n_i.

Let N_1 = \sum_{i=1}^{h}(n_i - 1) = N - h. When all the subsamples are of the same size n, N = hn and N_1 = h(n − 1). As estimators of μ and σ², we use the overall sample mean and the pooled sample variance, respectively. These are unbiased estimators, i.e.

\hat{\mu} = \bar{\bar{x}} = \frac{1}{N}\sum_{i=1}^{h} n_i\bar{x}_i, \qquad \hat{\sigma}^2 = s_p^2 = \frac{1}{N_1}\sum_{i=1}^{h}(n_i - 1)s_i^2.

For the Cpk index, the natural estimator based on multiple samples can be expressed as

\hat{C}_{Mpk} = \min\left\{\frac{USL - \bar{\bar{x}}}{3s_p}, \frac{\bar{\bar{x}} - LSL}{3s_p}\right\} = \frac{d - |\bar{\bar{x}} - m|}{3s_p}.

Using the techniques available for cases with a single sample, the CDF of Ĉ_Mpk can be derived. Consequently, the critical values, lower confidence bounds, and manufacturing capability calculations can also be carried out. For cases with multiple subsamples, several estimators of Cpm and Cpmk can be derived using a similar technique (see Vännman and Hubele, 2003; Wu and Pearn, 2005b; Wu, 2008 for more details). Hubele and Vännman (2004) considered the pooled and un-pooled estimators of the variance from subsamples, and provide the sampling distributions of the corresponding estimators of Cpm. The un-pooled variance estimator is equivalent to the usual "overall" or "long-term" variance estimator, whereas the pooled variance estimator is based on a control chart reflecting "within" or "short-term" variation. Namely, when the process has undergone a change in variation, the un-pooled estimator captures all of the variation, whereas the pooled one captures only the component of within-subsample variation (see, e.g., Cryer and Ryan, 1990; Hubele and Vännman, 2004).

6.3. Extensions to tool wear problem

In the 21st century, manufacturing systems are geared towards meeting the challenges of quality-based competition. Process capability studies and analyses have become critical issues in process control; indeed a number of guidelines are available for process capability assessment. Moreover, certain conditions, such as normally distributed output, statistical independence of the observed values and the existence of only random variation (resulting from chance causes), ought to be stipulated for this assessment. These conditions may not be fully satisfied in a practical set-up, and some departures are quite likely to occur. Tool wear, naturally, constitutes a dominant and inseparable component of variability in many "machining" processes, and hence it represents a systematic assignable cause. Process capability assessment in such cases may turn out to be a bit tricky, since the standard procedure may not always provide accurate results.

Observe that a process capability analysis is valid only when the process under investigation is free of any special or assignable causes (i.e., is in control). A process is said to have a "tool wear problem" when variation due to a certain systematic cause is present. There are, in fact, two areas of interest when studying a process: process stability and process capability. It is important to have clear guidelines about control before developing the plan for a tool wear process. Specifically, is the intent of our plan to detect changes in the process, or is our goal just to monitor the tool? The action to be taken in an out-of-control situation should be determined by the intent of the plan. Statistical process studies and the ongoing control can be quite complicated when we are dealing with machining processes subject to tool wear. Indeed, such wear is a fact, and it is essential for processes that exhibit tool wear to be controlled, so as to maintain high part quality and to maximize the tool life. In its simplest and most common form, tool wear data tend to have an upward or a downward slope over time. To determine this trend, a best-fit line to the data ought to be generated. For standard control charts, the grand average and the control limits are usually horizontal. In contrast, when tool wear is present, the control limits will be parallel to the tool wear slope. Once control has been assessed, the capability of the process can then be determined.

Some investigators attempt to remove the variability associated with the systematic cause. For example, Yang and Hancock (1990) recommended that, in computing the basic Cp index, an unbiased estimator of $\sigma$ can be obtained as $s/(1-\rho)^{1/2}$, where $s$ is the usual sample standard deviation and $\rho$ is defined as the average correlation factor. Some other authors make a general assumption of linear degradation in the tool. For instance, Quesenberry (1988) suggested that tool wear can be modeled over an interval of tool life by a regression model; he assumes that the tool wear rate is either known or a good estimate of it is available, and that the process mean can be adjusted after each batch without error. However, the procedure of model-building does not appear to be either easy or directly applicable to realistic conditions. Long and De Coste's (1988) approach is first to remove the linearity by regressing on the means of the subgroups and then to determine the process capability. These authors discussed the techniques for obtaining the best-fit line for the data, calculating the control limits, comparing the slopes to distinguish different tools, and finally calculating the capability of the process. These techniques are based on the assumption that tools are "consistent" within their tool groups. As the data are recorded over several tools, the subgroup averages are plotted over time. A best-fit line is then determined using the methodology of standard linear regression analysis.
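The following Python sketch illustrates the general idea behind this trend-removal step. It is a simplified illustration only, not Long and De Coste's exact procedure; the simulated data, seed, and variable names are ours. Subgroup means are regressed on time, the fitted tool-wear trend is subtracted, and a capability figure is then computed from the detrended (random-cause) variation.

```python
import numpy as np

# Simulated subgroup means drifting upward with tool wear (illustrative only)
rng = np.random.default_rng(7)
k, n = 20, 5                      # k subgroups of size n
t = np.arange(k)                  # subgroup (time) index
subgroups = 10.0 + 0.03 * t[:, None] + rng.normal(0, 0.15, size=(k, n))
xbar = subgroups.mean(axis=1)

usl, lsl = 11.0, 9.0

# Best-fit line through the subgroup means (ordinary least squares)
slope, intercept = np.polyfit(t, xbar, 1)
trend = intercept + slope * t

# Remove the linear tool-wear trend, then estimate the residual (random-cause) spread
detrended = subgroups - trend[:, None]
sigma_r = detrended.std(ddof=2)   # ddof=2: two regression parameters estimated

# Capability based on random-cause variation only (Cp form, for illustration)
cp_detrended = (usl - lsl) / (6 * sigma_r)
cp_naive = (usl - lsl) / (6 * subgroups.std(ddof=1))
print(f"slope per subgroup: {slope:.3f}")
print(f"Cp ignoring trend: {cp_naive:.2f}, Cp after trend removal: {cp_detrended:.2f}")
```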

Evidently, when systematic assignable causes are present and tolerated, the overall variation of the process ($\sigma^2$) is composed of the variation due to random causes ($\sigma_r^2$) and the variation due to assignable causes ($\sigma_a^2$), that is, $\sigma^2 = \sigma_r^2 + \sigma_a^2$. The traditional PCI measures disregard the portion of the overall variation (in the presence of tool wear) that is due to assignable causes. Hence any estimate of the process capability will confound the true capability with these two sources. In order to get a true measure of process capability, any variation due to an assignable cause must be removed from the measure. However, the above approaches tacitly assume a static process capability over a cycle. By allowing the process capability to be dynamic within a cycle, as well as from cycle to cycle, one could circumvent some of the problems encountered.
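As a small worked illustration of this decomposition (the numerical values are ours, chosen only for the example): if the random-cause and assignable-cause components are known or estimated separately, the capability based on random causes alone can differ markedly from the one based on the overall spread.

```python
# Illustrative decomposition sigma^2 = sigma_r^2 + sigma_a^2 (values are hypothetical)
sigma_r = 0.20          # random-cause standard deviation
sigma_a = 0.30          # assignable-cause (e.g. tool wear) standard deviation
sigma_total = (sigma_r**2 + sigma_a**2) ** 0.5

usl, lsl = 11.0, 9.0
cp_overall = (usl - lsl) / (6 * sigma_total)   # confounds both sources
cp_random = (usl - lsl) / (6 * sigma_r)        # random-cause variation only

print(f"sigma_total = {sigma_total:.3f}")
print(f"Cp (overall) = {cp_overall:.2f}, Cp (random causes only) = {cp_random:.2f}")
```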

Spiring (1989, 1991) viewed this as a dynamic process which is in constant change. In this dynamic model, the capability of the process may vary, possibly even in a predictable manner. Spiring has devised a modification of the Cpm index for this dynamic process under the influence of systematic assignable causes. In this scenario the goal is to maintain a minimum level of capability at all times. As a result, the capability will be cyclical in nature, with its period defined by the frequency of the process/tooling adjustments. Even when the assignable cause variation is not systematic, as is the case with tool or die wear, one ought to be able to deal with random fluctuations of the process mean over time. Quite often in practice, deviations from the target value are due to assignable causes which are easy to pinpoint, such as shift-to-shift changes, differences in the raw material batches, environmental factors, etc.

The measure of process capability for a dynamic process proposed by Spiring (1991) is
$$C_{pm} = \frac{\min\{USL - T,\; T - LSL\}}{3\sqrt{\sigma_{rt}^2 + (\mu_t - T)^2}},$$
where USL, LSL and T are as above, but $\mu_t$ represents the mean and $\sigma_{rt}^2$ the variation (due to random causes only) of the process at time period t. As we have already remarked, the actual values of $\mu_t$ and $\sigma_{rt}^2$ are seldom known, and in order to get an assessment of process capability these values ought to be estimated. These estimators will have to incorporate the various existing sources of variation. Monitoring the process's capability will thus require obtaining the value of Cpm, or a suitable estimate of it, at various times t over each cycle during the lifetime of the tool.

It thus follows that the proposed sampling scheme is similar to procedures used in monitoring a process for control charting. The general format is to gather k subgroups of size n from each cycle (e.g., the period from $t_0$ to $t_1$) over the lifetime of the tool. The value of k will be unique to each process and, in fact, may change from cycle to cycle within the process. On the other hand, sample sizes of less than five (i.e., n < 5) are cautioned against, while larger samples (e.g., n > 25, see Spiring, 1991) may also pose problems. The optimal sample size for assessing process capability in the presence of systematic assignable causes will thus vary for each process under consideration.

Assuming that the effect of the tool deterioration is linear over the sampling window, estimates of Cpm are available which do not involve the contribution of the assignable causes. Typically such an estimator is of the form
$$\hat{C}_{pm} = \frac{\min\{USL - T,\; T - LSL\}}{3\sqrt{MSE_t + \dfrac{n}{n-1}(\bar{X}_t - T)^2}}.$$
This measure of process capability considers only the proximity to the target value T and the variation associated with random causes, as the linear effect of the tool wear is effectively removed by using
$$MSE_t = \frac{\sum_{i=1}^{n}(x_{t_{ai}} - \hat{x}_{t_{ai}})^2}{n-2}$$
at sequentially selected points (i.e., $t_{a1}, t_{a2}, t_{a3}, \ldots$) rather than the standard estimator $S^2$. The $MSE_t$ is the mean square error associated with the regression equation $x_{ai} = \alpha + \beta t_{ai} + \varepsilon_{ai}$, where $t_{ai}$ is the sequence number of the sampling unit and $\varepsilon_{ai} \sim N(0, 1)$. The coefficient $\beta$ represents the linear change in the tool wear given a unit change in time/production. The method was proposed by Spiring (1991), who suggested that the problem can be tackled by viewing the process capability as a sequential, dynamic rather than static process. This entails calculating a new index and constantly monitoring it as the process advances. When the index reaches a preset minimum value, the processing is terminated and resetting/replacement is carried out.
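A minimal Python sketch of this calculation follows; it is our own illustration of the formula above, not Spiring's original code, and the data, seed, and names are hypothetical. It fits the within-window regression, forms $MSE_t$, and plugs it into the dynamic estimate of Cpm.

```python
import numpy as np

def dynamic_cpm(x, t_seq, usl, lsl, target):
    """Estimate a Spiring-type dynamic Cpm for one sampling window.

    x      : measurements in the window
    t_seq  : sequence numbers of the sampling units (time order)
    Returns the estimate based on the regression MSE rather than S^2.
    """
    n = len(x)
    beta, alpha = np.polyfit(t_seq, x, 1)          # x_ai = alpha + beta * t_ai + error
    fitted = alpha + beta * t_seq
    mse_t = np.sum((x - fitted) ** 2) / (n - 2)    # mean square error of the regression
    xbar_t = x.mean()
    d_min = min(usl - target, target - lsl)
    return d_min / (3 * np.sqrt(mse_t + n / (n - 1) * (xbar_t - target) ** 2))

# Example window: n = 10 units with a linear wear trend plus random variation
rng = np.random.default_rng(3)
t_seq = np.arange(1, 11, dtype=float)
x = 10.0 + 0.02 * t_seq + rng.normal(0, 0.1, size=t_seq.size)

print(f"dynamic Cpm estimate: {dynamic_cpm(x, t_seq, 11.0, 9.0, 10.0):.2f}")
```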

6.4. Extensions to gauge measurement error

The inevitable variations in process measurements come from two sources: the manufacturing process and the gauge. Gauge capability reflects the gauge's precision, or lack of variation, which is not the same as calibration (the latter assures the gauge's accuracy). As has been emphasized on numerous occasions, process capability measures the ability of a manufacturing process to meet preassigned specifications. Nowadays, many customers use process capability to judge a supplier's ability to deliver quality products. Suppliers need to be aware of how gauges affect various process capability estimates.

The gauge capability consists of two parts: repeatability and reproducibility. Repeatability is the gauge's experimental or random error. Namely, when measuring the same specimen several times, the gauge will never provide exactly the same measurement. Reproducibility is the (quite annoying) inability of several inspectors or gauges to arrive at the same measurement value for a given specimen, i.e., the variability due to different operators using the gauge (or different time periods, different environments, in general different conditions). To summarize, we have
$$\sigma^2_{\text{Measurement error}} = \sigma^2_{\text{Gauge}} = \sigma^2_{\text{repeatability}} + \sigma^2_{\text{reproducibility}}.$$
Estimates for $\sigma^2_{\text{repeatability}}$ and $\sigma^2_{\text{reproducibility}}$ come from a gauge study, or a GR&R (gauge repeatability and reproducibility) study. Barrentine (1991), Levinson (1995), Montgomery (2001, 2005), and Burdick et al. (2003), among others, describe various procedures for gauge studies.

Gauge capability is a gauge's ability to repeat and reproduce measurements. Its measure is the percentage of tolerance consumed by (gauge) capability (PTCC).

Montgomery (2001) referred to it as the precision-to-tolerance (or P/T) ratio. It is the ratio of the gauge's variation to the specification width; its smaller values are evidently preferable. Denoting the gauge's standard deviation as $\sigma_{\text{Gauge}}$, we have
$$PTCC = \frac{6\sigma_{\text{Gauge}}}{USL - LSL} \times 100\%.$$
Some authors and practitioners prefer to use the coefficient 5.15 instead of 6 (see, e.g., Barrentine, 1991; Levinson, 1995). The formula above uses $6\sigma$ as a natural tolerance width for the gauge, based on the normal distribution assumptions.
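For concreteness, a short Python sketch of this ratio (the numerical values are hypothetical; the 5.15 variant is shown alongside the 6-sigma version):

```python
def ptcc(sigma_gauge, usl, lsl, k=6.0):
    """Percentage of tolerance consumed by gauge capability: k*sigma_gauge/(USL-LSL)*100%."""
    return k * sigma_gauge / (usl - lsl) * 100.0

sigma_gauge, usl, lsl = 0.05, 11.0, 9.0
print(f"PTCC (6 sigma):    {ptcc(sigma_gauge, usl, lsl):.1f}%")
print(f"PTCC (5.15 sigma): {ptcc(sigma_gauge, usl, lsl, k=5.15):.1f}%")
```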

The gauge capability has a significant effect on process capability measurements. An inaccurate measurement system can thwart the benefits of improvement endeavors and result in poor quality. Analyzing process capability without considering gauge capability may often lead to unreliable decisions, and it could cause a serious loss to producers if gauge capability is ignored in process capability estimation and testing. On the other hand, improving the gauge measurements and properly training the operators can reduce the measurement errors. Since measurement errors unfortunately cannot be avoided completely, using appropriate confidence coefficients and power becomes necessary. The reality is that no measurement is free from error or uncertainty, even if it is carried out with the aid of the most sophisticated and precise measuring devices. Any variation in the measurement process has a direct impact on the ability to arrive at, and execute, sound judgments about the manufacturing process. Analyzing the effects of measurement errors on PCIs, Levinson (1995) and Mittag (1997) developed definitive techniques for quantifying the percentage error in the estimation of process capability indices in the presence of measurement errors.

Common approaches to GR&R studies, such as the Range method (seeMontgomery and Runger, 1993a) and the ANOVA method (see Mandel, 1972; Montgomery and Runger, 1993b) assume that the distribution of the measurement errors is normal with a mean error of zero. Let the measurement errors be described by a random

variable MeNð0;

s

2MeÞ;Montgomery and Runger (1993b) determined the gauge capability

l

using of the formula:

l

¼ 6

s

Me

USL  LSL100%.

For a measurement system to be deemed acceptable, the variability in the measurements due to this system ought to be less than a predetermined percentage of the engineering tolerance. Some guidelines for gauge acceptance have been developed by the Automotive Industry Action Group (AIAG, 2002).

Let $X \sim N(\mu, \sigma^2)$ be the relevant quality characteristic of a manufacturing process, and consider this process capability in a measurement error system. Due to the measurement errors, the observed random variable $Y \sim N(\mu, \sigma_Y^2 = \sigma^2 + \sigma_{M_e}^2)$ is measured under the assumption that X and $M_e$ (the measurement error) are stochastically independent (instead of measuring the actual variable X). The empirical capability index $C_{pk}^{Y}$ will be obtained after substituting $\sigma_Y$ for $\sigma$. The relationship between the true process capability $C_{pk} = \min\{(USL - \mu)/3\sigma,\; (\mu - LSL)/3\sigma\}$ and the empirical process capability $C_{pk}^{Y}$ can be expressed as
$$\frac{C_{pk}^{Y}}{C_{pk}} = \frac{1}{\sqrt{1 + \lambda^2 C_p^2}}.$$
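The attenuation implied by this relationship is easy to tabulate. The short Python sketch below is our own illustration, with λ treated as a fraction rather than a percentage and the Cp and Cpk values chosen arbitrarily; it shows how the observed $C_{pk}^{Y}$ shrinks as the gauge capability λ grows.

```python
import numpy as np

def observed_cpk(true_cpk, cp, lam):
    """Empirical Cpk^Y implied by measurement error: Cpk / sqrt(1 + lam^2 * Cp^2).

    lam is the gauge capability lambda expressed as a fraction (e.g. 0.10 for 10%).
    """
    return true_cpk / np.sqrt(1.0 + lam**2 * cp**2)

cp, cpk = 1.5, 1.33
for lam in (0.0, 0.10, 0.20, 0.30):
    print(f"lambda = {lam:.2f}: observed Cpk^Y = {observed_cpk(cpk, cp, lam):.3f}")
```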

Since the variation of the observed data is larger than the variation of the original data, the denominator of the index Cpk becomes larger, and the true capability of the process will be understated if calculations of the process capability index are based on the empirical data represented by Y. Suppose that the empirical data (the observed measurements contaminated by errors) $\{Y_i,\; i = 1, 2, \ldots, n\}$ are available; then the natural estimator $\hat{C}_{pk}^{Y}$ is
$$\hat{C}_{pk}^{Y} = \frac{d - |\bar{Y} - m|}{3 S_Y},$$
which is obtained by replacing the process mean $\mu$ and the process standard deviation $\sigma$ by their conventional estimators $\bar{Y} = \sum_{i=1}^{n} Y_i / n$ and $S_Y = [\sum_{i=1}^{n}(Y_i - \bar{Y})^2/(n-1)]^{1/2}$ from a "bona fide" stable process, where $d = (USL - LSL)/2$ and $m = (USL + LSL)/2$. In the case of contaminated data, the estimator $\hat{C}_{pk}^{Y}$ substantially underestimates the true capability in the presence of measurement errors.
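A short simulation sketch (entirely our own illustration, with hypothetical process parameters and seed) makes this underestimation visible: the same process is "measured" with and without gauge error, and the plug-in estimator above is computed from each data set.

```python
import numpy as np

def cpk_hat(y, usl, lsl):
    """Plug-in Cpk estimate (d - |ybar - m|) / (3 * S)."""
    d, m = (usl - lsl) / 2.0, (usl + lsl) / 2.0
    return (d - abs(y.mean() - m)) / (3.0 * y.std(ddof=1))

rng = np.random.default_rng(5)
usl, lsl = 11.0, 9.0
n = 100
x = rng.normal(10.05, 0.22, size=n)          # true characteristic X
y = x + rng.normal(0.0, 0.10, size=n)        # observed Y = X + measurement error

print(f"Cpk estimate from error-free data:    {cpk_hat(x, usl, lsl):.3f}")
print(f"Cpk estimate from contaminated data:  {cpk_hat(y, usl, lsl):.3f}")
```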

Consequently, if a statistical test is used to determine whether the process meets the capability requirement, the power of the test would drastically decrease.

In fact, the discussion in Pearn and Liao (2005) indicated that the true process capability would be inappropriately underestimated if $\hat{C}_{pk}^{Y}$ is used. The probability that $\hat{C}_{pk}^{Y}$ exceeds the critical value $c_0$ would be smaller than the corresponding probability when using $\hat{C}_{pk}$. Thus, when estimating Cpk, the $\alpha$-risk using $\hat{C}_{pk}^{Y}$ is less than the $\alpha$-risk of using $\hat{C}_{pk}$. The power of the test based on $\hat{C}_{pk}^{Y}$ is then also smaller than that based on $\hat{C}_{pk}$. Namely, the $\alpha$-risk and the power of the

test decrease with the measurement error. Since the lower confidence bound is underestimated and the power becomes small, the producers cannot firmly state that their processes meet the capability requirement even if their processes are indeed sufficiently capable. Adequate

