
© Springer-Verlag 2003

A Bayesian-like estimator of the process capability index $C_{pmk}$

W. L. Pearn¹ and G. H. Lin²*

¹ Department of Industrial Engineering & Management, National Chiao Tung University, Taiwan, ROC
² Department of Communication Engineering, National Penghu Institute of Technology, Penghu, Taiwan, ROC

Abstract. Pearn et al. (1992) proposed the capability index $C_{pmk}$ and investigated the statistical properties of its natural estimator $\hat{C}_{pmk}$ for stable normal processes with constant mean $\mu$. Chen and Hsu (1995) showed that under general conditions the asymptotic distribution of $\hat{C}_{pmk}$ is normal if $\mu \neq m$, and is a linear combination of the normal and the folded-normal distributions if $\mu = m$, where $m$ is the mid-point between the upper and the lower specification limits. In this paper, we consider a new estimator $\tilde{C}_{pmk}$ for stable processes under a different (more realistic) condition on the process mean, namely, $P(\mu \geq m) = p$, $0 \leq p \leq 1$. We obtain the exact distribution, the expected value, and the variance of $\tilde{C}_{pmk}$ under the normality assumption. We show that for $P(\mu \geq m) = 0$ or 1, the new estimator $\tilde{C}_{pmk}$ is the MLE of $C_{pmk}$, which is asymptotically efficient. In addition, we show that under general conditions $\tilde{C}_{pmk}$ is consistent and asymptotically unbiased. We also show that the asymptotic distribution of $\tilde{C}_{pmk}$ is a mixture of two normal distributions.

Keywords and Phrases: process capability index; Bayesian-like estimator; consistent; mixture distribution

1. Introduction

Pearn et al. (1992) proposed the process capability index $C_{pmk}$, which combines the merits of two earlier indices, $C_{pk}$ (Kane (1986)) and $C_{pm}$ (Chan et al. (1988)). The index $C_{pmk}$ alerts the user if the process variance increases and/or the process mean deviates from its target value, and is designed to monitor normal and near-normal processes. The index $C_{pmk}$ is considered arguably the most useful index to date for processes with two-sided specification limits (Boyles (1994), Wright (1995)).

* The research was partially supported by the National Science Council of the Republic of China (NSC-89-2213-E-346-003).

The index $C_{pmk}$, referred to as the third-generation capability index, is defined as:

$$C_{pmk} = \min\left\{ \frac{USL - \mu}{3\sqrt{\sigma^2 + (\mu - T)^2}},\ \frac{\mu - LSL}{3\sqrt{\sigma^2 + (\mu - T)^2}} \right\}, \qquad (1)$$

where $USL$ and $LSL$ are the upper and the lower specification limits, respectively, $\mu$ is the process mean, $\sigma$ is the process standard deviation, and $T$ is the target value. We note that $C_{pmk}$ can be rewritten as:

$$C_{pmk} = \frac{d - |\mu - m|}{3\sqrt{\sigma^2 + (\mu - T)^2}}, \qquad (2)$$

where $m$ is the mid-point between the upper and the lower specification limits, and $d$ is the half length of the specification interval $[LSL, USL]$. That is, $m = (USL + LSL)/2$ and $d = (USL - LSL)/2$. For stable processes where the process mean $\mu$ is assumed to be a constant (unknown), Pearn et al. (1992) considered the natural estimator of $C_{pmk}$, which is defined as:

$$\hat{C}_{pmk} = \frac{d - |\bar{X} - m|}{3\sqrt{S_n^2 + (\bar{X} - T)^2}}, \qquad (3)$$

where $\bar{X} = (\sum_{i=1}^{n} X_i)/n$ and $S_n = \{n^{-1}\sum_{i=1}^{n}(X_i - \bar{X})^2\}^{1/2}$ are the conventional estimators of the process mean and the process standard deviation, $\mu$ and $\sigma$, respectively. If the process characteristic follows the normal distribution, Pearn et al. (1992) showed that for the case with $T = m$ (symmetric tolerance) the distribution of the natural estimator $\hat{C}_{pmk}$ is a mixture of the chi-square distribution and the non-central chi-square distribution, as expressed in the following:

$$\hat{C}_{pmk} \sim \frac{d\sqrt{n}/\sigma - \chi'_1(\lambda)}{3\sqrt{\chi^2_{n-1} + \chi'^2_1(\lambda)}}, \qquad (4)$$

where $\chi^2_{n-1}$ is the chi-square distribution with $n - 1$ degrees of freedom, $\chi'_1(\lambda)$ is the non-central chi distribution with one degree of freedom and non-centrality parameter $\lambda$, and $\chi'^2_1(\lambda)$ is the non-central chi-square distribution with one degree of freedom and non-centrality parameter $\lambda$, where $\lambda = n(\mu - T)^2/\sigma^2$. Chen and Hsu (1995) showed that the natural estimator $\hat{C}_{pmk}$ is asymptotically unbiased. Chen and Hsu (1995) also showed that under general conditions $\sqrt{n}(\hat{C}_{pmk} - C_{pmk})$ converges to the normal distribution $N(0, \sigma^2_{pmk})$, where

$$\sigma^2_{pmk} = \frac{\sigma^2}{9[\sigma^2 + (\mu - T)^2]} + \left\{ \frac{12(\mu - T)\sigma^2 - 6m_3}{18[\sigma^2 + (\mu - T)^2]^{3/2}} \right\} C_{pmk} + \left\{ \frac{144(\mu - T)^2\sigma^2 - 144(\mu - T)m_3 + 36(m_4 - \sigma^4)}{144[\sigma^2 + (\mu - T)^2]^2} \right\} C_{pmk}^2, \qquad (5)$$

and $m_3$, $m_4$ are the third and fourth central moments of the process, respectively.
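As a concrete illustration, the natural estimator in (3) is straightforward to compute from sample data. The following sketch (the specification limits and data are hypothetical, not from the paper) computes $\hat{C}_{pmk}$:

```python
import math

def c_pmk_hat(x, usl, lsl, target):
    """Natural estimator of Cpmk, eq. (3): sample mean and the
    divide-by-n sample standard deviation S_n."""
    n = len(x)
    xbar = sum(x) / n
    s2n = sum((xi - xbar) ** 2 for xi in x) / n   # S_n^2, divisor n
    d = (usl - lsl) / 2    # half length of the specification interval
    m = (usl + lsl) / 2    # mid-point of the specification limits
    return (d - abs(xbar - m)) / (3 * math.sqrt(s2n + (xbar - target) ** 2))

# hypothetical sample from a process with USL = 16, LSL = 10, T = 13
sample = [12.8, 13.1, 13.4, 12.9, 13.2, 13.0, 13.3, 12.7, 13.1, 13.0]
print(round(c_pmk_hat(sample, usl=16.0, lsl=10.0, target=13.0), 4))  # prints 4.6355
```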


2. A Bayesian-like estimator

In real-world applications, production may require multiple suppliers with different quality characteristics on each single shipment of raw materials, multiple manufacturing lines with inconsistent precision in machine settings and engineering effort for each line, or multiple workmanship shifts with unequal performance levels on each shift. Therefore, the basic and common assumption that the process mean stays constant may not be satisfied in real situations. Consequently, using the natural estimator $\hat{C}_{pmk}$ to measure the potential and performance of such a process is inappropriate, as the resulting capability measure would not be accurate. For stable processes under those conditions, if knowledge of the process mean, $P(\mu \geq m) = p$, $0 \leq p \leq 1$, is available, then we can consider the following new estimator $\tilde{C}_{pmk}$. In general, the probability $P(\mu \geq m) = p$, $0 \leq p \leq 1$, can be obtained from historical information on a stable process.

$$\tilde{C}_{pmk} = \frac{b_{n-1}[d - (\bar{X} - m)I_A(\mu)]}{3\sqrt{S_n^2 + (\bar{X} - T)^2}}, \qquad (6)$$

where $b_{n-1} = \sqrt{2/(n-1)}\,\{\Gamma[(n-1)/2]/\Gamma[(n-2)/2]\}$ is the correction factor, and $I_A(\cdot)$ is the indicator function defined as $I_A(\mu) = 1$ if $\mu \in A$ and $I_A(\mu) = -1$ if $\mu \notin A$, where $A = \{\mu \mid \mu \geq m\}$. We note that the new estimator $\tilde{C}_{pmk}$ can be rewritten as the following:

$$\tilde{C}_{pmk} = \frac{b_{n-1}[d - (\bar{X} - m)I_A(\mu)]}{3\sqrt{S_n^2 + (\bar{X} - T)^2}} = \frac{d - (\bar{X} - m)I_A(\mu)}{3 b_{n-1}^{-1} S_n \sqrt{1 + (\bar{X} - T)^2/S_n^2}} = \frac{\tilde{C}_{pk}}{\sqrt{1 + (\bar{X} - T)^2/S_n^2}}, \qquad (7)$$

where $\tilde{C}_{pk} = b_{n-1}[d - (\bar{X} - m)I_A(\mu)]/(3S_n)$, as defined by Pearn and Chen (1996). If the process characteristic follows the normal distribution $N(\mu, \sigma^2)$, then we can show the following theorem.

Theorem 1. If the process characteristic follows the normal distribution, then

$$\tilde{C}_{pmk} \sim b_{n-1}\,\frac{\sqrt{n}\,C_p - N(\eta, 1)/3}{\sqrt{\chi'^2_n}},$$

where $N(\eta, 1)$ is the normal distribution with mean $\eta = 3\sqrt{n}(C_p - C_{pk})$, and $\chi'^2_n$ is the non-central chi-square distribution with $n$ degrees of freedom and non-centrality parameter $\lambda = n(\mu - T)^2/\sigma^2$.

Proof: We note that $3b_{n-1}^{-1}S_n\tilde{C}_{pk} = d - (\bar{X} - m)I_A(\mu)$ is distributed as the normal distribution $N(3\sigma C_{pk}, \sigma^2/n)$. Therefore, $b_{n-1}[d - (\bar{X} - m)I_A(\mu)]/(3\sigma) = b_{n-1}\{[d/(3\sigma)] - [(\bar{X} - m)I_A(\mu)/(3\sigma)]\}$ is distributed as $b_{n-1}\{C_p - N(\eta, 1)/(3\sqrt{n})\}$, where $N(\eta, 1)$ is the normal distribution with mean $\eta = 3\sqrt{n}(C_p - C_{pk})$. We also note that $[nS_n^2 + n(\bar{X} - T)^2]/\sigma^2 = \sum_{i=1}^{n}(X_i - T)^2/\sigma^2$ is distributed as $\chi'^2_n$, the non-central chi-square distribution with $n$ degrees of freedom and non-centrality parameter $\lambda = n(\mu - T)^2/\sigma^2$. Therefore, $\tilde{C}_{pmk}$ is distributed as $b_{n-1}\{\sqrt{n}\,C_p - N(\eta, 1)/3\}/\sqrt{\chi'^2_n}$.


The $r$-th moment (about zero) of $\tilde{C}_{pmk}$ can therefore be obtained as:

$$E(\tilde{C}_{pmk}^r) = E\left\{ b_{n-1}\,\frac{\sqrt{n}\,C_p - N(\eta,1)/3}{\sqrt{\chi'^2_n}} \right\}^r = \sum_{i=0}^{r} b_{n-1}^r \binom{r}{i} E\left\{ \left[ \frac{-N(\eta,1)}{3\sqrt{n}\,C_p} \right]^i \left[ \frac{\sqrt{n}\,C_p}{\sqrt{\chi'^2_n}} \right]^r \right\}. \qquad (8)$$

By setting $r = 1$ and $r = 2$, we may obtain the first two moments and the variance as:

$$E(\tilde{C}_{pmk}) = \sum_{i=0}^{1} b_{n-1} \binom{1}{i} E\left\{ \left[ \frac{-N(\eta,1)}{3\sqrt{n}\,C_p} \right]^i \left[ \frac{\sqrt{n}\,C_p}{\sqrt{\chi'^2_n}} \right] \right\}, \qquad (9)$$

$$E(\tilde{C}_{pmk}^2) = \sum_{i=0}^{2} b_{n-1}^2 \binom{2}{i} E\left\{ \left[ \frac{-N(\eta,1)}{3\sqrt{n}\,C_p} \right]^i \left[ \frac{\sqrt{n}\,C_p}{\sqrt{\chi'^2_n}} \right]^2 \right\}, \qquad (10)$$

$$\mathrm{Var}(\tilde{C}_{pmk}) = E(\tilde{C}_{pmk}^2) - [E(\tilde{C}_{pmk})]^2. \qquad (11)$$
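The expectations in (9)–(11) involve dependent normal and non-central chi-square variates and are not available in closed form; a direct Monte Carlo check is straightforward. A sketch under the assumption of a stable normal process with $P(\mu \geq m) = 1$, so $I_A(\mu) = 1$ throughout (all parameter values are hypothetical):

```python
import math, random

def c_pmk_tilde_sim(mu, sigma, usl, lsl, target, n, reps, seed=1):
    """Monte Carlo estimates of E(C~pmk) and Var(C~pmk) for a stable
    normal process, for the case P(mu >= m) = 1 (I_A(mu) = +1)."""
    rng = random.Random(seed)
    d, m = (usl - lsl) / 2, (usl + lsl) / 2
    bn1 = math.sqrt(2 / (n - 1)) * math.gamma((n - 1) / 2) / math.gamma((n - 2) / 2)
    vals = []
    for _ in range(reps):
        x = [rng.gauss(mu, sigma) for _ in range(n)]
        xbar = sum(x) / n
        s2n = sum((xi - xbar) ** 2 for xi in x) / n
        vals.append(bn1 * (d - (xbar - m)) / (3 * math.sqrt(s2n + (xbar - target) ** 2)))
    mean = sum(vals) / reps
    var = sum((v - mean) ** 2 for v in vals) / reps
    return mean, var

# hypothetical on-center process: mu = m = T = 13, sigma = 1, d = 3, so Cpmk = 1
mean, var = c_pmk_tilde_sim(mu=13.0, sigma=1.0, usl=16.0, lsl=10.0,
                            target=13.0, n=30, reps=2000)
```

With these (hypothetical) parameters the true index is $C_{pmk} = d/(3\sigma) = 1$, so the simulated mean should fall near 1 for moderate $n$.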

We note that for the case with $P(\mu \geq m) = 1$, $\tilde{C}_{pmk} < \hat{C}_{pmk}$ for $\bar{X} \geq m$, and $\tilde{C}_{pmk} > \hat{C}_{pmk}$ for $\bar{X} < m - d[(1 - b_{n-1})/(1 + b_{n-1})]$. If the process distribution is normal, then the probability $P(\bar{X} \geq m) = \Phi\{\sqrt{n}\,(\mu - m)/\sigma\}$ converges to 1. Thus, for large values of $n$, we expect to have $\tilde{C}_{pmk} < \hat{C}_{pmk}$. On the other hand, if $P(\mu \geq m) = 0$, then we have $\tilde{C}_{pmk} < \hat{C}_{pmk}$ for $\bar{X} \leq m$, and $\tilde{C}_{pmk} > \hat{C}_{pmk}$ for $\bar{X} > m + d[(1 - b_{n-1})/(1 + b_{n-1})]$. If the process distribution is normal, then the probability $P(\bar{X} \leq m) = \Phi\{\sqrt{n}\,(m - \mu)/\sigma\}$ converges to 1. Thus, for large values of $n$, we also expect to have $\tilde{C}_{pmk} < \hat{C}_{pmk}$. Explicit forms of the expected value and the variance of $\tilde{C}_{pmk}$ are analytically intractable, but for the cases with $P(\mu \geq m) = 1$ or 0 the probability density function may be obtained (the proof is omitted for simplicity of presentation).

3. Asymptotic distribution of $\tilde{C}_{pmk}$

In the following, we show that if knowledge of the process mean, the probabilities $P(\mu \geq m) = p$ and $P(\mu < m) = 1 - p$, with $0 \leq p \leq 1$, is given, then the asymptotic distribution of the proposed new estimator $\tilde{C}_{pmk}$ is a mixture of two normal distributions. We first present some lemmas; the proofs of these lemmas can be found in Serfling (1980). A direct consequence of our result is that for the cases with either $P(\mu \geq m) = 1$ or $P(\mu \geq m) = 0$, the asymptotic distribution is an ordinary normal distribution.

Lemma 1: If $m_4 = E(X - \mu)^4$ exists, then $\sqrt{n}(\bar{X} - \mu,\ S_n^2 - \sigma^2)$ converges to $N((0, 0), \Sigma)$ in distribution, where

$$\Sigma = \begin{bmatrix} \sigma^2 & m_3 \\ m_3 & m_4 - \sigma^4 \end{bmatrix}.$$


Lemma 2: If $g(x, y)$ is a real-valued differentiable function, then $\sqrt{n}[g(\bar{X}, S_n^2) - g(\mu, \sigma^2)]$ converges to $N(0, D\Sigma D')$ in distribution, if $D = \left( \frac{\partial g}{\partial x}\big|_{(\mu, \sigma^2)},\ \frac{\partial g}{\partial y}\big|_{(\mu, \sigma^2)} \right) \neq (0, 0)$.

Lemma 3: If the random vector $(w_{1n}, w_{2n}, \ldots, w_{kn})$ converges to the random vector $(w_1, w_2, \ldots, w_k)$ in distribution, and the random vector $(v_{1n}, v_{2n}, \ldots, v_{kn})$ converges to the random vector $(v_1, v_2, \ldots, v_k)$ in probability, then the random vector $(v_{1n}w_{1n}, v_{2n}w_{2n}, \ldots, v_{kn}w_{kn})$ converges to the random vector $(v_1w_1, v_2w_2, \ldots, v_kw_k)$ in distribution.

Lemma 4: If the random vector $(w_{1n}, w_{2n}, \ldots, w_{kn})$ converges to the random vector $(w_1, w_2, \ldots, w_k)$ in distribution, and the function $g$ is continuous with probability one, then $g(w_{1n}, w_{2n}, \ldots, w_{kn})$ converges to $g(w_1, w_2, \ldots, w_k)$ in distribution.

Lemma 5: If the random vector $(v_{1n}, v_{2n}, \ldots, v_{kn})$ converges to the random vector $(v_1, v_2, \ldots, v_k)$ in probability, and the function $g$ is continuous with probability one, then $g(v_{1n}, v_{2n}, \ldots, v_{kn})$ converges to $g(v_1, v_2, \ldots, v_k)$ in probability.

Lemma 6: If $m_4 = E(X - \mu)^4$ exists, then $\sqrt{n}(\bar{X} - \mu,\ \bar{X} - \mu,\ S_n^2 - \sigma^2)$ converges to $N((0, 0, 0), \Sigma)$ in distribution, where

$$\Sigma = \begin{bmatrix} \sigma^2 & \sigma^2 & m_3 \\ \sigma^2 & \sigma^2 & m_3 \\ m_3 & m_3 & m_4 - \sigma^4 \end{bmatrix}.$$

Proof: See Chen and Hsu (1995).

Theorem 2: The estimator $\tilde{C}_{pmk}$ is consistent.

Proof: We first note that $(\bar{X}, S_n^2)$ converges to $(\mu, \sigma^2)$ in probability, and $b_{n-1}$ converges to 1 as $n \to \infty$. Since $\tilde{C}_{pmk}$ is a continuous function of $(\bar{X}, S_n^2)$, it follows directly from Lemma 5 that $\tilde{C}_{pmk}$ converges to $C_{pmk}$ in probability. Hence, $\tilde{C}_{pmk}$ must be consistent.

Theorem 3: Under general conditions, if the fourth central moment $m_4 = E(X - \mu)^4$ exists, then $\sqrt{n}(\tilde{C}_{pmk} - C_{pmk})$ converges to $p \cdot N(0, \sigma^2_{pmk1}) + (1 - p) \cdot N(0, \sigma^2_{pmk2})$ in distribution, where

$$\sigma^2_{pmk1} = \frac{D_1^2}{9}\left[1 + \frac{(\mu - T)^2}{\sigma^2}\right]^{-1} + \frac{D_1}{3}\,\frac{m_3}{\sigma^3}\left[1 + \frac{(\mu - T)^2}{\sigma^2}\right]^{-3/2} C_{pmk1} + \frac{1}{4}\,\frac{m_4 - \sigma^4}{\sigma^4}\left[1 + \frac{(\mu - T)^2}{\sigma^2}\right]^{-2} C_{pmk1}^2,$$

$$\sigma^2_{pmk2} = \frac{D_2^2}{9}\left[1 + \frac{(\mu - T)^2}{\sigma^2}\right]^{-1} + \frac{D_2}{3}\,\frac{m_3}{\sigma^3}\left[1 + \frac{(\mu - T)^2}{\sigma^2}\right]^{-3/2} C_{pmk2} + \frac{1}{4}\,\frac{m_4 - \sigma^4}{\sigma^4}\left[1 + \frac{(\mu - T)^2}{\sigma^2}\right]^{-2} C_{pmk2}^2,$$

$$D_1 = \frac{9(\mu - T)C_{pmk1}^2}{d - (\mu - m)} + 1, \qquad C_{pmk1} = \frac{d - (\mu - m)}{3\sqrt{\sigma^2 + (\mu - T)^2}},$$

$$D_2 = \frac{9(\mu - T)C_{pmk2}^2}{d + (\mu - m)} - 1, \qquad C_{pmk2} = \frac{d + (\mu - m)}{3\sqrt{\sigma^2 + (\mu - T)^2}}.$$
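The asymptotic variance expressions above are easy to evaluate numerically once the process parameters are given. A sketch with hypothetical parameter values (for a normal process, $m_3 = 0$ and $m_4 = 3\sigma^4$); the function name is illustrative:

```python
import math

def asymptotic_variances(mu, sigma, d, m, target, m3, m4):
    """sigma^2_pmk1 and sigma^2_pmk2 of Theorem 3, evaluated numerically."""
    w = 1 + (mu - target) ** 2 / sigma ** 2
    root = 3 * math.sqrt(sigma ** 2 + (mu - target) ** 2)
    c1 = (d - (mu - m)) / root          # C_pmk1
    c2 = (d + (mu - m)) / root          # C_pmk2
    d1 = 9 * (mu - target) * c1 ** 2 / (d - (mu - m)) + 1   # D_1
    d2 = 9 * (mu - target) * c2 ** 2 / (d + (mu - m)) - 1   # D_2
    def var(dd, cc):
        return (dd ** 2 / 9) * w ** -1 \
             + (dd / 3) * (m3 / sigma ** 3) * w ** -1.5 * cc \
             + 0.25 * ((m4 - sigma ** 4) / sigma ** 4) * w ** -2 * cc ** 2
    return var(d1, c1), var(d2, c2)

# hypothetical normal process: mu = 13.2, sigma = 1, T = m = 13, d = 3
v1, v2 = asymptotic_variances(13.2, 1.0, 3.0, 13.0, 13.0, m3=0.0, m4=3.0)
```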

Proof: (CASE I) If $\mu > m$, we define the function

$$g_1(x, y) = \frac{d - (x - m)}{3\sqrt{y + (x - T)^2}},$$

where $x > m$, $y > 0$. Since $g_1$ is differentiable, we have

$$\frac{\partial g_1}{\partial x}\bigg|_{(\mu, \sigma^2)} = \frac{-D_1 C_{pmk1}}{d - (\mu - m)}, \qquad \frac{\partial g_1}{\partial y}\bigg|_{(\mu, \sigma^2)} = -\frac{9}{2}\,\frac{C_{pmk1}^3}{[d - (\mu - m)]^2},$$

where $D_1 = \frac{9(\mu - T)C_{pmk1}^2}{d - (\mu - m)} + 1$ and $C_{pmk1} = \frac{d - (\mu - m)}{3\sqrt{\sigma^2 + (\mu - T)^2}}$. If we define $\mathbf{D}_1 = \left( \frac{\partial g_1}{\partial x}\big|_{(\mu, \sigma^2)},\ \frac{\partial g_1}{\partial y}\big|_{(\mu, \sigma^2)} \right)$, then $\mathbf{D}_1 \neq (0, 0)$. By Lemma 1 and Lemma 2, $\sqrt{n}(b_{n-1}^{-1}\tilde{C}_{pmk} - C_{pmk}) = \sqrt{n}[g_1(\bar{X}, S_n^2) - g_1(\mu, \sigma^2)]$ converges to $N(0, \sigma^2_{pmk1})$ in distribution, where

$$\sigma^2_{pmk1} = \mathbf{D}_1 \Sigma \mathbf{D}_1' = \frac{D_1^2}{9}\left[1 + \frac{(\mu - T)^2}{\sigma^2}\right]^{-1} + \frac{D_1}{3}\,\frac{m_3}{\sigma^3}\left[1 + \frac{(\mu - T)^2}{\sigma^2}\right]^{-3/2} C_{pmk1} + \frac{1}{4}\,\frac{m_4 - \sigma^4}{\sigma^4}\left[1 + \frac{(\mu - T)^2}{\sigma^2}\right]^{-2} C_{pmk1}^2.$$

(CASE II) If $\mu < m$, we define the function

$$g_2(x, y) = \frac{d + (x - m)}{3\sqrt{y + (x - T)^2}},$$

where $x < m$, $y > 0$. Since $g_2$ is differentiable, we have

$$\frac{\partial g_2}{\partial x}\bigg|_{(\mu, \sigma^2)} = \frac{-D_2 C_{pmk2}}{d + (\mu - m)}, \qquad \frac{\partial g_2}{\partial y}\bigg|_{(\mu, \sigma^2)} = -\frac{9}{2}\,\frac{C_{pmk2}^3}{[d + (\mu - m)]^2},$$

where $D_2 = \frac{9(\mu - T)C_{pmk2}^2}{d + (\mu - m)} - 1$ and $C_{pmk2} = \frac{d + (\mu - m)}{3\sqrt{\sigma^2 + (\mu - T)^2}}$. If we define $\mathbf{D}_2 = \left( \frac{\partial g_2}{\partial x}\big|_{(\mu, \sigma^2)},\ \frac{\partial g_2}{\partial y}\big|_{(\mu, \sigma^2)} \right)$, then $\mathbf{D}_2 \neq (0, 0)$. By Lemma 1 and Lemma 2, $\sqrt{n}(b_{n-1}^{-1}\tilde{C}_{pmk} - C_{pmk}) = \sqrt{n}[g_2(\bar{X}, S_n^2) - g_2(\mu, \sigma^2)]$ converges to $N(0, \sigma^2_{pmk2})$ in distribution, where

$$\sigma^2_{pmk2} = \mathbf{D}_2 \Sigma \mathbf{D}_2' = \frac{D_2^2}{9}\left[1 + \frac{(\mu - T)^2}{\sigma^2}\right]^{-1} + \frac{D_2}{3}\,\frac{m_3}{\sigma^3}\left[1 + \frac{(\mu - T)^2}{\sigma^2}\right]^{-3/2} C_{pmk2} + \frac{1}{4}\,\frac{m_4 - \sigma^4}{\sigma^4}\left[1 + \frac{(\mu - T)^2}{\sigma^2}\right]^{-2} C_{pmk2}^2.$$


(CASE III) If $\mu = m$, then

$$\sqrt{n}(b_{n-1}^{-1}\tilde{C}_{pmk} - C_{pmk}) = \frac{-\sqrt{n}(\bar{X} - m)}{3\sqrt{S_n^2 + (\bar{X} - T)^2}} - \frac{d}{3}\cdot\frac{\sqrt{n}(S_n^2 - \sigma^2) + \sqrt{n}(\bar{X}^2 - \mu^2) - 2\sqrt{n}(\bar{X} - \mu)T}{\sqrt{\sigma^2 + (\mu - T)^2}\,\sqrt{S_n^2 + (\bar{X} - T)^2}\,\big[\sqrt{\sigma^2 + (\mu - T)^2} + \sqrt{S_n^2 + (\bar{X} - T)^2}\big]}.$$

We define

$$v_{1n} = \frac{-1}{3\sqrt{S_n^2 + (\bar{X} - T)^2}}, \qquad v_{2n} = \frac{-d}{3\sqrt{\sigma^2 + (\mu - T)^2}\,\sqrt{S_n^2 + (\bar{X} - T)^2}\,\big[\sqrt{\sigma^2 + (\mu - T)^2} + \sqrt{S_n^2 + (\bar{X} - T)^2}\big]},$$

$$w_{1n} = \sqrt{n}(\bar{X} - m), \qquad w_{2n} = \sqrt{n}(S_n^2 - \sigma^2) + \sqrt{n}(\bar{X}^2 - \mu^2) - 2\sqrt{n}(\bar{X} - \mu)T,$$

so that $\sqrt{n}(b_{n-1}^{-1}\tilde{C}_{pmk} - C_{pmk}) = v_{1n}w_{1n} + v_{2n}w_{2n}$. Since $(\bar{X}, S_n^2)$ converges to $(\mu, \sigma^2)$ in probability, $(v_{1n}, v_{2n})$ converges to $(v_1, v_2)$ in probability, where

$$v_1 = -\frac{C_{pmk0}}{d}, \qquad v_2 = -\frac{9C_{pmk0}^3}{2d^2}, \qquad C_{pmk0} = \frac{d}{3\sqrt{\sigma^2 + (\mu - T)^2}}.$$

Define the function $G(x, y, z) = (x,\ z + xy - 2xT)$. Then by Lemma 4 and Lemma 6, $(w_{1n}, w_{2n}) = \sqrt{n}[G(\bar{X}, \bar{X}, S_n^2) - G(\mu, \mu, \sigma^2)]$ converges to $(w_1, w_2)$, which is distributed as $N((0, 0), \Sigma_G)$, where

$$\Sigma_G = \begin{bmatrix} 1 & 0 & 0 \\ a & \mu & 1 \end{bmatrix} \begin{bmatrix} \sigma^2 & \sigma^2 & m_3 \\ \sigma^2 & \sigma^2 & m_3 \\ m_3 & m_3 & b \end{bmatrix} \begin{bmatrix} 1 & a \\ 0 & \mu \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} \sigma^2 & c \\ c & e \end{bmatrix},$$

with $a = \mu - 2T$, $b = m_4 - \sigma^4$, $c = 2(\mu - T)\sigma^2 + m_3$, and $e = 4(\mu - T)^2\sigma^2 + 4(\mu - T)m_3 + (m_4 - \sigma^4)$. Hence, by Lemma 3, $(v_{1n}w_{1n},\ v_{2n}w_{2n})$ converges to

$$(v_1w_1,\ v_2w_2) = \left( -\frac{C_{pmk0}}{d}\,w_1,\ -\frac{9C_{pmk0}^3}{2d^2}\,w_2 \right)$$

in distribution. Define $H(x, y) = x + y$. Then, by Lemma 4, $\sqrt{n}(b_{n-1}^{-1}\tilde{C}_{pmk} - C_{pmk})$ converges to

$$Y = -\frac{C_{pmk0}}{d}\,w_1 - \frac{9C_{pmk0}^3}{2d^2}\,w_2,$$

which is normally distributed with $E(Y) = 0$ and

$$\mathrm{Var}(Y) = \frac{D_0^2}{9}\left[1 + \frac{(\mu - T)^2}{\sigma^2}\right]^{-1} + \frac{D_0}{3}\,\frac{m_3}{\sigma^3}\left[1 + \frac{(\mu - T)^2}{\sigma^2}\right]^{-3/2} C_{pmk0} + \frac{1}{4}\,\frac{m_4 - \sigma^4}{\sigma^4}\left[1 + \frac{(\mu - T)^2}{\sigma^2}\right]^{-2} C_{pmk0}^2,$$

where $D_0 = \frac{9(\mu - T)C_{pmk0}^2}{d} + 1$ and $C_{pmk0} = \frac{d}{3\sqrt{\sigma^2 + (\mu - T)^2}}$. Since

$$P\{\sqrt{n}(b_{n-1}^{-1}\tilde{C}_{pmk} - C_{pmk}) \leq r\} = P\{\mu \geq m\}\,P\{\sqrt{n}(b_{n-1}^{-1}\tilde{C}_{pmk} - C_{pmk}) \leq r \mid \mu \geq m\} + P\{\mu < m\}\,P\{\sqrt{n}(b_{n-1}^{-1}\tilde{C}_{pmk} - C_{pmk}) \leq r \mid \mu < m\}$$

for all real numbers $r$, it follows that $\sqrt{n}(b_{n-1}^{-1}\tilde{C}_{pmk} - C_{pmk})$ converges to $p \cdot N(0, \sigma^2_{pmk1}) + (1 - p) \cdot N(0, \sigma^2_{pmk2})$ in distribution. Since $\sqrt{n}(\tilde{C}_{pmk} - C_{pmk}) = \sqrt{n}(b_{n-1}^{-1}\tilde{C}_{pmk} - C_{pmk}) + \sqrt{n}(\tilde{C}_{pmk} - b_{n-1}^{-1}\tilde{C}_{pmk})$ and $b_{n-1}$ converges to 1 as $n \to \infty$, the theorem follows by Slutsky's theorem.

Corollary 3.1: The estimator $\tilde{C}_{pmk}$ is asymptotically unbiased.

Proof: From Theorem 3, $\sqrt{n}(\tilde{C}_{pmk} - C_{pmk})$ converges to $p \cdot N(0, \sigma^2_{pmk1}) + (1 - p) \cdot N(0, \sigma^2_{pmk2})$ in distribution. Therefore, $E\{\sqrt{n}(\tilde{C}_{pmk} - C_{pmk})\}$ converges to zero, and so $\tilde{C}_{pmk}$ must be asymptotically unbiased.

Corollary 3.2: If the process characteristic follows the normal distribution, then $\sqrt{n}(\tilde{C}_{pmk} - C_{pmk})$ converges to $p \cdot N(0, \sigma'^2_{pmk1}) + (1 - p) \cdot N(0, \sigma'^2_{pmk2})$, a mixture of two normal distributions, where

$$\sigma'^2_{pmk1} = \frac{D_1^2}{9}\left[1 + \frac{(\mu - T)^2}{\sigma^2}\right]^{-1} + \frac{1}{2}\left[1 + \frac{(\mu - T)^2}{\sigma^2}\right]^{-2} C_{pmk1}^2,$$

$$\sigma'^2_{pmk2} = \frac{D_2^2}{9}\left[1 + \frac{(\mu - T)^2}{\sigma^2}\right]^{-1} + \frac{1}{2}\left[1 + \frac{(\mu - T)^2}{\sigma^2}\right]^{-2} C_{pmk2}^2,$$

$$D_1 = \frac{9(\mu - T)C_{pmk1}^2}{d - (\mu - m)} + 1, \qquad C_{pmk1} = \frac{d - (\mu - m)}{3\sqrt{\sigma^2 + (\mu - T)^2}},$$

$$D_2 = \frac{9(\mu - T)C_{pmk2}^2}{d + (\mu - m)} - 1, \qquad C_{pmk2} = \frac{d + (\mu - m)}{3\sqrt{\sigma^2 + (\mu - T)^2}}.$$
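The limiting law above is an ordinary two-component scale mixture of normals, so one can sample from it directly, e.g., to tabulate approximate critical values by simulation. A minimal sketch with hypothetical values of $p$ and the two asymptotic standard deviations:

```python
import random

def mixture_draw(p, s1, s2, rng):
    """One draw from the limit p*N(0, s1^2) + (1-p)*N(0, s2^2)."""
    s = s1 if rng.random() < p else s2
    return rng.gauss(0.0, s)

rng = random.Random(7)
p, s1, s2 = 0.375, 0.6, 0.9        # hypothetical values
draws = [mixture_draw(p, s1, s2, rng) for _ in range(20000)]
var = sum(x * x for x in draws) / len(draws)
# the variance of the mixture is p*s1^2 + (1-p)*s2^2
expected = p * s1 ** 2 + (1 - p) * s2 ** 2
```

The sample variance of the draws should agree with $p\,s_1^2 + (1 - p)\,s_2^2$ up to Monte Carlo error.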


Corollary 3.3: For the case with $P(\mu \geq m) = 0$ or 1, (i) $\tilde{C}_{pmk}$ is the MLE of $C_{pmk}$, and (ii) $\tilde{C}_{pmk}$ is asymptotically efficient.

Proof: (i) For normal distributions, $(\bar{X}, S_n^2)$ is the MLE of $(\mu, \sigma^2)$. By the invariance property, $\tilde{C}_{pmk}$ is the MLE of $C_{pmk}$.

(ii) The Fisher information matrix can be calculated as:

$$I(\theta) = \begin{bmatrix} a' & b' \\ c' & d' \end{bmatrix} = \begin{bmatrix} \sigma^{-2} & 0 \\ 0 & (2\sigma^4)^{-1} \end{bmatrix},$$

where $\theta = (\mu, \sigma^2)$,

$$a' = E\left[\frac{\partial}{\partial \mu}\ln f(x;\theta)\right]^2, \quad b' = c' = E\left[\frac{\partial}{\partial \mu}\ln f(x;\theta)\,\frac{\partial}{\partial \sigma^2}\ln f(x;\theta)\right], \quad d' = E\left[\frac{\partial}{\partial \sigma^2}\ln f(x;\theta)\right]^2.$$

If $P(\mu \geq m) = 1$, then the information lower bound reduces to

$$\left( \frac{\partial C_{pmk}}{\partial \mu},\ \frac{\partial C_{pmk}}{\partial \sigma^2} \right) \frac{I^{-1}(\theta)}{n} \begin{pmatrix} \dfrac{\partial C_{pmk}}{\partial \mu} \\[6pt] \dfrac{\partial C_{pmk}}{\partial \sigma^2} \end{pmatrix} = \frac{D_1^2}{9n}\left\{1 + \frac{(\mu - T)^2}{\sigma^2}\right\}^{-1} + \frac{C_{pmk1}^2}{2n}\left\{1 + \frac{(\mu - T)^2}{\sigma^2}\right\}^{-2} = \frac{\sigma'^2_{pmk1}}{n}.$$

On the other hand, if $P(\mu \geq m) = 0$, then the information lower bound reduces to

$$\left( \frac{\partial C_{pmk}}{\partial \mu},\ \frac{\partial C_{pmk}}{\partial \sigma^2} \right) \frac{I^{-1}(\theta)}{n} \begin{pmatrix} \dfrac{\partial C_{pmk}}{\partial \mu} \\[6pt] \dfrac{\partial C_{pmk}}{\partial \sigma^2} \end{pmatrix} = \frac{D_2^2}{9n}\left\{1 + \frac{(\mu - T)^2}{\sigma^2}\right\}^{-1} + \frac{C_{pmk2}^2}{2n}\left\{1 + \frac{(\mu - T)^2}{\sigma^2}\right\}^{-2} = \frac{\sigma'^2_{pmk2}}{n}.$$

Since the information lower bound is achieved (Corollary 3.2), for the case with $P(\mu \geq m) = 0$ or 1, $\tilde{C}_{pmk}$ is asymptotically efficient.

In practice, to evaluate the estimator $\tilde{C}_{pmk}$ we need to determine the value of the indicator, which additionally requires knowledge of $P(\mu \geq m)$ or $P(\mu < m)$. If historical information on the process shows $P(\mu \geq m) = p$, then we may determine the value $I_A(\mu) = 1$ or $-1$ using available random number tables. For example, if $p = 0.375$ is given, then $I_A(\mu) = 1$ if the generated 3-digit random number is no greater than 375, and $I_A(\mu) = -1$ otherwise.
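With a software random number generator in place of a random number table, this randomization is a single Bernoulli draw; a sketch (the function name is illustrative, with $p = 0.375$ as in the example above):

```python
import random

def draw_indicator(p, rng):
    """Return I_A(mu): +1 with probability p = P(mu >= m), otherwise -1.
    Mirrors the 3-digit random number rule: draw from 000-999 and
    compare against the threshold 1000*p."""
    r = rng.randrange(1000)          # generated 3-digit random number
    return 1 if r < 1000 * p else -1

rng = random.Random(0)
draws = [draw_indicator(0.375, rng) for _ in range(10000)]
share_plus = draws.count(1) / len(draws)   # should be near 0.375
```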


4. Conclusions

Pearn et al. (1992) proposed the capability index $C_{pmk}$, which is designed to monitor normal and near-normal processes. The index $C_{pmk}$ is considered to be the most useful index to date for processes with two-sided specification limits. Pearn et al. (1992) investigated the statistical properties of the natural estimator $\hat{C}_{pmk}$ for stable normal processes with constant mean $\mu$. In this paper, we considered stable processes under a different (more realistic) condition where the process mean may not be a constant. For stable processes under such conditions with given knowledge of $P(\mu \geq m) = p$, $0 \leq p \leq 1$, we investigated a new estimator $\tilde{C}_{pmk}$ using the given information.

We obtained the exact distribution of the new estimator, and derived its expected value and variance under the normality assumption. For cases with $P(\mu \geq m) = 0$ or 1, we showed that the new estimator $\tilde{C}_{pmk}$ is the MLE of $C_{pmk}$. In addition, we showed that under general conditions $\tilde{C}_{pmk}$ is consistent and asymptotically unbiased. We also showed that the asymptotic distribution of $\tilde{C}_{pmk}$ is a mixture of two normal distributions. The results obtained in this paper allow us to perform a more accurate capability measure for processes under more realistic conditions, in which using the existing method (estimator) is inappropriate.

References

[1] Boyles RA (1994) Process capability with asymmetric tolerances. Communications in Statistics: Simulation and Computation 23(3):615–643

[2] Chan LK, Cheng SW, Spiring FA (1988) A new measure of process capability: $C_{pm}$. Journal of Quality Technology 20(3):162–175

[3] Chen SM, Hsu NF (1995) The asymptotic distribution of the process capability index $C_{pmk}$. Communications in Statistics: Theory and Methods 24(5):1279–1291

[4] Kane VE (1986) Process capability indices. Journal of Quality Technology 18(1):41–52

[5] Pearn WL, Kotz S, Johnson NL (1992) Distributional and inferential properties of process capability indices. Journal of Quality Technology 24(4):216–233

[6] Pearn WL, Chen KS (1996) A Bayesian-like estimator of $C_{pk}$. Communications in Statistics: Simulation and Computation 25(2):321–329

[7] Serfling RJ (1980) Approximation theorems of mathematical statistics. John Wiley and Sons, New York

[8] Wright PA (1995) A process capability index sensitive to skewness. Communications in Statistics: Simulation and Computation 52:195–203
