On the common mean of several inverse Gaussian distributions based on a higher order likelihood method

S.H. Lin a,*, I.M. Wu b

a Department of Insurance and Finance, National Taichung Institute of Technology, 129, Sec. 3, Sanmin Road, Taichung 404, Taiwan
b Institute of Statistics, National Chiao Tung University, Hsinchu, Taiwan

* Corresponding author. E-mail address: suelin@ntit.edu.tw (S.H. Lin).

Article info

Keywords: Coverage probability; Expected length; Inverse Gaussian; Signed log-likelihood ratio statistic; Type I errors

Abstract

An interval estimation method for the common mean of several heterogeneous inverse Gaussian (IG) populations is discussed. The proposed method is based on a higher order likelihood-based procedure. The merits of the proposed method are numerically compared with the signed log-likelihood ratio statistic, two generalized pivotal quantities and the simple t-test method with respect to their expected lengths, coverage probabilities and type I errors. Numerical studies show that the coverage probabilities of the proposed method are very accurate and the type I errors are close to the nominal level .05 even for very small samples. The methods are also illustrated with two examples.

© 2010 Elsevier Inc. All rights reserved.

1. Introduction

In many application areas, such as demography, management science, hydrology and finance, data are frequently positive and right-skewed. In the past four decades, the inverse Gaussian (IG) distribution has drawn much attention, and inference for the IG distribution has also grown rapidly, because the IG is an ideal candidate for modeling and analyzing right-skewed, positive data. For instance, Wise [1,2] and Wise et al. [3] developed the IG population as a possible model to describe the cycle time distribution for particles in the blood, and Lancaster [4] made use of the IG distribution in describing strike duration data. Furthermore, the IG distribution can also serve as a convenient prior for scale in Bayesian approaches to estimation with assumed Gaussian data [5]. The IG distribution can not only accommodate a variety of shapes, from highly skewed to almost normal, but also shares many elegant and convenient properties with Gaussian models; e.g., the associated inference methods are based on the well-known t, χ², and F distributions, as in the normal case. See Chhikara and Folks [6], Seshadri [7,8] and Mudholkar and Tian [9] for more details of Gaussian and IG analogies.

The probability density function (pdf) of the IG distribution, IG(μ, λ), is defined as

f(x; \mu, \lambda) = \left(\frac{\lambda}{2\pi x^{3}}\right)^{1/2} \exp\left\{-\frac{\lambda}{2\mu^{2}x}(x-\mu)^{2}\right\}, \quad x > 0,\ \mu > 0,\ \lambda > 0, \qquad (1.1)

where μ is the mean parameter and λ is the scale parameter. The inference methods of the IG model are closely analogous to those of the Gaussian model; for example, a very common problem in applied fields is to compare the means of several Gaussian populations, i.e.

H_0: \mu_1 = \mu_2 = \cdots = \mu_I \quad \text{vs.} \quad H_1: \text{not all } \mu_i\text{'s are equal}.


If the variance of each population is homogeneous, the analysis of variance (ANOVA) can be used to perform the test. Similarly, the analysis of reciprocals (ANORE) can be used to test the equality of means of several IG samples if all scale parameters among groups are assumed to be equal [6]. When the scale parameters are non-homogeneous, the ANORE fails to solve the problem. Tian [10] proposed a method to test the equality of IG means under heterogeneity, based on a generalized test variable. However, when the null hypothesis is not rejected, the inferences for the common mean remain unsolved. Recently, Ye et al. [11] proposed a mixture method for the common mean problem based on generalized inference and large sample theory. However, as the authors have mentioned, if the sample sizes nᵢ are not large and/or the scale parameter λᵢ is not large compared to μᵢ, the approximate distributions do not fit well. Therefore, an alternative method which can be applied to general cases deserves further research.

In this paper, we will estimate and construct the 100(1 − α)% confidence interval for the common mean of several non-homogeneous IG populations based on a higher order likelihood-based method. This method, in theory, has higher order accuracy, O(n^{−3/2}), even when the sample size is small. Reid [12] provided a review and an annotated bibliography of the development of this method. The method has been applied to solve many practical problems involving interval estimation for a skewed distribution; e.g., Wu et al. [13] presented a confidence interval for a log-normal mean based on this method; Wu and Wong [14] used the method to improve the interval estimation for the two-parameter Birnbaum-Saunders distribution; and Tian and Wilding [15] used the method to construct a confidence interval for the ratio of means of two independent IG distributions. In our case, the likelihood-based method also gives a satisfactory result for the problem of interval estimation for the common mean of several IG distributions.

The remainder of this article is organized as follows. In Section 2, we briefly introduce the properties of the IG distribution and the concepts of the signed log-likelihood ratio statistic and a higher order asymptotic method. The method is then applied to construct a confidence interval for the common mean of several independent IG populations in Section 3. The generalized inference approach and the classical procedure under the assumption of identical scale are also described in Section 3. We present several simulation studies and two numerical examples in Section 4 to illustrate the merits of our proposed method. Some concluding remarks are given in Section 5.

2. A general review

2.1. Some properties of IG distribution

For a random sample of n observations x₁, x₂, ..., xₙ from IG(μ, λ), the uniformly minimum variance unbiased estimators (UMVUEs) of μ and λ⁻¹ are

\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i \quad \text{and} \quad W = \frac{1}{n-1}\sum_{i=1}^{n}\left(\frac{1}{x_i} - \frac{1}{\bar{x}}\right),

respectively, and a minimal sufficient statistic of (μ, λ) is \left(\sum_{i=1}^{n} x_i, \sum_{i=1}^{n} x_i^{-1}\right). It is easy to verify that

\bar{x} \sim IG(\mu, n\lambda) \quad \text{and} \quad (n-1)W \sim \frac{1}{\lambda}\chi^{2}_{n-1}, \qquad (2.1)

and that these two statistics are independently distributed.

Remark 1. Let x ~ IG(μ, λ) and K ~ (1/λ)χ²ₙ be two independent random variables; then λ(x − μ)²/(μ²x) ~ χ²₁ and its distribution is independent of λK ~ χ²ₙ. Let M = √n (x − μ)/{μ(xK)^{1/2}}; then the distribution of |M| is the truncated Student's t variable with n degrees of freedom and M² has the F distribution with 1 and n degrees of freedom [6].

From (2.1) and Remark 1, we know that nλ(x̄ − μ)²/(μ²x̄) ~ χ²₁, which is independent of (n − 1)W ~ (1/λ)χ²_{n−1}. Let U = √n (x̄ − μ)/{μ(x̄W)^{1/2}}; then the distribution of |U| is the truncated Student's t with n − 1 degrees of freedom and U² ~ F_{1,n−1}.
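As a quick numerical check of these facts (our own illustration, not part of the original paper), the following Python sketch simulates IG samples with NumPy's Generator.wald, whose (mean, scale) arguments we take to correspond to (μ, λ) of IG(μ, λ), computes the UMVUEs x̄ and W, and compares their Monte Carlo moments with those implied by (2.1).

```python
import numpy as np

def ig_umvues(x):
    """UMVUEs of mu and 1/lambda for an IG(mu, lambda) sample (Section 2.1)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    xbar = x.mean()                                # UMVUE of mu
    w = np.sum(1.0 / x - 1.0 / xbar) / (n - 1)     # UMVUE of 1/lambda
    return xbar, w

rng = np.random.default_rng(0)
mu, lam, n, reps = 1.0, 2.0, 15, 20000

# Monte Carlo check of (2.1): xbar ~ IG(mu, n*lambda), (n-1)*W ~ chi2_{n-1}/lambda.
xbars, ws = [], []
for _ in range(reps):
    x = rng.wald(mu, lam, size=n)                  # assumed: wald(mean, scale) = IG(mu, lambda)
    xb, w = ig_umvues(x)
    xbars.append(xb)
    ws.append(w)
xbars, ws = np.array(xbars), np.array(ws)

print("E[xbar]        ", xbars.mean(), " target mu =", mu)
print("Var[xbar]      ", xbars.var(), " target mu^3/(n*lam) =", mu**3 / (n * lam))
print("E[(n-1)*lam*W] ", (n - 1) * lam * ws.mean(), " target n-1 =", n - 1)
```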

2.2. The likelihood-based inference

Let x = (x₁, x₂, ..., xₙ) be an independent sample from some distribution and l(θ) = l(θ; x) be the log-likelihood function based on the sample data. Suppose θ is the p-dimensional vector of parameters, which can be partitioned into (μ, λ), with μ being the parameter of interest with dimension 1 and λ being the nuisance parameter with dimension p − 1. The signed log-likelihood ratio statistic r(μ) for inference on μ is defined as

r(\mu) = \operatorname{sgn}(\hat{\mu} - \mu)\left\{2\left[l(\hat{\theta}) - l(\hat{\theta}_{\mu})\right]\right\}^{1/2}, \qquad (2.2)

where θ̂ = (μ̂, λ̂) is the overall maximum likelihood estimator (MLE) of θ and θ̂_μ = (μ, λ̂_μ) is the constrained MLE of θ for a given μ. Cox and Hinkley [16] verified that r(μ) is asymptotically distributed as the standard normal distribution with first-order accuracy O(n^{−1/2}). A 100(1 − α)% confidence interval for μ based on r(μ) can be obtained as

\{\mu : |r(\mu)| \le z_{\alpha/2}\}, \qquad (2.3)

where z_{α/2} is the 100(1 − α/2)th percentile of the standard normal distribution. Since the log-likelihood ratio statistic is quite inaccurate when the sample size is small, Barndorff-Nielsen [17,18] proposed a higher order likelihood-based method, which modifies r(μ) to

r^{*}(\mu) = r(\mu) + r(\mu)^{-1}\log\left\{\frac{q(\mu)}{r(\mu)}\right\}, \qquad (2.4)

where r(μ) is the signed log-likelihood ratio statistic and q(μ) is a statistic which can be expressed in various forms depending on the information available. A widely applicable formula for q(μ) that ensures the O(n^{−3/2}) accuracy, provided by Fraser et al. [19], is defined as

q(\mu) = \frac{\left|\, l_{;V}(\hat{\theta}) - l_{;V}(\hat{\theta}_{\mu}) \quad l_{\lambda;V}(\hat{\theta}_{\mu}) \,\right|}{\left|\, l_{\theta;V}(\hat{\theta}) \,\right|}\left\{\frac{\left|j_{\theta\theta}(\hat{\theta})\right|}{\left|j_{\lambda\lambda}(\hat{\theta}_{\mu})\right|}\right\}^{1/2}, \qquad (2.5)

where j_{θθ}(θ̂) is the p × p observed information matrix and j_{λλ}(θ̂_μ) is the (p − 1) × (p − 1) observed nuisance information matrix. The vector array V = (v₁', ..., v_p') in (2.5) is obtained from a vector pivotal quantity R(x; θ) = (R₁(x; θ), ..., Rₙ(x; θ)) by

V = -\left(\frac{\partial R(x;\theta)}{\partial x}\right)^{-1}\left(\frac{\partial R(x;\theta)}{\partial \theta}\right)\bigg|_{\hat{\theta}}, \qquad (2.6)

where the distribution of Rᵢ(x; θ) is free of the nuisance parameters λ. The quantity l_{;V}(θ) is the likelihood gradient, with

l_{;V}(\theta) = \left(\frac{d}{dv_1}l(\theta;x), \ldots, \frac{d}{dv_p}l(\theta;x)\right) = \left\{\sum_{j=1}^{n} l_{;x_j}(\theta)\, v_{1,j}, \ldots, \sum_{j=1}^{n} l_{;x_j}(\theta)\, v_{p,j}\right\}, \qquad (2.7)

where \frac{d}{dv_i}l(\theta;x) is the directional derivative of the log-likelihood function along v_i = \{v_{i,1}, \ldots, v_{i,n}\}, i = 1, \ldots, p, and l_{;x_j}(\theta) = \partial l(\theta)/\partial x_j, j = 1, \ldots, n. Moreover,

l_{;V}(\hat{\theta}_{\mu}) = \frac{\partial l(\theta)}{\partial V}\bigg|_{\hat{\theta}_{\mu}}, \qquad l_{\lambda;V}(\hat{\theta}_{\mu}) = \frac{\partial l_{;V}(\theta)}{\partial \lambda}\bigg|_{\hat{\theta}_{\mu}} \quad \text{and} \quad l_{\theta;V}(\hat{\theta}) = \frac{\partial l_{;V}(\theta)}{\partial \theta}\bigg|_{\hat{\theta}}.

Note that r*(μ) achieves third-order accuracy to a standard normal distribution [19]. Hence a 100(1 − α)% confidence interval for μ based on r*(μ) is

\{\mu : |r^{*}(\mu)| \le z_{\alpha/2}\}. \qquad (2.8)
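To make this machinery concrete, here is a minimal one-sample sketch (ours; the paper's actual implementation is the multi-population one of Section 3). For a single IG sample, both the overall and the constrained MLEs of λ are available in closed form, so r(μ) in (2.2) can be evaluated directly and the first-order interval (2.3) inverted by root finding; the bracketing limits used below are ad hoc assumptions.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def ig_loglik(x, mu, lam):
    """IG(mu, lambda) log-likelihood of a sample x, cf. (1.1)."""
    x = np.asarray(x, dtype=float)
    return np.sum(0.5 * np.log(lam / (2.0 * np.pi * x**3))
                  - lam * (x - mu)**2 / (2.0 * mu**2 * x))

def signed_llr(x, mu):
    """Signed log-likelihood ratio r(mu) of (2.2) for a single IG sample."""
    x = np.asarray(x, dtype=float)
    n = x.size
    mu_hat = x.mean()                                  # overall MLE of mu
    lam_hat = n / np.sum(1.0 / x - 1.0 / mu_hat)       # overall MLE of lambda
    lam_mu = n * mu**2 / np.sum((x - mu)**2 / x)       # constrained MLE of lambda given mu
    dev = 2.0 * (ig_loglik(x, mu_hat, lam_hat) - ig_loglik(x, mu, lam_mu))
    return np.sign(mu_hat - mu) * np.sqrt(max(dev, 0.0))

def first_order_ci(x, alpha=0.05):
    """Invert (2.3), {mu : |r(mu)| <= z_{alpha/2}}, by root finding.

    Assumes both limits are finite and the upper limit lies below 50 * xbar.
    """
    z = norm.ppf(1.0 - alpha / 2.0)
    mu_hat = float(np.mean(x))
    lower = brentq(lambda m: signed_llr(x, m) - z, 1e-8, mu_hat)
    upper = brentq(lambda m: signed_llr(x, m) + z, mu_hat, 50.0 * mu_hat)
    return lower, upper

rng = np.random.default_rng(1)
x = rng.wald(1.0, 2.0, size=10)
print(first_order_ci(x))
```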

3. Inferences for the common mean of several independent IG populations

3.1. The likelihood-based confidence interval in the general case

Suppose x_i = (x_{i1}, x_{i2}, ..., x_{in_i}), i = 1, 2, ..., I, are I random samples from IG(μ, λᵢ) populations. The parameter vector θ = (μ, λ₁, ..., λ_I) contains μ, the parameter of interest, and (λ₁, ..., λ_I), the nuisance parameters. The log-likelihood function is

l(\theta; x_1, x_2, \ldots, x_I) = \frac{1}{2}\sum_{i=1}^{I} n_i \log\frac{\lambda_i}{2\pi} - \frac{3}{2}\sum_{i=1}^{I}\sum_{j=1}^{n_i}\log x_{ij} - \frac{1}{2\mu^{2}}\sum_{i=1}^{I}\sum_{j=1}^{n_i}\lambda_i x_{ij} + \frac{1}{\mu}\sum_{i=1}^{I} n_i\lambda_i - \frac{1}{2}\sum_{i=1}^{I}\sum_{j=1}^{n_i}\frac{\lambda_i}{x_{ij}}. \qquad (3.1)

Differentiating the log-likelihood function (3.1) with respect to θ once yields the following results:

\frac{\partial l(\theta)}{\partial \mu} = -\frac{1}{\mu^{2}}\sum_{i=1}^{I} n_i\lambda_i + \frac{1}{\mu^{3}}\sum_{i=1}^{I}\sum_{j=1}^{n_i}\lambda_i x_{ij}, \qquad
\frac{\partial l(\theta)}{\partial \lambda_i} = \frac{n_i}{2\lambda_i} + \frac{n_i}{\mu} - \frac{1}{2}\sum_{j=1}^{n_i}\frac{1}{x_{ij}} - \frac{1}{2\mu^{2}}\sum_{j=1}^{n_i} x_{ij}, \quad i = 1, \ldots, I. \qquad (3.2)

The overall MLEs θ̂ = (μ̂, λ̂₁, ..., λ̂_I) can be uniquely obtained by solving the non-linear system (3.2) simultaneously. Furthermore, the constrained MLEs θ̂_μ = (μ, λ̂_{μ,1}, ..., λ̂_{μ,I}) for a given μ are

\hat{\lambda}_{\mu,i} = \frac{n_i\mu^{2}}{\sum_{j=1}^{n_i} x_{ij} - 2n_i\mu + \mu^{2}\sum_{j=1}^{n_i} x_{ij}^{-1}} = \frac{n_i\mu^{2}}{\sum_{j=1}^{n_i}(x_{ij}-\mu)^{2}/x_{ij}}, \quad i = 1, \ldots, I. \qquad (3.3)
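Since the constrained MLEs (3.3) are available in closed form, the overall MLE μ̂ can also be computed by maximizing the profile log-likelihood l(μ, λ̂_{μ,1}, ..., λ̂_{μ,I}) over μ alone, rather than solving the full system (3.2). The following sketch (ours, with an ad hoc search interval) illustrates this.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def common_mean_mles(samples):
    """Overall MLEs (mu_hat, lambda_hat_i) for several IG(mu, lambda_i) samples."""
    samples = [np.asarray(x, dtype=float) for x in samples]

    def lam_given_mu(x, mu):
        return x.size * mu**2 / np.sum((x - mu)**2 / x)    # constrained MLE (3.3)

    def profile_negloglik(mu):
        total = 0.0
        for x in samples:
            lam = lam_given_mu(x, mu)
            total += np.sum(0.5 * np.log(lam / (2.0 * np.pi * x**3))
                            - lam * (x - mu)**2 / (2.0 * mu**2 * x))
        return -total

    grand_mean = np.mean(np.concatenate(samples))
    res = minimize_scalar(profile_negloglik,
                          bounds=(1e-6, 10 * grand_mean), method="bounded")
    mu_hat = res.x
    return mu_hat, [lam_given_mu(x, mu_hat) for x in samples]

rng = np.random.default_rng(2)
data = [rng.wald(1.0, lam, size=n) for lam, n in [(0.2, 5), (1.0, 6), (10.0, 7)]]
print(common_mean_mles(data))
```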

Choosing a vector pivotal quantity R = {R₁₁, ..., R_{In_I}} with R_{ij} = λᵢ(x_{ij} − μ)²/(μ²x_{ij}), i = 1, ..., I, j = 1, ..., nᵢ, we have R_{ij} ~ χ²₁, with

\frac{\partial R_{ij}}{\partial x_{ij}} = \frac{\lambda_i(x_{ij}^{2} - \mu^{2})}{\mu^{2} x_{ij}^{2}}, \quad \frac{\partial R_{ij}}{\partial x_{kl}} = 0 \ \text{otherwise}; \qquad
\frac{\partial R_{ij}}{\partial \mu} = -\frac{2\lambda_i(x_{ij} - \mu)}{\mu^{3}}; \qquad
\frac{\partial R_{ij}}{\partial \lambda_i} = \frac{(x_{ij} - \mu)^{2}}{\mu^{2} x_{ij}}, \quad \frac{\partial R_{ij}}{\partial \lambda_k} = 0 \ \text{for } k \ne i. \qquad (3.4)

Furthermore, V = (v_1', \ldots, v_{I+1}') = -\left(\frac{\partial R}{\partial x}\right)^{-1}\left(\frac{\partial R}{\partial \theta}\right)\Big|_{\hat{\theta}} with

v_1 = \left(\frac{2x_{11}^{2}}{\hat{\mu}(x_{11}+\hat{\mu})}, \ldots, \frac{2x_{1n_1}^{2}}{\hat{\mu}(x_{1n_1}+\hat{\mu})}, \ldots, \frac{2x_{I1}^{2}}{\hat{\mu}(x_{I1}+\hat{\mu})}, \ldots, \frac{2x_{In_I}^{2}}{\hat{\mu}(x_{In_I}+\hat{\mu})}\right),

v_{i+1} = \Big(\underbrace{0,\ldots,0}_{n_1},\ \ldots,\ \underbrace{0,\ldots,0}_{n_{i-1}},\ -\frac{x_{i1}(x_{i1}-\hat{\mu})}{\hat{\lambda}_i(x_{i1}+\hat{\mu})}, \ldots, -\frac{x_{in_i}(x_{in_i}-\hat{\mu})}{\hat{\lambda}_i(x_{in_i}+\hat{\mu})},\ \underbrace{0,\ldots,0}_{n_{i+1}},\ \ldots,\ \underbrace{0,\ldots,0}_{n_I}\Big), \quad i = 1, \ldots, I.

The likelihood gradients l_{;V}(θ) = {d/dv₁ l(θ; x), ..., d/dv_{I+1} l(θ; x)}, l_{λ;V}(θ) and l_{θ;V}(θ) follow from (2.7): the kth component of l_{;V}(θ) is \sum_{i=1}^{I}\sum_{j=1}^{n_i} l_{;x_{ij}}(\theta)\, v_{k,(ij)}, and l_{λ;V}(θ) and l_{θ;V}(θ) are obtained by replacing l_{;x_{ij}}(θ) with its derivatives with respect to the λ's and with respect to θ, respectively, where v_{k,(ij)} denotes the entry of v_k associated with observation x_{ij} and

l_{;x_{ij}}(\theta) = \frac{\lambda_i}{2x_{ij}^{2}} - \frac{\lambda_i}{2\mu^{2}} - \frac{3}{2x_{ij}}, \qquad
\frac{\partial l_{;x_{ij}}(\theta)}{\partial \mu} = \frac{\lambda_i}{\mu^{3}}, \qquad
\frac{\partial l_{;x_{ij}}(\theta)}{\partial \lambda_m} = \left(\frac{1}{2x_{ij}^{2}} - \frac{1}{2\mu^{2}}\right)\mathbf{1}\{m = i\}.

The observed Fisher information matrix and the observed nuisance information matrix are

j_{\theta\theta}(\theta) =
\begin{pmatrix}
\sum_{i=1}^{I}\sum_{j=1}^{n_i}\frac{3\lambda_i x_{ij}}{\mu^{4}} - \sum_{i=1}^{I}\frac{2\lambda_i n_i}{\mu^{3}} & \frac{n_1}{\mu^{2}} - \sum_{j=1}^{n_1}\frac{x_{1j}}{\mu^{3}} & \cdots & \frac{n_I}{\mu^{2}} - \sum_{j=1}^{n_I}\frac{x_{Ij}}{\mu^{3}} \\
\frac{n_1}{\mu^{2}} - \sum_{j=1}^{n_1}\frac{x_{1j}}{\mu^{3}} & \frac{n_1}{2\lambda_1^{2}} & & 0 \\
\vdots & & \ddots & \\
\frac{n_I}{\mu^{2}} - \sum_{j=1}^{n_I}\frac{x_{Ij}}{\mu^{3}} & 0 & & \frac{n_I}{2\lambda_I^{2}}
\end{pmatrix}
\quad \text{and} \quad
j_{\lambda\lambda}(\theta) = \operatorname{diag}\left(\frac{n_1}{2\lambda_1^{2}}, \ldots, \frac{n_I}{2\lambda_I^{2}}\right),

respectively. Applying the above quantities to (2.5) and (2.4), q(μ) and then r*(μ) are obtained.

Although the confidence intervals based on r(μ) and r*(μ) can be obtained in this way, in general a simple numerical iteration procedure is needed to solve for the upper and lower confidence limits. In this paper we use the so-called secant method; the algorithm is summarized as follows:

Step 1: Give the tolerance ε for the purpose of accuracy.
Step 2: Select δ for the purpose of numerical differentiation.
Step 3: Give the initial estimate μ₀ to start the iteration.
Step 4: Compute

\mu_1 = \mu_0 + \frac{Z_{\alpha/2} - r(\mu_0)}{\left[r(\mu_0 + \delta) - r(\mu_0 - \delta)\right]/(2\delta)}. \qquad (3.5)

Step 5: If |μ₁ − μ₀| > ε, replace μ₀ with μ₁ and return to Step 4; otherwise take the latest μ₁ as the lower confidence limit of the 100(1 − α)% confidence interval.

Replacing Z_{α/2} with Z_{1−α/2} in (3.5), we can obtain the upper confidence limit for the 100(1 − α)% confidence interval of the common mean μ. Similarly, the confidence interval based on r*(μ) can be obtained by substituting r*(μ) for r(μ) in (3.5).

3.1.1. The likelihood-based confidence interval when I = 2

In order to present the proposed method in detail, we give the derivation of the confidence interval for the common mean of two independent IG populations. The log-likelihood function based on the observations is

l(\theta; x_1, x_2) = \frac{n_1}{2}\log\frac{\lambda_1}{2\pi} - \frac{3}{2}\sum_{i=1}^{n_1}\log x_{1i} - \frac{\lambda_1}{2\mu^{2}}\sum_{i=1}^{n_1} x_{1i} + \frac{\lambda_1 n_1}{\mu} - \frac{\lambda_1}{2}\sum_{i=1}^{n_1}\frac{1}{x_{1i}}
+ \frac{n_2}{2}\log\frac{\lambda_2}{2\pi} - \frac{3}{2}\sum_{i=1}^{n_2}\log x_{2i} - \frac{\lambda_2}{2\mu^{2}}\sum_{i=1}^{n_2} x_{2i} + \frac{\lambda_2 n_2}{\mu} - \frac{\lambda_2}{2}\sum_{i=1}^{n_2}\frac{1}{x_{2i}}. \qquad (3.6)

Take R_{ij} = λᵢ(x_{ij} − μ)²/(μ²x_{ij}) to be the pivotal quantity, as mentioned earlier, and differentiate R_{ij} with respect to x and θ; we then have

\left(\frac{\partial R}{\partial x}\right)^{-1} = \operatorname{diag}\left(\frac{\mu^{2} x_{11}^{2}}{\lambda_1(x_{11}^{2}-\mu^{2})}, \ldots, \frac{\mu^{2} x_{1n_1}^{2}}{\lambda_1(x_{1n_1}^{2}-\mu^{2})}, \frac{\mu^{2} x_{21}^{2}}{\lambda_2(x_{21}^{2}-\mu^{2})}, \ldots, \frac{\mu^{2} x_{2n_2}^{2}}{\lambda_2(x_{2n_2}^{2}-\mu^{2})}\right)

and \frac{\partial R}{\partial \theta} = (r_1', r_2', r_3'), with diag(·) denoting a diagonal matrix and

r_1 = \left(-\frac{2\lambda_1(x_{11}-\mu)}{\mu^{3}}, \ldots, -\frac{2\lambda_1(x_{1n_1}-\mu)}{\mu^{3}}, -\frac{2\lambda_2(x_{21}-\mu)}{\mu^{3}}, \ldots, -\frac{2\lambda_2(x_{2n_2}-\mu)}{\mu^{3}}\right),

r_2 = \left(\frac{(x_{11}-\mu)^{2}}{\mu^{2} x_{11}}, \ldots, \frac{(x_{1n_1}-\mu)^{2}}{\mu^{2} x_{1n_1}}, 0, \ldots, 0\right), \qquad
r_3 = \left(0, \ldots, 0, \frac{(x_{21}-\mu)^{2}}{\mu^{2} x_{21}}, \ldots, \frac{(x_{2n_2}-\mu)^{2}}{\mu^{2} x_{2n_2}}\right).

So V = (v_1', v_2', v_3') = -\left(\frac{\partial R}{\partial x}\right)^{-1}\left(\frac{\partial R}{\partial \theta}\right)\Big|_{\hat{\theta}} is obtained with

v_1 = \left(\frac{2x_{11}^{2}}{\hat{\mu}(x_{11}+\hat{\mu})}, \ldots, \frac{2x_{1n_1}^{2}}{\hat{\mu}(x_{1n_1}+\hat{\mu})}, \frac{2x_{21}^{2}}{\hat{\mu}(x_{21}+\hat{\mu})}, \ldots, \frac{2x_{2n_2}^{2}}{\hat{\mu}(x_{2n_2}+\hat{\mu})}\right),

v_2 = \left(-\frac{x_{11}(x_{11}-\hat{\mu})}{\hat{\lambda}_1(x_{11}+\hat{\mu})}, \ldots, -\frac{x_{1n_1}(x_{1n_1}-\hat{\mu})}{\hat{\lambda}_1(x_{1n_1}+\hat{\mu})}, 0, \ldots, 0\right), \qquad
v_3 = \left(0, \ldots, 0, -\frac{x_{21}(x_{21}-\hat{\mu})}{\hat{\lambda}_2(x_{21}+\hat{\mu})}, \ldots, -\frac{x_{2n_2}(x_{2n_2}-\hat{\mu})}{\hat{\lambda}_2(x_{2n_2}+\hat{\mu})}\right). \qquad (3.7)

Moreover, write a_{ki} = \frac{2x_{ki}^{2}}{\hat{\mu}(x_{ki}+\hat{\mu})} and b_{ki} = \frac{x_{ki}(\hat{\mu}-x_{ki})}{\hat{\lambda}_k(x_{ki}+\hat{\mu})} for the entries of v₁ and v_{k+1} in (3.7), k = 1, 2, and let g_{ki}(\theta) = \frac{\lambda_k}{2x_{ki}^{2}} - \frac{\lambda_k}{2\mu^{2}} - \frac{3}{2x_{ki}} and c_{ki}(\theta) = \frac{1}{2x_{ki}^{2}} - \frac{1}{2\mu^{2}}. Then the likelihood gradients are

l_{;V}(\theta) = \begin{pmatrix} \sum_{i=1}^{n_1} a_{1i}\, g_{1i}(\theta) + \sum_{i=1}^{n_2} a_{2i}\, g_{2i}(\theta) \\ \sum_{i=1}^{n_1} b_{1i}\, g_{1i}(\theta) \\ \sum_{i=1}^{n_2} b_{2i}\, g_{2i}(\theta) \end{pmatrix}, \qquad
l_{\lambda;V}(\theta) = \begin{pmatrix} \sum_{i=1}^{n_1} a_{1i}\, c_{1i}(\theta) & \sum_{i=1}^{n_2} a_{2i}\, c_{2i}(\theta) \\ \sum_{i=1}^{n_1} b_{1i}\, c_{1i}(\theta) & 0 \\ 0 & \sum_{i=1}^{n_2} b_{2i}\, c_{2i}(\theta) \end{pmatrix},

l_{\theta;V}(\theta) = \begin{pmatrix} \sum_{k=1}^{2}\sum_{i=1}^{n_k} a_{ki}\,\frac{\lambda_k}{\mu^{3}} & \sum_{i=1}^{n_1} a_{1i}\, c_{1i}(\theta) & \sum_{i=1}^{n_2} a_{2i}\, c_{2i}(\theta) \\ \sum_{i=1}^{n_1} b_{1i}\,\frac{\lambda_1}{\mu^{3}} & \sum_{i=1}^{n_1} b_{1i}\, c_{1i}(\theta) & 0 \\ \sum_{i=1}^{n_2} b_{2i}\,\frac{\lambda_2}{\mu^{3}} & 0 & \sum_{i=1}^{n_2} b_{2i}\, c_{2i}(\theta) \end{pmatrix}.

Furthermore, the observed Fisher information matrix and the observed nuisance information matrix are

j_{\theta\theta}(\theta) = \begin{pmatrix} \frac{3\lambda_1 s_1}{\mu^{4}} - \frac{2\lambda_1 n_1}{\mu^{3}} + \frac{3\lambda_2 s_2}{\mu^{4}} - \frac{2\lambda_2 n_2}{\mu^{3}} & \frac{n_1}{\mu^{2}} - \frac{s_1}{\mu^{3}} & \frac{n_2}{\mu^{2}} - \frac{s_2}{\mu^{3}} \\ \frac{n_1}{\mu^{2}} - \frac{s_1}{\mu^{3}} & \frac{n_1}{2\lambda_1^{2}} & 0 \\ \frac{n_2}{\mu^{2}} - \frac{s_2}{\mu^{3}} & 0 & \frac{n_2}{2\lambda_2^{2}} \end{pmatrix}
\quad \text{and} \quad
j_{\lambda\lambda}(\theta) = \begin{pmatrix} \frac{n_1}{2\lambda_1^{2}} & 0 \\ 0 & \frac{n_2}{2\lambda_2^{2}} \end{pmatrix},

respectively, where s_1 = \sum_{i=1}^{n_1} x_{1i} and s_2 = \sum_{i=1}^{n_2} x_{2i}. Finally, r*(μ) is obtained by applying these quantities to (2.5) and (2.4).

3.2. The generalized inference method [11]

Ye et al. [11] proposed a mixture of the generalized inference method and large sample theory. Their procedures for deriving two generalized pivotal quantities, T̃₁ and T̃₂, are briefly introduced below.

Suppose x_i = (x_{i1}, x_{i2}, ..., x_{in_i}), i = 1, 2, ..., I, are samples from I independent IG populations with parameters (μ, λᵢ). From (2.1), we know that \bar{x}_i = \frac{1}{n_i}\sum_{j=1}^{n_i} x_{ij} \sim IG(\mu, n_i\lambda_i) and n_i\lambda_i V_i \sim \chi^{2}_{n_i-1}, where V_i = \frac{1}{n_i}\sum_{j=1}^{n_i}\left(\frac{1}{x_{ij}} - \frac{1}{\bar{x}_i}\right). Let \bar{x}_i^{o} and v_i^{o} denote the observed values of \bar{x}_i and V_i, respectively. The authors defined the first generalized pivotal quantity (GP1) for the common mean μ as

\tilde{T}_1 = \frac{\sum_{i=1}^{I} n_i R_i T_i}{\sum_{i=1}^{I} n_i R_i}, \qquad (3.8)

where

R_i = \frac{n_i\lambda_i V_i}{n_i v_i^{o}} \sim \frac{\chi^{2}_{n_i-1}}{n_i v_i^{o}} \quad \text{and} \quad
T_i = \frac{\bar{x}_i^{o}}{1 + \dfrac{\sqrt{n_i\lambda_i}\,(\bar{x}_i - \mu)}{\mu\sqrt{\bar{x}_i}}\sqrt{\dfrac{\bar{x}_i^{o}}{n_i R_i}}}
\ \overset{d}{\approx}\ \frac{\bar{x}_i^{o}}{1 + Z_i\sqrt{\dfrac{\bar{x}_i^{o}}{n_i R_i}}} \qquad (3.9)

are the generalized pivotal quantities for λᵢ and μ, respectively, based on the ith sample. It is noted that \overset{d}{\approx} denotes "approximately distributed", Zᵢ ~ N(0, 1), and \frac{\sqrt{n_i\lambda_i}\,(\bar{x}_i - \mu)}{\mu\sqrt{\bar{x}_i}} \overset{d}{\approx} Z_i when the sample sizes nᵢ are large and/or the scale parameter λᵢ is large compared to μᵢ. Let T̃₁(α) be the 100αth percentile of T̃₁; the 100(1 − α)% generalized confidence interval for μ based on T̃₁ is

\left\{\tilde{T}_1(\alpha/2),\ \tilde{T}_1(1-\alpha/2)\right\}. \qquad (3.10)

The generalized p-value for testing H₀: μ = μ₀ vs. H₁: μ ≠ μ₀ can be computed as

p_1 = 2\min\left\{\Pr(\tilde{T}_1 \le \mu_0),\ \Pr(\tilde{T}_1 \ge \mu_0)\right\}. \qquad (3.11)

Furthermore, if the scale parameters λᵢ's are known, then \tilde{\mu} = \frac{\sum_{i=1}^{I} n_i\lambda_i\bar{x}_i}{\sum_{i=1}^{I} n_i\lambda_i} is the MLE of μ based on the I independent IG populations and \tilde{\mu} \sim IG\left(\mu, \sum_{i=1}^{I} n_i\lambda_i\right) [6]. The authors provided a second generalized pivotal quantity (GP2) for μ as

\tilde{T}_2 = \frac{R}{1 + \dfrac{\sqrt{\sum_{i=1}^{I} n_i\lambda_i}\,(\tilde{\mu} - \mu)}{\mu\sqrt{\tilde{\mu}}}\sqrt{\dfrac{R}{\sum_{i=1}^{I} n_i R_i}}}
\ \overset{d}{\approx}\ \frac{R}{1 + Z\sqrt{\dfrac{R}{\sum_{i=1}^{I} n_i R_i}}}, \qquad (3.12)

where R = \frac{\sum_{i=1}^{I} n_i R_i \bar{x}_i^{o}}{\sum_{i=1}^{I} n_i R_i}, Z ~ N(0, 1), and Rᵢ is defined in (3.9). The 100(1 − α)% generalized confidence interval for μ and the generalized p-value for testing H₀: μ = μ₀ vs. H₁: μ ≠ μ₀ based on T̃₂ can be obtained by substituting T̃₂ in place of T̃₁ in (3.10) and (3.11), respectively.
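In practice a generalized confidence interval of this type is computed by Monte Carlo. The following sketch (ours, not Ye et al.'s code) draws the χ² and standard normal variates appearing in (3.9), forms T̃₁ of (3.8), and reads off its empirical percentiles as in (3.10); it contains no safeguard against the denominator of Tᵢ becoming non-positive, which is exactly the small nᵢλᵢ issue noted above.

```python
import numpy as np

def gp1_interval(samples, alpha=0.05, n_rep=100_000, seed=0):
    """Monte Carlo sketch of the GP1 interval (3.8)-(3.10) of Ye et al. [11]."""
    rng = np.random.default_rng(seed)
    t1_num = 0.0
    t1_den = 0.0
    for x in samples:
        x = np.asarray(x, dtype=float)
        n = x.size
        xbar_o = x.mean()                                  # observed xbar_i
        v_o = np.mean(1.0 / x - 1.0 / xbar_o)              # observed V_i
        chi2 = rng.chisquare(n - 1, size=n_rep)
        z = rng.standard_normal(n_rep)
        r_i = chi2 / (n * v_o)                             # R_i in (3.9)
        t_i = xbar_o / (1.0 + z * np.sqrt(xbar_o / (n * r_i)))  # T_i in (3.9)
        t1_num = t1_num + n * r_i * t_i
        t1_den = t1_den + n * r_i
    t1 = t1_num / t1_den                                   # T1 in (3.8)
    return np.percentile(t1, [100 * alpha / 2, 100 * (1 - alpha / 2)])

rng = np.random.default_rng(3)
data = [rng.wald(1.0, lam, size=n) for lam, n in [(0.2, 5), (1.0, 10)]]
print(gp1_interval(data))
```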

3.3. Simple t-test confidence interval

For the purpose of comparison, we calculate a simple t-test confidence interval that is inspired by the analysis of reciprocals (ANORE). This method provides an exact confidence interval when the scale parameters are homogeneous. Suppose x_i = (x_{i1}, x_{i2}, ..., x_{in_i}), i = 1, 2, ..., I, are samples from I independent populations with parameters (μ, λᵢ) for each population. It can be shown that

\bar{x} = \frac{1}{N}\sum_{i=1}^{I}\sum_{j=1}^{n_i} x_{ij} \sim IG(\mu, N\lambda) \quad \text{and} \quad (N - I)W \sim \frac{1}{\lambda}\chi^{2}_{N-I}

are independently distributed, where N = \sum_{i=1}^{I} n_i, \bar{x}_i = \frac{1}{n_i}\sum_{j=1}^{n_i} x_{ij} and W = \frac{1}{N-I}\sum_{i=1}^{I}\sum_{j=1}^{n_i}\left(\frac{1}{x_{ij}} - \frac{1}{\bar{x}_i}\right). Moreover, from Remark 1, we know that \sqrt{N}(\bar{x} - \mu)/\{\mu(\bar{x}W)^{1/2}\} has the truncated Student's t distribution with N − I degrees of freedom. Therefore, the two-sided 100(1 − α)% confidence interval for μ is

\left[\bar{x}\left(1 + t_{1-\alpha/2}\sqrt{\tfrac{\bar{x}W}{N}}\right)^{-1},\ \bar{x}\left(1 - t_{1-\alpha/2}\sqrt{\tfrac{\bar{x}W}{N}}\right)^{-1}\right] \ \text{if } 1 - t_{1-\alpha/2}\sqrt{\tfrac{\bar{x}W}{N}} > 0; \qquad
\left[\bar{x}\left(1 + t_{1-\alpha/2}\sqrt{\tfrac{\bar{x}W}{N}}\right)^{-1},\ \infty\right) \ \text{otherwise}. \qquad (3.13)
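A direct implementation of (3.13) is straightforward; the sketch below (ours) pools the samples, computes x̄ and W, and returns the two-sided interval, switching to the one-sided form when 1 − t_{1−α/2}√(x̄W/N) ≤ 0.

```python
import numpy as np
from scipy.stats import t as student_t

def simple_t_interval(samples, alpha=0.05):
    """Simple t-test interval (3.13), exact under a common scale parameter."""
    samples = [np.asarray(x, dtype=float) for x in samples]
    pooled = np.concatenate(samples)
    N, I = pooled.size, len(samples)
    xbar = pooled.mean()
    w = sum(np.sum(1.0 / x - 1.0 / x.mean()) for x in samples) / (N - I)
    tq = student_t.ppf(1.0 - alpha / 2.0, N - I)
    half = tq * np.sqrt(xbar * w / N)
    lower = xbar / (1.0 + half)
    upper = xbar / (1.0 - half) if half < 1.0 else np.inf  # one-sided case in (3.13)
    return lower, upper

rng = np.random.default_rng(4)
data = [rng.wald(1.0, lam, size=n) for lam, n in [(3.0, 5), (10.0, 10)]]
print(simple_t_interval(data))
```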

4. Simulation studies and numerical examples

4.1. Simulation studies

To illustrate the merits of the proposed method, we present simulation studies of the confidence intervals and type I errors for a variety of scale parameter configurations and different combinations of small sample sizes for two and three populations. In the simulation, we exhibit the coverage probabilities and the average lengths of the .95 confidence intervals, and also evaluate the type I errors at the nominal significance level .05, based on r(μ), r*(μ), the two generalized pivotal quantities (GP1 and GP2), and the simple t-test method (S.T.). The results given in Tables 1-4 below are based on 10,000 simulation runs for each combination.

From Tables 1 and 2, we see that although the confidence intervals based on r(μ) and GP2 have shorter average lengths compared to the other three methods, their coverage probabilities are too liberal to attain the nominal coverage probability of .95 for each combination. The confidence intervals based on the simple t-test method have good coverage probabilities, but these coverage probabilities decrease when the heterogeneity increases. Moreover, when the scale parameter is small relative to μ, the interval lengths constructed by the simple t-test are unbounded (i.e., a one-sided interval). In these cases, the simple t-test method gives less information about the target value than the other methods. The GP1 method performs well in coverage probability when the scale parameters are large compared to μ, but the performance grows worse as the scale parameters decrease. On the other hand, the confidence intervals based on r*(μ) not only have almost exact coverage probabilities in each combination (with few exceptions), but their average lengths are also quite decent and acceptable. Therefore, in terms of the overall comparisons, the higher order likelihood-based method outperforms the other four methods.

Table 1
Simulation results of the 95% confidence interval of μ = 1 for two populations.

(n1, n2)   λ1    λ2     r*(μ)            GP1              GP2              r(μ)             S.T.
                        CP     Length    CP     Length    CP     Length    CP     Length    CP     Length
(5, 10)    0.2   1      0.951  9.387     0.963  5.451     0.933  1.810     0.923  1.637     0.954  ∞
           0.5   1      0.948  14.340    0.961  1.335     0.935  1.000     0.917  1.554     0.945  ∞
           1     3      0.952  1.054     0.953  1.475     0.935  1.064     0.926  0.786     0.943  1.024
           3     10     0.947  0.456     0.943  0.444     0.937  0.384     0.920  0.387     0.939  0.486
           1     10     0.948  0.477     0.950  0.456     0.940  0.400     0.923  0.409     0.933  0.791
(10, 5)    0.2   1      0.931  23.093    0.969  5.489     0.939  2.261     0.847  1.368     0.955  ∞
           0.5   1      0.944  13.672    0.959  12.439    0.930  6.529     0.897  1.534     0.949  ∞
           1     3      0.949  1.958     0.955  2.633     0.935  1.481     0.906  0.967     0.950  1.506
           3     10     0.948  0.598     0.945  0.746     0.938  0.603     0.903  0.464     0.949  0.623
           1     10     0.947  0.840     0.945  0.845     0.934  0.679     0.925  0.573     0.951  1.361
(10, 10)   0.2   1      0.951  7.914     0.968  2.803     0.944  1.309     0.929  1.659     0.958  ∞
           0.5   1      0.955  2.320     0.955  4.332     0.933  1.561     0.933  1.363     0.948  ∞
           1     3      0.945  0.873     0.954  0.896     0.939  0.655     0.924  0.708     0.946  0.918
           3     10     0.945  0.410     0.943  0.358     0.938  0.331     0.921  0.360     0.947  0.456
           1     10     0.949  0.459     0.950  0.484     0.943  0.375     0.926  0.399     0.947  0.795


Furthermore, from Tables 3 and 4, we can see that the type I errors based on the simple t-test method are not stable, since they decrease as the mean parameter under the null hypothesis increases. Similarly, the type I errors obtained by the GP1 method do not perform well under the configurations with small sample sizes and small scale parameters. The type I errors based on the GP1 method grow worse, compared to the nominal level .05, as the mean parameters increase. The type I errors based on GP2 and r(μ) are around .06 to .07 and .07 to .10, respectively, for each combination, which are too large compared to the nominal level .05. By contrast, the type I errors based on r*(μ) are not only stable, but the values are also very close to the nominal level .05. Thus, we can say that the proposed procedure tolerates heterogeneity among populations well and gives robust and reliable results under different scenarios.

Table 2
Simulation results of the 95% confidence interval of μ = 1 for three populations.

(n1, n2, n3)   λ1    λ2    λ3    r*(μ)            GP1              GP2              r(μ)             S.T.
                                 CP     Length    CP     Length    CP     Length    CP     Length    CP     Length
(5, 8, 10)     0.1   0.1   1     0.949  4.865     0.972  6.162     0.937  2.856     0.923  1.611     0.965  ∞
               0.1   0.5   1     0.949  3.382     0.966  21.096    0.934  7.197     0.922  1.456     0.959  ∞
               1     1     5     0.948  0.671     0.952  0.807     0.934  0.580     0.923  0.542     0.948  0.807
               1     1     10    0.949  0.467     0.958  0.606     0.939  0.429     0.925  0.396     0.943  0.766
               1     5     10    0.948  0.393     0.944  0.497     0.931  0.402     0.921  0.337     0.938  0.511
(5, 10, 8)     0.1   0.1   1     0.952  5.964     0.972  8.408     0.937  6.293     0.917  1.589     0.963  ∞
               0.1   0.5   1     0.948  3.784     0.970  7.983     0.930  2.515     0.918  1.492     0.952  ∞
               1     1     5     0.945  0.754     0.958  0.618     0.938  0.433     0.919  0.586     0.951  0.873
               1     1     10    0.950  0.537     0.957  0.559     0.939  0.380     0.919  0.435     0.947  0.844
               1     5     10    0.945  0.413     0.947  0.591     0.929  0.471     0.917  0.350     0.938  0.524
(10, 8, 5)     0.1   0.1   1     0.930  7.016     0.975  7.526     0.932  2.687     0.839  1.306     0.967  ∞
               0.1   0.5   1     0.946  5.952     0.970  11.160    0.934  5.868     0.889  1.467     0.965  ∞
               1     1     5     0.943  0.957     0.950  0.977     0.930  0.500     0.901  0.668     0.950  0.974
               1     1     10    0.945  0.720     0.945  1.063     0.935  0.544     0.898  0.518     0.953  0.956
               1     5     10    0.946  0.521     0.946  0.507     0.933  0.441     0.902  0.406     0.948  0.711

CP: coverage probability; Length: average length.

Table 3
Type I errors for H0: μ = μ0 vs. H1: μ ≠ μ0 at I = 2 and α = 0.05.

(λ1, λ2)   Test     (n1, n2) = (5, 10)                           (n1, n2) = (10, 5)
                    μ0: 0.2   0.8     1.2     2.0     5.0        0.2     0.8     1.2     2.0     5.0
(0.2, 1)   r*(μ)    0.0522  0.0528  0.0529  0.0493  0.0506     0.0550  0.0495  0.0567  0.0551  0.0524
           GP1      0.0439  0.0432  0.0382  0.0310  0.0344     0.0514  0.0345  0.0275  0.0317  0.0259
           GP2      0.0575  0.0700  0.0643  0.0610  0.0607     0.0658  0.0650  0.0605  0.0677  0.0714
           r(μ)     0.0774  0.0746  0.0750  0.0686  0.0702     0.1027  0.0903  0.0989  0.0912  0.0912
           S.T.     0.0615  0.0430  0.0489  0.0426  0.0324     0.0502  0.0378  0.0410  0.0381  0.0313
(0.5, 1)   r*(μ)    0.0485  0.0562  0.0569  0.0553  0.0575     0.0528  0.0560  0.0522  0.0552  0.0530
           GP1      0.0590  0.0363  0.0382  0.0337  0.0260     0.0495  0.0394  0.0340  0.0302  0.0244
           GP2      0.0685  0.0632  0.0655  0.0662  0.0657     0.0624  0.0642  0.0687  0.0672  0.0714
           r(μ)     0.0832  0.0862  0.0844  0.0813  0.0808     0.0901  0.0950  0.0922  0.0922  0.0883
           S.T.     0.0555  0.0507  0.0514  0.0504  0.0509     0.0469  0.0515  0.0461  0.0434  0.0484
(1, 3)     r*(μ)    0.0526  0.0528  0.0500  0.0578  0.0564     0.0542  0.0548  0.0573  0.0570  0.0553
           GP1      0.0574  0.0514  0.0440  0.0402  0.0352     0.0609  0.0554  0.0437  0.0417  0.0294
           GP2      0.0619  0.0660  0.0627  0.0645  0.0679     0.0655  0.0655  0.0660  0.0670  0.0662
           r(μ)     0.0783  0.0783  0.0744  0.0810  0.0790     0.0966  0.0985  0.0941  0.0939  0.0939
           S.T.     0.0583  0.0545  0.0503  0.0539  0.0524     0.0514  0.0471  0.0464  0.0535  0.0424
(1, 5)     r*(μ)    0.0507  0.0566  0.0517  0.0551  0.0570     0.0585  0.0537  0.0558  0.0563  0.0575
           GP1      0.0587  0.0467  0.0472  0.0404  0.0355     0.0602  0.0562  0.0502  0.0395  0.0347
           GP2      0.0600  0.0595  0.0647  0.0575  0.0614     0.0609  0.0670  0.0674  0.0644  0.0684
           r(μ)     0.0788  0.0803  0.0768  0.0779  0.0790     0.1009  0.0936  0.0990  0.0980  0.0989
           S.T.     0.0634  0.0616  0.0610  0.0573  0.0499     0.0502  0.0471  0.0465  0.0460  0.0371
(1, 10)    r*(μ)    0.0528  0.0550  0.0484  0.0559  0.0487     0.0559  0.0538  0.0517  0.0582  0.0571
           GP1      0.0559  0.0497  0.0497  0.0444  0.0357     0.0579  0.0452  0.0512  0.0444  0.0357
           GP2      0.0622  0.0610  0.0609  0.0590  0.0540     0.0617  0.0565  0.0619  0.0595  0.0565
           r(μ)     0.0751  0.0759  0.0727  0.0777  0.0697     0.1040  0.0977  0.0967  0.0997  0.0981
           S.T.     0.0775  0.074   0.0682  0.0589  0.0478     0.0532  0.0468  0.0507  0.0431  0.0344


Table 4
Type I errors for H0: μ = μ0 vs. H1: μ ≠ μ0 at I = 3 and α = 0.05.

(λ1, λ2, λ3)    Test     (n1, n2, n3) = (5, 8, 10)                   (n1, n2, n3) = (5, 10, 8)
                         μ0: 0.2   0.8     1.2     2.0     5.0       0.2     0.8     1.2     2.0     5.0
(0.1, 0.1, 1)   r*(μ)    0.0481  0.0564  0.0542  0.0563  0.0551    0.0550  0.0539  0.0516  0.0576  0.0549
                GP1      0.0424  0.0274  0.0268  0.0270  0.0260    0.0388  0.0274  0.0224  0.0200  0.0234
                GP2      0.0656  0.0600  0.0620  0.0648  0.0638    0.0640  0.0654  0.0608  0.0558  0.0668
                r(μ)     0.0716  0.0774  0.0748  0.0796  0.0729    0.0845  0.0789  0.0769  0.0814  0.0787
                S.T.     0.0483  0.0391  0.0336  0.0258  0.0187    0.0472  0.0355  0.0290  0.0247  0.0194
(0.1, 0.5, 1)   r*(μ)    0.0539  0.0572  0.0528  0.0548  0.0554    0.0516  0.0563  0.0534  0.0570  0.0525
                GP1      0.0558  0.0314  0.0282  0.0254  0.0146    0.0452  0.0364  0.0300  0.0240  0.0156
                GP2      0.0684  0.0666  0.0642  0.0682  0.0666    0.0614  0.0684  0.0658  0.0656  0.0672
                r(μ)     0.0812  0.0832  0.0749  0.0761  0.0769    0.0778  0.0841  0.0804  0.0804  0.0791
                S.T.     0.0594  0.0462  0.0395  0.0377  0.0283    0.0558  0.0450  0.0422  0.0378  0.0271
(1, 1, 5)       r*(μ)    0.0520  0.0505  0.0560  0.0567  0.0558    0.0517  0.0525  0.0542  0.0548  0.0577
                GP1      0.0610  0.0470  0.0464  0.0424  0.0284    0.0582  0.0504  0.0412  0.0372  0.0316
                GP2      0.0646  0.0670  0.0682  0.0724  0.0634    0.0650  0.0708  0.0668  0.0658  0.0740
                r(μ)     0.0782  0.0755  0.0797  0.0824  0.0768    0.0843  0.0840  0.0839  0.0800  0.0864
                S.T.     0.0572  0.0594  0.0557  0.0510  0.0436    0.0544  0.0516  0.0524  0.0472  0.0409
(1, 1, 10)      r*(μ)    0.0485  0.0516  0.0494  0.0564  0.0557    0.0549  0.0541  0.0528  0.0543  0.0581
                GP1      0.0570  0.0478  0.0482  0.0400  0.0294    0.0584  0.0536  0.0436  0.0408  0.0304
                GP2      0.0620  0.0640  0.0662  0.0648  0.0614    0.0640  0.0682  0.0608  0.0668  0.0648
                r(μ)     0.0746  0.0724  0.0730  0.0792  0.0767    0.0853  0.0852  0.0797  0.0820  0.0864
                S.T.     0.0579  0.0534  0.0537  0.0522  0.0394    0.0588  0.0496  0.0484  0.0458  0.0399
(1, 5, 10)      r*(μ)    0.0517  0.0514  0.0514  0.0537  0.0524    0.0518  0.0534  0.0540  0.0557  0.0525
                GP1      0.0642  0.0540  0.0556  0.0448  0.0410    0.0566  0.0520  0.0476  0.0490  0.0398
                GP2      0.0688  0.0682  0.0680  0.0658  0.0656    0.0624  0.0658  0.0646  0.0684  0.0666
                r(μ)     0.0779  0.0777  0.0775  0.0815  0.0766    0.0800  0.0828  0.0827  0.0865  0.0806
                S.T.     0.0638  0.0659  0.0660  0.0553  0.0544    0.0634  0.0625  0.0611  0.0603  0.0477

Table 5
Data for Example 1.

Population i     1         2         3
                 0.7312    1.3932    1.6999
                 1.7314    0.5934    1.2698
                 0.7109    1.6046    0.7887
                 0.0303    2.0649    1.0535
                 0.7044    1.2238    0.7973
                           0.0538    1.4988
                                     1.4685
x̄i               0.7816    1.1556    1.2252
wi               31.3779   17.7229   0.4820

wi = Σ_{j=1}^{n_i} (1/x_{ij} − 1/x̄_i).

Table 6
The 95% confidence intervals for the common mean.

Method    Point estimate μ̂    Interval estimate    Length
r*(μ)     1.221               (0.961, 1.728)       0.767
GP1       1.817               (0.972, 2.089)       1.117
GP2       1.258               (0.952, 1.691)       0.739
r(μ)      1.221               (0.980, 1.605)       0.625
S.T.      1.078               (0.553, 20.711)      20.158

Table 7
Data for Example 2 (ni = 10, i = 1, 2, 3, 4).

x̄i    2.635     2.055       1.748     2.023
wi    4.5147    107.7351    3.1558    67.4330

wi = Σ_{j=1}^{n_i} (1/x_{ij} − 1/x̄_i).

Table 8
The 95% confidence intervals for the common mean.

Method    Point estimate μ̂    Interval estimate    Length
r*(μ)     2.110               (1.484, 4.384)       2.900
GP1       3.249               (1.511, 7.740)       6.229
GP2       2.232               (1.446, 3.736)       2.290
r(μ)      2.110               (1.480, 3.653)       2.173
S.T.      2.116               (1.031, ∞)           ∞

4.2. Two numerical examples

Example 1. We first present simulated data from three IG populations with (n₁, n₂, n₃) = (5, 6, 7) and (μ, λ₁, λ₂, λ₃) = (1, 0.2, 1, 10) as an illustrative example. The original data and the summary statistics are shown in Table 5. The interval estimates based on the five methods are given in Table 6. The four confidence intervals based on r*(μ), r(μ), GP1 and GP2 give satisfactory results for this heterogeneous data set when compared with that based on the simple t-test method. Although the interval based on r*(μ) is a little wider than those of GP2 and r(μ), in general it gives better coverage probability than they do.

Example 2. The data are available on p. 462 of Nelson [20]. The data consist of 60 times-to-breakdown, in minutes, of an insulating fluid subjected to high voltage stress. Since the IG distribution is widely applied as a lifetime model in reliability analysis, here we consider the failure time of the insulating fluid in each group as an IG distributed random variable. If the experiment was under control, the mean of each group should be the same. For illustrative purposes, we pick the first four groups as a demonstration and apply the procedure introduced by Tian [10] to test the equality of the means for the first four groups. The resulting p-value is .8693; we can therefore follow up by constructing the confidence interval for the common mean parameter. The summary data and the results are given in Tables 7 and 8, respectively.

From Table 7, we see that the wᵢ, i = 1, 2, 3, 4, the estimators of the reciprocals of the scale parameters, are quite different among groups, implying the existence of heterogeneity.

In Table 8, all five intervals cover the corresponding point estimates, and those based on r*(μ), r(μ) and GP2 give satisfactory interval lengths compared to GP1 and the simple t-test. In this case, the simple t-test only provides a one-sided interval.

5. Conclusions

In this paper, we presented a higher order likelihood-based procedure to construct the confidence interval for the common mean of several independent IG populations. In our simulations, we compared this procedure with four alternative methods. The numerical results showed that the proposed method gives nearly exact coverage probabilities and type I errors close to the nominal level .05, even for small sample sizes and small scale parameters. The method is able to integrate the information from several heterogeneous IG populations, and is therefore useful for a variety of practical applications.

Acknowledgement

The author cordially thanks the Editor-in-Chief and anonymous reviewers for their valuable comments which led to the improvement of this paper. This research has been supported by grant NSC 98-2118-M-025-001 from the National Science Council of Taiwan.

References

[1] M.E. Wise, Skew probability curves with negative powers of time and related to random walks in series, Statistica Neerlandica 25 (1971) 159–180.
[2] M.E. Wise, Skew distributions in biomedicine including some with negative powers of time, in: Statistical Distributions in Scientific Work, vol. 2, D. Reidel, Dordrecht, Holland, 1975.
[3] M.E. Wise, S.B. Osborn, J. Anderson, R.W.S. Tomlinson, A stochastic model for turnover of radiocalcium based on the observed laws, Math. Biosci. 2 (1968) 199–224.
[4] T. Lancaster, A stochastic model for the duration of a strike, J. Roy. Stat. Soc. A 135 (2) (1972) 257–271.
[5] S.K. Bhattacharya, Bayesian normal analysis with an inverse Gaussian prior, Ann. Inst. Statist. Math., Part A (1987) 623–626.
[6] R.S. Chhikara, J.L. Folks, The Inverse Gaussian Distribution, Marcel Dekker, New York, 1989.
[7] V. Seshadri, The Inverse Gaussian Distribution: A Case Study in Exponential Families, Oxford University Press, New York, 1993.
[8] V. Seshadri, The Inverse Gaussian Distribution: Statistical Theory and Applications, Springer, New York, 1999.
[9] G.S. Mudholkar, L. Tian, An entropy characterization of the inverse Gaussian distribution and related goodness-of-fit tests, J. Stat. Planning Inference 102 (2002) 211–221.
[10] L. Tian, Testing equality of inverse Gaussian means under heterogeneity based on generalized test variable, Comput. Stat. Data Anal. 51 (2006) 1156–1165.
[11] R.D. Ye, T.F. Ma, S.G. Wang, Inferences on the common mean of several inverse Gaussian populations, Comput. Stat. Data Anal. 54 (2010) 906–915.
[12] N. Reid, Likelihood and higher-order approximations to tail areas: a review and annotated bibliography, Can. J. Statist. 24 (1996) 141–166.
[13] J. Wu, A.C.M. Wong, G. Jiang, Likelihood-based confidence interval for a log-normal mean, Stat. Med. 22 (2003) 1849–1860.
[14] J. Wu, A.C.M. Wong, Improved interval estimation for the two-parameter Birnbaum-Saunders distribution, Comput. Stat. Data Anal. 47 (2004) 809–821.
[15] L. Tian, G.E. Wilding, Confidence interval of the ratio of means of two independent inverse Gaussian distributions, J. Stat. Planning Inference 133 (2005) 381–386.
[16] D.R. Cox, D.V. Hinkley, Theoretical Statistics, Chapman & Hall, London, 1974.
[17] O.E. Barndorff-Nielsen, Inference on full and partial parameters based on the standardized signed log likelihood ratio, Biometrika 73 (1986) 307–322.
[18] O.E. Barndorff-Nielsen, Modified signed log-likelihood ratio, Biometrika 78 (1991) 557–563.
[19] D.A.S. Fraser, N. Reid, J. Wu, A simple general formula for tail probabilities for frequentist and Bayesian inference, Biometrika 86 (1999) 249–264.
[20] W. Nelson, Applied Life Data Analysis, Addison-Wesley, 1982.
