
On the Fading Number of Multiple-Input

Single-Output Fading Channels with Memory

Stefan M. Moser¹

Department of Communication Engineering, National Chiao Tung University (NCTU)

Hsinchu, Taiwan
Email: stefan.moser@ieee.org

Abstract— We derive new upper and lower bounds on the fading number of non-coherent multiple-input single-output (MISO) fading channels of general (not necessarily Gaussian) regular law with spatial and temporal memory. The fading number is the second term, after the double-logarithmic term, of the high signal-to-noise ratio (SNR) expansion of channel capacity.

In the case of an isotropically distributed fading vector it is proven that the upper and lower bounds coincide, i.e., the general MISO fading number with memory is known precisely.

The upper and lower bounds show that a type of beam-forming is asymptotically optimal.

I. INTRODUCTION

It has recently been shown in [1], [2] that, whenever the matrix-valued fading process is of finite differential entropy rate, the capacity of non-coherent multiple-input multiple-output (MIMO) fading channels typically grows only double-logarithmically in the signal-to-noise ratio (SNR).

This is in stark contrast to both the coherent fading channel, where the receiver has perfect knowledge of the channel state, and the non-coherent fading channel with non-regular channel law, i.e., whose differential entropy rate is not finite. In the former case the capacity grows logarithmically in the SNR, with a factor in front of the logarithm equal to the minimum of the number of receive and transmit antennas [3].

In the latter case the asymptotic growth rate of the capacity depends highly on the specific details of the fading process. In the case of Gaussian fading, non-regularity means that the present fading realization can be predicted precisely from the past realizations. However, note that in any practical system the past realizations are not known a priori, but need to be estimated either from known past channel inputs and outputs or by means of special training signals. Depending on the spectral distribution of the fading process, the dependence of such estimates on the available power can vary greatly, which gives rise to a huge variety of possible high-SNR capacity behaviors: it is shown in [4], [5], and [6] that, depending on the spectrum F(·) of the non-regular Gaussian fading process, the asymptotic behavior of the channel capacity can be, e.g., double-logarithmic, logarithmic, or a fractional power thereof. Similarly, Liang and Veeravalli show in [7] that the capacity of a Gaussian block-fading channel depends critically on the assumptions one makes about the time-correlation of the fading process: if the correlation matrix is rank deficient, the capacity grows logarithmically in the SNR, otherwise double-logarithmically.

¹This work was supported in part by the National Science Council under NSC 94-2218-E-009-037.

In this paper we will only consider non-coherent channels with regular fading processes, i.e., the capacity at high SNR will grow only double-logarithmically. To quantify the rates at which this poor power efficiency begins, [1], [2] introduce the fading number as the second term in the high-SNR asymptotic expansion of channel capacity. Hence, the capacity can be written as

    C(SNR) = log(1 + log(1 + SNR)) + χ + o(1),    (1)

where o(1) tends to zero as the SNR tends to infinity, and where χ is a constant, denoted the fading number, that does not depend on the SNR.
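To get a feeling for how slowly the double-logarithmic term in (1) grows, the following small Python sketch evaluates the high-SNR approximation log(1 + log(1 + SNR)) + χ over a range of SNR values; the value of χ used here is purely illustrative and is not taken from any of the fading models discussed in this paper.

    import numpy as np

    chi = -1.0                                   # illustrative fading number in nats (hypothetical)
    snr_db = np.array([0, 20, 40, 60, 80, 100])
    snr = 10.0 ** (snr_db / 10)

    # High-SNR capacity expansion (1), ignoring the o(1) term.
    c_approx = np.log(1 + np.log(1 + snr)) + chi

    for db, c in zip(snr_db, c_approx):
        print(f"SNR = {db:3d} dB  ->  C ≈ {c:6.3f} nats per channel use")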

Explicit expressions for the fading number are known for a number of fading models. For channels with memory, the fading number of single-input single-output (SISO) fading channels is derived in [1], [2] and the single-input multiple-output (SIMO) case is derived in [8], [2].

The fading number of the multiple-input single-output (MISO) fading channel has been derived in general only for the memoryless case [1], [2]:

    χ_IID(H^T) = sup_{‖x̂‖=1} { log π + E[log |H^T x̂|²] − h(H^T x̂) }.    (2)

Here H^T denotes the transpose of the vector H. This fading number is achievable by inputs that can be expressed as the product of a constant unit vector in ℂ^{n_T} and a circularly symmetric, scalar, complex random variable of the same law that achieves the memoryless SISO fading number [1], [2]

    χ_IID(H) = log π + E[log |H|²] − h(H),    (3)

and the memoryless SISO fading number with partial² receiver side-information S

    χ_IID(H|S) = log π + E[log |H|²] − h(H|S).    (4)

²A random variable S is said to contain only partial information about H if h(H|S) > −∞. Since we assume that E[|H|²] < ∞, this is equivalent to assuming I(S; H) < ∞.
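As a quick sanity check of (3), the memoryless SISO fading number can be estimated by Monte Carlo simulation. The sketch below does this for the standard Rayleigh-fading example H ∼ N_C(0, 1), for which h(H) = log(πe) is known in closed form and E[log |H|²] = −γ (the Euler–Mascheroni constant), so that (3) evaluates to −1 − γ ≈ −1.577 nats. The sample size and the use of the closed-form entropy are choices of this sketch, not part of the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000_000

    # Rayleigh fading: H ~ CN(0,1), i.e., real and imaginary parts i.i.d. N(0, 1/2).
    h = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

    e_log_h2 = np.mean(np.log(np.abs(h) ** 2))   # Monte Carlo estimate of E[log|H|^2]
    h_entropy = np.log(np.pi * np.e)             # h(H) for CN(0,1), used in closed form

    chi_iid = np.log(np.pi) + e_log_h2 - h_entropy
    print("estimated chi_IID(H):", chi_iid)          # should be close to -1.577 nats
    print("exact value -1 - γ  :", -1 - np.euler_gamma)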


Hence, the asymptotic capacity of a memoryless MISO fading channel is achieved by beam-forming, where the beam-direction is chosen not to maximize the SNR, but the fading number.

In [9], [10] Koch & Lapidoth investigate the fading number of MISO fading channels with memory where the fading process is Gaussian. For the case of a mean-d Gaussian

vector process with memory where {Hk − d} is spatially independent and identically distributed (spatially IID) with each component being a zero-mean, unit-variance, circularly symmetric, complex Gaussian process, the fading number is shown to be3

χGauss, spatially IID({HTk})

= −1 + log d2− Ei−d2+ log 1

2, (5)

where 2denotes the prediction error when predicting one of

the components of the fading vector based on the observation of its past.
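For a concrete feel for (5), consider the hypothetical example in which each component of {H_k − d} is a unit-variance Gauss–Markov process H̃_k = α H̃_{k−1} + √(1 − α²) W_k with IID W_k ∼ N_C(0, 1); its one-step prediction error is then ε² = 1 − α². The sketch below evaluates (5) for such a process using the identity Ei(−x) = −E₁(x) for x > 0; the values of α and d are illustrative only.

    import numpy as np
    from scipy.special import exp1

    alpha = 0.9                                # memory of the hypothetical Gauss-Markov fading
    eps2 = 1 - alpha ** 2                      # one-step prediction error of each component
    d = np.array([1.0 + 0.5j, 0.3 - 0.2j])     # illustrative mean (specular) vector
    d2 = np.linalg.norm(d) ** 2

    # Fading number (5) in nats; -Ei(-x) = E1(x) = exp1(x) for x > 0.
    chi = -1 + np.log(d2) + exp1(d2) + np.log(1 / eps2)
    print("chi_Gauss, spatially IID =", chi, "nats")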

Furthermore, Koch & Lapidoth derive an upper bound to the fading number for the general Gaussian case, i.e., for a mean vector d, {Hk − d} is a zero-mean, circularly symmetric, stationary, ergodic, complex Gaussian process with matrix-valued spectral distribution function F(·) and with covariance matrixK. Assuming that the prediction error covariance matrix Σ is non-singular (regularity assumption) they show that

χGauss({HTk}) ≤ −1 + log d2− Ei  −d2  + log K σmin , (6) where d∗= maxˆx=1 |d Tˆx|  Var(HT kˆx) ; (7)

σmin denotes the smallest eigenvalue of Σ; and where  · 

denotes the Euclidean operator norm of matrices, i.e., the largest singular value.
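The right-hand side of (6) is straightforward to evaluate numerically once d, K, and Σ are given: ‖K‖ is the largest singular value of K and σ_min the smallest eigenvalue of Σ. The sketch below does this for illustrative matrices; for simplicity it assumes Var(H_k^T x̂) = 1 for every unit vector x̂ (spatially white fading), in which case (7) reduces to d_* = ‖d‖. All numerical values are hypothetical.

    import numpy as np
    from scipy.special import exp1

    d = np.array([1.0 + 0.5j, 0.3 - 0.2j])              # illustrative mean vector
    K = np.eye(2)                                        # spatially white covariance (assumption)
    Sigma = np.array([[0.20, 0.05], [0.05, 0.15]])       # illustrative prediction error covariance

    d_star2 = np.linalg.norm(d) ** 2                  # (7) with Var(H_k^T x) = 1 for all unit x
    K_norm = np.linalg.norm(K, 2)                     # Euclidean operator norm = largest singular value
    sigma_min = np.min(np.linalg.eigvalsh(Sigma))     # smallest eigenvalue of Sigma

    # Right-hand side of (6) in nats; -Ei(-x) = exp1(x) for x > 0.
    ub = -1 + np.log(d_star2) + exp1(d_star2) + np.log(K_norm / sigma_min)
    print("upper bound (6):", ub, "nats")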

In this paper we extend these results to general (i.e., not necessarily Gaussian) fading channels.

The remainder of this paper is structured as follows: after introducing the channel model in detail in the following section, we present the main results, i.e., a new upper and a new lower bound on the MISO fading number, in Section III. We then specialize these results to the case of isotropically distributed fading processes in Section IV and to Gaussian fading in Section V. For isotropically distributed fading we will show that the upper and lower bounds coincide. In the Gaussian case we shall derive the above-mentioned results of Koch & Lapidoth as special cases of our bounds.

We conclude in Section VI.

II. THE CHANNEL MODEL

We consider a MISO fading channel whose time-k output Y_k ∈ ℂ is given by

    Y_k = H_k^T x_k + Z_k,    (8)

where x_k ∈ ℂ^{n_T} denotes the time-k channel input vector; where the random vector H_k denotes the time-k fading vector; and where Z_k denotes additive noise. Here ℂ denotes the complex field, ℂ^{n_T} denotes the n_T-dimensional complex Euclidean space, and n_T is the number of transmit antennas. We assume that the additive noise is an IID zero-mean white Gaussian process of variance σ² > 0, i.e., {Z_k} ∼ IID N_C(0, σ²).

³Note that all results in this paper are in nats.

As for the multi-variate fading process {H_k}, we shall only assume that it is stationary, ergodic, of finite second moment E[‖H_k‖²] < ∞, and of finite differential entropy rate

    h({H_k}) ≜ lim_{n↑∞} (1/n) h(H_1, . . . , H_n) > −∞    (9)

(the regularity assumption). Neither transmitter nor receiver knows the realization of the fading vector; they only know its law.

Finally, we assume that the fading process {H_k} and the additive noise process {Z_k} are independent and of a joint law that does not depend on the channel input {x_k}.

As for the input, we consider two different constraints: a peak-power constraint and an average-power constraint. We use E to denote the maximal allowed instantaneous power in the former case, and to denote the allowed average power in the latter case. For both cases we set

    SNR ≜ E/σ².    (10)
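The channel model (8)–(10) is easy to simulate. The following minimal sketch generates one block of outputs Y_k = H_k^T x_k + Z_k for a two-antenna transmitter, assuming a spatially IID Gauss–Markov fading process and a fixed beam-direction input; the fading correlation, the particular input, and the noise variance are assumptions of this illustration and are not prescribed by the paper.

    import numpy as np

    rng = np.random.default_rng(1)
    n, nT = 1000, 2
    sigma2 = 0.01                 # noise variance sigma^2
    alpha = 0.9                   # temporal correlation of the hypothetical fading

    def crandn(*shape):
        # IID CN(0,1) samples
        return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

    # Spatially IID, temporally Gauss-Markov fading vectors H_k.
    H = np.zeros((n, nT), dtype=complex)
    H[0] = crandn(nT)
    for k in range(1, n):
        H[k] = alpha * H[k - 1] + np.sqrt(1 - alpha ** 2) * crandn(nT)

    # Example input: a fixed beam-direction times unit-power phase symbols.
    x_hat = np.array([1.0, 1.0]) / np.sqrt(2)
    X = np.outer(np.exp(2j * np.pi * rng.random(n)), x_hat)

    Z = np.sqrt(sigma2) * crandn(n)          # additive noise Z_k ~ CN(0, sigma^2)
    Y = np.einsum('kt,kt->k', H, X) + Z      # Y_k = H_k^T x_k + Z_k
    print(Y[:5])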

The capacity C(SNR) of the channel (8) is given by

    C(SNR) = lim_{n→∞} (1/n) sup I(X_1^n; Y_1^n),    (11)

where the supremum is over the set of all probability distributions on X_1^n satisfying the constraints, i.e.,

    ‖X_k‖² ≤ E, almost surely, k = 1, 2, . . . , n    (12)

for a peak-power constraint, or

    (1/n) Σ_{k=1}^n E[‖X_k‖²] ≤ E    (13)

for an average-power constraint.

Specializing [1, Theorem 4.2], [2, Theorem 6.10] to MISO fading, we have

    lim_{SNR↑∞} { C(SNR) − log log SNR } < ∞.    (14)

The fading number χ is now defined as in [1, Definition 4.6], [2, Definition 6.13] by

    χ({H_k^T}) ≜ lim_{SNR↑∞} { C(SNR) − log log SNR }.    (15)

Prima facie the fading number depends on whether a peak-power constraint (12) or an average-power constraint (13) is imposed on the input. Since a peak-power constraint is more stringent than an average-power constraint, we will derive the upper bound using the average-power constraint and the lower bound using the peak-power constraint. In the case of an isotropically distributed fading process we shall see that both constraints lead to identical fading numbers.


III. MAIN RESULTS

We first state a new upper bound to the fading number of a MISO fading channel:

Theorem 1: Consider a non-coherent MISO fading channel with memory (8) where the stationary and ergodic fading process {H_k} takes value in ℂ^{n_T} and satisfies h({H_k}) > −∞ and E[‖H_k‖²] < ∞. Then, irrespective of whether a peak-power constraint (12) or an average-power constraint (13) is imposed on the input, the fading number χ({H_k^T}) is upper-bounded by

    χ({H_k^T}) ≤ sup_{x̂_{−∞}^0} { log π + E[log |H_0^T x̂_0|²] − h(H_0^T x̂_0 | {H_ℓ^T x̂_ℓ}_{ℓ=−∞}^{−1}) },    (16)

where x̂ ≜ x/‖x‖ denotes a vector of unit length.

Proof: The proof is rather long and in parts quite technical. We therefore give only an outline here.

Similar to the proof of the SIMO fading number with memory [8], [2], we need a lemma that limits the possible joint input distributions on X_1, . . . , X_n to those under which each random vector X_k has the same law with an average power equal to the constraint E. Note that we are not allowed to assume a stationary input a priori because—even though it feels very natural—we are not aware of any proof showing that the capacity-achieving input distribution of a stationary channel is stationary. The lemma does not prove this either, but at least it allows us to restrict the input to have the same marginals.

Unfortunately, the proof is complicated by the fact that this lemma does not guarantee equal marginals for the time epochs k on the border of a block. However, these edge effects wash out once we let the blocklength n tend to infinity.

The proof then proceeds as follows: the mutual information between joint input and joint output is split up into a term describing the memoryless case and a term that takes care of the memory:

    lim_{n→∞} (1/n) I(X_1^n; Y_1^n)
        ≤ lim_{n→∞} (1/n) Σ_{k=1}^n { I(X_k; Y_k) + I(H_k^T X̂_k; {H_ℓ^T X̂_ℓ}_{ℓ=1}^{k−1} | X̂_1^k) }.    (17)

Here X̂ denotes the unit vector X/‖X‖. Note that the memory in the fading process cannot be fully exploited because the receiver has only one antenna, i.e., it only gains the knowledge about the fading vector when it is multiplied with a beam-direction.

The first term in the sum is then further upper bounded as follows: I(Xk; Yk) = IRk; HTkXˆkRk+ Zk + I ˆXk; HTkXˆkRk+ Zk Rk (18) ≤ CSISO,IID,H=HT kXˆk(E) + I ˆXk; HTkXˆkRk+ Zk Rk, (19)

where Rk stands forXk and where CSISO,IID,H=HT

kXˆk(E)

denotes the capacity of a memoryless SISO fading channel with fading H = HT

kXˆk. Note that due to the stationarity of {Hk} and to the fact that we may restrict ourselves to inputs with equal marginals, this SISO capacity actually does not depend on time. The total bound on the MISO capacity, however, still does:

C(E) ≤ limn→∞ sup QXn1 eq. marg. 1 n n  k=1 CSISO,IID,H=HT kXˆk(E) + I ˆXk; HTkXˆkRk+ Zk Rk + IHT kXˆk; {HTXˆ}k−1=1 ˆXk1  . (20)

This is partially mended in the next step: it is shown that the capacity-achieving input distribution can be written as IID blocks of a given fixed length κ + 1 independent of n. This allows us to get rid of the limit over n. We then let the power tend to infinity in order to be able to invoke the fading number of a SISO fading channel. Note that for this we need to swap a supremum and a limit, which needs justification. The bound now looks as follows:

    χ({H_k^T}) ≤ sup_{Q_{X̂_0^κ} eq. marg.} (1/(κ+1)) Σ_{j=0}^κ { log π + E[log |H_0^T X̂_{(κ+j) mod (κ+1)}|²]
            − h(H_0^T X̂_{(κ+j) mod (κ+1)} | H_{−1}^T X̂_{(κ−1+j) mod (κ+1)}, . . . , H_{−κ}^T X̂_{(0+j) mod (κ+1)}, X̂_0^κ) }.    (21)

From this the result follows by replacing the supremum over the distribution of X̂_0^κ by the supremum over all possible values of x̂_0^κ and then letting κ tend to infinity.

Next we state a lower bound to the fading number:

Theorem 2: Consider a non-coherent MISO fading channel with memory (8) where the stationary and ergodic fading process {H_k} takes value in ℂ^{n_T} and satisfies h({H_k}) > −∞ and E[‖H_k‖²] < ∞. Then the fading number χ({H_k^T}) is lower-bounded by

    χ({H_k^T}) ≥ sup_{x̂} { log π + E[log |H_0^T x̂|²] − h(H_0^T x̂ | {H_ℓ^T x̂}_{ℓ=−∞}^{−1}) },    (22)

where x̂ ≜ x/‖x‖ denotes a vector of unit length.

Moreover, this lower bound is achievable by IID inputs that can be expressed as the product of a constant unit vector x̂ ∈ ℂ^{n_T} and a circularly symmetric, scalar, complex IID random process {X_k} such that

    log |X_k|² ∼ U([log log E, log E]).    (23)

Note that this input satisfies the peak-power constraint (12) (and therefore also the average-power constraint (13)).
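The high-SNR-optimal input of Theorem 2 is simple to generate: a fixed unit beam-direction multiplied by IID circularly symmetric scalars whose log-squared magnitude is uniform on [log log E, log E]. The sketch below draws such inputs and checks the peak-power constraint (12); the beam-direction and the value of E are illustrative.

    import numpy as np

    rng = np.random.default_rng(2)
    n, E = 10_000, 1e6                              # illustrative block length and peak power
    x_hat = np.array([0.6, 0.8], dtype=complex)     # illustrative unit beam-direction

    # log|X_k|^2 ~ Uniform([log log E, log E]), phase ~ Uniform([0, 2*pi)), cf. (23).
    log_mag2 = rng.uniform(np.log(np.log(E)), np.log(E), size=n)
    phase = rng.uniform(0, 2 * np.pi, size=n)
    X_scalar = np.exp(0.5 * log_mag2 + 1j * phase)

    X = X_scalar[:, None] * x_hat[None, :]          # input vectors: scalar times beam-direction
    peak = np.max(np.sum(np.abs(X) ** 2, axis=1))
    print("peak instantaneous power:", peak, "<= E =", E, ":", peak <= E * (1 + 1e-12))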


Proof: We only give an outline of the proof. The details are omitted.

The lower bound is based on the assumption of a specific input distribution which is chosen to be of the form

    X_k = X_k · x̂,    (24)

where x̂ is a deterministic unit vector (the beam-direction) and where {X_k} is IID circularly symmetric satisfying (23). This choice is motivated by our intuition, which suggests that for a stationary channel model a stationary input should be sufficient for achieving the capacity, and by the fact that in the SISO and SIMO case such an IID input actually suffices to achieve capacity at high SNR [1], [2], [8]. Hence, this choice of {X_k} achieves the fading number of the SISO fading channel

    Y_k = (H_k^T x̂) · X_k + Z_k    (25)

with fading process {H_k^T x̂}. The lower bound is then derived by proving

    (1/n) I(X_1^n; Y_1^n) ≈ (1/n) Σ_{k=1}^n I(X_k; Y_k | {H_ℓ^T x̂}_{ℓ=1}^{k−1}).    (26)

Here the approximation is due to some correction terms that will tend to zero once we let the power tend to infinity. Therefore, letting the power grow to infinity and using the fading number (4) of a memoryless SISO fading channel with side-information, we get

    χ({H_k^T}) ≥ χ_IID(H_0^T x̂ | {H_ℓ^T x̂}_{ℓ=−∞}^{−1}).    (27)

IV. SPECIAL CASE OF ISOTROPICALLY DISTRIBUTED FADING

We next consider the special case of isotropically distributed fading processes, i.e., for every deterministic unitary n_T × n_T matrix U

    H_k =ᴸ U H_k,    (28)

where we use "=ᴸ" to denote equality in law. In this case we have the following corollary:

Corollary 3: Consider a non-coherent MISO fading channel with memory (8) where the stationary and ergodic fading process {H_k} takes value in ℂ^{n_T}, satisfies h({H_k}) > −∞ and E[‖H_k‖²] < ∞, and is isotropically distributed. Then the upper bound (16) and the lower bound (22) coincide and the fading number χ_iso({H_k^T}) is given by

    χ_iso({H_k^T}) = log π + E[log |H_0^T ê|²] − h(H_0^T ê | {H_ℓ^T ê}_{ℓ=−∞}^{−1}),    (29)

where ê is any deterministic unit vector.

Proof: This result follows immediately from Theorems 1 and 2 by noting that for any ê

    H_k^T ê =ᴸ H_k^T U^T ê = H_k^T ê′,    (30)

where the first equality in law follows from (28) and the second equality by defining a new unit vector ê′ ≜ U^T ê. Note that for the MISO case isotropic distribution is equivalent to rotation commutativity in the generalized sense as defined in [1, Definition 4.37] or [2, Definition 6.37].
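The key property used in this proof is that, for isotropically distributed fading, the law of H_k^T ê is the same for every unit vector ê. The sketch below checks this empirically for the simplest isotropic example, H_k ∼ N_C(0, I), by comparing a few sample moments of H^T ê for two different directions; the choice of distribution and of the test directions is an assumption of this illustration.

    import numpy as np

    rng = np.random.default_rng(3)
    n, nT = 200_000, 3

    # Isotropic example: H ~ CN(0, I), i.e., invariant in law under H -> U H for unitary U.
    H = (rng.standard_normal((n, nT)) + 1j * rng.standard_normal((n, nT))) / np.sqrt(2)

    e1 = np.array([1.0, 0.0, 0.0])
    e2 = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)

    for name, e in [("e1", e1), ("e2", e2)]:
        s = H @ e                                   # samples of H^T e
        print(name, "E|H^T e|^2 ≈", np.mean(np.abs(s) ** 2),
              " E|H^T e|^4 ≈", np.mean(np.abs(s) ** 4))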

V. SPECIAL CASE OF GAUSSIAN FADING

In this section we assume that the fading process {H_k} is a mean-d Gaussian process such that {H_k − d} is a zero-mean, circularly symmetric, stationary, ergodic, complex Gaussian process with matrix-valued spectral distribution function F(·), and with covariance matrix K. Furthermore, we assume that the prediction error covariance matrix Σ is non-singular (regularity assumption).

A. Upper Bound for Gaussian Fading

We start with a new derivation of the upper bound (6) based on Theorem 1. We will see that (6) is in general less tight than (16).

We start by loosening the upper bound (16) as follows:

    χ({H_k^T}) ≤ sup_{x̂_{−∞}^0} { log π + E[log |H_0^T x̂_0|²] − h(H_0^T x̂_0) + h(H_0^T x̂_0) − h(H_0^T x̂_0 | {H_ℓ^T x̂_ℓ}_{ℓ=−∞}^{−1}) }    (31)
               ≤ sup_{x̂_0} { log π + E[log |H_0^T x̂_0|²] − h(H_0^T x̂_0) } + sup_{x̂_{−∞}^0} I(H_0^T x̂_0; {H_ℓ^T x̂_ℓ}_{ℓ=−∞}^{−1}).    (32)

In [1, Corollary 4.28], [2, Corollary 6.28] it has been shown that the IID MISO fading number (2) for Gaussian fading is given by

    χ(H^T) = sup_{x̂} { log π + E[log |H^T x̂|²] − h(H^T x̂) }    (33)
           = −1 + log d_*² − Ei(−d_*²),    (34)

where d_* is given in (7). This proves the equivalence of the first supremum in (32) with the first three terms of (6). It therefore only remains to prove that

    sup_{x̂} I(H_0^T x̂; {H_ℓ^T x̂}_{ℓ=−∞}^{−1}) ≤ log(‖K‖ / σ_min),    (35)

where σ_min is the smallest eigenvalue of the prediction error covariance matrix Σ. To this end note that

    sup_{x̂} I(H_0^T x̂; {H_ℓ^T x̂}_{ℓ=−∞}^{−1})
        ≤ sup_{x̂} I(H_0^T x̂; {H_ℓ^T x̂}_{ℓ=−∞}^{−1}, H_{−∞}^{−1})    (36)
        = sup_{x̂} I(H_0^T x̂; H_{−∞}^{−1})    (37)
        = sup_{x̂} { h(H_0^T x̂) − h(H_0^T x̂ | H_{−∞}^{−1}) }    (38)
        = sup_{x̂} { log(πe x̂†K x̂) − h(H_0^T x̂ | H_{−∞}^{−1}) }    (39)
        ≤ sup_{x̂} log(πe x̂†K x̂) − inf_{x̂} h(H_0^T x̂ | H_{−∞}^{−1}).    (40)

To compute the second term in (40), we express the fading vector as H_0 = H̄_0 + Ȟ_0, where H̄_0 denotes the prediction of H_0 based on the past realizations. Note that the remaining error Ȟ_0 ∼ N_C(0, Σ) is independent of H̄_0 and H_{−∞}^{−1}. Hence

    h(H_0^T x̂ | H_{−∞}^{−1}) = h((H̄_0^T + Ȟ_0^T) x̂ | H_{−∞}^{−1})    (41)
                              = h(Ȟ_0^T x̂ | H_{−∞}^{−1})    (42)
                              = h(Ȟ_0^T x̂)    (43)
                              = log(πe x̂†Σ x̂).    (44)

The bound (35) now follows from the Rayleigh–Ritz Theorem [11, Theorem 4.2.2], [2, Theorem A.9]

    min_{x̂} x̂†Σ x̂ = σ_min,    (45)

and the definition of the Euclidean norm of matrices: writing K = S†S,

    max_{x̂} x̂†K x̂ = max_{x̂} x̂†S†S x̂ = max_{x̂} ‖S x̂‖² = ‖S‖² = ‖K‖.    (46)

Note that this bound is overly optimistic regarding the memory: the bound is based on the idea that the receiver can get some estimate of the channel state vector based on the knowledge of all past state vectors. It ignores the fact that we have a MISO channel where the receiver only has a single antenna and therefore never receives the channel state vector H_k as a whole, but only as a summed-up version H_k^T X̂_k.
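The two linear-algebra facts used in (45) and (46) are easy to verify numerically: over unit vectors, the minimum of x̂†Σx̂ is the smallest eigenvalue of Σ and the maximum of x̂†Kx̂ is the operator norm ‖K‖. The sketch below compares the eigenvalue-based values with a crude maximization over random unit vectors; the matrices are randomly generated and purely illustrative.

    import numpy as np

    rng = np.random.default_rng(4)
    nT = 3

    # Random Hermitian positive semi-definite matrices K and Sigma (illustrative).
    A = rng.standard_normal((nT, nT)) + 1j * rng.standard_normal((nT, nT))
    K = A @ A.conj().T
    B = rng.standard_normal((nT, nT)) + 1j * rng.standard_normal((nT, nT))
    Sigma = B @ B.conj().T

    # Many random unit vectors x.
    V = rng.standard_normal((100_000, nT)) + 1j * rng.standard_normal((100_000, nT))
    V /= np.linalg.norm(V, axis=1, keepdims=True)

    quad_K = np.einsum('ij,jk,ik->i', V.conj(), K, V).real      # x^H K x for each x
    quad_S = np.einsum('ij,jk,ik->i', V.conj(), Sigma, V).real  # x^H Sigma x for each x

    print("max x^H K x    :", quad_K.max(), " vs ||K||     =", np.linalg.norm(K, 2))
    print("min x^H Sigma x:", quad_S.min(), " vs sigma_min =", np.linalg.eigvalsh(Sigma).min())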

B. Spatially IID Gaussian Fading

We next specialize the assumptions to the case where {H̃_k} = {H_k − d} is a spatially IID process where each component is a zero-mean, unit-variance, circularly symmetric, complex Gaussian process of spectral distribution function F(·). For this case we will now present a new derivation of the result (5) based on our new bounds.

Note that we cannot apply Corollary 3 here: even though {H̃_k} is isotropically distributed, {H_k} is not, due to its mean vector d. However, the term I(H_0^T x̂_0; {H_ℓ^T x̂_ℓ}_{ℓ=−∞}^{−1}) does not depend on the particular choice of x̂:

    I(H_0^T x̂_0; {H_ℓ^T x̂_ℓ}_{ℓ=−∞}^{−1})
        = I(H_0^T x̂_0 − d^T x̂_0; {H_ℓ^T x̂_ℓ − d^T x̂_ℓ}_{ℓ=−∞}^{−1})    (47)
        = I(H̃_0^T x̂_0; {H̃_ℓ^T x̂_ℓ}_{ℓ=−∞}^{−1})    (48)
        = I(H̃_0^T ê; {H̃_ℓ^T ê}_{ℓ=−∞}^{−1})    (49)
        = I(H_0^{(1)}; {H_ℓ^{(1)}}_{ℓ=−∞}^{−1})    (50)
        = log(1/ε²).    (51)

Eq. (5) now follows from (34), Theorems 1 and 2 by noting that

    max_{‖x̂‖=1} |d^T x̂| / √(Var(H_k^T x̂)) = max_{‖x̂‖=1} |d^T x̂| = ‖d‖.    (52)

VI. DISCUSSION& CONCLUSION

We have derived two bounds on the fading number of a MISO fading channel of general law including memory. Both bounds show the same structure involving a maximization over a deterministic beam-direction x̂, which suggests that beam-forming is optimal at high SNR. However, one has to be aware that the beam-direction is not chosen to maximize the SNR, but to maximize the fading number.

The difference between the upper and lower bounds lies in the details of the maximization: while in the lower bound one single unit direction vector x̂ is chosen for all times, the upper bound allows for different x̂_k at different times k.

We are convinced that the lower bound is actually tight: intuition suggests that for our stationary channel model a stationary input should be sufficient for achieving the capacity. As a matter of fact, in the SISO and SIMO cases it has been shown that an IID input actually suffices to achieve capacity at high SNR [1], [2], [8]. Furthermore, considering the symmetry in the upper bound (21), the last step in the proof of the upper bound does not seem tight. Unfortunately, we have not yet been able to show that the supremum in (21) is achieved by an IID input.

In the case of isotropically distributed fading the particular choice of direction has no influence on the fading process and therefore the upper and lower bounds coincide.

In the case of Gaussian fading we could show that the bounds presented in [9] and [10] are special cases of the new bounds presented here, where the new upper bound (16) is in general tighter than (6).

The success of further attempts at deriving the MISO fading number precisely will be crucial to the investigation of the fading number of general MIMO fading channels.

ACKNOWLEDGMENTS

Helpful comments from Daniel Hösli, Tobias Koch, Amos Lapidoth, and Natalia Miliou are gratefully acknowledged.

REFERENCES

[1] A. Lapidoth and S. M. Moser, "Capacity bounds via duality with applications to multiple-antenna systems on flat fading channels," IEEE Trans. Inform. Theory, vol. 49, no. 10, pp. 2426–2467, Oct. 2003.

[2] S. M. Moser, "Duality-based bounds on channel capacity," Ph.D. dissertation, Swiss Federal Institute of Technology, Zurich, Oct. 2004, Diss. ETH No. 15769. [Online]. Available: http://moser.cm.nctu.edu.tw/

[3] İ. E. Telatar, "Capacity of multi-antenna Gaussian channels," Europ. Trans. Telecommun., vol. 10, no. 6, pp. 585–595, Nov.–Dec. 1999.

[4] A. Lapidoth, "On the high SNR capacity of stationary Gaussian fading channels," in Proc. 41st Allerton Conf. Comm., Contr. and Comp., Allerton H., Monticello, IL, Oct. 1–3, 2003, pp. 410–419.

[5] T. Koch, "On the asymptotic capacity of multiple-input single-output fading channels with memory," Master's thesis, Signal and Inform. Proc. Lab., ETH Zurich, Switzerland, Apr. 2004, supervised by Prof. Dr. Amos Lapidoth.

[6] A. Lapidoth, "On the asymptotic capacity of stationary Gaussian fading channels," IEEE Trans. Inform. Theory, vol. 51, no. 2, pp. 437–446, Feb. 2005.

[7] Y. Liang and V. V. Veeravalli, "Capacity of noncoherent time-selective Rayleigh-fading channels," IEEE Trans. Inform. Theory, vol. 50, no. 12, pp. 3095–3110, Dec. 2004.

[8] A. Lapidoth and S. M. Moser, "The fading number of single-input multiple-output fading channels with memory," IEEE Trans. Inform. Theory, vol. 52, no. 2, pp. 437–453, Feb. 2006.

[9] T. Koch and A. Lapidoth, "Degrees of freedom in non-coherent stationary MIMO fading channels," in Proc. Winter School Cod. and Inform. Theory, Bratislava, Slovakia, Feb. 20–25, 2005, pp. 91–97.

[10] T. Koch and A. Lapidoth, "The fading number and degrees of freedom in non-coherent MIMO fading channels: a peace pipe," in Proc. IEEE Int. Symp. Inf. Theory, Adelaide, Australia, Sept. 4–9, 2005, pp. 661–665.

[11] R. A. Horn and C. R. Johnson, Matrix Analysis. Cambridge University Press, 1985.
