
Limits on Support Recovery of Sparse Signals via Multiple-Access Communication Techniques

Yuzhe Jin, Student Member, IEEE, Young-Han Kim, Member, IEEE, and Bhaskar D. Rao, Fellow, IEEE

Abstract—In this paper, we consider the problem of exact support recovery of sparse signals via noisy linear measurements. The main focus is finding the sufficient and necessary condition on the number of measurements for support recovery to be reliable. By drawing an analogy between the problem of support recovery and the problem of channel coding over the Gaussian multiple-access channel (MAC), and exploiting mathematical tools developed for the latter problem, we obtain an information-theoretic framework for analyzing the performance limits of support recovery. Specifically, when the number of nonzero entries of the sparse signal is held fixed, the exact asymptotics on the number of measurements sufficient and necessary for support recovery is characterized. In addition, we show that the proposed methodology can deal with a variety of models of sparse signal recovery, hence demonstrating its potential as an effective analytical tool.

Index Terms—Compressed sensing, Gaussian multiple-access channel (MAC), noisy linear measurement, performance tradeoff, sparse signal, support recovery.

I. INTRODUCTION

CONSIDER the problem of estimating a high-dimensional sparse signal via noisy linear measurements, in which the signal is multiplied by a measurement matrix and corrupted by measurement noise. A sparse signal informally refers to a signal whose representation in a certain basis contains a large proportion of zero elements. In this paper, we mainly consider signals that are sparse with respect to the canonical basis of the Euclidean space. The goal is to estimate the sparse signal by making as few measurements as possible. This problem has received much attention from many research communities, motivated by a wide spectrum of applications such as compressed sensing [1], [2], biomagnetic inverse problems [3], [4], image processing [5], [6], bandlimited extrapolation and spectral estimation [7], robust regression and outlier detection [8], speech processing [9], channel estimation [10], [11], echo cancellation [12], [13], and wireless communication [10], [14].

Manuscript received March 03, 2010; revised March 11, 2011; accepted June 09, 2011. Date of current version December 07, 2011. This work was supported in part by the National Science Foundation under Grants CCF-0830612 and CCF-0747111. The material in this paper was presented in part at the 2008 IEEE International Conference on Acoustics, Speech, and Signal Processing and the 2008 IEEE International Symposium on Information Theory. A short version of this paper was presented at the 2010 IEEE International Symposium on Information Theory.

The authors are with the Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, CA 92093-0407 USA (e-mail: yuzhe.jin@gmail.com; yhk@ucsd.edu; brao@ece.ucsd.edu).

Communicated by J. Romberg, Associate Editor for Signal Processing.

Digital Object Identifier 10.1109/TIT.2011.2170116

Computationally efficient algorithms for sparse signal recovery have been proposed to find or approximate the sparse signal in various settings. A partial list includes matching pursuit [15], orthogonal matching pursuit [16], LASSO [17], basis pursuit [18], FOCUSS [3], sparse Bayesian learning [19], finite rate of innovation [20], CoSaMP [21], and subspace pursuit [22]. At the same time, many exciting mathematical tools have been developed to analyze the performance of these algorithms. In particular, Donoho [1], Donoho et al. [23], Candès and Tao [24], and Candès et al. [25] presented sufficient conditions for ℓ1-norm minimization algorithms, including basis pursuit, to successfully recover the sparse signals with respect to certain performance metrics. Tropp [26], Tropp and Gilbert [27], and Donoho et al. [28] studied greedy sequential selection methods such as matching pursuit and its variants. In these papers, the structural properties of the measurement matrix, including coherence metrics [15], [23], [26], [29] and spectral properties [1], [24], are used as the major ingredient of the performance analysis. By using random measurement matrices, these results translate to relatively simple tradeoffs among the dimension of the signal, the number of its nonzero entries, and the number of measurements that ensure asymptotically successful reconstruction of the sparse signal. When measurement noise is present, the performance of the sparse signal recovery algorithms has been measured by the Euclidean distance between the true signal and the estimate [23], [25].

In many applications, however, finding the exact support of the signal is important even in the noisy setting. For example, in applications of medical imaging, magnetoencephalography (MEG) and electroencephalography (EEG) are common approaches for collecting noninvasive measurements of external electromagnetic signals [30]. A relatively fine spatial resolution is required to localize the neural electrical activities from a huge number of potential locations [31]. In the domain of cognitive radio, spectrum sensing plays an important role in identifying available spectrum for communication, where estimating the number of active subbands and their locations becomes a nontrivial task [32]. In multiple-user communication systems such as a code-division multiple-access (CDMA) system, the problem of neighbor discovery requires identification of active nodes from all potential nodes in a network based on a linear superposition of the signature waveforms of the active nodes [14]. In all these problems, finding the support of the sparse signal is more important than approximating the signal vector in the Euclidean distance. Hence, it is important to understand performance issues in the exact support recovery of sparse signals with noisy measurements. Information-theoretic tools have proven successful in this direction. Wainwright [33], [34] considered the problem of exact support recovery using the optimal maximum-likelihood decoder. Necessary and sufficient conditions are established for different scalings between the sparsity level and signal dimension. Using the same decoder, Rad [35] derived sharp upper bounds on the error probability of exact support recovery. Meanwhile, Fletcher et al. [36], [37] improved the necessary condition with the same decoder.

Wang et al. [38], [39] also presented a set of necessary conditions for exact support recovery. Akçakaya and Tarokh [40] analyzed the performance of a joint typicality decoder and applied it to find a set of necessary and sufficient conditions under different performance metrics including the one for exact support recovery. In addition, a series of papers have leveraged many information-theoretic tools, including rate-distortion theory [41], [42], expander graphs [43], belief propagation and list decoding [44], and low-density parity-check codes [45], to design novel algorithms for sparse signal recovery and to analyze their performances.

In this paper, we develop sharper asymptotic tradeoffs between the signal dimension, the number of nonzero entries, and the number of measurements for reliable support recovery in the noisy setting. In particular, when the number of nonzero entries is fixed, we show that a precise scaling of the number of measurements is both sufficient and necessary, and we give a complete characterization of the associated constant, which depends on the values of all nonzero entries. This result provides a clear insight into the role of the nonzero entries in support recovery, and improves upon many existing results where only the minimum nonzero magnitude entered the performance tradeoffs. When the number of nonzero entries increases in certain manners as specified later, we obtain sufficient and necessary conditions for perfect support recovery which can be tight in order.

Our main results are inspired by the analogy to communication over the Gaussian multiple-access channel (MAC) [46], [47]. According to this connection, the columns of the measurement matrix form a common codebook for all senders. Codewords from the senders are individually multiplied by unknown channel gains, which correspond to the nonzero entries of the signal. Then, the noise-corrupted linear combination of these codewords is observed. Thus, support recovery can be interpreted as decoding messages from multiple senders.

Despite these similarities between the problem of support recovery and that of MAC communication, there are also important differences between them, namely, the common codebook problem and the unknown channel gain problem, which make a straightforward translation of known results nontrivial. We customize tools from multiple-user information theory (e.g., distance decoding and Fano's inequality) to tackle the support recovery problem. Moreover, the analytical framework in this paper can be extended to different models of sparse signal recovery, such as non-Gaussian measurement noise, sources with random activity levels, and multiple measurement vectors (MMVs).

Some analogies between sparse signal recovery (in a broad sense) and channel coding have been observed from various perspectives in parallel work [41], [48, Sec. IV-D], [38, Sec. II-A], [40, Sec. III-A], [28, Sec. 11.2]. We first note that our approach is different from the analytical perspective in [41], where the Gaussian channel capacity and rate-distortion analysis were employed to establish design constraints, and is also different from the point-to-point Gaussian channel coding perspective in [48, Sec. IV-D] and [38, Sec. II-A]. In [40, Sec. III-A], the sparse signal recovery problem was related to communication over a single-user multiple-input–single-output (MISO) channel, which was then employed to obtain a necessary condition under the assumption that the channel gains were known at the receiver. Unlike these approaches, we connect the problem of sparse signal recovery explicitly to a MAC communication problem where no coordination exists among senders. The advantage of this approach is evident in our main result that establishes matching sufficient and necessary conditions for reliable support recovery. To be fair, we note that the similarity between sparse signal recovery and multiple-user detection was described in [28, Sec. 11.2], but only at an intuitive level. Here we clarify the connection between the two problems and extend the analytical tool set for multiple-user communication, which is useful particularly in establishing the sufficient condition for support recovery.

The rest of the paper is organized as follows. We formally state the support recovery problem in Section II. To motivate the main results of the paper and their proof techniques, we discuss in Section III the similarities and differences between the support recovery problem and the multiple-access communication problem. Our main results are presented in Section IV, together with comparisons to existing results in the literature. The proofs of the main theorems are presented in Appendixes I–IV, respectively. Section V further extends the results to different signal models and measurement procedures. Section VI concludes the paper with further discussions.

Throughout this paper, a set is a collection of unique objects. We use standard notation for the real Euclidean space, the set of natural numbers, the cardinality of a set, vector norms, and the Frobenius norm of a matrix, together with the usual asymptotic order notation (big-O, big-Omega, big-Theta, little-o, and little-omega).

II. PROBLEM FORMULATION

Let the vector of signal values be given with every entry nonzero. Let the support indices be chosen uniformly at random, without replacement, from the full index set. Then, the signal of interest is generated by placing the nonzero values at the chosen indices and zeros elsewhere:

(1)

Thus, the support of the signal is the set of chosen indices and, according to the signal model (1), its cardinality equals the number of nonzero entries. Throughout this paper, we assume this number is known. The signal is said to be sparse when the number of nonzero entries is small relative to the signal dimension.


We measure the signal through the linear operation

(2)

where the measurement matrix multiplies the signal, measurement noise is added, and the noisy measurement is observed. We further assume that the elements of the measurement matrix are independently generated according to a zero-mean Gaussian distribution, and that the noise is independently and identically distributed (i.i.d.) Gaussian. We assume the noise variance is known.

Upon observing the noisy measurement, one wishes to recover the support of the sparse signal. A support recovery map is defined as

(3)

Given the signal model (1), the measurement model (2), and the support recovery map (3), the performance metric is defined to be the average probability of error in support recovery for each (unknown) signal value vector. Note that the probability here is taken over the random signal support, the measurement matrix, and the noise.
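As a concrete illustration of the signal model (1) and the measurement model (2), the following minimal sketch generates a sparse signal with a uniformly random support, takes noisy Gaussian measurements, and evaluates the exact-support-recovery error metric. All dimensions, the noise level, and the nonzero values are illustrative assumptions, not quantities from the paper.

```python
import numpy as np

# Minimal sketch of the signal model (1) and measurement model (2).
# N, n, k, sigma, and the nonzero values are illustrative assumptions.
rng = np.random.default_rng(0)
N, n, k, sigma = 100, 40, 3, 0.5
beta_vals = np.array([1.0, -2.0, 0.5])           # fixed nonzero values
support = rng.choice(N, size=k, replace=False)   # support drawn uniformly, without replacement
beta = np.zeros(N)
beta[support] = beta_vals                        # signal as in (1)
X = rng.standard_normal((n, N))                  # i.i.d. Gaussian measurement matrix
w = sigma * rng.standard_normal(n)               # i.i.d. Gaussian noise
y = X @ beta + w                                 # noisy linear measurements as in (2)

def support_error(estimated_support):
    """Exact support recovery counts as an error unless the estimated
    support coincides exactly with the true support."""
    return float(set(estimated_support) != set(support))
```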

III. AN INFORMATION-THEORETIC PERSPECTIVE ON SPARSE SIGNAL RECOVERY

In this section, we will introduce an interpretation of the problem of sparse signal recovery via a communication problem over the Gaussian MAC. The similarities and differences between the two problems will be elucidated, hence progressively unraveling the intuition and facilitating technical preparation for the main results and their proof techniques.

A. Brief Review of the Gaussian MAC

We start by reviewing the background on the multiple-sender MAC. Suppose the senders wish to transmit information to a common receiver. Each sender has access to a codebook of codewords, and its rate is determined by the size of the codebook and the block length. To transmit information, each sender chooses a codeword from its codebook, and all senders transmit their codewords simultaneously over a Gaussian MAC [49]

(4) where each sender's input symbol at a given transmission time is scaled by the channel gain associated with that sender, the additive noise is i.i.d. Gaussian, and the resulting sum is the channel output.

Upon receiving the channel output, the receiver needs to determine the codewords transmitted by each sender. Since the senders interfere with each other, there is an inherent tradeoff among their operating rates. The notion of capacity region is introduced to capture this tradeoff by characterizing all possible rate tuples at which reliable communication can be achieved with diminishing probability of decoding error. By assuming that each sender obeys a common power constraint, the capacity region of a Gaussian MAC with known channel gains [49] is

(5)
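For reference, the capacity region of a K-sender Gaussian MAC with known channel gains takes the following textbook form; the notation (gains h_j, common power constraint P, noise variance σ²) is generic and is not a verbatim quote of (5):

\[
\mathcal{C} \;=\; \Bigl\{ (R_1,\ldots,R_K) : \sum_{j\in S} R_j \;\le\; \tfrac{1}{2}\log\Bigl(1 + \frac{P\sum_{j\in S} h_j^{2}}{\sigma^{2}}\Bigr) \ \text{for every } S \subseteq \{1,\ldots,K\} \Bigr\}.
\]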

B. Connecting Sparse Signal Recovery to the Gaussian MAC

In the measurement model (2), one can remove the columns of the measurement matrix that are nulled out by zero entries of the signal and obtain the following effective form of the measurement procedure:

(6) By contrasting (6) to the Gaussian MAC (4), we can draw the following key connections that relate the two problems [46].

1) A nonzero entry as a sender: We can view the existence of a nonzero entry at a given position as a sender that accesses the MAC.

2) A column as a codeword: We treat the measurement matrix as a codebook, with each column as a codeword. Each element of a column is fed one by one to the channel (4) as an input symbol, resulting in as many channel uses as there are measurements. The noise and measurement can be related to the channel noise and channel output in the same fashion.

3) A nonzero value as a channel gain: The nonzero entry in (6) plays the role of the channel gain in (4). Essentially, we can interpret the vector representation (6) as consecutive uses of the multiple-sender Gaussian MAC (4) with appropriate stacking of the inputs/outputs into vectors.

4) Similarity between objectives: In the problem of sparse signal recovery, the goal is to find the support of the signal. In the problem of MAC communication, the receiver's goal is to determine the indices of the codewords transmitted by the senders.

Based on the aforementioned aspects, the two problems share significant similarities, which enable leveraging information-theoretic methods for the performance analysis of support recovery of sparse signals. However, as we will see next, there are domain-specific differences between the support recovery problem and the channel coding problem that should be addressed accordingly to rigorously apply the information-theoretic approaches.

C. Key Differences

1) Common codebook: In MAC communication, each sender uses its own codebook. However, in sparse signal recovery, the “codebook” is shared by all “senders.” All senders choose their codewords from the same codebook and hence operate at the same rate. Different senders will not choose the same codeword, or they will collapse into one sender.

2) Unknown channel gains: In MAC communication, the capacity region (5) is valid assuming that the receiver knows the channel gains [50]. In contrast, in the sparse signal recovery problem, the nonzero values are unknown and need to be estimated. Although coding techniques and capacity results are available for communication with channel uncertainty, a closer examination indicates that those results are not directly applicable to our problem. For instance, channel training with pilot symbols is a common practice to combat channel uncertainty [51]. However, it is not obvious how to incorporate the training procedure into the measurement model (2), and hence the related results are not directly applicable.

Once these differences are properly accounted for, the connection between the problems of sparse signal recovery and channel coding makes available a variety of information-theoretic tools for handling performance issues pertaining to the support recovery problem. Based on techniques that are rooted in channel capacity results, but suitably modified to deal with the differences, we will present the main results of this paper in the next section.

IV. MAIN RESULTS AND THEIR IMPLICATIONS

A. Fixed Number of Nonzero Entries

To discover the precise impact of the values of the nonzero entries on support recovery, we consider the support recovery of a sequence of sparse signals generated with the same signal value vector. In particular, we assume that the number of nonzero entries is fixed. Define the auxiliary quantity

(7)

For example, when the signal has two nonzero entries, we can see from Section III that this quantity is closely related to the two-sender MAC capacity with an equal-rate constraint.
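Under the MAC analogy of Section III, each nonzero entry of magnitude β_i acts as a sender with received power β_i² over noise of variance σ², and all senders draw codewords from the same codebook whose size equals the signal dimension. A natural candidate for the auxiliary quantity in (7), offered here only as a hedged reconstruction in assumed notation rather than a verbatim statement, is the equal-rate constant

\[
c(\beta) \;=\; \max_{\emptyset \neq S \subseteq \{1,\ldots,k\}} \frac{|S|}{\tfrac{1}{2}\log\bigl(1 + \sum_{i\in S}\beta_i^{2}/\sigma^{2}\bigr)} ,
\]

under which each sender operating at a rate of (log of the signal dimension) divided by (the number of measurements) would stay inside the equal-rate portion of the MAC region whenever the number of measurements exceeds roughly c(β) times the logarithm of the signal dimension.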

The following two theorems summarize our main results under this setup. A subscript on the number of measurements indicates its possible dependence on the signal dimension. The proofs of the theorems are presented in Appendixes I and II, respectively.

Theorem 1: If the number of measurements satisfies

(8)

then there exists a sequence of support recovery maps such that

(9)

Theorem 2: If the number of measurements satisfies

(10)

then for any sequence of support recovery maps, we have

(11) We provide the following observations. First, Theorems 1 and 2 together indicate that the characterized scaling of the number of measurements is sufficient and necessary for exact support recovery. The constant is explicitly characterized, capturing the role of all nonzero entries of a sparse signal in support recovery. Second, the proof of Theorem 2 for the necessary condition employs the assumption that the values of the nonzero entries are known. It immediately follows that even if the values of the nonzero entries are known, the sufficient condition for successfully recovering the support is still given by (8). This observation indicates that the unknown channel gain problem does not pose a serious obstacle to support recovery for the case of a fixed number of nonzero entries. Further, the benefit of exploiting the connection between sparse signal recovery and multiple-access communication is also supported by the theorems. Resorting to channel capacity results enables us to explicitly extract the constant and obtain tight sufficient and necessary conditions.

B. Growing Number of Nonzero Entries

Next, we consider support recovery in the case where the number of nonzero entries grows with the dimension of the signal. We assume that the magnitude of each nonzero entry is bounded from both below and above.

First, we present a sufficient condition for exact support recovery. The proof is given in Appendix III.

Theorem 3: Let the sequence of signal value vectors have nonzero entries whose magnitudes are bounded from below and above. If

(12)

then there exists a sequence of support recovery maps such that the probability of error vanishes asymptotically.

Note that, according to our proof technique, the upper bound on the nonzero magnitudes is not needed for performing support recovery, and it does not appear in the sufficient condition above. In the proof, however, we use the assumption that the nonzero signal values are uniformly bounded from above to show that the probability of error tends to zero as the problem size grows. To better understand Theorem 3, we present the following implication of (12) that shows the tradeoffs among the number of measurements, the signal dimension, and the sparsity level.

Corollary 1: Under the assumption of Theorem 3


TABLE I
SUFFICIENT CONDITIONS FOR SUPPORT RECOVERY IN DIFFERENT SPARSITY REGIONS

TABLE II
SUFFICIENT CONDITIONS FOR SUPPORT RECOVERY IN THE EXISTING LITERATURE

provided that

In particular, we have the following:

1) when , the sufficient number of measurements is ;

2) when , the sufficient number of measurements is .

Table I summarizes the sufficient conditions on the number of measurements paired with different relations between the sparsity level and the signal dimension in Corollary 1.

In the existing literature, Wainwright [34], Akçakaya and Tarokh [40], and Rad [35] derived sufficient conditions for exact support recovery. Under the same assumption of Theorem 3, the sufficient conditions presented in these papers, respectively, are summarized in Table II.1

To compare the results, we first examine the case of sublinear sparsity. Note that in the regime of slowest sparsity growth, our sufficient condition on the number of measurements is among the best existing results. In the remaining sublinear regime and in the linear regime, our results are not as tight as the best existing results. More discussion will be provided in Section IV-C.

Next, we present a necessary condition, the proof of which is given in Appendix IV.

Theorem 4: Let the sequence of signal value vectors have nonzero entries whose magnitudes are bounded from below and above. If

(13)

1 We use Theorem 5 in [35] in the table. The sufficient condition in Corollary 6.6 therein seems to be incorrect.

TABLE III
NECESSARY CONDITIONS FOR SUPPORT RECOVERY

then for any sequence of support recovery maps, we have

To compare with existing results under the same assumption2 as in Theorem 4, we first note that in the linear sparsity regime, Theorem 4 indicates the stated scaling as the necessary condition. Compared to the best known sufficient condition (see Table II), there is a nontrivial gap. For sublinear sparsity, we summarize the necessary conditions developed in previous papers in Table III.3 In this case, the resulting condition is the best known necessary condition.4

C. Further Discussions

We offer more insights into the analytical framework and proof techniques.

The sufficient conditions in this paper are derived based on the distance decoding technique used in the channel decoding problem [52]. In order to perform distance decoding, the channel gains need to be known or estimated. This is in contrast to the fact that the nonzero entries of a sparse signal are unknown, which raises the unknown channel gain problem of Section III-C. To tackle this problem, we employ the following procedure in the proofs of the sufficient conditions.

1) Find an estimate of the norm of the nonzero value vector.

2) Find a set of points which can be viewed as a covering, at a prescribed resolution, of the hypersphere of the estimated radius in dimension equal to the number of nonzero entries. By construction of the covering, there exists a point in it that is close to the true nonzero value vector with high probability.

3) Find a set of indices such that

(14)

for some point in the covering. We declare this set as the estimated support of the sparse signal. As a byproduct, the elements of the corresponding covering point can be viewed as estimates of the values of the nonzero entries.

2 The necessary conditions derived in [34], [39], and [40] were originally derived under slightly different assumptions. Here we adapted them to compare the asymptotic orders of the number of measurements.

3 This result is implied in [40], by identifying the appropriate quantities in Theorem 1.6 therein and clarifying the order of the resulting bound. The proof of Theorem 1.6 states [below its (25)] that asymptotically reliable support recovery is not possible under the stated condition. Hence, we consider an appropriate necessary condition resulting from the proof in [40].

4 Note that when the nonzero magnitudes scale as stated, the corresponding number of measurements can be shown to be necessary for both linear and sublinear sparsity [39]. Hence, in that regime it is the best known necessary condition.

The success of this support recovery procedure is closely related to the estimation quality of the norm of the nonzero values and the cardinality of the covering set. Accordingly, our methodology shows different strengths in different regions of sparsity levels. First, in the case of a fixed number of nonzero entries, consistent estimation can be obtained, and the cardinality of the covering can be bounded from above. This provides the opportunity to discover the exact sufficient and necessary conditions for successful support recovery. Next, in the case with a growing number of nonzero entries, the estimation quality and the cardinality of the covering must be carefully controlled. To this end, the growth constraint implied by Theorem 3 is needed for the estimate to be consistent, and the upper bound on the nonzero magnitudes is needed for controlling the cardinality of the covering. Note that for sublinear sparsity with slow enough growth, our sufficient and necessary conditions coincide, and hence are tight in terms of order. As the sparsity level increases faster with the dimension, our sufficient and necessary conditions have gaps, which is a consequence of the difficulty in consistently estimating the norm and handling the large size of the covering.
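The covering-based distance decoding procedure above can be made concrete with the following minimal sketch; the problem sizes, the grid-based covering, and the acceptance rule based on residual power are illustrative assumptions rather than the exact construction used in the proofs.

```python
import itertools
import numpy as np

# Sketch of the covering-based distance decoding described above.
# N, n, k, sigma, eps, and the grid-based covering are illustrative
# assumptions; the paper's construction uses a minimal epsilon-covering.
rng = np.random.default_rng(0)
N, n, k, sigma = 30, 80, 2, 0.5
beta_vals = np.array([1.0, -1.5])                   # unknown nonzero values
true_support = set(rng.choice(N, size=k, replace=False).tolist())
beta = np.zeros(N)
beta[list(true_support)] = beta_vals
X = rng.standard_normal((n, N))
y = X @ beta + sigma * rng.standard_normal(n)

# Step 1: estimate the energy of the nonzero values from the measurement power.
power_est = max(np.sum(y ** 2) / n - sigma ** 2, 1e-6)
radius = np.sqrt(power_est)

# Step 2: a crude covering of the k-dimensional sphere of that radius,
# obtained by keeping grid points that fall in a thin spherical shell.
eps = 0.2
axis = np.arange(-radius - eps, radius + eps, eps)
grid = np.array(list(itertools.product(axis, repeat=k)))
shell = grid[np.abs(np.linalg.norm(grid, axis=1) - radius) <= eps]

# Step 3: accept the index set whose best covering point leaves a residual
# power closest to the noise level (a stand-in for rule (14)).
best_set, best_gap = None, np.inf
for S in itertools.combinations(range(N), k):
    cols = X[:, list(S)]
    resid = np.min(np.sum((y[:, None] - cols @ shell.T) ** 2, axis=0))
    gap = abs(resid / n - sigma ** 2)
    if gap < best_gap:
        best_set, best_gap = set(S), gap

print("true support:", sorted(true_support), "estimated:", sorted(best_set))
```

Even this crude search exhibits the two error events analyzed in the proofs: the true support failing the acceptance rule, and an incorrect support satisfying it.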

Another interesting regime, which has been extensively discussed in previous work, is the case considered in [34], [37], [38]. Although Theorem 4 can be extended to provide a necessary condition for this case, it does not offer improvement upon existing results. Theorem 3 may not be extended to this scenario, which indicates that our analytical technique for proving sufficient conditions is not suited for this scaling.

V. EXTENSIONS

The connection between the problems of support recovery and channel coding can be further explored to provide the performance tradeoff for different models of sparse signal recovery. Next, we discuss its potential to address several important variants.

A. Non-Gaussian Noise

Note that the rules for support recovery, mainly reflected in (20) and (26) in the proof of Theorem 1 in Appendix I, are similar to the method of nearest neighbor decoding in information theory. Following the argument in [52], one can show that the previous sufficient conditions continue to hold when the Gaussian assumption on the measurement noise in (2) is replaced by any non-Gaussian noise with the same mean and variance.

B. Random Signal Activities

In Theorem 1, the signal value vector is assumed to be a fixed vector of nonzero entries. We now relax this condition to allow a random value vector, which leads to sparse signals whose nonzero entries are randomly generated and located. For simplicity of exposition, assume that the number of nonzero entries is fixed. Interestingly, the model (2) with this new assumption can now be contrasted to a MAC with random channel gains

(15)

The difference between (15) and (4) is that the channel gains are random variables in this case. Specifically, in order to match the problem of support recovery of sparse signals, the channel gains should be considered as being realized once and then kept fixed during the entire channel use [46]. This channel model is usually termed a slow fading channel [50].

The following theorem states the performance of support re- covery of sparse signals under random signal activities.

Theorem 5: Suppose the random vector of nonzero values has bounded support and the number of measurements satisfies the appropriate scaling. Then, there exists a sequence of support recovery maps such that the asymptotic probability of error is upper bounded by the corresponding outage probability, where the threshold quantity is defined as in (7).

Proof: Note that

(16) (17) where (16) follows from Fatou’s lemma [53] and (17) follows by applying the proof of Theorem 1 to the integrand.

Theorem 5 implies that, rather than having a diminishing error probability, we generally have to tolerate a certain error probability, upper bounded by the outage probability, when the nonzero values are randomly generated. Conversely, in order to design a system with a prescribed probability of success, one can choose the number of measurements so that the corresponding outage probability is small enough. Note that this quantity can be viewed as the outage probability of a slow fading MAC given the target rate of each sender [50]. Thus, it represents the probability that the channel gains are realized too poorly to support the target rate.
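To make the outage interpretation concrete, the following minimal sketch estimates by Monte Carlo the probability that randomly drawn gains of a two-sender slow-fading Gaussian MAC cannot support a common target rate; the gain distribution, target rate, and noise variance are illustrative assumptions, not quantities from the paper.

```python
import numpy as np

# Monte Carlo estimate of a slow-fading two-sender MAC outage probability.
# Gains, target rate (bits per channel use), and noise variance are assumptions.
rng = np.random.default_rng(2)
sigma2, target_rate, trials = 1.0, 0.25, 20_000
outages = 0
for _ in range(trials):
    h = rng.uniform(0.1, 1.0, size=2)          # gains drawn once per block (slow fading)
    ok = True
    for S in ([0], [1], [0, 1]):               # check every subset constraint of the region
        if len(S) * target_rate > 0.5 * np.log2(1.0 + np.sum(h[S] ** 2) / sigma2):
            ok = False
            break
    outages += (not ok)
print("estimated outage probability:", outages / trials)
```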

C. Multiple Measurement Vectors

Recently, increasing research effort has been focused on sparse signal recovery with MMVs [54]–[58]. In this problem, we wish to measure multiple sparse signals that possess a common sparsity profile, that is, the locations of the nonzero entries are the same in each signal. We use the same measurement matrix to perform

(18)

where the signals, the measurement noise, and the noisy measurements are stacked as columns of the corresponding matrices.

Note that model (2) can be viewed as a special case of the MMV model (18) with a single measurement vector. The methodology developed in this paper has the potential to be extended to the performance issues of the MMV model by noting the following connections to channel coding [46]. First, the same set of columns of the measurement matrix is scaled by the entries of the different signal vectors, forming the elements of the different measurement vectors. The nonzero entries can then be viewed as the coefficients that connect different pairs of inputs and outputs of a channel. Second, each measurement vector can be viewed as the received symbols at one receiver antenna, and hence the MMV model indeed corresponds to a single-input–multiple-output (SIMO) MAC. Third, the aim is to recover the locations of the nonzero rows upon receiving the measurements. This implies that, in the language of SIMO MAC communication, the receiver decodes the information sent by all senders through multiple receiver antennas. Via proper accommodation of the method developed in this paper, the capacity results for the SIMO MAC can be leveraged to shed light on the performance tradeoff of sparse signal recovery with MMVs.

VI. CONCLUDING REMARKS

In this paper, we developed techniques rooted in multiple-user information theory to address the performance issues in the exact support recovery of sparse signals, and discovered necessary and sufficient conditions on the number of measurements. It is worthwhile to note that the interpretation of sparse signal recovery as MAC communication opens new avenues to different theoretic and algorithmic problems in sparse signal recovery. We conclude this paper by briefly discussing several interesting potential directions stemming from this interpretation.

1) Among the large collection of algorithms for sparse signal recovery, the sequential selection methods, including matching pursuit [15] and orthogonal matching pursuit (OMP) [16], determine one nonzero entry at a time, remove its contribution from the residual signal, and repeat this procedure until a certain stopping criterion is satisfied. In contrast, the class of convex relaxation methods, including basis pursuit [18] and LASSO [17], jointly estimate the nonzero entries. The sequential selection methods can be potentially viewed as successive interference cancellation (SIC) decoding [50] for MACs, whereas the convex relaxation methods can be viewed as joint decoding (a minimal sketch illustrating this view is given after this list). It would be interesting to ask whether one can make these analogies more precise and use them to address performance issues of these methods. Similarities at an intuitive level between OMP and SIC have been discussed in [47], with performance results supported by empirical evidence. More insights are yet to be explored.

2) The design of channel codes and the development of decoding methods have been extensively studied in the contexts of information theory and wireless communication. Some of these ideas have been transformed into design principles for sparse signal recovery [43]–[45], [59], [60]. Thus far, however, the efforts in utilizing the codebook designs and decoding methods have mainly focused on the point-to-point channel model, which implies that the recovery methods iterate between first recovering one nonzero entry or a group of nonzero entries by treating the rest of them as noise and then removing the recovered nonzero entries from the residual signal. In this paper, we established the analogy between sparse signal recovery and multiple-access communication. It motivates us to envision opportunities beyond a point-to-point channel model. One important question is, for example, whether we can develop practical codes for joint decoding and reconstruction techniques to simultaneously recover all the nonzero entries.
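The following minimal OMP sketch, referred to in item 1) above, makes the successive-interference-cancellation flavor explicit: the column most correlated with the residual is selected, its contribution is cancelled, and the step is repeated. The implementation details and problem sizes are illustrative assumptions, not the algorithms analyzed in this paper.

```python
import numpy as np

# Minimal orthogonal matching pursuit (OMP) sketch highlighting the SIC analogy.
def omp(X, y, k):
    residual, chosen = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(X.T @ residual)))       # strongest remaining "sender"
        chosen.append(j)
        coef, *_ = np.linalg.lstsq(X[:, chosen], y, rcond=None)
        residual = y - X[:, chosen] @ coef               # cancel decoded contributions
    return sorted(chosen), coef

rng = np.random.default_rng(3)
N, n, k = 60, 30, 3
beta = np.zeros(N)
beta[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k) + 2.0
X = rng.standard_normal((n, N))
y = X @ beta + 0.05 * rng.standard_normal(n)
print(omp(X, y, k)[0], "vs true support", sorted(np.flatnonzero(beta).tolist()))
```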

APPENDIX I
PROOF OF THEOREM 1

The proof of Theorem 1 employs the distance decoding technique [52]. Throughout, a subscript on the measurement matrix denotes the corresponding column.

For simplicity of exposition, we describe the support recovery procedure for two distinct cases on the number of nonzero entries.

Case 1 (a single nonzero entry): In this case, the signal of interest contains exactly one nonzero value. Consider the following support recovery procedure. Fix a small constant. First form an estimate of the nonzero value as

(19)

Declare that an index is the estimated location of the nonzero entry if it is the unique index such that

(20) for either sign of the estimated value. If there is no such index or more than one, pick an arbitrary index.

We now analyze the average probability of error

Due to the symmetry in the problem and the measurement matrix generation, we assume without loss of generality that the nonzero entry occupies the first position. In the following analysis, we drop superscripts and subscripts for notational simplicity when no ambiguity arises. Define, for each candidate index, the events such that


Then

(21) Let

Then, by the union of events bound and the fact that

(22)

We bound each term in (22). First, by the weak law of large numbers (LLN), the first term vanishes asymptotically. Next, we consider the second term. If

(23)

For any , as , by the LLN

Hence, we have for the first term in (23)

Following a similar reasoning using LLN, for the second term in (23)

and for the third term

Therefore, for any

which implies that

Similarly, if

Hence,

For the third term in (22), we need the following lemma, whose proof is presented at the end of this Appendix.

Lemma 1: Let . Let be a real sequence satisfying

Let be an i.i.d. random sequence where . Then, for any

Continuing the proof of Theorem 1, we consider

for . Then

Since is independent of and , it follows from the definition of and Lemma 1 (with and ) that

for , if is sufficiently small. Thus

and therefore

which tends to zero as , if

(24) Therefore, by (22), the probability of error tends to zero as , if (24) is satisfied. Finally, since is chosen arbitrarily, we have the desired proof of Theorem 1.

Case 2 (multiple nonzero entries): In this case, the signal of interest contains several nonzero values. Consider the following support recovery procedure. Fix a small constant. First, form an estimate as

(25)

For , let be a minimal set of points in satisfying the following properties.


i) , where is the -dimensional hypersphere of radius , i.e., ,

ii) For any , there exists such that .

The following properties are useful.

Lemma 2:

1) such that .

2) is monotonically nondecreasing in for fixed .

Lemma 2–1) will be proved at the end of this Appendix, whereas Lemma 2–2) is obvious.

Given and , fix . Declare is the recovered support of the signal, if it is the unique set of indices such that

(26)

for some . If there is none or more than one such set, pick an arbitrary set of indices.

Next, we analyze the average probability of error

As before, we assume without loss of generality that for , which gives

for some . Define the event

and

such that

Then

(27)

where in this case

We now bound the terms in (27). First, by the LLN, . Next, we consider . Note that, for any

(28)

By applying the LLN to each term in (28), as similarly done in Case 1, and using Lemma 2–1), we have

which implies that .

Next, we consider for

. Note that

(29) For notational simplicity, define ,

, , and

.


For any permutation of and any

(30)

Conditioned on and the chosen ,

is a fixed quantity satisfying

for some positive that depends on and only, and is nondecreasing in . Meanwhile, is independent of , and for . Hence, by Lemma 1 (with and ), (30) is upperbounded by

Hence, by the union of events bound

Furthermore, conditioned on , and hence

by Lemma 2–2). Thus

(31) Note that the probability upperbound (31) depends on only through . Grouping the events

with the same

which tends to zero as , if

(32) for all . Since is arbitrarily chosen, the proof of Theorem 1 is complete.

Now, we prove Lemma 1. For simplicity, let . Denote . The moment generating function of is

(33)

Note that the quantity of interest is a noncentral chi-square random variable; its moment generating function is given in [61], and a change of variable is then applied.
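For reference, the textbook moment generating function of a noncentral chi-square random variable with d degrees of freedom and noncentrality parameter λ (generic notation, stated here as background) is

\[
\mathbb{E}\bigl[e^{tW}\bigr] \;=\; \frac{\exp\bigl(\lambda t/(1-2t)\bigr)}{(1-2t)^{d/2}}, \qquad t < \tfrac{1}{2}.
\]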


Back to (33), we obtain

The Chernoff bound implies

Define

Clearly, Denote

Then, let us focus on the minimization problem

It can be shown that the minimizing is

and hence

Next, for fixed and

For , there is only one stationary point , which is a solution to . Check the second derivative

This confirms that is the minimum point of , for . Hence, for fixed and with

As a result


Hence, by changing the base of the logarithm

Finally, we verify Lemma 2–1). For any , according to LLN

Note that . According to the definition of , there must exist such that . Fundamental geometry implies

Hence

Choosing completes the proof.

APPENDIX II
PROOF OF THEOREM 2

The main techniques for the proof of Theorem 2 include Fano’s inequality and the properties of entropy. It mimics the proof of the converse for the channel coding theorem [49] with proper modifications.
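For reference, the generic form of Fano's inequality invoked here, for an estimate of a random variable taking values in a finite set, is

\[
H(W \mid \hat{W}) \;\le\; 1 + \Pr\{\hat{W} \neq W\}\,\log \lvert \mathcal{W} \rvert ,
\]

which (34) specializes to the support variable and its estimate.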

For any , denote the tuple of random variables by . From Fano’s inequality [49], we have

(34)

where the shorthand is used for notational simplicity. On the other hand, by a basic permutation argument

(35)

where and

which tends to zero as . Hence, combining (34) and (35), we have

(36)

(37)

(38) where (36) follows from the fact that conditioning reduces entropy, (37) follows from the chain rule of mutual information [49], and (38) follows since we condition on the measurement matrix and the noise is independent of the signal support and the measurement matrix.

Consider

(39)

where the last inequality follows since the Gaussian random variable maximizes the differential entropy given a variance constraint. To further upper bound (39), note that

(40)

and

According to the law of total variance
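In generic notation, the law of total variance referred to here reads

\[
\operatorname{Var}(Y) \;=\; \mathbb{E}\bigl[\operatorname{Var}(Y \mid X)\bigr] + \operatorname{Var}\bigl(\mathbb{E}[Y \mid X]\bigr).
\]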


Returning to (38), we have

(41) Therefore

(42)

for all . Due to the fact that , we have

(43)

for all . Since , we reach the conclusion

for all , which completes the proof of Theorem 2.

APPENDIX III
PROOF OF THEOREM 3

We show that

provided that the condition

(44)

is satisfied. Note that (44) implies that , which in turn implies that .

We follow the proof of Theorem 1 in Appendix I. Recall that in Case 2 of the proof of Theorem 1, we first proposed the support recovery rule (26). Then, we formed estimates of the nonzero values and used them to test all possible sets of indices. The key step was to analyze two types of errors. On the one hand, the true support should satisfy the reconstruction rule (26) with high probability. On the other hand, the probability that at least one incorrect support possibility satisfies this rule was controlled to diminish as the problem size increases.

By mainly replicating the steps in Appendix I, with necessary accommodations to the new setting with a growing number of nonzero entries, we present the proof of Theorem 3 as follows.

1) We first modify the support recovery rule by replacing (26) with

(45)

2) The cardinality of a minimal can be upper bounded by

for some . This can be easily shown by first partitioning the -dimensional hypercube of side into identical elementary hypercubes with side not exceeding and then, for each elementary hypercube that intersects the hypersphere, picking an arbitrary point on the hypersphere within that elementary hypercube. The resulting set of points provides the upper bound above for .

3) Define and to be the largest and smallest eigenvalues of the matrix

respectively. We replace the definition of by

Consider the asymptotic behaviors of the events. First, note that

(46)

where is chi-square distributed with mean and variance . Then, has mean and variance .

It has been shown [62] that

Then, as , has asymptotic mean and variance . Since , we have . Hence, .

Second, and are shown [63] to almost surely converge to and , respectively, where

. Thus, .

4) Next, we analyze the probability that the true support satisfies the recovery rule. Note that

(47) By using the fact that almost surely as , and Lemma 2–1), we have .

5) Now, suppose we have proceeded to a step similar to (30) [that is, to be exact, equipped with the modified rule (45) and a proper choice from the covering]. Define the auxiliary vector casewise as

(48)

Then

From Lemma 1, it follows that (for sufficiently small )

6) Note that, from [33]

Together with the modifications above, we follow the proof steps of Theorem 1 to reach

(49)

Note that

(50) It can be readily seen that, from condition (44), the upper bound in (50) becomes negative, and thus the probability of error tends to zero asymptotically.

APPENDIX IV
PROOF OF THEOREM 4

The proof of Theorem 2 can be adapted to establish Theorem 4; see [64] for details. Since we need a bound corresponding to only the sum rate, however, we use the following simple argument.

Suppose that each user uses a codebook of the stated size. (Assume without loss of generality that this size is an integer.) This is equivalent to assuming that each nonzero entry appears in its own predefined subset of indices, i.e.,

(51) Under this specific setup, if exact support recovery is asymptotically successful, it follows that every user can operate at the corresponding rate. Immediately, (5) implies the necessary condition


which leads to

We conclude the proof by noting that the special setup in (51) is equivalent to the original setup in Section II in terms of the average probability of error in support recovery, due to the symmetry in the random measurement matrix.

ACKNOWLEDGMENT

The authors would like to thank the anonymous reviewers for their insightful comments. Y. Jin would like to thank L. Yu for insightful discussions on perspectives of MAC communication.

REFERENCES

[1] D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory, vol.

52, no. 4, pp. 1289–1306, Apr. 2006.

[2] E. J. Candes, “Compressive sampling,” in Proc. Int. Congr. Mathematicians, 2006, pp. 1433–1452.

[3] I. Gorodnitsky and B. Rao, “Sparse signal reconstruction from limited data using FOCUSS: A re-weighted norm minimization algorithm,”

IEEE Trans. Signal Process., vol. 45, no. 3, pp. 600–616, Mar. 1997.

[4] I. F. Gorodnitsky, J. S. George, and B. D. Rao, “Neuromagnetic source imaging with FOCUSS: A recursive weighted minimum norm algorithm,” J. Electroencephalog. Clin. Neurophysiol., vol. 95, pp.

231–251, 1995.

[5] B. D. Jeffs, “Sparse inverse solution methods for signal and image processing applications,” in Proc. Int. Conf. Acoust. Speech Signal Process., 1998, pp. 1885–1888.

[6] M. Duarte, M. Davenport, D. Takhar, J. Laska, T. Sun, K. Kelly, and R.

G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag., vol. 25, no. 2, pp. 83–91, Mar. 2008.

[7] S. D. Cabrera and T. W. Parks, “Extrapolation and spectral estimation with iterative weighted norm modification,” IEEE Trans. Signal Process., vol. 39, no. 4, pp. 842–851, Apr. 1991.

[8] Y. Jin and B. D. Rao, “Algorithms for robust linear regression by exploiting the connection to sparse signal recovery,” in Proc. Int. Conf.

Acoust. Speech Signal Process., 2010, pp. 3830–3833.

[9] W. C. Chu, Speech Coding Algorithms. New York: Wiley-Interscience, 2003.

[10] S. F. Cotter and B. D. Rao, “Sparse channel estimation via matching pursuit with application to equalization,” IEEE Trans. Commun., vol.

50, no. 3, pp. 374–377, Mar. 2002.

[11] W. U. Bajwa, J. Haupt, G. Raz, and R. Nowak, “Compressed channel sensing,” in Proc. Conf. Inf. Sci. Syst., 2008, pp. 5–10.

[12] D. L. Duttweiler, “Proportionate normalized least-mean-squares adaptation in echo cancelers,” IEEE Trans. Acoust. Speech Signal Process., vol. 8, no. 5, pp. 508–518, Sep. 2000.

[13] B. D. Rao and B. Song, “Adaptive filtering algorithms for promoting sparsity,” in Proc. Int. Conf. Acoust. Speech Signal Process., 2003, pp.

361–364.

[14] D. Guo, “Neighbor discovery in ad hoc networks as a compressed sensing problem,” in Proc. Inf. Theory Appl. Workshop, 2009.

[15] S. G. Mallat and Z. Zhang, “Matching pursuits with time-frequency dictionaries,” IEEE Trans. Signal Process., vol. 41, no. 12, pp.

3397–3415, Dec. 1993.

[16] Y. C. Pati, R. Rezaiifar, and P. S. Krishnaprasad, “Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition,” in Proc. 27th Asilomar Conf. Signals Syst. Comput., 1993, pp. 40–44.

[17] R. Tibshirani, “Regression shrinkage and selection via the LASSO,” J.

R. Stat. Soc. B, vol. 58, no. 1, pp. 267–288, 1996.

[18] S. S. Chen, D. L. Donoho, and M. A. Saunders, “Atomic decomposition by basis pursuit,” SIAM Rev., vol. 43, no. 1, pp. 129–159, 2001.

[19] M. E. Tipping, “Sparse Bayesian learning and the relevance vector machine,” J. Mach. Learn. Res., vol. 1, pp. 211–244, 2001.

[20] M. Vetterli, P. Marziliano, and T. Blu, “Sampling signals with finite rate of innovation,” IEEE Trans. Signal Process., vol. 50, no. 6, pp.

1417–1428, Jun. 2002.

[21] D. Needell and J. A. Tropp, “CoSaMP: Iterative signal recovery from incomplete and inaccurate samples,” Appl. Comput. Harmon. Anal., vol. 26, no. 3, pp. 301–321, May. 2009.

[22] W. Dai and O. Milenkovic, “Subspace pursuit for compressive sensing signal reconstruction,” IEEE Trans. Inf. Theory, vol. 55, no. 5, pp.

2230–2249, May 2009.

[23] D. Donoho, M. Elad, and V. N. Temlyakov, “Stable recovery of sparse overcomplete representations in the presence of noise,” IEEE Trans.

Inf. Theory, vol. 52, no. 1, pp. 6–18, Jan. 2006.

[24] E. J. Candes and T. Tao, “Decoding by linear programming,” IEEE Trans. Inf. Theory, vol. 51, no. 12, pp. 4203–4215, Dec. 2005.

[25] E. J. Candes, J. K. Romberg, and T. Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Commun. Pure Appl.

Math., vol. 59, no. 8, pp. 1207–1223, 2006.

[26] J. A. Tropp, “Greedy is good: Algorithmic results for sparse approximation,” IEEE Trans. Inf. Theory, vol. 50, no. 10, pp. 2231–2242, Oct.

2004.

[27] J. A. Tropp and A. C. Gilbert, “Signal recovery from random measurements via orthogonal matching pursuit,” IEEE Trans. Inf. Theory, vol.

53, no. 12, pp. 4655–4666, Dec. 2007.

[28] D. Donoho, Y. Tsaig, I. Drori, and J. Starck, “Sparse solution of underdetermined linear equations by stagewise orthogonal matching pursuit,” 2006, to be published.

[29] D. L. Donoho and X. Huo, “Uncertainty principles and ideal atomic decomposition,” IEEE Trans. Inf. Theory, vol. 47, no. 7, pp. 2845–2862, Nov. 2001.

[30] S. Baillet, J. C. Mosher, and R. M. Leahy, “Electromagnetic brain mapping,” IEEE Signal Process. Mag., vol. 18, no. 6, pp. 14–30, Nov. 2001.

[31] D. Wipf and S. Nagarajan, “A unified Bayesian framework for MEG/EEG source imaging,” NeuroImage, vol. 44, no. 3, pp. 947–966, 2008.

[32] Z. Tian and G. B. Giannakis, “Compressed sensing for wideband cognitive radios,” in Proc. Int. Conf. Acoust. Speech Signal Process., 2007, pp. 1357–1360.

[33] M. Wainwright, “Information-theoretic bounds on sparsity recovery in the high-dimensional and noisy setting,” in Proc. Int. Symp. Inf.

Theory, Jun. 2007, pp. 961–965.

[34] M. Wainwright, “Information-theoretic limits on sparsity recovery in the high-dimensional and noisy setting,” IEEE Trans. Inf. Theory, vol.

55, no. 12, pp. 5728–5741, Dec. 2009.

[35] K. R. Rad, “Sharp upper bound on error probability of exact sparsity recovery,” in Proc. Conf. Inf. Sci. Syst., 2009, pp. 14–17.

[36] A. K. Fletcher, S. Rangan, and V. K. Goyal, “Resolution limits of sparse coding in high dimensions,” in Proc. Neural Inf. Process. Syst., 2008, pp. 449–456.

[37] A. K. Fletcher, S. Rangan, and V. K. Goyal, “Necessary and sufficient conditions for sparsity pattern recovery,” IEEE Trans. Inf. Theory, vol.

55, no. 12, pp. 5758–5772, Dec. 2009.

[38] W. Wang, M. J. Wainwright, and K. Ramchandran, “Information-theoretic limits on sparse signal recovery: Dense versus sparse measurement,” in Proc. Int. Symp. Inf. Theory, 2008, pp. 2197–2201.

[39] W. Wang, M. J. Wainwright, and K. Ramchandran, “Information-theoretic limits on sparse signal recovery: Dense versus sparse measurement matrices,” IEEE Trans. Inf. Theory, vol. 56, no. 6, pp. 2967–2979, Jun. 2008.

[40] M. Akçakaya and V. Tarokh, “Shannon theoretic limits on noisy compressive sampling,” IEEE Trans. Inf. Theory, vol. 56, no. 1, pp.

492–504, Jan. 2010.

[41] S. Sarvotham, D. Baron, and R. G. Baraniuk, “Measurements vs. bits:

Compressed sensing meets information theory,” in Proc. 44th Allerton Conf. Commun. Control Comput., 2006.

[42] A. K. Fletcher, S. Rangan, and V. K. Goyal, “On the rate-distortion performance of compressed sensing,” in Proc. Int. Conf. Acoust. Speech Signal Process., Apr. 2007, pp. 885–888.

[43] S. Jafarpour, W. Xu, B. Hassibi, and R. Calderbank, “Efficient and robust compressed sensing using optimized expander graphs,” IEEE Trans. Inf. Theory, vol. 55, no. 9, pp. 4299–4308, Sep. 2009.

[44] H. V. Pham, W. Dai, and O. Milenkovic, “Sublinear compressive sensing reconstruction via belief propagation decoding,” in Proc. Int.

Symp. Inf. Theory, 2009, pp. 674–678.

[45] F. Zhang and H. D. Pfister, “Compressed sensing and linear codes over real numbers,” in Proc. Inf. Theory Appl. Workshop, 2008, pp.

558–561.

[46] Y. Jin and B. D. Rao, “Insights into the stable recovery of sparse solutions in overcomplete representations using network information theory,” in Proc. Int. Conf. Acoust. Speech Signal Process., 2008, pp.

3921–3924.


[47] Y. Jin and B. D. Rao, “Performance limits of matching pursuit algorithms,” in Proc. Int. Symp. Inf. Theory, 2008, pp. 2444–2448.

[48] J. Tropp, “Just relax: Convex programming methods for identifying sparse signal in noise,” IEEE Trans. Inf. Theory, vol. 52, no. 3, pp.

1030–1051, Mar. 2006.

[49] T. M. Cover and J. A. Thomas, Elements of Information Theory. New York: Wiley, 2006.

[50] D. Tse and P. Viswanath, Fundamentals of Wireless Communication.

Cambridge, U.K.: Cambridge Univ. Press, 2005.

[51] B. Hassibi and B. Hochwald, “How much training is needed in multiple-antenna wireless links?,” IEEE Trans. Inf. Theory, vol. 49, no. 4, pp. 951–963, Apr. 2000.

[52] A. Lapidoth, “Nearest neighbor decoding for additive non-Gaussian noise channels,” IEEE Trans. Inf. Theory, vol. 42, no. 5, pp.

1520–1529, Sep. 1996.

[53] S. I. Resnick, A Probability Path. Boston, MA: Birkhauser, 1999.

[54] S. F. Cotter, B. D. Rao, K. Engan, and K. Kreutz-Delgado, “Sparse solutions to linear inverse problems with multiple measurement vectors,”

IEEE Trans. Signal Process., vol. 53, no. 7, pp. 2477–2488, Jul. 2005.

[55] D. P. Wipf and B. D. Rao, “An empirical Bayesian strategy for solving the simultaneous sparse approximation problem,” IEEE Trans. Signal Process., vol. 55, no. 7, pp. 3704–3716, Jul. 2007.

[56] J. Chen and X. Huo, “Theoretical results on sparse representations of multiple-measurement vectors,” IEEE Trans. Signal Process., vol. 54, no. 12, pp. 4634–4643, Dec. 2006.

[57] R. Zdunek and A. Cichocki, “Improved M-FOCUSS algorithm with overlapping blocks for locally smooth sparse signals,” IEEE Trans.

Signal Process., vol. 56, no. 10, pp. 4752–4761, Oct. 2008.

[58] Y. C. Eldar and M. Mishali, “Robust recovery of signals from a structured union of subspaces,” IEEE Trans. Inf. Theory, vol. 55, no. 11, pp.

5302–5316, Nov. 2009.

[59] M. Akçakaya and V. Tarokh, “A frame construction and a universal distortion bound for sparse representations,” IEEE Trans. Signal Process., vol. 56, no. 6, pp. 2443–2450, Jun. 2008.

[60] D. Baron, S. Sarvotham, and R. G. Baraniuk, “Bayesian compressive sensing via belief propagation,” IEEE Trans. Signal Process., vol. 58, no. 1, pp. 269–280, Jan. 2010.

[61] H. O. Lancaster, The Chi-Squared Distribution. New York: Wiley, 1969.

[62] C.-P. Chen and F. Qi, “Completely monotonic function associated with the Gamma functions and proof of Wallis’ inequality,” Tamkang J.

Math., vol. 36, pp. 303–307, 2005.

[63] J. W. Silverstein, “The smallest eigenvalue of a large dimensional Wishart matrix,” Ann. Probab., vol. 13, pp. 1364–1368, 1985.

[64] Y. Jin, “Algorithm development for sparse signal recovery and performance limits using multiple-user information theory,” Ph.D. dissertation, Dept. Electr. Comput. Eng., Univ. California San Diego, La Jolla, CA, 2011.

Yuzhe Jin (S’07) received the B.E. degree in computer science and technology from Tsinghua University, Beijing, China, in 2005 and the Ph.D. degree in electrical and computer engineering from University of California, San Diego, La Jolla, in 2011.

His research interests are in statistical signal processing, sparse signal recovery, compressed sensing, information theory, natural language processing, and machine learning.

Young-Han Kim (S’99–M’06) received the B.S. degree with honors in electrical engineering from Seoul National University, Seoul, Korea, in 1996 and the M.S. degrees in electrical engineering and statistics and the Ph.D. degree in electrical engineering from Stanford University, Stanford, CA, in 2001, 2006, and 2006, respectively.

In July 2006, he joined the University of California San Diego, La Jolla, where he is currently an Assistant Professor of Electrical and Computer Engineering. His research interests are in statistical signal processing and information theory, with applications in communication, control, computation, networking, data compression, and learning.

Dr. Kim is a recipient of the 2008 NSF Faculty Early Career Development (CAREER) Award and the 2009 U.S.–Israel Binational Science Foundation Bergmann Memorial Award.

Bhaskar D. Rao (F’00) received the B.Tech. degree in electronics and electrical communication engineering from the Indian Institute of Technology, Kharagpur, India, in 1979 and the M.S. and Ph.D. degrees in electrical engineering from the University of Southern California, Los Angeles, in 1981 and 1983, respectively.

Since 1983, he has been with the University of California San Diego, La Jolla, where he is currently a Professor with the Electrical and Computer Engineering Department and holder of the Ericsson endowed chair in wireless access networks. He is the Director of the Center for Wireless Communications. His interests are in the areas of digital signal processing, estimation theory, and optimization theory, with applications to digital communications, speech signal processing, and human–computer interactions.

Dr. Rao was elected an IEEE Fellow for his contributions to high-resolution spectral estimation. His research group has received several paper awards.

His paper received the best paper award at the 2000 Speech Coding Workshop, and his students have received student paper awards at the 2005 and 2006 International Conference on Acoustics, Speech and Signal Processing as well as the best student paper award at the 2006 Neural Information Processing Systems (NIPS) conference. A paper he coauthored with B. Song and R. Cruz received the 2008 Stephen O. Rice Prize Paper Award in the Field of Communications Systems. He has been a member of the Statistical Signal and Array Processing Technical Committee, the Signal Processing Theory and Methods Technical Committee, and the Communications Technical Committee of the IEEE Signal Processing Society.

He has also served on the editorial board of the EURASIP Signal Processing Journal.
