

3.3 Two-Dimensional Maximum A Posteriori (MAP) Detection

Two-dimensional maximum a posteriori (2D-MAP) detection, which was proposed in [13] as the 2D4 (Two-Dimensional Distributed Data Detection) algorithm, is actually the well-known max-log-MAP algorithm. As opposed to the hard decisions in PDFE, the information kept at each pixel is a log-likelihood ratio (LLR), or more intuitively, a reliability value that indicates its probability of being a “1” or a “0”.

To be specific, a more positive LLR indicates a greater probability that the pixel is a “0”, while a more negative LLR indicates a greater probability that it is a “1”. By the same reasoning, an LLR value around zero indicates that the decision on the current pixel is still ambiguous. In each iteration, the reliability value at each pixel is re-calculated based on the LLRs of its nearest neighbors from the previous iteration.

Again, to delve into the details of this algorithm, 2D-MAP detection can be explained in steps:

(a) Likelihood feedback: Under the assumption of equal a priori statistics (i.e., P[A(i,j)=1] = P[A(i,j)=0]), the MAP rule reduces to the maximum likelihood (ML) rule. If we use Nij to denote the set of neighboring pixels that surround pixel (i,j), and Ω to denote the set of all possible neighborhood patterns, then for every pixel the likelihood is calculated via:

$$P\bigl[Z(i,j)\,\big|\,A(i,j)\bigr] \;=\; \sum_{X \in \Omega} P\bigl[Z(i,j)\,\big|\,A(i,j),\,N_{ij}=X\bigr]\;\prod_{(l,m)\in N_{ij}} P\bigl[A(l,m)=x_{lm}\bigr] \qquad (3.1)$$

Note that we have assumed the pixels to be independent of one another, so that the probability of a neighborhood pattern can be calculated by multiplying the probabilities of its component pixels together. Ideally, the expectation in (3.1) would be taken over the entire page, so that MLPD is achieved. Such a task is computationally infeasible, whereas computing the bit likelihood based on a subset of the observed page is feasible. Information is then spread through the iterations, so that MLPD can be approached. A more concrete demonstration of (3.1) is provided in Figure 3.3, where the aforementioned subset is simply a 3×3 region surrounding the pixel currently being detected.

In Figure 3.3, the calculation of a bit likelihood is shown. Note that the final result of this computation is denoted as L1U, where “1” means that it is the likelihood under the hypothesis that the current pixel was sent as a “1”, and the subscript “U” means that this likelihood is an updating term for the likelihood calculated in the previous iteration. We distinguish L1U from L1 because L1 is not directly replaced by L1U; this will become clear after the explanation of the second step of 2D-MAP detection.

Given the channel information, all 2^8 = 256 neighborhood patterns are involved in the calculation of this summation of conditional probabilities.
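To make the per-pixel computation concrete, the following is a minimal Python sketch of this sum-product evaluation of (3.1). It assumes an additive Gaussian noise model (consistent with the squared-distance terms used later) and a generic expected_output function standing in for the channel; all function and parameter names here are illustrative rather than taken from [13].

```python
import itertools
import numpy as np

def gaussian_pdf(z, mean, sigma):
    """Likelihood of observing z given the expected channel output `mean`."""
    return np.exp(-(z - mean) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)

def bit_likelihood(z_ij, center_bit, neighbor_priors, expected_output, sigma):
    """Sum-product evaluation of (3.1) for one pixel.

    z_ij            -- observed value Z(i,j)
    center_bit      -- hypothesized sent value A(i,j), 0 or 1
    neighbor_priors -- length-8 sequence of P[A(l,m)=1] for the neighbors in N_ij
                       (row-major order around the centre)
    expected_output -- callable mapping a 3x3 binary pattern to the noiseless
                       channel output at the centre pixel
    sigma           -- noise standard deviation
    Returns L1U when center_bit == 1 and L0U when center_bit == 0.
    """
    total = 0.0
    for bits in itertools.product((0, 1), repeat=8):   # all 2**8 = 256 patterns in Omega
        pattern = np.array(bits[:4] + (center_bit,) + bits[4:]).reshape(3, 3)
        lik = gaussian_pdf(z_ij, expected_output(pattern), sigma)   # P[Z | A, N_ij = X]
        prior = np.prod([p if b == 1 else 1.0 - p                   # prod of P[A(l,m) = x_lm]
                         for b, p in zip(bits, neighbor_priors)])
        total += lik * prior
    return total
```

In the extrinsic-information variant described below, the neighbor_priors entries would simply be replaced by the likelihood values computed for those pixels in the previous iteration.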

Figure 3.3 Likelihood feedback in the sum-product implementation of 2D-MAP algorithm

One more critical element needed to complete this step is the a priori probabilities, which enter the computation of the probability of a neighborhood pattern. The likelihood P[Z(i,j)|A(i,j)] could in fact be thresholded to make a decision based solely on Z(i,j); nonetheless, a significantly more reliable decision results from a likelihood that also takes other observations into account. As a result, [13] suggested infusing the likelihood information from neighboring pixels by applying (3.1) with P[A(l,m)] substituted by P[Z(l,m)|A(l,m)]. The replacement of the a priori probabilities by the likelihood values acquired in the previous iteration is known as the propagation of extrinsic information, a technique commonly used in turbo decoding. Through this replacement, the structure of an iterative detection is established. Note that the propagation of extrinsic information also deprives the pixel probabilities of their independence, which is the reason the 2D-MAP algorithm is not optimal.

As demonstrated with (3.1) and Figure 3.3, a huge number of multiplications and additions is involved; this is a sum-product implementation of the MAP algorithm. In practice, motivated by the approximation [17]

$$\log\Bigl(\sum_{i} X_i\Bigr) \;\approx\; \max_{i}\,\log(X_i), \qquad (3.2)$$

the min-sum approach is adopted as in (3.3).

$$\mathrm{LL1}_{U}(i,j) \;=\; \min_{X \in \Omega}\left\{\frac{\bigl(Z(i,j)-z_{1,P}\bigr)^{2}}{2\sigma^{2}} \;-\; \sum_{(l,m)\in N_{ij}} \log P\bigl[A(l,m)=x_{lm}\bigr]\right\} \qquad (3.3)$$

The first term in (3.3) is the normalized squared Euclidean distance, and the second term is a sum of logarithms of the a priori probabilities, which becomes a sum of log-likelihoods after the substitution described below; the counterpart LL0U is obtained with z0,P in place of z1,P. Once again, a more concrete demonstration of this implementation of 2D-MAP is provided, in Figure 3.4.

In Figure 3.4, z1,P represents the expected received pixel value given a sent “1” and a certain neighborhood pattern, and the final computation result LL1U represents an updating term for the log-likelihood value. As can be seen, calculating the summation of 256 product terms as in Figure 3.3 has been replaced by finding the minimum among 256 sum terms, which is a significant reduction in computation.
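The approximation (3.2) that justifies this replacement is typically tight. As a quick illustration, the short snippet below compares the two sides of (3.2) for a few arbitrary likelihood values (the numbers are purely illustrative):

```python
import numpy as np

x = np.array([2e-4, 5e-6, 1.5e-4])   # arbitrary illustrative likelihood terms X_i
exact = np.log(np.sum(x))            # left-hand side of (3.2): log of the full sum
approx = np.max(np.log(x))           # right-hand side: max-log approximation
print(exact, approx)                 # the two differ by at most log(len(x))
```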

Figure 3.4 Likelihood feedback in the min-sum implementation of 2D-MAP algorithm

In the calculation of the second term in (3.3), each logarithm of an a priori probability is substituted by the corresponding log-likelihood, which was referred to as the propagation of extrinsic information earlier. To be specific, log P[A(l,m)=1] is replaced by LL1(l,m), and log P[A(l,m)=0] is replaced by LL0(l,m). Now, suppose that for all the neighborhood patterns considered for the current pixel, the eight LL0s are subtracted from the original sum of log-likelihoods. This operation results in an equal shift of the values of LL1U and LL0U, and after the introduction of step two of the 2D-MAP algorithm, it will become clear that this equal shift makes no difference to the detection results. Therefore, another way to calculate the second term in (3.3) is suggested: we only include LLR(l,m) in the summation if A(l,m)=1 in the hypothesized neighborhood pattern, as in the example shown in the lower part of Figure 3.4. In this manner, half of the addition operations are saved on average. Equation (3.3) can then be adjusted to become:

$$\mathrm{LL}_{U}^{(k)}(i,j) \;=\; \min_{X_{+X(i,j)}\,\in\,\Omega}\left\{\frac{\bigl(Z(i,j)-H\bigl(X_{+X(i,j)}\bigr)\bigr)^{2}}{2\sigma^{2}} \;+\; X\cdot L^{(k-1)}\right\}$$

where the term on the left-hand side represents the updating term for the log-likelihood calculated in the k-th iteration, given a hypothetical sent value X(i,j); X+X(i,j) represents a 3×3 pixel pattern with pixel (i,j) as its center; X represents X+X(i,j)−{X(i,j)}; and L(k−1) is the corresponding LLR matrix from the (k−1)-th iteration. The symbol · represents the inner product of two matrices. Note that H(X+X(i,j)) is the black-box function giving the expected channel output for a 3×3 pattern X+X(i,j). In the incoherent intensity channel model, this function can be further specified as the inner product between a known IPI matrix H and X+X(i,j), that is, H·X+X(i,j).
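For concreteness, below is a minimal Python sketch of this adjusted min-sum metric for a single interior pixel, assuming a Gaussian noise model and the incoherent intensity channel (expected output given by the inner product with an IPI matrix H). The names, array layout, and omitted border handling are illustrative assumptions, not the exact implementation of [13].

```python
import itertools
import numpy as np

def minsum_metrics(Z, L_prev, H, sigma, i, j):
    """Adjusted min-sum metrics for pixel (i, j), one per hypothesized sent value.

    Z       -- 2-D array of received pixel values
    L_prev  -- LLR matrix from the previous iteration, L^(k-1)
    H       -- 3x3 IPI matrix; expected channel output of a pattern X is sum(H * X)
    sigma   -- noise standard deviation
    Assumes (i, j) is an interior pixel (border handling omitted).
    """
    z_ij = Z[i, j]
    # previous-iteration LLRs of the eight neighbours, row-major, centre removed
    nbr_llr = np.delete(L_prev[i - 1:i + 2, j - 1:j + 2].ravel(), 4)
    best = {0: np.inf, 1: np.inf}
    for center in (0, 1):                                   # hypothesized sent value X(i, j)
        for bits in itertools.product((0, 1), repeat=8):    # neighbourhood hypothesis X
            pattern = np.insert(np.array(bits, dtype=float), 4, center).reshape(3, 3)
            # normalized squared Euclidean distance to the expected channel output
            dist = (z_ij - np.sum(H * pattern)) ** 2 / (2.0 * sigma ** 2)
            # X . L^(k-1): include LLR(l, m) only where the pattern hypothesizes a "1"
            penalty = float(np.dot(np.array(bits, dtype=float), nbr_llr))
            best[center] = min(best[center], dist + penalty)
    # best[1] and best[0] play the roles of LL1U and LL0U; one choice consistent with
    # the sign convention in the text (positive LLR -> "0") is best[1] - best[0].
    return best[0], best[1]
```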

(b) Update: The updating of LLRs is executed at the end of each iteration. To avoid abrupt changes in the updated LLR values, a forgetting factor β is applied in the update, of the form

$$L^{(k)}(i,j) \;=\; \beta\, L_{U}^{(k)}(i,j) \;+\; (1-\beta)\, L^{(k-1)}(i,j).$$
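As a small sketch of step (b), the update below blends the newly computed terms into the running LLR matrix with the forgetting factor β; the function name is illustrative, while the β value and iteration counts in the comment come from the simulations discussed next.

```python
import numpy as np

def update_llrs(L_prev, L_update, beta=0.7):
    """Forgetting-factor update of the LLR matrix at the end of an iteration.

    L_prev   -- LLR matrix from the previous iteration, L^(k-1)
    L_update -- updating terms computed in the current iteration, L_U^(k)
    beta     -- forgetting factor; a larger beta weights the new terms more,
                giving faster convergence at the possible cost of accuracy
    """
    return beta * L_update + (1.0 - beta) * L_prev

# In the simulations below, beta = 0.7 is run for 15 iterations under the complete
# channel model and for 20 iterations under the incoherent intensity channel model,
# after which the final LLRs are thresholded to produce hard decisions.
```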

The choice of β concerns the trade-off between the speed of convergence and the accuracy of the converged results: a larger β may lead to faster convergence but possibly inferior performance. To determine β and the number of iterations needed, simulations are carried out under the two different channel models:

<1> Complete Channel Model

In order to observe the convergence behavior with respect to different β values, we have plotted the BER against the number of iterations for different choices of β, under adequate SNR settings for CH1 and CH2, as shown in Figure 3.5. For deeper interpretation, two extra plots are provided in Figure 3.6: the upper plot shows the number of iterations required to reach a 10⁻³ BER versus β, and the lower plot shows the converged BER versus β; a lower iteration count indicates faster convergence, and a lower converged BER indicates better converged performance. Our goal here is to find a β value that strikes a balance between the convergence rate and the converged performance. The upper plot in Figure 3.6 points to a suitable range for β of 0.7-0.8, and the lower plot points to a suitable range of 0.4-0.7. Taking the required iteration number observed in Figure 3.6 into account as well, we decide to use β = 0.7 for 15 iterations in our implementation of 2D-MAP detection.

<2> Incoherent Intensity Channel Model

The same simulations are carried out for the incoherent intensity channel model. The convergence behavior is shown in Figure 3.7, and the indicators of convergence rate and converged performance are shown in Figure 3.8. The upper plot in Figure 3.8 indicates that a suitable β would be 0.7, while the lower plot indicates a suitable range of β from 0.3 to 0.8. Taking the required iteration number shown in Figure 3.7 into account as well, we again decide to apply β = 0.7, this time for 20 iterations, in the implementation of 2D-MAP detection.

Figure 3.5 BER plotted against iteration number for different β (Complete Channel Model)

Figure 3.6 Iteration number required to reach 10⁻³ BER and converged BER plotted against β (Complete Channel Model)

Figure 3.7 BER plotted against iteration number for different β (Incoherent Intensity Channel Model)

Figure 3.8 Iteration number required to reach 10⁻³ BER and converged BER plotted against β (Incoherent Intensity Channel Model)
