Attention-based High Dynamic Range Imaging


Institute of Multimedia Engineering

Attention-based High Dynamic Range Imaging

Student: Zhi-Cheng Yan

Advisor: Prof. Wen-Chieh Lin


Attention-based High Dynamic Range Imaging

Student: Zhi-Cheng Yan

Advisor: Wen-Chieh Lin

National Chiao Tung University
Institute of Multimedia Engineering

Master's Thesis

A Thesis
Submitted to the Institute of Multimedia Engineering
College of Computer Science
National Chiao Tung University
in Partial Fulfillment of the Requirements
for the Degree of Master
in
Computer Science

November 2009

Hsinchu, Taiwan, Republic of China


Attention-based High Dynamic Range Imaging

Student: Zhi-Cheng Yan    Advisor: Dr. Wen-Chieh Lin

National Chiao Tung University
Institute of Multimedia Engineering

Abstract

Many tone mapping methods take the human visual system into account; however, few of them treat attention as a design factor. Since attention plays an important role in human vision, we propose a local tone mapping method built on the interaction between attention and adaptation. Our method consists of two parts. First, we adopt a high dynamic range (HDR) saliency map to distinguish the attentive and nonattentive regions of an HDR image. Second, based on models of attention and adaptation found in psychophysics, we propose two types of tone mapping functions. We apply our method to HDR images and videos and compare the results with three well-known tone mapping methods. Our method preserves more detail than the optimization-based tone mapping method and produces more vivid colors than the histogram adjustment method.


Attention-based High Dynamic Range Imaging

Student: Zhi-Cheng Yan

Advisor: Dr. Wen-Chieh Lin

Institute of Multimedia Engineering

College of Computer Science

National Chiao Tung University

ABSTRACT

There are many tone mapping methods that consider the human visual system; however, few of them take the attention effect into account. As attention plays an important role in the human visual system, we propose a local tone mapping method that respects the attention and adaptation effects. Our approach is composed of two stages: first, we adopt the HDR saliency map to identify the attentive and nonattentive regions in an HDR image; second, we propose two types of tone mapping functions that are locally adjusted according to attention and adaptation models found in psychophysics. We applied our tone mapping approach to HDR images and videos and compared the results with those generated by three state-of-the-art tone mapping algorithms. The comparison shows that our approach preserves more detail than the optimization tone mapping method and produces more colorful results than histogram adjustment on HDR images.


Acknowledgments

First, I would like to thank my advisor, Prof. Wen-Chieh Lin, for his careful guidance over the past two years, which helped me greatly, especially in my writing and logical thinking. I also thank all the members of CAIG LAB and of Prof. Lin's laboratory for two years of mutual encouragement and help with coursework. Next, I want to thank my family for supporting me behind the scenes. I have always been the child who received the most from our family, and my family has cherished and protected me; I hope that in the future I can become the pillar that holds up our small home and live up to my family's expectations. I also want to thank the teachers who have taught me since childhood and the important friends I met at every stage: Mr. Yuan-Tung Wang, who enlightened me in elementary school; Ms. Chih-Hui Chang, who cared for me in junior high school; Mr. Chin-Kun Hung, who looked after me in senior high school; and Prof. Tsai-Yen Li, who cultivated my programming skills in college and tolerated my willful requests. Without you accompanying me as I grew up and teaching me how to conduct myself, I could not have earned a master's degree or made it through every important stage of my life; thank you very much for your help. Finally, I would like to quote a line from Chen Chih-fan's essay "Thanking Heaven": "There are too many people to thank, so let us thank heaven!"


Contents

1 Introduction
2 Related Work
  2.1 Global tone mapping methods
  2.2 Local tone mapping methods
3 Background
  3.1 Adaptation and attention in Human Visual System
  3.2 Computational model of visual attention
4 Approach
  4.1 Attention map computation
  4.2 Tone mapping function adjustment
5 Experimental Results
  5.1 Validation of our approach
  5.2 HDR Images
  5.3 HDR Videos
  5.4 Discussion and limitation
6 Conclusion and Future Work
Bibliography


List of Figures

3.1 A typical contrast response function of a hypothetical visual neuron summarized by Pestilli et al. [1]. The black curve is the response of the neuron under the neutral condition. The red curve shows the effect of attention, which increases the sensitivity of the neuron. The green curve describes the effect of adaptation, which causes the neuron to need more contrast to achieve the same magnitude of response.
3.2 Stimuli used in the experiments of Pestilli et al. [1]. Their experiment is a two-alternative forced-choice (2AFC) orientation-discrimination task on Gabor patches.
3.3 The three attentional conditions in Pestilli et al. [1].
3.4 The findings of Pestilli et al. [1]. Each column reports data from an observer. The abscissa is contrast; the ordinate is the percentage of questions that a subject answered correctly. The black, red, and green curves represent the neutral, valid, and invalid conditions, respectively. The blue horizontal line marks the estimated threshold value at 70% accuracy.
3.5 The experimental result of Ling et al. [2]. They conducted an experiment very similar to that of Pestilli et al., using four Gabor patches instead of two. While adapting to the gratings, subjects were required either to attend to one of the four patches (sustained attention) or to view all four stimuli (neutral condition). After 50 ms to 16 s, a test patch with a different contrast appeared at one of the four locations, and subjects were asked to report its orientation. The result records contrast thresholds at 75% accuracy for different durations.
3.6 Flowchart of the saliency map proposed by Itti et al. [3].
4.1 Our algorithm.
4.2 Plot of Equation 4.5. We treat β = 1 as the baseline. The red and blue curves correspond to β = 1.2 and β = 0.8, respectively.
4.3 Plot of Equation 4.7. We set δ = 2 as the baseline curve that represents the neutral condition; δ = 13 denotes the curve in the adaptation condition.
5.1 Memorial Church in Stanford University. Radiance map courtesy of Paul Debevec. (a) is produced only by the transient attention function TA(C); (b) is generated only by the adaptation function SA(C).
5.2 Memorial Church in Stanford University. Radiance map courtesy of Paul Debevec. These results are produced by our tone mapping method with different δ.
5.3 Memorial Church in Stanford University. Radiance map courtesy of Paul Debevec. These results are produced by our tone mapping method with different ranges of β.
5.4 Memorial Church in Stanford University. Radiance map courtesy of Paul Debevec.
5.5 Bathroom. Radiance map courtesy of Paul Debevec.
5.6 MtTamWest.hdr. Radiance map courtesy of ILM. (a) Histogram adjustment (b) Optimization tone mapping (c) Photographic tone reproduction (d) Our result.
5.7 dani belgium.hdr. Radiance map courtesy of Karol Myszkowski. (a) Histogram adjustment (b) Optimization tone mapping (c) Photographic tone reproduction (d) Our result.
5.8 The scene is captured by Grzegorz Krawczyk. (a)-(c) and (g)-(i) are produced by our algorithm; (d)-(f) and (j)-(l) are produced by photographic tone reproduction.


List of Tables

4.1 The differences between saliency maps of HDR and LDR images


Chapter 1

Introduction

The luminance in the real world ranges widely, from 10^5 cd/m^2 down to 10^-1 cd/m^2 [4]; however, a traditional digital image can only cover a limited dynamic range of intensities. To faithfully describe the dynamic range of real-world luminance, the High Dynamic Range Image (HDRI), which typically stores physical values of luminance, was proposed [5]. HDRI plays an important role in image acquisition and processing since it makes images look more realistic. In computer graphics, HDRI is used to represent the results of global illumination or to render a scene through an HDR environment map or HDR textures.

Although HDR has important applications in computer graphics, it cannot be shown on conventional displays or traditional image output devices since the dynamic range of these devices is limited. Therefore, a tone mapping function that reduces the dynamic range of an HDRI to a low dynamic range image (LDRI) while retaining realistic color and contrast is needed. Although some HDR display devices [6] have emerged recently, they are still under development and very expensive. In fact, compression of HDR data is crucial for HDR devices to be readily accessible since HDR data require larger storage. An efficient and effective tone mapping algorithm plays a key role in HDR compression [7][8][9].

As tone mapping is highly related to visual perception, many tone mapping algorithms [10][4] were proposed based on studies of the Human Visual System (HVS). These tone mapping algorithms usually apply an identical function to the entire image; however, vision studies suggest that attention plays a crucial role in the information processing of the human visual system. It enhances performance on hyperacuity, visual search, orientation detection, and discrimination and localization tasks [11]. In particular, many vision studies find that the contrast sensitivity of our neural response increases for attended stimuli while it decreases for unattended stimuli [1][12][13][14]. Furthermore, it has also been shown that the contrast sensitivity of the HVS decreases once human eyes adapt to the luminance of a region. According to these findings, it is apparent that we need to take the attention and adaptation effects into account when designing a tone mapping algorithm.

In this thesis, we propose a local tone mapping algorithm that respects the attention and adaptation effects of the human visual system. We adopt the HDR saliency map proposed by Petit et al. [15][16] to obtain an attention map that estimates the attentive and nonattentive regions of an HDRI. Once these regions are detected, we locally adjust the tone mapping function according to the contrast response model found by Pestilli et al. [1]. This model was obtained from a series of psychophysical experiments that explore the influence of the attention and adaptation effects on the contrast sensitivity of the HVS. We also extend our tone mapping algorithm to handle HDR videos. Our experiments on HDR images and videos demonstrate the effectiveness of our algorithm.

The contributions of this thesis are: first, we introduce a local tone mapping method that considers the attention and adaptation effects of the human visual system; second, our tone mapping method is locally adjusted in both the spatial and temporal domains when dealing with HDR videos.


The rest of this thesis is organized as follows. We first review related work in Chapter 2 and briefly introduce the background on the attention and adaptation effects in the vision and neuroscience fields in Chapter 3. We describe our approach in Chapter 4. In Chapter 5, we show our experimental results. We conclude our work in Chapter 6.


Chapter 2

Related Work

An HDRI can store the dynamic range of luminance in the real world; however, many existing display devices cannot display such a high dynamic range, so a tone mapping method is needed to map the luminance range of the world to that of a conventional display device. Existing tone mapping methods can be roughly categorized into two types. One is the global tone mapping method, which applies the same function to all pixels of an image. The other is the local tone mapping method, which transforms the luminance of each pixel according to the luminance values of its neighboring pixels. Global tone mapping methods are usually more computationally efficient than local ones. In this chapter, we briefly review several tone mapping methods that are closely related to our work.

2.1 Global tone mapping methods

Tumblin and Rushmeier [17] proposed two observer models based on humans' visual responses to light in television and film systems: one for the real world and the other for display devices. Under the assumption that a real-world observer should be the same as a display observer, they proposed a tone mapping operator for grey-level HDRIs. Ward [18] adopted a simple linear operator in his global tone mapping method that preserves apparent contrast and visibility.

Ferwerda et al. [4] first applied psychophysical studies to their tone mapping operator, which captures the image properties of color appearance, visual acuity, and light/dark adaptation. Ward et al. [19] used the histogram adjustment technique to define their tone mapping function. As human eyes have the best view in the fovea, they filtered an input HDRI to obtain a foveal sample image; histogram adjustment is then performed according to the foveal image. Ward et al. assume that all pixels participate in the adaptation; however, in the human visual system, eye movements are critical for acquiring and processing visual information [20]. Therefore, it would be more reasonable to compute the foveal sample image based on a viewer's fixation positions.

Pattanaik et al. [10] used an adaptation model, a simplification of Hunt's model [21], to transform the luminance of a scene into retinal-response-like vectors, which are converted into appearance vectors. The display intensity of the rendered scene is then computed from the appearance vectors using the inverse appearance and adaptation model. Mantiuk et al. [22] formulate tone mapping as an optimization problem that minimizes the visible contrast distortions between the human visual system and the display. Van Hateren [23] used a model of human cones to perform tone mapping, in which all components are represented as temporal kernels to transform HDR images/videos into LDR images/videos.

2.2 Local tone mapping methods

Global tone mapping is usually used to compress an HDRI [5] for its computational efficiency; however, it may cause some loss of detail since all pixels are transformed using the same mapping function.

It is more desirable to have the tone mapping function adaptively adjusted at different locations to preserve the details. Chiu et al. [24] defined a scaling function and used it as a guideline to scale the pixel values. Tumblin and Turk [25] noticed that artists usually compress the contrast of large features and add the details in their drawings. They applied anisotropic diffusion [26], which treats intensity as heat, to find the boundary of an object in an image and proposed a detail-preserving contrast reduction method (LCIS) that mimics an artist's drawing process. Fattal et al. [27] compressed large gradient magnitudes and solved a Poisson equation to obtain LDR images; their method avoids some of the noisy appearance of the LCIS method.

Durand and Dorsey [28] showed that bilateral filtering [29] is a robust statistical estimator. It is an edge-preserving smoothing operator with a property similar to anisotropic diffusion. The authors used a two-scale decomposition into a base image and a detail image. The base image was created by bilateral filtering the input HDRI such that high luminance is preserved; the detail image was the division of the intensity by the base image. They performed contrast reduction on the base image and multiplied the result with the detail image to produce the LDRI.

Inspired by photographic techniques, Reinhard et al. [30] adopted the concepts of the zone system [31][32][33] and the dodging-and-burning technique to perform dynamic range compression. Chen et al. [34] defined different tone mapping functions for different objects in an image, in which objects are detected using the Earth Mover's Distance (EMD) [35].

Recently, several interactive tools have been developed to perform tone mapping locally. Lischinski et al. [36] used brushes and strokes to set constraints, which are propagated to form a completely adjusted tone map under an image-guided energy minimization framework. Liang et al. [37] used a simple touch screen to set constraints and modified the stroke-based algorithm [36] so that local tone mapping can be executed efficiently on mobile devices.


Chapter 3

Background

In this chapter, we introduce attention and adaptation in the human visual system and the computational models of them used in our tone mapping algorithm. There are two kinds of attention in the human visual system [38]: transient (exogenous) attention and sustained (endogenous) attention. Transient attention is an unconscious, bottom-up process driven by salient stimuli in the scene. Sustained attention is a task-relevant, top-down process; it is voluntary and is the reason why we can pay attention to information of interest. Attention influences the performance of many visual tasks; in particular, it affects the visual adaptation mechanism as well. We introduce the psychophysical studies on the interaction between attention and adaptation in the fields of neuroscience and vision in Section 3.1.

In Section 3.2, we briefly describe the visual attention model used in our tone mapping approach. We adopt the HDR saliency map, developed from the saliency map proposed by Itti et al. [3], to simulate the activity of transient attention, because the saliency map is constructed from low-level image features in a bottom-up process. The saliency map has also been widely used in video compression, image processing, global illumination, and computer vision. We do not explicitly model the effect of sustained attention in the tone mapping function for two reasons. First, sustained attention is task-dependent and related to cognition; however, the relation between cognition and sustained attention is still under study, so it is difficult to build a computational model of sustained attention. Second, according to the psychophysical study on sustained attention [2], the interaction between sustained attention and contrast sensitivity can be roughly considered a mixed effect of transient attention and adaptation. Therefore, the effect of sustained attention is implicitly modeled in the adaptation mechanism of our approach.

3.1 Adaptation and attention in Human Visual System

In the vision field, there are many studies on the interaction between attention and adaptation [1][12][13][14]. Figure 3.1 describes how attention and adaptation affect the contrast response of visual neurons. The black curve represents the situation with no attention or adaptation effects. When our eyes adapt to the luminance of a scene, our visual neurons become less sensitive to contrast and the black curve shifts toward the green curve. In other words, the contrast of the scene needs to be increased to trigger the same level of neural response when the adaptation effect occurs. Adaptation also affects our ability to detect the just-noticeable contrast difference [39]: our eyes discriminate finer contrast differences more efficiently before adaptation.

In contrast to adaptation, attention induces an opposite effect on the contrast response of visual neurons. The attention effect moves the black curve toward the red curve in Figure 3.1, i.e., neurons need less contrast to attain the same response. Contrast appears more intense when human eyes pay attention to a stimulus, as attention stimulates the visual neurons and causes a stronger response. Psychophysical studies [40] suggest that transient attention changes the response gain modulation, which leads to greater attention modulation at high contrast levels and affects the asymptotes of the curve. Sustained attention influences the contrast gain modulation, which shifts the curve but does not affect the asymptotes.

Figure 3.1: A typical contrast response function of a hypothetical visual neuron summarized by Pestilli et al. [1]. The black curve is the response of the neuron under the neutral condition. The red curve shows the effect of attention on the response, which increases the sensitivity of the neuron. The green curve describes the effect of adaptation, which causes the neuron to need more contrast in order to achieve the same magnitude of response.


The interaction between transient attention and adaptation was explored in the psychophysical study by Pestilli et al. [1]. Figure 3.2 shows their experiments. At the beginning of each trial, subjects adapted under one of two conditions: adapt-0 and adapt-100. In the adapt-0 condition, subjects adapted to the background of the stimulus for 20 s. In the adapt-100 condition, subjects were stimulated with two counterphase flickering Gabor patches at 100% contrast for 70 s. After adapting, a white rectangle was shown as a guide cue in order to manipulate transient attention during the test trial. This guide cue was presented at the fixation point (neutral) or above one of the Gabor patches (peripheral) for 50 ms; transient attention is directed to the location of the white rectangle instantaneously. After 50 ms, there was an inter-stimulus interval (ISI) of 50 ms. After the ISI, two tilted test Gabor patches with contrast less than 100% were presented simultaneously for 30 ms. After the stimulus presentation, subjects had to report the orientation of the stimulus indicated by the response cue.

There are three attentional conditions in each test trial, as shown in Figure 3.3. In the neutral-cue condition, which is the control group, the guide cue was in the neutral position, and subjects were asked to discriminate the orientation of the Gabor patch indicated by the response cue. In the valid-cue (attentive) condition, the guide cue and response cue were in the same direction, and subjects reported the tilt of the Gabor patch indicated by the response cue. In the invalid-cue (nonattentive) condition, subjects discriminated the tilt of the Gabor patch not preceded by the peripheral cue.


Figure 3.2: Stimuli used in the experiments of Pestilli et al. [1]. Their experiment is a two-alternative forced-choice (2AFC) orientation-discrimination task on Gabor patches.

Figure 3.4 shows the results of Pestilli et al. [1]. They found that subjects needed higher contrast to reach the same accuracy when they did not attend to the Gabor patch indicated by the response cue in advance; attention increases contrast sensitivity in the valid-cue condition. The blue horizontal lines show the estimated threshold values at 70% accuracy. Full adaptation (adapt-100) at the beginning increases the contrast threshold. They also noted that adaptation only affects the threshold, while attention affects both the threshold and the asymptotes.

Figure 3.4: The findings of Pestilli et al. [1]. Each column reports data from an observer. The abscissa is contrast; the ordinate is the percentage of questions that a subject answered correctly. The black, red, and green curves represent the neutral, valid, and invalid conditions, respectively. The blue horizontal line marks the estimated threshold value at 70% accuracy.

Sustained attention also affects contrast sensitivity. Ling et al. [2] discovered that humans need a lower contrast threshold to reach 75% accuracy on the orientation discrimination task at the beginning of sustained attention; however, as time goes by, the contrast threshold for successful discrimination of orientation increases. Figure 3.5 shows the results of their experiment: sustained attention has the same effect as transient attention for a short duration, but the effect of adaptation takes over afterward.

Figure 3.5: The experimental result of Ling et al. [2]. They conducted an experiment very similar to that of Pestilli et al., using four Gabor patches instead of two. While adapting to the gratings, subjects were required either to attend to one of the four patches (sustained attention) or to view all four stimuli (neutral condition). After 50 ms to 16 s, a test patch with a different contrast appeared at one of the four locations, and subjects were asked to report the orientation of this test patch. The result records contrast thresholds at 75% accuracy for different durations.

3.2 Computational model of visual attention

Many computational models of visual attention have been proposed based on the feature integration theory [41]. The theory says that fairly simple visual features are computed over the entire scene in the first step of visual processing, but only the attended features are further processed to form a unified object representation. Itti et al. [3] modeled the salient positions where primates would pay attention in an image. They used several operations to produce a saliency map, in which they applied the winner-take-all algorithm and the inhibition-of-return mechanism to simulate the behavior of attention. Later, Itti [42] modified the model to add motion features for video compression and showed that the priority map produced by the saliency map is effective in video compression. Petit et al. [15] modified the saliency map to discover the salient locations in an HDR image.

In our algorithm, we use the HDR saliency map [15] as the model of transient attention since it has the same bottom-up property as transient attention. The HDR saliency map is developed from the original saliency map [3]. Figure 3.6 depicts the model of the original saliency map. In this model, Itti et al. [3] first built Gaussian pyramids with nine spatial scales σ ∈ [0, 8], where σ = 0 is the finest scale and σ = 8 the coarsest. They decomposed an image into three parts: intensity, color, and orientation. For each part, they performed the center-surround difference operation and across-scale combination to simulate the feature integration theory [42]. The center-surround operator Θ mimics the function of the visual receptive field, in which visual neurons are more sensitive to stimuli in the center of the visual space than to those in its periphery. The Θ operator upsamples a coarser-scale image to get an estimated finer-scale image and then subtracts the estimated finer-scale image from the real finer-scale image to obtain a feature map.

Figure 3.6: Flowchart of the saliency map proposed by Itti et al. [3].

The intensity image I in Figure 3.6 is computed using

I = (r + g + b) / 3,   (3.1)

where r, g, and b are the red, green, and blue channels of the input image, respectively. The six intensity feature maps are computed as

I(c, s) = | I(c) Θ I(s) |,   (3.2)

where c ∈ {2, 3, 4} and s = c + δ with δ ∈ {3, 4}.
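As an illustration, the pyramid and center-surround operations of Equations 3.1 and 3.2 can be sketched in NumPy. This is only a sketch of the structure: a 2x2 box filter stands in for the Gaussian blur of Itti et al., and nearest-neighbour upsampling for their interpolation, so the numerical values differ from the original model.

```python
import numpy as np

def pyramid(img, levels=9):
    """Image pyramid for sigma = 0..8; box-filter pooling approximates the Gaussian blur."""
    pyr = [img.astype(np.float64)]
    for _ in range(levels - 1):
        p = pyr[-1]
        h, w = p.shape[0] // 2, p.shape[1] // 2
        pyr.append(p[:2 * h, :2 * w].reshape(h, 2, w, 2).mean(axis=(1, 3)))
    return pyr

def center_surround(pyr, c, s):
    """Feature map I(c, s) = |I(c) - upsampled I(s)|  (Eq. 3.2)."""
    factor = 2 ** (s - c)  # scale gap between center and surround
    up = np.repeat(np.repeat(pyr[s], factor, axis=0), factor, axis=1)
    up = up[:pyr[c].shape[0], :pyr[c].shape[1]]
    return np.abs(pyr[c] - up)

# Intensity channel I = (r + g + b) / 3  (Eq. 3.1)
rgb = np.random.rand(256, 256, 3)
I = rgb.mean(axis=2)
pyr = pyramid(I)
# Six intensity feature maps: c in {2, 3, 4}, s = c + delta, delta in {3, 4}
feature_maps = [center_surround(pyr, c, c + d) for c in (2, 3, 4) for d in (3, 4)]
```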

Itti et al. define their color maps R, G, B, and Y according to the opponent-process theory [41], which states that there are three opponent channels in the human visual system: red versus green, yellow versus blue, and black versus white:

R = r − (g + b)/2,
G = g − (r + b)/2,
B = b − (r + g)/2,
Y = (r + g)/2 − |r − g|/2 − b.   (3.3)

As the responses excited by one color of an opponent channel inhibit those by the other color, they define 12 color feature maps as

RG(c, s) = | (R(c) − G(c)) Θ (R(s) − G(s)) |,
BY(c, s) = | (B(c) − Y(c)) Θ (B(s) − Y(s)) |,   (3.4)

where c ∈ {2, 3, 4} and s = c + δ with δ ∈ {3, 4}.
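The broadly-tuned color channels of Equation 3.3 translate directly into NumPy; this sketch computes only the four channels, not the subsequent center-surround maps.

```python
import numpy as np

def opponent_channels(rgb):
    """Color channels R, G, B, Y of Eq. 3.3 (Itti et al. [3])."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    R = r - (g + b) / 2
    G = g - (r + b) / 2
    B = b - (r + g) / 2
    Y = (r + g) / 2 - np.abs(r - g) / 2 - b
    return R, G, B, Y

img = np.zeros((2, 2, 3))
img[..., 0] = 1.0  # a pure-red image
R, G, B, Y = opponent_channels(img)
# R responds strongly, G and B are suppressed, and Y is neutral for pure red.
```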

The orientation feature maps are obtained using Gabor pyramids O(σ, θ), where σ represents the scale and θ ∈ {0°, 45°, 90°, 135°} is the orientation of the Gabor filter:

O(c, s, θ) = | O(c, θ) Θ O(s, θ) |,   (3.5)

where c ∈ {2, 3, 4} and s = c + δ with δ ∈ {3, 4}.


All feature maps of each part (intensity, color, and orientation) are normalized and combined into three conspicuity maps using the across-scale addition operator ⊕, which downsamples the feature map at each level to scale four and sums these feature maps. The three conspicuity maps are then normalized and summed into the final saliency map:

Ī = ⊕_{c=2}^{4} ⊕_{s=c+3}^{c+4} N(I(c, s)),
C̄ = ⊕_{c=2}^{4} ⊕_{s=c+3}^{c+4} [ N(RG(c, s)) + N(BY(c, s)) ],
Ō = Σ_{θ ∈ {0°, 45°, 90°, 135°}} N( ⊕_{c=2}^{4} ⊕_{s=c+3}^{c+4} N(O(c, s, θ)) ),
S = (1/3) ( N(Ī) + N(C̄) + N(Ō) ),   (3.6)

where N(·) normalizes the values in a map to a fixed range [0, M], finds the global maximum M and the average m̄ of all the other local maxima in the map, and multiplies the map by (M − m̄)².
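The map-normalization operator N(·) can be sketched as follows. Approximating "local maxima" by block-wise maxima is our simplification; the original model searches local neighborhoods.

```python
import numpy as np

def normalize_map(fmap, M=1.0, block=16):
    """Sketch of Itti et al.'s N(.): rescale to [0, M], then weight the map
    by (M - m_bar)**2, where m_bar is the mean of the other local maxima.
    Maps with a single dominant peak are promoted; maps with many
    comparable peaks are suppressed."""
    f = fmap - fmap.min()
    if f.max() > 0:
        f = f * (M / f.max())
    h, w = f.shape
    bh, bw = h // block, w // block
    block_max = f[:bh * block, :bw * block].reshape(bh, block, bw, block).max(axis=(1, 3))
    others = block_max[block_max < M]  # local maxima excluding the global one
    m_bar = others.mean() if others.size else 0.0
    return f * (M - m_bar) ** 2

one_peak = np.zeros((64, 64)); one_peak[10, 10] = 5.0
two_peaks = np.zeros((64, 64)); two_peaks[10, 10] = 5.0; two_peaks[40, 40] = 4.0
# The lone peak keeps its full weight; a competing peak reduces the weight.
```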

Itti et al. [43] also extended the original saliency map to handle videos by adding two more features: flicker and motion. The flicker feature maps are calculated from the absolute luminance difference between the current frame and the previous frame. The motion feature maps are computed from the Gabor or intensity pyramids: the pyramids are shifted one pixel in four directions to obtain the shifted pyramids S_n(σ, θ), and the motion pyramids are computed using the Reichardt model [44],

R_n(σ, θ) = | O_n(σ, θ) ∗ S_{n−1}(σ, θ) − O_{n−1}(σ, θ) ∗ S_n(σ, θ) |,   (3.7)

where the subscripts n and n − 1 denote the current and previous frame, and ∗ represents a pixel-wise product. Finally, the feature maps R_n(c, s, θ) and the conspicuity map R̄_n of motion are computed by

R_n(c, s, θ) = | R_n(c, θ) Θ R_n(s, θ) |,   (3.8)

and

R̄_n = Σ_θ N( ⊕_{c=2}^{4} ⊕_{s=c+3}^{c+4} N(R_n(c, s, θ)) ).   (3.9)
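A minimal sketch of the pixel-wise Reichardt correlation of Equation 3.7 for a single shift direction; np.roll stands in for the one-pixel shift (it wraps at the border, which the original pyramids would not).

```python
import numpy as np

def reichardt(cur, prev, dx=1, dy=0):
    """R_n = |O_n * S_{n-1} - O_{n-1} * S_n|  (Eq. 3.7), '*' pixel-wise."""
    s_prev = np.roll(prev, shift=(dy, dx), axis=(0, 1))  # shifted previous frame
    s_cur = np.roll(cur, shift=(dy, dx), axis=(0, 1))    # shifted current frame
    return np.abs(cur * s_prev - prev * s_cur)

# A vertical bar that moves one pixel to the right between frames.
prev = np.zeros((8, 8)); prev[:, 3] = 1.0
cur = np.zeros((8, 8)); cur[:, 4] = 1.0
moving = reichardt(cur, prev)   # nonzero response for the moving bar
static = reichardt(prev, prev)  # a static frame yields zero everywhere
```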


Chapter 4

Approach

In this chapter, we describe our tone mapping method in detail. Figure 4.1 depicts the flowchart of our algorithm, which consists of two parts: attention map computation and tone mapping adjustment. Given an HDRI, we first calculate its attention map in order to find which regions would draw a human's attention; we describe how to compute the attention map in Section 4.1.

We adjust our tone mapping function to account for the transient attention and adaptation effects based on the psychophysical findings of Pestilli et al. [1]. Following Section 3.1, we design an attention function that adjusts a baseline tone mapping function for the neutral condition pixel by pixel according to the computed attention map. We also design an adaptation function to reduce the contrast of an HDRI. Finally, the tone mapping functions locally adjusted by the attention function and the adaptation function are combined using a weighted sum. We describe these functions in Section 4.2.

Figure 4.1: Our algorithm.

4.1 Attention map computation

We first need to determine the attentive and nonattentive regions in an HDR image. Among the several computational models of visual attention, we use the HDR saliency map [15][16], which is developed from the saliency map [3]. As mentioned by Petit et al. [15][16], the prediction of the original saliency map on HDRIs is less accurate than on LDRIs; hence, they modified the saliency map in two aspects, intensity and orientation. Table 4.1 shows the differences between the saliency maps proposed by Itti et al. [3] and Petit et al. [15][16]. These modifications are mainly made to handle the larger dynamic range of HDR data, as the original saliency map was designed for LDRIs. Petit et al. construct the conspicuity maps at scale 4, and then normalize and sum these conspicuity maps to obtain the saliency map of an HDRI. As the sizes of the input image and its saliency map are not the same, we resize the saliency map so that the salient and unsalient regions of the input image can be determined.

Table 4.1: The differences between saliency maps of HDR and LDR images

                Saliency Map for LDRI                  Saliency Map for HDRI
Intensity       I(c, s) = | I(c) Θ I(s) |              I(c, s) = | I(c) Θ I(s) | / I(s)
Orientation     O(c, s, θ) = | O(c, θ) Θ O(s, θ) |     O(c, s, θ) = O(c, θ) / I(s)

Since the saliency map can predict the location of visual attention, we use the equation 4.1 to obtain the attention map. We treat those pixels whose saliency values are above 0 as attentive regions while those below 0 as nonattentative regions. Moreover, we rescale the saliency values of the attention map to [−1, 1]. Our tone mapping function would changes according to this attention map.

A(x, y) = 2 · (S(x, y) − min) / (max − min) − 1, (4.1)

where S(x, y) is the saliency value at pixel (x, y) of the saliency map, and max and min are the maximum and minimum of the saliency map, respectively.
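The rescaling of Equation 4.1 can be written in a few lines of NumPy; this is a minimal sketch and the function name is ours, not the thesis implementation.

```python
import numpy as np

def attention_map(saliency):
    """Rescale a saliency map to [-1, 1] per Equation 4.1.

    After rescaling, pixels above 0 are treated as attentive regions
    and pixels below 0 as nonattentative regions.
    """
    s = saliency.astype(float)
    s_min, s_max = s.min(), s.max()
    return 2.0 * (s - s_min) / (s_max - s_min) - 1.0

# Toy 2x2 saliency map: the result spans exactly [-1, 1].
A = attention_map(np.array([[0.0, 0.5], [0.25, 1.0]]))
```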

4.2 Tone mapping function adjustment

We first convert an input HDRI from the RGB color space to XYZ space in order to get the luminance of the HDRI. We then calculate the Weber contrast [45] of every HDR pixel defined as follows,

C(x, y) = (L(x, y) − Lw) / Lw, (4.2)

where L(x, y) is the luminance of the pixel (x, y) and

Lw = exp( (1/N) Σ_{x,y} log(L(x, y)) ). (4.3)

For simplicity of notation, we omit (x, y) from C(x, y) and L(x, y) in the following text.
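Equations 4.2 and 4.3 translate directly into NumPy; the sketch below is ours, and the `eps` guard against log(0) at black pixels is our addition.

```python
import numpy as np

def weber_contrast(L, eps=1e-6):
    """Weber contrast of each pixel (Equations 4.2 and 4.3).

    Lw is the log-average (geometric mean) luminance over all N
    pixels of the image.
    """
    Lw = np.exp(np.mean(np.log(L + eps)))  # Eq. 4.3
    return (L - Lw) / Lw                    # Eq. 4.2

# Geometric mean of {1, 4, 2, 8} is 64**0.25 ~= 2.828, so the darkest
# pixel gets a negative contrast and the brightest a positive one.
C = weber_contrast(np.array([[1.0, 4.0], [2.0, 8.0]]))
```

The log-average makes Lw robust to the extreme luminance values typical of HDR data, which an arithmetic mean would let dominate.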

We adopt the Naka-Rushton function (Equation 4.4), which is used in many tone mapping methods [30][10][46], as the base model of our tone mapping function.

R = a1 · Rmax · C^α / (C^α + a2 · C50^α), (4.4)

where R is the neuron response, C is the contrast of the stimulus, Rmax is the maximal firing rate of the population, α is the slope of the contrast response function, and C50 is the contrast required to produce 50% of the neuron's maximum response.
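Equation 4.4 can be transcribed directly; the default parameter values below are illustrative only, not values fitted in the thesis.

```python
def naka_rushton(C, a1=1.0, a2=1.0, Rmax=1.0, alpha=2.0, C50=0.5):
    """Naka-Rushton contrast response function (Equation 4.4).

    Per Pestilli et al. [40], a1 modulates the response gain
    (transient attention) and a2 modulates the contrast gain
    (adaptation). Defaults here are illustrative, not fitted.
    """
    return a1 * Rmax * C**alpha / (C**alpha + a2 * C50**alpha)
```

At C = C50 with a1 = a2 = 1, the response is exactly Rmax/2, which is the defining property of C50; raising a1 scales the whole curve up, while raising a2 pushes the half-saturation point to higher contrasts.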

Pestilli et al. [40] indicate that adaptation changes the contrast gain by modulating the variable a2, while transient attention changes the response gain via the variable a1.

As mentioned in Section 3.1, transient attention influences the contrast sensitivity function. It increases contrast sensitivity when we pay attention to some locations and decreases sensitivity when we do not keep an eye on them. We apply the findings of Pestilli et al. [40] to adjust our tone mapping function so that it takes advantage of the transient attention effect. Equation 4.5, which we call the attention function, gives our tone mapping function under transient attention.


TA(C) = β · (C + 1) / (2 + C), (4.5)

where β is used to adjust the tone mapping function according to the value of the attention map. Specifically, we use Equation 4.6 to adjust β:

β = 0.2 · A(x, y) + 1. (4.6)

Figure 4.2 illustrates the behavior of our tone mapping function under transient attention. We set β = 1 as the baseline, which corresponds to the contrast sensitivity function in the neutral condition in Figure 3.1. The tone mapping function shifts toward the red curve as β gradually increases; this simulates the curve of the valid condition in Figure 3.4. If β gradually decreases, the function moves toward the blue curve. This behavior matches the experimental results of Pestilli et al. [40]. To avoid over- or underexposure, we restrict 0.8 ≤ β ≤ 1.2.

Figure 4.2: Plot of Equation 4.5. We treat β = 1 as the baseline. The red and blue curves correspond to β = 1.2 and β = 0.8, respectively.
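The attention function and its β adjustment (Equations 4.5-4.6) can be sketched in a few lines; the function name below is ours, not the thesis implementation.

```python
def transient_attention(C, A):
    """Attention function TA(C) with beta from Equations 4.5-4.6.

    C is the Weber contrast of a pixel and A the attention-map value
    in [-1, 1], so beta = 0.2 * A + 1 automatically stays within the
    required range [0.8, 1.2].
    """
    beta = 0.2 * A + 1.0
    return beta * (C + 1.0) / (2.0 + C)
```

With A = 0 (neutral), TA reduces to the baseline (C + 1)/(2 + C); attentive pixels (A > 0) are mapped slightly brighter and nonattentative pixels slightly darker.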


Sustained attention enhances contrast sensitivity like transient attention does for a short period of time, but it impairs perception like adaptation does over a long duration. We consider the effect of sustained attention as a mixture of transient attention and adaptation. Since the effect of adaptation dominates the effect of sustained attention, we approximate the effect of sustained attention by the effect of adaptation. We use Equation 4.7 to model the adaptation function.

SA(C) = (C + 1) / (δ + C), (4.7)

where δ is used to adjust the tone mapping function based on the adaptation effect. We call Equation 4.7 the adaptation function. As we consider adaptation a steady response, which does not vary much over a short period, we fix δ at 13 in our approach. Figure 4.3 shows the functions used in our model: δ = 2 and δ = 13 denote the curves in the neutral and adaptation conditions, respectively.

Figure 4.3: Plot of Equation 4.7. We set δ = 2 as the baseline curve that represents the neutral condition; δ = 13 denotes the curve in the adaptation condition.


Because transient attention and adaptation act independently and change the contrast sensitivity function simultaneously [1], we use a weighting function to combine transient attention and adaptation (Equation 4.8). Adaptation is more important as it affects the contrast sensitivity function longer than transient attention, which is short-lived, so adaptation should weight more than transient attention in an HDRI. Therefore, we set a higher weight for the adaptation term:

R(C) = (1/3) TA(C) + (2/3) SA(C). (4.8)

After we obtain R, the intensity value of each color channel in the HDRI is mapped to that of an LDRI as follows:

[Rd, Gd, Bd]^T = [R(C) · Rw / L, R(C) · Gw / L, R(C) · Bw / L]^T, (4.9)

where (Rd, Gd, Bd) is the RGB value of a pixel in the LDRI and (Rw, Gw, Bw) is the RGB value of the corresponding pixel in the HDRI.
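Putting Equations 4.2-4.9 together, the per-pixel mapping for a still image might look like the sketch below. The Rec. 709 luminance coefficients are our assumption (the thesis obtains luminance via an RGB-to-XYZ conversion without listing coefficients), and the `eps` guards are our additions.

```python
import numpy as np

def tone_map(rgb, A, delta=13.0, eps=1e-6):
    """Map an HDR image to LDR intensities (Equations 4.2-4.9).

    rgb: HDR image of shape (H, W, 3); A: attention map in [-1, 1].
    """
    # Luminance via Rec. 709 weights (our assumption; the thesis
    # uses the Y channel of an RGB-to-XYZ conversion).
    L = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]
    Lw = np.exp(np.mean(np.log(L + eps)))              # Eq. 4.3
    C = (L - Lw) / Lw                                  # Eq. 4.2
    beta = 0.2 * A + 1.0                               # Eq. 4.6
    TA = beta * (C + 1.0) / (2.0 + C)                  # Eq. 4.5
    SA = (C + 1.0) / (delta + C)                       # Eq. 4.7
    R = TA / 3.0 + 2.0 * SA / 3.0                      # Eq. 4.8
    # Eq. 4.9: scale each channel by R(C) / L to preserve chromaticity.
    return R[..., None] * rgb / (L[..., None] + eps)

# Uniform white input with neutral attention: every pixel receives
# the same display value R = 0.5/3 + (1/13) * 2/3.
out = tone_map(np.ones((2, 2, 3)), np.zeros((2, 2)))
```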

Our tone mapping algorithm can also be applied to an HDR video by modifying two components. First, we adopt the video saliency map [43], which includes a flicker feature map and a motion feature map, to compute the attention map of an HDR video. Second, we modify the weighting function to address the change of contrast sensitivity due to the temporal property of videos:

R(C) = f(A) · TA(C) + (1 − f(A)) · SA(C), (4.10)

where

f(A) = (A + 3) / 6. (4.11)

Equation 4.11 is inspired by the study of Pestilli et al. [1], who found that transient attention plays a more important role than adaptation when people watch a video, since transient attention alters the contrast sensitivity function that adaptation had already optimized. The weighting function f(A) depends on the time course of transient attention.
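For video frames, Equations 4.10-4.11 replace the fixed 1/3-2/3 mix of the still-image case; a minimal sketch (the function name is ours):

```python
def video_blend(TA, SA, A):
    """Blend attention and adaptation responses for an HDR video
    frame (Equations 4.10-4.11).

    Since A lies in [-1, 1], f(A) = (A + 3)/6 lies in [1/3, 2/3]:
    attentive pixels weight the transient-attention term up to 2/3,
    whereas the still-image mix (Eq. 4.8) fixes that weight at 1/3.
    """
    f = (A + 3.0) / 6.0
    return f * TA + (1.0 - f) * SA
```

The least attentive pixels (A = −1) thus recover exactly the still-image weighting, while attention can at most invert it in favor of the transient term.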


Chapter 5

Experimental Results

5.1 Validation of our approach

We validate the necessity of the attention function and the adaptation function used in our tone mapping approach via three experiments. First, we show what happens if we use only one of our tone mapping functions. As shown in Figure 5.1, the attention and adaptation functions are complementary: the regions that are worse in the result of transient attention can be complemented by the corresponding regions in the result of adaptation, and vice versa. This also demonstrates that attention and adaptation can both optimize visual performance [1]. Second, Figure 5.2 shows the results of our tone mapping function when different δ values are used in the adaptation function; as δ increases, the results become better. Third, we change the attention function by varying the range of β. As the range of β increases, one can observe that the results become darker in Figure 5.3.


(a) Transient Attention (b) Adaptation

Figure 5.1: Memorial Church at Stanford University. Radiance map courtesy of Paul Debevec. (a) is produced only by the transient attention function TA(C); (b) is generated only by the adaptation function SA(C).


(a) δ = 2 (b) δ = 12

(c) δ = 30 (d) δ = 50

Figure 5.2: Memorial Church at Stanford University. Radiance map courtesy of Paul Debevec. These results are produced by our tone mapping method with different δ values.


(a) 0.8 < β < 1.2 (b) 0.4 < β < 1.6

(c) 0.01 < β < 2 (d) 0.001 < β < 3

Figure 5.3: Memorial Church at Stanford University. Radiance map courtesy of Paul Debevec. These results are produced by our tone mapping method with different ranges of β.


5.2 HDR Images

In this section, we compare our results with three state-of-the-art tone mapping algorithms. We use the executable code provided in [5] for histogram adjustment [19] and photographic tone reproduction [30], and the pfstmo package to run optimization tone mapping [22]. We tuned the parameters of these approaches to ensure their results represent their best performance, and set γ = 2.0 for those approaches that require gamma correction. In all of our experiments, we set 0.8 < β < 1.2 and δ = 12. Figures 5.4 to 5.7 show the comparison of our results with the results of histogram adjustment [19], optimization tone mapping [22], and photographic tone reproduction [30]. We highlight the differences of these results in thumbnails below or next to each image.

In Figure 5.4, the histogram adjustment operator preserves more details but the produced image appears overexposed. The result produced by the optimization tone mapping operator places more emphasis on higher frequencies and becomes so dark that some details are invisible. The photographic tone reproduction operator produces a brighter result than optimization tone mapping but provides less detail and contrast than our method.

Figure 5.5 shows the tone mapping results of the radiance map of a bathroom. Histogram adjustment does not preserve chromaticity well. Optimization tone mapping performs better around the lamp region but produces a dimmer scene. Photographic tone reproduction produces a result similar to ours, but its result loses visibility around the lamp region.

Figure 5.6 shows the tone mapping results of MtTamWest.hdr. The result of histogram adjustment is not very saturated. The optimization tone mapping operator makes the rock more mottled than other tone mapping methods do. Our method produces higher contrast in the region of plants (the bottom thumbnail).


Figure 5.7 shows the tone mapping results of DaniBelgium.hdr. The result produced by the histogram adjustment operator is washed out. The leaf veins in our result are clearer than those in other results; however, the plant region (the bottom thumbnail in Figure 5.7) in optimization tone mapping shows higher contrast.

5.3 HDR Videos

We show some frames of tone-mapped HDR videos produced by photographic tone reproduction and our method side by side in Figure 5.8. The original HDR video was captured by Grzegorz Krawczyk. Our result produces a more yellowish background when the sun is in the scene. In general, our result exhibits more details as well as a brighter scene.

5.4 Discussion and limitation

In general, the overall results of histogram adjustment [19] exhibit more details but less contrast. Optimization tone mapping [22] produces a sharper image but preserves fewer details than our approach. Our tone mapping function is similar to photographic tone reproduction [30], so our results resemble those of photographic tone reproduction in most parts of the tested images; however, our approach preserves details better and maintains higher contrast in most tested images. A limitation of our approach is that our tone mapping function has difficulty handling a white, cloudy sky.


(a) Histogram adjustment (b) Optimization tone mapping

(c) Photographic tone reproduction (d) Our result


(a) Histogram adjustment (b) Optimization tone mapping

(c) Photographic tone reproduction (d) Our result

Figure 5.5: Bathroom. Radiance map courtesy of Paul Debevec.


(a)

(b)

(c)

(d)

Figure 5.6: MtTamWest.hdr. Radiance map courtesy of ILM. (a) Histogram adjustment (b) Optimization tone mapping (c) Photographic tone reproduction (d) Our result


(a)

(b)

(c)

(d)

Figure 5.7: dani belgium.hdr. Radiance map courtesy of Karol Myszkowski. (a) Histogram adjustment (b) Optimization tone mapping (c) Photographic tone reproduction (d) Our result


Figure 5.8: The scene was captured by Grzegorz Krawczyk. (a)-(c) and (g)-(i) are produced by our algorithm. (d)-(f) and (j)-(l) are produced by photographic tone reproduction.


Chapter 6

Conclusion and Future Work

We present a tone mapping method that considers the attention and adaptation effects in the human visual system. We implicitly model the effect of sustained attention within adaptation and propose two models for transient attention and adaptation based on studies in psychophysics and neuroscience. We adopt the HDR saliency map [15][16], based on Itti et al. [3], to model the bottom-up process of attention. We also demonstrate that our results preserve contrast and details better than those produced by three state-of-the-art tone mapping methods: histogram adjustment [19], the photographic tone reproduction operator [30], and the optimization tone mapping operator [22].

In the future, as researchers learn more about brain functions, we will replace the saliency map with a more sophisticated computational model of attention. We will also conduct experiments to measure the visual quality of our results against the real scene [47]. We would also like to apply our approach to HDR compression [7][8][9].


Bibliography

[1] F. Pestilli, G. Viera, and M. Carrasco, “How do attention and adaptation affect contrast sensitivity?,” Journal of vision, vol. 7, no. 7, 2007.

[2] S. Ling and M. Carrasco, “When sustained attention impairs perception,” Nature Neuro-science, vol. 9, pp. 1243–1245, September 2006.

[3] L. Itti, C. Koch, and E. Niebur, "A model of saliency-based visual attention for rapid scene analysis," IEEE Trans. Pattern Anal. Mach. Intell., vol. 20, no. 11, pp. 1254–1259, 1998.

[4] J. A. Ferwerda, S. N. Pattanaik, P. Shirley, and D. P. Greenberg, "A model of visual adaptation for realistic image synthesis," in SIGGRAPH '96: Proceedings of the 23rd annual conference on Computer graphics and interactive techniques, pp. 249–258, 1996.

[5] E. Reinhard, High Dynamic Range Imaging: Acquisition, Display, and Image-Based Lighting. Morgan Kaufmann, November 2005.

[6] H. Seetzen, W. Heidrich, W. Stuerzlinger, G. Ward, L. Whitehead, M. Trentacoste, A. Ghosh, and A. Vorozcovs, “High dynamic range display systems,” ACM Trans. Graph., vol. 23, no. 3, pp. 760–768, 2004.

[7] G. Ward and M. Simmons, "Subband encoding of high dynamic range imagery," in SIGGRAPH '06: ACM SIGGRAPH 2006 Courses, ACM, 2006.

[8] G. Ward and M. Simmons, "JPEG-HDR: a backwards-compatible, high dynamic range extension to JPEG," in SIGGRAPH '05: ACM SIGGRAPH 2005 Courses, ACM, 2005.

[9] R. Xu, S. N. Pattanaik, and C. E. Hughes, "High-dynamic-range still-image encoding in JPEG 2000," IEEE Comput. Graph. Appl., vol. 25, no. 6, pp. 57–64, 2005.

[10] S. N. Pattanaik, J. Tumblin, H. Yee, and D. P. Greenberg, "Time-dependent visual adaptation for fast realistic image display," in SIGGRAPH '00: Proceedings of the 27th annual conference on Computer graphics and interactive techniques, pp. 47–54, 2000.

[11] E. L. Cameron, J. C. Tai, and M. Carrasco, "Covert attention affects the psychometric function of contrast sensitivity," Vision Research, vol. 42, no. 8, pp. 949–967, 2002.

[12] M. Carrasco, S. Ling, and S. Read, "Attention alters appearance," Nat Neurosci, vol. 7, pp. 308–313, March 2004.

[13] J. C. Martínez-Trujillo and S. Treue, "Attentional modulation strength in cortical area MT depends on stimulus contrast," Neuron, vol. 35, no. 2, pp. 365–370, 2002.

[14] M. Carrasco, C. Penpeci-Talgar, and M. Eckstein, "Spatial covert attention increases contrast sensitivity across the CSF: support for signal enhancement," Vision Research, vol. 40, no. 10-12, pp. 1203–1215, 2000.

[15] J. Petit, R. Brémond, and J.-P. Tarel, "Saliency maps of high dynamic range images," in APGV '09: Proceedings of the 6th Symposium on Applied Perception in Graphics and Visualization, pp. 134–134, 2009.

[16] R. Brémond, J. Petit, and J.-P. Tarel, "Saliency maps of high dynamic range images," in Media Retargeting Workshop in conjunction with ECCV '10, 2010.

[17] J. Tumblin and H. Rushmeier, “Tone reproduction for realistic images,” IEEE Comput. Graph. Appl., vol. 13, no. 6, pp. 42–48, 1993.

[18] G. Ward, "A contrast-based scalefactor for luminance display," in Graphics Gems IV, pp. 415–421, 1994.

[19] G. W. Larson, H. Rushmeier, and C. Piatko, "A visibility matching tone reproduction operator for high dynamic range scenes," IEEE Transactions on Visualization and Computer Graphics, vol. 3, no. 4, pp. 291–306, 1997.

[20] J. M. Henderson, "Object identification in context: the visual processing of natural scenes," Canadian Journal of Psychology, vol. 46, pp. 319–341, September 1992.

[21] R. W. G. Hunt, The reproduction of colour. 2004.

[22] R. Mantiuk, S. Daly, and L. Kerofsky, “Display adaptive tone mapping,” ACM Trans. Graph., vol. 27, no. 3, pp. 1–10, 2008.

[23] J. H. Van Hateren, “Encoding of high dynamic range video with a model of human cones,” ACM Trans. Graph., vol. 25, no. 4, pp. 1380–1399, 2006.

[24] K. Chiu, M. Herf, P. Shirley, S. Swamy, C. Wang, and K. Zimmerman, "Spatially nonuniform scaling functions for high contrast images," in Proceedings of Graphics Interface '93, pp. 245–253, 1993.

[25] J. Tumblin and G. Turk, "LCIS: a boundary hierarchy for detail-preserving contrast reduction," in SIGGRAPH '99: Proceedings of the 26th annual conference on Computer graphics and interactive techniques, pp. 83–90, 1999.

[26] P. Perona and J. Malik, "Scale-space and edge detection using anisotropic diffusion," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, pp. 629–639, 1990.

[27] R. Fattal, D. Lischinski, and M. Werman, "Gradient domain high dynamic range compression," in SIGGRAPH '02: Proceedings of the 29th annual conference on Computer graphics and interactive techniques, pp. 249–256, ACM, 2002.

[28] F. Durand and J. Dorsey, "Fast bilateral filtering for the display of high-dynamic-range images," in SIGGRAPH '02: Proceedings of the 29th annual conference on Computer graphics and interactive techniques, pp. 257–266, ACM, 2002.

[29] C. Tomasi and R. Manduchi, "Bilateral filtering for gray and color images," in ICCV '98: Proceedings of the Sixth International Conference on Computer Vision, p. 839, IEEE Computer Society, 1998.

[30] E. Reinhard, M. Stark, P. Shirley, and J. Ferwerda, "Photographic tone reproduction for digital images," in Proceedings of SIGGRAPH 2002, pp. 267–276, 2002.

[31] A. Adams, The Camera. The Ansel Adams Photography Series. Little, Brown and Company, 1980.

[32] A. Adams, The Negative. The Ansel Adams Photography Series. Little, Brown and Company, 1981.

[33] A. Adams, The Print. The Ansel Adams Photography Series. Little, Brown and Company, 1983.

[34] H.-T. Chen, T.-L. Liu, and T.-L. Chang, "Tone reproduction: A perspective from luminance-driven perceptual grouping," in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 369–376, 2005.

[35] Y. Rubner, C. Tomasi, and L. J. Guibas, “A metric for distributions with applications to image databases,” in ICCV ’98: Proceedings of the Sixth International Conference on Computer Vision, (Washington, DC, USA), p. 59, IEEE Computer Society, 1998.

[36] D. Lischinski, Z. Farbman, M. Uyttendaele, and R. Szeliski, “Interactive local adjustment of tonal values,” ACM Trans. Graph., vol. 25, no. 3, pp. 646–653, 2006.

[37] C.-K. Liang, W.-C. Chen, and N. Gelfand, "TouchTone: Interactive local image adjustment using point-and-swipe," in Computer Graphics Forum (Proc. Eurographics), to appear, 2010.

[38] C. Hickey, W. van Zoest, and J. Theeuwes, "The time course of exogenous and endogenous control of covert attention," Experimental Brain Research, November 2009.

[39] J. Nachmias and R. V. Sansbury, “Letter: Grating contrast: discrimination may be better than detection,” Vision Res, vol. 14, pp. 1039–1042, Oct 1974.

[40] F. Pestilli, S. Ling, and M. Carrasco, "A population-coding model of attention's influence on contrast response: Estimating neural effects from psychophysical data," Vision Research, vol. 49, no. 10, pp. 1144–1153, 2009.

[41] A. M. Treisman and G. Gelade, "A feature-integration theory of attention," Cognitive Psychology, vol. 12, no. 1, pp. 97–136, 1980.

[42] L. Itti, "Automatic foveation for video compression using a neurobiological model of visual attention," IEEE Transactions on Image Processing, vol. 13, no. 10, pp. 1304–1318, 2004.

[43] L. Itti and N. Dhavale, “Realistic avatar eye and head animation using a neurobiological model of visual attention,” in Proc. SPIE, pp. 64–78, SPIE Press, 2003.

[44] W. Reichardt, “Evaluation of optical motion information by movement detectors,” J Comp Physiol A, vol. 161, pp. 533–547, Sep 1987.

[45] http://en.wikipedia.org/wiki/Contrast(vision).

[46] L. Meylan, D. Alleysson, and S. Süsstrunk, "Model of retinal local adaptation for the tone mapping of color filter array images," Journal of the Optical Society of America A, vol. 24, pp. 2807–2816, 2007.

[47] R. Mantiuk, S. Daly, K. Myszkowski, and H.-P. Seidel, "Predicting visible differences in high dynamic range images - model and its calibration," in Human Vision and Electronic Imaging X, IS&T/SPIE's 17th Annual Symposium on Electronic Imaging, vol. 5666, pp. 204–214, 2005.
