
In document: Spatio-temporal Analysis on Smoke Detection (pp. 20-33)

Chapter 3 Smoke Detection Algorithm

3.1  Block Processing

3.1.2 Candidate Selection

Smoke regions continuously appear and disappear because of the particle dynamics of ignition and combustion, as shown in Fig. 3-5. Tracking or analyzing such a target with object-based methods is inefficient; a block-based technique offers a better way to solve this problem. The image is divided into non-overlapping blocks of equal size. First, we find the blocks with a gray-level change: the foreground image is obtained by the GMM approach, and the summation of the foreground image is computed for each block as shown in Eq. (3.9)

Fk(t) = Σ(x,y)∈Bk FGt(x, y)   (3.9)

where Bk is the k-th block and FGt is the binary foreground image at time t.

Fig. 3-5 Smoke regions come into existence and disappear continuously

Foreground regions can be found by the GMM approach, but they may also include static objects. Next, the temporal difference of two successive frames is calculated. In dynamic image analysis, all pixels with value "1" in the difference image are considered moving objects in the scene. As we know, video images usually contain a great amount of noise due to intrinsic electronic noise and quantization, so pixel-wise differencing of two successive frames inevitably produces false segmentations.

To reduce the disturbance of noise, we also compute its summation for each block to determine the moving property. The block difference is defined as

Dk(t) = Σ(x,y)∈Bk | It(x, y) − It−1(x, y) |   (3.10)

where It is the gray-level image at time t and T2 is the predefined threshold.

In order to reduce the computational cost, only blocks whose background-subtraction and temporal-difference sums are both larger than the predefined thresholds are regarded as candidates containing moving objects, as expressed in Eq. (3.11)

Ck(t) = 1 if Fk(t) > T1 and Dk(t) > T2, and Ck(t) = 0 otherwise   (3.11)

where Fk(t) and Dk(t) are the block sums of Eqs. (3.9) and (3.10). We consider the information of a particular block over time as a "block process" in the following sections. Fig. 3-6 illustrates some results produced by block processing.

Fig. 3-6 Results of block processing
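The candidate-selection rule above can be sketched as follows. This is a minimal illustration, not the thesis implementation: the function name, block size, and threshold values T1 and T2 are assumptions, and the GMM foreground mask is taken as given.

```python
import numpy as np

def candidate_blocks(fg_mask, frame, prev_frame, block=16, t1=40, t2=30):
    """Mark a block as a candidate only when both its foreground sum
    (Eq. 3.9) and its temporal-difference sum (Eq. 3.10) exceed their
    thresholds (Eq. 3.11). Threshold values here are illustrative."""
    h, w = fg_mask.shape
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    cand = np.zeros((h // block, w // block), dtype=bool)
    for by in range(h // block):
        for bx in range(w // block):
            ys, xs = by * block, bx * block
            f_sum = fg_mask[ys:ys + block, xs:xs + block].sum()  # Eq. (3.9)
            d_sum = diff[ys:ys + block, xs:xs + block].sum()     # Eq. (3.10)
            cand[by, bx] = (f_sum > t1) and (d_sum > t2)         # Eq. (3.11)
    return cand
```

In a real system the double loop would be vectorized, but the per-block sums mirror the equations directly.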

3.2 2-D Spatial Wavelet Analysis

Although the Fourier transform has been the mainstay of transform-based image processing since the late 1950s, a more recent transformation, called the wavelet transform, is now making it even easier to compress, transmit, and analyze many images. Unlike the Fourier transform, whose basis functions are sinusoids, wavelet transforms are based on small waves, called wavelets, of varying frequency and limited duration. This allows them to provide the equivalent of a musical score for an image, revealing not only what notes (frequencies) to play but also when to play them.

Conventional Fourier transforms, on the other hand, provide only the notes or frequency information; temporal information is lost in the transform process.

Now we want to transform an image (M by N) into wavelet domain. The whole 2-D spatial wavelet transform can be decomposed by the horizontal wavelet transform and the vertical wavelet transform. Fig. 3-7 is the diagram of horizontal wavelet transform.

The direction from left to right is the wavelet decomposition, and the direction from right to left is the wavelet synthesis.

Fig. 3-7 Horizontal wavelet transform

Each row of the image is regarded as a mutually independent 1-D sequence, and the wavelet transform is applied to each row separately. Briefly, after the horizontal wavelet transform an original image is decomposed into low-band information on the left side and high-band information on the right side. We use L and H to stand for low-band and high-band information, respectively.

The vertical wavelet transform operates on the L and H subimages obtained by the horizontal transform, completing the whole wavelet transform. Fig. 3-8 is the diagram of the vertical wavelet transform. The direction from left to right is the wavelet decomposition, and the direction from right to left is the wavelet synthesis. The data on the left side has been processed by the horizontal wavelet transform but not yet by the vertical one. Each column of the image is regarded as a mutually independent 1-D sequence, and the wavelet transform is applied to each column separately.

Fig. 3-8 Vertical wavelet transform

After the vertical wavelet transform, the data is further separated into an upper and a lower part: the upper part is the vertical low-band information and the lower part is the vertical high-band information, as shown on the right side of Fig. 3-8. In combination with the horizontal transform, the whole image is thus separated into four regions: horizontal low-band vertical low-band (LL), horizontal low-band vertical high-band (LH), horizontal high-band vertical low-band (HL), and horizontal high-band vertical high-band (HH).

It is well known that wavelet subimages contain the texture and edge information of the original image; edges produce local extrema in the wavelet subimages [7]. The LH, HL, and HH subimages contain the horizontal, vertical, and diagonal high-frequency information of the original image, respectively. Fig. 3-9 shows an original image and its single-level wavelet subimages.

Fig. 3-9 Original image and its single level wavelet subimages
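The row-then-column decomposition described above can be sketched as a one-level 2-D transform. The thesis does not name its wavelet, so the unnormalized Haar filters used here are an assumption chosen for brevity.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar DWT: a horizontal transform (rows -> L | H)
    followed by a vertical transform (columns -> low | high), yielding
    the LL, LH, HL, and HH subimages."""
    x = img.astype(float)
    # horizontal transform: each row split into low-band L and high-band H
    L = (x[:, 0::2] + x[:, 1::2]) / 2.0
    H = (x[:, 0::2] - x[:, 1::2]) / 2.0
    # vertical transform applied to both L and H
    LL = (L[0::2, :] + L[1::2, :]) / 2.0
    LH = (L[0::2, :] - L[1::2, :]) / 2.0
    HL = (H[0::2, :] + H[1::2, :]) / 2.0
    HH = (H[0::2, :] - H[1::2, :]) / 2.0
    return LL, LH, HL, HH
```

For a smooth (blurred) region the three detail subimages LH, HL, and HH are close to zero, which is exactly the behavior exploited for smoke detection below.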

Because smoke blurs the texture and edges in the background of an image, high-frequency information becomes much less visible when smoke covers part of the scene. The decrease of high-frequency information therefore makes the details an important indicator of smoke. The energy of the details is calculated for each candidate block; the transform coefficients are shown in Fig. 3-10.

Fig. 3-10 Two-dimension wavelet transform and its coefficients

Instead of using the energy of the input directly, we prefer the energy ratio of the current frame to the background model, because the ratio cancels out the effect of differing conditions and measures the decrease impartially:

α = E(It) / E(BGt)   (3.12)

where E(·) is the sum of squared detail coefficients (LH, HL, and HH) over the candidate block and BGt is the mean value of the distribution with the highest weight in the GMM background model. The value of the energy ratio α is our first feature in the spatial domain, which supports the fact that the texture or edges of the scene observed by the

camera are no longer visible as they used to be in the current input frame. It is also possible to determine the location of smoke using the wavelet subimages as shown in Fig. 3-11.

(a) Original frame without smoke

(b) Frame with smoke

Fig. 3-11 Blurring of the edges is visible in the single-level wavelet subimages
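The energy ratio α can be sketched as below. This is a self-contained illustration under assumptions: one-level unnormalized Haar details stand in for the thesis's unspecified wavelet, and a small epsilon (not in the original) guards the division.

```python
import numpy as np

def detail_energy(block):
    """Sum of squared one-level Haar detail coefficients (LH, HL, HH)."""
    x = block.astype(float)
    L = (x[:, 0::2] + x[:, 1::2]) / 2.0
    H = (x[:, 0::2] - x[:, 1::2]) / 2.0
    LH = (L[0::2, :] - L[1::2, :]) / 2.0
    HL = (H[0::2, :] + H[1::2, :]) / 2.0
    HH = (H[0::2, :] - H[1::2, :]) / 2.0
    return float((LH**2 + HL**2 + HH**2).sum())

def energy_ratio(block_now, block_bg, eps=1e-9):
    """alpha of Eq. (3.12) as reconstructed here: current-frame detail
    energy over background-model detail energy. Values well below 1
    indicate that texture and edges have been blurred, e.g. by smoke."""
    return detail_energy(block_now) / (detail_energy(block_bg) + eps)
```

A textured background block compared against a featureless (smoke-covered) current block yields a ratio near zero.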

3.3 1-D Temporal Energy Analysis

A wave is an oscillating, periodic function of time or space. In contrast, wavelets are localized waves: their energy is concentrated in time or space, which suits them to the analysis of transient signals. Both the wavelet transform and the STFT (Short-Time Fourier Transform) analyze signals with oscillating basis functions, but the wavelet transform uses wavelets of finite energy (Fig. 3-12). The wavelet analysis is performed similarly to STFT analysis: the signal to be analyzed is multiplied with a wavelet function, just as it is multiplied with a window function in the STFT, and the transform is then computed

Fig. 3-12 Demonstration of (a) a wave and (b) a wavelet

for each segment generated. However, unlike the STFT, in the wavelet transform the width of the wavelet function changes with each spectral component. At high frequencies the wavelet transform gives good time resolution and poor frequency resolution, while at low frequencies it gives good frequency resolution and poor time resolution.

Here we use only one level of the transform for fast computation. The 1-D wavelet transform of a signal x is calculated by passing it through a pair of filters. The samples are passed through a low-pass filter with impulse response g, resulting in a convolution of the two; the signal is simultaneously decomposed with a high-pass filter h. The outputs give the detail coefficients (from the high-pass filter) and the approximation coefficients (from the low-pass filter). It is important that the two filters are related to each other; they are known as a quadrature mirror filter pair.

Since half the frequencies of the signal have now been removed, half the samples can be discarded according to Nyquist's rule, so the filter outputs are subsampled by 2:

ylow[n] = Σk x[k] g[2n − k]   (3.15)

yhigh[n] = Σk x[k] h[2n − k]   (3.16)

This decomposition has halved the time resolution since only half of each filter output characterizes the signal. Each output has half the frequency band of the input so the frequency resolution has been doubled.

Fig. 3-13 Block diagram of 1-D DWT

With the subsampling operator ↓k defined by

(y ↓ k)[n] = y[kn]   (3.17)

the above summations can be written more concisely:

ylow = (x ∗ g) ↓ 2   (3.18)

yhigh = (x ∗ h) ↓ 2   (3.19)

Ordinary moving objects such as pedestrians or vehicles are solid, so the background details cannot be seen through their bodies. If an ordinary moving object passes through a candidate block, there is a sudden energy change caused by the transition from the background to the foreground object. On the contrary, initial smoke is semi-transparent in nature and becomes less visible as time goes by.

A gradual change of energy characterizes this process, and any abrupt variation is regarded as noise caused by common disturbances. One-dimensional temporal wavelet analysis of the energy ratio α provides a proper evaluation of this phenomenon. We obtain the high-band (detail) and low-band (approximation) information by the 1-D DWT shown in Fig. 3-13, so the disturbance can be measured by computing the summation of the details over a predefined time interval. Ordinary solid moving objects clearly produce a great quantity of details, as in Fig. 3-14(a), whereas smoke varies smoothly in energy ratio and produces few details, as in Fig. 3-14(b). The likelihood that the candidate block is a smoke region is in inverse proportion to the parameter β

βB = (1/n) Σk∈interval | D[k] |   (3.20)

where D[k] is the high-frequency (detail) information of the energy ratio α of block B, and n is the number of time instants with a non-zero detail value.
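A sketch of this temporal measure follows. The exact normalization in the thesis is partly garbled, so averaging the magnitudes of the non-zero temporal Haar details is a reconstruction, not the definitive formula.

```python
import numpy as np

def beta(alpha_series):
    """beta of Eq. (3.20) as reconstructed here: mean magnitude of the
    non-zero temporal detail coefficients of the energy-ratio series.
    Smooth series (smoke) give small beta; jumpy series give large beta."""
    a = np.asarray(alpha_series, dtype=float)
    # temporal Haar details of alpha, subsampled by 2
    d = np.convolve(a, (0.5, -0.5))[1::2]
    nz = d[d != 0]
    if nz.size == 0:
        return 0.0
    return float(np.abs(nz).sum() / nz.size)
```

A flickering disturbance thus scores high while a gradually fading smoke block scores near zero, matching Fig. 3-14.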

Fig. 3-14 Comparison of changes in the value of the energy ratio at the passage of (a) ordinary moving objects (b) smoke

3.4 1-D Temporal Chromatic Configuration Analysis

Smoke cannot be defined by a specific color appearance. However, it is possible to characterize smoke by its effect on the color appearance of the region it covers. Besides the gradual change of energy, smoke shows an equally gradual change in color configuration.

Color analysis is performed to identify those pixels in the image that respect the chromatic properties of smoke. The RGB color space and photometric invariant features are considered in the analysis. Photometric invariant features are functions describing the color configuration of each image coordinate while discounting local illumination variations; hue and saturation in the HSV color space and the normalized-RGB color space are two photometric invariant features in common use.

We decided to use the normalized-RGB color space for its fast computation since it can be obtained by dividing the R, G and B coordinates by their total sum. The transfer function is given by

r = R / (R + G + B),  g = G / (R + G + B),  b = B / (R + G + B)   (3.21)

This transformation projects a color vector in the RGB cube onto a point on the unit plane described by r + g + b = 1.
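A minimal sketch of this normalization, with a small epsilon (an assumption, not in the original) to avoid division by zero on black pixels:

```python
import numpy as np

def normalize_rgb(R, G, B, eps=1e-9):
    """Normalized-rgb transform: each channel divided by R+G+B, so the
    result lies on the plane r + g + b = 1 (up to eps)."""
    s = np.asarray(R, float) + np.asarray(G, float) + np.asarray(B, float) + eps
    return R / s, G / s, B / s
```

Because the sum is divided out, uniformly lightening or darkening a pixel leaves (r, g, b) unchanged, which is exactly the illumination invariance used here.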

Empirical analysis shows that smoke lightens or darkens each RGB component of the covered point but does not severely change the values in the rgb color system; the values are, however, likely to change when the material changes. This constraint can be represented by

| ct(x, y) − ct−1(x, y) | < Tc ,  c ∈ {r, g, b}   (3.22)

where Tc is a small threshold, when the candidate block is covered by a smoke region instead of ordinary moving objects. To illustrate this, we draw the RGB color histogram of a specific block in three different situations of a video sequence in order to characterize the presence or absence of smoke. The color histogram distribution in Fig. 3-15(c) is clearly similar to the one in Fig. 3-15(a), whereas the presence of a pedestrian produces a totally different color histogram distribution in Fig. 3-15(b).

(a)

(b)

(c)

Fig. 3-15 RGB color histogram of a specific block (a) original image (b) covered by ordinary moving objects (c) covered by smoke

The details (high-frequency information) of the three channels in the rgb color system are again obtained by the 1-D DWT of Fig. 3-13. Ordinary solid moving objects evidently produce a great quantity of details, as in Fig. 3-16(a), whereas smoke varies smoothly in the rgb color space and produces few details, as in Fig. 3-16(b). Therefore, the third feature ρ is calculated by

ρB = (1/n) Σk∈interval max( | Dr[k] | , | Dg[k] | , | Db[k] | )   (3.23)

where Dr[k], Dg[k], and Db[k] stand for the details of the r, g, and b channels respectively, and the values of r, g, and b are averaged over the candidate block. Again, the likelihood that the candidate block is a smoke region is in inverse proportion to the parameter ρ.

Fig. 3-16 Comparison of changes in value of details in rgb color space at the passage of (a) ordinary moving objects (b) smoke
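The chromatic feature ρ can be sketched as below. Since the original formula is partly garbled, averaging the per-instant channel-maximum detail magnitudes over the non-zero instants is a reconstruction under that assumption; the temporal Haar details match the 1-D DWT used earlier.

```python
import numpy as np

def rho(r_series, g_series, b_series):
    """rho of Eq. (3.23) as reconstructed here: temporal Haar details of
    the block-averaged r, g, b channels, then the mean over time of
    max(|Dr|, |Dg|, |Db|) at the instants with non-zero details."""
    def details(x):
        # one-level temporal Haar high-pass, subsampled by 2
        return np.convolve(np.asarray(x, float), (0.5, -0.5))[1::2]
    d = np.abs(np.vstack([details(r_series),
                          details(g_series),
                          details(b_series)]))
    m = d.max(axis=0)          # max over the three channels per instant
    nz = m[m != 0]
    return float(nz.mean()) if nz.size else 0.0
```

Constant chromaticity, as under gradually thickening smoke, yields ρ = 0, while a material change in any channel raises ρ.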
