
Chung Hua University
Master's Thesis

3D Model Retrieval Using 2D Spectral and Cepstral Features

Department: Master Program, Department of Computer Science and Information Engineering
Student ID / Name: M09802056 洪雋彥
Advisor: Dr. 石昭玲

August 2012


Abstract

With the rapid development of the Internet, the number of 3D models available online for users to access keeps growing, so an effective search system is needed to help people find the 3D data they want. Managing a 3D model database first requires an efficient classification and retrieval method, moving beyond traditional text-based retrieval. Retrieval based on the content of the 3D models themselves is currently the best tool for managing 3D model databases. The primary goal of this thesis is therefore to build an effective 3D model retrieval system that lets users quickly find, in a large 3D model database, the similar 3D models that match their expectations.

This thesis proposes 3D model feature algorithms based on the 2D spectrum and cepstrum, together with a retrieval system for 3D model databases. The system consists of two parts: feature extraction and 3D model retrieval. The extracted features comprise spectral features, cepstral features, and local cepstral features, where the spectral and cepstral features are further divided into several variants according to different subband decomposition schemes. In the retrieval part, the feature vectors are used to find, in the database, the models most similar to the user's query model, and these models are returned to the user. Experimental results show that the proposed methods achieve higher accuracy than other 3D model retrieval methods.

Keywords: 3D model retrieval system, spectral feature, cepstral feature, local cepstral feature.


ABSTRACT

The advances in 3D data acquisition, graphics hardware, and 3D data modeling and visualization techniques have led to a proliferation of 3D models. Searching for specific 3D models has therefore become an important issue, and techniques for effective and efficient content-based 3D model retrieval have become an essential research topic. In this thesis, the 2D spectrum and cepstrum are divided into a number of different features according to different subband decomposition schemes. Spectral features, cepstral features, and local cepstral features are used for 3D model retrieval. In the retrieval stage, the models in the database that best match the query model are returned to the user. Experimental results show that the proposed methods produce good performance.

Keywords: 3D model retrieval, spectral feature, cepstral feature, local cepstral feature


Acknowledgements

My two-plus years as a graduate student have finally come to an end. There is so much I want to say, yet I hardly know how to say it. Scenes from these two years keep coming back to me: laughter, touching moments, and hard work, but above all an inexpressible gratitude.

First I thank my advisor, Professor 石昭玲, for her patient teaching, care, and guidance over these two years. I also thank Professors 李建興, 韓欽銓, 連振昌, and 周智勳, from whom I benefited greatly; to them I offer my most sincere gratitude and respect. In addition, I thank Professors 石昭玲, 李建興, 韓欽銓, 李遠坤, and 秦群立 for their many valuable comments during my thesis defense.

I thank my family and friends for encouraging me when I needed support most, helping me relax and giving me the confidence to finish my studies. I also thank the senior lab members 正達, 宇晨, 勝斌, 昭弘, 堯文, 信吉, 佩蓉, and 翔淵; my classmates 耀德, 珮筠, 文楷, 子豪, 裕邦, and 永成; and the junior members 于豪, 姿蓉, and 景翔 for their help and encouragement during this period. Of course, I also thank the many people in other labs who lent a warm hand whenever I needed help.

Thank you all!


Contents

Abstract ... i

Acknowledgements ... iii

Contents ... iv

List of Tables ... vi

List of Figures ... vii

Chapter 1 ... 1

Introduction ... 1

Chapter 2 ... 3

Related Work ... 3

2.1 View Based Methods ... 3

2.2 Spatial Shape Feature Based Methods... 12

2.3 Frequency Based Methods ... 16

Chapter 3 ... 19

The Proposed Method for 3D Model Retrieval ... 19

3.1 Pre-Processing ... 19

3.1.1 Alignment of 3D model ... 19

3.1.2 3D Model Normalization ... 21

3.2 The Elevation Projection ... 22

3.3 The 2D Spectral and Cepstral Feature ... 24

3.3.1 The Spectral Feature ... 25

3.3.1.1 The 2D Spectral Feature Extraction ... 25

3.3.1.2 The 2D Generic Subband Decomposition (GSD) Spectral Feature Extraction ... 25

3.3.1.3 The 2D Generic Logarithmic Subband Decomposition (GLSD) Spectral Feature ... 26

3.3.1.4 The 2D Complement Subband Decomposition (CSD) Spectral Feature ... 27

3.3.1.5 The 2D Complement Logarithmic Subband Decomposition (CLSD) Spectral Feature ... 27

3.3.2 The Cepstral Feature ... 29

3.3.2.1 The 2D Cepstral Feature Extraction ... 29

3.3.2.2 The 2D Generic Subband Decomposition (GSD) Cepstral Feature Extraction ... 29

3.3.2.3 The 2D Generic Logarithmic Subband Decomposition (GLSD) Cepstral Feature ... 30

3.3.2.4 The 2D Complement Subband Decomposition (CSD) Cepstral Feature ... 31

3.3.2.5 The 2D Complement Logarithmic Subband Decomposition (CLSD) Cepstral Feature ... 32

3.3.2.6 The 2D GSD+CSD (GCSD) Cepstral Feature ... 33

3.3.2.7 The 2D GLSD+CLSD (GLCLSD) Cepstral Feature ... 34

3.4 The 2D Local Cepstral Feature ... 36

3.5 The feature combination ... 38

Chapter 4 ... 40

Experimental result ... 40

4.1 Experiments on the First Database, the PSB Database ... 41

4.2 Experiments on the Second Database, the ESB Database ... 53

4.3 Experiments on the Third Database, the SHREC-W Database ... 61

4.4 Experiments on the Fourth Database, the NIST Database ... 69

Chapter 5 ... 77

Conclusion ... 77

References ... 78


List of Tables

Table 1 The 92 testing categories in the Princeton Shape Benchmark database. |Nc| is the number of models in a category [50]. ... 42

Table 2 The 92 training categories in the Princeton Shape Benchmark database. |Nc| is the number of models in a category [50]. ... 43

Table 3 The representation of each descriptor. ... 45

Table 4 Retrieval accuracy of the 2D spectral and cepstral features on the PSB database. (a) spectral feature (uniform) (b) cepstral feature (uniform) (c) spectral feature (logarithmic) (d) cepstral feature (logarithmic) (e) GCSD cepstral feature (f) GLCLSD cepstral feature ... 46

Table 5 Retrieval accuracy of the feature combination on the PSB database. ... 47

Table 6 The symbol for size. ... 48

Table 7 Retrieval accuracy of the 2D local cepstral feature on the PSB database. ... 48

Table 8 Comparison of the proposed approach with other descriptors on the PSB database in terms of DCG(%). Note that the approaches marked with * are referenced from Akgule et al. [32]. ... 52

Table 9 The 45 categories in the Engineering Shape Benchmark database. |Nc| is the number of models in a category [51]. ... 53

Table 10 The representation of each descriptor. ... 54

Table 11 Retrieval accuracy of the 2D spectral and cepstral features on the ESB database. (a) spectral feature (uniform) (b) cepstral feature (uniform) (c) spectral feature (logarithmic) (d) cepstral feature (logarithmic) (e) GCSD cepstral feature (f) GLCLSD cepstral feature. ... 55

Table 12 Retrieval accuracy of the feature combination on the ESB database. ... 57

Table 13 The symbol for size. ... 57

Table 14 Retrieval accuracy of the 2D local cepstral feature on the ESB database. ... 57

Table 15 Comparison of the proposed approach with other descriptors on the ESB database in terms of DCG (%). Note that the approaches marked with * are referenced from Akgule et al. [32]. ... 61

Table 16 The 20 categories in the SHREC Watertight database. |Nc| is the number of models in a category [52]. ... 62

Table 17 The representation of each descriptor. ... 62

Table 18 Retrieval accuracy of the 2D spectral and cepstral features on the SHREC-W database. (a) spectral feature (uniform) (b) cepstral feature (uniform) (c) spectral feature (logarithmic) (d) cepstral feature (logarithmic) (e) GCSD cepstral feature (f) GLCLSD cepstral feature ... 63

Table 19 Retrieval accuracy of the feature combination on the SHREC-W database. ... 64

Table 20 The symbol for size. ... 65

Table 21 Retrieval accuracy of the 2D local cepstral feature on the SHREC-W database. ... 65

Table 22 Comparison of the proposed approach with other descriptors on the SHREC-W database in terms of DCG (%). ... 69

Table 23 The 20 categories in the National Institute of Standards and Technology database. |Nc| is the number of models in a category [53]. ... 69

Table 24 The representation of each descriptor. ... 70

Table 25 Retrieval accuracy of the 2D spectral and cepstral features on the NIST database. (a) spectral feature (uniform) (b) cepstral feature (uniform) (c) spectral feature (logarithmic) (d) cepstral feature (logarithmic) (e) GCSD cepstral feature (f) GLCLSD cepstral feature ... 71

Table 26 Retrieval accuracy of the feature combination on the NIST database. ... 72

Table 27 The symbol for size. ... 73

Table 28 Retrieval accuracy of the 2D local cepstral feature on the NIST database. ... 73


List of Figures

Fig. 1.1 The 3D model of a fighter_jet ... 1

Fig. 2.1 Silhouette contours of the views [3]. ... 3

Fig. 2.2 A typical example of the 10 silhouettes for a 3D model [4]. ... 3

Fig. 2.3 Robustness evaluation using different kinds of attack: (a) original 3D model; (b) the model after similarity transformation; (c) the noise-added model; and (d) the model after destruction [5]. ... 4

Fig. 2.4 (a) The 3D model is segmented by a sphere grid. (b) Six silhouettes from the faces of a dodecahedron over a hemisphere [7]. ... 4

Fig. 2.5 The depth buffers of a car and the 2D DFT of the six images [8]. ... 5

Fig. 2.6 (a) The voxel grid of 3D model. (b) Six elevations of a 3D military model including front, top, left, right, rear, and bottom elevations [9]. ... 5

Fig. 2.7 The concentric circles of an elevation [9]. ... 6

Fig. 2.8 Curvature maps of two 3D objects [11-13] ... 6

Fig. 2.9 3D models and their corresponding projection images [14]. ... 7

Fig. 2.10 The projection image segmented by several concentric circles [14]. ... 7

Fig. 2.11 The different gray-level image at the different sampled points [15]. ... 7

Fig. 2.12 The depth images from different vertices [16]. ... 8

Fig. 2.13 The depth lines of a race car [16]. ... 8

Fig. 2.14 The SID computation [17]... 8

Fig. 2.15 (a) A 3D model of a cup and (b)–(d) the corresponding cylindrical projections [18]. ... 9

Fig. 2.16 Four examples of 3D models and their corresponding example SSCD images [19]. ... 9

Fig. 2.17 The flowchart of the proposed 3D model retrieval system [20]. ... 10

Fig. 2.18 Weighted bipartite graph matching [39]. ... 10

Fig. 2.19 The method of Lian et al. [40]. ... 11

Fig. 2.20 Conceptual diagram of the BLD computation [41]. ... 11

Fig. 2.21 The Geometric Descriptor [21]. ... 12

Fig. 2.22 Several 3-D shape histograms of the example protein 1SER-B. From top to bottom, the number of shells decreases and the number of sectors increases [22]. ... 12

Fig. 2.23 Five shape functions based on angle (A3), lengths (D1 and D2), area (D3), and volumes (D4) [23]. ... 13

Fig. 2.24 The classification of point-pair distances, IN (A), OUT (B), and MIXED (C) [24]. ... 13

Fig. 2.25 The four-dimensional SPRH feature [27]. ... 14

Fig. 2.26 The MPEG-7’s SSD of two 3D models [29]... 14

Fig. 2.27 Each plane equation for the x-,y-,and z-coordinates, respectively [31]. ... 15

Fig. 2.28 Find intersection line between a plane equation and a mesh [31]. ... 15

Fig. 2.29 Density-based shape description: Measurements of the (multivariate) feature S obtained from the 3D object surface are processed into descriptor vectors, that is, the probability density function of the feature [32]. ... 16

Fig. 2.30 Multi-resolution from Fourier coefficients for spherical harmonics [34]. ... 16

Fig. 2.31 The Spherical Harmonic Shape Representation framework [36]... 17

Fig. 2.32 The stages of the proposed 3D shape matching scheme for retrieval. [38]. ... 17

Fig. 2.33 (a) Different shape structures (b) The Poisson histograms for the shapes in Figure (a)(each color denotes one model. The results show the Poisson histogram has good distinction for different shape structures)[42]. ... 18


Fig. 3.1 The three principal planes of 3D tie fighter model. ... 20

Fig. 3.2 The original and decomposed 3D tie fighter model. (a) The 3D tie fighter model circumscribed by a bounding box. (b) The bounding box of the 3D tie fighter model is decomposed into a 128×128×128 voxel grid. (c) The normalized 3D tie fighter model. ... 22

Fig. 3.3 Six different views of a 3D fighter_jet model. ... 23

Fig. 3.4 Flow diagram for 2D spectral and cepstral feature extraction. ... 24

Fig. 3.5 Subband decomposition. (a) Generic subband decomposition (GSD). (b) Generic logarithmic subband decomposition (GLSD). ... 26

Fig. 3.6 Angular shift: the dashed lines are the angles of the solid lines after shifting. (In this example, the spectrum is divided into 8 equal portions.) ... 28

Fig. 3.7 Subband decomposition. (a) Complement subband decomposition (CSD). (b) Complement logarithmic subband decomposition (CLSD). ... 28

Fig. 3.8 Subband decomposition. (a) Generic subband decomposition (GSD). (b) Generic logarithmic subband decomposition (GLSD). ... 31

Fig. 3.9 Angular shift: the dashed lines are the angles of the solid lines after shifting. (In this example, the spectrum is divided into 8 equal portions.) ... 33

Fig. 3.10 Subband decomposition. (a) Complement subband decomposition (CSD). (b) Complement logarithmic subband decomposition (CLSD). ... 33

Fig. 3.11 (a) Uniform angle (dashed line). (b) Shifted angle (solid line). (c) Combination of (a) and (b). ... 35

Fig. 3.12 Subband decomposition. (a) GSD+CSD (GCSD). (b) GLSD+CLSD (GLCLSD). ... 35

Fig. 3.13 Flow diagram for 2D local cepstral feature extraction. ... 36

Fig. 3.14 (a) Decompose image. (b) Perform DFT on each block. ... 37

Fig. 3.15 For each block, the central P×P (P ≤ M) frequency coefficients are rearranged. ... 37

Fig. 4.1 All testing classes of the 3D model on PSB [50]. ... 44

Fig. 4.2 All classes of the 3D model on ESB [51]... 54

Fig. 4.3 All classes of the 3D model on SHREC-W [52] ... 62

Fig. 4.4 All classes of the 3D model on NIST [53]. ... 70


Chapter 1 Introduction

Recent developments in advanced techniques for modeling, digitizing, and visualizing 3D models have made 3D models (see Fig. 1.1) as plentiful as images and video. Therefore, it is necessary to design a 3D model retrieval system which enables users to efficiently and effectively search for 3D models of interest. Many retrieval systems or search engines provide a keyword-based interface for multimedia data retrieval. In general, the multimedia data is annotated with appropriate keywords, typically labeled manually by experienced managers. However, differences in the interpretation of the same multimedia data among different people make the annotated keywords differ from person to person. To overcome the difficulties of keyword-based retrieval, content-based retrieval has become a widely accepted research direction.

Fig. 1.1 The 3D model of a fighter_jet

The primary challenge to a content-based 3D model retrieval system [1, 2] is to extract a set of proper features for efficiently representing and effectively discriminating distinct types of 3D models. In general, there are three paradigms for 3D model retrieval:

view-based descriptors [3-20, 39-41], spatial shape descriptors [21-32], and frequency descriptors [33-38, 42].

View-based descriptors are generally obtained by projecting a 3D model onto a number of 2D projections from different views. Discriminative features extracted from these 2D projection planes are combined to index similar 3D models. These 2D planes can be represented either by binary images representing the silhouettes from different views, or by gray-level images representing the curvature information or the depth information. One advantage of view-based descriptors is that it is easy to design a query interface which supports a 2D sketch for 3D model retrieval. The problem is that rotation invariance has to be solved either by pose normalization prior to 2D projections, by extracting rotation-invariant features, or by matching 2D feature descriptors over many different alignments simultaneously.

Spatial shape descriptors consider the statistical distributions or histograms of local features evaluated at the vertices or meshes of a 3D model. These descriptors include curvature histograms, shape (distance, angle, area, volume) distributions, shape histograms, extended Gaussian images, 3D Hough transform descriptors, density-based shape descriptors, etc. The main drawback of these spatial shape descriptors is that they do not take into account how the local features are spatially distributed over the model surface.

Frequency descriptors are extracted by mapping the 3D data into frequency domain representations. These representations include the 3D discrete Fourier transform, spherical harmonics, the concrete radialized spherical projection, the spherical wavelet transform, 3D Zernike moments, the 3D angular radial transform, etc. The effectiveness of these frequency descriptors relies on the quality of the voxel decomposition of a 3D model.

The rest of the thesis is organized as follows. The related work is described in Chapter 2. In Chapter 3, 2D spectral and cepstral features of 3D models are introduced for 3D model retrieval. Chapter 4 gives the experimental results to show the effectiveness of the proposed projected shape features. Finally, the conclusion is given in Chapter 5.


Chapter 2 Related Work

In this chapter, some related work on 3D model retrieval is described. The 3D model retrieval methods are classified into three categories: view-based methods, spatial shape feature based methods, and frequency based methods.

2.1 View Based Methods

View-based descriptors are generally obtained by projecting a 3D model onto a number of 2D projections from different views. Discriminative features extracted from these 2D projection planes are combined to index similar 3D models. The problem is that rotation invariance has to be solved either by pose normalization prior to 2D projections, by extracting rotation-invariant features, or by matching 2D feature descriptors over many different alignments simultaneously.

Super and Lu [3] exploited the curvature scale space to partition the silhouette contour of a 3D model into overlapping local parts at all scales, as shown in Fig. 2.1.

Fig. 2.1 Silhouette contours of the views [3].

Chen et al. [4] introduced the lightfield descriptor to represent 3D models, as shown in Fig. 2.2. The lightfield descriptor is computed from ten silhouettes, each represented by a 2D binary image.

Fig. 2.2 A typical example of the 10 silhouettes for a 3D model [4].


Kuo and Cheng [5] proposed a 3D shape retrieval system based on the principal plane analysis. By projecting the 3D model onto the principal plane, as shown in Fig. 2.3, a 3D model can be transformed into a 2D binary image.

Fig. 2.3 Robustness evaluation using different kinds of attack: (a) original 3D model; (b) the model after similarity transformation; (c) the noise-added model; and (d) the model after destruction [5].

Shih et al. [7] proposed two features, the grid sphere descriptor (GSD) and the dodecahedral silhouette descriptor (DSD), to describe the inside and outside information of a 3D model. For GSD, a 3D model is segmented by a 32×64×64 sphere grid: there are 32 shells, and each shell is segmented by a 64×64 grid, as shown in Fig. 2.4(a). For each shell, the number of valid grids is calculated to get the GSD. For DSD, six silhouettes are rendered from the faces of a dodecahedron over a hemisphere to represent a 3D model. Each silhouette is represented by a 2D binary image, as shown in Fig. 2.4(b). The angular radial transformation (ART) is extracted from each silhouette as the feature vector. These two features, GSD and DSD, can be combined for 3D model retrieval.

(a) (b)

Fig. 2.4 (a) The 3D model is segmented by a sphere grid. (b) Six silhouettes from the faces of a dodecahedron over a hemisphere [7].

In fact, 2D silhouettes represented by binary images cannot describe the altitude information of a 3D model from different views. Thus some authors [8-14] describe altitude information by gray-level images.

Vranic et al. [8] proposed the depth buffer descriptor. Six grey-scale images are rendered using parallel projection, and the six images are transformed using the discrete Fourier transform (DFT), as shown in Fig. 2.5.

Fig. 2.5 The depth buffers of a car and the 2D DFT of the six images [8].

Shih et al. [9, 10] proposed the elevation descriptor for 3D model retrieval. The elevation descriptor is invariant to translation and scaling of 3D models, and it is robust to rotation. First, a 3D model is represented by six gray-level images which describe the altitude information of the 3D model from six different views: front, left, right, rear, top, and bottom (see Fig. 2.6(b)). Each gray-level image, called an elevation, is decomposed into several concentric circles (see Fig. 2.7). The sum of the altitude information within each concentric circle is then calculated. To be less sensitive to rotations, the elevation descriptor is obtained by taking the difference between the altitude sums of two successive concentric circles. Since there are six elevations for each 3D model, an efficient similarity matching method is provided to find the best match for an input model.

Fig. 2.6 (a) The voxel grid of 3D model. (b) Six elevations of a 3D military model including front, top, left, right, rear, and bottom elevations [9].


Fig. 2.7 The concentric circles of an elevation [9].

Assfalg et al. [11-13] provided content-based 3D model retrieval through curvature maps, as shown in Fig. 2.8. The geometric structure description of a 3D object is accomplished through the following steps: 1. smoothing and polygon simplification, 2. curvature estimation, 3. deformation, 4. curvature mapping.

Fig. 2.8 Curvature maps of two 3D objects [11-13]

Shih et al. [14] proposed a 3D model retrieval approach based on the principal plane descriptor. First, a 3D model is transformed into a 2D binary image by projecting it on the principal plane, the symmetric surface of the 3D model. Moreover, to represent a 3D model exactly, the second and third planes (see Fig. 2.9) must be calculated to obtain the other two binary projection images. Each binary image is decomposed into several concentric circles (see Fig. 2.10). The sum of the altitude information within each concentric circle is then calculated. To be less sensitive to rotations, the principal plane descriptor is obtained by taking the difference between the altitude sums of two successive concentric circles. Since there are three binary images for each 3D model, an efficient similarity matching method is provided to find the best match for an input model.


Fig. 2.9 3D models and their corresponding projection images [14].

Fig. 2.10 The projection image segmented by several concentric circles [14].

Dimitrios Zarpalas et al. [15] proposed a novel methodology for content-based 3D model search and retrieval (see Fig. 2.11).

Fig. 2.11 The different gray-level image at the different sampled points [15].

Mohamed Chaouch et al. [16] proposed a 3D model retrieval system based on 20 depth images rendered from the vertices of a regular dodecahedron (see Fig. 2.12). In each depth image, every row (depth line) is encoded into a sequence over five characters, o, c, /, -, \, representing exterior background, interior background, increased depth, constant depth, and decreased depth, respectively (see Fig. 2.13).


Fig. 2.12 The depth images from different vertices [16].

Fig. 2.13 The depth lines of a race car [16].

Athanasios Mademlis et al. [17] proposed a 3D shape impact descriptor (SID) based on the physics laws of gravity (see Fig. 2.14).

Fig. 2.14 The SID computation [17].

Papadakis et al. [18] proposed a 3D model retrieval system based on panoramic views. The panoramic views can capture the global shape of the 3D model (see Fig. 2.15).


Fig. 2.15 (a) A 3D model of a cup and (b)–(d) the corresponding cylindrical projections [18].

Yue Gao et al. [19] proposed the spatial structure circular descriptor (SSCD) for 3D model retrieval (see Fig. 2.16). The Kuhn-Munkres algorithm is used to measure the similarity of two 3D models.

Fig. 2.16 Four examples of 3D models and their corresponding example SSCD images [19].

Shih et al. [20] proposed a 3D model retrieval approach based on the combination of different PCA plane projection approaches (see Fig. 2.17). Each 3D model is aligned by the grid-based principal component analysis (GPCA), continuous PCA (CPCA), and normal-vectors PCA (NPCA), each of which can align 3D models more accurately than traditional PCA. Then, for each alignment approach (GPCA, CPCA, or NPCA), each 3D model is projected on three PCA planes, with their normal vectors being the three computed eigenvectors, to get six gray-level images (called inner elevations). The gray value of a pixel in the image describes the depth information. The MPEG-7 angular radial transform (ART) is then applied to these inner elevations to obtain the feature descriptor of each 3D model.

Fig. 2.17 The flowchart of the proposed 3D model retrieval system [20].

Dai et al. [39] proposed a view-based 3D model retrieval algorithm. First, each 3D model is represented by a set of 2D views. The representative views of the query model are selected and the corresponding initial weights are specified. Based on the relationship among these representative views, these initial weights are further updated. Finally, a weighted bipartite graph matching scheme is used to measure the similarity between two 3D models (see Fig. 2.18).

Fig. 2.18 weighted bipartite graph matching [39].


Lian et al. [40] proposed Bag-of-Features and an efficient multiview shape matching scheme for 3D model retrieval. First, a 3D model is described by a set of depth-buffer views. Then, each view is described as a word histogram obtained by the vector quantization of the view's salient local features. Finally, a multiview shape matching algorithm is used to calculate the similarity between two 3D models (see Fig. 2.19).

Fig. 2.19 The method of Lian et al. [40].

Xiao et al. [41] proposed a 3D object retrieval system based on a novel graph model descriptor and a fast graph matching method. Firstly, based on graph model learning, a Bayesian network lightfield descriptor (BLD) is used to overcome the disadvantages of the existing view-based methods. Next, the 3D object can be retrieved based on graph model matching. Finally, based on a content-based statistical learning algorithm, a relevance feedback technique is used to improve the retrieval results (see Fig. 2.20).

Fig. 2.20 Conceptual diagram of the BLD computation [41].


2.2 Spatial Shape Feature Based Methods

Spatial shape feature based methods consider the statistical distributions or histograms of local features measured at the vertices or meshes of a 3D model.

Zhang and Chen [21] proposed methods to efficiently calculate features such as area, volume, moments, and Fourier transform coefficients from the mesh representation of 3D models, as shown in Fig. 2.21.

Fig. 2.21 The Geometric Descriptor [21].

Ankerst et al. [22] used shape histograms for 3D model retrieval, where the 2D silhouette is divided into a number of areas by a collection of concentric shells and sectors. In Fig. 2.22, a quadratic form distance measure is employed to compute the distance between the histogram bins.

Fig. 2.22 Several 3-D shape histograms of the example protein 1SER-B. From top to bottom, the number of shells decreases and the number of sectors increases [22].

Osada et al. [23] tried to represent each 3D model by the probability distributions of some geometric properties computed from a set of randomly selected points located on the surface of the model. These geometric properties, including distance, angle, area, and volume, are employed to describe the shape distribution, as shown in Fig. 2.23.

Fig. 2.23 Five shape functions based on angle (A3), lengths (D1 and D2), area (D3), and volumes (D4) [23].
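As a concrete illustration of the D2 shape function, the histogram of distances between random surface points can be sketched in a few lines. The sketch assumes the surface has already been sampled into a point set; Osada et al. sample the points directly on the model's triangles.

```python
import numpy as np

def d2_histogram(points, num_pairs=100_000, bins=64, rng=None):
    """D2 shape distribution: histogram of distances between random
    point pairs drawn from points sampled on the model surface."""
    rng = np.random.default_rng() if rng is None else rng
    i = rng.integers(0, len(points), num_pairs)
    j = rng.integers(0, len(points), num_pairs)
    d = np.linalg.norm(points[i] - points[j], axis=1)
    hist, _ = np.histogram(d, bins=bins, range=(0.0, d.max()))
    return hist / hist.sum()        # normalized distance distribution
```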

Ip et al. [24, 25] proposed a modified D2 descriptor where the D2 distance is classified as IN distance, OUT distance, and MIXED distance depending on whether the line segment connecting the two points lies inside or outside the model (see Fig. 2.24).

Fig. 2.24 The classification of point-pair distances, IN (A), OUT (B), and MIXED (C) [24].

Shih et al. [26] proposed a new descriptor called grid D2 (GD2) to alleviate this problem. In GD2, a 3D model is first decomposed into a voxel grid. The random sampling operation is performed on the voxels within which some polygonal surface is located, rather than on random points.

Wahl et al. [27] proposed a statistical representation of 3D model based on a novel four-dimensional feature. The intrinsic geometrical relation of an oriented surface-point pair is calculated as the features. The features represent both local and global characteristics of the surface of 3D model (see Fig. 2.25).


Fig. 2.25 Four-dimensional of SPRH feature [27].

The MPEG-7 standard defines the shape spectrum descriptor (SSD) [28, 29] for 3D model retrieval. The SSD represents the histogram of curvatures of all points on the 3D surface (see Fig. 2.26).

Fig. 2.26 The MPEG-7’s SSD of two 3D models [29].

Reisert and Burkhardt [30] exploited various further improvements of D2 and Alpha/distance (AD). They showed that small and compact representations obtained by group integration can lead to reliable and informative descriptions of the objects. Seven improved descriptors are described in Reisert's paper, including SHT distance histograms (SD), extended SHT distance histograms (SDE), Beta/distance histograms (BD), Alpha/beta histograms (AB), Alpha/beta/distance histograms (ABD), Alpha/beta/distance SHT histograms (ABSD), and Alpha/beta/distance extended SHT histograms (ABSDE).

You-Shin Park et al. [31] proposed a 3D model retrieval system based on principal component analysis (PCA) for normalizing all the models. The histograms of 2D images sliced along the x-, y-, and z-axes are used as the shape descriptor for measuring the similarity between 3D models. For a 3D model, a hundred planes orthogonal to each of the x-, y-, and z-axes slice the model; each sliced shape is the 2D image of the intersection between a plane and the 3D model (see Fig. 2.27 and Fig. 2.28). The models are first aligned to their principal axes by PCA, and the sliced-shape histograms are then compared between the given query and the database models.

Fig. 2.27 Each plane equation for the x-,y-,and z-coordinates, respectively [31].

Fig. 2.28 Find intersection line between a plane equation and a mesh [31].

Ceyhun Burak Akgul et al. [32] proposed content-based 3D model retrieval by a probabilistic generative description of local shape properties (see Fig. 2.29).



Fig. 2.29 Density-based shape description: Measurements of the (multivariate) feature S obtained from the 3D object surface are processed into descriptor vectors, that is, the probability density function of the feature [32].

2.3 Frequency Based Methods

In frequency based methods, features are extracted by mapping the 3D data into frequency domain representations. The effectiveness of these frequency descriptors relies on the quality of the voxel decomposition of a 3D model.

Vranic et al. [33, 34] applied the Fourier transform on the sphere with spherical harmonics to generate embedded multi-resolution 3D shape feature vectors, as shown in Fig. 2.30. This method requires pose normalization to be rotation invariant. A modified rotation-invariant shape descriptor based on spherical harmonics without pose normalization has been proposed by Funkhouser et al. [35-37], as shown in Fig. 2.31. First, a 3D model is decomposed into a collection of spherical functions which are derived by intersecting the model with a set of concentric spheres of different radii. Each spherical function is decomposed into a number of harmonics of different frequencies. The sum of norms of all frequency components at the same radius is regarded as the shape descriptor of a spherical function. The descriptors of all spherical functions constitute the shape descriptor of a 3D model. The descriptor is rotation invariant because rotating a spherical function does not change the energies in each frequency component.

Fig. 2.30 Multi-resolution from Fourier coefficients for spherical harmonics [34].


Fig. 2.31 The Spherical Harmonic Shape Representation framework [36].

Papadakis et al. [38] proposed a 3D model retrieval system based on spherical harmonics (see Fig. 2.32). Two shape descriptors are adopted in this paper. The 3D model is aligned by continuous principal component analysis (CPCA) or by a modified principal component analysis (NPCA). In CPCA, the point coordinates are used to align models, while in NPCA, the unit normal vectors of the meshes are used. The model's surface is intersected with rays emanating from the mass center of the 3D model, and between each intersection and the mass center the model is filled with equidistant points. Spherical harmonics (SH) are then applied to the filled 3D model to extract two feature vectors, one based on the CPCA alignment and one based on the NPCA alignment.

Fig. 2.32 The stages of the proposed 3D shape matching scheme for retrieval [38].

Pan et al. [42] proposed the Poisson histogram as a 3D model descriptor. It can appropriately capture the structure of a 3D model and is robust under different geometry processing. The Poisson histogram is computed in two steps. First, the Poisson equation of the 3D model is solved. Second, the histogram-based shape descriptor, the Poisson histogram, is obtained by accumulating the values of the defined signature into bins. The low-dimensional Poisson histogram enables efficient retrieval of 3D models (see Fig. 2.33).


(a)

(b)

Fig. 2.33 (a) Different shape structures. (b) The Poisson histograms for the shapes in (a); each color denotes one model. The results show that the Poisson histogram has good distinction for different shape structures [42].


Chapter 3

The Proposed Method for 3D Model Retrieval

In this chapter, the 2D Spectral and Cepstral features of 3D models will be used for 3D model retrieval.

3.1 Pre-Processing

Before extracting the features, 3D models are first normalized using voxel grids and aligned by their principal planes. The steps of pre-processing are described as follows.

3.1.1 Alignment of 3D model

The main idea behind GPCA is to perform PCA on a 3D model based on the coordinate vectors of those voxels which contain at least one vertex or pseudo vertex point, instead of the coordinate vectors of all vertex points. After the pseudo vertices are generated, a voxel located at coordinates (x, y, z) is defined as an opaque voxel, notated Voxel(x, y, z) = 1, if at least one vertex or pseudo vertex is located within this voxel; otherwise, the voxel is defined as a transparent voxel, notated Voxel(x, y, z) = 0.

The proposed GPCA approach [20] is based on the covariance matrix computed from the coordinate vectors of all opaque voxels instead of the coordinate vectors of all vertex points. As a result, the area-weighting defect is greatly reduced, since each opaque voxel is equally weighted irrespective of how many vertex points are located within it. The detailed steps for implementing GPCA are given below.

Step 1: For each 3D model, the mean vector, $\mathbf{m}$, of the coordinate vectors of all opaque voxels is calculated as follows:

$$\mathbf{m} = [m_0, m_1, m_2]^T = \frac{1}{N_{\mathrm{opa}}}\sum_{i=1}^{N_{\mathrm{opa}}} \mathbf{o}_i \tag{1}$$

where $N_{\mathrm{opa}}$ is the total number of opaque voxels and $\mathbf{o}_i$ is the average coordinate vector of all vertices and pseudo vertices within the i-th opaque voxel:

$$\mathbf{o}_i = [x_i, y_i, z_i]^T = \frac{1}{N_i}\sum_{j=1}^{N_i} \mathbf{v}_i^j \tag{2}$$

where $N_i$ is the total number of vertices located within the i-th opaque voxel and $\mathbf{v}_i^j$ is the coordinate vector of the j-th vertex (including pseudo vertices) within the i-th opaque voxel.

Step 2: Compute the 3×3 covariance matrix, $\mathbf{C}$, from the coordinate vectors of all opaque voxels:

$$\mathbf{C} = \frac{1}{N_{\mathrm{opa}}}\sum_{i=1}^{N_{\mathrm{opa}}} (\mathbf{o}_i - \mathbf{m})(\mathbf{o}_i - \mathbf{m})^T \tag{3}$$

Step 3: Compute the three eigenvalues and their corresponding eigenvectors of the covariance matrix $\mathbf{C}$.

Step 4: Sort the eigenvalues in decreasing order. Let $\lambda_1$, $\lambda_2$, and $\lambda_3$ denote the sorted eigenvalues and $\mathbf{v}_1$, $\mathbf{v}_2$, and $\mathbf{v}_3$ be the corresponding normalized eigenvectors, respectively. These three eigenvectors form an orthonormal basis of $\mathbb{R}^3$.

The three planes passing through the origin (0, 0, 0) with normal vectors $\mathbf{v}_1$, $\mathbf{v}_2$, and $\mathbf{v}_3$ are called the PCA planes, notated $E_1$, $E_2$, and $E_3$. Each 3D model is then rotated such that the three eigenvectors $\mathbf{v}_1$, $\mathbf{v}_2$, and $\mathbf{v}_3$ are aligned with the standard basis vectors of $\mathbb{R}^3$, $\mathbf{e}_1 = [1, 0, 0]^T$, $\mathbf{e}_2 = [0, 1, 0]^T$, and $\mathbf{e}_3 = [0, 0, 1]^T$ (see Fig. 3.1). Therefore, we have to find the rotation matrix $\mathbf{R}$ such that $\mathbf{R}\mathbf{v}_1 = \mathbf{e}_1$, $\mathbf{R}\mathbf{v}_2 = \mathbf{e}_2$, and $\mathbf{R}\mathbf{v}_3 = \mathbf{e}_3$. That is, $\mathbf{R}[\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3] = [\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3] = \mathbf{I}$, where $\mathbf{I}$ denotes the 3×3 identity matrix. Define $\mathbf{V} = [\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3]$, the matrix formed by the three normalized eigenvectors, which is an orthogonal matrix. Then $\mathbf{R}\mathbf{V} = \mathbf{I}$. Since $\mathbf{V}$ is orthogonal, its multiplicative inverse always exists and equals $\mathbf{V}^T$. Therefore, we can easily derive the rotation matrix $\mathbf{R} = \mathbf{I}\,\mathbf{V}^{-1} = \mathbf{V}^{-1} = \mathbf{V}^T$.

Fig. 3.1 The three principal planes of 3D tie fighter model.
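To make Steps 1-4 concrete, the following is a minimal NumPy sketch of the GPCA alignment. It is an illustration rather than the thesis implementation: for simplicity it uses the integer coordinates of the opaque voxels directly instead of the averaged vertex positions $\mathbf{o}_i$ of Eq. (2), and it does not handle the possible reflection ambiguity of the eigenvectors.

```python
import numpy as np

def gpca_rotation(voxel):
    """Sketch of GPCA (Eqs. (1)-(3)): PCA over opaque-voxel coordinates.

    voxel: boolean array of shape (128, 128, 128); True marks opaque voxels.
    Returns the 3x3 rotation matrix R = V^T whose rows are the eigenvectors
    sorted by decreasing eigenvalue.
    """
    # Coordinate vectors of all opaque voxels, one row per voxel.
    coords = np.argwhere(voxel).astype(float)        # (N_opa, 3)

    # Eq. (1): mean vector of the opaque-voxel coordinates.
    m = coords.mean(axis=0)

    # Eq. (3): 3x3 covariance matrix of the centered coordinates.
    centered = coords - m
    C = centered.T @ centered / len(coords)

    # Steps 3-4: eigen-decomposition; eigh returns ascending eigenvalues,
    # so reorder the eigenvector columns by decreasing eigenvalue.
    eigvals, eigvecs = np.linalg.eigh(C)
    V = eigvecs[:, np.argsort(eigvals)[::-1]]        # columns v1, v2, v3

    # Since V is orthogonal, the rotation aligning v_k with e_k is R = V^T.
    return V.T

# Usage: rotate each centered opaque-voxel coordinate p as R @ p.
```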


3.1.2 3D Model Normalization

The aligned 3D model is then circumscribed by a smallest bounding cube again (see Fig. 3.2(a)). This bounding cube is also decomposed into a 128×128×128 voxel grid (see Fig. 3.2(b)). If there is a vertex point or pseudo vertex located within the voxel with coordinate vector $[x, y, z]^T$, this voxel is regarded as an opaque voxel, notated Voxel(x, y, z) = 1; otherwise, Voxel(x, y, z) = 0. We then calculate the mass center, $[\bar{x}, \bar{y}, \bar{z}]^T$, of the model using the following equation:

$$[\bar{x}, \bar{y}, \bar{z}]^T = \frac{1}{N_{\mathrm{opa}}} \sum_{\mathrm{Voxel}(x,y,z)=1} [x, y, z]^T \tag{4}$$

where $N_{\mathrm{opa}}$ is the number of opaque voxels within the aligned 3D model. The coordinate vector of the mass center, $[\bar{x}, \bar{y}, \bar{z}]^T$, is then subtracted from that of each opaque voxel:

$$[x_u, y_u, z_u]^T = [x - \bar{x},\; y - \bar{y},\; z - \bar{z}]^T \tag{5}$$

The average distance from all opaque voxels to the mass center is

$$\bar{d} = \frac{1}{N_{\mathrm{opa}}} \sum_{\mathrm{Voxel}(x,y,z)=1} \sqrt{x^2 + y^2 + z^2} \tag{6}$$

where (x, y, z) now denote the centered coordinates of Eq. (5). Then, the 3D model is linearly scaled using a scaling factor s such that the average distance from all opaque voxels to the mass center becomes a fixed value, 32:

$$\frac{1}{N_{\mathrm{opa}}} \sum_{\mathrm{Voxel}(x,y,z)=1} \sqrt{(sx)^2 + (sy)^2 + (sz)^2} = 32 \tag{7}$$

where $[sx, sy, sz]^T = s[x, y, z]^T$ is the scaled coordinate vector of an opaque voxel originally located at $[x, y, z]^T$. From Eqs. (6) and (7), the scaling factor s can be easily derived as

$$s = 32 / \bar{d} \tag{8}$$

This procedure ensures that the extracted descriptor is invariant to scaling (see Fig. 3.2(c)).


(a) (b) (c)

Fig. 3.2 The original and decomposed 3D tie fighter model. (a) The 3D tie fighter model circumscribed by a bounding box. (b) The bounding box of the 3D tie fighter model is decomposed into a 128×128×128 voxel grid. (c) The normalized 3D tie fighter model.
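A minimal NumPy sketch of the translation and scaling of Eqs. (4)-(8), assuming the opaque-voxel coordinates have been collected into an (N_opa, 3) array:

```python
import numpy as np

def normalize_model(coords):
    """Center the opaque voxels on the mass center (Eqs. (4)-(5)) and
    scale so the average distance to the center becomes 32 (Eqs. (6)-(8))."""
    center = coords.mean(axis=0)                       # Eq. (4): mass center
    centered = coords - center                         # Eq. (5): translation

    d_bar = np.linalg.norm(centered, axis=1).mean()    # Eq. (6)
    s = 32.0 / d_bar                                   # Eq. (8): scaling factor
    return s * centered                                # Eq. (7) now holds
```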

3.2 The Elevation Projection

Once the pose of a 3D model is aligned, the elevation (depth) value of each opaque voxel is projected onto six planes corresponding to the six different views of the 3D model (see Fig. 3.3(a)). Each elevation is represented by a gray-level image in which the gray values denote the altitude information. Let the six outer elevations be notated successively as $I_k$, k = 1, 2, …, 6. The gray value of each pixel on these outer elevations (see Fig. 3.3(b)) is defined as:

$$I_1(x, y) = \max_{0 \le z \le 64} \big((65 - z)\,V(x, y, z)\big), \quad -64 \le x, y \le 64 \tag{9}$$

$$I_2(x, z) = \max_{0 \le y \le 64} \big((65 - y)\,V(x, y, z)\big), \quad -64 \le x, z \le 64 \tag{10}$$

$$I_3(y, z) = \max_{0 \le x \le 64} \big((65 - x)\,V(x, y, z)\big), \quad -64 \le y, z \le 64 \tag{11}$$

$$I_4(x, y) = \max_{-64 \le z \le 0} \big((65 + z)\,V(x, y, z)\big), \quad -64 \le x, y \le 64 \tag{12}$$

$$I_5(x, z) = \max_{-64 \le y \le 0} \big((65 + y)\,V(x, y, z)\big), \quad -64 \le x, z \le 64 \tag{13}$$

$$I_6(y, z) = \max_{-64 \le x \le 0} \big((65 + x)\,V(x, y, z)\big), \quad -64 \le y, z \le 64 \tag{14}$$


(a) (b)

Fig. 3.3 Six different views of a 3D fighter_jet model.

Once the six projection images are generated, 2D Spectral and Cepstral features will then be extracted from each projection image.
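The following sketch renders the six elevations of Eqs. (9)-(14) from a Boolean voxel grid. It is an illustration rather than the thesis code: array indices 0..127 replace the -64..64 coordinate range of the equations, so the altitude weight simply grows from the far side of each view, and the ordering of the returned views is a convention of this sketch.

```python
import numpy as np

def elevation_images(voxel):
    """Project a (D, D, D) boolean voxel grid into six gray-level
    elevations; the gray value encodes the altitude of the nearest
    opaque voxel seen from each of the six views (Eqs. (9)-(14))."""
    D = voxel.shape[0]
    weight = np.arange(1, D + 1)          # altitude grows toward the viewer

    views = []
    for axis in range(3):                 # project along each array axis
        v = np.moveaxis(voxel, axis, -1)  # put the viewing axis last
        views.append((v * weight).max(axis=-1))        # one side
        views.append((v * weight[::-1]).max(axis=-1))  # opposite side
    return views                          # six (D, D) elevation images
```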


3.3 The 2D Spectral and Cepstral Feature

The proposed 3D model retrieval approach extracts 2D spectral features and cepstral features from the six projection images (see Fig. 3.4).

Fig. 3.4 Flow diagram for 2D spectral and cepstral feature extraction.



3.3.1 The Spectral Feature

The spectral features include the 2D spectral feature, the 2D generic subband decomposition (GSD) spectral feature, the 2D generic logarithmic subband decomposition (GLSD) spectral feature, the 2D complement subband decomposition (CSD) spectral feature, and the 2D complement logarithmic subband decomposition (CLSD) spectral feature.

3.3.1.1 The 2D Spectral Feature Extraction

The 2D discrete Fourier transform (2D-DFT) of each 2D projection image I(x, y) is computed as follows:

$$F(u, v) = \mathrm{DFT}(I(x, y)) \tag{15}$$

The magnitudes of the central N×N spectral coefficients constitute the 2D spectral feature vector:

$$\mathbf{S} = [f(1), f(2), \ldots, f(N \times N)]^T = [\,|F(0,0)|, \ldots, |F(0,N-1)|, |F(1,0)|, \ldots, |F(N-1,N-1)|\,]^T \tag{16}$$

3.3.1.2 The 2D Generic Subband Decomposition (GSD) Spectral Feature Extraction

The 2D discrete Fourier transform (2D-DFT) of each 2D projection image I(x, y) is computed as follows:

$$F(u, v) = \mathrm{DFT}(I(x, y)) \tag{17}$$

The 2D spectrum is then decomposed into R×A subbands $B(\rho, \alpha)$, $0 \le \rho \le R-1$, $0 \le \alpha \le A-1$, along the radial and angular directions. The energy of each subband is then computed as follows:

$$E(\rho, \alpha) = \sum_{(u,v) \in B(\rho,\alpha)} |F(u, v)|^2, \quad 0 \le \rho \le R-1,\; 0 \le \alpha \le A-1 \tag{18}$$

A subband decomposition method called generic subband decomposition (GSD) is employed to divide the 2D spectrum into several subbands (see Fig. 3.5(a)). The R×A subband energies constitute the GSD 2D spectral feature vector:

$$\mathrm{GSD\_S} = [f(1), f(2), \ldots, f(R \times A)]^T = [\,E(0,0), \ldots, E(0,A-1), E(1,0), \ldots, E(R-1,A-1)\,]^T \tag{19}$$
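As an illustration of Eqs. (17)-(19), the sketch below computes the R×A GSD subband energies with NumPy. The uniform radial and angular partition of Fig. 3.5(a) is assumed; the exact bin boundaries are a choice of this sketch, not taken from the thesis.

```python
import numpy as np

def gsd_spectral_feature(image, R=8, A=8):
    """Eq. (17): 2D DFT; Eq. (18): accumulate |F|^2 into R x A subbands
    over uniform radial/angular bins; Eq. (19): flatten into a vector."""
    F = np.fft.fftshift(np.fft.fft2(image))      # center the DC component
    power = np.abs(F) ** 2

    h, w = image.shape
    y, x = np.indices((h, w))
    u, v = x - w // 2, y - h // 2
    radius = np.hypot(u, v)
    angle = np.mod(np.arctan2(v, u), 2 * np.pi)

    # Map each frequency sample to a (radial, angular) subband index.
    r_bin = np.minimum((radius / radius.max() * R).astype(int), R - 1)
    a_bin = np.minimum((angle / (2 * np.pi) * A).astype(int), A - 1)

    E = np.zeros((R, A))
    np.add.at(E, (r_bin, a_bin), power)          # Eq. (18)
    return E.ravel()                             # Eq. (19)
```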

3.3.1.3 The 2D Generic Logarithmic Subband Decomposition (GLSD) Spectral Feature

The 2D discrete Fourier transform (2D-DFT) of each 2D projection image I(x, y) is computed as follows:

$$F(u, v) = \mathrm{DFT}(I(x, y)) \tag{20}$$

The 2D spectrum is then decomposed into R×A subbands $B(\rho, \alpha)$, $0 \le \rho \le R-1$, $0 \le \alpha \le A-1$, along the radial and angular directions. The energy of each subband is then computed as follows:

$$E(\rho, \alpha) = \sum_{(u,v) \in B(\rho,\alpha)} |F(u, v)|^2, \quad 0 \le \rho \le R-1,\; 0 \le \alpha \le A-1 \tag{21}$$

A subband decomposition method called generic logarithmic subband decomposition (GLSD) is employed to divide the 2D spectrum into several subbands (see Fig. 3.5(b)). The R×A subband energies constitute the GLSD 2D spectral feature vector:

$$\mathrm{GLSD\_S} = [f(1), f(2), \ldots, f(R \times A)]^T = [\,E(0,0), \ldots, E(0,A-1), E(1,0), \ldots, E(R-1,A-1)\,]^T \tag{22}$$

(a) (b)

Fig. 3.5 Subband decomposition. (a) Generic subband decomposition (GSD). (b) Generic logarithmic subband decomposition (GLSD).


3.3.1.4 The 2D Complement Subband Decomposition (CSD) Spectral Feature

The 2D discrete Fourier transform (2D-DFT) of each 2D projection image I(x, y) is computed as follows:

$$F(u, v) = \mathrm{DFT}(I(x, y)) \tag{23}$$

The 2D spectrum is then decomposed into R×A subbands $B(\rho, \alpha)$, $0 \le \rho \le R-1$, $0 \le \alpha \le A-1$, along the radial and angular directions. The energy of each subband is then computed as follows:

$$E(\rho, \alpha) = \sum_{(u,v) \in B(\rho,\alpha)} |F(u, v)|^2, \quad 0 \le \rho \le R-1,\; 0 \le \alpha \le A-1 \tag{24}$$

A subband decomposition method called complement subband decomposition (CSD) is employed to divide the 2D spectrum into several subbands (see Fig. 3.6 and Fig. 3.7(a)). The R×A subband energies constitute the CSD 2D spectral feature vector:

$$\mathrm{CSD\_S} = [f(1), f(2), \ldots, f(R \times A)]^T = [\,E(0,0), \ldots, E(0,A-1), E(1,0), \ldots, E(R-1,A-1)\,]^T \tag{25}$$

3.3.1.5 The 2D Complement Logarithmic Subband Decomposition (CLSD) Spectral Feature

The 2D discrete Fourier transform (2D-DFT) of each 2D projection image I(x, y) is computed as follows:

$$F(u, v) = \mathrm{DFT}(I(x, y)) \tag{26}$$

The 2D spectrum is then decomposed into R×A subbands $B(\rho, \alpha)$, $0 \le \rho \le R-1$, $0 \le \alpha \le A-1$, along the radial and angular directions. The energy of each subband is then computed as follows:

$$E(\rho, \alpha) = \sum_{(u,v) \in B(\rho,\alpha)} |F(u, v)|^2, \quad 0 \le \rho \le R-1,\; 0 \le \alpha \le A-1 \tag{27}$$

A subband decomposition method called complement logarithmic subband decomposition (CLSD) is employed to divide the 2D spectrum into several subbands (see Fig. 3.6 and Fig. 3.7(b)). The R×A subband energies constitute the CLSD 2D spectral feature vector:

$$\mathrm{CLSD\_S} = [f(1), f(2), \ldots, f(R \times A)]^T = [\,E(0,0), \ldots, E(0,A-1), E(1,0), \ldots, E(R-1,A-1)\,]^T \tag{28}$$

Fig. 3.6 Angular shift: the dashed lines are the angles of the solid lines after shifting. (In this example, the spectrum is divided into 8 equal portions.)

(a) (b)

Fig. 3.7 Subband decomposition. (a) Complement subband decomposition (CSD). (b) Complement logarithmic subband decomposition (CLSD).
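The four layouts (GSD, GLSD, CSD, CLSD) differ only in where the radial and angular bin boundaries fall. The sketch below builds the bin-index maps for all four cases: the half-bin angular shift of CSD/CLSD (Fig. 3.6) and the logarithmic radial spacing of GLSD/CLSD are modeled directly, while the exact boundary formulas are assumptions made for illustration.

```python
import numpy as np

def subband_bins(shape, R=8, A=8, shift=False, log_radius=False):
    """Return (r_bin, a_bin) index maps for a centered 2D spectrum.

    shift=False, log_radius=False -> GSD;  shift=False, log_radius=True -> GLSD
    shift=True,  log_radius=False -> CSD;  shift=True,  log_radius=True -> CLSD
    """
    h, w = shape
    y, x = np.indices(shape)
    u, v = x - w // 2, y - h // 2
    radius = np.hypot(u, v)
    angle = np.mod(np.arctan2(v, u), 2 * np.pi)

    if shift:
        # CSD/CLSD: rotate every angular boundary by half a bin
        # (0 deg -> 22.5 deg when A = 8), complementing the GSD/GLSD bins.
        angle = np.mod(angle - np.pi / A, 2 * np.pi)
    a_bin = np.minimum((angle / (2 * np.pi) * A).astype(int), A - 1)

    if log_radius:
        r = np.log1p(radius) / np.log1p(radius.max())  # logarithmic spacing
    else:
        r = radius / radius.max()                      # uniform spacing
    r_bin = np.minimum((r * R).astype(int), R - 1)
    return r_bin, a_bin
```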


3.3.2 The Cepstral Feature

The cepstral features include 2D cepstral feature, 2D generic subband decomposition (GSD) cepstral feature, 2D generic logarithmic subband decomposition (GLSD) cepstral feature, 2D complement subband decomposition (CSD) cepstral feature, 2D complement logarithmic subband decomposition (CLSD) cepstral feature, GSD+CSD (GCSD) cepstral feature and GLSD+CLSD (GLCLSD) cepstral feature.

3.3.2.1 The 2D Cepstral Feature Extraction

The 2D discrete Fourier transform (2D-DFT) of each 2D projection image I(x, y) is computed as follows:

$$F(u, v) = \mathrm{DFT}(I(x, y)) \tag{29}$$

The 2D cepstrum C(p, q) is derived by taking the inverse DFT of the spectrum F(u, v):

$$C(p, q) = \mathrm{DFT}^{-1}(F(u, v)) \tag{30}$$

The magnitudes of the central N×N cepstral coefficients constitute the 2D cepstral feature vector:

$$\mathbf{C} = [f(1), f(2), \ldots, f(N \times N)]^T = [\,|C(0,0)|, \ldots, |C(0,N-1)|, |C(1,0)|, \ldots, |C(N-1,N-1)|\,]^T \tag{31}$$
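A minimal sketch of Eqs. (29)-(31). The classical 2D real cepstrum applies a logarithm to the magnitude spectrum before the inverse DFT; since Eq. (30) as recovered shows only the inverse DFT of F(u, v), the logarithm (and the small constant that guards log 0) below is an assumption:

```python
import numpy as np

def cepstral_feature(image, N=8):
    """Eq. (29): 2D DFT; Eq. (30): inverse DFT back to the cepstrum;
    Eq. (31): keep the magnitudes of the central N x N coefficients."""
    F = np.fft.fft2(image)                        # Eq. (29)
    # Eq. (30); the log on the magnitude spectrum is assumed (see above).
    C = np.fft.ifft2(np.log(np.abs(F) + 1e-12))
    mag = np.abs(np.fft.fftshift(C))              # center the cepstrum

    h, w = mag.shape
    cy, cx = h // 2, w // 2
    block = mag[cy - N // 2: cy + (N + 1) // 2,
                cx - N // 2: cx + (N + 1) // 2]   # central N x N block
    return block.ravel()                          # Eq. (31)
```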

3.3.2.2 The 2D Generic Subband Decomposition (GSD) Cepstral Feature Extraction

The 2D discrete Fourier transform (2D-DFT) of each 2D projection image I(x, y) is computed as follows:

$$F(u, v) = \mathrm{DFT}(I(x, y)) \tag{32}$$

The 2D spectrum is then decomposed into R×A subbands $B(\rho, \alpha)$, $0 \le \rho \le R-1$, $0 \le \alpha \le A-1$, along the radial and angular directions. The energy of each subband is then computed as follows:

$$E(\rho, \alpha) = \sum_{(u,v) \in B(\rho,\alpha)} |F(u, v)|^2, \quad 0 \le \rho \le R-1,\; 0 \le \alpha \le A-1 \tag{33}$$
