
Table 5.1: Comparison of the performance, PVS accuracy, and visual error of our method, Nirenstein and Blake's method [NB04], and Wonka et al.'s method [WWZ+06] in the Vienna800K model.

Exp.  Method  Samples/Cell  Time/Cell  Visible Set  Pixel Error   Color Error
1     Depth   80            6.22 s     2.80%        5.5 × 10^-5   1.5 × 10^-5

GVS. The reason is that the polygons in the Hong Kong model are more discrete than in the other models, and many overlapping polygons cause severe z-fighting. These conditions are unfavorable for ray-casting approaches like GVS. Image-based methods, however, can handle discrete or overlapping polygons without introducing much error, since each sample covers a larger-scale view rather than a single ray and therefore gathers a large set of visible objects at once.
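The following C++ sketch illustrates this distinction. It is our illustration, not code from the thesis: we assume the sample view has been rendered into an item buffer that stores one object ID per pixel, and the name gatherVisibleSet is hypothetical.

    // Minimal sketch: one image-based visibility sample. The item buffer is
    // assumed to hold one object ID per pixel (0 = background).
    #include <cstdint>
    #include <cstdio>
    #include <unordered_set>
    #include <vector>

    std::unordered_set<std::uint32_t> gatherVisibleSet(
            const std::vector<std::uint32_t>& itemBuffer) {
        std::unordered_set<std::uint32_t> visible;
        for (std::uint32_t id : itemBuffer)
            if (id != 0) visible.insert(id);  // every pixel may add an object
        return visible;
    }

    int main() {
        std::vector<std::uint32_t> itemBuffer = {0, 7, 7, 3, 0, 42};  // toy render
        std::printf("%zu objects visible\n", gatherVisibleSet(itemBuffer).size());
    }

A single 1024 × 1024 sample therefore inspects about one million pixels at once, and overlapping or z-fighting polygons merely change which polygon wins each pixel; a single ray, in contrast, commits to exactly one intersection.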

5.2 Image Error and PVS Accuracy

We give some real examples of the pixel errors of our method compared with NIR04 and GVS. In each pixel-error image rendered with the PVS produced by a method, blue pixels are correct while red pixels are false (false invisible). Figure 5.5 shows some examples of the error pixels of our method. Most of these errors result from the projection, resolution, and field-of-view settings. Because these error pixels are tiny and located at far distances, they are hardly noticeable unless their colors are highlighted. Figures 5.6, 5.7, and 5.8 show three different views rendered with the PVSs produced by our method and NIR04.
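A hedged C++ sketch of how such an error image can be produced follows; this is our reconstruction of the measurement, not the thesis code, and the buffer layout and function name classifyPixelError are assumptions. The same view is rendered once with the full model (the reference) and once with the PVS only, and the per-pixel object IDs are compared.

    // Compare a reference render (full model) against a PVS render, build the
    // blue/red error image, and return the pixel-error rate.
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    struct RGB { std::uint8_t r, g, b; };

    double classifyPixelError(const std::vector<std::uint32_t>& referenceIds,
                              const std::vector<std::uint32_t>& pvsIds,
                              std::vector<RGB>& errorImage) {
        std::size_t falsePixels = 0;
        errorImage.resize(referenceIds.size());
        for (std::size_t i = 0; i < referenceIds.size(); ++i) {
            // False invisible: the full model shows an object here that the
            // PVS render fails to reproduce.
            bool wrong = referenceIds[i] != pvsIds[i];
            errorImage[i] = wrong ? RGB{255, 0, 0}    // red: image error
                                  : RGB{0, 0, 255};   // blue: correct pixel
            if (wrong) ++falsePixels;
        }
        return double(falsePixels) / double(referenceIds.size());
    }

    int main() {
        std::vector<std::uint32_t> ref = {1, 2, 3, 0}, pvs = {1, 0, 3, 0};
        std::vector<RGB> img;
        std::printf("pixel error: %.2f\n", classifyPixelError(ref, pvs, img));
    }

Under this metric, a pixel error of 1.1 × 10^-5, as in Table 5.2, corresponds to roughly eleven wrong pixels per million rendered.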

Table 5.2: Comparison of our method with NIR04 and GVS in the Vienna8M model.

Exp.  Method  Samples/Cell  Time/Cell  Visible Set  Pixel Error
1     Depth   176           66.22 s    0.413%       1.1 × 10^-5
2     Depth   296           111.70 s   0.433%       6.0 × 10^-6
3     NIR04   176.3         48.67 s    0.407%       6.3 × 10^-5
4     NIR04   301           86.31 s    0.431%       4.9 × 10^-5
5     GVS     5.4M          369.91 s   1.27%        1.4 × 10^-4

Table 5.3: Comparison of our method with NIR04 and GVS in the Hong Kong model.

Exp.  Method  Samples/Cell  Time/Cell  Visible Set  Pixel Error
1     Depth   284           51.97 s    1.77%        8.8 × 10^-5
2     Depth   424           79.39 s    1.86%        4.9 × 10^-5
3     NIR04   288           36.22 s    1.68%        4.4 × 10^-4
4     NIR04   427           54.19 s    1.80%        2.4 × 10^-4
5     GVS     14.3M         777.31 s   1.80%        3.4 × 10^-3

NIR04 is also an image-based approach, so the errors incurred by projection and resolution exist there as well. However, since our method considers geometry and texture information, it incurs fewer errors at far distances than NIR04.

Although the PVS produced by GVS has a higher visible-set percentage in Vienna8M (Exp. 5 in Table 5.2), it does not outperform our method or NIR04 in terms of image error. Figures 5.9, 5.10, and 5.11 show examples of the pixel errors incurred by our method and GVS. In our experiments, GVS tends to lose some visually important polygons while gathering far and tiny ones. This is caused by the characteristics of object-based ray casting mentioned before. In Figure 5.12 we place the camera outside the view cell (the green wireframe box). The PVS produced by GVS contains many tiny polygons far from the view cell, while our method mostly finds polygons near the view cell. Although these far polygons are theoretically visible, such unnoticeable polygons


may cause rendering overhead and hurt performance in real-time applications.
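A back-of-the-envelope computation makes "unnoticeable" concrete. The camera parameters below are our illustrative assumptions, not measurements from the thesis: under perspective projection, the screen-space height of a polygon shrinks linearly with its distance.

    // Projected height in pixels of an object of world-space height s at
    // distance d, for a pinhole camera with vertical field of view fovY and
    // an image H pixels tall. All numbers are illustrative.
    #include <cmath>
    #include <cstdio>

    double projectedPixels(double s, double d, double fovY, double H) {
        return H * s / (2.0 * d * std::tan(fovY / 2.0));
    }

    int main() {
        const double pi = 3.14159265358979;
        // A 2 m facade detail seen 1 km away with a 60-degree FOV at 1024 px
        // covers under 2 pixels, yet still costs a draw if kept in the PVS.
        std::printf("%.2f px\n", projectedPixels(2.0, 1000.0, pi / 3.0, 1024.0));
    }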

To test the accuracy of the sampling mechanism, we continuously add samples on a boundary face of a view cell. The relationship between the growing number of samples and the resulting PVS size on five different boundary faces is shown in Figure 5.13. One can observe that the PVS size grows rapidly while the number of samples is below 100 and that almost all visible objects are captured afterward. This shows that our sampling algorithm distributes samples very efficiently.
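The experiment can be summarized by the following loop, a sketch that assumes a pluggable sampler; the toy stand-in in main only mimics the saturating shape of the real curve and is not the thesis sampler.

    // Record one point of the convergence curve after every added sample.
    #include <cstdio>
    #include <functional>
    #include <unordered_set>
    #include <vector>

    using ObjectId = unsigned;

    void measureConvergence(
            const std::function<std::vector<ObjectId>(int)>& objectsSeenBySample,
            int maxSamples) {
        std::unordered_set<ObjectId> pvs;
        for (int n = 1; n <= maxSamples; ++n) {
            for (ObjectId id : objectsSeenBySample(n))
                pvs.insert(id);                      // union into the PVS
            std::printf("%d %zu\n", n, pvs.size());  // samples vs. |PVS|
        }
    }

    int main() {
        // Toy stand-in sampler over a population of 200 objects; distinct IDs
        // run out as n grows, so |PVS| saturates as in Figure 5.13.
        auto toySampler = [](int n) {
            std::vector<ObjectId> seen;
            for (int k = 0; k < 3; ++k)
                seen.push_back(ObjectId((n * 31 + k * 17) % 200));
            return seen;
        };
        measureConvergence(toySampler, 150);
    }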

(a) Vienna800K

(b) Vienna8M

(c) Hong Kong

Figure 5.4: The PVS distributions produced by different methods in different view cells of different scenes.


(a)

(b)

Figure 5.5: (a) A view at a crossroad. (b) The same scene rendered using the PVS generated by our algorithm, where blue pixels are correctly visible objects and red pixels show image errors (false invisible objects).

Figure 5.6: Top: a view in Vienna800K. Bottom left: pixel errors of the PVS of our method. Bottom right: pixel errors of the PVS of NIR04.


Figure 5.7: Top: another view in Vienna800K. Bottom left: pixel errors of the PVS of our method. Bottom right: pixel errors of the PVS of NIR04.

Figure 5.8: Top: a view in Vienna8M. Bottom left: pixel errors of the PVS of our method. Bottom right: pixel errors of the PVS of NIR04.


Figure 5.9: Top: a view in Vienna800K. Bottom left: pixel errors of the PVS of our method. Bottom right: pixel errors of the PVS of GVS. Significant polygons are missed.

Figure 5.10: Top: a view in Vienna8M. Bottom left: pixel errors of the PVS of our method. Bottom right: pixel errors of the PVS of GVS. Significant polygons are missed.


Figure 5.11: Top: a view in Hong Kong. Bottom left: pixel errors of the PVS of our method. Bottom right: pixel errors of the PVS of GVS. Significant polygons are missed.

Figure 5.12: Top: a top view in Vienna8M. The camera is placed outside the view cell (the green wireframe box). Bottom left: the polygons found by our method lie near the view cell. Bottom right: GVS finds many tiny polygons far from the view cell.


Figure 5.13: Our sampling algorithm is very efficient at placing samples. One can observe that the PVS size grows rapidly while the number of samples is below 100 and that the PVS captures almost all visible objects afterward.

6 Conclusions

In this chapter, we give a brief summary of and conclusions about our visibility computation algorithms. We also suggest several directions for future improvements.

6.1 Summary

We present an aggressive region-based visibility sampling algorithm for general 3D scenes.

Rather than adding visibility samples based only on the visual error of the PVS of already sampled regions, we actively estimate the reliability of the visibility at unsampled positions and add samples in regions of low reliability. Our algorithm uses depth gradients to construct an importance function that represents the reliability of the potentially visible set (PVS) on a view cell's boundary faces. The importance function guides visibility samples toward the depth discontinuities of the scene so that more visible objects can be sampled to reduce the visual error. Our experiments show that our sampling approach effectively improves PVS accuracy and computational speed compared to the image-based adaptive approach proposed in [NB04] and the object-based approach proposed in [WWZ+06].
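As a minimal C++ sketch of this idea, assuming a regular depth map sampled on a boundary face, the importance can be taken as the depth-gradient magnitude and new sample positions drawn proportionally to it. This is an illustration of the principle, not the thesis implementation, and the names importanceFromDepth and drawSample are ours.

    // Importance from depth gradients: large forward differences mark depth
    // discontinuities, where the PVS on the boundary face is least reliable.
    #include <cmath>
    #include <cstdio>
    #include <cstdlib>
    #include <vector>

    std::vector<double> importanceFromDepth(const std::vector<double>& depth,
                                            int w, int h) {
        std::vector<double> imp(w * h, 0.0);
        for (int y = 0; y + 1 < h; ++y)
            for (int x = 0; x + 1 < w; ++x) {
                double dx = depth[y * w + x + 1] - depth[y * w + x];
                double dy = depth[(y + 1) * w + x] - depth[y * w + x];
                imp[y * w + x] = std::sqrt(dx * dx + dy * dy);
            }
        return imp;
    }

    // Draw one texel index with probability proportional to its importance,
    // so new visibility samples cluster at depth discontinuities.
    int drawSample(const std::vector<double>& imp) {
        double total = 0.0;
        for (double v : imp) total += v;
        double r = total * (std::rand() + 0.5) / (RAND_MAX + 1.0);
        for (std::size_t i = 0; i < imp.size(); ++i)
            if ((r -= imp[i]) <= 0.0) return int(i);
        return int(imp.size()) - 1;
    }

    int main() {
        // 3x3 toy depth map with a discontinuity between columns 1 and 2:
        // the drawn sample always lands on that depth edge.
        std::vector<double> depth = {1, 1, 9,  1, 1, 9,  1, 1, 9};
        std::printf("sample at texel %d\n",
                    drawSample(importanceFromDepth(depth, 3, 3)));
    }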
