
Volume Visualization

1 National Center for High-Performance Computing, Taiwan, ROC

5. Volume Visualization

Figure 5 presents the procedure of our volume visualization sub-system. Besides the rendering pipeline, the procedure also shows the interactive functionality.

The input data are the thorax images and lung images, and the output is the rendering result. The user can observe and navigate the rendering results and modify the transfer functions through a graphical user interface.

Figure 5. Volume visualization procedure.

The volume rendering kernel is the most important part. Because the system takes advantage of state-of-the-art graphics hardware, we have to consider the hardware's constraints while bringing it into full utilization. We describe our volume rendering pipeline in Subsection 5.1, with more details about classification in Subsection 5.2. The last part of this section presents the measuring method in our 3D scene.

5.1. Volume Rendering Pipeline

The first step in our volume rendering kernel is re-sampling, because we choose view-aligned re-sampling planes instead of axis-aligned planes. We use a 3D texture, supported by current graphics hardware, to store the volume data. This allows the graphics hardware to perform tri-linear interpolation when the system re-samples the data.
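To make the re-sampling step concrete, the following Python sketch mimics the tri-linear interpolation the texture unit performs when a view-aligned plane samples a 3D texture. This is illustrative only; the real work is done by the hardware, and the function assumes the sample point lies inside the volume.

```python
import numpy as np

def trilinear_sample(volume, p):
    """Tri-linear interpolation at a continuous point p = (z, y, x).

    volume: 3D array of density values.  Assumes p is inside the
    volume; the hardware clamps/wraps according to texture settings.
    """
    p = np.asarray(p, dtype=float)
    lo = np.floor(p).astype(int)                      # lower corner indices
    hi = np.minimum(lo + 1, np.array(volume.shape) - 1)
    f = p - lo                                        # fractional offsets
    val = 0.0
    for dz in (0, 1):
        for dy in (0, 1):
            for dx in (0, 1):
                # weight of this corner = product of per-axis weights
                w = ((f[0] if dz else 1 - f[0]) *
                     (f[1] if dy else 1 - f[1]) *
                     (f[2] if dx else 1 - f[2]))
                idx = (hi[0] if dz else lo[0],
                       hi[1] if dy else lo[1],
                       hi[2] if dx else lo[2])
                val += w * volume[idx]
    return val
```

For example, sampling the center of a 2×2×2 cell returns the average of its eight corner values.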

Classification is the second step; it is in charge of mapping density values to RGBA colors.

This is a useful tool to highlight a desired range of density values. The mapping can be a mathematical equation or just a color table. In our implementation, we provide a graphical interface for the user to define the transfer function and create a color table from it. The color table can be treated as a texture in graphics hardware. However, we do not use this color table for the mapping directly, because we want to improve rendering quality with the pre-integrated method [8].

Our third step is pre-integration, which is implemented in GLSL. Engel et al. proposed this algorithm for programmable graphics hardware to prevent the sampling artifacts caused by an insufficient Nyquist frequency [8].

This algorithm considers the integration between adjacent slices; in other words, slabs are used in place of slices. After this step, a 2D color table is generated.

Its axes represent the front and back density values separately, so each entry in this table is an integrated result from front to back. We use this pre-integrated table to do the color mapping.
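As a rough offline illustration, building such a table can be sketched in Python. The real implementation runs in GLSL on the GPU, and the `samples` parameter and the simple per-segment compositing below are our own simplifications, not the exact formulation of [8].

```python
import numpy as np

def preintegrate(tf, samples=16):
    """Build a 2D pre-integrated lookup table from a 1D RGBA transfer function.

    tf: (N, 4) array, RGBA in [0, 1] indexed by density value.
    Returns an (N, N, 4) table T[sf, sb]: the integrated color and alpha
    of a slab whose front density is sf and back density is sb.
    """
    n = tf.shape[0]
    table = np.zeros((n, n, 4))
    for sf in range(n):
        for sb in range(n):
            # densities sampled linearly between the front and back values
            d = np.linspace(sf, sb, samples).round().astype(int)
            c = np.zeros(3)
            a = 0.0
            for s in d:
                rgba = tf[s]
                a_s = rgba[3] / samples          # opacity scaled by step count
                c += (1.0 - a) * a_s * rgba[:3]  # front-to-back accumulation
                a += (1.0 - a) * a_s
            table[sf, sb, :3] = c
            table[sf, sb, 3] = a
    return table
```

For an N-entry transfer function the table has N² entries, so in practice it is recomputed only when the transfer function changes, not per frame.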

The final step in our volume rendering pipeline is composition. We follow the volume rendering composition equations derived by Levoy [15]. We can do the alpha blending recursively from back to front by applying the following equations:

c_out = a_source × c_source + (1 − a_source) × c_in
a_out = a_source + (1 − a_source) × a_in

where a_in and c_in are the previously integrated alpha and RGB colors, a_source and c_source are the alpha and RGB colors of the incoming voxel, and a_out and c_out are the current integrated alpha and RGB colors. However, when we use the pre-integrated method, the equations have to be revised as follows:

c_out = c_preint + (1 − a_preint) × c_in

a_out = a_preint + (1 − a_preint) × a_in

where a_in, c_in, a_out, and c_out are defined as before, and a_preint and c_preint are the alpha and RGB colors from the pre-integrated computation. Note that c_preint is already weighted by opacity during pre-integration, so it is not multiplied by a_preint again.
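A minimal sketch of both recursions, assuming NumPy and (rgb, alpha) pairs already sorted back to front:

```python
import numpy as np

def composite_btf(samples):
    """Back-to-front alpha blending with Levoy's composition equations.

    samples: (rgb, alpha) pairs ordered back to front; rgb is a
    length-3 sequence and alpha a scalar in [0, 1].
    """
    c_in, a_in = np.zeros(3), 0.0
    for c_src, a_src in samples:
        c_in = a_src * np.asarray(c_src, dtype=float) + (1.0 - a_src) * c_in
        a_in = a_src + (1.0 - a_src) * a_in
    return c_in, a_in

def composite_btf_preintegrated(slabs):
    """Same recursion for pre-integrated slabs: c_preint is already
    opacity-weighted, so it is added without the extra alpha multiply."""
    c_in, a_in = np.zeros(3), 0.0
    for c_pre, a_pre in slabs:
        c_in = np.asarray(c_pre, dtype=float) + (1.0 - a_pre) * c_in
        a_in = a_pre + (1.0 - a_pre) * a_in
    return c_in, a_in
```

In the actual renderer this blending is performed by the graphics hardware per fragment; the Python version only demonstrates the arithmetic.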

In our implementation of the volume rendering pipeline, we also make the following two design decisions.

1. We sort the view-aligned planes from all volumes together, because there are two volumes in our input data.

2. We do not calculate gradients and shading for better visual effects; instead, we choose better performance with less memory cost. We discuss this issue in Section 7.

5.2. Classification Interface for Color Table Generation

Figure 6 shows the proposed graphical user interface. Its control panel can be divided into three major parts:

Upper-left block. This block displays the histogram of the logarithm of the density values and the distribution of RGBA colors for the selected volume data set. The histogram chart is a useful guide for the user to find features. The distribution of RGBA colors is presented as four separate curves. This block shows the color distribution in real time.

Bottom-left block. This is the interactive block.

A density-value mapping bar at the bottom allows the user to add or delete control points.

We can set arbitrary RGBA colors for each control point. Afterward, the system uses linear interpolation to generate RGBA colors for the density values in between. The four bars in the middle are used to specify the R, G, B, and A values separately for the selected control point.
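The interpolation between control points can be sketched as follows; this is a hypothetical helper, not the actual GUI code, and it assumes densities in [0, size−1] and colors normalized to [0, 1].

```python
import numpy as np

def build_color_table(control_points, size=256):
    """Build a 1D RGBA color table from user-placed control points.

    control_points: list of (density, (r, g, b, a)) pairs.  RGBA values
    between control points are filled in by linear interpolation, as in
    the classification GUI described above.
    """
    pts = sorted(control_points)              # np.interp needs ascending x
    xs = [p[0] for p in pts]
    table = np.zeros((size, 4))
    for ch in range(4):                       # interpolate each channel
        ys = [p[1][ch] for p in pts]
        table[:, ch] = np.interp(np.arange(size), xs, ys)
    return table
```

Densities outside the outermost control points are clamped to the nearest control point's color, which is `np.interp`'s default behavior.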

The volume's name is displayed in the upper-left corner. Under the name, there is a box showing the color of the currently selected control point. In the upper-right corner, there are four buttons; from left to right, their functions are "load transfer function", "save transfer function", "jump to previous volume", and "jump to next volume".

Right block. This block contains the pre-integrated table. It is reference information only, without any interactive mechanism.

Figure 6. Graphical user interface of transfer function.

Figure 7 is an example of the mapping between the designed transfer functions and the rendering result. From top to bottom, it shows the histogram of the thorax volume data, the volume rendering result, and the histogram of the lung volume data. In this case, we redistribute the density values to the range 0 to 255. Bone is identified in the thorax volume data.

Lungs, airways, and tumors are identified in the lung volume data. This also shows the benefit of having an individual transfer function for each volume data set: more features can be identified in the regions users want to look at.

5.3. Measurement

For 3D measurement, we developed a 3D seed-growing algorithm to calculate volumetric size. The user controls a 3D probe to locate the seed and chooses a tolerance threshold. After the seed-growing procedure, we multiply the number of marked voxels by the size of a single voxel to obtain the size of the marked region. We highlight the marked region on the screen for the user to verify the result.

Figure 7. A transfer function example.

Figure 8 shows an example. A probe anchored to the assistant coordinate axes points at a tumor. A purple ball that can slide along the probe gives a sense of the tolerance threshold. The selected tumor is marked in white. The threshold value and the volumetric size are displayed in the upper-left corner.
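The measurement procedure can be sketched as a breadth-first seed growing over the voxel grid. This is an illustrative Python version; the function names and the choice of 6-connectivity are our assumptions, not details from the actual implementation.

```python
from collections import deque
import numpy as np

def seed_grow_3d(volume, seed, tolerance):
    """Grow a region from a seed voxel for volumetric measurement.

    volume: 3D array of density values; seed: (z, y, x) start voxel;
    tolerance: maximum allowed |density - seed density|.
    Returns a boolean mask of the grown (6-connected) region.
    """
    seed_val = float(volume[seed])
    mask = np.zeros(volume.shape, dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            n = (z + dz, y + dy, x + dx)
            if (all(0 <= n[i] < volume.shape[i] for i in range(3))
                    and not mask[n]
                    and abs(float(volume[n]) - seed_val) <= tolerance):
                mask[n] = True
                queue.append(n)
    return mask

def region_volume_mm3(mask, voxel_size_mm3):
    """Volumetric size = marked voxel count x single-voxel volume."""
    return mask.sum() * voxel_size_mm3
```

With the first test case's voxel size, `voxel_size_mm3` would be 0.66015625 × 0.66015625 × 1.0.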

6. Results

Our system is based on OpenGL [19], GLSL [13], FLTK [22], and 3D VR Engine [16]. It was developed using Microsoft Visual C++ 6.0 on the Win32 platform. Our test platform is a PC with dual AMD Athlon MP 2800+ CPUs, 3 GB of registered ECC DDR266 memory, and an NVIDIA GeForce 6800 GT graphics card with 256 MB of graphics memory.

We have 50 patients' data sets, with more than 2 data sets for each patient. We randomly tested more than 10 data sets and the system works well. We select two data sets to present our results. The first case has some small tumors, and the second case has both small and large tumors. In the first case, the lung region can be detected automatically; the user needs only a little editing to adjust the mask. In the second case, the automatic detection result is less than desired because the tumor is too large and lies right next to the heart or the boundary of the thorax. Figure 4 is an example of the second case. The user needs some editing to fix the boundary of the lung. Nevertheless, our system still provides a convenient way to segment the lung region from thorax CT images.

Figure 8. Statistics of volumetric size.

We put the previously segmented result of the first test case into our visualization sub-system. Figure 7 is the rendering result. The original thorax volume's resolution is 512×512×512 and the lung volume's resolution is 512×512×256. The size of a single voxel is 0.66015625×0.66015625×1.0 mm³. The 3D visualized volume data are redistributed into 8-bit unsigned char images. Rendering performance for this case is about 3.8 frames per second (FPS) at 1024×768 screen resolution and 1.5 FPS at 1280×1024 screen resolution.

The second test case has a lower volume resolution but more complex diagnoses. Its original thorax volume's resolution is 512×512×96 and the segmented lung volume's resolution is 512×512×35. The voxel size is 0.65234375×0.65234375×5.0 mm³. The rendering and partial diagnosis results are presented in Figure 9.

Because the volume resolution is low, the rendered image is not smooth. Also, due to the contrast agents in these images, vessels and bones have high density values. We use the bone frame to locate the lung position and then turn it off in the rendering frame by adjusting its transfer function. We color the boundary of the lung yellow and a partial range of blood red. A dented region and a cancer inside the lung are observed. The 3D visualized data are also redistributed into 8-bit unsigned char images. Rendering performance of the second case is about 4.5 FPS at 1024×768 screen resolution and 2.5 FPS at 1280×1024 screen resolution.

Figure 9. A diagnosis example in our 3D visualization environment.

7. Discussions

In this section, we discuss our proposed feature detection method in the volume rendering step. The proposed method can identify spatial features easily and decreases the memory cost on the graphics hardware.

However, we decided to sacrifice some visual effects to gain rendering performance. We discuss the details, along with some concerns based on our study cases, in the following sub-sections.

7.1. Advantages

There are two advantages to using our method:

1. Specific spatial features. Traditional method