
Chapter 7: Conclusions and Future Work

7.2 Future Work

Editing the hair model currently consumes a great deal of time. Making the editing interactive, so that “what we see is what we get”, would be a worthwhile research direction. The interpolation scheme also has drawbacks: hairs interpolated from the same key hair can look too similar, and when the key hairs differ too much, neighboring tufts of hair can look too different from each other.

In the future, we plan to add an Eulerian fluid solver to simulate wind interacting with the hair, as in [1]. We also plan to add level-of-detail (LOD) support to the simulation program.

We would also like to investigate adding environment lighting support [17][29] to the renderer. Currently, if we add a skybox as the background, the lighting of the hair does not blend into the scene because the light source does not match the environment image.

References

[1] C. Yuksel, and S. Tariq, “Advanced techniques in real-time hair rendering and simulation”, in ACM SIGGRAPH 2010 Courses, pp. 1-168, Los Angeles, California, 2010.

[2] I. Sadeghi, H. Pritchett, H. W. Jensen et al., “An artist friendly hair shading system”, ACM Trans. Graph., vol. 29, no. 4, pp. 1-10, 2010.

[3] Z. Ren, K. Zhou, T. Li et al., “Interactive hair rendering under environment lighting”, ACM Trans. Graph., vol. 29, no. 4, pp. 1-8, 2010.

[4] A. McAdams, A. Selle, K. Ward et al., “Detail preserving continuum simulation of straight hair”, ACM Trans. Graph., vol. 28, no. 3, pp. 1-6, 2009.

[5] A. Zinke, C. Yuksel, A. Weber et al., “Dual scattering approximation for fast multiple scattering in hair”, ACM Trans. Graph., vol. 27, no. 3, pp. 1-10, 2008.

[6] C. Yuksel, and J. Keyser, “Deep Opacity Maps”, Computer Graphics Forum, vol. 27, no. 2, pp. 675-680, 2008.

[7] S. Tariq, and L. Bavoil, “Real time hair simulation and rendering on the GPU”, in ACM SIGGRAPH 2008 Talks, Los Angeles, California, 2008.

[8] E. Sintorn, and U. Assarsson, “Real-time approximate sorting for self shadowing and transparency in hair rendering”, in Proceedings of the 2008 symposium on Interactive 3D graphics and games, pp. 157-162, Redwood City, California, 2008.

[9] A. Selle, M. Lentine, and R. Fedkiw, “A mass spring model for hair simulation”, ACM Trans. Graph., vol. 27, no. 3, pp. 1-11, 2008.

[10] Q. Hou, K. Zhou, and B. Guo, “BSGP: bulk-synchronous GPU programming”, ACM Trans. Graph., vol. 27, no. 3, pp. 1-12, 2008.

[11] R. Bridson, Fluid simulation for computer graphics: AK Peters Ltd, 2008.

[12] K. Ward, N. Galoppo, and M. Lin, “Interactive virtual hair salon”, Presence: Teleoperators and Virtual Environments, vol. 16, no. 3, pp. 237-251, 2007.

[13] NVIDIA, CUDA Programming Guide: NVIDIA Corporation, 2007.

[14] M. Müller, B. Heidelberger, M. Hennix et al., “Position based dynamics”, Journal of Visual Communication and Image Representation, vol. 18, no. 2, pp. 109-118, 2007.

[15] S. Green, “CUDA particles”, NVIDIA Whitepaper, 2007.

[16] K. Crane, I. Llamas, and S. Tariq, “Real-time simulation and rendering of 3D fluids”, GPU Gems 3, pp. 633-675: Addison Wesley, 2007.

[17] B. Hiebert, J. Dave, T.-Y. Kim et al., “The Chronicles of Narnia: the lion, the crowds and rhythm and hues”, in ACM SIGGRAPH 2006 Courses, Boston, Massachusetts, 2006.

[18] S. Hadap, “Oriented strands: dynamics of stiff multi-body system”, in Proceedings of the 2006 ACM SIGGRAPH/Eurographics symposium on Computer animation, pp. 91-100, Vienna, Austria, 2006.

[19] R. Gupta, M. Montagnol, P. Volino et al., “Optimized framework for real time hair simulation”, Advances in Computer Graphics, pp. 702-710, 2006.

[20] R. Bridson, R. Fedkiw, and M. Müller-Fischer, “Fluid simulation”, in ACM SIGGRAPH 2006 Courses, Boston, Massachusetts, 2006.

[21] F. Bertails, B. Audoly, M. P. Cani et al., “Super-helices for predicting the dynamics of natural hair”, ACM Trans. Graph., vol. 25, no. 3, pp. 1180-1187, 2006.

[22] Y. Zhu, and R. Bridson, “Animating sand as a fluid”, ACM Trans. Graph., vol. 24, no. 3, pp. 965-972, 2005.

[23] L. Petrovic, M. Henne, and J. Anderson, “Volumetric methods for simulation and rendering of hair”, Tech. rep., Pixar Animation Studios, 2005.

[24] H. Nguyen, and W. Donnelly, “Hair animation and rendering in the Nalu demo”, GPU Gems 2, pp. 361-380: Addison Wesley, 2005.

[25] B. Choe, M. G. Choi, and H.-S. Ko, “Simulating complex hair with robust collision handling”, in Proceedings of the 2005 ACM SIGGRAPH/Eurographics symposium on Computer animation, pp. 153-160, Los Angeles, California, 2005.

[26] F. Bertails, C. Menier, and M. Cani, “A practical self-shadowing algorithm for interactive hair animation”, in Proceedings of Graphics Interface 2005, pp. 71-78, Victoria, British Columbia, 2005.

[27] P. Volino, and N. Magnenat-Thalmann, “Animating complex hairstyles in real-time”, in Proceedings of the ACM symposium on Virtual reality software and technology, pp. 41-48, Hong Kong, 2004.

[28] T. Scheuermann, “Practical real-time hair rendering and shading”, in ACM SIGGRAPH 2004 Sketches, pp. 147, Los Angeles, California, 2004.

[29] I. Neulander, “Quick image-based lighting of hair”, in ACM SIGGRAPH 2004 Sketches, pp. 43, Los Angeles, California, 2004.

[30] T. Mertens, J. Kautz, P. Bekaert et al., “A self-shadow algorithm for dynamic hair using density clustering”, in ACM SIGGRAPH 2004 Sketches, pp. 44, Los Angeles, California, 2004.

[31] M. Koster, J. Haber, and H.-P. Seidel, “Real-Time Rendering of Human Hair Using Programmable Graphics Hardware”, in Proceedings of the Computer Graphics International, pp. 248-256, 2004.

[32] K. Ward, M. C. Lin, J. Lee et al., “Modeling Hair Using Level-of-Detail Representations”, in Proceedings of the 16th International Conference on Computer Animation and Social Agents (CASA 2003), pp. 41, 2003.

[33] K. Ward, and M. C. Lin, “Adaptive Grouping and Subdivision for Simulating Hair Dynamics”, in Proceedings of the 11th Pacific Conference on Computer Graphics and Applications, pp. 234, 2003.

[34] S. R. Marschner, H. W. Jensen, M. Cammarano et al., “Light scattering from human hair fibers”, ACM Trans. Graph., vol. 22, no. 3, pp. 780-791, 2003.

[35] F. Bertails, T.-Y. Kim, M.-P. Cani et al., “Adaptive Wisp Tree: a multiresolution control structure for simulating dynamic clustering in hair motion”, in Proceedings of the 2003 ACM SIGGRAPH/Eurographics symposium on Computer animation, pp. 207-213, San Diego, California, 2003.

[36] Y. Bando, B.-Y. Chen, and T. Nishita, “Animating Hair with Loosely Connected Particles”, Computer Graphics Forum, vol. 22, no. 3, pp. 411-418, 2003.

[37] E. Plante, M. P. Cani, and P. Poulin, “Capturing the complexity of hair motion”, Graphical Models, vol. 64, no. 1, pp. 40-58, 2002.

[38] T. Y. Kim, and U. Neumann, “Interactive multiresolution hair modeling and editing”, ACM Trans. Graph., vol. 21, no. 3, pp. 620-629, 2002.

[39] J. T. Chang, J. Jin, and Y. Yu, “A practical model for hair mutual interactions”, in Proceedings of the 2002 ACM SIGGRAPH/Eurographics symposium on Computer animation, pp. 73-80, San Antonio, Texas, 2002.

[40] E. Plante, M.-P. Cani, and P. Poulin, “A layered wisp model for simulating interactions inside long hair”, in Proceedings of the Eurographic workshop on Computer animation and simulation, pp. 139-148, Manchester, UK, 2001.

[41] T.-Y. Kim, and U. Neumann, “Opacity Shadow Maps”, in Proceedings of the 12th Eurographics Workshop on Rendering Techniques, pp. 177-182, 2001.

[42] S. Hadap, and N. Magnenat-Thalmann, “Modeling Dynamic Hair as a Continuum”, Computer Graphics Forum, vol. 20, no. 3, pp. 329-338, 2001.

[43] T. Lokovic, and E. Veach, “Deep shadow maps”, in Proceedings of the 27th annual conference on Computer graphics and interactive techniques, pp. 385-392, 2000.

[44] K. Anjyo, Y. Usami, and T. Kurihara, “A simple method for extracting the natural beauty of hair”, ACM SIGGRAPH Computer Graphics, vol. 26, no. 2, pp. 111-120, 1992.

[45] R. Rosenblum, W. Carlson, and E. Tripp, “Simulating the structure and dynamics of human hair: Modeling, rendering and animation”, The Journal of Visualization and Computer Animation, vol. 2, no. 4, pp. 141-148, 1991.

[46] J. T. Kajiya, and T. L. Kay, “Rendering fur with three dimensional textures”, SIGGRAPH Comput. Graph., vol. 23, no. 3, pp. 271-280, 1989.

[47] J. U. Brackbill, and H. M. Ruppel, “FLIP: A method for adaptively zoned, particle-in-cell calculations of fluid flows in two dimensions”, Journal of Computational Physics, vol. 65, no. 2, pp. 314-343, 1986.

[48] F. H. Harlow, “The Particle-in-Cell Method for Numerical Solution of Problems in Fluid Dynamics”, in Experimental Arithmetic, High-Speed Computations and Mathematics, pp. 269-269, 1963.

Appendix A

We give an informal explanation here of why making the velocities divergence free preserves the incompressibility of the fluid.

Figure 50: An arbitrary small region of fluid

Consider the case shown in Figure 50. The box is an arbitrary small region of fluid Ω, ∂Ω is its boundary surface, n̂ is the surface normal vector, and u⃗ is the velocity. The change of the volume of the region with respect to time is:

\[ \frac{d}{dt}\,\mathrm{volume}(\Omega) = \oiint_{\partial\Omega} \vec{u} \cdot \hat{n} \, dS \tag{6} \]

If the volume doesn't change, we get:

\[ \oiint_{\partial\Omega} \vec{u} \cdot \hat{n} \, dS = 0 \tag{7} \]

By the divergence theorem, the boundary flux equals the volume integral of the divergence:

\[ \oiint_{\partial\Omega} \vec{u} \cdot \hat{n} \, dS = \iiint_{\Omega} \nabla \cdot \vec{u} \, dV \tag{8} \]

so

\[ \iiint_{\Omega} \nabla \cdot \vec{u} \, dV = 0 \tag{9} \]

Since the region Ω is arbitrary, the integrand must equal zero everywhere. Hence we have:

\[ \nabla \cdot \vec{u} = 0 \tag{10} \]

We see that the incompressibility condition is equivalent to making the velocities divergence free.
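To make the flux argument above concrete, the following C++ program is a minimal numerical check (a hypothetical sketch, not part of the thesis code): it evaluates the boundary integral of equation (6) over a small cube for a divergence-free field and for an expanding field. The first produces essentially zero net flux, while the second produces a flux of three times the cube's volume, matching its divergence of 3.

// flux_check.cpp -- numerically evaluates the boundary flux in equation (6)
// for a small cube, illustrating that a divergence-free field produces no
// net volume change. Build: g++ -std=c++17 flux_check.cpp && ./a.out
#include <cstdio>
#include <functional>

struct Vec3 { double x, y, z; };

// Net flux of 'u' through the boundary of the axis-aligned cube [0,h]^3,
// approximated by sampling each of the six faces on an n-by-n grid.
double boundaryFlux(const std::function<Vec3(double,double,double)>& u,
                    double h, int n) {
    double flux = 0.0, dA = (h / n) * (h / n);
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) {
            double a = (i + 0.5) * h / n, b = (j + 0.5) * h / n;
            flux += ( u(h, a, b).x - u(0, a, b).x   // +x and -x faces
                    + u(a, h, b).y - u(a, 0, b).y   // +y and -y faces
                    + u(a, b, h).z - u(a, b, 0).z ) // +z and -z faces
                   * dA;
        }
    return flux;
}

int main() {
    const double h = 0.1; // cube edge length; volume is h^3 = 1e-3
    // Divergence-free field u = (y, z, x): div u = 0 -> zero net flux.
    double f0 = boundaryFlux([](double x, double y, double z) {
        return Vec3{y, z, x}; }, h, 64);
    // Expanding field u = (x, y, z): div u = 3 -> flux = 3 * volume.
    double f3 = boundaryFlux([](double x, double y, double z) {
        return Vec3{x, y, z}; }, h, 64);
    std::printf("div-free field flux:  %.6e (expected 0)\n", f0);
    std::printf("expanding field flux: %.6e (expected %.6e)\n", f3, 3 * h*h*h);
    return 0;
}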

Appendix B

We start from the split incompressible fluid equations without viscosity; see [11]. In these equations, q is a generic quantity such as velocity, density, or temperature, u⃗ is the velocity of the fluid, g⃗ is gravity, ρ is the density of the fluid, and p is pressure:

\[ \frac{Dq}{Dt} = 0 \tag{11} \]

\[ \frac{\partial \vec{u}}{\partial t} = \vec{g} \tag{12} \]

\[ \frac{\partial \vec{u}}{\partial t} + \frac{1}{\rho}\nabla p = 0, \qquad \nabla \cdot \vec{u} = 0 \tag{13} \]
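In code, this splitting corresponds to a simple per-step loop. The C++ skeleton below (a hypothetical sketch; the grid type and the substep bodies are placeholders, not names from the thesis implementation) shows the order of operations implied by the split equations; the rest of this appendix derives the projection substep.

// Order of operations for one time step of the split incompressible Euler
// equations (no viscosity), following [11].
struct MACGrid { /* staggered u, v, w face velocities and cell pressures */ };

void advect(MACGrid&, double /*dt*/)        { /* solve Dq/Dt = 0, eq. (11) */ }
void addBodyForces(MACGrid&, double /*dt*/) { /* u += dt * g, eq. (12) */ }
void project(MACGrid&, double /*dt*/)       { /* pressure solve, eq. (13) */ }

// Advect first, then add gravity, then project the result back onto the
// divergence-free space, so the projection sees the forces' contribution.
void step(MACGrid& grid, double dt) {
    advect(grid, dt);
    addBodyForces(grid, dt);
    project(grid, dt);
}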

Assume the time derivative of velocity is discretized as:

\[ \frac{\partial \vec{u}}{\partial t} = \frac{\vec{u}^{\,n+1} - \vec{u}^{\,n}}{\Delta t} \tag{14} \]

We substitute the derivative of velocity into the pressure equation (13):

\[ \frac{\vec{u}^{\,n+1} - \vec{u}^{\,n}}{\Delta t} + \frac{1}{\rho}\nabla p = 0 \tag{15} \]

We can rearrange the equation to get:

\[ \vec{u}^{\,n+1} - \vec{u}^{\,n} + \frac{\Delta t}{\rho}\nabla p = 0 \tag{16} \]

We take the divergence of both sides:

\[ \nabla \cdot \vec{u}^{\,n+1} - \nabla \cdot \vec{u}^{\,n} + \frac{\Delta t}{\rho}\nabla^2 p = 0 \tag{17} \]

Requiring the new velocity field to be divergence free yields the Poisson equation for the pressure that makes the velocity field divergence free:

\[ \frac{\Delta t}{\rho}\nabla^2 p = \nabla \cdot \vec{u}^{\,n} \tag{18} \]

On the staggered MAC grid, we use central differences to approximate the divergence of velocity in a fluid cell (i, j, k):

\[ (\nabla \cdot \vec{u})_{i,j,k} \approx \frac{u_{i+1/2,j,k} - u_{i-1/2,j,k} + v_{i,j+1/2,k} - v_{i,j-1/2,k} + w_{i,j,k+1/2} - w_{i,j,k-1/2}}{\Delta x} \tag{19} \]

Here ∆x is the width of a grid cell, and u, v, w are the velocity components stored on the cell faces.

Similarly, we approximate the Laplacian of pressure with the standard seven-point stencil:

\[ \nabla^2 p \approx \frac{p_{i+1,j,k} + p_{i-1,j,k} + p_{i,j+1,k} + p_{i,j-1,k} + p_{i,j,k+1} + p_{i,j,k-1} - 6\,p_{i,j,k}}{\Delta x^2} \tag{20} \]

Substituting (19) and (20) into (18) and multiplying both sides by -1 so that the diagonal coefficient is positive, we have the pressure equation in a fluid cell (i, j, k):

\[ \frac{\Delta t}{\rho} \cdot \frac{6\,p_{i,j,k} - p_{i+1,j,k} - p_{i-1,j,k} - p_{i,j+1,k} - p_{i,j-1,k} - p_{i,j,k+1} - p_{i,j,k-1}}{\Delta x^2} = -(\nabla \cdot \vec{u}^{\,n})_{i,j,k} \tag{21} \]

If cell (i+1, j, k) is air, we assume it is a free boundary with zero pressure, and then we set the term p_{i+1,j,k} to zero:

\[ \frac{\Delta t}{\rho} \cdot \frac{6\,p_{i,j,k} - p_{i-1,j,k} - p_{i,j+1,k} - p_{i,j-1,k} - p_{i,j,k+1} - p_{i,j,k-1}}{\Delta x^2} = -(\nabla \cdot \vec{u}^{\,n})_{i,j,k} \tag{22} \]

We can arrange this system of linear equations for the pressure into the matrix form A ∙ p = r, where each row of the sparse, symmetric matrix A contains the stencil coefficients of one fluid cell, p is the vector of unknown cell pressures, and r is the vector of negative divergences. Since we later subtract the gradient of pressure scaled by ∆t/ρ to obtain the divergence-free velocities, this factor appears on both sides of the equation and can be cancelled.
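As a concrete illustration, the following self-contained C++ sketch performs this pressure projection on a small 2D MAC grid. It is hypothetical code, not the thesis implementation: it assumes every cell is fluid with air outside (free boundaries) and solves A ∙ p = r with plain Jacobi iteration instead of a production preconditioned conjugate gradient solver.

// pressure_projection.cpp -- a minimal 2D MAC-grid pressure projection
// (the 2D analogue of equations (19)-(22)), solved with Jacobi iteration.
// Build: g++ -std=c++17 pressure_projection.cpp && ./a.out
#include <cmath>
#include <cstdio>
#include <vector>

const int N = 16;           // N x N fluid cells, air outside (p = 0)
const double dx = 1.0 / N;  // grid cell width
const double rho = 1.0, dt = 0.01;

// Staggered storage: u on vertical faces, v on horizontal faces, p in cells.
std::vector<double> u((N + 1) * N), v(N * (N + 1)), p(N * N, 0.0);
double& U(int i, int j) { return u[j * (N + 1) + i]; } // face (i-1/2, j)
double& V(int i, int j) { return v[j * N + i]; }       // face (i, j-1/2)
double& P(int i, int j) { return p[j * N + i]; }

// Pressure lookup with the free-boundary condition: air cells have p = 0.
double Pr(int i, int j) {
    return (i < 0 || i >= N || j < 0 || j >= N) ? 0.0 : P(i, j);
}

// Central-difference divergence in cell (i, j), the 2D version of (19).
double divergence(int i, int j) {
    return (U(i + 1, j) - U(i, j) + V(i, j + 1) - V(i, j)) / dx;
}

int main() {
    // A velocity field that is clearly not divergence free.
    for (int j = 0; j < N; ++j)
        for (int i = 0; i <= N; ++i) U(i, j) = 0.01 * i;

    // Jacobi iteration on the 2D version of (21)/(22):
    // (dt / (rho dx^2)) * (4 p_ij - sum of fluid neighbors) = -div_ij.
    const double scale = dt / (rho * dx * dx);
    std::vector<double> pNew(N * N);
    for (int iter = 0; iter < 2000; ++iter) {
        for (int j = 0; j < N; ++j)
            for (int i = 0; i < N; ++i) {
                double r = -divergence(i, j); // right-hand side entry of r
                double nb = Pr(i-1, j) + Pr(i+1, j) + Pr(i, j-1) + Pr(i, j+1);
                pNew[j * N + i] = (r / scale + nb) / 4.0;
            }
        p = pNew;
    }

    // Projection: subtract (dt/rho) * grad p from the face velocities.
    for (int j = 0; j < N; ++j)
        for (int i = 0; i <= N; ++i)
            U(i, j) -= dt / rho * (Pr(i, j) - Pr(i - 1, j)) / dx;
    for (int j = 0; j <= N; ++j)
        for (int i = 0; i < N; ++i)
            V(i, j) -= dt / rho * (Pr(i, j) - Pr(i, j - 1)) / dx;

    double maxDiv = 0.0;
    for (int j = 0; j < N; ++j)
        for (int i = 0; i < N; ++i)
            maxDiv = std::fmax(maxDiv, std::fabs(divergence(i, j)));
    std::printf("max |div u| after projection: %g\n", maxDiv);
}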

Appendix C

We list the tessellation control and evaluation shader code of the rendering pipeline in step (1) of the rendering stage in Figure 51 and Figure 52.

Figure 51: Tessellation control shader code of the B-spline tessellation step

layout(vertices = 4) out;
in vec3 vPosition[];
patch out vec3 tcTangent[3];
uniform vec2 tessLevelOuter;
#define ID gl_InvocationID
void main()
{
    gl_out[ID].gl_Position = vec4(vPosition[ID], 1);
    // Control point differences; combined with a quadratic B-spline basis
    // in the evaluation shader to form the curve tangent.
    tcTangent[0] = vPosition[1] - vPosition[0];
    tcTangent[1] = normalize(vPosition[2] - vPosition[1]);
    tcTangent[2] = vPosition[3] - vPosition[2];
    gl_TessLevelOuter[0] = tessLevelOuter[0];
    gl_TessLevelOuter[1] = tessLevelOuter[1];
}

Figure 52: Tessellation evaluation shader code of the B-spline tessellation step

layout(isolines, equal_spacing) in;
patch in vec3 tcTangent[3];
precise out vec3 teTangent;
void main()
{
    float u = gl_TessCoord.x, v = gl_TessCoord.y;
    vec4 p0 = gl_in[0].gl_Position, p1 = gl_in[1].gl_Position;
    vec4 p2 = gl_in[2].gl_Position, p3 = gl_in[3].gl_Position;
    float uu = u*u, uuu = uu*u;
    // Uniform cubic B-spline basis functions.
    float b0 = (1.0-u)*(1.0-u)*(1.0-u)/6.0;
    float b1 = (3.0*uuu - 6.0*uu + 4.0)/6.0;
    float b2 = (-3.0*uuu + 3.0*uu + 3.0*u + 1.0)/6.0;
    float b3 = uuu/6.0;
    // fma() keeps the evaluation order consistent ('precise') so shared
    // vertices of adjacent segments land at identical positions.
    precise vec4 outPos = fma(p0, vec4(b0), p1*b1) + fma(p3, vec4(b3), p2*b2);
    gl_Position = outPos;
    // Quadratic B-spline basis for the tangent (derivative of the cubic).
    float Bt[3];
    Bt[0] = 0.5*uu - u + 0.5;
    Bt[1] = -uu + u + 0.5;
    Bt[2] = 0.5*uu;
    teTangent = tcTangent[2]*Bt[2] + fma(tcTangent[0], vec3(Bt[0]), tcTangent[1]*Bt[1]);
}
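As a sanity check on the basis functions above, the following small C++ program (a hypothetical test harness, not part of the thesis code) verifies numerically that the cubic B-spline basis sums to one for all u (partition of unity), and that applying the quadratic basis Bt to the control point differences reproduces the derivative of the cubic curve, which is what Figure 52 relies on for the tangent.

// bspline_check.cpp -- verifies two properties of the bases in Figure 52.
// Build: g++ -std=c++17 bspline_check.cpp && ./a.out
#include <cmath>
#include <cstdio>

int main() {
    double maxSumErr = 0.0, maxTanErr = 0.0;
    const double P[4] = {0.3, 1.7, -0.8, 2.4}; // arbitrary 1D control points
    for (int k = 0; k <= 100; ++k) {
        double u = k / 100.0, uu = u * u, uuu = uu * u;
        // Cubic basis, exactly as in the evaluation shader.
        double b0 = (1-u)*(1-u)*(1-u)/6.0;
        double b1 = (3*uuu - 6*uu + 4)/6.0;
        double b2 = (-3*uuu + 3*uu + 3*u + 1)/6.0;
        double b3 = uuu/6.0;
        maxSumErr = std::fmax(maxSumErr, std::fabs(b0 + b1 + b2 + b3 - 1.0));

        // Tangent via the quadratic basis applied to point differences ...
        double Bt0 = 0.5*uu - u + 0.5, Bt1 = -uu + u + 0.5, Bt2 = 0.5*uu;
        double tan1 = (P[1]-P[0])*Bt0 + (P[2]-P[1])*Bt1 + (P[3]-P[2])*Bt2;
        // ... versus the analytic derivative of the cubic basis.
        double db0 = -(1-u)*(1-u)/2.0, db1 = 1.5*uu - 2*u;
        double db2 = -1.5*uu + u + 0.5, db3 = uu/2.0;
        double tan2 = P[0]*db0 + P[1]*db1 + P[2]*db2 + P[3]*db3;
        maxTanErr = std::fmax(maxTanErr, std::fabs(tan1 - tan2));
    }
    std::printf("max |sum(b)-1| = %g, max tangent mismatch = %g\n",
                maxSumErr, maxTanErr); // both should be ~1e-16
}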

We list the tessellation control and evaluation shader code of the rendering pipeline in step (2) of the rendering stage in Figure 53 and Figure 54.

Figure 53: Tessellation control shader code of the single strand interpolation step

layout(vertices = 1) out;
uniform vec2 tessLevelOuter;
in int vVertexID[];
patch out int tcVertexID;
void main()
{
    // Pass the vertex ID through to the evaluation shader and set the
    // tessellation levels; each patch corresponds to one hair segment.
    tcVertexID = vVertexID[0];
    gl_TessLevelOuter[0] = tessLevelOuter[0];
    gl_TessLevelOuter[1] = tessLevelOuter[1];
}

Figure 54: Tessellation evaluation shader code of the single strand interpolation step

uniform int numSegmentPerHair;
patch in int tcVertexID; // workaround: gl_PrimitiveID missing
out vec3 teTangent;
// ...
    int hairID = tcVertexID / numSegmentPerHair;
    int vertexIndex = 2*tcVertexID + vertexID;
    // Fixed 2D offset of this interpolated hair within its clump, expressed
    // in the key hair's local coordinate frame (yAxis, zAxis).
    vec2 coord = texelFetch(clumpCoord, interpHairID).xy;
    vec3 yAxis = texelFetch(coordFrame, hairID*2).xyz;
    vec3 zAxis = texelFetch(coordFrame, hairID*2 + 1).xyz;
    vec3 offset = yAxis*coord.x + zAxis*coord.y;
    // Taper the clump from rootWidth at the root to tipWidth at the tip.
    int vertexID2Root = vertexID + tcVertexID % numSegmentPerHair;
    float ratio = float(vertexID2Root) / float(numSegmentPerHair);
    offset *= clumpWidth * (rootWidth*(1.0-ratio) + tipWidth*ratio);
    vec3 vertPos = texelFetch(keyHairPos, vertexIndex).xyz;
    vertPos += offset;
    gl_Position = vec4(vertPos, 1.0);
    teTangent = texelFetch(hairTangent, vertexIndex).xyz;
}
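The per-vertex interpolation that Figure 54 performs can be summarized on the CPU as follows. This C++ sketch is hypothetical: it mirrors the shader's variable names (clumpU/clumpV stand in for coord.x/coord.y), but the data layout and the helper function are assumptions for illustration, not the thesis implementation.

// single_strand_interp.cpp -- CPU sketch of the interpolation in Figure 54:
// an interpolated hair vertex is its key hair vertex plus a 2D clump-space
// offset, rotated into the key hair's local frame and tapered root-to-tip.
// Build: g++ -std=c++17 single_strand_interp.cpp && ./a.out
#include <cstdio>

struct Vec3 { double x, y, z; };
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x+b.x, a.y+b.y, a.z+b.z}; }
Vec3 operator*(Vec3 a, double s) { return {a.x*s, a.y*s, a.z*s}; }

// keyPos       : key hair vertex position
// yAxis, zAxis : the key hair's local coordinate frame at that vertex
// clumpU/V     : fixed 2D offset of this interpolated hair within the clump
// ratio        : 0 at the root, 1 at the tip
Vec3 interpolateVertex(Vec3 keyPos, Vec3 yAxis, Vec3 zAxis,
                       double clumpU, double clumpV, double ratio,
                       double clumpWidth, double rootWidth, double tipWidth) {
    Vec3 offset = yAxis * clumpU + zAxis * clumpV;
    double taper = clumpWidth * (rootWidth * (1.0 - ratio) + tipWidth * ratio);
    return keyPos + offset * taper;
}

int main() {
    // A root vertex and a tip vertex of the same interpolated hair: the
    // offset shrinks toward the tip, pulling the clump together.
    Vec3 key{0, 0, 0}, yAxis{0, 1, 0}, zAxis{0, 0, 1};
    Vec3 root = interpolateVertex(key, yAxis, zAxis, 0.5, 0.5, 0.0, 1.0, 1.0, 0.2);
    Vec3 tip  = interpolateVertex(key, yAxis, zAxis, 0.5, 0.5, 1.0, 1.0, 1.0, 0.2);
    std::printf("root offset (%g, %g, %g), tip offset (%g, %g, %g)\n",
                root.x, root.y, root.z, tip.x, tip.y, tip.z);
}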
