
Make it translucid! (Part Three)

Here it is, at last: the third and final part of this tutorial.

In the first part we focused on the creation of the TSM; in the second we saw the multiple scattering contribution to translucency. In this final part we are going to see how to filter the TSM.

The main idea behind TSM is that the thicker the object, the less light will be able to pass through it. Another important idea is that, since light (once it has penetrated the material) scatters in pseudo-random directions, it can leave the object from a point that does not coincide with the point where it entered.

To simulate the first behaviour, one can simply read the Z coordinate of a vertex in light space (let's call it X_{in}) and then compare it with the Z of another point (taken from the camera, then projected into light space, called X_{out}). Obviously, X_{out} has to lie on the line that ideally connects the light source with X_{in}: this means X_{out}.XY must be equal to X_{in}.XY. The only difference between the two points then lies in the Z value: by arbitrarily choosing X_{out} and projecting it into light space, X_{in} is obtained simply by reading the content of the TSM at X_{out}.XY!

TSM thickness idea
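To make the idea concrete, here is a minimal sketch of the thickness computation (just a sketch, not the tutorial's actual code: lightSpaceMatrix is an assumed name, while DepthBuff follows the naming used later in the comments):

   uniform sampler2D DepthBuff;     // depth map rendered from the light's point of view
   uniform mat4 lightSpaceMatrix;   // world space -> light projective space (assumed name)

   float thickness(vec4 worldPos)
   {
      vec4 Xout = lightSpaceMatrix * worldPos;      // the point seen by the camera, in light space
      vec3 ndc = (Xout.xyz / Xout.w) * 0.5 + 0.5;   // remap from [-1,1] to [0,1]
      float zIn = texture2D(DepthBuff, ndc.xy).r;   // Xin: depth at which the light entered
      return ndc.z - zIn;                           // distance travelled inside the object
   }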

That's basically what was discussed in the second part. The thickness is computed and used to modulate the light intensity in the Rd function. The latter, however, does not take scattering into account: it's as if the light entered at one point, simply "flew" through the object in a straight line, and finally left it in exactly the same direction it entered the material.

This problem is solved in this final part (thus implementing the second idea of TSM) by simulating pseudo-random scattering through filtering. Filtering an image basically consists in taking the whole image and applying some sort of algorithm to it to produce the final result. In this case we need to simulate scattering: to keep the algorithm real-time friendly, this behaviour is simplified by taking into account not only the point where the light enters or leaves, but also its neighbours. The farther a neighbour is from the original point, the less it contributes to the final result.

One can implement the filtering following any schema, though I've found the one from Dachsbacher's 2003 paper to be of high quality (and a little heavy on the GPU 😛 ). A graphical representation, consisting of a 21-sample schema, can be seen in the following image:

TSM Filtering Schema

The code used to implement the filtering is too long to be posted here in full, so I decided to post only the most important parts of it.

Filtering the TSM

vec4 filter21 (vec4 Xout)
{
   const float d = 1.0/1024.0;   // size of one texel in a 1024x1024 TSM
   vec4 finalColor = vec4(0.0, 0.0, 0.0, 0.0);
   vec4 Xin;
   //
   // Mipmap levels: samples farther from the center read smoother (higher) levels
   float v0 = tsm_smoothness*1.0;
   float v1 = tsm_smoothness*2.0;
   float v2 = tsm_smoothness*3.0;
   // Weights: the 9 central samples weigh 1/18 each, the other 12 weigh 1/24 each,
   // so that all 21 contributions sum up to 1.0
   float k0 = 0.05556; // 1.0/18.0
   float k1 = 0.04167; // 1.0/24.0
   float k2 = 0.04167; // 1.0/24.0
   //
   // Central sample: Xin coincides with Xout
   Xin = Xout;
   finalColor = multipleScattering(Xin, Xout, v0)*k0;
   // Inner samples: small axis-aligned offsets around the center
   Xin.y = Xout.y + d;
   Xin.x = Xout.x;
   finalColor += multipleScattering(Xin, Xout, v0)*k0;
   Xin.y = Xout.y + d + d;
   Xin.x = Xout.x;
   //
   […]
   //
   // Middle samples: diagonal offsets at 1.5 texels, smoother mip level v1
   Xin.y = Xout.y + 1.5*d;
   Xin.x = Xout.x - 1.5*d;
   finalColor += multipleScattering(Xin, Xout, v1)*k1;
   Xin.y = Xout.y + 1.5*d;
   Xin.x = Xout.x + 1.5*d;
   finalColor += multipleScattering(Xin, Xout, v1)*k1;
   //
   […]
   //
   // Outermost samples: axis-aligned offsets at 5 texels, smoothest mip level v2
   Xin.y = Xout.y + 5.0*d;
   Xin.x = Xout.x;
   finalColor += multipleScattering(Xin, Xout, v2)*k2;
   Xin.y = Xout.y;
   Xin.x = Xout.x + 5.0*d;
   finalColor += multipleScattering(Xin, Xout, v2)*k2;
   Xin.y = Xout.y - 5.0*d;
   Xin.x = Xout.x;
   finalColor += multipleScattering(Xin, Xout, v2)*k2;
   Xin.y = Xout.y;
   Xin.x = Xout.x - 5.0*d;
   finalColor += multipleScattering(Xin, Xout, v2)*k2;
   //
   return finalColor*tsmGainFactor;
}


A little bit of gain is added to the final color in order to increase the TSM contribution in the final rendering.

The whole TSM machinery is then simply invoked in the main function of your GLSL code through filter21(..). In order to correctly apply the translucency, the TSM's resulting color must be added to the base color. You can further improve the final look by taking into account the clamped-to-zero N dot L product, thus simulating dark zones not directly lit by the light source, or even ambient reflections on the main model..
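As a rough sketch, the final composition in the fragment shader could look like this (the varyings and the BaseTex sampler are assumed names, not the tutorial's actual code):

   varying vec3 vNormal;      // interpolated surface normal (assumed varying)
   varying vec3 vLightDir;    // direction towards the light source (assumed varying)
   varying vec2 vTexCoord;
   varying vec4 Xout;         // fragment position projected into light space (assumed)
   uniform sampler2D BaseTex; // base/diffuse texture (assumed name)

   void main()
   {
      float NdotL = max(dot(normalize(vNormal), normalize(vLightDir)), 0.0); // clamped N dot L
      vec4 baseColor = texture2D(BaseTex, vTexCoord) * NdotL;  // locally lit base color
      gl_FragColor = baseColor + filter21(Xout);               // add the TSM contribution on top
   }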

Thank you for reading my very first tutorial. I know there are some unclear parts: some were left that way on purpose, in order to give you greater flexibility. Lastly, excuse me if there are some grammar errors: I'm Italian and I haven't written long posts in English in a long time…

Bye!

  1. junskyman
    April 24, 2008 at 5:51 pm

    Thanks for the last part of this tutorial, it's quite helpful for me. Great job!
    For now I still have some questions that aren't clear to me:
    1) How do you keep pixels that don't show the object from diffusing into the result through the mip-map filter? The paper says by adding an alpha channel, but I cannot understand that very clearly;
    2) How do you account for the thickness? You say: “The thickness is computed and used to modulate light intensity in the Rd function”. Is the light intensity multiplied by the thickness directly, or by an exponential of the thickness?
    Can you enlighten me on these two questions? Thanks for your guidance!
    By the way, would you please send me the complete GLSL shader code of the part-three tutorial (about how to render using TSM), if possible? My email is skyman_2001@163.com, thank you very much in advance!

  2. eraser85
    April 24, 2008 at 6:48 pm

    Hi Junskyman, I’m glad you found the tutorial helpful.

    1) Actually, in my implementation I've never had the need to cut meaningless pixels off the textures. In any case, it's quite simple: when you render, say, the surface normals, you should not just leave the alpha channel at 1.0. Instead, you should write 1.0 when there is something useful in the r,g,b channels (e.g. fragments have been processed and the corresponding results are therefore written in the texture), and 0.0 when there isn't. In practice, you can simply check whether at that particular fragment there is geometry or just empty background: sign(abs(vCoord.x)) (where vCoord is the vertex coordinate, or even the normal direction, in any space) returns 0.0 if there is no vertex, 1.0 otherwise.
    So, in the example with surface normals, you just need to write gl_FragColor = vec4(N.x, N.y, N.z, sign(abs(N.x)) );
    Once this has been done, depending on your final implementation, you can either discard fragments according to the above alpha channel OR simply multiply the end result of a function by the same alpha channel (when alpha is 0.0, the corresponding output fragment will just be zero).
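    As a small sketch of the two options (where NormalTex and uv are assumed names, and Xin, Xout, lvl come from the code above):

    vec4 tap = texture2D(NormalTex, uv);
    // Option 1: discard fragments flagged as empty background
    if (tap.a < 0.5) discard;
    // Option 2: let the alpha flag zero out the contribution instead of branching
    vec4 contribution = multipleScattering(Xin, Xout, lvl) * tap.a;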

    2) The thickness is used inside the Rd function. If you take a quick look at the code you'll find:

    float dr = length(xr - Xout);
    float dv = length(xv - Xout);

    dr and dv are then used to compute f2, f3, f4 and f5, which are all multiplied together. So there is no need to multiply by the thickness factor again.
    Also, in my code in part 2, I did a bit of optimization by using pow(e, ….) instead of exp(….). While it should be the same, in practice it is not: ATI and nVidia implement the exponential function exp() in different ways, so I've done it by hand!
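    In code, the workaround looks something like this (a sketch; exponent stands in for whatever the real argument is):

    const float E = 2.718281828;   // Euler's number, written out by hand
    float f = pow(E, exponent);    // instead of exp(exponent)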

    Hope my reply helps you. Thank you for your comment!

    Bye!
    Michele

  3. junskyman
    April 26, 2008 at 11:58 am

    Hi Michele,
    Thank you for your explanation. It makes things much clearer!
    Finally, I have one little question to ask you:

    float k0 = 0.05556; // 1.0/18.0
    float k1 = 0.04167; // 1.0/24.0
    float k2 = 0.04167; // 1.0/24.0

    Here, what do ‘18.0’ and ‘24.0’ stand for? Why do you use them? Could you please explain? Thanks in advance!
    Hope for your reply.

    Bye!
    Junskyman

  4. eraser85
    April 26, 2008 at 12:11 pm

    Hi junskyman,
    glad to be helpful.

    k0, k1 and k2 are three constants used to scale each scattering sub-result so that the final sum lies in the range 0.0 … 1.0.

    k0 scales the first 9 sub-results: 9 times 1/18 gives a combined range of 0.0 … 0.5 (9/18 = 0.5)
    k1 scales the next 8 sub-results: their combined range is then 0.0 … 0.3333 (8/24 = 0.3333)
    k2 scales the last 4 sub-results: their combined range is 0.0 … 0.1667 (4/24 = 0.1667)

    Summing up all the sub-results gives a maximum value of 0.5 + 0.3333 + 0.1667 = 1.0. You can then multiply this value by whatever gain factor you need 🙂

    This way the first 9 sub-results (nearer the center, the Xout coordinate) weigh more than the others..

    Bye,
    Michele

  5. junskyman
    April 27, 2008 at 12:20 pm

    I see. Thank you very much, Michele!

  6. junskyman
    April 28, 2008 at 1:44 pm

    Hi Michele,
    I have a question to ask you again 🙂

    You said:
    “k1 scales the next 8 sub-results: their combined range is then 0.0 … 0.3333”

    But I think these 8 sub-results aren't all equal. The 4 outer samples are sparser than the 4 inner ones, aren't they?
    So I think their weights should not be the same. Am I right?

    Hope for your reply. Thanks a lot!
    Best wishes,
    junskyman

  7. eraser85
    April 28, 2008 at 4:56 pm

    Yes, you are right, although it's something you only notice if you are reading the code 🙂
    If you don't like this approximation, you can just introduce another constant (say, k1b) and use the following values:

    k0 = 0.05556; // 1.0/18.0 (9 samples)
    k1a = 0.06250; // 1.0/16.0 (4 samples)
    k1b = 0.04167; // 1.0/24.0 (4 samples)
    k2 = 0.02083; // 1.0/48.0 (4 samples)

  8. junskyman
    April 29, 2008 at 3:01 am

    Thanks a lot! You’re so kindhearted! 🙂

  9. junskyman
    April 30, 2008 at 10:41 am

    Hi Michele,
    I've come to consult you again. 🙂
    It seems that your GLSL code reads the Depth Map but doesn't use it for anything:

    “vec4 depthIN = texture2D(DepthBuff, Xin.xy, lvl);”

    So what is the Depth Map used for?
    Thanks a lot!

  10. eraser85
    April 30, 2008 at 12:07 pm

    Aehm.. 😀

    The code posted in the second part is actually not 100% of what I'm using. I made some changes in order to keep the code as independent of my implementation as possible, and while doing those changes I forgot to write the line you are missing 🙂

    We said that we take each point from the camera view (this is called Xout), and then use the same x-y coordinates but a different z (read from the previously stored depth buffer) in order to obtain Xin.

    The missing line is exactly this: Xin.z = depthIN.r;

    There's something I have to add, too: when you write data into a texture, if the latter is not floating point but integer (GL_RGBA8) all the stored values must be positive or at least equal to zero. But pay attention: reading from the depth buffer or computing the surface normals gives you numbers between -1.0 and 1.0.

    This is quite simple to overcome, but there are a number of different situations, each with its own particular solution.
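    For instance, one common remapping (a sketch of just one of those solutions; NormalTex is an assumed name) compresses [-1,1] into [0,1] on write and expands it back on read:

    // In the shader that fills the texture: compress [-1,1] into the [0,1] range of GL_RGBA8
    gl_FragColor = vec4(N * 0.5 + 0.5, 1.0);
    // In the shader that reads it back: expand to [-1,1] again
    vec3 N = texture2D(NormalTex, uv).xyz * 2.0 - 1.0;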

    While I could discuss them down here, I suppose it's better to start a new blog post, to keep this discussion cleaner.

  11. junskyman
    May 1, 2008 at 5:06 pm

    Thank you, Michele. I agree to start a new blog post for discussing. 😀
    Happy Labor Day! 🙂

  12. antharaton
    July 27, 2009 at 1:45 pm

    Hi Michele, thanks a lot for your tutorials. They were quite helpful, but I still have some questions left:
    1) Did you use the values for sigma_a, sigma_s and the refr_index from the Jensen et al. paper, as proposed by the TSM paper?
    2) Is tsm_smoothness the LOD for the mipmap? How do you get these values, and what is the multiplication for? Shouldn't you be using texture2DLod instead of texture2D?

    Hope for your reply. Thanks a lot!
    Bye,
    Antharaton

    • eraser85
      July 27, 2009 at 3:04 pm

      Hi Antharaton, thank you for using my tutorial.
      1) No, I used slightly different values in order to amplify the TSM effect. Here are the values I'm using:
      vec4 lightFreqAbsorption = vec4(0.2844, 0.2844, 0.2844, 1.0);
      float absorptionCoeff = 0.6;
      vec4 sigma_a = lightFreqAbsorption * absorptionCoeff;
      vec4 sigma_s = lightFreqAbsorption * (1.5 - absorptionCoeff);

      The reason I used those values is that I was experimenting with different textures and coefficients, so I needed a more “neutral” light frequency absorption that could also be easily increased and decreased (hence the separate absorption coefficient).. feel free to use whatever values you need (the ones in Jensen et al., for example)

      2) tsm_smoothness is indeed the parameter indicating the mipmap level 🙂 However, according to the official OpenGL GLSL documentation, you can use texture2DLod only in the vertex shader, not in the fragment shader.. you should get an error during shader compilation if texture2DLod is used in fragment code

      Edit: I forgot to say how I chose the values for the mipmapped textures… It's just an arbitrarily chosen value (in my case, 2.0). The greater this value, the smoother the filter becomes.. Based on personal experience, I would recommend not using values greater than 2.0: graphical artifacts start to appear, since you are sampling really small textures..
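      In practice, in the fragment shader the third argument of texture2D acts as an LOD bias, something like this sketch (TSMTex is an assumed name):

      vec4 tap = texture2D(TSMTex, Xin.xy, tsm_smoothness);   // bias towards smaller mip levels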

      Hope this helps.. bye 😉

      • antharaton
        July 27, 2009 at 3:26 pm

        Thanks for responding that fast.
        I'm still new to GLSL and couldn't find any useful information while googling on how to get the mipmap level in GLSL. How exactly did you compute the mipmap levels, or the value for tsm_smoothness? I'm using framebufferEXT to render the irradiance, normal and depth maps in one render pass, so when creating the textures I generate the mipmaps just by calling glGenerateMipmapEXT(GL_TEXTURE_2D).

        Bye,
        Antharton

  13. eraser85
    July 27, 2009 at 4:09 pm

    For automatic mipmap generation you can do 2 things:
    1) use gluBuild2DMipmaps instead of calling glTexImage2D.. or
    2) set glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);

    Keep in mind that gluBuild2DMipmaps does something behind the scenes with your texture.. in 70% of cases you don't even need to know that and can use gluBuild2DMipmaps without problems.. but if you see something strange related to your textures, just revert to the other method 😉

    I have also found this link, might be interesting to you: http://www.gamedev.net/community/forums/topic.asp?topic_id=495747

    See ya!

    • antharaton
      July 27, 2009 at 10:55 pm

      Thanks a lot for all your support and suggestions.
      I have noticed that Dachsbacher et al. use a local and a global response, for different values of depth. As I understood it, the local response is just a simplification of the global response computation to speed things up.
      So does your multipleScattering function compute both accurately?
      Considering the very last part of your tutorial, could I just use a Phong shader, compute the filter in the fragment part and add the result to the one from the Phong illumination?

      best regards,
      Antharaton

      • eraser85
        July 28, 2009 at 8:48 am

        Actually, when translucency occurs in a material, at any point on its surface you get light contributions from an indefinitely large number of light rays, which is really heavy to compute. That's because light scatters in almost-random directions while passing through the material.. the technique proposed by Dachsbacher et al. takes the object's “depth” into consideration and then applies some kind of approximation of the light coming out of the surface at any point. Just take a look at the first picture on this page for a better understanding.

        My multipleScattering function just computes the scattering contribution, according to the aforementioned approximation, at each filter sample: that's why you have to call multipleScattering 21 times with different offset values in filter21(), and 9 times in filter9() 😉

        One could think of it this way: when you compute filter21() for a single fragment in light space, you are calculating just one single ray of light entering the material and scattering inside it. Doing it for every light ray (i.e. processing the whole geometry as seen from the light's point of view) approximates all the light scattering in the material, giving you smoother shadows/illumination..

        As for the Phong illumination.. that's exactly what I'm doing right now 😛 Phong illumination is just the local light contribution, while scattering is the global one 😉 Global illumination quality can also be increased by using ambient occlusion and color bleeding

  14. antharaton
    July 30, 2009 at 2:50 am

    Hi,
    I've finally finished the implementation, but I'm not sure it works right…
    If I put a light source right behind the object (same position as the camera but negative z value) and start increasing the tsmGainFactor, the front side of my object starts becoming transparent and the back side of it becomes visible.
    Do you use more light sources to achieve your result? Can you rotate the object and still keep a nice-looking translucent object?

    Thanks for all your support,
    Antharaton

    • eraser85
      July 30, 2009 at 11:25 am

      As I already mentioned somewhere in my tutorial / comments, I've implemented TSM in RenderMonkey, where I don't have to write C/C++ code for FBOs, shader loading etc.. I just write GLSL code directly and that's it.

      Of course RenderMonkey has some problems, one of them preventing me from reading the light-space matrix (I don't remember the details exactly, but my colleague and I looked into it and haven't found a solution), so I can't have a moving light source.. that would require computing the aforementioned matrix manually.

      Moving to C/C++ you have all the flexibility you need to do it (and even better, I'm certain). More light sources just require you to compute the scattering for each light source and finally mix the results.
