Archive for February, 2008

## Make it translucid! (Part One)

As the name suggests, this technique is about translucency in materials. Look at your hand when it is between you and a strong light source: light seems to pass through it. Or look at a glass of milk: even though milk is opaque, light still softens its look. Light passing through objects: that’s what translucency is all about!

Simulating this effect in real time can be a heavy task. Up to now, the illumination models used in games, and in generic 3D software that displays meshes in real time, have been BRDF models. BRDF stands for Bidirectional Reflectance Distribution Function, and it defines the illumination of a material purely by the amount of light reflected at the surface. There are simple BRDFs, like the Phong model used by OpenGL, and more complex, physically based ones such as Ward or Cook-Torrance.
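To see what “reflection only” means in practice, here is a minimal per-fragment Phong-style BRDF in GLSL (a sketch: the vertex-shader side is omitted, and `uLightDir`, `uDiffuse`, `uSpecular` and `uShininess` are placeholder names):

```glsl
// Minimal per-fragment Phong-style BRDF (sketch; uniform names are placeholders)
varying vec3 vNorm;       // surface normal, eye space
varying vec3 vView;       // direction to the viewer, eye space

uniform vec3 uLightDir;   // direction to the light, eye space
uniform vec3 uDiffuse;    // diffuse reflectance
uniform vec3 uSpecular;   // specular reflectance
uniform float uShininess; // specular exponent

void main(void)
{
    vec3 N = normalize(vNorm);
    vec3 L = normalize(uLightDir);
    vec3 R = reflect(-L, N);                        // mirror direction of the incoming light
    float diff = max(0.0, dot(N, L));               // Lambertian term
    float spec = pow(max(0.0, dot(R, normalize(vView))), uShininess);
    gl_FragColor = vec4(uDiffuse * diff + uSpecular * spec, 1.0);
}
```

Note that every term here is light bouncing off the surface: nothing accounts for light entering the material.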

In both cases, those models can describe the majority of materials in real time, except the translucent ones. In a translucent material, light isn’t just reflected back in a diffuse and/or specular manner: it is also absorbed by the material itself, scattered inside it, and finally leaves the surface at a different position. This phenomenon is called Subsurface Scattering, and in order to simulate it we need the BSSRDF (Bidirectional Scattering-Surface Reflectance Distribution Function). Even though the BSSRDF is an approximation of the general Rendering Equation proposed by James T. Kajiya in 1986, it is still too heavy to be computed in real time.

During my traineeship at the Visual Computing Laboratory of the Italian National Research Council, we discussed a couple of possible techniques for implementing subsurface scattering in real time, and we ended up using Translucent Shadow Maps, which are both fast and pretty.

What this technique actually does is extend the concept of a classic shadow map by storing additional useful information in it, such as the incoming irradiance and the surface normals from the light’s point of view. To store all this we need at least two textures: the first holds the incoming irradiance (it’s colored, so we need three channels) AND the depth map, while the second holds the surface normals (x, y, z, so three channels again). On Shader Model 3.0 or newer we can do all of this in a single pass with Multiple Render Targets.
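The single-pass layout could be sketched like this in GLSL (an outline only: `depthLS`, `irradiance` and `normalLS` stand for the quantities derived in the three steps below, and their actual computation is only hinted at here):

```glsl
// Sketch of a single-pass TSM layout with Multiple Render Targets.
varying vec4 vPosLS;   // position in light space (projected)
varying vec3 vNormLS;  // normal in light space

void main(void)
{
    float depthLS   = vPosLS.z * 0.5 + 0.5;   // step 1: depth in [0,1]
    vec3 irradiance = vec3(0.0);              // step 2: E(x_in), computed as below
    vec3 normalLS   = normalize(vNormLS);     // step 3: surface normal

    gl_FragData[0] = vec4(irradiance, depthLS);        // RGB = irradiance, A = depth
    gl_FragData[1] = vec4(normalLS * 0.5 + 0.5, 1.0);  // RGB = packed normal
}
```

Writing to `gl_FragData[0]` and `gl_FragData[1]` requires two color attachments bound as draw buffers.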

## 1 – Creating the Depth Map

That’s a simple task. Just render the geometry in light space, then write the z coordinate of each projected vertex to the texture.

// --- Vertex shader ---
uniform mat4 WorldToLight; // world-to-light-space matrix

varying vec4 vPosLS;

void main(void)
{
    // vertex position in light space, after the perspective divide
    vPosLS = gl_ProjectionMatrix * WorldToLight * gl_Vertex;
    vPosLS /= vPosLS.w;

    gl_Position = vPosLS;
}

// --- Fragment shader ---
varying vec4 vPosLS;

void main(void)
{
    // depth remapped from [-1,1] to [0,1]; alpha = -1 flags covered texels
    gl_FragColor = vec4(vec3(vPosLS.z * 0.5 + 0.5), -sign(abs(vPosLS.x)));
}

## 2 – Compute the Irradiance Map

That’s a bit tricky. The irradiance incident at a surface point $x_{\mathrm{in}}$ is transmitted into the material according to the Fresnel term $F_{t}$:

$E(x_{\mathrm{in}}) = F_{t}(\eta, \omega_{\mathrm{in}})\, | N(x_{\mathrm{in}}) \cdot \omega_{\mathrm{in}} |\, I(\omega_{\mathrm{in}})$

In order to compute this, we need basically:

1. Per-pixel surface normals
2. Light incident direction (lightPosition - vertexPosition), normalized
3. A copy of the texture applied to the model

The Fresnel term $F_{t}$ can be computed with the following Schlick-style approximation:

$F_{t}(\eta, \omega_{\mathrm{in}}) \approx 1 - (1 - N \cdot V)^5 + \mu\,(1 - N \cdot V)^5$

Since we are observing the scene from the light’s point of view, V = L. Be sure to normalize both the normals and the light direction!

The last term in the first equation is the irradiance impulse, which basically describes the intensity of the light source. Since irradiance is wavelength dependent, multiply it by the texture image applied to the model. By varying the irradiance impulse you can control the intensity of the translucency.

// --- Vertex shader ---
uniform mat4 WorldToLight; // world-to-light-space matrix

varying vec4 vPosLS;  // vertex position in light space (before projection)
varying vec3 vNormLS; // normal direction in light space

void main(void)
{
    vPosLS = WorldToLight * gl_Vertex;
    vNormLS = (WorldToLight * vec4(gl_Normal, 0.0)).xyz;

    gl_Position = gl_ProjectionMatrix * vPosLS;
}

// --- Fragment shader ---
uniform vec3 lightpos;              // light position in light space
uniform float refr_index;           // refraction-dependent weight (mu)
uniform float irradiance_intensity; // irradiance impulse

varying vec4 vPosLS;
varying vec3 vNormLS;

void main(void)
{
    vec3 N = normalize(vNormLS);
    vec3 L = normalize(lightpos - vPosLS.xyz); // surface-to-light direction
    vec3 V = L; // from the light's point of view, V == L

    // Fresnel refraction (Ft)
    float k = pow(1.0 - max(0.0, dot(N, V)), 5.0);
    float Ft = 1.0 - k + refr_index * k;

    // incident light at the surface point
    vec3 E = vec3(Ft * max(0.0, dot(N, L))) * irradiance_intensity;

    // multiply E by the color texture here

    gl_FragColor = vec4(E, -sign(abs(N.x))); // alpha = -1 flags covered texels
}

## 3 – Surface Normals

Computing surface normals is also trivial: just project each vertex into light space along with its normal, then write the normal’s coordinates as an RGB output.

// --- Vertex shader ---
uniform mat4 WorldToLight; // world-to-light-space matrix

varying vec3 vNormLS;

void main(void)
{
    vNormLS = (WorldToLight * vec4(gl_Normal, 0.0)).xyz;

    gl_Position = gl_ProjectionMatrix * WorldToLight * gl_Vertex;
}

// --- Fragment shader ---
varying vec3 vNormLS;

void main(void)
{
    // normal remapped from [-1,1] to [0,1]
    gl_FragColor = vec4(normalize(vNormLS) * 0.5 + 0.5, 1.0);
}

To be continued… stay tuned!