Make it translucid! (Part Three)

April 4, 2008 19 comments

Finally, here is the third and final part of this tutorial.

In the first part we focused on creating the TSM; in the second we computed the multiple-scattering contribution to translucency. In this final part, we are going to see how to filter the TSM.

The main idea behind TSM is that the thicker the object, the less light can pass through it. Another important idea is that, once light has penetrated the material, it scatters in pseudo-random directions and can therefore leave the object at a point that does not coincide with the point where it entered.

In order to simulate the first behaviour, one can simply read the Z coordinate of a point in light space (call it $X_{in}$) and compare it with the Z of another point (seen by the camera, then projected into light space, called $X_{out}$). Obviously, $X_{out}$ has to lie on the line that ideally connects the light source with $X_{in}$: this means $X_{out}$.xy must be equal to $X_{in}$.xy. The only difference between the two points then lies in the Z value: after choosing $X_{out}$ and projecting it into light space, $X_{in}$ is obtained simply by reading the content of the TSM at $X_{out}$.xy!
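The thickness lookup can be sketched on the CPU like this (plain Python; the container and values are illustrative stand-ins for the TSM depth channel, not part of the shader code):

```python
# Thickness estimate from a translucent shadow map, CPU-side sketch.
# tsm_depth stands in for the depth channel stored in the TSM.

def thickness(tsm_depth, x_out_light):
    """x_out_light: (x, y, z) of the camera-visible point, projected
    into light space. The entry point X_in shares the same (x, y),
    so its depth is just a lookup into the TSM at that position."""
    z_in = tsm_depth[(x_out_light[0], x_out_light[1])]  # depth of X_in
    z_out = x_out_light[2]
    return z_out - z_in  # distance the light travels inside the object

# Toy TSM: at texel (3, 5) the light enters the surface at depth 0.2.
tsm = {(3, 5): 0.2}
print(thickness(tsm, (3, 5, 0.7)))  # prints the thickness (about 0.5)
```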

That is basically what was discussed in the second part. The thickness is computed and used to modulate the light intensity in the Rd function; the latter, however, does not take scattering into account: it is as if light entered at one point, simply flew through the object in a straight line, and finally left it in exactly the same direction it had when it entered the material.

This final part solves that problem (thus implementing the second idea of TSM) by simulating pseudo-random scattering through filtering. Filtering an image basically consists in taking the whole image and applying some algorithm to it to produce the final result. Here we need to simulate scattering: to keep the algorithm real-time friendly, we simplify the behaviour by taking into account not only the point where the light enters or leaves, but also its neighbours. The farther a neighbour is from the original point, the less it contributes to the final result.

Filtering can be implemented following any scheme, even though I have found the one from Dachsbacher's 2003 paper to be of high quality (and a little heavy on the GPU 😛 ). A graphical representation of this 21-sample scheme can be seen in the following image:
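The exact ring layout between the elisions in the code excerpt is not shown, but the three weights are consistent with a 9 + 8 + 4 = 21 sample split whose weights sum to one. This quick Python check (my reconstruction of the counts, not part of the original shader) verifies the normalization:

```python
# Plausible 21-sample layout for the TSM filter: a 3x3 block of nearest
# samples weighted 1/18, a middle ring of 8 samples weighted 1/24, and
# an outer ring of 4 samples weighted 1/24. The ring membership counts
# are my assumption; only the weights come from the shader constants.
k0, k1, k2 = 1.0 / 18.0, 1.0 / 24.0, 1.0 / 24.0
samples = 9 * [k0] + 8 * [k1] + 4 * [k2]

assert len(samples) == 21
total = sum(samples)
print(total)  # about 1.0: the kernel is normalized, so filtering preserves energy
```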

The full filtering code is too long to be posted here, so I decided to post only its most important parts.

Filtering the TSM

vec4 filter21 (vec4 Xout)
{
    const float d = 1.0/1024.0;
    vec4 finalColor = vec4(0.0,0.0,0.0,0.0);
    vec4 Xin;
    //
    float v0 = tsm_smoothness*1.0;
    float v1 = tsm_smoothness*2.0;
    float v2 = tsm_smoothness*3.0;
    float k0 = 0.05556; // 1.0/18.0
    float k1 = 0.04167; // 1.0/24.0
    float k2 = 0.04167; // 1.0/24.0
    //
    Xin = Xout;
    finalColor = multipleScattering(Xin, Xout, v0)*k0;
    Xin.y = Xout.y + d;
    Xin.x = Xout.x;
    finalColor += multipleScattering(Xin, Xout, v0)*k0;
    Xin.y = Xout.y + d + d;
    Xin.x = Xout.x;
    //
    […]
    //
    Xin.y = Xout.y + 1.5*d;
    Xin.x = Xout.x - 1.5*d;
    finalColor += multipleScattering(Xin, Xout, v1)*k1;
    Xin.y = Xout.y + 1.5*d;
    Xin.x = Xout.x + 1.5*d;
    finalColor += multipleScattering(Xin, Xout, v1)*k1;
    //
    […]
    //
    Xin.y = Xout.y + 5.0*d;
    Xin.x = Xout.x;
    finalColor += multipleScattering(Xin, Xout, v2)*k2;
    Xin.y = Xout.y;
    Xin.x = Xout.x + 5.0*d;
    finalColor += multipleScattering(Xin, Xout, v2)*k2;
    Xin.y = Xout.y - 5.0*d;
    Xin.x = Xout.x;
    finalColor += multipleScattering(Xin, Xout, v2)*k2;
    Xin.y = Xout.y;
    Xin.x = Xout.x - 5.0*d;
    finalColor += multipleScattering(Xin, Xout, v2)*k2;
    //
    return finalColor*tsmGainFactor;
}

A bit of gain is added to the final color in order to increase the TSM contribution in the final rendering.

The whole TSM pipeline is then simply invoked from the main function of your GLSL code through filter21(..). In order to correctly apply the translucency, the TSM's resulting color must be added to the base color. You can further improve the final look by taking into account the clamped-to-zero N · L product, thus simulating dark zones not directly lit by the light source, or even ambient reflections on the main model.
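That composition step can be sketched on single values like this (plain Python; the names and numbers are mine, purely illustrative):

```python
# Combining the TSM contribution with conventional shading, per channel.
# base_color, n_dot_l and tsm_color are illustrative stand-ins for the
# values a fragment shader would have at this point.

def shade(base_color, n_dot_l, tsm_color):
    diffuse = max(0.0, n_dot_l)          # clamped N . L: dark on the unlit side
    return [b * diffuse + t for b, t in zip(base_color, tsm_color)]

# A back-facing fragment (N . L < 0) keeps only the translucency term:
print(shade([0.8, 0.4, 0.3], -0.5, [0.2, 0.05, 0.02]))  # [0.2, 0.05, 0.02]
```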

Thank you for reading my very first tutorial. I know some parts are unclear: some were left that way on purpose, in order to give you greater flexibility. Lastly, excuse any grammar errors: I am Italian and I have not written long posts in English in a long time…

Bye!

Categories: OpenGL

Pictures of my new MacBook Pro

Hi all, I've added some pictures of my brand new MacBook Pro! Take a look at them, just click on my flickr pictures on the right..

Soon I'll post a little review of it, plus the third and final part of the TSM technique!

Categories: Personal

Make it translucid! (Part Two)

March 3, 2008 2 comments

Last time we discussed the approximations needed to compute translucency in real time using TSM. We also created a translucent shadow map, and now we are going to use it…

4 – Light diffuses through the material

The light entering the material diffuses in it according to the Rd equation (take a look at the figure on the right).

Thanks to the TSM computed in the last article, we only need two more pieces of information: $X_{in}$ and $X_{out}$. $X_{in}$ is the fragment coordinate (in light space) where light enters the material, while $X_{out}$ is the point from which the light leaves the object.

In practice, $X_{out}$ is the fragment coordinate as seen by the camera moving around the object, PROJECTED into light space. Once you get the fragment coordinate from the vertex shader, it is automatically projected into camera (or view) space: since we need to compare it with the data stored in the TSM, both must be in the same space (light space). The projection into light space is done by multiplying the view-space coordinate by the inverse of the camera's model-view-projection matrix: this yields the object-space coordinates, ready to be multiplied by the light's model-view-projection matrix.
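The chain of space changes can be sketched with toy matrices (pure Python; the "camera" here is a simple translation and the "light" a rotation, so everything is affine and the perspective divides a real renderer would need are omitted):

```python
# View space -> object space -> light space, pure-Python sketch.
# The matrices are illustrative, not from the tutorial's renderer.

def apply(m, v):
    """Multiply a 4x4 matrix (list of rows) by a 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

camera_inv = [[1, 0, 0, -2],  # inverse of "camera translates x by 2"
              [0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]]
light_mvp = [[0, 0, 1, 0],    # light looking down the +x axis
             [0, 1, 0, 0],
             [-1, 0, 0, 0],
             [0, 0, 0, 1]]

x_out_view = [3.0, 1.0, 0.0, 1.0]             # fragment as seen by the camera
x_out_object = apply(camera_inv, x_out_view)  # undo the camera's transform
x_out_light = apply(light_mvp, x_out_object)  # reproject for the light
print(x_out_light)  # [0.0, 1.0, -1.0, 1.0]
```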

Once we have projected $X_{out}$ into light space, calculating $X_{in}$ is just a matter of shifting $X_{out}$'s (x, y) coordinates by a delta (usually the size of a pixel: 1.0/resX and 1.0/resY if you want resolution/ratio independence). In this way we can compute translucency simply by filtering the previously calculated TSM. An elegant and also pretty fast solution, if you ask me! Of course, this method has some drawbacks: it assumes the object is completely convex, so some errors might occur, even though they are not visually important in the majority of cases…

The Rd function in GLSL:

vec4 multipleScattering (vec4 Xin, vec4 Xout, float lvl)
{
    vec4 finalColor = vec4(0.0,0.0,0.0,1.0);
    float e = 2.718281828459;
    /***************************/
    //irradiance, depth and normals must account for coordinate shifting!
    vec4 depthIN = texture2D(DepthBuff, Xin.xy, lvl);
    vec4 sNormIN = texture2D(SNormals, Xin.xy, lvl);
    //
    vec4 sigma_a = lightFreqAbsorbed * tsm_freqAbsorption;
    vec4 sigma_s = lightFreqAbsorbed * (1.5-tsm_freqAbsorption);
    //
    vec4 extinction_coeff = (sigma_a + sigma_s);
    vec4 reduced_albedo = sigma_s / extinction_coeff;
    vec4 effective_extinction_coeff = sqrt(3.0 * sigma_a * extinction_coeff);
    vec4 D = 1.0/(3.0*extinction_coeff);
    //
    float fresnel_diff = -(1.440/(refr_index*refr_index))+(0.710/refr_index)+0.668+(0.0636*refr_index);
    float A = (1.0+fresnel_diff)/(1.0-fresnel_diff);
    //
    vec4 zr = 1.0/extinction_coeff; //real dipole source depth
    vec4 zv = zr + 4.0*A*D;         //virtual dipole source depth
    //
    vec4 xr = Xin - zr * sNormIN;
    vec4 xv = Xin + zv * sNormIN;
    //
    float dr = length(xr - Xout);
    float dv = length(xv - Xout);
    //
    vec4 f1 = reduced_albedo/(4.0*3.1415926);
    vec4 f2 = zr * (effective_extinction_coeff * dr + 1.0);
    vec4 f3 = pow(vec4(e), -effective_extinction_coeff * dr) / (extinction_coeff * pow(dr,3.0));
    vec4 f4 = zv * (effective_extinction_coeff * dv + 1.0);
    vec4 f5 = pow(vec4(e), -effective_extinction_coeff * dv) / (extinction_coeff * pow(dv,3.0));
    //
    finalColor = f1 * ( f2 * f3 + f4 * f5);
    //
    return finalColor;
}
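For reference, the same dipole arithmetic ported to plain Python for a single wavelength. The material constants are illustrative, and the source depths are folded directly into the distances (instead of displacing points along the normal as the shader does), so this is a sanity check of the dipole math rather than a line-for-line port:

```python
import math

# Dipole diffusion term Rd for one wavelength, mirroring the shader's
# per-channel arithmetic. sigma_a, sigma_s and refr_index are
# illustrative values, not taken from the tutorial.

def rd(sigma_a, sigma_s, refr_index, dist):
    sigma_t = sigma_a + sigma_s                   # extinction coefficient
    albedo = sigma_s / sigma_t                    # reduced albedo
    sigma_tr = math.sqrt(3.0 * sigma_a * sigma_t) # effective extinction
    d = 1.0 / (3.0 * sigma_t)
    fdr = (-1.440 / refr_index**2 + 0.710 / refr_index
           + 0.668 + 0.0636 * refr_index)         # diffuse Fresnel term
    a = (1.0 + fdr) / (1.0 - fdr)
    zr = 1.0 / sigma_t                            # real source depth
    zv = zr + 4.0 * a * d                         # virtual source depth
    dr = math.hypot(dist, zr)                     # distance to real source
    dv = math.hypot(dist, zv)                     # distance to virtual source
    return (albedo / (4.0 * math.pi)) * (
        zr * (sigma_tr * dr + 1.0) * math.exp(-sigma_tr * dr) / (sigma_t * dr**3)
        + zv * (sigma_tr * dv + 1.0) * math.exp(-sigma_tr * dv) / (sigma_t * dv**3))

# The response falls off with distance from the entry point:
near, far = rd(0.1, 1.0, 1.3, 0.1), rd(0.1, 1.0, 1.3, 1.0)
print(near > far > 0.0)  # True
```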

Categories: OpenGL

My first notebook

A few days ago I ordered my first notebook ever. I'm quite excited, since I've chosen a nice beast without sacrificing mobility: an Apple MacBook Pro. Actually, I don't really care much about MacOS, even though I need that OS to test the applications I write on different OSes and hardware configs. One of the first things I'm going to do is install WinXP, since I've already got Vista on my desktop.

To all the people who might wonder what made me choose an Apple product… I simply reply that an Apple product is just like any other product. Yes, they are quite a bit more expensive, but that's because in commerce you ALWAYS pay more for better-looking products. Always.

That said, what made me take this notebook instead of, say, a Dell or Asus product? Well, the specs. Pure and simple. What I needed was a notebook with:

• A Penryn CPU, which is both fast and energy efficient (45nm; even better, the 3MB-cache version, which consumes even less)
• An nVidia 8600 with at least 256MB of dedicated VRAM, or if that was not available, an 8800 (the 8700 is actually still built with the old production process, which is way less efficient)
• A good quality screen (possibly not a TN panel), with LED back-light, in order to have better contrast and be less hungry on the battery
• As a plus, my ideal notebook would have had a BTO option for a 7200RPM hard drive (capacity isn't something I care about as long as it's greater than 60GB; an SSD option would just be too expensive)

I studied all the possible notebooks that could have met my standards, and actually there weren't any at all. 😮

The old base MacBook Pro had an LED back-lit display and the 8600 (but with just 128MB of VRAM), and no Penryn. The Dell XPSes had neither LED back-lit displays nor the Penryn; the Toshiba alternatives were even worse. The only two possible candidates were the Acer 5920 (the top version) and the Asus M50v.

While the latter was a good solution all around (a very nice touch was the nVidia 9500: DX10.1 with per-MRT blending, and so on), it didn't have a very good display (just 1280, without LED back-lighting); the former was similar, even though with "just" the nVidia 8600 and, like the M50v, no LED display (with a nice 1440 resolution, though). Actually I was more interested in the Asus, just because Acer products seem "cheap" or "not so well made".

I swore to myself: if Apple was not going to update their pro notebooks within a couple of days (I could hold on until Tuesday or Wednesday), I would take the Asus.

Fortunately, Apple made the long-awaited update and I took the base model with a 7200RPM hard drive. From the online reviews, I was also impressed with the display quality: it doesn't seem to be a TN panel, given the wide viewing angles and overall quality.

Now my shipment is in transit and I hope to get my hands on it by Tuesday, or Wednesday at most. As soon as it arrives, I'll take some shots and discuss the positive and negative aspects here.

W00t!

Categories: Personal Tags: , , , , , ,

Make it translucid! (Part One)

February 28, 2008 2 comments

My first post will be about OpenGL, specifically about an advanced shading technique proposed by C. Dachsbacher and M. Stamminger in 2003, named Translucent Shadow Maps.

As the name suggests, this technique is about translucency in materials. Look at your hand when it is between you and a strong light source: light seems to pass through it. Or look at a glass of milk: even though milk is opaque, light still softens its look. Light passing through objects: that's what translucency is all about!

Simulating this effect in real time can be a heavy task. Up to now, the illumination models used in games, or generally in 3D software that displays meshes in real time, have been BRDF models. BRDF stands for Bidirectional Reflectance Distribution Function, and it defines the illumination of a material by the amount of light reflected by its surface. There are simpler BRDFs, like the Phong model used by OpenGL, and more complex ones such as Ward or Cook-Torrance, which are physically based.

In both cases, those models are useful for describing the majority of materials in real time, except the translucent ones. In translucent materials, light isn't just reflected back in a diffuse and/or specular manner: it is also absorbed by the material itself, scattered inside it, and then released at a different position. This phenomenon is called Subsurface Scattering, and in order to simulate it we use the BSSRDF (Bidirectional Subsurface Scattering Reflectance Distribution Function). Even though the BSSRDF is an approximation of the general Rendering Equation proposed by James T. Kajiya in 1986, it is still too heavy to be computed in real time.

During my traineeship at the Visual Computing Laboratory of the Italian National Research Council, we discussed a couple of possible techniques for implementing subsurface scattering in real time, and we ended up using Translucent Shadow Maps, which are both fast and pretty.

What this technique actually does is extend the concept of a classic shadow map by adding other useful information to it, such as the incoming irradiance and the surface normals from the light's point of view. To store this kind of information we need at least two textures: one for the incoming irradiance (which is colored, so we need three channels) AND the depth map, while the second stores the surface normals (x, y, z, so three channels). On Shader Model 3.0 or newer we can actually do all of this in just one pass with Multiple Render Targets.

1 – Creating the Depth Map

That's a simple task: just render the geometry in light space, then save the z coordinate of each projected vertex to the texture.

//vertex shader
uniform mat4 WorldToLight;
varying vec4 vPosLS;

void main(void)
{
    //vertex position in light space
    vPosLS = gl_ProjectionMatrix * WorldToLight * gl_Vertex;
    vPosLS /= vPosLS.wwww;
    gl_Position = vPosLS;
}

//fragment shader
void main(void)
{
    //alpha = -1 marks texels actually covered by geometry
    gl_FragColor = vec4(vec3(vPosLS.z*.5+.5), -sign(abs(vPosLS.x)) );
}

2 – Compute the Irradiance Map

That's a bit tricky. The irradiance incident on a surface point $x_{in}$ is scattered into the material according to the Fresnel term $F_{t}$:

$E(x_{\mathrm{in}}) = F_{t}(\eta, \omega_{\mathrm{in}}) || N(x_{\mathrm{in}}) \cdot \omega_{\mathrm{in}} || I(\omega_{\mathrm{in}})$

In order to compute this, we basically need:

1. Per-pixel surface normals
2. Light incident direction (vertexPosition – lightPosition)
3. A copy of the texture applied to the model

The Fresnel term $F_{t}$ can be computed with the following approximation:

$F_{t}(\eta, \omega_{\mathrm{in}}) = 1 - (1 - N \cdot V)^5 + \eta \, (1 - N \cdot V)^5$

Since we are observing the scene from the light's point of view, V = L. Be sure to normalize both the normals and the light direction!

The last term in the first equation is the irradiance impulse, which basically describes the intensity of the light source. Since irradiance is wavelength dependent, multiply it by the texture image applied to the model. By varying the irradiance impulse you can control the intensity of the translucency.
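Numerically, the irradiance computation boils down to the following (plain Python; vectors and the refraction index are illustrative values, and the per-wavelength texture multiply is left out):

```python
# Irradiance stored in the TSM for one surface point: Fresnel
# transmission times the clamped cosine times the light intensity.

def irradiance(n, l, refr_index, intensity):
    n_dot_l = max(0.0, sum(a * b for a, b in zip(n, l)))
    k = (1.0 - n_dot_l) ** 5          # Schlick-style falloff term
    ft = 1.0 - k + refr_index * k     # Fresnel transmission approximation
    return ft * n_dot_l * intensity

# Light hitting the surface head-on (N == L, and V == L from the
# light's point of view) transmits at full strength:
print(irradiance((0.0, 0.0, 1.0), (0.0, 0.0, 1.0), 1.3, 1.0))  # 1.0
```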

//vertex shader
uniform mat4 WorldToLight;
varying vec4 vPosLS;
varying vec3 vNormLS;

void main(void)
{
    //vertex position in light space
    vPosLS = gl_ProjectionMatrix * WorldToLight * gl_Vertex;
    vPosLS /= vPosLS.wwww;
    //normal direction in light space
    vNormLS = (WorldToLight * vec4(gl_Normal,0.0)).xyz;
    gl_Position = vPosLS;
}

//fragment shader
uniform vec4 lightpos;
uniform float refr_index;
uniform float irradiance_intensity;

void main(void)
{
    vec3 N = normalize(vNormLS);
    vec3 L = normalize(vPosLS - lightpos).xyz;
    vec3 V = L; //from the light pov V==L
    // Fresnel refraction (Ft)
    float k = pow(1.0 - max(0.0, dot(N,V)), 5.0);
    float Ft = 1.0 - k + refr_index * k;
    // Incident light at the surface point
    vec3 E = vec3( Ft * max(0.0, dot(N,L)) ) * irradiance_intensity;
    //Multiply E by the color texture here
    gl_FragColor = vec4(E, -sign(abs(N.x)));
}

3 – Surface Normals

Computing surface normals is also trivial: just project each vertex into light space along with its normal, then write the normal's coordinates as an RGB output.
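The *0.5+0.5 packing used when writing the normals maps their [-1, 1] components into the [0, 1] texture range; decoding is the inverse mapping. A quick round-trip check in Python (the helper names are mine):

```python
# Round-trip of the normal packing used when storing normals in an
# RGB texture: [-1, 1] components are remapped into [0, 1] and back.

def encode(n):
    return tuple(c * 0.5 + 0.5 for c in n)

def decode(rgb):
    return tuple(c * 2.0 - 1.0 for c in rgb)

n = (0.0, 0.6, -0.8)   # a unit-length normal
packed = encode(n)     # all components now lie in [0, 1]
recovered = decode(packed)
print(recovered)       # matches the original normal up to rounding
```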

//vertex shader
uniform mat4 WorldToLight;
varying vec3 vNormLS; //vec3: only xyz are written

void main(void)
{
    vNormLS = (WorldToLight * vec4(gl_Normal,0.0)).xyz;
    gl_Position = gl_ProjectionMatrix * WorldToLight * gl_Vertex;
}

//fragment shader
void main(void)
{
    gl_FragColor = vec4(vNormLS*.5+.5, 1.0);
}

To be continued.. stay tuned!

Categories: OpenGL

Hello world!

Hi all,

I've changed my blog since I didn't like the old one much.. now things are getting hotter: I have lots more stories to tell, but also less time to write 🙂 My thesis, my work.. Gotta go to dinner, see ya next time!

Categories: Personal Tags: ,

The Wild Italy Expo’ 2007

Today, May 6, 2007, the Wild Italy Expo' was held in Ferrara: a fair/market featuring many different species of reptiles and arachnids, but also less impressive animals like mice, rabbits, chinchillas, etc..

The fair covered an area of about 5000 square meters and was laid out in a not exactly orderly fashion.. several stands were set up on which the exhibitors displayed some of the strangest, most curious or cutest animals they had.. and some were truly impressive!

They ranged from albino snakes (rather curious, though personally I'm not that interested in them.. I much prefer the dark, mean-looking ones 😀 😀 ) to giant 50 kg tortoises, from chinchillas running around their cages to chameleons confidently moving from one branch to another..

The spiders deserve a special mention. An English (or maybe Australian.. who knows) gentleman had set up four meters of table packed with spiders of all sizes and colors. They went from the classic little spider you find in the countryside, to dark hairy ones about 3 cm across, all the way to the undisputed queen of the exhibition: a very young, light, reddish specimen, covered in hair and as big as an adult man's hand! And the best part is that, fully grown, it can almost double in size! It cost about €100, but it would really have been worth it!

Moreover, while explaining the various species (strictly in English..), the gentleman decided to open a container and pick up one of these spiders (about 4 cm across) with his BARE HANDS!! Quite impressed, I asked him whether doing such a thing was safe, whether there was a risk of being bitten: he replied that most of those spiders come from South Africa and are mostly harmless, except for 5-6 specimens that are rather aggressive. In any case, he assured me that should one of those spiders bite a human, there is no need to go to the hospital, since within a couple of hours the pain fades away, and the venom with it. I don't think the same applies to the Black Widow.. 😀 😀

Fun fact: some of those spiders (say, those from 4.5 cm up) will gladly accept small rodents as a meal, in addition to the classic grasshoppers.. go figure 😀

In this case too, I refer you to the Flickr page with the photos of the fair.
See you next time!

Categories: Uncategorized