
Posts Tagged ‘OpenGL’

Multiple OpenGL rendering contexts

July 16, 2008

Hi all. I'm currently working on an application that “should” have several OpenGL areas, each drawn in a different window or in a more generic drawable surface (a panel, a group box, in practice anything that has a window handle).

But what I'm facing is demotivating me.

From what I'm reading, if you lose an OpenGL context, you also lose everything associated with it (textures, state changes, etc.):

This means that all the GL state is destroyed with it. This includes textures, among other things. More precisely, on Windows the textures are corrupted, whereas Linux handles it correctly.

Actually, in my application I have different contexts and simply switch between them with wglMakeCurrent, and everything works fine as long as I only render polygons with glColor.
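
For reference, this is more or less how I switch between the areas (a minimal sketch, assuming one DC/RC pair per window; drawScenePanel1/2 are placeholders for the actual drawing code, not functions from my application):

#include <windows.h>

extern void drawScenePanel1(void);   /* placeholder: draws the first area  */
extern void drawScenePanel2(void);   /* placeholder: draws the second area */

void renderAll(HDC hDC1, HGLRC hRC1, HDC hDC2, HGLRC hRC2)
{
   wglMakeCurrent(hDC1, hRC1);   /* the first window's context becomes current */
   drawScenePanel1();
   SwapBuffers(hDC1);

   wglMakeCurrent(hDC2, hRC2);   /* switch to the second window's context      */
   drawScenePanel2();
   SwapBuffers(hDC2);
}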

But as soon as I try to apply a texture (loaded with SDL_image), nothing happens. So I tried dumping the texture's raw data to a file and opening it in Photoshop, but the resulting image is completely gray (205/255).

I thought it could be the lost-context problem mentioned above, so I tried dumping the texture right after uploading its data with glTexImage2D, but I got the same gray image as before. Just to be sure, I checked that SDL_image was loading the image correctly, and it was…

I still have to check a couple of things… If you have some tips, please share them with me!

Bye

Input / Output with GPUs

It has been a while since my last blog post. Anyway, talking with Junskyman about the Translucent Shadow Map technique, a problem arose: how does the GPU handle data within textures?

Before dealing with that, it might be better to start with the basics, so let's talk about Input / Output with GPUs.

When you define a texture, you also specify its format, internal format and especially its type:

glTexImage2D( target, level, internalFormat, w, h, border, format, type, data );

With internalFormat you basically define how many channels the texture uses (and how they are stored), while format specifies the layout of the input data pointed to by data.

type, finally, tells the OpenGL pipeline how each component of the input data is to be interpreted (for example GL_UNSIGNED_BYTE or GL_FLOAT).
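
As a concrete example (a minimal sketch, not taken from any specific application; it assumes the ARB_texture_float extension for the 32-bit float internal format):

#include <GL/gl.h>
#include <GL/glext.h>   /* for GL_RGBA32F_ARB (GL_RGBA32F on GL 3.0+) */

/* uploads w*h RGBA texels stored as floats on the CPU side */
void uploadFloatTexture(int w, int h, const float *pixels)
{
   glTexImage2D(GL_TEXTURE_2D,    /* target                                     */
                0,                /* level: base mipmap                         */
                GL_RGBA32F_ARB,   /* internal format: 4 x 32-bit float channels */
                w, h,
                0,                /* border                                     */
                GL_RGBA,          /* format of the client-side data             */
                GL_FLOAT,         /* type of each component in that data        */
                pixels);
}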

From here on, one can decide to work exclusively on the GPU through shaders, or instead let the CPU handle some of the operations.

As for the CPU/GPU approach, reading data back from the GPU is usually done with glReadPixels(), which copies pixel data from VRAM into an array in system memory. Pay attention: if the array's type differs from the texture's type, you will read back artifacts and wrong data. With floating-point textures there are usually no restrictions on the output values (they can be positive or negative).

An integer texture type, instead, forces the data to be converted from floating point (if that is how the original data is stored in VRAM) to unsigned integers. Negative values, or values that do not fit in 8 bits, are simply clamped to the 0 … 255 range.
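
To make the difference concrete, here is a minimal read-back sketch (it assumes the buffer being read actually contains floating-point data, e.g. a GL_RGBA32F render target):

#include <GL/gl.h>
#include <stdlib.h>

void readBack(int w, int h)
{
   /* read as floats: values come back as they are, negatives included */
   float *fbuf = (float *)malloc(w * h * 4 * sizeof(float));
   glReadPixels(0, 0, w, h, GL_RGBA, GL_FLOAT, fbuf);

   /* read as unsigned bytes: values are converted and clamped to 0 .. 255 */
   unsigned char *bbuf = (unsigned char *)malloc(w * h * 4);
   glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, bbuf);

   free(fbuf);
   free(bbuf);
}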

The same applies to data processed with the shaders-only approach, but in that case the values are also normalized to the -1.0 … 1.0 range. Now, let's say we want to transfer negative data using integer textures: in that case we simply have to remap the original values according to the following equation:

vec4 convData = origData * 0.5 + 0.5;

A weird thing happened while using RenderMonkey, however. In RM, even if you set the texture's internal format to GL_RGBA32F, assigning negative data results in clamping. Therefore, compacting the data into the 0.0 … 1.0 range and then scaling it back to the -1.0 … 1.0 range is mandatory.
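
In other words, the data has to be packed before being written and unpacked after being read back (a trivial sketch of the two remappings):

// in the shader that writes the data:
vec4 convData = origData * 0.5 + 0.5;    // -1.0 … 1.0  ->  0.0 … 1.0

// after reading the data back:
vec4 restored = convData * 2.0 - 1.0;    //  0.0 … 1.0  -> -1.0 … 1.0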

In my past tutorial I made some changes to the source code, removing this conversion since it should not be needed in a real world application.

In the end, my advice is simple: always pay attention to the range of values your data has at any time, and use as few conversions as possible, since each one introduces errors.


Make it translucid! (Part Three)

April 4, 2008

Finally, here is the third and final part of this tutorial.

In the first one we focused on the TSM creation, then in the second one we saw the translucency’s multiple scattering contribution. In this final part, we are going to see how to filter the TSM.

The main idea behind TSM is that the thicker the object, the less light will be able to pass through it. Another important idea is that, since light (once it has penetrated the material) scatters in pseudo-random directions, it can leave the object from a point that does not coincide with the point where it entered.

In order to simulate the first behaviour, one can simply read the Z coordinate of one point in light space (let's call it X_{in}) and compare it with the Z of another point (seen by the camera, then projected into light space, called X_{out}). Obviously, X_{out} has to lie on the same line that ideally connects the light source with X_{in}: this means X_{out}.XY must be equal to X_{in}.XY. The only difference between the two points then lies in the Z value: by arbitrarily choosing X_{out} and projecting it into light space, X_{in} is simply obtained by reading the content of the TSM at X_{out}.XY!

TSM thickness idea
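
In GLSL, this first idea boils down to something like the following sketch (illustration only: it assumes Xout is already in light space and that DepthBuff is the depth texture built in Part One, which stores z*0.5+0.5):

float zIn       = texture2D(DepthBuff, Xout.xy).r * 2.0 - 1.0;   // X_in's light-space depth
float thickness = abs(Xout.z - zIn);   // the thicker the object, the less light gets through

In the actual implementation this depth difference is not computed explicitly: it is accounted for by the distance between X_{in} and X_{out} used inside the Rd function from Part Two.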

That is basically what was discussed in the second part. The thickness is computed and used to modulate the light intensity in the Rd function; the latter, however, does not take the scattering into account: it is as if the light entered at one point, simply flew through the object in a straight line, and finally left it in exactly the same direction it had when it entered the material.

This problem is solved in this final part (thus implementing the second idea of TSM) by simulating pseudo-random scattering through filtering. Filtering an image basically consists of taking the whole image and applying some algorithm to it to produce the final result. In our case we need to simulate the scattering: to keep the algorithm real-time friendly, we simplify this behaviour by taking into account not only the point where the light enters or leaves, but also its neighbours. The farther a neighbour is from the original point, the less it contributes to the final result.

One can implement the filtering following any sampling scheme, even though I have found the one from Dachsbacher's 2003 paper to be of high quality (and a little heavy on the GPU). A graphical representation can be seen in the following image, showing the 21-sample scheme:

TSM Filtering Schema

The code used to implement the filtering is just too long to be posted in full, so I decided to post only its most important parts.

Filtering the TSM

vec4 filter21 (vec4 Xout)
{
   const float d = 1.0/1024.0;   // one texel, assuming a 1024x1024 TSM
   vec4 finalColor = vec4(0.0,0.0,0.0,0.0);
   vec4 Xin;
   // blur level (texture LOD bias) used for near, middle and far samples
   float v0 = tsm_smoothness*1.0;
   float v1 = tsm_smoothness*2.0;
   float v2 = tsm_smoothness*3.0;
   // sample weights
   float k0 = 0.05556; // 1.0/18.0
   float k1 = 0.04167; // 1.0/24.0
   float k2 = 0.04167; // 1.0/24.0
   //
   Xin = Xout;
   finalColor = multipleScattering(Xin, Xout, v0)*k0;
   Xin.y = Xout.y + d;
   Xin.x = Xout.x;
   finalColor += multipleScattering(Xin, Xout, v0)*k0;
   Xin.y = Xout.y + d + d;
   Xin.x = Xout.x;
   //
   […]
   //
   Xin.y = Xout.y + 1.5*d;
   Xin.x = Xout.x - 1.5*d;
   finalColor += multipleScattering(Xin, Xout, v1)*k1;
   Xin.y = Xout.y + 1.5*d;
   Xin.x = Xout.x + 1.5*d;
   finalColor += multipleScattering(Xin, Xout, v1)*k1;
   //
   […]
   //
   Xin.y = Xout.y + 5.0*d;
   Xin.x = Xout.x;
   finalColor += multipleScattering(Xin, Xout, v2)*k2;
   Xin.y = Xout.y;
   Xin.x = Xout.x + 5.0*d;
   finalColor += multipleScattering(Xin, Xout, v2)*k2;
   Xin.y = Xout.y - 5.0*d;
   Xin.x = Xout.x;
   finalColor += multipleScattering(Xin, Xout, v2)*k2;
   Xin.y = Xout.y;
   Xin.x = Xout.x - 5.0*d;
   finalColor += multipleScattering(Xin, Xout, v2)*k2;
   //
   return finalColor*tsmGainFactor;
}


A little bit of gain is added to the final color in order to increase the TSM contribution in the final rendering.

The whole TSM computation is then simply invoked in the main function of your GLSL code through filter21(..). In order to correctly apply the translucency, the TSM result color must be added to the base color. You can further improve the final look by taking into account the clamped-to-zero N dot L product, thus simulating the dark zones not directly lit by the light source, or even ambient reflections on the main model.
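
For reference, the composition could look roughly like this (a sketch only: ColorMap, N, L and XoutLS are assumed to be the base texture, the surface normal, the light direction and the light-space coordinate already available in your shader):

vec4  base   = texture2D(ColorMap, gl_TexCoord[0].xy);   // base (diffuse) color
float NdotL  = max(dot(N, L), 0.0);                      // clamped-to-zero N dot L
gl_FragColor = base * NdotL + filter21(XoutLS);          // translucency added to the base color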

Thank you for reading my very first tutorial. I know there are some unclear parts: some details were left out on purpose, in order to give you greater flexibility. Lastly, excuse me if there are some grammar errors: I am Italian and I have not written long posts in English for a long time…

Bye!

Make it translucid! (Part Two)

March 3, 2008

Last time we discussed the approximations needed to compute translucency in real time using TSM. We also created a translucent shadow map, and now we are going to use it…

4 – Light diffuses through the material

The light entering the material diffuses within it according to the Rd equation (take a look at the figure on the right).

Thanks to the TSM computed in the last article, we only need two more pieces of information: X_{in} and X_{out}. X_{in} represents the fragment coordinate (in light space) where light enters the material, while X_{out} is the point from which the light leaves the object.

In practice, X_{out} is the fragment coordinate as seen by the camera moving around the object, PROJECTED into light space. Once you get the fragment coordinate from the vertex shader, it is automatically projected into camera (or view) space: since we need to compare it with the data stored in the TSM, both must be expressed in the same space (light space). The projection into light space is done by multiplying the view-space coordinate by the inverse of the camera's model-view-projection matrix: this gives us the object-space coordinates, ready to be multiplied by the light's model-view-projection matrix.
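
In GLSL this projection can be sketched as follows (the two matrices are assumed to be passed as uniforms; the names are mine, not part of any standard state):

uniform mat4 CameraMVPInverse;   // inverse of the camera's model-view-projection matrix
uniform mat4 LightMVP;           // the light's model-view-projection matrix

vec4 projectToLightSpace(vec4 posCameraSpace)
{
   vec4 posObj = CameraMVPInverse * posCameraSpace;   // back to object space
   vec4 posLS  = LightMVP * posObj;                   // re-project into light space
   return posLS / posLS.w;                            // perspective divide
}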

Once we have projected X_{out} into light space, calculating X_{in} is just a matter of shifting X_{out}'s (x, y) coordinates by a delta (usually the size of a texel: 1.0/resX and 1.0/resY if you want resolution/ratio independence). In this way we can compute the translucency simply by filtering the TSM previously calculated. An elegant and also pretty fast solution, if you ask me! Of course, this method has some drawbacks: it assumes the object is completely convex, so some errors might occur, even though they are not visually important in the majority of cases…

Rd function in GLSL:

vec4 multipleScattering (vec4 Xin, vec4 Xout, float lvl)
{
   vec4 finalColor = vec4(0.0,0.0,0.0,1.0);
   float e = 2.718281828459;
   /***************************/
   //irradiance, depth and normals must account for coordinate shifting!
   vec4 irradIN = texture2D(Irradiance, Xin.xy, lvl);
   vec4 depthIN = texture2D(DepthBuff, Xin.xy, lvl);
   vec4 sNormIN = texture2D(SNormals, Xin.xy, lvl);
   //
   // absorption and scattering coefficients
   vec4 sigma_a = lightFreqAbsorbed * tsm_freqAbsorption;
   vec4 sigma_s = lightFreqAbsorbed * (1.5-tsm_freqAbsorption);
   //
   vec4 extinction_coeff = (sigma_a + sigma_s);
   vec4 reduced_albedo = sigma_s / extinction_coeff;
   vec4 effective_extinction_coeff = sqrt(3.0 * sigma_a * extinction_coeff);
   vec4 D = 1.0/(3.0*extinction_coeff);
   //
   // diffuse Fresnel term and boundary condition A
   float fresnel_diff = -(1.440/(refr_index*refr_index))+(0.710/refr_index)+0.668+(0.0636*refr_index);
   float A = (1.0+fresnel_diff)/(1.0-fresnel_diff);
   //
   // distances of the real and virtual dipole sources from the surface
   vec4 zr = 1.0/extinction_coeff;
   vec4 zv = zr + 4.0*A*D;
   //
   // positions of the dipole sources along the surface normal at Xin
   vec4 xr = Xin - zr * sNormIN;
   vec4 xv = Xin + zv * sNormIN;
   //
   // distances from the dipole sources to the exit point Xout
   float dr = length(xr - Xout);
   float dv = length(xv - Xout);
   //
   vec4 f1 = reduced_albedo/(4.0*3.14159265);
   vec4 f2 = zr * (effective_extinction_coeff * dr + 1.0);
   vec4 f3 = pow(vec4(e), -effective_extinction_coeff * dr) / (extinction_coeff * pow(dr,3.0));
   vec4 f4 = zv * (effective_extinction_coeff * dv + 1.0);
   vec4 f5 = pow(vec4(e), -effective_extinction_coeff * dv) / (extinction_coeff * pow(dv,3.0));
   //
   finalColor = f1 * ( f2 * f3 + f4 * f5);
   //
   return irradIN*finalColor;
}

Make it translucid! (Part One)

February 28, 2008

My first post will be about OpenGL, specifically about an advanced shading technique proposed by C. Dachsbacher and M. Stamminger in 2003, named Translucent Shadow Maps.

As the name suggests, this technique is about translucency in materials. Look at your hand when it is between you and a strong light source: light seems to pass through it. Or look at a glass of milk: even though milk is opaque, light still softens its look. Light passing through objects: that's what translucency is all about!

Simulating this effect in real time can be a heavy task. Up to now, the illumination models used in games or in generic 3D software that displays meshes in real time have been BRDF models. BRDF stands for Bidirectional Reflectance Distribution Function, and it defines the illumination of a material solely by the amount of light reflected by the surface. There are simpler BRDFs, like the Phong model used by OpenGL, and more complex ones such as Ward or Cook-Torrance, which are physically based.

In both cases, those models are useful for describing the majority of materials in real time, except the translucent ones. In translucent materials, light is not just reflected back in a diffuse and/or specular manner; it is also absorbed by the material itself, scattered inside, and then released at a different position. This phenomenon is called Subsurface Scattering, and in order to simulate it we use the BSSRDF (Bidirectional Subsurface Scattering Reflectance Distribution Function). Even though the BSSRDF is an approximation of the general Rendering Equation proposed by James T. Kajiya in 1986, it is still too heavy to be computed in real time.

During my traineeship at the Visual Computing Laboratory of the Italian National Research Council, we discussed a couple of possible techniques for implementing subsurface scattering in real time, and we ended up using Translucent Shadow Maps, which are both fast and pretty.

What this technique actually does is extend the concept of a classic shadow map by adding other useful information to it, such as the incoming irradiance and the surface normals from the light's point of view. To store this kind of information we need at least two textures: one stores the incoming irradiance (which is colored, so we need three channels) AND the depth map, while the second stores the surface normals (x, y, z, so three channels). On Shader Model 3.0 or newer hardware we can do all of this in just one pass with Multiple Render Targets.
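
For reference, a minimal render-to-texture setup with Multiple Render Targets could look like the sketch below (it assumes the GL_EXT_framebuffer_object extension, floating-point textures and an extension loader such as GLEW; it is not the exact code used for this tutorial):

/* irradianceTex and normalsTex are created beforehand with glTexImage2D
   (e.g. GL_RGBA32F_ARB, so the depth can also be packed into a spare channel) */
void setupTSMTargets(GLuint fbo, GLuint irradianceTex, GLuint normalsTex)
{
   GLenum buffers[2] = { GL_COLOR_ATTACHMENT0_EXT, GL_COLOR_ATTACHMENT1_EXT };

   glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
   glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                             GL_TEXTURE_2D, irradianceTex, 0);
   glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT1_EXT,
                             GL_TEXTURE_2D, normalsTex, 0);
   glDrawBuffers(2, buffers);   /* both targets are written in a single pass */
}

With such a setup, a single fragment shader writes the irradiance to gl_FragData[0] and the normals to gl_FragData[1]; in the following steps the passes are shown separately.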

1 – Creating the Depth Map

That's a simple task: just render the geometry in light space, then save the z coordinate of each projected vertex to the texture.

Depth Map of an elephant

Vertex Shader:

varying vec4 vPosLS;

void main(void)
{
   //vertex position in light space
   vPosLS = gl_ProjectionMatrix * WorldToLight * gl_Vertex;
   vPosLS /= vPosLS.wwww;
   gl_Position = vPosLS;
}

Pixel Shader:

void main(void)
{
   // light-space depth remapped from [-1,1] to [0,1] and written into the RGB channels
   gl_FragColor = vec4(vec3(vPosLS.z*.5+.5), -sign(abs(vPosLS.x)));
}

2 – Compute the Irradiance Map

That's a bit trickier. The irradiance incident on a surface point x_{in} is scattered into the material according to the Fresnel term F_{t}:

E(x_{\mathrm{in}}) = F_{t}(\eta, \omega_{\mathrm{in}}) || N(x_{\mathrm{in}}) \cdot \omega_{\mathrm{in}} || I(\omega_{\mathrm{in}})

In order to compute this, we basically need:

  1. Per-pixel surface normals
  2. The light's incident direction (vertexPosition - lightPosition)
  3. A copy of the texture applied to the model

The Fresnel term F_{t} can be computed by the following approximation:

F_{t}(\eta, \omega_{\mathrm{in}}) = 1 - (1 - N \cdot V)^5 + \eta \, (1 - N \cdot V)^5

Since we are observing the scene from the light's point of view, V = L. Be sure to normalize both the normals and the light direction!

The last term in the first equation is the irradiance impulse, which basically describes the intensity of the light source. Since the irradiance is wavelength dependent, multiply it by the texture image applied to the model. By varying the irradiance impulse you can control the intensity of the translucency.

Irradiance map

Vertex Shader:

varying vec4 vPosLS;
varying vec3 vNormLS;

void main(void)
{
   //vertex position in light space
   vPosLS = gl_ProjectionMatrix * WorldToLight * gl_Vertex;
   vPosLS /= vPosLS.wwww;
   //normal direction in light space
   vNormLS = (WorldToLight * vec4(gl_Normal,0.0)).xyz;
   gl_Position = vPosLS;
}

Pixel Shader:

void main(void)
{
   vec3 N = normalize(vNormLS);
   vec3 L = normalize(vPosLS - lightpos).xyz;
   vec3 V = L; //from the light pov V==L
   // Fresnel Refraction (F)
   float k = pow(1.0 - max(0.0, dot(N,V)), 5.0);
   float Ft = 1.0 - k + refr_index * k;
   // Incident Light at surface point
   vec3 E = vec3( Ft * max(0.0, dot(N,L)) ) * irradiance_intensity;
   //Multiply E by the color texture here
   gl_FragColor = vec4(E, -sign(abs(N.x)));
}

3 – Surface Normals

Computing the surface normals is also trivial: just transform each normal into light space along with its vertex, then write the normal's coordinates as an RGB output.
Surface normals map

Vertex Shader:

varying vec3 vNormLS;   // vec3: it holds only the x, y, z components of the normal

void main(void)
{
   vNormLS = (WorldToLight * vec4(gl_Normal,0.0)).xyz;
   gl_Position = gl_ProjectionMatrix * WorldToLight * gl_Vertex;
}

Pixel Shader:

void main(void)
{
   // normals remapped from [-1,1] to [0,1] and written as RGB
   gl_FragColor = vec4(vNormLS*.5+.5, 1.0);
}

To be continued.. stay tuned!