UnityDevelop with iTween syntax autocompletion!

January 3, 2012 Leave a comment

I’ve begun working with Unity, what a great piece of software! Sadly, the integrated script editor isn’t much help since it doesn’t have any autocompletion feature at all. I’ve found, however, a pretty useful (yet unfinished and unpolished) editor which supports it, as well as many other little things that make our life a little bit better and easier.

iTween autocompletion

I’m talking about UnityDevelop, a script editor made by Flashbang Studios which is based on FlashDevelop by Mika Palmu and has been customized to better fit the Unity development process. In order for UnityDevelop to work properly, you have to download the Unity 3.3 Classes and unzip them into “C:\Program Files\UnityDevelop\Classes”.

Then, you will need the Javascript representation of iTween’s class structure. I’ve created it and made it public for you to use, so just grab it and enjoy! As with the Unity classes, just unzip it into the same folder.


Carbon fiber stuff!

November 7, 2011 Leave a comment

I’ve begun working with carbon fiber composites! I find it fascinating how just a few grams of fibers can be so strong and stiff. Since I don’t have much time and space, I don’t think I’ll be able to make more than one or two pieces a year... anyway, I’ll try to post my works here. I’ve just created a new page where I’m going to put everything composite-related 😉

The opening article is about a carbon/Kevlar protection for a longboard... happy reading!

My carbon/kevlar noseguard

How to count total lines of code in VS.NET

January 19, 2009 Leave a comment

I’ve found this neat little search string that can be used in VS.NET’s integrated Find and gives you the total line count of your current project / solution / file:

^~(:Wh@//.+)~(:Wh@\{:Wh@)~(:Wh@\}:Wh@)~(:Wh@/#).+

To use it, just press CTRL+SHIFT+F, select Use Regular Expressions and finally specify the file extensions in which you want to count the lines of code. 🙂

The total will be displayed at the end of the Find Results window.

Thanks go to Germán Schuager for the tip.

Multiple OpenGL rendering contexts

July 16, 2008 4 comments

Hi all. Currently I’m working on a piece of software which “should” have several OpenGL areas, each drawn in a different window or in a more generic drawable surface (a panel, a group box, etc.: anything that has a window handle, in practice).

But what I’m facing is demotivating me 😦

From what I’m reading, if you lose an OpenGL context, you also lose everything associated with it (textures, state changes, etc.):

This means that all the GL state is destroyed with it. This includes textures, among other things. More precisely, on Windows the textures are corrupted, whereas Linux handles it correctly.

Actually, in my application, I have different contexts and simply switch between them with wglMakeCurrent, and everything works fine as long as I just render polygons with glColor.

But as soon as I try to apply a texture (loaded with SDL_image), nothing happens. So I tried to dump the texture’s raw data to a file and open it with Photoshop, but the resulting image is completely gray (205/255) 😐

I thought it could be the lost-context problem mentioned above, so I tried to dump the texture as soon as I upload the data into it with glTexImage2D, but I got the same gray image as before. Just to be sure, I checked that SDL_image was loading the texture correctly, and that was the case…
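One thing I haven’t tried yet is sharing objects between the contexts explicitly. From what I’ve read, wglShareLists should make textures (and display lists) created in one rendering context visible to the other, so my setup would end up looking more or less like this (a simplified sketch with illustrative handle names, no pixel format setup or error checking):

   // Each drawable surface gets its own device context and rendering context.
   HDC   hdcA   = GetDC(hwndPanelA);
   HDC   hdcB   = GetDC(hwndPanelB);
   HGLRC hglrcA = wglCreateContext(hdcA);
   HGLRC hglrcB = wglCreateContext(hdcB);

   // Share textures/display lists between the two contexts.
   // This should be done before the shared objects are created.
   wglShareLists(hglrcA, hglrcB);

   // Rendering: switch the current context before drawing each area.
   wglMakeCurrent(hdcA, hglrcA);
   // ... draw the first area ...
   wglMakeCurrent(hdcB, hglrcB);
   // ... draw the second area ...
   wglMakeCurrent(NULL, NULL);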

I still have to check a couple of things… If you have some tips, please share them with me!

Bye

My brand new Nikon D60

May 17, 2008 Leave a comment

Hi there,

a few days ago I finally decided to get a digital SLR. At first, I was undecided between the Canon 400D and the Nikon D40x. After reading online reviews, I ended up choosing the latter.

One of the “features” I most appreciated in the D40x was the grain noise at high ISOs. While the D40x produced a slightly noisier image, this kind of noise is almost monochromatic, like using a high-sensitivity film on a classic SLR; the 400D, on the other hand, produces a more “colorful” noise.

Noise Comparison

One day I was passing by a shop near my house and saw the D40x at a very competitive price (compared to several online shops), so I asked whether it was possible to skip the 18-55mm kit lens and get just the body with the nice 18-135mm lens instead. The main reason I didn’t want the kit lens was its lack of image stabilization.

After a bit of talking, the shopkeeper asked me if I was interested in the newer Nikon D60, which was due to arrive in his shop in a couple of days. The D60 kit includes two stabilized lenses, an AF-S 18-55mm VR and an AF-S 55-200mm VR ED. Intrigued by both the price AND the lenses, I decided to wait for the D60 so I could take a closer look at it.

While I was waiting for the camera to arrive, I read a couple of online reviews, especially the one from dpreview.com, which helped me a lot in deciding what to buy.

In fact, when the guy from the shop told me the camera had arrived, I went there almost convinced to take it. Once there, he explained to me that the camera body had a three-year warranty, while each lens had an extraordinary four-year warranty! Amazed by the build quality and by the body’s size and weight... I ended up buying it 🙂

In the following days I started enjoying the camera and, even though it was the very first time I had used an SLR, I was surprised by the results. You can see them too on my flickr page (I also bought the flickr pro upgrade, so no more limitations on sets, image sizes, etc.).

I hope to rapidly improve my shooting skills; I really like taking photos 🙂

Bye!

Categories: Personal

The Wild Italy Expo’ 2008

May 11, 2008 Leave a comment

Hi there!

New photos have been added to my flickr.com account. This time they are from the Wild Italy Expo 2008 edition. Hope you will enjoy them 🙂 Bye!

PS: here’s the link to the set

Input / Output with GPUs

It has been a while since my last blog post. Anyway, talking with Junskyman about the Translucent Shadow Map technique, a problem arose: how does the GPU handle data within textures?

Before dealing with this, it might be better to start with the basics: let’s talk about Input / Output with GPUs.

When you define a texture, you also specify its format, internal format and, especially, its type:

glTexImage2D( target, level, iFormat, w, h, border, format, type, *data );

With the internal format you basically define how many channels (and how much precision) the texture uses, while format specifies the layout of the input data pointed to by *data.

Type, however, tells the OpenGL pipeline how the data associated with the texture is to be treated.
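As a concrete example (just a sketch: the 512x512 size and the data pointer are illustrative, and GL_RGBA32F_ARB assumes the ARB_texture_float extension is available), this is how a 4-channel floating point texture could be uploaded:

   GLuint tex;
   glGenTextures(1, &tex);
   glBindTexture(GL_TEXTURE_2D, tex);
   glTexImage2D(GL_TEXTURE_2D,    // target
                0,                // level: base mipmap
                GL_RGBA32F_ARB,   // internal format: 4 channels, 32-bit float each
                512, 512,         // width, height
                0,                // border
                GL_RGBA,          // format of the client data in *data
                GL_FLOAT,         // type of the client data in *data
                data);            // pointer to 512*512*4 floats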

From here on, one could decide to work exclusively on the GPU thanks to shaders, or instead let the CPU handle some of the operations.

As for the CPU/GPU approach, getting data back from the GPU is usually done with glReadPixels(): it copies pixel data located in VRAM (the currently bound read buffer, e.g. a texture attached to an FBO) into a client-side array. Pay attention: if the array’s type differs from the texture’s type, you will get artifacts and wrong data. Usually, with floating point textures, there are no restrictions on the output values (they can be both positive and negative).

An integer texture type, instead, forces the data to be converted from floating point (if that’s the case for the original data stored in VRAM) to unsigned integers: negative values, or values that don’t fit in 8 bits, are simply clamped to the 0 … 255 range.
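For example (again a sketch, assuming a 512x512 floating point render target is currently bound as the read buffer), the same pixels can be read back in two very different ways:

   // With GL_FLOAT the values come back as they are stored, positive or negative.
   float *dataF = new float[512 * 512 * 4];
   glReadPixels(0, 0, 512, 512, GL_RGBA, GL_FLOAT, dataF);

   // With GL_UNSIGNED_BYTE they are converted and clamped to the 0 ... 255 range:
   // anything negative becomes 0, anything >= 1.0 becomes 255.
   unsigned char *dataB = new unsigned char[512 * 512 * 4];
   glReadPixels(0, 0, 512, 512, GL_RGBA, GL_UNSIGNED_BYTE, dataB);

   delete[] dataF;
   delete[] dataB;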

The same thing applies to data processed using the shaders-only approach, but in that case the values are also normalized to the -1.0 … 1.0 range. Now, let’s say we want to transfer negative data using integer textures. In this case, we simply have to remap the original values according to the following equation:

vec4 convData = origData * 0.5 + 0.5;

A weird thing happened while using RenderMonkey, however: in RM, even if you set the texture format to GL_RGBA32F, assigning negative data results in clamping. Therefore, packing the data into the 0.0 … 1.0 range and then scaling it back to the -1.0 … 1.0 range is mandatory.
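On the CPU side this means that, after reading the packed data back, the mapping has to be undone. A minimal sketch (assuming the shader wrote origData * 0.5 + 0.5 into a 512x512 floating point render target, as above):

   float *pixels = new float[512 * 512 * 4];
   glReadPixels(0, 0, 512, 512, GL_RGBA, GL_FLOAT, pixels);

   // Map each value back from the 0.0 ... 1.0 range to the -1.0 ... 1.0 range.
   for (int i = 0; i < 512 * 512 * 4; ++i)
       pixels[i] = pixels[i] * 2.0f - 1.0f;

   // ... use the unpacked data ...
   delete[] pixels;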

In my past tutorial I made some changes to the source code and removed this conversion, since it should not be needed in a real-world application.

In the end, my advice is simple: always pay attention to the range of values your data can take at any time, and use as few conversions as possible, since each one introduces errors.

Categories: OpenGL

Make it translucid! (Part Three)

April 4, 2008 19 comments

Finally, here is the third and final part of this tutorial.

In the first part we focused on the TSM creation, then in the second one we saw translucency’s multiple scattering contribution. In this final part, we are going to see how to filter the TSM.

The main idea behind TSM is that the thicker the object is, the less light will be able to pass through it. Another important idea is that, since light (once it has penetrated the material) scatters in pseudo-random directions, it can leave the object from a point that does not coincide with the point where it entered.

In order to simulate the first behaviour, one can simply read the Z coordinate of one point in light space (we can call it X_{in}) and compare it with the Z of another point (taken from the camera, but then projected into light space, called X_{out}). Obviously, X_{out} has to lie on the same line that ideally connects the light source with X_{in}: this means X_{out}.XY must be equal to X_{in}.XY. Then, the only difference between those two points will lie in the Z value: by arbitrarily choosing X_{out} and projecting it into light space, X_{in} is simply obtained by reading the content of the TSM at X_{out}.XY!
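In formula form, using the notation above, the thickness seen by a given fragment is simply s = X_{out}.Z - X_{in}.Z, where X_{in}.XY = X_{out}.XY and X_{in}.Z is read from the TSM at X_{out}.XY.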

TSM thickness idea

That’s basically what has been discussed in the second part. The thickness is computed and used to modulate the light intensity in the Rd function; the latter, however, does not take scattering into account: it’s as if the light enters at one point, simply “flies” through the object in a straight line, and finally leaves it in exactly the same direction it had when it entered the material.

This problem is solved in this final part (thus implementing the second idea of TSM) by simulating pseudo-random scattering through filtering. Filtering an image basically consists in taking the whole image and applying some sort of algorithm to produce the final result. In this case we need to simulate the scattering: to keep the algorithm real-time friendly, this behaviour is simplified by taking into account not only the point where the light enters or leaves, but also its neighbours. The farther a neighbour is from the original point, the less it will contribute to the final result.

One can implement the filtering following any sampling scheme, even though I’ve found the one from Dachsbacher’s 2003 paper to be of high quality (and a little heavy on the GPU 😛 ). A graphical representation can be seen in the following image, showing a 21-sample scheme:

TSM Filtering Schema

The full filtering code is just too long to be posted here, so I decided to post only the most important parts of it.

Filtering the TSM

vec4 filter21 (vec4 Xout)
{
   const float d = 1.0/1024.0;   // sampling offset: one texel of a 1024x1024 map
   vec4 finalColor = vec4(0.0,0.0,0.0,0.0);
   vec4 Xin;

   // blur level (texture LOD bias) and weight used for each group of samples
   float v0 = tsm_smoothness*1.0;
   float v1 = tsm_smoothness*2.0;
   float v2 = tsm_smoothness*3.0;
   float k0 = 0.05556; // 1.0/18.0
   float k1 = 0.04167; // 1.0/24.0
   float k2 = 0.04167; // 1.0/24.0

   // central sample
   Xin = Xout;
   finalColor = multipleScattering(Xin, Xout, v0)*k0;

   // first group of samples (level v0, weight k0)
   Xin.y = Xout.y + d;
   Xin.x = Xout.x;
   finalColor += multipleScattering(Xin, Xout, v0)*k0;
   Xin.y = Xout.y + d + d;
   Xin.x = Xout.x;

   […]

   // second group of samples (level v1, weight k1)
   Xin.y = Xout.y + 1.5*d;
   Xin.x = Xout.x - 1.5*d;
   finalColor += multipleScattering(Xin, Xout, v1)*k1;
   Xin.y = Xout.y + 1.5*d;
   Xin.x = Xout.x + 1.5*d;
   finalColor += multipleScattering(Xin, Xout, v1)*k1;

   […]

   // third group of samples (level v2, weight k2)
   Xin.y = Xout.y + 5.0*d;
   Xin.x = Xout.x;
   finalColor += multipleScattering(Xin, Xout, v2)*k2;
   Xin.y = Xout.y;
   Xin.x = Xout.x + 5.0*d;
   finalColor += multipleScattering(Xin, Xout, v2)*k2;
   Xin.y = Xout.y - 5.0*d;
   Xin.x = Xout.x;
   finalColor += multipleScattering(Xin, Xout, v2)*k2;
   Xin.y = Xout.y;
   Xin.x = Xout.x - 5.0*d;
   finalColor += multipleScattering(Xin, Xout, v2)*k2;

   return finalColor*tsmGainFactor;
}


A little bit of gain is added to the final color in order to increase the TSM contribution in the final rendering.

The whole TSM computation is then simply invoked from the main function of your GLSL code through filter21(..). In order to correctly apply the translucency, the TSM’s resulting color must be added to the base color. You can further improve the final look by taking into account the clamped-to-zero N dot L term, thus simulating zones not directly lit by the light source, or even ambient reflections on the main model...

Thank you for reading my very first tutorial. I know there are some unclear parts: some were left that way on purpose, in order to give you greater flexibility. Lastly, excuse me if there are some grammar errors: I’m Italian and I haven’t written long posts in English for a long time…

Bye!

Pictures of my new Macbook Pro

March 13, 2008 Leave a comment

Hi all, I’ve added some pictures of my brand new Macbook Pro! Take a look at them: just click on my flickr pictures on the right...

Soon I’ll post a little review of it, along with the third and final part of the TSM technique!

Make it translucid! (Part Two)

March 3, 2008 2 comments

Last time we discussed the approximations needed to compute translucency in real time using TSM. We also created a translucent shadow map, and now we are going to use it…

4 – Light diffuses through the material

The light entering the material diffuses through it according to the Rd equation (take a look at the figure on the right).

Thanks to the TSM computed in the last article, we only need two more pieces of information: X_{in} and X_{out}. X_{in} represents the fragment coordinate (in light space) where light enters the material, while X_{out} is the point from which the light leaves the object.

In practice, X_{out} is the fragment coordinate as seen by the camera moving around the object, PROJECTED into light space. Once you get the fragment coordinate from the vertex shader, it is automatically projected into camera (or view) space: since we need to compare it with the data stored in the TSM, we must have both in the same space (light space). The projection into light space is done by multiplying the view-space-projected coordinate by the inverse of the camera’s model-view-projection matrix: by doing so we get the object space coordinates, ready to be multiplied by the light’s model-view-projection matrix.
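To sum it up in a single formula (with M the model matrix and V, P the view and projection matrices of the camera and of the light respectively):

X_{light} = (P_{light} V_{light} M) * (P_{cam} V_{cam} M)^{-1} * X_{cam}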

Once we have projected X_{out} into light space, calculating X_{in} is just a matter of shifting X_{out}‘s (x, y) coordinates by a delta (usually the size of a pixel: 1.0/resX and 1.0/resY if you want resolution/ratio independence). In this way we can compute translucency simply by filtering the previously calculated TSM. An elegant and also pretty fast solution, if you ask me! Of course, this method has some drawbacks: it assumes the object is completely convex, so some errors might occur, even though they are not visually important in the majority of cases…

Rd function in GLSL:

vec4 multipleScattering (vec4 Xin, vec4 Xout, float lvl)
{
   vec4 finalColor = vec4(0.0,0.0,0.0,1.0);
   float e = 2.718281828459;   // Euler's number, used for the exponentials below

   // irradiance, depth and normals must account for coordinate shifting!
   vec4 irradIN = texture2D(Irradiance, Xin.xy, lvl);
   vec4 depthIN = texture2D(DepthBuff, Xin.xy, lvl);
   vec4 sNormIN = texture2D(SNormals, Xin.xy, lvl);

   // absorption and scattering coefficients (per color channel)
   vec4 sigma_a = lightFreqAbsorbed * tsm_freqAbsorption;
   vec4 sigma_s = lightFreqAbsorbed * (1.5-tsm_freqAbsorption);

   vec4 extinction_coeff = (sigma_a + sigma_s);
   vec4 reduced_albedo = sigma_s / extinction_coeff;
   vec4 effective_extinction_coeff = sqrt(3.0 * sigma_a * extinction_coeff);
   vec4 D = 1.0/(3.0*extinction_coeff);

   // diffuse Fresnel approximation and boundary term A
   float fresnel_diff = -(1.440/(refr_index*refr_index))+(0.710/refr_index)+0.668+(0.0636*refr_index);
   float A = (1.0+fresnel_diff)/(1.0-fresnel_diff);

   // positions of the real and virtual dipole sources along the surface normal
   vec4 zr = 1.0/extinction_coeff;
   vec4 zv = zr + 4.0*A*D;
   vec4 xr = Xin - zr * sNormIN;
   vec4 xv = Xin + zv * sNormIN;

   // distances from the dipole sources to the exit point
   float dr = length(xr - Xout);
   float dv = length(xv - Xout);

   // Rd dipole diffusion terms
   vec4 f1 = reduced_albedo/(4.0*3.14159265);
   vec4 f2 = zr * (effective_extinction_coeff * dr + 1.0);
   vec4 f3 = pow(vec4(e), -effective_extinction_coeff * dr) / (extinction_coeff * pow(dr,3.0));
   vec4 f4 = zv * (effective_extinction_coeff * dv + 1.0);
   vec4 f5 = pow(vec4(e), -effective_extinction_coeff * dv) / (extinction_coeff * pow(dv,3.0));

   finalColor = f1 * ( f2 * f3 + f4 * f5);

   return irradIN*finalColor;
}