Hardware-accelerated ambient occlusion

During my traineeship in Pisa I worked on a Qt/OpenGL plugin for the open-source software MeshLab. The plugin was not meant to run in real time, so quality was the main objective of the project. It uses the classical definition of ambient occlusion (often related to accessibility shading): each fragment is more or less exposed to ambient light depending on how much the surrounding geometry occludes it.
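As a rough illustration of the idea (not the plugin's actual code), accessibility at a point can be estimated by shooting rays over the hemisphere above it and counting how many escape the scene. The sketch below uses analytic sphere occluders and uniform hemisphere sampling around a fixed +z normal; all names and parameters are hypothetical.

```python
import math
import random

def hits_sphere(origin, direction, center, radius):
    """Return True if the ray origin + t*direction (t > 0) hits the sphere."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return False
    sq = math.sqrt(disc)
    return (-b - sq) / 2.0 > 1e-6 or (-b + sq) / 2.0 > 1e-6

def ambient_occlusion(point, spheres, samples=4000, seed=0):
    """Estimate accessibility at `point` (surface normal assumed to be +z):
    the fraction of hemisphere directions not blocked by any occluder.
    Cosine weighting is omitted for brevity."""
    rng = random.Random(seed)
    visible = 0
    for _ in range(samples):
        # Uniform direction on the upper hemisphere (Archimedes' hat-box:
        # uniform z gives uniform area on the sphere).
        z = rng.random()
        phi = 2.0 * math.pi * rng.random()
        r = math.sqrt(max(0.0, 1.0 - z * z))
        d = (r * math.cos(phi), r * math.sin(phi), z)
        if not any(hits_sphere(point, d, c, rad) for c, rad in spheres):
            visible += 1
    return visible / samples

# A fully open point is fully accessible.
print(ambient_occlusion((0.0, 0.0, 0.0), []))  # 1.0
# A sphere of radius 1 hovering 2 units above darkens the point.
print(ambient_occlusion((0.0, 0.0, 0.0), [((0.0, 0.0, 2.0), 1.0)]))
```

The second value should land near 0.87: the sphere subtends a 30° half-angle, covering a fraction (1 − cos 30°) ≈ 0.134 of the hemisphere.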

According to research conducted in 2000 by H.H. Buelthoff and M.S. Langer, humans use a more accurate model than simple "dark means deep" to perceive shape from shading under diffuse lighting, and ambient occlusion is a component of that model.

By integrating the visibility function over the hemisphere surrounding a given point, it is possible to determine how occluded that fragment is. This approach is quite heavy on the hardware, but it is one of the most accurate ways of computing ambient occlusion. While a CPU-only implementation could take minutes, using the GPU for everything (writing and accumulating data into a single texture) reduced the total time by an order of magnitude. Using multiple render targets makes it possible to work with very complex geometry, up to hundreds of millions of polygons. The final result is shown on this page.
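The accumulation strategy can be sketched on the CPU: render the scene from many directions, use a depth buffer to decide which vertices are visible from each direction, and average the per-direction visibility into a per-vertex occlusion value (on the GPU this average lives in a texture). The code below is a minimal analogue of that loop under simplifying assumptions (orthographic views, one vertex per depth-buffer cell); it is not the plugin's implementation, and all names are hypothetical.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def basis(d):
    """Two unit vectors spanning the plane perpendicular to view direction d."""
    ax = (1.0, 0.0, 0.0) if abs(d[0]) < 0.9 else (0.0, 1.0, 0.0)
    u = cross(ax, d)
    n = math.sqrt(dot(u, u))
    u = tuple(x / n for x in u)
    return u, cross(d, u)

def visible_from(verts, d, res=8, eps=1e-4):
    """CPU analogue of one depth-buffer pass: project vertices onto a
    res x res grid and keep, per cell, the vertex closest to a viewer
    at infinity along +d (what the GPU depth test computes)."""
    u, v = basis(d)
    uc = [dot(p, u) for p in verts]
    vc = [dot(p, v) for p in verts]
    depth = [dot(p, d) for p in verts]  # larger = closer to the viewer
    umin, vmin = min(uc), min(vc)
    du = (max(uc) - umin) or 1.0
    dv = (max(vc) - vmin) or 1.0
    cell = lambda a, amin, da: min(res - 1, int((a - amin) / da * res))
    best = {}                           # the "depth buffer"
    for i in range(len(verts)):
        key = (cell(uc[i], umin, du), cell(vc[i], vmin, dv))
        if key not in best or depth[i] > best[key]:
            best[key] = depth[i]
    return [depth[i] >= best[(cell(uc[i], umin, du), cell(vc[i], vmin, dv))] - eps
            for i in range(len(verts))]

def accumulate_ao(verts, directions, res=8):
    """Average per-direction visibility into a per-vertex occlusion value."""
    counts = [0] * len(verts)
    for d in directions:
        for i, vis in enumerate(visible_from(verts, d, res)):
            counts[i] += vis
    return [c / len(directions) for c in counts]

# Two parallel plates of points: each vertex is visible from exactly one
# of the two opposite view directions, so every accessibility value is 0.5.
plate = [(x, y, 0.0) for x in range(4) for y in range(4)]
top = [(x, y, 1.0) for x in range(4) for y in range(4)]
print(accumulate_ao(plate + top, [(0.0, 0.0, 1.0), (0.0, 0.0, -1.0)]))
```

The multiple-render-target trick mentioned above fits this picture naturally: when the per-vertex accumulator no longer fits in one texture, the vertex set is split across several targets written in the same pass.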

The home page of OpenGL.org featured news about the ongoing development and then the release of a new version of MeshLab, which included my plugin. A small satisfaction nonetheless 😉
