Bilateral Mesh Denoising

A few days ago, while looking for an asset in my archive, I unearthed all sorts of interesting things from the past. One thing that brought back lots of memories was a mesh I scanned some time ago. Not surprisingly for (poorly 😦 ) scanned data it was very noisy, so I dug up my snippets on mesh denoising and figured I'd write a small and quick blog post.

Bilateral filter

Generally, smoothing (or blurring) filters are good at reducing the high-frequency components of our data, and noise is, most of the time, unwanted high-frequency information. But blindly reducing high frequencies is not the greatest idea: we will lose important features of our data that are not supposed to be noise.

To smooth images whilst preserving edges, the seminal paper [Tomasi and Manduchi 1998] introduced the bilateral filter, which soon became a fundamental tool for anyone dealing with image filtering. The main idea is to have a kernel that adapts itself based on the content it is filtering; let's first write down the original formula:

$$\hat{I}(u) = \frac{\sum_{p \in N(u)} G_s(\|p - u\|)\; G_r(|I(p) - I(u)|)\; I(p)}{\sum_{p \in N(u)} G_s(\|p - u\|)\; G_r(|I(p) - I(u)|)} \qquad (1)$$

Here $p$ represents a point in the neighbourhood $N(u)$ of $u$ (the pixel we are filtering), $I$ is the image data and $G$ represents Gaussian kernels. $G_s$ is nothing new, it's just your classical blur kernel; the novel part is all in $G_r$, which is often referred to as the range filter.
This filter provides a scale to the kernel $G_s$: the higher the likelihood of an edge in the image (a big difference between neighbouring values), the smaller $G_s$ is made. The "minimum" amplitude of a preserved edge can be tuned by changing the standard deviation $\sigma_r$ of $G_r$: the bigger this standard deviation, the more the bilateral filter converges to a plain Gaussian blur. A good starting value for it is the mean gradient magnitude of the image.
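To make this concrete, here is a minimal brute-force sketch of the filter in Python/NumPy (the function name and parameter defaults are my own; production code would use a vectorized or separable approximation):

```python
import numpy as np

def bilateral_filter(img, sigma_s=2.0, sigma_r=0.1, radius=4):
    """Brute-force bilateral filter for a 2D grayscale image in [0, 1]."""
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            patch = img[y0:y1, x0:x1]
            yy, xx = np.mgrid[y0:y1, x0:x1]
            # Spatial kernel G_s: penalizes distance from the center pixel.
            gs = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2.0 * sigma_s ** 2))
            # Range kernel G_r: penalizes intensity difference (the edge-aware part).
            gr = np.exp(-((patch - img[y, x]) ** 2) / (2.0 * sigma_r ** 2))
            weights = gs * gr
            out[y, x] = (weights * patch).sum() / weights.sum()
    return out
```

Note that with a very large `sigma_r` the range kernel tends to 1 everywhere and the whole thing degenerates into a plain Gaussian blur, which matches the intuition above.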

[Figure: an image before and after bilateral filtering]

You can see here how the image has been smoothed out, but the strong features are still very well preserved.

This concept of edge-aware filtering has also been widely used in videogames, where it is not uncommon when we need to upsample low-resolution buffers. Being a rendering engineer in the videogame industry I am very, very interested in these applications, but they are not in the scope of this blog post, so here are some excellent reads: Angelo Pesce's blog, Bart Wronski's blog, Padraic Hennessy's GDC 2016 presentation (but you can find more 🙂 )

Behold the third dimension: Bilateral mesh denoising 

The bilateral filter was introduced for images, but nothing stops us from using it in other dimensions, such as on meshes! Two papers independently introduced this idea: [Jones et al. 2003] and [Fleishman et al. 2003]. I have direct experience with the latter, so I am going to refer to that one for the rest of the post.

Now, to apply the 2D algorithm in 3D, we need to map the concepts to the new domain. Obviously the set of vertices is the equivalent of the image, but what about the image values needed to detect features? The main idea was to consider the height of the vertices over their tangent planes as the equivalent of the intensity levels in the image:

[Figure: a vertex p with its tangent plane (blue), the vector to a neighbour q (red), and that vector's projection onto the normal of p (orange)]

Using my mad Paint skills I tried to illustrate the point. In blue we have the plane tangential to the vertex we want to filter, $p$; in red the vector from $p$ to a neighbour $q$; and finally in orange the projection of $q - p$ onto the normal of $p$ (i.e. the height w.r.t. the tangent plane). I will call this projection $h$ from now on.
This nice orange-y bit is exactly what we need to use the bilateral filter on our noisy mesh! We can in fact assume that in areas with high $h$ values we are more likely in the presence of a feature. Similarly, they assume that the noise-free mesh is well approximated by the tangent plane at $p$, so what we need to find for each vertex $v$ (up to now called $p$, but I like $v$ better now that we explicitly talk about vertices) is:

$$\hat{v} = v + d \cdot n$$

Where d is the result of the bilateral filter (1).  As the above implies, the algorithm is applied on a per-vertex basis and allows for multiple iterations.
Now referring to the formula I mentioned in the previous section for images, in 3D the bilateral filter becomes:

$$d = \frac{\sum_{q \in N(v)} G_s(\|v - q\|)\; G_r(|h|)\; h}{\sum_{q \in N(v)} G_s(\|v - q\|)\; G_r(|h|)}, \qquad h = \langle n,\, q - v \rangle$$

Where $N(v)$ is the neighbourhood of the considered vertex. This is best set in relation to the standard deviation $\sigma_r$ we use for the range filter $G_r$ (the one that penalizes large variations); in particular, it should be such that $\|v - q\| < 2\sigma_r$ for every $q \in N(v)$.

Think about it keeping in mind the correlation between image intensities and $h$. In an image a very strong difference in intensity usually indicates an edge, while noise that we can blur out usually shows up as smaller variations of the intensity. It all makes sense, so let's apply (1) and DONE!
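A single step of this per-vertex filter can be sketched like so (assuming we already have, for each vertex, its position, its unit normal and the positions of its neighbours; the naming is mine):

```python
import numpy as np

def denoise_vertex(v, n, neighbours, sigma_s, sigma_r):
    """One bilateral denoising step for a single vertex.

    v          : (3,) vertex position
    n          : (3,) unit normal at v
    neighbours : (k, 3) positions of the neighbouring vertices N(v)
    """
    num, den = 0.0, 0.0
    for q in neighbours:
        t = np.linalg.norm(q - v)            # spatial distance to the neighbour
        h = np.dot(n, q - v)                 # signed height over the tangent plane
        w = (np.exp(-t * t / (2.0 * sigma_s ** 2)) *   # spatial kernel G_s
             np.exp(-h * h / (2.0 * sigma_r ** 2)))    # range kernel G_r
        num += w * h
        den += w
    return v + (num / den) * n               # move v along its normal by d
```

Run this over every vertex, writing the results to a second buffer so the iteration order does not matter, and repeat for a few iterations.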

Nope.

A few issues remain. Firstly, we are relying on the normal at the vertex of the noisy mesh, and this can lead to unwanted results. We can solve this by mollifying the normal, that is, averaging the normals of the faces in the 1-ring neighbourhood of $v$, using the areas of said faces as weights. In rare cases we might need to consider a k-ring neighbourhood when mollifying normals.
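A sketch of that mollification, assuming an indexed triangle list (the linear scan over all faces is for clarity; real code would precompute vertex-to-face adjacency):

```python
import numpy as np

def mollified_normal(v_idx, vertices, faces):
    """Area-weighted average of the face normals in the 1-ring of vertex v_idx."""
    n = np.zeros(3)
    for f in faces:
        if v_idx in f:
            a, b, c = (vertices[i] for i in f)
            # The cross product's length is twice the triangle's area,
            # so summing the raw cross products already area-weights them.
            n += np.cross(b - a, c - a)
    return n / np.linalg.norm(n)
```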

This is not the only issue: bilateral filtering is not energy preserving. In images this is not so noticeable, or at least not so jarring to the eye; in meshes, however, it manifests as shrinking, which can be quite evident. And because you will likely need to apply multiple iterations of the algorithm to obtain good results, you are also more likely to create visible and serious shrinkage of the mesh. [Fleishman et al. 2003] mention some volume preservation technique that is either not described or, well, I did not pay enough attention to while implementing the algorithm. I experienced noticeable shrinkage, but for my use case it was an acceptable trade-off, so I did not investigate much further.

Here is a half-decent result I got (displayed with curvature to highlight details and features):

[Figure: denoising result, displayed with curvature]

On the left the noisy mesh, in the center the original and on the right the denoised result.
It could probably get better with finer tuning of the parameters, but it's not too bad.

Damn you box! More problems 

To add to the problems, let's consider a mesh with corner-like features such as the one below (any cuboid would do). If we have noise on the edges, we are in for some trouble!

[Figure: denoising of a mesh with sharp edges. From [Fleishman et al. 2003]]
The noise is not removed because it is wrongly classified as a feature. It's quite clear why if you remember that we are using the heights $h$ of the neighbours relative to the tangent plane of the vertex we are filtering. Sitting on top of a very strong feature, the $h$s are very big because they are essentially the sum of the genuine big value coming from the feature and the smaller value caused by the actual noise. The algorithm doesn't know we have extra noise; it just sees a big value and immediately infers an edge to preserve.

Another issue I see is that the algorithm relies on the local tangent plane always being a good approximation of the denoised mesh, since we push the vertices along the normal direction. I don't think this holds well at corners. As a result of this possibly wrong assumption you can end up smoothing or bending sharp edges.
[Figure: cube with bent edges after excessive denoising iterations]

Note that the cube on the right is the result of lots of unnecessary iterations, done only to exaggerate the issue.

Bilateral Normal Filtering

This corner issue was a big deal for me, so I started thinking about solutions. Apparently I came to a conclusion that is already exploited by many research papers: first-order normals are a better representation of the surface in this context than the actual vertex positions. With this idea and a bit of googling I found the paper [Lee and Wang 2005], which I used to improve on the basic algorithm.
In short, they still apply bilateral filtering as in [Fleishman et al. 2003], but they filter the normals:

$$\tilde{n}_i = \operatorname{normalize}\!\left(\sum_{j \in N(i)} G_s(\|c_i - c_j\|)\; G_r(|d_{ij}|)\; n_j\right)$$

Where $i$ is the face whose normal we are considering, $N(i)$ are the neighbouring faces of $i$, $c_i$ is the centroid of face $i$ and finally $d_{ij}$ is the equivalent of the height $h$ of the per-vertex version. In fact, here variations between normals are detected by projecting the normal we are filtering onto the difference between that normal and a sample in the neighbourhood:

$$d_{ij} = \langle n_i,\; n_i - n_j \rangle$$

Again, this makes sense, as a big $d_{ij}$ will occur in the presence of features of the mesh.
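Here is a sketch of how one normal would be filtered under this scheme (the naming is mine; `neighbours` holds the indices of the faces in N(i), and may include i itself):

```python
import numpy as np

def filter_normal(i, normals, centroids, neighbours, sigma_s, sigma_r):
    """Bilateral filtering of face normal i, in the spirit of [Lee and Wang 2005]."""
    ni, ci = normals[i], centroids[i]
    acc = np.zeros(3)
    for j in neighbours:
        d_ij = np.dot(ni, ni - normals[j])   # "range" term: variation between normals
        w = (np.exp(-np.sum((ci - centroids[j]) ** 2) / (2.0 * sigma_s ** 2)) *
             np.exp(-d_ij * d_ij / (2.0 * sigma_r ** 2)))
        acc += w * normals[j]
    return acc / np.linalg.norm(acc)         # re-normalize the filtered normal
```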

Ok, this is cool: we have nicely filtered normals, but I needed vertices! How do we get the denoised mesh? Well, we know that by definition the normal of a face is perpendicular to each edge of that face. This gives us a system of equations (the dot product of the filtered $\tilde{n}_i$ with each edge of its face must be zero) that we can solve with least-squares methods, which leads to an iterative vertex update:

$$v' = v + \frac{\lambda}{|F(v)|} \sum_{i \in F(v)} \tilde{n}_i\, \langle \tilde{n}_i,\; c_i - v \rangle$$

Where $F(v)$ is the set of faces incident to the vertex $v$, $\lambda$ is the iteration step size and $c_i$ the centroid of face $i$. For a full derivation I refer you to [Lee and Wang 2005].
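One such iteration for a single vertex can be sketched as follows (this is my reading of the update as a gradient-descent step; the naming is mine):

```python
import numpy as np

def update_vertex(v, face_normals, face_centroids, lam=1.0):
    """Pull v towards satisfying n_i . (edge) = 0 for every incident face i."""
    step = np.zeros(3)
    for n, c in zip(face_normals, face_centroids):
        step += n * np.dot(n, c - v)   # residual of v against face i's plane
    return v + lam * step / len(face_normals)
```

As with the vertex version, this is applied to every vertex of the mesh and can be iterated a few times.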

[Figure: from [Lee and Wang 2005]. Center is the result from [Fleishman et al. 2003], right is the result from filtering normals ([Lee and Wang 2005])]
Still not perfect, but pretty good!

I’d love to detail this algorithm further, but I am afraid the post is already too long. If you are interested in it, please let me know and I will write another short post (or edit this one).


This is not cutting-edge denoising tech, nor is it a fair review of all the methods based on bilateral filtering 🙂 I just found this seminal paper interesting and wanted to do a quick and dirty write-up of my very short journey into mesh denoising, during which the methods described above were good enough. Hopefully you found this remotely interesting and will keep on researching the plethora of available evolutions of the method; believe me, with the right keywords you’ll be able to find a massive number of papers. I might add a few links as an appendix later on.

Thanks for reading!

-Francesco  (@FCifaCiar)
