Derivative of Gaussians steerability

I am writing this short post on a small, interesting, old fact that I found is unknown to some (probably few): derivatives of Gaussian kernels are steerable… wait, stee-what?

Steerable filters

In computer vision and image processing it is not uncommon to find ourselves in need of oriented filters to analyze responses at different orientations.

We call “steerable filters” [Freeman and Adelson 1991] a class of filters in which a filter for an arbitrary orientation can be synthesized as a linear combination of basis filters. To put this into equations, if a class of filters is steerable we can write:

f^{\theta}(x, y) = \sum_{i=1}^{N} k_i(\theta)\, f_i(x, y)

where f^θ is our oriented filter, the f_i are the basis filters and the factors k_i(θ) are what we call “steering functions”.
Not only can we easily create filters with arbitrary orientation, we can actually skip that step and directly obtain their responses if we already have the responses of the basis filters. This is because convolution (here with an image I) distributes over the linear combination:

I * f^{\theta} = \sum_{i=1}^{N} k_i(\theta)\, (I * f_i)

This is great! It means that if we need outputs for lots of orientations, we don’t have to compute lots of expensive convolutions, but only as many as the number of filters in our basis.

Derivative of Gaussians steerability

One very useful class of filters that is steerable is the Derivative of Gaussians (DoG).

Derivatives of Gaussians are used a lot in vision/image processing as a way to detect edges (or, more generally, features). I would write more about this, but it would turn this into a lengthier post than I want, so I’ll just throw out two things:

  • They are great to compute gradients and therefore to detect discontinuities. With a large standard deviation we get poor localization (bad at finding the exact location of the edge) but good detection (good at detecting genuine edges). With a low standard deviation we get the opposite: good localization, but poor detection.
  • Differences of Gaussians at different scales can be used to approximate a Laplacian of Gaussian (LoG), while second-order derivatives can be used to actually compute the Laplacian. LoG filters are extremely useful to detect blobs.

So this is what the first-order derivatives of a Gaussian look like in the x and y directions:

[Figure: first-order Gaussian derivative filters in the x and y directions]

These two nice guys are one possible choice of basis filters to compute a DoG oriented however we like. But what are the steering functions? It’s really simple: if we want the filter oriented along a direction θ, the steering functions are simply:
k_1(\theta) = \cos\theta, \qquad k_2(\theta) = \sin\theta

Therefore, using the first equation in the post, our oriented DoG is obtained with:

G^{\theta} = \cos(\theta)\, G_x + \sin(\theta)\, G_y

I’ve put a short proof of this in Appendix A at the bottom of the post.

With these filters we can now compute, for example, the gradient in any given direction:

[Figure: image gradient responses steered to a few different orientations]

which could be useful, for example, in feature detection scenarios and I am sure in other cases that don’t come to my mind now 🙂
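To make this concrete, here is a minimal NumPy/SciPy sketch of the whole recipe (the image and σ are placeholders): we pay for the two basis convolutions once, then steer the responses to as many orientations as we like.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_basis(image, sigma=2.0):
    """Responses to the two basis filters: first-order Gaussian derivatives."""
    gx = gaussian_filter(image, sigma, order=(0, 1))  # derivative along x (columns)
    gy = gaussian_filter(image, sigma, order=(1, 0))  # derivative along y (rows)
    return gx, gy

def steer(gx, gy, theta):
    """Steer the basis responses to direction theta: cos(t)*Gx + sin(t)*Gy."""
    return np.cos(theta) * gx + np.sin(theta) * gy

# Two convolutions in total, then any number of orientations for free.
img = np.random.rand(256, 256)  # stand-in for a real image
gx, gy = dog_basis(img, sigma=2.0)
oriented = [steer(gx, gy, t) for t in np.linspace(0.0, np.pi, 8, endpoint=False)]
```

By linearity, steering the responses like this gives exactly the same result as building the oriented filter first and convolving with it.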

Note: All of the above extends to higher derivative orders.

Additional use cases and further readings

I wanted to keep this post short as this is just a nice little property. Even so, I have probably treated it too briefly; there are other interesting facts and use cases. Here is a quick list of possible use cases for steerable filters (not limited to DoG) and links to papers you could read to learn more about them.

  • Feature detection: I have briefly illustrated how to use steerable filters to find edges in a specific direction. Steerable filters are also useful, for example, in junction detection [Perona 92] or in detecting the orientation of an image [Freeman and Adelson 1991]. Plenty of other papers in this field can be found (e.g. [Jacob and Unser 2004] and many many more).
  • Scene relighting: An interesting usage of steerable functions can be seen in surface relighting. One of the first examples of steerable light sources is from [Teo et al. 1997]; they steered light sources to relight a static scene, however using a very large (probably impractical) basis.
    Another very interesting (at least historically) example is the steerable illumination textures of [Ashikhmin and Shirley 2002]; they calculate a basis of “baked” illumination, steered to obtain the final result on bumpy surfaces from an arbitrary light position without any cross-fading. They use a smaller basis than [Teo et al. 1997].
  • Image stitching: Steerable filters have been used to aid image stitching, where finding the orientation of corners can help. This has been done for “classical” image mosaics [Subramanyam 2013] or with a stronger focus on satellite images [Li et al. 2007].
  • Image filtering: [Freeman and Adelson 1991] shows how to remove noise from an image while enhancing oriented structures. [Gotsman 1994] used steerable Gaussians for texture filtering.
  • Motion detection:  Steerable filters are also used to aid in motion detection algorithms. I don’t have a specific paper to link, but steerable filters are often used, implicitly or not.

That is all! It’s probably not the most interesting or incredibly obscure thing, but I figured I’d post it in the wild nonetheless 🙂 I hope you enjoyed it at least a bit.

’til the next one!

-Francesco  (@FCifaCiar)

Appendix A: Derivation of the steering functions for Derivative of Gaussians

Let’s first express the first-order derivative of a Gaussian in the x direction in polar coordinates as:

G_x(r, \varphi) = g(r)\,\cos\varphi, \qquad g(r) = -\frac{r}{\sigma^{2}}\, e^{-r^{2}/(2\sigma^{2})}

If we then shift φ by π/2 we obviously obtain the derivative in the y direction:

G_y(r, \varphi) = g(r)\,\cos\!\left(\varphi - \frac{\pi}{2}\right) = g(r)\,\sin\varphi

Now, let’s rotate the filter by an arbitrary angle θ; using trigonometric identities we obtain:

G^{\theta}(r, \varphi) = g(r)\,\cos(\varphi - \theta) = g(r)\,\left(\cos\varphi\cos\theta + \sin\varphi\sin\theta\right)

G^{\theta} = \cos(\theta)\, G_x + \sin(\theta)\, G_y

which is what I mentioned before 🙂


Bilateral Mesh Denoising

A few days ago, while looking for an asset in my archive, I unearthed all sorts of interesting things from the past. One thing that brought back lots of memories was a mesh I scanned some time ago. Not surprisingly for (poorly 😦 ) scanned data it was very noisy, so I dug up my snippets on mesh denoising and figured I’d write a small and quick blog post.

Bilateral filter

Generally, smoothing (or blurring) filters are good at reducing the high-frequency components of our data and, well, noise is most of the time unwanted high-frequency information. But blindly reducing high frequencies is not the greatest idea: we would lose important features of our data that are not noise.

To smooth images whilst preserving edges, the seminal paper [Tomasi and Manduchi 1998] introduced the bilateral filter, which has since become a fundamental tool for anyone dealing with image filtering. The main idea is to have a kernel that adapts itself based on the content it is filtering; let’s first write down the original formula:

\hat{I}(u) = \frac{1}{W(u)} \sum_{p \in N(u)} G_{\sigma_s}(\lVert p - u\rVert)\; G_{\sigma_r}(\lvert I(p) - I(u)\rvert)\; I(p), \qquad W(u) = \sum_{p \in N(u)} G_{\sigma_s}(\lVert p - u\rVert)\; G_{\sigma_r}(\lvert I(p) - I(u)\rvert)

Here p represents a point in the neighbourhood N(u) of u (the pixel we are filtering), I is the image data and the G terms are Gaussian kernels. The spatial term G_σs is nothing new, it’s just your classical blur kernel; the novel part is all in the range filter G_σr(|I(p) − I(u)|).
This term scales the contribution of the spatial kernel: the higher the likelihood of an edge in the image (a big difference between neighbouring values), the smaller the weight becomes. The “minimum” amplitude of an edge to be preserved can be tuned by changing the standard deviation σ_r: the bigger this standard deviation, the more the bilateral filter converges to a normal Gaussian blur. A good value for it is the mean gradient magnitude of the image.
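For reference, here is a small, deliberately unoptimized NumPy sketch of the filter above for a grayscale image in [0, 1]; the σ values and window radius are placeholders to tune per image.

```python
import numpy as np

def bilateral_filter(img, sigma_s=3.0, sigma_r=0.1, radius=6):
    """Brute-force bilateral filter for a 2D grayscale image in [0, 1]."""
    h, w = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))      # spatial kernel G_sigma_s
    padded = np.pad(img, radius, mode="reflect")
    for y in range(h):
        for x in range(w):
            window = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            rng = np.exp(-(window - img[y, x])**2 / (2.0 * sigma_r**2))  # range filter
            weights = spatial * rng
            out[y, x] = np.sum(weights * window) / np.sum(weights)
    return out
```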

[Figure: bilateral filtering applied to a noisy image]

You can see here how the image has been smoothed out, but the strong features are still very well preserved.

This concept of edge-aware filtering has also been widely used in videogames, where it is commonly needed to upsample low-resolution buffers. Being a rendering engineer in the videogame industry I am very, very interested in these applications, but they are not in the scope of this blog post, so here are some excellent reads: Angelo Pesce’s blog, Bart Wronski’s blog, Padraic Hennessy’s GDC 2016 presentation (but you can find more 🙂).

Behold the third dimension: Bilateral mesh denoising 

The bilateral filter was introduced for images, but nothing stops us from using it in other domains, such as meshes! Two papers independently introduced this idea: [Jones et al. 2003] and [Fleishman et al. 2003]. I have direct experience with the latter, so I am going to refer to that one for the rest of the post.

Now, to apply the 2D algorithm to 3D, we need to “map” the concepts to the new domain. Obviously the set of vertices is the equivalent of the image, but what about the image values needed to detect features? The main idea was to consider the heights of vertices over their tangent plane as the equivalent of the intensity levels in the image:

[Figure: tangent plane at the vertex p, the vector to a neighbour q, and its projection h onto the normal of p]

Using my mad Paint skills I tried to illustrate the point. In blue we have the plane tangent to the vertex we want to filter, p; in red the vector between p and a neighbour q; and finally in orange the projection of that vector onto the normal of p (i.e. the height w.r.t. the tangent plane). I will call this projection h from now on.
This nice orange-y bit is exactly what we need to use the bilateral filter on our noisy mesh! We can in fact assume that in areas with high h values we are more likely to be in the presence of a feature. Similarly, they assume that the noise-free mesh is well approximated by the tangent plane at p, so what we need to find for each vertex v (up to now called p, but I like v better now that we explicitly talk about vertices) is:

\hat{v} = v + d\, n

Where d is the result of the bilateral filter (1).  As the above implies, the algorithm is applied on a per-vertex basis and allows for multiple iterations.
Now referring to the formula I mentioned in the previous section for images, in 3D the bilateral filter becomes:

d = \frac{\sum_{q \in N(v)} W_c(\lVert v - q\rVert)\; W_s(h_q)\; h_q}{\sum_{q \in N(v)} W_c(\lVert v - q\rVert)\; W_s(h_q)}, \qquad h_q = \langle n,\; q - v\rangle

where N(v) is the neighbourhood of the considered vertex, and W_c, W_s are the spatial and range Gaussians (with standard deviations σ_c and σ_s). The size of the neighbourhood is best set in relation to these standard deviations, so that only vertices close enough to receive a meaningful weight are included.

Think about it keeping in mind the correlation between image intensities and h. In an image a very strong difference in intensity usually indicates an edge, while noise that we can blur out usually manifests as smaller variations of the intensity. It all makes sense, let’s apply (1) and DONE!

Nope.

A few issues are still there. Firstly, we are relying on the normal at the vertex of the noisy mesh, and this can lead to unwanted results. We can solve this by mollifying the normal, that is, averaging the normals of the faces in the 1-ring neighbourhood of v, using the areas of those faces as weights. In rare cases we might need to consider a k-ring neighbourhood when mollifying normals.

This is not the only issue here: bilateral filtering is not energy preserving. In images this is not so noticeable or jarring to the eye; in meshes, however, it manifests as shrinking, which can be quite evident. Also, because you will likely need to apply multiple iterations of the algorithm to obtain good results, you are even more likely to create visible and serious shrinkage of the mesh. [Fleishman et al. 2003] mention some volume preservation technique that is either not described or, well, I did not pay enough attention to while implementing the algorithm. I experienced noticeable shrinkage, but for my use case it was an ok trade-off, so I did not investigate much further.
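To tie the pieces together, here is a rough sketch of one denoising pass in the spirit of [Fleishman et al. 2003]. It is a simplified illustration, not the paper’s exact implementation: the neighbourhood query and the σ values are placeholders, and the normals are assumed to be already mollified.

```python
import numpy as np

def bilateral_denoise_pass(verts, normals, neighbours, sigma_c=1.0, sigma_s=0.5):
    """One bilateral pass: offset each vertex along its (mollified) normal.

    verts:      (N, 3) vertex positions
    normals:    (N, 3) unit vertex normals, ideally mollified (area-weighted average)
    neighbours: list of index arrays, neighbours[i] = neighbourhood N(v_i)
    """
    new_verts = verts.copy()
    for i, nbrs in enumerate(neighbours):
        v, n = verts[i], normals[i]
        offsets = verts[nbrs] - v                      # vectors to the neighbours
        t = np.linalg.norm(offsets, axis=1)            # spatial distances
        h = offsets @ n                                # heights over the tangent plane
        w_c = np.exp(-t**2 / (2.0 * sigma_c**2))       # spatial (closeness) weight
        w_s = np.exp(-h**2 / (2.0 * sigma_s**2))       # range weight, preserves features
        w = w_c * w_s
        d = np.sum(w * h) / (np.sum(w) + 1e-12)        # bilateral offset along the normal
        new_verts[i] = v + d * n
    return new_verts
```

Running a few of these passes (recomputing and mollifying the normals in between) is usually enough; more passes smooth more but also shrink more.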

Here is a half-decent result I got (displayed with curvature to highlight details and features):

[Figure: noisy mesh, original mesh and denoised result, shaded by curvature]

On the left the noisy mesh, in the centre the original and on the right the denoised result.
It could probably be better with more tuning of the parameters, but not too bad.

Damn you box! More problems 

To add to the problems, let’s consider a mesh that has corner-like features like the one below (any cuboid would do). Now if we have noise on the edges we are in for some trouble!

[Figure: noise along a sharp edge that is wrongly preserved]
From [Fleishman et al. 2003]
The noise is not removed because it is wrongly classified as a feature. It’s quite clear why if you remember that we are using the heights of the neighbours relative to the tangent plane of the vertex we are filtering (the values I call h). Being on top of a very strong feature, the h values are very big because they are essentially the sum of the genuinely big value from the feature and the smaller value caused by the actual noise. The algorithm doesn’t know we have extra noise; it just sees that big value and immediately infers an edge to preserve.

Another issue I see is that the algorithm relies on the assumption that the local tangent plane always approximates the denoised mesh well, which is why we push the vertices along the normal direction. I don’t think this holds well at corners. As a result of this possibly wrong assumption you can end up smoothing or bending sharp edges.
[Figure: cube with edges bent by the denoising]

Note that the cube on the right is the result of lots of unnecessary iterations, only to exaggerate the issue.

Bilateral Normal Filtering

This corner issue was a big deal for me, so I started thinking about solutions. Apparently I came to a conclusion that is already exploited by many research papers: normals are a better (first-order) representation of the surface than the actual vertex positions in this context. With this idea and a bit of googling I found the paper [Lee and Wang 2005], which I used to improve on the basic algorithm.
In short, they still apply bilateral filtering as in [Fleishman et al. 2003], but they filter the normals:

\tilde{n}_i = \frac{\sum_{j \in N(i)} W_c(\lVert c_i - c_j\rVert)\; W_s(d_{ij})\; n_j}{\sum_{j \in N(i)} W_c(\lVert c_i - c_j\rVert)\; W_s(d_{ij})}

where i is the face whose normal we are filtering, N(i) are the neighbouring faces of i, c_i is the centroid of face i and, finally, d_ij is the equivalent of the h used in (1). In fact, here variations between normals are detected as the projection of the normal we are filtering onto the difference between that normal and a sample in the neighbourhood:

d_{ij} = \langle n_i,\; n_i - n_j\rangle

Again, this makes sense, as a big d_ij will occur in the presence of features of the mesh.

Ok, this is cool, we have nicely filtered normals, but I needed vertices! How do we get the denoised mesh? Well, we know that by definition the normal of a face is perpendicular to each edge of that face. This gives us a system of equations (the dot product of each filtered normal with each edge of its face must be zero) that we can solve with least-squares methods.

[equation: iterative least-squares update of the vertex positions from the filtered normals]

where F is the set of neighbouring faces, λ the iteration step size and i* the set of neighbouring vertices. For the full derivation I refer you to [Lee and Wang 2005].
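As a rough illustration of the two stages, here is a sketch that filters the face normals as described above and then nudges each vertex to agree with the filtered normals of its incident faces. Note that the vertex update below is a simple, commonly used gradient-descent-style scheme (move the vertex toward the planes defined by the filtered normals through the face centroids), not necessarily the exact least-squares formulation of [Lee and Wang 2005]; all parameters are placeholders.

```python
import numpy as np

def filter_face_normals(normals, centroids, face_nbrs, sigma_c=1.0, sigma_s=0.3):
    """Bilateral filtering of face normals, in the spirit of [Lee and Wang 2005]."""
    out = np.empty_like(normals)
    for i, nbrs in enumerate(face_nbrs):
        n_i, c_i = normals[i], centroids[i]
        d_ij = (n_i - normals[nbrs]) @ n_i                        # <n_i, n_i - n_j>
        w_c = np.exp(-np.sum((centroids[nbrs] - c_i)**2, axis=1) / (2 * sigma_c**2))
        w_s = np.exp(-d_ij**2 / (2 * sigma_s**2))
        n_new = (w_c * w_s) @ normals[nbrs]
        out[i] = n_new / (np.linalg.norm(n_new) + 1e-12)          # keep unit length
    return out

def update_vertices(verts, filtered_normals, centroids, vert_faces, lam=0.5, iters=10):
    """Move vertices toward the planes defined by the filtered normals (simple
    hypothetical scheme; the paper derives a proper least-squares solution)."""
    v = verts.copy()
    for _ in range(iters):
        new_v = v.copy()
        for i, f_idx in enumerate(vert_faces):                    # faces incident to vertex i
            if len(f_idx) == 0:
                continue
            n = filtered_normals[f_idx]                           # (K, 3) filtered normals
            c = centroids[f_idx]                                  # (K, 3), ideally recomputed
            resid = np.sum(n * (c - v[i]), axis=1)                # signed distance to each plane
            new_v[i] = v[i] + lam * np.mean(resid[:, None] * n, axis=0)
        v = new_v
    return v
```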

[Figure: comparison of denoising results]
From [Lee and Wang 2005]. Center is result from [Fleishman et al. 2003], right is the result from filtering normals ( [Lee and Wang 2005] )
Still not perfect, but pretty good!

I’d love to better detail this algorithm, but I am afraid the post is already too long. If you are interested in it please let me know and I will write another short post (or edit this one).


This is not cutting-edge denoising tech, nor is it a fair review of all the methods based on bilateral filtering 🙂 I just found this seminal paper interesting and wanted to do a quick and dirty write-up of my very short journey into mesh denoising, during which the methods described above were good enough. Hopefully you found this remotely interesting and will keep on researching the plethora of available evolutions of the method; believe me, with the right keywords you’ll be able to find a massive quantity of papers. I might add a few links as an appendix later on.

Thanks for reading!

-Francesco  (@FCifaCiar)


Chronicles of a tour in Skin Shading – (Some) Useful biological aspects used in CG

Preamble:  This series of blog posts is a braindump of some parts of my short journey in real time skin shading for my Master thesis, supervised by Dr. Tim Weyrich. 

Part 1: Dipole approximation for dummies 

A part of my studies that fascinated me quite a lot was the biological background used in skin shading. Not sure if this aspect receives much love in games (let me know if you actively do use these concepts!), but I can see lots of potential, especially from an authoring point of view.

As a necessary disclaimer: I am not a biologist and although I love biology as any other science, my knowledge is fairly limited, so please correct me if I wrote something stupid 🙂

Chromophores

The main parties responsible for light absorption, and therefore for skin colour, are chemical compounds called chromophores. There are two influential types of chromophore: melanin and haemoglobin.

[Figure: chromophores in the epidermis and dermis]
From [Donner and Jensen 2006]. I will talk about the scattering structures later in the post.
Melanin is the pigment that most influences the skin colour and it is mostly found in the upper layer of the skin, the epidermis.
We can identify two types: eumelanin, which is a dark brown pigment, and pheomelanin, which has a lighter, reddish-yellow colour.
Intuitively, a high concentration of melanin is found in dark skin, where it is mostly eumelanin; on the contrary, in lighter skin, such as Asian or Caucasian skin, there is much less melanin and a higher percentage of pheomelanin.

Haemoglobin, found at the dermis level, is a protein responsible for the transport of oxygen and produces a pink-purple nuance on the skin. Small amounts of haemoglobin correspond to a pale skin colour, while a high concentration makes the skin appear more pink/red.

Chromophores in Computer Graphics

Wait, do we care about that much biology? As a CG community we know that to correctly represent the world we have to study it thoroughly, so it should come as no surprise that looking at biological parameters can be useful when developing skin appearance models.

[Donner and Jensen 2006] proposed a spectral shading model which permits the manipulation of chromophores via three parameters: haemoglobin fraction, melanin fraction and melanin type blend (which describes the eumelanin/pheomelanin ratio), plus a parameter for skin oiliness for the specular contribution.

[Figure]
Increasing melanin fraction [Donner and Jensen 2006].
These parameters are then embedded in the multipole model [Donner and Jensen 2005]. I roughly described that model in the previous post, but the important bit to remember here is that it describes a multi-layer material where each layer has a characteristic absorption coefficient. The model proposed by [Donner and Jensen 2006] has two layers: the epidermis, with an absorption coefficient dependent on the melanin parameters, and the dermis, where the absorption coefficient depends on the haemoglobin fraction. I’ve put the exact formulas at the bottom of the post.

[Figure]
Increasing haemoglobin fraction [Donner and Jensen 2006].
This, to me, is of great importance: we can define a biologically correct diffusion profile for any target we want!

But hey, don’t we use albedo textures? Yes, and they also contain fine-scale details we want to preserve. The issue is that albedo textures include the overall colour data, and therefore the melanin and haemoglobin pigments are already embedded in them. If we want to get our colour from chromophores, we must normalize the albedo colour by dividing each texel by the assumed total diffuse reflectance:

R_{total} = 2\pi \int_{0}^{\infty} R(r)\, r \, dr

where R(r) is the diffusion profile we obtained from the assumed chromophore information.
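As a minimal sketch of this normalization step (the diffusion profile R is whatever you built from the chromophore parameters, and the truncation radius is a placeholder):

```python
import numpy as np

def total_diffuse_reflectance(R, r_max=20.0, n=4096):
    """Numerically evaluate R_total = 2*pi * integral of R(r) * r dr over [0, inf)."""
    r = np.linspace(1e-4, r_max, n)    # r_max truncates the (rapidly decaying) profile
    dr = r[1] - r[0]
    return 2.0 * np.pi * np.sum(R(r) * r) * dr

# Keeping only the fine-scale detail of an albedo texel (per colour channel):
# detail = albedo_texel / total_diffuse_reflectance(R_profile)
```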

This work is taken forward by [Donner et al. 2008], where they introduce more physiological parameters and make them spatially varying (whereas in [Donner and Jensen 2006] spatial variation is limited to the use of an albedo modulation texture). This paper is truly great; I am not going deeper into it for the sake of brevity, but check it out, it is well worth a read.

 

But that is not all. Skin colour varies constantly, sometimes very rapidly. Think about it: we blush, we turn pale, we get red after some exercise, etc. A great model for this variation is proposed by [Jimenez et al. 2010]. Even to a non-biologist like me, it is clear that all of the variations I mentioned depend on blood flow, and what in blood influences skin colour the most? Oh well, our dear haemoglobin, of course! So the great contribution of [Jimenez et al. 2010] is a colour appearance model with a spatially-varying haemoglobin distribution that allows appearance transfer between characters.

The key observation behind this model is that, for a wide range of appearances, the haemoglobin distribution is close to a Gaussian one. Starting from an acquired or painted neutral haemoglobin map, various emotions or states can be simply modelled with a bias + scale. This way they transform a neutral distribution H₀ into any desired combination:

[equation: per-blend-shape scale and bias applied to the neutral haemoglobin distribution H₀]

This follows the same idea as the usual blend shapes: each blend shape i has its own set of scale/bias textures, and the weights used for them are the same ones used by classic animation rigs.
Note that, because of the low-frequency nature of this information, we can efficiently store the scales and biases in low-resolution textures. How convenient!
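A tiny, hypothetical sketch of the idea (the combination below, interpolating between the neutral map and each shape’s scaled and biased version by the rig weights, is one plausible way to put it together; the exact compositing is described in the paper):

```python
import numpy as np

def blended_haemoglobin(h0, scales, biases, weights):
    """Combine a neutral haemoglobin map with per-blend-shape scale/bias maps.

    h0:      (H, W) neutral haemoglobin distribution
    scales:  (K, H, W) per-blend-shape scale maps (stored at low resolution, upsampled)
    biases:  (K, H, W) per-blend-shape bias maps
    weights: (K,) rig weights, the same ones driving the geometric blend shapes
    """
    h = h0.copy()
    for s, b, w in zip(scales, biases, weights):
        # Each active shape pulls the distribution toward its scaled/biased version.
        h += w * ((s * h0 + b) - h0)
    return h
```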

Once we have all our chromophore information, [Jimenez et al. 2010] provides us with a great precomputed table that we can use to extract the albedo colour from the melanin and haemoglobin maps/distributions:

[Figure]
From [Jimenez et al. 2010]
I strongly suggest you go and read the full paper; it has lots more details on the model and a great description of the acquisition phase.

SSS and specular from a biological POV

Subsurface Scattering

In the epidermis (i.e. the outermost layer) the cells’ membranes, some elements in the nucleus and the cytoplasmic organelles are the main scatterers; these scatterers are well separated from one another, therefore the whole epidermis contribution can be approximated with a single scattering event.

[Figure]
From [Igarashi et al. 2005]
Wait! I told you in the dipole post that we can ignore single scattering! Yes, I did, and I still stand my ground. As I said previously in this post, the epidermis is mainly responsible for absorption, but most of the scattering actually occurs in the dermis (the layer under the epidermis). Here the main scatterers are collagen fibres, which scatter in the forward direction. These scatterers are so densely packed that the resulting multiple scattering can be considered isotropic even though the single-fibre scattering events are not. This should explain my claim in the dipole post that it is fine to approximate subsurface scattering as a diffusion phenomenon.

Surface reflection

On skin, surface reflection is primarily caused by a thin layer formed by an oily liquid called sebum and by lipids. The morphology of wrinkles also has an impact on the specular aspect of the skin.
The distribution of these factors changes across the face, causing spatially varying specular properties. A thorough study of such variations can be found in [Weyrich et al. 2006]. For my thesis I used the information in the paper, baking the roughness and scaling factor (ρ_s) of the Cook-Torrance BRDF into a texture:

[Figure: spatially varying specular parameters baked into a face texture]

Sadly I don’t have great comparison pictures anymore; this is the best I am left with:

[Figure: face rendered with uniform vs. spatially varying specular parameters]

The left image uses the same parameters (the cheeks’ ones) for the whole face; the right picture uses spatially varying parameters. Note that I think there is something a bit off with the lips in this picture, as they should have a slightly stronger specular.


 

That’s all from today’s biology lesson 🙂 This is a short overview of some of the many biologically-based skin appearance models made by a lowly (non-biologist) programmer. I found these aspects fascinating when I studied them, so I hope someone else found this post moderately interesting 🙂

To the next one!

– Francesco (@FCifaCiar)

Extras

Absorption coefficients for dermis and epidermis ( [Donner and Jensen 2006] )

The wavelength dependent absorption coefficients for the two different types of melanin (eumelanin and pheomelanin) are:

[equations: power-law fits for the wavelength-dependent absorption coefficients of eumelanin and pheomelanin, see the paper]

Then we need to define the baseline absorption coefficient of the remaining tissue in the epidermis as:

[equation: baseline tissue absorption coefficient, see the paper]

So that finally we can define the absorption coefficient for epidermis as:

\sigma_a^{epi}(\lambda) = C_m \left( B_m\, \sigma_a^{em}(\lambda) + (1 - B_m)\, \sigma_a^{pm}(\lambda) \right) + (1 - C_m)\, \sigma_a^{baseline}(\lambda)

where B_m defines the melanin type blend (eumelanin/pheomelanin) and C_m is the melanin fraction.

For the dermis absorption coefficient we have:

\sigma_a^{derm}(\lambda) = C_h \left( \gamma\, \sigma_a^{oxy}(\lambda) + (1 - \gamma)\, \sigma_a^{deoxy}(\lambda) \right) + (1 - C_h)\, \sigma_a^{baseline}(\lambda)

where γ is the blood oxygenation ratio between oxy- and deoxy-haemoglobin; this varies very little and in [Donner and Jensen 2006] it is fixed at 0.75. Note that haemoglobin has a different absorption behaviour depending on whether it is oxygenated or deoxygenated. C_h is the fractional amount of haemoglobin in the dermis.
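A minimal sketch of how the two layer coefficients above combine the chromophore parameters. It assumes the wavelength-dependent chromophore curves (eumelanin, pheomelanin, oxy-/deoxy-haemoglobin and the baseline) are already tabulated from the paper or the literature; the callables below are placeholders for those curves.

```python
def epidermis_absorption(lam, C_m, B_m, sigma_em, sigma_pm, sigma_base):
    """sigma_a_epi = C_m * (B_m * em + (1 - B_m) * pm) + (1 - C_m) * baseline.

    C_m: melanin fraction, B_m: melanin type blend (eumelanin/pheomelanin).
    sigma_*: callables returning the tabulated absorption curves at wavelength lam.
    """
    melanin = B_m * sigma_em(lam) + (1.0 - B_m) * sigma_pm(lam)
    return C_m * melanin + (1.0 - C_m) * sigma_base(lam)

def dermis_absorption(lam, C_h, gamma, sigma_oxy, sigma_deoxy, sigma_base):
    """sigma_a_derm = C_h * (gamma * oxy + (1 - gamma) * deoxy) + (1 - C_h) * baseline.

    C_h: haemoglobin fraction, gamma: blood oxygenation ratio (~0.75).
    """
    blood = gamma * sigma_oxy(lam) + (1.0 - gamma) * sigma_deoxy(lam)
    return C_h * blood + (1.0 - C_h) * sigma_base(lam)
```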

References and Acknowledgements

An obligatory reference not directly mentioned in the post has to go to the technical report The Appearance of Human Skin [Igarashi et al. 2005]. It is a bit long, but it has lots of interesting information.

[Donner and Jensen 2006] :  DONNER, C., AND JENSEN, H. W. 2006. A spectral BSSRDF for shading human skin. In Rendering Techniques 2006, 409–417.

[Donner et al. 2008]: DONNER, C., WEYRICH, T., D’EON, E., RAMAMOORTHI, R., AND RUSINKIEWICZ,S. 2008. A Layered, heterogeneous reflectance model for acquiring and rendering human skin. In ACM Trans. Graph. 27, 5, 140:1–140:12.

[Jimenez et al. 2010]: JIMENEZ, J., SCULLY, T., BARBOSA, N., DONNER, C., ALVAREZ, X., VIEIRA, T., MATTS, P., ORVALHO, V., GUTIERREZ, D., AND WEYRICH, T. 2010. A practical appearance model for dynamic facial color. In ACM Trans. Graph. 29, 141, 1–10.

[Weyrich et al. 2006]: WEYRICH, T., MATUSIK, W., PFISTER, H., BICKEL, B., DONNER, C., TU, C., MCANDLESS, J., LEE, J., NGAN, A., JENSEN, H. W., AND GROSS, M. 2006. Analysis of human faces using a measurement-based skin reflectance model. In ACM Trans. Graph. 25, 1013–1024.

Also, many thanks to my supervisor Tim Weyrich, who is certainly an authority in this field and who taught me a lot.

 


Chronicles of a tour in Skin Shading – Dipole approximation for dummies

Preamble: This series of blog posts is motivated by the fact that I apparently lost the PDF of my Master dissertation on real-time skin shading. I still have most of the concepts in mind, so I will braindump here some parts of my short journey. Don’t expect ultra high quality stuff, it was the work of a little guy approaching serious computer graphics for the first time.

It was over two years ago when I decided, for my Master dissertation, to venture into skin shading. At the time, having a sphere (slowly) rendered with a Cook-Torrance BRDF was my main achievement, and little did I know how challenging graphics programming can be.

All excited, I started my literature review, and one paper that always came up in the references was A Practical Model for Subsurface Light Transport [Jensen et al. 2001] and its dipole approximation. Little me was very excited: I printed the paper (yes, I like to print stuff I need to study) and… I desperately stared at it for an hour. While it is a great paper, it is not an easy read, at all.
Recently on Computer Graphics Stack Exchange Nathan Reed posted a question about it and I had the chance to write down a short answer, keeping in mind the little me that was eyeballing the paper back then. To kick-start this short series of posts I am going to partially refactor part of that answer here. It is going to be a very high-level description; for details go back and read the paper, it is worth it 🙂

What it is

It is well known that to model skin a simple BRDF is not enough; we need to approximate a BSSRDF because of subsurface scattering.
Often subsurface scattering (SSS) is approximated as a diffusion phenomenon; this is a fairly good thing to do since, in highly scattering materials such as skin, the distribution of light loses its angular dependency and tends towards isotropy.

The dipole approximation used by [Jensen et al. 2001] is nothing but a formulation of the diffusion problem used to model the multiple-scattering part of the BSSRDF.¹ This multiple-scattering term is defined in the paper as a function of the incoming and outgoing points and directions:

S_d(x_i, \omega_i; x_o, \omega_o) = \frac{1}{\pi}\, F_t(x_i, \omega_i)\; R(\lVert x_i - x_o \rVert)\; F_t(x_o, \omega_o)

where the F_t are Fresnel transmittance terms and R is what is known as a diffusion profile. This profile is what actually describes the phenomenon as a function of the distance between the point where the light enters the medium and the point where it leaves it.
It is in R that the dipole comes into play.

[Figure: the dipole configuration, with a positive light source beneath the surface and a negative one above it]

The dipole approximates the reflectance as coming from two light sources, one with a positive contribution and one with a negative contribution.

The positive one is located at a distance z_r beneath the surface, while the negative one is at a distance above it that depends on z_r, on the Fresnel reflectivity of the considered slab and on the absorption and scattering coefficients of the medium. Note that z_r is not an arbitrary distance, but one “mean free path”. A mean free path is given by one over the sum of the absorption and reduced scattering coefficients, and it represents the distance travelled by a photon before it collides inside the medium.
The contribution of these lights to the diffusion profile R depends on their distance from the surface, on the distance between x_i and x_o, and on the scattering and absorption coefficients of the material.

Now that we have a definition of S_d, the last step is to integrate the BSSRDF over a certain area. In [Jensen et al. 2001] they render using a Monte Carlo ray tracer, randomly sampling around a point with a density that falls off exponentially as e^(−σ_tr d), where d is the distance from the point we are evaluating and σ_tr depends on the absorption and reduced scattering coefficients.

I don’t want to add too many formulas to the core of the post, but at the bottom I’ve put an expanded formula for R with a short description of every parameter.

Why the dipole formulation?

That is all well and good, but the little me of two years ago was very confused: what the heck? A dipole? Why?

Well, it turns out that the dipole approximation is common in solutions for diffusion problems. Let’s just take a small step back.

The diffusion equation is a partial differential equation and, as such, it has an infinite number of solutions! For this reason, when we solve light diffusion problems we need to take a boundary condition into account to find the solution we want (among the infinitely many possible ones).

Usually, the boundary condition used in transport theory is the so-called “extrapolated boundary condition”, and the dipole approximation is used to handle it.
In our case, the condition is that the net flux is zero at an “extrapolated” distance of 2AD above the actual surface, where:

A = \frac{1 + F_{dr}}{1 - F_{dr}}, \qquad D = \frac{1}{3\sigma'_t}

with F_dr being the Fresnel diffuse reflectance (of the considered slab) and σ'_t = σ_a + σ'_s the sum of the absorption and reduced scattering coefficients of the material.

The dipole approximation satisfies this boundary condition by representing the scattering of the incoming light with the two light sources described in the previous section.

Its assumptions and subsequent works

[Jensen et al. 2001] was a seminal paper for the field, but it relies on a few assumptions that are addressed in subsequent papers. I will not undergo the massive task of listing the papers that improved upon or relied on the dipole approximation, but here are a couple of assumptions discussed in later work:

  • The dipole is a good model as long as the represented object can be considered (semi-infinitely) thick. This is not true for skin, which is a multi-layered material with thin slabs that need to be taken into account. This issue is tackled by the famous multipole model [Donner and Jensen 2005]. Their model assumes a multitude of dipoles instead of just one (hence the name multipole) and simulates reflectance and transmittance between thin slabs. The paper also proposes interesting acceleration techniques for the rendering.
  • The dipole method assumes that the incident light is directionally uniform; to account for scenarios where it is not, [Frisvad et al. 2015] recently introduced a directional dipole model.
  • [D’Eon and Irving 2011] proposed various improvements on the dipole and multipole models, introducing a modified formulation of the diffusion equation, a more accurate exitance calculation, a more accurate term A  and different boundary conditions.

 

That is all for now. I was tempted multiple times to write a bit more, but I really wanted to keep this post as simple as possible. This is not sufficient to gain a deep grasp of the model, but hopefully it gives an overview and a good set of info with which to approach the actual paper 🙂

– Francesco (@FCifaCiar)

 

Extras

The actual R(r) formula 

R(r) = \frac{\alpha'}{4\pi} \left[ z_r \left(1 + \sigma_{tr} d_r\right) \frac{e^{-\sigma_{tr} d_r}}{d_r^{3}} + z_v \left(1 + \sigma_{tr} d_v\right) \frac{e^{-\sigma_{tr} d_v}}{d_v^{3}} \right]

where:

  • σ'_t = σ_a + σ'_s is the reduced extinction coefficient and α' = σ'_s / σ'_t the reduced albedo;
  • σ_tr = √(3 σ_a σ'_t) is the effective transport coefficient;
  • z_r = 1/σ'_t is the depth of the positive (real) source and z_v = z_r (1 + 4A/3) the height of the negative (virtual) source, with A defined as above;
  • d_r = √(r² + z_r²) and d_v = √(r² + z_v²) are the distances from the evaluation point to the two sources.
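And here is a small numerical sketch of that profile, assuming σ_a and σ'_s are given per colour channel and using the diffuse Fresnel reflectance fit commonly quoted alongside the paper (worth double-checking against your copy):

```python
import numpy as np

def dipole_profile(r, sigma_a, sigma_s_prime, eta=1.3):
    """Diffusion profile R(r) of the dipole approximation [Jensen et al. 2001]."""
    sigma_t_prime = sigma_a + sigma_s_prime              # reduced extinction coefficient
    alpha_prime = sigma_s_prime / sigma_t_prime          # reduced albedo
    sigma_tr = np.sqrt(3.0 * sigma_a * sigma_t_prime)    # effective transport coefficient
    # Diffuse Fresnel reflectance fit and boundary term A (extrapolated boundary).
    F_dr = -1.440 / eta**2 + 0.710 / eta + 0.668 + 0.0636 * eta
    A = (1.0 + F_dr) / (1.0 - F_dr)
    z_r = 1.0 / sigma_t_prime                            # depth of the real source
    z_v = z_r * (1.0 + 4.0 / 3.0 * A)                    # height of the virtual source
    d_r = np.sqrt(r**2 + z_r**2)                         # distance to the real source
    d_v = np.sqrt(r**2 + z_v**2)                         # distance to the virtual source
    return (alpha_prime / (4.0 * np.pi)) * (
        z_r * (1.0 + sigma_tr * d_r) * np.exp(-sigma_tr * d_r) / d_r**3
        + z_v * (1.0 + sigma_tr * d_v) * np.exp(-sigma_tr * d_v) / d_v**3
    )
```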

¹Note that ignoring the single-scattering term has been shown to be acceptable for skin. For some translucent materials (e.g. smoke) it is fundamental.

 


Quick dump of GCN links

During my holidays I met an old friend of mine who is currently trying to learn a bit more about console GPUs and asked me for a few links. I knew a couple off the top of my head, but I didn’t have a list saved anywhere, only vague memories of docs here and there.

This is not a blog post, this is not interesting, this is only a GCN links dump to send to friends asking for info (ciao Piero!) and maybe useful to the random internet surfer.

Official AMD documents

Blog Posts

Presentations

Other

 

Also, let me add that AMD’s initiative GPUOpen is a great resource for GCN-related stuff!

Let me know if you know any other interesting links!

Till the next one,
-Francesco  (@FCifaCiar)
