Implementing GGX BRDF in Arnold with Multiple Importance Sampling

Posted by Shih-Chin on Sun, Jun 28, 2015

Monte Carlo path tracing has a slow convergence rate of $O(1/\sqrt{n})$: it needs to quadruple the number of samples to halve the error. Many mainstream production renderers have deployed the multiple importance sampling (MIS) technique to reduce the variance of the sampling result (i.e. the noise in the image). If we want our custom material shader to execute efficiently along with the entire scene in Arnold, we should definitely consider exploiting the power of MIS.

MIS

What is Multiple Importance Sampling (MIS)?

Suppose we want to solve the direct illumination equation:

$$L_o(x,\vec{\omega_o}) = \int_\Omega L_i(x, \vec{\omega_i})\, f_r(x, \vec{\omega_o},\vec{\omega_i}) \cos\theta_i\, d\omega_i$$

Monte Carlo sampling uses random samples to compute this integral numerically. The idea of importance sampling is to generate the random samples in proportion to a probability density function (PDF) that has a similar shape to the integrand.

Image from Simon Premoze, Practical Importance Sampling and Variance Reduction, SIGGRAPH Course Notes, 2010.
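To make the idea concrete, here is a tiny standalone C++ sketch (not Arnold code; the function names are made up for illustration) that estimates $\int_0^1 3x^2\,dx = 1$ once with uniform sampling and once with samples drawn from the PDF $p(x)=2x$, which better matches the shape of the integrand:

```cpp
#include <cassert>
#include <cmath>
#include <random>

// Uniform sampling: p(x) = 1, so each sample contributes f(x) directly.
double estimateUniform(int n, std::mt19937 &rng)
{
    std::uniform_real_distribution<double> u(0.0, 1.0);
    double sum = 0.0;
    for (int i = 0; i < n; ++i) {
        double x = u(rng);
        sum += 3.0 * x * x;   // f(x) / p(x), with p(x) = 1
    }
    return sum / n;
}

// Importance sampling: draw x with PDF p(x) = 2x via the inverse CDF
// x = sqrt(xi), then weight each sample by f(x) / p(x).
double estimateImportance(int n, std::mt19937 &rng)
{
    std::uniform_real_distribution<double> u(0.0, 1.0);
    double sum = 0.0;
    for (int i = 0; i < n; ++i) {
        double x = std::sqrt(u(rng)); // inverse CDF of p(x) = 2x
        sum += 1.5 * x;               // f(x) / p(x) = 3x^2 / (2x)
    }
    return sum / n;
}
```

Both estimators converge to 1, but the importance-sampled one has much lower variance because $f(x)/p(x)$ varies far less from sample to sample.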

In other words, the ideal PDF for the integrand of the direct illumination equation would be proportional to the product of the incoming radiance $L_i(x,\vec{\omega_i})$ and the BRDF $f_r(x,\vec{\omega_o}, \vec{\omega_i})$. However, this product is too complicated to derive a PDF from, although we might be able to find a suitable PDF for some of the terms individually. What we hope for is a way to sample each term separately and properly combine those samples into a result with lower variance. This leads to the technique named Multiple Importance Sampling (MIS), introduced by Veach and Guibas [1] in 1995:

Images from Veach and Guibas [1]

With MIS, we use the following formula to compute the result:

$$\frac{1}{n_f}\sum_{i=1}^{n_f}\frac{f(X_i)\,g(X_i)\,w_f(X_i)}{p_f(X_i)}+\frac{1}{n_g}\sum_{j=1}^{n_g}\frac{f(Y_j)\,g(Y_j)\,w_g(Y_j)}{p_g(Y_j)} \qquad (1)$$

where $n_k$ is the number of samples taken from a given PDF $p_k$. The weighting function $w_k$ accounts for all the different ways a sample could have been generated, and can be computed with the power heuristic:

$$w_k(x)=\frac{(n_k p_k(x))^\beta}{\sum_i{(n_i p_i(x))^\beta}}$$
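For two strategies, the power heuristic with $\beta = 2$ (the exponent Veach recommends) can be written as a small helper; the function name here is hypothetical, not an Arnold API:

```cpp
#include <cassert>
#include <cmath>

// Power heuristic weight (beta = 2) for a sample drawn from strategy f,
// given the sample counts and the PDFs of both strategies evaluated at
// that sample. The symmetric call with f and g swapped yields w_g, and
// the two weights always sum to one.
float powerHeuristic(int nf, float fPdf, int ng, float gPdf)
{
    float f = nf * fPdf;
    float g = ng * gPdf;
    return (f * f) / (f * f + g * g);
}
```

Note how a sample whose own PDF dominates at that point receives a weight close to one, while a sample that the other strategy would have generated with much higher probability is downweighted; this is what suppresses the fireflies each strategy alone would produce.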

The MIS API in Arnold

At each shading point, we want to compute the direct and the indirect illumination. In order to utilize the MIS API in Arnold, we have to provide three functions:

  1. AtBRDFEvalPdfFunc
  2. AtBRDFEvalSampleFunc
  3. AtBRDFEvalBrdfFunc

To get a better understanding of the task of these functions, recall that our goal is to use MIS to combine the samples generated from both the PDFs of the lights and of the BRDF. Let the BRDF be $f(X)$ and the lighting function be $g(Y)$ in equation (1); then the entities unknown to Arnold are:

  • $p_f$ (AtBRDFEvalPdfFunc): the PDF of the BRDF
  • $X_i$ (AtBRDFEvalSampleFunc): the sample direction generated according to $p_f$
  • $f$ (AtBRDFEvalBrdfFunc): the BRDF evaluated with the sampled incoming direction and the fixed outgoing direction (i.e. -sg->Rd)

With this information, $w_f(X_i)$ and $w_g(Y_j)$ can be computed with the power heuristic:

$$\begin{aligned} w_f(X_i) &= \frac{(n_f p_f(X_i))^\beta}{(n_f p_f(X_i))^\beta+(n_g p_g(X_i))^\beta} \\ w_g(Y_j) &= \frac{(n_g p_g(Y_j))^\beta}{(n_f p_f(Y_j))^\beta+(n_g p_g(Y_j))^\beta} \end{aligned}$$

Now suppose we want to use GGX for the glossy component; the shading structure would look like this:

```cpp
GgxBrdf ggx_brdf;
GgxBrdfInit(&ggx_brdf, sg);

auto directGlossy = AI_RGB_BLACK;

AiLightsPrepare(sg);
while (AiLightsGetSample(sg)) {
    if (AiLightGetAffectSpecular(sg->Lp)) {
        directGlossy += AiEvaluateLightSample(sg, &ggx_brdf,
            ggxEvalSample, ggxEvalBrdf, ggxEvalPdf);
    }
}

auto indirectGlossy = sg->Rr > 0
    ? AI_RGB_BLACK
    : AiBRDFIntegrate(sg, &ggx_brdf,
        ggxEvalSample, ggxEvalBrdf, ggxEvalPdf, AI_RAY_GLOSSY);
```

BRDF Evaluation

The content of ggxEvalBrdf is the microfacet BRDF formula:

$$f_r(i,o,n)=\frac{F(i, h_r)\,G(i, o, h_r)\,D(h_r)}{4\,|i\cdot n|\,|o\cdot n|}$$

  • $i, o$ are the incoming and outgoing directions
  • $h_r$ is the normalized half vector
  • $n$ is the macro-surface normal
  • $F$ is the Fresnel term, $G$ is the masking/shadowing term, and $D$ is the microfacet normal distribution

Please refer to the paper by Walter et al. [2] for more details.
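As a concrete (non-Arnold) sketch, the three factors can be written in scalar form with GGX for $D$, the separable Smith approximation for $G$, and Schlick's approximation for $F$. All function names and parameters here (alpha for $\alpha_g$, f0 for the normal-incidence reflectance) are assumptions for illustration, not the shader's actual code:

```cpp
#include <cassert>
#include <cmath>

const float kPi = 3.14159265358979f;

// GGX normal distribution D(m): alpha^2 / (pi cos^4(thetaM) (alpha^2 + tan^2(thetaM))^2).
float ggxD(float MdotN, float alpha)
{
    float c2 = MdotN * MdotN;
    float t2 = (1.0f - c2) / c2;   // tan^2(thetaM)
    float a2 = alpha * alpha;
    float d  = c2 * (a2 + t2);     // cos^2 * (alpha^2 + tan^2)
    return a2 / (kPi * d * d);
}

// Smith masking/shadowing for a single direction with GGX.
float smithG1(float VdotN, float alpha)
{
    float c2 = VdotN * VdotN;
    float t2 = (1.0f - c2) / c2;   // tan^2(thetaV)
    return 2.0f / (1.0f + std::sqrt(1.0f + alpha * alpha * t2));
}

// Schlick's Fresnel approximation.
float fresnelSchlick(float IdotM, float f0)
{
    float m = 1.0f - IdotM;
    return f0 + (1.0f - f0) * m * m * m * m * m;
}

// f_r = F * G * D / (4 |i.n| |o.n|), with G = G1(i) * G1(o).
float microfacetBrdf(float IdotN, float OdotN, float MdotN, float IdotM,
                     float alpha, float f0)
{
    float F = fresnelSchlick(IdotM, f0);
    float G = smithG1(IdotN, alpha) * smithG1(OdotN, alpha);
    float D = ggxD(MdotN, alpha);
    return F * G * D / (4.0f * std::fabs(IdotN) * std::fabs(OdotN));
}
```

At normal incidence all cosines are one, $G$ collapses to one and $F$ to f0, so the whole expression reduces to $f_0 D / 4$, which is a quick way to sanity-check an implementation.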

Evaluate the PDF for direction sampling

The job of ggxEvalPdf is to evaluate the PDF for a given direction, and it must match the density of the samples generated by ggxEvalSample. The strategy suggested by Walter et al. [2] is to sample the microfacet normal $m$ according to $p_m(m)=D(m)\,|m\cdot n|$, and then transform the PDF from the microfacet to the macro-surface:

$$p_o(o)=p_m(m)\left|\frac{\partial\omega_h}{\partial\omega_o}\right|=\frac{D(m)\,|m\cdot n|}{4\,|i\cdot m|}$$

```cpp
float ggxEvalPdf(const void *brdfData, const AtVector *indir)
{
    // V = -sg->Rd, Nf = sg->Nf
    auto M = AiV3Normalize(V + *indir);
    auto IdotM = ABS(AiV3Dot(V, M));
    auto MdotN = ABS(AiV3Dot(M, Nf));
    // We pass Nf to D(), since it needs to compute thetaM, the angle
    // between the microfacet normal M and the macro-surface normal N.
    return D(M, Nf) * MdotN * 0.25f / IdotM;
}
```

Create the sample direction from PDF

In ggxEvalSample, the direction is sampled with the cosine-weighted microfacet normal distribution:

$$p_m(m)=D(m)\,|m\cdot n|\text{, where } D(m)=\frac{\alpha_g^2}{\pi\cos^4{\theta_m}\,(\alpha_g^2+\tan^2{\theta_m})^2}$$

With the distribution transformed from $\omega_m$ to $(\theta_m, \phi_m)$:

$$p_m(\theta_m, \phi_m)=\sin{\theta_m}\, p_m(m)$$

we can compute the marginal probability $p_m(\theta_m)$ and the conditional probability $p_m(\phi_m\vert\theta_m)$:

$$p_m(\theta_m)=\int_0^{2\pi}{D(m)\cos{\theta_m} \sin{\theta_m}\, d\phi_m} = \frac{2\alpha_g^2\sin\theta_m}{\cos^3{\theta_m}\,(\alpha_g^2+\tan^2{\theta_m})^2}$$

$$p_m(\phi_m\vert\theta_m) = \frac{p_m(\theta_m, \phi_m)}{p_m(\theta_m)} = \frac{1}{2\pi}$$

Then, we compute the corresponding CDFs and their inverse functions to get $\theta_m, \phi_m$ from two canonical random variables $\xi_1, \xi_2 \in [0, 1)$:

$$\begin{aligned} P_m(\theta_m) &= \int_0^{\theta_m} \frac{2 \alpha_g^2 \sin \theta'}{\cos^3{\theta'}\,(\alpha_g^2+\tan^2{\theta'})^2}\, d\theta' \\ &= \left. -\frac{\alpha_g^2}{\alpha_g^2+\tan^2{\theta'}} \right|_0^{\theta_m} \\ &= \frac{\tan^2{\theta_m}}{\alpha_g^2 + \tan^2{\theta_m}} \\ \theta_m &= \arctan \left( \alpha_g \sqrt{\frac{\xi_1}{1 - \xi_1}} \right) \end{aligned}$$

$$P_m(\phi_m|\theta_m) = \int_0^{\phi_m} \frac{1}{2\pi}\, d\phi' = \frac{\phi_m}{2\pi}, \qquad \phi_m = 2\pi \xi_2$$
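The inversion above can be sanity-checked numerically: feeding $\xi_1$ through the inverse CDF and then back through $P_m$ must return $\xi_1$. A standalone sketch (hypothetical names, not part of the shader):

```cpp
#include <cassert>
#include <cmath>

// Inverse CDF: map a canonical random number xi1 to thetaM.
float sampleThetaM(float alpha, float xi1)
{
    return std::atan(alpha * std::sqrt(xi1 / (1.0f - xi1)));
}

// The CDF derived above: P_m(thetaM) = tan^2 / (alpha^2 + tan^2).
float cdfThetaM(float alpha, float thetaM)
{
    float t2 = std::tan(thetaM) * std::tan(thetaM);
    return t2 / (alpha * alpha + t2);
}
```

For any roughness, cdfThetaM(alpha, sampleThetaM(alpha, xi1)) recovers xi1 up to floating-point error, which confirms that the sampled angles follow the intended marginal distribution.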

Finally, we generate a sample direction from the spherical coordinates $\theta_m, \phi_m$ and transform the vector from the local frame of the shading point (U, V, and the face-forward shading normal Nf) into world space:

```cpp
AtVector ggxEvalSample(const void *brdfData, float rx, float ry)
{
    // Sample the microfacet normal.
    auto thetaM = atanf(roughness * sqrtf(rx / (1.0f - rx)));
    auto phiM = AI_PITIMES2 * ry;

    AtVector M;
    M.z = cosf(thetaM);
    float r = sqrtf(1.0f - SQR(M.z));
    M.x = r * cosf(phiM);
    M.y = r * sinf(phiM);

    // Transform M from the local frame to world space.
    // U, V can be retrieved from AiBuildLocalFramePolar with sg->Nf.
    AiV3RotateToFrame(M, U, V, Nf);

    // Compute the reflected direction from the view direction and the
    // sampled microfacet normal. V = -sg->Rd
    return 2.0f * ABS(AiV3Dot(V, M)) * M - V;
}
```

Results

In comparison to aiStandard, which uses the Beckmann distribution, GGX has stronger highlight tails.

Roughness 0.0 to 1.0
![Buddha](/images/rlShaders/rlGgx_buddha_cmp.jpg)

The results of the custom BRDF, composed of GGX for the glossy component and Oren-Nayar for the diffuse component.

References

  1. Veach, Eric, and Leonidas J. Guibas. "Optimally combining sampling techniques for Monte Carlo rendering." SIGGRAPH, 1995.
  2. Walter, Bruce, et al. "Microfacet models for refraction through rough surfaces." EGSR'07.
  3. Colbert, Mark, Simon Premoze, and Guillaume François. "Importance sampling for production rendering." SIGGRAPH 2010 Course Notes.
  4. Pharr, Matt, and Greg Humphreys. Physically based rendering: From theory to implementation 2/e. Morgan Kaufmann. 2010.