WebGL Shadow Maps Part 2: Lighting

In part one we got the basic technique working and created a shadow map. To keep part one as focused as possible, no other lighting calculations were done, leaving us with a shadow map but some weird self shadowing. Now we’re going to smooth our pixelated edges, add lighting calculations, clean up our code, and look at an alternative to our biasing approach.

Pixelated Edges

Our shadow currently has jagged edges. Recall that when we rendered our depth map, it contained the whole scene. Our camera is much closer than that, so we're seeing the shadow map up close, and since it's made of pixels, those pixels look bigger. While you could just increase the resolution of your depth texture, that texture could get gigantic depending on how large your scene is.

Before we look at solutions, I do want to mention that by using sampler2DShadow, WebGL is actually helping us a bit here. While it won't attempt to smooth out the scaled pixels of your shadow map, it does smooth out the rendered pixels in between the shadowed and lit areas. This is something I don't believe you can achieve with a regular sampler2D, at least not easily. You can see the sampler2D approach in example 2b in the companion GitHub repo. Here's a side-by-side comparison:

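For reference, here's roughly what the manual comparison looks like with a regular sampler2D. This is a sketch of the idea behind example 2b, not the exact repo code, and the helper function name is mine:

```glsl
// With sampler2D the texture holds raw depth values,
// so we read the stored depth and compare it ourselves.
uniform sampler2D shadowMap;

float getVisibility(vec3 positionFromLightPov, float bias)
{
  float storedDepth = texture(shadowMap, positionFromLightPov.xy).r;
  // Hard 0-or-1 result: no blending between lit and shadowed pixels.
  return positionFromLightPov.z - bias > storedDepth ? 0.0 : 1.0;
}
```

With sampler2DShadow, by contrast, the comparison happens in hardware: with the texture's TEXTURE_COMPARE_MODE set to COMPARE_REF_TO_TEXTURE and its filter set to LINEAR, each texture() call compares the reference depth against the four nearest texels and returns a blended value between 0 and 1.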
While this automatic smoothing is good, it doesn't help with our scaled-up shadow map edges. Let's make a simple improvement: every time we sample our depth texture, let's also sample the adjacent pixels. This code lines up with improvements.js in the repo.

#version 300 es
precision mediump float;

uniform mediump sampler2DShadow shadowMap;

in vec4 vColor;
in vec4 positionFromLightPov;

out vec4 fragColor;

vec2 adjacentPixels[5] = vec2[](
  vec2(0, 0),
  vec2(-1, 0),
  vec2(1, 0),
  vec2(0, 1),
  vec2(0, -1)
);

float bias = 0.002;
float visibility = 1.0;
float shadowSpread = 1100.0;

void main()
{
  for (int i = 0; i < 5; i++) {
    vec3 biased = vec3(positionFromLightPov.xy + adjacentPixels[i]/shadowSpread, positionFromLightPov.z - bias);
    float litPercent = texture(shadowMap, biased);
    visibility *= max(litPercent, 0.85);
  }
  
  fragColor = vColor * visibility;
}

This is a very simple approach but pretty effective. You can play around with which pixels to sample, as well as with shadowSpread, to change the effect. You could also take more samples, sample more randomly, etc., but this shows the basic idea. Here is the outcome:

Not bad for just sampling the four neighboring pixels.
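As one example of "sampling more," a 3×3 box of taps is a natural next step. This is a sketch that drops into the same main as above; note that the visibility floor needs retuning, since nine samples multiply together instead of five:

```glsl
// Sample a full 3x3 box around the current texel instead of a plus sign.
float visibility = 1.0;
for (int x = -1; x <= 1; x++) {
  for (int y = -1; y <= 1; y++) {
    vec3 sampleAt = vec3(
      positionFromLightPov.xy + vec2(x, y) / shadowSpread,
      positionFromLightPov.z - bias
    );
    // Raise the floor (e.g. 0.92) so nine multiplied taps
    // don't darken the shadow more than intended.
    visibility *= max(texture(shadowMap, sampleAt), 0.92);
  }
}
```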

Simple Performance Improvements

So far we’ve been converting pixels to texture space in our fragment shader like so:

vec3 lightPovPositionInTexture = positionFromLightPov.xyz * 0.5 + 0.5;

A multiplication and an addition aren't exactly a performance nightmare, but we run this for every single pixel that's rendered. We could instead do it once per draw call. All our vertices are already multiplied by the light POV matrix to get their position from the light's point of view. If we first multiply that matrix by a matrix that scales by 0.5 and translates by 0.5, we get the same effect. I didn't do this initially to keep things as simple as possible, but it's a nicer way of handling the conversion.

// Scales x, y, z by 0.5 and translates each by 0.5.
// DOMMatrix takes values in column-major order, so the
// translation lives in the last four entries.
const textureSpaceConversion = new DOMMatrix([
  0.5, 0.0, 0.0, 0.0,
  0.0, 0.5, 0.0, 0.0,
  0.0, 0.0, 0.5, 0.0,
  0.5, 0.5, 0.5, 1.0
]);
const textureSpaceMvp = textureSpaceConversion.multiply(lightPovMvp);
const lightPovMvpRenderLocation = gl.getUniformLocation(program, 'lightPovMvp');
gl.useProgram(program);
gl.uniformMatrix4fv(lightPovMvpRenderLocation, false, textureSpaceMvp.toFloat32Array());
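If you want to convince yourself that the folded matrix really is equivalent to the shader's `* 0.5 + 0.5`, here's a quick standalone check using plain column-major arrays. No WebGL or DOMMatrix needed; the multiply and transform helpers are ad hoc for this snippet, and the lightPovMvp values are just a stand-in:

```javascript
// Column-major 4x4 multiply: out = a * b (same convention as DOMMatrix).
function mat4Multiply(a, b) {
  const out = new Array(16).fill(0);
  for (let col = 0; col < 4; col++) {
    for (let row = 0; row < 4; row++) {
      for (let k = 0; k < 4; k++) {
        out[col * 4 + row] += a[k * 4 + row] * b[col * 4 + k];
      }
    }
  }
  return out;
}

// Transform a vec4 by a column-major 4x4 matrix.
function transform(m, v) {
  return [0, 1, 2, 3].map(row =>
    m[row] * v[0] + m[4 + row] * v[1] + m[8 + row] * v[2] + m[12 + row] * v[3]
  );
}

// Scale xyz by 0.5, translate xyz by 0.5 (same as textureSpaceConversion).
const conversion = [
  0.5, 0.0, 0.0, 0.0,
  0.0, 0.5, 0.0, 0.0,
  0.0, 0.0, 0.5, 0.0,
  0.5, 0.5, 0.5, 1.0,
];

// Any matrix works for the check; this is a made-up stand-in.
const lightPovMvp = [
  1, 0, 0, 0,
  0, 2, 0, 0,
  0, 0, 1, 0,
  3, 4, 5, 1,
];

const folded = mat4Multiply(conversion, lightPovMvp);
const v = [0.25, -0.5, 0.75, 1.0];

// Path 1: the pre-folded matrix, as uploaded to the shader.
const a = transform(folded, v);
// Path 2: the original matrix, then xyz * 0.5 + 0.5 as the shader did.
const b = transform(lightPovMvp, v).map((x, i) => (i < 3 ? x * 0.5 + 0.5 : x));

console.log(a); // [ 2.125, 2, 3.375, 1 ]
console.log(b); // [ 2.125, 2, 3.375, 1 ]
```

For points with w = 1 (our light uses an orthographic projection) the two paths are identical; the only real difference is that the work now happens once per draw call instead of once per fragment.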

We can now remove the `* 0.5 + 0.5` from our shader. Another minor improvement, and one we'll put to further use shortly, is to enable face culling.

gl.enable(gl.CULL_FACE);

Add Lighting Calculations

To start, let's swap out vertex colors for vertex normals and add basic lighting. Basic lighting is outside the scope of this tutorial, but you can find the basic lighting example here in the companion repo. For an in-depth tutorial, I recommend this video: WebGL 2: Directional diffuse lighting.
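If you'd rather not leave the page, the core of directional diffuse lighting is just a dot product between the surface normal and the light direction. A minimal sketch, assuming uLightDirection is normalized and points from the surface toward the light (variable names match the shader below):

```glsl
in vec3 vNormal;              // interpolated surface normal
uniform vec3 uLightDirection; // normalized, surface-to-light

out vec3 fragColor;

void main()
{
  // Cosine of the angle between normal and light:
  // 1.0 facing the light directly, <= 0.0 facing away.
  float lightCos = dot(uLightDirection, normalize(vNormal));
  float brightness = max(lightCos, 0.2); // 0.2 = ambient floor
  fragColor = vec3(1.0) * brightness;
}
```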

The base lit and shaded scene looks like this:

We can actually add our shadow logic directly to this; we just need to be a little careful never to let a pixel be drawn darker than the ambient light. With both the light-angle brightness and the shadow darkening in play, we don't want an already dark pixel to become even darker because of the shadow. As long as our final value is maxed against the ambient light, this isn't a problem. Here's our final fragment shader code:

#version 300 es
precision mediump float;

uniform vec3 uLightDirection;

in vec3 vNormal;
in vec4 positionFromLightPov;

uniform mediump sampler2DShadow shadowMap;

out vec3 fragColor;

float ambientLight = 0.2;

vec2 adjacentPixels[5] = vec2[](
  vec2(0, 0),
  vec2(-1, 0), 
  vec2(1, 0), 
  vec2(0, 1), 
  vec2(0, -1)
);

vec3 color = vec3(1.0, 1.0, 1.0);

float bias = 0.002;
float visibility = 1.0;
float shadowSpread = 800.0;

void main()
{
  for (int i = 0; i < 5; i++) {
    vec3 biased = vec3(positionFromLightPov.xy + adjacentPixels[i]/shadowSpread, positionFromLightPov.z - bias);
    float hitByLight = texture(shadowMap, biased);
    visibility *= max(hitByLight, 0.87);
  }
  
  vec3 normalizedNormal = normalize(vNormal);
  float lightCos = dot(uLightDirection, normalizedNormal);
  float brightness = max(lightCos * visibility, ambientLight);
  fragColor = color * brightness;
}

And it looks like this:

Biasing Alternative

So far we've added a bias in our fragment shader to eliminate shadow acne: the pixel's depth and the stored depth-texture value are so close together that the comparison flips unpredictably, creating random shadow pixels. Our fix was to sample the depth texture at a slightly smaller depth, creating a small gap between the pixel depth and the stored depth. There's actually another way to create this gap: face culling. By culling front faces when rendering the depth texture, we record only the depth of the back face of every object. When we render the scene, we go back to back-face culling, testing and rendering the front face of every object. For every area in the light, this creates a gap the size of your geometry: the distance between the back of your mesh and the front of your mesh is the bias. Here's a comparison of our depth texture for back faces vs. front:

Technically this will still create shadow acne in the shaded areas. Any face that is back facing from the point of view of the light now has a very similar depth to what is in the depth texture, which is what causes the random shadow pixels. However, this doesn’t matter, because our lighting logic has already shaded in those faces, so any random shadow pixels are invisible. You can see this in action in the culling instead of bias sample code here. Other than removing the bias from the fragment shader, the key is swapping which faces to cull in the draw call:

// Depth pass: render the shadow map, culling front faces
// so only back-face depths are recorded.
gl.useProgram(depthProgram);
gl.bindFramebuffer(gl.FRAMEBUFFER, depthFramebuffer);
gl.viewport(0, 0, depthTextureSize.x, depthTextureSize.y);
gl.cullFace(gl.FRONT);
gl.drawArrays(gl.TRIANGLES, 0, verticesPerCube * 2);

// Render pass: back to normal back-face culling.
gl.useProgram(program);
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);
gl.bindTexture(gl.TEXTURE_2D, depthTexture);
gl.uniform1i(shadowMapLocation, 0);
gl.cullFace(gl.BACK);
gl.drawArrays(gl.TRIANGLES, 0, verticesPerCube * 2);

And…that's it. Please let me know if there's something still confusing, something I missed, or anything that could be improved! Check out the companion GitHub repo if you haven't!