The Making of Charon Jr. (JS13K Postmortem)

I came in 6th place! Thanks to everyone who played and voted!

This was my first ever 3D game, and really only the third game I’ve ever made. At the end of last year’s JS13K I knew I wanted to move to 3D, so I went on Khan Academy and learned about vector math, matrix math, and basic trigonometry. This summer I started learning about WebGL and 3D rendering, and eventually put together a game :). This post is a little long, so here’s an index of topics you can use to jump to the parts you find interesting.

  1. Basic 3D Rendering and Collision
  2. Better Collision Detection
  3. Texture Mapping and Noise Generation
  4. A (Pretty) Good Third Person Camera
  5. 3D Modeling: A Cube can be Every Shape
  6. Revisiting Texturing
  7. Creating the Environment
  8. 3D Spatial Audio
  9. Making Driving Fun
  10. Lessons Learned and Summary

 

Basic 3D Rendering and Collision

The first step was learning WebGL at a deep enough level to be able to make a renderer that was pretty efficient size-wise. For that I owe a huge thanks to Andrew Adamson’s YouTube channel. Each video breaks down a concept at a pretty deep level while staying very focused, which let me internalize the info a lot more easily than other sources I had found. Using that info, I eventually got basic shapes rendering, and even implemented basic 3D axis-aligned bounding box (AABB) collision. This was huge for me: I had rendering and collision in 3D! I recorded this video on June 23rd:

 
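For reference, the AABB test itself is tiny. A minimal sketch of the idea (illustrative, not my exact code) is just an overlap comparison per axis:

// Two axis-aligned boxes collide only if their extents overlap on x, y, AND z
interface AABB {
  min: { x: number; y: number; z: number };
  max: { x: number; y: number; z: number };
}

function aabbIntersects(a: AABB, b: AABB): boolean {
  return a.min.x <= b.max.x && a.max.x >= b.min.x
    && a.min.y <= b.max.y && a.max.y >= b.min.y
    && a.min.z <= b.max.z && a.max.z >= b.min.z;
}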

While this was exciting, AABBs are extremely limited, only allowing boxes aligned directly to the x, y, and z axes. I wanted a lot more flexibility. I knew I wanted simple but decent 3D collision, and while I had heard of a number of solutions, a lot of them are hard to find broken down in an easy-to-understand way. But I did know of a likely source for decent yet simple 3D collision detection that was well documented… Mario 64. I checked out the Mario 64 decompilation project and tried to learn how it did collision.

Better Collision Detection

Here’s my floor collision detection:

for (const floor of floorFaces) {
  const { x: x1, z: z1 } = floor.points[0];
  const { x: x2, z: z2 } = floor.points[1];
  const { x: x3, z: z3 } = floor.points[2];

  // Check that the position is within the triangle's bounds on the x/z plane,
  // using the sign of the 2D cross product against each edge
  if ((z1 - position.z) * (x2 - x1) - (x1 - position.x) * (z2 - z1) < 0) {
    continue;
  }

  if ((z2 - position.z) * (x3 - x2) - (x2 - position.x) * (z3 - z2) < 0) {
    continue;
  }

  if ((z3 - position.z) * (x1 - x3) - (x3 - position.x) * (z1 - z3) < 0) {
    continue;
  }

  // Find the height of the floor at this position from the triangle's plane equation
  height = -(position.x * floor.normal.x + floor.normal.z * position.z + floor.originOffset) / floor.normal.y;

  // Only count this floor if the position is above it (with a small buffer)
  const buffer = -3;
  if (position.y - (height + buffer) < 0) {
    continue;
  }

  return {
    height,
    floor,
  };
}

Here’s an excerpt of Mario 64’s floor collision code:

while (surfaceNode != NULL) {
    surf = surfaceNode->surface;
    surfaceNode = surfaceNode->next;

    x1 = surf->vertex1[0];
    z1 = surf->vertex1[2];
    x2 = surf->vertex2[0];
    z2 = surf->vertex2[2];

    // Check that the point is within the triangle bounds.
    if ((z1 - z) * (x2 - x1) - (x1 - x) * (z2 - z1) < 0) {
        continue;
    }

    // To slightly save on computation time, set this later.
    x3 = surf->vertex3[0];
    z3 = surf->vertex3[2];

    if ((z2 - z) * (x3 - x2) - (x2 - x) * (z3 - z2) < 0) {
        continue;
    }
    if ((z3 - z) * (x1 - x3) - (x3 - x) * (z1 - z3) < 0) {
        continue;
    }

    nx = surf->normal.x;
    ny = surf->normal.y;
    nz = surf->normal.z;
    oo = surf->originOffset;

    // Find the height of the floor at a given location.
    height = -(x * nx + nz * z + oo) / ny;
    // Checks for floor interaction with a 78 unit buffer.
    if (y - (height + -78.0f) < 0.0f) {
        continue;
    }

    *pheight = height;
    floor = surf;
    break;
}

//! (Surface Cucking) Since only the first floor is returned and not the highest,
//  higher floors can be "cucked" by lower floors.
return floor;

You can see there is a large degree of similarity, and this is true for wall collision as well. Of course I made some tweaks to the code, first to convert it to TypeScript, but also to make it a bit smaller. With this change in place, you can see I had very flexible collision detection! This was working on June 26th.

 

Texture Mapping and Noise Generation

I knew I wanted Perlin noise for texture generation, and last year I had started to look into how to generate it. While the core concepts made sense, there was one large problem: I didn’t know how to make it repeatable so I could tile it across a surface. After a number of Google searches, I found that by shuffling a list of preset gradient angles and repeating that sequence, the noise wraps back around to its starting values, making it tileable.
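A rough sketch of that idea (illustrative only, not my exact implementation) looks something like this, where the shuffled angle table wraps around with a fixed period so the noise tiles:

// Tileable 2D Perlin-style noise: a shuffled table of preset gradient angles is
// indexed with wrapped (modulo) coordinates, so the pattern repeats every `period`
// cells and can be tiled across a surface.
const period = 16;
const angles = Array.from({ length: period }, (_, i) => (i / period) * Math.PI * 2);
const perm = [...angles.keys()].sort(() => Math.random() - 0.5); // shuffled indices

const gradientAt = (ix: number, iy: number) => {
  const angle = angles[perm[(perm[ix % period] + iy % period) % period]];
  return { x: Math.cos(angle), y: Math.sin(angle) };
};

const fade = (t: number) => t * t * t * (t * (t * 6 - 15) + 10);
const lerp = (a: number, b: number, t: number) => a + (b - a) * t;

// Dot product of a corner's gradient with the offset from that corner to the sample point
const cornerDot = (cx: number, cy: number, x: number, y: number) => {
  const g = gradientAt(cx, cy);
  return g.x * (x - cx) + g.y * (y - cy);
};

function tileableNoise(x: number, y: number): number {
  const x0 = Math.floor(x);
  const y0 = Math.floor(y);
  const top = lerp(cornerDot(x0, y0, x, y), cornerDot(x0 + 1, y0, x, y), fade(x - x0));
  const bottom = lerp(cornerDot(x0, y0 + 1, x, y), cornerDot(x0 + 1, y0 + 1, x, y), fade(x - x0));
  return lerp(top, bottom, fade(y - y0));
}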

Separately, I had to figure out how to use textures in WebGL. Limitations of WebGL textures mean that you either pack everything into a single texture used as a texture atlas, or you use a 3D texture as a texture array. I initially attempted a texture atlas, but repeating those textures caused me issues. When I asked about it on Stack Overflow, I was pointed toward the 3D texture path instead. I’m very glad I did in the end, as this opened up some cool behaviors, like the ability to have smooth trails throughout the level.
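Creating that 3D texture in WebGL 2 roughly means uploading each generated texture as one “depth” layer of a single texture. A hedged sketch (names and sizes are illustrative, and this isn’t my exact code):

// Upload each generated texture canvas as one layer of a single 3D texture.
// gl, textureCanvases, and size are assumed names for this sketch.
const texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_3D, texture);
gl.texImage3D(gl.TEXTURE_3D, 0, gl.RGBA8, size, size, textureCanvases.length, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);

textureCanvases.forEach((canvas, layer) => {
  gl.texSubImage3D(gl.TEXTURE_3D, 0, 0, 0, layer, size, size, 1, gl.RGBA, gl.UNSIGNED_BYTE, canvas);
});

// REPEAT wrapping is what lets the same texture tile across a large surface
gl.texParameteri(gl.TEXTURE_3D, gl.TEXTURE_WRAP_S, gl.REPEAT);
gl.texParameteri(gl.TEXTURE_3D, gl.TEXTURE_WRAP_T, gl.REPEAT);
gl.texParameteri(gl.TEXTURE_3D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_3D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);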

With all this done, I not only had tileable textures, but I could use my Perlin noise values as a heightmap and create terrain! By July 6th, I had textures!

 

A (Pretty) Good Third Person Camera

At this point I wanted a decent third person camera. Much like collision detection, this seemed like a bit of a dark art, with very little information available online. Every tutorial I found for a third person camera described a system where pressing left or right rotates the player in place. Outside of terrible games like Bubsy 3D, I don’t know of any third person camera that actually behaves this way. Not wanting to make a terrible game, I decided to just study how other simple but effective 3D games did their cameras and try to copy them.

After studying Spyro and Mario 64’s “Mario” cam (the default Lakitu camera is too involved for a 13kb game), I noticed some basic behavior that, despite playing through these games, I had never really noticed. In these games, you turn relative to the camera, and holding left or right moves you along a circle around the camera, with a radius equal to your character’s distance from it. Let me show you what I mean:

In both clips, I hold directly left or right, and in both cases the character runs in a circle around the camera. I can then run towards or away from the camera, and it will eventually follow me and point in the direction I’m looking. I made my camera lerp behind you: it approaches its target position by a percentage each frame, so the speed it moves at is relative to how far away it is. This lets it follow smoothly but never fall too far behind. I combined this with limiting the camera’s follow distance to a certain radius, and with using the angle from the camera to the player to set the direction the player moves, and after many attempts I finally got very similar behavior.

Code sample:

const speed = 0.1;

// Angle of the player's input, and angle from the camera to the player
const mag = controls.direction.magnitude;
const inputAngle = Math.atan2(-controls.direction.x, -controls.direction.z);
const playerCameraDiff = this.mesh.position.clone().subtract(this.camera.position);
const playerCameraAngle = Math.atan2(playerCameraDiff.x, playerCameraDiff.z);

// Movement is relative to the camera: combine the input angle with the camera-to-player angle
if (controls.direction.x !== 0 || controls.direction.z !== 0) {
  this.angle = inputAngle + playerCameraAngle;
}

this.velocity.z = Math.cos(this.angle) * mag * speed;
this.velocity.x = Math.sin(this.angle) * mag * speed;

// Face the direction of travel
this.mesh.setRotation(0, this.angle, 0);
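The camera side isn’t shown above, but the follow behavior I described boils down to something like this (a sketch with made-up names and values, not my exact code):

// Each frame the camera eases a percentage of the way toward a target point near the
// player, so it moves faster the farther behind it is, and a radius clamp keeps the
// player from outrunning it. camera, player, and the numbers here are illustrative.
const followPercent = 0.05;
const maxFollowDistance = 17;

const target = { x: player.position.x, y: player.position.y + 8, z: player.position.z };
camera.position.x += (target.x - camera.position.x) * followPercent;
camera.position.y += (target.y - camera.position.y) * followPercent;
camera.position.z += (target.z - camera.position.z) * followPercent;

// If the camera has fallen farther back than the follow radius, pull it onto the radius
const playerToCamera = camera.position.clone().subtract(player.position);
if (playerToCamera.magnitude > maxFollowDistance) {
  playerToCamera.normalize().scale(maxFollowDistance);
  camera.position.x = player.position.x + playerToCamera.x;
  camera.position.y = player.position.y + playerToCamera.y;
  camera.position.z = player.position.z + playerToCamera.z;
}
// ...then aim the camera at the player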

And here is the result in action, achieved on July 13th:

 

3D Modeling: A Cube can be Every Shape

Next I needed 3D models. A big lesson learned from last year was that loading in assets was just not worth the space they took, and I was better off generating assets via code. With this lesson in mind, I started thinking about 3D models, and especially about one very dumb yet maybe good idea: a cube can be any shape. Not like Minecraft, where you can make anything out of a lot of cubes, but rather that a cube is just a collection of triangles, so couldn’t those triangles simply be moved around to create other things? The answer, interestingly, turned out to be yes. Enter my 3D modeling scripting tool, the MoldableCube.

Let’s start with a basic cube. No surprises here: it’s an 8x8x8 cube where each face is made up of two triangles. The code and result can be seen below:

new MoldableCube(8, 8, 8, 1, 1, 1)

However, you can also make the cube’s surfaces have more triangles in them by changing the widthSegments, heightSegments, and depthSegments. Here I give 6 segments each for width, height, and depth.

new MoldableCube(8, 8, 8, 6, 6, 6)

Now you may say: who cares, the cube looks the same. But now there are more vertices to do things with. For instance, what if we took each vertex, normalized it to keep its angle from the center but discard its distance, and then scaled all vertices out by the same distance:

spherify(radius: number) {
  this.verticesToActOn.forEach(vertex => {
    vertex.normalize().scale(radius);
  });
}

 

What if, while doing the above, we first move the vertex to the origin on one of the axes, then move it back out afterwards, effectively not spherifying along that axis:

cylindrify(radius: number, aroundAxis: 'x' | 'y' | 'z' = 'y') {
  this.verticesToActOn.forEach(vertex => {
    const originalAxis = vertex[aroundAxis];
    vertex[aroundAxis] = 0;
    vertex.normalize().scale(radius);
    vertex[aroundAxis] = originalAxis;
  });
}

Combine these with the ability to select only certain vertices, scale, rotate, translate, and merge multiple moldable cubes together, and you can make many, many things. Here’s a wheel and tire from the game:

function createTire() {
  return createBox(6, 2, 1, 6, 1, 1)
    .selectBy(vertex => Math.abs(vertex.x) < 2.5 && Math.abs(vertex.z) < 2.5)
    .cylindrify(1.5, 'y')
    .invertSelection()
    .cylindrify(3.5, 'y')
    .all()
    .computeNormalsCrossPlane()
    .done();
}

function createWheel() {
  return new MoldableCube(2, 2, 2, 4, 1, 4)
    .selectBy(vertex => Math.abs(vertex.x) > 0.4 && Math.abs(vertex.z) > 0.4)
    .cylindrify(1.5)
    .invertSelection()
    .scale(1, 0.5, 1)
    .all()
    .computeNormalsPerPlane()
    .done();
}

I took a break from working on the game engine from the end of July through the start of the competition. Once the competition started, I decided on a driving game. This meant I had to tweak some collision work, camera work, and the controls. In addition, I needed nicer terrain and a well-textured vehicle…

Revisiting Texturing

When setting up texturing initially with my texture array, I purposely made the texture index a vertex attribute rather than a uniform. This means that a different value can be passed in for every vertex rendered. I had a feeling this would prove useful but wasn’t sure of the best way to exploit it. However, when modeling my truck and landscape, the benefits became apparent.

The first and simplest benefit is that it’s quite easy to have each side of my moldable cube use its own texture. A simple helper method creates an array of texture ids for the vertices on each side of the cube. This allowed me to put headlights and a grill on the front of the car, but a skull on the top of the hood.

function getTextureForSide(uDivisions: number, vDivisions: number, texture: Texture) {
  return new Array((uDivisions + 1) * (vDivisions + 1)).fill().map(_ => texture.id);
}

const texturesPerSide = MoldableCubeGeometry.TexturePerSide(3, 3, 5,
  materials.truckCabRightSide.texture,
  materials.truckCabLeftSide.texture,
  materials.truckCabTop.texture,
  materials.truckCabRear.texture,
  materials.truckCabRear.texture,
  materials.truckCabFront.texture,
);

cab.setAttribute(AttributeLocation.TextureDepth, new Float32Array(texturesPerSide), 1);

This is also used on the tombstones to put RIP on the front but not the edges or back.

The more nuanced benefit came into play when I wanted to add trails to my levels. I really wanted something in the game to help give the player a sense of where they were in the world, like more unique landmarks. In addition to the hand-placed tombstones added later, an early addition that helped with this was trails. I could generate nice looking lines with a very simple modification to my generated Perlin noise, but how could I place them in a nice way across the whole landscape?

First, I limit my line noise to values between 0 and 1. Values of 0 are dirt paths, and values of 1 are basic grass floor. With my noise functions, this is easy enough. Here are some lines drawn with noise: trails are black, floor is white, and there is a slow transition between the two.

The scaling on these has been reduced to make the effect more noticeable; in the noise used in the game, there are only a few gray pixels around the edges of the path, and the rest is all white. However, this shows the smooth transition well, and that smooth transition is actually very important for making smooth paths. In order to understand how the paths work, you have to understand a bit about WebGL and 3D textures. We’re using a texture array for textures, where each texture lives at an index, but in reality this is also a 3D texture: the index is just the z position within the 3D texture. Since WebGL supports 3D textures, you can actually specify a different “depth” for each vertex, and it will interpolate between them.

For a very simple example, consider this cube: the upper left, upper right, and lower left corners use the brick texture, which is stored at index 4. The lower right corner uses the index of a tile texture, which is two textures up at position 6. You can see it starts out with the tile, but then quickly changes to what’s at index 5, which is the tree bark texture, then to brick:

And this can be tweaked: if I set the upper right corner and lower left corner to 4.5, it changes the interpolation rate:

Notice that the surface isn’t just one texture, it’s actually three. And the three textures aren’t split out by individual triangles, but rather by where WebGL considers the depth to have changed past halfway to the next texture. In the image above, since the upper right and lower left are 4.5, it decides to use the texture at index 5, creating a smooth line there. Farther down, since the lower right corner is 6, as soon as the texture index passes 5.5 it switches to the tile. It’s simply interpolating between the values and switching textures when it decides it’s moved to the next index.

Combine this functionality with my noise values, which are smoothly interpolated values between 0 and 1, and we can make nice smooth lines across our floor.
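Putting the two together is then just a matter of writing the noise value into the per-vertex texture index. Roughly (with assumed names like trailTexture, pathNoise, and floorGeometry, and assuming the grass texture sits at the next index):

// Per-vertex depth = trail texture index + 0..1 trail noise, so WebGL's depth
// interpolation blends smoothly from dirt path (0) to grass (1)
const depths = floorGeometry.vertices.map((_, index) => trailTexture.id + pathNoise[index]);
floorGeometry.setAttribute(AttributeLocation.TextureDepth, new Float32Array(depths), 1);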

Creating the Environment

Environments in Charon Jr. mostly consist of the previously mentioned trails, plus water, grass, trees, rocks, and spirits to pick up. All of this is generated from Perlin noise. Spirit drop-off positions and tombstones, however, are placed by hand.

 

Level generation takes a heightmap for the terrain, a seed for the paths, a water level, and an “environment seed”. The environment seed is used for everything generated in the environment, as mentioned above. Perlin noise is generated for every vertex on the floor. First it checks whether the floor’s heightmap value is below the water level; if it is, we stop here, as there shouldn’t be any trees, grass, rocks, or spirits underwater.

Next we check the noise generated for the paths. We also don’t want any environmental items to block the paths, so those values are ignored too. After this it becomes a bit more nuanced. My environment noise has a scale of three, meaning each noise value is simply a value between -3 and 3. I then played with different value ranges to get distributions I thought looked good. See the code excerpt below:

// No landscape underwater
if (yPosition <= waterLevel + 5) {
  return;
}

// No landscape on the trail
if (path[index] <= 0.7) {
  return;
}

const currentFloorMesh = this.floorMesh.geometry.vertices[index];

// Rock Positions
if (currentNoiseValue < 0 && currentNoiseValue >= -0.005) {
  const rockGeoTransform = getMatrixForPosition(currentFloorMesh, yPosition, 1, 2);
  rockTransforms.push(rockGeoTransform);
  return;
}

// Spirit Positions
if (
  (currentNoiseValue < -1.2 && currentNoiseValue > -1.22)
  || (currentNoiseValue > 1.8 && currentNoiseValue < 1.82)
  || (currentNoiseValue < -2.0 && currentNoiseValue > -2.02)
) {
  spiritPositions.push(new EnhancedDOMPoint(currentFloorMesh.x, yPosition, currentFloorMesh.z));
  return;
}

// With rocks and spirits drawn, filter out all other values less than 1 before continuing.
// This is so spirits and rocks don't appear in the middle of a "forested" area, so this is like a separator.
if (currentNoiseValue < 1) {
  return;
}

// Place either tree or grass depending on value
if (currentNoiseValue >= 1 && currentNoiseValue < 1.4) {
  grassTransforms.push(getMatrixForPosition(currentFloorMesh, yPosition, 0.7, 1.5));
} else {
  treeTransforms.push(currentFloorMesh);
}

And that’s it: levels can now be generated with just a few seed values. Since Perlin noise creates natural looking flows, the landscape looks fairly natural rather than just randomly positioned. That, combined with using similar value ranges for trees and grass, creates “forested” areas. Rocks and spirits are placed outside these areas, and as you can see, spirits are the only item that uses three very different value ranges. This is because spirits should be the most widely distributed: I don’t really want them all grouped together, but rather placed all over yet spread out. Giving them more but smaller ranges allows this to happen, and you get a nice environment!

3D Spatial Audio

Last year we wrote our own asset engine, which included audio. Again, from lessons learned: loading asset files isn’t worth it, just generate assets in code. Since ZZFX already does this, I used it. However, being a 3D game, we need 3D spatial audio, meaning sounds get louder as you get closer and get a different stereo split depending on whether they are to your right or left.

This turned out to be very simple. A minor tweak to zzfx lets you just create the audio buffer (or use zzfxm, though I found it has a bug where it does not treat modulations the same as zzfx, which messed up some sounds for me). Normally zzfx creates the buffer and plays it; instead, make it just return the buffer. Now you can do whatever you want with it, including sending it into a panner node:

function createPannerNode(buffer: number[]) {
  return (position: EnhancedDOMPoint) => {
    const panner = new PannerNode(audioCtx, {
      panningModel: 'equalpower',
      distanceModel: 'linear',
      positionX: position.x,
      positionY: position.y,
      positionZ: position.z,
      refDistance: 1,
      maxDistance: 80,
      rolloffFactor: 30,
      coneInnerAngle: 360,
      coneOuterAngle: 360,
      coneOuterGain: 0.4
    });
    const node = zzfxP(buffer);
    node.loop = true;
    node.connect(panner).connect(audioCtx.destination);
    return node;
  }
}

Since I only use 3D spatial audio for ghost sounds, I simply hard coded all the settings, but they could easily be passed in for different sounds.

You also must create a “listener” and keep its position and direction updated. I do this inside my third-person-player class as the player moves around.

this.listener.positionX.value = this.mesh.position.x;
this.listener.positionY.value = this.mesh.position.y;
this.listener.positionZ.value = this.mesh.position.z;

this.listener.forwardX.value = cameraPlayerDirection.x;
this.listener.forwardZ.value = cameraPlayerDirection.z;
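For completeness, this.listener here is presumably just the AudioContext’s built-in listener, stored once at startup (my assumption about the wiring rather than code from the game):

// Assumed setup: use the AudioContext's built-in listener and update it each frame as above
this.listener = audioCtx.listener;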

Making Driving Fun

The final dark art I had to conquer was making decent driving physics. If you’ve played any racing games, you know this isn’t trivial. Many full release games have fairly poor physics, and I wanted mine to at least feel okay. I won’t claim to have the best arcade driving out there, but given the extreme size and time limitations of the competition, I’m very happy with how they came out.

I initially tried googling how to do this, and much like third person cameras and 3D collision detection… there’s not much out there. The one main result is a paper written a long time ago that attempts a somewhat realistic simulation. However… it’s not very realistic… and it’s not really fun at all. With that in mind, and time running out, I once again decided to just figure it out.

My main thought process was this: have a “direction I want to go” angle and a “direction I am going” angle. The “direction I want to go” is set by the player’s steering input, with a modifier curve based on how much ability they have to turn the car. Then, based on a traction percentage, the “direction I am going” moves linearly towards the “direction I want to go”. For example, a turn to the right rotates the car to point right, but it keeps moving straight; each frame, based on the traction percentage, you start traveling more and more in the direction you are pointing. With this system, I could simply tweak the traction percentage until it felt okay, and the result was decent.
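In code form, the core of that idea is only a few lines. This is a simplified sketch rather than my exact implementation, with illustrative names and values:

// desiredAngle is where the truck points, travelAngle is where it actually moves
const traction = 0.05; // percentage per frame; tweak until the drift feels right

// Steering rotates the direction we want to go, scaled by the turning ability curve (below)
desiredAngle += steeringInput * turningAbility(speedPercent);

// Traction drags the direction we are going toward the direction we want to go
// (a real implementation also needs to handle the angle wrapping past ±PI)
travelAngle += (desiredAngle - travelAngle) * traction;

// The truck model faces desiredAngle, but velocity follows travelAngle
velocity.x = Math.sin(travelAngle) * speed;
velocity.z = Math.cos(travelAngle) * speed;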

For the “want to go” direction, I created a custom “turning ability” curve, which looks like this:

Here, the X axis is the speed you are moving at, and the Y axis is your ability to rotate the car. In a car, if you aren’t moving at all, you can’t rotate, so it starts at zero. As your speed increases, it quickly goes up, giving you maximum rotation ability at around 40% speed. Beyond that, your turning ability trails off. This gives you more stability at higher speeds, while also somewhat simulating understeer when attempting quick turns at high speed on dirt.
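The turningAbility function used in the sketch above could be as simple as a piecewise curve with roughly that shape. The numbers here are illustrative, not the game’s actual values:

// Zero at a standstill, full rotation ability around 40% of max speed,
// then trailing off toward top speed for stability
function turningAbility(speedPercent: number): number {
  const peak = 0.4;
  if (speedPercent <= peak) {
    return speedPercent / peak; // ramp up quickly from a standstill
  }
  return 1 - 0.5 * ((speedPercent - peak) / (1 - peak)); // trail off at high speed
}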

Having a single curve determine turning ability, much like having a single traction percentage, let me tweak the feel of the car relatively easily. I also support controller input, including analog steering, gas, and brake. Using a controller definitely gives you better control of the car compared to a keyboard, and a better feel.

Given more time and space, there are a number of other physics improvements I would definitely make, but I feel very good about the driving feel considering the limitations of the competition.

Lessons Learned and Summary

There were a handful of issues I definitely learned from here, mostly around performance. This is the first game I’ve made where performance could realistically be an issue. Here are some issues I dealt with that will definitely be useful for next year!

  • DOMMatrix multiplication is pretty slow. I used it for the game and it was fine, but I had to put in a couple of performance hacks to make it fast enough on slow machines. While my gaming PC had no issue doing a couple thousand multiplications per frame, my 2015 MacBook slowed to a crawl. Reducing the number of spirits (which I needed to do anyway) and merging their geometry into a single mesh largely resolved this issue, but it’s something to note for sure. Lots of advanced animations or tons of particles will not be possible with DOMMatrix.
  • 3D noise generation in JavaScript is pretty slow. This accounts for much of the game’s slow loading time. Really this should be generated in shader code, but the mental overhead of learning about noise and learning to write GLSL was already a lot, and combining them just wasn’t going to be an option. Next year I’d like to change that, which should drastically reduce loading times.
  • Slower machines can only handle around 50 panner nodes. I had no idea what the limit was here, and my gaming PC handled 200+ with no issue, but going back to the old MacBook made it very clear this was too many. Again, reducing the number of spirits in a level largely fixed this, but I still made it so only 2/3 spirits play sounds for performance reasons.

Obviously a ton more work went into making this game than what is outlined here, but I tried to cover the most interesting bits. Thanks for reading, and please check out the game or code at the links below:

Please play the game here if you haven’t already: https://js13kgames.com/entries/charon-jr

The source code can be found here: https://github.com/roblouie/charon-jr