The Making of Whiskers Witch Adventure (JS13K 2025)
As always, this year's entry builds on my js13k 3D game engine, which I have been using and improving since 2022. To learn more about the engine and its improvements over time, check out previous years' blog posts: 2022, 2023, 2024.
Animations
The most obvious improvement this year is the addition of full animation. Previous games had simple matrix-based animations, where different parts of items were added to the scene graph as children of others, allowing them to move or rotate relative to their parent. For instance, in Charon Jr, the wheels rotated based on speed, and the front wheels turned when you turned the steering wheel. Upyiri and 13th Floor had doors that rotated open.
This year, however, we have full control over any kind of animation, without the overhead of another level of matrix multiplication. The idea for animation in a js13k game came after working on a UI for the Memcard Pro. This is an aftermarket memory card for the PS1, and now PS2, that lets you store tons of saved games on one large SD card, rather than needing 100 memory cards. While the hardware is nice, the UI is so bad it's effectively useless: it just lets you switch between virtual memory cards, with no way of knowing which games are on which card. Luckily the UI is just a small webpage, so I made my own, which you can see a video of here.
This was quite a fun project: reverse engineering the PS1 memory card data format to pull out not only the game and save names, but also the animated icon. When that was finished, I moved on to PS2 support, and PS2 memory card saves feature 3D animated icons. They simply store multiple keyframes of vertex positions and then alternate between keyframes according to a list. This is very small to store but still looks very good, so I implemented a similar system for my js13k engine.
Since the modeling scripting tool is already used for all modeling, it's very easy to simply pass in arguments that adjust transformations for each keyframe. Here is the creation function for the main character cat's front legs, which now simply takes an argument for the left and right rotation:
function catFront(leftRot: number, rightRot: number) {
  return new MoldableCubeGeometry(bodyRadius, bodyRadius, bodyRadius, 8, 8, 8)
    .texturePerSide(materials.iron)
    .spherify(bodyRadius)
    // Right Leg
    .selectBy(vert => vert.x < 0.5 && vert.x > -0.5 && vert.y < -0.2 && vert.z > bodyRadius / 3)
    .translate_(0, -1)
    .selectBy(vert => vert.y < -bodyRadius - 0.2)
    .translate_(0, -0.5, 1)
    .scale_(0.5, 1, 0.5)
    .selectBy(vert => vert.z > 0)
    .rotate_(0, 0, rightRot)
    // Left Leg
    .selectBy(vert => vert.x < 0.5 && vert.x > -0.5 && vert.y < -0.2 && vert.z < -bodyRadius / 3)
    .translate_(0, -1)
    .selectBy(vert => vert.y < -bodyRadius - 0.2 && vert.z < 0)
    .translate_(0, -0.5, -1)
    .scale_(0.5, 1, 0.5)
    .selectBy(vert => vert.z < 0)
    .rotate_(0, 0, leftRot);
}
The front two legs are made by taking a single sphere, selecting a portion of the vertices, and stretching them out to form legs. I then actually rotate portions of the whole sphere, not just the legs. This gives the impression of chest muscles moving, giving the animation a much fuller feel. A similar approach is used for the rear legs for a classic cat butt wiggle.
Being able to animate actual vertex positions within a single mesh, rather than matrix transforms of multiple individual meshes, allows complete freedom of animation and generally a much nicer look. The additional space cost is minimal: every mesh can now have multiple frames and an alpha that controls the blend between them. Combine that with the existing dynamic modeling tools provided by my MoldableCubeGeometry class, and animated characters take barely any more space or work than non-animated ones.
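Conceptually, blending between two keyframes by that alpha is just a linear interpolation of vertex positions. A minimal sketch of the idea (the function name and buffer layout here are my own for illustration, not the engine's actual API):

```typescript
// Blend two vertex-position keyframes into a new buffer.
// alpha = 0 gives frame A exactly, alpha = 1 gives frame B exactly.
function blendKeyframes(
  frameA: Float32Array,
  frameB: Float32Array,
  alpha: number,
): Float32Array {
  const out = new Float32Array(frameA.length);
  for (let i = 0; i < frameA.length; i++) {
    out[i] = frameA[i] + (frameB[i] - frameA[i]) * alpha;
  }
  return out;
}
```

Because the keyframes come from the same modeling function called with different arguments, vertex counts and ordering always match, which is what makes this simple lerp safe.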
Improved and Smaller Skybox
Previous games with skyboxes used a texture cube that was sampled based on the camera direction and drawn onto a quad at the farthest distance. This is sort of the "standard" way of handling a skybox, and it works well when you can properly project a 360-degree view onto a texture cube. This projection accounts for the fact that a cube has corners, and distorts the image around the corners to make them invisible. That isn't really possible with self-generated 13kb skyboxes (or at least not a good use of space), so instead I've now opted to sample a regular 2D texture, treating it as if it were a sphere.
This approach removes the possibility of any corners, and it also removes the need for a texture cube entirely. In previous years, the horizontal sides of my skybox were one long image, which I had to slice into even segments for each side of the skybox, then generate the top and bottom, then bind it all to a texture cube. All of that is gone now; the sky is just one long 2D texture. While the spherical mapping added a tiny bit of shader code, all the wiring, binding, and texture slicing was removed, so I saved around 500 bytes in the end.
The only downside of this approach is that when you sample a flat image as a sphere, the very top and very bottom converge to a point, which distorts the image there. However, you never see the bottom of your skybox anyway, and this year, with a third-person game, you can never fully look up, so you never see the distorted clouds. If I were doing another first-person game, I'd simply use a gradient to fade the clouds out at the very top. That is something I had to do anyway with the texture cube, as otherwise you would see the seam where the horizontal texture ends and the top texture starts, so this is still a net win.
[Screenshots: the texture cube skybox, the new spherical skybox, and a debug view of the spherical sampling]
Above you can see the two approaches, followed by a "debug" view for the spherical sampling. This helps visualize the UV coordinates across the quad, including how the coordinates converge at the very top, and the line where the texture wraps back to its start.
The code is quite simple. Taking in the inverse view projection matrix and quad position from the vertex shader, the fragment shader is simply this:
void main() {
  vec4 t = u_viewDirectionProjectionInverse * v_position;
  vec3 dir = normalize(t.xyz / t.w);

  // Convert direction to spherical (longitude, latitude)
  float lon = atan(dir.x, dir.z);            // range -PI..PI
  float lat = asin(clamp(dir.y, -1.0, 1.0)); // range -PI/2..PI/2

  // Map to [0, 1]
  vec2 uv;
  uv.x = (lon / (2.0 * PI)) + 0.5;
  uv.y = (lat / PI) + 0.5;

  // Sample
  outColor = texture(uSampler, uv);
}
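The same direction-to-UV mapping can be mirrored on the CPU side, which is handy for sanity checking the math. Here is a TypeScript port of the shader logic (a helper of my own, not code from the game):

```typescript
// Map a direction vector to equirectangular UV coordinates,
// mirroring the skybox fragment shader's longitude/latitude math.
function directionToUv(x: number, y: number, z: number): [number, number] {
  const len = Math.hypot(x, y, z);
  const lon = Math.atan2(x / len, z / len);                  // -PI..PI
  const lat = Math.asin(Math.max(-1, Math.min(1, y / len))); // -PI/2..PI/2
  return [lon / (2 * Math.PI) + 0.5, lat / Math.PI + 0.5];
}
```

Looking straight ahead down +Z lands in the center of the texture, while looking straight up converges to the top edge, exactly the distortion point described above.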
Further Refining Third Person Camera
I wrote a bit about making a good third-person camera in preparation for Charon Jr here. One of the games used as an example there was Spyro, which this game is fairly heavily based on. The one change I made compared to Spyro is that I never lerp the camera around behind the player. Spyro does this if you stop moving for a short period. While I don't think this is strictly terrible, moving the camera changes the direction the player moves, since you always move relative to the camera. Doing this felt like it could introduce frustration, not to mention potential motion sickness depending on the speed of movement.
With this in mind, I followed a slightly more modern approach: simply leave the camera where the player put it, but have it always look at the player and keep a constant distance. The camera's movement becomes much more predictable, and the camera still ends up behind the player anyway, since it is always looking at them.
Above you can see that while I change directions, the camera behavior stays consistent, but the camera still ends up behind me as I move past it and it continues looking at me, all without touching the camera controls. This behavior, combined with the ability to rotate the camera with the right analog stick when you do want or need that, makes movement and camera management intuitive.
In addition, the player angle, camera position, and camera look at position are all lerped, although fairly subtly, to avoid any jagged movement. This helps keep the camera stable when landing from jumps or walking up and down ramps.
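Put together, the camera logic boils down to clamping the camera to a fixed distance from the player and then subtly lerping toward that goal. A rough sketch of the idea (the names and the lerp factor are illustrative, not the game's actual code):

```typescript
type Vec3 = { x: number; y: number; z: number };

const lerp = (a: number, b: number, t: number) => a + (b - a) * t;
const lerpVec3 = (a: Vec3, b: Vec3, t: number): Vec3 => ({
  x: lerp(a.x, b.x, t), y: lerp(a.y, b.y, t), z: lerp(a.z, b.z, t),
});

// Keep the camera a fixed distance from the player without steering it:
// wherever the camera currently sits, pull or push it along the
// player->camera direction until it is exactly cameraDistance away,
// then lerp toward that goal to smooth out jumps and ramps.
function followTarget(camera: Vec3, player: Vec3, cameraDistance: number): Vec3 {
  const dx = camera.x - player.x;
  const dy = camera.y - player.y;
  const dz = camera.z - player.z;
  const dist = Math.hypot(dx, dy, dz) || 1;
  const scale = cameraDistance / dist;
  const goal = {
    x: player.x + dx * scale,
    y: player.y + dy * scale,
    z: player.z + dz * scale,
  };
  return lerpVec3(camera, goal, 0.15); // subtle smoothing factor
}
```

The camera's orientation is then just a look-at toward the (also lerped) player position, which is what naturally swings it behind the player as they run past.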
Audio
Last year marked my move to native Web Audio, albeit via the shell of a small synth engine that I heavily modified. That was nice because it made it easy to see how to create new instruments. However, it came at a size cost, and this year I replaced the synth with just a couple of helper functions, like an ADSR envelope function. While there is certainly a learning curve here, Web Audio works like a very powerful modular synth, so you can make a ton of amazing sounds. I think this year I nailed the cat sound!
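An ADSR helper in Web Audio is little more than a series of scheduled ramps on a `GainNode`'s gain parameter. A minimal sketch of what such a helper could look like (my own naming and parameter defaults, not the game's actual function), written against any object with the `AudioParam` scheduling methods:

```typescript
// Any object exposing the AudioParam scheduling methods we need.
type RampParam = {
  setValueAtTime(value: number, time: number): unknown;
  linearRampToValueAtTime(value: number, time: number): unknown;
};

// Schedule an attack-decay-sustain-release envelope on a parameter,
// e.g. applyAdsr(gain.gain, audioContext.currentTime, 0.05, 0.1, 0.4, 0.3).
function applyAdsr(
  param: RampParam,
  startTime: number,
  attack: number,
  decay: number,
  sustain: number,
  release: number,
  peak = 1,
  holdTime = 0.2,
): void {
  param.setValueAtTime(0, startTime);                                 // start silent
  param.linearRampToValueAtTime(peak, startTime + attack);            // attack to peak
  param.linearRampToValueAtTime(sustain, startTime + attack + decay); // decay to sustain
  const releaseStart = startTime + attack + decay + holdTime;
  param.setValueAtTime(sustain, releaseStart);                        // hold sustain
  param.linearRampToValueAtTime(0, releaseStart + release);           // release to silence
}
```

Because the helper only touches the scheduling methods, it works on frequency and filter parameters too, which is what makes sounds like the meow below so compact.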
Here's my cat "meow":
const osc = new OscillatorNode(audioContext, { frequency: 700, type: 'sawtooth' });
const gain = new GainNode(audioContext, { gain: 0 });
const wah = new BiquadFilterNode(audioContext, { type: 'lowpass', frequency: 2200, Q: 8 });
osc.frequency.setValueAtTime(700, audioContext.currentTime + 0.15);
osc.frequency.linearRampToValueAtTime(500, audioContext.currentTime + 0.7);
osc.frequency.linearRampToValueAtTime(900, audioContext.currentTime + 0.9);
gain.gain.linearRampToValueAtTime(0.6, audioContext.currentTime + 0.2);
gain.gain.linearRampToValueAtTime(0.4, audioContext.currentTime + 0.3);
gain.gain.linearRampToValueAtTime(0, audioContext.currentTime + 0.8);
// Sweep the "wah"
wah.frequency.linearRampToValueAtTime(2800, audioContext.currentTime + 0.3);
wah.frequency.linearRampToValueAtTime(40, audioContext.currentTime + 1);
wah.Q.linearRampToValueAtTime(22, audioContext.currentTime + 0.5);
osc.connect(gain);
gain.connect(wah);
wah.connect(audioContext.destination);
osc.start();
osc.stop(audioContext.currentTime + 1);
Note that while the above may look like a decent amount of code, it compresses very well. Since js13k is a competition where you submit your game zipped (and in this case roadrolled before that), this is a very size-efficient way of creating sounds.
Particles
This year I added a simple particle engine. The particles themselves are managed on the JavaScript side, so this system wouldn't scale to thousands of particles, but it worked well for the ~100 in the game. This kept rendering logic quite simple, and in fact each particle is rendered as a single point. This means you can't rotate the particles themselves, but you can otherwise place them however you'd like, and the rendering for this is quite small. The core rendering of the particles is simply this in the fragment shader:
void main() {
  vec4 texColor = texture(uSampler, vec3(gl_PointCoord, vDepth));
  float alpha = texColor.a * vLife;
  fragColor = vec4(texColor.rgb, alpha);
}
The vertex shader simply multiplies the positions by the view projection and passes along the size, life, and texture depth (index).
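The JavaScript-side bookkeeping for a system like this can be as simple as integrating velocity and ticking down life each frame before re-uploading the buffer. A rough sketch of that idea (my own structure, not the game's actual particle code):

```typescript
type Particle = {
  position: [number, number, number];
  velocity: [number, number, number];
  life: number;         // 1 at spawn, 0 at death; fed to the shader as vLife
  textureDepth: number; // index into the texture array
};

// Advance every particle by dt seconds and drop the dead ones.
// lifeDecay controls how fast particles fade; 1 means a one-second lifetime.
function updateParticles(particles: Particle[], dt: number, lifeDecay = 1): Particle[] {
  for (const p of particles) {
    p.position[0] += p.velocity[0] * dt;
    p.position[1] += p.velocity[1] * dt;
    p.position[2] += p.velocity[2] * dt;
    p.life -= lifeDecay * dt;
  }
  return particles.filter(p => p.life > 0);
}
```

At ~100 particles, rebuilding the vertex buffer from this array each frame is cheap, and point rendering keeps the per-particle data tiny.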
Summary
As always, a ton more work went into the game, especially modeling out the large world with code, but hopefully this covered the more interesting technical additions for this year.
If you haven't played the game yet, please give it a try here: https://js13kgames.com/2025/games/whiskers-witch-adventure