Three.js Static Shadow Baking: One Flag That Saves 20-40% GPU
I posted a perf writeup for my procedural-city Three.js game and listed “static shadow bake” as one of the techniques. @alightinastorm replied with a sharp question, then with a sharper clarification. The exchange sent me into the Three.js source for about an hour and left us both with a tighter mental model. It also taught me that the word I had been using was doing more work than it should.
This post is the honest walkthrough: the mechanism that genuinely saves 20 to 40% GPU on a mostly static scene, and the bit I was calling by the wrong name.
> when your game starts, the shadow texture has to be produced (unless you pre-bake it and load a texture file directly, in which case you don’t need any shadows)
>
> once it is produced, there is no need to do any “baking”, since it never changes to begin with
>
> so what you’re doing right now is you’re producing it on runtime (not baked) and just never update it again :D
That last line is the clean description. I’ll come back to it.
Key Takeaways
- The shadow map texture is allocated once. The shadow pass that fills it re-runs every frame by default, even when the output is identical.
- Two `if` guards in `WebGLShadowMap.render()` control this. Both default to `true`, so every `castShadow` mesh gets re-rendered from every shadow-casting light on every frame.
- For a fully static scene, that is pure waste. The output depth texture is byte-identical frame to frame.
- Fix: `light.shadow.autoUpdate = false`, then flip `light.shadow.needsUpdate = true` for exactly one frame whenever geometry changes. Three.js clears the flag for you.
- Measured on my procedural-city game: ~20 to 40% GPU per frame reclaimed, zero visual change.
- The word “baking” is imprecise for what’s happening here. The more honest name is runtime caching. I kept the title because that is what people search for, but the post says it straight.
- VSM tunes per-pass cost. This technique tunes pass frequency. Orthogonal, and they stack.
The texture vs the pass
Two things share the name “shadow map”, and keeping them apart is the whole article.
- The shadow map texture. A depth buffer that lives in VRAM. Allocated once, reused every frame. This is what VSM filters and what a “high-resolution shadow map” refers to.
- The shadow pass. A full render of the scene from the light’s point of view, whose output is that texture. By default, this pass re-runs every frame, overwriting the texture with byte-identical data.
The texture is a cache slot. The pass is the compute that fills it. If the scene is static, that compute is wasted on every frame after the first.
What Three.js actually does each frame
From `src/renderers/webgl/WebGLShadowMap.js`:

```js
this.render = function (lights, scene, camera) {
  if (_shadowMap.enabled === false) return;
  if (_shadowMap.autoUpdate === false && _shadowMap.needsUpdate === false) return;

  // ... full depth pass over every castShadow mesh, per light ...

  _shadowMap.needsUpdate = false;
};
```

That is the renderer-global gate. The per-light gate lives inside the loop:
```js
for (let i = 0; i < lights.length; i++) {
  const shadow = lights[i].shadow;
  if (shadow.autoUpdate === false && shadow.needsUpdate === false) continue;

  // ... render this light's depth map ...

  shadow.needsUpdate = false;
}
```

Two layers of gating. Both default to `true`. With defaults, every mesh with `castShadow = true` gets drawn from every shadow-casting light on every frame, forever. For a scene that never changes, every one of those draws produces the same depth values that were already in the texture from last frame.
The Three.js docs confirm it in plain language:
- `WebGLRenderer.shadowMap.autoUpdate`: “If set to true, shadow maps will be updated every time a scene is rendered. Default is true.”
- `LightShadow.autoUpdate`: “Enables automatic updates of the light’s shadow. If you do not require dynamic lighting/shadows, you may set this to false. Default is true.”
The library is telling you what the default costs.
The word “baking” is doing too much work
In game-dev tradition, baking means precomputing something offline and shipping the result as an asset. A baked lightmap is a texture file that lives in your build output. A baked shadow is a shadow painted into the albedo or stored as a separate channel, loaded at runtime like any other image.
What this optimization does is different. The first frame after the flag gets flipped, Three.js runs a normal depth pass and writes the output into the shadow map texture in VRAM. From that point on, every subsequent frame samples that texture and skips the pass. Nothing is persisted to disk. Nothing ships with the build. The texture is regenerated every time the page loads.
That is runtime caching, not baking. @alightinastorm put it better than I had:
> so what you’re doing right now is you’re producing it on runtime (not baked) and just never update it again
That sentence should have been in my head from the start. It wasn’t, because “bake” is the word everyone uses in WebGL threads when they mean “stop re-computing this”. I kept the word in the title because the search volume lives there, but I want the mechanism described straight in the post.
What I first read as a correction turned out to be the opposite: a contribution. The question nudged me to actually read `WebGLShadowMap.js`, and the clarification made me rename what I was doing in my own head. Both made this post better.
The trigger in my scene
Here is the actual function from my game’s `src/game/scene.ts`:

```ts
sunLight.shadow.autoUpdate = false;

export function bakeStaticShadow(): void {
  if (!shadowsEnabled) return;
  sunLight.shadow.needsUpdate = true;
}
```

It gets called after my city-generation build queues finish draining, both on initial boot and on every shuffle rebuild. The trigger maps cleanly to “inputs changed”: the geometry in the light’s frustum is different, so the cached depth texture is stale, so produce a fresh one.
I set the flag on the individual light, not on the renderer. If I later add a second shadow-casting light that genuinely needs per-frame updates (a flashlight, a muzzle flash, a moving vehicle headlight), it can keep its own defaults without me having to reorganize anything.
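That scoping can be sketched in a few lines. The `sunLight` and `flashlight` objects below are plain stand-ins for Three.js lights (so the snippet runs without the library); the helper name `freezeStaticShadows` is my own, not a Three.js API:

```javascript
// Freeze the shadow pass for static lights only; dynamic lights keep defaults.
function freezeStaticShadows(staticLights) {
  for (const light of staticLights) {
    light.shadow.autoUpdate = false; // stop the per-frame depth pass
    light.shadow.needsUpdate = true; // still fill the map once, on the next frame
  }
}

// Stand-ins for e.g. THREE.DirectionalLight / THREE.SpotLight instances.
const sunLight = { shadow: { autoUpdate: true, needsUpdate: false } };
const flashlight = { shadow: { autoUpdate: true, needsUpdate: false } };

freezeStaticShadows([sunLight]);
// flashlight is untouched: it keeps re-rendering its shadow map every frame.
```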
One note on the function name: `bakeStaticShadow` is a misnomer by the stricter definition above. I’m leaving the name in my codebase because it is expressive and the team vocabulary is already built around it. A future refactor might call it `invalidateShadowCache` or `markShadowDirty`, which would be more accurate.
Measured impact on a real scene
The scene: ~2,000 to 3,000 merged building chunks plus terrain, all static per seed, one directional sun with shadow casting enabled. The world gets regenerated on a shuffle action but otherwise does not move.
With the Three.js default of `autoUpdate = true`, the depth pass was burning 20 to 40% of per-frame GPU time redrawing an identical depth texture. I measured that with Chrome’s GPU timeline on both an M1 Mac and a mid-range gaming laptop (I’m glad `pnpm dev --host` exists!). The range comes from resolution and shadow-map size: the higher the framebuffer resolution and the bigger the shadow map, the larger the share of the frame the pass takes.
With the autoUpdate flag off, that cost is paid exactly once per seed. A shuffle rebuild costs one frame of shadow-pass time, amortized across the following few thousand frames the player spends in that world.
Zero visual change. The receiver meshes keep sampling the same depth texture they would have sampled anyway, because every frame’s output was identical to every other frame’s.
VSM is orthogonal
@alightinastorm’s first suggestion in the thread was Variance Shadow Mapping. Worth being clear about what it does and when to reach for it.
VSM and PCSS change the filtering math inside the depth pass. They tune per-pass cost and quality, mostly by letting you pre-filter the depth texture and get softer shadows without the per-sample blur cost of PCF. They do not change how often the pass runs.
- VSM with `autoUpdate = true`: cheaper per pass, still runs 60 times a second.
- Default shadow map with `autoUpdate = false`: same per-pass cost, runs exactly once.
For a static scene, the pass-frequency lever is bigger. “Expensive and rare” beats “cheap and constant” when the output is identical either way.
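The “expensive and rare beats cheap and constant” claim is easy to sanity-check with back-of-envelope numbers. The per-pass millisecond costs below are invented for illustration; only the frequencies come from the argument above:

```javascript
// Total shadow-pass GPU time over one minute at 60 fps,
// with hypothetical per-pass costs (numbers invented for illustration).
const FRAMES = 60 * 60; // one minute at 60 fps

// VSM, autoUpdate = true: cheaper pass, but it runs every frame.
const vsmPerPassMs = 1.0;
const vsmTotalMs = vsmPerPassMs * FRAMES; // 3600 ms of GPU time

// Default map, autoUpdate = false: pricier pass, but it runs once.
const defaultPerPassMs = 3.0;
const defaultTotalMs = defaultPerPassMs * 1; // 3 ms total
```

Even with the default pass costing 3x more per run, the frozen variant spends three orders of magnitude less GPU time over the minute.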
VSM is the right reach when the scene genuinely is dynamic and you need the per-frame pass to stay cheap. If your scene is mostly static with a single moving element, the two optimizations stack: flip `autoUpdate` off with VSM enabled, then only pay the cheaper-per-pass cost on the rare invalidation.
VRAM cost is also real, as @alightinastorm noted: a crisp shadow map is a big texture, and shadow-map resolution is the lever to reach for on low-end devices regardless of how often the pass runs.
When this is the wrong fix
There are three scene shapes where keeping the default is the right call:
- Moving casters. A swinging crane, a rotating windmill, a walking NPC. The depth texture genuinely changes each frame. Freezing it would pin shadows to a wrong pose.
- Moving lights. A day/night cycle with a rotating sun. Same reason. The depth texture depends on light direction, so any change invalidates it.
- Partially dynamic. Most of the scene is static but one character moves. Two options: split casters across two shadow-casting lights (one cached, one live), or keep `autoUpdate = true` and accept the cost. In the latter case, shadow-map resolution is your next lever.
When to re-invalidate
Anything that changes the geometry in the light’s frustum:
- Level load or scene swap (my case: shuffle rebuild).
- Player-driven construction or destruction.
- Large prop spawns or despawns.
- Light direction changes (time-of-day ticks, for instance).
In all of those, flip `needsUpdate = true` once, let Three.js do the rest on the next frame.
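For the time-of-day case specifically, one way to keep the pass rare is to quantize the sun angle and invalidate only when a whole tick boundary is crossed. A sketch under assumptions: `light` is a plain stand-in for a Three.js light, and `makeSunShadowInvalidator` with its one-degree tick is my own invention, not a library API:

```javascript
// Invalidate the cached shadow map only when the sun crosses a tick boundary,
// instead of on every tiny per-frame angle change.
function makeSunShadowInvalidator(light, tickDegrees = 1) {
  let lastTick = null;
  return function update(sunAngleDegrees) {
    const tick = Math.floor(sunAngleDegrees / tickDegrees);
    if (tick === lastTick) return false; // same tick: keep the cached depth map
    lastTick = tick;
    light.shadow.needsUpdate = true;     // one fresh depth pass on the next frame
    return true;
  };
}
```

With a 1-degree tick and a sun that moves 0.01 degrees per frame, that is roughly one shadow pass per 100 frames instead of 60 per second.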
For a broader look at the other techniques that matter for Three.js perf (intersection observers to delay scene loading, geometry merging, texture compression), see my earlier post on optimizing Three.js scenes with four practical techniques.
Summary
- The shadow map texture is one-time. The shadow pass is not.
- Default Three.js: full scene-from-light render every frame, per shadow-casting light.
- For a static scene, turn that off (`autoUpdate = false`) and mark dirty explicitly (`needsUpdate = true`) when geometry changes.
- What the WebGL community calls “baking” here is more accurately runtime caching. Nothing gets precomputed offline. Thanks to @alightinastorm for pushing on that.
- VSM tunes per-pass cost. This technique tunes pass frequency. They stack.
- My scene: 20 to 40% GPU reclaimed, zero visual cost, one function call per rebuild.
FAQ
Does `autoUpdate = false` cause any visible flicker on the first frame?
No. The first call to `render` after you set `needsUpdate = true` does the depth pass and writes the texture before any receiver samples it. From the user’s point of view, the first frame with shadows looks identical to every subsequent frame.
Should I set this on the renderer or on each light?
Prefer the per-light flag (`light.shadow.autoUpdate = false`). If you later add a light that genuinely needs per-frame shadow updates, you want its defaults to work without you having to remember that the renderer-global flag is off. Per-light scopes the optimization to where you intend it.
Is this the same as baking lightmaps in Blender?
No, and that’s the honest-terminology answer from earlier in the post. Lightmap baking computes lighting offline and stores the result in a texture file that ships with your build. This technique computes the shadow map at runtime, once per invalidation, and keeps it in VRAM. The output never leaves the browser session.
What about `receiveShadow` meshes?
Untouched. The sampling cost in the fragment shader is always paid, and it is cheap. Flipping `autoUpdate` does not change anything about how receivers read the depth texture. It only changes how often the texture gets rewritten.
Does this work with multiple shadow-casting lights?
Yes. Each light owns its own `LightShadow` instance with its own `autoUpdate` and `needsUpdate` flags. Freeze the static ones, leave the dynamic ones on defaults. They coexist.
How do I verify the pass is actually being skipped?
In Chrome, open the Performance tab, record a few seconds, and look at the GPU track. With defaults you will see repeated depth-pass entries each frame. With the flag off, those entries disappear after the first frame and only return on the frame where you flip `needsUpdate` back to `true`.