
@meshula
Last active February 27, 2026 20:04

There is absolutely an advantage to Gaussian splats over traditional geometry, in both offline and realtime rendering contexts. Instead of thinking of gsplats as some sort of alternative representation, we should think of them as a new form of importance sampling; but instead of prioritizing ray saliency versus illumination, we prioritize spatial density versus visual contribution.

I actually worked on this problem, more generally, at ILM with Pat Conran ("Retaining a Surface Detail", 2012).

Gsplats move us away from strict intersection checks and poly rasters to a probabilistic domain. A high-poly mesh can waste computational budget on surfaces that contribute little to the final image's fidelity. By treating the scene as a collection of anisotropic kernels, we are performing a high-dimensional optimization of where bits of information should live to minimize reconstruction error. If we prioritize perceptual saliency, we can think of gsplats as a sparse, cached representation of a pre-solved plenoptic function.
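The anisotropic kernel is the basic unit here. A minimal sketch of what a rasterizer evaluates per splat, assuming a 2D footprint with a full covariance matrix (names and values are illustrative):

```python
import numpy as np

def splat_weight(x, mean, cov):
    """Density of an anisotropic Gaussian kernel at point x.

    x, mean: 2D points; cov: 2x2 covariance encoding the splat's
    scale and orientation. The un-normalized weight is the splat's
    footprint falloff, replacing a hard hit/miss intersection test.
    """
    d = x - mean
    return float(np.exp(-0.5 * d @ np.linalg.inv(cov) @ d))

# An elongated splat: wide along x, thin along y.
cov = np.array([[4.0, 0.0],
                [0.0, 0.25]])
w_center = splat_weight(np.array([0.0, 0.0]), np.zeros(2), cov)  # 1.0 at the mean
w_along  = splat_weight(np.array([2.0, 0.0]), np.zeros(2), cov)  # gentle falloff along x
w_across = splat_weight(np.array([0.0, 2.0]), np.zeros(2), cov)  # sharp falloff across y
```

The covariance is where the "bits of information" live: a single stretched kernel can cover what would otherwise take many polygons.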

In offline rendering we brute-force path tracing through the plenoptic function, embodying every possible effect through complex scattering and material interactions. Gsplats allow us to bake a sampling of the plenoptic function as simplified radiance values within the splats themselves. We turn a stochastic simulation problem into a signal reconstruction problem.

In realtime rendering we can avoid overdraw and forward shading and go straight to fuzzy rasterization with "free" transparency and anti-aliasing. Obviously I say "free" in a guarded and nuanced way.
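The "free" transparency comes from front-to-back alpha compositing of depth-sorted splats; the soft Gaussian footprint does the anti-aliasing. A minimal sketch (the early-out threshold is an illustrative assumption):

```python
def composite(splats):
    """Front-to-back alpha compositing of depth-sorted splats.

    splats: list of (color, alpha) pairs, alpha already modulated
    by the Gaussian footprint, sorted near to far. Transmittance T
    decays multiplicatively, so transparency and soft edges fall
    out of the blend rather than needing separate passes.
    """
    color, T = 0.0, 1.0
    for c, a in splats:
        color += T * a * c
        T *= (1.0 - a)
        if T < 1e-4:  # pixel is effectively opaque; stop early
            break
    return color
```

The cost that is not free is the sort and the blend bandwidth, which is where the guarded nuance lives.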

If we view gsplats as a sampling strategy, the "splat" is the sample. In a standard Monte Carlo integrator, we sample directions to find light. With gsplats, we have already found the light and the geometry, and we are now sampling the density of information. We are prioritizing gradient magnitude (since splats are split in high-frequency regions) and view dependency (since the sample bakes the light field).
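The gradient-magnitude prioritization can be sketched as a densification rule: splats whose image-space positional gradients stay large are under-reconstructing their region. The thresholds below are illustrative assumptions, not canonical values:

```python
def densify(grad_mag, scale, grad_thresh=0.0002, scale_thresh=0.01):
    """Classify one splat during a densification pass.

    grad_mag: accumulated image-space positional gradient magnitude
    over recent optimization steps; scale: the splat's largest extent.
    High-gradient splats sit in high-frequency regions: large ones are
    split into smaller kernels, small ones are cloned toward the
    gradient. Quiescent splats are left coarse.
    """
    if grad_mag <= grad_thresh:
        return "keep"
    return "split" if scale > scale_thresh else "clone"
```

This is the importance-sampling loop: sample density follows reconstruction error, not surface area.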

Essentially, the idea is to prioritize information density over geometric fidelity. A piece of glass and a piece of paper are topologically equal, but plenoptically unequal. Both are one quad as geometry, but the glass carries far more information (refraction, internal scattering). So gsplats function as importance sampling on the plenoptic function, assigning few, simple splats to regions of low radiance variance, and dense splats where the plenoptic field is interesting.
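Budget allocation by plenoptic variance can be sketched directly; the variance measure and floor-based split here are assumptions for illustration:

```python
import numpy as np

def allocate_splats(region_variances, budget):
    """Assign a splat budget proportional to plenoptic variance.

    region_variances: per-region variance of outgoing radiance over
    viewpoints (high for glass, near zero for matte paper). Each
    region gets at least one splat; the remaining budget follows
    the variance, i.e. importance sampling the plenoptic field.
    """
    v = np.asarray(region_variances, dtype=float)
    base = np.ones_like(v, dtype=int)
    extra = budget - len(v)
    if v.sum() == 0:
        return base
    return base + np.floor(extra * v / v.sum()).astype(int)
```

With a glass region at variance 9.0 and a paper region at 1.0, a budget of 12 splats lands overwhelmingly on the glass.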

So this trades an intersector and acceleration structure plus integrator for a memory bandwidth bottleneck on the pre-importance-sampled plenoptic field.
