Context
This note records a small experiment I recently wrote to explore shader-like procedural graphics, implemented entirely on the CPU in Rust.
Code Link: https://github.com/BriceLucifer/shader
The goal was not performance, but to understand:
- how fragment-shader style math maps to plain code
- how time, space, and iteration interact visually
- how much of “shader thinking” is independent of GPU APIs
The final result is a short rendered video:
High-level Idea
The program emulates a fragment shader loop:
- each pixel corresponds to a ray / sample direction
- a procedural function iteratively transforms a point in space
- color is accumulated along a pseudo ray-marching path
- time (t) is used to animate rotation and deformation
Instead of running on the GPU, everything is computed on the CPU and written out as PPM frames, which are later combined into a video using ffmpeg.
Coordinate Setup
Each pixel (x, y) is mapped into a centered coordinate system:
- normalized to [-1, 1]
- aspect-ratio corrected
- embedded into a pseudo-3D vector
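A minimal sketch of this mapping in Rust (the resolution constants and the z component used as a focal length are assumptions, not the repo's exact values):

```rust
// Map a pixel (x, y) to a centered, aspect-corrected direction vector,
// the way fragCoord is usually normalized in a fragment shader.
const WIDTH: usize = 640;
const HEIGHT: usize = 360;

fn pixel_to_dir(x: usize, y: usize) -> [f32; 3] {
    // Normalize to [-1, 1], flipping y so +y points up.
    let u = (2.0 * x as f32 / WIDTH as f32) - 1.0;
    let v = 1.0 - (2.0 * y as f32 / HEIGHT as f32);
    // Correct for aspect ratio so the image is not stretched.
    let aspect = WIDTH as f32 / HEIGHT as f32;
    // Embed into a pseudo-3D direction; z = 1.0 acts as a focal length.
    [u * aspect, v, 1.0]
}
```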
This mirrors how fragment coordinates (fragCoord) are usually handled in shaders.
The Whirl Shader Loop
The core logic lives in whirl_shader, which repeatedly:
- projects the fragment direction into space
- applies a time-dependent swirl on the XY plane
- applies trigonometric deformation
- estimates a step distance
- accumulates color along the path
Conceptually, this behaves like a very rough ray-marching loop:
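A hedged sketch of what such a loop can look like; the constants, deformation terms, and color weights here are illustrative assumptions, not the repo's exact code:

```rust
// Rough ray-marching-style loop: march a point along `dir`, swirl it
// over time on the XY plane, deform it with trigonometry, and
// accumulate color along the way.
fn whirl_shader(dir: [f32; 3], t: f32) -> [f32; 3] {
    let mut color = [0.0f32; 3];
    let mut dist = 0.0f32; // accumulated travel along the pseudo ray
    for i in 0..64 {
        // Project the fragment direction to a point in space.
        let mut p = [dir[0] * dist, dir[1] * dist, dir[2] * dist];
        // Time-dependent swirl: rotate the XY plane by an angle that
        // depends on t and on depth.
        let a = t + 0.1 * p[2];
        let (s, c) = a.sin_cos();
        let (px, py) = (p[0] * c - p[1] * s, p[0] * s + p[1] * c);
        p[0] = px;
        p[1] = py;
        // Trigonometric deformation instead of a true SDF; the 0.02
        // floor keeps the march moving forward.
        let d = 0.02 + (p[0].sin() * p[1].cos() + p[2].sin()).abs() * 0.05;
        dist += d;
        // Accumulate color weighted by step size, depth, and iteration.
        color[0] += d * 0.10;
        color[1] += d * 0.05 * (p[2] * 0.5).cos().abs();
        color[2] += d * 0.02 * (i as f32 / 64.0);
    }
    color
}
```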
There is no strict signed-distance function here — it is closer to procedural exploration than geometric correctness.
Color Accumulation and Tone Mapping
Color is accumulated incrementally based on the depth (p.z) and iteration distance.
After the loop, a simple tone mapping is applied:
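One common curve with exactly this property is Reinhard-style mapping, x / (1 + x); the repo's exact curve and gamma value may differ, so treat this as an assumed sketch:

```rust
// Reinhard-style tone mapping: compresses [0, ∞) smoothly into [0, 1)
// with no hard clipping.
fn tone_map(x: f32) -> f32 {
    x / (1.0 + x)
}

// Gamma-correct (assumed gamma 2.2) and quantize to u8.
fn to_u8(x: f32) -> u8 {
    let g = tone_map(x.max(0.0)).powf(1.0 / 2.2);
    (g * 255.0).round() as u8
}
```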
This compresses high dynamic range values into [0,1] smoothly, without abrupt clipping.
A simple gamma correction is then applied before converting to u8.
Parallel Rendering
Each frame is rendered using Rayon:
- pixels are independent of one another
- the workload is embarrassingly parallel
- speedup comes essentially for free, without changing the shading logic
This reinforces how naturally shader workloads map to data-parallel execution.
Frame Output Pipeline
The rendering pipeline is deliberately simple:
- Render frames as PPM (P6) images
- Store them in frames/
- Use ffmpeg to assemble a video
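Writing a binary P6 frame takes only a few lines; this sketch assumes the header format `P6\n<width> <height>\n255\n` followed by raw RGB bytes, which is the standard Netpbm layout:

```rust
use std::fs::File;
use std::io::{self, Write};

// Write one RGB frame as a binary PPM (P6) file.
fn write_ppm(path: &str, width: usize, height: usize, rgb: &[u8]) -> io::Result<()> {
    assert_eq!(rgb.len(), width * height * 3);
    let mut f = File::create(path)?;
    // Header: magic number, dimensions, and maximum channel value.
    write!(f, "P6\n{} {}\n255\n", width, height)?;
    // Raw pixel data, row-major, 3 bytes per pixel.
    f.write_all(rgb)
}
```

The frames can then be assembled with an ffmpeg invocation along the lines of `ffmpeg -framerate 30 -i frames/frame_%04d.ppm output.mp4` (the exact frame-naming pattern and flags here are an assumption).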
This keeps the experiment focused on math and structure, not tooling.
Observations
- Shader-style math is largely API-independent
- Many visual effects emerge purely from iteration + trigonometry
- CPU implementations are slow, but extremely transparent for learning
- Writing this in Rust made vector operations and ownership explicit
Next Steps (Ideas)
- Move the same logic to a real GPU fragment shader
- Explore signed-distance–based ray marching
- Experiment with fewer iterations and smarter step estimation
- Compare CPU vs GPU mental models directly
This note is intentionally informal and exploratory. It serves as a record of understanding rather than a polished tutorial.