I am a designer. I don’t write shaders. Or at least, I didn’t.
But I kept seeing these dithered images everywhere: that crunchy, grainy texture that feels both old and new. And I wanted to make one myself. Not by running images through some filter, but in real time, on 3D models, with controls I could adjust.
My first experiment was actually for Lummi, where I used v0 prototypes of an effects tool. It was hacky and limited, but it worked so well that I got hooked.
So I started building Efecto. What started as a quick experiment kept expanding as I read about different algorithms and became curious about how they worked.
I couldn’t have done any of this without the work others have shared. Shadertoy was where I learned by reading other people’s code. The Book of Shaders by Patricio Gonzalez Vivo taught me the basics. And libraries like postprocessing and React Three Fiber gave me something to build on.
This is what I encountered along the way.

Starting with dithering
Dithering is a technique that creates the illusion of more colors than you actually have. If you only have black and white pixels, you cannot display gray. But when you arrange black and white pixels in a pattern, your brain puts them together and perceives them as gray.
The technique comes from newspapers. Before digital printing, printers had to figure out how to reproduce photos using only black ink on white paper. Their solution was halftoning: small dots of varying sizes that the eye blends into continuous shades.

The digital version of this started in 1976 with a paper by Robert Floyd and Louis Steinberg. Their insight: when you round a pixel to the nearest available color, you get an “error” (the difference between what you wanted and what you got). Instead of throwing that error away, you can propagate it to neighboring pixels. This creates organic patterns instead of harsh banding.
Here’s the basic idea in code:
// For each pixel...
const [r, g, b] = getPixel(x, y)
// Find the nearest color in our palette
const [qR, qG, qB] = findNearestColor(r, g, b, palette)
// Calculate the error
const errR = r - qR
const errG = g - qG
const errB = b - qB
// Spread that error to neighbors (Floyd-Steinberg weights)
addError(x + 1, y, errR * 7/16, errG * 7/16, errB * 7/16)
addError(x - 1, y + 1, errR * 3/16, errG * 3/16, errB * 3/16)
addError(x, y + 1, errR * 5/16, errG * 5/16, errB * 5/16)
addError(x + 1, y + 1, errR * 1/16, errG * 1/16, errB * 1/16)

The weights (7/16, 3/16, 5/16, 1/16) add up to 1, so 100% of the error is redistributed. The asymmetric distribution prevents visible diagonal patterns.
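To make the idea concrete, here is a minimal, runnable sketch of the same loop (my own illustration, not Efecto’s source): it dithers a single-channel image to pure black or white and spreads the error with the classic weights.

```javascript
// Floyd–Steinberg on a single-channel image, quantizing to black (0) or white (255).
// `pixels` is a flat array of luminance values; width/height describe the grid.
function ditherFloydSteinberg(pixels, width, height) {
  const out = Float64Array.from(pixels);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const i = y * width + x;
      const old = out[i];
      const quantized = old < 128 ? 0 : 255; // nearest of the two available colors
      out[i] = quantized;
      const err = old - quantized;
      // Spread the error with the 7/16, 3/16, 5/16, 1/16 weights.
      const spread = (dx, dy, w) => {
        const nx = x + dx, ny = y + dy;
        if (nx >= 0 && nx < width && ny < height) out[ny * width + nx] += err * w;
      };
      spread(1, 0, 7 / 16);
      spread(-1, 1, 3 / 16);
      spread(0, 1, 5 / 16);
      spread(1, 1, 1 / 16);
    }
  }
  return out;
}

// A flat mid-gray 4×4 image: error diffusion turns it into a checkerboard,
// half black and half white, which the eye reads as gray.
const gray = new Float64Array(16).fill(128);
const result = ditherFloydSteinberg(gray, 4, 4);
const whites = result.filter(v => v === 255).length;
console.log(whites); // 8
```

The checkerboard output on flat gray is the textbook sanity check: a 50% mix of black and white averages back to the original tone.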
Try dithering with the original Floyd Steinberg error diffusion algorithm from 1976.
Other algorithms
Once I got Floyd-Steinberg working, I wanted to try others. Each algorithm distributes errors differently, creating different textures:
Atkinson (1984) was created by Bill Atkinson for the original Macintosh, which could only display black or white. His trick: distribute only 75% of the error. This creates higher-contrast images with a slightly “crispy” quality.
const atkinson = {
  kernel: [
    [1, 0, 1],  // right
    [2, 0, 1],  // two right
    [-1, 1, 1], // bottom-left
    [0, 1, 1],  // bottom
    [1, 1, 1],  // bottom-right
    [0, 2, 1],  // two below
  ],
  divisor: 8, // 6 neighbors × 1 = 6, but divisor is 8
}

Notice how only 6/8 of the error is distributed. That “lost” 25% gives Atkinson its distinctive appearance.
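A descriptor like this can drive a generic error-diffusion pass. Here is a hypothetical sketch of such a pass (not Efecto’s actual implementation), where each kernel entry is `[dx, dy, weight]` scaled by `1/divisor`:

```javascript
// Generic error diffusion driven by a kernel descriptor.
// Each kernel entry is [dx, dy, weight]; weights are scaled by 1/divisor.
function ditherWithKernel(pixels, width, height, { kernel, divisor }) {
  const out = Float64Array.from(pixels);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const i = y * width + x;
      const quantized = out[i] < 128 ? 0 : 255;
      const err = out[i] - quantized;
      out[i] = quantized;
      for (const [dx, dy, w] of kernel) {
        const nx = x + dx, ny = y + dy;
        if (nx >= 0 && nx < width && ny < height) {
          out[ny * width + nx] += (err * w) / divisor;
        }
      }
    }
  }
  return out;
}

const atkinson = {
  kernel: [[1, 0, 1], [2, 0, 1], [-1, 1, 1], [0, 1, 1], [1, 1, 1], [0, 2, 1]],
  divisor: 8, // only 6/8 of the error is redistributed
};

// Near-white input: because 25% of the error is discarded, highlights stay
// clean and the whole image quantizes to white.
const light = new Float64Array(16).fill(240);
const allWhite = ditherWithKernel(light, 4, 4, atkinson).every(v => v === 255);
console.log(allWhite); // true
```

That highlight behavior is exactly the “crispy” quality described above: discarding part of the error keeps bright and dark regions from filling in with stray dots.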
Try dithering with Bill Atkinson’s algorithm from the original Macintosh.
Jarvis–Judice–Ninke spreads the error to 12 neighbors over 3 rows. It is slower, but produces smoother gradients:
Try the Jarvis–Judice–Ninke 12-neighbor algorithm for ultra-smooth gradients.
In the end I implemented 8 different algorithms. Each has its own character. Which one looks best depends on the image.
Add color
Dithering in two colors (black and white) is classic, but multi-color palettes offer more options. Efecto includes 31 preset palettes organized into categories: classic terminal colors, warm tones, cool tones, neon/synthwave, earth tones, and monochrome. You can also create custom palettes with 2-6 colors.
The Game Boy had four shades of green. That’s it. But artists made memorable games within those limitations. The limited palette forced creativity.
Try the classic Game Boy 4-color palette from 1989.
The palette you choose completely changes the mood. Warm palettes feel nostalgic, neon feels cyberpunk, monochrome feels like old print.

Efecto maps colors using luminance. First calculate the brightness of each pixel:
const luminance = 0.299 * r + 0.587 * g + 0.114 * b

Then map that brightness (normalized to 0–1) to a palette index. Palettes are ordered from dark to light, so dark pixels pick colors from the start of the palette and bright pixels from the end:

const index = Math.floor(luminance * palette.length)
const color = palette[Math.min(index, palette.length - 1)]

This means palette order matters. Flip the colors and you get an inverted image.
There is also a pixelation control (block size 1–10) that processes the image in chunks rather than individual pixels. Higher values give you a chunky, low-resolution look. The error diffusion still works, but spreads between block centers instead of individual pixels.
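As a sketch of that block logic (my own illustration, with assumed helper names), here is one way to map any pixel coordinate to the center of its block, so every pixel in a block samples and quantizes the same value:

```javascript
// With block size > 1, the dither operates on block centers instead of
// individual pixels; every pixel in a block then shares one quantized color.
function blockCenter(x, y, blockSize) {
  const bx = Math.floor(x / blockSize);
  const by = Math.floor(y / blockSize);
  return {
    x: bx * blockSize + Math.floor(blockSize / 2),
    y: by * blockSize + Math.floor(blockSize / 2),
  };
}

// All pixels of the 4×4 block covering (4..7, 8..11) sample the same center.
console.log(blockCenter(4, 8, 4));  // { x: 6, y: 10 }
console.log(blockCenter(7, 11, 4)); // { x: 6, y: 10 }
```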
Try the Synthwave palette with pink, purple and cyan gradients.
The bloom trick
I wanted to simulate what CRT monitors looked like, and bloom turned out to be the key. Dithering creates high-contrast pixel patterns. Bloom makes bright pixels bleed into dark ones, softening hard edges while preserving the dithered texture.
Apply a green monochrome look with a CRT-style glow and vibrant bloom.
Then I wanted ASCII
After getting dithering working, I became curious about ASCII art. Same basic idea (representing brightness with patterns), but using text characters instead of pixel arrangements.

The challenge: shaders have no fonts. You can’t just call drawText(). Everything has to be math.
The solution is to draw characters procedurally on a 5×7 pixel grid. Each character becomes a function that returns 1 (filled) or 0 (empty) for each position:
// A colon: two dots vertically centered
if (grid.x == 2.0 && (grid.y == 2.0 || grid.y == 4.0)) {
return 1.0;
}
return 0.0;

// An asterisk: center + arms + diagonals
bool center = (grid.x == 2.0 && grid.y == 3.0);
bool vert = (grid.x == 2.0 && (grid.y >= 2.0 && grid.y <= 4.0));
bool horiz = (grid.y == 3.0 && (grid.x >= 1.0 && grid.x <= 3.0));
bool diag1 = ((grid.x == 1.0 && grid.y == 2.0) || (grid.x == 3.0 && grid.y == 4.0));
bool diag2 = ((grid.x == 1.0 && grid.y == 4.0) || (grid.x == 3.0 && grid.y == 2.0));
return (center || vert || horiz || diag1 || diag2) ? 1.0 : 0.0;

The shader divides the screen into a grid of cells. For each cell, it:
- Samples the color at the cell center
- Calculates its brightness
- Chooses a character based on that brightness
Darker areas get denser characters (@, #, 8); lighter areas get sparser ones (., :, space).
float brightness = dot(cellColor.rgb, vec3(0.299, 0.587, 0.114));

Those numbers (0.299, 0.587, 0.114) come from how human eyes perceive color. We are most sensitive to green, then red, then blue. Weighting the channels this way produces perceptually accurate grayscale.
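The per-cell character choice then reduces to indexing a ramp by brightness. Here is a hypothetical sketch in JavaScript; the ramp below is my own assumption, ordered dense to sparse, not Efecto’s actual character set:

```javascript
// Pick a glyph for a cell: brightness (0–1) indexes into a ramp ordered
// dense → sparse, so dark cells get heavy characters and bright cells get
// light ones. The ramp is an assumed example.
const ramp = ['@', '#', '8', '+', ':', '.', ' '];

function charForBrightness(brightness, ramp) {
  const index = Math.floor(brightness * ramp.length);
  return ramp[Math.min(index, ramp.length - 1)];
}

console.log(charForBrightness(0.05, ramp)); // '@' (dark cell → dense glyph)
console.log(charForBrightness(0.95, ramp)); // ' ' (bright cell → empty)
```

In the shader the same lookup happens in GLSL, selecting which procedural glyph function to evaluate for the cell.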

Efecto has 8 different ASCII styles, each with its own character set and layout:

CRT effects
Both dithering and ASCII evoke early computing, so I added some post-effects to complete the look:
Scan lines are horizontal dark bands simulating CRT phosphor rows.
Screen curvature mimics the curved glass of old monitors:
vec2 centered = uv * 2.0 - 1.0;
float dist = dot(centered, centered);
centered *= 1.0 + curvature * dist;
uv = centered * 0.5 + 0.5;

This pushes pixels outward from the center, more so toward the edges. Simple math, convincing effect.
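For readers who don’t write GLSL, the same curvature math can be sketched in JavaScript (an illustration only; in Efecto this runs per-fragment on the GPU):

```javascript
// Barrel-distort a UV coordinate, mimicking the GLSL above.
// u and v are in 0–1; curvature 0 leaves the image flat.
function curveUV(u, v, curvature) {
  let cx = u * 2 - 1; // recenter so (0, 0) is the screen center
  let cy = v * 2 - 1;
  const dist = cx * cx + cy * cy; // squared distance from center
  cx *= 1 + curvature * dist;     // push outward, more at the edges
  cy *= 1 + curvature * dist;
  return [cx * 0.5 + 0.5, cy * 0.5 + 0.5]; // back to 0–1 space
}

console.log(curveUV(0.5, 0.5, 0.2)); // [0.5, 0.5] — the center never moves
console.log(curveUV(1.0, 1.0, 0.2)); // corners are pushed outward past 1
```

Because the displacement scales with the squared distance from the center, the middle of the screen stays put while the corners bulge, which is exactly what curved CRT glass does.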
Chromatic aberration separates RGB channels somewhat, like cheap optics.
Vignette darkens the edges and draws attention to the center.
Combined with a green phosphor or amber palette, the whole thing feels like an old terminal.

How Efecto is built
Dithering runs on the CPU. Error diffusion is inherently sequential, as each pixel depends on previously processed pixels. The actual dithering algorithm runs in JavaScript and processes pixel data in memory. WebGPU handles the texture management and bloom effect (which is GPU accelerated). When WebGPU is not available (such as in Firefox), there is a Canvas 2D fallback.
ASCII works as a WebGL shader. Unlike dithering, each cell is independent so it can run entirely on the GPU. The shader is built with Three.js and the postprocessing library. Characters are procedurally generated in GLSL, not from bitmap fonts.
Some effects are heavy. Complex shaders with a lot of post-processing can significantly reduce frame rates, especially on older hardware. This is a trade-off between visual complexity and performance.
Try it
Here are some basic principles:
What I learned
Historical algorithms endure. Floyd-Steinberg from 1976 is still one of the best. The original papers are worth reading.
Constraints force creativity. Working within technical limitations forces different solutions. Shaders cannot use fonts, so characters must be drawn with math. Error diffusion cannot easily be parallelized, so it runs on the CPU while bloom runs on the GPU.
The details are important. These luminance weights (0.299, 0.587, 0.114) exist because someone studied how human vision works. The asymmetric error distribution in Floyd-Steinberg exists because someone noticed diagonal artifacts. These small decisions reinforce each other.
If you want to dig deeper:
Paper:
Learning resources:
Libraries I’ve built on:
And if you build something with these techniques, I’d like to see it.
#Efecto #Building #RealTime #ASCII #Dithering #Effects #WebGL #Shaders #Codrops


