rndr realm

Mar 23, 2026

Intro

I've always been fascinated by displacement effects. There's something satisfying about seeing an image warp and distort in response to your cursor, like you're physically pushing pixels around on the screen.

The effect I'm going to show you combines a few different techniques: a canvas-based mouse trail that tracks your cursor movement, a custom shader that reads that trail as a displacement map, and React Three Fiber to tie it all together. The result is an image that ripples and distorts wherever you move your mouse, creating this organic, fluid interaction.

The whole thing runs at 60fps, works on touch devices, and the code is surprisingly straightforward once you break it down. Let's walk through how it works.

Mouse trail displacement effect in action

The Core Idea

Before we dive into code, here's what's happening under the hood:

  1. A hidden canvas tracks your mouse/touch position and draws a trail that fades over time
  2. The trail is converted into a WebGL texture that updates every frame
  3. A custom shader reads the brightness of the trail and uses it to displace the image texture
  4. React Three Fiber handles the WebGL setup and rendering

The clever part is using the canvas as a data source. Instead of tracking mouse positions in an array or doing displacement calculations in JavaScript, we just paint to a canvas and let the GPU read it directly. The canvas API handles the drawing and fading, and the shader does the displacement. It's fast and the code stays clean.

Step 1: The Trail Canvas Hook

The first piece we need is a system to track the mouse cursor and paint its trail onto a canvas. This canvas will become our displacement map. I built this as a useTrailCanvas function that handles everything: canvas creation, event listeners, and the animation loop that fades the trail over time. (Despite the use prefix, it isn't a true React hook; it calls no hooks internally, which is why we can safely call it inside a useEffect later.)

Setting Up the Canvas

The function starts by creating an offscreen canvas that matches your screen's aspect ratio. We don't need full resolution—512px height is plenty for smooth displacement:

useTrailCanvas.ts

export function useTrailCanvas(props) {
  const {
    height = 512,
    showCanvas = false,
    image,
    radius = height * 0.125,
    softness = 0.5,
    onResize = () => {},
  } = props || {};

  const sizes = {
    screenWidth: window.innerWidth,
    screenHeight: window.innerHeight,
  };
  const aspectRatio = sizes.screenWidth / sizes.screenHeight;
  const cursorPosition = { x: 9999, y: 9999 };
  const canvasPosition = { x: 9999, y: 9999 };
  const previousCursorPosition = { x: 9999, y: 9999 };

  const canvas = document.createElement("canvas");
  canvas.width = height * aspectRatio;
  canvas.height = height;

  const ctx = canvas.getContext("2d");
  ctx.fillStyle = "rgba(0, 0, 0, 1)";
  ctx.fillRect(0, 0, canvas.width, canvas.height);
}

The canvas starts completely black. Black means "no displacement", and white will mean "maximum displacement". We'll paint white circles where the cursor moves. The width is calculated from the aspect ratio so the displacement map matches your screen proportions.

Tracking Mouse and Touch Events

Next, we need to track where the user's cursor is. We normalize the coordinates to [0, 1] range so they work at any screen size:

useTrailCanvas.ts

function handleMouseMove(ev) {
  const x = ev.clientX / sizes.screenWidth;
  const y = 1 - ev.clientY / sizes.screenHeight;

  cursorPosition.x = x;
  cursorPosition.y = y;
  canvasPosition.x = x * canvas.width;
  canvasPosition.y = (1 - y) * canvas.height;
}

function handleTouchMove(ev) {
  ev.preventDefault();

  if (ev.touches.length > 0) {
    const touch = ev.touches[0];
    const x = touch.clientX / sizes.screenWidth;
    const y = 1 - touch.clientY / sizes.screenHeight;

    cursorPosition.x = x;
    cursorPosition.y = y;
    canvasPosition.x = x * canvas.width;
    canvasPosition.y = (1 - y) * canvas.height;
  }
}

if (typeof window !== "undefined") {
  window.addEventListener("mousemove", handleMouseMove);
  window.addEventListener("touchmove", handleTouchMove, { passive: false });
  window.addEventListener("touchstart", handleTouchStart, { passive: false });
  window.addEventListener("resize", handleResize);
}

We track two sets of coordinates: the normalized cursorPosition for WebGL and the pixel-based canvasPosition for drawing. The { passive: false } option matters for touch events: without it, the browser ignores our preventDefault() call and the page scrolls. (handleTouchStart and handleResize follow the same pattern and are omitted here.)
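To make the mapping concrete, here's the same normalization written as pure functions (hypothetical helper names, not part of the hook) so it can be sanity-checked without a browser:

```typescript
interface Point { x: number; y: number }

// Normalize client coordinates to [0, 1], flipping Y so 0 is at the
// bottom (WebGL convention). The canvas keeps Y = 0 at the top, so the
// conversion back to pixel space flips it again.
function toNormalized(clientX: number, clientY: number, screenW: number, screenH: number): Point {
  return { x: clientX / screenW, y: 1 - clientY / screenH };
}

function toCanvasSpace(p: Point, canvasW: number, canvasH: number): Point {
  return { x: p.x * canvasW, y: (1 - p.y) * canvasH };
}

// The center of a 1920×1080 screen lands at the center of a 910×512 canvas.
const n = toNormalized(960, 540, 1920, 1080);
console.log(n); // { x: 0.5, y: 0.5 }
console.log(toCanvasSpace(n, 910, 512)); // { x: 455, y: 256 }
```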

The Animation Loop: Fade and Draw

Here's where the magic happens. Every frame, we do two things:

  1. Fade the existing trail by painting a semi-transparent black rectangle over everything
  2. Draw a new white circle at the current cursor position

useTrailCanvas.ts

let animationFrameId = null;
let lastTime = performance.now();

function animate(currentTime) {
  const delta = (currentTime - lastTime) / 1000;
  lastTime = currentTime;

  // Fade pass: darken the whole canvas a little each frame
  ctx.globalCompositeOperation = "source-over";
  ctx.fillStyle = "#000";
  ctx.globalAlpha = delta * 3;
  ctx.fillRect(0, 0, canvas.width, canvas.height);

  const distance = Math.hypot(
    canvasPosition.x - previousCursorPosition.x,
    canvasPosition.y - previousCursorPosition.y,
  );

  previousCursorPosition.x = canvasPosition.x;
  previousCursorPosition.y = canvasPosition.y;

  // Draw pass: circle brightness scales with cursor speed
  ctx.globalCompositeOperation = "lighten";
  ctx.globalAlpha = Math.min(1, distance * 0.1);

  const innerRadius = radius * (1 - softness);
  const gradient = ctx.createRadialGradient(
    canvasPosition.x,
    canvasPosition.y,
    innerRadius,
    canvasPosition.x,
    canvasPosition.y,
    radius,
  );
  gradient.addColorStop(0, "rgba(255, 255, 255, 1)");
  gradient.addColorStop(1, "rgba(255, 255, 255, 0)");

  ctx.beginPath();
  ctx.arc(canvasPosition.x, canvasPosition.y, radius, 0, Math.PI * 2);
  ctx.fillStyle = gradient;
  ctx.fill();

  animationFrameId = requestAnimationFrame(animate);
}

animationFrameId = requestAnimationFrame(animate);

Understanding globalCompositeOperation

This is a canvas property that controls how new pixels interact with existing ones. Think of it like Photoshop blend modes.

  • "source-over" (default): The new shape is drawn on top of existing content. We use this for the fade pass because we want the semi-transparent black rectangle to darken everything underneath it. Each frame, this gradually fades the trail to black.

  • "lighten": Only pixels that are brighter than what's already there get drawn. We use this for drawing the white circles because it means our trails accumulate and build up brightness where they overlap. If we used "source-over" instead, new circles would replace old ones instead of adding to them.

The trail opacity is tied to cursor speed. Move fast → brighter trail → stronger displacement. Move slow → fainter trail → subtle displacement.

You can experiment with different blend modes like "screen", "multiply", or "difference" to create different trail effects. The globalCompositeOperation page on MDN documents all of them.
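The fade and opacity math is easy to reason about on paper. A rough model (my numbers, not measured from the demo): painting black at alpha a over a pixel of brightness b leaves b * (1 - a), so the trail decays exponentially frame over frame:

```typescript
// Model of the fade pass: each frame multiplies brightness by (1 - a),
// where a = delta * 3 is the per-frame fade alpha from the loop above.
function fadedBrightness(b: number, delta: number, frames: number): number {
  const a = Math.min(1, delta * 3);
  return b * Math.pow(1 - a, frames);
}

// Speed-to-opacity mapping used for the draw pass.
function trailOpacity(distancePx: number): number {
  return Math.min(1, distancePx * 0.1);
}

// At 60fps (delta ≈ 1/60), a ≈ 0.05, so a full-white pixel drops below
// half brightness after ~14 frames, roughly a quarter second.
const after14 = fadedBrightness(1, 1 / 60, 14);
console.log(after14 < 0.5 && after14 > 0.4); // true

console.log(trailOpacity(5));  // 0.5 (slow movement, faint trail)
console.log(trailOpacity(20)); // 1 (fast movement, full brightness)
```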

Using Custom Images as Brushes

While the radial gradient creates a nice soft circle, you can also use custom images as "brushes" for more interesting displacement patterns. The implementation already supports this through the image prop:

useTrailCanvas.ts

// Initialize with an image instead of gradient
const { getTexture, dispose } = useTrailCanvas({
  showCanvas: true,
  image: "https://example.com/brush.png", // Or pass an Image element
  onResize: () => {
    /* ... */
  },
});

In the animation loop, when an image is provided, it draws the image instead of the gradient:

useTrailCanvas.ts

if (loadedImage) {
  ctx.drawImage(
    loadedImage,
    canvasPosition.x - radius,
    canvasPosition.y - radius,
    radius * 2,
    radius * 2,
  );
} else {
  // Fall back to gradient circle
  const innerRadius = radius * (1 - softness);
  const gradient = ctx.createRadialGradient(/* ... */);
  // ... gradient drawing code
}

You could use noise textures, shapes, symbols, or even emoji as your brush. The brighter parts of the image will create stronger displacement. This opens up creative possibilities:

  • Noise texture: Creates organic, chaotic displacement
  • Directional arrow: Displacement flows in specific directions
  • Text/logo: Brand-specific displacement patterns
  • Particle textures: Splatter or explosion effects

Complete Implementation

Here's the full function with cleanup and resize support:

useTrailCanvas.ts

export function useTrailCanvas(props) {
  const {
    height = 512,
    showCanvas = false,
    image,
    radius = height * 0.125,
    softness = 0.5,
    onResize = () => {},
  } = props || {};

  const sizes = {
    screenWidth: window.innerWidth,
    screenHeight: window.innerHeight,
  };
  const aspectRatio = sizes.screenWidth / sizes.screenHeight;
  const cursorPosition = { x: 9999, y: 9999 };
  const canvasPosition = { x: 9999, y: 9999 };
  const previousCursorPosition = { x: 9999, y: 9999 };

  const canvas = document.createElement("canvas");
  canvas.width = height * aspectRatio;
  canvas.height = height;

  const ctx = canvas.getContext("2d");
  ctx.fillStyle = "rgba(0, 0, 0, 1)";
  ctx.fillRect(0, 0, canvas.width, canvas.height);

  // Event handlers for mouse/touch...
  // Animation loop...

  function dispose() {
    if (animationFrameId !== null) {
      cancelAnimationFrame(animationFrameId);
    }
    if (typeof window !== "undefined") {
      window.removeEventListener("mousemove", handleMouseMove);
      window.removeEventListener("touchmove", handleTouchMove);
      window.removeEventListener("touchstart", handleTouchStart);
      window.removeEventListener("resize", handleResize);
    }
    if (canvas.parentElement) {
      canvas.parentElement.removeChild(canvas);
    }
    ctx.clearRect(0, 0, canvas.width, canvas.height);
  }

  return {
    getTexture: () => canvas,
    dispose,
  };
}

The function returns an object with getTexture() to access the canvas and dispose() for cleanup when you're done.

Step 2: Converting Canvas to WebGL Texture

Now we need to get that canvas into Three.js as a texture. In the main component, we call useTrailCanvas and create a CanvasTexture from it:

page.tsx

const materialRef = useRef<any>(null);
const canvasTexture = useRef<CanvasTexture | null>(null);

useEffect(() => {
  const { getTexture, dispose } = useTrailCanvas({
    showCanvas: true,
    onResize: () => {
      if (canvasTexture.current) {
        canvasTexture.current.dispose();
      }
      canvasTexture.current = new CanvasTexture(getTexture());
      canvasTexture.current.minFilter = LinearFilter;
      canvasTexture.current.magFilter = LinearFilter;
      canvasTexture.current.generateMipmaps = false;
      canvasTexture.current.needsUpdate = true;
    },
  });

  canvasTexture.current = new CanvasTexture(getTexture());
  canvasTexture.current.minFilter = LinearFilter;
  canvasTexture.current.magFilter = LinearFilter;
  canvasTexture.current.generateMipmaps = false;

  return () => {
    dispose();
    if (canvasTexture.current) {
      canvasTexture.current.dispose();
    }
  };
}, []);

The important part is setting needsUpdate = true and disabling mipmaps. Since the canvas changes every frame, we need to tell Three.js to re-upload the texture each frame. We'll handle that in useFrame:

page.tsx

useFrame(() => {
  if (materialRef.current && canvasTexture.current) {
    materialRef.current.uniforms.uDisplacementTexture.value =
      canvasTexture.current;
    materialRef.current.uniforms.uDisplacementTexture.value.needsUpdate = true;
  }
});

Step 3: The Shader Material

Now for the material itself. We create a custom shader material using @react-three/drei's shaderMaterial helper:

page.tsx

export const CardMaterial = shaderMaterial(
  {
    uTexture: null,
    uDisplacementTexture: null,
    uResolution: new Vector2(0, 0),
    uImageResolution: new Vector2(0, 0),
  },
  vertexShader,
  fragmentShader,
);

extend({ CardMaterial });

The uniforms are straightforward:

  • uTexture - The image we want to displace
  • uDisplacementTexture - Our mouse trail canvas
  • uResolution - Screen resolution for UV calculations
  • uImageResolution - Image dimensions for proper aspect ratio

The Vertex Shader

The vertex shader is dead simple. We're doing all the displacement in the fragment shader, so this just passes through UVs:

vertex.glsl

uniform float uTime;

varying vec2 vUv;

float PI = 3.141592653589793;

void main() {
  vec3 pos = position;

  vec4 modelPosition = modelMatrix * vec4(vec3(pos), 1.0);
  vec4 viewPosition = viewMatrix * modelPosition;
  vec4 projectionPosition = projectionMatrix * viewPosition;

  gl_Position = projectionPosition;
  vUv = uv;
}

The Fragment Shader

This is where displacement happens. We sample the displacement texture, convert its brightness to an angle, and use that to offset our UVs:

fragment.glsl

uniform sampler2D uTexture;
uniform sampler2D uDisplacementTexture;
uniform vec2 uResolution;
uniform vec2 uImageResolution;

float PI = 3.141592653589793;

varying vec2 vUv;

vec2 CoverUV(vec2 u, vec2 s, vec2 i) {
  float rs = s.x / s.y; // Screen aspect ratio
  float ri = i.x / i.y; // Image aspect ratio
  vec2 st = rs < ri ? vec2(i.x * s.y / i.y, s.y) : vec2(s.x, i.y * s.x / i.x);
  vec2 o = (rs < ri ? vec2((st.x - s.x) / 2.0, 0.0) : vec2(0.0, (st.y - s.y) / 2.0)) / st;
  return u * s / st + o;
}

void main() {
  vec2 coverUv = CoverUV(vUv, uResolution, uImageResolution);
  vec4 displacement = texture2D(uDisplacementTexture, vUv);

  // Smooth the displacement values
  displacement = smoothstep(0.1, 1.0, displacement);

  // Convert brightness to angle (0 to 2π)
  float theta = displacement.r * 2.0 * PI;

  // Create a directional offset
  vec2 offset = vec2(cos(theta), sin(theta));

  // Apply the displacement
  coverUv += offset * displacement.r * 0.05;

  vec4 color = texture2D(uTexture, coverUv);

  gl_FragColor = color;

  #include <tonemapping_fragment>
  #include <colorspace_fragment>
}

The CoverUV function ensures the image fills the screen while maintaining aspect ratio, like CSS background-size: cover.
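Since CoverUV is pure math, it can be ported to TypeScript and tested on the CPU (a sketch for verifying the mapping, not code the demo needs):

```typescript
// TypeScript port of the GLSL CoverUV function above.
type Vec2 = [number, number];

function coverUV(u: Vec2, screen: Vec2, image: Vec2): Vec2 {
  const rs = screen[0] / screen[1]; // screen aspect ratio
  const ri = image[0] / image[1];   // image aspect ratio
  // Size of the image once scaled to cover the screen.
  const st: Vec2 = rs < ri
    ? [image[0] * screen[1] / image[1], screen[1]]
    : [screen[0], image[1] * screen[0] / image[0]];
  // Centering offset (in UV space) along the cropped axis.
  const o: Vec2 = rs < ri
    ? [(st[0] - screen[0]) / 2 / st[0], 0]
    : [0, (st[1] - screen[1]) / 2 / st[1]];
  return [u[0] * screen[0] / st[0] + o[0], u[1] * screen[1] / st[1] + o[1]];
}

// A square image on a wide screen: the center still maps to the center;
// the vertical axis is cropped and recentered.
console.log(coverUV([0.5, 0.5], [1920, 1080], [1000, 1000])); // [ 0.5, 0.5 ]
```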

The displacement logic is interesting. We take the red channel (brightness), map it to an angle from 0 to 2π, then use that angle to create a direction vector with cos and sin. This means brighter parts of the trail push the image in different directions based on the brightness value, creating a swirling, organic displacement instead of just pushing in one direction.

The smoothstep(0.1, 1.0, displacement) adds a threshold so very faint parts of the trail don't cause displacement, making the effect cleaner.
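The whole brightness-to-offset step is likewise easy to model outside the shader. This sketch reimplements GLSL's smoothstep in TypeScript (my port, using the standard Hermite formula) and shows that sub-threshold brightness produces zero offset:

```typescript
// GLSL-style smoothstep: 0 below edge0, 1 above edge1, eased in between.
function smoothstep(edge0: number, edge1: number, x: number): number {
  const t = Math.min(1, Math.max(0, (x - edge0) / (edge1 - edge0)));
  return t * t * (3 - 2 * t);
}

// CPU model of the fragment shader's displacement step:
// brightness -> angle -> UV offset, with the same 0.05 strength.
function displacementOffset(brightness: number): [number, number] {
  const s = smoothstep(0.1, 1.0, brightness); // threshold faint pixels
  const theta = s * 2 * Math.PI;              // brightness to angle
  return [Math.cos(theta) * s * 0.05, Math.sin(theta) * s * 0.05];
}

// Below the 0.1 threshold nothing moves; at full brightness the offset
// reaches the maximum magnitude of 0.05 UV units.
console.log(displacementOffset(0.05)); // [ 0, 0 ]
console.log(displacementOffset(1));    // approximately [ 0.05, 0 ]
```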

Visualizing the Displacement Texture

Before we apply the displacement, let's see what the displacement texture actually looks like. Move your mouse around in this demo—the white trails you see are what the shader uses to displace the image. Notice how the canvas preview in the top-right corner matches what's displayed:

import React, { useEffect, useRef } from "react";
import { shaderMaterial } from "@react-three/drei";
import { Canvas, extend, useFrame, useThree } from "@react-three/fiber";
import { CanvasTexture, LinearFilter, Vector2 } from "three";
import { useTrailCanvas } from "./useTrailCanvas";

const vertexShader = `
uniform float uTime;

varying vec2 vUv;

float PI = 3.141592653589793;

void main() {
  vec3 pos = position;

  vec4 modelPosition = modelMatrix * vec4(vec3(pos), 1.0);

  vec4 viewPosition = viewMatrix * modelPosition;

  vec4 projectionPosition = projectionMatrix * viewPosition;

  gl_Position = projectionPosition;

  vUv = uv;
}
`;

const fragmentShader = `
uniform sampler2D uDisplacementTexture;

varying vec2 vUv;

void main() {
  vec4 displacement = texture2D(uDisplacementTexture, vUv);
  
  // Visualize the displacement texture directly
  gl_FragColor = displacement;

  #include <tonemapping_fragment>
  #include <colorspace_fragment>
}
`;

export const VisualizationMaterial = shaderMaterial(
  {
    uDisplacementTexture: null,
  },
  vertexShader,
  fragmentShader,
);

extend({ VisualizationMaterial });

function Experience() {
  const { viewport, size } = useThree((state) => state);

  const materialRef = useRef(null);
  const canvasTexture = useRef(null);

  useEffect(() => {
    const { getTexture, dispose } = useTrailCanvas({
      showCanvas: true,
      onResize: () => {
        if (canvasTexture.current) {
          canvasTexture.current.dispose();
        }
        canvasTexture.current = new CanvasTexture(getTexture());
        canvasTexture.current.minFilter = LinearFilter;
        canvasTexture.current.magFilter = LinearFilter;
        canvasTexture.current.generateMipmaps = false;

        canvasTexture.current.needsUpdate = true;
      },
    });
    canvasTexture.current = new CanvasTexture(getTexture());
    canvasTexture.current.minFilter = LinearFilter;
    canvasTexture.current.magFilter = LinearFilter;
    canvasTexture.current.generateMipmaps = false;

    return () => {
      dispose();
      if (canvasTexture.current) {
        canvasTexture.current.dispose();
      }
    };
  }, []);

  useFrame(() => {
    if (materialRef.current && canvasTexture.current) {
      materialRef.current.uniforms.uDisplacementTexture.value =
        canvasTexture.current;
      materialRef.current.uniforms.uDisplacementTexture.value.needsUpdate = true;
    }
  });

  return (
    <mesh>
      <planeGeometry args={[viewport.width, viewport.height]} />
      <visualizationMaterial
        ref={materialRef}
        uDisplacementTexture={canvasTexture.current}
      />
    </mesh>
  );
}

export default function Page() {
  return (
    <Canvas
      style={{
        position: "fixed",
        top: 0,
        left: 0,
        width: "100%",
        height: "100%",
      }}
      camera={{
        position: [0, 0, 1],
      }}
    >
      <Experience />
    </Canvas>
  );
}

This is the raw displacement data. In the next step, we'll use this to distort the actual image.

Step 4: Putting It Together

Finally, we render a plane that fills the viewport and apply our material:

page.tsx

function Experience() {
  const { viewport, size } = useThree((state) => state);

  const materialRef = useRef<any>(null);
  const canvasTexture = useRef<CanvasTexture | null>(null);

  const texture = useTexture(
    "https://cdn.cosmos.so/084d20d7-19de-416f-8ab4-a23b7d5efafd?format=jpeg",
  );

  // Canvas setup and useFrame code from earlier...

  return (
    <mesh>
      <planeGeometry args={[viewport.width, viewport.height]} />
      {/* @ts-ignore */}
      <cardMaterial
        ref={materialRef}
        wireframe={false}
        uDisplacementTexture={canvasTexture.current}
        uTexture={texture}
        uImageResolution={
          new Vector2(texture.source.data.width, texture.source.data.height)
        }
        uResolution={
          new Vector2(viewport.dpr * size.width, viewport.dpr * size.height)
        }
      />
    </mesh>
  );
}

Touch Support

One thing I'm proud of with this implementation is that touch works seamlessly. The useTrailCanvas hook listens for both mouse and touch events:

useTrailCanvas.ts

function handleTouchMove(ev: TouchEvent) {
  ev.preventDefault(); // Prevent scrolling while drawing

  if (ev.touches.length > 0) {
    const touch = ev.touches[0];
    const x = touch.clientX / sizes.screenWidth;
    const y = 1 - touch.clientY / sizes.screenHeight;

    cursorPosition.x = x;
    cursorPosition.y = y;
    canvasPosition.x = x * canvas.width;
    canvasPosition.y = (1 - y) * canvas.height;
  }
}

window.addEventListener("touchmove", handleTouchMove, { passive: false });
window.addEventListener("touchstart", handleTouchStart, { passive: false });

The { passive: false } is important because we call preventDefault() to stop the page from scrolling while you're drawing. On mobile, you can drag your finger across the screen and watch the image distort in real-time.

Performance Considerations

This effect is surprisingly performant for what it's doing. The canvas operations are lightweight, and the shader is simple enough that it runs at 60fps on most devices. But there are a few things to keep in mind:

Canvas resolution: The canvas is sized by aspect ratio—512px height with width scaled to match your screen. Higher resolution gives more detail in the displacement but costs more in texture upload time.
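For reference, the default sizing works out like this (a small helper of my own; the hook assigns height * aspectRatio directly and lets the canvas property handle the fractional value):

```typescript
// Compute the trail-canvas dimensions for a given screen size.
function trailCanvasSize(screenW: number, screenH: number, height = 512) {
  const aspect = screenW / screenH;
  return { width: Math.round(height * aspect), height };
}

// A 1920×1080 screen gets a 910×512 displacement map.
console.log(trailCanvasSize(1920, 1080)); // { width: 910, height: 512 }
```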

Fade speed: The trail fade rate (delta * 3) affects how long the displacement lingers. Faster fade = more responsive but less trailing effect. Tune this to taste.

Mipmap generation: We disable mipmaps with generateMipmaps = false because the texture updates every frame. Generating mipmaps on each update would tank performance.

Variations and Experiments

Once you have the basic setup working, there are tons of ways to play with it:

Different displacement directions: Instead of using the brightness as an angle, you could displace based on the gradient of the trail (using the trail's direction as the displacement direction).
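A sketch of what that variation could compute (hypothetical helper; the demo doesn't include this): finite differences over the trail's brightness field give the local slope direction, which you could use as the displacement direction instead of the brightness-derived angle:

```typescript
// Direction of steepest brightness increase at (x, y), estimated with
// central finite differences and normalized to unit length.
function gradientDirection(
  b: (x: number, y: number) => number,
  x: number,
  y: number,
  eps = 1e-3,
): [number, number] {
  const dx = (b(x + eps, y) - b(x - eps, y)) / (2 * eps);
  const dy = (b(x, y + eps) - b(x, y - eps)) / (2 * eps);
  const len = Math.hypot(dx, dy) || 1;
  return [dx / len, dy / len];
}

// For a radial bump centered at (0.5, 0.5), the gradient at (0.7, 0.5)
// points back toward the center, i.e. along -x.
const bump = (x: number, y: number) =>
  Math.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) * 20);
const [gx, gy] = gradientDirection(bump, 0.7, 0.5);
console.log(gx < -0.99 && Math.abs(gy) < 1e-6); // true
```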

Multiple images: Displace between two images and use the trail to blend between them.

Custom brush images: As mentioned earlier, you can pass custom images to useTrailCanvas instead of using the radial gradient. Try noise textures for chaotic effects, directional arrows for flow fields, or even text/logos for branded displacement patterns.

3D displacement: Do the displacement in the vertex shader instead of fragment shader to actually move the geometry, not just the texture. This creates a relief/emboss effect.

Color displacement: Instead of a grayscale trail, use RGB channels to displace in different directions or displace different color channels separately for chromatic aberration.
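For the chromatic-aberration idea, one possible fragment-shader tweak (my variation, not code from the demo) replaces the final sampling lines so each color channel is pushed a slightly different distance, fringing the edges of the distortion into color:

```glsl
float theta = displacement.r * 2.0 * PI;
vec2 dir = vec2(cos(theta), sin(theta));

// Sample each channel with its own displacement strength.
float r = texture2D(uTexture, coverUv + dir * displacement.r * 0.05).r;
float g = texture2D(uTexture, coverUv + dir * displacement.r * 0.04).g;
float b = texture2D(uTexture, coverUv + dir * displacement.r * 0.03).b;

gl_FragColor = vec4(r, g, b, 1.0);
```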

Interactive Demo

Here's the full implementation you can play with. Move your mouse around the preview to see the displacement effect in action:

import React, { useEffect, useRef } from "react";
import { shaderMaterial, useTexture } from "@react-three/drei";
import { Canvas, extend, useFrame, useThree } from "@react-three/fiber";
import { CanvasTexture, LinearFilter, Vector2 } from "three";
import { useTrailCanvas } from "./useTrailCanvas";

const vertexShader = `
uniform float uTime;

varying vec2 vUv;

float PI = 3.141592653589793;

void main() {
  vec3 pos = position;

  vec4 modelPosition = modelMatrix * vec4(vec3(pos), 1.0);

  vec4 viewPosition = viewMatrix * modelPosition;

  vec4 projectionPosition = projectionMatrix * viewPosition;

  gl_Position = projectionPosition;

  vUv = uv;
}
`;

const fragmentShader = `
uniform sampler2D uTexture;
uniform sampler2D uDisplacementTexture;
uniform vec2 uResolution;
uniform vec2 uImageResolution;
uniform float uTime;

float PI = 3.141592653589793;

varying vec2 vUv;

vec2 CoverUV(vec2 u, vec2 s, vec2 i) {
  float rs = s.x / s.y; // Aspect screen size
  float ri = i.x / i.y; // Aspect image size
  vec2 st = rs < ri ? vec2(i.x * s.y / i.y, s.y) : vec2(s.x, i.y * s.x / i.x); // New st
  vec2 o = (rs < ri ? vec2((st.x - s.x) / 2.0, 0.0) : vec2(0.0, (st.y - s.y) / 2.0)) / st; // Offset
  return u * s / st + o;
}

void main() {
  vec2 coverUv = CoverUV(vUv, uResolution, uImageResolution);
  vec4 displacement = texture2D(uDisplacementTexture, vUv);

  displacement = smoothstep(0.1, 1.0, displacement);

  float theta = displacement.r * 2.0 * PI; // Convert from [0, 1] to [0, 2π]
  vec2 offset = vec2(cos(theta), sin(theta));

  coverUv += offset * displacement.r * 0.05;

  vec4 color = texture2D(uTexture, coverUv);

  gl_FragColor = color;

  #include <tonemapping_fragment>
  #include <colorspace_fragment>
}
`;

export const CardMaterial = shaderMaterial(
  {
    uTexture: null,
    uDisplacementTexture: null,
    uResolution: new Vector2(0, 0),
    uImageResolution: new Vector2(0, 0),
  },
  vertexShader,
  fragmentShader,
);

extend({ CardMaterial });

function Experience() {
  const { viewport, size } = useThree((state) => state);

  const materialRef = useRef(null);
  const canvasTexture = useRef(null);

  const texture = useTexture(
    "https://cdn.cosmos.so/084d20d7-19de-416f-8ab4-a23b7d5efafd?format=jpeg",
  );

  useEffect(() => {
    const { getTexture, dispose } = useTrailCanvas({
      showCanvas: true,
      onResize: () => {
        if (canvasTexture.current) {
          canvasTexture.current.dispose();
        }
        canvasTexture.current = new CanvasTexture(getTexture());
        canvasTexture.current.minFilter = LinearFilter;
        canvasTexture.current.magFilter = LinearFilter;
        canvasTexture.current.generateMipmaps = false;

        canvasTexture.current.needsUpdate = true;
      },
    });
    canvasTexture.current = new CanvasTexture(getTexture());
    canvasTexture.current.minFilter = LinearFilter;
    canvasTexture.current.magFilter = LinearFilter;
    canvasTexture.current.generateMipmaps = false;

    return () => {
      dispose();
      if (canvasTexture.current) {
        canvasTexture.current.dispose();
      }
    };
  }, []);

  useFrame(() => {
    if (materialRef.current && canvasTexture.current) {
      materialRef.current.uniforms.uDisplacementTexture.value =
        canvasTexture.current;
      materialRef.current.uniforms.uDisplacementTexture.value.needsUpdate = true;
    }
  });

  return (
    <mesh>
      <planeGeometry args={[viewport.width, viewport.height]} />
      <cardMaterial
        ref={materialRef}
        wireframe={false}
        uDisplacementTexture={canvasTexture.current}
        uTexture={texture}
        uImageResolution={
          new Vector2(texture.source.data.width, texture.source.data.height)
        }
        uResolution={
          new Vector2(viewport.dpr * size.width, viewport.dpr * size.height)
        }
      />
    </mesh>
  );
}

export default function Page() {
  return (
    <Canvas
      style={{
        position: "fixed",
        top: 0,
        left: 0,
        width: "100%",
        height: "100%",
      }}
      camera={{
        position: [0, 0, 1],
      }}
    >
      <Experience />
    </Canvas>
  );
}

Wrapping Up

This effect combines a bunch of different web technologies in a way that feels natural once you break it down. Canvas for painting, WebGL for rendering, React for coordination. Each piece does what it's good at.

The nice thing about building effects like this is that the techniques are transferable. The canvas trail could drive audio reactivity, particle systems, or post-processing effects. The shader displacement could respond to video input, webcam data, or generative noise. Once you understand the pipeline, you can plug different sources and effects together.

If you build something with this technique, I'd love to see it. Hit me up on Twitter.

Interactive Image Displacement with Mouse Trail | RNDR Realm