Getting Started with WebGPU: A Practical Guide for Web Developers
Context: Triangle Shader Lab is a small Next.js + TypeScript study site deployed to GitHub Pages. It hosts two demos (Hello Triangle + Textured Cube) with annotated copy.
AI assist: ChatGPT/Copilot helped with WGSL snippets, adapter/device explanations, and code comments. I cite them directly on the site.
Status: Learning tool only. It adapts public WebGPU samples, adds notes, and keeps expectations low.
Reality snapshot
- Built with Next.js 14, Bun, Tailwind, and MDX. No custom engine—just readable wrappers around the official WebGPU samples.
- Goal: understand adapter requests, pipeline creation, buffers, and render loops, then share the notes publicly.
- Limitations: Works where WebGPU is enabled (Chrome Canary, Edge with flags). No WebGL fallback, no production features.
Architecture
```
triangle-shader-lab/
├── pages/
│   ├── index.mdx
│   ├── hello-triangle.mdx
│   └── textured-cube.mdx
├── components/
│   ├── DemoWrapper.tsx
│   └── CodeBlock.tsx
├── lib/
│   └── webgpu.ts
└── public/
    └── shaders/
```
- `DemoWrapper` handles feature detection (`navigator.gpu`), adapter/device negotiation, and fallback messaging (sketched below).
- Each demo page includes: overview, code, “Reality check” block (what’s working/missing), and proof links.
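For reference, here is a minimal sketch of that detection path. The helper name, return shape, and messages are illustrative assumptions (the real code lives in `components/DemoWrapper.tsx` and `lib/webgpu.ts`); the API calls themselves follow the WebGPU spec.

```ts
// Illustrative sketch of the detection path; names and return shape are
// assumptions, not the repo's actual exports (types come from @webgpu/types).
export type GpuInit =
  | { ok: true; adapter: GPUAdapter; device: GPUDevice }
  | { ok: false; reason: string };

export async function initWebGPU(): Promise<GpuInit> {
  // Feature detection. The typeof guard also covers Next.js server-side
  // rendering, where `navigator` does not exist at all.
  if (typeof navigator === "undefined" || !("gpu" in navigator)) {
    return { ok: false, reason: "WebGPU is not available in this browser." };
  }
  // Adapter negotiation: requestAdapter resolves to null (not an error)
  // when no compatible GPU is found, so treat null as a fallback case.
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) {
    return { ok: false, reason: "No WebGPU adapter found on this machine." };
  }
  const device = await adapter.requestDevice();
  return { ok: true, adapter, device };
}
```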
Demo breakdown
Hello Triangle
- Rebuilds the canonical sample. WGSL shaders render a single triangle with color interpolation.
- Notes explain `requestAdapter`, `requestDevice`, swap chain configuration, and command encoder setup (see the sketch below).
- Users can toggle debugging overlays to see pipeline state (bind groups, buffers) in real time.
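To make those notes concrete, here is a sketch of the frame skeleton the bullets describe. `device` and `pipeline` are assumed to already exist (pipeline creation follows the official sample); only the canvas configuration and command encoding are shown.

```ts
// Frame skeleton for the Hello Triangle notes; `device` and `pipeline`
// are assumed (pipeline creation follows the official sample).
declare const device: GPUDevice;
declare const pipeline: GPURenderPipeline;

const canvas = document.querySelector<HTMLCanvasElement>("canvas")!;
const context = canvas.getContext("webgpu")!;

// The spec replaced explicit swap chains with context.configure().
context.configure({
  device,
  format: navigator.gpu.getPreferredCanvasFormat(),
  alphaMode: "opaque",
});

function frame() {
  const encoder = device.createCommandEncoder();
  const pass = encoder.beginRenderPass({
    colorAttachments: [{
      // getCurrentTexture() must be called every frame, not cached.
      view: context.getCurrentTexture().createView(),
      clearValue: { r: 0, g: 0, b: 0, a: 1 },
      loadOp: "clear",
      storeOp: "store",
    }],
  });
  pass.setPipeline(pipeline);
  pass.draw(3); // three vertices generated in the vertex shader
  pass.end();
  device.queue.submit([encoder.finish()]);
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);
```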
Textured Cube
- Adds vertex buffers, indices, textures, uniform buffers, and a basic rotation matrix.
- Highlights the extra state needed (samplers, bind group layouts; sketched below) and warns about missing features (lighting, depth testing).
- Links back to the original sample so readers know I didn’t invent it.
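As a sketch of that extra state, here is roughly what the cube wires up: one bind group holding the uniform buffer for the rotation matrix, a sampler, and the texture. Names and sizes assume the layout of the official Textured Cube sample, not this repo's exact code.

```ts
// Extra state for the textured cube, mirroring the official sample's
// layout; `device` and `texture` are assumed to exist already.
declare const device: GPUDevice;
declare const texture: GPUTexture;

const bindGroupLayout = device.createBindGroupLayout({
  entries: [
    // binding 0: the 4x4 rotation/MVP matrix, read by the vertex stage.
    { binding: 0, visibility: GPUShaderStage.VERTEX, buffer: { type: "uniform" } },
    // bindings 1-2: sampler + texture, read by the fragment stage.
    { binding: 1, visibility: GPUShaderStage.FRAGMENT, sampler: {} },
    { binding: 2, visibility: GPUShaderStage.FRAGMENT, texture: {} },
  ],
});

// A 4x4 f32 matrix is 64 bytes; COPY_DST lets the rotation be re-uploaded
// each frame with device.queue.writeBuffer().
const uniformBuffer = device.createBuffer({
  size: 64,
  usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST,
});

const bindGroup = device.createBindGroup({
  layout: bindGroupLayout,
  entries: [
    { binding: 0, resource: { buffer: uniformBuffer } },
    { binding: 1, resource: device.createSampler({ magFilter: "linear", minFilter: "linear" }) },
    { binding: 2, resource: texture.createView() },
  ],
});
```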
Tooling
- Next.js 14 + MDX: Lets me combine prose + code.
- Bun: Fast dev server, zero-config TypeScript support.
- Tailwind: Provides consistent typography/layout.
- Netlify analytics: Tracks which sections get the most attention; I use that data to prioritize updates.
- Honesty log: Each page includes “Reality snapshot” + AI disclosure.
Lessons learned
- Feature detection + honest fallbacks matter. If WebGPU isn’t available, I show a clear message instead of a blank canvas.
- Annotated copy helps more than fancy effects. Recruiters appreciated that I call it a study site, not a framework.
- Sharing prompt logs builds trust. I include links to AI conversations so engineers know where I got help.
Setup checklist (what actually worked)
- Chrome Canary or Edge with the `--enable-unsafe-webgpu` flag; macOS needs Metal support.
- Bun 1.x, Node 20.x, Next.js 14 with the `experimental.webgpu` flag enabled.
- Install GPU debuggers (Chrome DevTools WebGPU tab) and use RenderDoc captures sparingly to avoid huge files.
- Keep shaders small and readable; every line gets a comment tied to the tutorial step.
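As an example of that habit, here is the canonical hello-triangle WGSL with step comments, inlined as a template string the way the demos ship their shaders. It is a sketch, not the repo's exact shader.

```ts
// Canonical hello-triangle WGSL with the comment-per-step habit applied.
const triangleWGSL = /* wgsl */ `
  // Step 1: derive three clip-space positions from the vertex index,
  // so the first demo needs no vertex buffer at all.
  @vertex
  fn vs_main(@builtin(vertex_index) i: u32) -> @builtin(position) vec4f {
    var pos = array<vec2f, 3>(
      vec2f( 0.0,  0.5), // top
      vec2f(-0.5, -0.5), // bottom left
      vec2f( 0.5, -0.5)  // bottom right
    );
    return vec4f(pos[i], 0.0, 1.0);
  }

  // Step 2: a flat color; the annotated demo swaps this for
  // per-vertex colors to show interpolation.
  @fragment
  fn fs_main() -> @location(0) vec4f {
    return vec4f(1.0, 0.4, 0.2, 1.0);
  }
`;
```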
Debugging patterns that saved me
- Blank canvas: Verify `navigator.gpu` first, then check adapter requests. If the adapter is null, bail with a user-facing message.
- Nothing renders after resize: Ensure `device.queue.submit` still runs and the swap chain is reconfigured; resizing resets context state.
- Weird colors/vertices: Inspect buffer layouts and alignments; a single stride mistake ruined the cube until I logged offsets (see the sketch after this list).
- Performance dips: Use small frame counters; disable fancy overlays unless debugging. WebGPU is fast, but the page still needs to be honest on low-power laptops.
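Here is the kind of stride arithmetic that caught the cube bug, written out as a vertex buffer layout with the byte math in comments. The interleaved position + uv format is an assumption matching the textured-cube sample.

```ts
// Vertex layout with the byte math spelled out; assumes interleaved
// position (3 floats) + uv (2 floats), as in the textured-cube sample.
const FLOAT_BYTES = 4;

const vertexBufferLayout: GPUVertexBufferLayout = {
  // 3 + 2 floats = 5 floats = 20 bytes per vertex. Getting this number
  // wrong skews every vertex after the first ("weird vertices").
  arrayStride: 5 * FLOAT_BYTES,
  attributes: [
    { shaderLocation: 0, offset: 0,               format: "float32x3" }, // position
    { shaderLocation: 1, offset: 3 * FLOAT_BYTES, format: "float32x2" }, // uv
  ],
};

// Logging offsets next to the stride is what made the mismatch visible.
console.table(vertexBufferLayout.attributes);
```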
Mistakes and corrections
- Copying samples without understanding: Early on I pasted code and hoped. Now every demo has a “Why this line exists” note.
- Ignoring accessibility: Added captions, transcripts for Looms, and clear “WebGPU required” copy.
- No fallback demos: Added a “read-only” mode with screenshots and code snippets so readers without WebGPU can still learn.
- Too few tests: I now snapshot the rendered canvas dimensions and check for thrown errors during adapter/device negotiation.
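For the negotiation check, a minimal sketch with Bun's built-in test runner, reusing the hypothetical `initWebGPU` helper sketched earlier (the import path is also assumed). Headless test environments have no `navigator.gpu`, so the assertion is that the failure path returns a user-facing message rather than throwing; canvas-dimension snapshots additionally need a DOM environment.

```ts
// Minimal negotiation test with Bun's runner; initWebGPU is the
// hypothetical helper from earlier, and the import path is assumed.
import { expect, test } from "bun:test";
import { initWebGPU } from "../lib/webgpu";

test("adapter/device negotiation resolves instead of throwing", async () => {
  const result = await initWebGPU(); // must not reject, even headless
  if (!result.ok) {
    // Fallback path: a user-facing message, never a blank failure.
    expect(result.reason.length).toBeGreaterThan(0);
  }
});
```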
How this project shows up in interviews
- I treat it as a learning case study, not production. I walk interviewers through the “Reality snapshot” and the bug logs.
- I use it to answer “How do you learn new low-level tech?” → start small, annotate heavily, build honesty logs, and keep proof links.
- I call out that AI drafted early WGSL snippets, but I rewrote them after validating against the WebGPU spec.
Reading and reference list
- WebGPU spec + MDN docs (annotated in `notes/webgpu.md`).
- Chrome WebGPU samples and Mozilla’s demos.
- “WebGPU Fundamentals” blog series for mental models of pipelines and bind groups.
- RenderDoc + Chrome DevTools docs for capture/inspection tips.
TODOs
- Add WebGPU flag instructions for each browser.
- Record Loom walkthroughs to explain the demos verbally.
- Experiment with WebGPU compute shaders and document those notes the same way.
- Publish the repo publicly (currently private while I scrub assets).
Links
- Live demo: https://bradleymatera.github.io/TriangleDemo/
- GitHub repo: https://github.com/BradleyMatera/TriangleDemo
- Prompt log: `notes/webgpu-prompts.md`