How I Learn by Doing: My CodePen Journey
Context: CodePen is the sandbox where I learn in public while juggling school, family, and internship work. None of these pens are “products”—they’re reps.
AI assist: Copilot/ChatGPT help with scaffolds, regex, and wording. I note AI involvement on every pen.
Status: Learning log first, portfolio second. Expect rough edges and TODOs.
TL;DR
- I start in CodePen because it is fast, safe to break, and easy to share.
- AI gives me scaffolds; I supply constraints, debugging, and deploys.
- Pens only graduate to repos when they are stable, accessible, and tested.
- Every pen gets a “Reality” block, GIF, and backlog note to keep me honest.
Reality snapshot
- Current loop: CodePen for fast prototypes → debug/annotate → graduate to repos once stable.
- Guardrails: reality blocks, backlog notes, and “graduate or archive” choices logged per pen.
- Tooling: AI for scaffolds; I constrain prompts, rewrite risky bits, and own deploys.
- Gaps: deeper CS/algorithms, full test coverage, and some mobile/audio edge cases.
Table of contents
- Where my skills actually sit
- Why CodePen is my gym
- How I start a pen
- Anchor pens & lessons
- Learning pattern
- Case studies
- AI: helpful vs harmful
- Retro + graduation rules
- Failure modes + checklists
- Prompt recipes
- Next steps

Hands-on loop: type, run, break, fix. Source: Giphy.
Where My Skills Actually Sit
- B.S. Web Development (Full Sail 2025) gave me HTML/CSS/JS fundamentals, starter React, a dash of Python/SQL, and UX basics.
- Comfort zone: reading JS/React, tracing requests, wiring small APIs, and deploying to GitHub Pages/Netlify without fear.
- Gaps: classic data structures/algorithms depth, writing everything from a blank file without references, and formal CS math.
- AI’s role: accelerant, not autopilot. I still debug, decide scope, and wire deploys.
- Constraints: balancing family, internship work, and school means I need feedback loops that are short, cheap, and honest—hence CodePen.
- Taste: prioritize honest copy, realistic scopes, and receipts that survive interviews over “perfect” patterns.
Why CodePen Is My Gym
- Instant feedback: No local setup; I can prototype an idea in minutes.
- Safe failure: Breaking a pen doesn’t take down Netlify or a Render backend.
- Documentation: Every pen description now includes a “Reality” block (what works, what doesn’t, what AI wrote).
- Shareable receipts: I can drop a single URL into DMs or interviews to show behavior without asking anyone to clone a repo.
- Scope guardrails: Pens force me to keep things small; if the surface area blows up, that is a signal to graduate to a repo.
- Visual proof: GIFs and screenshots live beside the code so reviewers see behavior before clicking anything.

Prototyping fast: sketch, test, ship. Source: Giphy.
What I Carry Into Every Pen
- Small CSS token set (spacing, typography, brand colors) so pens look related.
- Console helpers (`console.table`, inline debug overlays) for fast inspection (sketch after this list).
- A “Reality” badge that states AI involvement, known bugs, and TODOs.
- Copy blocks that admit what is missing (“Mobile audio limited,” “Keyboard nav WIP”).
- A cleanup checklist: keyboard, focus ring, reduced motion, and loading states before I publish.
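A minimal sketch of the console-helper idea in TypeScript; the `debugTable` name and the calculator-ish state shape are illustrative, not lifted from any specific pen:

```ts
// Minimal console helper: dump labeled state snapshots as a table.
// Names here (debugTable, CalcState) are illustrative, not from a real pen.
interface CalcState {
  display: string;
  pendingOp: string | null;
  lastKey: string;
}

const history: Array<CalcState & { at: string }> = [];

function debugTable(state: CalcState): void {
  // Keep a rolling log so console.table shows transitions, not just the latest value.
  history.push({ ...state, at: new Date().toISOString().slice(11, 23) });
  console.table(history.slice(-10)); // the last 10 snapshots is usually enough
}

// Usage: call after every state change.
debugTable({ display: "0", pendingOp: null, lastKey: "C" });
debugTable({ display: "7", pendingOp: "+", lastKey: "+" });
```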
How I Start a Pen (Even When I Don’t Know Enough)
- Talk like a human: “I want a calculator that won’t crash on divide by zero,” not “build me a perfect architecture.”
- Let AI propose a first file: Folder structure + a single entry point. Nothing more.
- Ship a broken version fast: Paste, run, screenshot the error. That’s my real syllabus.
- Iterate with receipts: I send errors back to AI, adjust, and log “Reality” notes in the pen description.
- Graduate or archive: Stable pens move to GitHub repos with tests; noisy ones stay on CodePen as cautionary tales.
- Add observability: Even in tiny pens I include console tables, inline status text, and sometimes a debug overlay that renders current state—cheap observability muscles.
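Here is roughly what that debug overlay looks like; a hedged sketch assuming a plain browser pen, with hypothetical names throughout:

```ts
// Cheap observability: a fixed-position overlay that mirrors current state.
// Hypothetical sketch; a real pen inlines something like this in the JS panel.
function mountDebugOverlay(): (state: unknown) => void {
  const el = document.createElement("pre");
  el.style.cssText =
    "position:fixed;bottom:0;right:0;max-width:40ch;margin:0;padding:.5rem;" +
    "background:#111;color:#0f0;font:12px/1.4 monospace;opacity:.85;z-index:9999";
  document.body.appendChild(el);
  // Return an updater so the rest of the pen stays decoupled from the overlay.
  return (state) => {
    el.textContent = JSON.stringify(state, null, 2);
  };
}

const renderDebug = mountDebugOverlay();
renderDebug({ mode: "idle", queue: [] }); // call again whenever state changes
```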
Anchor Pens & What They Taught Me
| Pen | Link | Focus | Reality snapshot |
|---|---|---|---|
| Garbage Collection Visualizer | Pen | Mark-and-sweep animation via vanilla JS/CSS | Good for interviews. Numbers are illustrative, not engine-accurate. AI helped outline the visuals. |
| React Calculator | Pen | Controlled inputs + useState patterns | Handles multiple decimals + divide-by-zero messaging. Keyboard support still TODO. |
| Sound Machine | Pen | Keyboard accessibility + audio APIs | Resetting audio.currentTime prevents overlaps; mobile audio quirks remain. |
| Markdown Previewer | Pen | marked + sanitization | Gracefully handles malformed input; sanitization sketch below the table. Needs tests before repo promotion. |
| Regex Analyzer | Pen | Regex visualization + copy-to-clipboard | Explains capture groups; AI wrote starter text, I rewrote logic. |
| Random Quote Generator v1/v2 | Pen / Pen | Async fetches + design iteration | Shows my jump from “works” to “pleasant.” Debounce and loading states are in. |
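The Markdown Previewer row refers to the parse-then-sanitize pattern; a sketch assuming `marked` and `DOMPurify` are available (in a pen they usually arrive as external scripts rather than imports), with hypothetical `#editor`/`#preview` IDs:

```ts
// Markdown preview with sanitization: parse with marked, sanitize before injecting.
// Assumes marked and DOMPurify are loaded; element IDs are hypothetical.
import { marked } from "marked";
import DOMPurify from "dompurify";

const input = document.querySelector<HTMLTextAreaElement>("#editor")!;
const preview = document.querySelector<HTMLElement>("#preview")!;

input.addEventListener("input", () => {
  let html: string;
  try {
    // marked.parse is synchronous by default; the cast covers the async overload.
    html = marked.parse(input.value) as string;
  } catch {
    preview.textContent = "Could not parse that Markdown.";
    return;
  }
  // Never trust raw HTML from a Markdown parser; strip scripts/handlers first.
  preview.innerHTML = DOMPurify.sanitize(html);
});
```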

Cataloging pens and lessons. Source: Giphy.
Learning Pattern That Keeps Me Honest
- Set a hypothesis: “Can I explain garbage collection visually?” “Can I mask card numbers accessibly?”
- Instrument immediately: `console.table`, live regions, overlays; debug without leaving the pen.
- Break it on purpose: Invalid inputs, rapid keypresses, offline mode. Note what fails and why.
- Document the truth: Each pen gets a “Reality” block with scope, AI involvement, and TODOs.
- Promote selectively: Stable pens graduate to GitHub + tests + CI (CheeseMath is the pattern).
- Archive without guilt: If it stays noisy, I leave it as a lesson, not a product.
- Track drift: When a pen diverges from its original intent, I either rename it or split it. This prevents “mystery blobs” that do too much.

Iteration loop: build, test, adjust, repeat. Source: Giphy.
Case Study: Card Obscurer (CheeseMath)
- Goal: practice credit card masking/validation with clear UX.
- Live demo: https://bradleymatera.github.io/CheeseMath-Jest-Tests/.
- Reality: regex alone was brittle; added Luhn checks (sketch after this list), inline errors, and ARIA labels.
- Next: axe/Lighthouse audits + unit tests before calling it “portfolio-ready.”
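The Luhn step itself is a standard checksum; a compact TypeScript version (the function name is mine, and the pen wraps it with masking and inline errors):

```ts
// Luhn checksum: double every second digit from the right, sum digits, mod 10.
// Standard algorithm; the masking/UX around it lives in the pen.
function passesLuhn(cardNumber: string): boolean {
  const digits = cardNumber.replace(/\D/g, ""); // strip spaces and dashes
  if (digits.length < 12) return false; // too short to be a real PAN

  let sum = 0;
  for (let i = 0; i < digits.length; i++) {
    let d = Number(digits[digits.length - 1 - i]);
    if (i % 2 === 1) {
      d *= 2;
      if (d > 9) d -= 9; // same as summing the two digits of d
    }
    sum += d;
  }
  return sum % 10 === 0;
}

console.log(passesLuhn("4539 1488 0343 6467")); // true: a well-known test number
console.log(passesLuhn("1234 5678 9012 3456")); // false
```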

Testing validation flows. Source: Giphy.
Case Study: Garbage Collection Visualizer
- Goal: teach myself GC phases (mark, sweep) without hand-waving.
- Build notes: started with AI-generated SVG circles; rewrote to a simple canvas grid to control performance. Added a “slow-mo” toggle to see the sweep phase; a toy version of the mark/sweep loop follows this list.
- Reality: numbers are illustrative, not tied to real engine telemetry. Memory fragmentation visuals are crude but good enough to talk through in interviews.
- Next: log simulated heap snapshots to show fragmentation over time and link out to real engine docs for honesty.
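For reference, that toy mark/sweep loop; purely illustrative, with no connection to real engine telemetry:

```ts
// Toy mark-and-sweep over a grid of "heap cells"; illustrative, like the pen.
// Each cell may reference other cells; roots are the starting points for marking.
interface Cell {
  id: number;
  refs: number[]; // indices of cells this one points to
  marked: boolean;
}

function markAndSweep(heap: Cell[], roots: number[]): Cell[] {
  // Mark phase: walk everything reachable from the roots.
  const stack = [...roots];
  while (stack.length > 0) {
    const cell = heap[stack.pop()!];
    if (cell.marked) continue;
    cell.marked = true;
    stack.push(...cell.refs);
  }
  // Sweep phase: keep marked cells, "free" the rest, then clear marks.
  const survivors = heap.filter((c) => c.marked);
  survivors.forEach((c) => (c.marked = false));
  return survivors;
}

const heap: Cell[] = [
  { id: 0, refs: [1], marked: false },
  { id: 1, refs: [], marked: false },
  { id: 2, refs: [3], marked: false }, // unreachable island: 2 <-> 3
  { id: 3, refs: [2], marked: false },
];
console.log(markAndSweep(heap, [0]).map((c) => c.id)); // [0, 1]; cells 2 and 3 swept
```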

Visualizing a simple mark-and-sweep pass. Source: Giphy.
Case Study: Sound Machine
- Goal: practice keyboard accessibility and audio APIs together.
- Build notes: AI scaffolded the key map; I rewrote focus management and added ARIA live regions for “playing” announcements.
- Reality: desktop is solid; mobile Safari throttles audio without user gestures (see the sketch after this list). I document that right in the pen.
- Next: lightweight visualizer bars (no canvas) and a latency note comparing browsers.
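The overlap fix from the table above boils down to rewinding before playing; a sketch with hypothetical element IDs and key bindings:

```ts
// Rewind-then-play: resetting currentTime lets rapid keypresses retrigger a clip
// instead of overlapping or being ignored. Element IDs here are hypothetical.
const keyToAudioId: Record<string, string> = { q: "kick", w: "snare", e: "hat" };

document.addEventListener("keydown", (event) => {
  const id = keyToAudioId[event.key.toLowerCase()];
  if (!id) return;
  const audio = document.querySelector<HTMLAudioElement>(`#${id}`);
  if (!audio) return;
  audio.currentTime = 0; // rewind so the same clip can fire again immediately
  // play() returns a promise; mobile browsers reject it without a user gesture.
  audio.play().catch(() => {
    console.warn(`Autoplay blocked for "${id}"; needs a tap/click first.`);
  });
});
```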

Keeping keyboard focus in sync with audio. Source: Giphy.
Case Study: Regex Analyzer
- Goal: turn abstract regex into readable feedback.
- Build notes: AI drafted the table; I added copy-to-clipboard with fallback (sketch after this list), highlighted capture groups, and error handling for malformed patterns.
- Reality: complex lookbehinds still trip it up; I flag that in the “Reality” section.
- Next: mini “regex cookbook” with preset patterns and pitfalls, plus a toggle to show the raw JavaScript errors.
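A sketch of those two pieces, with function names that are mine rather than the pen’s: compiling a pattern defensively, and copying with a fallback:

```ts
// Catching malformed patterns and copying results with a fallback.
function tryCompile(source: string, flags: string): RegExp | string {
  try {
    return new RegExp(source, flags);
  } catch (err) {
    // SyntaxError messages are surprisingly readable; surface them verbatim.
    return err instanceof Error ? err.message : "Invalid pattern";
  }
}

async function copyText(text: string): Promise<boolean> {
  if (navigator.clipboard?.writeText) {
    try {
      await navigator.clipboard.writeText(text);
      return true;
    } catch {
      /* fall through to the legacy path */
    }
  }
  // Legacy fallback: select a temporary textarea and use execCommand.
  const ta = document.createElement("textarea");
  ta.value = text;
  document.body.appendChild(ta);
  ta.select();
  const ok = document.execCommand("copy"); // deprecated but still widely supported
  ta.remove();
  return ok;
}

console.log(tryCompile("(?<year>\\d{4})", "g")); // RegExp with a named capture group
console.log(tryCompile("([a-z", "g"));           // error message string
```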
Where AI Helps and Where It Breaks Things
- Helps: bootstrap layouts, remind me of API signatures, draft copy, generate starter tests, propose folder trees.
- Hurts: invents config options, forgets previous decisions, mixes CSS methodologies, and overfits to frameworks I am not using.
- Mitigation: I pin patterns (e.g., “stick to CSS modules,” “use fetch not axios”), paste real errors, and keep scope narrow (one file at a time).

AI as the fast typer; I’m the driver. Source: Giphy.
My Monthly Retro Loop
- AI disclosure: If Copilot/ChatGPT wrote most of it, I label it “Needs manual review.”
- Backlog maintenance: `codepen-ideas.md` tracks upcoming pens + references (MDN, WCAG).
- Graduation list: Pens that earn repos, tests, and CI.
- Stalled list: Pens that stay messy—kept as learning artifacts, not hidden.
- Metric: Did I learn something I can reuse? If yes, I document and keep it. If no, I archive ruthlessly.
How I Decide to Graduate a Pen to a Repo
- Stability: fewer than 2 open “Reality” bugs.
- Accessibility: basic keyboard + screen reader checks pass.
- Testing: at least one smoke test or story that guards core behavior (example after this list).
- Deployment plan: static export works on GitHub Pages/Netlify without hacks.
- Honesty note: README includes AI usage and remaining gaps.
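For scale, this is all “one smoke test” has to mean; a minimal Jest example where `formatDisplay` is a hypothetical stand-in for a pen’s core function:

```ts
// Minimal Jest smoke test: one guard on core behavior before a pen earns a repo.
// formatDisplay is a hypothetical helper, not from an actual repo.
import { formatDisplay } from "./calculator";

describe("calculator smoke test", () => {
  it("renders divide-by-zero as a message instead of crashing", () => {
    expect(formatDisplay(7, 0, "/")).toBe("Cannot divide by zero");
  });

  it("handles a normal operation", () => {
    expect(formatDisplay(6, 3, "/")).toBe("2");
  });
});
```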
Common Failure Modes (and How I Respond)
- Performance dips: If FPS drops in a canvas pen, I cap draw calls or switch to CSS animations.
- State bugs: I log state transitions in a sidebar so I can see bad flows instantly (sketch after this list).
- API shifts: If a library updates, I lock versions in the pen description and plan a follow-up.
- Scope creep: If features pile up, I fork the pen and freeze the original as “v1.”
- Caching ghosts: If assets or service workers stick around, I version URLs and add a “hard refresh” note.
- CSS thrash: When class names balloon, I reset to a minimal spacing/typography palette, then reapply components.
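The state-transition log from the second bullet is tiny in practice; a sketch with illustrative names:

```ts
// State-transition log: every transition goes through one function, so bad flows
// show up immediately in the sidebar/console. Names are illustrative.
type Mode = "idle" | "loading" | "error" | "success";

const transitions: Array<{ from: Mode; to: Mode; reason: string }> = [];
let mode: Mode = "idle";

function setMode(next: Mode, reason: string): void {
  transitions.push({ from: mode, to: next, reason });
  console.table(transitions); // or render into a sidebar element in the pen
  mode = next;
}

setMode("loading", "fetch started");
setMode("error", "HTTP 500"); // an unexpected from/to pair stands out in the table
```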

When bugs appear: slow down, isolate, fix. Source: Giphy.
Debugging & Observability Checklist
- Reproduce on a clean load (disable cache).
- Capture the exact error stack and paste it into the pen description.
- Log network requests + payloads; note CORS/caching layers.
- Add inline status text (loading/error/success) that is visible without DevTools.
- Verify keyboard-only flows and focus order.
- Validate on Chrome + mobile Safari for audio/input pens.
- Record a quick GIF of the failure for future-me.
Tactics for Staying Honest
- Include a “Reality” badge in the pen header.
- Write the failure case first (“Does not support mobile Safari yet”).
- Link to the GitHub issue or TODO that tracks the gap.
- Capture a GIF of the broken behavior so future me remembers why it mattered.
What AI Does for Me (and What It Breaks)
- Useful: starter components, regex helpers, CRUD stubs, copy tweaks, quick diagrams.
- Risky: inventing APIs that don’t exist, mixing patterns mid-pen, forgetting my file paths, and hallucinating service worker settings.
- Guardrails: I paste real errors back, constrain scope, and rewrite any security-sensitive code by hand.

AI as pair: fast, but needs direction. Source: Giphy.
Prompt Recipes I Actually Use
- “Give me a minimal folder structure for React in a single CodePen. Keep styles in one CSS file. No TypeScript.”
- “Write one React component that renders X behavior. Do not include build config.”
- “Here is the error log. Fix only what the error mentions. Do not refactor the rest.”
- “Rewrite this copy to be honest about what is broken and what is stable.”
- “Generate Jest tests for this function; prefer `vitest` style but avoid external mocks.”
- “Summarize what this pen currently does in 3 bullets for a README.”
Takeaways
- CodePen is my controlled gym: fast feedback, scoped experiments, and honest “Reality” notes.
- AI accelerates scaffolds, but I constrain prompts, rewrite sensitive code, and rely on real debugging.
- Pens graduate only when stable, accessible, and tested; noisy pens stay as lessons with documented gaps.
Next Steps
- Convert Garbage Collection, Sound Machine, and Regex Analyzer into repos with Jest + axe checks.
- Add a “skills index” page mapping pens to concepts (interview prep).
- Record short GIFs (Giphy) per pen to show behavior before someone clicks.
References
- MDN Web Docs, “Memory Management,” https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Memory_management
- React Docs, “State: A Component’s Memory,” https://react.dev/learn/state-a-components-memory
- MDN Web Docs, “HTMLMediaElement,” https://developer.mozilla.org/en-US/docs/Web/API/HTMLMediaElement