I Shipped an Accessibility Auditor in 30 Days — Here’s Every Decision I Made

30 days, one developer, zero copied code. How I built an accessibility auditing tool from scratch — the architecture decisions, the real-world chaos, and what shipped.

amejia
· 3 min read

Last month I set a challenge for myself: build and ship a fully functional accessibility auditing tool in 30 days. Not a prototype. Not a landing page. A working product that scans websites, identifies WCAG violations, and gives actionable recommendations — all without copying a single line from existing tools.

I made it. Here’s everything that happened.

Week 1: The Rules Engine

The core of any accessibility tool is its rules engine. This is the thing that knows “an image without alt text is a WCAG 1.1.1 violation” and can find every instance on a page.

I started with the 10 most impactful rules — the ones that catch 80% of real-world accessibility issues. Missing alt text. Empty links. Missing form labels. Color contrast failures. Missing language attribute. Each rule has three parts: a selector (which elements to check), a check function (the logic), and metadata (severity, WCAG criteria, help text).

By Friday I had 10 rules passing tests. The architecture was clean enough that adding new rules meant creating a single file and registering it.
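To make that concrete, here is a minimal sketch of what one of those self-contained rule modules might look like. The names (`Rule`, `imgAlt`, `registerRule`) and the exact shape are my own illustration rather than the tool's actual API; the point is that the selector, the check function, and the metadata all travel together in one file.

```typescript
// Illustrative only: the real rule API may differ.
type Severity = "critical" | "serious" | "moderate" | "minor";

interface RuleMeta {
  id: string;
  wcag: string;     // e.g. "1.1.1"
  severity: Severity;
  help: string;
}

interface Rule {
  selector: string;                 // which elements to check
  check: (el: Element) => boolean;  // true = passes, false = violation
  meta: RuleMeta;
}

// Example rule: images must carry a non-empty alt attribute (WCAG 1.1.1).
export const imgAlt: Rule = {
  selector: "img:not([role='presentation'])",
  check: (el) => (el.getAttribute("alt") ?? "").trim().length > 0,
  meta: {
    id: "img-alt",
    wcag: "1.1.1",
    severity: "critical",
    help: "Images must have an alt attribute that conveys their meaning in context.",
  },
};

// Registering a rule is one line, so a new rule really is one new file plus this call.
const registry: Rule[] = [];
export function registerRule(rule: Rule): void {
  registry.push(rule);
}
registerRule(imgAlt);
```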

Week 2: The Scanner

Rules are useless without something to run them against. I built a Puppeteer-based scanner that loads a page in a headless browser, injects the rules engine, and collects results.
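A stripped-down version of that loop might look like the sketch below. It assumes the rules engine is pre-bundled to a single file and exposes a global `runRules()` inside the page; both of those details are my guesses, not the actual implementation.

```typescript
import puppeteer from "puppeteer";

async function scan(url: string) {
  const browser = await puppeteer.launch({ headless: true });
  try {
    const page = await browser.newPage();
    // Get the initial document; dynamic content is handled by the
    // custom waiting strategy described next.
    await page.goto(url, { waitUntil: "domcontentloaded", timeout: 30_000 });

    // Inject the pre-bundled rules engine into the page context.
    await page.addScriptTag({ path: "./dist/rules-engine.bundle.js" });

    // Run every registered rule in the page and pull back serializable results.
    const violations = await page.evaluate(() => (window as any).runRules());
    return violations;
  } finally {
    await browser.close();
  }
}
```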

The hard part wasn’t the scanning — it was handling real-world websites. Pages with infinite scroll. Pages that require authentication. Pages that load content dynamically after 3 seconds of JavaScript execution. SPAs that never trigger a load event.

I built a waiting strategy that combines multiple signals: DOM mutation observers, network idle detection, and a configurable timeout. It handles 95% of sites without configuration.
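Here is a rough sketch of how those three signals could be combined. The thresholds (500 ms of DOM quiet, a 10-second ceiling) are illustrative, not the tool's actual defaults.

```typescript
import type { Page } from "puppeteer";

async function waitForStablePage(page: Page, maxWaitMs = 10_000): Promise<void> {
  // Signal 1: the DOM has stopped mutating for a quiet period.
  const domQuiet = page
    .evaluate(
      (quietMs) =>
        new Promise<void>((resolve) => {
          let timer = setTimeout(resolve, quietMs);
          const observer = new MutationObserver(() => {
            clearTimeout(timer);
            timer = setTimeout(() => {
              observer.disconnect();
              resolve();
            }, quietMs);
          });
          observer.observe(document.body, { childList: true, subtree: true, attributes: true });
        }),
      500
    )
    .catch(() => {});

  // Signal 2: the network has gone idle.
  const networkQuiet = page.waitForNetworkIdle({ idleTime: 500, timeout: maxWaitMs }).catch(() => {});

  // Signal 3: a hard ceiling so SPAs that never settle cannot hang the scan.
  const hardTimeout = new Promise<void>((resolve) => setTimeout(resolve, maxWaitMs));

  await Promise.race([Promise.all([domQuiet, networkQuiet]), hardTimeout]);
}
```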

Week 3: The Report

Nobody wants to read a JSON blob of violations. The report needed to be clear, actionable, and prioritized. I built a scoring system that weighs violations by severity and impact:

  • Critical: Makes content completely inaccessible (missing alt text on the only image conveying meaning)
  • Serious: Creates significant barriers (form without labels)
  • Moderate: Inconvenient but workaround exists (color contrast at 4.0:1 instead of 4.5:1)
  • Minor: Best practice violations (missing landmark roles)

Each violation includes the element, the rule it broke, why it matters, and exactly how to fix it. Not “add alt text” but “add a descriptive alt attribute to this specific image that conveys its meaning in context.”
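To give a flavor of the data behind the report, here is one way the violation entries and the weighting could be shaped. The field names and the weights are my own illustration, not the tool's actual schema.

```typescript
type Severity = "critical" | "serious" | "moderate" | "minor";

interface Violation {
  ruleId: string;    // e.g. "img-alt"
  wcag: string;      // e.g. "1.1.1"
  severity: Severity;
  selector: string;  // CSS path to the offending element
  why: string;       // why this matters for real users
  fix: string;       // context-specific remediation, not a generic hint
}

// Heavier penalties for barriers that block access entirely.
const WEIGHTS: Record<Severity, number> = {
  critical: 10,
  serious: 5,
  moderate: 2,
  minor: 1,
};

// Collapse a list of violations into a 0-100 score for the report header.
function score(violations: Violation[]): number {
  const penalty = violations.reduce((sum, v) => sum + WEIGHTS[v.severity], 0);
  return Math.max(0, 100 - penalty);
}
```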

Week 4: Polish and Ship

The last week was about edge cases. Error handling for sites that refuse to load. Performance optimization so a scan completes in under 30 seconds for most sites. A clean CLI interface for developers and a web dashboard for non-technical users.
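For the "refuses to load" case, the gist is a hard per-scan budget plus one retry before giving up gracefully. A sketch, assuming the `scan()` function from the earlier sketch and illustrative numbers:

```typescript
import { scan } from "./scanner"; // hypothetical path to the scan() sketched in week 2

async function scanWithBudget(url: string, budgetMs = 30_000, retries = 1) {
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      // Fail the attempt if the whole scan does not finish inside the budget.
      return await Promise.race([
        scan(url),
        new Promise<never>((_, reject) =>
          setTimeout(() => reject(new Error(`scan exceeded ${budgetMs} ms`)), budgetMs)
        ),
      ]);
    } catch (err) {
      if (attempt === retries) {
        // Out of retries: return a structured failure instead of crashing the run.
        return { url, error: err instanceof Error ? err.message : String(err), violations: [] };
      }
    }
  }
}
```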

I also wrote tests. Not after the fact — the tests were there from week 1. Every rule has 15+ test cases covering pass, fail, and edge scenarios. The test suite runs in under 10 seconds and catches regressions instantly.
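For flavor, here is a slice of that per-rule test pattern, assuming a Vitest + jsdom setup and the `imgAlt` rule from the week 1 sketch (the import path is hypothetical; the real suite has far more cases per rule).

```typescript
import { describe, it, expect } from "vitest";
import { imgAlt } from "../rules/img-alt";

// Helper: build an <img> element with the given attributes.
const img = (attrs: Record<string, string> = {}) => {
  const el = document.createElement("img");
  for (const [key, value] of Object.entries(attrs)) el.setAttribute(key, value);
  return el;
};

describe("img-alt rule", () => {
  it("passes when alt text is descriptive", () => {
    expect(imgAlt.check(img({ alt: "Team photo from the 2024 offsite" }))).toBe(true);
  });

  it("fails when alt is missing", () => {
    expect(imgAlt.check(img())).toBe(false);
  });

  it("fails when alt is only whitespace (edge case)", () => {
    expect(imgAlt.check(img({ alt: "   " }))).toBe(false);
  });
});
```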

What I Learned

Architecture decisions on day 1 compound by day 30. The decision to make each rule a self-contained module with its own metadata and tests meant I could add rules 10x faster by week 3.

Real websites are chaos. My rules worked perfectly on clean HTML test cases. Then I scanned a real e-commerce site and discovered inline styles overriding CSS, JavaScript-generated content with no semantic markup, and iframes nested three levels deep.

Shipping is a muscle. The temptation to keep polishing is real. At some point you have to draw the line and say “this is v1” and put it in front of real users. Their feedback is worth more than another week of your own opinions.

What’s Next

The tool currently has 40+ rules and I’m working toward full coverage of the WCAG 2.1 AA standard. Next on the roadmap: CI/CD integration so teams can catch accessibility regressions before they merge, and an API for embedding audits into existing workflows.

Building in public means sharing the messy middle, not just the polished launch. This was the messy middle. Now it’s shipped.