We Scanned 29 AI Product Landing Pages. All 29 Failed.
We asked 29 AI product companies to let us scan their landing pages. We didn’t actually ask — axe-core doesn’t need permission. 29 out of 29 failed WCAG 2.1 AA.
That is not a typo, and it is not the same story as our last cohort scan. When we ran the same tooling against 30 SaaS pricing pages three days ago, nine of them came back clean. Figma, Netlify, Twilio, a handful of others. Not perfect, but at least the top of the distribution held. This time the top of the distribution is still a violation.
Why this cohort, and why now
A reader of the SaaS pricing scan replied with a suggestion: run the same pass on AI product landing pages next. The reader was Ali Afana, founder of Provia, commenting publicly on our dev.to post — credit where credit is due. We said yes in the reply. Architect queued the crawl for the next morning. The plan was to sit on the data until Friday and pair it with something lighter, but the failure rate made it worth publishing early.
29 sites, 0 skipped, 0 blocked. That is unusual on its own. SaaS marketing pages often throw 403s at headless Chromium or serve a Cloudflare challenge — AI product landing pages, it turns out, mostly don’t. They want to be crawled. They want to rank. They open the door.
And then axe-core walks in.
The methodology, kept deliberately boring
Same setup as the two previous cohort runs (SaaS pricing and the AI-generated UI component audit from earlier in the week). axe-core 4.11, headless Chromium, WCAG 2.1 AA plus WCAG 2.2 AA plus the best-practice tag so the landmark rules actually fire. Color contrast disabled because headless browsers cannot resolve computed colors reliably and any number it gave us would be noise.
We scanned the marketing landing page only — not the product, not the docs, not the pricing page if it lived on a separate URL, not the sign-up flow. The single page a person lands on when they click through from a newsletter or an X post. That narrows the axe surface but it is the most honest snapshot of the first impression a disabled visitor gets.
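For readers who want to reproduce the per-site numbers, the reduction step is simple. This sketch assumes the results object has the shape axe-core actually returns (a `violations` array of `{ id, impact, nodes }`); the sample data is illustrative, not taken from the real scan, and the crawl wiring (headless Chromium, page load, axe injection) is left out.

```javascript
// Reduce one site's axe-core results to the numbers reported in this post:
// rule count, flagged node count, and a severity tally.
function summarize(results) {
  const bySeverity = { critical: 0, serious: 0, moderate: 0, minor: 0 };
  let nodeCount = 0;
  for (const v of results.violations) {
    bySeverity[v.impact] += 1;
    nodeCount += v.nodes.length;
  }
  return { ruleCount: results.violations.length, nodeCount, bySeverity };
}

// Illustrative result for one hypothetical landing page.
const sample = {
  violations: [
    { id: 'region', impact: 'moderate', nodes: [{}, {}, {}] },
    { id: 'button-name', impact: 'critical', nodes: [{}] },
  ],
};

console.log(summarize(sample));
// { ruleCount: 2, nodeCount: 4, bySeverity: { critical: 1, serious: 0, moderate: 1, minor: 0 } }
```

Run this once per site and the cohort totals below are just a second pass of summing.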
The numbers
- 29 sites scanned, 29 with at least one violation. Zero clean passes.
- 96 total violations across the cohort.
- 512 DOM nodes flagged.
- Average of 3.3 violations per site.
- 22 critical-severity issues, 27 serious, 41 moderate, 6 minor.
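The severity counts sum to the violation total, and the per-site average follows directly:

```javascript
// Cross-checking the cohort arithmetic reported above.
const severities = { critical: 22, serious: 27, moderate: 41, minor: 6 };
const totalViolations = Object.values(severities).reduce((a, b) => a + b, 0);
const sites = 29;

console.log(totalViolations);                       // 96
console.log((totalViolations / sites).toFixed(1));  // "3.3"
```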
The headline is the 100% failure rate, but the 3.3-violations-per-site average is the quieter story. In the SaaS pricing scan, 30% of the cohort was clean and the violating 70% still averaged something comparable. Here, the whole distribution has shifted right. Every site is contributing to the pile.
Why does 100% feel different from 70%? Because 70% lets you tell a story about outliers. Most SaaS companies are doing fine, some are lagging, the tail needs help. 100% doesn’t give you that escape hatch. It says something about how the category is building.
What is failing, ranked
The top violation rule is region, triggered on 12 sites and covering 206 DOM nodes. region fires when meaningful content lives outside of any landmark — no <main>, no <nav>, no <section aria-label> wrapping it. It is the rule screen reader users feel most directly, because their jump-to-landmark shortcut is how they move through a page. When content lives in the void, arrow keys are the only way.
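A rough approximation of what the rule checks, to make the failure concrete: every top-level chunk of page content should sit inside a landmark. Real axe logic also honors ARIA roles, labeled `<section>`s, and nesting; this sketch only looks at the tag names of the body's direct children.

```javascript
// Simplified `region` check: flag top-level elements that are not landmarks.
// (Real landmark rules require <section> and <form> to be labeled to count;
// that nuance is skipped here.)
const LANDMARKS = new Set(['header', 'nav', 'main', 'aside', 'footer']);

function contentOutsideLandmarks(topLevelTags) {
  return topLevelTags.filter((tag) => !LANDMARKS.has(tag));
}

// A landing page whose hero and CTA are bare <div>s fails the rule:
console.log(contentOutsideLandmarks(['div', 'nav', 'div', 'footer']));
// [ 'div', 'div' ]
```

The fix is usually a one-line wrap: put the hero and body copy in `<main>` and the rule stops firing.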
Second is heading-order, on 9 sites. This one is almost always a design-system symptom. A marketing page uses an <h3> because the designer wanted a visual weight that happened to match the h3 styling, without an intermediate <h2> above it. Or a product tile grid skips from h2 directly to h4 because h3 is reserved for the hero. Screen readers announce the jump and the outline falls apart. These bugs rarely get noticed by the sighted team that shipped them.
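The logic behind heading-order fits in a few lines: a heading may go at most one level deeper than the heading before it. Given the sequence of heading levels in document order, this sketch returns the positions where the outline skips.

```javascript
// Minimal heading-order check: report indexes where the heading level
// jumps deeper by more than one (e.g. h2 straight to h4).
function headingSkips(levels) {
  const skips = [];
  for (let i = 1; i < levels.length; i++) {
    if (levels[i] > levels[i - 1] + 1) skips.push(i);
  }
  return skips;
}

// The product-tile pattern described above: h2 jumps straight to h4.
console.log(headingSkips([1, 2, 4, 4, 2, 3])); // [ 2 ]
```

Going shallower (h4 back up to h2) is fine; only the downward jumps break the outline.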
Then the pair that usually moves together: button-name (critical, 8 sites) and link-name (serious, 8 sites, 129 nodes). These are icon-only controls with no accessible name. Hamburger menus, social icons in the footer, a close-X on a cookie banner, a GitHub glyph in the nav. Easy to ship, hard to catch in a visual review, trivial for axe to find. The 129 nodes under link-name tell you this is a template-level issue on a small number of sites, not a hundred sites each with one broken icon.
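What "no accessible name" means in practice: the accessible-name computation finds no text alternative from any source. The real algorithm (the accname spec) consults more sources in a defined priority order; this simplified predicate covers the four fixes teams actually reach for.

```javascript
// Simplified stand-in for the accessible-name check behind button-name
// and link-name. The element here is a plain object, not a DOM node.
function hasAccessibleName(el) {
  return Boolean(
    (el.textContent && el.textContent.trim()) ||
    el.ariaLabel ||
    el.ariaLabelledby ||
    el.title
  );
}

// The footer GitHub glyph with no text fails; adding aria-label fixes it.
console.log(hasAccessibleName({ textContent: '' }));                      // false
console.log(hasAccessibleName({ textContent: '', ariaLabel: 'GitHub' })); // true
```

This is also why the fix is template-level: add `aria-label` to the one icon-link component and all 129 flagged nodes clear at once.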
And then the one that surprised us: target-size, a WCAG 2.2 rule, already hitting four landing pages. target-size requires interactive targets to be at least 24x24 CSS pixels. Four sites are already failing a criterion that only became conformance-mandatory with 2.2. If the rest of the cohort upgrades their a11y targets from 2.1 to 2.2 — which most teams are planning for — those four become the leading edge of a longer list.
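The criterion itself is a simple predicate on the rendered hit area. The real success criterion has spacing and inline-text exceptions that this sketch ignores; it checks only the 24-pixel minimum.

```javascript
// WCAG 2.2 target-size (minimum) as a bare predicate: the interactive
// target's rendered box must be at least 24x24 CSS pixels.
const MIN_TARGET = 24;

function meetsTargetSize(rect) {
  return rect.width >= MIN_TARGET && rect.height >= MIN_TARGET;
}

console.log(meetsTargetSize({ width: 20, height: 20 })); // false
console.log(meetsTargetSize({ width: 24, height: 32 })); // true
```

The usual offenders are tightly packed footer links and icon buttons styled to the glyph's size instead of a padded hit area.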
One more footnote that matters. meta-refresh (critical) is still flagging on three production AI landing pages. That rule fires when a <meta http-equiv="refresh"> tag auto-reloads the page, which can disorient screen reader users mid-read. It is not exotic, it has been a WCAG failure since 2.0, and it is still shipping.
Who is where
The two sites with the most violations each logged 7. Writer’s landing page at writer.com produced 7 violations across 81 nodes, including 2 critical and 4 serious. Hugging Face produced 7 violations across 15 nodes, 1 critical. Same rule count, very different surface area: Writer’s higher node count means the failures hit repeated templated elements across the page rather than one-off bugs.
The highest node count in the cohort belongs to Replicate at 90 affected nodes across 5 violations. Together AI is right behind at 88 nodes across 3 violations. Both sites pack their marketing pages with repeated model cards, pricing rows, and navigational chrome. When one template is missing an accessible name, it multiplies hard.
v0, Anyscale, and Lovable round out the top five by violation count — all in the 5-6 range, all with at least one critical issue.
At the other end, five sites came back with exactly 1 violation and 1 affected node each: Perplexity, You.com, Claude, Mistral, and Jasper. Before anyone reads this as a leaderboard, look at those pages. They are sparse. Hero, input box, one CTA. The axe surface is small because there isn’t much surface. That is not the same as “the team is further along on accessibility” — it is “there is less to get wrong.” We would want to scan them again after they add a product tour, a pricing section, and a footer before saying anything stronger than that.
The AI coder tools clustered in the middle in a revealing way. Cursor (2), Replit (2), bolt.new (2), Continue (3), Tabnine (3), GitHub Copilot (4), Codeium (4), Sourcegraph (4). Most of them are tripping on the same 2-3 rules: region, button-name or link-name, heading-order. That consistency across unrelated companies says something about shared Next.js marketing templates, shared component libraries, shared shortcuts.
The thing we keep noticing
A few days ago we wrote about running axe-core on AI-generated UI components. The summary from that audit: the code passes visual review, it passes static checks, but it ships without the semantic structure screen readers need to navigate it. Same two incomplete rules kept surfacing — landmark-one-main and page-has-heading-one. Same category of failure.
This scan has the same shape at a different layer. The landing pages are visually correct. They load fast, they look good on mobile, they probably have decent Lighthouse scores. The semantic structure — the landmarks, the heading outline, the accessible names on the icons — is where the rot lives.
We don’t think this is about AI companies being uniquely careless. The SaaS pricing cohort had a 70% violation rate, which is also not a good number. But the AI product category’s landing pages are built newer, shipped faster, iterated on weekly, and leaning heavily on the same handful of Next.js-plus-design-system template stacks. Less accumulated remediation, less time for a disability-adjacent bug report to walk into the tracker. So the violations compound.
It is the same failure mode as the AI-coder-generated-components story, just one level up. The code your AI writes looks right. The landing page your fast-moving marketing team ships looks right. Both of them skip the layer that screen reader users actually walk on.
Caveats before anyone quotes this
This is a shallow scan by design. Homepage only, no sign-up flows, no pricing pages for most of these sites, no authenticated product surface. axe-core itself catches somewhere in the range of 30-40% of real WCAG issues — manual keyboard testing and real assistive-tech runs find more. We turned color contrast off because headless can’t compute it. A human running NVDA on any of these 29 sites would almost certainly find additional problems we didn’t.
So this is a floor, not a ceiling. Nobody in the cohort gets to say “we passed” based on what we did or didn’t find. Even the five one-violation sites have work axe-core cannot see.
We’re going to keep running cohort scans weekly. Next on the list is AI agent framework landing pages — a smaller cohort, and we already have a hunch about where their regressions live. If there is a cohort you want us to run, same place we heard about this one: the comment section on the previous post. We read them.
For the general-purpose “we have a WCAG failure, what now” question, the Accessibe alternatives guide is probably more useful than another scan post. The EAA compliance checklist is the other one people keep asking about. Both are free.
Get our free accessibility toolkit
We're building a simple accessibility checker for non-developers. Join the waitlist for early access and a free EAA compliance checklist.
No spam. Unsubscribe anytime.