
My Axe-con 2026 Takeaways


I attended nine sessions at Axe-con 2026. Here's what I learned.

The Tab Key Is Your Best Testing Tool

The most powerful accessibility testing tool isn't a browser extension or an AI agent. It's the Tab key.

Greg Gibson (Red Hat) made the point with a permanent demo page he built: it passes automated checks in WAVE, yet it is riddled with keyboard barriers. That gap — tools say it's fine, but a human can't use it — ran through the entire conference.

Six keys cover most tests: Tab, Shift+Tab, Enter, Space, the arrow keys, and Escape.

He ran these tests live on cloudflare.com. It's a well-designed site, but it lacks a skip link, the search button doesn't respond to keyboard input, focus indicators are hidden behind hover styles, and a tab component auto-rotates every 20 seconds with no keyboard-accessible way to stop it. Everyone did their job; nobody's job was testing with a keyboard.

His priorities: start with a skip link (quick win, legally important), then fix core features. A keyboard-inaccessible search button means an entire user group can't discover content.
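A skip link is only a few lines of markup and CSS. A minimal sketch, assuming the main content region has id="main" (the class name is illustrative):

```html
<style>
  /* Visually hidden until focused, so the link appears on the first Tab press */
  .skip-link { position: absolute; left: -9999px; }
  .skip-link:focus { left: 1rem; top: 1rem; background: #fff; padding: 0.5rem 1rem; }
</style>
<!-- First focusable element on the page -->
<a class="skip-link" href="#main">Skip to main content</a>
<!-- ...header and navigation... -->
<main id="main" tabindex="-1">Page content</main>
```

The tabindex="-1" on the landmark lets the link reliably move focus, not just scroll, in older browsers.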

Gibson also explored AI for accessibility testing, prompting chatbots to act as keyboard users. He found them "so agreeable" that results changed with the phrasing of the prompt. His verdict: "I would much rather use my own energy than a data centre's energy."

Watch the full session: "Testing Web Experiences with Your Keyboard" by Greg Gibson

Passion Doesn't Scale

The keyboard session focused on what one person can do. Three case studies showed what happens when you scale to thousands.

Cvent: VPATs First, Training Later

"Accessibility usually starts in quality. Unfortunately, it also usually dies in quality." — Stephen Cutchins

The usual pattern: a sharp QA person finds defects, developers declare them "not in scope," and progress stalls at the individual-contributor level.

Cvent flipped it. Instead of training engineers first, they pushed VPATs into clients' hands within two months. Were they great? No. But clients loved them and began asking for more, and suddenly accessibility brought in revenue. Engineers who had resisted started requesting checklists, because client demand had made accessibility part of the job.

They also learned hard lessons about training. Mandatory courses had high completion rates but low retention, because most trainees weren't assigned accessibility work right away. Their current model: cross-functional task forces, manager-led, with allocated time. Champion groups run on passion and are fragile; task forces with strong leadership support tend to be more sustainable.

Watch the full session: "Small Team, Big Shift" by Stephen Cutchins (Cvent)

Atlassian: Growth Profiles and Specialist Cohorts

Atlassian described a six-year journey from checkbox to culture. Their playbook: make accessibility part of career levels so every designer knows what is expected at each seniority level. Build a specialist community with structured cohorts that now cover most product teams. Measure impact through confidence and manager-reported influence, not attendance.

Watch the full session: "The Accessible Design Specialists Playbook" (Atlassian)

Wolters Kluwer: Structure Over Intentions

Ryan Schoch saw accessibility scaling as a systems issue, not just a lack of knowledge: "Good intentions don't normalize systems. Structure does."

The audits showed similar issues across all divisions: accordions and combo boxes behaved inconsistently, keyboard focus landed unpredictably, and some shared components were bypassed entirely. These weren't isolated defects; they were structural variation. The usual approach adds roles, training, and tools, assuming that fixing the parts will fix the whole. In complex systems, it doesn't.

What moved the needle: encoding accessible interaction expectations directly into design system patterns. Functional HTML prototypes are built early in the design process and tested with screen readers before being shared widely. Designers annotate focus order, keyboard behaviour, and landmarks at handoff. Patterns normalise not just because they exist, but because the process consistently expects them.
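An early functional prototype of this kind can be plain HTML with landmarks and the handoff annotations written as comments. A hypothetical sketch (the component and annotations are illustrative, not Wolters Kluwer's actual patterns):

```html
<body>
  <header>
    <!-- Landmark: banner. Focus order 1: primary navigation -->
    <nav aria-label="Primary">...</nav>
  </header>
  <main>
    <h1>Search results</h1>
    <!-- Focus order 2: accordion trigger. Enter/Space toggles;
         aria-expanded must reflect the open/closed state -->
    <button aria-expanded="false" aria-controls="panel-1">Filters</button>
    <div id="panel-1" hidden>Filter controls go here</div>
  </main>
  <!-- Landmark: contentinfo, last in focus order -->
  <footer>...</footer>
</body>
```

Because it is real HTML, a screen reader can exercise it before any production code exists.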

His advice: build where adoption already exists. Use audits as learning tools, not report cards.

Watch the full session: "Scaling Accessibility in a Complex Enterprise" by Ryan Schoch (Wolters Kluwer)

AI: Massive Opportunity, Modest Results (So Far)

Ed Summers, Head of Accessibility at GitHub and a blind software developer, presented some unsettling data. The 2025 DORA report: 90% of developers use AI, 80% say it boosts productivity. Yet the WebAIM Million shows accessibility violations per page only dropped from 60 to 50 over six years. The Web Almanac's Lighthouse scores: 72% in 2019 to 85% in 2025. Slow improvement, no AI-driven inflection point.

The most striking data came from Microsoft's a11y-LLM-eval project. This project measured how well LLMs create accessible HTML. Without instructions: 10% average WCAG pass rate. With basic guidance: 46%. With detailed instructions: 58%. Custom instructions aren't optional — they're the difference between 10% and 58% compliance. This is why GitHub built accessibility documentation specifically for AI tools.
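Custom instructions are usually just a prose file the assistant reads before generating code. A hypothetical excerpt (the rules below are illustrative; GitHub's own accessibility instructions are linked in the sources):

```markdown
<!-- .github/copilot-instructions.md (hypothetical excerpt) -->
When generating HTML or JSX:
- Every <img> needs meaningful alt text, or alt="" if purely decorative.
- Use native <button> and <a> elements; never a clickable <div>.
- Link text must describe the destination ("View pricing", not "click here").
- Every form input needs an associated <label> or an aria-label.
- Meet WCAG 2.2 AA colour contrast (4.5:1 for body text).
```

The benchmark's point is that rules this short already move models from the 10% baseline toward the 46-58% range.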

GitHub: Continuous AI for Accessibility

GitHub introduced "continuous AI for accessibility" — adding accessibility checks at each step of the development pipeline. Five calls to action, all free to start:

  1. Custom instructions teaching your AI about your design system and accessibility standards
  2. The GitHub Accessibility Scanner — open-source Action that scans with axe-core, creates issues, and assigns them to Copilot for automated fix PRs
  3. Custom agents focused on specific accessibility domains
  4. Process automation with AI-powered triage and labelling
  5. Experimentation — even non-developers can prototype with plain language
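The scanner itself is linked in the sources. As a generic sketch of the same idea, a workflow can run axe against a deployed preview using Deque's CLI (the URL is a placeholder, and this is not the GitHub Accessibility Scanner's actual configuration; the runner is assumed to have a Chrome build the CLI can drive):

```yaml
# .github/workflows/a11y.yml — scan a preview URL on every pull request
name: accessibility-scan
on: [pull_request]
jobs:
  axe:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - name: Run axe-core scan
        # @axe-core/cli drives a headless browser; --exit fails the job on violations
        run: npx @axe-core/cli https://preview.example.com --exit
```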

Summers showed a live demo in which he wrote a "click here" link. Copilot's code review, guided by the custom instructions, flagged the issue, fetched the linked page's title, rewrote the sentence with a descriptive link, and offered a one-click commit. Accessibility education built into the pull request workflow.

He was clear about limits: "AI can accelerate what we are doing but there is no substitute for great design, thoughtful design, considering the needs of users, and there is no substitute for inclusive user research."

Watch the full session: "Building Without Barriers on GitHub" by Ed Summers

Deque: Three Layers in the IDE

Harris Schneiderman demonstrated three progressive testing layers, each catching what the previous missed:

  • axe Linter, a free VS Code plugin, catches basic issues as you type: links without text, buttons without labels. An axe-linter.yml file maps your custom components to HTML semantics.
  • The axe MCP Server connects AI coding tools to Deque's testing engine. It launches a headless browser, runs analyses, and reports violations with remediation tips.
  • The axe DevTools Extension adds AI-powered rules for issues standard engines cannot detect, like colour contrast on gradient backgrounds.

Each layer caught things the others missed — no single tool was sufficient.

Watch the full session: "Shift Left Without Shifting Gears" by Harris Schneiderman (Deque)

Thomson Reuters: "Is It Perfect? No."

Thomson Reuters delivered the most straightforward session: scaling automated testing across over 100 engineering organisations and more than 250 applications, including products from brand new to 50 years old.

Their hybrid strategy: automated testing early and often, manual evaluations for major releases. The real lessons:

  • Automation covers 30-60% of issues. That number should temper any AI enthusiasm: the remaining 40-70% requires human judgment.
  • Vendor tools need enterprise wrapping. Axe Developer Hub couldn't report across the whole organisation, so they built custom databases, dashboards, and reports. Every large organisation at the conference did the same.
  • One size doesn't fit all. The stated ideal was testing every pull request. For complex product families, the best coverage came from regression suites on a fixed schedule, with dedicated accessibility test cases included.

Results: 250+ applications integrated, 12,000+ test runs, overall reduction in issues. The 2026 roadmap: mobile testing and AI-powered remediation with explicit human-in-the-lead.

Watch the full session: "Integrating axe for Automated Testing" (Thomson Reuters)

Measuring What Users Actually Experience

Most sessions focused on what teams build. Arizona State University flipped the lens to what users experience.

Victoria Polchinski added one question to their CSAT surveys: "Do you use assistive technologies or devices?" Then she split the results by group. Across over 12 surveys, AT users rated their satisfaction about 5 points lower than non-AT users. That gap is invisible in aggregate scores.
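Once responses carry that one flag, the segmentation is a few lines of code. A minimal sketch (the field names are made up for illustration, not ASU's actual schema):

```javascript
// Difference in mean satisfaction between non-AT and AT users.
// Each response: { score: number, usesAT: boolean }
function satisfactionGap(responses) {
  const mean = (xs) => xs.reduce((sum, r) => sum + r.score, 0) / xs.length;
  const at = responses.filter((r) => r.usesAT);
  const rest = responses.filter((r) => !r.usesAT);
  // Positive gap means non-AT users are more satisfied (the pattern ASU found);
  // a negative gap, as in the admissions product, means AT users are happier.
  return mean(rest) - mean(at);
}
```

Reporting this one number next to the aggregate CSAT score is what makes the gap visible.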

One product reversed the gap entirely. ASU's graduate admissions required no letters of recommendation, no fees, and no lengthy forms. With that simplified process, AT users were actually more satisfied than non-AT users.

That single survey question also became a recruitment tool. Over 800 AT users opted into future research, giving ASU a standing panel for usability testing without a separate recruitment effort.

Watch the full session: "CSAT as a Tool for Accessibility Insights" (Arizona State University)

Accessible Charts: The Deep Technical Challenge

Atlassian's visualisation team showed the engineering depth behind a single component type. When charts are hard to access, 1 in 4 people can't read the data, 1 in 20 can't trust colours, and 1 in 7 can't navigate without a mouse.

Pattern fills make charts accessible to colour-blind users: SVG patterns such as stripes, dots, and crosshatching, applied with fill="url(#pattern-id)", work for all types of colour blindness. The visx library provides ready-made components; Color Brewer offers colour-blind-safe palettes.
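In raw SVG, a pattern fill looks like this (the ids, colours, and geometry are illustrative):

```html
<svg width="200" height="120" role="img" aria-label="Example bar chart">
  <defs>
    <!-- Diagonal stripes: distinguishable regardless of colour perception -->
    <pattern id="stripes" width="6" height="6" patternUnits="userSpaceOnUse"
             patternTransform="rotate(45)">
      <rect width="6" height="6" fill="#1f77b4" />
      <line x1="0" y1="0" x2="0" y2="6" stroke="#fff" stroke-width="2" />
    </pattern>
  </defs>
  <!-- Series one: striped; series two: solid. Pattern + colour, not colour alone -->
  <rect x="20" y="30" width="40" height="80" fill="url(#stripes)" />
  <rect x="90" y="50" width="40" height="60" fill="#ff7f0e" />
</svg>
```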

Focus management requires a surprising choice: don't make every chart element a Tab stop. Treat the chart like one Tab stop. Use Enter to focus in, arrow keys to move between data points, and Escape to exit.
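The single-Tab-stop model reduces to a small keyboard state machine. A sketch of the index logic only, assuming the caller handles Enter (entering the chart), focus() calls, and rendering:

```javascript
// Given the focused data-point index and a key, return the next index,
// or null to signal that focus should leave the chart (Escape).
function nextDataPointIndex(current, key, pointCount) {
  switch (key) {
    case "ArrowRight":
      return Math.min(current + 1, pointCount - 1); // stop at the last point
    case "ArrowLeft":
      return Math.max(current - 1, 0); // stop at the first point
    case "Home":
      return 0;
    case "End":
      return pointCount - 1;
    case "Escape":
      return null; // caller restores focus to the chart container
    default:
      return current; // unhandled keys don't move focus
  }
}
```

Keeping this logic pure makes the keyboard behaviour trivial to unit test, which matters when every chart type must behave identically.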

The session also covered emerging methods: sonification, which conveys data trends through rising or falling pitch; tactile charts for blind users; and AI-generated text summaries of chart patterns. These aren't compliance requirements yet, but they suggest a future where data visualisation goes beyond the visual.

Watch the full session: "Making Platform React Chart Components Accessible" (Atlassian)

Conclusion

  1. Test with your keyboard today. Tab through your site top to bottom. You'll find things that no automated tool catches. Gibson's demo page teaches the basics in fifteen minutes.

  2. Add accessibility custom instructions to your AI tools. Microsoft's benchmarks show the difference between 10% and 58% WCAG compliance comes down to telling the AI what to do.

  3. Put accessibility in career expectations. Growth profiles don't necessarily need top-down rollout. Any manager can add accessibility to how they evaluate and grow their team. If accessibility is part of how people advance, it stops being extracurricular.

  4. Encode expectations into systems, not documents. Training doesn't equal behaviour; documentation doesn't equal adoption. Bake interaction expectations into design system patterns where teams can't bypass them.

  5. Get VPATs into clients' hands early. Cvent's VPATs-first strategy attached revenue to accessibility within two months. Client demand makes programmes unkillable.

  6. Accept "better is better." The top organisations know their gaps. They also celebrate their wins.

The web won't become accessible through automation alone, AI alone, or compliance alone. It'll happen when every developer, designer, and product manager sees accessibility as essential. It's not just someone else's responsibility; it's a basic expectation.

Sources

  1. Testing Web Experiences with Your Keyboard — Greg Gibson
  2. GitHub Accessibility Documentation
  3. GitHub Accessibility Scanner
  4. Microsoft a11y-LLM-eval Report
  5. WebAIM Million — 2025 Accessibility Report
  6. Web Almanac 2025 — Accessibility Chapter
  7. Chartability — Accessible Data Visualization Heuristics
  8. Color Brewer — Colorblind-Safe Palettes
  9. visx Pattern Fills