Axe-con 2026 — Session Summaries
Conference: Axe-con 2026 Digital Accessibility Conference
Date: February 24-25, 2026
Format: Virtual, free — 75+ speakers, multi-track
URL: https://www.deque.com/axe-con/
Session 1: Testing Web Experiences with Your Keyboard
Speaker: Greg Gibson, Principal UX Producer (Accessibility Testing & QA), Red Hat
Session URL: https://www.deque.com/axe-con/sessions/testing-web-experiences-with-your-keyboard-2/
Demo page: https://hellogreg.org/axe26/ (permanent, reusable for own presentations)
Key Thesis
Keyboard testing is underrated, catches high-impact issues that automated tools miss, and requires zero software beyond a browser. It's the lowest barrier entry point into accessibility testing — anyone with a keyboard can do it.
Core Keyboard Testing Keys
| Key | Action |
| --- | --- |
| Tab | Move forward through interactive elements |
| Shift+Tab | Move backward |
| Arrow keys | Scroll page; navigate within complex components (tabs, dropdowns, radio groups) |
| Space | Trigger buttons, toggle checkboxes, open dropdowns |
| Enter/Return | Activate links, trigger buttons |
| Escape | Close modals, reset to default state |
What to Test (Checklist from the Session)
- Skip links — First element on page should skip repeated nav. Test: Tab once from top → skip link should appear → Enter should jump past nav to main content.
- Visible focus — Every interactive element must show a visible focus indicator (recommended: ≥3px solid outline). Flag any element where focus "disappears."
- Obscured focus — Sticky headers/cookie banners can hide focus. Fix: CSS scroll-padding on the page.
- Inline link visibility — Tabbing through text can reveal invisible links (no underline, same color as text). Especially problematic for color-blind users.
- Focus order — Tab order must match logical reading order (not visual left-to-right). Bad example: form coded left-to-right instead of section-by-section.
- Buttons vs. links — Buttons (actions): Space + Enter. Links (navigation): Enter only. Space on a link scrolls the page. Flag if wrong element used.
- Details/Summary — Native expandable disclosure widget. Space or Enter toggles. Less code than custom implementations.
- Tooltips — Must appear on focus, not just hover. Otherwise excludes keyboard + touch users.
- Modals — Close button must be keyboard accessible. Focus should return to trigger button after close. Escape should close.
- Tabs pattern — Tab list must be focusable. Navigate between tabs with arrow keys (like radio buttons). Tab key enters the active panel.
- Scrollable regions — Code blocks and overflow containers must be keyboard focusable to allow arrow key scrolling.
- Auto-playing media — Must have keyboard-accessible pause/stop mechanism.
- Zoom — Page must work at 200% zoom (Cmd+= / Ctrl+=). Cmd+0 resets.
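Several checklist items above come down to a few lines of markup and CSS. A minimal sketch (the selectors, color, and 80px offset are illustrative, not from the session):

```html
<!-- Skip link: first focusable element; visually hidden until focused -->
<a class="skip-link" href="#main">Skip to main content</a>

<style>
  .skip-link {
    position: absolute;
    left: -9999px;
  }
  .skip-link:focus {
    left: 0; /* reveal the link when it receives keyboard focus */
  }

  /* Visible focus: a >=3px solid outline on interactive elements */
  a:focus-visible,
  button:focus-visible,
  input:focus-visible {
    outline: 3px solid #005fcc;
    outline-offset: 2px;
  }

  /* Obscured focus: keep focused elements clear of a sticky header */
  html {
    scroll-padding-top: 80px; /* roughly the sticky header's height */
  }
</style>

<!-- Details/Summary: native disclosure widget, keyboard support for free -->
<details>
  <summary>Shipping options</summary>
  <p>Standard, express, and pickup are available.</p>
</details>
```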
Case Study: Cloudflare Homepage
Gibson live-tested cloudflare.com, demonstrating real-world issues on a well-designed site:
- No skip link — Forces tabbing through dozens of nav links
- Search button non-functional — Space and Enter don't activate it (keyboard user can't search the site)
- Focus disappears in main nav — Hidden behind hover-only states; must hover + tab simultaneously to see focus
- Unstoppable animations — Spinning globe animation can't be paused via keyboard or mouse; ignores prefers-reduced-motion
- Tab interface not keyboard operable — Tabs cycle on 20-second timer with no keyboard control; content changes while reading
- Carousel issues — Items scroll horizontally, unpredictable keyboard behavior, content hidden from keyboard users
- Tile links with invisible focus — Hover state masks focus indicator
- Chat widget — Close button only visible on hover
Prioritization recommendation (from Q&A):
- Skip link — quick win, high legal risk, easy to implement
- Search functionality — high impact, core site function inaccessible
- Navigation focus styles — broad impact across all pages
Quotable Moments
"If everybody building for the web would test their pages by tabbing from top to bottom, the web would be a better place." — Crystal Preston-Watson (cited by Gibson)
"Don't ask how carousel — ask why carousel." — Greg Gibson's colleague at Red Hat
"Technically accessible is the worst kind of accessible." — Audience member in chat, echoed by Gibson
"I would much rather use my own energy than a data center's energy." — Gibson on AI vs. manual keyboard testing
"A page that works well with a keyboard is also likely to work well with a mouse or touch screen."
On Automated Tools vs. Keyboard Testing
- Demo page at hellogreg.org/axe26 passes WAVE with zero errors — yet is deliberately inaccessible
- Automated tools catch a fraction; keyboard testing catches the "show stoppers that keep users from navigating and accomplishing tasks"
- axe DevTools Pro guided testing is useful for hand-holding and reporting (code snippets, grouped issues), but can be slower than raw keyboard testing
On Screen Readers + Keyboard Testing
- Screen readers change keyboard behavior (different modes, different commands)
- Screen readers can access things keyboard-only users can't (landmarks, headings, forms navigation)
- Recommendation: do visual keyboard test first, then screen reader test separately — multitasking both risks missing things
- Deque University has shortcut references for all major screen readers: https://dequeuniversity.com/screenreaders
On AI for Accessibility Testing
- Gibson tested AI chatbots as "personas" doing keyboard testing
- Findings: AI is agreeable — results change based on how you phrase the prompt (positively vs. negatively)
- Useful for discovering unexpected user routes, less useful for single-page keyboard testing
- Not a replacement for hands-on testing
Resources from This Session
- Demo page: https://hellogreg.org/axe26/ — reusable, permanent, customizable
- Keystroke display app: Keystroke Pro (Mac)
- Screen reader shortcuts: https://dequeuniversity.com/screenreaders
- Contact: hellogreg@hey.com
Session 2: Small Team, Big Shift — Building an Accessibility Program at a Mid-Sized SaaS
Speakers:
- Stephen Cutchins, Senior Manager-Accessibility, Cvent (20+ years in accessibility, organizational transformation specialist)
- Evelyn Wightman, Senior Accessibility Specialist, Cvent (role she "slowly created" over 5 years)
- Amanda Bolton, Senior Software Development Engineer in Test, Cvent (QA, personal experience with hearing loss)
Session URL: https://www.deque.com/axe-con/sessions/small-team-big-shift-lessons-learned-from-four-years-of-building-an-accessibility-program-at-a-mid-sized-saas/
Slides PDF: https://www.deque.com/axe-con/wp-content/uploads/2025/11/Small-Team-Big-Shift-Lessons-learned-from-four-years-of-building-an-accessibility-program-at-a-midsized-SaaS_a11y.pdf
Key Thesis
Building a sustainable accessibility program requires shifting it from a cost center to a revenue driver — and the fastest way to do that is getting VPATs into clients' hands early, even before your engineers know how to fix the issues. Client demand makes the program impossible to kill.
About Cvent
- Software company for event & hospitality management + corporate travel
- 5,000+ employees globally, HQ in Tysons, Virginia (near Deque)
- Products: conference registration, hotel booking, badge printing, lead capture, 3D venue modeling
- Direct customers: event planners; end users: attendees
The Typical (Failing) Accessibility Roadmap
Stephen outlined a pattern he's seen across 20+ years:
- Person in QA starts doing accessibility "off the side of their desk"
- They log defects → developers push back ("not in scope", "no client need", "tech stack doesn't support it")
- Program dies at individual contributor level — no executive buy-in
If lucky, some internal interest leads to:
- Checklists, training, program launch
- VPATs (Voluntary Product Accessibility Template — standardized compliance document)
- Client interest → revenue
Critical insight: Up until client interest, accessibility is pure cost. Programs that stay in the "cost" phase die. Revenue only appears when clients engage.
The Cvent Shortcut: VPATs First
Instead of the standard path (train engineers → build program → create VPATs → find clients), Cvent flipped the order:
- Hired Stephen as first "dash-accessibility" role (Senior Product Manager-Accessibility)
- Went straight to VPATs — contracted external auditors, starting with highest-traffic attendee-facing products
- Pushed VPATs to sales teams → into client hands immediately
- Clients loved them → created demand → leadership couldn't say no
- Engineers now requested training and checklists (pull, not push)
- Marketing issued press releases → industry awards followed
Revenue shifted "way, way left" — from ~2 years (traditional path) to ~2 months to get first VPAT in a client's hands.
Stephen told Cvent's CEO in his second week: "I'm going to change your company." Brazen, but it set the tone.
Organizing the Program: Hub-and-Spoke Model
Evolution of accessibility groups at Cvent:
| Group | Model | Composition | Strengths | Weaknesses |
| --- | --- | --- | --- | --- |
| Quality A11y Task Force | Manager-led task force | Rep from each product (mix of volunteers + "voluntolds") | Consistent leadership, allocated time, cross-product coverage | Some members not genuinely interested |
| Accessibility Guild | Hybrid (champion passion + manager sponsorship) | Cross-functional volunteers: QA, dev, design | Passion + consistency via specialist lead (Evelyn) | Champions have day jobs; limited bandwidth |
| UX A11y Champions | Voluntary champion group | Interested designers from UX | High motivation, lots of ideas | Participation drops off; viewed as "extracurricular" |
| Cross-Department Champions Network | Newest, cross-org | Tech + sales, marketing, legal | Broad organizational reach | Still forming |
Key finding: Champions groups vs. task forces:
- Champions bring passion but are fragile — bandwidth limited, viewed as extracurricular, hard to replace when they leave
- Task forces have consistent leadership and allocated time but may lack enthusiasm
- Developers are hardest to get involved — not from lack of interest, but lack of bandwidth for "extras"
Training: What Worked and What Didn't
What they did: Bought Deque University licenses, mandatory training tracks for devs, testers, designers, managers. Intensive, detailed course.
What worked:
- Tapped into existing tech training process → high completion rates
- People actively fixing audit issues got hands-on practice → skills stuck
What didn't work:
- Most trainees had no immediate accessibility work → forgot everything
- Training without process change = wasted effort
Lesson learned: Either:
- Start with a shorter intro-level training (spread awareness), OR
- Pair technical training with required baseline checks per role ("here is the checklist for your role at this step in your SDLC")
"If it's not required, you have to rely on people wanting to do it AND having a manager who gives them time. Having both on the same team is a treat. But you can't build an accessible platform that way."
Scaling Strategy: Framework First
Started accessibility efforts at the framework level — buttons, tables, forms, base components.
Logic: If the building blocks are accessible, everything built from them gets a head start. Individual feature teams can then use accessible components instead of solving accessibility from scratch.
Change Management: Hard Lessons
The struggle: Accessibility experts know what needs to change but lack authority and change management skills. Change management people have the skills but don't know accessibility.
"There's nobody who knows what needs to change AND knows how to make it happen AND is supposed to do that as part of their role."
What didn't work: Evelyn (individual contributor) spent a year pushing out mandatory developer accessibility tests. Huge slog. Discovered 75% of the way through that a Change Management Team existed that could have helped.
What works better: Collaboration model:
- Accessibility expert recommends the change
- A manager owns it as a project and drives it to completion using their network and authority
- Example: Quality manager Pratik took on adding accessibility to existing team scorecards → dev teams now get dinged for leaving accessibility bugs open too long
Build vs. Buy Decisions
| Area | Cvent's Approach | Why |
| --- | --- | --- |
| VPATs | External auditors (for now) | Credibility with clients; lacked internal expertise initially |
| Testing tools | JAWS licenses + open axe library (custom wrappers) | Company culture prefers building; evaluating paid options |
| Issue tracking | Custom JIRA fields + reporting | If issues aren't in JIRA, "they don't exist and nobody looks at them" |
| Training (generic) | Off-the-shelf (Deque University) | Good at all levels of depth |
| Training (advanced) | Built in-house | Screen reader testing needs instructor feedback; Cvent-specific processes |
| Guidance docs | Internal "good, better, best" practices | WCAG says what's wrong but is open-ended on how to fix; internal docs standardize solutions (e.g., "2px focus ring fully enclosing the element") |
External Evangelism & Industry Positioning
- Launched public accessibility statement + dedicated email address
- Published "Big Book of Event Accessibility" (cvent.com/bigbook)
- Accessibility booth at Cvent Connect (annual conference, thousands of event planners)
- Sponsored/spoke at CSUN, Axe-con
- Marketing submitted for industry awards → won several (Stephen named "changemaker" by MeetingsNet 2022)
- Used EAA (European Accessibility Act) as leverage for European sales teams
- Used ADA Title II updates for higher ed and nonprofit clients
Strategic insight: Becoming an industry accessibility leader creates external pressure that reinforces the internal program.
Current Priorities / What's Still Missing
- Direct feedback from people with disabilities — Most feedback comes through event planners, not end users. Have an email address but no in-product feedback mechanism. Created an Employee Resource Group (ERG) for employees with disabilities and allies.
- Clear learning paths — Lots of training available but unclear who should do what when. Critical as Cvent acquires companies and onboards entire new teams.
- Broader collaboration beyond a11y groups — Need change management expertise, AI expertise, educational scaffolding. Champions have done everything they can alone; now need to plug into existing organizational systems.
Quotable Moments
"Accessibility usually starts in quality. Unfortunately, it also usually dies in quality." — Stephen Cutchins
"The passionate few is never sustainable." — Stephen Cutchins
"Don't ask how carousel — ask why carousel." (also quoted in Session 1!)
"I'm going to change your company." — Stephen Cutchins to Cvent's CEO, two weeks into the job
"Passion is useful. It is motivating. It is inspiring. But it isn't a long-term strategy. It's fragile." — Amanda Bolton
"If it's not required, you have to rely on people wanting to do it. And having a manager who will give them time for it. Having both on the same team is a treat. But you can't build an accessible platform that way." — Evelyn Wightman
"Sometimes you do just have to do things and learn better as you go." — Evelyn Wightman
"I don't even know what I don't know." — CTO of a federal agency, to Stephen
Key Takeaways (from the speakers)
Stephen:
- If your clients care about accessibility, they are your best advocates. If they don't care yet, it's your job to get them to care.
- Marketing is a very, very good friend to have. Share every win, even small ones. ("We should have done a press release when they posted the job listing for me.")
- Take advantage of legislative updates (EAA, ADA Title II) — even if "we have to for legal reasons" doesn't feel great, it moves the needle.
Evelyn:
- Executive buy-in is necessary, but managers get the work done. Buy-in doesn't trickle down automatically.
- Support your passionate few so they don't burn out while you're still relying on them.
Amanda:
- Getting organized increases your impact — clear instructions, a place to ask questions, recognized experts.
Q&A Highlights
On quantifying accessibility revenue (from Matt):
- Stephen tracks every client mention of "VPAT", "accessibility", "508", "ADA" in Salesforce
- Tracks VPAT downloads (e.g., "350 downloads in 6 months for registration product")
- Tried dollar figures but found count-based metrics more reliable since ~80% of prospects don't convert
On early VPAT transparency creating fear (from Glenda):
- "We didn't put guardrails. It was risky." One product got ~230 defects at once.
- Framed it as: "There are blind people who can't attend events until we fix the critical ones."
- Message was well-received despite initial shock.
On who should own VPATs without dedicated a11y specialists:
- Product management first, technology second. The people who fix it should own it.
Resources from This Session
- Slides: https://www.deque.com/axe-con/wp-content/uploads/2025/11/Small-Team-Big-Shift-Lessons-learned-from-four-years-of-building-an-accessibility-program-at-a-midsized-SaaS_a11y.pdf
- Big Book of Event Accessibility: cvent.com/bigbook
- Contact: webaccessibility@cvent.com
Session 3: Building Without Barriers on GitHub
Speaker: Ed Summers, Head of Accessibility, GitHub (blind software developer, decades of experience)
Session URL: https://www.deque.com/axe-con/sessions/building-without-barriers-on-github/
Track: Organizational Success with Accessibility
Key Thesis
180 million developers build on GitHub, and we're missing a huge opportunity to improve accessibility across the industry using AI. GitHub's Accessibility Team is dogfooding "continuous AI for accessibility" — injecting accessibility expertise into every step of the development lifecycle through custom instructions, automated scanning, and custom agents. The tools are free, open-source, and available today.
State of Accessibility: Four Observations
1. Strongest regulatory framework in decades
| Regulation | Deadline | Scope |
| --- | --- | --- |
| European Accessibility Act (EAA) | June 2025 (live) | Consumer products |
| ADA Title II (US) | 2026-2027 | State/local government, higher ed |
| Accessible Canada Act | 2027-2028 | Federally regulated sectors |
Evidence of real investment: Ed analyzed ~400 job postings on a11yjobs.com (Dec-Jan) — 31% were from state/local government (ADA Title II scope). Huge shout-out to George Hewitt who maintains the site.
2. AI dev tools are ubiquitous
- 2025 DORA report (DevOps Research & Assessment): 90% of 5,000 developers surveyed use AI, 80% say it increases productivity
3. AI has had negligible impact on accessibility (so far)
- WebAIM Million report (2019-2025): Average violations per page dropped from 60 → 50. Trending in the right direction, but no acceleration from AI adoption.
- Web Almanac report (2019-2025): Lighthouse accessibility scores went from 72% → 85%. Slow, steady improvement — no AI-driven inflection point.
- Interesting finding: Websites on .AI domains ranked 3rd best for accessibility (behind only .gov and .edu). Speculation: new AI companies use latest tools/frameworks.
4. We're missing a huge opportunity
- The gap between AI adoption (90%) and accessibility improvement (marginal) means the tools exist but aren't being directed at accessibility. That's what this talk addresses.
Concept: Continuous AI for Accessibility
GitHub Labs coined "continuous AI" — extending CI/CD thinking to leverage AI across the entire software development lifecycle, not just in the editor.
GitHub's Accessibility Team applies this as "continuous AI for accessibility" — injecting accessibility expertise at every development touchpoint: code completion, chat, agent mode, code review, and automation.
GitHub Building Blocks (Quick Reference)
| Concept | What it is |
| --- | --- |
| Repositories | Digital containers for project files, support branching |
| Pull Requests (PRs) | Proposed changes, with discussion/review before merge |
| Issues | Bug/feature tracking, assignable to people or agents |
| Projects | Group issues into sprints/iterations |
GitHub Copilot Features for Accessibility
| Feature | What it does | A11y Application |
| --- | --- | --- |
| Code Completion | Inline code suggestions in editor | Suggests accessible patterns if custom instructions set |
| Copilot Chat | Conversational AI about code/repo | Ask about accessibility of your codebase |
| Copilot Coding Agent | Assign issues to Copilot → async PR creation | Assign 10-20 a11y bugs to Copilot, it creates fix PRs |
| Copilot Code Review | AI reviews every PR with comments + diffs | Catches a11y issues at PR time; just-in-time developer education |
Code Review is the biggest opportunity for a11y professionals — it educates developers at exactly the right moment, at every PR, with specific suggestions they can accept with one click.
Call to Action 1: Custom Instructions for Accessibility
What: Plain-language instructions that modify Copilot's behavior across all features (completions, chat, agent, code review).
Best practices (from Kendall Gasner's guide):
- Tell Copilot about your design system — which components are accessible, how to use them
- Use directive language: "must", "should", "may"
- Be specific about what "good" means for your team
Example — Markdown accessibility instructions:
The instructions define five rules for accessible markdown, flagging:
- Missing or empty alt text on images
- Incorrect heading levels
- Non-descriptive link text (e.g., "click here")
- Emojis used as bullet points or list markers
- Plain language readability improvements
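For Copilot, repository-wide custom instructions conventionally live in a markdown file such as .github/copilot-instructions.md. A sketch of how the five rules might be phrased using the directive language ("must"/"should") recommended above; the wording is illustrative, not the session's actual file:

```markdown
<!-- .github/copilot-instructions.md (illustrative wording) -->
# Markdown accessibility instructions

- All images **must** have meaningful alt text; purely decorative images **must** use empty alt text.
- Headings **must** descend one level at a time (no h2 → h4 jumps).
- Link text **must** describe the destination; phrases like "click here" **should** be rewritten.
- Emojis **must not** be used as bullet points or list markers.
- Prose **should** favor plain language; flag sentences that are hard to read.
```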
Live demo result: Ed submitted a PR with a "click here" link. Copilot's code review:
- Flagged the non-descriptive link text
- Explained why descriptive link text matters
- Fetched the title of the linked page
- Rewrote the sentence with the page title as link text
- One-click "commit" button to accept the fix
Call to Action 2: GitHub's AI-Powered Accessibility Scanner
What: Free, open-source GitHub Action that scans sites for accessibility issues, creates GitHub Issues for each violation, and assigns them to Copilot Coding Agent for automated fix PRs.
How it works:
- Scans using axe-core (no false positives)
- Creates a GitHub Issue per violation, with descriptions optimized for both humans and AI
- Copilot Coding Agent reads the issue, creates a fix PR in the background
- Human reviews, modifies, and merges
Key features:
- Authentication support (login-protected pages via credentials or Playwright session)
- Screenshot capture for visual documentation
- Result caching across runs
- Public preview status — "cannot guarantee fully accessible code suggestions"
Link: gh.io/a11y-scanner (also: https://github.com/github/accessibility-scanner)
"Custom instructions + scanner = better together" — the scanner finds issues, custom instructions guide the fixes.
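Wired into CI, the scanner would look something like the workflow below. This is a sketch only: the action reference, input names, and permissions are assumptions, and gh.io/a11y-scanner documents the real usage:

```yaml
# .github/workflows/a11y-scan.yml -- illustrative sketch; the action
# ref and the "url" input are assumptions, not documented values.
name: Accessibility scan
on:
  schedule:
    - cron: "0 6 * * 1"   # weekly scan
  workflow_dispatch:       # allow manual runs
jobs:
  scan:
    runs-on: ubuntu-latest
    permissions:
      issues: write        # the scanner files an Issue per violation
    steps:
      - uses: github/accessibility-scanner@main  # hypothetical ref
        with:
          url: https://example.com               # site to scan (assumed input name)
```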
Call to Action 3: Build Custom Agents for Accessibility
What: Custom agents are tightly focused on a specific domain problem (vs. custom instructions which cover many topics and can get "watered down").
When to create a custom agent:
- Encoding tight, specific domain knowledge about a particular problem
- Goal-directed work with a definition of "done" and a way to measure it
- Multistep workflows with strictly limited tools (security benefit)
Example — Markdown accessibility agent:
Goes beyond custom instructions: "Review all markdown in my repo and create a PR that makes accessibility improvements." Uses linters and tools to measure improvements, not just suggest them.
Getting started guide: gh.io/a11y-docs → "Getting Started with Custom Agents for Accessibility" (authored by Roberto Perez from GitHub's Accessibility Team)
Call to Action 4: Automate Your Accessibility Processes
Recipe for automating a11y workflows on GitHub:
- Dedicated repository for each accessibility process (audits, user feedback, compliance, etc.)
- Issue templates with required fields to ensure right information captured
- Automations (AI or deterministic):
- Auto-add issues to projects
- Auto-assign labels (AI useful for severity/priority assessment)
- Triage incoming issues (AI checks if necessary info is provided)
- Prompt people for next steps in the process
- Assign right people or agents to move work forward
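The "issue templates with required fields" step of the recipe above maps onto GitHub's issue forms. A sketch of an audit-intake template; the file name, field ids, and labels are illustrative, not GitHub's actual template:

```yaml
# .github/ISSUE_TEMPLATE/a11y-audit.yml -- sketch using GitHub's
# issue forms schema; field names and labels are illustrative.
name: Accessibility audit finding
description: Report an issue found during an accessibility audit
labels: ["accessibility"]
body:
  - type: input
    id: criterion
    attributes:
      label: WCAG success criterion
      placeholder: e.g. 2.4.7 Focus Visible
    validations:
      required: true
  - type: dropdown
    id: severity
    attributes:
      label: Severity
      options:
        - critical
        - serious
        - moderate
        - minor
    validations:
      required: true
  - type: textarea
    id: steps
    attributes:
      label: Steps to reproduce
    validations:
      required: true
```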
GitHub's own examples:
- Single repo tracking all accessibility bugs/features across all products
- User feedback process (led by Carrie Fisher): 7-step workflow from intake → resolution → user confirmation → learnings incorporated. Blog post coming soon on github.com/blog.
- Exception request process for compliance program
Call to Action 5: Block Time to Experiment
Ed's personal plea: block time in your schedule to experiment with AI, even if you're not a developer. Share what you learn — post on LinkedIn, contribute to the awesome-copilot repo, tag Ed.
Example — Non-developer contribution: Janice Rymer (Program Manager, not a developer) used Copilot Chat to prototype an addition to GitHub's accessibility governance framework via "spec-driven development" — describing what she wanted in plain language, iterating with Copilot. It was then handed to a dev team for production implementation. Blog post on github.com/blog (~mid 2025).
Microsoft's a11y-LLM-eval Report (Key Data Point)
What: Automated accessibility benchmarking by Michael Fairchild (Microsoft) — tests how well LLMs generate accessible HTML with and without custom instructions.
Stunning finding:
| Condition | Average WCAG Pass Rate |
| --- | --- |
| No instructions | 10% |
| Basic accessibility guidance | 46% (+36.9 pp) |
| Detailed instructions | 58% (+48.4 pp) |
- GPT-5.2 jumped from 41% → 95% with detailed instructions
- Proves custom instructions are not optional — they're the difference between 10% and 58%+ compliance
Link: https://microsoft.github.io/a11y-llm-eval-report/
Q&A Highlights
On preventing developers from blindly merging Copilot PRs:
- Trend toward spec-driven development — spend more time defining "what good means" upfront
- Implement checks on PRs: unit tests, integration tests, linters, axe-core scans — "your automations, your checks will catch those beforehand"
- "Everybody makes mistakes, AI makes mistakes, humans make mistakes — a multifaceted approach for quality"
- AI accelerates but doesn't replace: no substitute for thoughtful design, inclusive user research (GitHub uses Fable for studies with developers with disabilities)
On custom instructions + agent mode:
- Custom instructions apply to out-of-the-box Copilot Coding Agent (confirmed)
- Whether they apply inside custom agents — Ed wasn't certain, invited community testing
On non-developers using these tools:
- "If you can express what you want in plain language... that is a superpower"
- Non-devs can prototype and prove value, then hand to dev team for production
- Janice Rymer's governance framework is the proof point
On where to start (1 hour after Axe-con):
- Sign up for free GitHub account
- GitHub Copilot has a free tier
- Create a repo, go to gh.io/a11y-docs
- Start with custom instructions — "within an hour or two, you're going to have a lot of fun"
- Check Michael Fairchild's a11y-LLM-eval for instruction examples
Quotable Moments
"We are currently experiencing the strongest accessibility regulatory framework that I've seen in several decades." — Ed Summers
"90% of developers are using AI... and we have not seen acceleration [in accessibility]. We are missing a huge opportunity." — Ed Summers, synthesizing DORA + WebAIM data
"Custom instructions + scanner = better together."
"AI can accelerate what we are doing but there is no substitute for great design, thoughtful design, considering the needs of users, and there is no substitute for inclusive user research." — Ed Summers
"If you can express what you want in plain language, the ability to articulate what you want, what good means, what done means, in plain language — that is a superpower." — Ed Summers
"Is it perfect? No. But it's emerging technology and your experience with it is going to help improve it."
Resources from This Session
- GitHub A11y Documentation: https://accessibility.github.com/documentation (gh.io/a11y-docs)
- Accessibility Scanner: https://github.com/github/accessibility-scanner (gh.io/a11y-scanner)
- Microsoft a11y-LLM-eval: https://microsoft.github.io/a11y-llm-eval-report/
- awesome-copilot repo: Contains open-source custom instructions examples including markdown a11y
- a11yjobs.com: Job board for accessibility roles (by George Hewitt)
- DORA Report 2025: DevOps Research & Assessment, AI adoption data
- WebAIM Million 2025: https://webaim.org/projects/million/
- Web Almanac 2025: Accessibility chapter
- Fable: Inclusive user research platform used by GitHub
- Upcoming blog posts: Carrie Fisher on user feedback process, Janice Rymer on governance framework (github.com/blog)
Session 4: Shift Left Without Shifting Gears — Accessibility in Your Existing Workflow
Speaker: Harris Schneiderman, Director of Product Management, Deque Systems (~13 years at Deque, ~10 of them as a software engineer before moving to product management; focused throughout on building accessibility tools for dev teams)
Moderator: Liz Moore
Session URL: https://www.deque.com/axe-con/sessions/shift-left-without-shifting-gears-accessibility-in-your-existing-workflow/
Slides PDF: https://www.deque.com/axe-con/wp-content/uploads/2025/11/Shift-Left-Without-Shifting-Gears_-Accessibility-in-Your-Existing-Workflow_a11y.pdf (8 slides — mostly a live demo session)
YouTube: https://www.youtube.com/watch?v=8vmHUgqtndo
Key Thesis
You can bake accessibility testing into every stage of your existing development workflow — from coding in the IDE to browser testing — without switching tools, slowing down, or becoming an accessibility expert. A three-layer testing approach (linter → MCP Server → browser extension) progressively catches more sophisticated issues while keeping the developer in their flow state.
Demo Application
App: "Casey Jones Railway Co" — a fictional train booking website (React + TypeScript + Vite + Tailwind CSS). Named after the real-life train conductor Casey Jones ("famous for always arriving on time"), and also a Grateful Dead reference.
Task: GitHub ticket to add a traveler selection UI — two passenger counter rows (adults defaulting to 1, children defaulting to 0) with increment/decrement buttons. Adults minimum 1, children minimum 0.
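The behavior the ticket asks for (a floor of 1 for adults and 0 for children, plus accessible names for the icon-only +/- buttons) can be sketched as plain logic. The names below are hypothetical; the demo's actual component code was not shown:

```typescript
// Sketch of the traveler-counter logic (names are illustrative).
interface CounterConfig {
  label: string; // e.g. "Adults" -- also used in the buttons' aria-labels
  min: number;   // adults: 1, children: 0
}

// Clamp so decrement can never drop a row below its minimum.
function adjustCount(current: number, delta: number, config: CounterConfig): number {
  return Math.max(config.min, current + delta);
}

// Icon-only +/- buttons need accessible names (e.g. aria-label).
function buttonLabel(action: "add" | "remove", config: CounterConfig): string {
  return `${action === "add" ? "Add" : "Remove"} ${config.label.toLowerCase()}`;
}

const adults: CounterConfig = { label: "Adults", min: 1 };
console.log(adjustCount(1, -1, adults)); // stays at 1: adults minimum is 1
console.log(buttonLabel("add", adults)); // "Add adults"
```

The clamp is what a keyboard tester would exercise in Layer 3: repeatedly pressing Space on the decrement button must never take adults below 1.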
Three-Layer Testing Approach
Layer 1: axe Linter (VS Code Plugin) — FREE
What it is: Accessibility linter for VS Code. Works like a spell checker — shows red squiggly lines under code with a11y issues as you type. Completely free.
What it catches (static analysis):
- Links without discernible text (e.g., anchor with aria-hidden="true")
- Buttons without discernible text (icon-only buttons missing labels)
- Images without alt text (including SVG components)
Key feature — Component mapping via axe-linter.yml:
Declare how custom React (or other framework) components map to HTML semantics. Example: telling axe Linter that <SearchIcon> renders as an <img> and its label prop maps to aria-label. The linter then understands your design system, not just raw HTML.
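Conceptually, the mapping might look like the YAML below. The key names are assumptions for illustration only; Deque's axe Linter documentation defines the real axe-linter.yml schema:

```yaml
# axe-linter.yml -- the idea of component mapping as described in the
# session. Key names here are assumptions; consult Deque's axe Linter
# docs for the actual schema.
global-components:
  # Treat <SearchIcon> as an <img>, with its `label` prop
  # standing in for aria-label
  SearchIcon:
    element: img
    attributes:
      label: aria-label
```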
Broader than eslint-plugin-jsx-a11y: Supports React Native, Liquid, JS — not just JSX. Rule sets for JSX are "quite similar" but axe Linter has broader language support and the component mapping system.
Linter Server (premium): REST endpoint version for pre-commit hooks and CI checks via GitHub Actions. Returns JSON report of all violations.
Limitations: Static analysis only — can't detect color contrast from runtime CSS, can't test fully rendered apps.
Layer 2: axe MCP Server
What it is: MCP (Model Context Protocol) server that connects coding agents (Copilot, Cursor, etc.) to Deque's accessibility testing engine. Runs inside the IDE — no browser context switching.
How it works:
- Configured via .vscode/mcp.json — Docker container (deque-systems/axe-mcp-server), API key stored securely
- API key authenticates with axe account portal → fetches org-wide config (testing standard, e.g., WCAG 2.2 AA; best practices; axe-core version)
- Two tools: #analyze and #remediate (hashtag notation helps coding agents recognize MCP tool calls)
- Analyze: Spins up headless browser with axe DevTools Extension pre-installed, runs full analysis on rendered page, returns axe-core JSON
- Remediate: For each violation, provides description, paragraph-form remediation guidance, and expected output HTML. Coding agent translates the fix into framework code.
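For orientation, a Docker-based entry in .vscode/mcp.json generally follows VS Code's MCP configuration shape; the server name, image reference, and environment variable below are assumptions, not Deque's documented setup:

```json
{
  "inputs": [
    { "type": "promptString", "id": "axe-api-key", "description": "axe API key", "password": true }
  ],
  "servers": {
    "axe": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "deque-systems/axe-mcp-server"],
      "env": { "AXE_API_KEY": "${input:axe-api-key}" }
    }
  }
}
```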
Design philosophy — symbiotic relationship:
"We stay in our lane. Our lane is accessibility testing. We're the best at finding and fixing issues."
Deque provides a11y expertise + remediation guidance. Coding agents (Copilot, Cursor) provide framework translation. Neither needs to learn the other's domain.
Verification loop via Copilot instructions:
Add custom instructions: "After applying fixes, you must rerun #analyze to verify all issues are resolved. Confirm zero violations before considering the task complete." Creates an iterative loop — if AI makes a mistake, it catches it on re-scan and keeps trying.
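In practice these instructions can live in a repository instructions file; the path below is a common Copilot convention, not something the session specified:

```markdown
<!-- .github/copilot-instructions.md (path is an assumption) -->
After applying accessibility fixes, you must rerun #analyze to verify
all issues are resolved. Confirm zero violations before considering
the task complete.
```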
Issues found and auto-fixed in demo:
| Issue | What happened |
| --- | --- |
| Input without label | Departure date field had a separate <label> but the input wasn't associated. Copilot discovered the TextInput component had a built-in label prop — it knew the codebase better than the developer |
| Color contrast | Tailwind CSS class with insufficient contrast; Copilot adjusted it to meet the 4.5:1 ratio |
| Button without label | Reset button (icon-only) missing aria-label |
| Missing landmark | Added <header> around hero banner |
After remediation: re-analysis returned zero violations.
Roadmap: Advanced rules and automated IGTs coming to axe MCP Server (before next Axe-con), so devs can run keyboard testing and guided tests from within the IDE.
Layer 3: axe DevTools Browser Extension — PREMIUM
What it adds beyond axe-core:
| Feature | Description |
| --- | --- |
| Advanced Rules | Use AI + automation to detect issues standard rules engines can't (has browser-level access: screenshots, longer-running tasks) |
| Intelligent Guided Tests (IGTs) | "TurboTax for accessibility testing" — questionnaire-style yes/no questions about your app |
| Automated IGTs (new) | AI answers the IGT questions automatically — "sit back and sip your coffee" |
Advanced Rules findings:
- Heading markup not used on headings — a <div> styled to look like a heading ("Plan Your Trip") was not using heading semantics. Fix: <div> → <h2>
- Color contrast on gradient backgrounds — Text over gradient header, contrast ratio ranged from 3.75:1 to 3.89:1 (threshold: 4.5:1). Standard axe-core can't test gradients reliably because of hundreds of foreground-background combinations. Advanced Rules handles this.
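The 4.5:1 threshold comes from WCAG's relative-luminance formula. A minimal TypeScript sketch of the computation (sample colors are illustrative):

```typescript
// WCAG 2.x contrast ratio between two sRGB colors (0-255 channels).
type RGB = [number, number, number];

function relativeLuminance([r, g, b]: RGB): number {
  // Linearize each channel, then weight per the WCAG definition.
  const [R, G, B] = [r, g, b].map((c) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : ((s + 0.055) / 1.055) ** 2.4;
  });
  return 0.2126 * R + 0.7152 * G + 0.0722 * B;
}

function contrastRatio(fg: RGB, bg: RGB): number {
  const [hi, lo] = [relativeLuminance(fg), relativeLuminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

// Black on white is the maximum possible ratio, 21:1.
console.log(contrastRatio([0, 0, 0], [255, 255, 255]).toFixed(1)); // "21.0"
```

Testing text over a gradient means running this check against many sampled background pixels, which is why standard rules engines struggle with gradients.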
Automated IGT findings:
- Selected only the 4 newly-added interactive elements (minus/plus buttons) — "stay focused on what you touched in your branch"
- AI also detected "Search Trains" was a <div> that should be a <button>
- All 4 buttons failed the name check: aria-label="decrease" and aria-label="increase" were insufficient because there are two of each. Screen reader users would hear "decrease, decrease" with no way to distinguish them.
- AI reasoning was transparent: explained that in full page context, multiple decrement buttons need more specific names. Suggested "decrease number of adults" / "decrease number of children"
- How the AI was trained: Deque shadowed subject matter experts doing manual assessments. Experts always understand full page context before testing individual elements. AI models do the same.
Fix: Updated PassengerCounter component to use template strings: `decrease number of ${label}`. Re-ran: zero automatic issues, zero IGT failures.
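The fix reduces to deriving the accessible name from the row's label rather than a bare verb. A minimal TypeScript sketch (the function and type names here are mine, not the actual PassengerCounter internals):

```typescript
// Build distinguishing accessible names for counter buttons, as the
// Automated IGT suggested: include the row label, not just the verb.
type Direction = "increase" | "decrease";

function counterButtonLabel(direction: Direction, rowLabel: string): string {
  return `${direction} number of ${rowLabel}`;
}

console.log(counterButtonLabel("decrease", "adults"));   // "decrease number of adults"
console.log(counterButtonLabel("increase", "children")); // "increase number of children"
```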
The Complete Demo Workflow
- Pick up GitHub ticket — enhancement to add traveler selection
- Review existing code — axe Linter immediately flags pre-existing issues
- Fix linter issues while browsing (campsite rule: "leave it cleaner than you found it"):
- Removed aria-hidden="true" from a nav link
- Added a label prop to the SearchIcon component
- Added alt text to hero banner image
- Write feature code — import PassengerCounter, wire up state, implement increment/decrement with min values
- Run axe MCP Server from IDE — prompt: "Analyze localhost:5173 for accessibility issues. Remediate any violations found."
- Review auto-fixes — accept, reject, or modify each change
- Verification scan — re-run analyze, confirm zero violations
- Switch to browser — axe DevTools Extension, full scan with Advanced Rules
- Run Automated IGT on just the new elements (scoped to branch changes)
- Fix remaining issues from Advanced Rules and IGT
- Final verification — zero issues across all layers
- Commit and create PR
Issue Tally Across Layers
~11+ issues found total — each layer caught things the previous one missed:
- axe Linter (static): ~3 basic issues (missing labels, alt text)
- axe MCP Server (rendered): ~4 issues (color contrast, input labels, landmarks, button labels)
- Advanced Rules: ~2 issues (heading semantics on styled divs, gradient contrast)
- Automated IGT: ~4+ issues (context-dependent accessible names, wrong element roles)
Quotable Moments
"Casey Jones was famous for always arriving on time. And today we're going to make sure our feature arrives on time. But it doesn't leave anybody behind." — Harris Schneiderman
"Linters are essentially like spell checkers for your code." — Harris Schneiderman
"We stay in our lane. Our lane is accessibility testing. We're the best at finding and fixing issues." — Harris Schneiderman on the division of labor between Deque's tools and coding agents
"I'm happy, I'm lazy, so I'm happy I don't have to do anything." — Harris Schneiderman on staying in the IDE while MCP Server tests in the background
"It's a tough battle trying to be better at things than ChatGPT and Claude, but I think we have the expertise baked into our models that helps us outshine them." — Harris Schneiderman
"Don't just focus on the negative." — Harris on Automated IGT showing reasoning for both passes and failures
"Always review the changes that it's making on your behalf." — Harris Schneiderman
Q&A Highlights
On design system compatibility with AI fixes:
- axe MCP Server doesn't compare against Figma designs yet ("maybe we'll have something like that coming out soon")
- You can add design system context to Copilot instructions (e.g., "choose from our color palette when fixing contrast")
- Dylan (Deque colleague) has a proof of concept where MCP Server chooses colors from the available design palette
- Ultimate catch-all: always review AI-generated changes
On axe MCP Server vs. asking Claude/ChatGPT directly:
- General LLMs are "pretty knowledgeable" in accessibility basics
- Deque's tools excel at: Advanced Rules, Guided Tests, context-aware analysis
- MCP Server runs the actual axe DevTools Extension in a real browser, not just source code analysis
- Remediation guidance trained on decades of Deque testing expertise — "higher quality, more likely to not result in additional accessibility issues"
- Harris welcomed direct comparison: "I welcome you to try it and give us feedback"
On listing all linter errors at once:
- VS Code shows inline red squiggles, but the underlying axe Linter Server returns full JSON report of all violations
- Can be set up as pre-commit hooks or CI checks via GitHub Actions
Resources from This Session
- axe Linter VS Code Plugin: Free, VS Code Extension Marketplace
- axe Linter Server: REST endpoint for CI/CD integration (premium)
- axe MCP Server: Docker container deque-systems/axe-mcp-server (premium, requires axe DevTools for Web subscription)
- axe DevTools Browser Extension: Premium, Advanced Rules + Automated IGTs
- Slides PDF: https://www.deque.com/axe-con/wp-content/uploads/2025/11/Shift-Left-Without-Shifting-Gears_-Accessibility-in-Your-Existing-Workflow_a11y.pdf
- Deque Community Discord: For continued conversation
Session 5: The Accessible Design Specialist's Playbook
Speaker: Pawel Wodkowski, Lead Designer / Design System Accessibility, Atlassian (based in Sydney; 9.5 years at Atlassian; also a Zumba instructor)
Moderator: Catherine Jordan (Deque)
Session URL: https://www.deque.com/axe-con/sessions/the-accessible-design-specialists-playbook/
Slides PDF: https://www.deque.com/axe-con/wp-content/uploads/2025/11/The-Accessible-Design-Specialists-Playbook_a11y.pdf
Transcript: https://www.deque.com/axe-con/wp-content/uploads/2025/11/The_Accessible_Design_Specialists_Playbook.txt
Track: Design
Key Thesis
Accessibility scales through building local community, not through relying on a central expert. Atlassian grew from 5 accessibility design specialists to ~60 across four cohorts by embedding expectations into growth profiles, creating a structured 12-week specialist program, and measuring what matters (confidence + impact, not attendance). The playbook: embed expectations → build local capability → formalize and measure.
Context: Atlassian's Accessibility Journey
- Company: 12,000+ employees, products include Jira, Confluence, Trello
- Starting point (2019): First accessibility project in Jira Data Center. Pawel thought it would take a year.
- Reality: Six years later, still evolving. The journey went from "checkbox" (ticking requirements without changing behavior) to "culture" (shared habits that make accessible outcomes the default).
- Big disclaimer: Pawel's personal views, not official Atlassian position.
Three-Chapter Playbook
Chapter 1: Adding Accessibility to Design Growth Profiles
What: Embedded accessibility knowledge expectations across all designer levels, from Grad Designer through Senior Principal Designer.
Level-by-level expectations:
| Level | Accessibility Expectations |
| --- | --- |
| Grad Designer | Understand accessible design fundamentals, WCAG P.O.U.R. principles, 5 major disability cohorts. Understand accessible use of color. |
| Designer | Understand accessible form design, use of images. Basic understanding of accessible information architecture, text resizing, basic keyboard interaction (focus order, reading order). Consistently use available accessible design resources. |
| Senior Designer | Strong understanding of motion design principles and impact on UX. Implement complex architectures, create accessible interfaces, optimize navigation systems. Participate in accessibility design reviews. |
| Senior/Lead (Specialist) | Set quality bar by producing accessible design specs. Give accessibility feedback and review design specs. Be the go-to person for a11y questions on the team. Lead local a11y workshops. Cooperate on global a11y initiatives. |
Why it matters: Growth profiles make accessibility a career expectation, not an optional interest. Every designer knows what's expected at their level.
Core message: "Accessibility is a team sport."
Specialization Program vs. Mentorship Program (Circle):
| Aspect | Specialization Program | Mentorship Circle |
| --- | --- | --- |
| Motivation | Company needs + participant growth interest | Peer-to-peer, general interest |
| Curriculum | Based on official growth profiles | Built by mentors and mentees together |
| Assessment | Self-assessment + formal graduation | Informal, self-assessed |
| Expectations | Formal expectations from growth profiles after graduation | None formal |
Program structure — 7 live sessions over ~12 weeks:
- Kick-off
- Being a specialist
- Annotations Workshop
- Accessible design review
- Giving feedback on a11y
- Discussion Panel, Q&A
- Graduation
Growth timeline:
| Date | Event | Cumulative Specialists |
| --- | --- | --- |
| Oct 2023 | New growth profiles + specializations | 5 |
| Mar-Jun 2024 | First cohort | 27 (+22) |
| Oct-Dec 2024 | Second cohort | 61 (+34) |
| Oct-Dec 2025 | Third cohort | 72 (+11) |
Current ratio: ~10:1 designers to specialists across the design org.
Three pillars of formalization:
- Clear structure — Program lead, reinforced expectations, leadership backing. Being "real about my time commitment as program lead."
- Regular rhythms — Fortnightly Jam Sessions, Specialists Pulse (quarterly surveys), checkpoints, office hours, feedback loops.
- Learning & development — Specialists Buddy mentoring program (90-day head start for new specialists pairing with experienced ones).
Specialists Pulse — measurement system:
| Component | Timing | Details |
| --- | --- | --- |
| Specialists Survey | Quarterly, month 2, open ~6 weeks | Anonymous. Measures confidence, support, motivation (Likert scale). "In one word, describe your experience." |
| Managers Survey | Quarterly, month 3, open ~3 weeks | Tracks completion rates. Measures specialist's contribution to team a11y capability, awareness impact. "Share one example of how your specialist made a difference." |
| Report + Action Plan | After surveys close | Synthesizes findings into actions |
| Specialists Retro | Month 3, multiple time zones | Bright spots, Frictions, 5 Whys columns |
The Critical Insight: Engagement ≠ Impact
What Pawel assumed: Specialists weren't showing up to Jam Sessions → they must be demotivated and don't want to be specialists.
What the data showed:
Motivation (Specialists Survey): Consistently high — most responses at 4-5 on a 5-point scale across both Jul-Sep and Oct-Dec 2025 quarters.
Manager-reported impact on team awareness: Strong and improving — majority of managers rated specialist influence at 4-5 on a 5-point scale.
Confidence (Specialists Survey): Very high — overwhelming majority at 4-5, with improvement from Jul-Sep to Oct-Dec 2025.
Conclusion: Low Jam Session attendance reflected scheduling friction, not declining motivation. "Engagement is not the same as impact." This distinction refocused the program on outcomes rather than attendance metrics.
Scaling Recommendations (Any Org Size)
- Make accessibility visible — through growth profiles, checklists, shared language
- Build community — mentorship circles, champion programs, or specialist roles tailored to your org culture
- Start small and formalize gradually — begin with "one or two people interested," add structure as impact becomes evident
For small organizations: "It can be you and five other people interested in accessibility as a mentorship circle and then you start to add it to growth profiles." Formal specialization isn't required — embedding a11y entirely in growth profiles can work.
The program equipped specialists with practical tools:
- Checklists
- Annotations (for design specs)
- Tooling
- Guidance & Docs
- AI agents
- Measurements
Quotable Moments
"Accessibility is a team sport." — Pawel Wodkowski
"Engagement is not the same as impact." — Pawel Wodkowski, on low Jam Session attendance vs. high survey motivation
"Start with your need. Finish with theirs." — Pawel Wodkowski (playbook principle)
"Start small and scrappy, but be ready to formalize it." — Pawel Wodkowski
"Accessibility scales through building a local community." — Pawel Wodkowski
"From checkbox to culture — the shared habits, assumptions, and routines that make accessible outcomes the default." — Pawel Wodkowski
"It can be you and five other people interested in accessibility as a mentorship circle." — Pawel Wodkowski, on getting started at any org size
Q&A Highlights
On scaling to smaller organizations:
- Start with mentorship circles; formal specialization isn't required
- Even mentoring two people creates a multiplication effect as those mentees mentor others
On specialization vs. baseline:
- Organizations can embed accessibility entirely in growth profiles rather than creating separate specialization tracks
- The specialist track is an addition to growth profiles, not a replacement
On single specialist leadership:
- Be realistic about capacity as program lead
- Mentoring even two people creates exponential growth over time
On future measurement:
- Team plans to survey non-specialists about specialist impact on specific projects (next evolution of the Pulse)
Resources from This Session
- Slides PDF: https://www.deque.com/axe-con/wp-content/uploads/2025/11/The-Accessible-Design-Specialists-Playbook_a11y.pdf
- Transcript: https://www.deque.com/axe-con/wp-content/uploads/2025/11/The_Accessible_Design_Specialists_Playbook.txt
- LinkedIn: linkedin.com/in/pawelwodkowski
Session 6: Integrating Axe for Automated Testing in a Distributed Engineering Environment
Speakers:
- Peter Bossley, Senior Manager, Accessibility, Thomson Reuters
- Corey Hinshaw, Lead Accessibility Specialist, Thomson Reuters
- Pavan Mudigonda, Lead QA Engineer (Developer Experience), Thomson Reuters
Session URL: https://www.deque.com/axe-con/sessions/integrating-axe-for-automated-testing-in-a-distributed-engineering-environment/
Slides PDF: https://www.deque.com/axe-con/wp-content/uploads/2025/11/Integrating-Axe-for-automated-testing-in-a-distributed-engineering-environment_A11Y.pdf
Transcript: https://www.deque.com/axe-con/wp-content/uploads/2025/11/Integrating-axe-for-automated-testing-in-a-distributed-environment.txt
Track: Development
Key Thesis
Scaling accessibility automation across a large, decentralized enterprise (100+ engineering orgs, 250+ applications, products ranging from brand new to 50 years old) requires a hybrid model, pragmatic compromises, and custom tooling to bridge the gap between what vendor platforms provide and what enterprise oversight demands.
About Thomson Reuters
- Content-driven technology and media company
- Focus areas: Legal, Tax & Accounting, News, Government, Print
- ~25,000 employees, offices globally (US, Canada, UK, Switzerland, India, Mexico, Brazil, China, Japan)
- Hundreds of products and services
- 100+ engineering organizations
- Product age range: brand new to 50 years old
The Core Challenge
How do we scale accessibility across a diverse, complex, and decentralized organization?
Approach: Manual vs. Automated vs. Hybrid
| Approach | Pros | Cons |
| --- | --- | --- |
| Manual | Shift-left mindset, a11y as first-class citizen, deep expertise at all stages | Resource constraints, process bottlenecks, cost |
| Automated | Scalable, cost-effective, empowers individual contributors, consistency | Covers only 30-60% of issues, introduces a11y debt, technically conformant ≠ accessible |
| Hybrid (TR's choice) | Best of both | Requires careful orchestration |
TR's hybrid strategy:
- Accessibility-first mindset, shift left as far as possible
- Empower designers and developers with training + materials
- Adopt AI a11y assistants for quick questions
- Provide dedicated accessibility expertise where most needed
- Automated a11y testing early and often:
- Static code analysis in IDE (axe Linter)
- End-to-end tests on code commits
- Regular automated tests of deployed applications
- Manual evaluations and audits for major releases
Technical Reality: Diverse Stacks
Test automation tools in use:
- Selenium
- Playwright
- WebDriverIO
- No-Code/Low-Code Proprietary Tools
CI/CD platforms:
- GitHub Actions
- Jenkins
- Azure DevOps Pipelines
- AWS CodeBuild
- Local Runs
This diversity is the defining challenge — no single integration path works for everyone.
Axe Tools Adopted
- axe-core (open source) — the foundation
- Axe Developer Hub — integrates axe-core with unit and e2e testing frameworks, collects results in project dashboard
- Axe Linter — static code analysis, available in IDE or automated tests
Six Challenges and Solutions
1. Limits of Automation
Problem: Automated tools cover only 30-60% of possible a11y issues. Some applications can't be easily tested. Tools lack subjective understanding (though AI rules are improving this).
Solution: Hybrid manual/automated model. Developer/specialist/tester training. Comprehensive library of a11y courses (Deque University).
2. Technical Implementation — Axe Developer Hub
Problem: Axe Developer Hub relies on existing UI automation tests. Integration was often challenging across diverse stacks.
Solution:
- Structured rollout program with dedicated technical + accessibility expertise
- Custom scripts and code repo actions for edge cases
- Review of initial implementation with reports and assistance
3. Technical Implementation — Axe Watcher
Problem: Not all products use technologies or processes natively compatible with Axe Watcher.
Solution:
- Custom scripts and workflows relying on raw JSON output
- Raw test results uploaded to Axe Developer Hub
- Dedicated support for migration to supported technologies
4. Technical Implementation — Axe Linter
Problem: Axe Linter sends code to external systems. Security and IP concerns at enterprise scale.
Solution:
- Created internal, standalone Axe Linter server (self-hosted)
- Distributed access information and keys to engineering organizations
5. Organization and Oversight — Enterprise Management
Problem: Axe Developer Hub doesn't provide out-of-the-box enterprise management tools. Only basic per-project metrics.
Solution:
- Created internal database and dashboard application
- Axe test results sent to internal database AND Developer Hub
- Collects and displays test run summaries per product
- Uses internal identifiers for reporting
- Created reporting automations to collect data from Developer Hub + internal database
- Defined success metrics: reduction in issue count over time, issue thresholds
- Aggregated analysis dashboards for ongoing monitoring
6. One Size Does Not Fit All
Problem: Program assumptions (e.g., "test every PR") don't match organizational realities. Complex product families had best coverage in regression test suites, which have real costs and are deliberately scoped.
Solution:
- Created minimum standards, added to internal policies
- Flexibility in test run cadence (on PR, daily, weekly, monthly)
- Compromise: additional accessibility test cases + fixed periodic cadence for complex products
- Automations to collect result data from multiple sources centrally
Results
| Metric | Value |
| --- | --- |
| Applications integrated with Axe Developer Hub | 250+ across 23 business units |
| Individual test runs performed | 12,000+ |
| Issue trend | Overall reduction in reported issues in the majority of tested applications |
| Compliance milestones | Met several regulatory milestones |
| Cultural impact | Increased engineering team awareness of accessibility |
2026 Roadmap
- Integrate Axe Developer Hub into remaining products and applications
- Mobile apps (iOS and Android)
- AI-powered remediation with human in the loop
- Internal Developer Portal (IDP) to increase visibility of axe workflows
Key Takeaways
- Automated testing is a starting point, not gospel — covers 30-60% of issues, manual testing remains essential
- Enterprise reality requires custom tooling — vendor dashboards don't provide org-wide visibility; build your own aggregation layer
- Security concerns are real — axe Linter's external code transmission required an internal self-hosted server
- Flexibility beats dogma — "test every PR" is ideal but complex products may need periodic cadence instead; define minimum standards with flexibility
- Training and cultural buy-in matter as much as tools — tools find issues, culture fixes them
- Balance deadline pressure with accessibility — the hybrid model acknowledges this tension
Quotable Moments
"Is it perfect? Nope. (But nothing is.)" — Slide on axe tools adoption
"Automated tools cover between 30-60% of possible accessibility issues." — Thomson Reuters team
"Technically conformant but inaccessible experiences" — on the risk of automation-only approaches
"Many organisations are waking up to the fact that embracing accessibility leads to multiple benefits — reducing legal risks, strengthening brand presence, improving customer experience and colleague productivity." — Paul Smyth, Barclays (cited)
"Accessibility is a core value... something we view as a basic human right." — Sarah Herrlinger, Apple (cited)
Resources from This Session
- Slides PDF: https://www.deque.com/axe-con/wp-content/uploads/2025/11/Integrating-Axe-for-automated-testing-in-a-distributed-engineering-environment_A11Y.pdf
- Transcript: https://www.deque.com/axe-con/wp-content/uploads/2025/11/Integrating-axe-for-automated-testing-in-a-distributed-environment.txt
Session 7: Shifting Left — Building an Ecosystem to Scale Accessibility
Speakers:
- Todd Keith, EVP / Head of UX, Regions Bank
- Katrina Lee, UX Program Manager (Design Ops), Regions Bank
Moderator: Jon (Deque)
Session URL: https://www.deque.com/axe-con/sessions/shifting-left-building-an-ecosystem-to-scale-accessibility/
Transcript: https://www.deque.com/axe-con/wp-content/uploads/2025/11/Shifting_left_Building_an_ecosystem_to_scale_accessibility.txt
Slides PDF: https://www.deque.com/axe-con/wp-content/uploads/2025/11/Shifting-left_-Building-an-ecosystem-to-scale-accessibility_a11y.pdf (note: PDF contains different session content — "Shift-up Accessibility" by D2L/AT&T — appears to be a mis-upload by Deque)
Track: Organizational Success with Accessibility
Key Thesis
Accessibility at a top-20 US bank required a 14-year cultural transformation — from a single passionate front-end developer asking "What about accessibility?" to an ecosystem where accessibility analysts are embedded in cross-functional squads, 40% of staff have completed a11y training, and WCAG 2.2 is in vendor contracts. "Shift left" means starting at discovery, not development. "Better is better."
About Regions Bank / Regions XD
- Top-20 US bank
- Experience Design team ("Regions XD") handles UX
- Regulated industry (banking/financial services)
- Design Ops function manages vendor relationships, tooling, standards
14-Year Journey: Three Maturation Phases
Phase 1: Introduction (2013-2014)
- A single passionate front-end developer championed accessibility by asking "What about accessibility?" across projects
- Organization simultaneously created an ADA Advisory Council focused on digital accessibility
- Classic "passionate individual" origin story
Phase 2: Growth (4-year span)
- Team expanded from one individual to multiple accessibility analysts
- Restructured from discipline-based silos (designers separate from developers separate from a11y) into cross-functional squads where these roles collaborate daily within dedicated platform groups
- Launched a design system with built-in accessible components
Phase 3: Ecosystem Building (recent)
- Adopted WCAG 2.2 as the standard
- Third-party vendor management — updated contracts to require WCAG 2.2 compliance
- Broader organizational cultural integration
Key Organizational Insight: Where A11y Analysts Belong
Initial assumption: Accessibility expertise belongs in technology/development teams.
What they learned: Greater value in positioning accessibility analysts within the design organization, closer to product strategy and user research. This is the literal "shift left" — moving a11y expertise from the end of the pipeline (development/QA) to the beginning (design/discovery).
Org Structure Evolution
- Squad managers report to directors focused on value streams and platforms
- Design Ops Manager (Katrina) serves all directors — handles vendor relationships, tooling partnerships, accessibility standards
- Accessibility analysts embedded in daily squad operations → amplified influence
- Managers and business leaders now proactively request accessibility review before planning phases
Cultural Alignment
Connected accessibility to Regions' core corporate values:
- Put people first
- Do what is right
- Focus on your customer
- Reach higher
- Enjoy life
This framing moved accessibility from "compliance obligation" to "core value expression."
Training and Upskilling
- 40% of staff completed at least one Deque University course in the past year
- Three squads formed study groups meeting monthly to discuss a11y concepts and real-world applications
- Weekly accessibility community practice meetings for peer support and problem-solving
- Organization-wide office hours for accessibility questions
Audit Results
Partnered with Deque for reviews of three strategically selected platforms:
- High-visibility website
- Greenfield product
- Enterprise authentication system
Key finding: ~50% of identified issues traced to third-party components — validating the need for vendor management and contractual WCAG requirements.
Accessibility Analyst Team
- 130 combined years of accessibility experience across the team
- AI-generated avatars celebrating each team member's unique background and personality
- Embedded in squads rather than operating as a separate function
Overcoming Resistance
"Accessibility slows us down" concern:
- Countered by demonstrating that early integration prevents costly downstream remediation
- Reframed: accessibility doesn't add work, it moves work earlier where it's cheaper to fix
Leadership viewing a11y as pure compliance:
- Start with empathy exercises involving engineers and business leaders
- Connect accessibility to customer-focused organizational mission
- Share documented successes with peers and leadership
- Emphasize usability improvements benefit ALL users
- Tell stories connecting accessibility work to actual customer experiences
Future Initiatives
- Accessibility testing adoption across all teams
- Weekly accessibility community practice (expanding)
- Organization-wide office hours
- Process documentation establishing shared responsibility across product management, design, and technology
Quotable Moments
"You start with empathy. Until someone knows and understands the value, you cannot have empathy for it." — Todd Keith
"Better is better." — Todd Keith
"Accessibility is always a work in progress — that is our theme today and our takeaway." — Katrina Lee
"What about accessibility?" — The original question from a single passionate front-end developer that started Regions' 14-year journey
Key Takeaways
- Shift left = move a11y into design, not just development — accessibility analysts belong closest to product strategy and user research, not at the end of the pipeline
- Connect a11y to existing corporate values — framing accessibility as "put people first" and "do what is right" resonates more than compliance arguments
- 50% of audit issues came from third-party code — vendor management and contractual WCAG requirements are essential
- Squad-based embedding > separate a11y team — daily collaboration with designers and developers amplifies influence; leaders start requesting a11y review proactively
- Empathy first, then compliance — empathy exercises and customer stories move people more than legal arguments
- "Better is better" — perfectionism is the enemy; continuous improvement beats waiting for perfection
Resources from This Session
- Transcript: https://www.deque.com/axe-con/wp-content/uploads/2025/11/Shifting_left_Building_an_ecosystem_to_scale_accessibility.txt
Session 8: Making Platform React Chart Components Accessible
Speaker: Ambika Yadav, Visualization Engineer, Atlassian Visualization Platform (based in Seattle; MA in Media, Arts and Technology from UC Santa Barbara)
Session URL: https://www.deque.com/axe-con/sessions/making-platform-react-chart-components-accessible/
Slides PDF: https://www.deque.com/axe-con/wp-content/uploads/2025/11/Accessible-Platform-Chart-Components_a11y.pdf
Transcript: https://www.deque.com/axe-con/wp-content/uploads/2025/11/Making_Platform_React_Chart_Components_Accessible.txt
Track: Development
Key Thesis
Charts shape decisions across democracy, health, climate, and business — and when they're inaccessible, entire populations are excluded from information that affects their lives. Building accessible charts at platform scale (React components used across Atlassian products) requires addressing 10 distinct areas: design context, colors, pattern fills, focus management, screen readers, data tables, AI insights, tactile charts, and sonification.
Why Accessible Charts Matter
Charts influence decisions in:
- Democracy — election results, polling, gerrymandering visualizations
- Public health — COVID dashboards, vaccination rates, hospital capacity
- Climate — temperature trends, emissions, flood/fire risk maps
- Business — revenue graphs, reliability dashboards, incident timelines
Who is excluded when charts aren't accessible:
| Barrier | Approximate Prevalence | Source |
| --- | --- | --- |
| Blind/low vision users can't read data | 1 in 4 people | WHO/CDC |
| Colorblind users can't see differences | 1 in 20 people | |
| Cognitive disabilities can't interpret | 1 in 10 people | |
| Motion sensitive users must avoid it | 1 in 100 people | |
| Non-mouse users can't explore effectively | 1 in 7 people | |
Framework: Chartability Heuristics (built on WCAG POUR)
| Principle | Chart Application |
|---|---|
| Perceivable | Clear titles/context, meaningful alt text, non-visual data access (table/summary), don't rely on color alone |
| Operable | All content reachable via keyboard (not mouse-only); motion/animation doesn't trap or harm |
| Understandable | Logically ordered, clearly labeled, consistent AT announcements, predictable interactions |
| Robust | Solid semantics + ARIA, works across browsers and screen readers, validated by actual testing |
10 Areas for Accessible Charts
1. Design — Context and Labeling
A chart without context is useless. Required elements:
- Title and subtitle
- Axis labels with ticks and grid
- Data labels
- Units (critical — "274" means nothing without "km" or "$")
Platform requirement: React chart components must expose customization for ALL chart elements via props, hooks, or composable patterns.
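To make the platform requirement concrete, here is a minimal sketch (not from the session; all names are illustrative) of a prop surface that exposes every element from the checklist above, with safe defaults so no chart renders unlabeled:

```typescript
// Illustrative prop surface for a platform chart component: every element
// named in the design checklist (title, axis labels, units, data labels)
// is customizable by the consuming product team.
interface ChartLabels {
  title: string;
  subtitle?: string;
  xAxisLabel: string;
  yAxisLabel: string;
  units: string;          // "274" means nothing without "km" or "$"
  showDataLabels: boolean;
}

// Merge caller overrides onto defaults so every element has a value.
function resolveLabels(overrides: Partial<ChartLabels>, defaults: ChartLabels): ChartLabels {
  return { ...defaults, ...overrides };
}
```

In a real component library the same idea usually surfaces as props plus hooks or composable sub-components, so teams can replace any element entirely rather than only relabel it.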
2. Colors — Contrast and Palettes
- Contrast ratios: 3:1 for geometric shapes and large text, 4.5:1 for regular text
- Must work in both light and dark modes
- Three palette types:
- Categorical — distinct categories, no numeric order
- Sequential — low to high values
- Diverging — meaningful midpoint
Tools: Color Brewer (colorbrewer2.org), Chroma.js (gka.github.io/palettes), Viz Palette (projects.susielu.com/viz-palette)
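The 3:1 and 4.5:1 thresholds come from WCAG's relative-luminance contrast formula, which is easy to compute directly. A minimal sketch (standard WCAG 2.x math, not code from the session):

```typescript
// WCAG 2.x relative luminance of a hex color like "#1f77b4":
// each channel is linearized, then weighted by perceptual sensitivity.
function relativeLuminance(hex: string): number {
  const n = parseInt(hex.replace("#", ""), 16);
  const [r, g, b] = [(n >> 16) & 0xff, (n >> 8) & 0xff, n & 0xff].map((c) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

// Contrast ratio is (lighter + 0.05) / (darker + 0.05), ranging 1:1 to 21:1.
function contrastRatio(a: string, b: string): number {
  const l1 = relativeLuminance(a);
  const l2 = relativeLuminance(b);
  return (Math.max(l1, l2) + 0.05) / (Math.min(l1, l2) + 0.05);
}

// Chart shapes need >= 3:1 against their background; regular text >= 4.5:1.
const meetsGraphicsContrast = (fg: string, bg: string) => contrastRatio(fg, bg) >= 3;
```

Running this against both light- and dark-mode palette variants is a cheap automated check for the "must work in both modes" requirement.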
3. Pattern Fills — Beyond Color
Patterns (stripes, dots, crosshatching) provide secondary encoding for colorblind users.
Implementation:
- SVG `<defs>` container for reusable definitions
- `<pattern>` defines a repeating tile with an `id`
- Apply via `fill="url(#pattern-id)"` on chart shapes
- visx library provides ready-to-use pattern components: visx.airbnb.tech/patterns
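The `<defs>`/`<pattern>`/`fill` pipeline can be sketched end to end. This builds the SVG as a plain string so the structure is visible; in React/visx the same markup would be JSX, and the stripe geometry and ids here are illustrative, not from the session:

```typescript
// Build an SVG where a reusable diagonal-stripe <pattern> (declared once
// in <defs>) is applied to a bar via fill="url(#id)".
function stripedBarSvg(patternId: string, color: string): string {
  return [
    `<svg width="200" height="120" xmlns="http://www.w3.org/2000/svg">`,
    `  <defs>`,
    `    <pattern id="${patternId}" width="8" height="8"`,
    `             patternUnits="userSpaceOnUse" patternTransform="rotate(45)">`,
    `      <rect width="8" height="8" fill="${color}" />`,
    `      <line x1="0" y1="0" x2="0" y2="8" stroke="white" stroke-width="3" />`,
    `    </pattern>`,
    `  </defs>`,
    `  <rect x="20" y="20" width="40" height="80" fill="url(#${patternId})" />`,
    `</svg>`,
  ].join("\n");
}
```

visx's pattern components (e.g. its stripe and circle patterns) wrap exactly this boilerplate, so each series gets a distinct tile as a secondary encoding alongside color.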
Demonstrated across color blindness types:
- Deuteranomaly (~2.7% users) — distorted red-green
- Deuteranopia (~0.56%) — strong red-green confusion
- Tritanopia (~0.016%) — blue-yellow blindness
- Achromatopsia (~0.0001%) — very little/no color vision
Each type shown before/after pattern fills — dramatic improvement.
4. Focus Management — Keyboard Navigation
Critical design decisions:
- Don't add tab stops to every chart element — with large datasets, users would Tab dozens of times. Treat chart as a single Tab stop, use internal arrow navigation.
- Enter key moves focus INTO chart; Escape moves focus OUT
- Arrow-Left/Right moves between marks of same category
- Arrow-Up/Down moves between marks of same y-value
Implementation:
- Custom `onKeyDown` handler for specific key presses
- Browser `focus()` method with React refs or query selectors
- `tabIndex={-1}` on elements that should be programmatically focusable but not in the default Tab order
- Expose hooks and props for customization
- Data Navigator — a JavaScript module providing out-of-the-box focus-management helpers for charts
Must include: Visible helper text describing keyboard navigation ("Use arrow keys to navigate between data points, Escape to exit chart")
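The arrow-key scheme above reduces to pure index arithmetic, which keeps the logic testable apart from the DOM. A minimal sketch (assuming marks are laid out as `cols` points per series, flattened into one array; names are illustrative):

```typescript
// Roving-focus index logic for chart marks: Left/Right move within a
// series, Up/Down jump to the same x-position in the adjacent series,
// Escape signals "move focus out of the chart".
type NavResult = { index: number; exit: boolean };

function nextMarkIndex(key: string, index: number, cols: number, total: number): NavResult {
  let next = index;
  switch (key) {
    case "ArrowRight": next = Math.min(index + 1, total - 1); break;
    case "ArrowLeft":  next = Math.max(index - 1, 0); break;
    case "ArrowDown":  next = index + cols < total ? index + cols : index; break;
    case "ArrowUp":    next = index - cols >= 0 ? index - cols : index; break;
    case "Escape":     return { index, exit: true };
  }
  return { index: next, exit: false };
}
```

Inside an `onKeyDown` handler you would then call `.focus()` on the mark at the returned index (each mark carrying `tabIndex={-1}` so it is programmatically focusable without adding Tab stops), and return focus to the container on `exit`.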
5. Screen Reader Interface
Chart container:
- Alt text formula: "[Chart type] of [type of data], where [reason for including chart]"
- Example: "Stacked bar chart showing distances walked by each friend for every month of 2025, used to compare monthly activity and see who contributes most."
- Role: `img` or `graphics-object`
Chart marks (individual data points):
- `aria-label` formula: "[Series name], [X value], [Y value], [units]"
- Example: "Priya, x: Jan, y: 274"
- Role: `img` or `graphics-symbol`
- Keep labels succinct while conveying the maximum data information
Reference for alt text: medium.com/nightingale/writing-alt-text-for-data-visualization-2a218ef43f81
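The per-mark formula is simple enough to centralize in one helper, so every chart announces marks consistently. A minimal sketch (field names are illustrative, not the session's API):

```typescript
// Build the per-mark aria-label from the formula
// "Series name, X value, Y value, units".
interface Mark {
  series: string;
  x: string;
  y: number;
  units?: string;
}

function markAriaLabel(m: Mark): string {
  const units = m.units ? ` ${m.units}` : "";
  return `${m.series}, x: ${m.x}, y: ${m.y}${units}`;
}
```

Rendering this onto each mark element (alongside `role="img"` or `graphics-symbol`) is what makes screen reader announcements consistent across chart types.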
6. Data Table Alternative
Always provide a data table alongside the chart — accessible, searchable, and sortable. Tables and visual charts serve different accessibility needs; a table complements on-chart labels rather than replacing them, so both are necessary.
7. AI Insights
AI-generated text descriptions of chart patterns and trends — helps screen reader users understand the "story" the chart tells without navigating every data point.
8. Tactile Charts
Prototype designs with smart defaults for tactile graphics — physical representations for blind users. Reference: vis.csail.mit.edu/pubs/tactile-vega-lite.pdf
9. Sonification
Map data values to audio frequencies so users can hear patterns and trends. A rising line chart becomes a rising pitch.
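The core of sonification is the value-to-pitch mapping; everything else is audio plumbing. A minimal sketch (the linear mapping and the 220–880 Hz two-octave range are illustrative choices, not specified in the session):

```typescript
// Linearly map a data value onto a pitch range so higher values sound
// higher: a rising line chart becomes a rising tone.
function valueToFrequency(
  value: number,
  min: number,
  max: number,
  loHz = 220,
  hiHz = 880
): number {
  if (max === min) return loHz; // degenerate range: flat tone
  const t = (value - min) / (max - min);
  return loHz + t * (hiHz - loHz);
}
```

In a browser these frequencies would typically drive a Web Audio `OscillatorNode`, stepping through the series one data point at a time.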
10. Mobile Accessibility (mentioned in Q&A)
Mobile chart accessibility requires different navigation paradigms than desktop — an area for future exploration.
Q&A Highlights
On label density and overwhelm:
- Customize what labels display per chart type
- Use collision detection or density mapping to hide redundant labels
- Highlight important data points
On pattern "busyness" in bar charts:
- Design refinements can reduce visual clutter while maintaining pattern distinction
On data tables vs. labels:
- Tables accompany charts, don't replace labels — both serve different needs
On enabling patterns:
- Use clear language: "Enable/Disable Pattern Fill" as a toggle
Quotable Moments
"Charts shape decisions across almost every part of life." — Ambika Yadav
"If charts are not accessible, 1 in 4 people can't read the data, 1 in 20 can't rely on colors, 1 in 7 can't explore without a mouse." — Ambika Yadav (combining WHO/CDC data)
"Don't add tab stops to every chart element. Treat the chart as a single Tab stop and use internal navigation." — Ambika Yadav
"These solutions aren't perfect, but researchers, designers and engineers are steadily pushing the boundaries of chart accessibility." — Ambika Yadav
Resources from This Session
- Slides PDF: https://www.deque.com/axe-con/wp-content/uploads/2025/11/Accessible-Platform-Chart-Components_a11y.pdf
- Transcript: https://www.deque.com/axe-con/wp-content/uploads/2025/11/Making_Platform_React_Chart_Components_Accessible.txt
- Chartability heuristics: Framework for chart accessibility built on WCAG POUR
- Color Brewer: https://colorbrewer2.org/
- Chroma.js Palettes: https://gka.github.io/palettes
- Viz Palette: https://projects.susielu.com/viz-palette
- visx Patterns: https://visx.airbnb.tech/patterns
- Data Navigator: JS module for chart focus management
- Tactile Vega-Lite paper: https://vis.csail.mit.edu/pubs/tactile-vega-lite.pdf
- Alt text for data viz: https://medium.com/nightingale/writing-alt-text-for-data-visualization-2a218ef43f81
Session 9: CSAT as a Tool for Accessibility Insights — Lessons from Arizona State University
Speaker: Victoria Polchinski, Lead UX Researcher, Arizona State University
Session page: https://www.deque.com/axe-con/sessions/csat-as-a-tool-for-accessibility-insights-lessons-from-arizona-state-university/
Slides PDF: https://www.deque.com/axe-con/wp-content/uploads/2025/11/CSAT-as-a-tool-for-accessibility-insights_-Lessons-Learned-from-Arizona-State-University-1_A11Y.pdf
Transcript: https://www.deque.com/axe-con/wp-content/uploads/2025/11/CSAT_as_a_Tool_for_Accessibility_Insights.txt
Track: Research / UX
Key Thesis
CSAT (Customer Satisfaction) surveys can be a powerful, low-cost mechanism to surface accessibility insights — if you add one question ("Do you use assistive technologies?") and disaggregate the data. ASU used this to build a panel of 800+ AT users and identify a consistent ~5-point satisfaction gap between AT and non-AT users.
Step 1 — Planning:
- Started as research team of one, limited resources
- Goal: ≥25% of participants are disabled/AT users, benchmark ≥12 products annually
- Three foundations: accessible research tools, accessible incentives, participant pool with AT users identified
- Socialized through "coffee and cake" conversations with influential PMs
Step 2 — Execution:
- Survey: 7-10 questions, 1-3 minutes
- Core CSAT question: "Overall, how satisfied are you with your experience?" (5-point scale)
- Three open-ended questions (likes, dislikes, feedback)
- Key question: "Do you use assistive technologies or devices (e.g., magnifiers, screen reading software, text-to-speech, video captions, etc.)?"
- Distribution: in-app banners below main navigation, 2-3 weeks, dismissible
- Custom-built banners for full accessibility control
- Tested for: automated scans, 200% zoom, keyboard-only, screen reader, color contrast, label clarity
- Avoided: drag-and-drop and slider question types
Step 3 — Analysis:
- Formula: (Satisfied responses ÷ Total responses) × 100 = CSAT %
- Disaggregated by population: students, staff, faculty, AT users vs. non-AT users
- Key finding: AT users consistently rated satisfaction ~5 points lower than non-AT users across 12+ surveys
- One major exception: ASU Personalized Graduate Admissions (simplified process, eliminated letters of rec, fees, personal statements) — AT users rated 5 points HIGHER than non-AT users
- Student quote: "I wouldn't have gone through with the Masters program if I had needed to complete the strenuous task of applying"
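The analysis step reduces to one formula plus disaggregation. A minimal sketch (assuming "satisfied" means the top two boxes, 4 and 5, of the 5-point scale — a common CSAT convention the session doesn't spell out):

```typescript
// CSAT % = (satisfied responses / total responses) * 100, with
// "satisfied" taken as ratings of 4 or 5 on the 5-point scale.
function csatPercent(ratings: number[]): number {
  if (ratings.length === 0) return 0;
  const satisfied = ratings.filter((r) => r >= 4).length;
  return (satisfied / ratings.length) * 100;
}

// Disaggregate by a respondent attribute (here, AT use) to surface the
// kind of satisfaction gap ASU found between AT and non-AT users.
function csatGap(atRatings: number[], nonAtRatings: number[]): number {
  return csatPercent(nonAtRatings) - csatPercent(atRatings);
}
```

The key move is the disaggregation, not the arithmetic: a single blended CSAT number would have hidden ASU's ~5-point AT gap entirely.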
Step 4 — Recruitment & insights:
- ~10% of respondents indicated AT use
- 50-80% opted into future research
- 800+ AT users recruited through surveys alone
- Best recruitment: within one month of survey completion
- Partnered with Student Accessibility Services → 600+ additional participants
Consistent Accessibility Themes
- Customization (most common request) — dark mode, text size, spacing, font, color, contrast control
- Law of Proximity — keeping related labels/actions/feedback visually close, critical for low-vision/magnification users
- Flexibility — async/online preferred by students with disabilities, accommodates varying energy levels
Limitations Acknowledged
- AT question doesn't capture all disabled people (not all use AT; not all AT users identify as disabled)
- Surveys identify needs but deeper qualitative research needed for the "why"
- Manual coding of open-ended responses is time-intensive
Quotable Moments
"Nothing About Us Without Us" — disability community mantra, framing the entire approach
"Thank you for creating such a survey... we are often not heard enough." — disabled student feedback
Resources
- Tools: Qualtrics (surveys), Google Forms (screeners), R (analysis), Airtable (data management)
- Northern Arizona University assistive technology certificate program
- Contact: vpolchin@asu.edu
Session 10: Scaling Accessibility in a Complex Enterprise — Wolters Kluwer
Speaker: Ryan Schoch, Director of UX Advisory Services, Wolters Kluwer
Session page: https://www.deque.com/axe-con/sessions/scaling-accessibility-in-a-complex-enterprise-lessons-from-audits-adoption-and-shared-practices/
Slides PDF: https://www.deque.com/axe-con/wp-content/uploads/2025/11/Scaling-Accessibility-in-a-Complex-Enterprise-Lessons-from-Audits-Adoption-and-Shared-Practices_a11y.pdf
Transcript: https://www.deque.com/axe-con/wp-content/uploads/2025/11/Scaling_Accessibility_in_a_Complex_Enterprise.txt
Track: Design / Organization
Key Thesis
Scaling accessibility in complex enterprises isn't primarily a knowledge gap — it's a systems problem. The hard part is getting large numbers of teams to interpret accessibility expectations the same way. "Scaling before understanding amplifies variance."
Core Problem
- Organizations aren't greenfield — they're "layered ecosystems" with distributed authority, partially shared foundations, and legacy infrastructure
- Audits revealed: same interaction breakdowns across divisions, shared components implemented differently or bypassed entirely
- Examples: accordions and combo boxes behaving inconsistently, keyboard focus landing unpredictably, unintended reading orders
- These aren't isolated defects but "structural variants"
Why Traditional Approaches Fail
The conventional model (roles + training + tools → conformance) assumes improving parts automatically normalizes the whole. In complex systems:
- Awareness alone is insufficient
- Training doesn't automatically change behavior
- Documentation doesn't guarantee adoption
- Variance is the "enemy of scale"
- "Good intentions don't normalize systems. Structure does."
Solutions: Interaction-First Design System
1. Design system evolution:
- Shifted from visual consistency to responsive behavioral consistency
- Encoded accessible interaction expectations into reusable patterns
- Shared interaction specification templates in Figma
- Reusable annotation libraries for UX community
2. Interaction-first approach:
- Functional HTML prototypes early in design process (not just static visuals)
- Ran automated + manual testing (screen readers) on prototypes
- Used prototypes as shared references rather than debating downstream
3. Explicit interaction expectations:
- Designers annotate during handoff: focus order, keyboard behavior, landmark strategy, meaning/labeling, component API considerations
- Interface designers, design technologists, and engineers work in parallel
4. Reframing:
- "Accessibility is usability and usability is good design"
- Connected to product quality, not just compliance
The Reinforcing Loop
Shared understanding → interaction expectations → consistent use → improved feedback → reduced variance → stronger shared understanding. Patterns normalize "not because they exist, but because they are consistently expected in the process."
Practical Recommendations
- Build where adoption already exists (don't try to cover all divisions at once)
- Use audits as learning tools, not report cards
- Design for uneven adoption — create reusable foundations teams can adopt when ready
- Prioritize interaction architecture before production scaling
- Embed accessibility in everyday work, not quarterly pushes
Cross-session Connections
- Echoes Thomson Reuters (Session 6): both serve regulated industries with hundreds of products, both discovered vendor tools need enterprise wrapping
- Echoes Atlassian (Session 5): design system as accessibility multiplier, specialist communities
- Adds a new dimension: the "structural variant" framing — accessibility failures aren't individual mistakes but system-level variance
- Strongest statement on why training alone fails: "Awareness alone is insufficient. Good intentions don't normalize systems. Structure does."
Quotable Moments
"The hard part is really in large numbers of teams to interpret accessibility expectations in the same way." — Ryan Schoch
"Scaling before understanding amplifies variance." — Ryan Schoch
"Good intentions don't normalize systems. Structure does." — Ryan Schoch
"Patterns normalize not because they exist, but because they are consistently expected in the process." — Ryan Schoch
Session 11: [PLACEHOLDER — more content to follow]
Blog Post Angle (Working Notes)
Working title ideas:
- "From Tab Key to Tactile Charts: What Axe-con 2026 Taught Me About Accessibility at Every Level"
- "Axe-con 2026: Eight Sessions, One Truth — Accessibility Is a Team Sport"
- "The Full Accessibility Stack — Lessons from Axe-con 2026"
Emerging narrative thread (10 sessions in):
- Session 1 = Individual practitioner (keyboard testing, zero tools)
- Session 5 = Design team (growth profiles + specialist community — Atlassian)
- Session 8 = Component engineering (accessible charts at platform scale — Atlassian)
- Session 7 = Design + org culture (squad embedding + empathy — Regions Bank)
- Session 2 = Organization program (VPATs first — Cvent)
- Session 3 = Platform/infrastructure (continuous AI for a11y — GitHub)
- Session 4 = Developer workflow (three-layer testing — Deque)
- Session 6 = Enterprise at scale (250+ apps — Thomson Reuters)
- Session 9 = UX research (CSAT surveys surfacing AT user gaps — ASU)
- Session 10 = Systems thinking (structural variants + interaction-first design — Wolters Kluwer)
Session 8 adds the deep-technical dimension. Previous sessions covered org strategy, culture, and tooling. This one zooms into a single component type (charts) and shows the depth of work required to make even one UI pattern truly accessible. It's the "how hard this actually is" session — 10 distinct areas, each with specific implementation details.
Cross-session connections (updated with Session 8):
All previous connections remain valid. New additions:
- Two Atlassian sessions, two scales: Session 5 (design program, 72 specialists) and Session 8 (engineering a single component type). Shows how Atlassian's investment in people (Session 5) enables deep technical work (Session 8). The specialists create the demand; the visualization engineers build the solutions.
- "Don't rely on X alone" is a universal principle: Session 1 (don't rely on automated tools alone), Session 8 (don't rely on color alone, don't rely on mouse alone), Session 6 (don't rely on automation for 100% coverage). Every session warns against single-point-of-reliance.
- The Tab key comes full circle: Session 1 taught us Tab testing. Session 8 explains WHY charts should be a single Tab stop with internal arrow navigation — with large datasets, every-element-is-a-Tab-stop is a barrier, not a feature. The nuance from "test with Tab" to "design how Tab works" is the maturity arc.
- Design system as accessibility multiplier: Session 2 (Cvent: framework-first approach), Session 4 (axe-linter.yml for custom components), Session 5 (Atlassian specialists produce design specs), Session 8 (Atlassian platform components expose a11y via props/hooks). The full chain: specialists define standards → platform engineers encode them in components → every product team inherits accessibility.
- Emerging tech: AI + sonification + tactile: Session 8 uniquely surfaces non-visual modalities — sonification (hearing data), tactile charts (touching data), AI insights (understanding data). These map to the "beyond compliance" future that Session 3 (Ed Summers) hinted at.
- Four org case studies + one component deep-dive: The blog now has breadth (7 org/tool sessions) AND depth (Session 8's 10-area chart framework). This gives it technical credibility alongside strategic insight.