Axe-con 2026 — Session Summaries

Conference: Axe-con 2026 Digital Accessibility Conference
Date: February 24-25, 2026
Format: Virtual, free — 75+ speakers, multi-track
URL: https://www.deque.com/axe-con/


Session 1: Testing Web Experiences with Your Keyboard

Speaker: Greg Gibson, Principal UX Producer (Accessibility Testing & QA), Red Hat
Session URL: https://www.deque.com/axe-con/sessions/testing-web-experiences-with-your-keyboard-2/
Demo page: https://hellogreg.org/axe26/ (permanent, reusable for your own presentations)

Key Thesis

Keyboard testing is underrated, catches high-impact issues that automated tools miss, and requires no software beyond a browser. It's the lowest-barrier entry point into accessibility testing — anyone with a keyboard can do it.

Core Keyboard Testing Keys

Tab: Move forward through interactive elements
Shift+Tab: Move backward through interactive elements
Arrow keys: Scroll page; navigate within complex components (tabs, dropdowns, radio groups)
Space: Trigger buttons, toggle checkboxes, open dropdowns
Enter/Return: Activate links, trigger buttons
Escape: Close modals, reset to default state

What to Test (Checklist from the Session)

  1. Skip links — First element on page should skip repeated nav. Test: Tab once from top → skip link should appear → Enter should jump past nav to main content.
  2. Visible focus — Every interactive element must show a visible focus indicator (recommended: ≥3px solid outline). Flag any element where focus "disappears."
  3. Obscured focus — Sticky headers/cookie banners can hide focus. Fix: CSS scroll-padding on the page.
  4. Inline link visibility — Tabbing through text can reveal invisible links (no underline, same color as text). Especially problematic for color-blind users.
  5. Focus order — Tab order must match logical reading order (not visual left-to-right). Bad example: form coded left-to-right instead of section-by-section.
  6. Buttons vs. links — Buttons (actions): Space + Enter. Links (navigation): Enter only. Space on a link scrolls the page. Flag if wrong element used.
  7. Details/Summary — Native expandable disclosure widget. Space or Enter toggles. Less code than custom implementations.
  8. Tooltips — Must appear on focus, not just hover. Otherwise excludes keyboard + touch users.
  9. Modals — Close button must be keyboard accessible. Focus should return to trigger button after close. Escape should close.
  10. Tabs pattern — Tab list must be focusable. Navigate between tabs with arrow keys (like radio buttons). Tab key enters the active panel.
  11. Scrollable regions — Code blocks and overflow containers must be keyboard focusable to allow arrow key scrolling.
  12. Auto-playing media — Must have keyboard-accessible pause/stop mechanism.
  13. Zoom — Page must work at 200% zoom (Cmd+= / Ctrl+=). Cmd+0 resets.
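Two of the fixes above (items 1 and 3) come down to a few lines of markup and CSS. A minimal sketch, with class names and the header height illustrative: a skip link that is visually hidden until it receives keyboard focus, plus scroll-padding so a sticky header doesn't obscure focused elements.

```html
<!-- Skip link: first focusable element on the page; hidden until focused -->
<a class="skip-link" href="#main">Skip to main content</a>
<!-- ... repeated navigation ... -->
<main id="main"><!-- main content --></main>

<style>
  .skip-link {
    position: absolute;
    left: -9999px; /* off-screen by default */
  }
  .skip-link:focus {
    left: 8px; /* revealed when tabbed to */
    top: 8px;
  }
  /* Keep keyboard focus from landing underneath a sticky header */
  html {
    scroll-padding-top: 80px; /* approximate sticky header height */
  }
</style>
```

Pressing Enter on the focused skip link moves focus past the repeated nav to the main landmark, which is exactly what the Tab-once test in item 1 checks for.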

Case Study: Cloudflare Homepage

Gibson live-tested cloudflare.com, demonstrating real-world issues on a well-designed site:

Prioritization recommendation (from Q&A):

  1. Skip link — quick win, high legal risk, easy to implement
  2. Search functionality — high impact, core site function inaccessible
  3. Navigation focus styles — broad impact across all pages

Quotable Moments

"If everybody building for the web would test their pages by tabbing from top to bottom, the web would be a better place." — Crystal Preston-Watson (cited by Gibson)

"Don't ask how carousel — ask why carousel." — Greg Gibson's colleague at Red Hat

"Technically accessible is the worst kind of accessible." — Audience member in chat, echoed by Gibson

"I would much rather use my own energy than a data center's energy." — Gibson on AI vs. manual keyboard testing

"A page that works well with a keyboard is also likely to work well with a mouse or touch screen."

On Automated Tools vs. Keyboard Testing

On Screen Readers + Keyboard Testing

On AI for Accessibility Testing

Resources from This Session


Session 2: Small Team, Big Shift — Building an Accessibility Program at a Mid-Sized SaaS

Speakers:

Session URL: https://www.deque.com/axe-con/sessions/small-team-big-shift-lessons-learned-from-four-years-of-building-an-accessibility-program-at-a-mid-sized-saas/
Slides PDF: https://www.deque.com/axe-con/wp-content/uploads/2025/11/Small-Team-Big-Shift-Lessons-learned-from-four-years-of-building-an-accessibility-program-at-a-midsized-SaaS_a11y.pdf

Key Thesis

Building a sustainable accessibility program requires shifting it from a cost center to a revenue driver — and the fastest way to do that is getting VPATs into clients' hands early, even before your engineers know how to fix the issues. Client demand makes the program impossible to kill.

About Cvent

The Typical (Failing) Accessibility Roadmap

Stephen outlined a pattern he's seen across 20+ years:

  1. Person in QA starts doing accessibility "off the side of their desk"
  2. They log defects → developers push back ("not in scope", "no client need", "tech stack doesn't support it")
  3. Program dies at individual contributor level — no executive buy-in

If lucky, some internal interest leads to:

Critical insight: Up until client interest, accessibility is pure cost. Programs that stay in the "cost" phase die. Revenue only appears when clients engage.

The Cvent Shortcut: VPATs First

Instead of the standard path (train engineers → build program → create VPATs → find clients), Cvent flipped the order:

  1. Hired Stephen as first "dash-accessibility" role (Senior Product Manager-Accessibility)
  2. Went straight to VPATs — contracted external auditors, starting with highest-traffic attendee-facing products
  3. Pushed VPATs to sales teams → into client hands immediately
  4. Clients loved them → created demand → leadership couldn't say no
  5. Engineers now requested training and checklists (pull, not push)
  6. Marketing issued press releases → industry awards followed

Revenue shifted "way, way left" — from ~2 years (traditional path) to ~2 months to get first VPAT in a client's hands.

Stephen told Cvent's CEO in his second week: "I'm going to change your company." Brazen, but it set the tone.

Organizing the Program: Hub-and-Spoke Model

Evolution of accessibility groups at Cvent:

Quality A11y Task Force
  Model: Manager-led task force
  Composition: Rep from each product (mix of volunteers + "voluntolds")
  Strengths: Consistent leadership, allocated time, cross-product coverage
  Weaknesses: Some members not genuinely interested

Accessibility Guild
  Model: Hybrid (champion passion + manager sponsorship)
  Composition: Cross-functional volunteers: QA, dev, design
  Strengths: Passion + consistency via specialist lead (Evelyn)
  Weaknesses: Champions have day jobs; limited bandwidth

UX A11y Champions
  Model: Voluntary champion group
  Composition: Interested designers from UX
  Strengths: High motivation, lots of ideas
  Weaknesses: Participation drops off; viewed as "extracurricular"

Cross-Department Champions Network
  Model: Newest, cross-org
  Composition: Tech + sales, marketing, legal
  Strengths: Broad organizational reach
  Weaknesses: Still forming

Key finding: Champions groups vs. task forces:

Training: What Worked and What Didn't

What they did: Bought Deque University licenses, mandatory training tracks for devs, testers, designers, managers. Intensive, detailed course.

What worked:

What didn't work:

Lesson learned: Either:

  1. Start with a shorter intro-level training (spread awareness), OR
  2. Pair technical training with required baseline checks per role ("here is the checklist for your role at this step in your SDLC")

"If it's not required, you have to rely on people wanting to do it AND having a manager who gives them time. Having both on the same team is a treat. But you can't build an accessible platform that way."

Scaling Strategy: Framework First

Started accessibility efforts at the framework level — buttons, tables, forms, base components.

Logic: If the building blocks are accessible, everything built from them gets a head start. Individual feature teams can then use accessible components instead of solving accessibility from scratch.

Change Management: Hard Lessons

The struggle: Accessibility experts know what needs to change but lack authority and change management skills. Change management people have the skills but don't know accessibility.

"There's nobody who knows what needs to change AND knows how to make it happen AND is supposed to do that as part of their role."

What didn't work: Evelyn (individual contributor) spent a year pushing out mandatory developer accessibility tests. Huge slog. Discovered 75% of the way through that a Change Management Team existed that could have helped.

What works better: Collaboration model:

Build vs. Buy Decisions

VPATs
  Approach: External auditors (for now)
  Why: Credibility with clients; lacked internal expertise initially

Testing tools
  Approach: JAWS licenses + open axe library (custom wrappers)
  Why: Company culture prefers building; evaluating paid options

Issue tracking
  Approach: Custom JIRA fields + reporting
  Why: If issues aren't in JIRA, "they don't exist and nobody looks at them"

Training (generic)
  Approach: Off-the-shelf (Deque University)
  Why: Good at all levels of depth

Training (advanced)
  Approach: Built in-house
  Why: Screen reader testing needs instructor feedback; Cvent-specific processes

Guidance docs
  Approach: Internal "good, better, best" practices
  Why: WCAG says what's wrong but is open-ended on how to fix; internal docs standardize solutions (e.g., "2px focus ring fully enclosing the element")
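The "2px focus ring fully enclosing the element" guidance translates to a few lines of CSS. A minimal sketch (selector, color, and offset illustrative, not Cvent's actual stylesheet):

```css
/* Visible focus ring, shown only for keyboard navigation */
:focus-visible {
  outline: 2px solid #0b5fff; /* color illustrative */
  outline-offset: 2px;        /* ring fully encloses the element */
}
```

Using `:focus-visible` rather than `:focus` keeps the ring for keyboard users without flashing it on every mouse click, and `outline` (unlike `border`) doesn't shift layout.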

External Evangelism & Industry Positioning

Strategic insight: Becoming an industry accessibility leader creates external pressure that reinforces the internal program.

Current Priorities / What's Still Missing

  1. Direct feedback from people with disabilities — Most feedback comes through event planners, not end users. Have an email address but no in-product feedback mechanism. Created an Employee Resource Group (ERG) for employees with disabilities and allies.
  2. Clear learning paths — Lots of training available but unclear who should do what when. Critical as Cvent acquires companies and onboards entire new teams.
  3. Broader collaboration beyond a11y groups — Need change management expertise, AI expertise, educational scaffolding. Champions have done everything they can alone; now need to plug into existing organizational systems.

Quotable Moments

"Accessibility usually starts in quality. Unfortunately, it also usually dies in quality." — Stephen Cutchins

"The passionate few is never sustainable." — Stephen Cutchins

"Don't ask how carousel — ask why carousel." (also quoted in Session 1!)

"I'm going to change your company." — Stephen Cutchins to Cvent's CEO, two weeks into the job

"Passion is useful. It is motivating. It is inspiring. But it isn't a long-term strategy. It's fragile." — Amanda Bolton

"If it's not required, you have to rely on people wanting to do it. And having a manager who will give them time for it. Having both on the same team is a treat. But you can't build an accessible platform that way." — Evelyn Wightman

"Sometimes you do just have to do things and learn better as you go." — Evelyn Wightman

"I don't even know what I don't know." — CTO of a federal agency, to Stephen

Key Takeaways (from the speakers)

Stephen:

  1. If your clients care about accessibility, they are your best advocates. If they don't care yet, it's your job to get them to care.
  2. Marketing is a very, very good friend to have. Share every win, even small ones. ("We should have done a press release when they posted the job listing for me.")
  3. Take advantage of legislative updates (EAA, ADA Title II) — even if "we have to for legal reasons" doesn't feel great, it moves the needle.

Evelyn:

  4. Executive buy-in is necessary, but managers get the work done. Buy-in doesn't trickle down automatically.
  5. Support your passionate few so they don't burn out while you're still relying on them.

Amanda:

  6. Getting organized increases your impact — clear instructions, a place to ask questions, recognized experts.

Q&A Highlights

On quantifying accessibility revenue (from Matt):

On early VPAT transparency creating fear (from Glenda):

On who should own VPATs without dedicated a11y specialists:

Resources from This Session


Session 3: Building Without Barriers on GitHub

Speaker: Ed Summers, Head of Accessibility, GitHub (blind software developer, decades of experience)
Session URL: https://www.deque.com/axe-con/sessions/building-without-barriers-on-github/
Track: Organizational Success with Accessibility

Key Thesis

180 million developers build on GitHub, and we're missing a huge opportunity to improve accessibility across the industry using AI. GitHub's Accessibility Team is dogfooding "continuous AI for accessibility" — injecting accessibility expertise into every step of the development lifecycle through custom instructions, automated scanning, and custom agents. The tools are free, open-source, and available today.

State of Accessibility: Four Observations

1. Strongest regulatory framework in decades

European Accessibility Act (EAA): June 2025 (live); consumer products
ADA Title II (US): 2026-2027; state/local government, higher ed
Accessible Canada Act: 2027-2028; federally regulated sectors

Evidence of real investment: Ed analyzed ~400 job postings on a11yjobs.com (Dec-Jan) — 31% were from state/local government (ADA Title II scope). Huge shout-out to George Hewitt, who maintains the site.

2. AI dev tools are ubiquitous

3. AI has had negligible impact on accessibility (so far)

4. We're missing a huge opportunity

Concept: Continuous AI for Accessibility

GitHub Labs coined "continuous AI" — extending CI/CD thinking to leverage AI across the entire software development lifecycle, not just in the editor.

GitHub's Accessibility Team applies this as "continuous AI for accessibility" — injecting accessibility expertise at every development touchpoint: code completion, chat, agent mode, code review, and automation.

GitHub Building Blocks (Quick Reference)

Repositories: Digital containers for project files; support branching
Pull Requests (PRs): Proposed changes, with discussion/review before merge
Issues: Bug/feature tracking, assignable to people or agents
Projects: Group issues into sprints/iterations

GitHub Copilot Features for Accessibility

Code Completion: Inline code suggestions in the editor. A11y application: suggests accessible patterns if custom instructions are set.
Copilot Chat: Conversational AI about code/repo. A11y application: ask about the accessibility of your codebase.
Copilot Coding Agent: Assign issues to Copilot for async PR creation. A11y application: assign 10-20 a11y bugs to Copilot and it creates fix PRs.
Copilot Code Review: AI reviews every PR with comments + diffs. A11y application: catches a11y issues at PR time; just-in-time developer education.

Code Review is the biggest opportunity for a11y professionals — it educates developers at exactly the right moment, at every PR, with specific suggestions they can accept with one click.

Call to Action 1: Custom Instructions for Accessibility

What: Plain-language instructions that modify Copilot's behavior across all features (completions, chat, agent, code review).

Best practices (from Kendall Gasner's guide):

Example — Markdown accessibility instructions. Five rules for accessible markdown, telling Copilot to flag:

  1. Missing or empty alt text on images
  2. Incorrect heading levels
  3. Non-descriptive link text (e.g., "click here")
  4. Emojis used as bullet points or list markers
  5. Plain language readability improvements
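Encoded as repository-wide custom instructions, the five rules might read as follows. The `.github/copilot-instructions.md` location is GitHub's documented path for repo-level instructions; the wording below is illustrative, not the session's actual file:

```markdown
<!-- .github/copilot-instructions.md (illustrative) -->
When writing or reviewing Markdown in this repository:

1. Every image must have meaningful alt text; flag missing or empty alt attributes.
2. Heading levels must not skip (h1, then h2, then h3); flag incorrect heading levels.
3. Link text must describe the destination; flag generic text such as "click here".
4. Do not use emojis as bullet points or list markers; use standard list syntax.
5. Prefer plain language; suggest readability improvements for long or jargon-heavy sentences.
```

Because custom instructions apply across completions, chat, agent mode, and code review, one file like this influences every Copilot surface at once.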

Live demo result: Ed submitted a PR with a "click here" link. Copilot's code review:

Call to Action 2: GitHub's AI-Powered Accessibility Scanner

What: Free, open-source GitHub Action that scans sites for accessibility issues, creates GitHub Issues for each violation, and assigns them to Copilot Coding Agent for automated fix PRs.

How it works:

  1. Scans using axe-core (no false positives)
  2. Creates a GitHub Issue per violation, with descriptions optimized for both humans and AI
  3. Copilot Coding Agent reads the issue, creates a fix PR in the background
  4. Human reviews, modifies, and merges
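Since the scanner ships as a GitHub Action, the workflow would be wired up roughly like this. Everything below is a hypothetical shape: the action reference, inputs, and triggers are assumptions, so check the github/accessibility-scanner README for the real configuration.

```yaml
# Hypothetical workflow sketch — verify the actual action name and
# inputs against the github/accessibility-scanner repository.
name: Accessibility scan
on:
  workflow_dispatch:        # run on demand
  schedule:
    - cron: "0 6 * * 1"     # and weekly, Monday 06:00 UTC
permissions:
  contents: read
  issues: write             # the scanner files one issue per violation
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: github/accessibility-scanner@main   # ref and inputs assumed
        with:
          url: "https://example.com"              # site to scan
```

The `issues: write` permission matters because the scanner's whole output channel is GitHub Issues, which the Copilot Coding Agent then picks up.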

Key features:

Link: gh.io/a11y-scanner (also: https://github.com/github/accessibility-scanner)

"Custom instructions + scanner = better together" — the scanner finds issues, custom instructions guide the fixes.

Call to Action 3: Build Custom Agents for Accessibility

What: Custom agents are tightly focused on a specific domain problem (vs. custom instructions which cover many topics and can get "watered down").

When to create a custom agent:

Example — Markdown accessibility agent: Goes beyond custom instructions: "Review all markdown in my repo and create a PR that makes accessibility improvements." Uses linters and tools to measure improvements, not just suggest them.

Getting started guide: gh.io/a11y-docs → "Getting Started with Custom Agents for Accessibility" (authored by Roberto Perez from GitHub's Accessibility Team)

Call to Action 4: Automate Your Accessibility Processes

Recipe for automating a11y workflows on GitHub:

  1. Dedicated repository for each accessibility process (audits, user feedback, compliance, etc.)
  2. Issue templates with required fields to ensure right information captured
  3. Automations (AI or deterministic):
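Step 2's "issue templates with required fields" maps onto GitHub's issue forms syntax, which is real and documented; the specific fields below are illustrative for an audit-finding process, not GitHub's actual template.

```yaml
# .github/ISSUE_TEMPLATE/a11y-audit-finding.yml (fields illustrative)
name: Accessibility audit finding
description: Log a single finding from an accessibility audit
labels: ["accessibility"]
body:
  - type: input
    id: wcag-criterion
    attributes:
      label: WCAG success criterion
      placeholder: "e.g. 1.1.1 Non-text Content"
    validations:
      required: true        # required fields ensure the right info is captured
  - type: textarea
    id: steps
    attributes:
      label: Steps to reproduce
    validations:
      required: true
```

Required fields give both human triagers and downstream AI automations a predictable structure to work from.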

GitHub's own examples:

Call to Action 5: Block Time to Experiment

Ed's personal plea: block time in your schedule to experiment with AI, even if you're not a developer. Share what you learn — post on LinkedIn, contribute to the awesome-copilot repo, tag Ed.

Example — Non-developer contribution: Janice Rymer (Program Manager, not a developer) used Copilot Chat to prototype an addition to GitHub's accessibility governance framework via "spec-driven development" — describing what she wanted in plain language, iterating with Copilot. It was then handed to a dev team for production implementation. Blog post on github.com/blog (~mid 2025).

Microsoft's a11y-LLM-eval Report (Key Data Point)

What: Automated accessibility benchmarking by Michael Fairchild (Microsoft) — tests how well LLMs generate accessible HTML with and without custom instructions.

Stunning finding:

No instructions: 10% average WCAG pass rate
Basic accessibility guidance: 46% (+36.9 pp)
Detailed instructions: 58% (+48.4 pp)

Link: https://microsoft.github.io/a11y-llm-eval-report/

Q&A Highlights

On preventing developers from blindly merging Copilot PRs:

On custom instructions + agent mode:

On non-developers using these tools:

On where to start (1 hour after Axe-con):

  1. Sign up for free GitHub account
  2. GitHub Copilot has a free tier
  3. Create a repo, go to gh.io/a11y-docs
  4. Start with custom instructions — "within an hour or two, you're going to have a lot of fun"
  5. Check Michael Fairchild's a11y-LLM-eval for instruction examples

Quotable Moments

"We are currently experiencing the strongest accessibility regulatory framework that I've seen in several decades." — Ed Summers

"90% of developers are using AI... and we have not seen acceleration [in accessibility]. We are missing a huge opportunity." — Ed Summers, synthesizing DORA + WebAIM data

"Custom instructions + scanner = better together."

"AI can accelerate what we are doing but there is no substitute for great design, thoughtful design, considering the needs of users, and there is no substitute for inclusive user research." — Ed Summers

"If you can express what you want in plain language, the ability to articulate what you want, what good means, what done means, in plain language — that is a superpower." — Ed Summers

"Is it perfect? No. But it's emerging technology and your experience with it is going to help improve it."

Resources from This Session


Session 4: Shift Left Without Shifting Gears — Accessibility in Your Existing Workflow

Speaker: Harris Schneiderman, Director of Product Management, Deque Systems (~13 years at Deque, ~10 of them as a software engineer before moving to product management; has always focused on building accessibility tools for dev teams)
Moderator: Liz Moore
Session URL: https://www.deque.com/axe-con/sessions/shift-left-without-shifting-gears-accessibility-in-your-existing-workflow/
Slides PDF: https://www.deque.com/axe-con/wp-content/uploads/2025/11/Shift-Left-Without-Shifting-Gears_-Accessibility-in-Your-Existing-Workflow_a11y.pdf (8 slides — mostly a live demo session)
YouTube: https://www.youtube.com/watch?v=8vmHUgqtndo

Key Thesis

You can bake accessibility testing into every stage of your existing development workflow — from coding in the IDE to browser testing — without switching tools, slowing down, or becoming an accessibility expert. A three-layer testing approach (linter → MCP Server → browser extension) progressively catches more sophisticated issues while keeping the developer in their flow state.

Demo Application

App: "Casey Jones Railway Co" — a fictional train booking website (React + TypeScript + Vite + Tailwind CSS). Named after the real-life train conductor Casey Jones ("famous for always arriving on time"), and also a Grateful Dead reference.

Task: GitHub ticket to add a traveler selection UI — two passenger counter rows (adults defaulting to 1, children defaulting to 0) with increment/decrement buttons. Adults minimum 1, children minimum 0.

Three-Layer Testing Approach

Layer 1: axe Linter (VS Code Plugin) — FREE

What it is: Accessibility linter for VS Code. Works like a spell checker — shows red squiggly lines under code with a11y issues as you type. Completely free.

What it catches (static analysis):

Key feature — Component mapping via axe-linter.yml: Declare how custom React (or other framework) components map to HTML semantics. Example: telling axe Linter that <SearchIcon> renders as an <img> and its label prop maps to aria-label. The linter then understands your design system, not just raw HTML.
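The component-mapping idea might look roughly like this. The exact axe-linter.yml schema is documented by Deque, so treat the keys below as illustrative of the concept rather than the literal format:

```yaml
# axe-linter.yml (illustrative — see Deque's docs for the real schema)
# Tell the linter that <SearchIcon> renders an <img>, and that its
# `label` prop supplies the accessible name (aria-label).
components:
  SearchIcon:
    element: img
    attributes:
      label: aria-label
```

With a mapping like this, `<SearchIcon />` without a `label` prop can be flagged the same way a bare `<img>` without alt text would be.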

Broader than eslint-plugin-jsx-a11y: Supports React Native, Liquid, JS — not just JSX. Rule sets for JSX are "quite similar" but axe Linter has broader language support and the component mapping system.

Linter Server (premium): REST endpoint version for pre-commit hooks and CI checks via GitHub Actions. Returns JSON report of all violations.

Limitations: Static analysis only — can't detect color contrast from runtime CSS, can't test fully rendered apps.

Layer 2: axe MCP Server — PREMIUM (requires axe DevTools for Web subscription)

What it is: MCP (Model Context Protocol) server that connects coding agents (Copilot, Cursor, etc.) to Deque's accessibility testing engine. Runs inside the IDE — no browser context switching.

How it works:

  1. Configured via .vscode/mcp.json — Docker container (deque-systems/axe-mcp-server), API key stored securely
  2. API key authenticates with axe account portal → fetches org-wide config (testing standard, e.g., WCAG 2.2 AA; best practices; axe-core version)
  3. Two tools: #analyze and #remediate (hashtag notation helps coding agents recognize MCP tool calls)
  4. Analyze: Spins up headless browser with axe DevTools Extension pre-installed, runs full analysis on rendered page, returns axe-core JSON
  5. Remediate: For each violation, provides description, paragraph-form remediation guidance, and expected output HTML. Coding agent translates the fix into framework code.
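The `.vscode/mcp.json` wiring described in step 1 could look roughly like this. The file location, `inputs`/`servers` structure, and `${input:...}` substitution are VS Code's documented MCP configuration; the image name comes from the session, but the environment variable name and exact arguments are assumptions:

```json
{
  "inputs": [
    {
      "type": "promptString",
      "id": "axe-api-key",
      "description": "axe account API key",
      "password": true
    }
  ],
  "servers": {
    "axe": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "-e", "AXE_API_KEY", "deque-systems/axe-mcp-server"],
      "env": { "AXE_API_KEY": "${input:axe-api-key}" }
    }
  }
}
```

Prompting for the key via `inputs` keeps the secret out of the checked-in file, matching the session's note that the API key is "stored securely."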

Design philosophy — symbiotic relationship:

"We stay in our lane. Our lane is accessibility testing. We're the best at finding and fixing issues."

Deque provides a11y expertise + remediation guidance. Coding agents (Copilot, Cursor) provide framework translation. Neither needs to learn the other's domain.

Verification loop via Copilot instructions: Add custom instructions: "After applying fixes, you must rerun #analyze to verify all issues are resolved. Confirm zero violations before considering the task complete." Creates an iterative loop — if AI makes a mistake, it catches it on re-scan and keeps trying.

Issues found and auto-fixed in demo:

Input without label: Departure date field had a separate <label> but the input wasn't associated. Copilot discovered the TextInput component had a built-in label prop — it knew the codebase better than the developer.
Color contrast: Tailwind CSS class with insufficient contrast; Copilot adjusted it to meet the 4.5:1 ratio.
Button without label: Reset button (icon-only) was missing an aria-label.
Missing landmark: Added a <header> around the hero banner.

After remediation: re-analysis returned zero violations.

Roadmap: Advanced rules and automated IGTs coming to axe MCP Server (before next Axe-con), so devs can run keyboard testing and guided tests from within the IDE.

Layer 3: axe DevTools Browser Extension — PREMIUM

What it adds beyond axe-core:

Advanced Rules: Use AI + automation to detect issues standard rules engines can't (browser-level access: screenshots, longer-running tasks)
Intelligent Guided Tests (IGTs): "TurboTax for accessibility testing" — questionnaire-style yes/no questions about your app
Automated IGTs (new): AI answers the IGT questions automatically — "sit back and sip your coffee"

Advanced Rules findings:

  1. Heading markup not used on headings — a <div> styled to look like a heading ("Plan Your Trip") was not using heading semantics. Fix: replace the <div> with an <h2>.
  2. Color contrast on gradient backgrounds — Text over gradient header, contrast ratio ranged from 3.75:1 to 3.89:1 (threshold: 4.5:1). Standard axe-core can't test gradients reliably because of hundreds of foreground-background combinations. Advanced Rules handles this.
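For context on the 3.75:1 to 3.89:1 figures, the contrast ratio comes from WCAG 2.x's published relative-luminance formula, sketched below. This is the standard formula, not Deque's implementation; Advanced Rules' contribution is sampling the many foreground/background pairs a gradient produces, not the ratio math itself.

```typescript
// WCAG 2.x contrast ratio between two sRGB colors ([r, g, b], each 0-255).
function relativeLuminance([r, g, b]: number[]): number {
  const linearize = (c: number) => {
    const s = c / 255; // scale to 0..1, then undo gamma
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b);
}

function contrastRatio(fg: number[], bg: number[]): number {
  const [lighter, darker] = [relativeLuminance(fg), relativeLuminance(bg)]
    .sort((a, b) => b - a);
  return (lighter + 0.05) / (darker + 0.05);
}

// White on black is the maximum possible ratio, 21:1
console.log(contrastRatio([255, 255, 255], [0, 0, 0]).toFixed(1)); // "21.0"
```

A gradient background means `bg` varies per pixel under the text, so the ratio must hold at the worst-case sample, which is why the demo's 3.75:1 minimum fails the 4.5:1 threshold.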

Automated IGT findings:

Fix: Updated PassengerCounter component to use template strings: `decrease number of ${label}`. Re-ran: zero automatic issues, zero IGT failures.
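The fix reduces to deriving each button's accessible name from the row's label instead of hard-coding it. A minimal sketch (the helper name is hypothetical, extracted from a PassengerCounter-style component for illustration):

```typescript
// Hypothetical helper: build the accessible name for a counter button
// from the row label via a template string, per the fix described above.
function counterButtonLabel(action: "increase" | "decrease", label: string): string {
  return `${action} number of ${label}`;
}

// In JSX this would feed aria-label, e.g.:
//   <button aria-label={counterButtonLabel("decrease", label)}>-</button>
console.log(counterButtonLabel("decrease", "children")); // "decrease number of children"
```

With the label parameterized, the adults and children rows announce distinct, specific button names to screen reader users instead of duplicating one hard-coded string.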

The Complete Demo Workflow

  1. Pick up GitHub ticket — enhancement to add traveler selection
  2. Review existing code — axe Linter immediately flags pre-existing issues
  3. Fix linter issues while browsing (campsite rule: "leave it cleaner than you found it"):
  4. Write feature code — import PassengerCounter, wire up state, implement increment/decrement with min values
  5. Run axe MCP Server from IDE — prompt: "Analyze localhost:5173 for accessibility issues. Remediate any violations found."
  6. Review auto-fixes — accept, reject, or modify each change
  7. Verification scan — re-run analyze, confirm zero violations
  8. Switch to browser — axe DevTools Extension, full scan with Advanced Rules
  9. Run Automated IGT on just the new elements (scoped to branch changes)
  10. Fix remaining issues from Advanced Rules and IGT
  11. Final verification — zero issues across all layers
  12. Commit and create PR

Issue Tally Across Layers

~11+ issues found total — each layer caught things the previous one missed:

Quotable Moments

"Casey Jones was famous for always arriving on time. And today we're going to make sure our feature arrives on time. But it doesn't leave anybody behind." — Harris Schneiderman

"Linters are essentially like spell checkers for your code." — Harris Schneiderman

"We stay in our lane. Our lane is accessibility testing. We're the best at finding and fixing issues." — Harris Schneiderman on the division of labor between Deque's tools and coding agents

"I'm happy, I'm lazy, so I'm happy I don't have to do anything." — Harris Schneiderman on staying in the IDE while MCP Server tests in the background

"It's a tough battle trying to be better at things than ChatGPT and Claude, but I think we have the expertise baked into our models that helps us outshine them." — Harris Schneiderman

"Don't just focus on the negative." — Harris on Automated IGT showing reasoning for both passes and failures

"Always review the changes that it's making on your behalf." — Harris Schneiderman

Q&A Highlights

On design system compatibility with AI fixes:

On axe MCP Server vs. asking Claude/ChatGPT directly:

On listing all linter errors at once:

Resources from This Session


Session 5: The Accessible Design Specialist's Playbook

Speaker: Pawel Wodkowski, Lead Designer / Design System Accessibility, Atlassian (based in Sydney; 9.5 years at Atlassian; also a Zumba instructor)
Moderator: Catherine Jordan (Deque)
Session URL: https://www.deque.com/axe-con/sessions/the-accessible-design-specialists-playbook/
Slides PDF: https://www.deque.com/axe-con/wp-content/uploads/2025/11/The-Accessible-Design-Specialists-Playbook_a11y.pdf
Transcript: https://www.deque.com/axe-con/wp-content/uploads/2025/11/The_Accessible_Design_Specialists_Playbook.txt
Track: Design

Key Thesis

Accessibility scales through building local community, not through relying on a central expert. Atlassian grew from 5 accessibility design specialists to ~60 across four cohorts by embedding expectations into growth profiles, creating a structured 12-week specialist program, and measuring what matters (confidence + impact, not attendance). The playbook: embed expectations → build local capability → formalize and measure.

Context: Atlassian's Accessibility Journey

Three-Chapter Playbook

Chapter 1: Adding Accessibility to Design Growth Profiles

What: Embedded accessibility knowledge expectations across all designer levels, from Grad Designer through Senior Principal Designer.

Level-by-level expectations:

Grad Designer: Understand accessible design fundamentals, WCAG P.O.U.R. principles, and the 5 major disability cohorts. Understand accessible use of color.
Designer: Understand accessible form design and use of images. Basic understanding of accessible information architecture, text resizing, and basic keyboard interaction (focus order, reading order). Consistently use available accessible design resources.
Senior Designer: Strong understanding of motion design principles and their impact on UX. Implement complex architectures, create accessible interfaces, optimize navigation systems. Participate in accessibility design reviews.
Senior/Lead (Specialist): Set the quality bar by producing accessible design specs. Give accessibility feedback and review design specs. Be the go-to person for a11y questions on the team. Lead local a11y workshops. Cooperate on global a11y initiatives.

Why it matters: Growth profiles make accessibility a career expectation, not an optional interest. Every designer knows what's expected at their level.

Chapter 2: Creating the "Accessible Design Specialists" Community

Core message: "Accessibility is a team sport."

Specialization Program vs. Mentorship Program (Circle):

Motivation
  Specialization Program: Company needs + participant growth interest
  Mentorship Circle: Peer-to-peer, general interest

Curriculum
  Specialization Program: Based on official growth profiles
  Mentorship Circle: Built by mentors and mentees together

Assessment
  Specialization Program: Self-assessment + formal graduation
  Mentorship Circle: Informal, self-assessed

Expectations
  Specialization Program: Formal expectations from growth profiles after graduation
  Mentorship Circle: None formal

Program structure — 7 live sessions over ~12 weeks:

  1. Kick-off
  2. Being a specialist
  3. Annotations Workshop
  4. Accessible design review
  5. Giving feedback on a11y
  6. Discussion Panel, Q&A
  7. Graduation

Growth timeline:

Oct 2023: New growth profiles + specializations (5 specialists)
Mar-Jun 2024: First cohort (27 cumulative, +22)
Oct-Dec 2024: Second cohort (61 cumulative, +34)
Oct-Dec 2025: Third cohort (72 cumulative, +11)

Current ratio: ~10:1 designer-to-specialist ratio across the design org.

Chapter 3: Evolving the Program — Formalization, Pulse, Buddy Program

Three pillars of formalization:

  1. Clear structure — Program lead, reinforced expectations, leadership backing. "Being real about my time commitment as program lead."
  2. Regular rhythms — Fortnightly Jam Sessions, Specialists Pulse (quarterly surveys), checkpoints, office hours, feedback loops.
  3. Learning & development — Specialists Buddy mentoring program (90-day head start for new specialists pairing with experienced ones).

Specialists Pulse — measurement system:

Specialists Survey: Quarterly, month 2, open ~6 weeks. Anonymous. Measures confidence, support, motivation (Likert scale). "In one word, describe your experience."
Managers Survey: Quarterly, month 3, open ~3 weeks. Tracks completion rates. Measures the specialist's contribution to team a11y capability and awareness impact. "Share one example of how your specialist made a difference."
Report + Action Plan: After surveys close. Synthesizes findings into actions.
Specialists Retro: Month 3, multiple time zones. Bright spots, Frictions, 5 Whys columns.

The Critical Insight: Engagement ≠ Impact

What Pawel assumed: Specialists weren't showing up to Jam Sessions → they must be demotivated and no longer want to be specialists.

What the data showed:

Motivation (Specialists Survey): Consistently high — most responses at 4-5 on a 5-point scale across both Jul-Sep and Oct-Dec 2025 quarters.

Manager-reported impact on team awareness: Strong and improving — majority of managers rated specialist influence at 4-5 on a 5-point scale.

Confidence (Specialists Survey): Very high — overwhelming majority at 4-5, with improvement from Jul-Sep to Oct-Dec 2025.

Conclusion: Low Jam Session attendance reflected scheduling friction, not declining motivation. "Engagement is not the same as impact." This distinction refocused the program on outcomes rather than attendance metrics.

Scaling Recommendations (Any Org Size)

  1. Make accessibility visible — through growth profiles, checklists, shared language
  2. Build community — mentorship circles, champion programs, or specialist roles tailored to your org culture
  3. Start small and formalize gradually — begin with "one or two people interested," add structure as impact becomes evident

For small organizations: "It can be you and five other people interested in accessibility as a mentorship circle and then you start to add it to growth profiles." Formal specialization isn't required — embedding a11y entirely in growth profiles can work.

Specialist's Toolbox (from slides)

The program equipped specialists with practical tools:

Quotable Moments

"Accessibility is a team sport." — Pawel Wodkowski

"Engagement is not the same as impact." — Pawel Wodkowski, on low Jam Session attendance vs. high survey motivation

"Start with your need. Finish with theirs." — Pawel Wodkowski (playbook principle)

"Start small and scrappy, but be ready to formalize it." — Pawel Wodkowski

"Accessibility scales through building a local community." — Pawel Wodkowski

"From checkbox to culture — the shared habits, assumptions, and routines that make accessible outcomes the default." — Pawel Wodkowski

"It can be you and five other people interested in accessibility as a mentorship circle." — Pawel Wodkowski, on getting started at any org size

Q&A Highlights

On scaling to smaller organizations:

On specialization vs. baseline:

On single specialist leadership:

On future measurement:

Resources from This Session


Session 6: Integrating Axe for Automated Testing in a Distributed Engineering Environment

Speakers:

Session URL: https://www.deque.com/axe-con/sessions/integrating-axe-for-automated-testing-in-a-distributed-engineering-environment/ Slides PDF: https://www.deque.com/axe-con/wp-content/uploads/2025/11/Integrating-Axe-for-automated-testing-in-a-distributed-engineering-environment_A11Y.pdf Transcript: https://www.deque.com/axe-con/wp-content/uploads/2025/11/Integrating-axe-for-automated-testing-in-a-distributed-environment.txt Track: Development

Key Thesis

Scaling accessibility automation across a large, decentralized enterprise (100+ engineering orgs, 250+ applications, products ranging from brand new to 50 years old) requires a hybrid model, pragmatic compromises, and custom tooling to bridge the gap between what vendor platforms provide and what enterprise oversight demands.

About Thomson Reuters

The Core Challenge

How do we scale accessibility across a diverse, complex, and decentralized organization?

Approach: Manual vs. Automated vs. Hybrid

Approach Pros Cons
Manual Shift-left mindset, a11y as first-class citizen, deep expertise at all stages Resource constraints, process bottlenecks, cost
Automated Scalable, cost effective, empowers individual contributors, consistency Covers only 30-60% of issues, introduces a11y debt, technically conformant ≠ accessible
Hybrid (TR's choice) Best of both Requires careful orchestration

TR's hybrid strategy:

  1. Accessibility-first mindset, shift left as far as possible
  2. Empower designers and developers with training + materials
  3. Adopt AI a11y assistants for quick questions
  4. Provide dedicated accessibility expertise where most needed
  5. Automated a11y testing early and often:
  6. Manual evaluations and audits for major releases
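"Early and often" in practice usually means wiring axe into UI tests teams already run. A minimal sketch of one common path, Playwright plus the @axe-core/playwright package (the gating helper is pure and illustrative; the Playwright wiring is shown as comments so the sketch stays self-contained, and none of this is TR's actual pipeline):

```typescript
// Hypothetical wiring into an existing Playwright UI test:
//
//   import { test } from "@playwright/test";
//   import { AxeBuilder } from "@axe-core/playwright";
//
//   test("home page has no blocking a11y violations", async ({ page }) => {
//     await page.goto("http://localhost:3000/");
//     const results = await new AxeBuilder({ page }).analyze();
//     assertNoBlocking(results.violations);
//   });

interface Violation {
  id: string;             // axe rule id, e.g. "image-alt"
  impact: string | null;  // "minor" | "moderate" | "serious" | "critical"
}

// Gate only on serious/critical findings so teams carrying legacy a11y debt
// can adopt the check immediately and ratchet the bar up later.
function assertNoBlocking(violations: Violation[]): void {
  const blocking = violations.filter(
    (v) => v.impact === "serious" || v.impact === "critical",
  );
  if (blocking.length > 0) {
    throw new Error(
      `Blocking a11y violations: ${blocking.map((v) => v.id).join(", ")}`,
    );
  }
}
```

Filtering by impact is a pragmatic compromise: a minimum standard with flexibility, rather than an all-or-nothing gate.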

Technical Reality: Diverse Stacks

Test automation tools in use:

CI/CD platforms:

This diversity is the defining challenge — no single integration path works for everyone.

Axe Tools Adopted

Six Challenges and Solutions

1. Limits of Automation

Problem: Automated tools cover only 30-60% of possible a11y issues. Some applications can't be easily tested. Tools lack subjective understanding (though AI rules are improving this).

Solution: Hybrid manual/automated model. Developer/specialist/tester training. Comprehensive library of a11y courses (Deque University).

2. Technical Implementation — Axe Developer Hub

Problem: Axe Developer Hub relies on existing UI automation tests. Integration was often challenging across diverse stacks.

Solution:

3. Technical Implementation — Axe Watcher

Problem: Not all products use technologies or processes natively compatible with Axe Watcher.

Solution:

4. Technical Implementation — Axe Linter

Problem: Axe Linter sends code to external systems. Security and IP concerns at enterprise scale.

Solution:

5. Organization and Oversight — Enterprise Management

Problem: Axe Developer Hub doesn't provide out-of-the-box enterprise management tools. Only basic per-project metrics.

Solution:
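One shape such custom tooling can take is a small aggregation layer over per-project results: pull each project's findings and roll them up by business unit. A minimal sketch with hypothetical data shapes (axe Developer Hub's real export format will differ):

```typescript
// Hypothetical per-project record exported from the vendor tool.
interface ProjectResult {
  businessUnit: string;
  project: string;
  violations: number;
}

// Roll per-project violation counts up to business-unit totals,
// giving the org-wide view the vendor dashboard doesn't provide.
function rollupByBusinessUnit(results: ProjectResult[]): Map<string, number> {
  const totals = new Map<string, number>();
  for (const r of results) {
    totals.set(r.businessUnit, (totals.get(r.businessUnit) ?? 0) + r.violations);
  }
  return totals;
}
```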

6. One Size Does Not Fit All

Problem: Program assumptions (e.g., "test every PR") don't match organizational realities. Complex product families had their best coverage in regression test suites, which carry real costs and are deliberately scoped.

Solution:

Results

Metric Value
Applications integrated with Axe Developer Hub 250+ across 23 business units
Individual test runs performed 12,000+
Issue trend Overall reduction in reported issues in majority of tested applications
Compliance milestones Met several regulatory milestones
Cultural impact Increased engineering team awareness of accessibility

2026 Roadmap

  1. Integrate Axe Developer Hub into remaining products and applications
  2. Mobile apps (iOS and Android)
  3. AI-powered remediation with human in the loop
  4. Internal Developer Portal (IDP) to increase visibility of axe workflows

Key Takeaways

  1. Automated testing is a starting point, not gospel — covers 30-60% of issues, manual testing remains essential
  2. Enterprise reality requires custom tooling — vendor dashboards don't provide org-wide visibility; build your own aggregation layer
  3. Security concerns are real — axe Linter's external code transmission required an internal self-hosted server
  4. Flexibility beats dogma — "test every PR" is ideal but complex products may need periodic cadence instead; define minimum standards with flexibility
  5. Training and cultural buy-in matter as much as tools — tools find issues, culture fixes them
  6. Balance deadline pressure with accessibility — the hybrid model acknowledges this tension

Quotable Moments

"Is it perfect? Nope. (But nothing is.)" — Slide on axe tools adoption

"Automated tools cover between 30-60% of possible accessibility issues." — Thomson Reuters team

"Technically conformant but inaccessible experiences" — on the risk of automation-only approaches

"Many organisations are waking up to the fact that embracing accessibility leads to multiple benefits — reducing legal risks, strengthening brand presence, improving customer experience and colleague productivity." — Paul Smyth, Barclays (cited)

"Accessibility is a core value... something we view as a basic human right." — Sarah Herrlinger, Apple (cited)

Resources from This Session


Session 7: Shifting Left — Building an Ecosystem to Scale Accessibility

Speakers:

Moderator: Jon (Deque) Session URL: https://www.deque.com/axe-con/sessions/shifting-left-building-an-ecosystem-to-scale-accessibility/ Transcript: https://www.deque.com/axe-con/wp-content/uploads/2025/11/Shifting_left_Building_an_ecosystem_to_scale_accessibility.txt Slides PDF: https://www.deque.com/axe-con/wp-content/uploads/2025/11/Shifting-left_-Building-an-ecosystem-to-scale-accessibility_a11y.pdf (note: PDF contains different session content — "Shift-up Accessibility" by D2L/AT&T — appears to be a mis-upload by Deque) Track: Organizational Success with Accessibility

Key Thesis

Accessibility at a top-20 US bank required a 14-year cultural transformation — from a single passionate front-end developer asking "What about accessibility?" to an ecosystem where accessibility analysts are embedded in cross-functional squads, 40% of staff have completed a11y training, and WCAG 2.2 is in vendor contracts. "Shift left" means starting at discovery, not development. "Better is better."

About Regions Bank / Regions XD

14-Year Journey: Three Maturation Phases

Phase 1: Introduction (2013-2014)

Phase 2: Growth (4-year span)

Phase 3: Ecosystem Building (recent)

Key Organizational Insight: Where A11y Analysts Belong

Initial assumption: Accessibility expertise belongs in technology/development teams.

What they learned: Greater value in positioning accessibility analysts within the design organization, closer to product strategy and user research. This is the literal "shift left" — moving a11y expertise from the end of the pipeline (development/QA) to the beginning (design/discovery).

Org Structure Evolution

Cultural Alignment

Connected accessibility to Regions' core corporate values:

  1. Put people first
  2. Do what is right
  3. Focus on your customer
  4. Reach higher
  5. Enjoy life

This framing moved accessibility from "compliance obligation" to "core value expression."

Training and Upskilling

Audit Results

Partnered with Deque for reviews of three strategically selected platforms:

  1. High-visibility website
  2. Greenfield product
  3. Enterprise authentication system

Key finding: ~50% of identified issues traced to third-party components — validating the need for vendor management and contractual WCAG requirements.

Accessibility Analyst Team

Overcoming Resistance

"Accessibility slows us down" concern:

Leadership viewing a11y as pure compliance:

  1. Start with empathy exercises involving engineers and business leaders
  2. Connect accessibility to customer-focused organizational mission
  3. Share documented successes with peers and leadership
  4. Emphasize usability improvements benefit ALL users
  5. Tell stories connecting accessibility work to actual customer experiences

Future Initiatives

Quotable Moments

"You start with empathy. Until someone knows and understands the value, you cannot have empathy for it." — Todd Keith

"Better is better." — Todd Keith

"Accessibility is always a work in progress — that is our theme today and our takeaway." — Katrina Lee

"What about accessibility?" — The original question from a single passionate front-end developer that started Regions' 14-year journey

Key Takeaways

  1. Shift left = move a11y into design, not just development — accessibility analysts belong closest to product strategy and user research, not at the end of the pipeline
  2. Connect a11y to existing corporate values — framing accessibility as "put people first" and "do what is right" resonates more than compliance arguments
  3. 50% of audit issues came from third-party code — vendor management and contractual WCAG requirements are essential
  4. Squad-based embedding > separate a11y team — daily collaboration with designers and developers amplifies influence; leaders start requesting a11y review proactively
  5. Empathy first, then compliance — empathy exercises and customer stories move people more than legal arguments
  6. "Better is better" — perfectionism is the enemy; continuous improvement beats waiting for perfection

Resources from This Session


Session 8: Making Platform React Chart Components Accessible

Speaker: Ambika Yadav, Visualization Engineer, Atlassian Visualization Platform (based in Seattle; MA in Media, Arts and Technology from UC Santa Barbara) Session URL: https://www.deque.com/axe-con/sessions/making-platform-react-chart-components-accessible/ Slides PDF: https://www.deque.com/axe-con/wp-content/uploads/2025/11/Accessible-Platform-Chart-Components_a11y.pdf Transcript: https://www.deque.com/axe-con/wp-content/uploads/2025/11/Making_Platform_React_Chart_Components_Accessible.txt Track: Development

Key Thesis

Charts shape decisions across democracy, health, climate, and business — and when they're inaccessible, entire populations are excluded from information that affects their lives. Building accessible charts at platform scale (React components used across Atlassian products) requires addressing 10 distinct areas: design context, colors, pattern fills, focus management, screen readers, data tables, AI insights, tactile charts, sonification, and mobile.

Why Accessible Charts Matter

Charts influence decisions in:

Who is excluded when charts aren't accessible:

Barrier Approximate Prevalence Source
Blind/low vision users can't read data 1 in 4 people WHO/CDC
Colorblind users can't see differences 1 in 20 people
Cognitive disabilities can't interpret 1 in 10 people
Motion sensitive users must avoid it 1 in 100 people
Non-mouse users can't explore effectively 1 in 7 people

Framework: Chartability Heuristics (built on WCAG POUR)

Principle Chart Application
Perceivable Clear titles/context, meaningful alt text, non-visual data access (table/summary), don't rely on color alone
Operable All content reachable via keyboard (not mouse-only), motion/animation doesn't trap or harm
Understandable Logically ordered, clearly labeled, consistent AT announcements, predictable interactions
Robust Solid semantics + ARIA, works across browsers and screen readers, validated by actual testing

10 Areas for Accessible Charts

1. Design — Context and Labeling

A chart without context is useless. Required elements:

Platform requirement: React chart components must expose customization for ALL chart elements via props, hooks, or composable patterns.
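The customization requirement can be made concrete as a props contract plus a completeness check. A sketch with illustrative names (not Atlassian's actual component API):

```typescript
// Every contextual element of the chart is exposed for customization.
// Prop names here are illustrative, not the platform's real API.
interface AccessibleChartProps {
  title: string;          // what the chart shows, in plain language
  description?: string;   // longer non-visual summary for screen readers
  xAxisLabel: string;
  yAxisLabel: string;
  seriesLabels: string[]; // one human-readable label per series
}

// A chart without context is useless: reject configs missing required labels.
function hasRequiredContext(p: AccessibleChartProps): boolean {
  return (
    p.title.trim().length > 0 &&
    p.xAxisLabel.trim().length > 0 &&
    p.yAxisLabel.trim().length > 0 &&
    p.seriesLabels.length > 0
  );
}
```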

2. Colors — Contrast and Palettes

Tools: Color Brewer (colorbrewer2.org), Chroma.js (gka.github.io/palettes), Viz Palette (projects.susielu.com/viz-palette)
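The tools above all rest on the same WCAG 2.x math: relative luminance feeding a contrast ratio, with 4.5:1 the bar for body text and 3:1 for large text and essential graphics. A self-contained sketch of that computation:

```typescript
// Linearize one sRGB channel (0-255) per the WCAG relative-luminance formula.
function channel(c: number): number {
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

// Relative luminance of an sRGB color.
function luminance([r, g, b]: [number, number, number]): number {
  return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
}

// WCAG contrast ratio: (lighter + 0.05) / (darker + 0.05), from 1:1 to 21:1.
function contrastRatio(
  a: [number, number, number],
  b: [number, number, number],
): number {
  const [hi, lo] = [luminance(a), luminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}
```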

3. Pattern Fills — Beyond Color

Patterns (stripes, dots, crosshatching) provide secondary encoding for colorblind users.

Implementation:

Demonstrated across color blindness types:

Each type shown before/after pattern fills — dramatic improvement.
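In SVG-based React charts, a pattern fill is typically a `<pattern>` definition that a mark references via `fill="url(#id)"`. A minimal sketch that overlays stripes on a series color (dimensions and naming are illustrative):

```typescript
// Build an SVG <pattern> def that stripes a series color, giving colorblind
// users a second visual channel alongside hue.
// A mark then references it: <rect fill="url(#series-0)" ... />
function stripePattern(id: string, color: string): string {
  return [
    `<pattern id="${id}" width="6" height="6"`,
    `         patternUnits="userSpaceOnUse" patternTransform="rotate(45)">`,
    `  <rect width="6" height="6" fill="${color}" />`,
    `  <line x1="0" y1="0" x2="0" y2="6" stroke="white" stroke-width="2" />`,
    `</pattern>`,
  ].join("\n");
}
```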

4. Focus Management — Keyboard Navigation

Critical design decisions:

Implementation:

Must include: Visible helper text describing keyboard navigation ("Use arrow keys to navigate between data points, Escape to exit chart")
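The single-Tab-stop model reduces to a small piece of key handling: the container is the only tabbable element, and arrow keys move an internal active-point index. A framework-agnostic sketch (the -1 convention for "focus returns to the container" is an assumption of this example, not a spec from the session):

```typescript
// Given the current active data-point index and a pressed key, compute the
// next index. Indices clamp at the ends; Escape exits point navigation.
function nextIndex(current: number, key: string, length: number): number {
  switch (key) {
    case "ArrowRight": return Math.min(current + 1, length - 1);
    case "ArrowLeft":  return Math.max(current - 1, 0);
    case "Home":       return 0;
    case "End":        return length - 1;
    case "Escape":     return -1;      // -1: focus back on the chart container
    default:           return current; // other keys fall through to the browser
  }
}
```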

5. Screen Reader Interface

Chart container:

Chart marks (individual data points):

Reference for alt text: medium.com/nightingale/writing-alt-text-for-data-visualization-2a218ef43f81
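Consistent announcements per data point (series, category, value, position) can come from one small formatter. Names are illustrative, not the platform's actual API:

```typescript
interface Point {
  series: string;
  category: string;
  value: number;
}

// Format the accessible name for one chart mark, including "n of m" position
// so screen reader users know where they are in the series.
function pointLabel(p: Point, index: number, total: number): string {
  return `${p.series}, ${p.category}: ${p.value}. Point ${index + 1} of ${total}.`;
}
```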

6. Data Table Alternative

Always provide a data table alongside the chart — accessible, searchable, and sortable. Tables and visual charts serve different accessibility needs; tables don't replace labels, and both are necessary.

7. AI Insights

AI-generated text descriptions of chart patterns and trends — helps screen reader users understand the "story" the chart tells without navigating every data point.

8. Tactile Charts

Prototype designs with smart defaults for tactile graphics — physical representations for blind users. Reference: vis.csail.mit.edu/pubs/tactile-vega-lite.pdf

9. Sonification

Map data values to audio frequencies so users can hear patterns and trends. A rising line chart becomes a rising pitch.
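A common sonification baseline is a linear map from the data range onto an audible band, e.g. 220-880 Hz. The band and the linear scale here are illustrative choices, not a spec from the session:

```typescript
// Map a data value onto a pitch: the series minimum plays the low end of the
// band, the maximum plays the high end, and a rising line rises in pitch.
function toFrequency(
  value: number,
  min: number,
  max: number,
  lowHz = 220,
  highHz = 880,
): number {
  if (max === min) return (lowHz + highHz) / 2; // flat series: one mid tone
  return lowHz + ((value - min) / (max - min)) * (highHz - lowHz);
}
```

In a browser, each frequency would then drive something like a Web Audio oscillator, one short tone per data point.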

10. Mobile Accessibility (mentioned in Q&A)

Mobile chart accessibility requires different navigation paradigms than desktop — an area for future exploration.

Q&A Highlights

On label density and overwhelm:

On pattern "busyness" in bar charts:

On data tables vs. labels:

On enabling patterns:

Quotable Moments

"Charts shape decisions across almost every part of life." — Ambika Yadav

"If charts are not accessible, 1 in 4 people can't read the data, 1 in 20 can't rely on colors, 1 in 7 can't explore without a mouse." — Ambika Yadav (combining WHO/CDC data)

"Don't add tab stops to every chart element. Treat the chart as a single Tab stop and use internal navigation." — Ambika Yadav

"These solutions aren't perfect, but researchers, designers and engineers are steadily pushing the boundaries of chart accessibility." — Ambika Yadav

Resources from This Session


Session 9: CSAT as a Tool for Accessibility Insights — Arizona State University

Speaker: Victoria Polchinski, Lead UX Researcher, Arizona State University Session page: https://www.deque.com/axe-con/sessions/csat-as-a-tool-for-accessibility-insights-lessons-from-arizona-state-university/ Slides PDF: https://www.deque.com/axe-con/wp-content/uploads/2025/11/CSAT-as-a-tool-for-accessibility-insights_-Lessons-Learned-from-Arizona-State-University-1_A11Y.pdf Transcript: https://www.deque.com/axe-con/wp-content/uploads/2025/11/CSAT_as_a_Tool_for_Accessibility_Insights.txt Track: Research / UX

Key Thesis

CSAT (Customer Satisfaction) surveys can be a powerful, low-cost mechanism to surface accessibility insights — if you add one question ("Do you use assistive technologies?") and disaggregate the data. ASU used this to build a panel of 800+ AT users and identify a consistent ~5-point satisfaction gap between AT and non-AT users.
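The mechanics behind the gap figure are simple once the AT question is in the survey: split responses on it and compare means. A sketch with made-up scores (the ~5-point gap itself is ASU's survey finding, not something derived from this toy data):

```typescript
interface Response {
  score: number;   // satisfaction on a 0-100 scale
  usesAT: boolean; // answer to "Do you use assistive technologies?"
}

// Disaggregate: mean satisfaction of non-AT users minus mean of AT users.
// A positive result means AT users are less satisfied.
function satisfactionGap(responses: Response[]): number {
  const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;
  const at = responses.filter((r) => r.usesAT).map((r) => r.score);
  const nonAT = responses.filter((r) => !r.usesAT).map((r) => r.score);
  return mean(nonAT) - mean(at);
}
```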

The Recipe (4-step framework, baking metaphor)

Step 1 — Planning:

Step 2 — Execution:

Step 3 — Analysis:

Step 4 — Recruitment & insights:

Consistent Accessibility Themes

  1. Customization (most common request) — dark mode, text size, spacing, font, color, contrast control
  2. Law of Proximity — keeping related labels/actions/feedback visually close, critical for low-vision/magnification users
  3. Flexibility — async/online preferred by students with disabilities, accommodates varying energy levels

Limitations Acknowledged

Quotable Moments

"Nothing About Us Without Us" — disability community mantra, framing the entire approach

"Thank you for creating such a survey... we are often not heard enough." — disabled student feedback

Resources


Session 10: Scaling Accessibility in a Complex Enterprise — Wolters Kluwer

Speaker: Ryan Schoch, Director of UX Advisory Services, Wolters Kluwer Session page: https://www.deque.com/axe-con/sessions/scaling-accessibility-in-a-complex-enterprise-lessons-from-audits-adoption-and-shared-practices/ Slides PDF: https://www.deque.com/axe-con/wp-content/uploads/2025/11/Scaling-Accessibility-in-a-Complex-Enterprise-Lessons-from-Audits-Adoption-and-Shared-Practices_a11y.pdf Transcript: https://www.deque.com/axe-con/wp-content/uploads/2025/11/Scaling_Accessibility_in_a_Complex_Enterprise.txt Track: Design / Organization

Key Thesis

Scaling accessibility in complex enterprises isn't primarily a knowledge gap — it's a systems problem. The hard part is getting large numbers of teams to interpret accessibility expectations the same way. "Scaling before understanding amplifies variance."

Core Problem

Why Traditional Approaches Fail

The conventional model (roles + training + tools → conformance) assumes improving parts automatically normalizes the whole. In complex systems:

Solutions: Interaction-First Design System

1. Design system evolution:

2. Interaction-first approach:

3. Explicit interaction expectations:

4. Reframing:

The Reinforcing Loop

Shared understanding → interaction expectations → consistent use → improved feedback → reduced variance → stronger shared understanding. Patterns normalize "not because they exist, but because they are consistently expected in the process."

Practical Recommendations

Cross-session Connections

Quotable Moments

"The hard part is really in large numbers of teams to interpret accessibility expectations in the same way." — Ryan Schoch

"Scaling before understanding amplifies variance." — Ryan Schoch

"Good intentions don't normalize systems. Structure does." — Ryan Schoch

"Patterns normalize not because they exist, but because they are consistently expected in the process." — Ryan Schoch


Session 11: [PLACEHOLDER — more content to follow]


Blog Post Angle (Working Notes)

Working title ideas:

Emerging narrative thread (10 sessions in):

Session 8 adds the deep-technical dimension. Previous sessions covered org strategy, culture, and tooling. This one zooms into a single component type (charts) and shows the depth of work required to make even one UI pattern truly accessible. It's the "how hard this actually is" session — 10 distinct areas, each with specific implementation details.

Cross-session connections (updated with Session 8):

All previous connections remain valid. New additions: