Copilot Accessibility Principles
Designing generative and agentic AI experiences that work well for everyone.
These principles define what it means for Copilot to be an accessible and inclusive AI collaborator for people with disabilities. They’re grounded in findings from user research and real‑world use, and describe what good looks like when AI assists, responds, and collaborates with users: clear, predictable, flexible, and supportive of a wide range of access needs.
Generative and agentic AI experiences introduce new accessibility challenges and opportunities. Unlike static interfaces, Copilot can generate content, take actions, and adapt its behavior over time. As AI becomes more capable, GitHub users may find themselves working with several agents at once, each doing different things and responding in different ways. Without good design, that can quickly become confusing. When agents behave inconsistently or change direction without clear feedback, it’s easy to lose track of what’s happening or why.
Accessible design helps keep that complexity in check. It ensures interactions stay clear, consistent, and easy to follow, no matter how many agents are involved. When experiences are designed to be understandable and forgiving, they support everyone’s ability to stay focused, spot mistakes early, and remain in control.
Accessibility best practices are assumed throughout. These principles build on established standards and guidelines such as the Web Content Accessibility Guidelines (WCAG), and should be implemented in ways that support assistive technologies and other access needs. For implementation details, refer to the complementary Copilot Accessibility Practices.
How to use these principles
Use these principles throughout the design process, especially when planning new features.
- Ask Copilot to evaluate feature specs or scenarios against the principles.
- Annotate designs with relevant principles using the Annotation Toolkit.
- Critique UI designs and flows against the principles during reviews.
Principle 1: Clear and understandable
Copilot should use language, visuals, and actions that everyone can easily understand.
Why this matters
When language or visuals are hard to interpret, people with cognitive disabilities or low vision can’t participate fully. Large or unstructured blocks of text can also be difficult to follow for deaf users whose primary language is a sign language rather than written text. Clarity lowers barriers for everyone and is essential for people who rely on predictable, structured information to navigate and understand content.
Examples
- Use plain language and avoid unexplained jargon.
- Provide accessible alternatives for any non‑text or multimodal outputs.
- Ensure Copilot’s responses are visually perceivable and easy to read.
- Structure Copilot’s output with clear headings, labels, or formatting when relevant.
- Show essential details by default, with options to expand for more information (see the sketch after this list).
- Offer accessible help and guidance when users need it.
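As one way this can look in practice, the sketch below renders a Copilot response with a semantic heading, a plain‑language summary shown by default, a text alternative for any non‑text output, and supporting detail behind a native disclosure. It is an illustrative TypeScript/React sketch, not Copilot’s actual UI; the component and prop names (CopilotResponse, summary, details, image) are assumptions for this example.

```tsx
// Illustrative sketch only: structured, progressively disclosed Copilot output.
// Component and prop names are hypothetical.
import React from "react";

interface CopilotResponseProps {
  summary: string;                      // the essential answer, in plain language
  details?: string;                     // supporting detail, collapsed by default
  image?: { src: string; alt: string }; // any non-text output ships with a text alternative
}

export function CopilotResponse({ summary, details, image }: CopilotResponseProps) {
  return (
    <section aria-label="Copilot response">
      {/* A heading gives the response a clear, navigable structure */}
      <h3>Copilot’s answer</h3>

      {/* Essential details are visible by default */}
      <p>{summary}</p>

      {/* Non-text output always carries a text alternative */}
      {image && <img src={image.src} alt={image.alt} />}

      {/* More information is available on request, not forced on the reader */}
      {details && (
        <details>
          <summary>More detail</summary>
          <p>{details}</p>
        </details>
      )}
    </section>
  );
}
```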
Principle 2: Transparent and predictable
Users should always know what Copilot is doing, why it’s doing it, and what will happen next.
Why this matters
AI systems can act in ways that are invisible or ambiguous. When users do not know what is happening or why, they can lose trust and control. This is especially true for people using assistive technology, who may not perceive subtle visual cues. Clear communication about actions, reasoning, and authorship helps users follow the conversation.
Examples
- Announce Copilot’s actions and intentions in plain language.
- Make it clear who is speaking and when content is generated by Copilot.
- Differentiate between user messages, Copilot’s replies, Copilot’s reasoning or internal dialogue, and any generated content.
- Identify each agent clearly, including its role and transitions between agents.
- Provide clear status and outcome updates in ways that work reliably with assistive technologies (see the sketch after this list).
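The TypeScript/React sketch below is one hedged illustration of these ideas: each message is explicitly attributed to its author or agent, and status updates are exposed through a polite live region so assistive technologies announce them. The Message type, author labels, and status strings are assumptions, not a real Copilot API.

```tsx
// Illustrative sketch only: explicit authorship plus announced status updates.
// Types and labels are assumptions for this example.
import React from "react";

type Author = "user" | "copilot" | "copilot-reasoning" | "agent";

interface Message {
  author: Author;
  agentName?: string; // which agent produced this, if applicable
  text: string;
}

const authorLabels: Record<Author, string> = {
  user: "You",
  copilot: "Copilot",
  "copilot-reasoning": "Copilot (reasoning)",
  agent: "Agent",
};

export function Conversation({ messages, status }: { messages: Message[]; status: string }) {
  return (
    <div>
      {/* Each message is explicitly attributed, so authorship never relies on styling alone */}
      <ol aria-label="Conversation">
        {messages.map((m, i) => (
          <li key={i}>
            <strong>{m.agentName ?? authorLabels[m.author]}:</strong> {m.text}
          </li>
        ))}
      </ol>

      {/* Status updates ("Searching the repository…", "Done") are announced politely
          and do not depend on subtle visual cues */}
      <p role="status" aria-live="polite">
        {status}
      </p>
    </div>
  );
}
```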
Principle 3: Adaptable to the user’s way of working
Copilot should adjust to the user’s pace and preferred ways of interacting, not the other way around.
Why this matters
People interact with technology differently. Some process information quickly; others need more time to review and respond. Some prefer to work with the keyboard; others use voice or touch. When users can control how they work with Copilot, the interaction becomes more inclusive and effective.
Examples
- Allow users to control speed, level of detail, and format of output.
- Allow users to choose how much autonomy Copilot has.
- Avoid time limits on taking action or reviewing information.
- Support multiple input and interaction methods.
- Allow users to set and save their own preferences (see the sketch after this list).
- Provide icebreaker prompts or suggestions to help users get started.
- Be proactive in offering help or adaptations based on the way the user is working.
- Allow users to share Copilot sessions so they can get support or collaborate with others when needed.
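One possible shape for user‑controlled settings is sketched below in TypeScript: a small preferences object covering verbosity, autonomy, output format, and time limits, saved locally so it persists across sessions. The field names, values, and storage key are illustrative assumptions, not real Copilot settings.

```ts
// Illustrative sketch only: a user-controlled preference object persisted locally.
// Field names, values, and the storage key are assumptions for this example.
interface CopilotPreferences {
  verbosity: "brief" | "standard" | "detailed";                        // level of detail in responses
  autonomy: "suggest-only" | "ask-before-acting" | "act-and-report";   // how much Copilot does on its own
  outputFormat: "prose" | "bulleted" | "code-first";                   // preferred response format
  confirmBeforeTimeouts: boolean;                                      // never expire a review step without asking
}

const STORAGE_KEY = "copilot-preferences"; // hypothetical key

const defaults: CopilotPreferences = {
  verbosity: "standard",
  autonomy: "ask-before-acting",
  outputFormat: "prose",
  confirmBeforeTimeouts: true,
};

// Preferences are set once by the user and applied everywhere
export function loadPreferences(): CopilotPreferences {
  const raw = localStorage.getItem(STORAGE_KEY);
  return raw ? { ...defaults, ...JSON.parse(raw) } : defaults;
}

export function savePreferences(prefs: CopilotPreferences): void {
  localStorage.setItem(STORAGE_KEY, JSON.stringify(prefs));
}
```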
Principle 4: Flexible and forgiving
Copilot should make it easy to experiment, iterate, and recover from mistakes.
Why this matters
Trying new things means making mistakes. Recovering from those mistakes is harder for some users, especially when AI makes complex changes or behaves unexpectedly, and especially for people who rely on assistive technologies or have memory or processing differences. Copilot should make it easy for everyone to experiment, iterate, and get back on track.
Examples
- Allow users to review, correct, or reverse actions as needed.
- Preserve a record of interactions so users can retrace or restore previous steps (sketched below).
- Make changes clear so users can understand what has been updated or affected.
- Explain problems clearly, help users recover, and suggest next steps.
- Preserve progress so users can easily pick up where they left off.
- Make experimentation safe, reversible, and low risk.
- Help users regain context after a break or interruption.
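As a rough illustration, the TypeScript sketch below keeps a plain‑language record of Copilot’s actions so users can review what happened and reverse the most recent change. The action shape and method names are assumptions for this example, not an actual Copilot interface.

```ts
// Illustrative sketch only: a minimal action log that lets users review and
// reverse Copilot's changes. The Action shape and method names are assumptions.
interface CopilotAction {
  description: string; // plain-language summary, e.g. "Renamed 3 files"
  apply: () => void;   // performs the change
  revert: () => void;  // restores the previous state
}

export class ActionHistory {
  private done: CopilotAction[] = [];

  // Every change goes through the history so it can always be retraced
  perform(action: CopilotAction): void {
    action.apply();
    this.done.push(action);
  }

  // Users can see what happened, in order, in plain language
  list(): string[] {
    return this.done.map((a, i) => `${i + 1}. ${a.description}`);
  }

  // The most recent change can be reversed without losing the rest of the record
  undoLast(): string | undefined {
    const last = this.done.pop();
    if (!last) return undefined;
    last.revert();
    return `Reverted: ${last.description}`;
  }
}
```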
Principle 5: Consistent and reliable
Copilot should look, sound, and behave consistently so users always know what to expect.
Why this matters
Predictable and stable behavior is essential for many users, particularly those who rely on assistive technologies or build up mental models of how a system works. When tone, output, or behavior varies across contexts or sessions, users may need to relearn patterns or reorient themselves. Inconsistency increases cognitive load and can disrupt established workflows.
Examples
- Use a consistent voice and style that work well with assistive technologies.
- Use Primer components where possible and follow platform conventions (see the sketch after this list).
- Keep behaviors and outputs predictable across contexts, sessions, and devices.
- Avoid sudden or unexplained changes.
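One way to encourage this consistency, sketched below in TypeScript/React, is to route every agent’s output through a single shared component so identification, structure, and progress indication look and behave the same everywhere. It assumes Primer React components such as Heading, Spinner, and Button; treat it as an illustration rather than a reference implementation.

```tsx
// Illustrative sketch only: one shared component for every agent's output,
// so structure, labeling, and progress indication stay consistent across agents.
// Assumes @primer/react components; exact props may differ.
import React from "react";
import { Heading, Spinner, Button } from "@primer/react";

interface AgentMessageProps {
  agentName: string;          // consistent identification, whichever agent is responding
  busy: boolean;              // working state, shown the same way everywhere
  children: React.ReactNode;  // the agent's output
  onStop?: () => void;        // a predictable way to interrupt work in progress
}

export function AgentMessage({ agentName, busy, children, onStop }: AgentMessageProps) {
  return (
    <section aria-label={`${agentName} message`}>
      <Heading as="h3">{agentName}</Heading>
      {busy ? (
        <>
          {/* The same progress pattern is used in every context */}
          <Spinner size="small" />
          {onStop && <Button onClick={onStop}>Stop</Button>}
        </>
      ) : (
        children
      )}
    </section>
  );
}
```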