Rive Blog

Accessible web animations: ARIA live regions

Implement accessibility features with dynamically changing content


Thursday, June 9, 2022


Welcome to our series on accessible web animations! We'll talk about various topics around accessibility (a11y), how it applies to animations, and how you can set up Rive animations for a better user experience for all types of users. If you're building web applications, whether you're a developer, a designer, or anywhere in between, this series is for you, friend.

In this blog, we'll chat in particular about ARIA live regions! ARIA stands for Accessible Rich Internet Applications and is essentially a toolset to help make web content more accessible to users with disabilities.

What are ARIA live regions?

Today, many web applications have content that changes or is inserted dynamically, whether via a button click, an API response, or something else. While this may visually draw the user's attention, screen readers may not pick up on the dynamic content and miss announcing it: enter ARIA live regions to the rescue!

Important content changes for users can be announced via screen readers by defining a live region. A live region can be set via HTML attributes on any element that encompasses the dynamic content:
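For example, a minimal live region might look like this (the wrapped content is up to your app):

```html
<!-- Any element wrapping the dynamic content can be a live region -->
<div aria-live="polite">
  <!-- Text inserted or changed here is announced by screen readers -->
</div>
```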

There are two values of note you can set on aria-live:

  • polite - Does not interrupt the screen reader's live announcements and instead waits until it is idle to announce the new content

  • assertive - Interrupts live screen reader announcements and is generally reserved for time-sensitive or critical notifications that need to be announced at the moment

With live regions, you can manipulate the content within these blocks, and screen readers in most browsers should reflect those changes to the user at the appropriate time, just as visual changes are reflected for users browsing without screen readers. This pattern helps bridge the gap toward a consistent user experience for all users.

How does this apply to animations?

Glad you asked (or maybe you didn't, but I'll tell you anyway)! Animations are increasingly used in many applications to help create unique, interactive experiences. The tricky part is: how do we create accessible experiences for users who rely on screen readers to traverse web applications?

Scenario: Imagine you have a screen showing a loader of sorts through an animation of a glass filling up. If this loading sequence is long enough, you might convey the loading progress through the animation. You may want to let screen readers know about the dynamic loading progress too so they can announce the activity to users. This way, users that utilize screen readers and users that can visually see the animation on a screen can both interpret loading progress.

First off, I highly recommend checking out this CSS-Tricks article about making the actual animations accessible, along with some other guidelines.

In this blog, let's focus on pairing an animation with a description of what is happening in it, a label for what is playing on the screen. We want live regions to describe an animation at the moment the user experiences it, rather than having the screen reader read the description wherever it happens to sit in the document. Let's look at an example in the next section where we announce a description of an animation only when it scrolls into view within the screen's viewport.

Using live regions with Rive at runtime

Check out the following Rive creation by JcToon in the community: https://rive.app/community/1738-3431-raster-graphics-example/

To describe the above to users experiencing a web app with this animation, we might include a description for screen readers that reads:

Image of a character skydiving and screaming as they descend through an infinite sky

Ok, maybe this description isn't the most critical content to raise to the user as an assertive alert, but imagine in our web app, ✨ it is ✨. To make this happen, we want to do two things:

  1. Add role="img" to the <canvas> element playing the animation in HTML (see more in this article on why that is)

  2. Create the live region that inserts the descriptive text when the animation is in screen view (imagine it is off-screen and further down the page to start)

When using the React runtime with Rive animations, the render loop of an animation does not start until the <canvas> is within view. We can use similar logic, via the Intersection Observer API, to show the descriptive text for this animation only when the canvas is in view.
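As a sketch of that logic (the names `setIsPlaying` and `canvasElement` here are illustrative, not from the project), the Intersection Observer callback only needs to report whether the observed canvas is intersecting the viewport:

```javascript
// Sketch: report visibility changes for a canvas element.
// IntersectionObserver is a browser API; the callback logic is
// isolated here so it can run anywhere.
function makeVisibilityHandler(setIsPlaying) {
  // Called by IntersectionObserver with the observed entries
  return (entries) => {
    for (const entry of entries) {
      setIsPlaying(entry.isIntersecting);
    }
  };
}

// In the browser, you would wire it up roughly like:
//   const observer = new IntersectionObserver(makeVisibilityHandler(setIsPlaying));
//   observer.observe(canvasElement);

// Simulated usage (no DOM needed):
let playing = false;
const handler = makeVisibilityHandler((visible) => { playing = visible; });
handler([{ isIntersecting: true }]);
console.log(playing); // true
```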

For step one, see the following snippet below that sets the role and an aria-describedby attribute that we'll use to connect the animation to the descriptive text:
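For instance (the ref name and `id` value here are placeholders for whatever your app uses), the canvas markup might look like:

```jsx
{/* Rive canvas exposed to screen readers as an image, described by
    the live-region text elsewhere on the page */}
<canvas
  ref={canvasRef}
  role="img"
  aria-describedby="animation-description"
/>
```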

For step two, you can create a live region with the aria-live attribute and give it either the polite or assertive value, depending on whether you want to interrupt the screen reader to read the animation description. We'll set up this attribute and the logic for dynamically showing the animation text in the example below. isPlaying represents a React state variable that is true when the animation is in the screen viewport and false otherwise.
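A minimal sketch of that live region, assuming isPlaying is tracked as described above (element names and the `id` are illustrative):

```jsx
{/* Polite live region: the description is announced once the
    animation scrolls into view and isPlaying flips to true */}
<div aria-live="polite">
  {isPlaying && (
    <p id="animation-description">
      Image of a character skydiving and screaming as they descend
      through an infinite sky
    </p>
  )}
</div>
```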

Check out the video below for the result! This example video uses VoiceOver on macOS and demonstrates the screen reader reading the page's content and politely announcing the animation description as it scrolls into view. Check the GitHub project link below to run it yourself, or see the source code.

Example app using an ARIA live region with a Rive animation

What we have in that simple application is an experience for users who require screen readers that mirrors the one for users who navigate without them: animations are displayed and described at the appropriate time, as they come into view.

With Rive's state machines driving even more interactive states, you can imagine all the dynamic content you might want to relay to users with screen readers. ARIA live regions make that a more accessible experience for all!

Resources

GitHub project: https://github.com/zplata/rive-live-regions
Rive Community Post: https://rive.app/community/1738-3431-raster-graphics-example/




© 2022 Rive, Inc. All rights reserved.

All trademarks, logos, and brand names are the property of their respective owners.
