Apple has previewed software features for cognitive, vision, hearing, and mobility accessibility, along with innovative tools for individuals who are nonspeaking or at risk of losing their ability to speak.
Apple has been collaborating with community groups representing users with disabilities to develop accessibility features. The updates draw on advances in hardware and software, including on-device machine learning to ensure user privacy, the company said on Tuesday. Coming later this year, Assistive Access will let users with cognitive disabilities use iPhone and iPad with greater ease and independence.
With Live Speech, nonspeaking individuals can type to speak during calls and conversations, and those at risk of losing their ability to speak can use Personal Voice to create a synthesized voice that sounds like them for connecting with family and friends. For users who are blind or have low vision, Detection Mode in Magnifier offers Point and Speak, which identifies text users point toward and reads it out loud, according to Apple.
“We’re excited to share incredible new features that build on our long history of making technology accessible so that everyone has the opportunity to create, communicate, and do what they love,” said Tim Cook, Apple’s CEO.
“The intellectual and developmental disability community is bursting with creativity, but technology often poses physical, visual, or knowledge barriers for these individuals,” said Katy Schmid, senior director of National Program Initiatives at The Arc of the United States. The cognitively accessible experience “means more open doors to education, employment, safety, and autonomy. It means broadening worlds and expanding potential,” she added.
Assistive Access uses innovations in design to distill apps and experiences to their essential features, lightening cognitive load. The feature reflects feedback from people with cognitive disabilities and their trusted supporters, and focuses on the activities they enjoy that are foundational to iPhone and iPad: connecting with loved ones, capturing and enjoying photos, and listening to music.
Assistive Access includes a customised experience for Phone and FaceTime, which have been combined into a single Calls app, as well as Messages, Camera, Photos, and Music. The feature offers a distinct interface with high-contrast buttons and large text labels, as well as tools to help trusted supporters tailor the experience for the individual they support. For example, for users who prefer communicating visually, Messages includes an emoji-only keyboard and the option to record a video message to share with loved ones. Users and trusted supporters can also choose between a more visual, grid-based layout for their Home Screen and apps, or a row-based layout for users who prefer text.
With Live Speech on iPhone, iPad, and Mac, users can type what they want to say and have it spoken aloud during phone and FaceTime calls as well as in-person conversations. Users can also save commonly used phrases to chime in quickly during lively conversation with family, friends, and colleagues. Live Speech has been designed to support the millions of people globally who are unable to speak or who have lost their speech over time.
For users at risk of losing their ability to speak — such as those with a recent diagnosis of ALS (amyotrophic lateral sclerosis) or other conditions that can progressively impact speaking ability — Personal Voice is a simple and secure way to create a voice that sounds like them.
Users can create a Personal Voice by reading along with a randomised set of text prompts to record 15 minutes of audio on iPhone or iPad. This speech accessibility feature uses on-device machine learning to keep users’ information private and secure, and integrates seamlessly with Live Speech so users can speak with their Personal Voice when connecting with loved ones.
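For developers, this pairing is surfaced through Apple's public AVFoundation framework on iOS 17 and later. The sketch below is a minimal illustration rather than Apple's internal implementation: it assumes an app with a legitimate reason to speak on the user's behalf, requests Personal Voice authorization, and falls back to a default system voice when no Personal Voice is available.

```swift
import AVFoundation

/// Minimal sketch: speak a typed phrase with the user's Personal Voice
/// (iOS 17+). Falls back to the default system voice if the user has not
/// created a Personal Voice or has not shared it with this app.
final class PhraseSpeaker {
    private let synthesizer = AVSpeechSynthesizer()

    func speak(_ phrase: String) {
        // The system shows a one-time consent prompt; voices stay on device.
        AVSpeechSynthesizer.requestPersonalVoiceAuthorization { [weak self] status in
            guard let self else { return }
            let personalVoice = AVSpeechSynthesisVoice.speechVoices()
                .first { $0.voiceTraits.contains(.isPersonalVoice) }

            let utterance = AVSpeechUtterance(string: phrase)
            // Use the Personal Voice when authorised; nil selects the default voice.
            utterance.voice = (status == .authorized) ? personalVoice : nil
            self.synthesizer.speak(utterance)
        }
    }
}
```

The explicit authorization step mirrors the privacy framing above: a Personal Voice never leaves the device, and each app must be granted access individually before it can enumerate or use the voice.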
“At the end of the day, the most important thing is being able to communicate with friends and family,” said Philip Green, board member and ALS advocate at the Team Gleason nonprofit, who has experienced significant changes to his voice since receiving his ALS diagnosis in 2018. “If you can tell them you love them, in a voice that sounds like you, it makes all the difference in the world — and being able to create your synthetic voice on your iPhone in just 15 minutes is extraordinary.”
Point and Speak in Magnifier makes it easier for users with vision disabilities to interact with physical objects that have several text labels. For example, while using a household appliance — such as a microwave — Point and Speak combines input from the Camera app, the LiDAR Scanner, and on-device machine learning to announce the text on each button as users move their finger across the keypad. Point and Speak is built into the Magnifier app on iPhone and iPad, works great with VoiceOver, and can be used with other Magnifier features such as People Detection, Door Detection, and Image Descriptions to help users navigate their physical environment.
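Apple does not expose Point and Speak itself as an API, but one stage of the pipeline described above, recognising text in a camera frame on device and reading it aloud, can be approximated with the public Vision and AVFoundation frameworks. The sketch below is only an illustrative analogue under that assumption; the `speakVisibleText` helper is hypothetical, and it omits the LiDAR-based finger tracking that lets the real feature announce just the label being pointed at.

```swift
import Vision
import AVFoundation

let synthesizer = AVSpeechSynthesizer()

/// Hypothetical helper: find the text in a camera frame on device and
/// read it aloud. A rough analogue of one stage of Point and Speak;
/// the shipping feature additionally uses the LiDAR Scanner to track
/// the user's finger and announces only the button under it.
func speakVisibleText(in frame: CGImage) {
    let request = VNRecognizeTextRequest { request, _ in
        guard let observations = request.results as? [VNRecognizedTextObservation] else { return }
        // Take the top candidate string for each detected text region.
        let lines = observations.compactMap { $0.topCandidates(1).first?.string }
        guard !lines.isEmpty else { return }
        synthesizer.speak(AVSpeechUtterance(string: lines.joined(separator: ", ")))
    }
    request.recognitionLevel = .accurate // runs on device, no network needed
    try? VNImageRequestHandler(cgImage: frame).perform([request])
}
```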