Apple Announces New Features For Users With Cognitive Disabilities

These features are expected to be on iPhones and iPads later this year.


New software features for cognitive, speech, and vision accessibility are coming later this year for Apple’s iPhones and iPads.

The company previewed software features for cognitive, vision, hearing, and mobility accessibility, along with innovative tools for individuals who are nonspeaking or at risk of losing their ability to speak.

Apple says these updates draw on advances in hardware and software, including on-device machine learning.

Assistive Access

Coming later this year, users with cognitive disabilities will be able to use Assistive Access – a set of assistive tools that helps people communicate more easily on their iPhones and iPads.

With the new feature, nonspeaking individuals can type to speak during calls and conversations with Live Speech; and those at risk of losing their ability to speak can use Personal Voice to create a synthesized voice that sounds like them for connecting with family and friends.

For users who are blind or have low vision, Detection Mode in Magnifier offers Point and Speak, which identifies text users point toward and reads it out loud to help them interact with physical objects such as household appliances.

“At Apple, we’ve always believed that the best technology is technology built for everyone. Today, we’re excited to share incredible new features that build on our long history of making technology accessible, so that everyone has the opportunity to create, communicate, and do what they love.”

– Apple CEO Tim Cook

How Assistive Access Supports Users With Cognitive Disabilities

Apple says that Assistive Access uses design innovations to distill apps and experiences to their essential features, lightening cognitive load. The feature was shaped by feedback from people with cognitive disabilities and their trusted supporters.

The feature includes a customized experience for Phone and FaceTime, which have been combined into a single Calls app, as well as Messages, Camera, Photos, and Music.

It offers an interface with high-contrast buttons and large text labels, as well as tools to help trusted supporters tailor the experience for the individual they support. For example, for users who prefer communicating visually, Messages includes an emoji-only keyboard and the option to record a video message to share with loved ones.

Users can also choose between a more visual, grid-based layout for the Home Screen and apps, and a row-based layout for those who prefer text.


Live Speech and Personal Voice Advance Speech Accessibility

With Live Speech on iPhone, iPad, and Mac, users can type what they want to say to have it be spoken out loud during phone and FaceTime calls as well as in-person conversations. Users can also save commonly used phrases to chime in quickly during conversations with family, friends, and colleagues.

Apple says that Live Speech has been designed to support millions of people globally who are unable to speak or who have lost their speech over time.

For users at risk of losing their ability to speak, Personal Voice is a simple and secure way to create a voice that sounds like them. 

To create their own Personal Voice, users read along with a randomized set of text prompts to record 15 minutes of audio on an iPhone or iPad.

This speech accessibility feature uses on-device machine learning, which, according to Apple, keeps user information private and secure and integrates seamlessly with Live Speech so users can speak with their Personal Voice when connecting with loved ones.


Detection Mode in Magnifier Introduces Point and Speak for Users With Vision Impairments

Point and Speak in Magnifier makes it easier for users with vision disabilities to interact with physical objects that have several text labels.

For example, while using a household appliance — such as a microwave — Point and Speak combines input from the camera, the LiDAR Scanner, and on-device machine learning to announce the text on each button as users move their fingers across the keypad.

Point and Speak is built into the Magnifier app on iPhone and iPad. It works with VoiceOver and can be used with other Magnifier features such as People Detection, Door Detection, and Image Descriptions to help users navigate their physical environment.

Additional Features Coming Later This Year

  • Deaf or hard-of-hearing users can pair Made for iPhone hearing devices directly to their Macs and customize them for their hearing comfort.
  • Voice Control adds phonetic suggestions for text editing so users who type with their voice can choose the right word out of several that might sound alike, like “do,” “due,” and “dew.” Additionally, with Voice Control Guide, users can learn tips and tricks about using voice commands as an alternative to touch and typing across their iPhone, iPad, and Mac.
  • Users with physical and motor disabilities who use Switch Control can turn any switch into a virtual game controller to play their favorite games on iPhone and iPad.
  • For users with low vision, Text Size is now easier to adjust across Mac apps such as Finder, Messages, Mail, Calendar, and Notes.
  • Users who are sensitive to rapid animations can automatically pause images with moving elements, such as GIFs, in Messages and Safari.
  • For VoiceOver users, Siri voices sound natural and expressive even at high rates of speech feedback; users can also customize the rate at which Siri speaks to them, with options ranging from 0.8x to 2x.

