Apple is beefing up the accessibility features of its devices with the introduction of new tools in iOS 17. Among them is “Personal Voice,” which users can train in about 15 minutes to produce a synthetic copy of their own voice.
The idea behind the feature is to give people at risk of losing their voices a way to preserve them. According to Apple, it is aimed at those with conditions such as amyotrophic lateral sclerosis (ALS) and other diseases that can progressively take away a person’s ability to speak.
The process is simple: users record roughly 15 minutes of audio by reading a series of text prompts aloud on an iPhone or iPad. That recording is then processed to generate a synthetic copy of the voice, which can be used with Apple’s Live Speech. Through this integration, the company described several use cases for the feature.
“With Live Speech on iPhone, iPad, and Mac, users can type what they want to say to have it be spoken out loud during phone and FaceTime calls as well as in-person conversations,” the company said. “Users can also save commonly used phrases to chime in quickly during lively conversation with family, friends, and colleagues. Live Speech has been designed to support millions of people globally who are unable to speak or who have lost their speech over time.”
While Personal Voice is good news for people who may one day need it, some might find the tool concerning at a time when criminals are turning AI advances, such as voice cloning, to their own ends. Nonetheless, Apple stresses that Personal Voice relies on on-device machine learning, assuring users that their voice data won’t be used or accessed by others.
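For developers, Personal Voice surfaces through the speech synthesis APIs Apple added in iOS 17. As a minimal Swift sketch, assuming AVFoundation’s AVSpeechSynthesizer additions from WWDC23 (an app only sees the voice after the user explicitly grants access, in line with the privacy model described above):

```swift
import AVFoundation

// Kept at a scope that outlives the call so speech isn't cut off mid-utterance.
let synthesizer = AVSpeechSynthesizer()

/// Asks the user for permission to use their Personal Voice, then speaks with it.
func speakWithPersonalVoice(_ text: String) {
    AVSpeechSynthesizer.requestPersonalVoiceAuthorization { status in
        guard status == .authorized else {
            print("Personal Voice unavailable (status: \(status))")
            return
        }
        // Personal Voices appear alongside system voices, flagged by a voice trait.
        guard let personalVoice = AVSpeechSynthesisVoice.speechVoices()
            .first(where: { $0.voiceTraits.contains(.isPersonalVoice) }) else {
            print("No Personal Voice has been created on this device.")
            return
        }
        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = personalVoice
        synthesizer.speak(utterance)
    }
}

// Example: speakWithPersonalVoice("Hello from my Personal Voice.")
```

This is a sketch rather than a full implementation; production code would also handle the `.denied` and `.unsupported` authorization cases and let the user choose among multiple Personal Voices if more than one exists.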