Apple’s Voice Control improves accessibility OS-wide on all its devices
Apple is known for fluid, intuitive user interfaces, but none of that matters if you can’t click, tap, or drag because you don’t have a finger to do so with. For users with disabilities, the company is doubling down on voice-based accessibility with the powerful new Voice Control feature on Macs and iOS (and iPadOS) devices.
Many devices already support rich dictation, and of course Apple’s phones and computers have used voice-based commands for years (I remember talking to my Quadra). But this is a big step forward that makes voice controls close to universal — and it all works offline.
The basic idea of Voice Control is that the user has both set commands and context-specific ones. Set commands are things like “Open GarageBand” or “File menu” or “Tap send.” And of course some intelligence has gone into making sure you’re actually saying the command and not writing it, like in that last sentence.
But that doesn’t work when you have an interface that pops up with lots of different buttons, fields, and labels. And even if every button or menu item could be called by name, it might be difficult or time-consuming to speak everything out loud.
To fix this, Apple simply attaches a number to every UI item in the foreground, which a user can reveal by saying “show numbers.” Then they can simply speak the number or modify it with another command, like “tap 22.”
Remember that these numbers may be easier to reference for someone with little or no vocal ability, and could in fact be selected using a simpler input like a dial or blow tube. Gaze tracking is good, but it has its limitations, and this is a good alternative.
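To make the numbering idea concrete, here’s a minimal sketch of the general technique in UIKit: enumerate the accessibility elements on screen and map a spoken number back to one of them. To be clear, this is not Apple’s implementation; the function names are invented, and Voice Control does all of this at the OS level.

```swift
import UIKit

// Toy version of "show numbers": walk the view hierarchy and collect
// everything exposed to the accessibility system, in order.
// (Hypothetical helper; Voice Control does this system-wide.)
func collectElements(in root: UIView) -> [UIView] {
    var found: [UIView] = []
    func walk(_ view: UIView) {
        if view.isAccessibilityElement {
            found.append(view)  // this view gets the next number
        }
        view.subviews.forEach(walk)
    }
    walk(root)
    return found
}

// "Tap 22" resolves the spoken number back to an element and activates it.
func activate(number: Int, in root: UIView) {
    let elements = collectElements(in: root)
    guard elements.indices.contains(number - 1) else { return }
    let element = elements[number - 1]
    if let control = element as? UIControl {
        control.sendActions(for: .touchUpInside)  // fire like a real tap
    } else {
        _ = element.accessibilityActivate()  // standard accessibility hook
    }
}
```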
For something like maps, where you could click anywhere, there’s a grid system for selecting where to zoom in or click. Just like Blade Runner! Other gestures like scrolling and dragging are likewise supported.
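The grid works like successive refinement: each spoken cell number narrows the active region, and repeating the command zooms further in. A rough sketch of just the geometry, assuming a 3×3 grid and invented function names:

```swift
import CoreGraphics

// Return cell `number` (1...9, reading order) of a region divided 3x3.
// Hypothetical sketch of the geometry only, not Voice Control's code.
func cell(_ number: Int, of region: CGRect, columns: Int = 3, rows: Int = 3) -> CGRect {
    let index = number - 1
    let col = index % columns
    let row = index / columns
    let cellWidth = region.width / CGFloat(columns)
    let cellHeight = region.height / CGFloat(rows)
    return CGRect(x: region.minX + CGFloat(col) * cellWidth,
                  y: region.minY + CGFloat(row) * cellHeight,
                  width: cellWidth, height: cellHeight)
}

// Saying "5" then "1" would narrow from the whole screen to the center
// cell, then to that cell's top-left corner -- and a tap lands there.
let screen = CGRect(x: 0, y: 0, width: 390, height: 844)
let target = cell(1, of: cell(5, of: screen))
```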
Dictation has been around for a bit, but it’s been improved as well: you can select and replace entire phrases, like “Replace ‘be right back’ with ‘on my way.’ ” Other little improvements will be noted and appreciated by those who use the tool often.
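Mechanically, that command boils down to a phrase-level substitution over the dictated text; the Foundation one-liner below shows the equivalent operation (the sample sentence is invented):

```swift
import Foundation

// "Replace 'be right back' with 'on my way'" as a plain substitution.
let dictated = "Running late, be right back"
let edited = dictated.replacingOccurrences(of: "be right back", with: "on my way")
// edited == "Running late, on my way"
```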
All the voice processing is done offline, which makes it both quick and robust to things like signal problems or use in foreign countries where data might be hard to come by. And the intelligence built into Siri lets it recognize names and context-specific words that may not be part of the base vocabulary. Improved dictation means selecting emoji and adding dictionary items is a breeze.
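Apple doesn’t expose Voice Control’s internals, but both ideas, on-device processing and hints for out-of-vocabulary words, show up in the public Speech framework as of iOS 13. A minimal sketch:

```swift
import Speech

// The same two ideas via public Speech APIs (iOS 13+): recognition that
// never leaves the device, plus hints for words outside the base vocabulary.
let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))!
let request = SFSpeechAudioBufferRecognitionRequest()

// Keep all audio processing offline -- quick, and immune to bad signal.
if recognizer.supportsOnDeviceRecognition {
    request.requiresOnDeviceRecognition = true
}

// Bias recognition toward names and context-specific terms.
request.contextualStrings = ["GarageBand", "Quadra"]
```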
Right now Voice Control is supported by all native apps, and third-party apps that use Apple’s accessibility API should be able to take advantage of it easily. And even if they don’t adopt it specifically, numbers and grids should still work just fine, since all the OS needs to know are the locations of the UI items. These improvements should appear in accessibility options as soon as a device is updated to iOS 13 or Catalina.
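For third-party developers, supporting named commands mostly means the accessibility work an app should already be doing for VoiceOver. A standard UIButton gets this for free; a custom-drawn control has to opt in, roughly like this (the class here is invented for illustration):

```swift
import UIKit

// A custom-drawn control isn't addressable by name ("Tap send") until
// it opts into the accessibility API. (Hypothetical example class.)
final class SendGlyphView: UIView {
    override init(frame: CGRect) {
        super.init(frame: frame)
        isAccessibilityElement = true     // make the system see it at all
        accessibilityLabel = "Send"       // the name users will speak
        accessibilityTraits = .button     // treat and announce it as a button
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}
```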
from Mobile – TechCrunch https://tcrn.ch/2Km8fVQ