The Future of Accessibility
Why gesture-based interfaces haven’t lived up to the hype
- Using Oblong’s g-speak at MIT
- Problems:
- To get accurate motion tracking, the ceiling of the lab was filled with at least a dozen cameras.
- I also had to wear black gloves with white dots.
- Imagine if my hands were smaller or larger.
- How would a child use the system?
- Would different gloves need to be made for them?
- Learning curve for interactions:
- Many people can click a mouse, but not everyone has the fine motor control to learn a variety of precise gestures and perform them with accuracy.
- In addition, the system must process gestures continuously while filtering out false positives (see the sketch after this list).
- This takes more computing resources than a simple button or multi-touch system.
- Arms were getting tired:
- A lot of my movements involved gesturing with my hands above my heart.
- Less blood was pumping into my wrists and hands; I was putting in more effort than necessary.
- This kind of motion is great for gaming (for instance, I loved the Nintendo Wii), but for a professional setting, I couldn’t see myself using it for more than an hour a day.
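To make the filtering cost concrete, here is a minimal sketch in Python. Everything in it is hypothetical: `classify_frame` stands in for a real per-frame gesture classifier, and the confidence threshold and smoothing window are illustrative assumptions, not values from any shipping system.

```python
from collections import deque

CONFIDENCE_THRESHOLD = 0.85   # assumed cutoff for accepting a gesture
WINDOW = 5                    # assumed frames of temporal smoothing

def on_button_press():
    """A physical button: one unambiguous event, no filtering needed."""
    return True

class GestureFilter:
    """Accept a gesture only after several consecutive high-confidence frames.

    `classify_frame` is a hypothetical callable returning (label, confidence)
    for one camera frame; no real gesture library is assumed here.
    """
    def __init__(self, classify_frame):
        self.classify_frame = classify_frame
        self.history = deque(maxlen=WINDOW)

    def process(self, frame):
        label, confidence = self.classify_frame(frame)  # runs on every frame
        self.history.append((label, confidence))
        if len(self.history) < WINDOW:
            return None  # not enough evidence yet
        labels = {l for l, _ in self.history}
        if len(labels) == 1 and all(c >= CONFIDENCE_THRESHOLD
                                    for _, c in self.history):
            self.history.clear()
            return label  # accepted gesture
        return None  # treated as noise / a false positive
```

Even this toy filter runs a classifier on every camera frame before accepting a single event, while the button fires once; that asymmetry is the computational argument above.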
- “A good tool is an invisible tool. By invisible, we mean that the tool does not intrude on your consciousness; you focus on the task, not the tool.” — Mark Weiser, 1993
- Business machines look boring for a reason. Any extra steps to complete an objective quickly become tedious.
- Does a gestural control interface invite imagination? No. It demands precision.
- Accuracy issues: Unlike pressing a physical button, gesture control is never 100% accurate.
- Computational issues: Gestural user experiences require more processing power than necessary.
- Reduced inclusivity: Gesture control interfaces can be less accessible. With a controller or set of physical buttons, people can press them with whatever limb or method works for them. Imagine someone whose hands are smaller than those the system was trained on, or someone with a skin tone the machine wasn’t properly trained to recognize.
- Motion requirements: Gesture control interfaces are great for short periods of time, but they break down under repeated use and long sessions.
- What happened to the foot pedal?
- The foot pedal was the original hands-free interface. It offers a hygienic way of interacting with the environment, with no need for light, sensors, or machine learning.
- The foot pedal does not discriminate. It can be pressed with a cane as easily as it can a foot. Children can stomp on it, and people can take their anger out on it.
- We need simpler solutions, not more complex ones, and sometimes, we can learn from the past.
- So the next time you’re in a bathroom trying to get the “smart” sink or air dryer to understand your flailing limbs, consider how easy it would be to just stomp on a foot pedal.
Five Ways in Which Artificial Intelligence Changes the Face of Web Accessibility
- Image recognition to fix alt text issues?
- Facebook dynamically describes images to blind and visually impaired people. The feature makes it possible for Facebook’s platform to recognize the various components making up an image. Powered by machine learning and neural networks, it can describe each one with jaw-dropping accuracy.
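Facebook’s internal system isn’t public, but the same idea can be sketched with an off-the-shelf image-captioning model. Here is a minimal example using the Hugging Face `transformers` pipeline with the open BLIP captioning model; the model choice is an assumption for illustration, not what Facebook uses.

```python
# Hedged sketch: draft alt text with an open captioning model.
# The model (Salesforce/blip-image-captioning-base) is an illustrative
# stand-in, not Facebook's internal system.
from transformers import pipeline

captioner = pipeline("image-to-text",
                     model="Salesforce/blip-image-captioning-base")

def draft_alt_text(image_path: str) -> str:
    """Return a machine-generated caption to use as fallback alt text."""
    result = captioner(image_path)   # e.g. [{'generated_text': '...'}]
    return result[0]["generated_text"]

print(draft_alt_text("photo.jpg"))   # e.g. "a dog sitting on a beach"
```

In practice a caption like this would populate an image’s missing `alt` attribute, which is exactly the gap screen-reader users run into.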
- Facial recognition, as the long-awaited CAPTCHA killer?
- The end goal of facial recognition to unlock phones? Eradicating the need for passwords, which we know most humans are pretty terrible at managing.
- The replacement of CAPTCHA images is one area in which people with disabilities might benefit the most from facial recognition. Once the system recognizes a person interacting with it as a human through the camera lens, the need to weed out bots should be a thing of the past.
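As a toy illustration of the “recognize a human through the camera lens” step, here is a sketch using OpenCV’s bundled Haar-cascade face detector. A real CAPTCHA replacement would need liveness checks and anti-spoofing, so treat this purely as the shape of the idea.

```python
# Hedged sketch: "is there a human face in front of the camera?"
# Uses OpenCV's stock Haar cascade; real systems would add liveness
# detection, anti-spoofing, and identity verification.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def frame_contains_face(frame) -> bool:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0

cap = cv2.VideoCapture(0)          # default webcam
ok, frame = cap.read()
cap.release()
if ok and frame_contains_face(frame):
    print("Human detected: skip the CAPTCHA challenge")
else:
    print("No face found: fall back to a traditional CAPTCHA")
```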
- Lip-reading recognition to improve video captions?
- Did you know that AI is already beating the world’s top lip-reading experts by a ratio of 4 to 1? Again, through massive exposure to data, building blocks of AI have learned to recognize patterns and mouth shapes over time. These systems can now interpret what people are saying.
- The Google DeepMind project ran research on over 100,000 natural sentences.
- Researchers had some of the world’s top experts try to interpret what people on screen were saying.
- They then ran the same collection of videos against the neural networks of Google DeepMind.
- While the best experts interpreted about 12.4% of the content, AI successfully interpreted 46.8%. Enough to put any expert to shame!
- Automated text summarization to help with learning disabilities?
- Salesforce, among others, has been working on an abstractive summarization algorithm. The algorithm uses machine learning to produce shorter text abstracts.
- Human language is one of the most complex aspects of human intelligence for machines to break down.
- This building block holds great promises for people who have learning disabilities such as dyslexia, and people with attention deficit disorders, memory issues, or low literacy skill levels.
- Salesforce is now leveraging AI to move from an extractive model to an abstractive one.
- Extractive models draw from pre-existing words in the text to create a summary.
- With an abstractive model, computers have more options. They can introduce new related words and synonyms, as long as the system understands the context enough to introduce the right words to summarize the text.
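The extractive/abstractive distinction is easy to show in code. Below is a hedged sketch: a tiny frequency-based extractive summarizer next to an abstractive one via the Hugging Face `summarization` pipeline. The pipeline’s default model is a stand-in; Salesforce’s research model is not assumed here.

```python
# Hedged sketch of the two summarization families.
# Extractive: copy the highest-scoring existing sentences verbatim.
# Abstractive: generate new wording (via a stock transformers pipeline,
# standing in for Salesforce's research model).
import re
from collections import Counter

def extractive_summary(text: str, n_sentences: int = 2) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"\w+", text.lower()))
    # Score each sentence by the corpus frequency of the words it contains.
    scored = sorted(sentences,
                    key=lambda s: sum(freq[w]
                                      for w in re.findall(r"\w+", s.lower())),
                    reverse=True)
    kept = set(scored[:n_sentences])
    # Preserve the original order of the selected sentences.
    return " ".join(s for s in sentences if s in kept)

def abstractive_summary(text: str) -> str:
    from transformers import pipeline
    summarizer = pipeline("summarization")  # default model is an assumption
    return summarizer(text, max_length=60, min_length=10)[0]["summary_text"]
```

The extractive version can only reuse sentences that already exist in the text; the abstractive one may introduce synonyms and new phrasing, which is exactly the trade-off described above.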
- Real-time translation as the fabled Babelfish?
- In November of 2016, Google launched its Neural Machine Translation (GNMT) system, which lowered error rates by up to 85%.
- Gone are the days when the service translated on a word-by-word basis. Now, thanks to GNMT, translation operates globally: sentence by sentence, idea by idea.
- The more AI is exposed to a language, the more it learns about it, and the more accurate translations become.
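To see the word-by-word versus whole-sentence contrast concretely, here is a small sketch. The dictionary lookup mimics the old approach; the neural model (Helsinki-NLP’s open English-to-French model, chosen here for illustration, not Google’s GNMT) translates the sentence as a unit.

```python
# Hedged sketch: word-by-word lookup vs. sentence-level neural translation.
# The Helsinki-NLP model is an illustrative stand-in, not Google's GNMT.
from transformers import pipeline

# Old style: each word translated in isolation loses agreement and idiom.
tiny_dictionary = {"the": "le", "cat": "chat", "is": "est", "black": "noir"}

def word_by_word(sentence: str) -> str:
    return " ".join(tiny_dictionary.get(w, w) for w in sentence.lower().split())

translator = pipeline("translation_en_to_fr",
                      model="Helsinki-NLP/opus-mt-en-fr")

sentence = "The cat is black"
print(word_by_word(sentence))                        # isolated word lookups
print(translator(sentence)[0]["translation_text"])   # whole-sentence output
```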