Yelp updates app with AI-powered alt-text for images and new accessibility identifiers for businesses

Yelp is rolling out an app update that adds more accessibility identifiers for businesses, improved screen-reader experiences and AI-powered alt-text for images. The company said searches for “wheelchair accessible” places rose by an average of 40% between 2020 and 2023. With the new update, the company is adding eight more attributes […]


How European disability tech startups are leveraging AI

Making life better for people with disabilities is a laudable goal, but accessibility tech hasn’t traditionally been popular among VCs. In 2022, for example, disability tech companies attracted around $4 billion in early-stage investment, a fraction of fintech’s intake. One reason is that disability tech startups are often considered too niche to attain […]


Microsoft’s new Adaptive accessibility accessories include an Atari-style joystick

Microsoft has long garnered plaudits for its focus on accessibility. People with disabilities are a large segment of the population, yet one too often treated as an afterthought in product design. The company has offered accessibility-focused Xbox peripherals for some time and introduced the Adaptive line of computing peripherals around this time last year. […]


Promova’s new feature helps people with dyslexia learn a new language

Promova, the language learning app with over 11 million users, today launched “Dyslexia Mode,” which uses a specialized typeface called Dysfont so people with dyslexia can learn a new language more easily. In addition, the new mode reduces color brightness and implements multi-sensory teaching techniques. Dysfont was developed by designer Martin Pysny, who was diagnosed […]

Reddit makes an exception for accessibility apps under new API terms

Reddit says it will update its newly revised API terms to carve out an exception for accessibility apps, which give users, including people who are blind or visually impaired, a way to browse and use Reddit. The carve-out comes after Reddit announced new API terms that would put most third-party app developers out of business, as they could no longer afford the high fees that come with the new pricing. The developer of the popular Reddit app Apollo, for example, said it would cost him $20 million per year to continue running his business, money the app doesn’t make. As backlash over the changes ensued, several Reddit communities said they would go dark in protest of Reddit’s new policy.
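For context, that estimate reportedly stems from Reddit’s quoted rate of about $12,000 per 50 million API requests: at the roughly seven billion requests Apollo served in a month, that works out to around $1.7 million per month, or about $20 million a year.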

The news of the exception to Reddit’s API pricing was first reported by The Verge, citing comments from Reddit spokesperson Tim Rathschmidt.

In a statement also shared with TechCrunch, Rathschmidt said Reddit has “connected with select developers of non-commercial apps that address accessibility needs and offered them exemptions from our large-scale pricing terms.”

The announcement follows news of the planned protest across Reddit, which has support from community moderators like those of the r/Blind subreddit, who have stressed that Reddit’s new terms would affect the apps they rely on to access the site, like Reddit for Blind and Luna for Reddit, along with other screen-reader tools. They said they would join the protest for 48 hours, from June 12 to June 14. Other top subreddits are also participating, including r/aww, r/videos, r/Futurology, r/LifeHacks, r/bestof, r/gaming, r/Music, r/Pics, r/todayilearned, r/art, r/DIY, r/gadgets, r/sports, r/mildlyinteresting and many others. Several of these communities have tens of millions of members.

After The Verge published its article noting the new exception for accessibility apps, an r/Blind moderator, MostlyBlindGamer, said they had received no clarification from Reddit about how it defines “accessibility-focused apps” or about any process for having apps qualify under the new guidelines. They did tell us they had a call with Reddit in which they were asked about apps that provide accessibility features, but were not told why Reddit was asking.

“We have strong concerns that Reddit lacks expertise to consider the varying access needs of the blind and visually impaired community,” MostlyBlindGamer wrote, adding they had also reached out to Reddit for further comment. They also noted that, over the past three years, r/blind and another moderator, u/rumster, had reached out to Reddit repeatedly over accessibility concerns and had “received no substantive response.”

The r/Blind community is now organizing a list of apps it believes should qualify under the new exemption, including the screen-reader-friendly clients Reddit for Blind, Luna for Reddit, Dystopia and BaconReader, as well as general-purpose apps that take advantage of iOS accessibility APIs or add accessibility features, like the ability to adjust text size, contrast, color and more.

This list includes Apollo, one of the most popular Reddit clients, which the list says “works with most iOS accessibility technology, unlike the official app.”

However, we understand Reddit likely won’t consider general-purpose apps like Apollo for an exemption and will focus only on apps designed to address accessibility needs.

Until now, Reddit has held firm on its new API pricing, with no exceptions. Over the past weekend, a Reddit employee discussing the changes in the r/redditdev community accused Apollo of being “less efficient than its peers and at times has been excessive – probably because it has been free to do so.” Apollo developer Christian Selig asked for clarity on the claimed inefficiency: was the app actually inefficient, or “just being used more?” He did not receive a direct response.

Reddit has previously stressed the need for the new pricing, as spokesperson Rathschmidt said that “access to data has impact and costs involved, and in terms of safety and privacy we have an obligation to our communities to be responsible stewards of data,” and that the intention was not to “kill” third-party apps.


Apple reveals new accessibility features, like custom text-to-speech voices

Apple previewed a suite of new features today to improve cognitive, vision and speech accessibility. These tools are slated to arrive on the iPhone, iPad and Mac later this year. An established leader in mainstream tech accessibility, Apple emphasizes that these tools are built with feedback from disabled communities.

Assistive Access, coming soon to iOS and iPadOS, is designed for people with cognitive disabilities. It streamlines the iPhone and iPad interface, focusing on making it easier to talk to loved ones, share photos and listen to music. The Phone and FaceTime apps are merged into one, for example.

The design is also made more digestible with large icons, increased contrast and clearer text labels that keep the screen simple. Users can customize these visual features to their liking, and those preferences carry across any app compatible with Assistive Access.

As part of the existing Magnifier tool, blind and low vision users can already use their phone to locate nearby doors, people or signs. Now Apple is introducing a feature called Point and Speak, which uses the device’s camera and LiDAR scanner to help visually disabled people interact with physical objects that have several text labels.

Image Credits: Apple

So, if a low vision user wanted to heat up food in the microwave, they could use Point and Speak to discern the difference between the “popcorn,” “pizza” and “power level” buttons — when the device identifies this text, it reads it out loud. Point and Speak will be available in English, French, Italian, German, Spanish, Portuguese, Chinese, Cantonese, Korean, Japanese and Ukrainian.
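Apple hasn’t published how Point and Speak works internally, but the recognize-then-speak half of the pattern can be sketched with public frameworks. Below is a minimal, hypothetical sketch using Vision’s text recognition and AVFoundation’s speech synthesizer; the shipping feature also fuses LiDAR depth and pointing-gesture detection, which this omits.

```swift
import Vision
import AVFoundation

// Minimal sketch of the recognize-then-speak pattern behind features like
// Point and Speak. Not Apple's implementation: the real feature narrows the
// result to the single label under the user's finger using LiDAR depth.
let synthesizer = AVSpeechSynthesizer()

func speakLabels(in frame: CGImage) throws {
    let request = VNRecognizeTextRequest { request, _ in
        guard let observations = request.results as? [VNRecognizedTextObservation] else { return }
        for observation in observations {
            // Read the top OCR candidate for each detected text region aloud.
            guard let candidate = observation.topCandidates(1).first else { continue }
            synthesizer.speak(AVSpeechUtterance(string: candidate.string))
        }
    }
    request.recognitionLevel = .accurate
    try VNImageRequestHandler(cgImage: frame, options: [:]).perform([request])
}
```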

A particularly interesting feature from the bunch is Personal Voice, which creates an automated voice that sounds like you, rather than Siri. The tool is designed for people who may be at risk of losing their ability to speak from conditions like ALS. To generate a Personal Voice, the user spends about fifteen minutes reading randomly chosen text prompts clearly into the microphone. Then, using machine learning, the audio is processed locally on the user’s iPhone, iPad or Mac to create their Personal Voice. It sounds similar to what Acapela has been doing with its “my own voice” service, which works with other assistive devices.

It’s easy to see how a repository of unique, highly trained text-to-speech models could be dangerous in the wrong hands. But according to Apple, this custom voice data is never shared with anyone, not even Apple itself. In fact, Apple says it doesn’t even connect a Personal Voice to the user’s Apple ID, since some households might share a log-in. Instead, users must opt in if they want a Personal Voice they make on their Mac to be accessible on their iPhone, or vice versa.

At launch, Personal Voice will only be available for English speakers, and can only be created on devices with Apple silicon.
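There was no third-party API for Personal Voice when these features were previewed, but Apple later opened it to apps in iOS 17 through AVFoundation. A rough sketch of that flow, assuming the iOS 17 authorization call and voice-trait filter:

```swift
import AVFoundation

let synthesizer = AVSpeechSynthesizer()

// iOS 17+: ask permission to use the device owner's Personal Voice, then
// prefer it over the default system voices when speaking.
func speakWithPersonalVoice(_ text: String) {
    AVSpeechSynthesizer.requestPersonalVoiceAuthorization { status in
        guard status == .authorized else { return }
        let personal = AVSpeechSynthesisVoice.speechVoices()
            .first { $0.voiceTraits.contains(.isPersonalVoice) }
        let utterance = AVSpeechUtterance(string: text)
        // Fall back to a stock voice if the user hasn't created one.
        utterance.voice = personal ?? AVSpeechSynthesisVoice(language: "en-US")
        synthesizer.speak(utterance)
    }
}
```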

Whether you’re speaking as Siri or your AI voice twin, Apple is making it easier for nonverbal people to communicate. Live Speech, available across Apple devices, lets people type what they want to say so that it can be spoken aloud. The tool is ready from the lock screen, but it can also be used in other apps, like FaceTime. And if users often need to repeat the same phrases (a regular coffee order, for example), they can store them as presets within Live Speech.

Apple’s existing speech-to-text tools are getting an upgrade, too. Now, Voice Control will incorporate phonetic text editing, which makes it easier for people who type with their voice to quickly correct errors. So, if you see your computer transcribe “great,” but you meant to say “grey,” it will be easier to make that correction. This feature, Phonetic Suggestions, will be available in English, Spanish, French and German for now.
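Apple hasn’t said how Phonetic Suggestions ranks its alternatives. Purely as an illustration of the idea, a toy version might bucket words by a crude “consonant skeleton” and surface sound-alike candidates when a word is corrected; everything below is hypothetical, not Apple’s algorithm.

```swift
// Toy sketch: bucket words by a crude "consonant skeleton" so that
// sound-alike words surface first when the user corrects a transcription.
func soundKey(_ word: String) -> String {
    // "great" -> "grt", "grey" -> "gr", "gray" -> "gry"
    String(word.lowercased().filter { !"aeiou".contains($0) })
}

func phoneticSuggestions(for heard: String, from lexicon: [String]) -> [String] {
    let key = soundKey(heard).prefix(2)
    // Offer words whose skeletons begin the same way as the heard word's.
    return lexicon.filter { $0 != heard && soundKey($0).prefix(2) == key }
}

// phoneticSuggestions(for: "great", from: ["grey", "gray", "green", "crate"])
// -> ["grey", "gray", "green"]
```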

Image Credits: Apple

These accessibility features are expected to roll out across various Apple products this year. As for its existing offerings, Apple is expanding SignTime, which connects Apple Store and Apple Support customers with on-demand sign language interpreters, to Germany, Italy, Spain and South Korea on Thursday.


SLAIT pivots from translating sign language to AI-powered interactive lessons

Millions of people use sign language, but methods of teaching this complex and subtle skill haven’t evolved as quickly as those for written and spoken languages. SLAIT School aims to change that with an interactive tutor powered by computer vision, letting aspiring ASL speakers practice at their own pace, as they would in any other language-learning app.

SLAIT started back in 2021 as the sign language AI translator (hence the name): a real-time video chat and translation tool that could recognize most common signs and help an ASL speaker communicate more easily with someone who doesn’t know the language. But early successes slowed as the team realized they needed more time, money, and data than they were likely to get.

“We got great results in the beginning, but after several attempts we realized that, right now, there just is not enough data to provide full language translation,” explained Evgeny Fomin, CEO and co-founder of SLAIT. “We had no opportunity for investment, no chance to find our supporters, because we were stuck without a product launch — we were in limbo. Capitalism… is hard.”

“But then we thought, what can we do with the tech we’ve got from R&D? We realized we needed to do an education solution, because for education our technology is definitely good enough,” he continued. That isn’t a matter of lower standards for learners: the fluidity and subtlety of fluent, mature sign language is orders of magnitude more difficult to capture and translate than one or two words at a time.

“We found an extremely talented guy who helped us develop a product, and we made SLAIT School. And now we have our first customers and some traction!”

Existing online sign language courses (here’s a solid list if you’re curious) are generally pretty traditional: lectures and demonstrations, vocabulary lists, illustrations, and, if you pay, someone to review your work over video. It’s high quality and much of it is free, but it isn’t the kind of interactive experience people have come to expect from apps like Duolingo.

SLAIT School uses an updated version of the gesture recognition tech that powered the translator demo app to provide instant feedback on words and phrases: see a sign in video form, then try it yourself until you get it. Currently it’s for desktop browsers only, but the team is planning a mobile app as well.
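SLAIT hasn’t detailed its stack, but the core loop of any camera-based sign tutor is the same: extract hand keypoints from each frame, then score them against a reference pose for the target sign. Here is a hypothetical sketch of the keypoint-extraction half using Apple’s Vision framework; since SLAIT runs in desktop browsers, its actual pipeline is presumably different.

```swift
import Vision

// Extract normalized hand-joint positions from a single frame. A tutor would
// compare these against stored reference poses for the target sign and score
// the distance to give instant feedback.
func handPoints(in frame: CGImage) throws -> [CGPoint] {
    let request = VNDetectHumanHandPoseRequest()
    request.maximumHandCount = 2                                // many signs use both hands
    try VNImageRequestHandler(cgImage: frame, options: [:]).perform([request])
    guard let hand = request.results?.first else { return [] }  // no hand in frame
    return try hand.recognizedPoints(.all).values
        .filter { $0.confidence > 0.3 }                         // drop low-confidence joints
        .map { $0.location }                                    // normalized (0-1) coordinates
}
```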

“We have some room for improvement, but it’s exactly what we planned to deliver. Students can access the platform, do practice, make signs, interact with the AI tutor, and it costs them same as one hour with an in person tutor,” said Fomin. “The mobile apps we aim to do for free.”

Users can try the first few lessons to see that the system works; after that it’s $39 monthly, $174 half-yearly or $228 yearly, which works out to about $29 and $19 per month on the longer plans. The price may seem high compared with large-scale language apps backed by big capital and different business models, but Fomin emphasized that this is a new and specialized category, and that real-world tutors are the primary competition.

Image Credits: SLAIT School

“We actively communicate with users and try to find the best prices and economic model that makes subscription plans affordable. We would really like to make the platform free, but so far we have not found an opportunity for this yet. Because this is a very niche product… we just need to make a stable economic model that will work in general,” he said.

The traction is also a flywheel for the company’s tech and content. By collecting information (with express opt-in consent, which he says the community is quite happy to provide) they can expand and improve the curriculum, and continue to refine their gesture recognition engine.

“We see two directions for growing in,” Fomin said. “The first is to cover more language groups, like British Sign Language and Japanese Sign Language. Also we would like to make the curriculum more adaptive, or provide a curriculum for medical and scientific signs. If we have enough investment to grow and scale, we can be a leading platform globally to automate talking to [a] doctor in sign language.”

After that, he said, “Maybe we can finally develop a translator. We can break through this barrier!”
