Promova’s new feature helps people with dyslexia learn a new language

Promova, the language learning app with over 11 million users, today launched “Dyslexia Mode,” which uses a specialized typeface called Dysfont so people with dyslexia can learn a new language more easily. In addition, the new mode reduces color brightness and implements multi-sensory teaching techniques. Dysfont was developed by designer Martin Pysny, who was diagnosed […]

Reddit makes an exception for accessibility apps under new API terms

Reddit says it will update its newly revised API terms to carve out an exception for accessibility apps, which give users, including people who are blind or visually impaired, a way to browse and use Reddit. The carve-out comes after Reddit announced new API terms that would put most third-party app developers out of business, as they could no longer afford the high fees that come with the new pricing. One developer of the popular Reddit app Apollo, for example, said it would cost him $20 million per year to continue running his business — money the app doesn’t make. As backlash over the changes ensued, several Reddit communities said they would go dark in protest of Reddit’s new policy.
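That $20 million figure follows from simple arithmetic on numbers the developer shared publicly: the new pricing was widely reported at roughly $0.24 per 1,000 API calls, and Apollo reportedly makes around 7 billion requests per month. A quick back-of-the-envelope check in Python, using those reported figures (which this article does not independently confirm):

```python
# Back-of-the-envelope check of the Apollo developer's cited figure,
# assuming the widely reported pricing (~$0.24 per 1,000 API calls)
# and Apollo's reported volume (~7 billion requests per month).
price_per_call = 0.24 / 1_000        # dollars per API call
monthly_requests = 7_000_000_000     # reported monthly API calls

monthly_cost = monthly_requests * price_per_call    # $1,680,000
annual_cost = monthly_cost * 12                     # $20,160,000

print(f"${monthly_cost:,.0f} per month, ${annual_cost:,.0f} per year")
```

That lands right at the roughly $20 million per year the developer described.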

The news of the exception to Reddit’s API pricing was first reported by The Verge, citing comments from Reddit spokesperson Tim Rathschmidt.

In a statement also shared with TechCrunch, Rathschmidt said Reddit has “connected with select developers of non-commercial apps that address accessibility needs and offered them exemptions from our large-scale pricing terms.”

The announcement follows news of the planned protest across Reddit, which includes support from community moderators like those in the r/Blind subreddit, who have stressed that Reddit’s new terms would impact the apps they use to access the site, like Reddit for Blind and Luna for Reddit, as well as other screen readers. They said they would join the 48-hour protest from June 12 to June 14 as a result of Reddit’s changes. Other top subreddits are participating as well, including r/aww, r/videos, r/Futurology, r/LifeHacks, r/bestof, r/gaming, r/Music, r/Pics, r/todayilearned, r/art, r/DIY, r/gadgets, r/sports, r/mildlyinteresting and many others. Several of these communities count their members in the tens of millions.

After The Verge published the article noting the new exception would be made for accessibility apps, an r/Blind moderator, MostlyBlindGamer, said they had received no clarification from Reddit about how it is defining “accessibility-focused apps” or about any process for having apps qualify under the new guidelines. However, they did tell us they had a call with Reddit where they were asked about apps that provide accessibility features, though they were not told why Reddit was asking.

“We have strong concerns that Reddit lacks expertise to consider the varying access needs of the blind and visually impaired community,” MostlyBlindGamer wrote, adding they had also reached out to Reddit for further comment. They also noted that, over the past three years, r/Blind and another moderator, u/rumster, had reached out to Reddit repeatedly over accessibility concerns and had “received no substantive response.”

The r/Blind community is now organizing a list of apps that would qualify under the new exemption, which includes the screen readers Reddit for Blind, Luna for Reddit, Dystopia and BaconReader, but also other general-purpose apps that take advantage of iOS accessibility APIs or add accessible features, like the ability to adjust text size, contrast, color and more.

This list includes Apollo, one of the most popular Reddit clients, which the list says “works with most iOS accessibility technology, unlike the official app.”

However, we understand Reddit will likely not consider general-purpose apps like Apollo for exemption, focusing only on apps designed to address accessibility needs.

Until now, Reddit has held firm to its new API pricing, with no exceptions. Over the past weekend, a Reddit employee discussing the changes in the r/redditdev community accused Apollo of being “less efficient than its peers and at times has been excessive – probably because it has been free to do so.” Apollo developer Christian Selig asked for clarity over the inefficiencies — was the app inefficient, or “just being used more”? He did not receive a direct response.

Reddit has previously stressed the need for the new pricing, as spokesperson Rathschmidt said that “access to data has impact and costs involved, and in terms of safety and privacy we have an obligation to our communities to be responsible stewards of data,” and that the intention was not to “kill” third-party apps.


Apple reveals new accessibility features, like custom text-to-speech voices

Apple previewed a suite of new features today to improve cognitive, vision and speech accessibility. These tools are slated to arrive on the iPhone, iPad and Mac later this year. An established leader in mainstream tech accessibility, Apple emphasizes that these tools are built with feedback from disabled communities.

Assistive Access, coming soon to iOS and iPadOS, is designed for people with cognitive disabilities. It streamlines the interface of the iPhone and iPad, focusing specifically on making it easier to talk to loved ones, share photos and listen to music. The Phone and FaceTime apps are merged into one, for example.

The design is also made more digestible by incorporating large icons, increased contrast and clearer text labels to make the screen simpler. However, the user can customize these visual features to their liking, and those preferences carry across any app that is compatible with Assistive Access.

As part of the existing Magnifier tool, blind and low vision users can already use their phone to locate nearby doors, people or signs. Now Apple is introducing a feature called Point and Speak, which uses the device’s camera and LiDAR scanner to help visually disabled people interact with physical objects that have several text labels.

Image Credits: Apple

So, if a low vision user wanted to heat up food in the microwave, they could use Point and Speak to discern the difference between the “popcorn,” “pizza” and “power level” buttons — when the device identifies this text, it reads it out loud. Point and Speak will be available in English, French, Italian, German, Spanish, Portuguese, Chinese, Cantonese, Korean, Japanese and Ukrainian.
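Apple hasn’t published how Point and Speak works internally, but the interaction it describes (detect the text labels in the camera frame, pick the one nearest to where the user is pointing, read it aloud) is easy to sketch. Below is a minimal, hypothetical Python version: `detect_text_regions` is a stub standing in for on-device text recognition and LiDAR-based fingertip tracking, and the pyttsx3 library stands in for the system voice.

```python
import math
from dataclasses import dataclass

import pyttsx3  # off-the-shelf text-to-speech, standing in for the system voice


@dataclass
class TextRegion:
    text: str
    x: float  # center of the detected label, in image coordinates
    y: float


def detect_text_regions(frame) -> list[TextRegion]:
    """Stub for the real pipeline, which would run on-device OCR and use
    the LiDAR depth map to locate the pointing fingertip. Returns canned
    microwave labels here so the sketch runs end to end."""
    return [
        TextRegion("popcorn", 120, 40),
        TextRegion("pizza", 120, 80),
        TextRegion("power level", 120, 120),
    ]


def point_and_speak(frame, finger_x: float, finger_y: float) -> None:
    # Pick the detected label closest to where the user is pointing,
    # then read it aloud.
    regions = detect_text_regions(frame)
    nearest = min(regions, key=lambda r: math.hypot(r.x - finger_x, r.y - finger_y))
    engine = pyttsx3.init()
    engine.say(nearest.text)
    engine.runAndWait()


point_and_speak(frame=None, finger_x=118, finger_y=83)  # speaks "pizza"
```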

A particularly interesting feature from the bunch is Personal Voice, which creates an automated voice that sounds like you, rather than Siri. The tool is designed for people who may be at risk of losing their vocal speaking ability from conditions like ALS. To generate a Personal Voice, the user has to spend about 15 minutes clearly reading randomly chosen text prompts into their microphone. Then, using machine learning, the audio is processed locally on the iPhone, iPad or Mac to create the Personal Voice. The feature sounds similar to what Acapela has been doing with its “my own voice” service, which works with other assistive devices.

It’s easy to see how a repository of unique, highly trained text-to-speech models could be dangerous in the wrong hands. But according to Apple, this custom voice data is never shared with anyone, even Apple itself. In fact, Apple says it doesn’t even connect your Personal Voice with your Apple ID, since some households might share a log-in. Instead, users must opt in if they want a Personal Voice they make on their Mac to be accessible on their iPhone, or vice versa.

At launch, Personal Voice will only be available for English speakers, and can only be created on devices with Apple silicon.

Whether you’re speaking as Siri or your AI voice twin, Apple is making it easier for non-verbal people to communicate. Live Speech, available across Apple devices, lets people type what they want to say so that it can be spoken aloud. The tool is available at the ready on the lock screen, but it can also be used in other apps, like FaceTime. Plus, if users find themselves often needing to repeat the same phrases — like a regular coffee order, for example — they can store preset phrases within Live Speech.

Apple’s existing speech-to-text tools are getting an upgrade, too. Now, Voice Control will incorporate phonetic text editing, which makes it easier for people who type with their voice to quickly correct errors. So, if you see your computer transcribe “great,” but you meant to say “grey,” it will be easier to make that correction. This feature, Phonetic Suggestions, will be available in English, Spanish, French and German for now.
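Apple hasn’t said how Phonetic Suggestions picks its candidates, but a classic way to surface sounds-like alternatives is to index words by a phonetic key and rank candidates by how closely their keys match the misheard word. The sketch below uses a simplified Soundex key purely for illustration; it is not Apple’s algorithm.

```python
import difflib

# Simplified Soundex letter groups (vowels, h, w and y carry no code).
_CODES = {}
for digit, letters in [("1", "bfpv"), ("2", "cgjkqsxz"), ("3", "dt"),
                       ("4", "l"), ("5", "mn"), ("6", "r")]:
    for ch in letters:
        _CODES[ch] = digit


def soundex(word: str) -> str:
    """First letter plus up to three digits, skipping uncoded letters
    and collapsing adjacent duplicate codes."""
    word = word.lower()
    key, last = word[0].upper(), _CODES.get(word[0], "")
    for ch in word[1:]:
        code = _CODES.get(ch, "")
        if code and code != last:
            key += code
        last = code
    return (key + "000")[:4]


def phonetic_suggestions(heard: str, vocabulary: list[str], n: int = 3) -> list[str]:
    """Rank candidates by phonetic-key prefix overlap, then spelling similarity."""
    target = soundex(heard)

    def score(word: str):
        key, prefix = soundex(word), 0
        for a, b in zip(key, target):
            if a != b:
                break
            prefix += 1
        return (prefix, difflib.SequenceMatcher(None, heard, word).ratio())

    return sorted(vocabulary, key=score, reverse=True)[:n]


print(phonetic_suggestions("great", ["grey", "green", "crate", "gravy"]))
# ['grey', 'green', 'gravy']: "grey" ranks first for a transcribed "great"
```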

Image Credits: Apple

These accessibility features are expected to roll out across various Apple products this year. As for its existing offerings, Apple is expanding access to SignTime to Germany, Italy, Spain and South Korea on Thursday. SignTime offers users on-demand sign language interpreters for Apple Store and Apple Support customers.


SLAIT pivots from translating sign language to AI-powered interactive lessons

Millions of people use sign language, but methods of teaching this complex and subtle skill haven’t evolved as quickly as those for written and spoken languages. SLAIT School aims to change that with an interactive tutor powered by computer vision, letting aspiring ASL speakers practice at their own rate, as in any other language learning app.

SLAIT started back in 2021 as the sign language AI translator (hence the name): a real-time video chat and translation tool that could recognize most common signs and help an ASL speaker communicate more easily with someone who doesn’t know the language. But early successes slowed as the team realized they needed more time, money, and data than they were likely to get.

“We got great results in the beginning, but after several attempts we realized that, right now, there just is not enough data to provide full language translation,” explained Evgeny Fomin, CEO and co-founder of SLAIT. “We had no opportunity for investment, no chance to find our supporters, because we were stuck without a product launch — we were in limbo. Capitalism… is hard.”

“But then we thought, what can we do with the tech we’ve got from R&D? We realized we needed to do an education solution, because for education our technology is definitely good enough,” he continued. Not that there are lower standards for people learning, but the fluidity and subtlety of fluent and mature sign language is orders of magnitude more difficult to capture and translate than one or two words at a time.

“We found an extremely talented guy who helped us develop a product, and we made SLAIT School. And now we have our first customers and some traction!”

Existing online sign language courses (here’s a solid list if you’re curious) are generally pretty traditional. You have lectures and demonstrations, vocabulary lists and illustrations, and if you pay for it online, you can have someone review your work over video. It’s high quality and much of it is free, but it isn’t the kind of interactive experience people have come to expect from apps like Duolingo.

SLAIT School uses an updated version of the gesture recognition tech that powered the translator demo app to provide instant feedback on words and phrases. See a sign in video form, then try it yourself until you get it. Currently it’s for desktop browsers only, but the team is planning a mobile app as well.
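SLAIT hasn’t published details of its recognition model, but the shape of an instant-feedback practice loop is straightforward: classify each camera frame, compare the prediction with the target sign, and tell the student whether to try again. A minimal sketch, with the classifier stubbed out as a stand-in for SLAIT’s actual computer vision:

```python
import random
from dataclasses import dataclass


@dataclass
class Prediction:
    sign: str          # the model's best guess for the sign being made
    confidence: float


def classify_sign(frame) -> Prediction:
    """Stand-in for SLAIT's unpublished gesture-recognition model. A real
    version would run hand/pose landmarks through a trained classifier."""
    return Prediction(sign=random.choice(["HELLO", "THANK-YOU"]),
                      confidence=random.uniform(0.4, 0.99))


def practice(target_sign: str, frames, pass_threshold: float = 0.8) -> bool:
    """Interactive-lesson loop: keep checking attempts until the learner
    produces the target sign with enough confidence."""
    for attempt, frame in enumerate(frames, start=1):
        pred = classify_sign(frame)
        if pred.sign == target_sign and pred.confidence >= pass_threshold:
            print(f"Attempt {attempt}: correct! ({pred.confidence:.0%} confidence)")
            return True
        print(f"Attempt {attempt}: saw {pred.sign} at {pred.confidence:.0%}, try again")
    return False


practice("HELLO", frames=[None] * 10)  # stand-in frames; a real app reads the webcam
```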

“We have some room for improvement, but it’s exactly what we planned to deliver. Students can access the platform, do practice, make signs, interact with the AI tutor, and it costs them the same as one hour with an in-person tutor,” said Fomin. “The mobile apps we aim to do for free.”

Users can do the first few lessons to see that the system works; after that, it’s $39 monthly, $174 half-yearly or $228 yearly. The price may seem high compared with large-scale language apps supported by capital and other business models, but Fomin emphasized this is a new and special category, and real-world tutors are their primary competition.

Image Credits: SLAIT School

“We actively communicate with users and try to find the best prices and economic model that makes subscription plans affordable. We would really like to make the platform free, but so far we have not found an opportunity for this yet. Because this is a very niche product… we just need to make a stable economic model that will work in general,” he said.

The traction is also a flywheel for the company’s tech and content. By collecting information (with express opt-in consent, which he says the community is quite happy to provide), the team can expand and improve the curriculum and continue to refine its gesture recognition engine.

“We see two directions for growing in,” Fomin said. “The first is to cover more language groups, like British Sign Language and Japanese Sign Language. Also we would like to make the curriculum more adaptive, or provide a curriculum for medical and scientific signs. If we have enough investment to grow and scale, we can be a leading platform globally to automate talking to a doctor in sign language.”

After that, he said, “Maybe we can finally develop a translator. We can break through this barrier!”


Augmental lets you control a computer (and sex toys) with your tongue

Worldwide, about one in six people live with a disability. Whether through injury or illness, disability can manifest itself as an impediment to mobility. There have been fantastic advancements in communication and operational technologies, but there’s always room for improvement, and Augmental, an MIT Media Lab spinoff, thinks that it might be on to something with its newly announced MouthPad.

When you think of assistive devices, you might think of eye-tracking technology, mouth-controlled joysticks or voice recognition assistance. However, as Tomás Vega, one of Augmental’s co-founders, pointed out, many of these devices rely on old tech and — in the case of mouth-controlled joysticks — are obtrusive. They can be exhausting to use, don’t necessarily pay much heed to privacy, and can have other negative impacts, on teeth, for example.

“Augmental has a goal to create an interface that overcomes all those limitations,” says Vega. “Making one that is expressive, private, seamless and slick. We created the MouthPad, which is an internal interface that allows you to control a computer with no hands.”

The Augmental team regards MouthPad’s development as part of the normal progression of technology.

Tongue-controlled gaming is one of the angles Augmental takes into this market. Image Credits: Augmental

“But we can kind of give ourselves the context of just looking at the history of this,” said Corten Singer, co-founder of Augmental. “Mainframe rooms once existed, where the whole room was the computer and you plugged in actual cables; then computing moved to the desktop, and laptops are now with us. Our phones are in our pockets at all times. We have wristwatches that are smart; we’ve got buds in our ears. It all speaks to this more seamless human-machine integration that is coming.”

The MouthPad is similar to a dental retainer or plate, but rather than realigning your teeth or holding false ones in place, it provides the opportunity for a wearer to control Bluetooth-connected devices using their tongue. Each MouthPad is created individually using dental scans to ensure that it fits its user perfectly.

“With our super customizable design — the fact that we’re using dental scans — it allows us to be a perfect fit to the user’s anatomy,” said Singer. “We can design a very thin low-profile device that reduces or at least minimizes the impact you have on speech, because in reality, speech interfaces are helpful.”

“Yes, they’re not private, but they’re pretty awesome,” said Singer of speech interfaces. “And we want our design to be complementary with that. Of course, we also want to be able to serve as a standalone option, but we don’t want to remove the ability to use speech interfaces; we want to meet the technology where it’s at, as it currently exists in the landscape, and not necessarily interrupt it.”

Singer described how Augmental’s MouthPad enables eight degrees of freedom of control. The tongue can operate along the X and Y axes, as well as by applying pressure. There are motion sensors that support head tracking and activity monitoring. And a bite can even register as a click.
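Augmental hasn’t published its sensor or event format, but an illustrative mapping from the inputs Singer describes (tongue position and pressure, head motion, bite) to a generic pointer event might look like the following sketch; every name and threshold here is hypothetical.

```python
from dataclasses import dataclass


@dataclass
class MouthPadSample:
    """One hypothetical sensor sample, covering the inputs the article
    describes: tongue position and pressure, head motion, and bite."""
    tongue_x: float   # -1.0 .. 1.0 across the palate
    tongue_y: float   # -1.0 .. 1.0 front to back
    pressure: float   #  0.0 .. 1.0 tongue pressure on the touchpad
    head_dx: float    # head-motion deltas from the motion sensors
    head_dy: float
    bite: bool        # a bite can register as a click


def to_pointer_event(s: MouthPadSample, gain: float = 40.0) -> dict:
    """Map a sample to a cursor delta and click, the way a generic
    Bluetooth pointing device might. Pressure gates tongue movement so
    a resting tongue doesn't move the cursor."""
    moving = s.pressure > 0.1
    return {
        "dx": (gain * s.tongue_x if moving else 0.0) + s.head_dx,
        "dy": (gain * s.tongue_y if moving else 0.0) + s.head_dy,
        "click": s.bite,
    }


sample = MouthPadSample(0.3, -0.1, 0.6, 0.0, 0.0, bite=False)
print(to_pointer_event(sample))  # {'dx': 12.0, 'dy': -4.0, 'click': False}
```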

The wide variety of control options embedded into the MouthPad means that it can be used in conjunction with many different devices. When TechCrunch spoke to the Augmental team, they gave examples from their test users as to how people are making the most of the MouthPad.

“We have gamers that have quadriplegia, and they can only use one joystick,” said Vega. “So in order to play, you need two joysticks, so we were complementing their setup so they can now strafe and aim at the same time.”

“Then we have another user who is a designer that has issues doing click and drag in very accurate ways,” said Vega. “So we’ve created a clutching gesture so he can click in and click out.”

For work, for play, and for play

But as well as being used for gaming and for work, the MouthPad has also been connected to a sex toy to enable someone with a spinal cord injury to engage in autonomous sex play. Singer explained how some people have been surprised by this use case, but that really, it shouldn’t be regarded as peculiar or extraordinary: it’s a function that helps to make people’s lives better.

“It’s just part of this conversation that we think should be had, you know,” said Singer. “It’s at the core, it’s a theme of universal accessibility and digital equity across the board.” 

So far, the Augmental team has raised pre-seed funding from investors at MIT and Berkeley. It recently launched its website, which hosts a waitlist where people can sign up for a MouthPad. There is still a little delay before devices ship, however.

“We still need to finish our FCC certification before we can ship and sell devices,” said Singer, “putting them into the mouths of our customers.”

The company put together a video demo, in case you want to take a closer lick.


Apple tvOS 16.4 update gives light-sensitive users a ‘Dim Flashing Lights’ feature

Apple released the tvOS 16.4 update to the public yesterday, bringing various improvements to the system, including the new “Dim Flashing Lights” feature. The new accessibility option can detect flashes of light or strobe effects in a video and then automatically dim the display.

The “Dim Flashing Lights” feature is notable, as it will likely benefit Apple TV users with light sensitivity and, possibly, users prone to epileptic seizures. According to the Epilepsy Foundation, 2.7 million Americans have epilepsy, and approximately 3-5% of them are photosensitive. Photosensitive epilepsy is when seizures are triggered by flashing lights, patterns or color changes. Flashing lights can also cause headaches and migraines.
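Apple hasn’t detailed how the detection works, but a common heuristic for this kind of feature is to watch for rapid, large swings in frame brightness; accessibility guidance such as WCAG flags content that flashes more than three times in a second. A rough sketch of that idea (explicitly not Apple’s algorithm):

```python
import numpy as np


def detect_flashing(frames: np.ndarray, fps: float,
                    delta_threshold: float = 0.2,
                    flashes_per_second: float = 3.0) -> bool:
    """Rough flash/strobe heuristic: flag a clip when mean luminance jumps
    between consecutive frames more often than `flashes_per_second`.
    `frames` is an (N, H, W) grayscale array with values in 0..1."""
    luminance = frames.mean(axis=(1, 2))           # one brightness value per frame
    jumps = np.abs(np.diff(luminance)) > delta_threshold
    duration = len(frames) / fps
    return jumps.sum() / duration > flashes_per_second


def maybe_dim(frames: np.ndarray, fps: float, dim_factor: float = 0.4) -> np.ndarray:
    """Dim the clip when flashing is detected, pass it through otherwise."""
    return frames * dim_factor if detect_flashing(frames, fps) else frames


# A synthetic one-second strobing clip: alternating dark and bright frames.
strobe = np.tile(np.array([0.05, 0.95]), 12).reshape(24, 1, 1) * np.ones((24, 8, 8))
print(detect_flashing(strobe, fps=24))  # True: 23 large jumps in one second
```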

The tvOS update is available for the Apple TV 4K and Apple TV HD. It can be installed manually by going to “Settings,” “System” and then “Software Update.” If your Apple TV is set to update automatically, then it should be downloaded already.

The other updates weren’t as significant but included some performance and stability improvements.

Apple rolled out the tvOS 16.4 update shortly after tvOS 16.3.3, which fixed a bug that caused some Siri remotes to randomly disconnect from the Apple TV. When Apple released a new version of the Apple TV 4K in November 2022, some customers reported connectivity issues with the Siri remote. According to a Reddit user, the remote would disconnect from the Apple TV without explanation and would only work again after a full restart of the TV.


The Monarch could be the next big thing in Braille

For many people around the world, braille is their primary means of reading books and articles, and digital braille readers are an important part of that. The newest and fanciest yet is the Monarch, a multipurpose device that uses the startup Dot’s tactile display technology.

The Monarch is a collaboration between HumanWare and the American Printing House for the Blind. APH is an advocacy, education, and development organization focused on the needs of visually impaired people, and this won’t be their first braille device — but it is definitely the most capable by far.

It was called the Dynamic Tactile Device until it received its regal moniker at the CSUN Assistive Technology Conference, happening this week in Anaheim. I’ve been awaiting this device for a few months, having learned about it from APH’s Greg Stilson when I interviewed him for Sight Tech Global.

The device began development as a way to adapt the new braille pin mechanism (i.e. the raised dots that make up its letters) created by Dot, a startup I covered last year. Refreshable braille displays have existed for many years, but they’ve been plagued by high costs, low durability and slow refresh rates. Dot’s new mechanism allowed for closely placed, individually replaceable pins that raise quickly and easily, at a reasonable cost.

APH partnered with HumanWare to build this new tech into a large-scale braille reader and writer, code-named the Dynamic Tactile Device and now known as the Monarch.

These days, one of the biggest holdups in the braille reading community is the length and complexity of the publishing process. A new book, particularly a long textbook, may need weeks or months after being published for sighted readers before it is available in braille — if it is made available at all. And of course, once it is printed, it is many times the size of the original, because braille has a lower information density than ordinary type.

A woman holds a Monarch braille reader next to a stack of binders making up an “Algebra 1” textbook.

“To accomplish the digital delivery of textbook files, we have partnered with over 30 international organizations, and the DAISY Consortium, to create a new electronic braille standard, called the eBRF,” explained an APH representative in an email. “This will provide additional functionality to Monarch users, including the ability to jump page to page (with page numbers matching the print book’s page numbers) and the ability to embed tactile graphics directly into the book file, allowing the text and graphics to display seamlessly on the page.”

The graphic capability is a serious leap forward. A lot of previous braille readers were only one or two lines, so the Monarch’s 10 lines of 32 cells each let a person read the device more like they would a printed (or rather embossed) braille page. And because the grid of pins is continuous, it can also — as Dot’s reference device showed — display simple graphics.

Of course the fidelity is limited, but it’s huge to be able to pull up, on demand, a visual of a graph, an animal or, especially in early learning, a letter or number shape.
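The graphics trick falls naturally out of the hardware: with a continuous grid of pins, an image can be shown by sampling it down to pin resolution and raising a pin wherever the shape is present. Assuming each cell is a standard 8-dot block of two pins by four (an assumption; the pin pitch isn’t specified here), the Monarch’s 10 rows of 32 cells would give a 64-by-40 pin canvas:

```python
import numpy as np

CELL_W, CELL_H = 2, 4        # pins per braille cell (8-dot layout; an assumption)
COLS, ROWS = 32, 10          # the Monarch's stated cell grid
PIN_W, PIN_H = COLS * CELL_W, ROWS * CELL_H   # 64 x 40 pin canvas


def rasterize(image: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Downsample a grayscale image (values 0..1) to the pin grid by
    sampling the nearest source pixel for each pin, then thresholding:
    True means a raised pin."""
    h, w = image.shape
    ys = (np.arange(PIN_H) * h) // PIN_H   # nearest source row per pin row
    xs = (np.arange(PIN_W) * w) // PIN_W   # nearest source column per pin column
    return image[np.ix_(ys, xs)] > threshold


# A simple demo graphic: a filled circle on a 200x200 canvas.
yy, xx = np.mgrid[0:200, 0:200]
img = ((xx - 100) ** 2 + (yy - 100) ** 2 < 80 ** 2).astype(float)

pins = rasterize(img)
print(pins.shape)                 # (40, 64) raised/lowered pin states
print(pins.sum(), "raised pins")  # roughly half the canvas for this disc
```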

Now, you may look at the Monarch and think, “wow, that thing is big!” And it is pretty big — but tools for people with vision impairments must be used and navigated without the benefit of sight, and in this case also by people of many ages, capabilities, and needs. If you think of it more like a rugged laptop than an e-reader, the size makes a lot more sense.

There are a few other devices out there with continuous pin grids (a reader pointed out the Graphiti), but it’s as much about the formats and software as it is about the hardware, so let’s hope everyone gets brought in on this big step forward in accessibility.
