Apple reveals new accessibility features, like custom text-to-speech voices

Apple previewed a suite of new features today to improve cognitive, vision and speech accessibility. These tools are slated to arrive on the iPhone, iPad and Mac later this year. An established leader in mainstream tech accessibility, Apple emphasizes that these tools are built with feedback from disabled communities.

Assistive Access, coming soon to iOS and iPadOS, is designed for people with cognitive disabilities. Assistive Access streamlines the interface of the iPhone and iPad, specifically focusing on making it easier to talk to loved ones, share photos and listen to music. The Phone and FaceTime apps are merged into one, for example.

The design is also made more digestible by incorporating large icons, increased contrast and clearer text labels to make the screen simpler. However, users can customize these visual features to their liking, and those preferences carry across any app that is compatible with Assistive Access.

As part of the existing Magnifier tool, blind and low vision users can already use their phone to locate nearby doors, people or signs. Now Apple is introducing a feature called Point and Speak, which uses the device’s camera and LiDAR scanner to help visually disabled people interact with physical objects that have several text labels.

Image Credits: Apple

So, if a low vision user wanted to heat up food in the microwave, they could use Point and Speak to discern the difference between the “popcorn,” “pizza” and “power level” buttons — when the device identifies this text, it reads it out loud. Point and Speak will be available in English, French, Italian, German, Spanish, Portuguese, Chinese, Cantonese, Korean, Japanese and Ukrainian.

A particularly interesting feature from the bunch is Personal Voice, which creates an automated voice that sounds like you, rather than Siri. The tool is designed for people who may be at risk of losing their ability to speak from conditions like ALS. To generate a Personal Voice, the user spends about 15 minutes reading randomly chosen text prompts clearly into the microphone. Then, using machine learning, the audio is processed locally on the iPhone, iPad or Mac to create the Personal Voice. The approach sounds similar to what Acapela has been doing with its “my own voice” service, which works with other assistive devices.

It’s easy to see how a repository of unique, highly trained text-to-speech models could be dangerous in the wrong hands. But according to Apple, this custom voice data is never shared with anyone, even Apple itself. In fact, Apple says it doesn’t even connect your Personal Voice with your Apple ID, since some households might share a log-in. Instead, users must opt in if they want a Personal Voice they make on their Mac to be accessible on their iPhone, or vice versa.

At launch, Personal Voice will only be available for English speakers, and can only be created on devices with Apple silicon.

Whether you’re speaking as Siri or your AI voice twin, Apple is making it easier for non-verbal people to communicate. Live Speech, available across Apple devices, lets people type what they want to say so that it can be spoken aloud. The tool is available right from the lock screen, but it can also be used in other apps, like FaceTime. Plus, if users find themselves often needing to repeat the same phrases — like a regular coffee order, for example — they can store preset phrases within Live Speech.

Apple’s existing speech-to-text tools are getting an upgrade, too. Now, Voice Control will incorporate phonetic text editing, which makes it easier for people who type with their voice to quickly correct errors. So, if you see your computer transcribe “great,” but you meant to say “grey,” it will be easier to make that correction. This feature, Phonetic Suggestions, will be available in English, Spanish, French and German for now.

Image Credits: Apple

These accessibility features are expected to roll out across various Apple products this year. As for its existing offerings, Apple is expanding access to SignTime to Germany, Italy, Spain and South Korea on Thursday. SignTime offers users on-demand sign language interpreters for Apple Store and Apple Support customers.

Apple reveals new accessibility features, like custom text-to-speech voices by Amanda Silberling originally published on TechCrunch

SLAIT pivots from translating sign language to AI-powered interactive lessons

Millions of people use sign language, but methods of teaching this complex and subtle skill haven’t evolved as quickly as those for written and spoken languages. SLAIT School aims to change that with an interactive tutor powered by computer vision, letting aspiring ASL speakers practice at their own pace, as they would in any other language learning app.

SLAIT started back in 2021 as the sign language AI translator (hence the name): a real-time video chat and translation tool that could recognize most common signs and help an ASL speaker communicate more easily with someone who doesn’t know the language. But early successes slowed as the team realized they needed more time, money, and data than they were likely to get.

“We got great results in the beginning, but after several attempts we realized that, right now, there just is not enough data to provide full language translation,” explained Evgeny Fomin, CEO and co-founder of SLAIT. “We had no opportunity for investment, no chance to find our supporters, because we were stuck without a product launch — we were in limbo. Capitalism… is hard.”

“But then we thought, what can we do with the tech we’ve got from R&D? We realized we needed to do an education solution, because for education our technology is definitely good enough,” he continued. Not that there are lower standards for people learning, but the fluidity and subtlety of fluent and mature sign language is orders of magnitude more difficult to capture and translate than one or two words at a time.

“We found an extremely talented guy who helped us develop a product, and we made SLAIT School. And now we have our first customers and some traction!”

Existing online sign language courses (here’s a solid list if you’re curious) are generally pretty traditional. You have lectures and demonstrations, vocabulary lists, illustrations, and if you pay for it online, you can have someone review your work over video. It’s high quality and much of it is free but it isn’t the kind of interactive experience people have come to expect from apps like Duolingo.

SLAIT School uses an updated version of the gesture recognition tech that powered the translator demo app to provide instant feedback on words and phrases. See it in video form, then try it until you get it. Currently it’s for desktop browsers only but the team is planning a mobile app as well.

“We have some room for improvement, but it’s exactly what we planned to deliver. Students can access the platform, do practice, make signs, interact with the AI tutor, and it costs them the same as one hour with an in-person tutor,” said Fomin. “The mobile apps we aim to do for free.”

Users can do the first few lessons in order to see that the system works, then it’s $39 monthly, $174 half-yearly, and $228 yearly. The price may seem high compared with large-scale language apps supported by capital and other business models, but Fomin emphasized this is a new and special category, and real-world tutors are their primary competition.

Image Credits: SLAIT School

“We actively communicate with users and try to find the best prices and economic model that makes subscription plans affordable. We would really like to make the platform free, but so far we have not found an opportunity for this yet. Because this is a very niche product… we just need to make a stable economic model that will work in general,” he said.

The traction is also a flywheel for the company’s tech and content. By collecting information (with express opt-in consent, which he says the community is quite happy to provide) they can expand and improve the curriculum, and continue to refine their gesture recognition engine.

“We see two directions for growing in,” Fomin said. “The first is to cover more language groups, like British Sign Language and Japanese Sign Language. Also we would like to make the curriculum more adaptive, or provide a curriculum for medical and scientific signs. If we have enough investment to grow and scale, we can be a leading platform globally to automate talking to a doctor in sign language.”

After that, he said, “Maybe we can finally develop a translator. We can break through this barrier!”

SLAIT pivots from translating sign language to AI-powered interactive lessons by Devin Coldewey originally published on TechCrunch

Augmental lets you control a computer (and sex toys) with your tongue

Worldwide, about one in six people live with a disability. Whether through injury or illness, disability can manifest itself as an impediment to mobility. There have been fantastic advancements in communication and operational technologies, but there’s always room for improvement, and Augmental, an MIT Media Lab spinoff, thinks that it might be on to something with its newly announced MouthPad.

When you think of assistive devices, you might think of eye-tracking technology, mouth-controlled joysticks or voice recognition assistance. However, as Tomás Vega, one of Augmental’s co-founders, pointed out, many of these devices rely on old tech and — in the case of mouth-controlled joysticks — are obtrusive. They can be exhausting to use, don’t necessarily pay much heed to privacy and they can have other negative impacts, for example, on teeth.

“Augmental has a goal to create an interface that overcomes all those limitations,” says Vega. “Making one that is expressive, and private and seamless and slick. We created a mouth pad which is an internal interface that allows you to control a computer with no hands.”

The Augmental team regards MouthPad’s development as part of the normal progression of technology.

Tongue-controlled gaming is one of the angles Augmental takes into this market. Image Credits: Augmental

“But we can kind of give ourselves the context of just looking at the history of this,” said Corten Singer, co-founder of Augmental. “Mainframe rooms once existed, where the whole room was the computer and you plugged in actual cables; then computing went to your desktop, and laptops are now with us. Our phones are in our pockets at all times. We have wristwatches that are smart; we’ve got buds in our ears. It’s kind of speaking towards this more seamless human-machine integration that is coming in.”

The MouthPad is similar to a dental retainer or plate, but rather than realigning your teeth or holding false ones in place, it provides the opportunity for a wearer to control Bluetooth-connected devices using their tongue. Each MouthPad is created individually using dental scans to ensure that it fits its user perfectly.

“With our super customizable design — the fact that we’re using dental scans — it allows us to be a perfect fit to the user’s anatomy,” said Singer. “We can design a very thin low-profile device that reduces or at least minimizes the impact you have on speech, because in reality, speech interfaces are helpful.”

“Yes, they’re not private, but they’re pretty awesome,” said Singer of speech interfaces. “And we want our design to be complementary with that. Of course we also want to be able to serve as a standalone option, but we don’t want to remove the ability to use speech interfaces; we want to meet the technology as it currently exists in the landscape, where it’s at, not necessarily interrupting it.”

Singer described how Augmental’s MouthPad offers eight degrees of freedom of control. The tongue can operate along the X and Y axes, as well as by applying pressure. There are motion sensors that enable head tracking and activity monitoring. And there’s even a possibility for bites to register as a click.

The wide variety of control options embedded into the MouthPad means that it can be used in conjunction with many different devices. When TechCrunch spoke to the Augmental team, they gave examples from their test users as to how people are making the most of the MouthPad.

“We have gamers that have quadriplegia, and they can only use a joystick,” said Vega. “So in order to play, you need two joysticks, so we were complementing a setup so they can now strafe and aim at the same time.”

“Then we have another user who is a designer that has issues doing click and drag in very accurate ways,” said Vega. “So we’ve created a clutching gesture so he can click in and click out.”

For work, for play, and for play

But as well as being used for gaming and for work, the MouthPad has also been connected to a sex toy to enable someone with a spinal cord injury to engage in autonomous sex play. Singer explained how some people have been surprised by this use case, but that really, it shouldn’t be regarded as peculiar or extraordinary: it’s a function that helps to make people’s lives better.

“It’s just part of this conversation that we think should be had, you know,” said Singer. “It’s at the core, it’s a theme of universal accessibility and digital equity across the board.” 

So far, the Augmental team has raised pre-seed funding from investors at MIT and Berkeley. It has recently launched its website where it has opened up a waitlist, enabling people to sign up for a MouthPad. There is still a little delay, however.

“We still need to finish our FCC certification before we can ship and sell devices,” said Singer, “putting them into the mouths of our customers.”

The company put together a video demo, in case you want to take a closer lick:

Augmental lets you control a computer (and sex toys) with your tongue by Haje Jan Kamps originally published on TechCrunch

Apple tvOS 16.4 update gives light-sensitive users a ‘Dim Flashing Lights’ feature

Apple released the tvOS 16.4 update to the public yesterday, bringing various improvements to the system, including a new “Dim Flashing Lights” feature. The new accessibility option can detect flashes of light or strobe effects and then automatically dim the display of a video.

The “Dim Flashing Lights” feature is notable as it will likely benefit Apple TV users with light sensitivity or, possibly, users prone to epileptic seizures. According to the Epilepsy Foundation, 2.7 million Americans have epilepsy, and approximately 3-5% of them are photosensitive. Photosensitive epilepsy is when seizures are triggered by flashing lights, patterns or color changes. Flashing lights can also cause headaches and migraines.

The tvOS update is available for the Apple TV 4K and Apple TV HD. It can be installed manually by going to “Settings,” “System” and then “Software Update.” If your Apple TV is set to update automatically, then it should be downloaded already.

The other updates weren’t as significant but included some performance and stability improvements.

Apple rolled out the tvOS 16.4 update shortly after tvOS 16.3.3, which fixed a bug that caused some Siri remotes to randomly disconnect from the Apple TV. When Apple released a new version of the Apple TV 4K in November 2022, some customers reported connectivity issues with the Siri remote. According to a Reddit user, the remote would disconnect from the Apple TV without explanation and would only work again after a full restart of the TV.

Apple tvOS 16.4 update gives light-sensitive users a ‘Dim Flashing Lights’ feature by Lauren Forristal originally published on TechCrunch

The Monarch could be the next big thing in Braille

For many people around the world, braille is their primary way of reading books and articles, and digital braille readers are an important part of that. The newest and fanciest yet is the Monarch, a multipurpose device that uses tactile display technology from the startup Dot.

The Monarch is a collaboration between HumanWare and the American Printing House for the Blind. APH is an advocacy, education, and development organization focused on the needs of visually impaired people, and this isn’t their first braille device — but it is definitely the most capable by far.

The device was called the Dynamic Tactile Device until it received its regal moniker at the CSUN Assistive Technology Conference, happening this week in Anaheim. I’ve been awaiting this device for a few months, having learned about it from APH’s Greg Stilson when I interviewed him for Sight Tech Global.

The device began development as a way to adapt the new braille pin mechanism (i.e. the raised dots that make up braille letters) created by Dot, a startup I covered last year. Refreshable braille displays have existed for many years, but they’ve been plagued by high costs, low durability and slow refresh rates. Dot’s new mechanism allows for closely placed, individually replaceable pins that can be raised easily and quickly, at a reasonable cost.

APH partnered with HumanWare to build this new tech into a large-scale braille reader and writer code-named the Dynamic Tactile Device, and now known as the Monarch.

These days one of the biggest holdups in the braille reading community is the length and complexity of the publishing process. A new book, particularly a long textbook, may need weeks or months after being published for sighted readers before it is available in braille — if it is made available at all. And of course once it is printed, it is many times the size of the original, because braille has a lower information density than ordinary type.

A woman holds a Monarch braille reader next to a stack of binders making up an “Algebra 1” textbook.

“To accomplish the digital delivery of textbook files, we have partnered with over 30 international organizations, and the DAISY Consortium, to create a new electronic braille standard, called the eBRF,” explained an APH representative in an email. “This will provide additional functionality to Monarch users, including the ability to jump page to page (with page numbers matching the print book page numbers) and the ability [to embed] tactile graphics directly into the book file, allowing the text and graphics to display seamlessly on the page.”

The graphic capability is a serious leap forward. A lot of previous braille readers were only one or two lines, so the Monarch having 10 lines of 32 cells each allows for reading the device more like a person would read a printed (or rather embossed) braille page. And because the grid of pins is continuous, it can also — as Dot’s reference device showed — display simple graphics.

Of course the fidelity is limited, but it’s huge to be able to pull up a visual on demand of a graph, animal, or especially in early learning, a letter or number shape.
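A little arithmetic makes the jump from line displays to a full page concrete. Below is a toy Python sketch of a continuous pin field like the Monarch's: 10 rows of 32 braille cells, with each cell assumed here to be 2 pins wide by 4 pins tall. The cell geometry, function names and sample data are illustrative assumptions, not specifications from HumanWare or APH.

```python
# Toy model of a continuous pin field: 10 rows x 32 braille cells,
# each cell assumed to be 2 pins wide by 4 pins tall, giving a
# 64 x 40 field of individually raisable pins.

ROWS, COLS = 10, 32                          # braille cells
PIN_W, PIN_H = 2, 4                          # assumed pins per cell
WIDTH, HEIGHT = COLS * PIN_W, ROWS * PIN_H   # 64 x 40 pins

def blank_field():
    """A field with every pin lowered (0 = down, 1 = raised)."""
    return [[0] * WIDTH for _ in range(HEIGHT)]

def raise_bar(field, x, height):
    """Raise a 2-pin-wide vertical bar rising from the bottom edge,
    e.g. one bar of a simple tactile bar chart."""
    for y in range(HEIGHT - height, HEIGHT):
        field[y][x] = field[y][x + 1] = 1

field = blank_field()
for i, value in enumerate([5, 12, 20, 30, 18]):  # made-up data
    raise_bar(field, 4 + i * 6, value)

raised = sum(map(sum, field))
print(raised)  # total pins raised for the five bars
```

Because no pin is tied to a particular braille cell, the same 2,560-pin field could just as easily hold ten lines of text, which is exactly the flexibility a continuous grid buys.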

Now, you may look at the Monarch and think, “wow, that thing is big!” And it is pretty big — but tools for people with vision impairments must be used and navigated without the benefit of sight, and in this case also by people of many ages, capabilities, and needs. If you think of it more like a rugged laptop than an e-reader, the size makes a lot more sense.

There are a few other devices out there with continuous pin grids (a reader pointed out the Graphiti), but it’s as much about the formats and software as it is about the hardware, so let’s hope everyone gets brought in on this big step forward in accessibility.

The Monarch could be the next big thing in Braille by Devin Coldewey originally published on TechCrunch

Wheel the World grabs $6M to offer guaranteed accessibility, price match for hotel rooms

Wheel the World wants to open the world up to people with disabilities and has made it its mission since 2018 to provide travel accommodations and experiences that fit their needs, both in the United States and worldwide.

Today, the company announced two new booking tool features: guaranteed accessibility for hotel rooms booked from its website and a price match offer.

These new features follow the closing of $6 million in what it is calling “pre-Series A” funding. Kayak Ventures led the investment and was joined by Detroit Venture Partners, REI Co-op Path Ahead Ventures, former CEO Gillian Tans, Dadneo, CLIN Fund, Amarena and WeBoost.

We caught up with founders Alvaro Silberstein and Camilo Navarro again after covering their $2 million seed round in 2021. At the time, Wheel the World had offered travel packages to 50 destinations with experiences specially created for people with disabilities, seniors and their families — think activities like sky-diving, kayaking and snorkeling, but crafted to enable people to do them without limits.

Wheel the World now offers more than 200 destinations, including some new group tours that create itineraries for groups of eight to 10 with guaranteed accessibility, allowing travelers with disabilities to travel together and explore places like Costa Rica, Morocco and Alaska. The company also now price matches hotel rooms from other booking platforms, including Expedia and Travelocity.

“We have identified those details of accessibility and the rooms that have those details,” Silberstein said. “Through our operations, we can guarantee accessibility that other platforms cannot. If you find something that we didn’t promise, we will give you your money back. As powerful as it sounds, we allowed that a month ago, and it’s working really well.”

The concept has caught on: In the past year, Wheel the World experienced a four-fold increase in the number of travelers booking through its platform over 2021. In addition, 40% of customers booked trips through the company more than twice.

The new capital comes via a SAFE note that will convert as part of the company’s future Series A round, Navarro told TechCrunch. It enables the company to pursue new partnerships with destination management organizations to identify new accessible travel offerings, with a goal of reaching 12,000 travelers booking through its platform by December 2024, CEO Silberstein said in an interview.

Seventy-nine percent of the company’s users come from the U.S., and the founders said over the next two years, they will focus on this audience with additional product development around customer experience in terms of booking multiple trips, places to stay and group tours. In addition, Wheel the World will continue to grow its community of travelers and offer more opportunities for them to get to know each other and travel together.

“We want to transform our product into a community-based product and help them learn how they can contribute,” Silberstein said. “We also want them to have more interaction through a referral program so users can make recommendations to other users and we can start receiving accessibility validations. Those are things that we haven’t done yet.”

Wheel the World grabs $6M to offer guaranteed accessibility, price match for hotel rooms by Christine Hall originally published on TechCrunch

Senator Markey calls on Elon Musk to reinstate Twitter’s accessibility team

After several rounds of layoffs, Twitter’s staff is down from about 7,500 employees to fewer than 2,000 — and one of the numerous cuts across the company eliminated the platform’s entire accessibility team last year.

In an open letter to Elon Musk, Senator Ed Markey (D-MA) called on the new Twitter owner to bring the accessibility team back.

“Not surprisingly, since you shut down Twitter’s Accessibility Team, disabled users have reported increased difficulty and frustration using Twitter,” Markey wrote.

Like any social platform, Twitter has had its foibles when it comes to accessibility — in 2020, Twitter didn’t even have an accessibility team and only established one after public outcry when the company rolled out voice tweets without captions. But in the few years Twitter did have an accessibility team, the company rolled out features for alt text on images, automatic captioning on videos, and captions for Spaces live audio rooms and voice tweets. Many disabled users found community on Twitter, because its built-in accessibility features made it easier to use than other social platforms.

With no accessibility team at Twitter anymore, it’s not as though the platform’s features have simply remained dormant. Captions on Twitter Spaces have disappeared altogether, making the feature unusable for any user who is Deaf or hard of hearing. To disabled users, this sends the message that accessibility is no longer part of the conversation at Twitter HQ (which, by the way, the company has stopped paying rent for).

Twitter’s accessibility was dealt another blow when the platform cut off access to its API. Now, third-party developers will have to pay yet-to-be-determined monthly fees to build on a platform that previously welcomed them for free. This means that long-beloved apps like Iconfactory’s Twitterrific, which gave users expanded accessibility features, are no longer available.

“I received more than a few emails from Blind users who were upset and outraged because they would most likely have to stop using Twitter without accessible third-party clients like Twitterrific,” Iconfactory co-founder Gedeon Maheux told Forbes.

Markey’s letter poses a number of questions to Musk, to which Markey requested responses by March 17. Markey asks why Musk eliminated the accessibility team and whether he will reinstate it, about Twitter’s compliance with ADA and FCC accessibility regulations, why Twitter removed captioning from Spaces, and whether the platform will commit to creating user-friendly experiences for all kinds of content.

“All of these changes under your leadership signal a disregard for the needs of disabled people,” Markey wrote to Musk.

Senator Markey calls on Elon Musk to reinstate Twitter’s accessibility team by Amanda Silberling originally published on TechCrunch

MrBeast’s blindness video puts systemic ableism on display

Recently, megastar creator MrBeast posted a video to his YouTube in which he spotlights numerous blind and visually impaired people who have undergone a surgical procedure that “cures” their blindness. As of this writing, the video has been viewed more than 76 million times, and the responses have been visceral in both praise and contempt. For his part, MrBeast has taken to Twitter to publicly bemoan the fact that so many are so angry at him for putting on what amounts to a publicity stunt under the guise of selfless charity.

The truth is straightforward: The video was more ableist than altruistic.

Before delving into the many layers of why the video is troublesome, it’s important to issue a caveat. However problematic MrBeast’s premise in producing the video, the people who participated — the patients and their doctors — should not be vilified. They made the decision to go through with the surgery of their own volition. The reasoning behind making that choice goes far beyond the scope of this article.

Through the broadest lens, the biggest problem with wanting to “cure” blindness is that it reinforces a sense of moral superiority held by those without disabilities over those who are disabled. Although not confronted nearly as often as racism and sexism, systemic ableism is pervasive through all parts of society. The fact of the matter is that the majority of abled people view disability as a failure of the human condition; as such, people with disabilities should be mourned and pitied. More pointedly, as MrBeast stated in his video’s thumbnail, disabilities should be eradicated — cured.

On one level, disability being viewed as a failure of the human condition is technically correct. That’s why disabilities are what they are: The body doesn’t work as designed in some way(s). If disabilities were computer software, engineers would be tasked with finding and fixing the bugs.

Yet the human body isn’t some soulless, inanimate machine that requires perfection in order to work properly or have value. I’ve been subject to a barrage of harassment on Twitter since tweeting my thoughts on MrBeast’s video. In between calls for me to imbibe a bottle of bleach, most of them have been hurling retorts at me that question why I wouldn’t want to “fix” or “cure” what prevents people from living what ostensibly is a richer, fuller life because blindness would be gone. A blind person, they said, could suddenly see the stars, a rainbow, a child’s smile or whatever other romantic notion one could conjure.

Elizabeth Barrett Browning would be proud of the way I count the ways in which this myopic perspective lacks perspective.

For one thing, the doctors shown in the video aren’t miracle workers. There is no all-encompassing cure for blindness. If the people who participated in this surgery have had their lives changed for the better by regaining their sight, more power to them.

That said, we know nothing of their visual acuities before the operation, nor do we know what the long-term prognosis is for their vision. That MrBeast proclaims to “cure” blindness is essentially baseless.

At a fundamental level, MrBeast’s video is inspiration porn, meant to portray abled people as the selfless heroes waging war against the diabolical villain known as disability. And it’s ultimately not meant for the disabled person. It’s for abled people to feel good about themselves and about disabled people striving to become more like them — more normal. For the disability community, inspiration porn often is met with such derision because the message isn’t about us as human beings; it’s about a group that’s “less than” the masses. This is where structural ableism again rears its ugly head.

Think about it: If you fell and broke your hand or your wrist, that would indeed be bad. You’d be disabled for some period of time. But the expectation during your recovery would be that you’re still human, still yourself, still reasonably able to do everything you could do prior. You may find certain things inaccessible for a while and need some forms of assistive technology, but you would expect to be treated with dignity, and you wouldn’t expect someone to miraculously reset your broken bone. Yet this is what MrBeast (and his millions of minions) are peddling with this video. They don’t recognize the humanity of blind people; they only recognize the abhorrence of not being able to see.

In other words, abled people have a tendency to think disability defines us.

In many meaningful ways, yes, our disabilities do define us to a large degree. After all, no one can escape their own bodies. But what about our traits as individuals? Our families, our work, our relationships and much more? Surely people are aware of things like the Paralympics and wheelchair basketball leagues, for instance. The point is, disabled people are no different in our personal makeup than anyone else. We shouldn’t be pitied and we certainly don’t require uplifting in ways like MrBeast suggests.

I have multiple disabilities due to premature birth, but most people know me as a partner, a brother, a cousin and a friend who loves sports, likes to cook and listen to rap music, and a distinguished journalist. Everyone in my orbit is well aware of my disabilities, but they do not judge me solely based on them. They know the real me — they know my disabilities aren’t the totality of my being.

My lived experience is unique because I have so much to draw from: I have visual disabilities, physical motor disabilities and speech disabilities, and my parents were both fully deaf. Growing up the older of two children, I served as the unofficial in-house interpreter for my parents. As a CODA, I straddled the line between the deaf and hearing worlds. I know firsthand how deaf people look at their culture and their ways of life with immense pride. Deaf culture is real. If someone “cured” deafness, what would happen to the people? The culture would fade away, because there’d be no reason for sign language, or the experiences derived from it, to exist.

I had a mentor my senior year of high school who asked me the day we met in my counselor’s office if I would go back and change things in my life so that I wouldn’t have disabilities. I told him rather unequivocally that I wouldn’t. He was taken aback by my answer, but I explained my rationale was simple: It would change who I am.

Almost a quarter-century later, my feelings are unchanged. Granted, I have my moments. I curse the fact I can’t get in a car and go anywhere I want, anytime I want. Likewise, I often lament the fact that my limited range of motion caused by cerebral palsy prevents me from literally moving as freely as I need or want to sometimes.

All told, however, my disabilities have enabled me to thrive in many respects. The relationships I’ve made, the knowledge I’ve acquired, the journalism career I’ve had for close to a decade — all of this would not have been possible in an alternate universe where I wasn’t a lifelong disabled person. To me, that’s the ultimate silver lining.

I don’t presume to be an oracle when it comes to accessibility and assistive technologies. I know a lot, but I don’t know it all. Similarly, I don’t presume to speak for all blind people or the disability community at large. Blindness in particular is a spectrum, and I proclaim to know only where my eyesight sits on that line. I also know this: A cure is not the answer to “helping” blind people, let alone anyone else with a disability.

Disabled people don’t need pity. We don’t need to be uplifted. We don’t need cures. What we desperately do need is some recognition of our basic humanity. We need abled people to start seeing us as the people we are instead of the sorrowful, burdened outcasts society likes to portray us as.

MrBeast (and his defenders) easily fall into the trap of perpetuating that deeply entrenched ableist mindset; as I wrote earlier, ableism is just as pervasive as racism and sexism. Simply put, we need allies — people who see us as real people.

Finding a cure for cancer or a cure for AIDS is one thing. Disabilities need no cure. What truly needs curing is society’s proclivity to view the disability community as little more than real-life characters from a Tod Browning film. Disabled people are not freaks. Disability isn’t a bad word. You can learn a lot from us.

MrBeast’s blindness video puts systemic ableism on display by David Riggs originally published on TechCrunch

Sony aims to make PlayStation more accessible with Project Leonardo controller

Sony has been embracing accessibility options in its games for a few years now, but one place it has lagged behind perennial rival Microsoft is accessible hardware. It aims to change that with Project Leonardo, a new gaming controller designed to be customizable to the needs of any player.

The device was described only generally on stage, but it appears to be a hub with swappable parts and plates that let users connect various other items, such as breath tubes, pedals, and switches of all kinds to activate different buttons.

Each UFO-shaped Project Leonardo device can handle an analog joystick plus eight buttons, and the devices can be paired with each other or with a traditional controller to complement or replace any function. Sony worked with accessibility experts to make sure the design is useful to a wide range of people.

It’s similar to how Microsoft’s Xbox Adaptive Controller works — some stuff is built in, some you provide yourself. Everyone’s accessibility needs are a little different, and so it’s important to support the solutions people already have. Besides, that stuff is expensive!

We’re waiting to find out more about this project, when it will be available for people to use, the technical details, and the design process behind it. We’ll update this post with more info when it’s available.

Sony aims to make PlayStation more accessible with Project Leonardo controller by Devin Coldewey originally published on TechCrunch

Amazon’s Echo Show adds more accessibility features, including ‘Gestures’ and text-to-speech

Amazon today is introducing a small handful of new features for its digital assistant Alexa that aim to make its devices more accessible. The company is launching two new ways to interact with Alexa without speaking, including support for Gestures on Echo Show devices, which will allow users to interact with the device by raising a hand — something that can also come in handy for anyone using an Echo while cooking who wants to quickly dismiss a timer without having to speak. In addition, Amazon is rolling out text-to-speech options and a way to turn on all closed captioning features at once across devices.

The new features are the latest to arrive in a push to make Alexa a more accessible tool, and follow the fall launch of a “Tap to Alexa” option for Fire tablets that allows users to interact with the voice assistant without speaking.

With Gestures, Amazon says users will be able to hold up their hand — palm facing the camera — to dismiss timers on the Echo Show 8 (2nd Gen) or Echo Show 10 (3rd Gen) devices. Beyond enabling nonverbal customers to use the device, Amazon also envisions a common scenario where users in the kitchen are cooking while listening to music and don’t want to have to scream over their tunes to be heard by Alexa or touch the screen with messy hands. The gesture could give them an easier way to interact with Alexa in that case.

Gestures are not enabled by default — you’ll have to visit Settings, then Device Options to access the option. (Presumably, by calling it “Gestures” and not “Gesture,” Amazon has other plans in store for this feature down the road.)

To work, Gestures uses on-device processing to detect the presence of a raised hand during an active timer, Amazon said. Users will not have to enroll in other visual identification features like Visual ID, the Echo Show’s facial recognition system, to use it.
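That gating logic (act on a raised palm only while a timer is running) can be sketched in a few lines. This is a hypothetical illustration of the behavior described above, not Amazon’s actual code; the class and method names are invented:

```python
# Hypothetical sketch: a raised-palm gesture dismisses a timer
# only while one is active, and is ignored otherwise.

class TimerGestureGate:
    def __init__(self):
        self.timer_active = False

    def start_timer(self):
        self.timer_active = True

    def on_palm_detected(self) -> bool:
        """Dismiss the timer if one is active; otherwise ignore the gesture."""
        if self.timer_active:
            self.timer_active = False
            return True   # timer dismissed
        return False      # no active timer, nothing to do
```

In this sketch, a palm detected with no timer running is a no-op, which matches the described design of only looking for the gesture “during an active timer.”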

The company is also adding text-to-speech functionality to the new Tap to Alexa feature, which today gives customers a dashboard of tappable Alexa commands on the Echo Show’s screen. With text-to-speech, customers will be able to type out phrases on an on-screen keyboard and have them spoken aloud by their Echo Show. These phrases can also be saved as shortcut tiles and customized with their own icons and colors.
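That tile flow can be sketched in a few lines of Python. This is purely an illustrative model of the feature as described, not Amazon’s API; `ShortcutTile`, `speak`, and `tap_tile` are all invented names, and `speak` stands in for a real text-to-speech engine:

```python
# Hypothetical sketch: a saved shortcut tile pairs a typed phrase
# with display customizations, and tapping it voices the phrase.

from dataclasses import dataclass

@dataclass
class ShortcutTile:
    label: str
    phrase: str
    icon: str = "speech-bubble"
    color: str = "#0077FF"

def speak(phrase: str) -> str:
    # Stand-in for a real TTS call; returns the text that would be
    # voiced so the flow is easy to trace.
    return f"(spoken) {phrase}"

# A user saves a tile once, then taps it to have the phrase spoken.
tiles = {"hungry": ShortcutTile(label="Hungry", phrase="I'm hungry")}

def tap_tile(key: str) -> str:
    return speak(tiles[key].phrase)
```

The point of the model is that the phrase is authored once and replayed on demand, which is what makes the feature useful for quick, repeated communication.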

The feature aims to help customers who have speech disabilities or are nonverbal or nonspeaking, who can use text-to-speech to communicate with others in their home, for example by typing out “I’m hungry.”

Image Credits: Amazon

The third new addition is called Consolidated Captions, and it allows customers to turn on Call Captioning, Closed Captioning and Alexa Captions at once across all their supported Echo Show devices. That means customers can enable captions for Alexa calls and for Alexa’s responses in one step, which helps those who are deaf or hard of hearing, or who are using Alexa in loud or noisy environments, Amazon says.

This feature is enabled by tapping Settings, then Accessibility, and selecting “Captions.”

Image Credits: Amazon

The new features come at a time when Amazon is trying to determine how to proceed with Alexa. The division saw significant layoffs and, per an Insider report, is said to be on pace to lose Amazon around $10 billion this year, as opportunities to monetize the platform, like the voice apps known as Skills, have failed to gain traction with consumers. Alexa owners also tend to use their devices only for basic tasks, like playing music, operating smart home devices, setting timers and alarms, and getting weather information.

More recently, Amazon has been positioning its Echo Show devices as more of a family hub or alternative to the kitchen TV. Its wall-mounted Echo Show 15, for example, offers widgets for things like to-do lists and shopping lists and just rolled out Fire TV streaming.

Amazon says the new Echo Show features are rolling out now.

Amazon’s Echo Show adds more accessibility features, including ‘Gestures’ and text-to-speech by Sarah Perez originally published on TechCrunch