Exciting New Technology For Those Who Are Visually Impaired Or Have Low Vision


People with severe vision loss or other visual impairments have few options for exploring the world on their own. That is surprising given the scale of the problem: the American Foundation for the Blind reported that 32.2 million people in the United States had vision loss in 2018, and the World Health Organization estimated that 285 million people worldwide were visually impaired in 2010, 39 million of whom were blind.

Visual impairment is a broad term that covers a spectrum of visual function, from “low vision” to “total blindness.”

However, technology is helping to break down barriers, and AI systems are making significant progress in enhancing accessibility.

AI and machine learning technologies, particularly computer vision, have matured to the point that they can genuinely benefit people who are blind or visually impaired.

Video magnifiers, accessibility capabilities on computers (such as text magnification), text-to-speech applications, and voice-recognition software may also be used to help those with visual impairment.

With each passing year, technology for those who have lost their eyesight improves.

Apps for people with low vision


People with disabilities now have access to a range of assistive technologies thanks to smartphones. The Be My Eyes app, available on Apple iOS and Android, lets blind and low-vision people video chat with a sighted person. The app streams video from the phone’s camera to sighted volunteers, who can help identify colors, read text, or provide whatever assistance is needed in real time.

Thanks to their widespread availability, smartphones have become the ideal enabling tool for people with low vision or visual impairment. According to Pew Research, smartphone ownership in the United States grew from 35% in 2011 to 81% in 2019, and Cisco projects that more than 70% of the world’s population will have mobile access by 2023.

The LookTel Money Reader app uses a smartphone camera to photograph paper money and then reads the bill’s denomination aloud.

Captioning built into products like Skype (owned by Microsoft) and Google Slides is another example of AI in action. Natural language processing, another application of AI and machine learning, interprets the text in a conversation or on a slide and reads it aloud. This happens in real time, making it easier for blind and visually impaired people to follow discussions and presentations.

Microsoft has taken the concept of an assistive app a step further. The Seeing AI app from Microsoft uses your phone’s camera to capture your surroundings and then narrates what it sees. The AI can read text, scan barcodes to identify objects, and even recognize individuals.

A sighted person walking down the street takes in every detail of their surroundings. Microsoft Soundscape mimics this by creating a comprehensive audio map that describes what is going on around a visually impaired person.

It builds a continually updated 3-D sound map of the local environment using location data, sound beacons, and synthetic 3-D stereo sound to generate layers of context and detail.

Braille

Braille has been used as a tactile means of reading with fingers for approximately 200 years. The upgraded version of Narrator, the screen-reader for Microsoft Windows, now supports digital Braille displays and keyboards, jumping from the page to the screen.

Braille touchscreens that work much like tablets have already proven popular among students and instructors, independent of Microsoft’s efforts. Two examples on display at the Assistive Technology Industry Association’s 2019 conference in Orlando, Florida: the BraiBook, a Braille e-reader that fits in the palm of your hand, and the Braille Buzz, a toy designed to teach Braille to children.

Navigation for people with low vision


Researchers at the University of Georgia have developed a novel device to help blind or low-vision people navigate independently: a backpack containing Intel’s artificial intelligence (AI) software, a GPS receiver, and a high-definition camera.

To see the wearer’s surroundings, the system uses an AI camera that can be mounted in a vest or fanny pack. The camera communicates with the laptop in the backpack, which alerts the user to signs or obstacles through Bluetooth headphones.

The smartphone apps listed above can connect you with volunteers or use artificial intelligence to read traffic signs through your camera, and audio navigation is available in both Google Maps and Apple Maps. Some blind people, however, have trouble with smartphones because they cannot use the touchscreen.

For people who are blind or partially sighted, Bluetooth beacons, like those deployed by the company Foresight Augmented Reality, act as highly accurate, personalized guides. While ordinary GPS can direct users to a general location, beacons installed in a business, restaurant, or public facility can guide them to the right entrance. Once inside, other beacons can lead the user to the restroom or other key amenities.
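To make the beacon idea concrete, here is a minimal sketch of how a guidance app might pick the nearest beacon from Bluetooth signal strength, using the standard log-distance path-loss model. The beacon names and calibration values are illustrative assumptions, not details of any real deployment.

```python
# Sketch: estimate distance to a Bluetooth beacon from received signal
# strength (RSSI), then pick the nearest beacon to guide toward.

def beacon_distance(rssi_dbm, tx_power_dbm=-59, path_loss_exponent=2.0):
    """Estimate distance in meters using the log-distance path-loss model.

    tx_power_dbm is the calibrated RSSI at 1 meter; the path-loss exponent
    is about 2.0 in open space and higher indoors.
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

def nearest_beacon(readings):
    """Pick the closest beacon from a {name: rssi_dbm} dict of readings."""
    return min(readings, key=lambda name: beacon_distance(readings[name]))

readings = {"entrance": -62, "restroom": -78, "front desk": -70}
print(nearest_beacon(readings))  # -> entrance (strongest signal)
```

A real app would smooth noisy RSSI readings over time before trusting a distance estimate, but the ranking logic stays the same.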

Researchers at Ajman University are developing a pair of smart glasses that can read text, offer navigation information, and even detect faces using artificial intelligence. The system works without an internet connection because the glasses are linked to a smartphone’s processor.

Although these smart glasses are still in the early phases of research, they are reported to achieve roughly 95 percent reading accuracy.

Crossing the street can be very risky for people with limited eyesight. James Coughlan, Ph.D., and colleagues at the Smith-Kettlewell Eye Research Institute have created a smartphone app that gives audio instructions to help users find the safest crossing position and stay within the crosswalk.

The app combines and triangulates three technologies. A global positioning system (GPS) pinpoints the intersection where the user is standing. Computer vision then scans the area for crosswalks and walk lights. That data is merged, in a geographic information system (GIS) database, with a crowdsourced inventory of each intersection’s quirks, such as road construction or uneven pavement.

The three technologies compensate for each other’s weaknesses. Computer vision may lack the depth perception to recognize a median in the middle of the road, but the GIS template supplies that information. GPS can place a user at an intersection but cannot tell which corner they are standing on. Computer vision determines the corner, the user’s position relative to the crosswalk, what the walk and traffic lights indicate, and the presence of cars.
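The fusion idea described above can be sketched in a few lines: each source contributes the fields the others cannot provide. All field names and values here are hypothetical, invented for illustration, not taken from the actual app.

```python
# Illustrative sketch of fusing GPS, computer vision, and GIS data:
# each source fills in information the other two cannot supply.

def fuse_crossing_info(gps, vision, gis):
    """Merge intersection data, taking each field from its best source."""
    return {
        "intersection": gps["intersection"],   # GPS: which intersection
        "corner": vision["corner"],            # vision: which corner the user is on
        "walk_signal": vision["walk_signal"],  # vision: current signal state
        "has_median": gis["has_median"],       # GIS: features vision may miss
        "construction": gis["construction"],   # GIS: crowdsourced hazards
    }

gps = {"intersection": "5th & Main"}
vision = {"corner": "NE", "walk_signal": "WALK"}
gis = {"has_median": True, "construction": False}
print(fuse_crossing_info(gps, vision, gis))
```

The real system does far more (continuous tracking, audio output), but the division of labor between the three sources follows this pattern.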

Smart Canes

The WeWALK Smart Cane shows what is now achievable. It resembles the cane that many blind and visually impaired people have used to avoid obstacles for decades, but with a few contemporary twists.

With a regular cane, you may still run into objects that aren’t directly underfoot, such as poles, tree branches, and fences. The WeWALK, by contrast, detects objects above chest level and warns you audibly if you get too close, potentially saving you from a serious fall.
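The alert logic such a cane needs can be sketched simply: warn only about obstacles that a ground-level cane sweep would miss. The height and distance thresholds below are assumptions for illustration, not WeWALK’s actual parameters.

```python
# Minimal sketch of obstacle-alert logic for a smart cane: trigger a
# warning when a sensed object above chest level comes within range.
# Thresholds are illustrative, not the real device's values.

CHEST_HEIGHT_M = 1.2     # assumed height below which a normal cane suffices
ALERT_DISTANCE_M = 1.5   # assumed warning distance

def should_alert(obstacle_height_m, obstacle_distance_m):
    """Warn only for obstacles a ground-level cane sweep would miss."""
    return (obstacle_height_m >= CHEST_HEIGHT_M
            and obstacle_distance_m <= ALERT_DISTANCE_M)

print(should_alert(1.6, 1.0))  # overhead branch nearby -> True
print(should_alert(0.3, 1.0))  # curb at ground level -> False
```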

Furthermore, with a normal cane you must hold a smartphone in one hand to listen to directions, making navigation more difficult and risky. The WeWALK pairs with a smartphone’s navigation software and reads directions aloud, letting you keep one hand free.

During Low Vision Awareness Month, the National Eye Institute (NEI) is spotlighting innovative technologies and solutions in the works to aid the 4.1 million Americans with low vision or blindness. The developments aim to make it easier for people with vision loss to do everyday tasks like navigating office buildings and crossing streets. Many of the breakthroughs rely on computer vision, a technique that allows computers to detect and understand a wide range of images, objects, and actions in their surroundings.

Here are a few NEI-funded innovations that are in the works to help people with poor vision and blindness.

Navigating indoors can be difficult for people with impaired eyesight or blindness. While current GPS-based assistive devices can direct someone to a broad area, such as a building, GPS isn’t much help for finding individual rooms, according to Cang Ye, Ph.D., of the University of Arkansas at Little Rock. Ye has created a co-robotic cane that gives users feedback on their surroundings.

A computerized 3-D camera in Ye’s prototype cane “sees” for the user. It also includes a motorized roller tip that lets the user follow the cane’s direction. Along the route, the user can speak into a microphone; a speech recognition system interprets the voice commands and guides the user through a wireless earpiece.

Pre-loaded floor plans are stored on the cane’s credit-card-sized computer, though Ye anticipates eventually being able to download a building’s floor plan over Wi-Fi on entry. The computer analyzes 3-D data in real time and warns the user of hallways and stairs.

The cane determines the user’s position in the building with computer vision, by tracking the camera’s movement: it extracts features from the current image, matches them against features from the previous image, and chains these gradually changing views together, all relative to a starting point.

In addition to NEI funding, Ye recently received a grant from the National Institutes of Health’s Coulter College Commercializing Innovation Program to explore commercializing the robotic cane.
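The position-tracking idea, chaining small frame-to-frame movements relative to a starting point, amounts to dead reckoning. Here is a simplified sketch; the displacement values are made up, where a real system would estimate them by matching features between consecutive camera images.

```python
# Simplified sketch of camera-based position tracking: starting from a
# known point, accumulate the small per-frame displacements estimated by
# matching features between consecutive images. Displacements here are
# hypothetical stand-ins for real image-matching output.

def track_position(start, frame_displacements):
    """Dead-reckon a 2-D position from a sequence of (dx, dy) estimates."""
    x, y = start
    for dx, dy in frame_displacements:
        x += dx
        y += dy
    return (x, y)

# e.g. moving down a hallway with slight sideways drift
displacements = [(0.5, 0.0), (0.5, 0.1), (0.4, 0.0)]
print(track_position((0.0, 0.0), displacements))  # roughly (1.4, 0.1)
```

Pure dead reckoning drifts over time, which is one reason the cane also cross-checks against pre-loaded floor plans.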

While designing the co-robotic cane, Ye discovered that closed doors pose another barrier for people with impaired eyesight and blindness. “Finding the door knob or handle and opening the door takes a long time,” he said. So he created a fingerless glove device to help people with limited eyesight detect and grasp small objects more quickly.

A camera and a speech recognition system on the back of the glove let the user give voice commands such as “door handle,” “mug,” “bowl,” or “bottle of water.” The glove then uses tactile cues to guide the user’s hand to the targeted item. “It’s simple to guide the person’s hand left or right,” Ye said. “An actuator on the thumb’s surface takes care of it in a very intuitive and natural manner.” Getting a person to move their hand forward or backward, and to get a feel for how to grasp an item, is more difficult.

Accessibility to the Internet


According to the World Wide Web Consortium (W3C), many sites and resources online today have design hurdles that make them hard to use for blind and visually impaired people because they don’t follow accessibility best practices. In the United States, the Americans with Disabilities Act (ADA) makes website accessibility a legal duty; even so, accessibility regulations are inconsistently enforced.

The W3C publishes a number of guidelines for making websites accessible to blind and visually impaired users. One is offering text alternatives for non-text content, so it can be converted to large print or Braille. Another is making a website’s entire functionality available from the keyboard alone.

To illustrate how this plays out in practice: many websites are not properly coded in HTML, which prevents text-to-speech software from converting their content into audio for visually impaired users. These are usually simple technical fixes, yet they are often missed or ignored.

Apple, for example, uses VoiceOver to narrate what’s happening on your Apple device. The company also offers visual filters for colorblind users and magnifiers for visually impaired users. If needed, Voice Control on Apple devices allows navigation by voice commands alone.

Our ever-evolving world of technology and information is making life a little easier for those with vision impairments. If you or a loved one has low vision and would like more information on the exciting new technologies emerging for the visually impaired, schedule an appointment with one of our low vision specialists near you.

About the Author:

Dr. Shaun Larsen

Dr. Shaun Larsen is an optometrist who specializes in low vision services and enhancing vision with contact lenses. He has a passion for making people's lives better by helping them see well enough to read, write, or drive again. He always keeps up with the latest technology so he can help people regain their independence.
