J Neural Eng. 2022 Dec 7;19(6).
doi: 10.1088/1741-2552/aca69d.

Towards a Smart Bionic Eye: AI-powered artificial vision for the treatment of incurable blindness

Michael Beyeler et al. J Neural Eng.

Abstract

Objective. How can we return a functional form of sight to people who are living with incurable blindness? Despite recent advances in the development of visual neuroprostheses, the quality of current prosthetic vision is still rudimentary and does not differ much across device technologies. Approach. Rather than aiming to represent the visual scene as naturally as possible, a Smart Bionic Eye could provide visual augmentations through artificial intelligence-based scene understanding, tailored to specific real-world tasks that are known to affect the quality of life of people who are blind, such as face recognition, outdoor navigation, and self-care. Main results. Complementary to existing research aiming to restore natural vision, we propose a patient-centered approach to incorporating deep learning-based visual augmentations into the next generation of devices. Significance. The ability of a visual prosthesis to support everyday tasks might make the difference between an abandoned technology and a widely adopted next-generation neuroprosthetic device.

Keywords: artificial intelligence; artificial vision; computer vision; visual prosthesis.


Figures

Figure 1.
Smart Bionic Eye. A visual prosthesis has the potential to provide visual augmentations through artificial intelligence (AI) based scene understanding (here shown for visual search). For example, a user may verbally instruct the Smart Bionic Eye to locate misplaced keys, and the system would respond visually by segmenting the keys in the prosthetic image while the user is looking around the room (room image reprinted under CC-BY from Lin et al., 2014). To guide the development of such a device, we propose to develop a virtual reality prototype supported by simulated prosthetic vision. Figure reprinted under CC-BY from https://doi.org/10.6084/m9.figshare.20092640.v1.
Figure 2.
Stimulation strategies for visual prostheses that use an external camera. A) In the conventional approach implemented by previously commercialized devices such as Alpha-AMS and Argus II, a fixed and simple (e.g., linear) mapping is used to translate the grayscale value of a pixel in each video frame to a current amplitude of the corresponding electrode in the implant. The same encoding is used for all possible use cases. B) In the proposed approach, visual augmentation modes are task-dependent and informed by qualitative feedback as well as behavioral performance of virtual and real prosthesis patients on real-world tasks. The user is able to switch between modes on demand.
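The conventional approach in panel A can be sketched as a fixed linear map from pixel brightness to electrode current. The following is a minimal illustrative sketch, not a device specification: the maximum amplitude, the downsampling step, and the strictly linear form are assumptions made for clarity.

```python
import numpy as np

def linear_encoding(frame, max_amp_ua=100.0):
    """Map grayscale pixel values (0-255) linearly to per-electrode
    current amplitudes in microamps.

    Toy illustration of the fixed encoding described for devices such
    as Argus II; max_amp_ua is a placeholder, not a real safety limit.
    In practice the camera frame would first be downsampled to the
    electrode grid (e.g., 6x10 electrodes), one value per electrode.
    """
    frame = np.asarray(frame, dtype=float)
    # Same mapping for every use case: brightness -> amplitude.
    return (frame / 255.0) * max_amp_ua

# Tiny 2x2 "electrode grid" example:
amps = linear_encoding([[0, 128], [255, 64]])
```

The key limitation this paper highlights is visible even in the sketch: the encoding is task-agnostic, whereas panel B proposes switchable, task-dependent augmentation modes.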
Figure 3.
Deep learning–based visual augmentations to support scene understanding. A) Segmenting objects of interest from background clutter using detectron2 (Wu et al., 2019). B) Substituting relative depth, as sensed from single images using monodepth2, for intensity (Godard et al., 2019). C) Detecting structural edges of indoor environments (Sanchez-Garcia et al., 2020b). D) Visual question answering, where a deep neural network responds to "How many giraffes are drinking water?" visually by drawing bounding boxes around all giraffes by the water hole (Antol et al., 2015).
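The depth-for-intensity substitution in panel B can be illustrated with a simple post-processing step: invert a relative depth map so that nearer surfaces appear brighter, then normalize to [0, 1]. This is a hedged sketch of the general idea only; the actual monodepth2 pipeline and the paper's rendering differ.

```python
import numpy as np

def depth_to_intensity(depth, eps=1e-6):
    """Convert a relative depth map (larger = farther) into an
    intensity image where nearer surfaces are brighter.

    Illustrative only: inverse-depth followed by min-max scaling is
    an assumed encoding, not the published augmentation.
    """
    depth = np.asarray(depth, dtype=float)
    inv = 1.0 / np.maximum(depth, eps)  # nearer -> larger value
    inv = inv - inv.min()
    rng = inv.max()
    return inv / rng if rng > 0 else np.zeros_like(inv)

# Nearest pixel (depth 1) becomes brightest; farthest (depth 8) darkest.
out = depth_to_intensity([[1.0, 2.0], [4.0, 8.0]])
```

Feeding such an intensity map into the electrode encoding would let obstacles "pop out" by proximity rather than by luminance.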

References

    1. Abbasi B and Rizzo JF (2021). Advances in Neuroscience, Not Devices, Will Determine the Effectiveness of Visual Prostheses. Seminars in Ophthalmology, 0(0):1–8. doi: 10.1080/08820538.2021.1887902.
    2. Ahmetovic D, Guerreiro J, Ohn-Bar E, Kitani KM, and Asakawa C (2019). Impact of Expertise on Interaction Preferences for Navigation Assistance of Visually Impaired Individuals. In Proceedings of the 16th International Web for All Conference, W4A '19, pages 1–9, New York, NY, USA. Association for Computing Machinery.
    3. Al-Atabany WI, Tong T, and Degenaar PA (2010). Improved content aware scene retargeting for retinitis pigmentosa patients. Biomed Eng Online, 9:52.
    4. Antol S, Agrawal A, Lu J, Mitchell M, Batra D, Zitnick CL, and Parikh D (2015). VQA: Visual Question Answering. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 2425–2433.
    5. Barnes N (2012). The role of computer vision in prosthetic vision. Image and Vision Computing, 30(8):478–479.
