Augment Hum. 2022 Mar;2022:82-93. doi: 10.1145/3519391.3522752. Epub 2022 Apr 18.

Immersive Virtual Reality Simulations of Bionic Vision


Justin Kasowski et al. Augment Hum (2022). 2022 Mar.

Abstract

Bionic vision uses neuroprostheses to restore useful vision to people living with incurable blindness. However, a major outstanding challenge is predicting what people "see" when they use their devices. The limited field of view of current devices necessitates head movements to scan the scene, which is difficult to simulate on a computer screen. In addition, many computational models of bionic vision lack biological realism. To address these challenges, we present VR-SPV, an open-source virtual reality toolbox for simulated prosthetic vision that uses a psychophysically validated computational model to allow sighted participants to "see through the eyes" of a bionic eye user. To demonstrate its utility, we systematically evaluated how clinically reported visual distortions affect performance in a letter recognition and an immersive obstacle avoidance task. Our results highlight the importance of using an appropriate phosphene model when predicting visual outcomes for bionic vision.

Keywords: retinal implant; simulated prosthetic vision; virtual reality.


Figures

Fig. 1.
Immersive virtual reality simulations of bionic vision. A microelectrode array is implanted in the eye to stimulate the retina → Anatomical data is used to position a simulated electrode array on a simulated retina to create a “virtual patient” → Visual input from a virtual reality environment acts as stimulus for the simulated implant to generate a realistic prediction of simulated prosthetic vision (SPV) → The rendered SPV image is presented to the virtual patient and behavioral metrics are recorded
Fig. 2.
A simulated map of retinal nerve fiber bundles (NFBs; left) can account for visual percepts (right) elicited by retinal implants (reprinted with permission from [5]). Left: Electrical stimulation (red circle) of an NFB (black lines) could activate retinal ganglion cell bodies peripheral to the point of stimulation, leading to tissue activation (black shaded region) elongated along the NFB trajectory away from the optic disc (white circle). Right: The resulting visual percept appears elongated; its shape can be described by two parameters, λ and ρ.
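The two-parameter percept shape described above can be captured by a simple falloff rule: brightness decays as a Gaussian with spatial constant ρ away from the stimulating electrode and with constant λ along the nerve fiber bundle, so larger λ produces a more elongated phosphene. A minimal NumPy sketch of that idea (the function name and the separable-Gaussian form are illustrative assumptions, not the exact model the caption attributes to [5]):

```python
import numpy as np

def phosphene_brightness(d_space, d_axon, rho, lam):
    """Schematic brightness of a phosphene contribution.

    d_space : distance from the stimulating electrode (e.g., in microns)
    d_axon  : distance along the nerve fiber bundle trajectory
    rho     : spatial decay constant away from the electrode
    lam     : decay constant along the axon (larger -> more elongated percept)
    """
    spatial = np.exp(-d_space**2 / (2 * rho**2))  # Gaussian falloff around electrode
    axonal = np.exp(-d_axon**2 / (2 * lam**2))    # Gaussian falloff along the NFB
    return spatial * axonal
```

With this toy rule, a point on the axon 500 µm from the cell body stays bright when λ is large but vanishes when λ is small, reproducing the elongation-versus-compactness trade-off the caption describes.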
Fig. 3.
Letter recognition task. Top: The lights in the virtual room are turned off and the image seen by the user is passed to the preprocessing shader, which performs edge extraction/enhancement before the axon model shader renders SPV. Modeled after [13]. Bottom: Output of the axon model shader across the various devices and ρ / λ combinations.
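The edge extraction/enhancement step in the caption is a standard gradient-magnitude operation; the paper implements it as a GPU shader, but a plain Sobel filter in NumPy conveys the same idea (the function name and the specific kernel choice are illustrative assumptions, not the paper's shader code):

```python
import numpy as np

def sobel_edges(img):
    """Gradient-magnitude edge map of a 2-D grayscale image (Sobel operator)."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)  # horizontal-gradient kernel
    ky = kx.T                                  # vertical-gradient kernel
    H, W = img.shape
    p = np.pad(img, 1, mode="edge")            # replicate borders to keep output size
    # Correlate by summing shifted, weighted views of the padded image
    gx = sum(kx[i, j] * p[i:i + H, j:j + W] for i in range(3) for j in range(3))
    gy = sum(ky[i, j] * p[i:i + H, j:j + W] for i in range(3) for j in range(3))
    return np.hypot(gx, gy)                    # gradient magnitude
```

On a vertical step edge the response peaks at the boundary columns and is zero in the flat regions, which is exactly the high-contrast outline that is then handed to the phosphene-rendering stage.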
Fig. 4.
Letter recognition task. Data points represent each subject’s average performance in a block, with boxplots displaying medians and interquartile ranges. Top: Average F1 score across blocks for each subject within the condition specified by the x-axis. Bottom: Average time across blocks for each subject within the condition specified by the x-axis. Statistical significance was determined using an aligned rank transform (ART) ANOVA (*p < .05, **p < .01, ***p < .001).
Fig. 5.
Obstacle avoidance task. Left: Layout of the virtual hallway environment modeled after [22]. Empty circles represent the possible locations for obstacles. Right/Top: View of the real environment → participant’s view is passed to the preprocessing shader which performs edge extraction/enhancement before the axon model shader renders SPV. Bottom: Output of the axon model shader across the various devices and ρ / λ combinations.
Fig. 6.
Obstacle avoidance. Data points represent each subject’s average performance in a block, with boxplots displaying medians and interquartile ranges. Top: Average number of collisions across blocks for each subject within the condition specified by the x-axis. Red line represents chance level (1.25 collisions). Bottom: Average time across blocks for each subject within the condition specified by the x-axis. Statistical significance was determined using ART ANOVA (*p < .05, **p < .01, ***p < .001).


References

    1. Ayton Lauren N., Barnes Nick, Dagnelie Gislin, Fujikado Takashi, Goetz Georges, Hornig Ralf, Jones Bryan W., Muqit Mahiul M. K., Rathbun Daniel L., Stingl Katarina, Weiland James D., and Petoe Matthew A.. 2020. An update on retinal prostheses. Clinical Neurophysiology 131, 6 (June 2020), 1383–1398. doi:10.1016/j.clinph.2019.11.029
    2. Ayton Lauren N., Blamey Peter J., Guymer Robyn H., Luu Chi D., Nayagam David A. X., Sinclair Nicholas C., Shivdasani Mohit N., Yeoh Jonathan, McCombe Mark F., Briggs Robert J., Opie Nicholas L., Villalobos Joel, Dimitrov Peter N., Varsamidis Mary, Petoe Matthew A., McCarthy Chris D., Walker Janine G., Barnes Nick, Burkitt Anthony N., Williams Chris E., Shepherd Robert K., Allen Penelope J., and for the Bionic Vision Australia Research Consortium. 2014. First-in-Human Trial of a Novel Suprachoroidal Retinal Prosthesis. PLOS ONE 9, 12 (Dec. 2014), e115239. doi:10.1371/journal.pone.0115239
    3. Behrend Matthew R., Ahuja Ashish K., Humayun Mark S., Chow Robert H., and Weiland James D.. 2011. Resolution of the Epiretinal Prosthesis is not Limited by Electrode Size. IEEE Transactions on Neural Systems and Rehabilitation Engineering 19, 4 (Aug. 2011), 436–442. doi:10.1109/TNSRE.2011.2140132
    4. Beyeler M, Boynton GM, Fine I, and Rokem A. 2017. pulse2percept: A Python-based simulation framework for bionic vision. In Proceedings of the 16th Science in Python Conference, Huff K, Lippa D, Niederhut D, and Pacer M (Eds.). 81–88. doi:10.25080/shinma-7f4c6e7-00c
    5. Beyeler Michael, Boynton Geoffrey M., Fine Ione, and Rokem Ariel. 2019. Model-Based Recommendations for Optimal Surgical Placement of Epiretinal Implants. In Medical Image Computing and Computer Assisted Intervention – MICCAI 2019 (Lecture Notes in Computer Science), Shen Dinggang, Liu Tianming, Peters Terry M., Staib Lawrence H., Essert Caroline, Zhou Sean, Yap Pew-Thian, and Khan Ali (Eds.). Springer International Publishing, 394–402.
