GreenmanVR

Simulating eye surgery outcomes for patients using virtual reality
-Seeing is believing-



The vision simulator application was created to improve communication between doctors and patients, allowing patients to experience what they will see BEFORE having cataract surgery.

Eye surgery is permanent, so any effort that helps ease patients' fears and set expectations, while enhancing communication between doctor and patient, offers a huge opportunity to improve patient outcomes. This simulation was a collaborative effort between Ghost Productions and the Greenman group (in conjunction with Vance Thompson), with myself as the sole developer, with help from my medical director and tech artist Nic, environment artists, and consultation with Greenman.

This project was experimental in nature. From the outset of the proposal, nothing quite like it had been developed before, at least publicly as far as our research could tell, so we were blazing a trail. In the process of developing this application I quickly learned a great deal about how the human eye works from a medical standpoint, and I gained deeper knowledge of the VR graphics pipeline along the way.

The app uses complex post-processing effects within the Unity HD Render Pipeline (HDRP) to simulate different eye conditions and intraocular lens (IOL) profiles. The app is a tethered experience: the doctor controls what the patient sees via a desktop UI while the patient wears the headset. During the experience, the patient can look around two realistic environments, an indoor kitchen and an outdoor city. Each is meant to simulate real-world scenarios where eye conditions impact patients' daily lives, be it driving, being able to read text, or seeing at a distance. While the patient looks around the virtual world, the doctor can modify the time of day and the patient's lens profile, with simulated effects such as astigmatism, monofocal and multifocal lenses, and extended depth of focus (EDOF).
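The shipped simulation is implemented as HDRP post-processing shaders, but the core idea behind one of the conditions, astigmatism, can be sketched outside the engine. Optically, astigmatism produces a direction-dependent defocus: a point of light smears into a streak rather than a uniform disc. The minimal Python sketch below illustrates that idea with unequal horizontal and vertical blur radii; all names here are illustrative and not taken from the project's code.

```python
# Conceptual sketch: astigmatism as direction-dependent blur.
# The real app implements this as HDRP post-processing shaders; this
# pure-Python version only illustrates the idea on a grayscale "image"
# represented as a list of rows of pixel values.

def blur_1d(row, radius):
    """Box-blur a 1D sequence of pixel values with the given radius."""
    if radius == 0:
        return list(row)
    n = len(row)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(row[lo:hi]) / (hi - lo))
    return out

def astigmatism_blur(image, radius_h, radius_v):
    """Blur rows and columns by different radii: unequal radii give the
    direction-dependent defocus characteristic of astigmatism."""
    rows = [blur_1d(r, radius_h) for r in image]       # horizontal pass
    cols = [blur_1d(c, radius_v) for c in zip(*rows)]  # vertical pass
    return [list(px) for px in zip(*cols)]             # back to row-major

# A bright point on a dark background smears into a horizontal streak
# when the two radii differ -- the hallmark of astigmatic blur.
image = [[0.0] * 9 for _ in range(9)]
image[4][4] = 1.0
streaked = astigmatism_blur(image, radius_h=3, radius_v=0)
```

In the actual app this kind of effect runs per-pixel on the GPU, of course; the sketch only shows why a single scalar "blur amount" is not enough to model an astigmatic lens profile.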

From a technical standpoint, many challenges had to be overcome. Not only did we have to find an effective way to simulate the eye conditions, we also had to meet the application's performance targets. One of the technical achievements I worked hardest on was rendering completely different images for the VR user's left and right eyes. While this sounds trivial, the process of getting there was not.

Over the last decade of VR's evolution, the trend has been to optimize the rendering process so that the GPU can take shortcuts that reduce the application's per-frame load. With tricks like split-view rendering and others that effectively collapse the left-eye and right-eye images into one render instead of two, many optimizations have been made over the years in this direction, especially on mobile VR headsets. Because of this, modern engine support in Unity and Unreal has moved away from this application's use case of deliberately rendering two completely different images in a performant way. Arriving at a working, performant solution therefore took deep research on my part.

In the end, a series of solutions was found. Modifications to the render pipeline using special render passes, shader trickery, an innovative three-camera setup, and modifications to the camera view matrix allowed us to render the scene twice (once per eye) with different post-processing effects applied to each view. The devised solution is headset agnostic, giving us the most flexibility when deploying the application in the field. Performance was maintained through aggressive optimization where possible and by using Nvidia's DLSS technology to speed up rendering of the post-processing stack. This allowed us to hit a minimum benchmark of 72 frames per second on a system with an Nvidia RTX 3080 GPU, the minimum required graphics card. For the majority of the development cycle we used HP's Reverb G2 headset, but after Microsoft ended support for Windows Mixed Reality headsets, we migrated to the Meta Quest 3. We also explored running the app as a standalone experience on Apple's Vision Pro headset.
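The shipped solution lives in Unity C# and custom render passes, but the view-matrix half of the idea can be sketched in a few lines. Conceptually, each eye's view matrix is the center camera's view matrix offset by half the interpupillary distance (IPD) along view-space x, which is what lets two separate renders line up as a coherent stereo pair. The Python below is an illustrative sketch under that assumption, not the project's actual code; matrices are row-major 4x4 lists.

```python
# Illustrative sketch (not the shipped Unity code): deriving per-eye
# view matrices from a single "center" camera by offsetting half the
# interpupillary distance (IPD) along the camera's right axis.

def mat_mul(a, b):
    """4x4 row-major matrix product a*b."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(tx, ty, tz):
    """4x4 translation matrix (column-vector convention)."""
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

def eye_view_matrices(center_view, ipd):
    """Offset the center view matrix by +/- ipd/2 in view-space x.
    Moving the camera right by d makes world points appear d to the
    left in view space, hence the sign flip on each eye's offset."""
    half = ipd / 2.0
    left = mat_mul(translation(+half, 0, 0), center_view)
    right = mat_mul(translation(-half, 0, 0), center_view)
    return left, right

# With an identity center view and a 64 mm IPD, a point at the world
# origin lands 32 mm to either side in each eye's view space.
identity = translation(0, 0, 0)
left, right = eye_view_matrices(identity, ipd=0.064)
```

With each eye driven by its own camera and view matrix like this, each render can carry its own post-processing chain, which is exactly the property the stock single-pass stereo path gives up.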

Internal trials of the application, conducted by Greenman with patients before and after surgery, resulted in incredibly high satisfaction scores and strong patient confidence in the outcomes they were seeking and eventually received. Working on this application and seeing it directly impact patients' lives in a real-world setting has been a deeply rewarding experience and a true highlight of my career.

Visit their website here: https://greenmanvr.com/

See a full demonstration below: