Making a VR app for neurobiological research

Tomasz Krawczyk
SoftwareMill Tech Blog
7 min read · Mar 3, 2021


A short story about creating a VR application that lets you immerse yourself in a microscopic volume and paint within it, to prepare data for AI training.

Issue

The amount of gathered data is essential for any research. If you collect data about cells using a microscope, the cells somehow have to be extracted from the images.

Slice of a microscopic image of astrocytes

The analysis of cells on microscopic images starts with segmentation, that is, marking individual units. While it sounds like just pointing out photographed objects, the task is a bit more complex. The microscopic image is ephemeral, distorted, and full of noise. It is based on a dye introduced into the body, or on other techniques, all of which carry an error factor.

Cell identification using a standard method

Experienced specialists take these limitations into account. Using essential knowledge of the researched material, they can discern the proper shape emerging from the cloud. In short, cell markings are drawings painted using the microscopic image as a sketch.

An additional problem is identifying individual cells when slices are processed separately. The parts oriented along the z-axis cause many difficulties and can easily be overlooked. Processing a single preparation can take up to a couple of days. SoftwareMill has taken on the task of accelerating this process.

Challenge

Artificial Intelligence can relieve scientists of this tedious work. We just need to teach it how to do it. If we give it a large set of examples made by specialists (input images and output markings), the AI can develop a scheme of action and apply it to all new preparations. The work of days will then be shortened to a few moments.

The first approach by Maciej Adamiak gave very promising results. However, this method consists of mapping the preparation in 2D. We have now decided to go a step further and implement this idea while maintaining the three-dimensional form of the volume, which is, of course, crucial for the subject of our research.

We work in cooperation with a neurobiologist, Łukasz Szewczyk, Ph.D. from the Centre of New Technologies at the University of Warsaw. His area of research is astrocytes: star-shaped glial cells found in the brain and spinal cord. Their tree-like structure illustrates the aforementioned 2D vs. 3D processing problem well.

Still, we need a large set of examples, so we must speed up the manual labeling process. I was given the task of building a tool that makes segmentation convenient enough to bring it down to a reasonable time.

Idea

To meet these requirements, the preparation should be presented in the form of a 3D volumetric image. This is no longer just a way to present the results; it also has a technical justification in the segmentation process itself.

Classic tools provide options for drawing shapes graphically, plus some methods for distinguishing between individual cells. Bringing this functionality into 3D means implementing a kind of 3D Painter.

Standard 3D modeling is not suitable for this task. First, it requires the user to know the technique. Secondly, the shapes we want to achieve are far from regular, and it would be slower than freehand painting anyway.

VR headset and controllers

VR technology gives us a simple and intuitive way to create volumetric images. The brush is attached to the right-hand controller, and shapes can be marked freely in space while the controller's trigger is pressed.

Navigating with your head is, of course, more convenient than any of the methods used on flat screens. Additionally, as it turned out later, stereoscopy helps a lot with orientation inside the volume, which, remember, is an ephemeral cloud.

Implementation

We used the Unity engine to implement the application prototype. It includes the XR Interaction Toolkit, which provides full VR support out of the box. The code is written in C#.

The hardware we chose is the Oculus Quest. It is a mobile device with serious graphics limitations, which forces some constraints mentioned later. However, its accessibility and portability are an invaluable advantage.

The XR Interaction Toolkit includes components that provide support for the headset and controllers, which allows us to configure the freehand painting feature. The key thing we need that does not have standard support in the Unity ecosystem is the display of volumetric images.
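In the project this is wired up with the XR Interaction Toolkit's components; as a minimal sketch of the same idea using Unity's lower-level XR input API (the class and field names here are hypothetical, not the project's actual code), polling the right controller and recording brush positions while the trigger is held could look like this:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR;

// Minimal sketch: sample the right-hand controller every frame and record
// brush positions while the trigger is held down.
public class FreehandBrush : MonoBehaviour
{
    public Transform trackingSpace;                    // the XR rig's tracking-space root
    private readonly List<Vector3> strokePoints = new List<Vector3>();

    void Update()
    {
        InputDevice hand = InputDevices.GetDeviceAtXRNode(XRNode.RightHand);

        if (hand.TryGetFeatureValue(CommonUsages.triggerButton, out bool pressed) && pressed &&
            hand.TryGetFeatureValue(CommonUsages.devicePosition, out Vector3 localPos))
        {
            // Controller positions arrive in tracking space; convert them
            // to world space before recording the stroke.
            Vector3 worldPos = trackingSpace.TransformPoint(localPos);
            strokePoints.Add(worldPos);
            // A full implementation would stamp the brush into the markings
            // texture here (see the texture sketch later in the article).
        }
    }
}
```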

3D graphics are based on vectors. Bitmaps are the responsibility of shaders: programs that calculate, pixel by pixel, the color of the vector objects seen by the camera. In Unity, shaders are written in HLSL (High-Level Shading Language). They run on the GPU and, thanks to its architecture, can process large amounts of data at dizzying speed.

Volumetric shader in real life by graffiti artist Odeith

Ray Marching algorithms are used to display three-dimensional bitmaps. They simulate the forms inside a vector object by painting them, in perspective, on its surface. They rely on tracing the camera ray appropriately and computing, in software, the resulting color for a given pixel.

Ray Marching progresses gradually along the ray into the volume, step by step, hence the name. The computation for every pixel is therefore multiplied by the number of steps it takes, which has a strong impact on rendering speed.
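To make the cost concrete, here is a naive fixed-step ray marcher sketched in C# (the real implementation is an HLSL fragment shader; the sampling function and parameters below are placeholders):

```csharp
using UnityEngine;

// CPU-side sketch of naive fixed-step ray marching; in the application this
// logic lives in an HLSL fragment shader and runs once per pixel.
public static class NaiveRayMarch
{
    // 'sampleDensity' stands in for a 3D texture lookup.
    public static Color March(Vector3 rayOrigin, Vector3 rayDir,
                              System.Func<Vector3, float> sampleDensity,
                              int steps = 128, float stepSize = 0.01f)
    {
        float accumulated = 0f;
        Vector3 p = rayOrigin;
        for (int i = 0; i < steps; i++)        // per-pixel cost scales with 'steps'
        {
            accumulated += sampleDensity(p) * stepSize;
            p += rayDir * stepSize;
        }
        float a = Mathf.Clamp01(accumulated);
        return new Color(a, a, a, a);          // grayscale cloud built from density alone
    }
}
```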

On weak mobile GPUs, Ray Marching algorithms are, to put it mildly, not recommended. Given our hardware constraints, implementing them required a strong focus on optimization. Sampling a color from the volume is the most costly operation in the shader program, so the goal is to minimize how often it happens.

Voxel Intersection Ray Marching

The Voxel Intersection Ray Marching algorithm ensures that no more steps than necessary are taken to collect the density data. It always steps to the next intersection on the ray and processes the voxel it enters. Also, knowing which axis was crossed gives us the normal vector of the voxel face. Thanks to this, we can introduce basic lighting to the displayed model.
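In spirit this is a classic grid traversal: instead of a fixed step size, the ray jumps from one voxel boundary to the next, so each voxel along the ray is sampled at most once, and the axis that was crossed tells us which face the ray entered through. A hedged C# sketch of such a traversal (the project's version is HLSL, and the details here are illustrative, not the actual implementation):

```csharp
using UnityEngine;

// Sketch of voxel-boundary traversal: step from one face intersection to the
// next so every voxel on the ray is visited exactly once. The 'visit' callback
// receives the voxel coordinates and the normal of the face the ray entered
// through; returning true stops the walk (e.g. on an opaque hit).
public static class VoxelTraversal
{
    public static void Traverse(Vector3 origin, Vector3 dir, int gridSize,
                                System.Func<Vector3Int, Vector3, bool> visit)
    {
        Vector3Int voxel = Vector3Int.FloorToInt(origin);
        var step = new Vector3Int(dir.x >= 0 ? 1 : -1,
                                  dir.y >= 0 ? 1 : -1,
                                  dir.z >= 0 ? 1 : -1);

        // Ray parameter at which the next boundary is crossed on each axis,
        // and how much it grows between two boundaries on that axis.
        var tMax = new Vector3(BoundaryT(origin.x, dir.x),
                               BoundaryT(origin.y, dir.y),
                               BoundaryT(origin.z, dir.z));
        var tDelta = new Vector3(Mathf.Abs(1f / dir.x),
                                 Mathf.Abs(1f / dir.y),
                                 Mathf.Abs(1f / dir.z));

        Vector3 faceNormal = Vector3.zero;   // entry face of the first voxel is unknown
        while (InGrid(voxel, gridSize))
        {
            if (visit(voxel, faceNormal)) return;

            // Advance along the axis whose boundary is closest; the crossed
            // axis gives the normal of the face of the voxel we enter next.
            if (tMax.x < tMax.y && tMax.x < tMax.z)
            { voxel.x += step.x; tMax.x += tDelta.x; faceNormal = new Vector3(-step.x, 0, 0); }
            else if (tMax.y < tMax.z)
            { voxel.y += step.y; tMax.y += tDelta.y; faceNormal = new Vector3(0, -step.y, 0); }
            else
            { voxel.z += step.z; tMax.z += tDelta.z; faceNormal = new Vector3(0, 0, -step.z); }
        }
    }

    // Ray parameter of the first integer boundary crossed on one axis.
    static float BoundaryT(float o, float d) =>
        d == 0f ? float.PositiveInfinity
                : ((d > 0f ? Mathf.Floor(o) + 1f : Mathf.Ceil(o) - 1f) - o) / d;

    static bool InGrid(Vector3Int v, int n) =>
        v.x >= 0 && v.y >= 0 && v.z >= 0 && v.x < n && v.y < n && v.z < n;
}
```

A shader version follows the same pattern, sampling the density texture in the visit step and breaking out of the loop as soon as the accumulated color becomes opaque.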

This made it possible for the Oculus Quest to display a stable volumetric image with reasonable effort.

Microscopic image rendered by the application

The shader only works when the 3D object is viewed from the outside. We want to be able to move in between the astrocytes, so we display the volume on a virtual screen attached to the front of the camera. The screen keeps the volume at a constant position in world coordinates and projects it onto its surface as if the volume were behind it. Something like mind-blowing inception: VR inside VR.
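A rough sketch of this setup: a quad parented to the camera carries the volume material, and every frame the volume's world-space transform is handed to the shader so that rays cast through the quad reach the volume where it actually sits (the property and field names below are assumptions, not the project's real ones):

```csharp
using UnityEngine;

// Sketch: a quad parented to the camera acts as a window into the volume.
// The volume itself stays fixed in world space; its transform is passed to
// the shader every frame so rays cast through the quad reach it where it
// really is. Property and field names are illustrative.
public class VolumeWindow : MonoBehaviour
{
    public Transform volume;            // world-space placement of the volumetric data
    public Material volumeMaterial;     // material running the ray-marching shader

    void LateUpdate()
    {
        volumeMaterial.SetMatrix("_WorldToVolume", volume.worldToLocalMatrix);
        volumeMaterial.SetVector("_CameraWorldPos", Camera.main.transform.position);
    }
}
```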

Unfortunately, in order to get the right view range at a satisfactory frame rate, we need to narrow the boundaries of this virtual screen. You can see it in the video below, although it looks smaller there than what you actually see in VR. All these limitations are related to the prototype phase and would disappear with more efficient hardware.

The application in action

The application controls shaders through exposed parameters that can be changed on the fly. This is how we pass the configuration of the density filter to our shader. It recalculates the entire object every frame anyway, so we take this opportunity to let the user customize the filter.
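Changing such parameters in Unity is a matter of setting material properties from a script; a minimal sketch of a density-filter control (the property names are made up for illustration) might look like this:

```csharp
using UnityEngine;

// Sketch of driving shader parameters from the UI at runtime.
// Property names are illustrative, not the project's actual ones.
public class DensityFilterControl : MonoBehaviour
{
    public Material volumeMaterial;

    [Range(0f, 1f)] public float minDensity = 0.1f;
    [Range(0f, 1f)] public float maxDensity = 1.0f;

    void Update()
    {
        // Cheap to change every frame: the shader re-renders the whole
        // volume anyway, so the new filter takes effect immediately.
        volumeMaterial.SetFloat("_MinDensity", minDensity);
        volumeMaterial.SetFloat("_MaxDensity", maxDensity);
    }
}
```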

As you can see in the video, the interface offers several other possibilities as well. The user is free to manipulate the volume object as if holding it in their hand. This is a left controller feature that is activated by the trigger button. The size is controlled with the joystick under the left thumb, the color identifying the marked astrocyte is selected with the right thumb, etc.

VR gives a lot of room for UX ideas of this type, but that was not the priority. We limited ourselves to providing convenience for the purposes of working on the project.

Input data — microscopic image
Output data — marked regions

The most important parameter of the shader is the texture containing the volume data. We use two textures here: the first with densities from the microscopic image and the second with the markings painted by the user.

Every brush stroke updates the markings texture. It takes the form of a bitmap of individual cell identifiers expressed as color codes.

The shader takes density from the first texture and color from the second. These two textures are the input and the output of the whole process.
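A simplified sketch of that setup, assuming cubic Texture3D volumes and made-up property names: the density texture is bound once, while the markings texture is written into by the brush and re-uploaded after each stamp.

```csharp
using UnityEngine;

// Sketch of the two-texture setup: a read-only density volume from the
// microscope and a writable markings volume the brush paints into.
// Texture formats, sizes and property names are assumptions.
public class SegmentationVolume : MonoBehaviour
{
    public Material volumeMaterial;
    public Texture3D densityTexture;       // generated from the microscope data
    private Texture3D markingsTexture;     // one color code per marked astrocyte
    private int size;

    void Start()
    {
        size = densityTexture.width;       // assuming a cubic volume
        markingsTexture = new Texture3D(size, size, size, TextureFormat.RGBA32, false);
        volumeMaterial.SetTexture("_DensityTex", densityTexture);
        volumeMaterial.SetTexture("_MarkingsTex", markingsTexture);
    }

    // Called for every brush stamp; writes the current astrocyte's color code
    // into the voxels covered by a spherical brush.
    public void Paint(Vector3Int center, int radius, Color32 astrocyteId)
    {
        for (int z = -radius; z <= radius; z++)
        for (int y = -radius; y <= radius; y++)
        for (int x = -radius; x <= radius; x++)
        {
            if (x * x + y * y + z * z > radius * radius) continue;
            int px = center.x + x, py = center.y + y, pz = center.z + z;
            if (px < 0 || py < 0 || pz < 0 || px >= size || py >= size || pz >= size) continue;
            markingsTexture.SetPixel(px, py, pz, astrocyteId);
        }
        markingsTexture.Apply();           // upload the edited voxels to the GPU
    }
}
```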

The whole system is complemented by a backend in Python by Jarosław Kijanowski. It generates textures from the microscope’s raw format, communicates with the VR application, and stores the input and output sets on the server.

Result

The time needed to process one preparation has been drastically reduced: segmenting a volume with this application takes about an hour. The quality of the data has also improved, thanks to a better projection of the source material.

Application screen with segmented astrocytes

These conclusions are based solely on our team's use of the application, and we focused on one specific type of cell. Nevertheless, the goal was to create a prototype data collection tool needed for the later stages of the project. The objectives were met, and this part of the project can be considered a success.

Kamil Rafałko writes about his part of the project in the story Better neurobiological research with AI. He describes how useful the material obtained this way was, and how teaching artificial intelligence to analyze cells on its own went.
