AI-based denoising of CT images
As part of our Innovation services, we also offer technology research to our customers. In general, we start every innovation project with basic research. Based on these expert findings, we can plan the next steps together with the customer.
In this use case, we describe our initial research with a review of state-of-the-art denoising techniques and our project experience on AI-based denoising for applications in cone beam CT.
The idea of this project is illustrated by the following images, showing a detailed region from an X-ray that contains a section of three vertebrae: given a noisy input image, the desired output is a denoised image that still offers good contrast and sharpness.
Noisy input image.
Desired output with noise removed.
For the given example, we investigated several DNN-based approaches that are candidates for significantly improving image quality. This technical research was supported by an investigation of the noise characteristics and the requirements of the project:
- An investigation of the specific noise patterns that occur within the application
- An overview of state-of-the-art DNN-based denoising techniques such as:
- Residual noise image
- Denoising using GANs (generative adversarial networks)
- Auto-encoder networks
In addition, we identify customer requirements such as:
- Data collection and annotation
- Integration and deployment
- Usability and acceptance criteria
- Rules and regulations (certification)
In the given project, we first identified the noise characteristics of cone beam detectors:
- Quantum noise, the dominant source of noise, caused by random fluctuations in the number of photons reaching the detector; also known as Poisson noise
- Scattered radiation, which can contribute substantially without adequate hardware filtering (collimators, anti-scatter grids)
- Electronic noise, which typically occurs at very low exposure times and has a salt-and-pepper appearance
- Thermal noise, which is caused by fluctuations in temperature of the X-ray tubes and manifests itself as signal-independent Gaussian noise
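The noise sources above can be combined into a hybrid noise model for experimentation. The following sketch simulates them on a synthetic image; all parameter names and default values (photon count, Gaussian sigma, salt-and-pepper fraction) are illustrative assumptions, not values from the project.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_hybrid_noise(clean, photons=200.0, gauss_sigma=2.0, sp_fraction=0.001):
    """Simulate a hybrid noise model on a clean image with values in [0, 1]."""
    # Quantum (Poisson) noise: signal-dependent photon-counting statistics.
    noisy = rng.poisson(clean * photons) / photons
    # Thermal noise: signal-independent additive Gaussian component.
    noisy = noisy + rng.normal(0.0, gauss_sigma / 255.0, clean.shape)
    # Electronic noise: sparse salt-and-pepper outliers at very low exposure.
    mask = rng.random(clean.shape) < sp_fraction
    noisy[mask] = rng.choice([0.0, 1.0], size=int(mask.sum()))
    return np.clip(noisy, 0.0, 1.0)

clean = np.full((64, 64), 0.5)
noisy = add_hybrid_noise(clean)
```

Such a simulator is useful both for visualizing the individual noise components and for generating controlled test cases when evaluating candidate denoisers.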
We provided examples and visualizations for specific noise types and came up with the following observations:
- The visual characteristics of noise vary with exposure (high or low) across different image regions.
- For the given denoising task, the DNN-based approach should be flexible enough to address hybrid noise models.
A typical literature review for our innovation projects consists of an open-minded search for techniques that either address the subject or that may be related in terms of outcome.
For the given task, we identified several base technologies that model noise differently.
Residual Noise Image
We investigated residual noise DNNs with a hybrid noise model. The input is the noisy image, and the model computes the residual noise image, which is usually modelled as an additive intensity component. Noise modeling may also involve estimating a noise level from the image, which is then used as a weighting component within the residual noise DNN.
The ideal image can then be recovered by weighted subtraction of the noise component.
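The recovery step can be sketched in a few lines. This is a toy demonstration, not the project's model: a known noise array stands in for the DNN's residual prediction, and `noise_level` plays the role of the weighting component.

```python
import numpy as np

rng = np.random.default_rng(1)

def recover(noisy, predicted_residual, noise_level):
    """Weighted subtraction of the predicted residual noise image."""
    return noisy - noise_level * predicted_residual

# Toy setup: known additive noise stands in for a DNN residual prediction.
clean = np.linspace(0.0, 1.0, 256).reshape(16, 16)
noise = rng.normal(0.0, 0.05, clean.shape)
noisy = clean + noise

# A perfect residual prediction with weight 1.0 recovers the clean image.
denoised = recover(noisy, predicted_residual=noise, noise_level=1.0)
err_before = np.abs(noisy - clean).mean()
err_after = np.abs(denoised - clean).mean()
```

In practice, the predicted residual is imperfect, and the estimated noise level weights how aggressively it is subtracted.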
Denoising Using GANs
Here we investigated approaches where a noise-contaminated image is fed into a generator DNN, whose output is judged by a discriminator DNN. Generator and discriminator are trained adversarially until the discriminator can no longer distinguish denoised images from ideal ones.
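The adversarial objective behind this setup can be written down compactly. The sketch below computes the standard binary cross-entropy GAN losses from placeholder discriminator scores; in a real system these scores would come from a discriminator DNN evaluating ideal and denoised images.

```python
import numpy as np

def bce_discriminator_loss(d_real, d_fake):
    """Binary cross-entropy: push D(ideal) toward 1 and D(denoised) toward 0."""
    return -(np.log(d_real).mean() + np.log(1.0 - d_fake).mean())

def bce_generator_loss(d_fake):
    """The generator is trained so the discriminator scores its output as real."""
    return -np.log(d_fake).mean()

# Placeholder discriminator scores in (0, 1); a real setup would use DNNs.
d_real = np.array([0.9, 0.8])   # scores on ideal (low-noise) images
d_fake = np.array([0.2, 0.3])   # scores on generator (denoised) output

d_loss = bce_discriminator_loss(d_real, d_fake)
g_loss = bce_generator_loss(d_fake)
```

Training alternates between minimizing the discriminator loss and the generator loss; at equilibrium the discriminator scores hover around 0.5 for both image types.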
Auto-Encoder Networks
We investigated symmetrical DNN architectures with skip connections to combine residual image patches and input images inside the DNN.
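A shape-level sketch of such a symmetric architecture is shown below. It deliberately omits the learned convolutions and nonlinearities of a real network: average pooling stands in for the encoder steps, nearest-neighbour upsampling for the decoder steps, and skip connections simply add the matching encoder feature back in.

```python
import numpy as np

def down(x):
    """2x average pooling (encoder step)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up(x):
    """2x nearest-neighbour upsampling (decoder step)."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def symmetric_autoencoder(x):
    """Encoder halves the resolution twice, the decoder mirrors it, and
    skip connections add the matching encoder output back in."""
    e1 = down(x)        # 64 -> 32
    e2 = down(e1)       # 32 -> 16 (bottleneck)
    d1 = up(e2) + e1    # skip connection from e1
    d0 = up(d1) + x     # skip connection from the input
    return d0

x = np.random.default_rng(2).random((64, 64))
y = symmetric_autoencoder(x)
```

The skip connections are what allow such networks to preserve sharp edges: fine detail bypasses the bottleneck instead of having to be reconstructed from the compressed representation.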
Data Collection and Training Input
Due to differences in the acquisition protocols, body anatomy, population, dose, exposure, and other factors, it became clear that the model needs to be very generic, as it might otherwise only perform well on cases similar to the training data.
As typical annotation is not an option in this type of problem, we investigated different approaches to generate training data through:
- Noise augmentation, where low-dose training images might be augmented by adding Poisson noise
- Phantom data, ideally with realistic human anatomy, where low- and high-dose image pairs are acquired
- Sequence series, where temporal averaging may be used to reduce noise
- Generative models, where training images are synthesized via GANs
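Two of the strategies above are easy to illustrate. The sketch below builds a (noisy, clean) training pair via Poisson noise augmentation and derives a low-noise pseudo ground truth by temporal averaging of a frame sequence; photon count and frame count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def poisson_augment(clean, photons=100.0):
    """Noise augmentation: synthesize a noisy input from a clean target."""
    return rng.poisson(clean * photons) / photons

def temporal_average(frames):
    """Sequence series: averaging N frames reduces noise std by ~sqrt(N)."""
    return np.mean(frames, axis=0)

clean = np.full((32, 32), 0.5)
noisy = poisson_augment(clean)                      # (noisy, clean) training pair
frames = [poisson_augment(clean) for _ in range(16)]
pseudo_clean = temporal_average(frames)             # low-noise pseudo ground truth
```

Temporal averaging is attractive because it requires no clean reference at all, only repeated acquisitions of the same (static) scene.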
A major requirement for this project was explainability. As DNNs tend to be black boxes to developers once they are trained, it is important to understand the capabilities of the algorithm and its behavior when the noise level changes or when the images contain objects that have never been seen before. Especially for risk management in healthcare applications, this requires very thorough validation of the approach.
During development, we apply different techniques to collect requirements:
- Questionnaire on AI in Medical Devices
- FDA regulations on AI and machine learning software used as a medical device
- Investigation of limitations due to data availability
Additional requirements arise from integration and deployment, which is mainly dependent on the existing customer infrastructure:
- Suitable inference framework (e.g. TensorRT)
- Model optimizations for runtime requirements
- Conversions to specific platforms (e.g. ONNX format)
- Cloud or on-premise inference
We investigated usability and acceptance aspects that may help to address certain issues arising during risk analysis. Additional user parameters could, for instance, control the overall strength of the noise removal. Specific parameters might also be used to fine-tune the image impression, and additional acquisition parameters could be integrated as well.
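A denoising-strength parameter of this kind can be realized as a simple blend between the raw and denoised image. This is one possible design, not the project's implementation; the function name and the linear blend are assumptions.

```python
import numpy as np

def apply_denoising_strength(noisy, denoised, strength):
    """Blend raw and denoised images; strength in [0, 1] is a user parameter.
    0 returns the unprocessed image, 1 the full DNN output."""
    strength = float(np.clip(strength, 0.0, 1.0))
    return (1.0 - strength) * noisy + strength * denoised

noisy = np.array([0.2, 0.8])
denoised = np.array([0.4, 0.6])
half = apply_denoising_strength(noisy, denoised, 0.5)   # midway blend
full = apply_denoising_strength(noisy, denoised, 1.0)   # full noise removal
```

From a risk-management perspective, such a blend is attractive because the output degrades gracefully toward the unprocessed image as the strength is lowered.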
If possible, we also experiment with initial data to get a feeling for the problem and provide practical feedback on some methods that have been investigated.
In the given example project, initial experiments took high-dose X-ray images as ground truth and added noise drawn from specific distributions. Some initial DNN models were then trained on this data to produce first results and assess the capabilities of the different methods.
The following images show some of these preliminary results on an X-ray region with vertebrae.
Desired image quality with much less noise than what is used as input. This image quality can be used as training data.
Input image for the AI-based denoising experimental approach. The additional noise is clearly visible.
This is the denoised image after correction using the calculated noise image.
Noise image computed only from the noisy input image. Note that within the noise, patterns of the contours of the vertebral bodies can be faintly recognized.