
Detection of breast cancer metastasis in lymph nodes


Localized breast cancer has an excellent 5-year survival rate of about 99%. However, the 5-year survival rate drops to 86% for regional cancer and to 31% for distant cancer. Therefore, an early, quick, and accurate diagnosis significantly increases a patient's chances of survival.

The axillary lymph nodes are the first place breast cancer is likely to spread. The risk of metastasis and death increases with both the size of the breast cancer at detection and the number of axillary lymph nodes involved. This is why lymph nodes are surgically removed and examined microscopically. However, the diagnostic procedure is tedious and time-consuming for pathologists. Most importantly, small metastases are difficult to detect and are sometimes missed.

AI can assist pathologists by suggesting tumor areas in human tissue in whole slide images (WSI). This allows for faster and more accurate tumor detection and, as a result, improves the survival rate.

Read: Big data in healthcare

WSI processing

WSIs are large microscopic images of human tissue. They are gigapixel images, often exceeding 100,000x100,000 pixels in width x height. Processing the whole image at once is not possible, so the image is usually divided into multiple patches, and tasks are performed on a per-patch basis. I can recommend CLAM for WSI manipulation. It's a Python library with features dedicated to machine learning tasks: tissue area selection, patch extraction, and feature extraction.
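
To illustrate the idea, here is a minimal sketch of extracting a patch grid with the openslide library (CLAM builds on similar primitives; the file name and patch size below are assumptions for demonstration):

import openslide

patch_size = 224
slide = openslide.OpenSlide("tumor_001.tif")  # hypothetical Camelyon16 slide
width, height = slide.dimensions              # level-0 (full resolution) size in pixels

patches = []
for y in range(0, height - patch_size + 1, patch_size):
    for x in range(0, width - patch_size + 1, patch_size):
        # read_region returns an RGBA PIL image; level 0 is full resolution
        patch = slide.read_region((x, y), 0, (patch_size, patch_size)).convert("RGB")
        patches.append(patch)  # in practice, keep only patches containing tissue

In practice, background patches are filtered out first, since most of a WSI contains no tissue at all; this is exactly what CLAM's tissue area selection does.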

WSI segmentation

Why not U-Net

In computer vision, the usual weapon of choice for image segmentation tasks is U-Net. It was developed in 2015 at the University of Freiburg for biomedical image segmentation. It is now widely adopted for other tasks as well, e.g., image style transfer and stable diffusion. However, when it comes to WSI, the image size is about 100,000x100,000 pixels, and processing the whole image at once is not possible. The image can be split into multiple patches, and U-Net can be run on a per-patch basis. This approach, however, is not optimal for whole slide images of human tissue.


Figure 1. The U-Net architecture proposed in U-Net: Convolutional Networks for Biomedical Image Segmentation.

Here are a few reasons why U-Net is not the best choice for WSI segmentation:

  • The segmentation task aims to assign a class, foreground or background, to each pixel. A single 224x224 patch contains only a few cells. Rather than classifying each pixel, it is more sensible to classify each cell.
  • A segmentation mask of 100,000x100,000 is not needed. A lower resolution is easier to store and manipulate and is more than enough to assist a pathologist's work.
  • Binary classification is easier to train and yields better results than image segmentation. Therefore, per-patch classification (tumor vs. normal) is favored over U-Net segmentation.
  • Border consistency. Classifying whole patches, rather than segmenting per pixel, results in more consistency among patches.

Patch-level classification

Instead of U-Net segmentation, patch-level classification is widely used for WSI segmentation. The WSI is divided into 224x224 patches, and each patch is a separate training instance. It is fed to the classification model, which predicts 0 for regular, non-cancerous patches and 1 for patches containing a tumor. At inference time, the final segmentation map is built from the predictions for all patches of the WSI, as in the sketch below.
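
Here is a minimal sketch of such an inference loop, assuming a trained model, known WSI dimensions (wsi_height, wsi_width), and a hypothetical load_patch helper that returns a normalized patch tensor:

import numpy as np
import torch

patch_size = 224
n_rows, n_cols = wsi_height // patch_size, wsi_width // patch_size
segmentation_map = np.zeros((n_rows, n_cols), dtype=np.uint8)

model.eval()
with torch.no_grad():
    for row in range(n_rows):
        for col in range(n_cols):
            patch = load_patch(row, col)  # (1, 3, 224, 224) tensor, hypothetical helper
            logit = model(patch)
            # one prediction per patch: 1 = tumor, 0 = normal
            segmentation_map[row, col] = int(torch.sigmoid(logit) > 0.5)

Each cell of segmentation_map corresponds to one 224x224 patch, which is why the final mask is a low-resolution overview rather than a full 100,000x100,000 mask.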


Figure 2. The Breast Cancer Metastasis Detection Pipeline. The WSI is divided into multiple patches of size 224x224. Each patch is processed through ResNet18, and the prediction is returned.

For breast tumor detection, a pre-trained ResNet18 model with the last fully connected layer replaced by a 1x1 convolution layer is used. The inputs to the model are WSI patches of size 224x224. The output is a binary class, 0 or 1, where 0 means a normal patch and 1 means a tumor patch. The prediction pipeline is presented in Figure 2. For evaluation, WSIs from the Camelyon16 challenge and a pre-trained model from the MONAI model zoo were used.
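
A minimal PyTorch sketch of such a model, using torchvision's pre-trained ResNet18 as the backbone (an illustration of the idea, not the exact MONAI model zoo bundle):

import torch.nn as nn
from torchvision import models

class TumorPatchClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
        # drop the average pooling and fully connected layers
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        # 1x1 convolution producing a tumor logit per spatial location
        self.head = nn.Conv2d(512, 1, kernel_size=1)

    def forward(self, x):                # x: (N, 3, 224, 224)
        feature_map = self.features(x)   # (N, 512, 7, 7)
        logits = self.head(feature_map)  # (N, 1, 7, 7)
        return logits.mean(dim=(2, 3))   # (N, 1): one logit per patch

Replacing the fully connected layer with a 1x1 convolution keeps the network fully convolutional, so the same weights can also be applied to inputs larger than 224x224 if needed.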

Results

The final segmentation map is a collage of per-patch predictions. The results of running the tumor detection algorithm on breast tissue are presented in Figure 3. We can clearly see that the algorithm detected a tumor in the top-left part of the tissue, along with some false-positive noise in other areas of the image. The predicted segmentation mask can be further post-processed using morphological operations to reduce the noise and strengthen the signal in true tumor areas.
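
One possible post-processing step, sketched with scipy.ndimage on the binary segmentation_map grid from the inference loop above (the exact pipeline may differ):

import numpy as np
from scipy import ndimage

# binary opening removes small, isolated false-positive patches
cleaned = ndimage.binary_opening(segmentation_map, structure=np.ones((2, 2)))
# binary closing fills small holes inside true tumor regions
cleaned = ndimage.binary_closing(cleaned, structure=np.ones((2, 2)))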


Figure 3. Tumor segmentation results.

eXplainability

We can use FoXAI, an open-source library for explainable AI, to verify whether the prediction is based on the correct assumptions. As we can see in Figure 4, the model correctly focuses on tumor cells when predicting a positive patch, while healthy cells remain out of focus. Using FoXAI, we can confirm that the model is working correctly.


Figure 4. GradCAM explanation of the tumor patch-based prediction. The model correctly focuses on tumor cells, while healthy cells remain unfocused.

Read: FoXAI for pneumonia

Using FoXAI is straightforward. We just need to wrap the model with the FoXaiExplainer context manager and run the inference. The code required for running the model explanation is presented in the snippet below.

from foxai.context_manager import (
    FoXaiExplainer,
    ExplainerWithParams,
    CVClassificationExplainers as ClsExpl,
)
from foxai.visualizer import mean_channels_visualization

explainers = [
    ExplainerWithParams(
        explainer_name=ClsExpl.CV_INTEGRATED_GRADIENTS_EXPLAINER
    ),
]

# `model`, `batch`, and `indexes` are assumed to be defined earlier:
# the classifier, a batch of input patches, and the patch indices to visualize
with FoXaiExplainer(
    model=model,
    explainers=explainers,
    target=0,
) as xai_model:
    features, attributes_dict = xai_model(batch)

    for idx in indexes:
        figure = mean_channels_visualization(
            # the dictionary key corresponds to the explainer declared above
            attributions=attributes_dict[
                'CV_INTEGRATED_GRADIENTS_EXPLAINER_0'
            ][idx],
            transformed_img=batch[idx],
            title="Mean of channels",
        )
        figure.savefig(f'xai/vis{idx}.png')

Check how we improved an AI-assisted solution to aid in detecting cancer cells in medical images: Cancer Detection with AI

Summary

In this article, we walked through an approach to detecting breast cancer metastases in lymph nodes. Using deep learning allows for process automation and can significantly boost the work of pathologists. Moreover, AI can help localize small, hard-to-detect tumor areas that could otherwise be easily missed by the pathologist. Using FoXAI, we can audit our AI model. If you are wondering how AI can help in your medical use case, feel free to reach out to us. I would be happy to discuss your use case.
