
Deep Multi-Magnification Network

This repository provides training and inference code for the Deep Multi-Magnification Network published here. The network automatically segments multiple tissue subtypes in histopathology whole slide images by processing a set of patches drawn from multiple magnifications.

Prerequisites

  • Python 3.6.7
  • PyTorch 1.3.1
  • OpenSlide 1.1.1
  • Albumentations
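
A quick way to confirm the pinned versions are importable (a minimal check; openslide here is the openslide-python binding, which also requires the native OpenSlide library):

    import torch
    import openslide
    import albumentations

    # Print installed versions to compare against the pinned prerequisites.
    print(torch.__version__)            # expected: 1.3.1
    print(openslide.__version__)        # expected: 1.1.1
    print(albumentations.__version__)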

Training

The main training code is training.py. The trained segmentation model will be saved under runs/ by default.

In addition to config, you may need to update the following variables before running training.py:

  • n_classes: the number of tissue subtype classes + 1
  • train_file and val_file: the list of training and validation patches
    • Slide patches must be stored as /path/slide_tiles/patch_1.jpg, /path/slide_tiles/patch_2.jpg, ... /path/slide_tiles/patch_N.jpg
    • The corresponding label patches must be stored as /path/label_tiles/patch_1.png, /path/label_tiles/patch_2.png, ... /path/label_tiles/patch_N.png
    • train_file and val_file must be formatted as
     /path/,patch_1
     /path/,patch_2
     ...
     /path/,patch_N
    
  • d: the number of pixels belonging to each class in the training set, used to weight the cross-entropy loss function

Note that pixels labeled as class 0 are unannotated and do not contribute to training (see the sketch below).
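
Concretely, the sketch below shows how one train_file line maps to the patch paths above, and one way d can be turned into class weights for a weighted cross-entropy loss that ignores class 0. The pixel counts and the inverse-frequency weighting are illustrative assumptions, not necessarily the exact scheme used in training.py.

    import torch
    import torch.nn as nn

    # A train_file/val_file line such as "/path/,patch_1" maps to an image
    # patch and its corresponding label patch.
    def parse_line(line):
        root, name = line.strip().split(",")
        return root + "slide_tiles/" + name + ".jpg", root + "label_tiles/" + name + ".png"

    # d[c] = number of pixels of class c in the training set; the counts here
    # are hypothetical, and d[0] (unannotated pixels) is never used.
    d = torch.tensor([0.0, 1.2e9, 3.4e8, 5.6e7])     # n_classes = 4
    weights = d.sum() / d.clamp(min=1.0)             # inverse-frequency weighting
    weights[0] = 0.0                                 # class 0 carries no weight
    criterion = nn.CrossEntropyLoss(weight=weights, ignore_index=0)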

Inference

The main inference code consists of slidereader_coords.py and inference.py. You first need to run slidereader_coords.py to generate the coordinates of patches to be segmented in the input whole slide images. After generating the patch coordinates, you may run inference.py to generate segmentation predictions for the input whole slide images. The segmentation predictions will be saved under imgs/ by default.

You may need to update the following variables before running slidereader_coords.py:

  • slides_to_read: the list of whole slide images
  • coord_file: an output file listing all patch coordinates

In addition to model_path and out_path, you may need to update the following variables before running inference.py (a patch-reading sketch follows this list):

  • n_classes: the number of tissue subtype classes + 1
  • test_file: the list of patch coordinates generated by slidereader_coords.py
  • data_path: the path where whole slide images are located
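
As a rough illustration of the patch reading that inference performs, the sketch below extracts co-centered patches at two magnifications with OpenSlide. The slide path, coordinates, and patch size are hypothetical; in practice they come from coord_file.

    import openslide

    slide = openslide.OpenSlide("/path/to/slide.svs")    # hypothetical slide path
    x, y, size = 10000, 20000, 256                       # hypothetical level-0 coordinates

    # Patch at the highest magnification.
    patch_hi = slide.read_region((x, y), 0, (size, size)).convert("RGB")

    # Co-centered patch at half the magnification: read twice the field of
    # view at level 0 and downsample, so no exact 2x pyramid level is needed.
    cx, cy = x + size // 2, y + size // 2
    patch_lo = slide.read_region(
        (cx - size, cy - size), 0, (2 * size, 2 * size)
    ).convert("RGB").resize((size, size))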

Please download the pretrained breast model here.
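
A minimal loading sketch, assuming the downloaded file is a PyTorch checkpoint compatible with model_path in inference.py (the filename below is hypothetical):

    import torch

    # Load the downloaded checkpoint on CPU; point model_path in inference.py
    # at the same file. Whether it holds a raw state_dict or a wrapped dict
    # is an assumption to verify against the repository's loading code.
    checkpoint = torch.load("breast_model.pth", map_location="cpu")
    # model.load_state_dict(checkpoint)  # with the DMMN model defined in this repo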

Note that segmentation predictions will be generated in 4-bit BMP format. The size limit for 4-bit BMP files is 2^32 pixels.

Other pretrained segmentation models

Please find other pretrained segmentation models trained with the Deep Multi-Magnification Network:

  • Ovarian model here.
  • Osteosarcoma model here.

License

This project is under the CC-BY-NC 4.0 license. See LICENSE for details. (c) MSK


Reference

If you find our work useful, please cite our paper:

@article{ho2021,
  title={Deep Multi-Magnification Networks for multi-class breast cancer image segmentation},
  author={Ho, David Joon and Yarlagadda, Dig V.K. and D'Alfonso, Timothy M. and Hanna, Matthew G. and Grabenstetter, Anne and Ntiamoah, Peter and Brogi, Edi and Tan, Lee K. and Fuchs, Thomas J.},
  journal={Computerized Medical Imaging and Graphics},
  year={2021},
  volume={88},
  pages={101866}
}
