# AEM: Attention Entropy Maximization for Multiple Instance Learning based Whole Slide Image Classification

Paper: [arXiv:2406.15303](https://arxiv.org/abs/2406.15303) · Dataset on HF

This repository contains the PyTorch implementation of our paper "AEM: Attention Entropy Maximization for Multiple Instance Learning based Whole Slide Image Classification". The code is built upon the ACMIL framework.

## Overview

Attention Entropy Maximization (AEM) is a novel plug-and-play regularization technique designed to address attention concentration in Multiple Instance Learning (MIL) frameworks. Key features:

- Simple yet effective solution for mitigating overfitting in whole slide image classification tasks
- Requires no additional modules
- Introduces a single hyperparameter (the regularization weight `lamda`)
- Demonstrates excellent compatibility with various MIL frameworks and techniques
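
The core idea fits in a few lines: the entropy of the attention distribution is added to the training objective, so minimizing the loss maximizes attention entropy and discourages the model from concentrating on a few instances. Below is a minimal PyTorch sketch of this objective; names such as `aem_loss` and `attn_logits` are illustrative and not the repository's actual API:

```python
import torch
import torch.nn.functional as F

def aem_loss(bag_logits, label, attn_logits, lamda=0.1, eps=1e-8):
    """Classification loss with attention entropy maximization (a sketch).

    attn_logits: unnormalized attention scores over the N instances of one
    bag; lamda weights the entropy term (lamda = 0 recovers plain ABMIL).
    """
    attn = torch.softmax(attn_logits, dim=-1)              # instance weights, sum to 1
    entropy = -(attn * torch.log(attn + eps)).sum(dim=-1)  # H(attention) per bag
    cls_loss = F.cross_entropy(bag_logits, label)
    # Subtracting the entropy term maximizes it while the loss is minimized.
    return cls_loss - lamda * entropy.mean()
```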

## Dataset Preparation

We provide pre-extracted features to facilitate the reproduction of our results. These features are available through two platforms:

  1. Hugging Face: A comprehensive dataset containing all features

  2. Quark Pan: Individual feature sets for specific models and datasets

### CAMELYON16 Dataset Features

| Model | Feature Set |
| --- | --- |
| ImageNet-supervised ResNet18 | Download |
| SSL ViT-S/16 | Download |
| PathGen-CLIP ViT-L (336 × 336 pixels) | Download |

### CAMELYON17 Dataset Features

| Model | Feature Set |
| --- | --- |
| ImageNet-supervised ResNet18 | Download |
| SSL ViT-S/16 | Download |
| PathGen-CLIP ViT-L (336 × 336 pixels) | Download |

These pre-extracted features let researchers quickly reproduce and validate our results. Choose the dataset and model that best suits your research needs.

For custom datasets, modify and run `Step1_create_patches_fp.py` and `Step2_feature_extract.py`. More details can be found in the CLAM repository.
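
For illustration, the two steps might be invoked as below; the flags shown follow CLAM's conventions and the directory names are placeholders, so consult each script's argument parser for the exact options:

```bash
# Step 1: segment tissue and store patch coordinates (CLAM-style flags, assumed)
python Step1_create_patches_fp.py --source WSI_DIR --save_dir PATCH_DIR --patch_size 256 --seg --patch

# Step 2: extract one feature vector per patch with the chosen encoder
python Step2_feature_extract.py --data_h5_dir PATCH_DIR --data_slide_dir WSI_DIR --feat_dir FEAT_DIR
```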

Note: We recommend extracting features with SSL-pretrained encoders. Our code uses checkpoints provided by Benchmarking Self-Supervised Learning on Diverse Pathology Datasets.

## Pretrained Checkpoints

When running `Step2_feature_extract.py`, you can choose from various feature encoders. Links to obtain their checkpoints are provided below:

| Model | Website Link |
| --- | --- |
| Lunit | Website |
| UNI | Website |
| Gigapath | Website |
| Virchow | Website |
| PLIP | Website |
| Quilt-net | Website |
| Biomedclip | Website |
| PathGen-CLIP | Website |
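
As a rough sketch, an SSL ViT-S/16 checkpoint (e.g., the Lunit weights) can typically be loaded through timm before feature extraction; the file path and state-dict layout below are assumptions, hence `strict=False`:

```python
import timm
import torch

# Build a ViT-S/16 backbone as a pure feature extractor (no classifier head).
model = timm.create_model('vit_small_patch16_224', pretrained=False, num_classes=0)

# Checkpoint path and key names are assumptions; strict=False tolerates mismatches.
state_dict = torch.load('checkpoints/vit_small_ssl.pth', map_location='cpu')
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print(f'missing: {len(missing)}, unexpected: {len(unexpected)}')

model.eval()
with torch.no_grad():
    feats = model(torch.randn(1, 3, 224, 224))  # -> (1, 384) embedding for ViT-S
```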

## Training

### Baseline (ABMIL)

To run the baseline ABMIL, use the following command with `--lamda 0`:

```bash
CUDA_VISIBLE_DEVICES=0 python main.py --seed 4 --wandb_mode online --lamda 0 --pretrain medical_ssl --config config/camelyon17_config.yml
```

### AEM

To run AEM, use the following command with `lamda > 0`:

```bash
CUDA_VISIBLE_DEVICES=0 python main.py --seed 4 --wandb_mode online --lamda 0.1 --pretrain medical_ssl --config config/camelyon17_config.yml
```

## BibTeX

If you find our work useful for your project, please consider citing the following paper.

```bibtex
@misc{zhang2024aem,
      title={AEM: Attention Entropy Maximization for Multiple Instance Learning based Whole Slide Image Classification},
      author={Yunlong Zhang and Honglin Li and Yuxuan Sun and Jingxiong Li and Chenglu Zhu and Lin Yang},
      year={2024},
      eprint={2406.15303},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```