SegFormer++

Paper: Segformer++: Efficient Token-Merging Strategies for High-Resolution Semantic Segmentation


Abstract

Utilizing transformer architectures for semantic segmentation of high-resolution images is hindered by the attention's quadratic computational complexity in the number of tokens. A solution to this challenge is to decrease the number of tokens through token merging, which has exhibited remarkable improvements in inference speed, training efficiency, and memory utilization for image classification tasks. In this paper, we explore various token merging strategies within the framework of the SegFormer architecture and perform experiments on multiple semantic segmentation and human pose estimation datasets. Notably, without re-training the model, we achieve, for example, an inference acceleration of 61% on the Cityscapes dataset while maintaining mIoU performance. Consequently, this paper facilitates the deployment of transformer-based architectures on resource-constrained devices and in real-time applications.
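
For intuition, the core token-merging step can be sketched in a few lines of PyTorch. The snippet below is a minimal, illustrative bipartite soft-matching merge in the spirit of ToMe (Bolya et al.): each token in one alternating subset is paired with its most similar token in the other subset, and the r most similar pairs are averaged together. The function name and the plain-average merge rule are simplifying assumptions for illustration, not this repository's implementation; the different merging strategies and schedules are what give the Segformer++ variants (HQ, fast, 2x2) their speed/accuracy trade-offs reported below.

```python
# Minimal, illustrative token-merging sketch (bipartite soft matching in the
# spirit of ToMe). NOT the repository's implementation; the function name and
# the plain-average merge rule are simplifying assumptions.
import torch
import torch.nn.functional as F

def bipartite_merge(x: torch.Tensor, r: int) -> torch.Tensor:
    """Merge the r most similar token pairs: (B, N, C) -> (B, N - r, C).

    Assumes N is even for simplicity.
    """
    B, N, C = x.shape
    a, b = x[:, ::2, :], x[:, 1::2, :]                 # two alternating token sets
    sim = F.normalize(a, dim=-1) @ F.normalize(b, dim=-1).transpose(-1, -2)
    best_sim, best_dst = sim.max(dim=-1)               # best partner in b per a-token
    order = best_sim.argsort(dim=-1, descending=True)  # most similar pairs first
    src_merge = order[:, :r]                           # a-tokens merged away
    src_keep = order[:, r:]                            # a-tokens kept unchanged
    dst = best_dst.gather(1, src_merge)                # partner indices in b
    merged = a.gather(1, src_merge.unsqueeze(-1).expand(-1, -1, C))
    # Fold each merged a-token into its partner in b by averaging.
    b = b.scatter_reduce(1, dst.unsqueeze(-1).expand(-1, -1, C),
                         merged, reduce="mean", include_self=True)
    a_kept = a.gather(1, src_keep.unsqueeze(-1).expand(-1, -1, C))
    return torch.cat([a_kept, b], dim=1)               # N - r tokens remain
```

Because merging only reduces the token count fed into attention, it can be applied to a pretrained model without re-training, which is how the inference-only speed-ups in the first table below are obtained.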

Results and Models

Memory refers to the VRAM requirements during the training process.

Inference on Cityscapes (MiT-B5)

All inference results were obtained using the weights of the Segformer (Original) model.

| Method | mIoU | Speed-Up (×) | config | download |
| --- | --- | --- | --- | --- |
| Segformer (Original) | 82.39 | - | config | model |
| Segformer++HQ (ours) | 82.31 | 1.61 | config | model |
| Segformer++fast (ours) | 82.04 | 1.94 | config | model |
| Segformer++2x2 (ours) | 81.96 | 1.90 | config | model |
| Segformer (Downsampling) | 77.31 | 6.51 | config | model |

Training on Cityscapes (MiT-B5)

| Method | mIoU | Speed-Up (×) | Memory (GB) | config | download |
| --- | --- | --- | --- | --- | --- |
| Segformer (Original) | 82.39 | - | 48.3 | config | model |
| Segformer++HQ (ours) | 82.19 | 1.40 | 34.0 | config | model |
| Segformer++fast (ours) | 81.77 | 1.55 | 30.5 | config | model |
| Segformer++2x2 (ours) | 82.38 | 1.63 | 31.1 | config | model |
| Segformer (Downsampling) | 79.24 | 2.95 | 10.0 | config | model |

Training on ADE20K (640x640) (MiT-B5)

| Method | mIoU | Speed-Up (×) | Memory (GB) | config | download |
| --- | --- | --- | --- | --- | --- |
| Segformer (Original) | 49.72 | - | 33.7 | config | model |
| Segformer++HQ (ours) | 49.77 | 1.15 | 29.2 | config | model |
| Segformer++fast (ours) | 49.10 | 1.20 | 28.0 | config | model |
| Segformer++2x2 (ours) | 49.35 | 1.26 | 27.2 | config | model |
| Segformer (Downsampling) | 46.71 | 1.89 | 12.4 | config | model |

Training on JBD

| Method | PCK@0.1 | PCK@0.05 | Speed-Up (×) | Memory (GB) | config | download |
| --- | --- | --- | --- | --- | --- | --- |
| Segformer (Original) | 95.20 | 90.65 | - | 40.0 | config | model |
| Segformer++HQ (ours) | 95.18 | 90.51 | 1.19 | 36.0 | config | model |
| Segformer++fast (ours) | 94.58 | 89.87 | 1.25 | 34.6 | config | model |
| Segformer++2x2 (ours) | 95.17 | 90.16 | 1.27 | 33.4 | config | model |

Training on MS COCO

| Method | PCK@0.1 | PCK@0.05 | Speed-Up (×) | Memory (GB) | config | download |
| --- | --- | --- | --- | --- | --- | --- |
| Segformer (Original) | 95.16 | 87.61 | - | 13.5 | config | model |
| Segformer++HQ (ours) | 94.97 | 87.35 | 0.97 | 13.1 | config | model |
| Segformer++fast (ours) | 95.02 | 87.37 | 0.99 | 12.9 | config | model |
| Segformer++2x2 (ours) | 94.98 | 87.36 | 1.24 | 12.3 | config | model |

Usage

To use our models for semantic segmentation or 2D human pose estimation, follow the installation instructions in the MMSegmentation and MMPose documentation, respectively.
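
As a minimal sketch, inference with one of the released configs and checkpoints might look as follows, assuming the MMSegmentation 1.x Python API (`init_model` / `inference_model`); all file paths below are placeholders for the config and model links in the tables above.

```python
# Minimal inference sketch, assuming the MMSegmentation 1.x API.
# The config/checkpoint/image paths are placeholders, not real files.
from mmseg.apis import init_model, inference_model

config_file = 'path/to/segformer_plusplus_hq.py'       # placeholder
checkpoint_file = 'path/to/segformer_plusplus_hq.pth'  # placeholder

model = init_model(config_file, checkpoint_file, device='cuda:0')
result = inference_model(model, 'path/to/image.png')   # placeholder image

# result is a SegDataSample; pred_sem_seg.data holds the class-index map.
print(result.pred_sem_seg.data.shape)
```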

Citation

@article{kienzle2024segformer++,
  title={Segformer++: Efficient Token-Merging Strategies for High-Resolution Semantic Segmentation},
  author={Kienzle, Daniel and Kantonis, Marco and Sch{\"o}n, Robin and Lienhart, Rainer},
  journal={IEEE International Conference on Multimedia Information Processing and Retrieval (MIPR)},
  year={2024}
}
