CascadeTabNet


CascadeTabNet: An approach for end to end table detection and structure recognition from image-based documents
Devashish Prasad, Ayan Gadpal, Kshitij Kapadni, Manish Visave, Kavita Sultanpure
CVPR Link of Paper
arXiv Link of Paper
Supplementary file
The paper was presented (oral) at the CVPR 2020 Workshop on Text and Documents in the Deep Learning Era
Virtual Oral Presentation Video

1. Introduction

CascadeTabNet is an automatic table recognition method for interpreting tabular data in document images. We present an improved deep-learning-based end-to-end approach that solves both table detection and structure recognition with a single Convolutional Neural Network (CNN) model. CascadeTabNet is a Cascade mask Region-based CNN High-Resolution Network (Cascade Mask R-CNN HRNet) model that detects table regions and recognizes the structural body cells from the detected tables at the same time. We evaluate our results on the ICDAR 2013, ICDAR 2019 and TableBank public datasets. We achieved 3rd rank in the ICDAR 2019 post-competition results for table detection while attaining the best accuracy results on the ICDAR 2013 and TableBank datasets. We also attain the highest accuracy results on the ICDAR 2019 table structure recognition dataset.

2. Setup

Models are developed in the PyTorch-based MMDetection framework (version 1.2).

```bash
pip install -q mmcv terminaltables
git clone --branch v1.2.0 'https://github.com/open-mmlab/mmdetection.git'
cd "mmdetection"
python setup.py install
python setup.py develop
pip install -r requirements.txt
```

The code was developed with the following library dependencies:

PyTorch = 1.4.0
Torchvision = 0.5.0
CUDA = 10.0

```bash
pip install torch==1.4.0+cu100 torchvision==0.5.0+cu100 -f https://download.pytorch.org/whl/torch_stable.html
```

If you are using Google Colaboratory (Colab), you need to add

```python
from google.colab.patches import cv2_imshow
```

and replace all calls to cv2.imshow with cv2_imshow.
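For example, a minimal sketch of the substitution described above; the image path is a placeholder, not a file shipped with this repository:

```python
import cv2
from google.colab.patches import cv2_imshow  # Colab-only display helper

img = cv2.imread("demo_page.png")  # placeholder path
# On a local machine this would be: cv2.imshow("tables", img)
cv2_imshow(img)  # renders the image inline in the Colab notebook output
```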

3. Model Architecture

Model Computation Graph

4. Image Augmentation


Codes:
- Code for dilation transform
- Code for smudge transform
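To illustrate the idea behind the dilation transform, here is a minimal sketch assuming an OpenCV pipeline; the kernel size, iteration count, and file paths are illustrative placeholders, not the exact settings used in the released transform scripts:

```python
import cv2
import numpy as np

def dilation_transform(image_path, out_path, kernel_size=2, iterations=1):
    """Binarize a document image and thicken text strokes via dilation."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Otsu threshold with inversion so text pixels become white on black.
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    dilated = cv2.dilate(binary, kernel, iterations=iterations)
    # Invert back to black text on a white background before saving.
    cv2.imwrite(out_path, 255 - dilated)

dilation_transform("page.png", "page_dilated.png")  # placeholder file names
```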

5. Benchmarking

5.1. Table Detection

1. ICDAR 13

2. ICDAR 19 (Track A Modern)

3. TableBank


TableBank Benchmarking: Leaderboard
TableBank Dataset Divisions: TableBank

5.2. Table Structure Recognition

1. ICDAR 19 (Track B2)

6. Model Zoo

Check out our demo notebook for loading checkpoints and performing inference:
Open In Colab
Config files for the models
Note: config paths only need to be changed during training.
Checkpoints of the models we have trained:

| Model Name | Checkpoint File |
| --- | --- |
| General Model table detection | Checkpoint |
| ICDAR 13 table detection | Checkpoint |
| ICDAR 19 (Track A Modern) table detection | Checkpoint |
| Table Bank Word table detection | Checkpoint |
| Table Bank Latex table detection | Checkpoint |
| Table Bank Both table detection | Checkpoint |
| ICDAR 19 (Track B2 Modern) table structure recognition | Checkpoint |
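For reference, a minimal inference sketch against the MMDetection v1.2 API, loosely following what the demo notebook does; the config and checkpoint file names, the image path, and the score threshold are assumptions to be replaced with the files you actually download from the table above:

```python
from mmdet.apis import init_detector, inference_detector

# Placeholder paths: substitute the config and checkpoint you downloaded.
config_file = 'cascade_mask_rcnn_hrnetv2p_w32_20e.py'
checkpoint_file = 'epoch_36.pth'

model = init_detector(config_file, checkpoint_file, device='cuda:0')

# Run detection on a single document image (placeholder path).
result = inference_detector(model, 'demo_page.png')

# For a Cascade Mask R-CNN the result is (bbox_results, segm_results);
# bbox_results holds one [N, 5] array (x1, y1, x2, y2, score) per class.
bbox_results = result[0] if isinstance(result, tuple) else result
for class_id, bboxes in enumerate(bbox_results):
    for x1, y1, x2, y2, score in bboxes:
        if score >= 0.85:  # illustrative confidence threshold
            print(f'class {class_id}: ({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f}) score={score:.2f}')
```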

7. Datasets

  1. End to End Table Recognition Dataset
    We manually annotated some of the ICDAR 19 table competition (cTDaR) dataset images for cell detection in borderless tables. More details about the dataset are given in the paper.
    dataset link

  2. General Table Detection Dataset (ICDAR 19 + Marmot + GitHub)
    We manually corrected the annotations of the Marmot and GitHub datasets and combined them with the ICDAR 19 dataset to create a general and robust dataset.
    dataset link

8. Training

You may refer to this tutorial for training MMDetection models on your custom dataset in Colab.

You may refer to this script to convert your Pascal VOC XML annotation files to a single COCO JSON file; a minimal sketch of that conversion is shown below.
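As an illustration of the kind of conversion the linked script performs, here is a minimal, self-contained sketch that turns a folder of Pascal VOC XML files into a single COCO-style JSON file; the category names, directory layout, and output file name are placeholders for your own dataset, and this is not the repository's script itself:

```python
import glob
import json
import os
import xml.etree.ElementTree as ET

def voc_to_coco(xml_dir, out_json, category_names=('table',)):
    """Convert a directory of Pascal VOC XML annotations to one COCO JSON file."""
    categories = [{'id': i + 1, 'name': n} for i, n in enumerate(category_names)]
    name_to_id = {c['name']: c['id'] for c in categories}
    images, annotations = [], []
    ann_id = 1

    for img_id, xml_path in enumerate(sorted(glob.glob(os.path.join(xml_dir, '*.xml'))), start=1):
        root = ET.parse(xml_path).getroot()
        size = root.find('size')
        images.append({
            'id': img_id,
            'file_name': root.findtext('filename'),
            'width': int(size.findtext('width')),
            'height': int(size.findtext('height')),
        })
        for obj in root.findall('object'):
            box = obj.find('bndbox')
            x1, y1 = float(box.findtext('xmin')), float(box.findtext('ymin'))
            x2, y2 = float(box.findtext('xmax')), float(box.findtext('ymax'))
            w, h = x2 - x1, y2 - y1
            annotations.append({
                'id': ann_id,
                'image_id': img_id,
                'category_id': name_to_id[obj.findtext('name')],
                'bbox': [x1, y1, w, h],  # COCO format: [x, y, width, height]
                'area': w * h,
                'iscrowd': 0,
            })
            ann_id += 1

    with open(out_json, 'w') as f:
        json.dump({'images': images, 'annotations': annotations, 'categories': categories}, f)

voc_to_coco('voc_annotations/', 'train_coco.json')  # placeholder paths
```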

Contact

Devashish Prasad : devashishkprasad [at] gmail [dot] com
Ayan Gadpal : ayangadpal2 [at] gmail [dot] com
Kshitij Kapadni : kshitij.kapadni [at] gmail [dot] com
Manish Visave : manishvisave149 [at] gmail [dot] com

Acknowledgements

We thank the following, whose contributions made this paper possible:

  1. The MMDetection project team, for creating an amazing framework that pushes state-of-the-art computer vision research forward and enabled us to experiment with and build state-of-the-art models very easily.

  2. Our college, Pune Institute of Computer Technology, for funding our research and giving us the opportunity to work on and publish our research at an international conference.

  3. Kai Chen, for endorsing our paper on arXiv so we could publish a pre-print, and for maintaining the MMDetection repository along with the team.

  4. The Google Colaboratory team, for providing free high-end GPU resources for research and development. The entire code base was developed on Google Colab and would not have been possible without it.

  5. AP Analytica, for making us aware of a similar problem statement and giving us an opportunity to work on it.

  6. Overleaf.com, for open-sourcing the wonderful project that enabled us to write the research paper easily in LaTeX.

License

The code of CascadeTabNet is open source under the MIT License. There is no limitation on either academic or commercial usage.

Cite as

If you find this work useful for your research, please cite our paper:

```bibtex
@misc{cascadetabnet2020,
    title={CascadeTabNet: An approach for end to end table detection and structure recognition from image-based documents},
    author={Devashish Prasad and Ayan Gadpal and Kshitij Kapadni and Manish Visave and Kavita Sultanpure},
    year={2020},
    eprint={2004.12629},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
```