
Information Extraction Application

Table of contents

1. Introduction

This Information Extraction (IE) guide introduces our open-source, industry-grade solution that covers the most widely used application scenarios of information extraction. It features multi-domain, multi-task, and cross-modal capabilities and covers the full lifecycle of data labeling, model training, and model deployment. We hope this guide helps you apply information extraction techniques in your own products or models.

Information Extraction (IE) is the process of extracting structured information from input data such as text, pictures, or scanned documents. While IE brings immense value, applying IE techniques is never easy, with challenges such as domain adaptation, heterogeneous structures, and a lack of labeled data. This PaddleNLP Information Extraction Guide builds on our work in [Universal Information Extraction](https://arxiv.org/abs/2203.12277) and provides an industrial-grade solution that not only supports extracting entities, relations, events, and opinions from plain text, but also supports cross-modal extraction from documents, tables, and pictures. Our method features a flexible prompt that allows you to specify extraction targets in simple natural language. We also provide several domain-adapted models specialized for different industry sectors.

Highlights:

  • Comprehensive Coverage🎓: Covers the mainstream information extraction tasks for both plain-text and document scenarios and supports multiple languages
  • State-of-the-Art Performance🏃: Strong performance of the UIE model series on plain-text and multimodal datasets. We also provide pretrained models of various sizes to meet different needs
  • Easy to Use⚡: Three lines of code to use our Taskflow for out-of-the-box information extraction; one command line each for model training and model deployment
  • Efficient Tuning✊: Developers can easily get started with data labeling and model training without a background in Machine Learning

2. Features

2.1 Available Models

Multiple models are available, trading off accuracy and speed to suit different information extraction scenarios.

| Model Name | Usage Scenarios | Supported Tasks |
|---|---|---|
| uie-base / uie-medium / uie-mini / uie-micro / uie-nano | Extractive models for plain-text scenarios, support Chinese | Entity, relation, event and opinion extraction |
| uie-base-en | Extractive model for plain-text scenarios, supports English | Entity, relation, event and opinion extraction |
| uie-m-base / uie-m-large | Extractive models for plain-text scenarios, support Chinese and English | Entity, relation, event and opinion extraction |
| uie-x-base | Extractive model for plain-text and document scenarios, supports Chinese and English | Entity, relation, event and opinion extraction on plain text and on documents/pictures/tables |

2.2 Performance

The UIE model series uses the ERNIE 3.0 lightweight models as the pre-trained backbone and is fine-tuned on a large amount of information extraction data so that the models adapt well to the fixed prompt format.

  • Experimental results on Chinese datasets

We conducted experiments on in-house test sets from three domains: finance, healthcare, and internet:

| Model | Finance 0-shot | Finance 5-shot | Healthcare 0-shot | Healthcare 5-shot | Internet 0-shot | Internet 5-shot |
|---|---|---|---|---|---|---|
| uie-base (12L768H) | 46.43 | 70.92 | 71.83 | 85.72 | 78.33 | 81.86 |
| uie-medium (6L768H) | 41.11 | 64.53 | 65.40 | 75.72 | 78.32 | 79.68 |
| uie-mini (6L384H) | 37.04 | 64.65 | 60.50 | 78.36 | 72.09 | 76.38 |
| uie-micro (4L384H) | 37.53 | 62.11 | 57.04 | 75.92 | 66.00 | 70.22 |
| uie-nano (4L312H) | 38.94 | 66.83 | 48.29 | 76.74 | 62.86 | 72.35 |
| uie-m-large (24L1024H) | 49.35 | 74.55 | 70.50 | 92.66 | 78.49 | 83.02 |
| uie-m-base (12L768H) | 38.46 | 74.31 | 63.37 | 87.32 | 76.27 | 80.13 |
| 🧾🎓uie-x-base (12L768H) | 48.84 | 73.87 | 65.60 | 88.81 | 79.36 | 81.65 |

0-shot means prediction through paddlenlp.Taskflow directly, without any training data; 5-shot means that each category provides 5 labeled examples for model fine-tuning. The experiments show that UIE can further improve its performance with only a small amount of labeled data (few-shot).

  • Experimental results on multimodal datasets

We evaluated the zero-shot performance of UIE-X on in-house multimodal test sets from three domains: general, financial, and medical:

| Model | General | Financial | Medical |
|---|---|---|---|
| 🧾🎓uie-x-base (12L768H) | 65.03 | 73.51 | 84.24 |

The general test set contains complex samples from a variety of fields and is the most difficult of the three.

2.3 Full Development Lifecycle

Research stage

Data preparation stage

  • We recommend fine-tuning your own information extraction model for your use case. We provide Label Studio labeling solutions for different extraction scenarios, which enable a seamless connection from data labeling to training data construction and greatly reduce the time cost of data labeling and model customization. A minimal sketch of the conversion idea follows.
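
The Label Studio solution ships its own conversion tooling; the snippet below is only a minimal illustrative sketch of the underlying idea, turning a simplified span-annotation export into the prompt/result_list samples used for UIE fine-tuning. The helper name and the assumed export fields are hypothetical simplifications, not the project's actual script.

```python
import json

def label_studio_to_uie(export_path: str, output_path: str) -> None:
    """Sketch: convert simplified Label Studio span annotations into
    UIE-style training lines ({"content", "result_list", "prompt"})."""
    with open(export_path, "r", encoding="utf-8") as f:
        tasks = json.load(f)  # assumed: a plain Label Studio JSON export

    with open(output_path, "w", encoding="utf-8") as out:
        for task in tasks:
            text = task["data"]["text"]
            # Group annotated spans by label so that each label becomes one prompt.
            spans_by_label = {}
            for annotation in task.get("annotations", []):
                for item in annotation.get("result", []):
                    value = item["value"]
                    for label in value.get("labels", []):
                        spans_by_label.setdefault(label, []).append(
                            {"text": value["text"],
                             "start": value["start"],
                             "end": value["end"]}
                        )
            # One training sample per (text, label) pair.
            for label, spans in spans_by_label.items():
                sample = {"content": text, "result_list": spans, "prompt": label}
                out.write(json.dumps(sample, ensure_ascii=False) + "\n")
```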

Model fine-tuning and closed domain distillation

Model Deployment

2.4 Demo

  • UIE-X end-to-end document extraction industry application example

    • Customs declaration

    • Delivery note (needs fine-tuning)

    • VAT invoice (needs fine-tuning)

    • Form (needs fine-tuning)

3. Quick Start

3.1 Taskflow
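
Taskflow provides out-of-the-box inference with UIE models and requires no training data. Below is a minimal sketch of zero-shot entity extraction with the English model; the schema labels and the example sentence are illustrative choices, not prescriptions from this guide.

```python
from pprint import pprint
from paddlenlp import Taskflow

# Describe what to extract with a natural-language schema.
# "Person" and "Organization" are illustrative labels.
schema = ["Person", "Organization"]

# Load the English plain-text UIE model (weights are downloaded on first use).
ie = Taskflow("information_extraction", schema=schema, model="uie-base-en")

# Zero-shot extraction on a sample sentence.
pprint(ie("In 1997, Steve was excited to become the CEO of Apple."))
```

The extraction targets can be changed at any time with `ie.set_schema(new_schema)` without reloading the model.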

3.2 Text Information Extraction
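
For relation and opinion extraction on plain text, the schema can be nested: a dict maps a subject type to the relation names to extract for it. The sketch below assumes illustrative labels and input; adapt both to your own task.

```python
from pprint import pprint
from paddlenlp import Taskflow

# Nested schema: for each extracted "Person", also extract the listed relations.
# The labels and the sentence are placeholders for illustration.
schema = {"Person": ["Company", "Position"]}

ie = Taskflow("information_extraction", schema=schema, model="uie-base-en")
pprint(ie("Tim Cook is the CEO of Apple Inc., which is headquartered in Cupertino."))
```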

3.3 Document Information Extraction
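
For documents, pictures, and tables, the cross-modal uie-x-base model accepts a file path wrapped in a dict with the "doc" key, and the pipeline runs OCR and layout analysis internally. The field names and the file path below are placeholders; replace them with your own schema and document.

```python
from pprint import pprint
from paddlenlp import Taskflow

# Field names to extract from an invoice-like document (placeholders).
schema = ["开票日期", "名称", "纳税人识别号", "金额"]

# Load the cross-modal UIE-X model for Chinese and English documents.
ie = Taskflow("information_extraction", schema=schema, model="uie-x-base")

# Pass a scanned picture or document as {"doc": <path>}.
pprint(ie({"doc": "./invoice_sample.png"}))
```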