Commit

Issue 236
DavidBlavid committed Jun 20, 2024
1 parent 0e1e1b4 commit 8ea06bc
Showing 2 changed files with 6 additions and 0 deletions.
2 changes: 2 additions & 0 deletions systems.md
@@ -159,3 +159,5 @@
| Pangu | Shouhui Wang and Biao Qin | [Link](https://aclanthology.org/2024.lrec-main.1074.pdf) | yes | [Link](https://github.com/dki-lab/Pangu) | [Link](https://aclanthology.org/2023.acl-long.270.pdf) | This paper proposes Pangu, a generic framework for grounded language understanding that capitalizes on the discriminative ability of LMs instead of their generative ability. Pangu consists of a symbolic agent and a neural LM working in a concerted fashion: The agent explores the environment to incrementally construct valid plans, and the LM evaluates the plausibility of the candidate plans to guide the search process. | Gu et al. |
| FC-KBQA | Shouhui Wang and Biao Qin | [Link](https://aclanthology.org/2024.lrec-main.1074.pdf) | no | - | [Link](https://aclanthology.org/2023.acl-long.57.pdf) | This paper proposes a Fine-to-Coarse Composition framework for KBQA (FC-KBQA) to ensure both the generalization ability and the executability of the logical expression. The main idea of FC-KBQA is to extract relevant fine-grained knowledge components from the KB and reformulate them into middle-grained knowledge pairs for generating the final logical expressions. | Zhang et al. |
| TFS-KBQA | Shouhui Wang and Biao Qin | [Link](https://aclanthology.org/2024.lrec-main.1074.pdf) | yes | [Link](https://github.com/shouh/TFS-KBQA) | [Link](https://aclanthology.org/2024.lrec-main.1074.pdf) | This study adopts LLMs, such as Large Language Model Meta AI (LLaMA), as a channel to connect natural language questions with structured knowledge representations and proposes a Three-step Fine-tune Strategy based on a large language model to implement the KBQA system (TFS-KBQA). The method converts natural language questions directly into structured knowledge representations, thereby overcoming limitations of existing KBQA methods, such as large search and reasoning spaces and the need to rank massive candidates. | Shouhui Wang and Biao Qin |
| TSQA | Gao et al. | [Link](https://arxiv.org/pdf/2402.16568) | no | - | [Link](https://arxiv.org/pdf/2203.00255) | This paper proposes a time-sensitive question answering (TSQA) framework to tackle these problems. TSQA features a timestamp estimation module to infer the unwritten timestamp from the question, and it employs a time-sensitive KG encoder to inject ordering information into the temporal KG embeddings that TSQA is based on. | Shang et al. |
| GenTKGQA | Gao et al. | [Link](https://arxiv.org/pdf/2402.16568) | no | - | [Link](https://arxiv.org/pdf/2402.16568) | This paper first proposes a novel generative temporal knowledge graph question answering framework, GenTKGQA, which guides LLMs to answer temporal questions through two phases: Subgraph Retrieval and Answer Generation. First, we exploit LLM’s intrinsic knowledge to mine temporal constraints and structural links in the questions without extra training, thus narrowing down the subgraph search space in both temporal and structural dimensions. Next, we design virtual knowledge indicators to fuse the graph neural network signals of the subgraph and the text representations of the LLM in a non-shallow way, which helps the open-source LLM deeply understand the temporal order and structural dependencies among the retrieved facts through instruction tuning. | Gao et al. |
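The Pangu row above describes a symbolic agent that proposes candidate plans while an LM discriminatively scores them to guide the search. A minimal sketch of that search pattern, assuming a toy operator set and a stand-in scoring function (all names and the heuristic scorer are illustrative, not from the Pangu codebase):

```python
# Sketch of Pangu-style discriminative plan search: an agent enumerates
# valid one-step plan extensions, and a scorer (standing in for the LM's
# plausibility estimate) ranks candidates at each step. Hypothetical names.

OPS = ("filter", "join", "count")

def extend(plan):
    """Agent step: enumerate all valid one-step extensions of a plan."""
    return [plan + [op] for op in OPS]

def score(plan, question):
    """Stand-in for the LM's plausibility score. Toy heuristic:
    reward plan steps whose operator names appear in the question."""
    return sum(1 for step in plan if step in question)

def search(question, steps=2, beam=2):
    """Beam search over plans, keeping the top-scoring candidates."""
    beams = [[]]
    for _ in range(steps):
        candidates = [p for plan in beams for p in extend(plan)]
        candidates.sort(key=lambda p: score(p, question), reverse=True)
        beams = candidates[:beam]
    return beams[0]
```

The division of labor mirrors the row's description: plan validity is enforced symbolically by `extend`, while ranking is purely discriminative, so the LM never has to generate a well-formed plan from scratch.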
4 changes: 4 additions & 0 deletions wikidata/CronQuestions.md
@@ -5,16 +5,20 @@

| Model / System | Year | Hits@1 | Hits@10 | Reported by |
| :------------: | :--: | :----: | :-----: | :---------------------------------------------------------------------------: |
| GenTKGQA | 2024 | 0.978 | 0.981 | [Gao et al.](https://arxiv.org/pdf/2402.16568) |
| TempoQR-Hard | 2021 | 0.918 | 0.978 | [Mavromatis et al.](https://arxiv.org/pdf/2112.05785.pdf) |
| TSQA | 2024 | 0.831 | 0.980 | [Gao et al.](https://arxiv.org/pdf/2402.16568) |
| TempoQR-Soft | 2021 | 0.799 | 0.957 | [Mavromatis et al.](https://arxiv.org/pdf/2112.05785.pdf) |
| TMA | 2021 | 0.784 | 0.943 | [Liu et al.](https://arxiv.org/pdf/2302.12529.pdf) |
| ChatGPT (with TKG) | 2024 | 0.754 | 0.852 | [Gao et al.](https://arxiv.org/pdf/2402.16568) |
| EntityQR | 2021 | 0.745 | 0.944 | [Mavromatis et al.](https://arxiv.org/pdf/2112.05785.pdf) |
| CronKGQA | 2021 | 0.647 | 0.884 | [Mavromatis et al.](https://arxiv.org/pdf/2112.05785.pdf) |
| EaE | 2021 | 0.288 | 0.678 | [Mavromatis et al.](https://arxiv.org/pdf/2112.05785.pdf) |
| EmbedKGQA | 2021 | 0.288 | 0.672 | [Mavromatis et al.](https://arxiv.org/pdf/2112.05785.pdf) |
| T-EaE-add | 2021 | 0.278 | 0.663 | [Saxena et al.](https://arxiv.org/pdf/2106.01515.pdf) |
| BERT | 2021 | 0.243 | 0.620 | [Mavromatis et al.](https://arxiv.org/pdf/2112.05785.pdf) |
| RoBERTa | 2021 | 0.225 | 0.585 | [Mavromatis et al.](https://arxiv.org/pdf/2112.05785.pdf) |
| ChatGPT (without TKG) | 2024 | 0.151 | 0.308 | [Gao et al.](https://arxiv.org/pdf/2402.16568) |
| BERT | 2021 | 0.071 | 0.213 | [Saxena et al.](https://arxiv.org/pdf/2106.01515.pdf) |
| RoBERTa | 2021 | 0.070 | 0.202 | [Saxena et al.](https://arxiv.org/pdf/2106.01515.pdf) |
| KnowBERT | 2021 | 0.070 | 0.201 | [Saxena et al.](https://arxiv.org/pdf/2106.01515.pdf) |
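The Hits@1 and Hits@10 columns above use the standard ranking metric: the fraction of questions whose gold answer appears among the model's top-k ranked predictions. A minimal sketch of the computation (variable names and the toy data are illustrative):

```python
# Hits@k: fraction of questions whose gold answer is among the
# model's top-k ranked predictions. Toy data below is illustrative.

def hits_at_k(ranked_predictions, gold_answers, k):
    """ranked_predictions: one ranked list of candidates per question.
    gold_answers: the correct answer for each question."""
    hits = sum(
        1 for preds, gold in zip(ranked_predictions, gold_answers)
        if gold in preds[:k]
    )
    return hits / len(gold_answers)

preds = [["Obama", "Bush"], ["Paris", "Rome"], ["1990", "1991"]]
gold = ["Obama", "Rome", "1990"]
```

By construction Hits@1 ≤ Hits@10 for any model, which the table reflects in every row.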
