Organizations

@CampusSiteManagementSystem

Pinned

  1. OpenGVLab/ChartAst (Public)

    [ACL 2024] ChartAssistant is a chart-based vision-language model for universal chart comprehension and reasoning.

    Python · 107 stars · 8 forks

  2. OpenGVLab/MMIU (Public)

    MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models

    Python · 55 stars · 2 forks

  3. OpenGVLab/Multitask-Model-Selector (Public)

    [NeurIPS 2023] Implementation of "Foundation Model is Efficient Multimodal Multitask Model Selector"

    Python · 35 stars · 1 fork

  4. OpenGVLab/Multi-Modality-Arena (Public)

    Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing images as inputs. Supports MiniGPT-4, LLaMA-Adapter V2, LLaVA, B…

    Python · 470 stars · 35 forks

  5. OpenGVLab/PhyGenBench (Public)

    The code and data of the paper "Towards World Simulator: Crafting Physical Commonsense-Based Benchmark for Video Generation"

    Python · 74 stars · 1 fork

  6. MMT-Bench (Public)

    Forked from OpenGVLab/MMT-Bench

    [ICML2024] MMT-Bench: A Comprehensive Multimodal Benchmark for Evaluating Large Vision-Language Models Towards Multitask AGI

    Python