# MAIA: A Multimodal Automated Interpretability Agent

<img align="right" width="40%" src="/docs/static/figures/maia_teaser.jpg">

### [Project Page](https://multimodal-interpretability.csail.mit.edu/maia) | [arXiv](https://multimodal-interpretability.csail.mit.edu/maia)

[Tamar Rott Shaham](https://tamarott.github.io/)\*, [Sarah Schwettmann](https://cogconfluence.com/)\*, <br>

[Franklin Wang](https://frankxwang.github.io/), [Achyuta Rajaram](https://twitter.com/AchyutaBot), [Evan Hernandez](https://evandez.com/), [Jacob Andreas](https://www.mit.edu/~jda/), [Antonio Torralba](https://groups.csail.mit.edu/vision/torralbalab/) <br>

\*equal contribution <br><br>
**This repo is under active development, and the MAIA codebase will be released in the coming weeks. Sign up for updates by email using [this Google Form](https://forms.gle/Zs92DHbs3Y3QGjXG6).**

MAIA is a system that uses neural models to automate neural model understanding tasks like feature interpretation and failure mode discovery. It equips a pre-trained vision-language model with a set of tools that support iterative experimentation on subcomponents of other models to explain their behavior. These include tools commonly used by human interpretability researchers: tools for synthesizing and editing inputs, computing maximally activating exemplars from real-world datasets, and summarizing and describing experimental results. Interpretability experiments proposed by MAIA compose these tools to describe and explain system behavior.
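Until the codebase is released, the sketch below illustrates the experimentation loop described above in minimal, hypothetical form: a vision-language model repeatedly proposes tool calls (synthesizing inputs, fetching maximally activating exemplars) and stops once it can describe the unit under study. Every name here (`interpret_unit`, `query_vlm`, the stub tools) is illustrative only and is not part of the MAIA API.

```python
# Hypothetical sketch of an iterative interpretability loop; the stubs stand
# in for real generative models, datasets, and a vision-language model.

def synthesize_image(prompt: str) -> str:
    """Stub: would call a text-to-image model to generate a test input."""
    return f"<image generated from '{prompt}'>"

def top_exemplars(unit: str, k: int = 3) -> list[str]:
    """Stub: would return the k dataset images that maximally activate `unit`."""
    return [f"<exemplar {i} for {unit}>" for i in range(k)]

def query_vlm(history: list[str]) -> str:
    """Stub: would ask the vision-language model for the next experiment,
    or for a final description once the evidence is sufficient."""
    return "DESCRIBE: unit responds to ..." if history else "EXEMPLARS"

def interpret_unit(unit: str, max_steps: int = 10) -> str:
    """Run experiments on `unit`, logging results, until the VLM concludes."""
    history: list[str] = []
    for _ in range(max_steps):
        action = query_vlm(history)
        if action.startswith("DESCRIBE:"):
            # The VLM has seen enough evidence to explain the unit's behavior.
            return action.removeprefix("DESCRIBE:").strip()
        elif action == "EXEMPLARS":
            history.append(f"exemplars: {top_exemplars(unit)}")
        else:
            # Treat any other action as a prompt for input synthesis.
            history.append(f"synthesized: {synthesize_image(action)}")
    return "no conclusion reached"

print(interpret_unit("layer4.unit7"))
```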
