Can We Edit Multimodal Large Language Models?
Siyuan Cheng and 6 other authors
Abstract: In this paper, we focus on editing Multimodal Large Language Models (MLLMs). Compared to editing single-modal LLMs, multimodal model editing is more challenging and demands greater scrutiny and care in the editing process. To facilitate research in this area, we construct a new benchmark, dubbed MMEdit, for editing multimodal LLMs and establish a suite of innovative metrics for evaluation. We conduct comprehensive experiments involving various model editing baselines and analyze the impact of editing different components of multimodal LLMs. Empirically, we observe that previous baselines can edit multimodal LLMs to some extent, but the results remain barely satisfactory, indicating the potential difficulty of this task. We hope our work provides the NLP community with useful insights. Code and dataset are available at this https URL.
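The abstract mentions a suite of evaluation metrics for multimodal editing but does not spell them out here. As a rough illustration, the sketch below scores an edited model with two checks commonly used in the model-editing literature: whether the edit produced the desired answer on the edited (image, prompt) pair, and whether behavior on unrelated probes is left unchanged. The `EditSample` fields, the `generate` callable, and exact-match scoring are all hypothetical placeholders, not the MMEdit API or its official metric definitions.

```python
# Minimal sketch of reliability/locality-style checks for a multimodal edit.
# All names here (EditSample, generate, field names) are illustrative assumptions,
# not the paper's actual interface.
from dataclasses import dataclass
from typing import Callable, List, Dict


@dataclass
class EditSample:
    image_path: str        # visual input associated with the edit
    prompt: str            # textual question targeted by the edit
    target: str            # desired post-edit answer
    unrelated_prompt: str  # probe the edit should NOT affect
    unrelated_answer: str  # the model's pre-edit answer to that probe


def evaluate_edit(generate: Callable[[str, str], str],
                  samples: List[EditSample]) -> Dict[str, float]:
    """Score an edited model: generate(image_path, prompt) -> answer string.

    Uses exact-match scoring purely for illustration.
    """
    reliability_hits = 0
    locality_hits = 0
    for s in samples:
        # Reliability-style check: does the edited model now give the target answer?
        if generate(s.image_path, s.prompt).strip() == s.target:
            reliability_hits += 1
        # Locality-style check: is the answer to an unrelated probe unchanged?
        if generate(s.image_path, s.unrelated_prompt).strip() == s.unrelated_answer:
            locality_hits += 1
    n = len(samples)
    return {"reliability": reliability_hits / n, "locality": locality_hits / n}
```

In practice one would swap the exact-match comparison for whatever scoring rule the benchmark prescribes (e.g., token-level accuracy), but the reliability-versus-locality split captures the trade-off the abstract alludes to: an edit should take effect on its target without disturbing unrelated behavior.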
Submission history
From: Ningyu Zhang
[v1] Thu, 12 Oct 2023 16:32:44 UTC (2,295 KB)
[v2] Fri, 13 Oct 2023 01:12:25 UTC (2,295 KB)
[v3] Sun, 24 Dec 2023 12:59:17 UTC (2,176 KB)