
When Large Multimodal Models Confront Evolving Knowledge: Challenges and Explorations

Kailin Jiang, Yuntao Du, Yukai Ding, Yuchen Ren, Ning Jiang, Zhi Gao, Zilong Zheng, Lei Liu, Bin Li, and Qing Li

ICLR · 2026 · arXiv: arxiv.org/abs/2505.24449


Abstract

Large Multimodal Models (LMMs) store vast amounts of pretrained knowledge, but they struggle to stay aligned with real-world updates and to acquire evolving knowledge without degrading their existing capabilities. Moreover, most current work focuses on injecting static textual knowledge and neglects dynamic multimodal evolving knowledge, leaving the potential of LMMs for multimodal knowledge injection an open question. To address this, we first propose a pipeline to construct MMEVOKE, a benchmark for evaluating the ability of LMMs to absorb multimodal evolving knowledge; MMEVOKE contains 9,422 samples spanning 159 subtypes. Then, through extensive experiments on MMEVOKE with both knowledge injection tests and general capability tests, we reveal challenges of existing knowledge injection methods, including poor injection performance and capability degradation. Finally, to tackle these challenges, we introduce knowledge augmentation and knowledge retention methods, finding that knowledge-aware augmentation strengthens injection performance, while Data Replay and MoE-based methods effectively mitigate capability degradation.
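
The Data Replay finding above can be illustrated with a minimal sketch: during knowledge-injection fine-tuning, a fraction of each training stream is drawn from held-out general-capability data so the model keeps seeing its original distribution. The code below is an illustrative assumption, not the paper's implementation; it presumes a HuggingFace-style PyTorch model whose forward pass returns a loss, and the dataset names (new_knowledge, replay_pool) and the replay ratio are hypothetical.

    # Illustrative sketch of data replay during knowledge-injection fine-tuning.
    import random
    import torch
    from torch.utils.data import DataLoader, Dataset

    class MixedReplayDataset(Dataset):
        """Mixes new-knowledge samples with replayed general samples at a fixed ratio."""

        def __init__(self, new_knowledge, replay_pool, replay_ratio=0.2):
            self.new_knowledge = new_knowledge
            self.replay_pool = replay_pool
            self.replay_ratio = replay_ratio

        def __len__(self):
            return len(self.new_knowledge)

        def __getitem__(self, idx):
            # With probability replay_ratio, draw a replayed general sample instead
            # of an evolving-knowledge sample, so the original data distribution
            # is still represented in every training epoch.
            if random.random() < self.replay_ratio:
                return random.choice(self.replay_pool)
            return self.new_knowledge[idx]

    def finetune_with_replay(model, new_knowledge, replay_pool, epochs=1, lr=1e-5):
        dataset = MixedReplayDataset(new_knowledge, replay_pool)
        loader = DataLoader(dataset, batch_size=8, shuffle=True)
        optimizer = torch.optim.AdamW(model.parameters(), lr=lr)

        model.train()
        for _ in range(epochs):
            for batch in loader:
                loss = model(**batch).loss  # assumes a HF-style forward returning .loss
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
        return model

The replay ratio trades off injection strength against retention: a larger share of replayed samples preserves more general capability but slows the uptake of new knowledge.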




Citation

@inproceedings{jiang2026when,
    title={When Large Multimodal Models Confront Evolving Knowledge: Challenges and Explorations},
    author={Kailin Jiang and Yuntao Du and Yukai Ding and Yuchen Ren and Ning Jiang and Zhi Gao and Zilong Zheng and Lei Liu and Bin Li and Qing Li},
    booktitle={The Fourteenth International Conference on Learning Representations},
    year={2026},
    url={https://openreview.net/forum?id=iaPEM00wEs}
}

Related Publications

  • Qi et al., In-Context Editing: Learning Knowledge from Self-Induced Distributions, in ICLR, 2025.
  • Du et al., MMKE-Bench: A Multimodal Editing Benchmark for Diverse Visual Knowledge, in ICLR, 2025.
  • Wang et al., VideoLLaMB: Long-context Video Understanding with Recurrent Memory Bridges, in ICCV, 2025.
  • Wang et al., OmniMMI: A Comprehensive Multi-modal Interaction Benchmark in Streaming Video Contexts, in CVPR, 2025.