AdaLLaVA

Learning to Inference Adaptively for Multimodal
Large Language Models

arXiv 2025
*Equal Contribution
1 University of Wisconsin-Madison 2 Purdue University 3 The University of Hong Kong

Abstract

Multimodal Large Language Models (MLLMs) have shown impressive capabilities in reasoning, yet come with substantial computational cost, limiting their deployment in resource-constrained settings. Despite recent efforts on improving the efficiency of MLLMs, prior solutions fall short in responding to varying runtime conditions, in particular changing resource availability (e.g., contention due to the execution of other programs on the device). To bridge this gap, we introduce AdaLLaVA, an adaptive inference framework that learns to dynamically reconfigure operations in an MLLM during inference, accounting for the input data and a latency budget. We conduct extensive experiments across benchmarks involving question-answering, reasoning, and hallucination. Our results show that AdaLLaVA effectively adheres to input latency budget, achieving varying accuracy and latency tradeoffs at runtime. Further, we demonstrate that AdaLLaVA adapts to both input latency and content, can be integrated with token selection for enhanced efficiency, and generalizes across MLLMs.

Our key contributions are threefold.

  1. We present AdaLLaVA, a novel adaptive inference framework for MLLMs. Our method is among the first to enable dynamic execution of MLLMs based on a latency budget and the input content at inference time.
  2. Our key technical innovation lies in (1) the design of a learning-based, latency-aware scheduler, which reconfigures a base MLLM during inference; and (2) a probabilistic modeling approach, which incorporates hard latency constraints during MLLM training (a brief code sketch of this idea follows the list).
  3. Through extensive experiments, we demonstrate that (1) AdaLLaVA can adapt to a range of latency requirements while preserving the performance of the base model; and (2) AdaLLaVA can be integrated with token selection techniques to further enhance efficiency.
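As a loose illustration of the probabilistic modeling mentioned in contribution 2, the sketch below scores a small set of candidate execution plans, masks out any plan whose estimated cost exceeds the budget, and samples from the remaining ones. The setup (a fixed plan set with known costs, the `SchedulerHead` module, the masking step) is an assumption for illustration, not the paper's exact formulation.

    # Illustrative sketch only (not the released implementation): a scheduler that
    # never samples an execution plan whose estimated latency exceeds the budget.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SchedulerHead(nn.Module):
        """Scores a fixed set of candidate execution plans from the latency-token embedding."""
        def __init__(self, hidden_dim: int, num_plans: int):
            super().__init__()
            self.proj = nn.Linear(hidden_dim, num_plans)

        def forward(self, latency_embedding, plan_costs, budget):
            logits = self.proj(latency_embedding)                  # (batch, num_plans)
            feasible = plan_costs[None, :] <= budget[:, None]      # hard latency constraint
            logits = logits.masked_fill(~feasible, float("-inf"))  # infeasible plans get zero probability
            return F.softmax(logits, dim=-1)

    # Toy usage: 4 candidate plans with relative costs, budgets as fractions of the full cost.
    scheduler = SchedulerHead(hidden_dim=4096, num_plans=4)
    plan_costs = torch.tensor([0.50, 0.65, 0.80, 1.00])
    budget = torch.tensor([0.70, 1.00])                            # per-example latency budgets
    latency_embedding = torch.randn(2, 4096)
    probs = scheduler(latency_embedding, plan_costs, budget)
    plan_idx = torch.multinomial(probs, 1)                         # sample a feasible plan; train on its loss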

Inference Adaptively based on Latency Budget and Content

During inference, the model is given:

  1. Content. An image-query pair.
  2. Latency constraint. The total latency budget the model must satisfy, expressed as FLOPs, wall-clock time, etc.


AdaLLaVA: Adaptive Multimodal Large Language Models

Overview of AdaLLaVA:

  1. (a) Model architecture: Our latency encoder embeds an input latency budget into a latency token, which is further processed by the early part of the LLM. The resulting embedding is then fed into the scheduler, which outputs an execution plan that controls individual operations in the remaining part of the LLM. Our latency encoder and scheduler are jointly learned with the MLLM.
  2. (b) AdaLLaVA-L (layer level): This design attaches binary switches to entire Transformer blocks. When a switch is off, the corresponding block is bypassed through its residual connection, becoming an identity mapping. The execution plan thus determines whether each layer is computed or bypassed.
  3. (c) AdaLLaVA-H (head/neuron level): This design introduces binary switches within Transformer blocks, targeting individual attention heads in attention modules and specific neurons in MLP layers. When a switch is off, its computation is skipped and its contribution is removed. In the MLP, the switches function similarly to dropout, selectively disabling neuron activations (both designs are sketched in code after this list).
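To make (a) and (b) concrete, here is a minimal PyTorch sketch of latency-conditioned, layer-level switching in the spirit of AdaLLaVA-L. The module names, the way the latency token is produced, and the thresholded scheduler are assumptions for illustration; the released implementation may differ.

    # Assumed structure, for illustration: a latency encoder turns the budget into a token,
    # the early LLM layers process it with the input, and a scheduler maps the resulting
    # embedding to binary per-layer switches for the remaining blocks.
    import torch
    import torch.nn as nn

    class LatencyEncoder(nn.Module):
        """Embeds a scalar latency budget (fraction of full cost) into a latency token."""
        def __init__(self, hidden_dim):
            super().__init__()
            self.mlp = nn.Sequential(nn.Linear(1, hidden_dim), nn.GELU(), nn.Linear(hidden_dim, hidden_dim))

        def forward(self, budget):                      # (batch,) -> (batch, 1, hidden)
            return self.mlp(budget[:, None])[:, None, :]

    class LayerScheduler(nn.Module):
        """Maps the processed latency-token embedding to on/off switches for the late blocks."""
        def __init__(self, hidden_dim, num_late_layers):
            super().__init__()
            self.proj = nn.Linear(hidden_dim, num_late_layers)

        def forward(self, latency_embedding):
            # Hard 0/1 decisions at inference; training would use a relaxed / probabilistic
            # version so that the chosen switches respect the latency budget.
            return (torch.sigmoid(self.proj(latency_embedding)) > 0.5).float()

    def run_late_layers(hidden, late_blocks, switches):
        """Layer-level execution: a switched-off block reduces to its residual identity."""
        for i, block in enumerate(late_blocks):
            gate = switches[:, i, None, None]           # (batch, 1, 1)
            hidden = hidden + gate * block(hidden)      # gate = 0 -> block bypassed (identity)
        return hidden

    # Toy usage with stand-in blocks in place of real Transformer layers.
    hidden_dim, num_late = 64, 4
    late_blocks = nn.ModuleList(
        [nn.Sequential(nn.LayerNorm(hidden_dim), nn.Linear(hidden_dim, hidden_dim)) for _ in range(num_late)]
    )
    tokens = torch.randn(2, 10, hidden_dim)             # hidden states entering the late layers
    budget = torch.tensor([0.6, 1.0])
    latency_token = LatencyEncoder(hidden_dim)(budget)  # would be appended to the input sequence
    switches = LayerScheduler(hidden_dim, num_late)(latency_token[:, 0, :])
    out = run_late_layers(tokens, late_blocks, switches)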
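And a correspondingly hedged sketch of the finer-grained switches in (c): binary masks over attention heads and MLP neurons. For simplicity the masks are applied after the computation, which only illustrates the functional effect; an actual implementation would skip the switched-off heads and neurons to save compute.

    # Illustration of head/neuron-level switches in the spirit of AdaLLaVA-H (not the released code).
    import torch

    def mask_attention_heads(per_head_output, head_switches):
        """per_head_output: (batch, heads, seq, head_dim); head_switches: (batch, heads) in {0, 1}.
        A switched-off head contributes nothing to the attention output."""
        return per_head_output * head_switches[:, :, None, None]

    def mask_mlp_neurons(mlp_hidden, neuron_switches):
        """mlp_hidden: (batch, seq, ffn_dim); neuron_switches: (batch, ffn_dim) in {0, 1}.
        Acts like a structured, scheduler-driven dropout over neuron activations."""
        return mlp_hidden * neuron_switches[:, None, :]

    # Toy usage with random switches standing in for an execution plan.
    heads_out = torch.randn(2, 8, 10, 16)
    head_switches = (torch.rand(2, 8) > 0.25).float()
    masked_heads = mask_attention_heads(heads_out, head_switches)

    ffn_hidden = torch.randn(2, 10, 256)
    neuron_switches = (torch.rand(2, 256) > 0.25).float()
    masked_ffn = mask_mlp_neurons(ffn_hidden, neuron_switches)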

Performance

AdaLLaVA demonstrates competitive performance with notable efficiency improvements across all benchmarks while adhering to the specified latency budgets. AdaLLaVA also complements existing token selection approaches.

Latency Adaptivity

Adaptivity to the input latency budget. AdaLLaVA completes inference under varying latency requirements using a single model.

AdaLLaVA can empower a base MLLM with a static compute footprint (i.e., LLaVA-1.5, PruMerge+, or FastV, shown as individual dots) to adapt to varying accuracy-latency tradeoffs (the corresponding curves). As the latency budget varies from 50% to 100%, AdaLLaVA effectively trades compute for accuracy.

Content Adaptivity

The latency token behaves differently for different images. The key-query attention scores between the latency token and the input visual tokens also differ across text questions: our model dynamically adjusts its computational focus based on the query type.

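For readers who want to reproduce this kind of analysis, the sketch below reads off the attention from the latency token (as the query) to the visual tokens in one layer, given that layer's attention weights. The sequence layout and token positions are placeholders; they depend on the actual prompt format.

    # Sketch of the analysis: how strongly does the latency token attend to the visual tokens?
    import torch

    def latency_to_visual_attention(attn_weights, latency_pos, visual_positions):
        """attn_weights: (batch, heads, seq, seq), softmax-normalized over the last dim.
        Returns per-visual-token attention scores averaged over heads: (batch, num_visual_tokens)."""
        scores = attn_weights[:, :, latency_pos, visual_positions]  # latency token as the query
        return scores.mean(dim=1)

    # Toy usage; positions are illustrative (e.g., 576 visual tokens followed later by the latency token).
    attn = torch.softmax(torch.randn(1, 32, 700, 700), dim=-1)
    scores = latency_to_visual_attention(attn, latency_pos=600, visual_positions=slice(20, 596))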

Generalization of AdaLLaVA

Generalization to other MLLMs. AdaLLaVA can generalize to other MLLMs beyond LLaVA.

AdaLLaVA empowers Mipha-3B, a lightweight MLLM built on Phi-2 (2.7B), and achieves results similar to those with LLaVA-1.5.

Interesting Findings

Visualization of the latency token across layers: evolution of the attention scores between the latency token and the visual tokens from layers 12 to 16.

Query: Who is the main character?

The latency token progressively gathers key information from the input visual tokens for scheduling.

BibTeX


        @article{zhuoyan2025adallava,
          title={Learning to Inference Adaptively for Multimodal Large Language Models},
          author={Xu, Zhuoyan and Nguyen, Khoi Duc and Mukherjee, Preeti and Bagchi, Saurabh and Chaterji, Somali and Liang, Yingyu and Li, Yin},
          journal={arXiv preprint arXiv:2503.10905},
          year={2025}
        }
  

Acknowledgement

This website is adapted from Nerfies, licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. We thank the LLaMA team for giving us access to their models, and the open-source projects including Alpaca and Vicuna.

Usage and License Notices: The data, code, and checkpoints are intended and licensed for research use only. They are also restricted to uses that follow the license agreements of CLIP, LLaMA, Vicuna, and GPT-4. The dataset is CC BY-NC 4.0 (allowing only non-commercial use), and models trained using the dataset should not be used outside of research purposes.

Related Links: [CLIP] [LLaVA] [Instruction Tuning with GPT-4]