---
license: apache-2.0
language:
- en
base_model:
- Lin-Chen/open-llava-next-llama3-8b
tags:
- remote-sensing
datasets:
- AdaptLLM/remote-sensing-visual-instructions
---

# Adapting Multimodal Large Language Models to Domains via Post-Training

This repo contains the **remote sensing MLLM developed from LLaVA-NeXT-Llama3-8B** in our paper: [On Domain-Specific Post-Training for Multimodal Large Language Models](https://huggingface.co/papers/2411.19930).

The main project page is: [Adapt-MLLM-to-Domains](https://huggingface.co/AdaptLLM/Adapt-MLLM-to-Domains)

## 1. To Chat with AdaMLLM

```python
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration
import torch
from PIL import Image
import requests

# Define your input image and instruction here:
## image
url = "https://cdn-uploads.huggingface.co/production/uploads/650801ced5578ef7e20b33d4/bRu85CWwP9129bSCRzos2.png"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

## instruction
instruction = "What's in the image?"

model_path = "AdaptLLM/remote-sensing-LLaVA-NeXT-Llama3-8B"

# =========================== You do NOT need to modify the following ===========================
# Load the processor
processor = LlavaNextProcessor.from_pretrained(model_path)

# Define the image token
image_token = "<|reserved_special_token_4|>"

# Format the prompt
prompt = (
    f"<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
    f"You are a helpful language and vision assistant. "
    f"You are able to understand the visual content that the user provides, "
    f"and assist the user with a variety of tasks using natural language."
    f"<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
    f"{image_token}\n{instruction}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
)

# Load the model
model = LlavaNextForConditionalGeneration.from_pretrained(model_path, torch_dtype=torch.float16, device_map="auto")

# Prepare inputs and generate the output
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
answer_start = int(inputs["input_ids"].shape[-1])
output = model.generate(**inputs, max_new_tokens=512)

# Decode the prediction
pred = processor.decode(output[0][answer_start:], skip_special_tokens=True)
print(pred)
```

## 2. To Evaluate Any MLLM on Domain-Specific Benchmarks

Refer to the [remote-sensing-VQA-benchmark](https://huggingface.co/datasets/AdaptLLM/remote-sensing-VQA-benchmark) to reproduce our results and to evaluate many other MLLMs on domain-specific benchmarks.

## 3. To Reproduce this Domain-Adapted MLLM

See the [Post-Train Guide](https://github.com/bigai-ai/QA-Synthesizer/blob/main/docs/Post_Train.md) to adapt MLLMs to domains.

## Citation

If you find our work helpful, please cite us.

[AdaMLLM](https://huggingface.co/papers/2411.19930)

```bibtex
@article{adamllm,
  title={On Domain-Specific Post-Training for Multimodal Large Language Models},
  author={Cheng, Daixuan and Huang, Shaohan and Zhu, Ziyu and Zhang, Xintong and Zhao, Wayne Xin and Luan, Zhongzhi and Dai, Bo and Zhang, Zhenliang},
  journal={arXiv preprint arXiv:2411.19930},
  year={2024}
}
```

[Adapt LLM to Domains](https://huggingface.co/papers/2309.09530) (ICLR 2024)

```bibtex
@inproceedings{cheng2024adapting,
  title={Adapting Large Language Models via Reading Comprehension},
  author={Daixuan Cheng and Shaohan Huang and Furu Wei},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024},
  url={https://openreview.net/forum?id=y886UXPEZ0}
}
```
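
## Optional: Low-Memory (4-bit) Loading

As a follow-up to the chat example in Section 1: if the `float16` weights do not fit on your GPU, the model can instead be loaded with 4-bit quantization via `transformers`' `BitsAndBytesConfig`. This is a minimal sketch, not part of the original instructions above; it assumes the `bitsandbytes` package is installed and a CUDA GPU is available.

```python
from transformers import BitsAndBytesConfig, LlavaNextForConditionalGeneration
import torch

model_path = "AdaptLLM/remote-sensing-LLaVA-NeXT-Llama3-8B"

# Assumption: `bitsandbytes` is installed (pip install bitsandbytes) and a CUDA GPU is available.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights in 4-bit precision
    bnb_4bit_quant_type="nf4",              # NF4 quantization
    bnb_4bit_compute_dtype=torch.float16,   # run matmuls in float16
)

# Drop-in replacement for the `from_pretrained` call in Section 1;
# the processor, prompt formatting, and generation code stay unchanged.
model = LlavaNextForConditionalGeneration.from_pretrained(
    model_path,
    quantization_config=bnb_config,
    device_map="auto",
)
```

Expect some quality drop relative to the `float16` setup shown in Section 1; for reproducing the reported results, use the original loading code.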