VisionLLaMA: A Unified LLaMA Interface for Vision Tasks
Xiangxiang Chu, Jianlin Su, Bo Zhang, Chunhua Shen
TLDR
This paper unveils a LLaMA-like vision transformer in plain and pyramid forms, termed VisionLLaMA, which is tailored for vision tasks, and argues that VisionLLaMA can serve as a strong new baseline model for vision generation and understanding.
Abstract
Large language models are built on top of a transformer-based architecture to process textual inputs. For example, the LLaMA family of models stands out among many open-source implementations. Can the same transformer be used to process 2D images? In this paper, we answer this question by unveiling a LLaMA-like vision transformer in plain and pyramid forms, termed VisionLLaMA, which is tailored for this purpose. VisionLLaMA is a unified and generic modeling framework for solving most vision tasks. We extensively evaluate its effectiveness using typical pre-training paradigms on a wide range of downstream tasks in image perception and, especially, image generation. In many cases, VisionLLaMA has exhibited substantial gains over the previous state-of-the-art vision transformers. We believe that VisionLLaMA can serve as a strong new baseline model for vision generation and understanding. Our code will be released at https://github.com/Meituan-AutoML/VisionLLaMA.
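A core question the abstract raises is how LLaMA's 1D text components transfer to 2D image patches. One such component is the rotary position embedding (RoPE). The sketch below is an illustrative assumption, not the paper's exact formulation: it extends 1D RoPE to a patch grid by splitting the channel dimension in half and rotating each half by the row and column coordinate respectively, a common 2D-RoPE construction.

```python
import numpy as np

def rope_1d(x, pos, base=10000.0):
    """Apply 1D rotary position embedding along the last dim of x.

    x: array of shape (..., d) with d even; pos: position index array
    broadcastable against the leading dims of x.
    """
    d = x.shape[-1]
    half = d // 2
    freqs = base ** (-np.arange(half) / half)        # (half,) decaying frequencies
    angles = np.asarray(pos)[..., None] * freqs      # (..., half) rotation angles
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[..., :half], x[..., half:]
    # Rotate each (x1[i], x2[i]) pair by its angle; this preserves norms.
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)

def rope_2d(x, row, col):
    """Hypothetical 2D extension: first half of the channels encodes the
    row coordinate, the second half encodes the column coordinate."""
    d = x.shape[-1]
    return np.concatenate(
        [rope_1d(x[..., : d // 2], row), rope_1d(x[..., d // 2 :], col)],
        axis=-1,
    )

# A 4x4 grid of 64-dim patch tokens, as a ViT-style tokenizer would produce.
tokens = np.random.randn(4, 4, 64)
rows, cols = np.meshgrid(np.arange(4), np.arange(4), indexing="ij")
rotated = rope_2d(tokens, rows, cols)
```

Because RoPE is a pure rotation, token norms are unchanged and the patch at grid position (0, 0) is rotated by angle zero, i.e. left untouched, which makes the scheme easy to sanity-check.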
