Category-level Text-to-Image Retrieval Improved: Bridging the Domain Gap with Diffusion Models and Vision Encoders

Faizan Farooq Khan, Vladan Stojnić, +2 authors, Giorgos Tolias

2025 · arXiv: 2509.00177

TLDR

This work transforms the text query into a visual query using a generative diffusion model and estimates image-to-image similarity with a vision model. It also introduces an aggregation network that combines multiple generated images into a single vector representation and fuses similarity scores across both query modalities.

Abstract

This work explores text-to-image retrieval for queries that specify or describe a semantic category. While vision-and-language models (VLMs) like CLIP offer a straightforward open-vocabulary solution, they map text and images to distant regions in the representation space, limiting retrieval performance. To bridge this modality gap, we propose a two-step approach. First, we transform the text query into a visual query using a generative diffusion model. Then, we estimate image-to-image similarity with a vision model. Additionally, we introduce an aggregation network that combines multiple generated images into a single vector representation and fuses similarity scores across both query modalities. Our approach leverages advancements in vision encoders, VLMs, and text-to-image generation models. Extensive evaluations show that it consistently outperforms retrieval methods relying solely on text queries. Source code is available at: https://github.com/faixan-khan/cletir
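The two-step pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the encoder and generator calls are stubbed with random vectors, mean-pooling stands in for the learned aggregation network, and the fusion weight `alpha` is an assumed hyperparameter.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 512  # embedding dimension (assumption)

def normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Stand-ins for the real models (a VLM text encoder such as CLIP,
# a text-to-image diffusion model, and a strong vision encoder).
def embed_text(query):
    return normalize(rng.standard_normal(DIM))

def generate_images(query, k=4):
    # A diffusion model would return k images; here we return placeholders.
    return [f"{query}_gen_{i}" for i in range(k)]

def embed_image(image):
    return normalize(rng.standard_normal(DIM))

def retrieve(query, gallery_embs, alpha=0.5, k=4):
    """Rank gallery images by fusing text-to-image and image-to-image similarity."""
    t = embed_text(query)
    # Step 1: transform the text query into visual queries via generation.
    gen_embs = np.stack([embed_image(im) for im in generate_images(query, k)])
    # Placeholder for the learned aggregation network: mean-pool, then re-normalize.
    v = normalize(gen_embs.mean(axis=0))
    # Step 2: score the gallery with both query modalities and fuse the scores.
    scores = alpha * (gallery_embs @ t) + (1 - alpha) * (gallery_embs @ v)
    return np.argsort(-scores)  # indices sorted by descending similarity

gallery = normalize(rng.standard_normal((100, DIM)))  # toy gallery of image embeddings
ranking = retrieve("a photo of a zebra", gallery)
```

With cosine-normalized embeddings, both similarity terms are dot products on the unit sphere, so a simple convex combination of the two score vectors is a reasonable fusion baseline.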
