Exploiting Latent Space Discontinuities for Building Universal LLM Jailbreaks and Data Extraction Attacks

Kayuã Oleques Paim, R. Mansilha, 2 Authors, Weverton Cordeiro

2025 · DOI: 10.5753/sbseg.2025.11448

TLDR

This work proposes a novel approach to crafting universal jailbreaks and data extraction attacks by exploiting latent space discontinuities, an architectural vulnerability related to the sparsity of training data that can consistently and profoundly compromise model behavior.

Abstract

The rapid proliferation of Large Language Models (LLMs) has raised significant concerns about their security against adversarial attacks. In this work, we propose a novel approach to crafting universal jailbreaks and data extraction attacks by exploiting latent space discontinuities, an architectural vulnerability related to the sparsity of training data. Unlike previous methods, our technique generalizes across models and interfaces, proving highly effective against seven state-of-the-art LLMs and one image generation model. Initial results indicate that exploiting these discontinuities can consistently and profoundly compromise model behavior, even in the presence of layered defenses. The findings suggest that this strategy has substantial potential as a systemic attack vector. Disclaimer: This paper contains examples of harmful and offensive language. Reader discretion is advised. Additional supporting materials may be provided upon formal request and are subject to the signing of a liability and ethical use agreement.
