Prompt Injection Attack against LLM-integrated Applications
Jun 9, 2023ยท
,,,,,,,ยท
1 min read
Yi Liu
Gelei Deng
Yuekang Li
Kailong Wang
Zihao Wang
Xiaofeng Wang
Tianwei Zhang
Yang Liu
Haoyu Wang
Abstract
As Large Language Models become increasingly integrated into applications, they face new security threats. This work presents a comprehensive study of prompt injection attacks against LLM-integrated applications, demonstrating how malicious inputs can manipulate LLM behavior, exfiltrate data, and compromise application security.
Type
Publication
arXiv preprint arXiv:2306.05499
This work systematically investigates prompt injection vulnerabilities in LLM-integrated applications, revealing critical security risks and providing recommendations for secure LLM deployment.