MasterKey: Automated Jailbreak Across Multiple Large Language Model Chatbots
In this paper, we present Jailbreaker, a comprehensive framework that offers an in-depth understanding of jailbreak attacks and …
Gelei Deng, Yi Liu, Yuekang Li, Kailong Wang, Ying Zhang, Zefeng Li, Haoyu Wang, Tianwei Zhang, Yang Liu
PANDORA: Jailbreak GPTs by Retrieval Augmented Generation Poisoning
Large Language Models (LLMs) have gained immense popularity and are being increasingly applied in various domains. Consequently, …
Gelei Deng, Yi Liu, Kailong Wang, Yuekang Li, Tianwei Zhang, Yang Liu
PentestGPT: An LLM-empowered Automatic Penetration Testing Tool
Penetration testing, a crucial industrial practice for ensuring system security, has traditionally resisted automation due to the …
Gelei Deng, Yi Liu, Víctor Mayoral-Vilches, Peng Liu, Yuekang Li, Yuan Xu, Tianwei Zhang, Yang Liu, Martin Pinzger, Stefan Rass
PentestGPT
The first LLM-empowered automatic penetration testing tool, with 6k+ stars on GitHub and an active community.
Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study
Large Language Models (LLMs), like ChatGPT, have demonstrated vast potential but also introduce challenges related to content …
Yi Liu, Gelei Deng, Zhengzi Xu, Yuekang Li, Yaowen Zheng, Ying Zhang, Lida Zhao, Tianwei Zhang, Yang Liu
Prompt Injection attack against LLM-integrated Applications
Large Language Models (LLMs), renowned for their superior proficiency in language comprehension and generation, stimulate a vibrant …
Yi Liu, Gelei Deng, Yuekang Li, Kailong Wang, Tianwei Zhang, Yepang Liu, Haoyu Wang, Yan Zheng, Yang Liu