Publications

(2024). MasterKey: Automated Jailbreak Across Multiple Large Language Model Chatbots. In NDSS 2024.

(2024). PANDORA: Jailbreak GPTs by Retrieval Augmented Generation Poisoning. In AISCC 2024.

(2023). ASTER: Automatic Speech Recognition System Accessibility Testing for Stutterers. In ASE 2023.

(2023). PentestGPT: An LLM-empowered Automatic Penetration Testing Tool. pre-print.

(2023). Prompt Injection Attack against LLM-integrated Applications. pre-print.

(2023). NAUTILUS: Automated RESTful API Vulnerability Detection. In USENIX Security 2023.

(2023). Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study. pre-print.

(2022). The Threat of Offensive AI to Organizations. In Computers & Security.

(2022). On the (In)Security of Secure ROS2. In ACM CCS 2022.