Publications

(2024). MASTERKEY: Automated Jailbreaking of Large Language Model Chatbots. NDSS 2024.
(2024). A Comprehensive Study of Jailbreak Attack versus Defense for Large Language Models. arXiv 2024.
(2024). PANDORA: Jailbreak GPTs by Retrieval Augmented Generation Poisoning. AISCC 2024.
(2024). Digger: Detecting Copyright Content Mis-usage in Large Language Model Training. arXiv 2024.
(2023). NAUTILUS: Automated RESTful API Vulnerability Detection. USENIX Security 2023.
(2023). SoK: Rethinking Sensor Spoofing Attacks against Robotic Vehicles from a Systematic View. EuroS&P 2023.
(2023). Prompt Injection Attack against LLM-integrated Applications. arXiv 2023.
(2023). Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study. arXiv 2023.
(2023). Automatic Code Summarization via ChatGPT: How Far Are We? arXiv 2023.
(2023). The Threat of Offensive AI to Organizations. Computers & Security 2023.