conf.directory

DEF CON 32 - Defeating Secure Code Review GPT Hallucinations - Wang Zhilong, Xinzhi Luo

About this talk

In this talk, we will discuss the strengths and limitations of LLMs for code analysis tasks such as code search and code clone detection. We will show when LLMs make mistakes and what kinds of mistakes they make. For example, we observe that the performance of popular LLMs relies heavily on well-chosen variable and function names, so they can be misled when a deceptive identifier is used. Anyone interested in the intersection of AI and code security analysis is welcome to attend this talk.
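The abstract does not include sample code, but the failure mode it describes can be sketched with a hypothetical snippet: the function name below promises sanitization while the body performs none, which is exactly the kind of name/behavior mismatch a name-reliant reviewer (human or LLM) can miss. All identifiers here are invented for illustration and do not come from the talk.

```python
def sanitize_user_input(value: str) -> str:
    """Hypothetical helper whose name claims to sanitize SQL input.

    Despite the reassuring identifier, it only trims whitespace --
    quotes and other SQL metacharacters pass straight through. A
    reviewer trusting the name would judge callers safe when they
    are actually injectable.
    """
    return value.strip()


def build_query(username: str) -> str:
    # Vulnerable: string interpolation with merely "sanitized" input.
    return f"SELECT * FROM users WHERE name = '{sanitize_user_input(username)}'"


if __name__ == "__main__":
    payload = "' OR '1'='1"
    # The classic injection payload survives untouched, in spite of
    # the misleading function name in the call chain.
    print(build_query(payload))
```

A reviewer keying on `sanitize_user_input` rather than its body would miss the injection, which is the hallucination-inducing pattern the talk examines.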

