The Security Hole at the Heart of ChatGPT and Bing - Wired

Bring Sydney Back was created by Cristiano Giardina, an entrepreneur who has been experimenting with ways to make generative AI tools do unexpected things. The site puts Sydney inside Microsoft’s Edge browser and demonstrates how generative AI systems can be manipulated by external inputs. [...]
Giardina created the replica of Sydney using an indirect prompt-injection attack. This involved feeding the AI system data from an outside source to make it behave in ways its creators didn’t intend. A number of examples of indirect prompt-injection attacks have centered on large language models (LLMs) in recent weeks, including OpenAI’s ChatGPT and Microsoft’s Bing chat system.

Read more: https://www.wired.co.uk/article/chatgpt-prompt-injection-attack-security

Commenti

Post popolari in questo blog

Building a high-performance data and AI organization - MIT report 2023

AI Will Transform the Global Economy. Let’s Make Sure It Benefits Humanity. - IFM blog

Dove trovare raccolte di dati (dataset) utilizzabili gratuitamente