From 2280897e923b3a56b400dab3578dbc5782c298a5 Mon Sep 17 00:00:00 2001
From: David Papp
Date: Tue, 30 Jul 2024 09:16:38 +0200
Subject: [PATCH] Add incidents #2

---
 ...ons_learned_from_chatgpt_s_samsung_leak.md |  9 ++++++
 ...ent_involving_unauthorized_admin_access.md | 31 +++++++++++++++++++
 2 files changed, 40 insertions(+)
 create mode 100644 content/posts/2023-05-09/lessons_learned_from_chatgpt_s_samsung_leak.md
 create mode 100644 content/posts/2023-08-30/security_update_incident_involving_unauthorized_admin_access.md

diff --git a/content/posts/2023-05-09/lessons_learned_from_chatgpt_s_samsung_leak.md b/content/posts/2023-05-09/lessons_learned_from_chatgpt_s_samsung_leak.md
new file mode 100644
index 0000000..bc34306
--- /dev/null
+++ b/content/posts/2023-05-09/lessons_learned_from_chatgpt_s_samsung_leak.md
@@ -0,0 +1,9 @@
++++
+title = 'Lessons Learned from ChatGPT’s Samsung Leak'
+date = 2023-05-09
++++
+Samsung employees reportedly leaked sensitive data via OpenAI’s chatbot ChatGPT, highlighting the risks of using Large Language Models (LLMs) in the workplace. Despite Samsung’s ban on generative AI tools, several employees inadvertently shared sensitive company information, including software source code.
+
+The incident, termed a "conversational AI leak," occurs when sensitive data input into an LLM is unintentionally exposed. To prevent such leaks, experts recommend controlling the data fed into the models and limiting who can access chatbots. Outright bans may not be effective, as more generative AI tools will be introduced in the future. Instead, organizations should focus on internal controls and monitoring.
+
+[More details here](https://www.cybernews.com/security/lessons-learned-from-chatgpt-samsung-leak/)
diff --git a/content/posts/2023-08-30/security_update_incident_involving_unauthorized_admin_access.md b/content/posts/2023-08-30/security_update_incident_involving_unauthorized_admin_access.md
new file mode 100644
index 0000000..af86228
--- /dev/null
+++ b/content/posts/2023-08-30/security_update_incident_involving_unauthorized_admin_access.md
@@ -0,0 +1,31 @@
++++
+title = 'Security Update: Incident Involving Unauthorized Admin Access'
+date = 2023-08-30
++++
+**TL;DR:** Sourcegraph experienced a security incident that allowed a single attacker to access some data on Sourcegraph.com. This was limited to:
+
+**Paid customers:**
+- The license key recipient’s name and email address.
+- A small subset of customers’ Sourcegraph license keys may have been accessed (note that license keys do not enable access to Sourcegraph instances). We are reaching out directly to those who may have been impacted to rotate license keys.
+
+**Community users:**
+- Sourcegraph account email addresses. No action is required.
+
+No other customer info, including private code, emails, passwords, usernames, or other PII, was accessible.
+
+**Background:**
+On August 30, 2023, a malicious actor used a leaked admin access token in our public Sourcegraph instance at Sourcegraph.com. The attacker used their privileges to increase API rate limits for a small number of users. Our security team quickly identified the breach, revoked the malicious user's access, and initiated an internal investigation.
+
+**Impact:**
+The attack was limited to viewing the license key recipient’s name and email address for paid customers, and Sourcegraph account email addresses for community users. No private customer data or code was accessed.
+
+**Mitigation Steps:**
+- Identified and revoked the malicious account access.
+- Rotated a subset of Sourcegraph customer license keys.
+- Temporarily reduced rate limits for all free community users.
+- Expanded secret scanning and monitoring for malicious activity.
+
+**Next Steps:**
+Sourcegraph is actively working on a long-term solution to prevent future incidents and will provide updates to the community.
+
+[More details here](https://sourcegraph.com/security-update-incident-unauthorized-admin-access)