Welcome to DU!
The truly grassroots left-of-center political community where regular people, not algorithms, drive the discussions and set the standards.
General Discussion
Delve did the security compliance on LiteLLM, an AI project hit by malware
From TechCrunch today:
https://techcrunch.com/2026/03/25/delve-did-the-security-compliance-on-litellm-an-ai-project-hit-by-malware/
-snip-
LiteLLM gives developers easy access to hundreds of AI models and provides features like spend management. It's a breakout hit, downloaded as often as 3.4 million times per day, according to Snyk, one of the many security researchers monitoring the incident. The project had 40K stars on GitHub and thousands of forks (copies developers use as a base for their own modified versions).
The malware was discovered, documented, and disclosed by research scientist Callum McMahon of FutureSearch, a company offering AI agents for web research. The malware slipped in through a dependency, meaning other open source software that LiteLLM relied upon. It then stole the log-in credentials of everything it touched. With those credentials, the malware gained access to more open source packages and accounts to harvest more credentials, and so on.
-snip-
Delve is the Y Combinator-backed, AI-powered compliance startup that's been accused of misleading its customers about their true compliance status by allegedly generating fake data and using auditors that rubber-stamp reports. Delve has denied these allegations.
There is one point of nuance here worth understanding. Such certifications are intended to show that a company has strong security policies in place to limit the possibility of incidents like this one. Certifications don't automatically prevent a company, like LiteLLM, from being hit by malware. While SOC 2 is supposed to cover policies surrounding software dependencies, malware can still slip in.
-snip-
-snip-
Since LiteLLM is so popular, it's possible some DUers downloaded it.
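For anyone who wants to check, here's a minimal sketch (my own, not from either article) that looks up whether the litellm package is installed in the current Python environment and whether it matches one of the two versions the Cybernews article flags as compromised:

```python
# Hedged sketch: check the locally installed litellm version against the
# releases reported as compromised (1.82.7 and 1.82.8, per Cybernews).
from importlib.metadata import version, PackageNotFoundError

COMPROMISED = {"1.82.7", "1.82.8"}

try:
    v = version("litellm")
    if v in COMPROMISED:
        print(f"WARNING: litellm {v} is a flagged release - rotate all credentials")
    else:
        print(f"litellm {v} installed - not one of the flagged versions")
except PackageNotFoundError:
    print("litellm is not installed in this environment")
```

Note this only inspects the one environment Python is running in; if you use virtualenvs or containers, each would need to be checked separately, and a clean version number doesn't prove a machine was never exposed.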
Cybernews story on the malware infecting LiteLLM:
https://cybernews.com/security/critical-litellm-supply-chain-attack-sends-shockwaves/
Critical Python supply chain compromise: how library used by millions of AI developers got infected with malware
Published: 25 March 2026
Last updated: 4 hours ago
Ernestas Naprys
Senior Journalist
Developers are sounding the alarm bells. If you installed LiteLLM 1.82.7 or 1.82.8, immediately rotate everything: all secrets, every environment variable, SSH key, cloud credential, and API key present on the system, security researchers warn. You might not even know that you use these packages; they often come as dependencies of major AI projects.
AI developers across the world report that their machines suddenly started behaving strangely.
-snip-
It's like a universal adapter allowing you to control LLMs, AI agents, and MCP tools from one place.
This means attackers obtained highly valuable API keys and credentials that could cause significant losses. Moreover, it opens the door to many other repositories that depend on LiteLLM, allowing attackers to snowball the attack even further.
-snip-
The TechCrunch story 3 days ago about Delve allegedly falsely telling customers they were compliant with privacy and security regulations:
https://techcrunch.com/2026/03/22/delve-accused-of-misleading-customers-with-fake-compliance/
An anonymous Substack post published this week accuses compliance startup Delve of falsely convincing hundreds of customers they were compliant with privacy and security regulations, potentially exposing those customers to criminal liability under HIPAA and hefty fines under GDPR.
Delve is a Y Combinator-backed startup that last year announced raising a $32 million Series A at a $300 million valuation. (The round was led by Insight Partners.) On Friday, the startup attempted to refute the accusations on its blog, calling the Substack post misleading and saying it contains a number of inaccurate claims.
-snip-
Their conclusion? That Delve achieves its claim of being the fastest platform by producing fake evidence, generating auditor conclusions on behalf of certification mills that rubber stamp reports, and skipping major framework requirements while telling clients they have achieved 100% compliance.
DeepDelver went into considerable detail about those claims, accusing the startup of providing customers with fabricated evidence of board meetings, tests, and processes that never happened, then forcing those customers to choose between adopting fake evidence or performing mostly manual work with little real automation or AI.
-snip-
4 replies
highplainsdem
Wednesday
OP
Yup - this is a massive breach. And continuing to spread. These "AI" geniuses are really dolts.
erronis
Wednesday
#1
Did you notice the paragraph in the first TechCrunch article about AI having been used to write the malware?
highplainsdem
Wednesday
#2