Tuesday, October 14, 2025

How many malicious docs does it take to poison an LLM? Far fewer than you might think, Anthropic warns

Anthropic’s study shows that just 250 malicious documents are enough to poison even massive AI models.

source https://www.techradar.com/pro/how-many-malicious-docs-does-it-take-to-poison-an-llm-far-fewer-than-you-might-think-anthropic-warns


EU Court gives the Dutch the green light to pursue Apple App Store antitrust case

The European Court of Justice says the Netherlands can go after Apple over its App Store commissions. source https://www.techradar.com/pro...