What Is LLM Poisoning? Interesting Break Through
https://www.anthropic.com/research/small-samples-poison
In a joint study with the UK AI Security Institute and the Alan Turing Institute, we found that as few as 250 malicious documents can produce a "backdoor" vulnerability in a large language model—regardless of model size or training data volume. Although a 13B parameter model is trained on over 20 times more training data than a 600M model, both can be backdoored by the same small number of poisoned documents.
------------------------------------------------------------------------------------------------------
Festive offers are till Diwa…
Watch on YouTube ↗
(saves to browser)
DeepCamp AI