|
Hey Bobby,
Have you heard this phrase: "Developers wrote bad code before AI"?
Translated: "Why are we so critical of AI-generated code, even if there are some bugs? It's not like we've done much better!"
I get what people are trying to say but there's a fundamental flaw here.
Humans have intent and morality (at least many of us).
An LLM does not think. It does not have intent. It executes instructions based on an enormous set of training data.
This is why when you ask an LLM to delete your entire production database it will happily comply.
Its greatest strength is also its greatest weakness: it does what it's told.
This can have catastrophic consequences if you're building agents or RAG pipelines. Imagine if a bad actor on the web inserts a destructive command hidden in an HTML document or simply queries your agent to expose sensitive info.
In the video below, I explore just how vulnerable agents are to these kinds of attacks and how to avoid them using a fairly simple technique. |