Exploiting LLM tools
I wrote a short piece about an LLM issue I noticed recently. Morale of the story is to never trust user supplied input and be careful with LLMs because they can be tricked.
Made a library? Written a blog post? Found a useful tutorial? Share it with the Ruby community here or just enjoy what everyone else has found!
I wrote a short piece about an LLM issue I noticed recently. Morale of the story is to never trust user supplied input and be careful with LLMs because they can be tricked.
Post a comment