AI Security Woes and Government Adoption

A Different Lens
3 min readJun 5, 2024

--

In the wake of a ton of messaging coming from the government in recent months about AI, they seem to be in a flurry of activity that will inevitably result in many implementations, mostly of LLMs.

This should be concerning for a host of reasons, most of which revolve around the security flaws that are inherent in LLMs. Recently, a publication came out where the author dubbed their latest research as an “AI Worm” you can check the GitHub repository here:

The publication highlighted some of the very real risks that are not being discussed across the industry, and when asking many experts in the field, how they plan to mitigate these sorts of attacks, the room becomes silent. No one has even begun to imagine this is possible, let alone assess the means to remedy the gaping hole that these implementations present.

To further illustrate, the authors of the article took an image and encoded instructions inside the image, and the AI model used actually performed the actions. This type of attack, known as “adversarial prompting,” demonstrates how seemingly benign inputs…

--

--

A Different Lens
A Different Lens

Written by A Different Lens

Entrepreneur | Software Architect | CTO | AI | Mystic and much more. Just a guy hoping to change the world one small step at a time.