It seems like every company is scrambling to stake their claim in the AI goldrush–check out the CEO of Kroger promising to bring LLMs into the dairy aisle. And front line workers are following suit–experimenting with AI so they can work faster and do more.
In the few short months since ChatGPT debuted, hundreds of AI-powered tools have come on the market. But while AI-based tools have genuinely helpful applications, they also pose profound security risks. Unfortunately, most companies still haven’t come up with policies to manage those risks. In the absence of clear guidance around responsible AI use, employees are blithely handing over sensitive data to untrustworthy tools.
AI-based browser extensions offer the clearest illustration of this phenomenon. The Chrome store is overflowing with extensions that (claim to) harness ChatGPT to do all manner of tasks: punching up emails, designing graphics, transcribing meetings, and writing code. But these tools are prone to at least three types of risk.
Malware: Security researchers keep uncovering AI-based extensions that steal user data. These extensions play on users’ trust of the big tech platforms (“it can’t be dangerous if Google lets it on the Chrome store!”) and they often appear to work, by hooking up to ChatGPT et al’s APIs.
Data Governance: Companies including Apple and Verizon have banned their employees from using LLMs because these products rarely offer a guarantee that a user’s inputs won’t be used as training data.
Prompt Injection Attacks: In this little known but potentially unsolvable attack, hidden text on a webpage directs an AI tool to perform malicious actions–such as exfiltrate data and then delete the records.
Up until now, most companies have been caught flat-footed by AI, but these risks are too serious to ignore.
At Kolide, we’re taking a two-part approach to governing AI use.
Draft AI policies as a team. We don’t want to totally ban our team from using AI, we just want to use it safely. So our first step is meeting with representatives from multiple teams to figure out what they’re getting out of AI-based tools, and how we can provide them with secure options that don’t expose critical data or infrastructure.
Use Kolide to block malicious tools. Kolide lets IT and security teams write Checks that detect device compliance issues, and we’ve already started creating Checks for malicious (or dubious) AI-based tools. Now if an employee accidentally downloads malware, they’ll be prevented from logging into our cloud apps until they’ve removed it.
Every company will have to craft policies based on their unique needs and concerns, but the important thing is to start now. There’s still time to seize the reins of AI, before it gallops away with your company’s data.
To learn more about how Kolide enforces device compliance for companies with Okta, click here to watch an on-demand demo.
Our thank to Kolide for sponsoring MacStories this week.
Support MacStories and Unlock Extras
Founded in 2015, Club MacStories has delivered exclusive content every week for over six years.
In that time, members have enjoyed nearly 400 weekly and monthly newsletters packed with more of your favorite MacStories writing as well as Club-only podcasts, eBooks, discounts on apps, icons, and services. Join today, and you’ll get everything new that we publish every week, plus access to our entire archive of back issues and downloadable perks.
The Club expanded in 2021 with Club MacStories+ and Club Premier. Club MacStories+ members enjoy even more exclusive stories, a vibrant Discord community, a rotating roster of app discounts, and more. And, with Club Premier, you get everything we offer at every Club level plus an extended, ad-free version of our podcast AppStories that is delivered early each week in high-bitrate audio.