Training your teams to use AI safely

We love emerging tech like AI, ML, VR and AR, and how they can improve businesses. But that doesn’t mean we overlook the duty of care employers (and their partners) have when onboarding new tools and platforms. AI is an incredible resource, but training your teams to use it safely is sometimes an afterthought. As Cyber Essentials Plus Accredited developers, we thought we’d share guidance and best practices on AI safety from the ICO and others.

AI & DPIA

The ICO explains that “whether a system using AI is generally more or less risky than a system not using AI depends on the specific circumstances. You therefore need to evaluate this based on your own context. Your DPIA should show evidence of your consideration of less risky alternatives, if any, that achieve the same purpose of the processing, and why you didn’t choose them. This consideration is particularly relevant where you are using public task or legitimate interests as a lawful basis.” It’s also important to make sure no biased or discriminatory classifications are present in your software. The logic and outcomes should be rigorously tested for accuracy, and to check that no overcorrections have crept in either (see Google’s Gemini). And all of this comes before you even formally onboard an AI system.


But what if your team is using open-source AI already? You need a policy for that.

Creating an AI-use policy

Chances are, you don’t have one yet. And if you don’t, that creates a grey area where your employees don’t know where they stand. The CIPD suggests you first find out how AI is currently being used, determine what the risks and benefits are, and check that AI use isn’t putting you in breach of any regulations, before formulating your guidance. In the policy itself, they suggest you:

  • Explain why the policy is needed
  • Make clear that AI is a tool, not a replacement for human thinking
  • Show where AI is being used correctly and incorrectly (and why)
  • Create guidelines about AI transparency, internally & externally
  • Ensure no bias or discrimination is creeping into any models used
  • Lay out the management process and penalties for improper use
  • Leave room for the policy and tools to evolve

In our opinion, one of their most important suggestions is to include this statement:

“Refer to our existing policies in relation to data use and protection. These policies remain in place when using personal AI tools. Employees must treat all data with the same level of security and care and should not place confidential organisational data into open-source tools. In addition, employees must ensure they do not pass ownership of proprietary data to organisations that own the AI tools. Take all due precaution when agreeing to external terms and conditions to avoid inadvertently transferring ownership.”

We’ll explain why…

Free AI tools are a data protection nightmare

When you plug something into ChatGPT, for example, you probably have no idea where it’s going, who will see it or what your inputs will be used for. That’s fine if you’re just looking for holiday ideas. But if you paste confidential information into a free, public platform like ChatGPT, you’re creating a data risk for the organisation. That simply can’t be permitted. So, if you know your teams want AI support with their day-to-day work tasks, think about purchasing a ring-fenced tool like Salesforce Einstein or Copilot for Microsoft 365. This keeps the data within your own environment and also gives you more fuel for data-driven decision-making down the road as your internal models are refined. At the very least, get that data protection guidance into your AI policy ASAP, so your teams understand why pasting IP into free external tools is prohibited.


Is this all a bit overwhelming? Don’t worry. We can help you navigate this new world of AI. Get in touch today.