What new security risks are caused by AI
When we say we’re here to help you keep everything running smoothly, that applies to cybersecurity as well. But what new security risks are caused by A? How can you protect against them? We dive into the topic for today’s quick read. We’ll explain in plain English what you should look out for and how to protect your customers and data.
AI hallucinations
AI can be wrong. It can make things up or produce nonsensical output. This is especially a problem if you’re using data to make business decisions. IBM explains that the best way to protect against AI hallucinations when using these systems in your business is to start with clean, high-quality training data. Then, you want to put guardrails around how the AI model will be used; testing and refining often. Next, you want to limit the responses to acceptable parameters so the AI does not have to be too creative. (Too much freedom can increase the chance of hallucinations.) And last, you’ll want to restrict access to these systems, monitor for incoming threats and use human oversight to check what’s being generated.
Bias and gullibility
Because AI is often trained on a large volume of data, it can often misrepresent or underrepresent minorities or contain cultural biases. That’s why, it shouldn’t be used to make determinations on its own in sectors like recruitment, medical, law or military operations where lives and livelihoods are on the line. It can also be manipulated in conversation by bad actors. These people are looking to get tools like chatbots to make promises on behalf of companies for things like refunds. So, to minimise risk, human oversight and rigorous limitations on output are a necessity. It’s better to put training wheels on your AI or use your own training data than go with risky 3rd party tools or allow unchecked responses.
Prompt injection attacks
NCSC explains that prompt injection attacks are “when an attacker creates an input designed to make the model behave in an unintended way. This could involve causing it to generate offensive content, or reveal confidential information, or trigger unintended consequences in a system that accepts unchecked input.” IBM explains, “Hackers do not need to feed prompts directly to LLMs for these attacks to work. They can hide malicious prompts in websites and messages that LLMs consume. And hackers don’t need any specific technical expertise to craft prompt injections. They can carry out attacks in plain English or whatever languages their target LLM responds to. […] However, organizations can significantly mitigate the risk of prompt injection attacks by validating inputs, closely monitoring LLM activity, keeping human users in the loop, and more.”
All these new security risks caused by AI share a lack of proper oversight and strategy. So, if you want to leverage AI but don’t know how to do it safely, talk to us today. We’ll assess your goals, IT landscape and risks to recommend the best approach for your needs. We’ll also look to reduce your threat surface and suggest AI tools only where they will do the most good. Some use cases are for business insights, customer service and sales but without creating more unnecessary failure points.