GenAI cybercrime and cybersecurity trends for 2025

Our developer Gabriel Poves recently attended another digital workshop with ImmuniWeb, this time on GenAI cybercrime and cybersecurity trends for 2025. As we begin to explore the impact of machine learning, artificial neural networks, deep learning, LLMs, OpenAI’s GPT-4o and Meta’s Llama 3, there are heavy implications for cybersecurity. So let’s dive into some background on AI and the session’s predictions for how it will shape cybercrime over the next year.

A bit of context for AI

The statistical foundations of today’s AI reach back to Andrey Markov’s 1906 work on Markov chains; over a century later, the 2017 paper by Google researchers, “Attention Is All You Need”, introduced the Transformer model and kickstarted the AI renaissance we’re experiencing today. Current generative AI is only as good as its underlying training data, however. It has no human-like intelligence; it merely uses statistics to produce the most probable output for a given input. Nor can the AI itself be biased or unethical, but its training data – and thus its output – can be. That said, AI has been used successfully in almost every industry for over a decade, saving millions of hours of valuable human time and accelerating numerous tasks and processes. It can efficiently automate routine but time-consuming work, allowing people to be more productive and to focus on complex tasks that require human intelligence.

Where the cybersecurity concerns lie

According to the State of AI and Security Survey Report 2024 by the Cloud Security Alliance (CSA), the key concerns around AI and cybersecurity are data quality, lack of transparency, a skills and expertise gap in managing AI, data poisoning, hallucinations, privacy, data leakage or loss, accuracy and misuse. Meanwhile, new threats are emerging, such as deepfakes, AI-powered malware and no-code viruses. As security, legal action and competition increase and the amount of high-quality training data shrinks, expect the costs of general-purpose AI systems to rise and task-specific systems to take over. Misuse of AI (e.g. deepfakes), national security and privacy concerns will lead to further restrictions, shifting the focus to task-specific use cases where training data is lawfully available in abundance, and to AI assistants that augment and accelerate humans rather than replace them.

Regulations for AI

The EU AI Act looks to create risk categories and tiers – including prohibited, high-risk and low-risk – around AI development and use. Most of the obligations fall on developers, but there are still some rules for everyone else. Requirements such as risk assessments, transparency obligations and registration in an EU-wide database will apply to specific high-risk AI-enabled applications. Mandatory requirements and outright prohibitions will apply only to social scoring systems, manipulative AI and the highest-performance general-purpose AIs that entail systemic risks.

Best practices for AI in cybersecurity in 2025

If you want to stay ahead of the pack, we can help. Our team is well versed in the emerging threat landscape and can support you in adding AI-related risks (e.g. deepfakes used to bypass authentication) to your risk catalogue. We suggest maintaining a comprehensive inventory of your in-house and external AI-powered systems and implementing company-wide AI governance policies that define the permitted and prohibited uses of AI. We can even review the terms of service of all your AI vendors with you to understand how they use your data. Moreover, we can help you ensure that no business-critical tasks are performed by AI without human validation, and run a comprehensive cost-benefit analysis for you before you implement any AI system. Lastly, you can lean on us for regular AI training for your cybersecurity team and other departments.


The GenAI cybercrime and cybersecurity trends for 2025 are all about reducing risk and empowering people to work smarter, not harder. If you need help capitalising on GenAI in a productive way, please reach out today for our support.