Artificial intelligence is transforming how businesses operate online, but it is also reshaping cybercrime. For UK small and medium-sized enterprises (SMEs), the risk is growing rapidly. As organisations adopt AI tools and increasingly rely on digital platforms, cybercriminals are exploiting the same technologies to launch faster, more convincing scams at scale.

The UK’s National Cyber Security Centre (NCSC) has recently urged organisations to strengthen their defences following geopolitical tensions linked to the conflict in the Middle East. Against this backdrop, experts like Karim Salama, founder of digital marketing and technology agency E-Innovate, say businesses must recognise that AI-driven fraud is an evolving reality.

The evolution of AI-driven fraud

Traditional online scams relied heavily on human effort: manually written phishing emails, cloned websites and social engineering tactics. AI has dramatically lowered the barrier to entry for cybercriminals.

Fraudsters can use generative AI to produce highly convincing messages, fake identities and realistic content at scale. Voice cloning, deepfake video and automated chatbots are increasingly being used to impersonate executives, suppliers and even customers. These tactics are particularly dangerous for SMEs, which often lack dedicated cybersecurity teams but still handle valuable financial data, customer records and payment systems.

“AI has fundamentally changed the scale and realism of online fraud,” warns Karim. “Criminals can now generate convincing messages, fake identities and automated campaigns in seconds. For many SMEs, these scams look indistinguishable from legitimate communications at first sight.”

Attack vectors

While phishing emails remain common, the channels used to distribute malware and scams are rapidly evolving. A recent report found that programmatic advertising has overtaken email as the leading vector for malware delivery. Advertising now accounts for more than 60% of observed malware and phishing campaigns, with incidents rising 45% year-on-year.

The growing complexity of the digital advertising ecosystem allows attackers to hide malicious code within automated ad networks, making detection significantly harder. AI tools also enable cybercriminals to create highly personalised attacks by analysing publicly available information about businesses, employees and supply chains.

As Karim explains: “Many business owners assume scams arrive via obvious phishing emails, but attackers are increasingly exploiting ad networks, social media platforms and compromised websites. The sophistication of these campaigns means SMEs must rethink where online risks originate.”

‘AI scams 2.0’

AI-enabled fraud is emerging in several forms, many designed to closely mimic legitimate business activity. One example is deepfake impersonation, where criminals use AI voice-cloning or synthetic video to pose as senior executives. Employees may receive what appears to be a genuine call or message requesting an urgent payment or sensitive information, increasing the likelihood that staff act quickly without verifying the request.

Another tactic involves AI-generated phishing emails. Generative AI allows attackers to create highly convincing messages that replicate the tone and branding of real organisations, making them far harder to detect than traditional phishing attempts.

Fraudsters are also using AI to carry out fake invoice scams. By analysing publicly available information about a company’s suppliers or purchasing patterns, criminals can produce realistic invoices or payment requests that appear legitimate.

AI-assisted account takeover attacks use automated tools to test stolen credentials or weak passwords, enabling criminals to gain access to business email or marketing accounts and launch further scams from within the organisation.
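A standard first-line defence against this kind of automated credential testing is to throttle repeated login failures. Below is a minimal, illustrative sketch of a sliding-window lockout (the thresholds, window length and the idea of keying on a username or IP are assumptions for illustration, not details from the article):

```python
import time
from collections import defaultdict, deque

# Illustrative thresholds: lock a key out after 5 failed
# attempts within a 5-minute sliding window.
MAX_FAILURES = 5
WINDOW_SECONDS = 300

# Maps a key (e.g. a username or source IP) to failure timestamps.
_failures = defaultdict(deque)


def record_failure(key, now=None):
    """Record one failed login attempt for this key."""
    now = time.time() if now is None else now
    q = _failures[key]
    q.append(now)
    # Drop attempts that have aged out of the window.
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()


def is_locked_out(key, now=None):
    """True if the key has exceeded its failure budget in the window."""
    now = time.time() if now is None else now
    q = _failures[key]
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    return len(q) >= MAX_FAILURES
```

Automated credential-stuffing tools rely on being able to test thousands of stolen passwords quickly; even a simple lockout like this raises the cost of each attempt dramatically.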

How SMEs can protect themselves

Despite their sophistication, AI scams often still leave subtle warning signs. Karim urges businesses to remain cautious if they encounter:

  • Unexpected payment requests, particularly with urgency or secrecy attached

  • Changes in supplier bank details without prior verification

  • Emails or messages with unusual tone or grammar, even if they appear generally professional

  • Requests for sensitive information via messaging apps or social media

  • Suspicious online adverts or pop-ups directing users to unfamiliar websites

Businesses do not need large cybersecurity budgets to significantly reduce risk. Instead, Karim says that a combination of awareness, verification culture and basic safeguards can make a major difference.

Strengthen authentication

“Use multi-factor authentication across all email, payment systems and marketing platforms.”

Train employees regularly

“Ensure staff understand how AI scams work and how to recognise suspicious communications.”

Verify financial requests

“Introduce internal procedures requiring secondary verification for payments or supplier changes.”
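Such a procedure is often called a "four-eyes" rule: no payment or supplier change is released until a second, different person has signed it off. A minimal sketch of the idea (the class and field names are illustrative assumptions, not a real payment system):

```python
class PaymentRequest:
    """A payment that requires secondary approval before release."""

    def __init__(self, requester, amount, payee):
        self.requester = requester
        self.amount = amount
        self.payee = payee
        self.approvals = set()

    def approve(self, approver):
        # The person who raised the request can never approve it,
        # which is what blunts a deepfaked "urgent payment" call.
        if approver == self.requester:
            raise ValueError("requester cannot approve their own payment")
        self.approvals.add(approver)

    def is_releasable(self, required=1):
        """True once `required` distinct secondary approvers have signed off."""
        return len(self.approvals) >= required
```

The point is cultural as much as technical: even a perfectly convincing AI-generated request stalls at the verification step, because the second approver checks through an independent channel.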

Monitor digital advertising and web assets

“Businesses running programmatic ads or managing websites should regularly review security settings and third-party scripts.”
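One practical safeguard for third-party scripts is Subresource Integrity (SRI): the page pins a cryptographic hash of each external script, and the browser refuses to run the script if its contents change. The sketch below computes an SRI value and checks a script against it; the helper names are illustrative, but the `sha384-` + Base64 format is the one browsers accept in the `integrity` attribute:

```python
import base64
import hashlib
import hmac


def sri_hash(script_bytes):
    """Compute a Subresource Integrity value (sha384) for a script,
    in the format used by <script integrity="..."> attributes."""
    digest = hashlib.sha384(script_bytes).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")


def script_unchanged(script_bytes, expected_sri):
    """True if the script's contents still match the pinned hash."""
    return hmac.compare_digest(sri_hash(script_bytes), expected_sri)
```

A scheduled job that re-fetches each third-party script and runs a check like this can alert a business the moment an ad tag or analytics snippet is silently swapped for malicious code.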

Keep software updated

“Outdated software remains one of the easiest entry points for attackers.”

Conclusion

The growing use of AI means the gap between legitimate communication and fraud is shrinking. For SMEs, the warning is increasingly urgent.

Karim believes the key defence lies in education and awareness. “Technology alone cannot solve the problem,” he says. “As AI continues to evolve, so too will the tactics used by cybercriminals. Businesses need to understand how these scams work and build a culture of verification from the ground up.”