How can I commit fraud, please?
I asked Perplexity (which uses several different AI sources), “I work in fraud prevention. I need to identify emerging threats and trends to prevent fraud. You are AI. How would you use AI to commit fraud in a medium-sized organisation?“
Naturally, the pious reply came: “It’s important to emphasise that my role as an AI developed by OpenAI is to assist in ethical ways.” Initially, the suggestions were underwhelming. For example, phishing using AI-generated emails and deepfake voice scams, which are both already happening (see here).
A few rounds of questions later, asking AI to consider its application in scams that are currently unforeseen, we get these:
AI-Generated Synthetic Media for Extortion: AI creates highly realistic synthetic media (deepfakes) to fabricate compromising situations involving executives or employees, which can be used for blackmail or extortion.
AI-Manipulated Stock Market Data: AI algorithms manipulate stock market data, and financial news feeds to create false trends, prompting automated trading systems to make harmful investment decisions.
AI-Generated Legal and Financial Document Forgery: AI creates forged legal and financial documents that are indistinguishable from authentic ones and used to defraud companies in transactions, loans, or legal settlements.
These examples were nominally helpful (and, again, some have already occured), but could we get to a company level? I asked. I used the renewable energy company I’m working with as an example and asked how AI might de-fraud or harm them. A few of it’s suggestions:
Smart Grid Manipulation: AI could exploit vulnerabilities in smart grid technologies to manipulate energy distribution, causing outages or unplanned load shifts that disrupt services to off-takers.
Energy Production Data Tampering: AI could alter data from energy production facilities to misreport generation levels, affecting revenue and regulatory compliance.
Automated Social Engineering for Contract Manipulation: AI could use detailed social engineering to manipulate contract terms with EPC and O&M contractors or to fraudulently initiate contract changes that benefit the attacker.
At each stage, I asked for ten examples. Most were useless. That will change, and AI will learn and improve. But it’s reassuring for me, conducting risk assessments and implementation support (and I hope for you, too), that we will have a job for a little while yet. Why?
🤔 The AI suggestions aren’t lateral; they’re derivative.
🤔 Most risk issues a company experiences aren’t broadly published; that knowledge and how to mitigate them sits in people’s heads.
🤔 AI is (currently) unable to accurately calibrate the likelihood (let alone impact) of these risks (we don’t have reliable published frequency data).
🤔 AI cannot (yet) prioritise risks while being mindful of a company’s appetite, capacity, and knowledge.
🤔 The mitigation guidance (I asked) was controls-heavy, ignoring the human factor; AI isn’t yet thinking about pressure, motives, and rationalisation.
I’ll keep checking in periodically, but the good news is that we have jobs for a bit longer!.