Safeguarding the future: Unveiling the interplay of AI and fraud

September 13, 2023

By Natalie Rooney

In the age of rapidly advancing technology, the world is witnessing an unprecedented proliferation of artificial intelligence (AI) and its ubiquitous presence in our daily lives. From powering virtual assistants to revolutionizing healthcare, AI has undoubtedly become an integral part of our interconnected global landscape. Yet, as this technology continues to reshape industries and drive innovation, it also poses a significant challenge: the intricate interplay between AI and fraud.

Artificial intelligence, with its remarkable ability to analyze vast amounts of data, learn from patterns, and make autonomous decisions, has introduced unparalleled efficiency and convenience to numerous sectors. However, as the power of AI expands, so does its potential misuse in the hands of those with malicious intent. Fraudsters and cybercriminals are increasingly leveraging AI algorithms and techniques to perpetrate sophisticated scams, manipulate systems, and exploit unsuspecting individuals and organizations.

Thanks, ChatGPT!

Here’s something you should know right off the bat about this article: the approximately 150 words up to this point (including the title) were written entirely by ChatGPT in answer to the query, “Can you write an introduction for an article about artificial intelligence and fraud?”

Even if you haven’t yet experimented with ChatGPT, you’re likely already using or experiencing some type of AI every day, possibly without even realizing it:

Personalized marketing: Have you searched something on your phone or computer and now you can’t stop seeing ads for it in your social media feeds?
Voice assistants: Hey, Siri!
Navigation and traveling: Hey, Siri, where is the nearest gas station?
Facial recognition technologies: Security on your phone or at an airport.
Smart home devices: Alexa, play The Matrix on Max.

It seems like AI is making our lives easier, but if it can be used for all of these “good” things, what “bad” things might AI also be used for?

AI’s take on itself

What does ChatGPT think about the potential use of AI for fraudulent or malicious purposes? When asked, ChatGPT returned:

Yes, artificial intelligence (AI) can be used for fraud purposes. Just like any technology, AI can be misused or exploited by individuals or groups with malicious intent. Here are a few ways AI can be utilized for fraudulent activities:

Phishing and social engineering: AI can be used to generate realistic and convincing phishing emails, messages, or phone calls. By analyzing large amounts of data and personal information, AI algorithms can craft targeted messages to deceive individuals into providing sensitive information or performing certain actions.
Deepfake technology: AI-powered deepfake technology can manipulate images, audio, and videos to create convincing forgeries. This can be used for various fraudulent purposes, such as impersonating individuals, creating fake identities, or generating false evidence.
Malware and cyberattacks: AI algorithms can be used to develop sophisticated malware and cyberattack techniques. AI can analyze network patterns, identify vulnerabilities, and automate attacks, making them more effective and harder to detect.
Data manipulation and fraudulent transactions: AI can be used to manipulate large datasets, altering records, transactions, or financial information to commit fraud. AI algorithms can identify patterns in data and exploit weaknesses in systems to conduct fraudulent activities.
Automated bots and spam: AI-powered bots can be programmed to automatically generate spam messages, comments, or reviews. These bots can be used for various fraudulent purposes, such as spreading misinformation, manipulating public opinion, or promoting fraudulent products or services.

ChatGPT added the following caveat:

It’s important to note that while AI can facilitate fraudulent activities, it can also be used to detect and prevent fraud. AI-based fraud detection systems can analyze large amounts of data, detect anomalies, and identify patterns indicative of fraudulent behavior. The ethical use of AI and the development of robust security measures are crucial to minimizing the risks associated with AI-enabled fraud.
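
That caveat is worth making concrete. At its simplest, an AI-based fraud detection system is an anomaly detector run over transaction data. The sketch below, with invented data and feature choices, uses scikit-learn’s IsolationForest to flag unusual transactions; it illustrates the general idea, not any production system mentioned in this article.

```python
# A minimal sketch of AI-assisted fraud detection: flagging anomalous
# transactions in a dataset. All data and thresholds are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=7)

# Simulated transaction features: [amount, hour_of_day].
normal = np.column_stack([
    rng.normal(120, 30, size=1000),   # typical amounts around $120
    rng.normal(13, 3, size=1000),     # activity clustered around midday
])
suspicious = np.array([[4800, 3], [5200, 2]])  # large, late-night transfers
transactions = np.vstack([normal, suspicious])

# Fit an unsupervised anomaly detector; "contamination" is the assumed
# fraction of outliers -- a tuning choice, not a known quantity.
model = IsolationForest(contamination=0.01, random_state=7)
labels = model.fit_predict(transactions)  # -1 = anomaly, 1 = normal

flagged = transactions[labels == -1]
print(f"Flagged {len(flagged)} of {len(transactions)} transactions for review")
```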

When asked specifically if AI could be malicious, ChatGPT returned, “AI itself does not possess intentions or motives, as it is a tool developed by humans. However, AI can be programmed or manipulated by humans to exhibit malicious behavior. In such cases, the AI system is designed with the intent to cause harm or engage in activities that are considered malicious.”

Americans are worried

Even though Americans are using AI in their daily lives, they’re wary of its growing pervasiveness. According to a recent Reuters poll, more than two-thirds of Americans are concerned about the negative effects of AI, and 61% believe it could threaten civilization. Americans also say they want companies like Microsoft, Google, and OpenAI (ChatGPT’s developer) to do something about that threat.

Another survey shows that the majority of Americans worry that people will use AI for criminal activity (84%), spreading false or misleading information (83%), radicalizing people to extremism (77%), and inciting hate and harassment (75%).

Can financial fraudsters hide behind AI?

AI has had a significant impact on the accounting profession by automating numerous tasks and increasing efficiency. For example, AI can create invoices, analyze financial data, generate reports, and identify patterns and anomalies that suggest accounting fraud. However, concerns are growing about the ways in which AI can assist criminals, whether they’re hacking their way in or are already on the payroll.
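
One classic pattern test that lends itself to this kind of automation is a Benford’s law check: the leading digits of naturally occurring financial figures follow a known logarithmic distribution, and fabricated amounts often don’t. Here is a minimal sketch with hypothetical invoice amounts; in practice the test is run over thousands of entries, not a handful.

```python
# A sketch of a Benford's law first-digit test, a long-standing tool in
# forensic accounting. The invoice amounts below are hypothetical.
import math
from collections import Counter

def first_digit(x: float) -> int:
    # Scientific notation puts the first significant digit up front.
    return int(f"{abs(x):e}"[0])

def benford_chi_square(amounts) -> float:
    counts = Counter(first_digit(a) for a in amounts if a != 0)
    n = sum(counts.values())
    chi_sq = 0.0
    for d in range(1, 10):
        expected = n * math.log10(1 + 1 / d)   # Benford's expected share
        observed = counts.get(d, 0)
        chi_sq += (observed - expected) ** 2 / expected
    # Compare against the chi-square critical value with 8 degrees of
    # freedom (about 15.51 at the 5% level); higher suggests manipulation.
    return chi_sq

# Example: amounts padded with suspiciously many 8xx entries.
invoices = [812.50, 845.00, 890.10, 823.75, 101.20, 243.99, 867.40, 311.00]
print(f"chi-square statistic: {benford_chi_square(invoices):.2f}")
```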

Sheri Betzer, CPA, CFE, a partner at Betzer Call Lausten Schwartz, LLP, has provided services in litigation support, forensic accounting, and fraud investigations for more than 30 years. She shares her expertise with graduate students at Metropolitan State University of Denver as an instructor, educating the next generation of accounting professionals about fraud.

Betzer says while she hasn’t yet run into a situation where AI has been used to hamper a fraud investigation, she sees how it could happen, especially during the process of interviewing a suspect. She says that detecting when someone is lying comes with experience, and when many interviews became virtual events during the pandemic, it created complications. “I can see how AI could override the success of virtual interviews,” Betzer says. “You need to see the person, look into their eyes, watch their body language, and then feed off that with questions. The threat of AI takes away anything we can rely upon during a virtual interview.”

Betzer says when it comes to financial fraud, it truly takes creativity at different levels on the part of the perpetrator, yet some fraudsters aren’t yet savvy enough to cover up their actions. The hope is that companies have internal controls strong enough to catch fraud quickly, even with the onslaught of AI.

“We’ve been trying to train people about fraud and internal controls for decades,” she says. “But we still have the same old fraud surfacing. It requires constant training for the general public as well as the professionals. As AI changes, we have to keep up on our training to know what we’re looking for, understand the changes, and realize that something has been manipulated. Fraudsters are always one step ahead of us, unfortunately.”

Auditing in an AI world

Jeff Cook, CPA, SOC Managing Principal at Fortreum, and owner of Jeff Cook CPA LLC, sees AI enabling a far more sophisticated form of phishing, one with growing implications for auditors. With a background in SOC 1 and SOC 2 auditing and other high-level IT audits, Cook has long focused on considering fraud and how a client is preventing it. Now that focus is translating to fraud prevention with an AI twist. “We’re going to have to put considerations for AI fraud into our audit planning,” he says.

Cook says fraudulent use of AI is in its early stages, but he’s starting to hear more about it. He has specific concerns about the write-ups of testing by auditors and the shortcuts they might take. “Instead of an auditor writing up what procedures they actually did, they could ask AI to create control test write-ups just to get the workpapers done,” he explains.

Cook says auditors need to change their evaluation process to accommodate the massive amounts of data that AI produces. “When you’re essentially sampling 100% of the population, how do you determine and report thresholds for exceptions?” he questions. “Under the old guidance, we did our statistical samples and determined the tolerable error and confidence level. The AICPA is working on it, but there’s no clear guidance yet.”
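
To illustrate the shift Cook describes, consider a hypothetical sketch: when every item in the population is tested, the exception rate is an exact figure rather than a statistical estimate, and the open question becomes where to set the tolerable threshold. The names and numbers below are invented.

```python
# With full-population testing there is no sampling error to extrapolate,
# only a judgment call about what exception rate is tolerable.

def evaluate_control(results: list[bool], tolerable_rate: float) -> str:
    """results: one pass/fail outcome per item in the full population."""
    exceptions = results.count(False)
    rate = exceptions / len(results)   # exact, not an estimate
    verdict = "within tolerance" if rate <= tolerable_rate else "EXCEEDS tolerance"
    return f"{exceptions}/{len(results)} exceptions ({rate:.2%}) - {verdict}"

# Classically an auditor might sample ~60 items and infer an upper limit;
# here every one of 10,000 control executions is tested.
population = [True] * 9_940 + [False] * 60
print(evaluate_control(population, tolerable_rate=0.005))
```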

Another concern: bad data. AI and other automated tools are completely reliant on user inputs. “Bad data inputs mean bad data outputs,” Cook says. “Critical thinking has to come into play to determine if the data going in is even OK.”
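
As a small illustration of that critical-thinking step, here is a sketch of basic sanity checks run on input records before they ever reach an automated analysis; the field names and rules are hypothetical.

```python
# Garbage in, garbage out: catch bad records before they feed any model.
from datetime import date

def validate_transaction(txn: dict) -> list[str]:
    problems = []
    if txn.get("amount") is None:
        problems.append("missing amount")
    elif txn["amount"] <= 0:
        problems.append(f"non-positive amount: {txn['amount']}")
    if not txn.get("account_id"):
        problems.append("missing account_id")
    txn_date = txn.get("date")
    if txn_date is None or txn_date > date.today():
        problems.append(f"missing or future-dated date: {txn_date}")
    return problems

record = {"amount": -250.0, "account_id": "", "date": date(2099, 1, 1)}
for issue in validate_transaction(record):
    print("input check failed:", issue)
```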

The lesson in all of this, says Cook, is that you can’t blindly accept any information from AI or any automated tool, even when your dashboard gives you a “green checkmark.” Has the tool been tested or audited to ensure that it’s processing correctly? Does it even point to the right data to produce the right results?

“We’ve got to make sure someone tested that machine to ensure that the data was calculated properly before the outcome is accepted, especially in cloud environments,” he explains. “This is why talent remains a top issue in fraud prevention. Auditors of the future will have to go beyond the traditional role. They’ll need some level of proficiency in understanding scripting or code so they know what the automation is doing, and they can make their determination that something is configured correctly and used properly.”
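
A minimal sketch of that trust-but-verify discipline, with invented figures: rather than accepting a tool’s green checkmark, recompute the figure independently from the source data and flag any discrepancy.

```python
# Don't accept the dashboard's total on faith -- recompute and compare.
# The tool output and ledger data here are invented.

def verify_reported_total(ledger_amounts: list[float],
                          tool_reported_total: float,
                          tolerance: float = 0.01) -> bool:
    independent_total = round(sum(ledger_amounts), 2)  # recompute from source
    discrepancy = abs(independent_total - tool_reported_total)
    if discrepancy > tolerance:
        print(f"MISMATCH: recomputed {independent_total}, "
              f"tool reported {tool_reported_total}")
        return False
    return True

ledger = [1200.00, 349.99, 75.25]
assert verify_reported_total(ledger, 1625.24)        # matches
assert not verify_reported_total(ledger, 1700.00)    # flags discrepancy
```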

Another dangerous facet of some AI tools is the potential exposure of client information by using internet-based AI programs. “Putting client or firm information into a publicly based internet site violates the confidentiality agreement,” Cook cautions. “If staff just want to get something finished quickly and efficiently, they might not consider this. Now the information is out there for the public and the AI has it. We need to be careful about things like this.”
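
One safeguard for that confidentiality risk, sketched below with illustrative (and deliberately incomplete) patterns, is to scrub obvious identifiers before any text reaches a public, internet-based AI service.

```python
# A minimal redaction sketch. These few regexes are illustrative only;
# real redaction needs far more coverage (note the client's name still
# leaks through in the example below).
import re

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),        # US SSN shape
    (re.compile(r"\b\d{13,16}\b"), "[CARD]"),                # card-like numbers
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),     # email addresses
]

def scrub(text: str) -> str:
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

note = "Client John Doe (SSN 123-45-6789, jdoe@client.com) disputes invoice."
print(scrub(note))
# -> "Client John Doe (SSN [SSN], [EMAIL]) disputes invoice."
```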

With the advances in technology, Cook envisions higher-level phishing scams that go beyond criminals hacking a CEO’s email to trick someone into clicking a link and transferring money. “Now, AI might be used to make a phone call that sounds exactly like the CEO instructing someone to visit a site and transfer money,” he says. “The individuals go to the website and then it’s game over – a breach. Organizations need to consider additional safeguards – maybe a codeword that AI would never know – for added protection. Remember: trust but verify.”

AI in accounting academia

Higher education was already grappling with how to deal with plagiarism and cheating in a virtual world when ChatGPT exploded onto the scene. “As we started spring semester, there was very little talk about ChatGPT, and then suddenly, it became all of the talk,” says Falene Young, CPA, an accounting instructor and BSBA program coordinator for Colorado State University. “The information about generative AI is coming in quickly from a lot of different sources, including students, faculty, and textbook publishers. It’s information overload. Some educators are embracing generative AI as the new frontier while others find it incredibly threatening.”

“I’ve heard some people say AI is going to end us as educators,” Young says. “There have even been studies naming the most threatened industries, and higher education is often up there, as is accounting, so people are a little freaked out.”

Young says that technology is constantly evolving, and she encourages curiosity over fear. “When we invested in the calculator, it didn’t eliminate the need to understand how to do calculations. When we first used Google Maps, it didn’t end the need to navigate. All AI is doing is taking something that used to be a manual process and allowing us to do our jobs more efficiently. Critical thinking is still necessary.”

Young says she has experimented with ChatGPT, including running the accounting homework assignments she develops through the chatbot. “Right now, it’s not very good at it, but it’s confident in the answers it provides.”

One alarming thing Young noticed: ChatGPT will produce wrong answers when it doesn’t know the actual answer. “I’m learning you have to be really clear on where you want the AI to pull information from,” she says. “It’s always a good idea to check every source that AI provides. The more specific you are with the model, the more specific it is about returning what you want.”

Young says her takeaway is that it’s important to talk about AI because it’s taking the world by storm. “Generative AI is in the disruptor phase right now, and not just in the accounting profession. There’s a lot it can do, but there’s something to be said for the human connection with professors and for higher-level thinking by CPAs. Even though generative AI is going to become more sophisticated, I still think we as a profession aren’t going to be eradicated.”

The more you know

Betzer encourages CPAs to take as much training about AI as they can. “That’s how we’ll develop our own understanding about how to use our personal expertise to overcome the problems that AI might create,” she says. “It’s going to require a community sharing of ideas.”

Whether it’s AI, blockchain, or cryptocurrency, Cook says the technologies are becoming more mainstream. “An auditor must keep up with these areas to stay relevant; otherwise, how can they render an opinion on a financial statement or IT audit?” he asks. “As a profession, we pride ourselves on our reputation. We have to understand the technology so we can maintain quality and retain the trust of the people who rely on our information. It’s at the core of what we do.”

Reprinted with permission of Colorado Society of CPAs.