
How AI could threaten your firm’s data security 


Last Updated September 29, 2023


*This blog is part of the March 2023 Thought Leader newsletter.

Maybe people should have seen this coming back in 1968, when the HAL 9000 computer from the movie “2001: A Space Odyssey” famously refused to open the pod bay doors.

More than five decades later, artificial intelligence (AI) is no longer science fiction but very much fact. So far, it (mostly) hasn’t made its own catastrophic decisions—a few self-driving cars aside—and has proven to be largely cooperative with humans.

And that’s part of the problem. Some of the humans who have access to AI are using it to commit cybercrime—which means that AI, for all the promise it holds, is helping cyberattackers develop increasingly insidious threats to your firm’s cybersecurity.

Fake videos and bogus phone calls lead to big losses

You might be familiar with the idea of a deepfake video, which uses a person’s image and voice to fabricate footage that appears to show that person speaking, even though it isn’t really them. The same trick works with audio: software can clone a voice so that a call sounds like it’s coming from a person who never actually said the words.

The applications for creating deepfakes are cheap (sometimes even free), readily available and easy to use. And they get results for cybercriminals. In one case, deepfaked audio led to the theft of $35 million. One cryptocurrency executive found that scammers had created a convincing hologram of him to target unsuspecting victims.

Some cybercriminals have played the long game with AI, using deepfakes to apply for, and land, remote tech-support jobs within companies. That gives attackers first-hand access to critical customer data, which they can steal at will.

How to avoid getting duped by a deepfake

In general, deepfakes are convincing but not perfect. For instance, deepfake videos don’t tend to hold up if the subject isn’t looking straight at the camera; a sideways turn of the head can reveal image distortion. Along the same lines, cloned voices on calls tend to sound a little flat compared to the real person’s voice.

One way to sniff out a deepfake is to be suspicious of unexpected contacts. If your manager, who always texts, suddenly calls you, that might be a red flag. Same for a client or colleague who never uses video but suddenly turns on a camera. Cybercriminals might be able to imitate voices or replicate faces, but they don’t necessarily know somebody’s habits.

If staff suspect anything is amiss, they should contact the other person through a different channel. A phone call will usually suffice, as long as the staff member uses a number already on file rather than one provided by the scammer. If a trusted contact is calling from an unfamiliar number, that’s a major red flag; staff should offer to call back at the number already stored in their phones.

Deepfakes highlight the need to train firm staff

It’s hard enough to avoid old-school (and still very dangerous) cybersecurity threats such as email phishing and “smishing,” or phishing via text message. The proliferation of deepfakes and other new forms of cyberattack highlights the need for staff to be more aware than ever of attempted data theft.

Your people are your firm’s best line of defense against cyberattacks. Just about every successful attack has to dupe somebody into letting it in. Even the most sophisticated AI technology generally can’t make off with your data (at least, not yet) if nobody lets it. According to the venerable Verizon Data Breach Investigations Report, 82% of data breaches involve “the human element,” usually someone clicking on a malicious link.

Training staff to recognize and avoid security threats is an essential part of a comprehensive cybersecurity plan. Look for a training partner that stays current on emerging threats and can teach your staff how to steer clear of them. It’s also wise to choose a security partner that can protect your network and devices and mitigate the damage if you do suffer a cyberattack; managing security is a difficult and risky task for non-experts to take on.

Who knows what HAL will do next?

So, what’s the future for AI? More than five decades after HAL caused a fictional catastrophe, emerging products such as ChatGPT and similar offerings from Microsoft and Google have piqued interest in the technology. The potential for AI is massive, of course, but it’s also a little scary. A New York Times reporter’s recent “chat” with an AI bot got creepy pretty quickly.

One thing is for sure, though. As is the case with just about any type of technology, criminals will continue to find ways to use AI for nefarious purposes. It’s happening now and won’t stop anytime soon. Firm leaders and their employees need to be vigilant. Things are not always as they seem.


Learn more about cybersecurity for your accounting firm here.
