A fake video conference netted threat actors HK$200 million (about US$25 million) in a first-of-its-kind deepfake scam. Deepfake audio, nothing new in the criminal underworld, was one major component of the fraud. But the attackers were also able to simulate video for a virtual room full of conference participants, apparently using nothing but existing public sources.
AI opens up a new world of fraud
AI tools are the wave of the future and have caught the attention of nearly every sector of business, but some of the earliest and most efficient adopters are cybercriminals. ChatGPT makes it trivial to polish rough second-language phishing emails, and similar chatbot tools can already be coaxed into writing malicious code.
Deepfake scams have also benefited, though to date most have focused on fake audio-only phone calls. Fake audio was a key component of the Hong Kong fraud as well, but this scheme took things up a notch by incorporating multiple sources of video to make a supposed online conference look extra convincing. The eye-popping amount of money is also sure to catch the attention of other scammers.
Deepfake scam stresses need for “secrecy”
In retrospect, the deepfake scam had several red flags attached. The first of these was a phishing email that appeared to come from the unnamed company’s CFO, which the employee says they noted with suspicion. The scammers nevertheless convinced the employee to join a video chat, where the fake CFO (accompanied by some employees whom the target recognized) stressed the need for “secret” transactions and eventually talked the employee into making 15 separate bank transfers.
The fraud was not detected for a week, until the employee contacted corporate headquarters to follow up on the transactions. Baron Chan, senior superintendent of the Hong Kong police’s cyber security division, believes that the video used in the group chat was taken from prior public appearances or recorded conferences. The audio, however, appears to have been an AI-trained voice clone capable of communicating with the target in real time. Synthesized voices such as these are now relatively easy to produce, with several software options able to clone a voice in seconds to minutes.
Fraud involving deepfakes saw a major uptick in 2023, with the technology seemingly reaching a level of realism that criminals consider good enough to put to use. The Hong Kong police have said that fake video has become particularly popular for tricking the online facial recognition systems used to remotely open financial accounts.
It is unclear how this problem will be solved going forward. Drafting new regulation is unlikely to help much, as deepfake scams are already covered by numerous applicable laws. One increasingly popular line of regulatory thought is to instead put pressure on the platforms where deepfakes might be disseminated, assigning them a share of responsibility for any negative fallout. That could push platforms to implement the best possible detection and labeling systems.