Maybe you’re just now starting to pay attention to the many ways artificial intelligence is invading our personal and professional lives. AI is not necessarily a bad thing. In fact, if you don’t embrace it, you will miss out on one of the biggest technological advances in our lifetime. (Remember those who dismissed the Internet and cell phones as helpful but unnecessary toys?) Just like fire, AI should come with a warning: it can be very helpful but will burn you if you don’t know how to handle it properly.
And now it appears that cybercriminals are using AI technology to create what are known as Deepfakes: audio, photos, text messages, videos and other forms of communication that are almost impossible to distinguish from the real thing. AI's powerful modeling systems can produce incredibly convincing images and audio of celebrities, your family members, and the partners of your law firm or board of directors. It's hard to keep up with the evolution of phishing fraud.
Deepfake scammers recently stole $25 million from a multinational company based in Hong Kong. The victimized employee even suspected a phishing scam at first, but his fears subsided once he joined a video conference call with what appeared to be his CFO and other high-ranking executives of his company.
The employee thought something was suspicious when he received an urgent email from his chief financial officer with a link to join a discussion about a secret business deal.
Red Flag #1
Always go with your gut. If something feels wrong, slow down and pay attention to your instincts.
Red Flag #2
A phishing scam is almost always urgent. Bad actors want you to react quickly and without question. The employee proceeded with caution but nonetheless joined the video conference call, where he recognized all the names, faces, voices and office backgrounds.
Red Flag #3
Whenever you’re invited to click on a link, stop and think. Hover your cursor over the link to verify that it truly points to a source you know. Better yet, don’t click at all.
Pick up the phone. Call the source and verify the request. Don’t use the phone number or email address provided in the suspicious communication you’ve received; use the known contact information you’ve relied on in the past. Your superiors should be grateful for your distrust of urgent, unexpected communications from them, so don’t feel bad about questioning authority in a situation like this.
Following the direction of the CFO and the other corporate leaders, the employee initiated a series of 15 bank transfers to five different Hong Kong accounts totaling more than $25 million in US dollars. The leaders advised him to maintain discretion and not to discuss the highly secretive business venture with his coworkers. A week after the transfers, with no update on the secret deal, the employee contacted his company’s home office for a status report. No one knew what he was talking about, because there was no secret business deal. That’s when he realized he had been scammed.
In an early February press briefing, the Hong Kong police did not release the identity of the company or the scammed employee, but they disclosed that the scammers used AI to fabricate images of real individuals from photos and videos that were readily available online. They were also able to replicate voices using real audio samples found online.
This is only the first of many Deepfake scams that will be reported. In hindsight, it’s easy to point out this unfortunate employee’s missteps. But in the heat of the moment, would you react without hesitation if you thought you had received an urgent communication from someone you trust? From someone in a position of power who could reward you for a job well done and done quickly?