Generative AI fraud is on the rise

Beware of sophisticated phishing, fraud and scam attacks!

Criminals may be using deepfakes of your co-workers, friends and close family members.

As AI technology advances at breakneck speed, it is more important than ever to be vigilant and cautious in our interactions – in business and in life.

It is no longer just about suspicious phishing emails, texts or calls, but also about seemingly human interactions with strangers – and with people we (think we) know, even close family members!

Generative AI is getting better at creating deepfakes, which criminals use to extort money from anyone who fails to notice that something is off.

Deepfake voice is used to call people susceptible to emotional pleas – often, as usual, the more vulnerable elderly – with dramatic requests for money from their children or grandchildren, who are seemingly in desperate need of quick cash to get out of trouble.

The story may involve an accident, an arrest, or being held at knifepoint. Or it could be your boss demanding that you immediately pay an outstanding invoice.

The situation may be real, or a complete fabrication by criminals who have trained AI models to fake the voice – or even the video – of your boss, co-worker, friend, or family member.

How do criminals learn who their targets are, and where do they get the voice samples to train their AI models on? Simple: the internet and social media are full of video clips, podcasts, and other information – more than enough to build profiles of many potential targets of a scam attempt.

For now, asking a few probing follow-up questions may suffice to break the illusion, because those recordings are pre-generated. But as the technology gets faster and starts generating voice and video on the fly, we may soon be in even deeper trouble.

Good advice is not to succumb to panic, but to ask thoughtful questions to verify the identity of the person we think we are talking to – or to say we will call back. Not always easy under strong emotions!

In the future, with enough personal information used to build an interactive psychological profile of a person – something already possible with LLMs such as ChatGPT – even questioning will become more difficult. And deepfake technology will only become more realistic and believable.

Stay vigilant, stay safe, and try to limit how much personal information you share publicly. Remember that even seemingly private systems can get hacked. Educate your family members, including your children and the older generation.

Strange and interesting times ahead for us all!

#fraudprevention #scam #ai #cybersecurity #thinktwice

Rafal Bergman