Fraudsters' new trap: audio and video impersonation

During the Cyber Security Weekend – META 2023, held in Kazakhstan for the 8th time this year, experts shared developments in the digital threat landscape of the Middle East, Turkey and Africa (META) region and globally.

In a statement at the event, Kaspersky Expert Data Scientist Vladislav Tushkanov said that many fraud schemes are carried out using deepfake and voice-impersonation applications, and that several measures should be taken to prevent them.

Emphasizing the need for companies to take precautions, Tushkanov said: “There may be people who want to scam you with voice impersonation. Large sums, such as 100 thousand dollars, should not be transferred on the strength of a single phone call; there should be protocols in place to avoid such traps. Training and awareness also matter. If you know that deepfakes and voice impersonations exist, you can tell people who try to mislead you with fake audio and images, ‘I don’t want to discuss these issues on the phone. Give me your business email address.’”


Tushkanov noted that some deepfake videos can be detected easily, and offered the following assessment:

“The people in some deepfake videos never blink, so we can spot them easily, but there are also some very well-prepared deepfake videos. The deepfake video of Tom Cruise, for example, was truly professional. The technology for creating deepfake videos is evolving, and while detection technology is evolving too, it is not healthy to rely entirely on technologies that can detect deepfakes.”


Tushkanov also pointed out that there is so much information on the internet that it is impossible to check and verify all of it.

Tushkanov pointed out that machine learning plays a very useful role in the fight against cyber attacks:

“Every day we face many phishing attacks, and every day 400,000 new malicious files spread across the internet. No human can handle that many attacks. We conducted an experiment to see whether ChatGPT could detect phishing attacks, and it did so successfully. However, we could see that ChatGPT made some mistakes. So while I don’t fully trust these systems, I think they have potential.”


The use of voice-imitating applications in fraud schemes has raised concerns in recent years.

The fraud case revealed by The Wall Street Journal in 2019 is known as “the first major fraud case involving voice impersonation”.

The CEO of an unnamed energy company headquartered in the United Kingdom transferred 243,000 dollars to fraudsters on the instructions of a caller he believed was his boss. The scammers had cloned the boss’s voice using a voice-impersonation application.
