Fraudsters seem to be perpetually ahead of the curve. Early in 2022, research indicated that one in four online accounts was fraudulent, a figure that has only escalated since. In the auto lending sector alone, a 98% rise in synthetic fraud drove $7.9 billion in losses in 2023. Leveraging generative artificial intelligence, fraudsters are increasing both the sophistication and the volume of fake accounts, bypassing verification processes and defrauding businesses.
The surge in stolen and synthetic identities has introduced new challenges, and many businesses are now grappling with fake customers inside their systems: financial institutions inadvertently extend credit to synthetic identities, and educational institutions process applications from non-existent students. Yet efforts to combat this fraud often unintentionally alienate genuine customers.
Advances in AI have given rise to “super-synthetic” identities, which pose an even greater threat than their predecessors. These identities are entirely self-learning and automated. Instead of relying on brute force, they adopt a more sophisticated approach, engaging in small, human-like transactions over extended periods. AI enables these fraudsters to create convincing replicas of an ideal customer, such as a college freshman seeking financial aid. This methodical activity often evades detection, ultimately leading to successful fraudulent applications for credit.
Content was cut in order to protect the source. Please visit the source for the rest of the article.
This article has been indexed from CySecurity News – Latest Information Security and Hacking Incidents