The rise of artificial intelligence (AI) contributed to a surge in job scams last year, according to a recent report by an identity theft watchdog group.
The Identity Theft Resource Center (ITRC) found that reports of job scams increased by 118% in 2023 compared to the prior year. That occurred even as the number of overall scams reported to the ITRC fell by 18%.
“Identity thieves are improving at looking and sounding ‘legitimate,’ thanks in part to generative artificial intelligence, especially when it comes to job postings,” the ITRC wrote.
“In 2023 and continuing into early 2024, we saw an increase in identity thieves creating phony job postings on legitimate networking and job search sites, enticing victims to apply for jobs,” ITRC said.
The watchdog group explained that scammers created professional-looking profiles on LinkedIn and job search sites, built functioning websites for phony businesses, or impersonated legitimate companies under fake names to set up interviews.
Once a victim believed they had booked a legitimate interview, the scammers moved the process “off of the original platform to email, text, video conferencing, or a third-party messaging app,” and the would-be applicant was told to fill out paperwork and provide proof of identity.
The ITRC noted that new hires are routinely asked during onboarding to provide sensitive personal information, such as a driver’s license, a Social Security number to prove eligibility to work in the U.S., and direct deposit details, so victims felt secure sharing that information with the scammers.
The group said victims often didn’t become suspicious until after they had shared their information, when the company that had been in regular contact suddenly stopped responding or asked them to provide their ID.me login credentials.
“The rapid improvement in the look, feel and messaging of identity scams is almost certainly the result of the introduction of AI-driven tools,” ITRC wrote. “AI tools help refine the ‘pitch’ to make it more believable as well as compensate for cultural and grammar differences in language usage.”
ITRC added that the “primary defense against this advanced tech is effective and decidedly low-tech: pick up the phone and verify the contact directly from the source.”