Deepfakes and AI-enabled impersonation rank among top recruitment threats, research reveals

The use of artificial intelligence in recruitment is becoming more complex, as growing concerns over deepfakes and AI-enabled impersonation threaten to undermine trust in the hiring process.

More than two thirds (69 per cent) of UK hiring leaders say AI-enabled impersonation and deepfake technologies represent the most sophisticated emerging threats to recruitment integrity, according to research from background screening provider First Advantage.

Rolf Bezemer, executive vice president and international general manager at First Advantage, warned that advances in AI mean "seeing is no longer believing" when hiring candidates online.

"The sophistication of AI-generated fraud is accelerating at a pace that outstrips many current screening protocols. Employers must now contend with candidates who can convincingly simulate identities, credentials and even live interactions," he said.

Nearly a quarter (23 per cent) of companies reported identity fraud among new hires, according to research from background screening company Checkr, while Gartner predicts that, by 2028, one in four candidate profiles worldwide will be fake.

Vijay Balasubramaniyan, co-founder and CEO of Pindrop, a security company specialising in deepfakes and AI fraud, has seen demand for its services increase in response to the growing threat.

"These aren't theoretical threats; attackers are using cloned voices, deepfake video and AI-generated text to impersonate executives, customers, job candidates and vendors," he said. "As generative AI lowers the barrier to impersonation, deepfakes are becoming one of the fastest-growing initial access vectors for fraud."

Deepfake-related activity increased by 1,300 per cent year on year, according to Pindrop research, with AI-generated or replayed audio appearing in around one in every 106 company meetings it monitored.

Deceptive AI use raises suspicion

For businesses, the impact of AI-enabled fraud can be "massive", spanning financial loss, security exposure and trust breakdown, Balasubramaniyan said. "Beyond direct losses, deepfakes can enable account takeovers, fraudulent wire transfers, hiring fraud, ransomware initiation and long-term insider risk," he explained.

Emi Chiba, HR tech analyst at Gartner, said organisations faced the challenge of maintaining the quality and integrity of their recruiting process while trying not to damage the candidate experience.

"The goal is to prevent the most extreme example of hiring someone who then plants ransomware or steals your intellectual property. However, all candidates are adopting and using AI throughout the recruitment process, making it more difficult to discern intention and actual skill ability from all candidates," she added.

Legitimate applicants are also using AI to covertly assist them during the recruitment process. Nearly all (91 per cent) US hiring managers have encountered or suspected AI-generated interview answers during online meetings, according to Greenhouse's 2025 AI in Hiring report. In addition, 65 per cent of hiring managers have caught applicants using AI deceptively, such as reading from AI-generated scripts (32 per cent).
"People may have good intentions and not consider themselves fraudulent but, if they're using AI to exaggerate their skills, it can bend the truth," said Gary Billings, founder of Techtrace Partners, a cyber security recruitment firm.

Rob Scott, managing director of recruitment firm Aaron Wallis, added that he has encountered candidates using AI tools in real time to interpret questions and generate answers. "You can see candidates look at another screen and pause before they answer, so it's quite easy to spot. The answers will often contain a lot of words but not a lot of substance," he explained.

Frequent exposure to this behaviour is making hiring managers more sceptical. "It happens very frequently and is affecting hiring managers' judgement as they are now more suspicious of all candidates," said Billings.

As the technology becomes more sophisticated, recruiters have also raised concerns that candidates could use AI avatars to conduct interviews on their behalf.

"I suspect this will become more of a risk," Billings continued. "The avatars aren't perfect but, when you're hiring remotely and you don't know the person, how can you tell the difference?"

Combating AI threats

To tackle rising levels of AI fraud, recruiters should add more steps to the verification process, such as biometric checks and IP address or location verification, Chiba advised. "Recruiting should deter fraud through strong employer branding and communicating acceptable AI use both in the role and during the recruitment process," she added.

Where candidates are not impersonating someone else but are relying heavily on AI support, Chiba suggested hiring managers shift towards more personal or behavioural questioning to assess genuine capability.

This is a tactic Scott uses to ensure candidates can think on their feet without resorting to AI. "If I feel an answer is too clinical or contrived, I would ask an emotion-based follow-up question, such as: 'How did that make you feel?' AI bots provide very sanitised responses to emotion-based questions," he said.

Billings added: "It's important to remember that the AI isn't the enemy. It does a lot of good but there are also significant risks as the technology develops."