A new survey by Truecaller finds that 77% of Asian Americans fear AI-powered scams and that nearly one in three has already fallen victim to deepfake voice fraud, including schemes that exploit immigration policy changes.
Truecaller, a platform at the forefront of combating digital fraud, recently conducted its 2026 AI and Fraud Scams Survey. The study revealed key insights into how AI scams are reshaping the way Americans view communication: three in four Americans were targeted by a scam call or text in the past year, and nearly one in three received a deepfake voice call impersonating a family member.
The findings are even more striking for Asian Americans. Among those respondents, 77% reported concern about the risks of AI-powered scam calls and messages, while 85% expressed worry for their younger and older relatives. Beyond concern, 30% said they had fallen victim to deepfake voice scams.
To better understand how minority communities, including Asian Americans, are responding to the growing threat of advanced scams, The American Bazaar spoke with Clayton LiaBraaten, senior executive adviser at Truecaller. LiaBraaten, a digital transformation evangelist, also outlined the safeguards needed in an increasingly AI-driven and largely uncharted landscape.
The American Bazaar: Your latest survey shows 77% of Asian Americans are worried about AI-powered scams. Why do you think this concern may be so pronounced within this community?
Clayton LiaBraaten: There is a heightened vulnerability to AI scams for the Asian American population, mainly because many may not have the same fluency or familiarity with AI-generated voices or deepfakes. This makes it much harder to spot fraudulent activity, leading to increased anxiety about the safety of their personal information and the security of their friends and family.
Recently, for example, scammers have been exploiting H-1B visa policy changes to take advantage of immigrants, fueling a spike in skepticism and distrust of phone communications.
Nearly 30% of Asian Americans report falling victim to deepfake voice scams. What makes these scams particularly effective among Indian and South Asian families?
Deepfake voice scams are particularly effective within this demographic because of cultural factors like trust and strong familial bonds. Scammers can use AI-generated voices to impersonate family members or trusted community figures, which is especially concerning in close-knit family structures. These are sophisticated social engineers who know which buttons to push to get you to act. The perceived authenticity of a voice can easily manipulate individuals, particularly older relatives or those less familiar with the technology, increasing the likelihood of their falling victim to these scams.
The survey highlights that non-native English speakers face higher vulnerability. How are scammers leveraging language, accents, and cultural familiarity to build trust?
Scammers are incredibly adept at exploiting cultural nuances, including language barriers. They use AI to learn about a culture and craft more convincing conversations. By texting or calling in the victim’s native language, or by mimicking familiar accents and phrases, fraudsters make their messages seem more legitimate. That sense of familiarity can lower a potential victim’s guard, especially for those who may be less adept at detecting fraud in a second language or more trusting of calls in their native tongue.
You’ve described this moment as a “communication paralysis,” with people afraid to answer calls. How is this impacting immigrant communities that rely heavily on phone communication for work, healthcare, and family connections?
It is, no doubt, a double-edged sword. While these communities need to stay connected, they now live in a constant state of uncertainty about whether a call is legitimate, leading to missed opportunities, whether for a job, healthcare updates, or family emergencies. It’s a tragic situation where the fear of fraud is eroding trust in an essential communication tool.
Despite high awareness, 45% of Asian Americans don’t know what steps to take if they’re scammed. Why does this preparedness gap persist, and how can it be addressed?
That’s a great question. I think it largely comes down to a lack of clear communication about recovery steps and the overwhelming nature of these scams. Far too often, people feel embarrassed or ashamed when they become scam victims. The truth is, it happens to the best of us, and we have to talk about it more. The stigma, especially within tight-knit communities, often prevents people from seeking help. To close this preparedness gap, we need better public education on how to respond to scams and how to report them, as well as wider adoption of tools, like Truecaller, that can detect and prevent scams before they even reach people’s phones.
From your vantage point, are Indian American and broader Asian American communities underreporting scams due to stigma, lack of awareness, or distrust of institutions?
It’s a combination of all three. The shame or embarrassment people feel in admitting they’ve fallen for a scam, especially when it’s something as personal as a family member being impersonated by an AI-generated voice, is completely understandable. There may also be a deep-seated distrust of the institutions that handle reporting, particularly among immigrant communities that have not had positive experiences with authorities, which can make them hesitant to involve agencies or law enforcement. I will say, underreporting only makes it harder to understand the full scale of the problem and to effectively protect communications in the future.
For readers who may not be familiar, how does Truecaller actually work in identifying and blocking AI-driven scam calls?
Our system leverages a vast database of user reports, which helps identify and block phone numbers or messages that match known scammer profiles. So when an unfamiliar phone number pops up, Truecaller can immediately indicate whether it is likely a spam caller or a verified phone number registered with the FTC. The system continuously updates to stay ahead of evolving scam tactics.
Our real-time monitoring means that as soon as a scam tactic is detected, it is flagged and blocked for other users, which is an evolving, dynamic process that helps keep the protection layer fresh and effective. The Truecaller Family Plan is a great resource to keep everyone secure. Younger, more tech-savvy family members who are more familiar with digital threats can help protect their older family members.
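For readers curious how the crowdsourced lookup LiaBraaten describes might work, a minimal sketch in Python follows. All names, numbers, and the threshold are hypothetical illustrations, not Truecaller’s actual code or data.

    # A minimal, hypothetical sketch of a crowdsourced caller-reputation lookup.
    # All names, numbers, and thresholds are invented; this is not Truecaller's code.
    from dataclasses import dataclass

    @dataclass
    class NumberReputation:
        reports: int      # user-submitted spam reports for this number
        verified: bool    # identity confirmed out of band (e.g., a registered business)

    # Toy stand-in for the shared, continuously updated report database.
    REPORT_DB = {
        "+15551230001": NumberReputation(reports=412, verified=False),
        "+15551230002": NumberReputation(reports=0, verified=True),
    }

    SPAM_THRESHOLD = 50  # hypothetical cutoff for warning or blocking

    def classify_caller(number: str) -> str:
        """Label an incoming number using crowdsourced reports."""
        rep = REPORT_DB.get(number)
        if rep is None:
            return "unknown"       # no community data yet
        if rep.verified:
            return "verified"
        if rep.reports >= SPAM_THRESHOLD:
            return "likely spam"   # flag before the phone even rings
        return "unknown"

    print(classify_caller("+15551230001"))  # likely spam
    print(classify_caller("+15551230002"))  # verified

The design point the sketch captures is that every user report enriches the shared database, so a number flagged by some users can be warned about or blocked for everyone else.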
As scams become more sophisticated with AI and deepfakes, how is Truecaller evolving its technology to stay ahead of these threats?
We like to say at Truecaller that we’re fighting AI with AI. We use a mix of machine learning algorithms, crowdsourced data, and AI technology to identify unusual call patterns, spoofed caller IDs, AI-generated voices, and more, and we use those signals to intercept malicious call attempts. We also analyze metadata from incoming calls, such as the timing, frequency, and type of request, to flag potential scams.
By continuously gathering and analyzing real-world data from over 500 million users, we can adapt quickly to emerging AI scams. Additionally, we are working on integrating advanced verification tools for calls and messages to ensure that when a legitimate call or text does come through, users can trust it. This level of proactive protection is key in a world where scammers are leveraging AI to make their attacks more convincing.
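To make the metadata analysis he mentions concrete, here is a minimal sketch of scoring a call from signals like timing and frequency. The features, weights, and cutoff are invented for illustration and do not describe Truecaller’s actual models.

    # A minimal, hypothetical sketch of flagging a call from its metadata.
    # Features, weights, and the cutoff are invented; not Truecaller's model.

    def scam_risk_score(call_hour: int, calls_last_hour: int,
                        caller_id_matches_route: bool) -> float:
        """Combine simple metadata signals into a 0-to-1 risk score."""
        score = 0.0
        if call_hour < 7 or call_hour > 21:   # odd-hours calling pattern
            score += 0.3
        if calls_last_hour > 20:              # burst dialing from one number
            score += 0.4
        if not caller_id_matches_route:       # caller ID inconsistent with network route
            score += 0.3
        return min(score, 1.0)

    RISK_THRESHOLD = 0.6  # hypothetical blocking cutoff

    risk = scam_risk_score(call_hour=2, calls_last_hour=35,
                           caller_id_matches_route=False)
    print(f"risk={risk:.1f}", "block" if risk >= RISK_THRESHOLD else "allow")

In a real system, such weights would be learned from labeled call data rather than hand-set, which is where the machine learning LiaBraaten describes comes in.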

