America's three greatest fears about AI revealed as FBI issues stark warning
Americans are growing increasingly worried that they could lose their life savings to deceptive AI scams that are becoming more lifelike every day.

A new Daily Mail poll found that falling for AI-enabled frauds and scams was the biggest fear among people in the US, followed by AI leaking their private data online and more jobs being handed over to AI-powered robots.

The survey of more than 3,000 people showed that 37 percent of Americans ranked AI-powered fraud among their top three concerns.

That significantly outpaced hotly debated concerns such as AI showing political bias (18%), AI chatbots impacting education (19%) and intelligent robots lowering human creativity (24%).

According to the FBI's latest report on internet crime, Americans have their focus in the right place when it comes to how artificial intelligence could potentially harm them and their families.

The FBI's Internet Crime Complaint Center (IC3) revealed that just under $900 million was lost to AI-related crimes last year. More than two-thirds of the money stolen was connected to schemes involving phony investment opportunities.

The FBI warned: 'Investment clubs employ AI-generated videos and voices of celebrities, CEOs, or trusted figures to create fraudulent, high-stakes opportunities.

'These scams often feature fake, professional-looking endorsements on social media or in video calls. This makes it harder for victims to detect they are in a scam.'
American voters said their biggest concern about AI is falling for an AI-generated scam that steals their money (Stock Image)

AI chatbots have become an everyday tool in the US, but voters told the Daily Mail they have many concerns about their safety and influence (Stock Image)

AI tools have helped scammers create more sophisticated fakes than ever before, using tactics such as voice cloning and deepfake videos to convince everyday people to hand over their money or access to their bank accounts.

Voice cloning involves scammers taking short public audio clips, often from social media, and using them to recreate a person's voice with advanced AI programs.

According to the US Federal Trade Commission (FTC), this has been a common tactic in the 'grandparent scam,' in which AI fakes an urgent call, often to senior citizens, claiming a family member is in trouble and needs money wired immediately.

Meanwhile, AI has made deepfake videos so convincing that even major companies have fallen victim to the scams. In 2024, UK-based engineering firm Arup lost $25.6 million after a deepfake video call impersonating its chief financial officer led to a fraudulent transfer being authorized.

The new poll, conducted by JL Partners between December 2025 and February 2026, also found that AI's impact on the safety and security of children was a major concern, especially among younger adults between the ages of 18 and 49.

Overall, 14 percent of respondents ranked the fear of AI endangering children's safety as their number one concern.

According to the National Center for Missing and Exploited Children, a nonprofit dedicated to protecting children, generative AI has become the favorite new weapon of child predators in recent years.

In 2025, the group received more than 1.5 million reports involving generative AI video, images and deepfakes being used for child sexual exploitation.
A new poll found 14 percent of Americans say the danger AI poses to children's safety is their greatest concern (Stock Image)

Nearly half of all respondents (48%) believed AI was having a negative impact on children. Voters over the age of 65 were the most likely to believe this, with one in three saying AI was having a 'very negative' impact.

Interestingly, adults between 30 and 49 were the least likely to think AI was bad for kids, with only 14 percent calling its impact 'very negative' and another 14 percent saying AI's influence was 'very positive' for children.

The Daily Mail poll also found that, because of these growing concerns, there was bipartisan support for increased regulation of AI.

Although the strongest support came from respondents identifying as Republicans, 58 percent of all voters said there needs to be 'somewhat more' or 'much more' government control over AI.

As AI becomes a bigger part of everyday life, more and more space has been taken up by data centers, the power-hungry backbone of artificial intelligence that packs thousands of computers, servers and GPUs into giant facilities.

Thousands of these facilities throughout the US provide the immense computing power, storage and cooling needed to train, run and store large AI models such as OpenAI's ChatGPT, Anthropic's Claude and xAI's Grok.

However, these giant facilities have been accused of pumping out dangerous pollutants that can cause asthma, cancer and even death in the communities where they sit.

That may be why over one-third of respondents (35%) said there are too many data centers in America.
Pictured: An Amazon Web Services data center known as US East 1 in Ashburn, Virginia

As for the information coming out of those powerful AI chatbots, Americans were equally concerned.

Thirty-two percent of voters ranked the inaccuracy of information from chatbots among their top concerns.

Recently, a pair of studies by the Massachusetts Institute of Technology and Stanford revealed that AI assistants such as ChatGPT, Claude and Google's Gemini regularly provide overly agreeable answers, which can send users into a 'delusion spiral.'

Specifically, researchers found that when people asked questions or described situations in which their beliefs or actions were incorrect, harmful, deceptive or unethical, AI replies were still 49 percent more likely than responses from real people to agree with the user and validate their mistaken views.

Other topics Americans rated as top concerns included surveillance and monitoring using AI (28%) and a lack of transparency from AI companies (19%).

With few Americans ranking fears of AI influencing their political beliefs or impacting education among their top concerns, it came as little surprise that only four percent of respondents said they get their news from AI summaries on the internet.

More than one in three people (35%) still said they turn to local TV news programs for information on current events. Another 20 percent have shifted to social media, and 13 percent rely on news websites.

Despite those findings, 31 percent of voters told the Daily Mail that AI has weakened their trust in what they see in the news each day.