In our new world of artificial intelligence, the line between reality and fiction has never been more blurred. AI videos on social media are being treated as authentic, authors are being accused of writing books with AI, artists are passing off AI work as their own, and internet users are accusing animated projects of using AI voices and AI-generated art, despite real humans doing the work.
The bottom line? It has become increasingly difficult, if not near impossible in some cases, for people to distinguish AI from the real deal.
And while many are enchanted with all that AI can do, just as with any new invention, there’s a darker side: fraudsters are weaponizing it to enhance their techniques and make their lies more believable. According to a 2025 AI Fraud Report, 92% of the financial institutions surveyed indicated that fraudsters use generative AI to target their members.
Long gone are the days when scams were so transparent and poorly done that they were essentially the equivalent of two children in a trench coat masquerading as an adult—perhaps believable at first glance, but after an ounce of inspection, the illusion falls apart. With a little help from scammers’ AI fairy godmother, robotic texts riddled with spelling errors are now eloquent voicemails, indistinguishable from human speech. Romance scams no longer require cheap photos pulled from Google to sell the lie; they have deepfake AI videos to do the work for them.
As fraudsters level up their methods, members and credit unions must do the same. The key is knowing exactly how bad actors are taking advantage of AI, what to look for, and who to reach out to when it’s too hard to tell.
The how: transformation (no Polyjuice Potion needed)
In Harry Potter and the Chamber of Secrets, Ron, Harry, and Hermione spend months secretly brewing Polyjuice Potion, which would allow them to take on the identities of Slytherin students and interrogate Draco Malfoy. By disguising themselves as his friends, they knew he would be more open to sharing his secrets and private information.
Bad actors have used a similar strategy for years, pretending to be loved ones in need of help in order to get victims to send money or relinquish their private account details. In Japan, scams like this became so commonplace that they were dubbed the “Hey, it’s me” scam, named after the phrase scammers would open the conversation with. The hope was that the person answering would automatically assign the bad actor the identity of a loved one when addressed with such familiarity.
However, there were always flaws in the execution. Over the phone, recipients, perhaps save for older ones, could typically distinguish their daughter’s or cousin’s voice from a stranger’s, and there was usually little to no evidence or personal detail to make the con believable. After all, scammers didn’t have a vat of Polyjuice Potion on hand (though it would have done wonders for their business plans). And at the time of the book’s release, the entire concept of realistically and convincingly transforming into someone else was so foreign and magical that it seemed nothing short of impossible.
But it hasn’t remained so.
Thanks to artificial intelligence, bad actors and fraudsters can essentially transform themselves into someone else, no Polyjuice Potion required, using tools such as voice replication and deepfake video. If fraudsters have access to someone’s voice and face, say, through a social media post, they can clone both and send a voicemail or video to that person’s loved ones claiming to need money. What once took Hogwarts students months of brewing and magic to achieve now takes only minutes with AI.
At a recent conference I attended, one CEO explained that for the low price of $12.99, he purchased one of these AI voice replication apps and used it to mimic an employee’s voice. The result was a near-perfect replication that even the CEO, who knew it was AI, had an extremely hard time telling apart from his employee’s real voice. In a real-world situation, where he was unaware AI was being used, he might not have been able to tell at all.
Through these tools, fraudsters and bad actors have been able to enhance and perfect their scams, relying on the current inability of many to tell AI and reality apart. So, let’s dive into how exactly these tools are being applied to popular scams.
The methods: phantom hackers, romance scams, and more
Bad actors are now using these transformation techniques to masquerade as members’ loved ones and persuade people, especially older people who are likely unfamiliar with AI, to send money or divulge their account information. The fraudster starts by writing their own choppy text, then asks an AI tool such as ChatGPT to make it grammatically correct and more human-like, inserting the proper “ums” where needed, and finally uses a voice-cloning tool to turn that script into a voicemail. In three steps, they’ve created a message that sounds completely human and legitimate.
Additionally, the FBI is currently warning about the rise of the “Phantom Hacker” scam, a three-step scheme in which scammers use AI to write convincing texts and voicemails. They pose first as a tech support agent who gets the member to download software, then as a representative of the member’s financial institution who claims the account has been hacked and the money must be moved to a “safe” government account, and finally as the government entity itself. The result is the member losing their entire account to the bad actor, assisted at every step by AI.
In the same vein, romance scams have also received a significant boost in believability with the help of AI. In one notable incident, scammers tricked a man into believing that actress Jennifer Aniston was in love with him by repeatedly sending him deepfake videos of the actress saying as much. Of course, “she” eventually asked for money (which should have been a dead giveaway, seeing as the real Aniston is worth millions), and the man sent it.
Now, this may sound like an extreme one-off (after all, how many people can truly be convinced they’re talking to a celebrity?), but it surprisingly isn’t. These scams prey on older or lonely individuals who probably won’t question who’s on the other side of the screen, and if they do, the scammers’ Polyjuice Potion (aka AI) can take care of that, letting them look and sound like anyone they need to, not just celebrities, in order to sell the scam.
The key point here is that while none of these scams are completely new, and your credit union and members may think they’re familiar enough with how they work, the rules of the game have changed. Credit unions need to update their educational materials and share this information with members through social media, newsletters, and email so members know how to recognize these scams in their new forms.
So, the question then becomes, how can they?
How to help your members spot a fake
Given how authentically and convincingly fraudsters can pull off these scams, it can be nearly impossible to spot the wolf in sheep’s clothing. It’s no longer as simple as ignoring a poorly written text message asking you to pay a toll for a road you no longer drive on. It requires members to question voicemails and videos from people who look like their loved ones, sound like their loved ones, and claim to need help. So how can members spot the difference?
First, while a cloned face may look and a cloned voice may sound like a certain individual, these are not perfect copies, and it is possible to see or hear the difference upon close inspection. But when the fake is too close to the original to tell, members should check the phone number or channel the voicemail or video came through. A call from a member’s daughter, for example, should display her saved contact name and number. If it comes from an unknown number (even one with the same area code), members should hang up and call the person claiming to need help back at the number in their contacts to confirm.
The same method applies to Phantom Hacker scams. If a member gets a text or voicemail from someone claiming to be the credit union, whether saying they’ve been hacked or that the credit union needs information of some kind, they should ignore the message (remembering not to follow any links in it) and reach out to the credit union directly via a verified phone number. Credit unions should also consistently remind members when and how they will reach out and what kind of information they will and will never ask for.
Half the reason these scams work is that the fraudster places a sense of urgency on the victim, telling them their bank account is being hacked, their loved one is in danger, or they might be in violation of the law. This panic causes people to miss key indicators that they’re being scammed and prevents them from stopping long enough to ask questions. The best thing the credit union and the member can both do is slow down, double-check what they’re being told, and follow up with trusted and verified numbers.
Fighting the phantom
Unfortunately, there is no way to stop the phantom in the phone from reaching out and attempting to fool members with its ghostly apparitions and magical transformation spells (I’ve received three such attempts in the time it took me to write this article).
But by educating members on how these scams work, particularly the role AI plays in them, credit unions can help them learn to tell friend from phantom, or at least know what steps to take when they can’t tell on their own.