The Rise of A.I. and the Need for Even Greater Vigilance

Welcome back to Financial Literacy Month. Today I am going a bit off script to discuss the more nefarious side of artificial intelligence and why educating our members and staff alike is so important to avoid falling for increasingly sophisticated attacks. Teaching our communities to be responsible with money is a great cause, but ensuring those hard-earned savings are not wiped out in one fell swoop matters just as much.

A.I. and large language models have created incredible opportunities

Before I go any further, I want to make it clear that this is not an attack on artificial intelligence, on recent developments in large language models (think ChatGPT, Bard, Gemini, etc.), or on the applications credit unions are developing to improve member interactions.

Credit unions have employed these tools to improve communications with members, provide more efficient access to information and answers, give members more control to accomplish tasks on their own, and even detect member sentiment to identify areas of frustration and room for improvement.

And sure, like many emerging technologies, A.I. carries risks for businesses looking to blaze the trail in its application. Overzealous deployment could result in the wrong kind of information being gathered and analyzed without member consent, leaving credit unions open to lawsuits.

But these are all things a credit union can manage and improve upon to serve members well. Instead, today I want to talk about how bad actors are using these same tools to devise increasingly believable and sophisticated social engineering attempts that could leave you, your staff, or your members vulnerable.

Phishing emails and texts have come a long way

We in the credit union space are not new to the world of cybersecurity threats, computer security training, and passing along helpful information to members. We stake our reputations on staying current on threat vectors and knowing what to look out for, because so much depends on our vigilance. Even if we employ state-of-the-art systems with a complex, layered approach to security, it's the human layer that hackers most often target.

Bad actors prey on us through a sense of urgency, tricking us into clicking a link before we have fully considered the message and investigated it safely. Thankfully, we have mostly been able to rely on telltale signs: these bad actors often give themselves away with odd syntax, poor spelling and grammar, and obviously fake links, making phishing attempts quick and easy to identify. Even so, telling the safe links from the fake ones has become trickier.

What happens, though, when criminal organizations add another tool to their tool set?

A.I. and LLMs make it harder to spot fakes

The beauty of large language models such as ChatGPT is that they have given millions of people not only answers, but the ability to improve the quality of their writing by running it through the models. Is English not your first language? LLMs can help you out. Need assistance turning a terse email into a kinder one? LLMs can do that too.

Unfortunately, this also means that scammers can turn to these tools to improve their own communications with prospective targets, and not just to clean up grammar and spelling errors. In a recent example at our organization, a very convincing phishing email not only came across as professionally written, it was well researched, too. It addressed the target's specific area of business within the organization and created a believable conflict needing resolution (ironically, in this case, purporting to be a response to a cyber incident at another organization). It even named other real organizations and real individuals at those locations, complete with their actual email addresses.

With everything appearing legitimate, it is that call to action that requires a great deal of scrutiny. In this case, the form that the recipient was meant to fill out actually directed to a website hosted in Botswana.
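
For the technically inclined, a link's destination is often the quickest thing to verify. Here is a minimal sketch in Python (the URL below is hypothetical, not the actual link from the incident) showing how even a few lines can surface a suspicious country-code domain; note that confirming where a site is truly hosted takes a WHOIS or IP lookup, which this simplification skips.

```python
from urllib.parse import urlparse

# Hypothetical link, not the actual one from the incident described above.
link = "https://incident-response-forms.example.co.bw/claim"

host = urlparse(link).hostname or ""

# A country-code TLD your organization never does business with is an
# easy first red flag; a true hosting check requires a WHOIS or IP lookup.
if host.endswith(".bw"):
    print(f"Red flag: {host} falls under Botswana's .bw country code")
```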

A measured response is a safe one

The key takeaway is perhaps the toughest to follow through on: slowing down. We lead busy, hectic lives, and scammers seek to capitalize on this by piling on a sense of urgency. When we take the time to slow down, assess the communication, and ask ourselves whether everything adds up, we stand a much better chance of catching a phishing attempt before it becomes an issue.

Ask yourself whether the communication really would be coming over email, just as you warn members that you will never ask for their login information. Scan the links carefully by holding your cursor over them without clicking to ensure they really go where they suggest they will. And if you still aren't sure, err on the side of caution by getting your IT team involved.
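
For teams that want to take that hover check a step further, here is a minimal sketch in Python's standard library (the email body and both domains are hypothetical) that flags links whose visible text displays one domain while the underlying href points to another, which is exactly the mismatch that hovering reveals.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    """Collects each anchor's visible text alongside its actual href."""
    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.links = []  # (visible text, actual destination)

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append(("".join(self._text).strip(), self._href))
            self._href = None

# Hypothetical email body: the visible text claims one domain,
# but the href quietly points somewhere else entirely.
body = ('<p>Review the report at <a href="https://forms.example.co.bw/x">'
        'https://portal.yourcu.com/report</a></p>')

auditor = LinkAuditor()
auditor.feed(body)
for text, href in auditor.links:
    shown = urlparse(text).hostname
    actual = urlparse(href).hostname
    if shown and shown != actual:
        print(f"Mismatch: displays {shown!r} but actually goes to {actual!r}")
```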

When we know what to look out for, we can better educate our members and ensure the fruits of their hard-earned financial literacy labor are safe from bad actors.

Author

Esteban Camargo

As a supervising editor of CUSO Magazine, Esteban reviews and edits submissions, assists in the development of the publishing calendar, and performs his own research and writing. His experience provides CUSO Mag with a seasoned writer and content curator, able to provide valuable input to contributors, correspondents, and freelance journalists. Esteban has worked at CU*Answers since 2008 and currently serves as the CUSO's content marketing manager.

