Cognitive Decline and Financial Fraud: The Resources
Plan providers are using technology to authenticate participants, secure accounts, flag suspicious activity and help protect participants’ assets.
Older Americans have become a prime target for financial fraud and scams. A recent white paper from retirement solutions provider TIAA noted that the Federal Trade Commission received nearly 400,000 fraud and scam complaints from adults aged 60 and older in 2023. It is a lucrative market for criminals: those complaints reflected $1.9 billion in total losses.
The TIAA report highlighted several factors that make older adults more likely to be victimized than younger people. Older people are often wealthier, with their savings frequently concentrated in retirement savings plans. They might have less experience with technology, making them more susceptible to hackers and scammers looking to access confidential data kept on their computers. Other risk factors include social isolation, loneliness, and an age-related decline in cognitive function.
Scams come in multiple forms. The scammer might impersonate a business, government employee, family member, or friend. They could offer participation in a can’t-lose investment or the chance to collect on a winning lottery ticket. Lonely people can be particularly vulnerable to romance scams. Artificial intelligence can enhance scammers’ efforts with deepfakes that clone the voices and images of coworkers or relatives.
There is no simple, one-time solution for protecting older plan participants from scams because the threats evolve constantly. Sastry Durvasula, chief operating, information and digital officer at TIAA, describes the process as a cat-and-mouse game. “One can’t say at any given point that they have won because there is the next threat in the making,” he says. “The threat actors also can’t say that they’ve won because there’s always next level of prevention happening.”
Verifying Identity
Plan providers fulfill several protective roles for participants. It starts with authenticating participants’ identities to prevent unauthorized account access. Providers have adopted mobile apps that offer multifactor authentication and biometric checks such as fingerprint and facial recognition. Durvasula says TIAA moved to passwordless logins to reduce the risk of stolen IDs and passwords. “We have deployed passkey authentication, which no longer requires a user ID or password,” he explains. “We also implemented a secure document identity verification process during the digital registration. Participants can provide an image of their driver’s license and a selfie as a means to self-identify. It’s almost like opening a bank account as you walk into a bank.”
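TIAA has not published its implementation, but passkey logins in general are built on the WebAuthn standard, in which the browser asks a device authenticator (fingerprint reader, face scan, or device PIN) to create a cryptographic key pair instead of a password. A minimal client-side sketch follows; the relying-party name, user details, and challenge handling are illustrative placeholders, and in practice the options come from the provider’s server.

```typescript
// Minimal sketch of WebAuthn passkey registration in the browser.
// All values (rp id, user info, challenge) are illustrative; a real
// deployment fetches the challenge and options from its server.
async function registerPasskey(): Promise<void> {
  const publicKey: PublicKeyCredentialCreationOptions = {
    challenge: crypto.getRandomValues(new Uint8Array(32)), // server-issued in practice
    rp: { id: "example-provider.com", name: "Example Provider" },
    user: {
      id: new TextEncoder().encode("participant-12345"), // opaque user handle
      name: "participant@example.com",
      displayName: "Example Participant",
    },
    pubKeyCredParams: [{ type: "public-key", alg: -7 }], // -7 = ES256
    authenticatorSelection: {
      residentKey: "required",      // discoverable credential, i.e., a passkey
      userVerification: "required", // biometric or device PIN, not a password
    },
  };

  // The browser prompts for a fingerprint, face scan, or device PIN;
  // the private key never leaves the authenticator.
  const credential = await navigator.credentials.create({ publicKey });
  // The credential is then sent to the server for verification.
  console.log("Registered credential:", credential);
}
```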
Providers also use voice biometrics to defend against audio deepfakes. Liveness detection systems listen for natural vocal traits, such as background noise, irregular pronunciations, and emotional nuances, that deepfakes can lack. Spectral analysis searches for differences between deepfake and human voices that can show up at high frequencies. Deepfake audio also tends to miss microvariations such as natural pauses and shifts in intonation. Additionally, some verification systems can ask unpredictable challenge questions that deepfakes struggle to answer in real time.
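Vendors do not disclose their detection models, but the high-frequency cue is easy to illustrate. The toy sketch below computes the share of an audio frame’s spectral energy above a cutoff frequency; the cutoff and threshold are invented for illustration, and a real detector would feed many such features into a trained model rather than apply a single ratio test.

```typescript
// Toy sketch of one spectral cue: synthetic speech often carries less
// energy above ~8 kHz than live speech. Illustrative only.

// Naive DFT magnitude at one frequency bin (O(n) per bin; fine for a demo).
function binMagnitude(samples: Float32Array, bin: number): number {
  let re = 0, im = 0;
  const n = samples.length;
  for (let t = 0; t < n; t++) {
    const angle = (-2 * Math.PI * bin * t) / n;
    re += samples[t] * Math.cos(angle);
    im += samples[t] * Math.sin(angle);
  }
  return Math.sqrt(re * re + im * im);
}

// Ratio of spectral energy above a cutoff frequency to total energy.
function highBandRatio(
  samples: Float32Array,
  sampleRate: number,
  cutoffHz = 8000, // illustrative cutoff
): number {
  const n = samples.length;
  const cutoffBin = Math.floor((cutoffHz * n) / sampleRate);
  let high = 0, total = 0;
  for (let bin = 1; bin < n / 2; bin++) {
    const energy = binMagnitude(samples, bin) ** 2;
    total += energy;
    if (bin >= cutoffBin) high += energy;
  }
  return total > 0 ? high / total : 0;
}

// Flag frames whose high-band energy is suspiciously low (made-up threshold).
function looksSynthetic(samples: Float32Array, sampleRate: number): boolean {
  return highBandRatio(samples, sampleRate) < 0.01;
}
```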
Fidelity’s MyVoice® system creates a unique voiceprint for each enrolled customer that acts as a digital fingerprint of the customer’s vocal patterns. When a participant calls Fidelity, MyVoice listens to the first seconds of the conversation and verifies the caller’s identity by voice, eliminating the need for security questions. Schwab uses voice verification for customers calling its automated phone service or seeking to speak with a customer service representative. According to Schwab’s website, the system prompts callers to say the passphrase, “At Schwab, my voice is my password,” for verification.
Empower Retirement uses Pindrop® security technology in its call centers. Pindrop’s AI analyzes an incoming call’s audio characteristics, such as the caller’s voice, accent, background noise, and device signal. It can cross-reference the caller’s voice against a database of known fraudulent voices or detect whether the same voice has called about multiple accounts.
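Fidelity, Schwab, and Pindrop do not publish their internals, but voiceprint systems generally reduce a caller’s audio to a fixed-length embedding vector and compare vectors by similarity. The sketch below leaves the embedding model abstract and shows the two comparisons described above: verifying a caller against an enrolled print and cross-referencing against known fraudulent voices. The similarity threshold is a placeholder.

```typescript
// Generic voiceprint comparison sketch. The embedding model itself
// (audio -> fixed-length vector) is proprietary in real systems and is
// represented here only by a type; the threshold is illustrative.
type Voiceprint = Float32Array;

function cosineSimilarity(a: Voiceprint, b: Voiceprint): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB) || 1);
}

// Verify a caller against the enrolled print, then cross-reference the
// same embedding against a list of known fraudster voiceprints.
function screenCaller(
  callerPrint: Voiceprint,
  enrolledPrint: Voiceprint,
  knownFraudPrints: Voiceprint[],
  matchThreshold = 0.8, // placeholder value
): { verified: boolean; fraudHit: boolean } {
  const verified =
    cosineSimilarity(callerPrint, enrolledPrint) >= matchThreshold;
  const fraudHit = knownFraudPrints.some(
    (print) => cosineSimilarity(callerPrint, print) >= matchThreshold,
  );
  return { verified, fraudHit };
}
```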
Monitoring Account Activity
Once a participant’s identity is verified, the next step is determining that the participant is not being manipulated into acting against his or her best financial interests. For example, if an older participant requests a substantial withdrawal from the account, is it for a legitimate expense, or is the participant the target of a romance scam or another scheme?
Providers use AI and machine learning to monitor account activity and transaction requests for signs of fraud or cognitive decline. The software can analyze patterns such as the timing, size, and destination of withdrawals; changes to personal information; login locations; and more. Unusual patterns can trigger alerts or protective holds. Durvasula says TIAA uses pattern-recognition AI to monitor participant conversations and digital interactions. If activity raises a red flag, TIAA will contact the account owner’s trusted contact, advocacy groups, or law enforcement to verify the activity.
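The providers’ models are proprietary, but the kinds of pattern checks described above can be approximated with a few simple rules. Every field name and threshold in the sketch below is hypothetical; it shows only how multiple weak signals might combine into a red flag that triggers a hold and outreach.

```typescript
// Illustrative rule-based screen for a withdrawal request. Real systems
// use trained models over far more signals; every field name and
// threshold here is hypothetical.
interface WithdrawalRequest {
  amount: number;
  destinationAccount: string;
  loginCountry: string;
  hourOfDay: number; // 0-23, local time of the request
}

interface ParticipantHistory {
  typicalWithdrawal: number;      // e.g., trailing median withdrawal
  knownDestinations: Set<string>;
  usualCountry: string;
}

function flagReasons(
  req: WithdrawalRequest,
  hist: ParticipantHistory,
): string[] {
  const reasons: string[] = [];
  if (req.amount > 5 * hist.typicalWithdrawal) {
    reasons.push("withdrawal far above participant's typical size");
  }
  if (!hist.knownDestinations.has(req.destinationAccount)) {
    reasons.push("first transfer to this destination");
  }
  if (req.loginCountry !== hist.usualCountry) {
    reasons.push("login from unusual location");
  }
  if (req.hourOfDay < 6) {
    reasons.push("request made at an unusual hour");
  }
  return reasons; // two or more reasons might trigger a hold and outreach
}
```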
Fidelity has partnered with fintech firm EverSafe to help monitor Fidelity clients’ accounts for anomalies. EverSafe is a subscription-based online service that monitors users’ financial accounts and credit reports. Should something anomalous occur, the service notifies designated alert recipients of the suspicious activity. Fidelity customers are offered reduced-price access to EverSafe’s service.
Monitoring technology continues to evolve. Roy Zur, CEO of security platform Charm Security, says his company’s AI agents can play a stronger protective role by recognizing psychological manipulation in real time. The company’s AI models train on scammer tactics, scam cases, and profiles of scam victims. The software can work in the background as a “co-pilot” for customer service representatives, providing guidance and prompts during customer interactions. The AI agent can also act autonomously, says Zur, by “not only supporting the human representative but having a live conversation with the person. And today, with the conversation capabilities of the AI tools we are building, they are as good, sometimes even better, than people who are experts in this field.”
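Charm has not detailed how its agents detect manipulation. As a rough stand-in, the sketch below scans a live utterance for common scam-script phrases and surfaces suggested prompts to the representative; the cue list and prompts are invented, and a production co-pilot like the one Zur describes would rely on trained conversational models rather than keyword rules.

```typescript
// Toy illustration of surfacing manipulation cues to a service rep.
// The cue phrases and suggested prompts are invented for this sketch.
const MANIPULATION_CUES: { pattern: RegExp; prompt: string }[] = [
  {
    pattern: /gift card|wire the money today|crypto atm/i,
    prompt: "Ask why the payee insists on this payment method.",
  },
  {
    pattern: /keep (this|it) (a )?secret|don't tell (anyone|your family)/i,
    prompt: "Secrecy requests are a common scam sign; ask who requested it.",
  },
  {
    pattern: /met online|never met in person/i,
    prompt: "Gently probe for signs of a romance scam.",
  },
  {
    pattern: /IRS|arrest|warrant|suspended account/i,
    prompt: "Remind the caller that agencies never demand immediate payment.",
  },
];

// Returns rep-facing prompts for any cues found in the latest utterance.
function copilotPrompts(utterance: string): string[] {
  return MANIPULATION_CUES.filter(({ pattern }) => pattern.test(utterance))
    .map(({ prompt }) => prompt);
}

// Example: copilotPrompts("He told me to buy gift cards and keep it a secret")
// returns the payment-method prompt and the secrecy prompt.
```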
Establishing Trusted Contacts
The trusted contacts Durvasula mentions are people the account holder designates for the financial firm to contact when there is concern about scams, suspicious account activity, or diminished mental capacity. It is an optional arrangement, not a power of attorney; the contact does not gain access to or control over the account. Their role is to help protect the account holder’s interests when a red flag is raised.
FINRA Rule 4512 requires broker-dealers to make a reasonable effort to obtain the name and contact information of a trusted contact person for accounts. Durvasula says that he would like the trusted contact arrangement to be “more widely adopted and that’s why we are trying to educate our participants. We are seeing an uptick in the adoption of some of the new technologies and trusted contact capabilities. But should we do a lot more? Absolutely. I think as an industry, we have that as an opportunity.”