A U.S. bank disclosed that it inadvertently shared customer data with an unauthorized AI software application, exposing a critical security gap in how financial institutions vet third-party tools.
The breach occurred when employees used an AI app that hadn't been approved through the bank's security protocols. Instead of routing customer information through secure, compliant channels, the unapproved software received direct access to sensitive data. The bank discovered the lapse during a routine security audit and moved to contain the exposure.
This incident underscores a growing tension in financial services. Banks face pressure to adopt AI tools for efficiency gains, yet many lack adequate guardrails for vetting and monitoring these applications. Employees often install productivity software without IT department oversight, creating shadow IT risks that traditional security frameworks don't catch.
The unnamed bank didn't specify which AI app was responsible or how many customers were affected. Financial regulators like the Federal Reserve and OCC have flagged third-party AI risks as a priority concern, particularly as banks race to integrate large language models into customer service, fraud detection, and loan underwriting.
Other financial institutions have faced similar missteps. In 2023, employees at multiple banks accidentally fed confidential trading information and client data into ChatGPT before OpenAI clarified its data retention policies. JPMorgan Chase moved to restrict ChatGPT access on corporate networks. Goldman Sachs imposed strict AI usage guidelines.
The incident serves as a cautionary tale for the broader financial sector. Banks cannot simply encourage AI adoption without infrastructure to support it. Risk management teams need to establish approval processes for any third-party application, especially those handling customer information. The alternative is repeated breaches that erode customer trust and invite regulatory scrutiny.
This disclosure will likely prompt peer institutions to audit their own AI tool usage and strengthen approval workflows. Regulators may accelerate guidance on third-party AI governance before more breaches occur.
