Published on: 1/5/2026
This article originally appeared in the January/February 2026 edition of ABA Risk and Compliance.
At Bangor Savings Bank, artificial intelligence (AI) isn’t just a technology initiative; it reflects a deep organizational commitment. Unlike institutions where AI adoption requires persuasion, at Bangor Savings Bank the vision was driven from the top. The bank’s president championed the effort and inspired leaders to invest heavily in technology and provide comprehensive training for every employee across the organization.
This whole-bank leadership approach, combined with Bangor Savings Bank’s mutual ownership model and its customer-first promise, "You Matter More®," has shaped an AI strategy rooted in service and trust. The result is a thoughtful, intentional approach that frames AI not as a cost-cutting tool but as a means to strengthen efficiency and accuracy and elevate the customer experience.
The bank’s AI strategy has two main focus areas. The first is reaching customers at the right moment with the right product. One of the bank’s first AI projects uses probability modeling to help ensure that customers learn about financial offerings or services at the right time in their financial journey. This is not about pushing products but about aligning information with customer needs. Importantly, the team tests the model for bias and fairness to avoid exclusion or over-marketing. In practice, this means AI supports timely awareness and financial literacy by alerting customers to relevant options based on their life stage or financial behavior, while staying true to the bank’s culture and approach. The model reinforces the bank’s longstanding commitment to responsible AI innovation: using advanced technology to serve customers better while maintaining regulatory confidence.
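As a generic illustration of the kind of bias-and-fairness testing described above, a team might compare model-driven offer rates across customer groups and flag large gaps for review. This is a minimal sketch, not the bank’s actual method; the field names (`group`, `offered`), the sample data, and the review threshold are all illustrative assumptions.

```python
# Minimal demographic-parity sketch: does the model's offer rate
# differ substantially between groups? (Illustrative only.)

def offer_rate(records, group):
    """Share of a group's records that received an offer."""
    sel = [r for r in records if r["group"] == group]
    return sum(r["offered"] for r in sel) / len(sel)

def parity_gap(records, groups):
    """Largest difference in offer rates between any two groups."""
    rates = [offer_rate(records, g) for g in groups]
    return max(rates) - min(rates)

# Hypothetical sample: 1 = offer surfaced to the customer, 0 = not.
records = [
    {"group": "A", "offered": 1}, {"group": "A", "offered": 1},
    {"group": "A", "offered": 0}, {"group": "A", "offered": 1},
    {"group": "B", "offered": 1}, {"group": "B", "offered": 0},
    {"group": "B", "offered": 0}, {"group": "B", "offered": 1},
]

gap = parity_gap(records, ["A", "B"])  # 0.75 vs. 0.50 -> gap of 0.25
if gap > 0.2:  # illustrative review threshold, not a regulatory figure
    print(f"Offer-rate gap of {gap:.2f}; route model for fairness review")
```

A production review would go well beyond a single parity metric, but even a simple check like this gives a first signal that a model may be over- or under-marketing to a group.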
At the same time, the bank is careful to treat AI as more than a customer engagement tool. The second, and larger, focus for AI is the one most relevant to risk and compliance professionals — strengthening data quality, regulatory reporting, and responsible governance.
I sat down with Andrew "Andy" Grover, chief risk officer, and Steven Scott, director of compliance, at Bangor Savings Bank to discuss how their institution is using AI. In the conversation below, they share insights on identifying use cases, partnering with vendors, building responsible AI governance, and ensuring customer trust remains at the center of every decision.
Andy: Our president and CEO, Bob Montgomery-Rice, really started the conversation and drove it forward. He saw AI as something we needed to embrace to be ready for the future. He secured leadership engagement early, bringing together about 15 department heads from operations, retail, legal, commercial, and marketing. Because the message came directly from the top, employees recognized this wasn’t a short-term initiative but a lasting cultural shift. That clarity helped build alignment and enthusiasm across the organization.
Andy: We had already partnered for years with Northeastern University’s Roux Institute on data-quality and IT initiatives, so expanding that relationship made sense. It’s a true collaboration. Their team works on-site with us, and our staff works with them at their location. Because we’ve built a strong history together, we incorporated due diligence from the start. We understand their technical depth and expertise and share their values.
Andy: Even with a trusted partner, we took extra precautions. As always, we carefully reviewed contracts, included clear nondisclosure terms, and kept the work containerized. Risk management, compliance, legal, financial crimes, information security, and other departments all had a seat at the table. We started with hundreds of project ideas and narrowed them down to those that supported our mission and strategic plans, without introducing new regulatory or policy risk.
Additionally, we never overlook the importance of strategic marketing. Every innovation — especially one as transformative as AI — must be thoughtfully marketed both internally to employees and externally to customers. Culture drives adoption, and marketing plays a key role in shaping that culture. We recognize that our message and all communications must evolve in tandem with technological advancements.
Andy: Oversight is continuous. The first line of business runs quality checks to ensure that outputs are accurate and make sense. Risk management examines the underlying factors that drive the models and tests for potential bias. Internal audit performs periodic validations. As AI systems evolve, we expect to validate them more frequently than once a year.
Andy: We’re training all employees across the bank. Our goal is to reduce AI concerns and explain the why. [Emphasis added.] The Roux team helped us design educational sessions, including in-person and virtual meetings and breakout groups, that covered what AI is and isn’t, as well as how to use it responsibly. Once employees realized AI wasn’t about job loss but about efficiency and better service, curiosity replaced concern. We use a "crawl, walk, run" approach, and this training has helped us move through those stages quickly because everyone has the same foundation.
Andy: Learning from small but essential mistakes often leads to the most significant improvements. For example, in Maine, zip codes begin with a zero. If someone leaves that zero off, the data might place a customer on the other side of the country. AI can help cross-check the address, city, and zip code to ensure they align and flag any discrepancies. The next step is selective auto-correction — with a human in the loop — to verify changes before they’re applied. That insight and level of accuracy improve everything from customer service to regulatory reporting.
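The ZIP-code problem Andy describes is easy to see in code: Maine ZIP codes begin with a zero, and storing one as a number silently drops it (04401 becomes 4401). The sketch below, which is illustrative rather than the bank’s actual system, restores the leading zero and cross-checks the ZIP prefix against the state, flagging discrepancies for a human to review. The prefix table and function names are hypothetical; a real pipeline would use a full reference dataset or an address-validation service.

```python
# Hypothetical ZIP-prefix-to-state table; a real system would use a
# complete reference dataset or an address-validation service.
ZIP_PREFIX_TO_STATE = {
    "03": "NH",
    "04": "ME",
    "10": "NY",
}

def normalize_zip(raw) -> str:
    """Restore leading zeros lost when a ZIP code was stored as a number."""
    return str(raw).strip().zfill(5)

def flag_mismatch(zip_code, state: str):
    """Return a discrepancy message for human review, or None if consistent."""
    z = normalize_zip(zip_code)
    expected = ZIP_PREFIX_TO_STATE.get(z[:2])
    if expected is not None and expected != state:
        return f"ZIP {z} maps to {expected}, record says {state}"
    return None

# A Maine record stored as the integer 4401 normalizes back to "04401"
# and no longer looks like it belongs on the other side of the country.
print(flag_mismatch(4401, "ME"))     # consistent after zero-padding
print(flag_mismatch("10001", "ME"))  # flagged for human review
```

Note that the check only flags records; consistent with the "human in the loop" step Andy describes, any auto-correction would be verified by a person before it is applied.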
Steven: We’re still in the early phase, but the goal is to fix inconsistencies before they reach the reporting stage. Currently, AI is assisting us in identifying and reconciling mismatched data that previously required manual review. Over time, that consistency will allow us to spend less energy cleaning data and more time interpreting its meaning.
Steven: It’s still developing, but the trajectory is clear. We already see fewer last-minute adjustments before large-scale filings. The long-term value is freeing employees from repetitive work, allowing them to focus on analysis and control testing instead of correction.
Steven: We spent six months drafting a comprehensive governance framework and are now deciding how to operationalize it. The team is evaluating whether to embed AI governance into existing oversight or establish it as a separate governance function.
Andy: We refer to our approach as responsible AI. [Emphasis added.] Everyone has a role, but if everyone owns it, no one does. Currently, our chief information officer and I are developing a structure supported by a cross-functional committee. We’re mapping responsibilities to our existing information-security and compliance programs and will likely anchor the structure to the NIST [National Institute of Standards and Technology] risk-management framework. That gives us guardrails while keeping the business flexible.
Steven: Privacy is always a major focus. Everything stays within controlled systems. At the same time, we’re considering whether our privacy disclosures should evolve to explain how customer data supports AI-driven insights. Fairness is equally important. When we built our in-house "conversation engine" to engage with customers better, we reviewed more than 300 factors to remove anything that could serve as a proxy for prohibited bias. That proactive review gives us confidence that the models are both ethical and efficient.
From the marketing side, we involve compliance from the start and collaborate on every message. AI drafts ideas, but our teams refine them to maintain the bank’s brand and meet all disclosure requirements. We always keep a human in the loop — that’s how we maintain trust.
Steven: It’s getting there. Our next filing cycle will likely incorporate AI-assisted checks. The real benefit is consistency — aligning customer data internally so we’re not reconciling the same items repeatedly.
Andy: Every project includes leaders from risk, compliance, and the business lines. We never rely solely on automation. A person interprets and validates the insights that AI surfaces before making decisions. That balance ensures compliance and reinforces our culture of trust, while creating efficiencies along the way.
Steven: Don’t be afraid to start. The rules and regulations haven’t changed; you’re just applying them in a new context. Be deliberate, learn together, and keep your compliance team and business lines close.
Andy: Communicate the why and involve everyone early. That transparency creates buy-in and helps employees understand that AI can enhance their jobs and support organizational goals rather than replace their roles.
Andy: Overconfidence. AI is powerful, but it can make mistakes faster than people can catch them. If you lose the human oversight, you lose control of its work. We’re cautious not to assume the technology is infallible.
Steven: I’d add data drift. Models change as inputs change. Monitoring and validation must be ongoing, rather than relying on annual checklists.
Steven: Within compliance itself. We’re exploring tools that utilize AI to help monitor third-party relationships and banking-as-a-service programs — areas where automation can identify anomalies more quickly.
More importantly, AI is helping us invest in our people. By training employees in the responsible use of these tools, we’re strengthening both workforce skills and customer relationships. AI is here to enhance our work, and we’re excited about what the future brings.