Using Technology to Clean the 'Muck' Between the Servicing System & the General Ledger

There are significant cost savings to be realized in this area.

Several months ago, I described three fundamental gaps in bank technology. This article discusses one of those gaps -- the gap between the servicing system and the general ledger -- in more detail.

So why is there a gap and why does it matter?

At the most basic level, we think you can blame the gap on the rapid evolution of computing. As advances in computing expanded the art of the possible, regulators and standard-setters expanded what is required. In the years before the world was introduced to Han Solo and Luke Skywalker, there was no gap. Accounting was pretty much counting. When money came in, income was recognized. When a borrower didn’t pay, the bank deemed the loan uncollectable and wrote it off. Simple stuff.

Accounting standards and regulatory expectations began to change in the 1970s in ways that required banks to know more about each loan and to perform more calculations. As technology advanced, the required calculations became more complex. It began with the requirement to estimate the appropriate level of reserves. Banks were then required to report effective yield, which requires the ability to generate contractual cash flows and calculate an internal rate of return. The regulations that gained importance after, or were introduced in response to, the financial crisis added a further level of complexity: the calculations became materially about expectations; that is, not about what has already happened or what the contract between the borrower and the bank says, but about what might happen.
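
To make the computational side concrete, here is a rough sketch of an effective-yield calculation: project the contractual cash flows, then solve for the internal rate of return. The loan terms and the code are purely illustrative, not any bank’s actual process.

```python
# Minimal sketch: effective yield as the internal rate of return (IRR) of a
# loan's contractual cash flows. The loan terms below are hypothetical.

def level_payment(principal, annual_rate, n_months):
    """Standard amortizing payment for a fixed-rate loan."""
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -n_months)

def contractual_cash_flows(principal, annual_rate, n_months):
    """Month-0 outflow (funding the loan) followed by the scheduled payments."""
    pmt = level_payment(principal, annual_rate, n_months)
    return [-principal] + [pmt] * n_months

def irr(cash_flows, guess=0.005, tol=1e-10, max_iter=100):
    """Solve NPV(rate) = 0 with Newton's method; returns a monthly rate."""
    rate = guess
    for _ in range(max_iter):
        npv = sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))
        d_npv = sum(-t * cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cash_flows))
        step = npv / d_npv
        rate -= step
        if abs(step) < tol:
            break
    return rate

flows = contractual_cash_flows(principal=250_000, annual_rate=0.06, n_months=360)
print(f"Effective yield: {irr(flows) * 12:.4%} annualized (simple annualization)")
```

With no fees or premiums in the cash flows, the yield simply recovers the 6 percent note rate; the point is that the math itself fits in a few lines.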

But even with the forward-looking dimension included, when viewed from a calculation perspective, these are not overly difficult problems. The ubiquitous spreadsheet is capable of executing the math; this is not a computational challenge -- it is operational. In many banks, the processes surrounding the calculations are, to put it diplomatically, inelegant. In less diplomatic terms, we use the phrase “muck in the middle” to describe the hodgepodge of convoluted manual processes, production spreadsheets, and point solutions that sit between the servicing system and the general ledger. 

The "muck" is a problem:

  • It’s inefficient, which means it wastes valuable resources (time and money, which my boss tells me are actually the same thing!)
  • It’s hard to control, which means mistakes are made and the oversight folks (auditors and regulators) don’t like it
  • It’s people processing instead of analyzing
  • It’s silos instead of integration
  • It's redundant operations and reconciliations on each loan

So if technology is driving expectations, it is only logical that we meet them through technology as well. But before we start thinking about meeting expectations, we must first correctly define the problem to be solved. Technology-driven expectations didn't create the muck on their own. The operational mess just described, which is present at almost every bank, was created in large part by how banks have chosen to address each new regulation: in a piecemeal manner, with a primary focus on the individual calculations and their inputs and outputs. If the problem is defined merely as independently calculating your reserves or identifying loans as troubled-debt restructurings, then the current approach has technically solved it, albeit at an enormous cost. If instead we define the problem broadly as solving for all of the accounting and risk processes that sit between the servicing system and the general ledger, both those known today and those that will come tomorrow, such that the cost of current processes and of future change is low and the processes themselves are well-controlled, then the current approach has clearly not succeeded.

Most institutions recognize that they aren’t currently addressing these challenges effectively and that they must move beyond their piecemeal solutions and solve the challenge holistically by centralizing information and computation flows into a single area. Even so, the gap persists; knowing the current approach is sub-optimal unfortunately does not indicate the path to the right solution.

To help illuminate the path, we often break down the broader problem into the following basic problem statements:

  • Identifying and sourcing required data elements from across the enterprise
  • Ensuring you have the “right” populations for each calculation type
  • Performing the calculations
  • Leveraging the calculation output in the required process
  • Passing the results of that process onto downstream systems
  • Reporting on the results of the process, including interim calculations

This is a useful functional breakout of what needs to happen; it does not, however, constitute an architectural paradigm. The resultant architecture cannot be a series of silos where one silo is responsible for sourcing and cleansing data, the next determines populations, others do computations, and so on. Unfortunately, many financial institutions see the individual problem statements and then have teams independently go out and attempt to solve them. These teams tend to start large warehouse projects (more on this in a future post!). Sourcing teams expend effort trying to gather all the data elements needed downstream, while a parallel analytical warehouse team focuses on outputs and reporting consumption, often struggling to distinguish one process’s input from another’s output. Separate application teams may worry about calculations, and still others think about the process orchestration problem.

These siloed projects often cause as many problems as they solve. A better solution is an integrated risk and finance architecture that addresses all the above problem statements. This architecture must follow a series of fundamental principles.

The architecture must be open to extension across the stack but remain closed to modification. It needs to be easy and cheap to add new data and process flows, calculation processes, and reporting intelligence. These additions must not break existing functionality and need to be added in a controlled fashion.
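
To make the principle concrete, here is a small hypothetical sketch (the calculation names and logic are made up for illustration, not any vendor’s design) of calculation processes that register themselves with the platform, so that adding a new one requires no changes to the code that runs them:

```python
# Hypothetical sketch of "open to extension, closed to modification":
# new calculation processes register themselves; the code that runs them
# never changes when a new process is added.

from typing import Callable, Dict

CALCULATIONS: Dict[str, Callable[[dict], dict]] = {}

def calculation(name: str):
    """Decorator that registers a calculation process under a name."""
    def register(fn: Callable[[dict], dict]) -> Callable[[dict], dict]:
        CALCULATIONS[name] = fn
        return fn
    return register

@calculation("reserve_estimate")
def reserve_estimate(loan: dict) -> dict:
    # Placeholder logic: reserve as a flat percentage of unpaid balance.
    return {"reserve": loan["unpaid_balance"] * 0.02}

@calculation("effective_yield")
def effective_yield(loan: dict) -> dict:
    # Placeholder: in practice this would solve for the IRR of contractual flows.
    return {"effective_yield": loan["note_rate"]}

def run_all(loan: dict) -> dict:
    """Run every registered calculation; this function is closed to modification."""
    return {name: fn(loan) for name, fn in CALCULATIONS.items()}

print(run_all({"unpaid_balance": 200_000, "note_rate": 0.05}))
```

A new requirement, say a troubled-debt restructuring test, would be one more decorated function; nothing existing is touched.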

Data must be sourced or staged and made consistent in a single space. The hard work here is not staging the information, but integrating it and defining a clean, consistent, and precise model of the information captured so that calculation processes can easily use it. Without solving the data problem, it’s very hard to solve any of the others: too much time will be spent on the front end making sure element definitions are consistent and populations are right, and on the back end aggregating results. But I emphasize again that solving the data sourcing problem is only half the battle.
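
As a hypothetical illustration of what a clean, consistent, and precise model might look like, the sketch below maps a source-specific servicing extract into one canonical loan record. The field and column names are invented for the example:

```python
# Hypothetical canonical loan model: every source system maps into this one
# shape before any calculation runs, so element definitions are settled once.

from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class LoanRecord:
    loan_id: str
    as_of_date: date
    unpaid_balance: float
    note_rate: float               # contractual annual rate
    remaining_term_months: int
    days_past_due: int
    is_tdr: bool                   # troubled-debt restructuring flag

def from_servicing_extract(row: dict) -> LoanRecord:
    """Map one servicing-system row (source-specific column names) to the canonical model."""
    return LoanRecord(
        loan_id=row["ACCT_NO"],
        as_of_date=date.fromisoformat(row["CYCLE_DT"]),
        unpaid_balance=float(row["CURR_UPB"]),
        note_rate=float(row["INT_RATE"]) / 100.0,
        remaining_term_months=int(row["REM_TERM"]),
        days_past_due=int(row["DPD"]),
        is_tdr=row["TDR_FLAG"] == "Y",
    )

record = from_servicing_extract({
    "ACCT_NO": "loan_001", "CYCLE_DT": "2014-03-31", "CURR_UPB": "150000.00",
    "INT_RATE": "6.00", "REM_TERM": "348", "DPD": "0", "TDR_FLAG": "N",
})
print(record)
```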

The calculation processes must be synergistic with the data space and with each other. One calculation’s output is another’s input; it is important to recognize that calculations often feed off each other and to make that a core part of the architecture. Hence the calculation engines need to put their results in the data space, where other processes can access them. The necessary calculation engines should be integrated with each other. For small banks, that can be relatively straightforward. For big banks, organized by LOBs and sub-LOBs, it is more challenging. But an integrated model execution platform will be more efficient and controlled and will enable more analysis (this is one of the other gaps I discussed here).
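
A minimal, hypothetical sketch of that chaining: each engine reads from and writes back to the shared data space, so a downstream calculation (here, an expected-loss calculation) consumes an upstream result (a probability of default) without any point-to-point feed between the two:

```python
# Hypothetical sketch: calculation engines share a single data space.
# The expected-loss engine consumes the probability of default written by the
# upstream engine rather than receiving it through a separate feed.

data_space = {
    "loan_001": {"unpaid_balance": 150_000.0, "loss_given_default": 0.40},
}

def pd_engine(space: dict) -> None:
    """Writes a probability of default for each loan (placeholder logic)."""
    for loan in space.values():
        loan["probability_of_default"] = 0.03

def expected_loss_engine(space: dict) -> None:
    """Reads the PD written upstream and writes an expected loss."""
    for loan in space.values():
        loan["expected_loss"] = (
            loan["probability_of_default"]
            * loan["loss_given_default"]
            * loan["unpaid_balance"]
        )

for engine in (pd_engine, expected_loss_engine):   # orchestration order matters
    engine(data_space)

print(data_space["loan_001"]["expected_loss"])     # 0.03 * 0.40 * 150,000 = 1,800.0
```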

Neglecting the applications is one of the common failure points of data warehouse projects -- the data architecture is often theoretically sound. Where things fall apart is when the data warehouse is expected to be the integration point for a mishmash of applications that were never designed to be integrated. Theoretical purity goes out the window in the name of pragmatism and the demands on the data warehouse increase endlessly... and now we have another case study to add to the 901,000 entries in Google for the search term, “Why do data warehouse projects fail.”

Processes should be designed in a controlled way and should create the necessary outputs for downstream systems. For accounting processes, a subledger is a good place to store results; risk processes may require a different structure, but it must be designed so that it can be related to the finance information.
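
As a rough illustration (the account names and fields are hypothetical), a subledger can hold loan-level entries that roll up to general ledger accounts, so that posted results remain traceable back to the loans and processes that produced them:

```python
# Hypothetical subledger: loan-level entries that roll up to GL accounts,
# so the general ledger can be tied back to loan-level detail without
# a separate reconciliation exercise.

from dataclasses import dataclass
from datetime import date

@dataclass
class SubledgerEntry:
    loan_id: str
    posting_date: date
    gl_account: str     # the GL account this detail rolls up to
    debit: float
    credit: float
    process: str        # which calculation process produced the entry

entries = [
    SubledgerEntry("loan_001", date(2014, 3, 31), "provision_expense", 1_800.0, 0.0, "reserves"),
    SubledgerEntry("loan_001", date(2014, 3, 31), "allowance_for_loan_losses", 0.0, 1_800.0, "reserves"),
]

# Roll the loan-level entries up to GL-account totals for posting.
gl_totals: dict = {}
for e in entries:
    acct = gl_totals.setdefault(e.gl_account, {"debit": 0.0, "credit": 0.0})
    acct["debit"] += e.debit
    acct["credit"] += e.credit

print(gl_totals)
```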

And finally, layering a strong business intelligence tool onto this data architecture, including the subledger, will make it easy to explain results and cut them into many different views for disclosures and other business uses.

The benefits of getting it right are obvious... The muck goes away. There is a significant reduction in point solutions and manual processes. It is much more efficient. For the manager and executive-types, this means cost savings, both indirect and direct. For people who are spending too much of their lives reconciling and processing, it means less work as well as more interesting work.

There are better controls. With the updated 2013 COSO framework, the effort -- and therefore, cost -- of controlling manual processes such as spreadsheets is increasing substantially.

And lastly, when the inevitable future change happens, you’ll be ready for it. 

John Lankenau is the head of valuation and accounting product solutions at Primatics Financial. He has extensive consulting and financial services industry experience, with an emphasis on complex loan systems integrating risk and finance.
