Traditionally, grid computing - a close cousin of peer-to-peer and utility computing - has been largely the province of universities and a handful of high-tech firms that specialize in the technology. But more recently, many financial services firms have begun asking, "Why let hundreds or even thousands of desktops sit idle when they could constantly be working for the company?"
By taking advantage of unused CPU capacity - such as when desktops sit idle at night - IT departments on Wall Street are finding that they can increase transaction speed, improve computing agility and reduce costs. The main advantage of grid computing is that it enables firms to seamlessly link their computer systems across departments, or even worldwide, to tap unused computing capacity. This allows firms to better cope with ever-larger volumes of data and with more varied and exotic pricing methods. Rather than having to request more servers, which could cost millions of dollars, CIOs instead can use their existing capacity to provide equal, if not better, service in less time.
Grid Spelled Out
In a grid environment, applications typically are broken into smaller programs that can be transactionalized and distributed as needed. Generally, sophisticated grid fabric engines pull scheduled stand-alone programs off a queue and assign them to available resources based on key requirements, such as the size of the computer needed, the amount of memory available and how long they would normally take to run, explains Kurt Ziegler, EVP of development at ASPEED Software, a New York-based venture-funded software company. This way, no computer bites off more than it can chew, he says.
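Ziegler's description of a grid fabric engine boils down to a matching problem: each queued job carries resource requirements, and the scheduler hands it to a node that can satisfy them. The sketch below illustrates the idea in Python; the `Job` and `Node` fields and the greedy first-fit policy are illustrative assumptions, not any vendor's actual engine.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    cores_needed: int     # "size of the computer" required
    mem_needed_gb: int    # memory requirement
    est_runtime_min: int  # how long the job normally takes to run

@dataclass
class Node:
    name: str
    cores_free: int
    mem_free_gb: int

def assign(queue, nodes):
    """Greedy first-fit: give each queued job to the first node that can hold it,
    so no machine takes on more than it can handle."""
    placements = []
    for job in queue:
        for node in nodes:
            if (node.cores_free >= job.cores_needed
                    and node.mem_free_gb >= job.mem_needed_gb):
                node.cores_free -= job.cores_needed
                node.mem_free_gb -= job.mem_needed_gb
                placements.append((job.name, node.name))
                break
    return placements
```

A real fabric engine would also weigh estimated runtime, priorities and data locality, but the queue-pull-and-match loop is the core of what Ziegler describes.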
By leveraging a grid approach, applications can run on available resources - wherever the systems reside. This gives financial services firms a more flexible, more scalable IT structure, helping them accelerate time to market and turnaround times. "When you are running on a distributed environment that is loosely coupled, you can scale virtually on demand by throwing in more power," says Peter Lee, CEO at DataSynapse, a grid computing software company.
"In most cases, we are seeing grid being employed to accelerate the accuracy or speed of getting answers in a way that has significant business benefit," says Carl Claunch, research VP at Gartner, a research and consulting firm. By using grid, he asserts, a firm can cut the time it takes to run an application for a derivative pricing model from 18 hours to less than one hour.
Bank of America is among the financial institutions leading the charge to expand the use of grid computing. The bank typically processes several hundred thousand derivative trades each day and must calculate profit/loss, risk and thousands of market scenarios in order to gauge its exposure, explains Andy Bishop, head of liquid products and technology at the firm. "It would not be possible to do this level of compute-intensive calculations serially. As a result, the bank uses parallel computation, implemented using a grid, in order to get these calculations completed in time," Bishop says. "We can now more efficiently use the compute power that we have available to us in the bank, and that allows us to scale up considerably."
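Bishop's point - each trade's calculations are independent, so the book can be priced in parallel rather than serially - can be sketched as a fan-out/gather over a worker pool. The `price_trade` function below is a toy stand-in for a real valuation model, and a local thread pool stands in for the bank's grid; the pattern of distributing independent valuations and gathering the results is the same.

```python
from concurrent.futures import ThreadPoolExecutor

def price_trade(trade):
    """Toy stand-in pricer: discounts a notional over a term.
    A real engine would run a full valuation model per trade."""
    notional, rate, years = trade
    return notional / (1 + rate) ** years

def price_book(trades, workers=4):
    # Each trade prices independently, so the book fans out across
    # workers (on a real grid, across machines) and the results are
    # gathered back in order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(price_trade, trades))
```

On an actual grid the workers would be remote compute nodes rather than local threads, which is what lets the calculations be "agnostic to the location of the compute power," as Bishop puts it.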
Thus, Bank of America continues to expand its grid computing environment. Due to the growth BofA has experienced over the past few years, its technology platform is undergoing significant changes, and, as a result, many different business lines are reengineering applications simultaneously. This has provided an opportunity to develop applications that can leverage cross-business resources, such as a global cross-business grid, says Bishop. "Lots of different businesses around the bank can now pitch in and use this capacity to calculate what they need to over a 24-hour day, and they can be agnostic to the location of the compute power they are accessing," he explains. This is particularly helpful when business units are located in different time zones, Bishop adds.
Despite the obvious advantages to grid computing, problems can - and do - arise. As an example, ASPEED's Ziegler points to a scenario in which a complex algorithm-based application that normally takes 18 hours to run is distributed across multiple boxes. "If huge investments have been made in the mathematics and validation of models, such as for pricing, risk management, credit risk and hedging, it's important that the algorithm not be affected by the way you parcel out the workload," he warns.
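Ziegler's caution holds even in the simplest case: floating-point arithmetic is not associative, so merely regrouping a sum across machines can change the answer. The values below are chosen to exaggerate the effect.

```python
# Floating-point addition is not associative, so the way a workload is
# parceled out across boxes can change the aggregated result.
vals = [1e20, 1.0, -1e20, 1.0]

# Serial, left-to-right: the 1.0s survive after the big terms cancel.
serial = sum(vals)                        # 1.0

# A two-node split: each node's small term is absorbed by its huge one.
chunked = sum(vals[:2]) + sum(vals[2:])   # 0.0

print(serial, chunked)  # prints: 1.0 0.0
```

This is why grid frameworks that distribute validated pricing and risk models typically fix the reduction order (or use compensated summation), so the model returns the same numbers regardless of how the workload is partitioned.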