
7 Big Data Players To Watch

New York City, IBM, State Street and four other organizations are developing innovative ways to leverage the ever-increasing volumes of data they're amassing.




Bryan Yurcan and Jonathan Camhi also contributed to this report.

Banks have long been built and powered by massive amounts of data. But in this era of big data, banks face several issues that hinder their ability to extract full value from the information they possess and can access from social media, the Web and other third-party sources.

For one thing, legacy data storage systems are growing increasingly inefficient and even obsolete. Banks also now recognize the need for quick access to centralized data, which can be difficult to achieve because many financial services firms have data stored in siloed, disparate locations. Another challenge is a shortage of professionals with the sophisticated analytics skills needed to harness big data.

But many banks, technology companies, and even municipalities and universities are working on solutions to effectively tackle big data. While we acknowledge that there are many companies doing good and innovative work in this space, in this special report Bank Systems & Technology highlights seven organizations that are clearly at the forefront of big data innovation.

Great Western Bank

Analytics Finds The Right Customers

Great Western Bank has grown significantly over the past few years despite the difficult banking environment of the post-financial-crisis world. The bank has grown more than 300% since 2008 to become a $9 billion-asset institution. But growth has brought new challenges, says Ron Van Zanten, the bank's VP of data quality. "Using big data is going to be the No. 1 enabler to allow us to continue to grow. We aren't a community bank anymore. We can't keep doing things like a small bank, using spreadsheets and doing things manually."

To continue growing, the Sioux Falls, S.D., bank is using big data tools from Microsoft to target more profitable customers in its marketing campaigns. The bank used to send out promotional mailings offering customers rewards for opening new checking accounts. Customers would make an opening deposit, and then the account would sit dormant, which meant Great Western would lose money on those accounts, Van Zanten says.

To reverse this trend, Great Western now is targeting customers who are likely to add services such as ACH payments, debit cards or overdraft protection to their checking accounts. "It takes analytics to find that customer," Van Zanten says. "We want people to use services that add value. Then we can take their activities on their debit card, see what they're interested in and offer a product suite to fit their lifestyle."
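
To make this concrete, here is a minimal sketch in Python of the kind of propensity model that could drive such targeting, using scikit-learn. The features, figures and cutoff are hypothetical; the article does not describe Great Western's actual model or its Microsoft tooling.

    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    # Hypothetical history: usage features plus whether the customer later
    # added a service (ACH payments, a debit card or overdraft protection).
    history = pd.DataFrame({
        "monthly_debit_txns": [42, 3, 18, 0, 55, 7],
        "direct_deposit":     [1, 0, 1, 0, 1, 0],
        "tenure_months":      [26, 2, 14, 1, 40, 3],
        "added_service":      [1, 0, 1, 0, 1, 0],
    })

    model = LogisticRegression().fit(
        history.drop(columns="added_service"), history["added_service"]
    )

    # Score prospects and mail offers only to likely service adopters.
    prospects = pd.DataFrame({
        "monthly_debit_txns": [30, 1],
        "direct_deposit":     [1, 0],
        "tenure_months":      [12, 2],
    })
    scores = model.predict_proba(prospects)[:, 1]
    mail_list = prospects[scores > 0.5]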

With fewer customers visiting branches, the bank's staff doesn't have the one-on-one time to develop an understanding of what the customer wants, Van Zanten says. "Here is a new paradigm shift. You don't get to meet with the customer in the branch. You need a different channel to understand the customer. Big data is the most efficient way to do that. Otherwise you're flying blind out there," he says.

Great Western already has seen improved returns on its mailing campaigns using analytics-driven marketing, Van Zanten reports, although the bank wants to gather more data before it releases numbers concerning those returns. The next step in Great Western's big data strategy is to incorporate and analyze data from social media to help draw in younger customers, Van Zanten says. Those customers are easier to attract than older ones, he says. "We want to catch people in their 20s before they get a car loan and a home equity loan. It's very difficult to displace a well-financed customer." -- Jonathan Camhi




IBM

Big Blue Uses Big Data To Detect Cybercrime And Simplify Banking

Though certainly not the only technology company creating big data systems, IBM is one of the biggest and has several notable initiatives in this area, some of which are specifically applicable to banking.

Big Blue earlier this year announced a new service that uses big data analytics to bolster security intelligence. IBM Security Intelligence with Big Data is designed to detect threats hidden in the ever-increasing volume of corporate data. The service combines real-time correlation for continuous insight, custom analytics across structured and unstructured data, and forensic capabilities for evidence gathering in an effort to detect malicious cybercrime.
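
IBM has not published the service's internals, but the "real-time correlation" idea can be illustrated with a toy sliding-window rule in Python: flag any source that fails authentication repeatedly within a short interval. The event fields, window and threshold below are invented for illustration.

    from collections import defaultdict, deque

    WINDOW_SECONDS = 60   # correlation window
    THRESHOLD = 5         # failures that trigger an alert

    failures = defaultdict(deque)  # source -> timestamps of recent failures

    def observe(event):
        """event: dict with 'ts' (epoch seconds), 'src' and 'outcome'."""
        if event["outcome"] != "auth_failure":
            return None
        window = failures[event["src"]]
        window.append(event["ts"])
        # Discard failures that have fallen out of the window.
        while window and event["ts"] - window[0] > WINDOW_SECONDS:
            window.popleft()
        if len(window) >= THRESHOLD:
            return "ALERT: %s had %d auth failures in %ds" % (
                event["src"], len(window), WINDOW_SECONDS)
        return None

    for t in range(100, 106):
        alert = observe({"ts": t, "src": "10.0.0.9", "outcome": "auth_failure"})
        if alert:
            print(alert)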

The company also announced several advances to its PureSystems integrated systems this year, including one designed for big data. The PureData System for Analytics, powered by Netezza technology, features 50% greater data capacity per rack and crunches data three times faster than prior versions.

IBM also has initiated several partnerships with banks in this area, including a notable one with Citi. The New York-based bank will examine the use of deep content analysis and evidence-based learning capabilities found in the IBM Watson supercomputer to help advance customer interactions, as well as improve and simplify the banking experience.

IBM also is expanding its big data reach through acquisitions. The company has acquired six data analytics companies since February 2012 -- DemandTec, Emptoris, Star Analytics, StoredIQ, Varicent and Vivisimo -- a clear sign that its focus on trying to solve big data problems won't end anytime soon. -- Bryan Yurcan




State Street

Semantic Data Is Key

Banks have long been built on data, but as they continue to accumulate massive amounts of internal information along with an ever-growing pool of unstructured data, managing that data will become an increasingly difficult task. Traditional data storage and management systems are being stretched thin, but an answer to these problems may come in the form of semantic databases, says David Saul, chief scientist for Boston-based State Street Corp. ($222 billion in assets).

At State Street, Saul is responsible for proposing and assessing advanced new technologies for the organization, as well as evaluating the likely evolution of technologies already in use, all to reinforce the company's leadership position in financial services.

Saul, who prefers the term "smart data" as opposed to "big data," explains that the semantic data model associates a meaning with each piece of data to allow for better analysis. Given their ability to analyze relationships, he adds, semantic databases are particularly well-suited to the financial services industry.
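
As a minimal sketch of what that means in practice, the example below uses the open-source rdflib library in Python: every fact is a subject-predicate-object triple, so a relationship question such as total exposure to one counterparty becomes a single graph query. The entities and amounts are hypothetical, and this illustrates the general approach, not State Street's system.

    from rdflib import Graph, Literal, Namespace

    EX = Namespace("http://example.com/finance/")
    g = Graph()

    # Each fact carries its meaning as an explicit relationship.
    g.add((EX.TradeA1, EX.counterparty, EX.AcmeCapital))
    g.add((EX.TradeA1, EX.notional, Literal(5000000)))
    g.add((EX.TradeB7, EX.counterparty, EX.AcmeCapital))
    g.add((EX.TradeB7, EX.notional, Literal(2000000)))

    # Total exposure to one counterparty is a single SPARQL query,
    # not a cross-system join.
    results = g.query("""
        SELECT (SUM(?n) AS ?exposure) WHERE {
            ?trade <http://example.com/finance/counterparty>
                   <http://example.com/finance/AcmeCapital> .
            ?trade <http://example.com/finance/notional> ?n .
        }
    """)
    for row in results:
        print(row.exposure)  # 7000000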

State Street began experimenting with semantic databases last year and has moved from proof-of-concept demonstrations to pilot programs using the semantic data model. These programs are helpful in producing better information for State Street's clients to optimize their investment strategies, and for the bank internally in doing regulatory reporting and risk calculation, says Saul.

On that front, Saul believes semantic data can be very helpful, particularly for complying with the Legal Entity Identifier (LEI) standard mandated by the Dodd-Frank Act. The LEI is a unique identifier for an individual corporate entity. The purpose of the standard, according to regulators, is to help financial firms develop a consistent and integrated view of their exposures, such as in the case of a counterparty default. There currently is no standard ID system for financial counterparties.

State Street has been working on taking a semantic approach to LEI data, and Saul says its ability to link different kinds of data and create equivalency could prove invaluable to banks in their efforts to comply with this new standard. "If you can automate this process [with semantic databases] rather than have to do it manually, it saves a lot of time," he notes.
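
That equivalency idea can be sketched with hypothetical identifiers: once each system's local counterparty ID is linked to a single LEI, exposures booked under different local IDs roll up automatically instead of being reconciled by hand.

    from collections import defaultdict

    # Hypothetical mappings from two siloed systems to one LEI.
    lei_links = {
        ("loans",   "CUST-0042"): "LEI-EXAMPLE-0001",
        ("trading", "CPTY-ACME"): "LEI-EXAMPLE-0001",
    }

    exposures = [
        ("loans",   "CUST-0042", 3000000),
        ("trading", "CPTY-ACME", 4500000),
    ]

    # Roll exposures up to the legal entity automatically.
    by_entity = defaultdict(int)
    for system, local_id, amount in exposures:
        by_entity[lei_links[(system, local_id)]] += amount

    print(dict(by_entity))  # {'LEI-EXAMPLE-0001': 7500000}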

While the adoption of semantic technology is happening slowly, Saul says that it's beginning to increase and more tools for creating these databases are becoming available. He also believes the continued development of standards will move it forward. "The combination of technology, process and standards is really coming together," he says. -- B.Y.




New York City

Partnerships Advance Financial Analytics And Data Sharing

Big data isn't just an issue businesses have to deal with; local and regional governments also are sitting on massive amounts of data that are increasing every day. And some of these municipalities are tackling big data in ways that provide some useful lessons for banks.

New York City last summer launched a $15 million partnership with Columbia University to tackle the increasing volume of big data the city has and produces. The agreement includes the creation of 44,000 square feet of new space on Columbia's campus by 2016 and the addition of 75 new faculty dedicated to big data. Among the initiatives is a financial analytics center, which will bring together expertise in finance theory, machine learning, statistics, signal processing, operations and natural language processing. One of its projects includes developing statistical algorithms for predicting market moves with greater accuracy.

New York also is among several big cities engaging in a data partnering plan. Chicago, Seattle, New York and San Francisco announced in August that they would add their data sets to the federal government's open data portal, Data.gov, with the belief that comparative data from different levels of government can be useful in aiding the work of developers. Such data could also conceivably be used to spot patterns and anomalies to help in the fight against cybercrime. -- B.Y.
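
For developers, pulling such data is straightforward. The sketch below queries the Data.gov catalog, which runs on the open-source CKAN platform; package_search is a standard CKAN action, but the exact endpoint and response fields should be treated as assumptions to verify against current documentation.

    import json
    from urllib.parse import urlencode
    from urllib.request import urlopen

    # Search the federated catalog for city data sets.
    params = urlencode({"q": "crime incidents", "rows": 5})
    url = "https://catalog.data.gov/api/3/action/package_search?" + params

    with urlopen(url) as resp:
        payload = json.load(resp)

    for dataset in payload["result"]["results"]:
        print(dataset["title"])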




NGDATA

Structured Plus Unstructured Data Equals Personalized Product Offers

Many banks are focused on using big data to gain a better understanding of their customers' purchasing behaviors. The best way to do that is to combine data from as many sources as possible, both structured and unstructured. NGDATA, in Ghent, Belgium, specializes in doing just that. The company's platform, dubbed Lily, is designed specifically for large enterprises and gathers structured and unstructured data for analysis. This helps clients take data sets that were previously siloed and analyze them together to get a 360-degree view of their customers, NGDATA CEO Luc Burgelman says. The platform's recommendation engine then uses that analysis to create personalized product and service offerings for the customer.
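
Lily's internals are not public, but the general pattern of merging a structured signal with an unstructured one into a single offer can be sketched as follows; the customer data, keywords and offers are invented for illustration.

    # Structured signal: card-spend categories from transaction systems.
    structured = {"cust1": {"top_spend_category": "travel"}}
    # Unstructured signal: free-text notes from service interactions.
    notes = {"cust1": "asked about foreign transaction fees before a trip"}

    CATEGORY_OFFERS = {"travel": "travel rewards card"}
    KEYWORD_OFFERS = {
        "foreign transaction": "no-FX-fee credit card",
        "mortgage": "home loan pre-approval",
    }

    def recommend(cust_id):
        offers = []
        category = structured.get(cust_id, {}).get("top_spend_category")
        if category in CATEGORY_OFFERS:
            offers.append(CATEGORY_OFFERS[category])
        text = notes.get(cust_id, "").lower()
        offers += [offer for kw, offer in KEYWORD_OFFERS.items() if kw in text]
        return offers

    print(recommend("cust1"))  # ['travel rewards card', 'no-FX-fee credit card']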

Lily's recommendation engine is twice as successful at enticing customers to buy products as the next-to-buy algorithms used by companies such as Amazon and Netflix to recommend movies and books, Burgelman reports. And Lily analyzes data in real time, an ability that Burgelman says is key to its success.

"Real time equals timely offers. Without real-time analytics you can never capture the customer when they're at the store or on your website," he explains.

NGDATA released its second version, Lily 2.0, in January. It adds the ability to link to applications such as QlikView, SAP BusinessObjects, SAS and Tableau. The company is already looking into adding more prepackaged connections to similar applications for its next release, as well as new churn-modeling capabilities to help clients understand why customers leave their bank, telco provider or utility service. -- J.C.




Splunk

Analysis Enables Insight Into Unstructured Data

Splunk is one of the highest-profile and most successful big data companies. "What Google is for Web searches, Splunk is for machine-generated data," Global Equities Research analyst Trip Chowdhry said in a recent article about the San Francisco company.

Splunk has made its name from analyzing unstructured data that companies and organizations obtain from websites, servers, mobile devices and other machines to understand behaviors, user actions, application and system performance, and cyber threats. This kind of data can represent more than 90% of the data that any given organization receives, according to Splunk's estimates.
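
As a toy illustration of this kind of machine-data analysis (plain Python, not Splunk's own search language), the sketch below parses web server log lines and surfaces the clients generating the most errors; the log format and addresses are invented.

    import re
    from collections import Counter

    LOG_LINE = re.compile(
        r'^(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "(?P<req>[^"]*)" (?P<status>\d{3})')

    sample_logs = [
        '203.0.113.5 - - [10/Mar/2013:10:01:02 +0000] "GET /login HTTP/1.1" 401',
        '203.0.113.5 - - [10/Mar/2013:10:01:04 +0000] "GET /login HTTP/1.1" 401',
        '198.51.100.7 - - [10/Mar/2013:10:01:05 +0000] "GET / HTTP/1.1" 200',
    ]

    errors = Counter()
    for line in sample_logs:
        m = LOG_LINE.match(line)
        if m and m.group("status").startswith(("4", "5")):
            errors[m.group("ip")] += 1

    print(errors.most_common(3))  # [('203.0.113.5', 2)]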

Splunk works with organizations such as Major League Baseball, Cisco, more than 200 government agencies and several financial institutions, including Bank of America. One of its most recent initiatives includes a partnership with the Federal Emergency Management Agency to analyze data from social media sites to help find people in distress during and after natural disasters. The partnership was announced in the aftermath of Superstorm Sandy in the New York/New Jersey/Connecticut area late last year. This could become an interesting case study in analyzing social media data, as many individuals who were affected by Sandy turned to social media to seek help when cell phone towers were knocked out by the storm.

The company is also worth watching as Bloomberg News reported last March that Oracle and IBM are both interested in acquiring it. -- J.C.




University of Virginia

Next Generation Of Data Experts In Training

In order for banks to realize the promise of big data, they must acquire people in an array of roles -- including data architects, data scientists and data mining engineers -- that may not exist in their current workforces. Gartner has predicted that by 2015, 4.4 million IT jobs will be created globally to support big data (1.9 million of them in the U.S.). No wonder colleges and universities worldwide are hustling to beef up undergraduate and graduate programs focused on the applications of analytics, statistics, business intelligence and big data.

From household-name academic institutions such as MIT, Northwestern University, Harvard and Stanford to less-well-known schools such as Stevens Institute of Technology in Hoboken, N.J., Bentley University in Waltham, Mass., and Loras College in Dubuque, Iowa, a growing number of schools are responding to what is expected to be a serious shortage of professionals with analytical expertise. McKinsey recently predicted there could be close to 190,000 unfilled big data-related positions by 2018.

One of the most aggressive institutions of higher learning that is aiming to educate the next generation of data experts is the University of Virginia. Late last year the Charlottesville, Va., school "embarked on a big data initiative designed to help faculty, staff and students across academic disciplines and administrative units come together and develop services, curricula and new research activities related to complex data," according to a university press release. Following a Big Data Summit that UVA hosted this past May, the university created an interschool group of faculty charged with "developing an overall plan for big data that will identify hiring needs, address curricular issues, coordinate service and research needs, create connections with industry and enable fundraising."

Among other activities, the group is investigating what needs to go into a program "to train quantitative data and information researchers and to provide literacy to anyone whose research might touch big data," according to the UVA statement. One possibility, for example, is an introductory course sequence that would bring quantitative literacy in math, statistics and computation to a large body of students. Another goal of the school is to form a virtual institute that would coordinate big data activities and promote cutting-edge research in big data. -- Katherine Burger


Katherine Burger is Editorial Director of Bank Systems & Technology and Insurance & Technology, members of UBM TechWeb's InformationWeek Financial Services. She assumed leadership of Bank Systems & Technology in 2003 and of Insurance & Technology in 1991.
