Q&A: Adrian Kunzle, JPMorgan Chase, Expects More Hybrid Clouds to Emerge in 2011
Adrian Kunzle, managing director and global head of engineering and architecture for JPMorgan Chase, shapes technology strategy for the bank and has long been a luminary in bank technology circles, where he has spoken publicly about JPMorgan's large-scale grid computing projects. In October, he became a steering committee member of the Open Data Center Alliance, a new and promising consortium of large-company IT executives who are writing user-driven road maps for vendors of data center and cloud computing technology. Yesterday, he spoke with us about the benefits for banks of adopting cloud computing and the new facets of cloud computing that at least the largest banks will engage with next year.
Bank Systems & Technology: What do you see happening in banking related to cloud computing next year? Do you see more momentum building behind it?
Kunzle: Yes, I do. I don't think banks can ignore it. We are at an interesting crossroads where "cloudy" models can deliver operational cost reductions. At the same time, it's almost impossible to generalize: for banks, some things are going to be suitable for clouds, and some things never will be. When we talk about clouds, we do have to distinguish among private clouds, public clouds, and hybrid clouds, which combine the two. Most large banks are doing something with private clouds and have been for a while.
BS&T: Will the Open Data Center Alliance's cloud computing work be mostly focused on private, internal clouds, or will it also develop road maps for public cloud services?
Kunzle: The road maps we're planning to put together will deliberately focus on both. The high-level premise of the ODCA is that we want to communicate and reason with these cloudy things regardless of where they are. That's how you drive a hybrid cloud strategy: you don't have to speak a different language to your internal stuff versus your external stuff.
BS&T: If you were talking with regional bank CIOs who object to cloud computing, saying they never want to put customer data on the internet because of security concerns (as I was recently), what might you say to them?
Kunzle: Have people looked closely enough at the security to understand it and to determine whether it's any different from what they're already doing today? At the end of the day, whether data is sitting in our data center or somebody else's, it's all about software layers that keep external people -- both regular customers and malicious ones -- from reaching data they shouldn't reach. I would argue that with Salesforce, for example, enough people are using it and have looked it over to suggest there's a reasonable level of security there. If you leverage the vendor's security and your own security guys, these things start becoming possibilities.
That said, everything is gray; nothing is black and white. People who say they're never even going to think about cloud computing are missing an opportunity to reduce operational expense. There is a balance. There are savings to be had at a variety of layers in the stack, and a lot of security techniques are available to cloud providers. Amazon is already certified HIPAA compliant, and that's not a trivial certification.
It's a matter of everybody's risk level. Probably what you're hearing is some people saying they're not willing to explore it right now, while others feel there's operational benefit and cost reduction available and are willing to balance the risk against that.
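To make the layered-software argument concrete, here is a minimal sketch in Python. The entitlement store and function names are illustrative assumptions, not any bank's or vendor's actual controls; the point is only that the same authorization layer mediates access wherever the data physically sits.

    # Minimal sketch: the same authorization layer protects data whether it
    # sits in the bank's own data center or a cloud provider's. All names
    # here are illustrative, not a real product's API.
    ENTITLEMENTS = {"alice": {"accounts:read"}}  # assumed entitlement store

    def authorize(user: str, permission: str) -> None:
        """Refuse access unless the user holds the required entitlement."""
        if permission not in ENTITLEMENTS.get(user, set()):
            raise PermissionError(f"{user} lacks {permission}")

    def read_account(user: str, account_id: str) -> dict:
        authorize(user, "accounts:read")          # the layer that matters,
        return {"id": account_id, "balance": 0}   # wherever the bytes live

    print(read_account("alice", "123"))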
BS&T: You mentioned that there are some apps that lend themselves to the cloud and others that don't. Can you say which bank apps you think are a good fit today and how that might change next year?
Kunzle: It's easier to talk about what's not appropriate: applications where performance is a competitive advantage. You're always going to be able to drive better performance with dedicated hardware in close proximity. Take high-frequency and algorithmic trading, where speed is a competitive advantage for the business: today you have people fighting over space in the exchanges' data centers, each one trying to get their machines closer to the exchange's matching engine. Perhaps the exchanges will offer some cloud-based services at some point in the future, but I'm never going to put a high-frequency trading application in an Amazon EC2 data center in the Pacific Northwest, because it's just too far from the data feeds.
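Kunzle's proximity point is easy to check with back-of-the-envelope arithmetic. The sketch below uses assumed figures (light travels through fiber at roughly two-thirds of its vacuum speed, and the Pacific Northwest is on the order of 4,000 km from the New York-area exchange data centers); real routes add switching and serialization delay on top.

    # Round-trip propagation delay over fiber, ignoring all other overhead.
    FIBER_SPEED_M_PER_S = 2.0e8  # ~2/3 the speed of light in a vacuum (assumed)

    def round_trip_ms(distance_m: float) -> float:
        """Best-case round-trip delay in milliseconds for a given distance."""
        return 2 * distance_m / FIBER_SPEED_M_PER_S * 1000

    print(round_trip_ms(4_000_000))  # ~40 ms: Pacific Northwest to the New York area
    print(round_trip_ms(100))        # ~0.001 ms: a cross-connect inside the exchange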
But most applications that are not performance sensitive are ultimately going to be suitable for the cloud; the real challenge is where you put your data. There are a variety of strategies. You can host data within your own four walls but still have your application in the cloud; in some cases you'll be willing to put the data in the cloud, too. But to make those decisions you've got to understand the nature of the app, and it's hard to draw black-and-white conclusions.
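One way to picture the data-inside-your-four-walls strategy is a split-store pattern. The sketch below is hypothetical (the field classification and dict-backed stores stand in for real services), but it shows the routing decision Kunzle describes.

    # Sketch of a hybrid data-placement strategy: sensitive fields stay in an
    # on-premises store; the rest may live with the cloud-hosted application.
    SENSITIVE_FIELDS = {"ssn", "account_number"}  # assumed classification policy

    on_prem_store: dict = {}  # stand-in for a data service inside the bank
    cloud_store: dict = {}    # stand-in for storage at a public cloud provider

    def save_record(record_id: str, record: dict) -> None:
        """Route each field to the store its classification allows."""
        on_prem_store[record_id] = {k: v for k, v in record.items() if k in SENSITIVE_FIELDS}
        cloud_store[record_id] = {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}

    def load_record(record_id: str) -> dict:
        """Reassemble the record from both stores for the application."""
        return {**cloud_store.get(record_id, {}), **on_prem_store.get(record_id, {})}

    save_record("cust-1", {"name": "A. Smith", "ssn": "000-00-0000", "segment": "retail"})
    print(load_record("cust-1"))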
Some applications are old and built in languages that are out of favor today, while clouds tend to use Java, .NET, PHP, Python, and other "web 2.0" sorts of languages. It would be very hard to move an old COBOL mainframe app to the cloud unless it were rewritten and rearchitected. Even things written five years ago might not have an architecture that works well in the cloud.
BS&T: I've been hearing some bankers talk about buying new mainframes, which is almost the opposite of going to the cloud. Maybe those things can coexist for a while.
Kunzle: They are surprisingly, scarily similar to each other, if you go back to time-sharing on the mainframe. What you're seeing is a convergence, and people are going to start to realize that what they're actually trying to do is resource management across both environments. Ultimately what it's all about is improved utilization.
One thing that's nice about the cloud versus the mainframe: to buy a new mainframe you have to plunk down $5 million to $10 million up front, which means you're immediately in the hole in terms of value return, and it can take a year or so before you have enough workload on the machine to get the value back. With the cloud, you can buy capacity in smaller chunks, so you get immediate business value for the $10,000 recurring charge you just signed up for. At the end of the day, the technologies are very similar: you want to think about them the same way, and you want to drive utilization the same way.
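His time-to-value point is about when money is committed, not total cost. Here is a rough sketch using the figures from his example (the midpoint of the $5 million to $10 million mainframe range, and the $10,000 recurring cloud charge); the numbers are illustrative.

    # Cumulative spend over time: all mainframe capital goes out at month
    # zero, while cloud capacity is bought in monthly chunks as workload
    # actually arrives, so spend tracks utilization from day one.
    MAINFRAME_CAPEX = 7_500_000  # midpoint of the $5M-$10M range
    CLOUD_MONTHLY = 10_000       # the recurring charge in Kunzle's example

    def mainframe_spend(month: int) -> int:
        return MAINFRAME_CAPEX        # committed up front, utilized or not

    def cloud_spend(month: int) -> int:
        return CLOUD_MONTHLY * month  # pay-as-you-go tracks the workload

    for month in (1, 12, 24):
        print(month, mainframe_spend(month), cloud_spend(month))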
BS&T: What might be new in cloud computing next year? You're probably privy to what the vendors are working on and you obviously know what JPMorgan Chase is doing. Do you think we'll see banks making their private cloud implementations bigger or experimenting with the public cloud?
Kunzle: There are two dimensions to the maturity curve. You'll see continued work in the private cloud space because, especially for a large organization, it gives us a degree of operational efficiency. I think you'll see people get their feet wet with hybrids in 2011, where they connect their private cloud to a public cloud. I also think you're going to see, at a technical level, an increased differentiation between cloud services at the infrastructure level, things like EC2, and cloud services at the platform level, like Google's App Engine. The platform level is a much less obvious place to start in the stack right now. We know how to do IaaS; we're just going through the process of getting comfortable with how it works, how we scale it, and how we operationalize it. Platform as a service is much more of an unknown, and it's an area that will be explored more in 2011. And then I think there will be a continued maturing around what software services we can source from outside.
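The division of responsibility between the two layers he distinguishes can be sketched in a few lines. Nothing below is a real vendor API; it simply lists what the consumer still owns at each level.

    # Illustrative contrast: with IaaS (EC2-style) the consumer manages the
    # machines and runtime; with PaaS (App Engine-style) the consumer hands
    # over application code and the platform owns the rest.
    def deploy_on_iaas() -> list:
        """Everything below the application is still the consumer's job."""
        return [
            "choose VM image and instance size",     # capacity planning is yours
            "install and patch OS and runtime",      # operations are yours
            "configure load balancing and scaling",  # scale-out logic is yours
            "deploy application code",
        ]

    def deploy_on_paas() -> list:
        """The platform absorbs the infrastructure decisions."""
        return ["deploy application code"]  # machines, runtime, scaling implied

    print(len(deploy_on_iaas()), "steps on IaaS vs", len(deploy_on_paas()), "on PaaS")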
As people start getting their feet wet with that public/private hybrid federation, the big thing everybody is going to realize, if they don't already (as I think most of us do), is that right now everybody's trying to use different APIs and languages to interact. If we continue in that vein as customers, we'll be doing so much heavy lifting mapping requests that it's going to become almost unmanageable by the end of 2011 if we don't deliver on some of the road maps and standards the ODCA is crafting.
One thing we've learned over the last 10 years is that open source computing introduces true competition; it makes the vendors compete at a valuable, functional level, not at a "well, you coded all your stuff to speak our particular language, therefore it's going to take a year's worth of rewriting to speak anybody else's." The whole point of what we're trying to do is to still allow innovation on the back end while keeping the way you talk to those different infrastructures, and the way you define performance characteristics, the same, so we can switch workloads between private clouds and commodity infrastructure, and between internal and external clouds.
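The heavy lifting Kunzle warns about looks, in practice, like per-provider adapter code. The sketch below uses two hypothetical providers with divergent dialects (neither is a real API); a common interface of the kind the ODCA road maps target would make the mapping function unnecessary.

    # Without a standard, every provider speaks its own dialect and the
    # consumer writes (and maintains) a mapping per cloud.
    class CloudA:  # hypothetical provider, dialect one
        def launch_instance(self, spec: dict) -> str:
            return "cloud-a:" + spec["name"]

    class CloudB:  # hypothetical provider, dialect two
        def create_vm(self, name: str, cores: int) -> str:
            return "cloud-b:" + name

    def run_anywhere(name: str, cores: int, cloud) -> str:
        """Per-provider mapping: the part that becomes unmanageable at scale."""
        if isinstance(cloud, CloudA):
            return cloud.launch_instance({"name": name, "cores": cores})
        if isinstance(cloud, CloudB):
            return cloud.create_vm(name, cores)
        raise TypeError("no adapter written for this provider yet")

    print(run_anywhere("risk-batch", 8, CloudA()))
    print(run_anywhere("risk-batch", 8, CloudB()))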