Previously with Microsoft's capital markets group, Dr. Mark Horvath now heads up the security planning organization for the company's entire commercial sector, including financial services. Horvath's industry experience includes a stint with CertCo, where he played a role in developing SETCO, a root certificate authority for the major card associations, as well as the groundbreaking Identrus standard for interbank identity authentication. Horvath spoke with Bank Systems & Technology's Ivan Schneider about emerging trends in information security for financial services firms.
BS&T: How have security concerns changed staffing requirements for sensitive positions at banks and other financial institutions?
Horvath: People are being a lot more circumspect about the type of people they bring in. They're requiring much more intensive background checks, and much more intensive screening processes. People are brought into certain groups and vetted depending on what they're doing.
BS&T: How is that impacting the IT department?
Horvath: There was a big tendency to push a lot of stuff offshore, especially a lot of the programming pieces. But as security has come more into focus, at least in certain parts of financial services, that seems to be stopping. How do you do a background check in a country that you may not have any access to? How do you know what's actually going on? Is it really cheaper per hour to outsource to a foreign country if you then have to bring that code back in and run it through a security review?
Right now, pretty much all of the financial institutions that I've talked to are doing full-blown security reviews and code walkthroughs. Microsoft actually has partners that we use to help some financial services firms [and other industries] go through their code base and look for problems.
One of the things Microsoft does is publish several books on how to write secure code and how to test it. Also, we have two programs that we use internally, called PREfix and PREfast, that draw on a database of the security mistakes that everybody has made; in effect, it's a database of best practices. Before we allow code to go into the "build," it has to run through these checkers, which make sure that the code going in is at least minimally secure.
BS&T: So could it find a buffer overflow problem, for example?
Horvath: Yes. Not necessarily every buffer overflow problem, but certainly the most common ones: the ones you should know better than to write, or should have had some instruction about how to avoid.
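The "common" buffer overflow he means is easy to sketch in C. This is an illustrative example of the pattern such checkers flag, not Microsoft's actual tooling; the function names are invented:

```c
#include <stdio.h>
#include <string.h>

/* The classic pattern a static checker flags: a fixed-size buffer
 * filled by an unbounded copy. If src is longer than the buffer,
 * strcpy writes past the end of dst. */
void unsafe_copy(char dst[16], const char *src) {
    strcpy(dst, src);              /* no length check: potential overflow */
}

/* The bounds-checked version a security review would ask for. */
void safe_copy(char *dst, size_t dstlen, const char *src) {
    if (dstlen == 0)
        return;
    strncpy(dst, src, dstlen - 1); /* copy at most dstlen-1 characters */
    dst[dstlen - 1] = '\0';        /* always null-terminate */
}
```

A checker that knows `dst` holds 16 bytes can report the `strcpy` call mechanically, which is exactly the sort of rule a database of past mistakes encodes.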
BS&T: Have the latest programming practices, such as the use of object-oriented languages, made it easier to avoid errors in code?
Horvath: Yes and no. C++ sort of allowed programmers to reach for the chainsaw and the vodka at the same time. The newer programming languages like C# ["C-sharp"] have a much more restricted set of things you can do. There are fewer ways to go wrong, either by intent or by mistake.
BS&T: Looking inside the financial services firms, what type of work are these in-house programming teams actually doing?
Horvath: For the key financial services firms, especially in something like foreign exchange trading, your advantage is your ability to execute a deal faster than somebody else. Your ability to control information and make an informed bet, better or faster than anyone else, is a key advantage in what you do. Maybe not quite so much in banking or insurance, but certainly in capital markets.
For example, banks hire a lot of people to help them write derivatives calculators. Derivatives are an excellent way of finding a unique advantage over your competitors in the market, even if for a short amount of time.
BS&T: So where do the problems typically arise?
Horvath: They end up maintaining a database of all these kinds of calculators, and it lives on for many more years than people think it will.
Then, you start getting into problems with the people who wrote the code, who maybe were in a hurry and didn't document it very well, or had been promoted out, or moved to another company. Suddenly, that code is a key piece of some other part of the calculation engine, and nobody completely understands what it does.
Or someone will rely upon a particular feature in a program set or the operating system, and that will get patched or fixed. Suddenly, something that was working just fine no longer works. Then you really don't know what's going to happen when you upgrade or when you install a patch. This becomes an ongoing problem, because you're more and more reluctant to change any part of your system. But at the same time, there are real security holes that can be exploited.
So you're caught in this double bind. You're afraid if you patch your systems, then the stuff that you're making money off of will stop working. But you're afraid if you don't patch your systems, somebody's going to come in at night and steal all your money.
BS&T: How can a financial institution protect those vulnerable segments of code from attacks from the outside world?
Horvath: Companies are building Web services, so they'll take the code and encapsulate it into a Web service. That will isolate it from the majority of security risks, but at the same time it will allow that particular calculator, or piece of code, to be called by other programs.
This is just entering the mainstream. It's been around for a little while, and there are examples at Merrill Lynch, Morgan Stanley, Deutsche Bank and Bear Stearns, where they've all taken some piece of their legacy code base and encapsulated it into Web services.
First, they can reuse it, and second, they can save money by taking it off the mainframe and putting it on cheap PCs that actually run it a little bit faster. By isolating the code in a controlled environment, it's shielded from a lot of problems.
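Stripped of the actual Web plumbing, the encapsulation idea can be pictured like this: a hypothetical legacy calculator sits behind one narrow entry point that parses and range-checks every request before the old code runs. The names, the request format, and the pricing formula here are all invented for illustration:

```c
#include <stdio.h>
#include <string.h>

/* Stand-in for a poorly documented legacy calculator nobody wants
 * to touch; in reality this might be decades-old pricing code. */
static double legacy_price(double spot, double rate) {
    return spot * (1.0 + rate);
}

/* Web-service-style facade: the only way in. Requests are validated
 * here, so the legacy code never sees malformed or hostile input. */
int price_service(const char *request, char *response, size_t len) {
    double spot, rate;
    if (sscanf(request, "spot=%lf&rate=%lf", &spot, &rate) != 2)
        return -1;                     /* malformed request: reject */
    if (spot <= 0.0 || rate < -1.0 || rate > 1.0)
        return -1;                     /* out-of-range input: reject */
    snprintf(response, len, "price=%.2f", legacy_price(spot, rate));
    return 0;
}
```

Other programs call only `price_service`; the legacy internals stay isolated, which is the shielding Horvath describes.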
BS&T: How concerned are bankers about corporate technology espionage?
Horvath: There's some concern about that, but generally, you can't just steal a little bit of code; it doesn't do you any good. You usually have to steal a lot of code, all at once. And there are usually pretty good control mechanisms in place. It's very difficult to steal anything very useful for very long without leaving your fingerprints all over the place when you do.
BS&T: Then what are the bankers really worried about?
Horvath: If it turns out that I've written a back-door in, or I've put a time-bomb in, or I've put something destructive into the code and caused some kind of major event, then they'll know where it came from.
So you have developers sign the code that they put in. If I put in a piece of code for a big project, it contains my signature so that everybody knows that I did it. More likely, though, I'm going to make some kind of error or mistake that's going to cause a security hole to open up. They'll want to know how that got in there and whether it was malicious.