IT groups face a barrage of demands from CEOs, CFOs, auditors, and boards to ward off new information-security risks such as subtler viruses, evolutionary hacking algorithms, and strategies that exploit wireless connectivity. With resources already stretched thin, IT security executives will have to do ruthless triage. They must discern which security risks pose the most substantial threats, which are small enough that immediate action can be postponed, and, perhaps most important, which are threats for which IT lacks sufficient risk-evaluation abilities.
The usual way to prioritize these projects is to measure the risk in terms of worst-case scenarios that could result if nothing is done: For example, loss of customer information and the attendant legal ramifications; or loss of revenue, reputation, or brand appeal. But since many information-security risks are just emerging, there's little in the way of data or best practices, making it hard to ascertain the frequency or severity of some security problems with much confidence. In fact, some risks that appear small may be worth mitigating first because we can't really grasp their implications. In short, CIOs and CISOs need to go beyond merely assessing security risk; they must assess their assessments.
Not all security risks are created equal, so CIOs shouldn't regard them as interchangeable, acting as if their IT teams were equally good at monitoring them all. Typically, they first estimate the worst-case loss, or the asset value at risk in their organization, based on some standard level of confidence. Next, they estimate the cost of a security project designed to mitigate that risk. Finally, they rank the projects based on some measure of expected return, such as net benefit or ROI.
A broader approach is needed. That's why my colleagues at the CIO Executive Board and the Information Risk Executive Council focus on how worst-case loss estimates from different parts of a corporation add up at the enterprise level. These aggregates are then used to determine which business-unit risks contribute most to enterprise risk. Surprisingly, the biggest contributors aren't always the ones with the potential for the largest worst-case loss at the business-unit level.
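One way to see why: if worst-case losses combine the way portfolio volatilities do, a business unit's contribution to the enterprise total depends on how its losses correlate with the other units', not just on its own size. The sketch below is only illustrative; the unit names, loss figures, and correlations are hypothetical, and the portfolio-style aggregation is an assumption for demonstration, not the Council's actual method.

```python
import math

# Hypothetical business-unit worst-case losses (in $M) and pairwise loss correlations.
losses = {"unit_a": 8.0, "unit_b": 7.0, "unit_c": 7.0}
pairs = {("unit_a", "unit_b"): 0.0, ("unit_a", "unit_c"): 0.0, ("unit_b", "unit_c"): 0.95}

def rho(i, j):
    """Look up the correlation between two units (1.0 on the diagonal)."""
    if i == j:
        return 1.0
    return pairs.get((i, j), pairs.get((j, i)))

units = list(losses)

# Enterprise-level worst case, combining unit losses like portfolio volatilities.
enterprise = math.sqrt(sum(losses[i] * losses[j] * rho(i, j) for i in units for j in units))

# Each unit's marginal contribution to the enterprise-level worst case.
contribution = {i: losses[i] * sum(rho(i, j) * losses[j] for j in units) / enterprise
                for i in units}

# unit_a has the largest standalone worst case, but unit_b contributes more at the
# enterprise level because its losses are highly correlated with unit_c's.
```

In this hypothetical, the unit with the biggest standalone worst-case loss is not the biggest contributor to enterprise risk, which is exactly the surprise described above.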
However, bias exists because every organization is likely to be more sensitive to some kinds of security risks than others. Moreover, risk competencies will differ from business to business.
For example, one company may know from bitter experience just what a breach of customer data privacy can cost in terms of customer flight, damaged reputation, plummeting stock price, and the like. Another company might have learned the hard way about how information-integration efforts can compromise records. It would be a mistake to assume every company has the same strengths and weaknesses in assessing fast-evolving information-security threats.
In general, diversified financial-services companies have had to think hardest about how their assessment competencies vary from risk to risk in light of new Basel Committee capital requirements for operating risk. Some energy companies have also thought about their advantages and disadvantages in anticipating commodity risk.
The strengths and weaknesses we all have in assessing different kinds of discernible risks are what I call risk intelligence, which varies not only from company to company, but also among departments within a company.
Based on this, a critical new step can be added to the assessment process before spending time or allocating resources: Ask which risks your organization is skilled at assessing. Then separate the projects for which the organization has high risk intelligence from those for which it has low risk intelligence before deciding which to pursue first.
As I detail in my book, Risk Intelligence: Learning to Manage What We Don't Know (Harvard Business School Publishing, 2006), begin by listing the main risk types your organization faces.
The best way to assemble a list is to canvass operating managers. Those from the audit, legal, and marketing departments may have useful perspectives; and don't forget to talk to other IT managers, too. Ask for estimates of worst-case losses from the risks of your business partners over several time intervals: a month, a year, five years. Ask about the cost and reliability of mitigating them.
In the end, you should have a list that captures the risks that account for the lion's share of information-security problems. Now you must assess your risk intelligence for each.
As an example, suppose you want to know your organization's risk intelligence for a customer-data privacy breach. Remember that a risk-intelligence score is always comparative: It just tries to gauge how well your organization can learn about one risk compared with others. Here's how you might approach the five factors of a risk-intelligence score:
1. How often do your IT and business colleagues investigate customer-privacy complaints? Do they talk with customers, suppliers, and partners about customer-data privacy experiences? Think of it in terms of exposures per week.
One European business cross-checks reports on risks that recur in most business units with units that don't report them. The purpose isn't only to catch a threat a manager may have missed, but also to evaluate the differences. Also, some IT executives in financial services are able to compare the extensiveness of their own and other organizations' customer-satisfaction calls. That tells them whether they're likely to know more about customer-privacy breaches than the competition. The severe nature of some information-security risks will force IT executives to quickly develop ways of evaluating the evaluators.
Score yourself a 2 if you think your organization has more information about this risk than others. Give yourself a 1 if you think the number of times you're learning something about this risk each week is about average. Mark a zero if you think you learn more about other risks.
2. How relevant are these experiences to what might influence the risk? For example, say you tallied frequent privacy breaches in complex processes involving customer contact, and infrequent breaches in complex processes that weren't customer-facing. This suggests it's unlikely that complexity makes processes susceptible to privacy breaches, but very possible that customer contact does.
Of course, someone in an organization that has suffered privacy breaches or near-breaches knows whether they happen mostly in customer applications. The question here is whether the manager who provided you an estimate of worst-case losses from privacy breaches had access to the finding.
Ultimately, this crucial question asks whether your typical experiences might help you rule out factors that matter less and focus on those that matter more. Make this assessment more concrete by estimating the proportion of your experiences that are both 100% possible if some factors are in fact driving privacy breaches and 100% impossible if other factors are. Give yourself a 2 if you have more of these "yes or no" experiences regarding this risk, a 1 if they're average for the risks you're comparing, and a zero if you have more "yes or no" experiences with other risks.
3. How surprising are these experiences? The more surprising, the more they tell you about factors driving privacy breaches. In other words, when something unusual happens, you really need to pay attention.
For example, one small financial-software vendor sent an E-mail to customers warning them about a security weakness. When customer complaints increased a few days later, the information-security team found that a hacker had mimicked the notice, but suggested a new product setting that increased customer exposure.
This question is really a measure of the improbability of your experiences. It may not change your views to learn, for example, that process changes affecting key controls weaken customer-data security; such an outcome would be expected. But you might be very surprised to see how risky a process change that requires two units within the organization to cooperate could be for customers. Score yourself a 2, 1, or zero based on whether you suspect your typical experiences are unearthing more unexpected news for this risk than others.
4. How diverse are these experiences as sources of information? Consider whether your team engages in a variety of activities that could help it better judge the risks to customer-data privacy. For example, do you visit lots of data-processing sites? Do you have friends in organizations that approach customer-data integrity differently?
This question focuses on the range of information sources you're using. The idea is that frequent exposure to data from the same sources may not tell you anything new. The important thing, as always, is how your information on this risk compares with what you know about other risks. Again, give yourself a 2, 1, or zero.
5. How methodically do you record what you learn? Do you keep records of what you learn from the news and from discussions with business partners, developers, customers, and other IT groups? For example, do you keep track of which mitigation projects have succeeded and which haven't?
It isn't merely about the quality of your memory or that of your company. It's about enabling better business performance, especially if you're working with sizable implementation teams. And it's about keeping a record of how your beliefs have evolved with new information about the market. Give yourself a 2 if you think you keep track of what you learn about customer-data privacy more systematically than what you learn about other risks. Give yourself a 1 if your tracking is average, or a zero if it's below average.
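The five questions above can be rolled into a single comparative score per risk by summing the 0/1/2 answers. A minimal sketch; the risk names and factor scores below are hypothetical:

```python
# Each factor is scored 0, 1, or 2 relative to the other risks being compared:
# frequency of exposure, relevance, surprise, diversity of sources, record-keeping.
FACTORS = ("frequency", "relevance", "surprise", "diversity", "tracking")

def risk_intelligence(scores: dict) -> int:
    """Sum the five 0/1/2 factor scores into a 0-10 risk-intelligence score."""
    assert set(scores) == set(FACTORS)
    assert all(s in (0, 1, 2) for s in scores.values())
    return sum(scores.values())

# Hypothetical factor scores for three risks.
risks = {
    "customer-privacy breach": {"frequency": 2, "relevance": 1, "surprise": 1,
                                "diversity": 2, "tracking": 2},
    "wireless exploit":        {"frequency": 0, "relevance": 1, "surprise": 2,
                                "diversity": 0, "tracking": 1},
    "virus outbreak":          {"frequency": 1, "relevance": 2, "surprise": 0,
                                "diversity": 1, "tracking": 1},
}

# Rank from weakest-assessed to strongest-assessed; the risks at the front of
# this list are the ones the organization understands least.
ranked = sorted(risks, key=lambda r: risk_intelligence(risks[r]))
```

Note that the score is purely comparative: a 2 on any factor means "better than the other risks on your list," not good in any absolute sense.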
Next, rank your organization's major information-security risks by their risk-intelligence score. You may want to allocate more resources to those risks you rate yourself weakest at assessing.
By the way, this is the opposite of the conclusion you would draw from risk-intelligence scores for elective project risks. While logic dictates we pursue projects with risks we're good at assessing, here the information we're seeking is different: We have no choice about these risks and are figuring out where to spend scarce mitigation resources.
What matters to an operational risk portfolio (that is, a portfolio of recognizable risks, not random ones)? I've argued that the risk intelligence you calculate for each major risk is crucial. But the other critical factor is how much each major risk diversifies the remainder of your risks. So for each risk, you'll want to capture your relative skill in assessing it and the extent to which it diversifies your net risk.
The chart below is meant to show information-security threats in a risk portfolio. Each circle represents a major risk or mitigation project for the risk. Circle size reflects your best assessment of the worst-case loss for the risk, net of the up-front cost to mitigate it.
The horizontal axis measures risk-intelligence scores for the various risks, while the vertical axis indicates each risk's diversifiability. Risks that you assess well and that your other risks reduce appear in the upper-right quadrant; those that you assess well but that your other risks can't diversify appear in the lower-right quadrant.
New kinds of risk tend to start to the upper left, because we usually aren't good at assessing these risks at first. What's more, those risks are often easily diversified to the extent they're unlike the risks we've borne the longest. As these risks become more widespread, they grow in importance, become less diversifiable, and fall lower on the chart.
As we get better at understanding one of these risks, it moves to the right, and as our knowledge and mitigation efforts diminish it, it rises to the upper right. If we decide to stop monitoring the factors driving the risk, it moves left. Accordingly, our operational risks tend to move in a counterclockwise life cycle. Note that this pattern is quite different from what we'd see for the elective project risks of new business opportunities.
How can you use risk intelligence and diversification evaluations to prioritize information-security risks? For example, how should risk intelligence and diversification affect the order of a long list of mitigation projects?
All other things being equal, the largest circles represent the biggest threats. That's because we drew the circles to reflect estimated worst-case loss, net of the investment required to eliminate the threat.
Losses from breaches of customer privacy are starting to convince IT executives that they face a larger issue. How well managers evaluate and understand security risks is more important than how much these security risks correlate with a company's other operating risks. IT risk-assessment tools are only beginning to take into account such variations in risk-evaluation competencies.
Nevertheless, estimates of worst-case loss and mitigation investment aren't all that matter. We also need to take into account how little we know about some of these kinds of risks: those to the left with the lowest risk intelligence. And we should worry about concentrated risks toward the bottom of the chart that we can't easily diversify.
To return to the chart, we might choose to start work on Risk 4 because it represents a big threat, as far as we can tell, and we can't diversify it against our other risks. Next, we might turn to Risk 2, even though it appears to be a smaller threat: it's no more diversifiable than Risk 4, and given our low risk intelligence, we can't be sure we really know what kind of worst-case damage it may do. Finally, we might pursue mitigation of Risk 1 because of its relatively low risk-intelligence score.
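One rough way to encode this triage is an urgency heuristic in which larger losses, lower risk intelligence, and lower diversifiability all push a risk toward the front of the queue. This is a sketch only; the risk figures and the weighting are hypothetical, and no formula substitutes for the judgment calls discussed here:

```python
# Hypothetical portfolio: net worst-case loss ($M), risk-intelligence score (0-10),
# and diversifiability (0 = can't be offset by other risks, 1 = fully diversifiable).
portfolio = {
    "risk_1": {"loss": 5.0, "ri": 2, "div": 0.8},
    "risk_2": {"loss": 6.0, "ri": 3, "div": 0.1},
    "risk_4": {"loss": 12.0, "ri": 7, "div": 0.1},
}

def urgency(r: dict) -> float:
    """Larger losses, lower diversifiability, and lower risk intelligence
    all raise a risk's mitigation priority (the weights are illustrative)."""
    return r["loss"] * (1 - r["div"]) * (1 + (10 - r["ri"]) / 10)

# Mitigate the most urgent risks first.
order = sorted(portfolio, key=lambda name: urgency(portfolio[name]), reverse=True)
```

With these illustrative numbers the heuristic reproduces the ordering above: Risk 4 first (big and undiversifiable), then Risk 2 (small but undiversifiable and poorly understood), then Risk 1.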
In general, you'll have to apply judgment to decide whether risk-intelligence gaps (in the chart, the circles to the left) or diversifiability gaps (the circles toward the bottom) represent the bigger problem for your organization. What's more, you'll need to judge how to weigh both of those gaps against the size of the risks.
The method I've described doesn't pretend to replace IT experts' knowledge in prioritizing security risks. But it forces you to consider the main issues: worst-case estimated loss, risk intelligence based on the experience of your organization and the people within it, and whether the risks your organization faces can be diversified.
This method not only lets you apply your judgment systematically, but also provides a basis for visualizing and discussing the key trade-offs in your risk-management strategy.
David Apgar is a managing director of the Corporate Executive Board.
Rather than a single process requiring 90 days, we recommend IT executives prioritize their security risk mitigation through two parallel tracks, each requiring 45 days.
List the major security risks. Canvass audit, IT, legal, marketing, and operating executives who have had experience over the past three years with information-security risks. This process will take up to three weeks.
Estimate worst-case losses and mitigation costs. Separate teams from IT and operating units should pin a worst-case loss on each major security risk the organization faces, for three time periods, and estimate the investment cost of a project to mitigate it.
Estimate the up-front costs of building or acquiring a system and changing business processes to mitigate each risk. Allow three weeks to gather this information.
A different team should perform a risk-portfolio audit of your security risks. This team will conduct the following three steps.
Canvass audit, IT, legal, marketing, and operating executives. This will take up to three weeks.
Estimate the risk intelligence for each type of risk, defined as the organization's comparative ability to learn about what drives that type of risk and how the drivers change over time. The team should make a quick initial cut, then ask business partners to refine the preliminary score. This should take two weeks.
Plot the position of each security risk based on the team's evaluation of risk intelligence and diversification. The team must estimate diversification based on the correlation of each type of risk with other types. This step requires one week.
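The diversification estimate in this last step might be sketched as follows, assuming the team expresses its judgment as pairwise correlations between risk types; the risk names and correlation figures are hypothetical:

```python
# Hypothetical pairwise correlations between risk types (symmetric).
risk_types = ["privacy", "virus", "wireless"]
pairs = {
    ("privacy", "virus"): 0.7,
    ("privacy", "wireless"): 0.2,
    ("virus", "wireless"): 0.6,
}

def rho(a: str, b: str) -> float:
    """Look up the correlation between two risk types (1.0 on the diagonal)."""
    if a == b:
        return 1.0
    return pairs.get((a, b), pairs.get((b, a)))

def diversifiability(risk: str) -> float:
    """One minus the average correlation with the other risk types: higher
    means the rest of the portfolio offsets this risk better."""
    others = [r for r in risk_types if r != risk]
    return 1.0 - sum(rho(risk, o) for o in others) / len(others)

# These scores give each risk's vertical position on the portfolio chart.
scores = {r: diversifiability(r) for r in risk_types}
```

A risk that correlates weakly with everything else lands high on the chart; one that moves with the rest of the portfolio lands low, where concentration is a worry.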
Dovetail The Tracks
Meld the output using the second team's risk-portfolio visualization tool. The tool can then support security-risk prioritization decisions that take into account not just what you know about those risks, but your confidence in what you know about them.