Online Security, a global provider of computer forensics and information technology risk mitigation since 1997
CyberSecurity Czar - An Interview with Richard Clarke
Posted: Jul 30 2003
RICHARD CLARKE has been America's de facto Security Czar for the past eight years. Facing an uncertain future in the new Bush administration, Clarke reflects on the state of national security--and his role in bolstering it.
EDITOR'S NOTE: Richard Clarke is the first U.S. coordinator for Security, Infrastructure Protection and Counterterrorism, serving on the National Security Council's Principals Committee. A career civil servant since 1973, Clarke served as the assistant secretary of state for Politico-Military Affairs in the first Bush administration, during which time he coordinated State Department support of Operation Desert Storm and led the efforts to create a post-Gulf War security architecture. In 1992, he joined the National Security Council staff as Special Assistant to the President for Global Affairs and chairman of the Interagency Counterterrorism Committee. Mr. Clarke is a graduate of the University of Pennsylvania and MIT.
Q: You've been criticized for overstating the case for cyberterrorism. The statements you've made about a potential "electronic Pearl Harbor" have been viewed in some quarters as an exaggeration of the threat, perhaps an attempt to increase the budget for cybersecurity programs.
A: Without talking about whether or not we have enemies, just look at our vulnerabilities. You can take virtually any major sector of our economy--or, for that matter, the government--and do a vulnerability analysis and discover that it's relatively easy to alter information, disrupt and confuse the system, and even shut the system down.
In some cases, people will say, "So you turned off a computer; big deal." But in other cases, shutting the system down has consequences--the electric power grid crashes, trains stop running, airplanes crash into each other. Obviously, these are significant things; the economy is badly damaged, the nation is unable to operate for a period of time and people die.
So I don't overstate the vulnerabilities. In fact, I go around understating them because I don't want to publicly put my finger on precisely where some of these vulnerabilities are.
Q: Is it difficult to talk about out loud?
A: You can talk about it broadly, but when you limit yourself to sweeping generalities like "we are vulnerable as a nation," there are always people who will doubt you. If you can get them into a closed room and get them at least a temporary security clearance, you can give them some specific examples, and we've done that with a number of groups. We got permission to bring people in on a one-time basis, sit them down, and show them classified information, and they [went] away worried. So I think the vulnerabilities are pretty well established.
Q: What about the term electronic Pearl Harbor?
A: The difference between an electronic Pearl Harbor and the actual attack on [Oahu] is that you can now attack scores of cities and the connecting fiber between them, so the economic effects can be much more severe. The economic effects of the attack on Pearl Harbor were insignificant. It's a different kind of war, and it can happen.
As we build the next generation of networks, we have to be aware that we have now become completely dependent on IT for the functioning of our economy. There is no way to go back; you can't eliminate that dependency. Because the economy and national security are now dependent upon IT networks, the next-generation networks must be secure.
Q: Now, what about enemies?
A: I don't think there are any terrorist groups engaged in cyberactivities. I never use the word "cyberterrorism." Other people do. So it's not about "cyberterrorism...."
Q: You dismiss popular reports of groups like Palestinians and Israelis, Indians and Pakistanis, attacking each other's Web sites...
A: Attacking each other's Web sites doesn't really bother me. The really major terrorist groups like Osama bin Laden's, Hamas and Hezbollah don't seem to be developing really sophisticated cybertools--yet. The people I worry about are at the low end of the spectrum. Crackers doing it for fun, organized criminal groups that do it for extortion... there's a lot of that going on, some of which gets reported, but most of which doesn't.
Q: My sense is that it's about 90 percent not reported.
A: That's about right. And then at the high end of the spectrum, there are countries organizing military and intelligence units to do offensive operations. I don't know why anyone would disbelieve me when I say that.
Q: Let me distinguish between two things. I recently read a detailed report about China's development of an extensive cyberwar capability...
A: That's a matter of public record; they say very explicitly that they are developing this capability. But that's a military capability that might be used in the event of war--not just cyberwar, but war.
Q: Why would you assume that it would only be used in the event of war?
A: I'm trying to distinguish between war and the kind of maneuvering that takes place under the cover of intelligence in a relatively stable global environment.
Q: A major attack on our infrastructure to shut down our electricity or airplanes would not be an isolated incident...
A: No, it wouldn't be an isolated incident, but that doesn't mean we have to be throwing nuclear bombs at each other, either.
Q: It wouldn't be part of a larger attack?
A: No, it wouldn't, not at all. No one has done this--yet--so there's not a lot of history, not a lot of military doctrine, not even a military strategy. But I don't see why it can't be done in isolation.
Let's go back two years. China was saber rattling and threatening Taiwan, and Clinton sent two aircraft carrier battle groups into the straits between China and Taiwan-throwing a little symbolism [into the situation]. No one was shooting, but the tensions were getting up there. What if China, in those circumstances, did a little symbolism-throwing back to us while our carrier battle groups were moving into position, and all the lights went out in California? China would never have to say they did it and we would never be able to prove they did it.
My point is, you can imagine a lot of circumstances in which a demonstration would be effective. We are now helping Colombia fight the drug lords, a $3 billion program called Plan Colombia. We're spraying the coca fields to kill the plants. The Colombian drug lords have annual revenues in the area of tens of billions of dollars. They can buy the best capability. What if we get a message to stop spraying the coca fields in Putumayo, or else? And then the next day there is no electric power in Florida?
Q: So the potential is demonstrated. But to use it on a larger scale is another...
A: What's demonstration and what's use? If you take out the electric power grid in California or the telephone system in Florida, that's not just a demonstration, that's effective use, with a profoundly negative effect on the economy that will probably cause lives to be lost.
Q: Do you think our model of trust has become obsolete, as borders around countries become ambiguous and transglobal entities like the Colombian drug lords gain international power? When we talk about electronic security, we talk about locking down the network at a level which, to be effective, must pervade the entire society. The arrow seems to be moving in the direction of ubiquitous surveillance...
A: I don't think so. Utilities, for example, have been subject to every kind of attack other than cyberattack for 50 to 100 years, so there's nothing new there. People have been able to blow them up for a long time, but they have installed security and it's largely worked. The new element is the cyberthreat. I don't think we necessarily must violate people's liberties to protect cyberspace. I think there's a difference between privacy and anonymity.
Unfortunately, some privacy advocates wrap the two up together. I believe strongly in protecting privacy, but I don't believe that we can afford to have anonymity everywhere in cyberspace. It's anonymity that's used to attack cybersystems. I think I ought to be anonymous when I'm in the book racks of the public library, but I don't think I ought to be anonymous in the medical records room of a hospital.
Q: Some civil libertarians might say that you're not anonymous in the library. I routinely log into the public library, find and request books, and they're delivered to my local branch. They have a record of my transactions and know what I'm reading. Law enforcement groups can access those records. Didn't they pursue the Unabomber by seeing who was reading what?
A: I think the larger point is that there ought to be places in cyberspace where you can be anonymous. If I want to go to a Web site and read about heart problems, I don't want my insurance company knowing that. I believe that strongly. But there also ought to be places in cyberspace which you can enter only if you're willing to have some kind of certification of your identity.
Q: But isn't it tricky when we try to guarantee that? I spoke last week about privacy issues to a major insurance company. They have no end of policies assuring customers that their medical records are safeguarded. I suggested that, really, the only way to ensure that is not to collect the information in the first place. Well, of course, they can't do that, because they would lose the parity it gives them with competitors.
A: I'm saying that requiring strong encryption and strong authentication as a condition of entering some networks is reasonable because there are networks we cannot afford to have go down. If you have strong authentication and can prevent a packet from entering, then 99 percent of the cyberattacks on those networks go away. When I mention this, people get upset and say I'm trying to require authentication in all of cyberspace. I'm not. I want some of cyberspace to be anonymous so I can check out hard information with anonymity.
Q: Do you really think this is possible? My experience is that if information has value, and someone wants it, it can be obtained, one way or the other.
A: I think it is possible. You can wander around cyberspace relatively easily without anyone knowing who you are, if you want that, and I think that ought to continue. But not if you're on a network that affects banking or electric power or the operation of a telephone network.
Q: That obsolete trust model I mentioned is analogous to the difference between getting on a plane in the U.S. and getting on a plane in Israel. I suggest that right now we behave in America as if we're in America, whereas in fact--because of cyberspace--we're all in Israel. The response should be appropriate to the level of threat.
A: Oh, I agree. If you're taking the shuttle from New York to Washington you shouldn't have to go through what you do at Ben Gurion [Airport]. But if you're getting into an area in cyberspace that is the equivalent of Ben Gurion, such as going on a network that controls the U.S. military logistics system or the power grid, you should have to go through that same level of inspection.
But you don't! Not today.
Q: And you think that, using the tools we have at hand, you can safeguard those borders or boundaries without an air gap?
A: Yes, I do. You can. Well, maybe some of the networks will have to be air-gapped, or given the functional equivalent of an air gap. When it comes down to the actual management of the provider network, should the fiber be separate or is it sufficient for a VPN to be created using a particular color or frequency to safeguard that network? I don't know the answers to those things. That's some of the technical work that needs to be done. But I think it's possible to create things that are much more reliable than existing VPNs.
Q: We're talking about a gray world, where we're redefining our systems and we're in a state of quasi-warfare all the time. How do we raise consciousness among the general population without making game-playing on a secure network sound like a "hack" of the nth degree and a national threat? How can we talk about all of this without sounding like Chicken Little?
A: I think you have to be careful to avoid saying things that aren't accurate because people will come back with that kind of criticism. In some respects, Y2K didn't help, because we got public acceptance that there was going to be a Y2K problem and then, because we solved the problem, it didn't occur.
People thought we had hyped the problem.
Q: Don't you think people grasp how much work was done by so many thousands of people for years to manage Y2K?
A: No, I don't. People don't understand that the reason there was no Y2K failure was not that we were wrong in predicting it, but that we did an enormous amount of work to stop it. Some commentators say to me that it will take an electronic Pearl Harbor to get that message across. I hope that's not right. We're getting the message across, but it's slow and sometimes it's not visible to the public....
It's slow--it's narrowcasting, not mass broadcasting--but I think it's possible to persuade people and to demonstrate the threat. I think it's moving along. And I hope the transition from one administration to another doesn't cause a hiccup in that momentum.
Q: When Assistant Secretary of Defense Art Money came to DefCon in July, he thanked the hackers for not attacking the Department of Defense the previous New Year's Eve. It was quite a power rush for them to have someone of his stature say that. He also said that 22,000 attacks on the DoD took place in 1999, of which 20,000 came from recreational hackers. Their response was, fix the network and prevent the attacks. Money added that the DoD has a complex network, and not all the systems administrators are up to the task, which was unacceptable to the hackers. They said, don't leave doors open and then tell us not to walk through them.
A: It's certainly true that DoD networks are enormous. Fixing them is not something one does overnight. It's also true that they are doing more than anyone else in the world to achieve high-level security on their networks. But it will take a while to get there. Some parts of the network are not appropriately configured, and that means having the right policy and enforcing it. It also means having the right enterprise-wide management system, so you know when networks aren't properly administered.
There is also a severe personnel shortage in trained people throughout the country, not just at the DoD. We're trying to address that through scholarship programs and training programs. But until we get there we have to see if we can't scale things so the maximum can be done at high levels--so where we do have highly trained people, they get to have broad oversight over a lot of systems.
Q: Government and military organizations are also outsourcing immense projects, which makes software backdoors and trapdoors a concern. Art Money said, "Sometimes we don't even know where the code is written." What more can be done to safeguard the code itself--writing the code, examining the code? We wouldn't tolerate the error levels we tolerate in commercial software in other areas, yet that's what's used by military and government systems.
A: There's a relationship between the quality of software and what the market demands. If a market develops for highly secure software, software vendors will create it. I think that's beginning to happen.
Then we need to do R&D on systems to look for trapdoors more effectively than we do now--and also look for insider activity that's anomalous. We need systems running on the network to detect anomalous activity by insiders, not just intrusion detection systems. We're doing more and more research on getting that capability. So you try to safeguard the code but, in addition, we're trying to find ways of identifying problems with the code after it's out and identifying ways people are slipping through the holes.
You need to have a system that ensures that, when you do find a hole, the patch is sent out in a way that can be authenticated and that the systems administrator puts the patch on. We're deploying a system like that, called SafePatch, at the Department of Energy, and we'll move it next to the DoD.
Q: This requires a high level of accountability.
A: With SafePatch, they don't have much choice. It goes to their system and they have to make a decision whether or not to apply it. The record of their decision and action is then sent back to a central authority, so discretion is diminished.
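The workflow Clarke describes--an authenticated patch delivered to the administrator, a forced apply/decline decision, and a record of that decision reported back to a central authority--can be sketched in a few lines. This is a hypothetical illustration, not SafePatch's actual design: the HMAC shared-key scheme, the function names, and the record fields are all assumptions for the sake of the example.

```python
# Hypothetical sketch of a SafePatch-style workflow (NOT the real system):
# 1) verify the patch really came from the distribution authority,
# 2) record the administrator's apply/decline decision,
# 3) build the audit record that would be sent back centrally.
import hmac
import hashlib
import json
from datetime import datetime, timezone

# Assumed pre-provisioned secret shared with the distribution authority.
SHARED_KEY = b"distribution-authority-key"

def patch_is_authentic(patch_bytes: bytes, signature_hex: str) -> bool:
    """Check the authority's HMAC-SHA256 tag before the patch is trusted."""
    expected = hmac.new(SHARED_KEY, patch_bytes, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking the tag via timing.
    return hmac.compare_digest(expected, signature_hex)

def decision_record(patch_id: str, applied: bool, admin: str) -> str:
    """Audit record of the admin's decision, reported to the central authority."""
    return json.dumps({
        "patch_id": patch_id,
        "applied": applied,
        "admin": admin,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

# Simulate receipt of a signed patch from the authority.
patch = b"--- security fix payload ---"
tag = hmac.new(SHARED_KEY, patch, hashlib.sha256).hexdigest()

record = None
if patch_is_authentic(patch, tag):
    # The admin must decide either way; here they choose to apply.
    record = decision_record("patch-001", applied=True, admin="sysadmin1")
```

The key property mirrored from the interview is that the administrator cannot silently ignore a patch: authenticity is checked before anything is trusted, and the decision itself becomes an auditable artifact sent upstream.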
Q: A recent government exercise called Eligible Receiver demonstrated that 35 hackers, using tools downloaded from the Internet, could shut down large segments of America's power grid and silence the command-and-control system of the U.S. military's Pacific Command. But at least one computer security consultant said, "Show me a gun. I need empirical evidence."
A: There's lots of empirical evidence, and we do show it to people who have a "need to know."
Q: That's of course critical. So how do you realistically communicate the urgency of the threat without unduly alarming people and disclosing details or compromising sources and methods inappropriately?
A: This is no different from any other area where we see there's a threat. The average citizen for the most part relies on the government's veracity when we say it's a threat. People who have a reason to know the details are generally shown the details, even though they're classified--if they have a reason to know them. There's nothing unique about this because it's a cyberthreat, but maybe the people in cyberspace have never thought about it before.
Q: And the Internet was designed for ease of access and flow of information, not for security, and was ported so rapidly into the platform for government, commerce, everything...
A: That's right. That's the problem. The IT networks--which is probably a better way of thinking about it than the Internet--were not designed for security, they were not designed by any single group, they have grown spontaneously, and now we have to do two things simultaneously: We have to Band-Aid problems with the existing system, and we have to look ahead three or four years, when there will be a different system, and ensure it has security built in, designed as part of the system rather than glommed on.
Security should be intrinsic, designed as part of the architecture from the inside out.
Q: With the new Bush administration, what changes do you anticipate in the organizational structures you work with?
A: I have no idea what they are going to do, but I have some ideas about what they should do. I think there is a need for a top-level official who has authority to worry about the security of government systems and also to work with the private sector on the security of critical privately owned-and-operated systems. Whether you call that person a CIO--there is a lot of discussion in Congress now about creating a government CIO--or an information security officer, he or she has to be someone like the Y2K czar, who was an assistant to the president and could cause things to happen by direction, rather than just persuasion.
What we've done over the past few years is to create a partnership with the private sector regarding critical information security. Through a number of other mechanisms, we have created public-private partnerships and various structures to share information about threats.
This is the first time there has been a government attempt to deal with information security on a government-wide level and a nationwide level. It was a first toe into the water, bureaucratically, and I think it worked, but one thing we have learned is that the problem is much greater than we thought.