
The root of the problem: Bad software (security courses at UVa)

From CNET News
November 28, 2001

By Paul Festa


There's only one problem with software development these days, according to security analyst and author Gary McGraw: It isn't any good.

McGraw, noted for his books on Java security, is out with a new book that purports to tell software developers how to do it better. Titled Building Secure Software and co-authored with technologist John Viega, the book provides a plan for designing software better able to resist the hacker attacks and worm infestations that plague the networked world.

At the root of the problem, McGraw argues, lies "bad software." While the market demands that software companies develop more features more quickly, McGraw and others in the security field are sounding the alarm that complex and hastily designed applications are sure to be shot through with security holes.

Raised in eastern Tennessee, McGraw studied philosophy at the University of Virginia before getting his dual doctorate in computer and cognitive science from Indiana University. He subsequently went to work for Reliable Software Technologies, now called Cigital, and gained attention in computer security circles for the books he co-authored on Java security.

McGraw spoke to CNET News.com about the state of software development and education, outlining his 10 principles for better security and the five worst software security problems.

Q: You've identified the root of the computer security problem as bad software development. Why is software such a problem?
A: I would say there are three major factors influencing the problem. Number one is complexity. It turns out that software is way more complicated than it used to be. For example, in 1990, Windows 3.1 was two and a half million lines of code. Today, Windows XP is 40 million lines of code. And the best way to determine how many problems are going to be in a piece of software is to count how many lines of code it has. The simple metric goes like this: More lines, more bugs.

The second factor in what I like to call the "trinity of trouble" is connectivity. That is, the Internet is everywhere, and every piece of code written today exists in a networked world. And the third factor is something where we've only seen the tip of the iceberg. It's called extensibility. The idea behind an extensible system is that code will arrive from God knows where and change the environment.

Such as?
A perfect example of this is the Java Virtual Machine in a Web browser, or the .Net virtual machine, or the J2ME micro VM built into phones and PDAs. These are all systems that are meant to be extensible. With Java and .Net, you have a base system, and lots of functionality gets squirted down the wire just in time to assemble itself. This is mobile code.

The idea is that I can't anticipate every kind of program that might want to run on my phone, so I create an extensible system and allow code to arrive as it is needed. Not all of the code is baked in. There are a lot of economic reasons why this is a good thing and a lot of scary things that can happen as a result. I wrote lots about this in 1996 in the Java security book. So if you look at those three problems together--complexity, connectedness and extensibility--they are the major factors making it much harder to create software that behaves.
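
McGraw's examples of extensibility are VM-based mobile code--Java, .Net, J2ME--but the same idea exists at the native level. As a rough, hypothetical C analogue (the plug-in file and its "plugin_init" entry point are invented for this sketch, and error handling is kept minimal), a host program can load code it has never seen before at runtime and hand it control:

/* Hypothetical sketch of an extensible host program in C.
 * Build on a POSIX system with:  cc host.c -o host -ldl
 * The plug-in path and the "plugin_init" entry point are invented
 * names for this example.
 */
#include <dlfcn.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <plugin.so>\n", argv[0]);
        return 1;
    }

    /* Load a shared object the host has never seen before. */
    void *plugin = dlopen(argv[1], RTLD_NOW);
    if (plugin == NULL) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* Look up the plug-in's entry point and call it. From this moment
     * on, foreign code is running with the host's privileges. */
    void (*init)(void) = (void (*)(void))dlsym(plugin, "plugin_init");
    if (init != NULL)
        init();

    dlclose(plugin);
    return 0;
}

The security question he is raising is what constrains that newly arrived code. The Java and .Net virtual machines at least attempt to sandbox it; a native plug-in like the one above simply runs with all of the host's privileges.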

What are some of the specific problems facing programmers trying to write secure code?
There are many subtleties in writing good programs. There's too much to know, and there aren't many good methods for developing software securely. The tools that developers have are bad. Programming is hard. And popular languages like C and C++ are really awful from a security standpoint. Basically, it's not an exact science. So all of these factors work together to cause the problem.

Who else shares responsibility for this problem?
If you think about who practices security today, you'll find that it's usually a network architect, someone who understands the network, an IT person. Now, who develops software? Software architects and developers. Those guys don't talk to the security or network guys. They're often not even in the same organization. The software guys are associated with a line of business, and the IT staff is part of corporate infrastructure.

Historically, isn't part of the problem the fact that a lot of software was developed before computers were networked?
Sure, but computers have been networked for a long time now. You can't exactly say that the Internet is new. Yet we're still producing code as if it were living in a non-networked environment, which is why the connectivity thing is part of this trinity of trouble. Most developers do not learn about security. And so we see this same problem come up over and over again, like buffer overflows, for example.
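
As a concrete illustration of the buffer overflow he mentions--not an example from the book, and with names invented for the sketch--here is the classic C mistake: copying attacker-controlled input into a fixed-size buffer with no length check.

/* Classic buffer overflow, shown in miniature. Names are invented
 * for this example. Compile with stack protection disabled to see
 * the corruption directly:
 *   cc -fno-stack-protector overflow.c -o overflow
 */
#include <stdio.h>
#include <string.h>

static void handle_input(const char *input)
{
    char buffer[16];          /* fixed-size buffer on the stack */

    /* No length check: any input longer than 15 characters writes
     * past the end of buffer and corrupts adjacent stack memory. */
    strcpy(buffer, input);

    printf("got: %s\n", buffer);
}

int main(int argc, char **argv)
{
    if (argc > 1)
        handle_input(argv[1]);   /* attacker-controlled data */
    return 0;
}

The fix is one line--for instance, snprintf(buffer, sizeof buffer, "%s", input) instead of the unchecked strcpy--which is part of the point: the language makes the unsafe version the path of least resistance.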

You write a lot about the lack of security education in the computer science field. What should be done to bridge the education gap?
One thing is that some universities are beginning to teach security, sometimes even software security--UC Davis, the University of Virginia, Purdue, Princeton. And then, the fact is that the world is catching on. The world realizes that if we want to get a proactive handle on computer security, we're going to have to pay more attention to software.

Where has the security focus been, if not on software?
It's been on firewalls.

Wait, aren't firewalls software?
They're software, but they're supposed to be a partial solution to the security problem. You're connected to the Internet and all your ports are wide open, so you get a firewall so only a few of your ports are going to be open, and that lessens your exposure. But the problem is that what's listening to those few ports through the firewall is a piece of code. A lot of people treat security as a network architecture problem. They think, "If I put a firewall between myself and the evil, dangerous Internet, I'll be secure." And I say, "Good start, but your firewall has holes in it on purpose." What's on the other side of those holes?
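
To make "what's on the other side of those holes" concrete, here is a bare-bones, hypothetical sketch in C (the port number, buffer size and behavior are arbitrary choices, and error handling is omitted): a service listening on a port the firewall has been told to allow. Everything that happens to the bytes it receives happens in code the firewall never inspects.

/* Bare-bones TCP listener: a sketch of "the code on the other side of
 * the firewall hole." Port, buffer size and behavior are arbitrary,
 * and error handling is omitted for brevity. POSIX sockets; compile
 * with:  cc listener.c -o listener
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    int srv = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(8080);   /* the port the firewall lets through */

    bind(srv, (struct sockaddr *)&addr, sizeof addr);
    listen(srv, 16);

    for (;;) {
        int client = accept(srv, NULL, NULL);
        if (client < 0)
            continue;

        char request[512];
        ssize_t n = recv(client, request, sizeof request - 1, 0);
        if (n > 0) {
            request[n] = '\0';
            /* The firewall's job ended when it let this connection in.
             * Whatever this code now does with the request--parsing it,
             * copying it, handing it to other components--is the
             * software exposure McGraw is talking about. */
            printf("request: %s\n", request);
        }
        close(client);
    }
}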

In the back of their minds, people know that security problems are caused by bad software, like (Microsoft's) IIS Web server. But they try to solve the problem in the wrong way--like firewalls--and the second way is magic crypto fairy dust.

You're not a crypto fan?
Sure, I'm a crypto fan, but it's not magic dust. Cryptography is not security. I think even Bruce Schneier knows this now. (Schneier is an author on the topic of security and a founder of Counterpane Internet Security, a security services company.) Cryptography is a very useful tool for security. But what most people think of as software security is kind of scary. "I'll just make it use SSL (Secure Sockets Layer, a transaction security standard). OK, I'm done." That's a very common approach.

And what's wrong with it?
That does not make your software secure. It secures the communication channel, but not the code. Cryptography's very, very useful, but it's not a complete solution. In fact, cryptography can only solve about 15 percent of serious problems, according to some studies. When you ask people about security, they will name three things: firewalls, cryptography and antivirus software. And that trinity is incomplete because people are not thinking, "Where are these vulnerabilities that are being exploited by the bad guys coming from?"
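
A small, hypothetical sketch of the distinction he is drawing (the function name and file path are invented): the routine below splices user input into a shell command, and it is exactly as exploitable whether that input arrived over a plain socket or over an SSL-protected one, because encryption only protects the bytes in transit, not what the code does with them afterward.

/* SSL secures the channel, not the code. The routine below is equally
 * broken whether its input came from recv() on a plain socket or from
 * an SSL-protected read--TLS only guarantees the attacker's input
 * arrived privately and intact. Names and paths are invented.
 * Compile with:  cc report.c -o report
 */
#include <stdio.h>
#include <stdlib.h>

static void run_report(const char *user_supplied_name)
{
    char cmd[256];

    /* Flaw: untrusted input is spliced into a shell command line.
     * A "name" like "march; rm -rf ~" becomes two shell commands. */
    snprintf(cmd, sizeof cmd, "cat /var/reports/%s.txt", user_supplied_name);
    system(cmd);
}

int main(void)
{
    /* Stand-in for data read off the network, encrypted or not. */
    run_report("march; echo INJECTED");
    return 0;
}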

What is the heart, the root, of the problem?
The root of the problem is bad software. For example, why does BIND (Berkeley Internet Name Domain) get hacked? Because it was badly designed.

What's wrong with it?
The idea is, "Let's make this software do something cool," and nobody thinks, "What about security?" Security is about getting nothing done, and that's the opposite of cool things. So there's this bad trade-off between getting stuff done and being secure. And when people develop software they generally put more emphasis on functionality than on security. This is true of almost every piece of software you can think of--Microsoft, Linux, all of it. If you're selling software, what sells? Functionality sells.

So the problem is the market.
The market certainly seems to prefer functionality over security, yeah. But the market seems to be becoming aware of the security issues. Horrible things are happening in the world that are making people much more cognizant of security. But even without all of the despicable terrorist activity in September, people were becoming more aware of security because they'd been using the Net for a while and coming to realize that there are bad people on the Net, just as there are in the rest of the world. The Net is not a warm and fuzzy place.

How has this increased awareness manifested itself?
You can see it in the growth of the security market. Another thing to think about: Back in the early days, in the mid- and late '90s, there was much more emphasis on dot-com, cool things--get it out quick, get mind share--and very little emphasis on how to protect yourself, to watch out, to assume nothing.

Are you saying that the crash in the dot-com economy might have a silver lining with regard to security?
Absolutely. Because dot-coms were producing lots of code very fast that was really pretty awful. That said, eliminating that effect alone cannot overcome the trinity of trouble.

How does what happened on Sept. 11 affect the computer security agenda?
There are some very thorny issues there. One is, What about privacy? Security and privacy are deeply intertwined. And if you think about it, good security involves figuring out who people are: Who's trying to connect to me? Who's looking at my content? Who's logging in to my system? Who deleted that record? Who made that deposit? Security is about authenticating users, which is the antithesis of anonymity, and to some extent privacy as well. If you cannot be anonymous, you cannot hide anymore.

What do you think of things like Echelon and Carnivore? (Both are controversial U.S. surveillance systems.)
My opinion is that they're very powerful tools, and you have to be extremely careful in how they're used. I don't think computer security is a silver bullet for dealing with terrorists, but I do believe we have some very serious infrastructure vulnerabilities in the computer security arena.

We have a lot of national infrastructure that's all connected to the network, like, say, the communications network, the electric grid, the train control system, nuclear power plants, gas pipelines, the CDC, the DOD. And all these things are connected to public infrastructure, and there are vulnerabilities in that infrastructure. And we need to do something about that as a nation.

In your book you outline 10 principles for writing secure software. The fourth principle has to do with so-called "security by obscurity," which is how many people in the security community characterize the DMCA (Digital Millennium Copyright Act).
If you think about the DMCA, there are organizations like the RIAA (Recording Industry Association of America) that are producing content-protection mechanisms that do not work. And their solution, instead of building ones that do work, is to pass a law forbidding people from telling anyone why they don't work. It's a great example of "The Emperor's New Clothes," and what we have done is outlaw the little boy from saying that the emperor has no clothes.

What's open source's role in the security-by-obscurity debate?
Open-source software is neither more nor less secure than closed-source software. And the whole issue of whether open source is more secure is a red herring. We have a chapter in the book about it. Security by obscurity doesn't work. But just because you have your source code sitting around in public doesn't mean someone's going to do a free security review on it, either, which is what the open-source guys think. That's wrong.

People think that because you can look under its hood, open-source software is more vulnerable to attack.
Incorrect. If I have executable code, I can decompile it, I can disassemble it, I can poke it and prod it and steal all its little secrets, just as if I had the source code. I don't need the source code. But get this: The DMCA expressly forbids me from poking and prodding and recompiling that. That's ridiculous. The DMCA should be repealed.

You write about a natural evolution toward complexity that is contrary to the principles behind good security.
It's also known as "feature creep," or a propensity toward what people know as "bloatware." People only use 10 percent of the features in the program, but the rest of it has to go in because some specialized customers wanted it, or the vice president of yadda yadda wanted it.

Here's the bad news: Even if you follow the 10 principles, there's no guarantee that your software will be secure. There's no checklist for this. You have to think about why you're building what you're building, what you have to protect, what you have to lose, who might attack you, what kind of resources they have available to them, and how long you want to protect it from them. Only when you answer those questions can you begin to design something that is secure enough.

What do you mean by "penetrate and patch"?
The idea today is somebody will find a hack for a program, then the vendor runs around like a chicken with its head cut off to provide a patch to fix the program. There are a lot of problems with penetrate and patch. One is that the patches are put together so quickly that they have their own security holes. They introduce more problems than they fix. The other is that most systems administrators ignore patches and don't apply them. The reason is that their stuff just barely works, and they don't want it to screw up.

