The Neverending Quest for IT Security

If you ever have a need to burn off some excess optimism, try taking a look through some of the statistics out there about success and failure rates for enterprise IT projects; it’s pretty ugly. Although the specifics of the statistics and survey data vary, studies have historically suggested failure rates as high as 75 percent for technology projects, counting both projects that don’t complete at all and projects that run into time, budget, or quality “challenges.” That means it’s quite a bit more likely for an IT project to fail than to succeed.

In general, this isn’t the best news, but those of us in information security should take particular notice of it. Why? Because security has the same challenges; the failures are just more difficult to measure.

With an IT project like an application deployment, software development effort, or cloud migration, it’s easy to tell whether we’re in the successful 25 percent. That’s because a successful end state is easy to recognize.

For example, if we’re chartered with building a new application, we know we’re done when we decommission the old infrastructure and users are actively using the new system. Because the effort is time-bound, we can understand how efficient we were throughout the process as well: we can compare how long it actually took (and how much it cost) with our initial budgets to see if we met expectations or if we were off.

Security, on the other hand, is different: There’s no end-state where we can call ourselves “secure” and move on to something else. Because of this, gauging success and failure is difficult. It’s not that security doesn’t have the same challenges and complexities that other projects have — like resource availability, competing priorities, and implementation complexity. It’s just that it’s so very easy to assume we are doing well when we’re really not.

We can assume that we are doing well, but it’s important for us to note two things: 1) There are some pretty significant forces acting against us, forces sufficient to cause most IT projects to fail, and 2) We have very little in the way of instrumentation to measure those forces. In practice, this is a dangerous place to be: it’s like flying a plane in zero visibility without reliable instruments to provide guidance. It’s a place where we can’t stay long term.
