This article is an elaboration on ideas I originally developed in a post to the project blog for my pet programming language project. The ideas remain as valid now (if not more so) as they did eight months ago when I wrote the original piece.
The year 2015 saw a great deal of publicity surrounding a number of high-profile computer security incidents. While this trend has been ongoing for some time now, the past year marked the point at which the problem entered the public consciousness, becoming a national news item and likely a key issue in the coming elections and beyond.
“The Security Problem” as I have taken to calling it is not a simple issue and it does not have a simple solution. It is a complex, multi-faceted problem with a number of root causes, and it cannot be solved without adequately addressing each of those causes in turn. It is also a crucial issue that must be solved in order for technological civilization to continue its forward progress and not slip into stagnation or regression. If there is a single message I would want to convey on the subject, it is this: the security problem can only be adequately addressed by a multitude of different approaches working in concert, each addressing an aspect of the problem.
Trust: The Critical Element
In late September, I did a “ride-along” of a training program for newly-hired security consultants. Just before leaving, I spoke briefly to the group, encouraging them to reach out to us and collaborate. My final words, however, were broader in scope: “I think every era in history has its critical problems that civilization has to solve in order to keep moving forward, and I think the security problem is one of those problems for our era.”
Why is this problem so important, and why would its existence have the potential to block forward progress? The answer is trust. Trust, specifically the ability to trust people about whom we know almost nothing and whom we may never meet, is arguably the critical element that allows civilization to exist at all. Consider what might happen if that kind of trust did not exist: we would be unable to create and sustain basic institutions such as governments, hospitals, markets, banks, and public transportation.
Technological civilization requires a much higher degree of trust. Consider, for example, the amount of trust that goes into using something as simple as checking your bank account on your phone. At a very cursory inspection, you trust the developers who wrote the app that allows you to access your account, the designers of the phone, the hardware manufacturers, the wireless carrier and their backbone providers, the bank’s server software and their system administrators, the third-party vendors that supplied the operating system and database software, the scientists who designed the crypto protecting your transactions and the standards organizations who codified it, the vendors who supplied the networking hardware, and this is just a small portion. You quite literally trust thousands of technologies and millions of people that you will almost certainly never meet, just to do the simplest of tasks.
The benefits of this kind of trust are clear: the global internet and the growth of computing devices has dramatically increased efficiency and productivity in almost every aspect of life. However, this trust was not automatic. It took a long time and a great deal of effort to build. Moreover, this kind of trust can be lost. One of the major hurdles for the development of electronic commerce, for example, was the perception that online transactions were inherently insecure.
This kind of progress is not permanent, however; if our technological foundations prove themselves unworthy of this level of trust, then we can expect to see stymied progress or in the worst case, regression.
The Many Aspects of the Security Problem
As with most problems of this scope and nature, the security problem does not have a single root cause. It is the product of many complex issues interacting, and therefore its solution will necessarily involve committed efforts on multiple fronts and multiple complementary approaches to address the issues. There is no simple cause, and no “magic bullet” solution.
The contributing factors to the security problem range from highly technical (with many aspects in that domain), to logistical, to policy issues, to educational and social. In fact, a complete characterization of the problem could very well be the subject of a graduate thesis; the exposition I give here is therefore only intended as a brief survey of the broad areas.
As the security problem concerns computer security (I have dutifully avoided gratuitous use of the phrase “cyber”), it comes as no surprise that many of the contributing factors to the problem are technological in nature. However, even within the scope of technological factors, we see a wide variety of specific issues.
Risky Languages, Tools, and APIs
Inherently dangerous or risky programming language or API features are one of the most common factors that contribute to vulnerabilities. Languages that lack memory safety can lead to buffer overruns and other such errors (which are among the most common exploits in systems), and untyped languages admit a much larger class of errors, many of which lead to vulnerabilities like injection attacks. Additionally, many APIs are improperly designed and lead to vulnerabilities, or are designed in such a way that safe use is needlessly difficult. Lastly, many tools can be difficult to use in a secure manner.
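To make the injection risk concrete, here is a minimal Python sketch (the table, column names, and data are all hypothetical) showing how building a SQL query by string interpolation lets attacker-controlled input rewrite the query itself, while a parameterized query treats the same input strictly as data:

```python
import sqlite3

# Hypothetical in-memory database for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "nobody' OR '1'='1"  # attacker-controlled string

# Unsafe: string interpolation lets the input escape the literal and
# alter the WHERE clause, so every row matches.
unsafe = conn.execute(
    f"SELECT role FROM users WHERE name = '{user_input}'"
).fetchall()

# Safe: a parameterized query binds the input as a value, never as SQL.
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()

print(unsafe)  # [('admin',)] -- the injection succeeded
print(safe)    # [] -- no user with that literal name exists
```

The safe-by-default design point is visible even here: an API that only accepted parameterized queries would make the vulnerable version impossible to write.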
We have made some headway in this area. Many modern frameworks are designed in such a way that they are “safe by default”, requiring no special configuration to satisfy many safety concerns and requiring the necessary configuration to address the others. Programming language research over the past 30 years has produced many advanced type systems that can make stronger guarantees, and we are starting to see these enter common use through languages like Rust. My current employer, Codiscope, is working to bring advanced program analysis research into the static program analysis space. Initiatives like the NSF DeepSpec expedition are working to develop practical software verification methods.
However, we still have a way to go here. No mature engineering discipline relies solely on testing: civil engineering, for example, accurately predicts the tolerances of a bridge long before it is built. Software engineering has yet to develop methods with this level of sophistication.
Modern systems involve a dizzying array of configuration options. In multi-level architectures, there are many different components interacting in order to implement each bit of functionality, and all of these need to be configured properly in order to operate securely.
Misconfigurations are a very frequent cause of vulnerabilities. Enterprise software components can have hundreds of configuration options per component, and we often string dozens of components together. In this environment, it becomes very easy to miss a configuration option or accidentally fail to account for a particular case. The fact that there are so many possible configurations, most of which are invalid, further exacerbates the problem.
Crypto has also tended to suffer from usability problems. Crypto is particularly sensitive to misconfigurations: a single weak link undermines the security of the entire system. However, it can be quite difficult to develop and maintain hardened crypto configurations over time, even for the technologically adept. The difficulty of setting up software like GPG for non-technical users has been the subject of actual research papers. I can personally attest to this as well, having guided multiple non-technical people through the setup.
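As a small illustration of what a hardened crypto configuration looks like when the platform helps, Python's standard-library ssl module can build a client context with safe defaults; the specific policy below (a TLS 1.2 floor plus mandatory certificate and hostname verification) is an illustrative baseline of my own choosing, not a complete hardening guide:

```python
import ssl

# Sketch of a hardened client-side TLS context using the standard
# library; create_default_context() already enables certificate and
# hostname verification, and we add an explicit protocol floor.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse SSLv3, TLS 1.0/1.1

# These two are already the defaults, restated for emphasis: the peer
# must present a valid certificate matching the hostname we dialed.
ctx.check_hostname = True
ctx.verify_mode = ssl.CERT_REQUIRED
```

The point is the design choice: because the defaults are already safe, the "weak link" failure mode requires someone to actively loosen the configuration rather than merely forget to tighten it.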
This problem can be addressed, however. Configuration management tools allow configurations to be set up from a central location, and managed automatically by various services (CFEngine, Puppet, Chef, Ansible, etc.). Looking farther afield, we can begin to imagine tools that construct configurations for each component from a master configuration, and to apply type-like notions to the task of identifying invalid configurations. These suggestions are just the beginning; configuration management is a serious technical challenge, and can and should be the focus of serious technical work.
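A toy sketch of what "type-like notions" for configurations might mean in practice (all option names here are hypothetical): a schema assigns each option an expected type and a validity predicate, so invalid configurations are rejected mechanically before deployment rather than discovered in production.

```python
# Hypothetical schema: option name -> (expected type, validity predicate).
SCHEMA = {
    "port":        (int,  lambda v: 1 <= v <= 65535),
    "tls_enabled": (bool, lambda v: True),
    "min_tls":     (str,  lambda v: v in ("1.2", "1.3")),
}

def validate(config):
    """Return a list of error messages; an empty list means the config passes."""
    errors = []
    for key, (expected, is_valid) in SCHEMA.items():
        if key not in config:
            errors.append(f"missing option: {key}")
        elif not isinstance(config[key], expected):
            errors.append(f"{key}: expected {expected.__name__}")
        elif not is_valid(config[key]):
            errors.append(f"{key}: invalid value {config[key]!r}")
    return errors

print(validate({"port": 443, "tls_enabled": True, "min_tls": "1.3"}))  # []
print(validate({"port": 0, "tls_enabled": True, "min_tls": "1.0"}))
```

A real system would also need cross-component rules (for example, that two components agree on a protocol version), which is where the analogy to type checking across module boundaries becomes most useful.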
Legacy systems have long been a source of pain for technologists. They represent a kind of debt that is often too expensive to pay off in full, but which exacts a recurring tax on resources in the form of legacy costs (compatibility issues, poor performance, blocked upgrades, unusable systems, and so on). To those most directly involved in the development of technology, legacy systems tend to be a source of chronic pain; from the standpoint of budgets and limited resources, however, they are often a kind of pain to be managed rather than cured, as wholesale replacement is far too expensive and risky to consider.
In the context of security, however, the picture is often different. These kinds of systems are often extremely vulnerable, having been designed in a time when networked systems were rare or nonexistent. In this context, they are more akin to rotten timbers at the core of a building. Yes, they are expensive and time-consuming to replace, but the risk of not replacing them is far worse.
The real danger is that the infrastructure where vulnerable legacy systems are most prevalent (power grids, industrial facilities, mass transit, and the like) is precisely the sort of infrastructure where a breach can do catastrophic damage. We have already seen an example of this in the real world: the Stuxnet malware was employed to destroy uranium-enrichment centrifuges.
Replacing these legacy systems with more secure implementations is a long and expensive proposition, and doing it in a way that minimizes costs is a very challenging technological problem. However, this is not a problem that can be neglected.
Cultural and Policy Factors
Though computer security is technological in nature, its causes and solutions are not limited solely to technological issues. Policy, cultural, and educational factors also affect the problem, and must be a part of the solution.
The most obvious non-technical influence on the security problem is policy. The various policy debates that have sprung up in the past years are evidence of this; however, the problem goes much deeper than these debates.
For starters, we are currently in the midst of a number of policy debates regarding strong encryption and how we as a society deal with the fact that such a technology exists. I will make my stance on the matter quite clear: I am an unwavering advocate of unescrowed, uncompromised strong encryption as a fundamental right (yes, there are possible abuses of the technology, but the same is true of such things as due process and freedom of speech). Despite my hard-line pro-crypto stance, I can understand how those who don’t understand the technology might find the opposing position compelling. Things like golden keys and abuse-proof backdoors certainly sound nice. However, the real effect of pursuing such policies would be to fundamentally compromise systems and infrastructure within the US and turn defending against data breaches and cyberattacks into an impossible problem. In the long run, this erodes the kind of trust in technological infrastructure of which I spoke earlier and bars forward progress, leaving us to be outclassed in the international marketplace.
In a broader context, we face a problem here that requires rethinking our policy process. We have in the security problem a complex technological issue (too complex for even the most astute and deliberative legislator to develop true expertise on through part-time study), but one where the effects of uninformed policy can be disastrous. In the context of public debate, it does not lend itself to two-sided thinking or simple solutions, and attempting to force it into such a model loses too much information to be effective.
Additionally, the problem goes deeper than issues like encryption, backdoors, and dragnet surveillance. Much of the US infrastructure runs on vulnerable legacy systems, as I mentioned earlier, and replacing these systems with more secure, modern software is an expensive and time-consuming task. Moreover, the need to invest in our infrastructure in this way barely registers in public debate, if at all. Yet doing so is essential to fixing one of the most significant sources of vulnerabilities.
Education, or the lack thereof, also plays a key role in the security problem. Even top-level computer science curricula fail to teach students how to think securely and develop secure applications, or even to impress upon students the importance of doing so. This is understandable: even a decade ago, the threat level to most applications was nowhere near where it is today. The world has changed dramatically in this regard in a rather short span of time. The proliferation of mobile devices and connectedness, combined with a tremendous upturn in the number and sophistication of attacks launched against systems, has led to a very different sort of environment than what existed even ten years ago (when I was finishing my undergraduate education).
College curricula are necessarily a conservative institution; knowledge is expected to prove its worth and go through a process of refinement and sanding off of rough edges before it reaches the point where it can be taught in an undergraduate curriculum. By contrast, much of the knowledge of how to avoid building vulnerable systems is new, volatile, and thorny: not the sort of thing traditional academia likes to mix into a curriculum, especially in a mandatory course.
Such a change is necessary, however, and this means that educational institutions must develop new processes for effectively educating people about topics such as these.
While it is critical to have infrastructure and systems built on sound technological approaches, it is also true that a significant number of successful attacks on large enterprises and individuals alike make primary use of human factors and social engineering. This is exacerbated by the fact that we, culturally speaking, are quite naive about security. There are security-conscious individuals, of course, but most people are naive to the point that an attacker can typically rely on social engineering with a high success rate in all but the most secure of settings.
Moreover, this naivety affects everything else, ranging from policy decisions to what priorities are deemed most important in product development. The lack of public understanding of computer security allows bad policy such as back doors to be taken seriously, and allows insecure and invasive products to thrive on marketing claims that simply don’t reflect reality (SnapChat remains one of the worst offenders in this regard, in my opinion).
The root cause behind this is that cultures adapt even more slowly than the other factors I’ve mentioned, and our culture has yet to develop effective ways of thinking about these issues. But cultures do adapt; we all remember sayings like “look both ways” and “stop, drop, and roll” from our childhood, both of which teach simple but effective ways of managing more basic risks that arise from technological society. This sort of adaptation also responds to need. During my own youth and adolescence, the danger of HIV drove a number of significant cultural changes in a relatively short period of time that proved effective in curbing the epidemic. While the issues surrounding the security problem represent a very different sort of danger, they are still pressing issues that require an amount of cultural adaptation to address. A key step in addressing the cultural aspects of the security problem comes down to developing similar kinds of cultural understanding and awareness, and promoting behavior changes that help reduce risk.
I have presented only a portion of the issues that make up what I call the “computer security problem”. These issues are varied, ranging from deep technological issues obviously focused on security to cultural and policy issues. There is not one single root cause to the problem, and as a result, there is no one single “silver bullet” that can solve it.
Moreover, if the problem is this varied and complex, then we can expect the solutions to each aspect of the problem to likewise require multiple different approaches coming from different angles and reflecting different ways of thinking. My own work, for example, focuses on the language and tooling issue, coming mostly from the direction of building tools to write better software. However, there are other approaches to this same problem, such as sandboxing and changing the fundamental execution model. All of these angles deserve consideration, and the eventual resolution to that part of the security problem will likely incorporate developments from each angle of approach.
If there is a final takeaway from this, it is that the problem is large and complex enough that it cannot be solved by the efforts or approach of a single person or team. It is a monumental challenge requiring the combined tireless efforts of a generation’s worth of minds and at least a generation’s worth of time.