Reactions to Burr-Feinstein and Congressional Hearings

The relationship of government and technology has been thrust to the forefront in the past two weeks, with the official introduction of the Burr-Feinstein anti-encryption bill, comments by a US Attorney about banning the “import of open-source encryption software”, and two congressional hearings on technological issues: one by the committee on energy and commerce, and one by the committee on oversight and government reform.  All of this points to a need for greater understanding of the issues surrounding strong encryption, both in the context of this debate and in the government at large.

Strong Encryption is Indispensable

Strong encryption is a technological necessity for building and operating computing and communication systems in the modern world.  It is simply not feasible, and in many cases not possible, to design these systems securely without building in strong encryption at a fundamental level.  We are seeing an increase in attacks against computing and communication infrastructure, and there is no reason to believe this trend will stop in the foreseeable future.  Simply put, strong encryption is indispensable.

To fully understand the issue, however, we need to explore the specifics in greater detail.

Role of Strong Encryption in Secure Systems

Strong encryption plays a vital role in protecting information in modern computing and communication systems.  Cryptography deals with methods of secure communication over insecure channels.  Because of the scale, the distribution, and the inherent physics of modern communication and computing technology, it is simply not feasible (and in many cases, not even possible) to design and deploy “secure” channels and computing devices.

For example, it would be prohibitively expensive to replace the telecommunications grid with physically secure and shielded land-lines; moreover, this physical security system would be so large as to require its own “secure” communication channels.  Wireless communication, on the other hand, can’t be secured by physical means at all.  Similarly, physically securing every computing device is not even remotely possible, particularly with the proliferation of mobile devices.  Finally, strong encryption is critical for protecting systems from threats like malicious insiders, physical theft or assault, persistent threats, and attackers who are able to breach the outer defenses.

Even with physical security, there are still systems that inherently rely on strong encryption to function.  Authentication systems, which provide a means of securely identifying oneself, inherently depend on the ability to present unforgeable credentials and to communicate and store those credentials in a manner that prevents theft.  Basic authentication mechanisms rely on encryption to communicate passwords and store them securely.  Advanced authentication mechanisms such as the Kerberos protocol, certificate authentication, and CHAP protocols incorporate strong encryption on a more fundamental level, relying on its properties as part of their design.  These systems are especially high-value targets, as they serve as the “gatekeepers” to other parts of the system.  If an attacker is able to forge or steal authentication materials, they can gain arbitrary access to the system.
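
To make this dependence concrete, the following is a minimal sketch (in Python, standard library only) of the challenge-response pattern that protocols in the CHAP family build on; the secret and message layout here are illustrative, not any specific standard.  The password never crosses the wire; instead, the client proves knowledge of it by computing a keyed digest over a fresh challenge:

    import hmac, hashlib, secrets

    shared_secret = b"provisioned out-of-band"    # illustrative secret

    # Server: issue a fresh, unpredictable challenge for this login attempt.
    challenge = secrets.token_bytes(16)

    # Client: prove knowledge of the secret without ever transmitting it.
    response = hmac.new(shared_secret, challenge, hashlib.sha256).digest()

    # Server: recompute the expected response and compare in constant time.
    expected = hmac.new(shared_secret, challenge, hashlib.sha256).digest()
    assert hmac.compare_digest(response, expected)

Forging a valid response without the secret requires defeating the underlying cryptographic primitive, which is precisely why weakening that primitive weakens the gatekeeper itself.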

Necessity of Increased Use of Strong Encryption

Despite several assertions in the ongoing debates about “rapidly advancing technologies” and “going dark”, strong encryption is nothing new.  The methods and ciphers have existed for decades, and various protocols and technologies have been using them for the better part of twenty years.  Indeed, in certain applications such as banking, medical records, and payment processing, use of encryption is mandated by law.  Even when there are no statutory requirements, strong encryption has been used for decades in many applications to mitigate the civil-liability risk of data loss.

Prior to 2013, areas such as commodity operating systems, mobile devices, communication protocols, and cloud storage lagged behind the aforementioned higher-risk domains in their use of strong encryption, driven largely by a lack of perceived need.  However, the increasing interconnectedness of devices and systems, coupled with a steady rise in the number, scope, and sophistication of cyberattacks (including attacks sponsored by organized crime, corporate, and nation-state entities), has driven vendors to build strong encryption into new products by default.  This is not criminals “going dark”.  Rather, it is the world at large reacting to an increasingly hostile climate by shoring up its defenses.

This strengthening of defenses is necessary; the data breaches of 2015 are quite literally too numerous to cite here and affected everything from major retailers to critical government systems.  This trend is expected to continue if not increase.  Because attackers tend to target the weak links in a system, we can expect systems that fail to employ strong encryption in their design to become targets for attacks.  Moreover, because of the increasing interconnectivity of devices and sophistication of attacks, we can expect these systems to become entry-points for multi-stage attacks and persistent infiltration.

The Fallacy of Secure Back-Doors

The notion of a secure back-door or “golden key” is a theme that has surfaced again and again in the ongoing debate on encryption.  Moreover, this notion played a central role in the similar debate that took place in the 1990s.

In 1994, there was a push to legislate the Escrowed Encryption Standard (EES) as legally-usable crypto and to ban unescrowed encryption.  The EES hardware implementation was named “Clipper”, and was designed to provide the very sort of back-door access to encrypted traffic that has been the subject of recent debates.  This push lost its momentum when researchers discovered critical flaws in the escrow protocol.  A very recent attempt by the British GCHQ to design a similar escrowed system has been found to have similar flaws.

In the mid-2000s, the NSA introduced a surreptitious back-door into the Dual-EC random-number generation standard.  This back-door was designed to allow the NSA to reconstruct the stream of random numbers generated by the algorithm, thus allowing them to decrypt traffic.  Third-party researchers speculated about the vulnerability and developed working exploits, and the Snowden documents ultimately revealed it to be the result of a deliberate effort by the NSA.  This back-door has been a root cause of at least one high-profile breach: the Juniper ScreenOS vulnerability, which affected a number of high-security networks including the U.S. State and Treasury departments.
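
To see the mechanics of such a back-door, consider the following toy analogue in Python.  This is emphatically not the actual Dual-EC construction (which uses elliptic curves); it is a sketch using modular exponentiation, where the published parameters P and Q hide the secret relation Q = P^d:

    p = (1 << 61) - 1                 # a Mersenne prime
    P = 3                             # public base
    d = 1234567891                    # SECRET back-door, coprime to p - 1
    Q = pow(P, d, p)                  # published; its relation to P is hidden

    def prng_step(state):
        output = pow(Q, state, p)     # the "random" number the PRNG emits
        next_state = pow(P, state, p)
        return output, next_state

    def backdoor_recover(output):
        d_inv = pow(d, -1, p - 1)     # modular inverse (Python 3.8+)
        return pow(output, d_inv, p)  # equals P^state: the next state

    state = 42
    out, nxt = prng_step(state)
    assert backdoor_recover(out) == nxt

A single observed output is enough for the holder of d to recover the generator’s internal state and predict every subsequent “random” number, which is exactly the capability the Dual-EC back-door provided to its designers.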

These real-world cases demonstrate the practical danger of back-doors.  On a more abstract level, a “secure” back-door is a paradox for the simple fact that any back-door is inherently a vulnerability.  Introduction of covert vulnerabilities into security systems has been one of the leading causes of exploits.  Doing so introduces added complexity and anomalies that an experienced researcher can detect and ultimately find ways to exploit.

Moreover, even if a back-door could be engineered in such a way as to be undetectable, there still remains the problem of protecting the information necessary to exploit the back-door.  Were back-doored encryption to be mandated by law, the information necessary to exploit it would be invaluable, as it would provide uncontrolled, unmitigated access into every system using the standard.  We can and should expect rival nation-state entities to employ every means to steal this information and were they to succeed, the result would be a severe national security crisis.

There is a scientific consensus among security researchers that back-doors cannot be engineered in a way that does not introduce severe security risks.  Moreover, it is very telling that agencies such as GCHQ and the NSA have not produced such a system themselves, despite their considerable mathematical and computational resources and decided interest in doing so.  To ignore these facts and attempt to mandate back-doors would introduce critical and systemic vulnerabilities and grave risks to U.S. national security.

The Futility of an Encryption Ban

Even if secure back-doored cryptography were possible and the access materials could somehow be kept secure from attackers, a ban on strong encryption would be futile for the simple fact that it could not effectively be enforced.  It would be impossible to prevent anyone from obtaining the source code for strong crypto, or at least the knowledge of how to implement it, even within the U.S., let alone outside of it.

For starters, encryption software is ubiquitous.  Strong crypto has been the subject of extensive academic research for over half a century and has been written about in dozens of textbooks and thousands of research papers.  Exact descriptions of strong encryption algorithms have been published in international standards by multiple bodies.  There are many implementations of these algorithms in both open- and closed-source software used around the world.  Moreover, these algorithms can be printed on a few sheets of paper, or even on a T-shirt.
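
As an illustration of just how compact these algorithms are, here is RC4 (a cipher in wide use for two decades, though considered broken today) in its entirety, in about a dozen lines of Python:

    def rc4(key: bytes, data: bytes) -> bytes:
        # Key scheduling: permute the state array S using the key.
        S = list(range(256))
        j = 0
        for i in range(256):
            j = (j + S[i] + key[i % len(key)]) % 256
            S[i], S[j] = S[j], S[i]
        # Generate the keystream and XOR it against the data.
        out, i, j = bytearray(), 0, 0
        for byte in data:
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]
            out.append(byte ^ S[(S[i] + S[j]) % 256])
        return bytes(out)

    ciphertext = rc4(b"a secret key", b"attack at dawn")
    assert rc4(b"a secret key", ciphertext) == b"attack at dawn"

Banning something this small and this widely memorized is simply not an enforceable proposition.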

Attempting to ban access to strong encryption is tantamount to attempting to ban the possession and implementation of widespread and pervasive knowledge.  Banning knowledge is as futile as it is misguided, and even if it could work, it would apply only to U.S. persons.  It would not prevent foreigners from obtaining and using knowledge about crypto.  Moreover, there is a long history of case law that would render any such action unconstitutional.  Griswold v. Connecticut (1965) arose from a 19th-century attempt to ban the possession and use of knowledge; more recently, Bernstein v. United States established the publication of open-source software as a form of free speech, protected by the First Amendment.

Lastly, even if such a ban could stand legally, strong encryption could still be utilized through the related technique of steganography, which provides methods for surreptitiously embedding information inside seemingly innocuous data.  As a simple example, a hidden, encrypted message or file can be disguised as ordinary background noise in an image.  It is easy to see how this can be used to defeat any attempt to enforce a ban on encryption.
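
For the technically inclined, here is a minimal sketch of the idea in Python, operating on raw cover bytes (a real tool would read and write an actual image format): the payload is spread across the least-significant bits of the cover data, where, particularly if it is encrypted first, it is statistically indistinguishable from sensor noise:

    import os

    def embed(samples: bytearray, payload: bytes) -> bytearray:
        bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
        assert len(bits) <= len(samples), "cover data too small"
        for n, bit in enumerate(bits):
            samples[n] = (samples[n] & 0xFE) | bit   # overwrite the low bit
        return samples

    def extract(samples: bytearray, length: int) -> bytes:
        return bytes(
            sum((samples[k * 8 + i] & 1) << i for i in range(8))
            for k in range(length)
        )

    cover = bytearray(os.urandom(1024))   # stand-in for raw pixel data
    stego = embed(cover, b"attack at dawn")
    assert extract(stego, 14) == b"attack at dawn"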

More fundamentally though, cryptography arises out of mathematics; it is not something we created, but rather something we discovered.  Trying to control the laws of mathematics through legislation is a doomed effort.  Rather, we should focus our efforts on finding ways to make the most of what encryption offers.

Impacts on the U.S. Infosec and Technology Industries

The U.S. information security and technology sectors rely on strong encryption to build secure products and maintain their competitive advantage.  Any ban or restriction on the ability of U.S. companies to use strong encryption in their products would almost certainly have serious negative consequences for these sectors, and by extension for the U.S. economy and workforce, as well as for national security and our technological advantage.

Such a ban would amount to a guarantee that software produced inside the U.S. is insecure, which would create a critical competitive advantage for companies based outside the U.S.  The inability to properly secure software would prevent the information security industry from operating effectively, and we should expect those firms to immediately begin relocating operations to foreign countries where no such ban exists.  The competitive disadvantage imposed by being unable to produce secure software would likewise drive much of the software and technology sectors to move primary development activities off-shore, albeit at a slower rate.  The end result would not be the sort of universal access by law enforcement that these policies seek to provide, but rather a world where secure software incorporating strong encryption is produced abroad, but not within the U.S.

We can expect that this move by industry would be echoed in the workforce, with the best workers emigrating as soon as possible to avoid negative impacts on their careers, followed by larger migrations driven by a shrinking job pool.  There is already a global shortage of technology workers, and several savvy nations have programs in place to encourage technology workers to immigrate, bringing their talents (and tax revenues) with them.  We could expect more of these sorts of policies should U.S. policy turn against the infosec and technology sectors, as foreign nations seek to capture talent leaving the U.S.  This sort of migration of an entire sector was evident during the 1990s and early 2000s, when export of strong crypto from the U.S. was controlled under arms-trafficking laws.

This risk to the information security and technology industry, and the potential loss of the U.S.’s technological advantage, was directly referenced multiple times during the energy and commerce hearing.  The industry panel confirmed that this is a concern among industry leaders.  The law enforcement panel rebuffed the concern, but offered only a vague counterargument, stating that demand for U.S. software would not be impacted because of the U.S.’s reputation.  This argument, which asserts that general reputation will somehow override specific, serious, and material concerns about quality, is an example of magical thinking and does not reflect an accurate picture of how reputation works, particularly with regard to technology.

The impact on the U.S. economy of losing the information security and technology sectors, along with the technological advantage the U.S. enjoys through its position in these industries, would be catastrophic.  The impact on national security would be similarly severe, as it would become necessary to look abroad for software vendors and security solutions.  Policies that industry leaders agree are likely to lead to this scenario are simply not a risk the U.S. can afford to take.

Relationship of Government and Technology

On a broader scale, we are facing a problem rooted in the relationship of technology and government.  The congressional hearings in particular point to a number of issues in this relationship, ranging from outdated systems, to lack of knowledge and understanding, to a generally disorganized approach.

Encryption is Complex and Requires New Thinking

One of the key difficulties of the issues surrounding encryption is that it is very different from anything existing laws and policies have grown accustomed to regulating.  This became evident in the congressional hearings, with representatives and law enforcement officials proposing “real-world” analogies that do not hold up under serious scrutiny.

If we are to use analogies to think about encryption and information security, the only really appropriate one is the world of microbiology, where pathogens are ubiquitous, adaptive, and require constant suppression by various immunity mechanisms.  An immune system in this environment is not an extra feature, but an absolute necessity for continued existence.  In such an analogy, the most dangerous pathogens of all are those that target the immune system itself; thus, any additional vulnerability such as a back-door opens up the entire system to such an attack.

More specifically, computational and communication infrastructure is vulnerable to attack because it is automatic, fast, and removed from human judgment.  Institutions like banks can adopt security policies for access to assets like safe deposit boxes and vaults that rely on human judgment and that are not susceptible to mass exploitation.  The same is not true of systems protected by encryption: human judgment is far too slow to be a part of any computing process, and attackers can often exploit large amounts of data before being detected.

Lastly, the civil rights implications of encryption cannot be overlooked.  Encryption is quite rare among technologies in that it directly protects and supports basic freedoms in an environment that is far less friendly to those freedoms than the physical world in which we live.  While private communications can be conducted and accurate attributions can be made in the physical world, neither of these things is possible over the internet without strong encryption.  With a significant portion of public discourse having moved to computing-based platforms, technologies such as encryption play a key role in protecting basic freedoms.  Moreover, strong encryption is vital for activists living under oppressive governments, state censorship, and discrimination.  We must be careful to ensure that advancing technology does not erode basic rights, and strong encryption plays a vital role in ensuring that it does not.

Encryption is a complex subject that cannot be accurately represented by any “real world” phenomenon, and requires effort to understand enough to form effective policy.  Moreover, it is subject to a “weakest link” principle that mandates considerable caution when developing both systems and the policies that govern them.  However, it is essential that we take the time and effort necessary to develop this understanding.

Technological Deficiency of Law Enforcement: A Serious Problem

One of the overarching themes of the congressional hearings, particularly the first one, is the apparent technological incompetence of high-level law enforcement officials.  This is a very serious problem, especially with attacks by state-sponsored hackers and organized crime on the rise.

The first panel in the hearing by energy and commerce was composed of high-level law-enforcement officials.  As a whole, these officials demonstrated an apparent lack of knowledge of the basics regarding technology and information security.  Their testimony was of a wholly different tone from some of the press we’ve seen in the course of this debate.  We have seen technically dubious PR, such as the claims about “dormant cyber-pathogens” and the New York Times’ characterization of what sounds like a command-line interface as encryption software.  This sort of malicious PR is no doubt designed to exploit false public perceptions formed from inaccurate depictions of hacking in movies and TV to make its point.

However, I do not believe that is what we saw in the energy and commerce hearing; rather, the law-enforcement officials seemed to be giving genuine testimony, but were simply lacking in the knowledge and competency necessary to make a coherent, factually-correct point.  In one of the more serious examples, one of the panelists responded to a question about the role of encryption in protecting authentication with a comment that authentication was a “firewall issue, not an encryption issue”.  This makes no sense technically (firewalls generally don’t manage authentication, while encryption is central to the design of authentication protocols), and points to a fundamental lack of understanding of how secure systems work.  Another panelist suggested statutory limits on the complexity of passwords.  Simply put, such a policy would be nothing short of an information security catastrophe.

This lack of competence shows in the solutions that were proposed by the panelists, which largely focused on attempting to break encryption outright, or else legislate weaknesses into security systems to facilitate this course of action.  This kind of thinking is common among novices in information security; experienced, knowledgeable actors such as professional hackers do not work this way.  A professional hacker would not attempt to break encryption, but rather would focus on circumventing it through measures such as capturing keys, capturing data in an unencrypted form, social engineering, persistent malware, and forensic analysis.

The appropriate response to this by technologists is not scorn and arrogance, but rather alarm and action.  The testimony in this hearing is evidence of a critical vulnerability in our law-enforcement system and by extension an inability to deal with the very real threats posed by the security problem.  This suggests that law enforcement is in desperate need of assistance to develop the necessary competencies to deal with these issues.  The technology sector can and should make efforts to educate and inform law enforcement, and help develop alternatives that do not weaken our infrastructure and create serious economic and national security risks.

Lack of Consensus within the Government

More generally, the hearings demonstrate a critical lack of consensus within the government as to how to act.  This division was evident among the panelists as well as the representatives questioning them.  Some demonstrate good technical competence, and make technically sound recommendations; others quite plainly do not.

Unsurprisingly, the most technically-competent areas of the government take a position in favor of strong encryption.  The NSA, for example, has voiced support for strong encryption, as has the Secretary of Defense.  Former NSA and DHS heads have likewise voiced support for strong crypto.  A report cited during the oversight and reform hearing recommends (among similar points) that the U.S. Government “should not in any way subvert, weaken, or make vulnerable generally-available commercial software.”

Large sections of the government, however, remain dangerously behind both in terms of technical competence and the state of their systems.  We have, of course, the technically unsound arguments in favor of introducing back-doors and other weaknesses into critical systems.  The oversight and reform hearing also revealed that some areas of the government are running dangerously out-of-date legacy systems, with references to COBOL and punched-card-based systems.  This is a serious problem in a world where state-sponsored hackers are on the rise.

To give credit where due, the Obama administration has begun to make moves to address this.  The founding of the U.S. Digital Service, which seeks to draw talent from industry to address problems within the government, is a step in the right direction.  However, the congressional hearings suggest that we will need to step up these sorts of efforts significantly in order to address these problems effectively.

The Burr-Feinstein Anti-Encryption Bill

The Burr-Feinstein anti-encryption bill (formally, the “Compliance with Court Orders Act”) represents the wrong kind of thinking and policy on the issue of encryption.  The bill mandates that any producer of encryption software must provide access to encrypted data on demand.  While the bill does contain a strange provision stating that it does not mandate or prohibit any design feature, the fact remains that it is impossible to comply with its basic stipulations for any system that includes strong end-to-end encryption.  In spite of that provision, the bill effectively prohibits the development and use of these technologies.

As previously discussed, should the bill pass, we should expect severe consequences for the U.S. information security and technology industries, the U.S. economy and workforce, U.S. national security and technological advantage, and the ability to defend against increasing information security threats.  Moreover, the bill’s direction is very much out of sync with the recommendations and directions of the most technically competent parts of the government, and would likely undermine their ongoing efforts.

More generally, this bill is simply the wrong direction.  This kind of legislation will not work, as it will not prevent the development of truly secure software outside the U.S., nor can it prevent the use of strong encryption by criminals, state-sponsored hackers, and other extralegal entities.  It does nothing to address the critical lack of technological expertise by critical areas of the government, including law enforcement.  It stands to seriously undermine ongoing and important efforts to strengthen our defenses against a rising tide of attacks, and moreover, it is not at all clear how to comply with the bill’s stipulations while maintaining compliance with existing information security requirements in areas like banking, healthcare, payment processing, and storage of classified data.

Conclusion: Towards Effective Policy

Even though the congressional hearings served to highlight a number of problems, the overall tone was one of Congress taking action, action I believe to be more or less effective, to understand and address these issues.  Moreover, it was apparent that some members of Congress possess an astute grasp of the issues surrounding information security and encryption.  Of course, the existence of measures such as the Burr-Feinstein bill and the other problems I’ve mentioned show that we have quite a way to go.

I believe there is a need for the technology sector to take a proactive role in helping to shape these policies.  These issues are extremely complex, and we need to apply our expertise to the problems we are facing to find solutions that won’t cause serious damage to our economy and national security.  There are a number of issues that need to be addressed, including the following:

  • Make addressing the increasing number and sophistication of cyberattacks and vulnerabilities in our infrastructure a policy priority.
  • Address the pervasive presence of vulnerabilities in software as a whole.
  • Proactively replace vulnerable legacy systems and update outdated IT practices within the government.
  • Provide education and training to address the technological deficiency apparent in law enforcement competencies.
  • Develop techniques, guidance, and equipment to enable law enforcement to capture data in an unencrypted state.
  • Develop a better understanding of the fundamental constraints governing what is possible with regard to encryption and information security.
  • Develop mitigation scenarios and techniques to deal with the loss of critical infrastructure due to an exploit.
  • Further encourage and facilitate interaction with industry experts to help the government address these issues effectively.

In closing, one of the most telling remarks in the congressional hearings was the statement by an industry panelist that the state of software security is “a national crisis”.  A crisis of this kind calls for action, and it is critical that we take the necessary steps to understand the issues, so that we may address the crisis effectively.

Spicy Braised Bacon-Wrapped Center-Round

About a month ago at this point, an associate hosted a “hot-foods” party: an annual thing he does every February.  I resolved to actually cook something for it this year, and spent the week before developing concepts for a recipe.

The Concept

Some background on this party: it’s packed full of nerds, and if there’s one thing I’ve learned about cooking for a crowd like that, it’s that you can’t go wrong with bacon.  I started imagining something consisting of a 3-4 pound piece of beef, rubbed, then wrapped in bacon.  I decided on a rub consisting of salt, garlic, cayenne, paprika, and chili powder.  I ended up making the chili powder myself by cutting up dried chiles (I have a really good knife set and a large mortar and pestle).  I had considered cilantro, but ended up not adding it, as it threw off the balance.

This would have been fantastic as a roast, but roasting is a precise art that doesn’t lend itself to packing up the results, driving 20 miles, and then leaving it out all day.  Braising, I’ve found, is a process much more tolerant of this kind of thing.  Thus, I decided on a braise.

The Sauce

The liquid is arguably the most important element of a good braise.  Braising is all about putting all the right elements together, then letting them melt down into a nice, rich sauce while the meat turns beautifully tender and juicy, to the point that you can pull it apart with a fork.

So I knew I had to get the sauce right.  Working out from the rub ingredients, I considered possible bases.  Something made from tomato paste, whiskey, and vinegar (remnants of my North Carolinian origins: vinegar-based barbecue sauce) started to come to mind.  Another idea came to me while eating at my favorite ramen joint: soy sauce, chili oil, and white vinegar (something I make as a dipping sauce for gyoza).  Then I got the idea to try to combine the two.

This seemed challenging, but making a sauce is fundamentally no different from mixing a cocktail: you have to blend flavor spectrums together in a way that balances and complements.

The combination that ended up working was an even blend of organic soy sauce (this has a different flavor from pasteurized soy sauce), apple cider vinegar, and Rittenhouse Rye, with about one tablespoon of tomato paste per half cup of liquid.

The Preparation

I started out by making the rub by grinding up salt, pepper, and finely minced chiles and chipotle chiles (I used about a 2:1 ratio by volume of regular to chipotle), then added about 5-6 cloves of garlic and about as much paprika as chipotle.  Note that if you’re using fresh spices, you really have to adjust the flavor yourself; potency varies too much by batch and by plant.

I debated adding a little sugar to this, but decided against it.  For people who like sweets more than me and aren’t as fond of salt and vinegar as I am, this might work.

Next, I applied some of this rub to the meat.  When applying a salt-based rub to meat, you need to put some amount into a pan and keep rubbing it in for about 30 minutes.  Most of it won’t stick at first, but if you keep at it, it eventually all will.

I was originally going to wrap it in bacon and let it sit overnight, but a coworker gave me the idea of soaking the bacon in rye instead.  So I put the roast in a container in the refrigerator overnight and put the bacon in a separate container with rye.

After about 18 hours of sitting, I took the roast out, seared it, wrapped it in the bacon, then seared it again.  I made a mistake here: I should have deglazed the pan so that I could give the bacon a good sear, to the point that it started to get nice and crispy.  Instead, the caramelized bits in the pan from the first sear started to burn and I had to stop early.

I had doubts about the double-sear, but it turns out that it is possible to wrap up a seared roast in bacon without burning yourself, if you’re careful.

After this, I deglazed the pan (add a bit of water on low heat and let it dislodge everything; save all the liquid for later), and sautéed one leek, one sweet onion, two dried chiles, one dried chipotle pepper, some mushrooms, some of the leftover rub, and some marrow bones until they were good and browned, then added back the liquid from deglazing the pan along with the braising liquid I’d prepared and let it boil down some.

After that, I put the roast in with everything, put the lid on, and let it braise at 300 degrees for about 3 hours.  I used a shallow pan that had just enough room with the lid on for the roast to fit inside; for a braise, the less open space you have inside the container, the better.

I was going for a good slow cook, so I chose a lower heat and a longer time.  I took it out of the oven about every 30 minutes or so to turn the meat over and spoon some of the liquid on to it.  However, braising is all about moist heat, so you don’t want to open the lid too often.

Because of the nature of braising, it’s hard to overcook, but I probably could have gone with a 2 or 2 1/2 hour cook time just as well.

At the end of the braise, I had to skim off quite a bit of oil.  This isn’t surprising, as bacon and marrow bones tend to render a lot of fat as they cook.  I had no use for the oil, but in a larger cooking process it could have been re-used in another dish that called for oil, as it would have soaked up quite a bit of chili and garlic flavor.

The Results

The results were quite pleasing.  After braising for about 3 hours, the sauce had mellowed out quite a bit into a lovely tangy mixture with the “slow-burn” effect one gets from cayenne pepper.  The meat was nothing short of amazing.

I had a 3 1/2 pound roast, and it lasted all of 30 minutes at this party; people were literally scraping the pan to get every last bit of the sauce.  Someone had made some cornbread, which went quite well in combination with the sauce.

All in all, this was a definite success.

Recipe

This is a reconstruction of the recipe after the fact, but it should be reasonably accurate:

  • 1 Leek
  • 1 Sweet onion
  • 2 Cups mixed aromatic mushrooms
  • 3 Dried chipotle peppers
  • 4 Dried Mexican red chiles
  • 5-6 Cloves garlic
  • 1/2 cup kosher salt
  • 1/2 cup mixed peppercorns
  • 2 tsp paprika
  • 1/2 cup organic soy sauce
  • 1/2 cup apple cider vinegar
  • Rittenhouse Rye (1/2 cup for the liquid)
  • 3 tbsp tomato paste
  • 3-4 lb center round beef roast
  • Thick-cut bacon
  • Marrow bones

Combine the salt, pepper, 2 chiles, and 1 chipotle pepper in a mortar and pestle and grind.  Mince the garlic and add it along with the paprika, then stir until the garlic dries up.

Apply the rub to the roast, then cover it.  Put the bacon in a separate container with a small amount of the rub and enough rye to cover it.  Refrigerate both for 12-24 hours.

Sear the roast in a pan with a small amount of olive oil, then deglaze the pan and set the liquid aside.  Carefully wrap the roast in the bacon, tie it, and then sear it again (no oil this time) until the bacon is crispy.  Set the seared meat aside.

Combine the soy sauce, vinegar, rye, tomato paste, and 2-4 tbsp of leftover rub.  When the flavor is right, add the liquid from deglazing the pan earlier.

Chop up the leek, the onion, the remaining peppers, and the mushrooms, and sauté them in a pan along with the marrow bones and a small amount of olive oil.  Sauté until golden brown, then add the liquid and cook it down until it starts to thicken.

Add the roast, spoon some of the liquid on to the top, cover, and place in an oven at 300 degrees for 2-2 1/2 hours.  Turn the roast and the bones over every 30-45 minutes and spoon some liquid on top of them before covering and placing back in the oven.  For the last 10-15 minutes, remove the lid and place the uncovered container in the oven.

Librem 13 FreeBSD Port

When the Librem laptops were announced last year, I was quite excited, and I ordered both the 15- and 13-inch models.  My 13-inch model arrived last week, and I have begun the process of porting FreeBSD to it.

I have to say, I am very excited to finally have a laptop from a fully-cooperative manufacturer, where I can get my hands on all the hardware specs and possibly even upstream fixes.  This is a very welcome boon after a decade of having to deal with flaky BIOS issues, black-box hardware, and other difficulties.

The Laptop

The physical laptop itself is very solid and rather light.  It doesn’t creak, and the lid stays put even better than a MacBook’s.  My only complaints are that the camera/microphone and wireless kill-switches are unlabeled, and that Ethernet cables tend to fall out of the drop-down port.  Aside from those minor issues, I’m quite pleased with the physical unit.

The kill-switches are hard to spot: they are on the hinge under the screen.

My only other regret is that the Dvorak keyboard option became available after I’d ordered mine.  Oh well; maybe I can sweet-talk them into swapping it for me at a conference 😉

It was also very nice to unpack a laptop without implicitly accepting a Microsoft license agreement by opening the box!

BIOS and FreeBSD Installation

The first thing I do when I get a new laptop is poke around in the BIOS menu (no photos yet).  The Librem has a coreboot port, but I decided to get FreeBSD installed and check the system out a bit before diving into the art of flashing my BIOS, so I was looking at the proprietary American Megatrends BIOS menu.  Even so, I was pleased by the features it presented, most notably the ability to set up custom signing keys.  I am going to have to do some work on a signed FreeBSD boot and loader chain.

My FreeBSD installation went off without any serious issues.  I installed FreeBSD 11 from a bootable memstick option, setting up a pure-ZFS system.  I had ordered a 1TB spindle drive and a 250GB SSD.  I reserved 48GB of the SSD for swap (total of 64GB memory).  I then set up a ZFS pool with the spindle drive as the main storage, a 16GB intent log on the SSD, and the rest of the SSD as an L2ARC cache device.  (I will eventually set up the ZFS volume to make all writes synchronous, so as to really use the intent log.)  I realize some might consider ZFS on a laptop to be overkill; however, I have found it to be an extremely versatile and stable filesystem.  It is incredibly crash-resistant and corruption-resistant, and its snapshotting is invaluable for risky updates.  The transparent compression features are useful as well, and can effectively increase your available space by a sizable amount.  Lastly, I have used the ability to serialize and deserialize the entire filesystem more than once.
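
For the curious, the pool layout described above corresponds to something like the following commands (a sketch only: the device and pool names are illustrative, and the SSD would need to be partitioned with gpart first):

    zpool create tank ada0            # 1TB spindle as the main storage
    zpool add tank log ada1p2         # 16GB SSD partition as the intent log
    zpool add tank cache ada1p3       # rest of the SSD as the L2ARC cache
    zfs set compression=lz4 tank      # transparent compression
    zfs set sync=always tank          # the eventual all-synchronous setup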

I did encounter one of the issues described below during this process: a sporadic boot-hang and USB timeout that I now strongly suspect to be a timing bug in the FreeBSD boot process.

FreeBSD did handle the hardware kill-switches rather well (I’ve heard reports of the Linux kernel panicking from them).  Flipping them off causes some kernel messages about timeouts, but the bus re-initializes upon flipping them back on.  If you boot with them off, then flip them on, the kernel detects the hardware properly.

FreeBSD Setup

The first thing I do on a new FreeBSD system is grab the source tree and build world, followed by kernel customization.  I noticed that building Clang has gotten pretty slow these days (which doesn’t bother me too much; I’d rather the compiler have a lot of optimization machinery than not).
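
For those who haven’t done it, the process follows the standard sequence from the FreeBSD handbook, run from /usr/src (MYKERNEL being a placeholder for your own kernel configuration):

    make buildworld
    make buildkernel KERNCONF=MYKERNEL
    make installkernel KERNCONF=MYKERNEL
    make installworld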

After that, I grabbed the latest ports tree and started building the usual suspects to test the system (also, to get to where I could test X11).  I also grabbed Jean-Sebastian’s Intel graphics patch to see if that driver worked with the Broadwell card.  Sadly, it didn’t.

Working Hardware

Most of the hardware Just Works™, which is nice.  I was particularly pleased that all the fn-key combinations work out-of-the-box.  I have never seen that happen with any other vendor.

The following is a list of the working hardware:

  • The EFI boot/loader
  • SD card reader (mmc driver)
  • Realtek Ethernet (re driver)
  • System management bus and CPU frequency/temperature (smb, smbus, ichsmb, coretemp, cpufreq drivers)
  • Intel High-Def Audio (snd_hda driver), though I haven’t tested the microphone yet.  Also, plugging into the headphone jack properly switches to headphones from the speakers (I’ve seen that not work).
  • Hard Drive and SSD (obviously)
  • USB ports
  • Bluetooth

Unfortunately, the Intel accelerated graphics drivers don’t support the Broadwell cards.  This will come eventually, but FreeBSD is in the midst of a graphics framework overhaul to better track the Linux drivers.  Looks like it’s going to be VESA for now.

Current Issues

There are currently some issues, which I will be working to fix:

  • The Atheros 9462 card is detected, but the radio doesn’t seem to be working.  The pciconf tool reports a few errors, and scans seem to run but don’t pick up anything.  I have confirmed this is not a hardware issue by booting with a Kali Linux memstick.
  • Blank screen on resume.  My initial investigations reveal some ACPI execution errors during resume, which may be related.  I need to get up in the kernel source and add some logging to see what’s going on.
  • VESA weirdness with X11.  The VESA X driver mostly works, but if you switch back to the terminal, a couple of pixels around the border of the screen stay the way they looked in X.  Also, when you shut down X, the screen freezes and the logs indicate some kind of timeout.  Both of these seem to implicate the VGA BIOS.
  • Sporadic boot-hang and USB timeouts.  These seem to be specific to a kernel configuration, and go away when changing the verbosity level.  This strongly indicates a timing-related bug in the kernel initialization procedures.

Of these issues, the wireless card and blank screen are the most critical, followed by the X11 weirdness.  I will be in contact with the Librem developers should my initial attempts to fix these issues prove unsuccessful.

Following that, I want to see if there’s a way to make the kill-switches behave more gracefully, perhaps by getting the USB driver to treat those devices as hot-pluggable, or else to assume timeouts are disconnects.

In any case, stay tuned for updates…

The Complex Nature of the Security Problem

This article is an elaboration on ideas I originally developed in a post to the project blog for my pet programming language project here.  The ideas remain as valid now (if not more so) as they were eight months ago when I wrote the original piece.

The year 2015 saw a great deal of publicity surrounding a number of high-profile computer security incidents.  While this trend has been ongoing for some time now, the past year marked a point at which the problem entered the public consciousness to the point where it has become a national news item and is likely to be a key issue in the coming elections and beyond.

“The Security Problem” as I have taken to calling it is not a simple issue and it does not have a simple solution.  It is a complex, multi-faceted problem with a number of root causes, and it cannot be solved without adequately addressing each of those causes in turn.  It is also a crucial issue that must be solved in order for technological civilization to continue its forward progress and not slip into stagnation or regression.  If there is a single message I would want to convey on the subject, it is this: the security problem can only be adequately addressed by a multitude of different approaches working in concert, each addressing an aspect of the problem.

Trust: The Critical Element

In late September, I did a “ride-along” of a training program for newly-hired security consultants.  Just before leaving, I spoke briefly to the group, encouraging them to reach out to us and collaborate.  My final words, however, were broader in scope: “I think every era in history has its critical problems that civilization has to solve in order to keep moving forward, and I think the security problem is one of those problems for our era.”

Why is this problem so important, and why does it have the potential to block forward progress?  The answer is trust.  Trust, specifically the ability to trust people about whom we know almost nothing and whom, indeed, we may never meet, is arguably the critical element that allows civilization to exist at all.  Consider what might happen if that kind of trust did not exist: we would be unable to create and sustain basic institutions such as governments, hospitals, markets, banks, and public transportation.

Technological civilization requires a much higher degree of trust.  Consider, for example, the amount of trust that goes into using something as simple as checking your bank account on your phone.  At a very cursory inspection, you trust the developers who wrote the app that allows you to access your account, the designers of the phone, the hardware manufacturers, the wireless carrier and their backbone providers, the bank’s server software and their system administrators, the third-party vendors that supplied the operating system and database software, the scientists who designed the crypto protecting your transactions and the standards organizations who codified it, the vendors who supplied the networking hardware, and this is just a small portion.  You quite literally trust thousands of technologies and millions of people that you will almost certainly never meet, just to do the simplest of tasks.

The benefits of this kind of trust are clear: the global internet and the growth of computing devices has dramatically increased efficiency and productivity in almost every aspect of life.  However, this trust was not automatic.  It took a long time and a great deal of effort to build.  Moreover, this kind of trust can be lost.  One of the major hurdles for the development of electronic commerce, for example, was the perception that online transactions were inherently insecure.

This kind of progress is not permanent, however; if our technological foundations prove themselves unworthy of this level of trust, then we can expect to see stymied progress or in the worst case, regression.

The Many Aspects of the Security Problem

As with most problems of this scope and nature, the security problem does not have a single root cause.  It is the product of many complex issues interacting with one another, and therefore its solution will necessarily involve committed efforts on multiple fronts and multiple complementary approaches.  There is no simple cause, and no “magic bullet” solution.

The contributing factors to the security problem range from highly technical (with many aspects in that domain), to logistical, to policy issues, to educational and social.  In fact, a complete characterization of the problem could very well be the subject of a graduate thesis; the exposition I give here is therefore only intended as a brief survey of the broad areas.

Technological Factors

As the security problem concerns computer security (I have dutifully avoided gratuitous use of the phrase “cyber”), it comes as no surprise that many of the contributing factors to the problem are technological in nature.  However, even within the scope of technological factors, we see a wide variety of specific issues.

Risky Languages, Tools, and APIs

Inherently dangerous or risky programming language or API features are one of the most common factors that contribute to vulnerabilities.  Languages that lack memory safety can lead to buffer overruns and other such errors (which are among the most common exploits in systems), and untyped languages admit a much larger class of errors, many of which lead to vulnerabilities like injection attacks.  Additionally, many APIs are improperly designed and lead to vulnerabilities, or are designed in such a way that safe use is needlessly difficult.  Lastly, many tools can be difficult to use in a secure manner.
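
As a concrete illustration of how API design shapes security, compare the two ways of issuing the same database query below (a Python/sqlite3 sketch).  The string-formatted version is injectable; the parameterized version is safe by construction, because the driver passes the input as data rather than interpreting it as SQL:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice')")
    name = "nobody' OR '1'='1"        # attacker-supplied input

    # Vulnerable: the input is spliced into the SQL text itself.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = '%s'" % name).fetchall()
    print(len(rows))                  # 1 -- the OR clause matched every row

    # Safe: a placeholder keeps the input out of the SQL text.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)).fetchall()
    print(len(rows))                  # 0 -- no user is literally named that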

We have made some headway in this area.  Many modern frameworks are designed in such a way that they are “safe by default”, requiring no special configuration to satisfy many safety concerns and requiring the necessary configuration to address the others.  Programming language research over the past 30 years has produced many advanced type systems that can make stronger guarantees, and we are starting to see these enter common use through languages like Rust.  My current employer, Codiscope, is working to bring advanced program analysis research into the static program analysis space.  Initiatives like the NSF DeepSpec expedition are working to develop practical software verification methods.

However, we still have a way to go here.  No mature engineering discipline relies solely on testing: civil engineering, for example, accurately predicts the tolerances of a bridge long before it is built.  Software engineering has yet to develop methods with this level of sophistication.

Configuration Management

Modern systems involve a dizzying array of configuration options.  In multi-level architectures, there are many different components interacting in order to implement each bit of functionality, and all of these need to be configured properly in order to operate securely.

Misconfigurations are a very frequent cause of vulnerabilities.  Enterprise software components can have hundreds of configuration options per component, and we often string dozens of components together.  In this environment, it becomes very easy to miss a configuration option or accidentally fail to account for a particular case.  The fact that there are so many possible configurations, most of which are invalid, further exacerbates the problem.

Crypto in particular has tended to suffer from usability problems, and it is especially sensitive to misconfiguration: a single weak link undermines the security of the entire system.  However, it can be quite difficult to develop and maintain hardened crypto configurations over time, even for the technologically adept.  The difficulty of setting up software like GPG for non-technical users has been the subject of actual research papers.  I can personally attest to this as well, having guided multiple non-technical people through the setup.

This problem can be addressed, however.  Configuration management tools allow configurations to be set up from a central location, and managed automatically by various services (CFEngine, Puppet, Chef, Ansible, etc.).  Looking farther afield, we can begin to imagine tools that construct configurations for each component from a master configuration, and to apply type-like notions to the task of identifying invalid configurations.  These suggestions are just the beginning; configuration management is a serious technical challenge, and can and should be the focus of serious technical work.
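
As a small illustration of what “type-like notions” for configurations might look like, here is a hypothetical Python sketch that checks a configuration against a schema, including a cross-field rule, before deployment (the option names are invented for the example):

    SCHEMA = {"tls_enabled": bool, "tls_min_version": str, "port": int}

    def validate(config: dict) -> list:
        errors = []
        for key, expected in SCHEMA.items():
            if key not in config:
                errors.append("missing required option: " + key)
            elif not isinstance(config[key], expected):
                errors.append(key + ": expected a " + expected.__name__)
        # A cross-field rule, analogous to a constraint over the whole config:
        if config.get("tls_enabled") and config.get("tls_min_version") == "SSLv3":
            errors.append("tls_min_version: SSLv3 is not an acceptable minimum")
        return errors

    print(validate({"tls_enabled": True, "tls_min_version": "SSLv3", "port": 443}))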

Legacy Systems

Legacy systems have long been a source of pain for technologists.  They represent a kind of debt that is often too expensive to pay off in full, but which exacts a recurring tax on resources in the form of legacy costs (compatibility issues, bad performance, blocked upgrades, unusable systems, and so on).  To those most directly involved in the development of technology, legacy systems tend to be a source of chronic pain; however, from the standpoint of budgets and limited resources, they are often a kind of pain to be managed rather than cured, as wholesale replacement is far too expensive and risky to consider.

In the context of security, however, the picture is often different.  These kinds of systems are often extremely vulnerable, having been designed in a time when networked systems were rare or nonexistent.  In this context, they are more akin to rotten timbers at the core of a building.  Yes, they are expensive and time-consuming to replace, but the risk of not replacing them is far worse.

The real danger is that the infrastructure where vulnerable legacy systems are most prevalent (power grids, industrial facilities, mass transit, and the like) is precisely the sort where a breach can do catastrophic damage.  We have already seen an example of this in the real world: the Stuxnet malware was employed to destroy uranium-processing centrifuges.

Replacing these legacy systems with more secure implementations is a long and expensive proposition, and doing it in a way that minimizes costs is a very challenging technological problem.  However, this is not a problem that can be neglected.

Cultural and Policy Factors

Though computer security is technological in nature, its causes and solutions are not limited solely to technological issues.  Policy, cultural, and educational factors also affect the problem, and must be a part of the solution.

Policy

The most obvious non-technical influence on the security problem is policy.  The various policy debates that have sprung up in the past years are evidence of this; however, the problem goes much deeper than these debates.

For starters, we are currently in the midst of a number of policy debates regarding strong encryption and how we as a society deal with the fact that such a technology exists.  I will make my stance on the matter quite clear: I am an unwavering advocate of unescrowed, uncompromised strong encryption as a fundamental right (yes, there are possible abuses of the technology, but the same is true of such things as due process and freedom of speech).  Despite my hard-line pro-crypto stance, I can understand how those who don’t understand the technology might find the opposing position compelling.  Things like golden keys and abuse-proof back-doors certainly sound nice.  However, the real effect of pursuing such policies would be to fundamentally compromise systems and infrastructure within the US and to turn defending against data breaches and cyberattacks into an impossible problem.  In the long run, this erodes the kind of trust in technological infrastructure of which I spoke earlier and bars forward progress, leaving us to be outclassed in the international marketplace.

In a broader context, we face a problem here that requires rethinking our policy process.  We have in the security problem a complex technological issue (too complex for even the most astute and deliberative legislator to develop true expertise on through part-time study), but one where the effects of uninformed policy can be disastrous.  In the context of public debate, it does not lend itself to two-sided thinking or simple solutions, and attempting to force it into such a model loses too much information to be effective.

Additionally, the problem goes deeper than issues like encryption, back-doors, and dragnet surveillance.  Much of the US infrastructure runs on vulnerable legacy systems, as I mentioned earlier, and replacing these systems with more secure, modern software is an expensive and time-consuming task.  Moreover, the need to invest in our infrastructure in this way barely registers in public debate, if at all.  However, doing so is essential to fixing one of the most significant sources of vulnerabilities.

Education

Education, or the lack thereof, also plays a key role in the security problem.  Even top-level computer science curricula fail to teach students how to think securely and develop secure applications, or even to impress upon students the importance of doing so.  This is understandable: even a decade ago, the threat level to most applications was nowhere near where it is today.  The world has changed dramatically in this regard in a rather short span of time.  The proliferation of mobile devices and connectedness, combined with a tremendous upturn in the number and sophistication of attacks launched against systems, has led to a very different sort of environment than what existed even ten years ago (when I was finishing my undergraduate education).

College curricula are necessarily conservative; knowledge is expected to prove its worth and go through a process of refinement and sanding-off of rough edges before it reaches the point where it can be taught in an undergraduate curriculum.  By contrast, much of the knowledge of how to avoid building vulnerable systems is new, volatile, and thorny: not the sort of thing traditional academia likes to mix into a curriculum, especially in a mandatory course.

Such a change is necessary, however, and this means that educational institutions must develop new processes for effectively educating people about topics such as these.

Culture

While it is critical to have infrastructure and systems built on sound technological approaches, it is also true that a significant number of successful attacks on large enterprises and individuals alike make primary use of human factors and social engineering.  This is exacerbated by the fact that we, culturally speaking, are quite naive about security.  There are security-conscious individuals, of course, but most people are naive to the point that an attacker can typically rely on social engineering with a high success rate in all but the most secure of settings.

Moreover, this naivety affects everything else, ranging from policy decisions to the priorities deemed most important in product development.  The lack of public understanding of computer security allows bad policy such as back doors to be taken seriously, and allows insecure and invasive products to thrive on marketing claims that simply don’t reflect reality (SnapChat remains one of the worst offenders in this regard, in my opinion).

The root cause behind this is that cultures adapt even more slowly than the other factors I’ve mentioned, and our culture has yet to develop effective ways of thinking about these issues.  But cultures do adapt; we all remember sayings like “look both ways” and “stop, drop, and roll” from our childhood, both of which teach simple but effective ways of managing the more basic risks that arise from technological society.  This sort of adaptation also responds to need.  During my own youth and adolescence, the danger of HIV drove a number of significant cultural changes in a relatively short period of time that proved effective in curbing the epidemic.  While the issues surrounding the security problem represent a very different sort of danger, they are still pressing issues that require an amount of cultural adaptation to address.  A key step in addressing the cultural aspects of the security problem comes down to developing similar kinds of cultural understanding and awareness, and promoting behavior changes that help reduce risk.

Conclusion

I have presented only a portion of the issues that make up what I call the “computer security problem”.  These issues are varied, ranging from deep technological issues obviously focused on security to cultural and policy issues.  There is no single root cause to the problem, and as a result, there is no single “silver bullet” that can solve it.

Moreover, if the problem is this varied and complex, then we can expect the solutions to each aspect of the problem to likewise require multiple different approaches coming from different angles and reflecting different ways of thinking.  My own work, for example, focuses on the language and tooling issue, coming mostly from the direction of building tools to write better software.  However, there are other approaches to this same problem, such as sandboxing and changing the fundamental execution model.  All of these angles deserve consideration, and the eventual resolution to that part of the security problem will likely incorporate developments from each angle of approach.

If there is a final takeaway from this, it is that the problem is large and complex enough that it cannot be solved by the efforts or approach of a single person or team.  It is a monumental challenge requiring the combined tireless efforts of a generation’s worth of minds and at least a generation’s worth of time.

Distributed Package and Trust Management

I presented a lightning talk at last night’s Boston Haskell meetup on an idea I’ve been working on for some time now, concerning features for a distributed package and trust manager system.  I had previously written an internal blog post on this matter, which I am now publishing here.

Package Management Background

Anyone who has used or written open-source software or modern languages is familiar with the idea of package managers.  Nearly all modern languages provide some kind of package management facility.  Haskell has Hackage, Ruby has RubyGems, Rust has Cargo, and so on.  These package managers allow users to quickly and easily install packages from a central repository, and they provide a way for developers to publish new packages.  While this sort of system is a step up from the older method of manually fetching and installing libraries that is necessary in languages like C and Java, most implementations are limited to the use-case of open-source development for applications without high security, trust, and auditing requirements.

These systems were never designed for industrial and high-trust applications, so there are some key shortcomings for those uses:

  • No Organizational Repositories: The use of a central package repository is handy, but it fails to address the use case of an organization wanting to set up its own internal package repository.
  • Lack of Support for Closed-Source Packages: Package systems usually work by distributing source.  If you can’t push your packages up to the world, then you default back to the manual installation model.
  • Inconsistent Quality: The central repository tends to accumulate a lot of junk: low-quality, half-finished, or abandoned packages, or as my former colleague John Rose once said, “a shanty-town of bikesheds”.
  • No Verifiable Certification/Accountability: In most of these package systems, there is very little in the way of an accountability or certification system.  Some systems provide a voting or review system, and all of them provide author attribution, but this is insufficient for organizations that want to know about things like certified releases and builds.

Distributed Package Management

There has been some ongoing work in the Haskell community to build a more advanced package management library called Skete (pronounced “skeet”).  The model used for this library is a distributed one that functions more like Git (in fact, it uses Git as a backend).  This allows organizations to create their own internal repositories that receive updates from a central repository and can host internal-only projects as well.  Alec Heller, whom I know through the Haskell community, is one of the developers on the project.  He gave a talk about it at the Haskell meetup back in May (note: the library has progressed quite a bit since then), which you can find here.

This work is interesting, because it solves a lot of the problems with the current central repository package systems.  With a little engineering effort, the following can be accomplished:

  • Ability to maintain internal package repositories that receive updates from a master, but also contain internal-only packages
  • Ability to publish binary-only distributions up to the public repositories, but keep the source distributions internal
  • Option to publish packages directly through git push rather than a web interface
  • Ability to create “labels”, which essentially amount to package sets (a minimal sketch follows this list)
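
To make the “labels” idea concrete, here is a minimal sketch of how a label might be represented in Haskell.  This is purely illustrative: the type names are my own, and Skete’s actual representation may look quite different.

    {-# LANGUAGE OverloadedStrings #-}
    module LabelSketch where

    import           Data.Map  (Map)
    import qualified Data.Map  as Map
    import           Data.Text (Text)

    -- Hypothetical identifier types; Skete's real types may differ.
    type PackageName = Text
    type Version     = Text

    -- A label names a package set: a snapshot pinning each package to a
    -- specific version (for example, an organization's approved set).
    newtype Label = Label (Map PackageName Version)

    -- Example: a small internal "stable" label.
    stable :: Label
    stable = Label (Map.fromList [("lens", "4.13"), ("aeson", "0.11.0.0")])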

This is definitely an improvement on existing package management technologies, and can serve as a basis for building an even better system.  With this in hand, we can think about building a system for accountability and certification.

Building in Accountability and Certification

My main side project is a dependently-typed systems language.  In such a language, we are able to prove facts about a program, as its type system includes a logic for doing so.  This provides much stronger guarantees about the quality of a program; however, publishing the source code, proof obligations, and proof scripts may not always be feasible for a number of reasons (most significantly, they likely provide enough information to reverse-compile the program).  The next best thing is to establish a system of accountability and certification that allows various entities to certify that the proof scripts succeed.  This would be built atop a foundation that uses strong crypto to create unforgeable certificates, issued by the entities that check the code.
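
As a rough illustration of what such a certificate could look like, here is a minimal sketch using Ed25519 signatures from the cryptonite library.  The record layout and names here are hypothetical; the essential point is that a certification binds a claim to the hash of a specific artifact, and cannot be forged without the certifier’s private key.

    module CertSketch where

    import           Crypto.Hash           (Digest, SHA256 (..), hashWith)
    import qualified Crypto.PubKey.Ed25519 as Ed25519
    import           Data.ByteArray        (convert)
    import           Data.ByteString       (ByteString)

    -- A certification binds a claim (e.g. "proof-scripts-succeed") to
    -- the hash of a package, signed by the certifying entity.
    data Certification = Certification
      { certClaim   :: ByteString
      , certPackage :: Digest SHA256
      , certSig     :: Ed25519.Signature
      }

    -- Sign the claim together with the package hash, so neither can be
    -- swapped out independently of the other.
    certify :: Ed25519.SecretKey -> ByteString -> ByteString -> Certification
    certify key claim packageBytes =
      let pub    = Ed25519.toPublic key
          digest = hashWith SHA256 packageBytes
          msg    = claim <> convert digest :: ByteString
      in Certification claim digest (Ed25519.sign key pub msg)

    -- Anyone holding the certifier's public key can check a certificate.
    checkCert :: Ed25519.PublicKey -> Certification -> Bool
    checkCert pub (Certification claim digest sig) =
      Ed25519.verify pub (claim <> convert digest :: ByteString) sig

A real system would sign a canonical serialization of the entire record rather than a simple concatenation, but the shape of the idea is the same.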

This same use case also works for the kinds of security audits done by security consulting firms in the modern world.  These firms conduct security audits on applications, applying a number of methods such as penetration testing, code analysis, and threat modeling to identify flaws and recommend fixes.

This brings us at last to the idea that’s been growing in my head: what if we had a distributed package management system (like Skete) that also included a certification system, so that users could check whether or not a particular entity has granted a particular certification to a particular package?  Specific use cases might look like this:

  • When I create a version of a package, I create a certification that it was authored by me.
  • A third-party entity might conduct an audit of the source code, then certify the binary artifacts of a particular source branch.  This would be pushed upstream to the public package repository along with the binaries, but the source would remain closed.
  • Such an entity could also certify an open-source package.
  • A public CI system could pick up on changes pushed to a package repository (public or private) and run tests/scans, certifying the package if they succeed.
  • A mechanism similar to a block-chain could be used to allow entities to update their certifications of a package (or revoke them).
  • Negative properties (like known vulnerabilities, deprecation, etc.) could also be asserted through this mechanism (this would require additional engineering to prevent package owners from deleting certifications about their packages).
  • Users can require that certain certifications exist for all packages they install (or conversely, that certain properties are not true); a sketch of such a policy check follows this list.
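
To illustrate the last point, here is a minimal sketch of what an install-time policy check might look like.  The claim vocabulary is hypothetical, and a real implementation would verify the signature on each claim (as in the earlier sketch) before trusting it.

    module PolicySketch where

    import           Data.Set  (Set)
    import qualified Data.Set  as Set
    import           Data.Text (Text)

    -- A hypothetical vocabulary of claims that entities can assert.
    data Claim
      = AuthoredBy Text          -- authorship certification
      | CertifiedBy Text         -- audit or CI certification
      | Deprecated               -- negative property
      | KnownVulnerability Text  -- negative property, e.g. a CVE id
      deriving (Eq, Ord, Show)

    -- An install policy: claims that must be present, and claims that
    -- must be absent.
    data Policy = Policy
      { required  :: Set Claim
      , forbidden :: Set Claim
      }

    -- A package is admissible if every required claim is present and no
    -- forbidden claim has been asserted against it.
    admissible :: Policy -> Set Claim -> Bool
    admissible (Policy req forb) claims =
      req `Set.isSubsetOf` claims
        && Set.null (forb `Set.intersection` claims)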

This would be fairly straightforward to implement using the Skete library:

  • Every package has a descriptor, which includes information about the package, a UUID, and hashes for all the actual data.
  • The package repositories essentially double as a CA, and manage granting/revocation of keys using the package manager as a distribution system.  Keys are granted to any package author, and any entity which wishes to certify packages.
  • Packages include a set of signed records, which include a description of the properties being assigned to the package along with a hash of the package’s descriptor.  These records can be organized as a block-chain to allow organizations to provide updates at a later date (a sketch of these structures follows this list).
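
For concreteness, here is a rough sketch of those two structures.  Again, the field names and layout are my own guesses for the sake of illustration, not Skete’s actual format.

    module DescriptorSketch where

    import           Crypto.Hash           (Digest, SHA256)
    import qualified Crypto.PubKey.Ed25519 as Ed25519
    import           Data.ByteString       (ByteString)
    import           Data.UUID             (UUID)

    -- A package descriptor: metadata, a stable identity, and hashes of
    -- the actual package data.
    data Descriptor = Descriptor
      { descName   :: ByteString
      , descUUID   :: UUID
      , descHashes :: [Digest SHA256]
      }

    -- A signed record assigning properties to a particular descriptor.
    -- Each record can carry the hash of the previous record, giving the
    -- block-chain-like update/revocation history described above.
    data SignedRecord = SignedRecord
      { recProperties :: [ByteString]          -- properties being assigned
      , recDescriptor :: Digest SHA256         -- hash of the descriptor
      , recPrevious   :: Maybe (Digest SHA256) -- previous record, if any
      , recSignature  :: Ed25519.Signature
      }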

Implementation Plans

After I gave my brief talk about this idea, I had a discussion with one of the Skete developers about the possibility of rolling these ideas up into that project.  Based on that discussion, it all seems feasible, and hopefully a system that works this way will be coming to life in the not-too-distant future.

ZFS Support for EFI Now in FreeBSD

Sometime last year, I started working on a patch to add ZFS support to the UEFI boot facilities for FreeBSD.

Backstory: I’ve been a FreeBSD fan and user since my junior year of undergrad (2003), and I run it as my OS whenever I can.  I first started looking into UEFI support as a GSoC project.  Unfortunately, I had to drop the project due to a combination of a sudden job search and my grandfather’s brain cancer diagnosis.

Fast forward a few years, and I circled back to see what remained to be done on the UEFI front.  The boot process was there, but only for UFS.  So over the holidays, I started poking around to see what could be done.

I started out by refactoring boot1 (the program that resides in the EFI partition, pulls loader off the main filesystem, and runs it), putting it into a more modular form to support multiple filesystem modules.  I then started writing the ZFS module.  I hit a tipping point sometime in April, and got it working completely shortly thereafter.

The next task was loader itself.  This proved trickier, but I eventually figured out what needed to be done.  To my delight, the modified loader worked fine with the GRUB2 bootloader as well as FreeBSD’s boot1.

For most of the rest of the year, the patch was passed around and used by various people, and it was picked up by NextBSD and PCBSD.  It entered the formal review process in late autumn, and several people contributed changes that helped out immensely in the integration effort.  In particular, several people addressed stylistic issues (I am not terribly familiar with FreeBSD’s style guide) and integrated libstand support (which I had thought to be a problem due to the need for Win32 ABI binaries in EFI).

I was informed on the way home from the gym that it’s been committed to HEAD, and will hopefully make it into 10.3.  I’m glad to see it now officially in FreeBSD, and I’m grateful to the people who helped out with the integration.

I have future plans in this arena, too.  I deliberately modularized the boot1 program in preparation for some other efforts.  First, I plan to look into adding GELI (the full-disk encryption mechanism for FreeBSD) support.  I would also like to see support for checking cryptographic signatures of loader and kernel at boot-time (I’ve heard others are working on something like that).  In the very long run, I’d like to see a completely trusted custody chain from boot to kernel, but that is something that will take multiple steps to realize.

Boston-Area PL/Type Theory

Last night saw the first meeting of the Boston-Area PL/Type Theory group that I put together on Meetup.com (link).  This was an initial meet-and-greet and organizing meeting, intended to serve as a brainstorming session for what to do next.

I’m pleased with the outcome of this meeting.  We were joined by a number of folks from the Boston Haskell community as well as Adam Chlipala of MIT.  Adam suggested that we use space in the MIT computer science department for our events, which seems to be the most advantageous option for several reasons.

We also had a productive discussion about the mission of the group, in particular how to deal with the fact that we will have a rather wide variation in the level of knowledge among members.  The idea came forward that we have different “tracks” of events geared towards different experience levels and goals.  Three distinct tracks emerged from the discussion:

  • Beginners: Featuring events like introductory lectures and group dial-ins to the internet type theory group’s sessions
  • Experienced: Featuring events like a reading group and discussions of and/or lectures on advanced topics
  • “Do Stuff”: Geared towards active work on research and projects, featuring unconference-style events and specific project groups

Some first steps emerged as well.  We decided to have an initial unconference/hackathon (on the “do stuff” track) at some point in February.  We also decided to set up a GitHub group for maintaining the group page, as well as any other projects that happen.  We will surely find other venues for organizing as time goes on.

It looks like we’re off to a good start, and hopefully we’ll see some interesting developments grow out of this!