FreeBSD EFI GELI Support

I have completed my work to add support for the GELI disk encryption system to the FreeBSD EFI boot loader.  This work started out as a “simple” patch, but it grew into a much larger undertaking that ended up refactoring a significant portion of the EFI boot loader.

Regardless, the changeset is now usable and ready for testing.  It can be accessed on my GitHub.  I will be merging this periodically with the FreeBSD master in order to keep it up to date.

I am not recommending this for inclusion in the 11 release; it’s too big a change to incorporate this late in the game.

Design/Implementation Notes

This work breaks down into roughly four different components: EFI refactoring, boot crypto framework, GELI support, and kernel key injection.  I’ll cover each of these in turn.

EFI Refactoring

I have already written extensively about my EFI refactoring here.  The reason for undertaking this effort, however, was driven by GELI support.  Early in my work on this, I had implemented a non-EFI “providers” framework in boot1 in order to support the notion of disk partitions that may contain sub-partitions.  This was deeply unsatisfying to me for several reasons:

  • It implemented a lot of the same functionality that exists in the EFI framework.
  • It involved implementing a GPT partition driver to deal with partition tables inside GELI partitions (GPT detection and support are guaranteed by the EFI spec).
  • The interface between the EFI framework and the custom “providers” framework was awkward.
  • The driver was completely boot1-specific, and exporting it to something like GRUB probably involved a total rewrite.
  • Implementing it within loader was going to involve a lot of code duplication.
  • There was no obvious way to pass keys between boot1, loader, and the kernel.

Doing the EFIzation work eliminated these problems, and in my opinion, cleaned things up in boot1 as well.  The results were pleasing:

  • The GELI driver can be extracted from the FreeBSD codebase without too much trouble.
  • While I was unable to go all the way to the EFI driver model, the only blocker is the bcache code, and once that is resolved, we can have hotplug support in the boot loader!
  • The boot1 and loader codebases are now sharing all the backend drivers, and boot1 has been reduced to one very small source file.

As I previously mentioned, my only reservation about this is increased dependence on (historically flaky) vendor-specific BIOS code.  However, with things like CoreBoot on the rise, I’m less concerned about this.

There are a couple of open questions and future work items coming out of this refactoring:

  • Might it be a good idea to move the backend drivers out of boot and loader completely, and have them be wholly separate EFI drivers that get installed by boot/loader?
  • Provide EFI drivers for GELI and FreeBSD filesystems to projects like CoreBoot, TianoCore, and GRUB.
  • Refactor the bcache code to support dynamic device detection and complete the transition to the EFI driver model.
  • Play with possibilities that arise from hot-plugging, like pluggable USB dongles containing access credentials.

Boot Crypto

The boot crypto refactoring was a small effort to pull the crypto code out of the BIOS GELI code and put it in a common place for all boot utilities.  My hope is that boot_crypto becomes the go-to place for boot-time crypto code.

However, I’m a bit displeased with the state of crypto code in general.  I deemed it too ambitious to take that on in addition to everything else, but it seems that the crypto framework could profit from some work.  The problems I see are as follows:

  • The codebase seems rather disorganized.  There are crypto and opencrypto in the kernel, and there doesn’t seem to be a common interface for all ciphers.
  • There are insecure ciphers and hash functions (RC4, DES, MD5, SHA-1), as well as missing modern algorithms (ChaCha20-Poly1305, stronger RIPEMD variants, etc.).
  • The procedure for linking against the crypto codebase is harder than it could be.

The core crypto code should be usable in userland, kernel, and boot environments, with the only difference being the ABI for which it is compiled (native for user and kernel, MSABI for EFI, 32-bit for x86 BIOS, etc).  It should be possible to have a single codebase that gets turned into static libraries for each ABI and a shared library for userland.

It might be worth an experiment to replace the current crypto code with something like NaCl (libsodium) and use the scheme I describe above.

GELI Driver

The GELI EFI driver is a straightforward EFI bus[0] driver.  It detects the presence of a GELI volume on a device handle and creates a new device handle bearing an EFI_BLOCK_IO_PROTOCOL interface that provides access to the encrypted data.  Doing so requires getting ahold of a password or a key, which I manage through the KMS driver.  (The GELI driver currently contains the code to ask for a password directly, though this could, and probably should, be moved into the KMS driver.)


[0]: EFI considers any driver that creates new handles to be a “bus driver”, even if it is something like a partition driver.
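To illustrate the layering (and only the layering), here is a toy sketch in C: a lower block device, and a wrapper that exposes the same read interface but decrypts data on the way through, just as the GELI driver layers a new EFI_BLOCK_IO_PROTOCOL handle over an existing one.  The XOR “cipher” and all names here are placeholders for illustration, not real GELI or EFI code:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define BLOCK_SIZE 16

/* A toy block device: reads blocks out of a backing buffer. */
struct blkdev {
	const uint8_t *backing;
	size_t nblocks;
};

static int
blk_read(const struct blkdev *dev, size_t lba, uint8_t *out)
{
	if (lba >= dev->nblocks)
		return -1;
	memcpy(out, dev->backing + lba * BLOCK_SIZE, BLOCK_SIZE);
	return 0;
}

/*
 * A "bus driver" in the EFI sense: it sits on top of an existing
 * device and exposes a new one with the same read interface, but
 * returning decrypted data.  The XOR step is a placeholder for real
 * decryption; only the layering is the point.
 */
struct crypt_blkdev {
	const struct blkdev *lower;
	uint8_t key[BLOCK_SIZE];
};

static int
crypt_blk_read(const struct crypt_blkdev *dev, size_t lba, uint8_t *out)
{
	if (blk_read(dev->lower, lba, out) != 0)
		return -1;
	for (int i = 0; i < BLOCK_SIZE; i++)
		out[i] ^= dev->key[i];	/* placeholder decryption */
	return 0;
}
```

Consumers above the wrapper (a filesystem driver, say) never need to know the lower device is encrypted; that is exactly what the EFI handle/protocol model buys us.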

Kernel Key Injection and Boot KMS

Passing keys between boot1, loader, and the kernel was one of the key challenges of this work.  I also wanted to do this in a way that could be extended to support hardware security modules with a minimum of effort.  The EFI_KMS_PROTOCOL interface provided a means to accomplish this.

I provided another EFI driver that implements a software-based EFI_KMS_PROTOCOL interface.  This allows boot services to register keys, look them up later, and ultimately have them injected into the kernel.  This code is a fairly simple key table manager internally, and uses the file metadata functionality in loader to communicate keys to the kernel.
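To give a feel for the internals, here is a minimal sketch of that kind of key table in C.  The names, fixed sizes, and signatures are illustrative only; they are not the actual EFI_KMS_PROTOCOL definitions:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define MAX_KEYS   16
#define KEY_ID_LEN 32
#define KEY_LEN    64

/* One slot in the key table: an identifier plus raw key bytes. */
struct key_entry {
	char    id[KEY_ID_LEN];
	uint8_t key[KEY_LEN];
	size_t  keylen;
	int     used;
};

static struct key_entry key_table[MAX_KEYS];

/* RegisterKey-style operation: store a key under an identifier. */
static int
kms_register(const char *id, const uint8_t *key, size_t keylen)
{
	if (keylen > KEY_LEN)
		return -1;
	for (int i = 0; i < MAX_KEYS; i++) {
		if (!key_table[i].used) {
			strncpy(key_table[i].id, id, KEY_ID_LEN - 1);
			memcpy(key_table[i].key, key, keylen);
			key_table[i].keylen = keylen;
			key_table[i].used = 1;
			return 0;
		}
	}
	return -1;		/* table full */
}

/* GetKey-style operation: look a key up by identifier. */
static const uint8_t *
kms_lookup(const char *id, size_t *keylen)
{
	for (int i = 0; i < MAX_KEYS; i++) {
		if (key_table[i].used &&
		    strncmp(key_table[i].id, id, KEY_ID_LEN) == 0) {
			*keylen = key_table[i].keylen;
			return key_table[i].key;
		}
	}
	return NULL;
}
```

In the real driver, entries registered by boot1 survive to be looked up by loader, which then hands them to the kernel through the file metadata mechanism.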

The current implementation leaves the task of asking for passwords up to the individual drivers, but I am seriously considering bringing that into the KMS driver as well, under the auspices of the CreateKey function.

On the kernel side, I added an interface to the crypto framework that provides access to the key buffer provided to the kernel by the KMS driver.  This allows keys to be passed from boot to kernel in a safer manner than the environment variable currently used by the BIOS GELI implementation.  It also provides a generic interface going forward for accomplishing this task.

This also puts things in such a state that a hardware-based key management system could be integrated.  Because boot1 and loader use EFI_KMS_PROTOCOL, which was specifically designed for key management systems, it is easy to add support for hardware that implements this protocol.  On the kernel side, we would simply expect the hardware KMS to be initialized with the keys we need.

Using and Testing EFI GELI Support

The changeset should be usable for most people.  I have tested the core functionality and have successfully loaded and booted kernels from ZFS volumes inside a GELI partition.  I am not actively using this, however, as I want to participate in the tests of the drm2 4.6 update, and I don’t want to assume the risks involved in merging two experimental branches.

If you want to use or test the changeset, I strongly recommend the following procedure:

  • Be aware of the risks: this is full-disk encryption, so a problem may result in your files becoming inaccessible by any means.
  • Read and understand the basics of EFI and the FreeBSD boot architecture.
  • Install an EFI Shell program on your EFI System Partition, and know how to run it from your BIOS menu.
  • Install boot1 to an alternate location (like /efi/boot/bootx64.tst).
  • Install loader to /boot/loader.tst on your main partition (I have left the LOADER_PATH variable set to /boot/loader.tst to facilitate this kind of testing).
  • Start the EFI shell and run /efi/boot/bootx64.tst.
  • Try creating an empty GELI partition, and verify that boot1 and loader detect it[1].
  • Try creating a filesystem in the GELI partition, copy your /boot/ directory from your main partition into it, and see if boot1 and loader can read files from it.
  • Try loading and booting a kernel from your GELI partition.
  • Once you have succeeded at these, back up all your files, try converting your main disk into a GELI partition, and see if you can boot from it.


[1]: The EFIzed loader (with GELI support) should work fine even when loaded by the old boot1 (or by a tool such as GRUB)

Looking Forward: Tamper-Resilience

This changeset adds the first of three major capabilities that are needed to realize a longer-term tamper-resilience that I would very much like to see be a part of FreeBSD.  The steps toward this are as follows:

  1. Support full-disk encryption from boot, with the ESP being the only non-encrypted partition on the system[2]
  2. Support EFI secure boot of boot1, combined with signature checking for loader, the kernel, and device drivers, with the ability to designate a machine-specific signing key to be used to generate the signatures
  3. Support secure suspend-to-disk

These three features, when combined and implemented on a ZFS-based system, create a powerful tamper-resilience scheme.  Because secure boot is used with a machine-specific platform key, this key can be stored on the encrypted disk.  This ensures that only someone with the ability to decrypt the disk can create a boot1 program that will run on the machine, which prevents anyone from tampering with boot1 and hijacking the boot process.  Moreover, with suspend-to-disk, the machine is only vulnerable to data exfiltration when it is in active use.  When suspended or powered off, everything is encrypted and protected by the secure boot scheme.

Obviously, this is not perfect; anyone able to overwrite the firmware or mess with the hardware can hijack the secure boot process and tamper with boot1.  This is why I call it tamper-resilience as opposed to tamper-proofing.  However, this scheme guarantees that the OS does not open any vulnerabilities to an attacker who wants to tamper with the system.


[2]: With projects like CoreBoot, it may become possible to have the firmware itself load programs from an encrypted volume

Design Sketch for a Quantum-Safe Encrypted Filesystem

Almost all disk encryption systems today follow a similar design pattern.  Symmetric-key block ciphers are used, with the initialization vector derived entirely from the index of the block to which the data is written.  Often, the disk is broken up into sections, each of which has its own key.  The key point, however, is that the key and IV remain static across any number of writes.

This preserves the atomicity of writes and allows the design to work at the block layer as opposed to the filesystem layer.  However, it also restricts the modes of operation to those that are strong against reuse of IVs.  Typically, this means CBC mode.  This block-level design also makes it quite difficult to integrate a MAC.  Modes like AES-XTS go some distance toward mitigating this, and the problem can be mitigated completely by using a filesystem with inherent corruption resistance, like ZFS.

The problem is that this scheme completely prohibits the use of stream ciphers such as ChaCha20 or modes of operation such as CTR or OFB that produce stream cipher-like behavior.  This would be a footnote but for recent results that demonstrate a quantum period-finding attack capable of breaking basically all modes other than CTR or OFB.  This suggests that to implement quantum-safe encrypted storage, we need to come up with a scheme capable of using stream ciphers.
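A tiny example makes the IV-reuse danger concrete.  With a stream cipher (or a streaming mode like CTR or OFB), encrypting two plaintexts under the same key and IV means XORing both against the same keystream, so the XOR of the two ciphertexts equals the XOR of the two plaintexts.  The “keystream” below is an arbitrary mixing function, emphatically not a real cipher; only the reuse property matters:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Stand-in keystream; a real cipher would derive this from key + IV. */
static void
keystream(uint8_t *out, size_t len)
{
	uint8_t s = 0xA5;
	for (size_t i = 0; i < len; i++) {
		s = (uint8_t)(s * 167 + 31);	/* arbitrary mixing, NOT crypto */
		out[i] = s;
	}
}

/* Encrypt by XOR against the (fixed) keystream, as a stream cipher
 * would if the IV were reused on every write to the same block. */
static void
stream_encrypt(const uint8_t *pt, uint8_t *ct, size_t len)
{
	uint8_t ks[64];
	assert(len <= sizeof(ks));
	keystream(ks, len);
	for (size_t i = 0; i < len; i++)
		ct[i] = pt[i] ^ ks[i];
}
```

An attacker who ever sees two ciphertexts for the same block immediately learns the XOR of the two plaintext versions, which is why block-index-derived IVs rule these ciphers out.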

The Problem of Disk Encryption

The fundamental problem with using stream ciphers for block-layer disk encryption is that the initialization vector (and ideally the key) must be changed every time a block is written, yet that same material must be available at an arbitrarily later time in order to read the block back.

In general, there are basically three ways to manage keys in the context of disk encryption:

  1. Derive the key and IV from the block index
  2. Store the keys in a separate location on disk, and look them up when needed
  3. Alter the interface for block IO to take a key as a parameter

Most current disk encryption schemes use option 1; however, this ends up reusing IVs, which prohibits the use of stream ciphers (and modes like CTR and OFB).  Option 2 guarantees that we have a unique IV (and key, if we want it) every time we write to a given block; we simply change keys and record this in our key storage.  The price we pay for this is atomicity: every incoming block write requires a write to two separate disk blocks.  This effectively undermines even atomic filesystems like ZFS.  The only example of this sort of scheme of which I am aware is FreeBSD’s older GBDE system.

Option 3 punts the problem to someone else.  Of course, this means they have to solve the problem somehow.  The only way this ends up not being wholly equivalent to options 1 or 2 is if the consumer of the block-layer interface (the filesystem) somehow organizes itself so that a key and IV are always readily available whenever a read or write is about to take place.  This, of course, requires addressing full-disk encryption in the filesystem itself.  It also places certain demands on the design of the filesystem.
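To see why option 1 forbids stream ciphers, here is a toy sketch in C where the IV is a pure function of a static sector key and the block index.  The derivation itself is made up for illustration (it is not what GELI actually computes); the point is that rewriting the same block index necessarily reproduces the same IV:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/*
 * Option 1 sketch: derive a per-block IV purely from a static key and
 * the block index.  Because the result is a deterministic function of
 * (key, index), every rewrite of a given block reuses its IV.  The
 * mixing below is a toy placeholder, not a real keyed derivation.
 */
static void
derive_iv(const uint8_t key[16], uint64_t index, uint8_t iv[16])
{
	for (int i = 0; i < 16; i++)
		iv[i] = key[i] ^ (uint8_t)(index >> ((i % 8) * 8));
}
```

Determinism is exactly what makes this scheme convenient at the block layer, and exactly what restricts it to modes that tolerate IV reuse.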

Atomic Snapshot Filesystems

Atomic filesystems are designed in such a way that all I/O operations appear to be atomic.  With regard to writes, this means that the sort of filesystem corruption that necessitates tools like fsck cannot happen.  Either an operation takes place, or it does not.

Of course, a given write operation may actually perform many block writes; however, the filesystem’s on-disk data structures are carefully designed in such a way that one single write causes all of the operations that lead up to it to “take effect” at once.  Typically, this involves building up a number of “shadow” objects representing the new state, then switching over to them in a single write.

Note that in this approach, we effectively get snapshots for free.  We have a data structure consisting of a mutable spine that points to a complex but immutable set of data structures.  We never overwrite anything until the single operation that updates the mutable spine, causing the operation to take effect.

Atomic Key/IV Updates

The atomic snapshot filesystem design provides a way to effectively change the keys and IVs for every node of a filesystem data structure every time it is written.  Because we are creating a shadow data structure, then installing it with a single write, it is quite simple to generate new keys or IVs every time we create a node in this shadow structure.  Conversely, because the filesystem is atomic, and every node contains the keys and IVs for any node to which it points, anyone traversing the filesystem always has the information they need to decrypt any object they can reach.
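Here is a minimal sketch of the idea in C, using a single-child “tree” for brevity.  Updating a leaf never modifies anything in place: it builds fresh nodes (with freshly generated child keys) up to a new root, and a single store of the root pointer makes the whole update take effect, leaving the old root as a valid snapshot.  The counter-based key generator is a stand-in for a real CSPRNG, used here only so the behavior is deterministic:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/*
 * Each node records the key/IV material used to encrypt the child it
 * points to, so anyone who can traverse the structure can decrypt
 * everything reachable from it.
 */
struct node {
	uint64_t     child_key;	/* key/IV material for the child */
	struct node *child;
	int          data;	/* leaf payload */
};

static uint64_t next_key = 1;
static uint64_t gen_key(void) { return next_key++; }	/* NOT a CSPRNG */

static struct node *
mknode(struct node *child, int data)
{
	struct node *n = malloc(sizeof(*n));
	n->child = child;
	n->child_key = child ? gen_key() : 0;
	n->data = data;
	return n;
}

/*
 * Shadow update: rebuild the path from root to leaf with fresh keys.
 * Nothing in the old structure is touched; installing the returned
 * root pointer is the single atomic write.
 */
static struct node *
update_leaf(const struct node *root, int newdata)
{
	struct node *leaf = mknode(NULL, newdata);
	return mknode(leaf, root->data);	/* new root, new child_key */
}
```

The old root keeps pointing at the old leaf with the old key, which is precisely the free-snapshot property described above.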

This scheme has advantages over conventional disk or filesystem encryption.  Unlike conventional disk encryption, each filesystem object has its own key and IV, and these are uniquely generated every time a write takes place.  Nothing about the key and IV can be inferred by an attacker looking at an arbitrary disk block.  Unlike conventional filesystem encryption, which typically encrypts only file contents, everything is encrypted.

Possible ZFS Extension

The ZFS filesystem is a highly advanced filesystem and volume management scheme that provides fully atomic operations and snapshots.  I am admittedly not familiar enough with its workings to know with absolute certainty whether the scheme I describe above could be added to it, but I am fairly confident that it could.  I am also aware that ZFS provides an encryption system already, but I am also fairly confident that it is not equivalent to the scheme I describe above.

ZFS would also need to be extended to support a broader range of ciphers and modes of operation to take advantage of this scheme.  Support for CTR and OFB modes is absolutely essential, of course.  I would also recommend support for ciphers beyond AES; Camellia and ChaCha20 would make good additions, among others.


Quantum-safe disk encryption is arguably not as critical to develop as quantum-safe encryption for network communications.  With network communications, it is reasonable to assume that all traffic is being recorded and will be subject to quantum attacks once those become available.  The same is not true of disk storage.  However, the technology does need to be developed, and the recent results about the period-finding attack on symmetric cipher modes demonstrate a workable attack against nearly all disk encryption schemes.

I would urge all filesystem projects to consider the scheme I’ve laid out and integrate concerns for quantum-safe encryption into their design.

As a final note, should anyone from Illumos run across this blog, I’d be more than willing to discuss more details of this scheme with them.

FreeBSD EFI boot/loader Refactor

I have just completed (for some value of “complete”) a project to refactor the FreeBSD EFI boot and loader code.  This originally started out as an investigation of a possible avenue in my work on GELI full-disk encryption support for the EFI boot and loader, and grew into a project in its own right.

More generally, this fits into a bunch of work I’m pursuing or planning to pursue in order to increase the overall tamper-resistance of FreeBSD, but that’s another article.


To properly explain all this, I need to briefly introduce both the FreeBSD boot and loader architecture as well as EFI.

FreeBSD Boot Architecture

When an operating system starts, something has to do the work of getting the kernel (and modules, and often other stuff) off the disk and into memory, setting everything up, and then actually starting it.  This is the boot loader.  Boot loaders are often in a somewhat awkward position: they need to do things like read filesystems, detect some devices, load configurations, and do setup, but they don’t have the usual support of the operating system to get it done.  Most notably, they are difficult to work with because if something goes wrong, there is very little in the way of recovery, debugging, or even logging.

Moreover, back in the old days of x86 BIOS, space was a major concern: the BIOS pulled in the first disk sector, meaning the program had to fit into less than 512 bytes.  Even once a larger program was loaded, you were still in 16-bit execution mode.

To deal with this, FreeBSD adopted a multi-stage approach.  The initial boot loader, called “boot”, had the sole purpose of pulling in a more featureful loader program, called “loader”.  In truth, boot consisted of two stages itself: the tiny boot block, and then a slightly more powerful program loaded from a designated part of the BSD disklabel.

The loader program is much more powerful, having a full suite of filesystem drivers, a shell, facilities for loading and unloading the kernel, and other things.  This two-phase architecture overcame the severe limitations of the x86 BIOS environment.  It also allowed the platform-specific boot details to be separated from both the loader program and the kernel.  This sort of niceness is the hallmark of a sound architectural choice.

Inside the loader program, the code uses a set of abstracted interfaces to talk about devices.  Devices are detected, bound to a device switch structure, and then filesystem modules provide a way to access the filesystems those devices contain.  Devices themselves are referred to by strings that identify the device switch managing them.  This abstraction allows loader to support a huge variety of configurations and platforms in a uniform way.
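To make that concrete, here is a simplified sketch of the device-switch idea in C.  The structure and field names are merely inspired by the loader’s struct devsw; they are illustrative, not the actual definitions from the FreeBSD source:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * Simplified device switch: each backend driver fills in a table of
 * operations, devices are named by the switch that manages them, and
 * generic code dispatches through the table.
 */
struct devsw {
	const char *dv_name;	/* e.g. "disk", "net" */
	int (*dv_strategy)(void *devdata, size_t blk, size_t nblk,
	    char *buf);
};

/* Toy backend: treat each "block" as one byte and fill it with 'A'. */
static int
demo_strategy(void *devdata, size_t blk, size_t nblk, char *buf)
{
	(void)devdata;
	(void)blk;
	memset(buf, 'A', nblk);
	return 0;
}

static struct devsw demo_dev = { "demo", demo_strategy };

/* Generic code: find a switch by name and read through it. */
static struct devsw *devtab[] = { &demo_dev, NULL };

static struct devsw *
dev_lookup(const char *name)
{
	for (int i = 0; devtab[i] != NULL; i++)
		if (strcmp(devtab[i]->dv_name, name) == 0)
			return devtab[i];
	return NULL;
}
```

Filesystem modules sit on top of this same dispatch layer, which is what lets loader support so many configurations uniformly.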

The Extensible Firmware Interface

In the mid-2000s, the Extensible Firmware Interface started to replace BIOS as the boot environment on x86 platforms.  EFI is far more modern, featureful, abstracted, and easy to work with than the archaic, crufty, and often unstandardized or undocumented BIOS.  I’ve written boot loaders for both; EFI is pretty straightforward, where BIOS is a tarpit of nightmares.

One thing EFI does is remove the draconian constraints on the initial boot loader.  The firmware loads a specific file from a filesystem, rather than a single block from a disk.  The EFI spec guarantees support for the FAT32 filesystem and the GUID Partition Table, and individual platforms are free to support others.

Another thing EFI does is provide abstracted interfaces for things like device I/O, filesystems, and many other facilities.  Devices (both concrete hardware and derived devices such as disk partitions and network filesystems) are represented using “device handles”, which support various operational interfaces through “protocol interfaces”, and are named using “device paths”.  Moreover, vendors and operating system authors alike are able to provide their own drivers through a driver binding interface, which can create new device handles or bind new protocol interfaces to existing ones.

FreeBSD Loader and EFI Similarities

The FreeBSD loader and the EFI framework do many of the same things, and they do them in similar ways most of the time.  Both have an abstracted representation of devices, interfaces for interacting with them, and a way of naming them.  In many ways, the FreeBSD loader framework is prescient in that it did many of the things that EFI ended up doing.

The one shortcoming of the FreeBSD loader is its lack of support for dynamic device detection, also known as “hotplugging”.  When FreeBSD’s boot architecture was created (circa 1994), hotplugging was extremely uncommon: most hardware was expected to be connected permanently and remain connected for the duration of operation.  Hence, the architecture was designed around a model of one-time static detection of all devices, and the code evolved around that assumption.  Hot-plugging was added to the operating system itself, of course, but there was little need for it in the boot architecture.  When EFI was born (mid-2000s), hot-pluggable devices were common, and so supporting them was an obvious design choice.

EFI does this through its driver binding model, where drivers register a set of callbacks that check whether a device is supported and then attempt to attach to it.  When a device is disconnected, another callback is invoked to detach the driver.  FreeBSD’s loader, on the other hand, expects to detect all devices in a probing phase during its initialization.  It then sets up additional structure (most notably, its new bcache framework) based on the list of detected devices.  Some phases of detection may rely on earlier ones; for example, the ZFS driver may update some devices that were initially detected as block devices.
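The callback flow can be sketched in plain C.  No real EFI types appear here; the names merely mirror the shape of EFI_DRIVER_BINDING_PROTOCOL’s Supported/Start pair:

```c
#include <assert.h>

/*
 * Toy driver-binding flow: each driver exposes "supported" and "start"
 * callbacks, and the framework offers every device to every driver,
 * whether it appears at boot or is hot-plugged later.
 */
struct device {
	int type;	/* what kind of hardware this is */
	int attached;
};

struct driver {
	int  (*supported)(const struct device *);
	void (*start)(struct device *);
};

#define TYPE_BLOCK 1

static int
blk_supported(const struct device *d)
{
	return d->type == TYPE_BLOCK;
}

static void
blk_start(struct device *d)
{
	d->attached = 1;
}

static struct driver blk_driver = { blk_supported, blk_start };

/* Called whenever a device appears, even long after initialization. */
static void
connect(struct driver *drv, struct device *dev)
{
	if (drv->supported(dev))
		drv->start(dev);
}
```

Because attachment happens per-device on demand rather than in a single probe pass, a driver structured this way handles hot-plug for free, which is what the static-detection loader model cannot do.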

Refactoring Summary

As I mentioned, my work on this was originally a strategy for implementing GELI support.  A problem with the two-phase boot process is that it’s difficult to get information between the two phases, particularly in EFI, where all code is position-independent, no hard addresses are guaranteed, and components are expected to talk through abstract interfaces.  (In other words, it rules out the sort of hacks that the non-EFI loader uses!)  This is a problem for something like GELI, which has to ask for a password to unlock the filesystem (we don’t want to ask for a password multiple times).  Also, much of what I was having to implement for GELI with abstract devices and a GPT partition driver and such ended up mirroring things that already existed in the EFI framework.

I ended up refactoring the EFI boot and loader to make more use of the EFI framework, particularly its protocol interfaces.  The following is a summary of the changes:

  • The boot and loader programs now look for instances of the EFI_SIMPLE_FILE_SYSTEM_PROTOCOL, and use that interface to load files.
  • The filesystem backend code from loader was moved into a driver which does the same initialization as before, then attaches EFI_SIMPLE_FILE_SYSTEM_PROTOCOL interfaces to all device handles that host supported filesystems.
  • This is accomplished through a pair of wrapper interfaces that translate EFI_SIMPLE_FILE_SYSTEM_PROTOCOL and the FreeBSD loader framework’s filesystem interface back and forth.
  • I originally wanted to move all device probing and filesystem detection into the EFI driver model, where probing and detection would be done in callbacks.  However, this didn’t work, primarily because the bcache framework is strongly coupled to the static-detection model.
  • Interfaces and device handles installed in boot can be used by loader without problems.  This provides a way to pass information between phases.
  • The boot and loader programs can also make use of interfaces installed by other programs, such as GRUB, or custom interfaces provided by open-source firmware.
  • The boot and loader programs now use the same filesystem backend code; the minimal versions used by boot have been discarded.
  • Drivers for things like GELI, custom partition schemes, and similar things can work by creating new device nodes and attaching device paths and protocol interfaces to them.

I sent an email out to -hackers announcing the patch this morning, and I hope to get GELI support up and going in the very near future (the code is all there; I just need to plug it in to the EFI driver binding and get it building and running properly).

For anyone interested, the branch can be found here: