Slides from my BSDCan Talk

These are the slides from my BSDCan talk on a system-level public-key trust system for FreeBSD.

Democratizing and Decentralizing Social Media Infrastructure

This post is more of a release announcement for a whitepaper I’ve been working on for some time now.  After a number of edits, I am prepared to release the full paper for public reading.  I will be seeking to present it as a talk at open-source conferences, particularly socially-focused ones.

The central thesis centers on an alternative infrastructure for (among other things) social applications that is fundamentally decentralized, democratic, and user-respecting.  No, this is not a blockchain proposal!  While I claim no credit for it, the timing really couldn’t have been better, with the news about Facebook, Cambridge Analytica, and others breaking daily.

In the coming year, I will be seeking to implement the architecture the paper describes.  I’ll be publicizing the paper, seeking non-profit funding and industry partners, and developing an open-source reference implementation.

But rather than keep talking, I’ll leave you to read the paper.  Comments and other points of interest are welcome, and longer-form comments should be directed to me via email.

Meltdown/Spectre and Implications for OS Security

This week has seen the disclosure of a devastating vulnerability: Meltdown and its sister vulnerability, Spectre.  Both are symptoms of a larger problem that has evidently gone unnoticed for the better part of 50 years of computer architecture: the potential for side-channels in processor pipeline designs.

Aside: I will confess that I am rather ashamed that I didn’t notice this, despite having a background in all the knowledge areas that should have revealed it.  It took me all of three sentences of the paper before I realized what was going on.  Then again, this somehow got by every other computer scientist for almost half a century.  The conclusion seems to be that we are, all of us, terrible at our jobs…

A New Class of Attack, and Implications

I won’t go in to the details of the attacks themselves; there are already plenty of good descriptions of that, not the least of which is the paper.  My purpose here is to analyze the broader implications with particular focus on the operating systems security perspective.

To be absolutely clear, this is a very serious class of vulnerabilities.  It demands immediate and serious attention from both hardware architects and OS developers, not to mention people further up the stack.

The most haunting thing about this disclosure is that it suggests the existence of an entire new class of attack, of which the Meltdown/Spectre attacks are merely the most obvious.  The problem, which has evidently been lurking in processor architecture design for half a century, has to do with two central techniques in processor design:

  1. The fact that processors tend to enhance performance by making commonly-executed things faster.  This is done by a whole host of features: memory caches, trace caches, and branch predictors, to name the more common techniques.
  2. The fact that processor design has followed a general paradigm of altering the order in which operations are executed, whether by overlapping multiple instructions (pipelining), executing multiple instructions in parallel (superscalar), executing them out of order and then reconciling the results (out-of-order), or executing multiple possible outcomes and keeping only one result (speculative).

The security community is quite familiar with the impacts of (1): it is the basis for a host of side-channel vulnerabilities.  Much of the work in computer architecture deals with implementing the techniques in (2) while maintaining the illusion that everything is executed one instruction at a time.  CPUs are carefully designed to preserve this illusion with respect to the processor state defined by the ISA, tracking which operations depend on which and keeping track of the multiple possible worlds that arise from speculation.  What evidently got by us is that the side-channels provided by (1) give an attacker the ability to violate the illusion of executing one instruction at a time.

What this almost certainly means is that this is not the last attack of this kind we will see.  Out-of-order speculative execution happens to be the most straightforward to attack.  However, I can sketch out ways in which even in-order pipelines could potentially be vulnerable.  When we consider more obscure architectural features such as trace-caches, branch predictors, instruction fusion/fission, and translation units[0], it becomes almost a certainty that we will see new attacks in this family published well into the future.

A complicating factor in this is that CPU pipelines are extraordinarily complicated, particularly these days.  (I recall a tech talk about the SPARC processor whose scope was easily as large as the FreeBSD core kernel, or the HotSpot virtual machine.)  Moreover, these designs are typically closed.  There are a lot of different ways a core can be designed, even for a simple ISA, and we can expect entire classes of these design decisions to be vulnerable.

[0]: The x86 architecture tends to have translation units that are significantly more complex than the cores themselves, and are likely to be a nest of side-channels.

OS Security

The implications of this for OS (and even application) security are rather dismal.  The one saving virtue in this is that these attacks are read-only: they can be used to exfiltrate data from any memory immediately accessible to the pipeline, but it’s a pretty safe assumption that they cannot be used to write to memory.

That boon aside, this is devastating to OS security.  As a kernel developer, there aren’t any absolute defenses that we can implement without a complete overhaul of the entire paradigm of modern operating systems and major revisions to the POSIX specification.  (Even then, it’s not certain that this would work, as we can’t know how the processors implement things under the hood.)

Therefore, OS developers are left with a choice among partial solutions.  We can make the attacks harder, but we cannot stop them in all cases.  The only perfect defense is to replace the hardware, and hope nobody discovers a new side-channel attack.

Attack Surface and Vulnerable Assets

The primary function of these kinds of attacks is to exfiltrate data across various isolation boundaries.  The following is the most general description of the capabilities of the attack, as far as I can tell (a minimal code sketch follows the list):

  1. Any data that can be loaded into a CPU register can potentially be converted into the execution of some number of events before a fault is detected
  2. These execution patterns can affect the performance of future operations, giving rise to a side-channel
  3. These side-channels can be read through various means (note that this need not be done by the same process)
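
To make steps 1-3 concrete, here is a minimal C sketch in the style of the published Meltdown proof-of-concept.  It is illustrative only: the time_access primitive (a fenced rdtsc read) and the machinery for surviving the fault (a signal handler or TSX) are assumed and not shown.

    #include <stdint.h>

    #define SLOT 4096                           /* one probe slot per page, to defeat the prefetcher */
    static volatile uint8_t probe[256 * SLOT];  /* flushed from the cache beforehand */

    /* Steps 1 and 2: a faulting, secret-dependent load executes
     * transiently and leaves one probe line in the cache. */
    static void transient_read(const volatile uint8_t *secret_addr)
    {
        uint8_t v = *secret_addr;   /* faults architecturally */
        (void)probe[v * SLOT];      /* transient cache footprint */
    }

    /* Step 3: recover the byte by timing accesses to each probe slot. */
    static int recover_byte(uint64_t (*time_access)(const volatile void *))
    {
        int best = -1;
        uint64_t best_time = UINT64_MAX;

        for (int i = 0; i < 256; i++) {
            uint64_t t = time_access(&probe[i * SLOT]);
            if (t < best_time) {
                best_time = t;
                best = i;
            }
        }
        return best;  /* the one cached (fast) slot indexes the secret value */
    }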

This gives rise to the ability to violate various isolation boundaries (though only for the purposes of observing state):

  • Reads of kernel/hypervisor memory from a guest (aka “Meltdown”)
  • Reads of another process’ address space (aka “Spectre”)
  • Reads across intra-process isolation mechanisms, such as different VM instances (this attack is not named, but constitutes, among other things, a perfect XSS attack)

A salient feature of these attacks is that they are somewhat slow: as currently described, the attack will incur 2n + 1 cache faults to read out n bits (not counting setup costs).  I do not expect this to last, however.

The most significant danger of this attack is the exfiltration of secret data, ideally small and long-lived data objects.  Examples of key assets include:

  • Master keys for disk encryption (example: the FreeBSD GELI disk encryption scheme)
  • TLS session keys
  • Kerberos tickets
  • Cached passwords
  • Keyboard input buffers, or textual input buffers

More transient information is also vulnerable, though there is an attack window.  The most vulnerable assets are those keys which necessarily must remain in memory and unencrypted for long periods of time.  The best example of this I can think of is a disk encryption key.

Imperfect OS Defense Mechanisms

Unfortunately, operating systems are limited in their ability to adequately defend against these attacks, and moreover, many of these mechanisms incur a performance penalty.

Separate Kernel and User Page Tables

The solution adopted by the Linux KAISER patches is to unmap most of the kernel from user space, keeping only essential CPU metadata and the code to switch page tables mapped.  Unfortunately, this requires a TLB flush for every system call, incurring a rather severe performance penalty.  Additionally, this only protects against accesses to kernel addresses; it cannot stop accesses to other process address spaces or crossings of isolation boundaries within a process.

Make Access Attempts to Kernel Memory Non-Recoverable

An idea I proposed on the FreeBSD mailing lists is to cause attempted memory accesses to a kernel address range to result in an immediate cache flush followed by non-recoverable process termination.  This avoids the cost of separate kernel and user page tables, but is not a perfect defense in that there is a small window of time in which the attack can be carried out.

Special Handling of Sensitive Assets

Another potential middle-ground is to handle sensitive kernel assets specially, storing them in a designated location in memory (or better yet, outside of main memory).  If a main-memory store is used, this range alone can be mapped and unmapped when crossing into kernel space, thus avoiding most of the overhead of a TLB flush.  This would only protect assets that are stored in this manner, however.  Anything else (most notably the stacks for kernel threads) would remain vulnerable.

Userland Solutions

Userland programs are even less able to defend against such attacks than the kernel; however, there are architectural measures that can be taken.

Avoid Holding Sensitive Information for Long Periods

As with kernel-space sensitive assets, one mitigation mechanism is to avoid retaining sensitive assets for long periods of time.  An example of this is GPG, which keeps the user’s keys decrypted only for a short period of time (usually 5 minutes) after they request to use a key.  While this is not perfect, it limits attacks to a brief window, and gives users the ability to adopt practices which ensure that other means of attack are shut off during this window.

Minimize Potential for Remote Execution

While this only works on systems that are effectively single-user, minimizing the degree to which remote code can execute on one’s system closes many vulnerabilities.  This is obviously difficult in the modern world, where most websites pull in Javascript from dozens if not hundreds of sources, but it can be done with browser plugins like NoScript.

JIT/Interpreter Mitigations

As JIT compilers and interpreters are a common mechanism for limited execution of remote code, they are in a position to attempt to mitigate many of these attacks (there is an LLVM patch out to do this).  This is once again an imperfect defense, as they are on the wrong side of Rice’s theorem.  There is no general mechanism for detecting these kinds of attacks, particularly where multiple threads of execution are involved.
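
As an illustration of what such mitigations look like at the code level, here is an index-masking sketch in the same spirit as the speculative-load-hardening work in the LLVM patch mentioned above.  This is a generic sketch, not that patch itself, and the power-of-two table size is an assumption made so that the mask is exact.

    #include <stddef.h>
    #include <stdint.h>

    /* Load table[i] safely even under branch misspeculation: the mask
     * keeps the speculatively-computed address in bounds, so a
     * mispredicted bounds check cannot produce an out-of-bounds read.
     * Assumes table_size is a power of two. */
    uint8_t table_load(const uint8_t *table, size_t table_size, size_t i)
    {
        if (i >= table_size)
            return 0;
        return table[i & (table_size - 1)];
    }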

Interim Hardware Mitigations

The only real solution is to wait for hardware updates which fix the vulnerabilities.  This takes time, however, and we would like some interim solution.

One possible mitigation strategy is to store and process sensitive assets outside of the main memory.  HSMs and TPMs are an example of this (though not one I’m apt to trust), and the growth of the open hardware movement offers additional options.  One option in particular, which I may put into effect myself, uses a small FPGA device (such as the PicoEVB) programmed with an open design to serve as such a device.

More generally, I believe this incident highlights the value of these sorts of hardware solutions, and I hope to see increased interest in developing open implementations in the future.

Summary

The Meltdown and Spectre attacks, severe as they are by themselves, represent the beginning of something much larger.  A new class of attack has come to the forefront, which enables data exfiltration in spite of hardware security mechanisms, and we can and should expect to see more vulnerabilities of this kind in the future.  What makes these vulnerabilities so dangerous is that they directly compromise hardware protection mechanisms on which OS security depends, and thus there is no perfect defense at the OS level against them.

What can and should be done is to adapt and rethink security approaches at all levels.  At the hardware level, a paradigm shift is necessary to avoid vulnerabilities of this kind, as is the consideration that hardware security mechanisms are not as absolute as has been assumed.  At the OS level, architectural changes and new approaches can harden an OS against these vulnerabilities, but they generally cannot repel all attacks.  At the user level, similar architectural changes as well as minimization of attack surfaces can likewise reduce the likelihood and ease of attack, but cannot repel all such attacks.

As a final note, the trend in information security has been an increasing focus on application security, and particularly web application security.  This event, as well as the fact that a relatively simple vulnerability managed to evade detection by the entire field of computer science for half a century, strongly suggests that systems security is not at all the solved problem it is commonly assumed to be, and that new thinking and new approaches are needed in this area.

Design of a Trust System for FreeBSD

About a month ago, I started a discussion on freebsd-hackers and freebsd-security about a system for signed executables, with a focus on signed kernels and kernel modules.  This is part of a larger agenda of mine to equip FreeBSD with OS-level tamper resistance features.

The initial use of this is signing the kernel and its modules, and checking signatures during the loader process as well as at runtime when kernel modules are loaded.  However, it is desirable to build a system that is capable of growing in likely directions, such as executable and library signing.

This article details the current state of the design of this system.

Desiderata

I originally outlined a number of goals for this system:

  1. Be able to check for a correct cryptographic signature for any kernel or modules loaded at boot time for some platforms (EFI at a minimum)
  2. Be able to check for a correct cryptographic signature for any kernel module loaded during normal operations (whether or not to do this could be controlled by a sysctl, securelevel, or some similar mechanism)
  3. Work with what’s in the base system already and minimize new additions (ideally, just a small utility to sign executables)
  4. Minimize administrative overhead and ideally, require no changes at all to maintain signed kernel/modules
  5. Have a clear path for supporting signed executables/libraries.
  6. The design must support the case where a system builds locally and uses its own key(s) for signing kernels and modules (and anything else) and must allow the administrator complete control over which key(s) are valid for a given system (i.e. no “master keys” controlled by central organizations)
  7. The design must allow for the adoption of new ciphers (there is an inevitable shift to post-quantum ciphers coming in the near future)

I also specified a number of non-goals:

  • Hardware/firmware-based attacks are considered out-of-scope (there is no viable method for defending against them at the OS level)
  • Boot platforms that don’t provide their own signature-checking framework up to loader/kernel can’t be properly secured, and are considered out-of-scope
  • Boot platforms that impose size restrictions prohibiting incorporation of RSA and Ed25519 crypto code (e.g. i386 BIOS) are considered out-of-scope
  • GRUB support is desirable; however, it is not necessary to support GRUB out-of-the-box (meaning a design requiring reasonable modifications to GRUB is acceptable)

Considerations

There are several considerations that should weigh in on the design.

FreeBSD Base System

Unlike Linux, FreeBSD has a base operating system which contains a number of tools and libraries providing a core set of operating system utilities.  Most notably, the base system contains the OpenSSL (or in some cases, LibreSSL) crypto suite.  This includes an encryption library as well as tools capable of creating and managing key-pairs and other cryptographic data in a variety of formats.

Additionally, the FreeBSD base system contains libelf, a library that provides mechanisms for manipulating ELF binaries, as well as the binutils suite, including objcopy, a command-line tool capable of manipulating ELF binaries.

Note that only some of these components (namely the signelf tool) exist at present; the rest exist only as draft man pages describing them.

Public-Key Cryptography

The FreeBSD kernel does not currently incorporate code for public-key cryptography, and direct incorporation of OpenSSL into the kernel has proven infeasible.  Additionally, parsing code needs to be incorporated into the kernel for any formats that are used.  Options here include incorporation of code from the NaCl library, which provides a very lightweight implementation of both RSA 4096 and Ed25519, as well as creating a minimal library out of code harvested from OpenSSL or LibreSSL.

A note on elliptic curve cryptography: the state of support for safe elliptic curves is sad.  In my drafts of the man pages, I have mandated that the only acceptable curves are those that satisfy the security properties described by the SafeCurves project.  At this time, these include M-221, E-222, Curve1174, Curve25519, E-382, M-383, Curve383187, Curve41417, Goldilocks-448, M-511, and E-521.  Unfortunately, none of these is supported by OpenSSL at this time, though Curve25519 support is supposedly coming soon.  However, I would prefer to write specs that mandate the right curves (and thus put pressure on crypto libraries) than cave to using bad ones.

Modifications to GRUB

GRUB provides the best option for FreeBSD coreboot support at this time.  It also provides an existing mechanism for signing binaries.  However, this mechanism is deficient in two ways.  First, it relies on external signatures, which would complicate administration and require modification of virtually all installer programs, as well as run the risk of stale signatures.  Second, it relies on the gnupg toolset, which is not part of the FreeBSD base system.  Thus, it is inevitable that GRUB will need to be patched to support the signed executables proposed by this design.  However, we should make efforts to keep the necessary changes as minimal as possible.

Signing and Trust System Design

The signing and trust system consists of a number of components, some of which are standards, some of which are interfaces, and some of which are tools.  The core feature, of course, is the signed ELF convention.  The signelf tool provides a one-stop tool for signing large numbers of executables.  The trust system provides a system-level mechanism for registering and maintaining verification keys that are used to check signatures on kernel modules.  Finally, the portable verification library provides a self-contained code package that can be dropped into the kernel, the loader, or a third-party codebase like GRUB.

Note that this design is not yet implemented, so it may be subject to change.  Also, it has not yet undergone review on the FreeBSD lists, so it should be considered more of a proposal.

Signed ELF Binaries

The ELF format is very flexible, and provides a generic mechanism for storing metadata.  The signed ELF convention utilizes this to store signatures in a special section within the binary itself.  A signed ELF binary contains a section named .sign, which contains a detached PKCS#7 signature in DER encoding for the file.  This signature is computed (and checked) on the entire file, with the .sign section itself being replaced by zero data of equal size and position.

Signing an ELF binary is somewhat involved, as it requires determining the size of a signature, creating a new section (along with its name), recomputing the ELF layout, computing the signature, and writing it into the section.  Checking a signature is considerably simpler: it involves merely copying the signature, overwriting the .sign section with zeros, and then checking the signature against the entire file.
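
To make the checking procedure concrete, here is a rough sketch of what a verifier could look like using libelf and OpenSSL from the base system.  This is illustrative, not the signelf implementation: error handling is elided, and the caller is assumed to supply the verification certificate(s) and trust store, since the signature itself carries no certificates.

    #include <fcntl.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #include <gelf.h>
    #include <libelf.h>

    #include <openssl/bio.h>
    #include <openssl/pkcs7.h>
    #include <openssl/x509.h>

    int
    verify_signed_elf(const char *path, STACK_OF(X509) *certs, X509_STORE *store)
    {
        int fd = open(path, O_RDONLY);

        elf_version(EV_CURRENT);
        Elf *e = elf_begin(fd, ELF_C_READ, NULL);

        /* Locate the .sign section and remember its file extent. */
        size_t shstrndx;
        elf_getshdrstrndx(e, &shstrndx);

        GElf_Shdr shdr;
        Elf_Scn *scn = NULL;
        int found = 0;
        while ((scn = elf_nextscn(e, scn)) != NULL) {
            gelf_getshdr(scn, &shdr);
            const char *name = elf_strptr(e, shstrndx, shdr.sh_name);
            if (name != NULL && strcmp(name, ".sign") == 0) {
                found = 1;
                break;
            }
        }
        if (!found)
            return -1;

        /* Read the whole file, parse the DER signature out of .sign,
         * then zero the section in place, per the convention above. */
        off_t len = lseek(fd, 0, SEEK_END);
        unsigned char *buf = malloc((size_t)len);
        pread(fd, buf, (size_t)len, 0);

        const unsigned char *der = buf + shdr.sh_offset;
        PKCS7 *p7 = d2i_PKCS7(NULL, &der, (long)shdr.sh_size);
        memset(buf + shdr.sh_offset, 0, shdr.sh_size);

        /* Check the detached signature over the zero-filled image.
         * PKCS7_NOINTERN forces use of the caller-supplied certs. */
        BIO *content = BIO_new_mem_buf(buf, (int)len);
        int ok = PKCS7_verify(p7, certs, store, content, NULL, PKCS7_NOINTERN);

        BIO_free(content);
        PKCS7_free(p7);
        free(buf);
        elf_end(e);
        close(fd);
        return ok == 1 ? 0 : -1;
    }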

The PKCS#7 format was chosen because it is an established standard which supports detached signatures as well as many other kinds of data.  The signatures generated for signed ELF files are minimal and do not contain certificates, attributes, or other data (a signature for RSA-4096 is under 800 bytes); however, the format is extensible enough to embed other data, allowing for future extensions.

The signelf Tool

Signed ELF binaries can be created and checked by adroit usage of the objcopy and openssl command-line tools.  This is quite tedious, however.  Moreover, there are certain use cases that are desirable, like signing a batch of executables using an ephemeral key, discarding the key, and generating a certificate for verification.  The signelf tool is designed to be a simplified mechanism for signing batches of executables which provides this additional functionality.  It is a fairly straightforward use of libelf and OpenSSL, and should be able to handle the binaries produced by normal compilation.  Additionally, the signelf tool can verify signed ELF files.  The signelf code is currently complete, and works on a kernel as well as modules.

The Trust System

In order to check signatures on kernel modules (and anything else), it is necessary to establish and maintain a set of trusted verification keys in the kernel (as well as in the boot loader).  In order for this system to be truly secure, at least one trust root key must be built into the kernel and/or the boot loader, which can then be used to verify other keys.  The trust system refers to the combination of kernel interfaces, standard file locations, and conventions that manage this.

System Trust Keys and Signing Keys

The (public) verification keys used to check signatures as well as the (private) signing keys used to generate signatures are kept in the /etc/trust/ directory.  Verification keys are stored in /etc/trust/certs, in the X509 certificate format, and private keys are stored in /etc/trust/keys in the private key format.  Both are stored in the PEM encoding (as is standard with many OpenSSL applications).

There is no requirement as to the number, identity, or composition of verification or signing keys.  Specifically, there is not and will never be any kind of mandate for any kind of verification key not controlled by the owner of the machine.  The trust system is designed to be flexible enough to accommodate a wide variety of uses, from machines that only trust executables built locally, to ones that trust executables built on an in-house machine only, to those that trust executables built by a third party (such as the FreeBSD foundation), or any combination thereof.

The preferred convention, however, is to maintain a single, per-machine keypair which is then used to sign any additional verification keys.  This keypair should be generated locally for each machine, and never exported from the machine.

Trust Keys Library

Keys under /etc/trust/certs will be converted into C code constants and subsequently compiled into a static library providing the raw binary data for the keys during the buildworld process.  This provides the mechanism for building keys into the kernel, loader, and other components.  These keys are known as trust root keys, as they provide the root set for all trusted keys.
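
The generated code might look something like the following.  The names and table shape are illustrative assumptions, as is the use of decoded DER bytes (the design above does not specify whether the embedded form is PEM text or DER); only the certificate-to-C-constant conversion itself is part of the design.

    #include <stddef.h>

    /* Hypothetical output of the certificate-to-C conversion step. */
    struct trust_root {
        const unsigned char *cert;  /* encoded certificate data */
        size_t               len;
    };

    static const unsigned char trust_root_0[] = {
        0x30, 0x82, 0x03, 0x11, /* ... remaining bytes of the certificate ... */
    };

    const struct trust_root trust_roots[] = {
        { trust_root_0, sizeof(trust_root_0) },
    };
    const size_t ntrust_roots = sizeof(trust_roots) / sizeof(trust_roots[0]);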

Kernel Trust Interface

The kernel trust interface provides access to the set of verification keys trusted by the kernel.  This consists of an in-kernel interface as well as a user-facing device interface.  The in-kernel interface looks like an ordinary key management system (KMS) interface.  The device interface provides two primary mechanisms: access to the current set of trusted keys and the ability to register new keys or revoke existing ones.

Access to the existing database is accomplished through a read-only device node which simply outputs all of the existing trusted keys in PEM-encoded X509 format.  This formatting allows many OpenSSL applications to use the device node itself as a CA root file.  Updating the key database is accomplished by writing to a second device node.  Writing an X509 certificate signed by one of the existing trusted keys to this device node will cause the key contained in the certificate to be added to the trusted key set.  Writing a certificate revocation list (CRL) signed by a trusted key to the device node will revoke the keys in the revocation list as well as any keys whose signature chains depend on them.  Trust root keys cannot be revoked, however.

This maintains the trusted key set in a state where any trusted key has a signature chain back to a trust root key.
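
To illustrate the “CA root file” property of the read-only node, an ordinary OpenSSL client could point its trust store directly at it.  The node path /dev/trust is a placeholder of my own here; the design does not fix a name.

    #include <openssl/ssl.h>

    /* Use the kernel's trusted key set as the CA root for TLS
     * verification.  "/dev/trust" is a hypothetical node name. */
    int use_system_trust(SSL_CTX *ctx)
    {
        return SSL_CTX_load_verify_locations(ctx, "/dev/trust", NULL);
    }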

Portable Verification Library

The final piece of the system is the portable verification library.  This library should resemble a minimal OpenSSL-like API that performs parsing/encoding of the necessary formats (PKCS#7, X509, CRL), or a reduced subset thereof, as well as public-key signature verification.  I have not yet decided whether to create this by harvesting code from OpenSSL/LibreSSL or to write it from scratch (with code from NaCl), but I’m leaning toward harvesting code from LibreSSL.

Operation

The trust system performs two significant roles in the system as planned, and can be expanded to do more things in the future.  First, it ensures that the loader only loads kernels and modules that are signed.  Second, it can serve as a kind of system-wide keyring (hence the device node that looks like a typical PEM-encoded CA root file for OpenSSL applications).  The following is an overview of how it would operate in practice.

Signature Checking in the loader

In an EFI environment, boot1.efi and loader.efi have a chain of custody provided by the EFI secure boot framework.  This is maintained from boot1.efi to loader.efi, because of the use of the EFI loaded image interface.  The continuation of the chain of custody must be enforced directly by loader.efi.  To accomplish this, loader will link against the trust key library at build time to establish root keys.  These in turn can either be used to check the kernel and modules directly, or they can be used to check a per-kernel key (the second method is recommended; see below).

Per-Kernel Ephemeral Keys

The signelf utility was designed with the typical kernel build process in mind.  The kernel and all of its modules reside in a single directory; it’s a simple enough thing to run signelf on all of them as the final build step.  Additionally, signelf can generate an ephemeral key for signing and write out the verification certificate after it finishes.

This gives rise to a use pattern where every kernel is signed with an ephemeral key, and a verification certificate is written into the kernel directory.  This certificate is in turn signed by the local trust root key (signelf does this as part of the ephemeral key procedure).  In this case, the loader first attempts to load the verification certificate for a kernel, then it loads the kernel and all modules.

Signed Configuration Files

The FreeBSD loader relies on several files such as loader.4th, loader.conf, loader.menu, and others that control its behavior in significant ways.  Additionally, one can foresee applications of this system that rely on non-ELF configuration files.  For loader, the simplest solution is to store these files as non-detached PKCS#7 messages (meaning, the message and file contents are stored together).  Thus, loader would look for loader.conf.pk7, loader.4th.pk7, and so on.  A loader built for secure boot would look specifically for the .pk7 files, and would require signature verification in order to load them.
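
As a sketch, the non-detached form could be produced with OpenSSL’s PKCS#7 API: omitting the PKCS7_DETACHED flag makes OpenSSL embed the content in the message.  The helper below is illustrative; error handling and streaming are elided.

    #include <openssl/pem.h>
    #include <openssl/pkcs7.h>

    /* Produce an attached PKCS#7 message wrapping a configuration file,
     * e.g. for loader.conf.pk7.  PKCS7_BINARY suppresses MIME text
     * canonicalization of the file body. */
    PKCS7 *sign_attached(X509 *cert, EVP_PKEY *key, BIO *conf_data)
    {
        return PKCS7_sign(cert, key, NULL, conf_data, PKCS7_BINARY);
    }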

The keybuf Interface

The kernel keybuf interface was added in a patch I contributed in late March 2017.  It is used by GELI boot support to pass keys from the boot phases to the kernel.  However, it was designed to support up to 64 distinct 4096-bit keys without modification; thus it can be used with RSA-4096.  An alternative to linking the trust key library directly into the kernel is to have it receive the trusted root key as a keybuf entry.

This approach has advantages and disadvantages.  The advantage is it allows a generic kernel to be deployed to a large number of machines without rebuilding for each machine.  Specifically, this would allow the FreeBSD foundation to publish a kernel which can make use of local trust root keys.  The primary disadvantage is that the trust root keys are not part of the kernel and thus not guaranteed by the signature checking.  The likely solution will be to support both possibilities as build options.

Key Management

The preferred scheme for trust root keys is to have a local keypair generated on each machine, with the local verification certificate serving as the sole trust root key.  Any vendor keys that might be used would be signed by this keypair and loaded as intermediate keys.  Every kernel build would produce an ephemeral key which would be signed by the local keypair.  Kernel builds originating from an organization would also be signed by an ephemeral key, whose certificate is signed by the organization’s keypair.  For example, the FreeBSD foundation might maintain a signing key, which it uses to sign the ephemeral keys of all kernel builds it publishes.  An internal IT organization might do the same.

It would be up to the owner of a machine whether or not to trust the vendor keys originating from a given organization.  If the keys are trusted, then they are signed by the local keypair.  However, it is always an option to forego all vendor keys and only trust locally-built kernels.

An alternate use might be to have no local signing key, and only use an organizational trust root key.  This pattern is suitable for large IT organizations that produce lots of identical machines from a standard image.

Conclusion

This design for the trust system and kernel/module signing is a comprehensive system-wide public-key trust management system for FreeBSD.  Its initial purpose is managing a set of keys that are used to verify kernels and kernel modules.  However, the system is designed to address the issues associated with trusted key management in a comprehensive and thorough way, and to leave the door open to many possible uses in the future.

RISC-V (Crypto) Engines Extension

I discovered the RISC-V project over the holidays, and promptly fell in love with it.  RISC-V represents an expansion of the open-source ethos into the hardware space, and I believe it has the potential to be one of the most important open hardware projects in the long run.

Crypto is something I care about, and an open hardware project like RISC-V presents an excellent opportunity to introduce high-quality extensible hardware cryptographic functions.  As I’m no stranger to computer architecture, I decided to roll up my sleeves and throw together a cryptographic instruction set extension for RISC-V.

This article is an overview of my design as it stands.  I have a detailed draft specification and the beginnings of an implementation in a fork of Berkeley’s rocket-chip repository.

Possibilities

An on-chip crypto implementation affords a number of possibilities that can improve security.  For one, hardware crypto implementations are able to control many kinds of side-channel attacks much more effectively than software.  Hardware implementations can completely avoid timing, cache, and branch-predictor side-channels.  In addition, my designs allow for fuzzing of physical side-channels through techniques such as pipelining and injection of random “dummy” operations.

In addition to side-channel mitigation, hardware crypto potentially allows for designs which specifically account for insecure memory and disk, keeping all unencrypted key material in the processor core and not allowing it to be exported as plaintext.  This is a key principle in the design of hardware security modules (HSMs), and it would be a welcome feature in a general CPU.

Hardware Crypto Paradigms

There are roughly three dominant paradigms for hardware crypto.  The first is a wholly-separate device connected via a system bus (such as PCI), which implements various functions.  One of the key advantages of this is that the sensitive data can remain on the device, never accessible to the rest of the system (of course, this is also a disadvantage in the case of closed-source hardware, as we can never be certain that the system isn’t weakened or back-doored).  However, this can’t rightly be considered an ISA extension, as it’s wholly-separate.

The other end of the spectrum is occupied by cipher-specific instructions such as Intel’s AES-NI instruction set.  These are often very efficient, as many cryptographic ciphers can be implemented very efficiently in hardware.  However, they don’t do much for protection of sensitive data.  Moreover, writing specific ciphers into the ISA is generally a bad idea: ciphers are sometimes broken, and more often phased out and replaced by newer, better algorithms.  Worse, such a practice can enshrine weak crypto, as is seen in the continuing use of weak and broken crypto like RC4, MD5, SHA1, DES, 3DES, and 1024-bit RSA in many hardware crypto offerings.

Coprocessors are a third possibility; however, a coprocessor still must design its own instruction set, and that design must still cope with the reality of changing cryptographic algorithms.  Moreover, the interface between a general CPU and a coprocessor is complicated and difficult to design well.

Engines

I began by attempting to generalize the instruction-based approach, initially planning for special secure registers and a generalized framework for crypto instructions.  This ended up evolving into a framework I call “engines” which is most similar to the device-based approach, except that it lives in the processor core and is directly integrated into the pipeline.  The engines instruction set is also designed to allow the entire mechanism to be virtualized in an OS, and to allow for any engine to be implemented in software within a kernel.

An engine is essentially a larger, more complex functional unit which is capable of performing a single complex operation or a limited set of them.  In a typical pipeline, an engine looks and feels essentially like a memory unit, and for most intents and purposes can be treated like one.  After an engine has been configured, it is operated through a set of commands, which may supply arguments and may return results.  These behave exactly like load and store instructions in that they may generate faults, and commands yielding results may stall until data is available.

Engines also exist in a number of states, and can be moved between states by a transition instruction.  The uninitialized state represents an engine that is being configured (for example, a crypto engine needs to be supplied its key).  Initialization may perform preprocessing on initialization data, and moves the engine into the ready state (for example, the AES cipher does precomputation of the key schedules).  A ready engine can be started, causing it to enter the running state.  This allows a distinction between engines that are simply prepared, and engines that may be executing random operations continuously to fuzz side-channels.  To facilitate fast context-switching, a pause transition moves a running engine into the paused state (and is ignored in all other states), and the unpause transition restarts a paused engine.  Lastly, engines can be transitioned into a saving state, where their state can be serialized, and an uninitialized engine can be placed in the restoring state, where a saved state can be reloaded.
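
This lifecycle can be summarized as a small state machine.  The sketch below is the kind of model an OS might keep for a software-implemented engine (see the rebind instruction below); the transition codes and the set of states from which saving is legal are my own assumptions, since the draft spec fixes the actual encodings.

    /* Engine lifecycle states and transitions, as described above. */
    typedef enum {
        ENG_UNINITIALIZED, ENG_READY, ENG_RUNNING,
        ENG_PAUSED, ENG_SAVING, ENG_RESTORING
    } eng_state_t;

    typedef enum {
        T_INIT, T_START, T_PAUSE, T_UNPAUSE, T_SAVE, T_RESTORE
    } eng_trans_t;

    /* Returns the new state, or -1 to signal a fault trap on a bad
     * transition.  Which states may enter ENG_SAVING is an assumption. */
    static int eng_transition(eng_state_t s, eng_trans_t t)
    {
        switch (t) {
        case T_INIT:    return s == ENG_UNINITIALIZED ? ENG_READY : -1;
        case T_START:   return s == ENG_READY ? ENG_RUNNING : -1;
        case T_PAUSE:   /* pauses a running engine, ignored elsewhere */
                        return s == ENG_RUNNING ? ENG_PAUSED : (int)s;
        case T_UNPAUSE: return s == ENG_PAUSED ? ENG_RUNNING : -1;
        case T_SAVE:    return s == ENG_READY || s == ENG_PAUSED
                            ? ENG_SAVING : -1;
        case T_RESTORE: return s == ENG_UNINITIALIZED ? ENG_RESTORING : -1;
        }
        return -1;
    }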

Each core has a number of engine resources, which are referenced through engine handle registers.  An acquire instruction attempts to acquire an engine resource of a particular type, storing it to an engine handle register.  The namespace for engine resources is quite large, and can be further extended using CSRs to select a particular namespace.  This allows the engines framework to function as a flexible mechanism for the indefinite future.

Engine Instruction Detail

The following is a description of each instruction in the proposed engine ISA extension.

Engine Management

eng.acquire   eh, code

The acquire instruction attempts to acquire an engine resource of type code, binding it to the engine handle eh if such a resource is available.  If no such resource is available, it generates a fault trap (an OS can possibly use this along with the rebind instruction to implement engines in software).

eng.release   eh

The release instruction releases the engine resource bound to eh.

eng.ehsave   eh, rd

The ehsave instruction saves the binding in eh to the ordinary register rd.  For all hardware engine resources, this is guaranteed to be represented as a 32-bit number with the lowest bit clear.  This convention allows an operating system to represent bindings to software implementations as numbers with the lowest bit set.

eng.rebind   eh, rs

The rebind instruction re-establishes a binding to eh using the data in the ordinary register rs.  If the lowest bit is clear in rs, then the binding is checked against the actual hardware engine resources.  Otherwise, it is taken to refer to a software implementation.
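
The low-bit convention makes saved bindings trivially distinguishable.  A sketch of how an OS might dispatch on it (the function name is illustrative):

    #include <stdint.h>

    /* Per the ehsave/rebind convention above: hardware bindings are
     * 32-bit values with the lowest bit clear; an OS encodes its
     * software implementations as values with the lowest bit set. */
    static int binding_is_software(uint32_t saved_binding)
    {
        return (saved_binding & 1) != 0;
    }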

State Transitions

eng.trans   eh, code

The trans instruction executes the state transition represented by code.  It may generate a fault trap for bad transitions.

Saving and Restoring

eng.savecnt   eh, rd

The savecnt instruction writes into rd the number of state words that need to be saved in order to save the entire state of the engine handle eh.  This can only be executed if the engine resource bound to eh is in the saving state.

eng.save   eh, rd, rs

The save instruction writes the state word for the engine resource bound to eh at the index given in rs into the register rd.  The highest valid index is equal to one less than the value given by the savecnt instruction.  This can only be executed if the engine resource bound to eh is in the saving state.

eng.restore   eh, rs1, rs2

The restore instruction writes the state word in rs2 to the index given in rs1 for the engine handle eh.  The restore instruction must be executed for all indexes corresponding to a particular saved state in strictly ascending order.  This instruction can only be executed if the engine resource bound to eh is in the restoring state.

Command Instructions

The command instructions allow for varying numbers of arguments and results.  All command instructions may stall for a finite amount of time, and may generate faults.  Some command codes may be restricted to certain states.

eng.icmd   eh, code

The icmd instruction executes the imperative command given by code on the engine resource bound to eh.

eng.rcmd   eh, code, rd

The rcmd instruction executes the receive-only command given by code on the engine resource bound to eh.  The result of the command is stored into rd.

eng.rs1cmd   eh, code, rd, rs1

The rs1cmd instruction executes the send-receive command given by code on the engine resource bound to eh.  The argument to the command is given in the rs1 register.  The result of the command is stored into rd.

eng.rs2cmd   eh, code, rd, rs1, rs2

The rs2cmd instruction executes the send-receive command given by code on the engine resource bound to eh.  The arguments to the command are given in the rs1 and rs2 registers.  The result of the command is stored into rd.

eng.s1cmd   eh, code, rs1

The s1cmd instruction executes the send-only command given by code on the engine resource bound to eh.  The argument to the command is given in the rs1 register.

eng.s2cmd   eh, code, rs1, rs2

The s2cmd instruction executes the send-only command given by code on the engine resource bound to eh.  The arguments to the command are given in the rs1 and rs2 registers.

eng.s3cmd   eh, code, rs1, rs2, rs3

The s3cmd instruction executes the send-only command given by code on the engine resource bound to eh.  The arguments to the command are given in the rs1, rs2, and rs3 registers.

Example Crypto Engines

There are several sketches for example crypto engines, which show how this framework can be used for that purpose.

True Random Number Generator

A true random number generator uses a physical process (such as electron or photon polarization, thermal noise, or other mechanisms) to generate random bits, which it accumulates in a ring buffer.  The generator is started with the start transition, and randomness can be read off with a receive-only command that blocks until enough randomness is available.
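
As a usage sketch, the flow might look like this through hypothetical compiler intrinsics wrapping the instructions above.  Every name below is an assumption; only the instruction semantics come from the draft.

    #include <stdint.h>

    /* Assumed intrinsic declarations; a toolchain supporting the
     * extension would provide the real ones. */
    typedef uint32_t eng_handle_t;
    enum { ENG_TRNG = 1 };                    /* illustrative resource code */
    enum { TRANS_INIT = 0, TRANS_START = 1 }; /* illustrative transition codes */
    enum { CMD_READ64 = 1 };                  /* illustrative command code */

    extern eng_handle_t eng_acquire(uint32_t code);            /* eng.acquire */
    extern void eng_trans(eng_handle_t eh, uint32_t code);     /* eng.trans */
    extern uint64_t eng_rcmd(eng_handle_t eh, uint32_t code);  /* eng.rcmd */
    extern void eng_release(eng_handle_t eh);                  /* eng.release */

    uint64_t trng_read64(void)
    {
        eng_handle_t eh = eng_acquire(ENG_TRNG); /* faults if none available */

        eng_trans(eh, TRANS_INIT);   /* no initialization data needed */
        eng_trans(eh, TRANS_START);  /* begin accumulating entropy */

        /* Blocks (stalls) until enough randomness is in the ring buffer. */
        uint64_t r = eng_rcmd(eh, CMD_READ64);

        eng_release(eh);
        return r;
    }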

Symmetric Cipher Encryption/Decryption Pipeline

Symmetric cipher encryption and decryption demonstrates the side-channel fuzzing capabilities of hardware engines.  The key material is loaded during the uninitialized state, and initialization does whatever preprocessing is necessary.  When the engine is in the running state, it constantly generates “dummy” operations using pseudorandomly-generated keys, IVs, and data which are discarded from the pipeline upon completion.  The implementation uses a pipeline to allow very high throughput of operations.  Data is added to the pipeline with a two-argument send command, and read off with a receive command.  Send and receive commands can generate deadlock faults if there is insufficient buffer space or data available.

Elliptic-Curve Point Multiplier

Elliptic curve point multiplication for a restricted set of elliptic curves can be implemented in a fashion similar to the symmetric cipher pipeline, except that for an elliptic curve multiplier, the pipeline typically cannot be completely unrolled.  Finite field arithmetic modulo pseudo-mersenne primes can be implemented as a configurable circuit.  Point multiplication can then be implemented using a ladder algorithm (such as the Montgomery ladder).  The same random “dummy” operation injection suggested for symmetric ciphers can also be used here to fuzz side-channels.
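
For reference, the ladder structure is sketched below in C over stubbed-out point operations.  The point type and primitives are placeholders; a hardware implementation would also replace the data-dependent branch with a constant-time conditional swap.  It assumes the scalar’s top bit is set (as with clamped Curve25519 scalars).

    #include <stdint.h>

    typedef struct { uint64_t x[8], z[8]; } point_t;  /* placeholder */

    extern point_t point_add(point_t p, point_t q);   /* assumed primitive */
    extern point_t point_double(point_t p);           /* assumed primitive */

    /* Montgomery ladder: one add and one double per scalar bit,
     * independent of the bit's value.  Assumes bit nbits-1 is set. */
    point_t scalar_mult(point_t p, const uint8_t *scalar, int nbits)
    {
        point_t r0 = p;
        point_t r1 = point_double(p);

        for (int i = nbits - 2; i >= 0; i--) {
            int bit = (scalar[i / 8] >> (i % 8)) & 1;
            if (bit) {
                r0 = point_add(r0, r1);
                r1 = point_double(r1);
            } else {
                r1 = point_add(r0, r1);
                r0 = point_double(r0);
            }
        }
        return r0;  /* r0 = scalar * p */
    }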

Hash/MAC Algorithm

Cryptographic hashes and message authentication codes transform an arbitrary amount of data into a fixed-size code.  This is straightforward to implement in hardware, although the algorithms in question generally cannot be pipelined.  In order to fuzz side-channels, we simply maintain some number of “dummy” internal states and keys in combination with the “real” one, and feed data to them at the same time as real data.

Conclusions

My original intent was to provide an extensible, general mechanism for supporting crypto on RISC-V hardware.  The engines extension ended up becoming a much more general mechanism, however.  In its most general form, this could even be considered an entire I/O instruction set.  Regardless, the examples I have given clearly demonstrate that it can serve its original purpose very well.