Spicy Braised Bacon-Wrapped Center-Round

About a month ago, an associate hosted his annual February “hot-foods” party.  I resolved to actually cook something for it this year, and spent the week before developing concepts for a recipe.

The Concept

Some background on this party: it’s packed full of nerds, and if there’s one thing I’ve learned about cooking for a crowd like that, it’s that you can’t go wrong with bacon.  I started imagining a 3-4 pound piece of beef, rubbed, then wrapped in bacon.  I settled on a rub of salt, garlic, cayenne, paprika, and chili powder.  I ended up making the chili powder myself by cutting up dried chiles (I have a really good knife set and a large mortar and pestle).  I had considered cilantro, but left it out, as it threw off the balance.

This would be fantastic as a roast, but roasting is a precise art that doesn’t lend itself to packing up the results, driving 20 miles, and then leaving the dish out all day.  Braising, I’ve found, is a process much more tolerant of that kind of treatment.  Thus, I decided on a braise.

The Sauce

The liquid is arguably the most important element of a good braise.  Braising is all about assembling the right elements, then letting them melt down into a nice, rich sauce while the meat turns beautifully tender and juicy, to the point that you can pull it apart with a fork.

So I knew I had to get the sauce right.  Working out from the rub ingredients, I considered possible bases.  Something made from tomato paste, whiskey, and vinegar (a remnant of my North Carolinian origins: vinegar-based barbecue sauce) came to mind first.  Another idea came to me while eating at my favorite ramen joint: soy sauce, chili oil, and white vinegar (a mixture I make as a dipping sauce for gyoza).  Then I got the idea of combining the two.

This seemed challenging, but making a sauce is fundamentally no different from mixing a cocktail: you have to blend flavor spectrums together in a way that balances and complements.

The combination that ended up working was an even blend of organic soy sauce (this has a different flavor from pasteurized soy sauce), apple cider vinegar, and Rittenhouse Rye, with about one tablespoon of tomato paste per half cup of liquid.

The Preparation

I started out by making the rub: I ground up salt, pepper, and finely minced dried chiles and chipotles (about a 2:1 ratio by volume of regular to chipotle), then added about 5-6 cloves of garlic and about as much paprika as chipotle.  Note that if you’re using fresh spices, you really have to adjust the flavor yourself; potency varies too much by batch and by plant.

I debated adding a little sugar to this, but decided against it.  For people who like sweets more than I do and aren’t as fond of salt and vinegar, a bit of sugar might work well.

Next, I applied some of this rub to the meat.  When applying a salt-based rub, put the meat in a pan with a generous amount of the rub and keep rubbing it in for about 30 minutes.  Most of it won’t stick at first, but if you keep at it, eventually all of it will.

I was originally going to wrap it in bacon and let it sit overnight, but a coworker gave me the idea of soaking the bacon in rye instead.  So I put the roast in a container in the refrigerator overnight and put the bacon in a separate container with rye.

After about 18 hours, I took the roast out, seared it, wrapped it in the bacon, and seared it again.  I made a mistake here: I should have deglazed the pan between sears so that I could sear the bacon until it started to get nice and crispy.  Instead, the caramelized bits left in the pan from the first sear started to burn, and I had to stop early.

I had doubts about the double-sear, but it turns out that it is possible to wrap a seared roast in bacon without burning yourself, if you’re careful.

After this, I deglazed the pan (add a bit of water over low heat and let it dislodge everything; save all the liquid for later), then sautéed one leek, one sweet onion, two dried chiles, one dried chipotle pepper, some mushrooms, some of the leftover rub, and some marrow bones until everything was good and browned.  I then added back the liquid from deglazing the pan, along with the braising liquid I’d prepared, and let it boil down some.

After that, I put the roast in with everything, put the lid on, and let it braise at 300°F for about 3 hours.  I used a shallow pan that had just enough room for the roast to fit with the lid on, which is what you want: for a braise, the less open space inside the container, the better.

I was going for a good slow cook, so I chose a lower heat and a longer time.  I took it out of the oven every 30 minutes or so to turn the meat over and spoon some of the liquid onto it.  Don’t overdo it, though: braising is all about moist heat, so you don’t want to open the lid too often.

Because of the nature of braising, it’s hard to overcook, but I probably could have gone with a 2 or 2 1/2 hour cook time just as well.

At the end of the braise, I had to skim off quite a bit of oil.  This isn’t surprising, as bacon and marrow bones render a lot of fat as they cook.  I had no use for it here, but in a larger cooking process it could have been reused in another dish that called for oil, as it had soaked up quite a bit of chili and garlic flavor.

The Results

The results were quite pleasing.  After braising for about 3 hours, the sauce had mellowed out quite a bit into a lovely tangy mixture with the “slow-burn” effect one gets from cayenne pepper.  The meat was nothing short of amazing.

I had a 3 1/2 pound roast; it lasted all of 30 minutes at the party, and people were literally scraping the pan to get every last bit of the sauce.  Someone had made cornbread, which went quite well with the sauce.

All in all, this was a definite success.

Recipe

This recipe is reconstructed after the fact, but it should be reasonably accurate:

  • 1 leek
  • 1 sweet onion
  • 2 cups mixed aromatic mushrooms
  • 3 dried chipotle peppers
  • 4 dried Mexican red chiles
  • 5-6 cloves garlic
  • 1/2 cup kosher salt
  • 1/2 cup mixed peppercorns
  • 2 tsp paprika
  • 1/2 cup organic soy sauce
  • 1/2 cup apple cider vinegar
  • Rittenhouse Rye (1/2 cup for the braising liquid, plus enough to soak the bacon)
  • 3 tbsp tomato paste
  • 3-4 lb center-round beef roast
  • Thick-cut bacon
  • Marrow bones

Combine the salt, pepper, 2 chiles, and 1 chipotle pepper in the mortar and grind.  Mince the garlic and add it along with the paprika, then stir until the garlic dries up.

Apply the rub to the roast, then cover it.  Also add a small amount of rub to a separate container along with the bacon and enough rye to soak it.  Cover both and refrigerate for 12-24 hours.

Sear the roast in a pan with a small amount of olive oil, then deglaze the pan and set the liquid aside.  Carefully wrap the roast in the bacon, tie it, and then sear it again (no oil this time) until the bacon is crispy.  Set the seared meat aside.

Combine the soy sauce, vinegar, rye, tomato paste, and 2-4 tbsp of leftover rub.  When the flavor is right, add the liquid from deglazing the pan earlier.

Chop up the leek, the onion, the remaining peppers, and the mushrooms, then sauté them in a pan along with the marrow bones and a small amount of olive oil.  Sauté until golden brown, then add the liquid and cook it down until it starts to thicken.

Add the roast, spoon some of the liquid onto the top, cover, and place in an oven at 300°F for 2-2 1/2 hours.  Turn the roast and the bones over every 30-45 minutes and spoon some liquid on top of them before covering and placing back in the oven.  For the last 10-15 minutes, remove the lid and return the uncovered container to the oven.

Librem 13 FreeBSD Port

When the Librem laptops were announced last year, I was quite excited, and I ordered both the 15- and 13-inch models.  My 13-inch model arrived last week, and I have begun the process of porting FreeBSD to it.

I have to say, I am very excited to finally have a laptop from a fully-cooperative manufacturer, where I can get my hands on all the hardware specs and possibly even upstream fixes.  This is a very welcome boon after a decade of having to deal with flaky BIOS issues, black-box hardware, and other difficulties.

The Laptop

The physical laptop itself is very solid and rather light.  It doesn’t creak, and the lid stays put even better than a MacBook’s.  My only complaints are that the camera/microphone and wireless kill-switches are unlabeled, and that Ethernet cables tend to fall out of the drop-down port.  Aside from those minor issues, I’m quite pleased with the physical unit.

The kill-switches are easy to miss; they are on the hinge under the screen.

My only other regret is that the Dvorak keyboard option became available after I’d ordered mine.  Oh well; maybe I can sweet-talk them into swapping it for me at a conference 😉

It was also very nice to unpack a laptop without implicitly accepting a Microsoft license agreement by opening the box!

BIOS and FreeBSD Installation

The first thing I do when I get a new laptop is poke around in the BIOS menu (no photos yet).  The Librem has a coreboot port, but I decided to get FreeBSD installed and check the system out a bit before diving into the art of flashing my BIOS, so I was looking at the proprietary American Megatrends BIOS menu.  Even so, I was pleased by the features it presented, most notably the ability to set up custom signing keys.  I am going to have to do some work on a signed FreeBSD boot and loader chain.

My FreeBSD installation went off without any serious issues.  I installed FreeBSD 11 from a bootable memstick image, setting up a pure-ZFS system.  I had ordered a 1TB spindle drive and a 250GB SSD.  I reserved 48GB of the SSD for swap (total of 64GB memory).  I then set up a ZFS pool with the spindle drive as the main storage, a 16GB intent log on the SSD, and the rest of the SSD as an L2ARC cache device.  (I will eventually set up the ZFS volume to make all writes synchronous, so as to really use the intent log.)

I realize some might consider ZFS on a laptop to be overkill; however, I have found it to be an extremely versatile and stable filesystem.  It is incredibly crash-resistant and corruption-resistant, and its snapshotting is invaluable for risky updates.  The transparent compression features are useful as well, and can effectively increase your available space by a sizable amount.  Lastly, I have used the ability to serialize and deserialize the entire filesystem more than once.
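
The pool setup described above amounts to roughly the following (a sketch from memory; the device names and partition indices are illustrative, not a record of my exact invocation):

    # Pool on the spindle drive, with the SSD split between a
    # dedicated intent log and an L2ARC cache device:
    zpool create tank /dev/ada0p3 log /dev/ada1p2 cache /dev/ada1p3

    # Transparent compression, as mentioned above:
    zfs set compression=lz4 tank

    # Later, to force all writes through the intent log:
    zfs set sync=always tank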

I did encounter one issue in this process, described more fully below: a sporadic boot-hang and USB timeout that I now strongly suspect to be a timing bug in the FreeBSD boot process.

FreeBSD did handle the hardware kill-switches rather well (I’ve heard reports of the Linux kernel panicking when they’re flipped).  Flipping them off causes some kernel messages about timeouts, but the bus re-initializes upon flipping them back on.  If you boot with them off, then flip them on, the kernel detects the hardware properly.

FreeBSD Setup

The first thing I do on a new FreeBSD system is grab the source tree and build world, followed by kernel customization.  I noticed that building Clang has gotten pretty slow these days (which doesn’t bother me too much; I’d rather the compiler have a lot of optimization machinery than not).
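
For anyone following along, the standard sequence looks roughly like this (assuming the source tree lives in /usr/src; the kernel config name LIBREM13 is a placeholder of mine, not a convention):

    cd /usr/src
    make -j4 buildworld
    make -j4 buildkernel KERNCONF=LIBREM13
    make installkernel KERNCONF=LIBREM13
    # reboot onto the new kernel, then:
    make installworld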

After that, I grabbed the latest ports tree and started building the usual suspects to test the system (and to get to where I could test X11).  I also grabbed Jean-Sébastien’s Intel graphics patch to see if that driver worked with the Broadwell card.  Sadly, it didn’t.
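
The ports workflow, roughly (x11/xorg shown as one example of the “usual suspects”):

    portsnap fetch extract    # first run; 'portsnap fetch update' thereafter
    cd /usr/ports/x11/xorg
    make install clean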

Working Hardware

Most of the hardware Just Works™, which is nice.  I was particularly pleased that all the fn-key combinations work out-of-the-box.  I have never seen that happen with any other vendor.

The following is a list of the working hardware (a loader.conf sketch for the relevant modules follows the list):

  • The EFI boot/loader
  • SD card reader (mmc driver)
  • Realtek Ethernet (re driver)
  • System management bus and CPU frequency/temperature (smb, smbus, ichsmb, coretemp, cpufreq drivers)
  • Intel High-Def Audio (snd_hda driver), though I haven’t tested the microphone yet.  Also, plugging into the headphone jack properly switches to headphones from the speakers (I’ve seen that not work).
  • Hard Drive and SSD (obviously)
  • USB ports
  • Bluetooth
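
Many of these drivers are either in the GENERIC kernel or load on demand; for a trimmed custom kernel, pulling them in explicitly from /boot/loader.conf looks something like this sketch (module names from memory; availability varies by kernel config and FreeBSD version):

    # /boot/loader.conf
    coretemp_load="YES"
    cpufreq_load="YES"
    ichsmb_load="YES"
    mmc_load="YES"
    mmcsd_load="YES"
    snd_hda_load="YES"
    if_re_load="YES"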

Unfortunately, the Intel accelerated graphics drivers don’t support the Broadwell cards.  This will come eventually, but FreeBSD is in the midst of a graphics framework overhaul to better track the Linux drivers.  Looks like it’s going to be VESA for now.

Current Issues

There are currently some issues, which I will be working to fix:

  • The Atheros 9462 card is detected, but the radio doesn’t seem to be working.  The pciconf tool reports a few errors, and scans seem to run, but don’t pick up anything.  I have confirmed this is not a hardware issue by booting with a Kali Linux memstick.  (The commands I’ve been using to poke at it are sketched after this list.)
  • Blank screen on resume.  My initial investigations reveal some ACPI execution errors during resume, which may be related.  I need to dig into the kernel source and add some logging to see what’s going on.
  • VESA weirdness with X11.  The VESA X driver mostly works, but if you switch back to the terminal, a couple of pixels around the border of the screen stay the way they looked in X.  Also, when you shut down X, the screen freezes and the logs indicate some kind of timeout.  Both of these seem to implicate the VGA BIOS.
  • Sporadic boot-hang and USB timeouts.  These seem to be specific to a kernel configuration, and go away when changing the verbosity level.  This strongly indicates a timing-related bug in the kernel initialization procedures.
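
For reference, here is roughly how I have been poking at the wireless card; this is the standard FreeBSD wlan setup, nothing Librem-specific:

    pciconf -lv | grep -B1 -A3 ath      # confirm the ath device attached
    ifconfig wlan0 create wlandev ath0  # create a wlan interface on it
    ifconfig wlan0 up scan              # bring it up and scan; this is the
                                        # step that comes back empty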

Of these issues, the wireless card and blank screen are the most critical, followed by the X11 weirdness.  I will be in contact with the Librem developers should my initial attempts to fix these issues prove unsuccessful.

Following that, I want to see if there’s a way to make the kill-switches behave more gracefully; perhaps the USB driver could be convinced to treat those devices as hot-pluggable, or else to treat the timeouts as disconnects.

In any case, stay tuned for updates…

The Complex Nature of the Security Problem

This article is an elaboration on ideas I originally developed in a post to the project blog for my pet programming language project.  The ideas remain as valid now (if not more so) as they did eight months ago when I wrote the original piece.

The year 2015 saw a great deal of publicity surrounding a number of high-profile computer security incidents.  While this trend has been ongoing for some time, the past year marked the point at which the problem entered the public consciousness, becoming a national news item that is likely to be a key issue in the coming elections and beyond.

“The Security Problem”, as I have taken to calling it, is not a simple issue, and it does not have a simple solution.  It is a complex, multi-faceted problem with a number of root causes, and it cannot be solved without adequately addressing each of those causes in turn.  It is also a crucial issue that must be solved in order for technological civilization to continue its forward progress and not slip into stagnation or regression.  If there is a single message I would want to convey on the subject, it is this: the security problem can only be adequately addressed by a multitude of different approaches working in concert, each addressing an aspect of the problem.

Trust: The Critical Element

In late September, I did a “ride-along” of a training program for newly-hired security consultants.  Just before leaving, I spoke briefly to the group, encouraging them to reach out to us and collaborate.  My final words, however, were broader in scope: “I think every era in history has its critical problems that civilization has to solve in order to keep moving forward, and I think the security problem is one of those problems for our era.”

Why is this problem so important, and why would its existence have the potential to block forward progress?  The answer is trust.  Trust (specifically, the ability to trust people about whom we know almost nothing and whom, indeed, we may never meet) is arguably the critical element that allows civilization to exist at all.  Consider what might happen if that kind of trust did not exist: we would be unable to create and sustain basic institutions such as governments, hospitals, markets, banks, and public transportation.

Technological civilization requires a much higher degree of trust.  Consider, for example, the amount of trust that goes into using something as simple as checking your bank account on your phone.  At a very cursory inspection, you trust the developers who wrote the app that allows you to access your account, the designers of the phone, the hardware manufacturers, the wireless carrier and their backbone providers, the bank’s server software and their system administrators, the third-party vendors that supplied the operating system and database software, the scientists who designed the crypto protecting your transactions and the standards organizations who codified it, the vendors who supplied the networking hardware, and this is just a small portion.  You quite literally trust thousands of technologies and millions of people that you will almost certainly never meet, just to do the simplest of tasks.

The benefits of this kind of trust are clear: the global internet and the growth of computing devices have dramatically increased efficiency and productivity in almost every aspect of life.  However, this trust was not automatic.  It took a long time and a great deal of effort to build.  Moreover, this kind of trust can be lost.  One of the major hurdles for the development of electronic commerce, for example, was the perception that online transactions were inherently insecure.

This kind of progress is not permanent, however; if our technological foundations prove themselves unworthy of this level of trust, then we can expect to see stymied progress or, in the worst case, regression.

The Many Aspects of the Security Problem

As with most problems of this scope and nature, the security problem does not have a single root cause.  It is the product of many complex, interacting issues, and its solution will therefore necessarily involve committed efforts on multiple fronts and multiple complementary approaches.  There is no simple cause, and no “magic bullet” solution.

The contributing factors to the security problem range from highly technical (with many aspects in that domain), to logistical, to policy issues, to educational and social.  In fact, a complete characterization of the problem could very well be the subject of a graduate thesis; the exposition I give here is therefore only intended as a brief survey of the broad areas.

Technological Factors

As the security problem concerns computer security (I have dutifully avoided gratuitous use of the phrase “cyber”), it comes as no surprise that many of the contributing factors to the problem are technological in nature.  However, even within the scope of technological factors, we see a wide variety of specific issues.

Risky Languages, Tools, and APIs

Inherently dangerous or risky programming language or API features are one of the most common factors that contribute to vulnerabilities.  Languages that lack memory safety can lead to buffer overruns and other such errors (which are among the most common exploits in systems), and untyped languages admit a much larger class of errors, many of which lead to vulnerabilities like injection attacks.  Additionally, many APIs are improperly designed and lead to vulnerabilities, or are designed in such a way that safe use is needlessly difficult.  Lastly, many tools can be difficult to use in a secure manner.
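
To make the injection risk concrete, here is a minimal shell-flavored sketch (purely illustrative; the function names and file path are made up, and the same pattern underlies SQL injection and its relatives):

    #!/bin/sh
    # Unsafe: the argument is spliced into a command string and re-parsed,
    # so an input like 'x; rm -rf $HOME' executes a second command.
    lookup_unsafe() {
        eval "grep $1 /var/db/users"
    }

    # Safer: the argument is passed through as a single word and never
    # re-parsed, so shell metacharacters in it have no effect.
    lookup_safe() {
        grep -e "$1" /var/db/users
    }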

We have made some headway in this area.  Many modern frameworks are designed in such a way that they are “safe by default”, requiring no special configuration to satisfy many safety concerns and requiring the necessary configuration to address the others.  Programming language research over the past 30 years has produced many advanced type systems that can make stronger guarantees, and we are starting to see these enter common use through languages like Rust.  My current employer, Codiscope, is working to bring advanced program analysis research into the static program analysis space.  Initiatives like the NSF DeepSpec expedition are working to develop practical software verification methods.

However, we still have a way to go here.  No mature engineering discipline relies solely on testing: civil engineering, for example, accurately predicts the tolerances of a bridge long before it is built.  Software engineering has yet to develop methods with this level of sophistication.

Configuration Management

Modern systems involve a dizzying array of configuration options.  In multi-level architectures, there are many different components interacting in order to implement each bit of functionality, and all of these need to be configured properly in order to operate securely.

Misconfigurations are a very frequent cause of vulnerabilities.  Enterprise software components can have hundreds of configuration options per component, and we often string dozens of components together.  In this environment, it becomes very easy to miss a configuration option or accidentally fail to account for a particular case.  The fact that there are so many possible configurations, most of which are invalid, further exacerbates the problem.

Crypto has also tended to suffer from usability problems.  Crypto is particularly sensitive to misconfigurations: a single weak link undermines the security of the entire system.  However, it can be quite difficult to develop and maintain hardened crypto configurations over time, even for the technologically adept.  The difficulty of setting up software like GPG for non-technical users has been the subject of actual research papers.  I can personally attest to this as well, having guided multiple non-technical people through the setup.
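
To give a flavor of the problem, even the minimal happy-path GPG setup involves something like the following, before any discussion of key sizes, expiration dates, subkeys, or revocation certificates (a sketch; the e-mail address and KEYID are placeholders, and details vary by GnuPG version):

    gpg --gen-key                            # interactive: key type, size, expiry, passphrase
    gpg --armor --export alice@example.com   # export the public key to share
    gpg --keyserver pgp.mit.edu --send-keys KEYID   # publish it (KEYID from the previous step)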

This problem can be addressed, however.  Configuration management tools allow configurations to be set up from a central location, and managed automatically by various services (CFEngine, Puppet, Chef, Ansible, etc.).  Looking farther afield, we can begin to imagine tools that construct configurations for each component from a master configuration, and to apply type-like notions to the task of identifying invalid configurations.  These suggestions are just the beginning; configuration management is a serious technical challenge, and can and should be the focus of serious technical work.

Legacy Systems

Legacy systems have long been a source of pain for technologists.  They represent a kind of debt that is often too expensive to pay off in full, but which exacts a recurring tax on resources in the form of legacy costs (compatibility issues, bad performance, blocked upgrades, unusable systems, and so on).  To those most directly involved in the development of technology, legacy systems tend to be a source of chronic pain; from the standpoint of budgets and limited resources, however, they are often a kind of pain to be managed rather than cured, as wholesale replacement is far too expensive and risky to consider.

In the context of security, however, the picture is often different.  These kinds of systems are often extremely vulnerable, having been designed in a time when networked systems were rare or nonexistent.  In this context, they are more akin to rotten timbers at the core of a building.  Yes, they are expensive and time-consuming to replace, but the risk of not replacing them is far worse.

The real danger is that the infrastructure where vulnerable legacy systems are most prevalent (power grids, industrial facilities, mass transit, and the like) is precisely where a breach can do catastrophic damage.  We have already seen a real-world example of this: the Stuxnet malware was employed to destroy uranium-enrichment centrifuges.

Replacing these legacy systems with more secure implementations is a long and expensive proposition, and doing it in a way that minimizes costs is a very challenging technological problem.  However, this is not a problem that can be neglected.

Cultural and Policy Factors

Though computer security is technological in nature, its causes and solutions are not limited solely to technological issues.  Policy, cultural, and educational factors also affect the problem, and must be a part of the solution.

Policy

The most obvious non-technical influence on the security problem is policy.  The various policy debates that have sprung up in the past years are evidence of this; however, the problem goes much deeper than these debates.

For starters, we are currently in the midst of a number of policy debates regarding strong encryption and how we as a society deal with the fact that such a technology exists.  I make my stance on the matter quite clear: I am an unwavering advocate of unescrowed, uncompromised strong encryption as a fundamental right (yes, there are possible abuses of the technology, but the same is true of such things as due process and freedom of speech).  Despite my hard-line pro-crypto stance, I can understand how those who don’t understand the technology might find the opposing position compelling.  Things like golden keys and abuse-proof backdoors certainly sound nice.  However, the real effect of pursuing such policies would be to fundamentally compromise systems and infrastructure within the US and to turn defending against data breaches and cyberattacks into an impossible problem.  In the long run, this erodes the kind of trust in technological infrastructure of which I spoke earlier and bars forward progress, leaving us to be outclassed in the international marketplace.

In a broader context, we face a problem here that requires rethinking our policy process.  The security problem is a complex technological issue (too complex for even the most astute and deliberative legislator to develop true expertise on through part-time study), but one where the effects of uninformed policy can be disastrous.  In the context of public debate, it does not lend itself to two-sided thinking or simple solutions, and attempting to force it into such a model loses too much information to be effective.

Additionally, the problem goes deeper than issues like encryption, backdoors, and dragnet surveillance.  Much of the US infrastructure runs on vulnerable legacy systems, as I mentioned earlier, and replacing these systems with more secure, modern software is an expensive and time-consuming task.  Moreover, the need to invest in our infrastructure this way barely registers in public debate, if at all.  Yet doing so is essential to fixing one of the most significant sources of vulnerabilities.

Education

Education, or the lack thereof, also plays a key role in the security problem.  Even top-level computer science curricula fail to teach students how to think securely and develop secure applications, or even to impress upon students the importance of doing so.  This is understandable: even a decade ago, the threat level to most applications was nowhere near what it is today.  The world has changed dramatically in this regard in a rather short span of time.  The proliferation of mobile devices and connectedness, combined with a tremendous upturn in the number and sophistication of attacks launched against systems, has led to a very different environment than what existed even ten years ago (when I was finishing my undergraduate education).

College curricula are necessarily conservative; knowledge is expected to prove its worth and go through a process of refinement, with its rough edges sanded off, before it reaches the point where it can be taught in an undergraduate curriculum.  By contrast, much of the knowledge of how to avoid building vulnerable systems is new, volatile, and thorny: not the sort of thing traditional academia likes to mix into a curriculum, especially in a mandatory course.

Such a change is necessary, however, and this means that educational institutions must develop new processes for effectively educating people about topics such as these.

Culture

While it is critical to have infrastructure and systems built on sound technological approaches, it is also true that a significant number of successful attacks on large enterprises and individuals alike make primary use of human factors and social engineering.  This is exacerbated by the fact that we, culturally speaking, are quite naive about security.  There are security-conscious individuals, of course, but most people are naive to the point that an attacker can typically rely on social engineering with a high success rate in all but the most secure of settings.

Moreover, this naivety affects everything else, ranging from policy decisions to what priorities are deemed most important in product development.  The lack of public understanding of computer security allows bad policy such as backdoors to be taken seriously, and allows insecure and invasive products to thrive on marketing claims that simply don’t reflect reality (SnapChat remains one of the worst offenders in this regard, in my opinion).

The root cause behind this is that cultures adapt even more slowly than the other factors I’ve mentioned, and our culture has yet to develop effective ways of thinking about these issues.  But cultures do adapt; we all remember sayings like “look both ways” and “stop, drop, and roll” from our childhoods, both of which teach simple but effective ways of managing basic risks that arise in a technological society.  This sort of adaptation also responds to need: during my own youth and adolescence, the danger of HIV drove a number of significant cultural changes in a relatively short period of time, and those changes proved effective in curbing the epidemic.  While the issues surrounding the security problem represent a very different sort of danger, they are still pressing issues that require some cultural adaptation to address.  A key step in addressing the cultural aspects of the security problem is developing similar kinds of understanding and awareness, and promoting behavior changes that help reduce risk.

Conclusion

I have presented only a portion of the issues that make up what I call the “computer security problem”.  These issues are varied, ranging from deep technological issues obviously focused on security to cultural and policy issues.  There is not one single root cause to the problem, and as a result, there is no one single “silver bullet” that can solve it.

Moreover, if the problem is this varied and complex, then we can expect the solutions to each aspect of the problem to likewise require multiple different approaches coming from different angles and reflecting different ways of thinking.  My own work, for example, focuses on the language and tooling issue, coming mostly from the direction of building tools to write better software.  However, there are other approaches to this same problem, such as sandboxing and changing the fundamental execution model.  All of these angles deserve consideration, and the eventual resolution to that part of the security problem will likely incorporate developments from each angle of approach.

If there is a final takeaway from this, it is that the problem is large and complex enough that it cannot be solved by the efforts or approach of a single person or team.  It is a monumental challenge requiring the combined tireless efforts of a generation’s worth of minds and at least a generation’s worth of time.