
Spectre, Meltdown, and Flexible Scaleout

  1. GPUs, HPC, and PCI Switching
  2. Spectre, Meltdown, and Flexible Scaleout
  3. GPUs and Composable Computing
  4. Openness and Composable Systems
This post is in response to Liqid CEO Jay Breakstone’s recent post, What’s Next for Infrastructure in a Post-Meltdown Reality? Make sure to check it out and join in the conversation using #LiqidHPC on Twitter. 

The recent Meltdown and Spectre attacks illustrate the problematic nature of modern computing systems. While the earlier Rowhammer attack allowed one process to attack another process running in a virtual environment on the same physical processor, Meltdown and Spectre are of a completely different class, enabling a process to read large amounts of information out of another process's memory space.

A Speculative Tradeoff

This is all caused, as has been amply documented elsewhere, by speculative execution. Speculative execution, in turn, is an optimization designed to improve the speed at which a processor can execute a particular piece of software: rather than waiting for a branch condition to resolve, the processor executes instructions beyond the branch before it knows whether they will actually be needed. The processor consumes more energy executing these speculative instructions, effectively trading power for time.
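
As a concrete illustration (a minimal sketch, not code from the original post), the pattern below is the bounds-checked array access commonly used to explain Spectre variant 1. While the processor waits for array_size to arrive from memory, it may predict the branch and speculatively execute the dependent loads anyway:

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative only: the classic Spectre variant 1 gadget shape. */
uint8_t array1[16];
uint8_t array2[256 * 512];
size_t  array_size = 16;

uint8_t victim_function(size_t x)
{
    /* The CPU may predict this branch as taken and run the loads below
     * before array_size has even arrived from memory. For an out-of-bounds
     * x, the speculative load of array1[x] reads data the code should never
     * see, and the second load leaves a cache footprint an attacker can
     * later measure. */
    if (x < array_size) {
        return array2[array1[x] * 512];
    }
    return 0;
}
```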


But it turns out the processor is not only trading power for time; it is also trading security for time. These two attacks, in fact, starkly illustrate the tradeoff between security and performance. Rarely is the tradeoff put on display more clearly. The simple answer, stopping the processor from speculatively executing branches, also causes a performance hit of up to 25%. Other solutions have been implemented in the processor space as well, but as an engineering exercise, it is useful to consider some other way to solve this problem. Assume, for a moment, that there will always be some kind of vulnerability like Rowhammer, Meltdown, or Spectre. What kinds of solutions could be deployed that would allow processors to perform speculative execution, and yet not expose the system to these kinds of interprocess vulnerabilities?
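
To make the cost concrete, here is one hedged sketch of what "stop speculating at this point" can look like on x86 (an assumption on my part; the original post names no specific mitigation). A fence after the bounds check keeps the dependent loads from running until the comparison has actually resolved, and that serialization is exactly where the performance penalty comes from:

```c
#include <stddef.h>
#include <stdint.h>

extern uint8_t array1[16];
extern uint8_t array2[256 * 512];
extern size_t  array_size;

uint8_t victim_function_fenced(size_t x)
{
    if (x < array_size) {
        /* x86-specific speculation barrier: the lfence forces the CPU to
         * wait for the bounds check to resolve before issuing the loads
         * below. Safe, but every fenced branch pays the stall. */
        __asm__ volatile("lfence" ::: "memory");
        return array2[array1[x] * 512];
    }
    return 0;
}
```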

Scale Up vs. Scale Out

A good place to start might be with the way processing is designed and deployed. The original counter to the mainframe, and then the mini, was to scale out rather than to scale up. Rather than increasing the processing power in a single large computer, as a scale up system does, scale out designs split the load across multiple smaller processors. This allows capacity to be scaled more flexibly over time; processors can be added to or removed from the system, or from particular jobs, as needed. Over time, however, it can be argued that the scale up model has made a bit of a comeback. Modern data centers have moved from single core processors to dual core processors, to four core processors, and on to multiple processors on a single board. The host is scaled up by scaling out inside the host itself.

At the same time, processor speeds continue to increase, and increasingly complex optimizations are used to allow a single processor core to support an ever greater number of loads. Carrying multiple cores in a single die, adding optimizations to trade power against speed of execution, and carrying multiple dies on a single system board, all introduce new interaction surfaces between the various components. These interaction surfaces are not visible to the end user; they are abstracted into “the host,” or “the server.”

Scale Out Isolation

This entire process could be reversed. It seems possible to design a system that follows scale out ideas more strictly. Such a system might have memory separated from the processor, single processors on each board, and possibly even more processor packages, each with a lower core count. The result would be the same in terms of processing power. The principal tradeoff might be using more power for support chipsets, or perhaps more physical real estate.

On the other side, however, such a system would present itself as a larger group of smaller resources, with each resource more fully isolated from the others. In such a system, it might be possible to isolate related jobs onto single processor sets, so the applications that make up one job do not run on the same processor as the applications that make up another job. In this environment, the applications sharing a processor might have fewer reasons to steal information from one another.
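
As a rough sketch of what "one job per processor set" can look like from the operating system side (assuming Linux; the original post does not prescribe any particular mechanism), the snippet below pins a job's processes onto a dedicated group of cores so that applications from different jobs never share a core. The core numbers are illustrative placeholders:

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

/* Restrict the calling process (and any children it forks) to a small,
 * dedicated set of cores reserved for one job. */
static int pin_job_to_cores(const int *cores, int ncores)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    for (int i = 0; i < ncores; i++)
        CPU_SET(cores[i], &set);

    /* pid 0 means "the calling process" */
    return sched_setaffinity(0, sizeof(set), &set);
}

int main(void)
{
    int job_cores[] = { 2, 3 };   /* cores reserved for this job (illustrative) */
    if (pin_job_to_cores(job_cores, 2) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("job pinned to its own cores, pid %d\n", (int)getpid());
    return 0;
}
```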

Reducing the number of cores on a single die, and the number of dies in a single server, in other words, can help reduce attack surfaces by reducing interaction surfaces. At the same time, scaling out across smaller resources might make it easier to position workloads in a way that reduces the opportunity for interworkload security breaches.

Building the Right Network

Liqid’s PCIe Switch

The additional complexity in such a scheme is building a network that can connect all of these components together and spread loads across the available resources in a rational way. There are such systems being built today, Liqid's composable systems among them. These systems rely on a PCIe switch to interconnect compute, storage, network, and even GPU resources so they appear to be one larger system, while allowing groups of these components to be composed for a particular job, redeployed, or automated as required.
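
To make the idea of composition a bit more tangible, here is a purely hypothetical sketch (not Liqid's actual API, whose details the post does not cover) of the kind of request a PCIe-fabric control plane might accept: a description of the disaggregated resources to group together into what the operating system then sees as a single server.

```c
#include <stdio.h>

/* Hypothetical composition request: which pooled resources should be
 * attached over the PCIe fabric to form one logical node. */
struct composed_node {
    const char *name;
    int cpu_modules;     /* single-socket compute modules to attach    */
    int gpus;            /* GPUs pulled from the fabric's GPU pool     */
    int nvme_drives;     /* NVMe devices attached over the PCIe switch */
    int nics;            /* network adapters                           */
};

int main(void)
{
    /* Compose a node for one job; tear it down and recompose for the next. */
    struct composed_node render_job = {
        .name = "render-job-01",
        .cpu_modules = 2,
        .gpus = 4,
        .nvme_drives = 8,
        .nics = 2,
    };
    printf("compose %s: %d cpu, %d gpu, %d nvme, %d nic\n",
           render_job.name, render_job.cpu_modules, render_job.gpus,
           render_job.nvme_drives, render_job.nics);
    return 0;
}
```

The point is less the data structure than the workflow it implies: compose a node for a job, run the job, and return the parts to the pool.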

This entire solution is, of course, speculative, much like the speculative processing of a branch by a processor that lies at the root of the Meltdown and Spectre attacks. But thinking through this sort of problem, even though the solution might not be realistic, can often expose different sets of tradeoffs than the ones you saw when you first looked at the problem and designed a solution for it.

And if there is anything Spectre and Meltdown should teach us, it is this: If you haven’t found the tradeoffs, you haven’t looked hard enough.

About the author

Russ White

Russ White has more than twenty years’ experience in designing, deploying, breaking, and troubleshooting large scale networks. Across that time, he has co-authored more than forty software patents, spoken at venues throughout the world, participated in the development of several internet standards, helped develop the CCDE and the CCAr, and worked in Internet governance with the Internet Society. Russ is currently a member of the Architecture Team at LinkedIn, where he works on next generation data center designs, complexity, security, and privacy. He is also currently on the Routing Area Directorate at the IETF, and co-chairs the IETF I2RS and BABEL working groups.
