ReFlex: remote storage with local performance in a flash

Pudding proof

As implied above, a two-core server running ReFlex can fully saturate a 1 million IOPS SSD. That compares favorably with the current Linux I/O stack, which

. . . uses libaio and libevent, [and] achieves only 75K IOPS/core and at higher latency due to higher compute intensity, requiring over 10× more CPU cores to achieve the throughput of ReFlex.
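Taking the quoted figures at face value, the core-count gap is easy to sanity-check. This is a back-of-the-envelope sketch, not from the paper: it assumes the 1 million IOPS target and the 75K IOPS/core Linux figure quoted above.

```python
# Rough arithmetic on the quoted numbers (assumed: 1M IOPS SSD,
# 75K IOPS/core for the libaio/libevent path, 2 cores for ReFlex).
TARGET_IOPS = 1_000_000
LINUX_IOPS_PER_CORE = 75_000
REFLEX_CORES = 2

# Ceiling division: whole cores needed to hit the target on Linux.
linux_cores = -(-TARGET_IOPS // LINUX_IOPS_PER_CORE)

print(f"Linux cores needed: {linux_cores}")          # 14 cores
print(f"ReFlex cores needed: {REFLEX_CORES}")        # 2 cores
print(f"Ratio: {linux_cores / REFLEX_CORES:.1f}x")   # 7.0x
```

Roughly 14 cores versus 2; the paper's "over 10×" figure is larger still because it compares per-core throughput, and ReFlex does not need both of its cores fully busy to reach 1M IOPS.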

In addition, testing found that ReFlex can support thousands of remote tenants, a vital capability given that a single cloud datacenter may contain a hundred thousand or more servers.

The Storage Bits take

Fast flash has been with us for over a decade, yet system architects are still figuring out how to optimize its use in the real world. Of course, a decade ago, the massive scale-out architectures that ReFlex is designed for were the province of only a few internet-scale service providers.

However, once ReFlex – or something like it – is built into system kernels, many of us, even with much more modest hardware footprints, will be able to take advantage of shared SSDs. Imagine the performance gains, for example, of an eight-node video render farm equipped with two high-performance PCIe/NVMe SSDs and a 10Gb Ethernet fabric.

ReFlex-type capabilities become even more important as newer, higher-performance – and costlier – non-volatile memory technologies, such as Intel's 3D XPoint, come into wider use. The economic benefits of shared remote SSDs will be even greater than they are today.

Courteous comments welcome, of course. ReFlex won the NVMW’18 Memorable Paper Award.

