Design of data management systems upon new memory and storage hardware

Authors
Qiao, Yifan
Issue Date
2022-05
Type
Electronic thesis
Thesis
Language
en_US
Keywords
Computer Systems engineering
Abstract
The objective of this thesis is to develop design techniques that improve the performance and efficiency of data management systems by leveraging new memory and storage hardware. In particular, this thesis centers around new heterogeneous DRAM and new solid-state storage hardware with built-in transparent data compression.

This thesis first investigates the design of relational database caching in the presence of a heterogeneous DRAM hierarchy consisting of convenient but expensive byte-addressable DRAM and large-capacity, low-cost DRAM with coarse access granularity (e.g., 1 KB). Regardless of the specific memory technology, one can always reduce manufacturing cost by sacrificing raw memory reliability and applying an error correction code (ECC) to restore data storage integrity. ECC efficiency improves significantly as the codeword length increases, which in turn enlarges the memory access granularity; this leads to a fundamental trade-off between memory cost and access granularity. Following this principle, the Intel 3DXP-based Optane memory DIMM internally operates with a 256-byte ECC codeword (hence a 256-byte access granularity), and Hynix recently demonstrated a low-cost DRAM DIMM with a 64-byte access granularity. This thesis develops a design approach that enables relational databases to take full advantage of such a low-cost heterogeneous DRAM fabric to improve performance with only minimal database source code modification. Using MySQL as a test vehicle, this thesis implements a prototype system and demonstrates the effectiveness of the proposed design approach under the TPC-C and Sysbench OLTP benchmarks.

This thesis further investigates how relational databases can take full advantage of modern storage hardware with built-in transparent compression. Advanced storage appliances (e.g., all-flash arrays) and some of the latest solid-state drives (SSDs) can perform hardware-based data compression transparently to the OS and applications. Moreover, the growing deployment of hardware-based compression in cloud storage infrastructure points to the imminent arrival of cloud storage hardware with built-in transparent compression. To let relational databases better leverage such storage hardware, this thesis proposes a dual in-memory vs. on-storage page format: while pages in the database cache memory retain the conventional row-based format, each page on the storage device uses a column-based format so that it can be compressed more effectively by the storage hardware. This thesis further develops design techniques that improve on-storage page compressibility through additional lightweight column data transformations, and studies how the choice of compression algorithm affects the selection of these transformations. The design techniques were integrated into MySQL/InnoDB by appropriately modifying its source code, and Sysbench OLTP workloads were run on a commercial SSD with built-in transparent compression. The results show that the proposed solution can bring up to 45% additional reduction of the storage cost at only a few percent of performance degradation.
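To illustrate the intuition behind the dual page format, the following minimal sketch (not code from the thesis; the table schema, the fabricated values, and the use of Python's zlib as a stand-in for the drive's transparent compression engine are all assumptions) compresses a row-based and a column-based image of the same page:

import random
import struct
import zlib

random.seed(1)

# Hypothetical page of fixed-size records (order_id, customer_id, amount):
# order_id is sequential, customer_id is drawn from a small set, amount is
# essentially random -- all fabricated purely for illustration.
records = [(i, random.randrange(100), random.randrange(10**9))
           for i in range(1024)]

# Row-based page image: the fields of each record stored contiguously,
# as in the conventional in-memory page format.
row_page = b"".join(struct.pack("<iii", *rec) for rec in records)

# Column-based page image: all values of one column stored contiguously,
# so similar bytes line up and the compressor finds longer matches.
col_page = b"".join(
    struct.pack("<%di" % len(records), *(rec[k] for rec in records))
    for k in range(3)
)

# zlib stands in for the drive's built-in transparent compression engine.
print("row-based page   :", len(zlib.compress(row_page)), "bytes compressed")
print("column-based page:", len(zlib.compress(col_page)), "bytes compressed")

With data like this, the column-based image typically compresses noticeably better, because values within a column are far more self-similar than the interleaved bytes of whole rows; the lightweight column transformations mentioned above push the same idea further.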
This thesis also studies how the B+-tree can take full advantage of modern storage hardware with built-in transparent compression. Recent years have witnessed significant interest in applying the log-structured merge tree (LSM-tree) as an alternative to the B+-tree, driven by the widely accepted belief that the LSM-tree has distinct advantages in terms of storage cost and write amplification. This thesis revisits this belief upon the arrival of storage hardware with built-in transparent compression. Advanced storage appliances and emerging computational storage drives perform hardware-based lossless data compression, transparent to the OS and user applications. Beyond straightforwardly narrowing the storage cost gap between the B+-tree and the LSM-tree, such storage hardware creates new opportunities to rethink the implementation of the B+-tree. This thesis presents three simple design techniques that leverage such modern storage hardware to significantly reduce B+-tree write amplification. Experiments on a commercial storage drive with built-in transparent compression show that the proposed design techniques can reduce B+-tree write amplification by over 10 times. Compared with RocksDB (a key-value store built upon an LSM-tree), the enhanced B+-tree implementation can achieve similar or even smaller write amplification.

Finally, this thesis studies the design of storage nodes in cloud-native relational database systems upon the arrival of new storage hardware with built-in transparent compression. Deployed on compute/storage-disaggregated infrastructure, modern cloud-native relational databases employ the "log-is-the-database" principle: by shipping redo log records, instead of entire dirty pages, from compute nodes to storage nodes, they significantly reduce network traffic at the cost of a heavier workload on the storage nodes. As a result, storage nodes are subject to nontrivial trade-offs among write amplification, storage cost, and page service latency. Beyond seamlessly reducing storage cost at zero CPU usage, new storage hardware with built-in transparent compression enables data management software to employ sparse on-disk data structures without sacrificing physical storage cost. This thesis presents two simple yet effective design techniques that, by leveraging such zero-cost sparse on-disk data structures, allow storage nodes to achieve better design trade-offs and further reduce network traffic. Experimental results show that, compared with the conventional design practice, the proposed design techniques can reduce storage node write amplification and storage cost by up to 73% and 75%, respectively, and reduce the storage-to-compute network traffic in proportion to the page data compressibility.
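To give a flavor of why a sparse on-disk layout can be nearly free under transparent compression, the following sketch uses assumed numbers (a 16 KB logical page, a 2 KB payload, and zlib as a proxy for the drive's hardware compressor); it illustrates only the general principle, not the specific techniques developed in the thesis:

import os
import zlib

PAGE_SIZE = 16 * 1024   # logical page size (assumed)
PAYLOAD = 2 * 1024      # bytes actually occupied by records (assumed)

# Dense page: payload packed together with no slack space.
dense_page = os.urandom(PAYLOAD)

# Sparse page: the same payload followed by zero padding up to the page size.
sparse_page = dense_page + b"\x00" * (PAGE_SIZE - PAYLOAD)

# With built-in transparent compression the drive stores roughly the
# compressed size, so the 14 KB of zero padding adds almost nothing
# to the physical footprint.
print("logical sizes   :", len(dense_page), "vs", len(sparse_page))
print("compressed sizes:", len(zlib.compress(dense_page)),
      "vs", len(zlib.compress(sparse_page)))

The logically unused space costs almost no physical storage, and it could, for instance, absorb in-place page updates or appended log records; this is the kind of trade-off that lets sparse on-disk data structures reduce write amplification without inflating the physical storage cost.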
Description
May 2022
School of Engineering
Publisher
Rensselaer Polytechnic Institute, Troy, NY