This is the second part of our blog series on In-storage Transparent Compression: A Catalyst for Innovating Infrastructure Software. Part 2: Improve the TPS performance of PostgreSQL.
Improve the TPS performance of PostgreSQL
As one of the most widely used relational databases, PostgreSQL applies a B+ tree index to manage its data storage (with a default B+ tree page size of 8KB) and realizes MVCC (multi-version concurrency control) by storing all the row versions in the tablespace. Hence, instead of updating a row in place within a B+ tree page, PostgreSQL always stores the new row version at a new location first and relies on a background vacuum process to reclaim the tablespace occupied by dead row versions. As a result, the TPS (transactions per second) performance of update operations strongly depends on whether PostgreSQL can store the new row version on the same page as the old row version:
If the page hosting the old row version is full, PostgreSQL must store the new row version in another page. It must then modify the B+ tree structure accordingly by manipulating (splitting or creating) one or more additional pages, which incurs extra CPU usage and degrades TPS performance.
If the page hosting the old row version has sufficient empty space, PostgreSQL simply appends the new row version to that page. Since the B+ tree structure stays intact, this incurs very little CPU overhead and leads to higher TPS performance.
The above reveals a fundamental trade-off between TPS performance and storage space usage: when loading data into PostgreSQL, if we do not completely fill the B+ tree pages and instead reserve a certain amount of empty space to absorb future updates, we can improve TPS performance. However, this also leads to a larger number of B+ tree pages and hence higher storage space usage. PostgreSQL lets users configure this trade-off through a parameter called fillfactor, as illustrated in Fig. 1. A percentage value between 10 and 100, fillfactor controls how full each page is filled with inserted rows. Its default value is 100, i.e., each page is completely filled with inserted rows and reserves no space for future updates.
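The logical-size side of this trade-off can be sketched with some quick arithmetic. The following is a minimal Python illustration with hypothetical numbers; it ignores page headers, per-row overhead, and interior index pages:

```python
import math

PAGE_SIZE = 8 * 1024  # PostgreSQL's default B+ tree page size (8KB)

def pages_needed(total_row_bytes: int, fillfactor: int) -> int:
    """Pages required at load time when each page is filled only up to
    `fillfactor` percent (simplified: ignores headers and row overhead)."""
    usable_per_page = PAGE_SIZE * fillfactor // 100
    return math.ceil(total_row_bytes / usable_per_page)

# Loading an illustrative 1GB of row data:
for ff in (100, 75):
    pages = pages_needed(1 << 30, ff)
    print(f"fillfactor={ff}: {pages} pages, "
          f"{pages * PAGE_SIZE / (1 << 30):.2f}GB of 8KB pages on disk")
```

For a hypothetical table named accounts, the parameter itself is set per table with standard PostgreSQL DDL such as `ALTER TABLE accounts SET (fillfactor = 75);` (the new value applies to subsequently written pages).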
Figure 1: PostgreSQL uses the parameter fillfactor to configure the TPS performance vs. storage cost trade-off.
When PostgreSQL runs on normal SSDs/HDDs, each page always consumes 8KB of physical storage space regardless of how full it is, which leads to a clear TPS performance vs. storage cost trade-off configured by fillfactor. As we discussed in the previous blog, in-storage transparent compression enables virtually variable-size block I/O in the presence of a 4KB block I/O interface. Therefore, when PostgreSQL runs on storage drives with built-in transparent compression, how full a page is filled with user data directly determines the amount of physical storage space the page consumes. This largely relieves PostgreSQL of the TPS performance vs. storage cost trade-off: as we reduce the value of fillfactor to leave more space in each page for future updates, the physical storage usage does not proportionally increase. Therefore, by enabling virtually variable-size block I/O, in-storage transparent compression allows PostgreSQL to aggressively reduce fillfactor to improve TPS performance without increasing the physical storage cost.
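To see why compression decouples the two costs, consider a minimal storage model. This is a sketch with hypothetical numbers: `ratio` is an assumed average compressibility, not a measured figure, and the model ignores page headers and the drive's actual mapping granularity. A normal SSD stores every 8KB page in full no matter how empty it is, while a drive with transparent compression consumes flash roughly in proportion to the compressed occupied bytes, so reserved free space is almost free:

```python
import math

PAGE_SIZE = 8 * 1024  # PostgreSQL's default 8KB page

def pages_for(total_row_bytes: int, fillfactor: int) -> int:
    # Pages needed when each page is filled only to `fillfactor` percent.
    return math.ceil(total_row_bytes / (PAGE_SIZE * fillfactor // 100))

def physical_normal_ssd(num_pages: int) -> int:
    # A normal SSD stores every 8KB page in full, however empty it is.
    return num_pages * PAGE_SIZE

def physical_compressed(num_pages: int, fillfactor: int,
                        ratio: float = 0.25) -> int:
    # With in-storage transparent compression, the empty part of each page
    # compresses away to almost nothing; only the occupied bytes consume
    # flash, reduced further by an assumed compression ratio.
    return int(num_pages * PAGE_SIZE * (fillfactor / 100) * ratio)

data = 1 << 30  # 1GB of row data
for ff in (100, 75):
    n = pages_for(data, ff)
    print(f"fillfactor={ff}: normal SSD {physical_normal_ssd(n) >> 20}MB, "
          f"compressed drive {physical_compressed(n, ff) >> 20}MB")
```

In this model, lowering fillfactor from 100 to 75 grows the physical footprint on the normal SSD by about a third, while the compressed drive's footprint stays essentially flat, mirroring the decoupling described above.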
To further demonstrate this concept, we carried out experiments on PostgreSQL using the Percona Sysbench-TPCC OLTP benchmark on a server with a 32-core 3.3GHz Xeon CPU. We considered two dataset sizes, 740GB and 1.4TB, and ran experiments (with 64 client threads) on one normal NVMe SSD and on our CSD 2000 with built-in transparent compression. Fig. 2 shows the measured (normalized) TPS performance and physical storage space usage when setting fillfactor to 100 and 75, respectively. Under the same fillfactor, PostgreSQL achieves the same TPS on the normal NVMe SSD and on our CSD 2000. When reducing fillfactor from 100 to 75, the TPS performance increases by about 33%. In the case of the 740GB dataset with a fillfactor of 100, in-storage transparent compression reduces the physical storage usage from 740GB to 178GB. When reducing fillfactor from 100 to 75, the physical storage usage on the normal NVMe SSD proportionally increases from 740GB to 905GB, while the physical storage usage on our CSD 2000 increases only slightly, from 178GB to 189GB. A similar trend holds for the 1.4TB dataset. These results clearly show that, by simply tuning the fillfactor parameter, PostgreSQL can significantly benefit from in-storage transparent compression without any source code changes.
Figure 2: Normalized TPS performance vs. physical storage usage under different fillfactor when PostgreSQL runs on CSD 2000 and normal NVMe SSD.