
SZ: Fast Error-Bounded Lossy HPC Data Compressor

Today’s HPC applications produce extremely large amounts of data, making efficient compression essential before the data are stored to parallel file systems.

We developed this error-bounded HPC data compressor around a novel compression method that works very effectively on large-scale HPC data sets.

The compression method starts by linearizing multi-dimensional snapshot data. The key idea is to fit/predict the successive data points with a best-fit selection among a set of curve-fitting models. Data points that can be predicted precisely are replaced by the code of the corresponding curve-fitting model. For the unpredictable data points, which cannot be approximated by any curve-fitting model, we perform an optimized lossy compression via a binary representation analysis.
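
To make the best-fit selection concrete, here is a minimal C sketch of the idea. The three predictors shown (preceding-value, linear, and quadratic fits) illustrate the kind of curve-fitting models SZ uses, but the names and code values are ours, and for simplicity the sketch predicts from the original values rather than the previously decompressed values that a real compressor must use so the decoder can replay the same predictions:

    #include <math.h>

    /* Illustrative model codes; not the actual SZ identifiers. */
    enum { MODEL_PREV = 0, MODEL_LINEAR = 1, MODEL_QUAD = 2, MODEL_UNPRED = 3 };

    /* Pick the curve-fitting model whose prediction of X[i] from the
     * preceding points stays within the error bound; return MODEL_UNPRED
     * if none qualifies (such points fall back to the lossy compression
     * via binary representation analysis).  Requires i >= 3; *pred is
     * set only when a model qualifies. */
    static int bestfit_model(const double *X, long i, double bound, double *pred)
    {
        const double p[3] = {
            X[i-1],                          /* preceding-value fit */
            2*X[i-1] - X[i-2],               /* linear fit          */
            3*X[i-1] - 3*X[i-2] + X[i-3]     /* quadratic fit       */
        };
        int best = MODEL_UNPRED;
        double bestErr = bound;
        for (int m = 0; m < 3; m++) {
            double err = fabs(p[m] - X[i]);
            if (err <= bestErr) { bestErr = err; best = m; *pred = p[m]; }
        }
        return best;
    }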

The key features of SZ are listed below. 

1. Compression: Input: a data set (i.e., a floating-point array of any dimensionality); Output: the compressed byte stream.

    Decompression: Input: the compressed byte stream; Output: the decompressed data set, with the compression error of each data point within a pre-specified error bound ∆ (see the sketch below).
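
The decompression guarantee in item 1 amounts to a simple point-wise check. The following minimal C sketch (the function name is ours, not part of the SZ API) verifies it after a compress/decompress round trip:

    #include <math.h>
    #include <stddef.h>

    /* Returns 1 if every decompressed value lies within the pre-specified
     * error bound delta of its original value, 0 otherwise. */
    int check_error_bound(const double *orig, const double *dec,
                          size_t n, double delta)
    {
        for (size_t i = 0; i < n; i++)
            if (fabs(dec[i] - orig[i]) > delta)
                return 0;
        return 1;
    }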

2. SZ supports C, Fortran, and Java. 

3. SZ supports two types of error bounds. Users can set either an absolute error bound, a relative error bound, or a combination of the two (with operator AND or OR).

  • The absolute error bound (denoted δ) is a constant, such as 1E-6. That is, the decompressed value Di′ must lie in the range [Di − δ, Di + δ], where Di′ denotes the decompressed value and Di the original data value.
  • The relative error bound is a linear function of the global data value range size, i.e., ∆ = λr, where λ (∈ (0,1)) and r denote the error bound ratio and the range size, respectively. For example, given a set of data, the range size r equals max(Di) − min(Di), so the error bound can be written as λ(max(Di) − min(Di)). The relative error bound thus guarantees that the compression error of any data point is no greater than λ×100 percent of the global data value range size. A sketch combining the two bound types follows this list.
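
Here is a minimal C sketch of how the two bound types might be combined into one effective point-wise bound, under our own reading of the AND/OR operators (AND keeps the tighter bound, OR the looser); the function name and the mode flag are illustrative, not SZ's actual configuration interface:

    #include <math.h>
    #include <stddef.h>

    /* Compute the effective point-wise bound from an absolute bound delta
     * and a relative bound lambda * (max - min).  Assumes n >= 1. */
    double effective_bound(const double *D, size_t n,
                           double delta, double lambda, int useAnd)
    {
        double lo = D[0], hi = D[0];
        for (size_t i = 1; i < n; i++) {   /* global value range r = max - min */
            if (D[i] < lo) lo = D[i];
            if (D[i] > hi) hi = D[i];
        }
        double rel = lambda * (hi - lo);   /* relative bound: lambda * r */
        return useAnd ? fmin(delta, rel)   /* AND: both bounds must hold */
                      : fmax(delta, rel);  /* OR: either bound suffices  */
    }

For example, with λ = 0.01 and data spanning [0, 100], the relative bound is 1.0; combined with δ = 1E-6, AND yields 1E-6 and OR yields 1.0.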

4. Detailed usage and examples can be found in doc/user-guide.pdf and under the example/ directory of the package, respectively. 

5. Version history: We recommend the latest version, SZ 1.1.

  • SZ 1.1 Improved compression performance by 50% compared to SZ 1.0. A few bugs that could cause the compression to violate the error bound have also been fixed. 
  • SZ 1.0 This version is coded in the C programming language, unlike the previous versions, which were coded in Java. It also allows setting the endianType for the data to compress.
  • SZ 0.5.14 Fixed a design bug, which further improves the compression ratio.
  • SZ 0.5.13 Improved compression performance by replacing the class-based implementation with one based on primitive data types.
  • SZ 0.5.12 Allows users to set the "offset" parameter in the configuration file sz.config. The offset value is an integer in [1,7]. We generally recommend offset=2 or 3, though other settings (such as offset=7) may yield better compression ratios in some cases; automating/optimizing the selection of the offset value is future work. In addition, compression speed is improved by replacing the Java List with an array implementation in the code.
  • SZ 0.5.11 Improved SZ 0.5.10's guarantee of user-specified error bounds. In a very few cases, SZ 0.5.10 could not keep the error within the user-specified level. For example, with an absolute error bound of 1E-6, the maximum decompression error could reach 0.01 (>> 1E-6), because the huge value range, even within the optimized segments, kept the normalized data from reaching the required precision even when storing all of the 64 or 32 mantissa bits. SZ 0.5.11 fixes the problem, degrading the compression ratio by less than 1% in such cases.
  • SZ 0.5.10 Optimized the offset by computing the median_value with an improved formula based on an optimized right-shifting method. This version improves the compression ratio considerably for hard-to-compress data sets (i.e., data sets whose compression ratios are usually very limited).
  • SZ 0.5.9 Optimized the offset using a simple right-shifting method. Experiments show that this does not actually improve the compression ratio, because simple right-shifting multiplies each data value by 2^{-k}, where k is the number of right-shifted bits. The benefit is bits saved through more leading-zero bytes, but the cost is many more bits needed to store the values. See SZ 0.5.10 for the better solution to this issue!
  • SZ 0.5.8 Refined the leading-zero granularity from bytes to bits, based on the distribution. For example, in SZ 0.5.7 the leading-zero part is always a whole number of bytes (0, 1, 2, or 3); in SZ 0.5.8 it could be xxxx xxxx xx xx xx xx xxxx xxxx (where each x is one bit of the leading-zero part).
  • SZ 0.5.7 Improved the decompression speed in some cases.
  • SZ 0.5.6 Improved the compression ratio in some cases (when all values in a segment are identical, the segment is merged forward).
  • SZ 0.5.5 Reduced runtime memory (by changing int fields to byte fields in the code). Fixed a bug in which writing decompressed data could raise exceptions, and fixed a memory-leak bug on the PPC architecture.
  • SZ 0.5.4 Changed Gzip_mode from default to fast_mode; added support for a reserved value.
  • SZ 0.5.3 Integrated the dynamic segmentation support.
  • SZ 0.5.2 Finer compression granularity for unpredictable data; also removed redundant Java storage bytes.
  • SZ 0.5.1 Added version checking.
  • SZ 0.2-0.4 The compression ratio is the same as in SZ 0.5; the key difference is the implementation, which makes SZ 0.5 much faster than SZ 0.2-0.4.

6. Download

-->>> Code download <<<-- (soon, pending DoE approval of distribution license)

(The code is ready to use, but it cannot be released yet because the BSD license is still in the approval process. Until the official release, the code is available upon request. Contact: disheng222@gmail.com or sdi1@anl.gov)

If you download the code, please let us know who you are; we are keen to help you use the SZ library.

A paper describing SZ will appear in IPDPS'16, and its technical report is available for download.
