
SZ: Fast Error-Bounded Floating-point Data Compressor for Scientific Applications

Sheng Di, Dingwen Tao, Franck Cappello

Today’s HPC applications produce extremely large amounts of data, so efficient compression is necessary before storing them to parallel file systems.

We developed SZ, an error-bounded HPC data compressor based on a novel compression method that is very effective on large-scale HPC data sets.

The key features of SZ are listed below. 

1. Compression: Input: a data set (a floating-point array of any dimensionality); Output: the compressed byte stream

    Decompression: Input: the compressed byte stream; Output: the reconstructed data set, with the compression error of each data point guaranteed to be within a pre-specified error bound ∆.

2. SZ supports C, Fortran, and Java. It has been tested on Linux and Mac, with different architectures (x86, x64, ppc, etc.).

3. SZ supports two types of error bounds. Users can set an absolute error bound, a value-range-based relative error bound, or a combination of the two (with operator AND or OR).

  • The absolute error bound (denoted δ) is a constant, such as 1E-6. That is, each decompressed value Di′ must be in the range [Di − δ, Di + δ], where Di′ is the decompressed value and Di is the original data value.
  • The relative error bound is a linear function of the global data value range size, i.e., ∆ = λr, where λ (∈ (0,1)) is the error bound ratio and r is the range size. For a given data set, r = max(Di) − min(Di), so the error bound can be written as λ(max(Di) − min(Di)). The relative error bound thus guarantees that the compression error for any data point is no greater than λ×100 percent of the global data value range size.

4. SZ supports three compression modes (similar to Gzip): SZ_BEST_SPEED, SZ_DEFAULT_COMPRESSION, and SZ_BEST_COMPRESSION. SZ_BEST_SPEED yields the fastest compression. The best compression factor is reached by using SZ_BEST_COMPRESSION together with Gzip's best-compression setting. SZ_DEFAULT_COMPRESSION is a tradeoff between SZ_BEST_SPEED and SZ_BEST_COMPRESSION.

5. More detailed usage instructions and examples can be found in doc/user-guide.pdf and the example/ directory of the package, respectively.

6. Download

Version SZ 1.4.7-beta

-->>> Package Download (including everything) <<<--

-->>> Github of SZ <<<--


If you download the code, please let us know who you are. We are keen to help you use the SZ library.

A paper describing SZ will appear in IPDPS'16, and its technical report is available for download.

7. Version history: We recommend the latest version, SZ 1.4.7.

  • SZ 1.4.7-beta Fixed some memory-leak bugs (related to Huffman encoding). Fixed memory-crash and segmentation-fault bugs occurring when the number of data points is very large. Fixed a segmentation fault occurring when the data size is very small. Fixed an issue where decompressed data could deviate significantly from the original data in some special cases (especially when the data are not smooth at all).
  • SZ 1.4.6-beta The compression ratio and speed are further improved over SZ 1.3 in most cases. This version also provides three compression modes: SZ_BEST_SPEED, SZ_DEFAULT_COMPRESSION, and SZ_BEST_COMPRESSION. Please read the user guide for details.
  • SZ 1.3 The compression ratio and speed are further improved over SZ 1.2.
  • SZ 1.2 The compression ratio is improved significantly compared with SZ 1.1.
  • SZ 1.1 This version improved the compression performance by 50% compared to SZ 1.0. A few bugs that could cause the compression to violate the error bound are also fixed.
  • SZ 1.0 This version is coded in C, unlike the previous versions coded in Java. It also allows setting the endianType for the data to compress.
  • SZ 0.5.14 Fixed a design bug, which further improves the compression ratio.
  • SZ 0.5.13 Improved compression performance by replacing the class-based implementation with primitive data types.
  • SZ 0.5.12 Allows users to set the "offset" parameter in the configuration file sz.config. The value of offset is an integer in [1,7]. We generally recommend offset=2 or 3, although other settings (such as offset=7) may lead to better compression ratios in some cases. Automating/optimizing the selection of the offset value is future work. In addition, the compression speed is improved by replacing Java List with an array implementation in the code.
  • SZ 0.5.11 Improved SZ 0.5.10 in guaranteeing user-specified error bounds. In a few cases, SZ 0.5.10 could not guarantee the user-specified error bound: for example, with an absolute error bound of 1E-6, the maximum decompression error could be 0.01 (>> 1E-6), because the value range is so huge even in the optimized segments that the normalized data cannot reach the required precision, even when storing all 32 or 64 mantissa bits. SZ 0.5.11 fixes this problem, with the compression ratio degraded by less than 1% in that case.
  • SZ 0.5.10 Optimized the offset using an optimized formula for computing the median_value based on an optimized right-shifting method. This version improves the compression ratio considerably for hard-to-compress data sets (i.e., cases whose compression ratios are usually very limited).
  • SZ 0.5.9 Optimized the offset using a simple right-shifting method. Experiments show that this does not actually improve the compression ratio, because simple right-shifting multiplies each value by 2^{-k}, where k is the number of right-shifted bits. The pro is saving bits thanks to more leading-zero bytes; the con is that many more bits are required elsewhere. See SZ 0.5.10 for a better solution to this issue.
  • SZ 0.5.8 Refined the leading-zero granularity (from bytes to bits, based on the distribution). For example, in SZ 0.5.7 the leading-zero count is always in whole bytes (0, 1, 2, or 3); in SZ 0.5.8 the leading-zero part can be xxxx xxxx xx xx xx xx xxxx xxxx (where each x is a bit of the leading-zero part).
  • SZ 0.5.7 Improved the decompression speed in some cases.
  • SZ 0.5.6 Improved the compression ratio in some cases (when the values in a segment are all the same, the segment is merged forward).
  • SZ 0.5.5 Reduced runtime memory (by changing int to byte in the code). Fixed a bug where writing decompressed data could raise exceptions. Fixed a memory-leak bug on the ppc architecture.
  • SZ 0.5.4 Gzip_mode: default --> fast_mode; support for reserved values.
  • SZ 0.5.3 Integrated dynamic segmentation support.
  • SZ 0.5.2 Finer compression granularity for unpredictable data; also removed redundant Java storage bytes.
  • SZ 0.5.1 Support for version checking.
  • SZ 0.2-0.4 The compression ratio is the same as SZ 0.5; the key difference is the implementation, which makes SZ 0.5 much faster than SZ 0.2-0.4.

8. Other versions are available upon request

