The issue is that when you compress something, you need to add metadata so that you can reconstruct what it looked like originally. Compression algorithms make assumptions about which patterns will occur in the original data in order to keep that metadata small. If the assumptions hold, the compressed data (including the metadata) is much smaller than the original. But if the assumptions are wrong, the “compressed” data can end up even bigger than it was before! It’s like how a JIT compiler necessarily needs more memory at runtime than an ahead-of-time compiler, because it has to keep extra metadata around.
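You can see this with a toy run-length encoder (this is just an illustration of the general principle, not the encoding used in the article; `rleEncode` is a hypothetical name). It assumes the data contains long runs of repeated characters, and its metadata is the count stored before each run:

```typescript
// Toy run-length encoding: each run of a repeated character is
// stored as a count followed by the character. The count is the
// metadata needed to reconstruct the original string.
function rleEncode(input: string): string {
  let out = "";
  let i = 0;
  while (i < input.length) {
    // Find the end of the current run of identical characters.
    let j = i;
    while (j < input.length && input[j] === input[i]) j++;
    out += String(j - i) + input[i];
    i = j;
  }
  return out;
}

// When the data matches the assumption (long runs), compression wins:
console.log(rleEncode("aaaaaaaabbbb")); // "8a4b" — 4 chars instead of 12

// When it doesn't (no repeats at all), the metadata makes it *larger*:
console.log(rleEncode("abcdef")); // "1a1b1c1d1e1f" — 12 chars instead of 6
```

Same algorithm, same metadata scheme; whether it shrinks or inflates the data depends entirely on whether the data matches the algorithm’s assumptions.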

Jake Lazaroff | Making CRDTs 98% More Efficient