The issue is that when you compress something, you need to add metadata so that the original can be reconstructed. Compression algorithms make assumptions about what patterns will occur in the original data in order to keep that metadata small. If the assumptions are right, the compressed data (including the metadata) is much smaller than the original. But if the assumptions are wrong, the “compressed” data can end up bigger than it was before! It’s similar to how JIT compilers necessarily use more memory than ahead-of-time compilers, because of the metadata they keep around at runtime.
Jake Lazaroff | Making CRDTs 98% More Efficient
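A quick way to see the failure case concretely: a minimal Python sketch (not from the article) using the standard-library zlib module. Repetitive input shrinks dramatically, while incompressible random input comes back slightly larger than it went in, because the container metadata buys nothing.

```python
import os
import zlib

# Repetitive data matches zlib's assumptions (repeated byte patterns),
# so the output, metadata included, is far smaller than the input.
repetitive = b"abc" * 10_000

# Random bytes violate those assumptions: there are no patterns to
# exploit, so the "compressed" output ends up slightly larger than
# the input once the format's overhead is added.
random_data = os.urandom(30_000)

for label, data in [("repetitive", repetitive), ("random", random_data)]:
    print(f"{label}: {len(data)} bytes -> {len(zlib.compress(data))} bytes")
```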
Related Notes
- There’s something qualitative and important that happens when the e... (from marcbrooker@gmail.com, Marc Brooker)
- it's neat how *adding* affine measures is mathematically invali... (from buttondown.email)
- Direct manipulation of data, something like Sketchpad, where you... (from youtube.com)
- facts incorporate time [[Think about how we structure data for ours... (from InfoQ)
- the smaller the interface, the more useful it is [[See also [dee... (from The Go Programming Language)
- errors are values: The Go Programming Language | Gopherfest 2015 |... (from The Go Programming Language)
- don't communicate by sharing memory; share memory by communicat... (from The Go Programming Language)
- General notes: > Instead of writing code, directly manipulate da... (from youtube.com)