A bit more detail: we're already taking full advantage of zipmaps, ziplists, etc., and I'm wondering whether these representations are already compressed or are just serialized hashes and lists; does compression significantly reduce memory usage?
Also, is compression overhead at the app server layer offset by lower network usage? StackOverflow's experience suggests it is; any other opinions?
In brief, does it make sense - for both short and longer strings?
Redis does not compress your values, and whether you should compress them yourself depends a lot on the size of the strings you are going to store. For big strings, hundreds of kilobytes and more, it's probably worth the extra CPU cycles on the client side, just like it is when you serve web pages, but for shorter strings it's likely a waste of time. Short strings generally don't compress much, so the gain would be too small.
There's a practical way to get good compression, even for very small strings (50 bytes!) -
If your values are somewhat similar to each other - for example, they're JSON representations of a few related classes of objects - you can precompute a compressor/decompressor dictionary based on some example text.
It sounds complicated, but it's simple in practice - and simpler still with the right wrapper code to handle it.
Here's a Python implementation:
https://github.com/internetarchive/openlibrary/blob/master/openlibrary/utils/compress.py
and here's a wrapper for compressing a specific class of strings (short JSON records):
https://github.com/internetarchive/openlibrary/blob/master/openlibrary/utils/olcompress.py
One catch: to do this efficiently, your compression library must support 'cloning' the internal state. (The Python library does) You can implement something similar by prepending the example text when compressing, but this means paying an extra computation cost.
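The linked implementation is Python, but the same idea works anywhere zlib is available, via its preset-dictionary API. Here is a minimal, hedged C++ sketch (the dictionary argument is whatever example text you precomputed; error handling is kept to a minimum):

```cpp
// Minimal sketch (not the linked Python code): compress/decompress a short
// string with a preset dictionary via zlib's deflateSetDictionary /
// inflateSetDictionary. `dict` is the precomputed example text; the closer
// it is to your real values, the better the ratio on tiny inputs.
#include <zlib.h>
#include <cstddef>
#include <stdexcept>
#include <string>

std::string compress_with_dict(const std::string& data, const std::string& dict) {
    z_stream zs{};
    if (deflateInit(&zs, Z_BEST_COMPRESSION) != Z_OK)
        throw std::runtime_error("deflateInit failed");
    deflateSetDictionary(&zs, reinterpret_cast<const Bytef*>(dict.data()),
                         static_cast<uInt>(dict.size()));
    std::string out(deflateBound(&zs, data.size()), '\0');
    zs.next_in   = reinterpret_cast<Bytef*>(const_cast<char*>(data.data()));
    zs.avail_in  = static_cast<uInt>(data.size());
    zs.next_out  = reinterpret_cast<Bytef*>(&out[0]);
    zs.avail_out = static_cast<uInt>(out.size());
    deflate(&zs, Z_FINISH);           // single shot is fine for short strings
    out.resize(zs.total_out);
    deflateEnd(&zs);
    return out;
}

std::string decompress_with_dict(const std::string& data, const std::string& dict,
                                 std::size_t max_out = 1 << 20) {
    z_stream zs{};
    if (inflateInit(&zs) != Z_OK)
        throw std::runtime_error("inflateInit failed");
    std::string out(max_out, '\0');
    zs.next_in   = reinterpret_cast<Bytef*>(const_cast<char*>(data.data()));
    zs.avail_in  = static_cast<uInt>(data.size());
    zs.next_out  = reinterpret_cast<Bytef*>(&out[0]);
    zs.avail_out = static_cast<uInt>(out.size());
    int rc = inflate(&zs, Z_FINISH);
    if (rc == Z_NEED_DICT) {          // zlib asks for the dictionary here
        inflateSetDictionary(&zs, reinterpret_cast<const Bytef*>(dict.data()),
                             static_cast<uInt>(dict.size()));
        rc = inflate(&zs, Z_FINISH);
    }
    if (rc != Z_STREAM_END) {
        inflateEnd(&zs);
        throw std::runtime_error("inflate failed");
    }
    out.resize(zs.total_out);
    inflateEnd(&zs);
    return out;
}
```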
Thanks to solrize for this awesome trick.
Redis and its clients are typically IO bound, and the IO costs are typically at least two orders of magnitude larger than the rest of the request/reply processing. Smaller payloads will give you higher throughput and lower latencies.
I do not believe there are any hard and fast rules beyond: cost of compression << IO gains. You should benchmark it and find the sweet spot when setting the lower bound, but the MTU of your network is not a bad starting point for that lower bound.
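For illustration, a minimal sketch of that kind of size cutoff on the client side (the 1400-byte threshold and the "raw:"/"zlb:" prefixes are just assumptions for the example; zlib's one-shot compress2 is used for brevity):

```cpp
// Sketch of "compress only above a size threshold": payloads below the
// cutoff are stored as-is. The default threshold and the tags are
// illustrative, not a convention.
#include <zlib.h>
#include <cstddef>
#include <string>

std::string maybe_compress(const std::string& value, std::size_t threshold = 1400) {
    if (value.size() < threshold)
        return "raw:" + value;                 // not worth the CPU
    uLongf out_len = compressBound(value.size());
    std::string out(out_len, '\0');
    if (compress2(reinterpret_cast<Bytef*>(&out[0]), &out_len,
                  reinterpret_cast<const Bytef*>(value.data()),
                  static_cast<uLong>(value.size()), Z_BEST_SPEED) != Z_OK)
        return "raw:" + value;                 // fall back to the raw payload
    out.resize(out_len);
    return "zlb:" + out;                       // tag tells the reader how to decode
}
```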
I am looking for suggestions on how best to implement my code for the following requirements. During execution of my C++ code, I frequently need to access data stored in a dictionary, which itself is stored in a text file. The dictionary contains 100 million entries, and at any point in time my code may query the data corresponding to any particular entry among those 100 million. There is no particular pattern to the queries, and not all entries are queried during the lifetime of the program. Also, the dictionary remains unchanged during the program's lifetime. The data for each entry is not all the same length. The file size of my dictionary is ~24 GB, and I have only 16 GB of RAM. I need my application to be very fast, so I would like to know how best to implement such a system so that read access times are minimized.
I am also the one creating the dictionary, so I do have the flexibility to break it down into several smaller volumes. While thinking about what I can do, I came up with the following ideas, but I'm not sure whether either is good.
If I store the line offset of each entry from the beginning of the file, then to read the data for a given entry I can jump directly to the corresponding offset. Is there a way to do this using, say, ifstream without looping through all lines until the offset line? A quick search on the web seems to suggest this is not possible, at least with ifstream; are there other ways this can be done?
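To illustrate, something like the following is what I have in mind, assuming I store byte offsets rather than line numbers (names are made up):

```cpp
// Idea sketch: an index maps each entry to a byte offset; seekg jumps there
// without reading the lines in between.
#include <cstdint>
#include <fstream>
#include <string>

std::string read_entry(std::ifstream& dict, std::uint64_t byte_offset) {
    dict.clear();                                   // reset eof/fail bits
    dict.seekg(static_cast<std::streamoff>(byte_offset), std::ios::beg);
    std::string line;
    std::getline(dict, line);                       // each entry is one line
    return line;
}
```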
The other extreme thought was to create a single file for each entry in the dictionary, so I would have 100 million files. This approach has the obvious drawback of overhead in opening and closing the file stream.
In general I am not convinced either of the approaches I have in mind are good, and so I would like some suggestions.
Well, if you only need key-value access and the data is larger than what can fit in memory, the answer is a NoSQL database. That means a hash-type index for the key and arbitrary values. If you have no other constraints, like concurrent access from many clients or extended scalability, you can roll your own. The most important question for a custom NoSQL database is the expected number of keys, because that determines the size of the index file. You can find rather good hashing algorithms around, and you will have to decide between a larger index file and a higher risk of collisions. Anyway, unless you want a terabyte-sized index file, your code must be prepared for possible collisions.
A detailed explanation with examples is far beyond what I can write in an SO answer, but it should give you a starting point.
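To make the idea a bit more concrete, here is a rough, illustrative sketch of such an index (names, layout, and sizes are all assumptions): a fixed-size array of slots in an index file, addressed by hash(key) modulo the table size, with linear probing on collisions; each slot points into the big data file.

```cpp
#include <cstdint>
#include <fstream>
#include <functional>
#include <string>

struct Slot {                     // one fixed-size record in the index file
    std::uint64_t key_hash;       // 0 marks an empty slot in this toy layout
    std::uint64_t value_offset;   // byte offset into the big data file
    std::uint32_t value_length;   // length of that entry's data
};

constexpr std::uint64_t kSlots = 1ull << 28;   // sized generously for ~100M keys

// Returns true and fills `out` when a slot with a matching hash is found.
// Because two keys can share a hash, the caller should still compare the key
// it reads from the data file before trusting the result.
bool lookup(std::ifstream& index, const std::string& key, Slot& out) {
    std::uint64_t h = std::hash<std::string>{}(key);
    if (h == 0) h = 1;                          // reserve 0 for "empty"
    for (std::uint64_t i = 0; i < kSlots; ++i) {
        std::uint64_t slot = (h + i) % kSlots;  // linear probing on collision
        index.seekg(static_cast<std::streamoff>(slot * sizeof(Slot)));
        index.read(reinterpret_cast<char*>(&out), sizeof(Slot));
        if (!index || out.key_hash == 0) return false;  // empty slot: not found
        if (out.key_hash == h) return true;
    }
    return false;
}
```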
The next optimization is deciding what should be cached in memory. That depends on the query pattern you expect. If the same key is unlikely to be queried more than once, you can probably just rely on the OS and filesystem cache (a slight improvement would be memory-mapped files); otherwise, caching the index and/or the values makes sense. Here again you can choose and implement a caching algorithm.
Or, if you think that is too complex for too little gain, you can look into whether one of the free NoSQL databases meets your requirements...
Once you decide to use an on-disk data structure, it becomes less of a C++ question and more of a system design question. You want to implement a disk-based dictionary.
The factors you should consider from now on are: What are your disk's parameters? Is it an SSD or an HDD? What is your average lookup rate per second? Are you fine with 20 µs to 10 ms latencies for your Lookup() method?
On-disk dictionaries require random disk seeks. Such seeks have a latency of dozens of microseconds for SSD and 3-10ms for HDD. Also, there is a limit on how many such seeks you can make a second. You can read this article for example. CPU stops being a bottleneck and IO becomes important.
If you want to pursue this direction, there are state-of-the-art C++ libraries that give you an on-disk key-value store (no need for an out-of-process database), or you can do something simple yourself.
If your application is a batch process and not a server/UI program, i.e. you have another finite stream of items that you want to join with your dictionary, then I recommend reading about external algorithms like hash join or MapReduce. In these cases, it's possible to organize your data in such a way that instead of one huge 24 GB dictionary you have 10 dictionaries of 2.4 GB each, and you load and join them sequentially. But for that, I would need to understand what kind of problem you are trying to solve.
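A tiny sketch of that partitioning idea, with illustrative names only:

```cpp
// Route each key to one of N smaller dictionaries by hashing, so that every
// shard (e.g. dict_0.txt ... dict_9.txt) is small enough to load and join
// sequentially. The shard count and file names are made up.
#include <cstddef>
#include <functional>
#include <string>

std::size_t shard_of(const std::string& key, std::size_t num_shards = 10) {
    return std::hash<std::string>{}(key) % num_shards;
}
```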
To summarize, you need to design your system before coding the solution. Using mmap, tries, or the other tricks mentioned in the comments are local optimizations (if that); they are unlikely to be game-changers. I would not rush into exploring them before doing back-of-the-envelope calculations to understand the main direction.
I understand that GZIP is a combination of LZ77 and Huffman coding and can be configured with a level between 1 and 9, where 1 is the fastest compression (least compression) and 9 is the slowest (best compression).
My question is, does the choice of level only impact the compression process or is there an additional cost also incurred in decompression depending on the level used to compress?
I ask because typically many web servers will GZIP responses on the fly if the client supports it, e.g. Accept-Encoding: gzip. I appreciate that when doing this on the fly, a level such as 6 might be a good choice for the average case, since it gives a good balance between speed and compression.
However, if I have a bunch of static assets that I can GZIP just once ahead of time, and never need to do this again, would there be any downside to using the highest but slowest compression level? I.e., is there now additional overhead for the client that would not have been incurred had a lower compression level been used?
Great question, and an underexposed issue. Your intuition is solid – for some compression algorithms, choosing the max level of compression can require more work from the decompressor when it's unpacked.
Luckily, that's not true for gzip – there's no extra overhead for the client/browser to decompress more heavily compressed gzip files (e.g. choosing 9 for compression instead of 6, assuming the standard zlib codebase that most servers use). The best measure for this is decompression rate, which for present purposes is in units of MB/sec, while also monitoring overhead like memory and CPU. Simply going by decompression time is no good because the file is smaller at higher compression settings, and we're not controlling for that factor if we're only using a stopwatch.
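As a rough illustration of that measurement, here is a sketch that computes the decompression rate with zlib's gz* API, dividing by the uncompressed bytes produced so that files compressed at different levels compare fairly (the path is a placeholder):

```cpp
#include <zlib.h>
#include <chrono>
#include <cstddef>
#include <vector>

// Returns the decompression rate in MB/s (of uncompressed output), 0 on error.
double decompression_rate_mb_per_s(const char* gz_path) {
    gzFile f = gzopen(gz_path, "rb");
    if (!f) return 0.0;
    std::vector<char> buf(1 << 20);
    std::size_t total = 0;
    auto t0 = std::chrono::steady_clock::now();
    int n;
    while ((n = gzread(f, buf.data(), static_cast<unsigned>(buf.size()))) > 0)
        total += static_cast<std::size_t>(n);       // uncompressed bytes out
    auto t1 = std::chrono::steady_clock::now();
    gzclose(f);
    double secs = std::chrono::duration<double>(t1 - t0).count();
    return secs > 0.0 ? (total / 1e6) / secs : 0.0;
}
```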
gzip decompression quickly gets asymptotic in terms of both time-to-decompress and memory usage once you get past level 6 compressed content. The time-to-decompress flatlines for levels 7, 8, and 9 in the test results linked by Marcus Müller, though that's coarse-grained data given in whole seconds.
You'll also notice in those results that the memory requirements for decompression are flat for all levels of compression at 0.1 MiB. That's almost unbelievable, just a degree of excellence in software that we rarely see. Mark Adler and colleagues deserve massive props for what they achieved. gzip is a very nice format.
The memory use gets at your question about overhead. There really is none. You don't gain much with level 9 in terms of browser decompression speed, but you don't lose anything.
Now, check out these test results for a bit more texture. You'll see how the gzip decompression rate is slightly faster with level 9 compressed content than with lower levels (at level 9, decomp rate is about 0.9% faster than at level 6, for example). That is interesting and surprising. I wouldn't expect the rate to increase. That was just one set of test results – it may not hold for other scenarios (and the difference is quite small in any case).
Parting note: Precompressing static files is a good idea, but I don't recommend gzip at level 9. You'll get smaller files than gzip-9 by instead using zopfli or libdeflate. Zopfli is a well-established gzip compressor from Google. libdeflate is new but quite excellent. In my testing it consistently beats gzip-9, but still trails zopfli. You can also use 7-Zip to create gzip files, and it will consistently beat gzip-9. (In the foregoing, gzip-9 refers to using the canonical gzip or zlib application that Apache and nginx use).
No, there is no downside on the decompression side when using the maximum compression level. In fact, there is a slight upside, in that better-compressed data decompresses faster. The reason is simply fewer compressed bits that the decompressor has to process.
Actually, in real world measurements a higher compression level yields lower decompression times (which might be primarily caused by the fact that you need to handle less permanent storage and less RAM access).
Since most things a client does with the data are rather expensive compared to gunzipping it, you shouldn't really worry about this at all.
Also be advised that for static assets that are images, Huffman/zlib coding is usually already applied (PNG simply uses zlib!), so you won't gain much by gzipping them. In fact, small images (for example, icons) often fit into a single TCP packet (ignoring the HTTP header, which is sometimes bigger than the image itself), so you don't get any speed gain, though you do save money on transfer volume -- if you deliver terabytes of small images. Now, may I presume you're not Google itself...
Also, I'd like to point you to higher-level optimizations, like tools that can transform your JavaScript code into a more compact shape (e.g. removing whitespace, renaming private variables from my_mother_really_likes_this_number_of_unicorns to m1); also, things like jQuery come in a "precompressed" (minified) form. The same exists for HTML. It doesn't make things easier to debug, but since you seem to be interested in ultimate space saving...
MonetDB is a very efficient column-oriented database. I came to know that it uses lightweight compression algorithms to speed things up. Can someone tell me more about the implementation of these compression/decompression algorithms in MonetDB?
There is currently no compression on primitive values such as integers and floating point numbers. Thus, choosing the appropriate type for your data will make a difference once your tables get large.
The string storage uses pointers into a string heap. Hence, for categorical string columns that contain only a few distinct values, storage will generally be efficient. More advanced compression methods are in the works, but I do not expect them to be available in the next six months.
Finally, we had great experiences running MonetDB on a force-compressed file system (e.g. BTRFS). This greatly reduces the storage footprint of databases and also reduces the IO time, especially on spinning hard disks.
I'm trying to optimize a program which needs to compute a hash for a constant-size window in a data stream at every position (byte) of the stream. It is needed to look up repetitions in disk files much larger than the available RAM. Currently I compute a separate MD5 hash for every window, but it costs a lot of time (the window size is a few kilobytes, so each byte of data is processed a few thousand times). I wonder if there exists a way to compute each subsequent hash in constant (window-size-independent) time, like adding and subtracting one element in a moving average. The hash function may be anything as long as it doesn't give overly long hashes (50-100 bits is OK) and its computation is reasonably fast. It also must give virtually no collisions on up to trillions of not-so-random windows (terabytes of data) - every collision means a disk access in my case (CRC32 is much too weak; MD5 is OK in this respect).
I'll be thankful if you can point me to an existing library function available on Linux, if there is one.
This is my first question here, so please be tolerant if I did something wrong.
regards,
bartosz
The Wikipedia article on rolling hashes has a link to ngramhashing which implements a few different techniques in C++, including:
Randomized Karp-Rabin (sometimes called Rabin-Karp)
Hashing by Cyclic Polynomials (also known as Buzhash)
Hashing by Irreducible Polynomials
(Also available on GitHub)
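To give a feel for how such rolling hashes work, here is a minimal sketch of a Rabin-Karp-style polynomial rolling hash (not taken from that library; the base is arbitrary, and 64 bits of state alone will not meet the "virtually no collisions over terabytes" requirement from the question):

```cpp
// Polynomial rolling hash over a fixed-size window: sliding by one byte
// costs O(1), independent of the window size. Arithmetic is mod 2^64 via
// unsigned wraparound.
#include <cstddef>
#include <cstdint>

class RollingHash {
public:
    explicit RollingHash(std::size_t window) : window_(window), pow_(1) {
        for (std::size_t i = 0; i + 1 < window; ++i) pow_ *= kBase;  // kBase^(w-1)
    }
    // Feed the initial window of `window_` bytes.
    void init(const std::uint8_t* data) {
        hash_ = 0;
        for (std::size_t i = 0; i < window_; ++i) hash_ = hash_ * kBase + data[i];
    }
    // Slide by one byte: remove the outgoing byte, append the incoming one.
    void roll(std::uint8_t out, std::uint8_t in) {
        hash_ = (hash_ - out * pow_) * kBase + in;
    }
    std::uint64_t value() const { return hash_; }

private:
    static constexpr std::uint64_t kBase = 1099511628211ull;  // arbitrary odd base
    std::size_t   window_;
    std::uint64_t pow_;        // kBase^(window_-1), also mod 2^64
    std::uint64_t hash_ = 0;
};
```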
What you describe is pretty close to the basic approach used in data deduplication storage.
In data deduplication systems, we usually use Rabin's fingerprinting method as a fast, rolling hash function.
However, while Rabin fingerprints have good and well-understood collision properties, they are not cryptographically secure, i.e., there will be collisions. Check, e.g., how Bentley et al. used such a method in their compression scheme. The question is if, and how many, collisions you can tolerate. If you can tolerate an occasional collision, a good Rabin fingerprint implementation might be a good idea. Good implementations can process more than 200 MB per second per core.
I am not aware of any approach that is both virtually collision-free (i.e., cryptographically secure) and rolling at the same time. Like PlasmaHH, I have serious doubts that this is actually possible.
Think about whether you can relax your restrictions. Maybe you can afford to miss some duplicates; in that case, faster approaches are possible.
In one of our software products we create records and store them in a binary file. Once the writing operation is complete, we read this binary file back. The issue is that if the binary file is less than 100 MB its performance is good enough, but once the file grows larger its performance suffers.
So I thought of splitting this large binary file (> 100 MB) into smaller ones (< 100 MB), but it seems this does not improve performance. So I was wondering: what would be a better approach to handle this scenario?
It would be a really great help if you could comment on this.
Thanks
Maybe you could try using an SQLite database instead.
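To make that concrete, a minimal sketch of what the write path might look like with the SQLite C API (table and column names are made up; reading back is the symmetric SELECT ... WHERE id = ?):

```cpp
#include <sqlite3.h>
#include <string>

// Opens (or creates) the database file and the records table.
sqlite3* open_store(const char* path) {
    sqlite3* db = nullptr;
    if (sqlite3_open(path, &db) != SQLITE_OK) return nullptr;
    sqlite3_exec(db,
        "CREATE TABLE IF NOT EXISTS records(id INTEGER PRIMARY KEY, payload BLOB)",
        nullptr, nullptr, nullptr);
    return db;
}

// Inserts one record as a blob keyed by id.
bool store_record(sqlite3* db, long long id, const std::string& payload) {
    sqlite3_stmt* stmt = nullptr;
    if (sqlite3_prepare_v2(db,
            "INSERT INTO records(id, payload) VALUES(?1, ?2)",
            -1, &stmt, nullptr) != SQLITE_OK)
        return false;
    sqlite3_bind_int64(stmt, 1, id);
    sqlite3_bind_blob(stmt, 2, payload.data(),
                      static_cast<int>(payload.size()), SQLITE_TRANSIENT);
    bool ok = (sqlite3_step(stmt) == SQLITE_DONE);
    sqlite3_finalize(stmt);
    return ok;
}
```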
It is always quite difficult to provide accurate answers with only a glimpse of the system, but have you actually measured the throughput you are getting?
As a first solution, I would simply recommend using a dedicated disk (so there are no concurrent read/write actions from other processes), and a fast one at that. That way it is just the cost of a hardware upgrade, and we all know hardware is usually cheaper than software ;) You may even add a RAID controller to maximize throughput.
If you are still limited by disk throughput, there are newer technologies out there based on flash: USB keys (though they may not seem very professional) or the "new" solid-state drives may provide more throughput than a mechanical disk.
Now, if the disk approach is not fast enough, or you can't get your hands on good SSDs, there are other solutions, but they involve software changes, and I propose them off the top of my head.
A socket approach: the second utility listens on a port and you send it the data there. On a local machine it's relatively fast, and you parallelize the work too, so even if the size of the data grows, you will still start processing fairly quickly.
A memory-mapping approach: write to a dedicated area in shared memory and have the utility read from that area (Boost.Interprocess may help; there are other solutions).
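A plain POSIX sketch of that shared-memory idea, roughly what Boost.Interprocess wraps for you (the region name and size are made up; synchronization between writer and reader is left out):

```cpp
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>
#include <cstddef>

// Creates (or opens) a named shared-memory region and maps it read/write.
void* create_shared_region(const char* name, std::size_t size) {
    int fd = ::shm_open(name, O_CREAT | O_RDWR, 0600);     // e.g. "/records"
    if (fd < 0) return nullptr;
    if (::ftruncate(fd, static_cast<off_t>(size)) != 0) {
        ::close(fd);
        return nullptr;
    }
    void* p = ::mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    ::close(fd);                    // the mapping stays valid after close
    return p == MAP_FAILED ? nullptr : p;
}
// The reading utility shm_open()s the same name with O_RDONLY and mmap()s it
// with PROT_READ; some form of synchronization (e.g. a process-shared
// semaphore) is still needed on top of this. Link with -lrt on older glibc.
```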
Note that if the read is sequential, I find it more "natural" to try a 'pipe' approach (à la Unix) so that the two processes execute concurrently. In a traditional pipe, the data may not hit the disk at all.
A shame, isn't it, that in this age of overwhelming processing power, we are still struggling with disk IO?
If your app reads the data sequentially, migrating to a DB would not help increase performance. If random access is used, you should consider moving the data into a DB, especially if different indices are used. You should also check whether enough resources are available: if the file is loaded completely into memory, virtual memory management could affect performance (swapping, paging). Depending on your OS settings, a limit on file IO buffers could be reached, and the file system itself could be fragmented.
To get a higher-quality answer, you should provide information about your hardware, OS, memory, and file system, and about the way your data file is used. Then you could get hints about kernel tuning, etc.
So what is the retrieval mechanism here? How does your application know which of the smaller files to look in to find a record? If you have split up the big file without implementing some form of keyed lookup - indexing, partitioning - you have not addressed the problem, just re-arranged it.
Of course, if you have implemented some form of indexing then you have started down the road of building your own database.
Without knowing more regarding your application it would be rash for us to offer specific advice. Maybe the solution would be to apply an RDBMS solution. Possibly a NoSQL approach would be better. Perhaps you need a text indexing and retrieval engine.
So...
How often does your application need to retrieve records? How does it decide which records to get? What is your definition of poor performance? Why did you (your project) decide to use flat files rather than a database in the first place? What sort of records are we talking about?