Are there any compression programs (e.g. gzip, 7zip, xz) that can use GPUs to accelerate compression? I've been playing with a 5.7 GB qcow2 image of an OpenStack instance, and with xz using extra options I could get the file compressed down to 1.6 GB, but it took almost 2 hours, compared to ~10 minutes when the compressed file size was 4.5 GB. It got me wondering whether any programs use GPUs to speed this up. I saw some discussion around NVIDIA's CUDA library (Compression library using Nvidia's CUDA). Has anything been developed since then?
Well, there is the new WinZip, which uses the GPU via OpenCL. You will need to test it to see whether it works well for you.
I have a large collection of ISO files (around 1 GB each) that have shared 'runs of data' between them. For example, one of the audio tracks may be the same (same length and content across five of the ISOs), but it may not necessarily have the same name or location in each.
Is there some compression technique I can apply that will detect and losslessly deduplicate this information across multiple files?
For anyone reading this, after some experimentation it turns out that by putting all the similar ISO or CHD files in a single 7zip archive (a solid archive with the maximum dictionary size of 1536 MB), I was able to achieve extremely high compression via deduplication of already compressed data.
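If it helps anyone, the invocation was roughly along these lines (the archive name is just a placeholder, and a 1536 MB dictionary needs a machine with a lot of RAM to compress):

    7z a -t7z -mx=9 -md=1536m -ms=on collection.7z *.iso

Here -ms=on turns on solid mode and -md sets the dictionary size; it's the solid mode plus the huge dictionary that lets 7zip spot the shared runs of data across files.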
The lrzip program is designed for this kind of thing. It is available in most Linux/BSD package managers, or via Cygwin for Windows.
It uses an extended version of rzip to first de-duplicate the source files and then compresses them. Because it uses mmap(), it does not run into RAM-size limits the way 7zip does.
In my tests lrzip was able to massively de-duplicate similar ISOs, bringing a 32GB set of OS installation discs down to around 5GB.
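For reference, a typical run looks roughly like this; lrztar is the tar wrapper that ships with lrzip, and the directory name is just an example (check the man page for the exact options on your version):

    lrztar isos/            # produces isos.tar.lrz
    lrztar -d isos.tar.lrz  # unpacks it again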
I have a lot of rrd files which were generated on a 1st-gen Cubieboard (1 GHz CPU, 1 core, 1 GB RAM), and about a year ago, when I migrated the data loggers to an x86_64 machine, I noticed that I could no longer read those old files. I didn't know they were platform specific.
I know that there is a workflow where I export the data from the files into XML files and then import them on the other architecture, but this is not my first choice, as the old board is painfully slow and has other important work to do, like being a DNS server. The rrdtool version is stuck at 1.4.7 and there are 1.4 GB worth of files to process.
Is there a way to emulate the Cubieboard on a fast Intel machine or some x86_64 based tool which can convert those rrd files?
RRD files are not portable between architectures, as you have noticed. The format depends not only on the 32/64-bit integer size, but also on endianness, and even on the compiler's behaviour with structure padding. It may be possible to compile the library in 32-bit mode on your new platform, but it is still not likely to be compatible with your old RRD files, as there are other hardware differences to consider.
In short, your best option is to (slowly?) export to XML and then re-import on the new architecture, as you already mentioned. I have previously done this on a large RRD installation, running the two setups in parallel for a while to avoid gaps in the data, but it takes time.
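The export/import itself is just the standard rrdtool dump and restore subcommands, so on the Cubieboard something like the loop below (file names are placeholders), followed by a restore for each file on the x86_64 box:

    for f in *.rrd; do rrdtool dump "$f" > "${f%.rrd}.xml"; done
    # copy the XML files over, then on the new machine:
    rrdtool restore file.xml file.rrd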
I seem to remember that Tobi was, at one time, planning an architecture-independent RRD format for RRD 1.6, but even if this comes to pass, it won't help you with your legacy data.
For a scientific project I need to compress video data. The video, however, is not natural footage, and the quality requirements for the compression are different from those for natural footage (preservation of hard edges, for example, is more important than smooth gradients or color correctness).
I'm looking for a library that can be easily integrated into an existing C++ project and that lets me experiment with different video compression algorithms.
Any suggestions?
Look at FFmpeg. It is the most mature open source tool for video compression and decompression. It comes with a command line tool, and with libraries for codecs and muxers/demuxers that can be statically or dynamically linked.
As satuon already answered, FFmpeg is the go-to solution for all things multimedia. However, I just wanted to suggest an easier path for you than trying to hook your program up to its libraries. It would probably be far easier for you to generate a sequence of raw RGB images within your program, dump each out to disk (perhaps using a ridiculously simple format like PPM), and then use FFmpeg from the command line to compress them into a proper movie.
This workflow might cut down on your prototyping and development time.
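As a rough illustration of that workflow (the frame pattern, frame rate and codec settings below are just placeholders to adjust):

    ffmpeg -framerate 25 -i frame_%05d.ppm -c:v libx264 -qp 0 output.mkv

With -qp 0, libx264 runs losslessly, which may suit your hard-edge requirement; for lossy output you would normally use -crf instead.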
As for the specific video codec you will want to use, you have a plethora of options available to you. One of the most important considerations will be: Who needs to be able to play your video and what software will they have available?
I've been wanting to play around with audio parsing for a while now but I haven't really been able to find the correct library for what I want to do.
I basically just want to parse through a sound file and get amplitudes/frequencies and other relevant information at certain times during the song (every 10 ms or so), so I can graph the data and see, for example, where the song speeds up a lot and where it gets really loud.
I've looked at OpenAL quite a bit, but it doesn't look like it provides this ability, and other than that I have not had much luck finding out where to start. If anyone has done this or used a library which can do this, a pointer in the right direction would be greatly appreciated. Thanks!
For parsing and decoding audio files I had good results with libsndfile, which runs on Windows/OSX/Linux and is open source (LGPL license). This library does not support mp3 (the author wants to avoid licensing issues), but it does support FLAC and Ogg/Vorbis.
If working with closed source libraries is not a problem for you, then an interesting option could be the QuickTime SDK from Apple. This SDK is available for OSX and Windows and is free for registered developers (you can register as an Apple developer for free as well). With the QT SDK you can parse all the file formats that the QuickTime Player supports, and that includes .mp3. The SDK gives you access to all the codecs installed by QuickTime, so you can read .mp3 files and have them decoded to PCM on the fly. Note that to use this SDK you have to have the free QuickTime Player installed.
As far as signal processing libraries go, I honestly can't recommend any, as I have written my own functions (for speech recognition, in case you are curious). There are a few open source projects that seem interesting listed on this page.
I recommend that you start simple, for example by analyzing amplitude data, which is readily available from the PCM samples without any further processing. Being able to visualize the data is very useful; I have found Audacity to be an excellent visualization tool, and since it is open source you can build your own tests inside it.
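To make the "start simple" suggestion a bit more concrete, here is a minimal sketch (not production code; file name handling, window length and error handling are deliberately bare) that uses libsndfile to print the RMS amplitude of roughly every 10 ms of audio:

    // Minimal sketch: print the RMS amplitude of ~10 ms windows of an audio file.
    #include <sndfile.h>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    int main(int argc, char **argv)
    {
        if (argc < 2) { std::fprintf(stderr, "usage: %s <audiofile>\n", argv[0]); return 1; }

        SF_INFO info = {};                                 // must be zeroed before sf_open()
        SNDFILE *snd = sf_open(argv[1], SFM_READ, &info);
        if (!snd) { std::fprintf(stderr, "%s\n", sf_strerror(nullptr)); return 1; }

        const sf_count_t window = info.samplerate / 100;   // ~10 ms worth of frames
        std::vector<float> buf(window * info.channels);

        sf_count_t frames, pos = 0;
        while ((frames = sf_readf_float(snd, buf.data(), window)) > 0) {
            double sum = 0.0;
            for (sf_count_t i = 0; i < frames * info.channels; ++i)
                sum += double(buf[i]) * buf[i];
            std::printf("%.3f s  RMS %.4f\n",
                        double(pos) / info.samplerate,
                        std::sqrt(sum / double(frames * info.channels)));
            pos += frames;
        }
        sf_close(snd);
        return 0;
    }

Frequency information would need an FFT on top of each window, but the amplitude graph alone already shows where the song gets loud.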
Good luck!
I have a machine with a 500 MHz CPU and 256 MB RAM running 32-bit Linux.
I have a large number of files, each around 300 KB in size. I need to compress them very fast. I have set the zlib compression level to Z_BEST_SPEED. Is there any other measure I could take?
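For context, the per-file compression is essentially the sketch below (names are placeholders; the real code reads each file from disk into the input buffer first):

    // Rough sketch of the per-file step: compress an in-memory buffer with zlib
    // at Z_BEST_SPEED and return the compressed bytes (empty vector on failure).
    #include <zlib.h>
    #include <vector>

    std::vector<unsigned char> compress_fast(const std::vector<unsigned char> &in)
    {
        uLongf outLen = compressBound(in.size());
        std::vector<unsigned char> out(outLen);
        if (compress2(out.data(), &outLen, in.data(), in.size(), Z_BEST_SPEED) != Z_OK)
            return {};
        out.resize(outLen);
        return out;
    }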
Is it possible to compress 25-30 files like this in a second on such a machine?
You are essentially talking about a speed of roughly 10 MB/sec. Even if you were only copying the files from one place to another, I would doubt that your slow hardware could do it. So for compression I would vote no - it's not possible "to compress 25-30 files like this in a second on such a machine".