efficient TIFF tile extraction C++

I am working with large TIFF images, around 1 GB and roughly 20000 x 20000 pixels each. I need to extract several tiles (of about 300x300 pixels) out of the images, at random positions.
I tried the following solutions:
libtiff (the only low-level library I could find) offers TIFFReadScanline(), but that means reading in around 19700 unnecessary pixels per scanline.
I implemented my own TIFF reader which extracts a tile from the image without reading unnecessary pixels. I expected it to be faster, but doing a seekg for every row of the tile makes it very slow (see the sketch below). I also tried reading all the rows of the file that contain my tile into a buffer and then extracting the tile from the buffer, but the results are more or less the same.
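For illustration, this is roughly what that per-row approach looks like. The offsets here are hypothetical (real TIFF pixel data lives in strips whose offsets come from the IFD and may be compressed), so treat it as a sketch of the access pattern, not of a real TIFF reader:

```cpp
#include <fstream>
#include <vector>
#include <cstdint>

// Hypothetical parameters; a real reader would take these from the TIFF tags.
constexpr std::size_t kImageWidth  = 20000;  // pixels per scanline
constexpr std::size_t kBytesPerPix = 3;      // assume 8-bit RGB
constexpr std::size_t kPixelOffset = 8;      // assumed start of pixel data

// Reads a tileWidth x tileHeight tile at (x0, y0): one seekg + read per row.
std::vector<std::uint8_t> readTile(std::ifstream& file,
                                   std::size_t x0, std::size_t y0,
                                   std::size_t tileWidth, std::size_t tileHeight)
{
    std::vector<std::uint8_t> tile(tileWidth * tileHeight * kBytesPerPix);
    for (std::size_t row = 0; row < tileHeight; ++row) {
        const std::size_t offset =
            kPixelOffset + ((y0 + row) * kImageWidth + x0) * kBytesPerPix;
        file.seekg(static_cast<std::streamoff>(offset));   // one seek per row
        file.read(reinterpret_cast<char*>(&tile[row * tileWidth * kBytesPerPix]),
                  static_cast<std::streamsize>(tileWidth * kBytesPerPix));
    }
    return tile;
}
```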
I'd like to receive suggestions that would improve my tile extraction tool!
Everything is welcome: maybe you can propose a more efficient library I could use, some tips about C/C++ I/O, a higher-level strategy for my needs, etc.
Regards,
Juan

[Major edit 14 Jan 10]
I was a bit confused by your mention of tiles, when the tiff is not tiled.
I do use tiled/pyramidal TIFF images. I've created those with VIPS:
vips im_vips2tiff source_image output_image.tif:none,tile:256x256,pyramid
I think you can do this with:
vips im_vips2tiff source_image output_image.tif:none,tile:256x256,flat
You may want to experiment with the tile size. Then you can read tiles using TIFFReadEncodedTile().
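A minimal sketch of that call path, assuming a tiled TIFF like the one produced above (the file path and the x, y pixel coordinates are placeholders, and error checking is omitted):

```cpp
#include <tiffio.h>

// Read the single encoded tile that contains pixel (x, y).
// Sketch only: assumes a tiled, single-plane TIFF; no error checking.
void readOneTile(const char* path, uint32 x, uint32 y)
{
    TIFF* tif = TIFFOpen(path, "r");

    uint32 tileWidth = 0, tileLength = 0;
    TIFFGetField(tif, TIFFTAG_TILEWIDTH, &tileWidth);
    TIFFGetField(tif, TIFFTAG_TILELENGTH, &tileLength);

    tdata_t buf = _TIFFmalloc(TIFFTileSize(tif));            // one decoded tile
    ttile_t tile = TIFFComputeTile(tif, x, y, 0, 0);         // tile index for (x, y)
    TIFFReadEncodedTile(tif, tile, buf, TIFFTileSize(tif));  // decode just that tile

    // ... copy the 300x300 region you need out of buf here ...

    _TIFFfree(buf);
    TIFFClose(tif);
}
```

Note that a 300x300 request can straddle up to four 256x256 tiles, so in the general case you read the (at most) four tiles that intersect your region and stitch the pieces together.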
Multi-resolution storage using pyramidal TIFFs is much faster if you need to zoom in/out. You may also want to use this to display a coarse image almost immediately, followed by the detailed picture.
After switching to (appropriately sized) tiled storage (which will bring you MASSIVE performance improvements for random access!), your bottleneck will be disk I/O. File reads are much faster when done in sequence. Here mmapping may be the solution.
Some useful links:
VIPS
IIPImage
LibTiff.NET stackoverflow
VIPS is an image handling library which can do much more than just read/write. It has its own, very efficient internal format. It has good documentation on its algorithms. For one, it decouples processing from the filesystem, thereby allowing tiles to be cached.
IIPImage is a multi-zoom webserver/browser library. I found its documentation a very good source of information on multi-resolution imaging (like Google Maps).
The other solution on this page, using mmap, is efficient only for 'small' files. I've hit the 32-bit boundaries often. Generally, allocating a 1 GB chunk of memory will fail on a 32-bit OS (even with 4 GB of RAM installed), because the virtual address space gets fragmented after one or two application runs. Still, there is sufficient memory to cache parts, or the whole, of the image. More memory = more performance.

Just mmap your file.
http://www.kernel.org/doc/man-pages/online/pages/man2/mmap.2.html
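A minimal POSIX sketch of the idea (error handling trimmed, the file name is a placeholder; on Windows you would use CreateFileMapping/MapViewOfFile instead):

```cpp
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstdint>

int main()
{
    const int fd = open("huge_image.tif", O_RDONLY);
    struct stat st;
    fstat(fd, &st);

    // Map the whole file read-only; pages are faulted in on demand.
    void* base = mmap(nullptr, static_cast<size_t>(st.st_size),
                      PROT_READ, MAP_PRIVATE, fd, 0);
    if (base == MAP_FAILED) return 1;

    const std::uint8_t* bytes = static_cast<const std::uint8_t*>(base);
    // Tile extraction then becomes pointer arithmetic into `bytes`
    // (offsets depend entirely on how the pixel data is laid out in the file).
    volatile std::uint8_t first = bytes[0];
    (void)first;

    munmap(base, static_cast<size_t>(st.st_size));
    close(fd);
    return 0;
}
```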

Thanks everyone for the replies.
Actually, a change in the way the tiles were requested allowed me to extract them from the files on disk sequentially instead of randomly. That let me load a part of the file into RAM and extract the tiles from there.
The efficiency gain was huge (see the sketch below). Otherwise, if you need random access to a file, mmap is a good deal.
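A rough sketch of that idea, with made-up names and the caveat that it assumes uncompressed, row-major pixel data: read one horizontal band of rows covering a whole batch of tiles in a single sequential read, then cut the individual tiles out of the in-memory band.

```cpp
#include <fstream>
#include <vector>
#include <cstdint>

// Read all rows [yStart, yStart + bandHeight) in one sequential read,
// assuming uncompressed row-major pixel data starting at dataOffset.
std::vector<std::uint8_t> readBand(std::ifstream& file,
                                   std::size_t dataOffset,
                                   std::size_t imageWidth, std::size_t bytesPerPixel,
                                   std::size_t yStart, std::size_t bandHeight)
{
    const std::size_t rowBytes = imageWidth * bytesPerPixel;
    std::vector<std::uint8_t> band(rowBytes * bandHeight);
    file.seekg(static_cast<std::streamoff>(dataOffset + yStart * rowBytes));
    file.read(reinterpret_cast<char*>(band.data()),
              static_cast<std::streamsize>(band.size()));
    return band;   // cut individual tiles out of this buffer, one memcpy per row
}
```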
Regards,
Juan

I did something similar to this to handle an arbitrarily large TARGA (TGA) format file.
The thing that made it simple for that kind of file is that the image is not compressed. You can calculate the position of any arbitrary pixel within the image and find it with a simple seek (the sketch below shows the calculation). You might consider the Targa format if you have the option to specify the image encoding.
If not, there are many varieties of the TIFF format. You probably want to use a library, since its authors have already gone through the pain of supporting all the different variants.
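For reference, a sketch of the offset calculation for an uncompressed 24-bit TGA (assuming an 18-byte header with no image ID field and top-to-bottom row order; real files may store rows bottom-up, so check the image descriptor byte):

```cpp
#include <cstdio>
#include <cstdint>

// Fetch one pixel from an uncompressed 24-bit TGA with a single seek.
// Assumes an 18-byte header, no image ID field, and top-to-bottom rows.
bool readPixel(std::FILE* f, std::uint32_t x, std::uint32_t y,
               std::uint32_t width, std::uint8_t bgr[3])
{
    const long headerSize = 18;
    const long offset = headerSize + (static_cast<long>(y) * width + x) * 3;
    if (std::fseek(f, offset, SEEK_SET) != 0) return false;
    return std::fread(bgr, 1, 3, f) == 3;   // TGA stores pixels as BGR
}
```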

Did you get a specific error message? Depending on how you used that command line, you could have been stepping on your own file.
If that wasn't the issue, try using ImageMagick instead of VIPS if it's an option.

Related

Is there a buffer method for synchronization?

I am trying to stream one video to multiple clients, and I need a buffer to keep frames. How can I write this buffer?
I am working in Visual Studio. I need help with that.
Any image (e.g. a frame) is just a number of bytes (and in most formats, this is how they are represented in their structure anyway). Basically, all you need is to find HOW to get that representation out of the objects of your framework.
std::vector<std::vector<unsigned char>> is your friend, if you keep your images in compressed format (like JPEG).
std::vector<std::vector<vec3b>> is your friend, if you keep your images in uncompressed format (RGB, YUV, HSV, HSL, etc).
Here I suggest that you keep a single image in a single element of a higher-level vector. As you're into image/video processing, I suppose that you already know how to work with vectors =3
Be careful: this method takes a whole lot of memory, since it keeps fully decoded images in memory. If you want to limit the maximum amount of memory spent, use the circular buffer pattern (it costs almost nothing in speed or memory; an effective abstraction on top of std::vector can be written in something like 15 minutes, as sketched below).
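A minimal sketch of such a circular buffer on top of std::vector, storing compressed frames as byte vectors (adapt the element type to whatever your framework hands you):

```cpp
#include <vector>
#include <cstddef>

// Fixed-capacity ring of frames: when full, the oldest frame is overwritten.
class FrameRing {
public:
    explicit FrameRing(std::size_t capacity) : frames_(capacity) {}

    void push(std::vector<unsigned char> frame) {
        frames_[head_] = std::move(frame);        // overwrite the oldest slot
        head_ = (head_ + 1) % frames_.size();
        if (size_ < frames_.size()) ++size_;
    }

    // i = 0 returns the oldest frame still buffered.
    const std::vector<unsigned char>& at(std::size_t i) const {
        const std::size_t oldest = (head_ + frames_.size() - size_) % frames_.size();
        return frames_[(oldest + i) % frames_.size()];
    }

    std::size_t size() const { return size_; }

private:
    std::vector<std::vector<unsigned char>> frames_;
    std::size_t head_ = 0;   // next slot to write
    std::size_t size_ = 0;   // number of valid frames
};
```

Usage is just ring.push(std::move(frameBytes)) on the producer side and ring.at(i) on the consumer side; the capacity caps the memory spent on buffered frames.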
P.S. Also, when you ask a question on SO, try to put as much information as you can - piece of code, framework used, attempted (and failed) approaches to solve the problem. It makes it a lot easier to respond correctly.

Most efficient way to store video data

In order to accomplish some specific editing on some .avi files, I'd like to create an application (in C++) that is able to load, edit, and save those .avi files. But what is the most efficient way? When first thinking about it, a simple 3D array containing a 2D array of pixels for every frame seems the simplest solution; but then its size would be ENORMOUS. I mean, let's assume that a pixel only needs a color. One color would mean 3 bytes (one char each for R, G, B). With a 1920x1080 video format, this would mean about 6 MEGABYTES for only one frame! This data may or may not be smaller if using pointers to the colors, so that already-used colors won't take more space - I don't really know, since I'm pretty new to C++ and the whole low-level stuff. (As a comparison: one of my AVI files recorded with the Xvid codec is 40 seconds long, 30 fps, and only 2 MB.)
So how would you actually store the video data (Not even the audio, just the video) efficiently (while still being easily able to perform per-frame-changes on it)?
As you have realised, uncompressed video is enormous and it is not practical to store an entire video in this way.
Video compression is an extremely complex topic, but more-or-less, it works as follows: certain "key-frames" are compressed using fairly standard compression techniques similar or identical to still-photo compression such as JPEG. Frames following key-frames are compressed by comparing the frame with the previous one and looking for changes (such as moving blocks). Every now and again, a new key-frame is used.
You don't really have to worry much about that as you are not going to write your own video coder/decoder (codec). There are standard ones.
What will happen is that your program will decode the compressed video frame-by-frame and keep a certain number of frames in memory while you are working on them and then re-encode them when it is finished. In the uncompressed form, you will have access to the individual pixels and can work on them how you want.
You are probably not going to do that either by yourself - it is very hard. You probably need to use a framework, such as OpenCV. There are a huge number of standard filters and tools built in to these frameworks, and it may be that what you want to do is already implemented somewhere.
The OpenCV framework can return individual frames in a Mat object, and you can then access the pixels (a minimal sketch follows the links below). See this post: Get Pixels from Mat
OpenCV
Tutorial page: OpenCV Tutorial
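As an illustration of the last point, a minimal OpenCV sketch that decodes a video frame by frame into a cv::Mat and touches individual pixels (the file name and the edit itself are placeholders):

```cpp
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap("input.avi");           // decodes via the installed codecs
    if (!cap.isOpened()) return 1;

    cv::Mat frame;
    while (cap.read(frame)) {                    // one decoded BGR frame at a time
        for (int y = 0; y < frame.rows; ++y) {
            for (int x = 0; x < frame.cols; ++x) {
                cv::Vec3b& px = frame.at<cv::Vec3b>(y, x);
                px[2] = cv::saturate_cast<uchar>(px[2] + 20);  // example edit: boost red
            }
        }
        // ... hand the edited frame to a cv::VideoWriter to re-encode ...
    }
    return 0;
}
```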

Cropping large jpeg

There is a task: to write a program that will crop JPEG files. But the problem is that some JPEG files have large sizes - hundreds of megabytes. So the question: is it possible to crop a JPEG file without loading the whole file into RAM, using something like fseek() and decoding only the parts that are needed?
Is that possible? If yes, maybe there are libraries that do this.
Upd. All this will be used for deep zoom technology. So when the deep zoom viewer asks for a file, this program will provide it, and this should happen in real time.
There are two ways to accomplish this.
The first is lossless cropping, where you don't decode the file all the way but work with the 8x8 DCT blocks. You'll need to use a library that has this capability, and it places some restrictions on what you can crop: you can't crop to a boundary that isn't on a DCT block edge, which limits you to multiples of 8 or 16 depending on the subsampling in the file.
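For example, the jpegtran utility that ships with libjpeg/libjpeg-turbo can do this kind of lossless crop from the command line; the offsets are adjusted to the nearest iMCU boundary (multiples of 8 or 16, as noted above). Something along the lines of:

```
jpegtran -copy none -crop 300x300+1024+2048 -outfile tile.jpg huge.jpg
```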
The second way is to use a library that allows you to read and write one line at a time. I know that the IJG library can do this, and probably others as well. This is the easy way, but the downside is that the image goes through a decompression/recompression pass and will lose quality and/or be larger.
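A rough sketch of that second approach with the IJG library (libjpeg): decode scanline by scanline, keep only the rows and columns inside the crop rectangle, and hand the result to a compressor afterwards. The re-encoding side is omitted, error handling is minimal, and the function name is illustrative:

```cpp
#include <cstdio>
#include <vector>
#include <jpeglib.h>

// Decode src line by line, keeping only the crop rectangle (x0, y0, w, h).
// Returns packed pixel rows; re-encoding with jpeg_compress_struct is omitted.
std::vector<unsigned char> cropScanlines(std::FILE* src,
                                         JDIMENSION x0, JDIMENSION y0,
                                         JDIMENSION w, JDIMENSION h)
{
    jpeg_decompress_struct cinfo;
    jpeg_error_mgr jerr;
    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_decompress(&cinfo);
    jpeg_stdio_src(&cinfo, src);
    jpeg_read_header(&cinfo, TRUE);
    jpeg_start_decompress(&cinfo);

    std::vector<unsigned char> row(cinfo.output_width * cinfo.output_components);
    std::vector<unsigned char> out;
    out.reserve(static_cast<std::size_t>(w) * h * cinfo.output_components);

    while (cinfo.output_scanline < cinfo.output_height) {
        unsigned char* rowPtr = row.data();
        jpeg_read_scanlines(&cinfo, &rowPtr, 1);          // only one row in memory
        const JDIMENSION y = cinfo.output_scanline - 1;   // row we just read
        if (y >= y0 && y < y0 + h) {
            const unsigned char* begin = row.data() + x0 * cinfo.output_components;
            out.insert(out.end(), begin, begin + w * cinfo.output_components);
        }
    }

    jpeg_finish_decompress(&cinfo);
    jpeg_destroy_decompress(&cinfo);
    return out;
}
```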

Appropriate image file format for losslessly compressing series of screenshots

I am building an application which takes a great number of screenshots while "recording" the operations performed by the user on the Windows desktop.
For obvious reasons I'd like to store this data in as efficient a manner as possible.
At first I thought about using the PNG format to get this done. But I stumbled upon this: http://www.olegkikin.com/png_optimizers/
The best algorithms only managed a 3 to 5 percent improvement on an image of GUI icons. This is highly discouraging and reveals that I'm going to need to do better, because just using PNG will not allow me to use previous frames to help the compression ratio; the total file size will continue to grow linearly with time.
I thought about solving this with a bit of a hack: Just save the frames in groups of some number, side by side. For example I could just store the content of 10 1280x1024 captures in a single 1280x10240 image, then the compression should be able to take advantage of repetitions across adjacent images.
But the problem with this is that the algorithms used to compress PNG are not designed for this. I am arbitrarily placing images at 1024-pixel intervals from each other, and only 10 of them can be grouped together at a time. From what I have gathered after a few minutes scanning the PNG spec, filtering operates on individual scanlines and the DEFLATE compression that follows only has a 32 KB window, so there is actually no way that information from 1024 rows above (megabytes earlier in the stream) could be referenced from down below.
So I've found the MNG format which extends PNG to allow animations. This is much more appropriate for what I am doing.
One thing that I am worried about is how much support there is for "extending" an image/animation with new frames. The nature of the data generation in my application is that new frames get added to a list periodically. But I do have a simple semi-solution to this problem, which is to cache a chunk of recently generated data and incrementally produce an "animation", say, every 10 frames. This will allow me to tie up only 10 frames' worth of uncompressed image data in RAM, not as good as offloading it to the filesystem immediately, but it's not terrible. After the entire process is complete (or even using free cycles in a free thread, during execution) I can easily go back and stitch the groups of 10 together, if it's even worth the effort to do it.
Here is my actual question that everything has been leading up to: is MNG the best format for my requirements? Those requirements are:
1. A C/C++ implementation available with a permissive license.
2. 24/32-bit color at 4+ megapixel resolution (some folks run 30-inch monitors).
3. Lossless or near-lossless (retains text clarity) compression, with provisions to reference previous frames to aid that compression.
For example, here is another option that I have thought about: video codecs. I'd like to have lossless quality, but I have seen examples of h.264/x264 reproducing remarkably sharp stills, and its performance is such that I can capture at a much faster interval. I suspect that I will just need to implement both of these and do my own benchmarking to adequately satisfy my curiosity.
If you have access to a PNG compression implementation, you could easily optimize the compression without having to use the MNG format by just preprocessing the "next" image as a difference with the previous one. This is naive but effective if the screenshots don't change much, and compressing the "almost empty" difference images as PNGs will greatly decrease the required storage space (see the sketch below).
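A sketch of that preprocessing step, assuming raw frames of equal size already unpacked to bytes (the PNG encoding itself, e.g. via libpng, is left out). The stored frame becomes the byte-wise difference from the previous one, which is mostly zeros for a static desktop and therefore compresses very well, and the subtraction is exactly reversible:

```cpp
#include <vector>
#include <cstdint>

// Compute the byte-wise difference of `current` against `previous` (same size).
// Wrap-around subtraction is exactly reversible: current = diff + previous (mod 256).
std::vector<std::uint8_t> diffFrame(const std::vector<std::uint8_t>& previous,
                                    const std::vector<std::uint8_t>& current)
{
    std::vector<std::uint8_t> diff(current.size());
    for (std::size_t i = 0; i < current.size(); ++i)
        diff[i] = static_cast<std::uint8_t>(current[i] - previous[i]);
    return diff;   // encode this buffer as a PNG instead of the raw frame
}
```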

If I take a loss-compressed file and save it again (e.g. JPEG) will there be loss of quality?

I've often wondered: if I load a compressed image file, edit it and then save it again, will it lose some quality? What if I use the same quality setting when saving: will the algorithm somehow detect that the file has already been compressed as a JPEG, and therefore see that there is no point in compressing the displayed representation again?
Would it be a better idea to always keep the original (say, a PSD) and always make changes to it and then save it as a JPEG or whatever I need?
Yes, you will lose further file information. If making multiple changes, work off of the original uncompressed file.
When it comes to lossy compression image formats such as JPEG, successive compression will lead to perceptible quality loss. The quality loss can be in the forms such as compression artifacts and blurriness of the image.
Even if one uses the same quality settings to save an image, there will still be quality loss. The only way to "preserve quality", or better yet lose as little quality as possible, is to use the highest quality setting available. Even then, there is no guarantee that there won't be quality loss.
Yes, it would be a good idea to keep a copy of the original if one is going to make an image using a lossy compression scheme such as JPEG. The original could be saved with a compression scheme which is lossless such as PNG, which will preserve the quality of the file at the cost of (generally) larger file size.
(Note: There is a lossless version of JPEG, however, the most common one uses techniques such as DCT to process the image and is lossy.)
In general, yes. However, depending on the compression format there are usually certain operations (mainly rotation and mirroring) that can be performed without any loss of quality by software designed to work with the properties of the file format.
Theoretically, since JPEG compresses each 8x8 block of pixels independently, it should be possible to keep all unchanged blocks of an image if it is saved with the same compression settings, but I'm not aware of any software that implements this.
Of course, because the compression level used initially will probably be different from that in your subsequent saves. You can easily check this by using image manipulation software (e.g. Photoshop): save your file several times and change the compression level each time, just by a slight bit. You'll see the image degrade.
If the changes are local (fixing a few pixels, rather than reshading a region) and you use the original editing tool with the same settings, you may avoid degradation in the areas that you do not affect. Still, expect some additional quality loss around the area of change as the compressed blocks are affected, and cannot be recovered.
The real answer remains to carry out editing on the source image, captured without compression where possible, and to apply the desired degree of compression only when targeting the image for use.
Yes, you will always lose a bit of information when you re-save an image as JPEG. How much you lose depends on what you have done to the image after loading it.
If you keep the image the same size and only make minor changes, you will not lose that much data. When the image is loaded, an approximation of the original image is recreated from the compressed data. If you resave the image using the same compression, most of the data that you lose will be data that was recreated when loading.
If you resize the image, or edit large areas of it, you will lose more data when resaving it. Any edited part of the image will lose about the same amount of information as when you first compressed it.
If you want to get the best possible quality, you should always keep the original.