Does Brotli compress better with a larger window?

What is the window size in Brotli?
Is it true that setting a larger window size improves the compression ratio?

From RFC 7932:
The possible sliding window sizes range from 1 KiB - 16 B to 16 MiB - 16 B.
In other words, the window is (2^WBITS - 16) bytes, where the stream's WBITS parameter ranges from 10 to 24.
In general, yes, a larger window size will improve the compression ratio, but only if the data is likely to contain matching strings that far apart from each other. You can search for compression benchmarks that include varying the window size.
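For instance, the reference Brotli library exposes the window through the lgwin parameter (the base-2 log of the window size, 10 to 24). The following is only a sketch using the C one-shot API from brotli/encode.h; the all-'a' buffer is a stand-in for your own data, and the library is assumed to be installed and linked (e.g. -lbrotlienc -lbrotlicommon):

    #include <brotli/encode.h>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    int main() {
        // Stand-in input; substitute data with long-range repetitions to see the effect.
        std::vector<uint8_t> input(8 * 1024 * 1024, 'a');

        // Compress once with the smallest window (lgwin = 10) and once with the
        // largest (lgwin = 24), then compare the encoded sizes.
        for (int lgwin : {BROTLI_MIN_WINDOW_BITS, BROTLI_MAX_WINDOW_BITS}) {
            size_t encoded_size = BrotliEncoderMaxCompressedSize(input.size());
            std::vector<uint8_t> encoded(encoded_size);
            if (BrotliEncoderCompress(BROTLI_MAX_QUALITY, lgwin, BROTLI_MODE_GENERIC,
                                      input.size(), input.data(),
                                      &encoded_size, encoded.data())) {
                std::printf("lgwin=%d -> %zu bytes\n", lgwin, encoded_size);
            }
        }
        return 0;
    }

The larger window can only pay off when matching strings actually lie megabytes apart; on small or locally repetitive inputs the two results will be essentially identical, which is why benchmarking on your own data matters.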

Related

C++ running out of memory trying to draw large image with OpenGL

I have created a simple 2D image viewer in C++ using MFC and OpenGL. This image viewer allows a user to open an image, zoom in/out, pan around, and view the image in its different color layers (cyan, yellow, magenta, black). The program works wonderfully for reasonably sized images. However, I am doing some stress testing on some very large images and I am easily running out of memory. One such image that I have is 16,700x15,700. My program will run out of memory before it can even draw anything because I am dynamically creating a UCHAR[] with a size of height x width x 4. I multiply by 4 because there is one byte for each RGBA value when I feed this array to glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, (GLvoid*)myArray).
I've done some searching and have read a few things about splitting my image up into tiles, instead of one large texture on a single quad. Is this something that I should be doing? How will this help me with my memory? Or is there something better that I should be doing?
Your allocation is 16.7k * 15.7k * 4 bytes, which is roughly 1 GB. The rest of the answer depends on whether you are building a 32-bit or a 64-bit executable, and whether you are making use of Physical Address Extension (PAE). If you are unfamiliar with PAE, chances are you aren't using it, by the way.
Assuming 32 Bit
If you have a 32-bit executable, you can address at most 3 GB of memory, so a third of your addressable memory is used up by this single allocation. To add to the problem, when you allocate a chunk of memory, that memory must be available as a single contiguous range of free address space. You might easily have more than 1 GB free, but only in chunks smaller than 1 GB, which is why people suggest you split your texture into tiles. Splitting it into a 32 x 32 grid of smaller tiles, for example, means making 1024 allocations of about 1 MB each (this is probably unnecessarily fine-grained).
Note: citation required, but some configurations of Linux allow only 2 GB.
Assuming 64 Bit
It seems unlikely that you are building a 64-bit executable, but if you were, the logically addressable memory is much higher. Typical limits are 2^42 or 2^48 bytes (4096 GB and 256 TB, respectively). This means that large allocations shouldn't fail under anything other than artificial stress tests, and you will exhaust your swap file long before you exhaust the logical address space.
If your constraints/hardware allow, I'd suggest building 64-bit instead of 32-bit. Otherwise, see below.
Tiling vs. Subsampling
Tiling and subsampling are not mutually exclusive, to be clear up front. You may only need to make one change to solve your problem, but you might choose to implement a more complex solution.
Tiling is a good idea if you are in a 32-bit address space. It complicates the code, but it removes the single 1 GB contiguous-block problem that you seem to be facing. If you must build a 32-bit executable, I would prefer that over subsampling the image.
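As a rough illustration of the tiling approach, here is a sketch (not drop-in code) that uploads the image as many small textures so that no single CPU-side buffer exceeds about 1 MB. readTileRGBA is a hypothetical helper you would write to fetch one tile's pixels, for example from a memory-mapped file:

    #include <windows.h>   // MFC/Win32 build: windows.h must precede GL/gl.h
    #include <GL/gl.h>
    #include <algorithm>
    #include <vector>

    // Hypothetical helper: fills dst with the w x h tile at (x, y) of the source
    // image as tightly packed RGBA bytes.
    void readTileRGBA(int x, int y, int w, int h, unsigned char* dst);

    // Upload the image as a grid of small textures instead of one ~1 GB buffer.
    std::vector<GLuint> uploadTiled(int imageW, int imageH) {
        const int tileW = 512, tileH = 512;                  // ~1 MB of RGBA per tile
        std::vector<unsigned char> tile(tileW * tileH * 4);  // reused for every tile
        std::vector<GLuint> textures;

        for (int y = 0; y < imageH; y += tileH) {
            for (int x = 0; x < imageW; x += tileW) {
                const int w = std::min(tileW, imageW - x);
                const int h = std::min(tileH, imageH - y);
                readTileRGBA(x, y, w, h, tile.data());

                GLuint tex = 0;
                glGenTextures(1, &tex);
                glBindTexture(GL_TEXTURE_2D, tex);
                glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, w, h, 0,
                             GL_RGBA, GL_UNSIGNED_BYTE, tile.data());
                textures.push_back(tex);  // draw each texture on its own quad at (x, y)
            }
        }
        return textures;
    }

A side benefit: each tile stays comfortably under typical GL_MAX_TEXTURE_SIZE limits, which a single 16,700-wide texture would exceed on much hardware.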
Subsampling the image means that you have an additional (albeit smaller) block of memory for the subsampled version alongside the original image. It might have a performance advantage inside OpenGL, but set that against the additional memory pressure.
A third way, with additional complications, is to stream the image from disk when necessary. If you zoom out to show the whole image, you will be subsampling more than 100 image pixels per screen pixel on a 1920 x 1200 monitor. You might choose to create an image that is significantly subsampled by default, and use that until you are sufficiently zoomed in that you need a higher-resolution version of a subset of the image. If you are using SSDs this can give acceptable performance, but it adds a lot by way of additional complication.
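A sketch of the level-of-detail choice, assuming you pre-generate power-of-two subsampled copies of the image (level 0 is full resolution, level L is subsampled by 2^L in each direction):

    #include <algorithm>
    #include <cmath>

    // screenPxPerImagePx: how many screen pixels one full-resolution image pixel
    // currently covers (< 1 when zoomed out). Returns the coarsest prebuilt level
    // that still provides roughly one stored pixel per screen pixel.
    int chooseDetailLevel(double screenPxPerImagePx, int coarsestLevel) {
        if (screenPxPerImagePx >= 1.0) return 0;  // zoomed in: use full resolution
        int level = static_cast<int>(std::floor(std::log2(1.0 / screenPxPerImagePx)));
        return std::min(std::max(level, 0), coarsestLevel);
    }

Fully zoomed out on a 1920 x 1200 monitor, one screen pixel covers roughly 13 image pixels in each direction for the 16,700 x 15,700 image, so this picks level 3: an image of about 2088 x 1963 (around 16 MB of RGBA), with the full-resolution tiles only streamed in once the user zooms close enough.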

Image size increases after loading

I have an RGB JPEG image which weighs about 11 MB and whose resolution is 7680 x 4320. I use an array of uchar4 to store it in RAM. sizeof(uchar4) is 4 bytes, which is logical. It is not hard to calculate that the size of the array will be 4 x 7680 x 4320 = 132710400 bytes = approx. 126 MB. So how come the image weighs only 11 MB when it is stored on the hard drive, but 126 MB after being loaded into RAM?
So your question is really why the image is smaller when it is stored on disk, since the size in memory is exactly what you expected, right?
Basically all common image formats, including the JPEG you are using, do not store the pixel values as-is. They apply a compression algorithm first. Some formats like PNG or GIF use lossless compression; others like JPEG use lossy compression, which means that image quality gets slightly worse every time the image is saved. In exchange, these formats achieve much better compression.
All compression algorithms exploit the fact that image pixels are not (statistically) independent of each other. Nearby pixels are usually correlated, and this correlation is used to reduce the amount of data. Because different images have different amounts of correlation, the file size can vary even when the number of pixels is the same.
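You can see this directly by decoding the file and comparing sizes. A minimal sketch, assuming the single-header stb_image library is available and using photo.jpg as a placeholder file name:

    #define STB_IMAGE_IMPLEMENTATION
    #include "stb_image.h"
    #include <cstdio>

    int main() {
        int w = 0, h = 0, channelsInFile = 0;
        // Force 4 channels (RGBA) so the buffer matches a uchar4 array.
        unsigned char* pixels = stbi_load("photo.jpg", &w, &h, &channelsInFile, 4);
        if (!pixels) return 1;

        // The decompressed size depends only on the pixel count, never on how
        // small the compressed file was on disk.
        std::printf("%d x %d -> %llu bytes in RAM\n", w, h, 4ULL * w * h);
        stbi_image_free(pixels);
        return 0;
    }

For 7680 x 4320 this prints 132710400 bytes (about 126 MB), regardless of whether the 11 MB JPEG or a much larger PNG of the same picture was loaded.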

How to compress sprite sheets?

I am making a game with a large number of sprite sheets in cocos2d-x. There are too many characters and effects, and each of them uses a sequence of frames. The APK file is larger than 400 MB, so I have to compress those images.
In fact, each frame in a sequence differs only a little from the others. So I wonder: is there a tool that compresses a sequence of frames using those differences, instead of just putting them into a sprite sheet? (Armature animation can help, but the effects cannot be treated as an armature.)
For example, there is an effect consisting of 10 PNG files, each 1 MB. If I use TexturePacker to make them into a sprite sheet, I get a big PNG file of 8 MB and a plist file of 100 KB, for a total of 8.1 MB. But if I could compress them using the differences between frames, maybe I would get one 1 MB PNG file plus 9 files of 100 KB each for reconstructing the other 9 frames during loading. That method would only require 1.9 MB on disk. And if I could convert them to PVRTC format, the memory required at runtime could also be reduced.
By the way, I am now trying to convert .bmp to .pvr during game loading. Is there any lib for converting to pvr?
Thanks! :)
If you have lots of textures to convert to PVR, I suggest you get the PowerVR tools from www.imgtec.com. They come with GUI and CLI variants. PVRTexToolCLI did the job for me; I scripted a massive conversion job. Free to download and free to use, but you must register on their site.
I just tested it; it converts many formats to PVR (BMP and PNG included).
Before you go there (the massive batch job), I suggest you experiment with some variants. PVR is (generally) fat on disk, fast to load, and equivalent to other formats in RAM: the RAM requirement is essentially dictated by the number of pixels and the number of bits you encode for each pixel. You can get some interesting disk sizes with PVR, depending on the output format and number of bits you use, but it may be lossy and you could get visible artefacts. So experiment with a limited sample before deciding to go full bore.
The first place I would look, even before any conversion, is your animations. Since you are using TexturePacker, it can detect duplicate frames and alias N frames to a single frame on the texture. For example, my design team provides me all 'walk/stance' animations with 5 pictures but 8 frames! The plist contains frame aliases for the missing textures. In all my stances, frame 8 is the same as frame 2, so the texture only contains frame 2, but the plist artificially produces a frame 8 that crops the image of frame 2.
The other place I would look is using 16-bit textures. This will favour bundle size, memory requirements at runtime, and load speed. Use RGB565 for textures with no transparency, or RGBA5551 (which keeps only 1 bit of alpha) for animations that need transparency, for example. Once again, try a few to make certain you get acceptable rendering.
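As an illustration, and assuming cocos2d-x 3.x (the plist names here are placeholders), the decoded texture format can be switched per sprite sheet before it is loaded:

    #include "cocos2d.h"
    USING_NS_CC;

    void loadSheetsIn16Bit() {
        // RGB565: 16 bits per pixel, no alpha - for fully opaque sheets.
        Texture2D::setDefaultAlphaPixelFormat(Texture2D::PixelFormat::RGB565);
        SpriteFrameCache::getInstance()->addSpriteFramesWithFile("background.plist");

        // RGB5A1 (a.k.a. RGBA5551): 16 bits per pixel with 1-bit on/off alpha.
        Texture2D::setDefaultAlphaPixelFormat(Texture2D::PixelFormat::RGB5A1);
        SpriteFrameCache::getInstance()->addSpriteFramesWithFile("effects.plist");

        // Restore full quality for anything that needs smooth alpha gradients.
        Texture2D::setDefaultAlphaPixelFormat(Texture2D::PixelFormat::RGBA8888);
    }

Halving the bits per pixel halves the runtime texture memory and usually shrinks the packaged sheets as well, but check gradients and soft shadows for banding before committing to it.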
have fun :)

JPEG: Dimensions versus Compression

Pretty simple but specific question here:
I'm not entirely familiar with the JPEG standard for compressing images. Does it create a better image (that is, a smaller file size at similar quality) when the X dimension (width) is very large and the Y dimension (height) is very small, vice versa, or when the two are nearly equal?
The practical use I have for this is CSS sprites. If a website were to consist of hundreds of CSS sprites, it would be ideal to minimize the size of the sprite file to assist users on slower internet and also to reduce server load. If the JPEG standard operates really well on a single horizontal line, but moving vertically requires a lot more complexity, it would make sense for an image of 100 16x16 CSS sprites to be 1600x16.
On the other hand if the JPEG standard has a lot of complexity working horizontally but moves from row to row easily, you could make a smaller file or have higher quality by making the image 16x1600.
If the best compression occurs when the image is a perfect square, you would want the final image to be 160x160.
The MPEG/JPEG blocking mechanism would (very slightly) favor an image size that is an exact multiple of the compression block size in each dimension. However, beyond that, the format won't care if the blocks are vertical or horizontal.
So, the direct answer to your question would be "square is as good as anything", as long as your sprites divide easily into a JPEG compression block (just make sure they are 8, 16, 24 or 32 pixels wide and you'll be fine).
However, I would go a bit further and say that for most sprites, you are going to get a smaller file size and clearer resolution if you make the initial master image a GIF instead of a JPG, even more so if you can use a reduced color palette. Consider why you would need JPG at all for hundreds of sprites.
It looks like JPEG's compression ratio isn't affected by image dimensions. However, your dimensions should be multiples of 8; all your examples use multiples of 16, so you should be fine there.
http://en.wikipedia.org/wiki/JPEG#JPEG_codec_example
If I remember correctly, PNG (being lossless) performs much better when the same color appears in a horizontal stretch rather than a vertical one. Why are you making your sprites JPEG? If they use a limited color set (which is likely if you have 16x16 sprites, animated or not), PNG might actually yield smaller file sizes with perfect image quality.

Compressing High Resolution Satellite Images

Please advise the best way to compress a satellite image. Details:
Uncompressed size - 60 GB
Uncompressed format - IMG
4 bands (to be retained after compression)
Preferred compression format - JPEG 2000
Lossy compression is acceptable, as long as the result still supports visual analysis.
Thanks,
Monika
If I'm interpreting your question correctly, you'll need to look at the file format defined in JPEG 2000 Part 2 ("JPX") because of the multiple bands. Beyond that, because you are asking for "visually lossless" compression (i.e. lossy compression tuned to the point where you can't see the loss), you'll need to experiment with various settings on your own files until you achieve what you want. For a short discussion of how the Internet Archive did this with print materials, see A Status Report on JPEG 2000 Implementation for Still Images: The UConn Survey.
You should check out the Kakadu JPEG 2000 software. It's really fast.
The summary of one of their time tests, which sounds in line with our observed results:
'"There is an example on the spreadsheet involving a 13.3K x 13.3K RGB image (531 MBytes), being compressed to 2 bits/pixel, at 0.145 pictures/second, using the speedpack on a standard 2.4 GB core-2 duo machine. From this, it can be inferred that compressing 60 MB down to 5 MB (12:1 is 2 bits/pixel for RGB, although colour properties were not specified in the original request) should occur at a rate of 1.2 pictures/second. Compressing to the slightly lower target size of 4 MB will be a little bit faster."
http://www.kakadusoftware.com/