I need to convert 24bppRGB to 16bppRGB, 8bppRGB, 4bppRGB, 8bpp grayscale and 4bpp grayscale. Any good links or other suggestions?
Preferably using Windows/GDI+.
[EDIT] Speed is more critical than quality. The source images are screenshots.
[EDIT1] Color conversion is required to minimize storage space.
You're better off getting yourself a library, as others have suggested. Aside from ImageMagick, there are others, such as OpenCV. The benefits of leaving this to a library are:
Save yourself some time by cutting out development and testing time for the algorithm
Speed. Most libraries out there are optimized to a level far greater than a standard developer (such as ourselves) could achieve
Standards compliance. There are many image formats out there, and using a library cuts the problem of standards compliance out of the equation.
If you're doing this yourself, then your problem can be divided into the following sub-problems:
Simple color quantization. As @Alf P. Steinbach pointed out, this is just "downscaling" the number of colors. RGB24 has 8 bits for each of the R, G, and B channels. For RGB16 you can do a number of conversions:
Equal number of bits for each of R, G, B. This typically means 4 or 5 bits each.
Favor the green channel (human eyes are more sensitive to green) and give it 6 bits. R and B get 5 bits.
You can even do the same thing for RGB24 to RGB8, but the results won't be as pretty as a palettized image:
4 bits green, 2 red, 2 blue.
3 bits green, with the remaining 5 bits split between red and blue (e.g. 3 red, 2 blue).
Palettization (indexed color). This is for going from RGB24 to RGB8 and RGB4. This is a hard problem to solve well by yourself.
Color-to-grayscale conversion. Very easy. Convert your RGB24 to the Y'UV color space and keep the Y' channel. That will give you 8bpp grayscale. If you want 4bpp grayscale, then you either quantize or do palettization. (See the sketch below for the quantization and grayscale steps.)
Also be sure to check out chroma subsampling. Often, you can decrease the bitrate by a third without visible losses to image quality.
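To make the quantization and grayscale steps concrete, here is a minimal sketch (my own illustration, not from the answer above) of RGB888-to-RGB565 packing and a fixed-point BT.601 luma conversion:

```cpp
#include <cstdint>

// Pack 8-bit R, G, B into a 16-bit RGB565 word (5 bits red, 6 bits
// green, 5 bits blue) by dropping the low-order bits of each channel.
uint16_t rgb888_to_rgb565(uint8_t r, uint8_t g, uint8_t b) {
    return static_cast<uint16_t>(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}

// BT.601 luma, Y' = 0.299 R + 0.587 G + 0.114 B, in 8.8 fixed point
// (77 + 150 + 29 = 256) to keep the inner loop integer-only.
uint8_t rgb888_to_gray8(uint8_t r, uint8_t g, uint8_t b) {
    return static_cast<uint8_t>((77 * r + 150 * g + 29 * b) >> 8);
}

// 4bpp grayscale is just the top 4 bits of the 8bpp value.
uint8_t gray8_to_gray4(uint8_t y) {
    return static_cast<uint8_t>(y >> 4);
}
```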
With that breakdown, you can divide and conquer. Problems 1 and 3 you can solve pretty quickly. That will allow you to see the quality you can get simply by doing coarser color quantization.
Whether or not you want to solve Problem 2 will depend on the result from above. You said that speed is more important, so if the quality of color quantization alone is good enough, don't bother with palettization.
Finally, you never mentioned WHY you are doing this. If this is for reducing storage space, then you should be looking at image compression. Even lossless compression will give you better results than reducing the color depth alone.
EDIT
If you're set on using PNG as the final format, then your options are quite limited, because neither RGB16 nor RGB8 is a valid combination in the PNG header.
So what this means is: regardless of bit depth, you will have to switch to indexed color if you want RGB color images below 24bpp (8 bits per channel). This means you will NOT be able to take advantage of the color quantization and chroma decimation that I mentioned above -- neither is supported in PNG. So you will have to solve Problem 2 -- palettization.
But before you think about that, some more questions:
What are the dimensions of your images?
What sort of ideal file size are you after?
How close to that ideal file size do you get with straight RGB24 + PNG compression?
What is the source of your images? You've mentioned screenshots, but since you're so concerned about disk space, I'm beginning to suspect that you might be dealing with image sequences (video). If this is so, then you could do better than PNG compression.
Oh, and if you're serious about doing things with PNG, then definitely have a look at this library.
Find yourself a copy of the ImageMagick library. It's very configurable, so you can teach it about the details of some binary format that you need to process...
See: ImageMagick, which has a very practical license.
I received acceptable (preliminary) results with GDI+ v1.1, which ships with Vista and Win7. It allows conversion to 16bpp (I used PixelFormat16bppRGB565) and to 8bpp and 4bpp using standard palettes. Better quality can be achieved with an "optimal palette": GDI+ calculates an optimal palette for each screenshot, but the conversion is twice as slow. Grayscale was achieved by specifying a simple custom palette, e.g. as demonstrated here, except that I didn't need to modify pixels manually; Bitmap::ConvertFormat() did it for me.
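For reference, here is a minimal sketch of the GDI+ 1.1 calls described above (my own reconstruction; the dither and palette choices are assumptions, and as the edit below notes, Bitmap::ConvertFormat requires GDI+ 1.1, which XP does not ship):

```cpp
#define GDIPVER 0x0110  // expose the GDI+ 1.1 API (Vista/Win7)
#include <windows.h>
#include <gdiplus.h>
#include <vector>
using namespace Gdiplus;

// 24bpp -> RGB565: non-indexed target, so no palette is required.
Status To16bpp(Bitmap& bmp) {
    return bmp.ConvertFormat(PixelFormat16bppRGB565, DitherTypeNone,
                             PaletteTypeCustom, NULL, 0.0f);
}

// 24bpp -> 8bpp indexed with a standard halftone palette. For the
// slower "optimal palette" path, pass PaletteTypeOptimal and the
// source bitmap to InitializePalette instead.
Status To8bpp(Bitmap& bmp) {
    std::vector<BYTE> buf(sizeof(ColorPalette) + 255 * sizeof(ARGB));
    ColorPalette* pal = reinterpret_cast<ColorPalette*>(&buf[0]);
    pal->Count = 256;
    Bitmap::InitializePalette(pal, PaletteTypeFixedHalftone256, 256,
                              FALSE, NULL);
    return bmp.ConvertFormat(PixelFormat8bppIndexed,
                             DitherTypeErrorDiffusion,
                             PaletteTypeFixedHalftone256, pal, 0.0f);
}
```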
[EDIT] The results were really acceptable until I decided to check the solution on WinXP. Surprisingly, Microsoft decided not to ship GDI+ v1.1 (required for Bitmap::ConvertFormat) with WinXP. Nice move! So I continue researching...
[EDIT] I had to reimplement this in plain GDI, hardcoding the palettes taken from GDI+.
Related
I have written a C/C++ implementation of what I term a "compositor" (I come from a video background) to composite/overlay video and graphics on top of a video source. My current compositor implementation is rather naive and there is room for CPU optimization (e.g. SIMD, threading, etc.).
I've created a high-level diagram of what I am currently doing:
The diagram is self-explanatory. Nonetheless, I'll elaborate on some of the constraints:
The main video always comes served in an 8-bit YUV 4:2:2 packed format
The secondary video (optional) will come served in either an 8-bit YUV 4:2:2 or YUVA 4:2:2:4 packed format.
The output from the overlay must come out in an 8-bit YUV 4:2:2 packed format
Some other bits of information:
The number of graphics inputs will vary; it may (or may not) be a constant value.
The colour format of the graphics can be pinned to either ARGB or YUVA (i.e. I can provide whichever fits best). At the moment, I pin it to YUVA to keep a consistent colour format.
The potential of using OpenGL and accompanying shaders is rather appealing:
No need to reinvent the wheel (in terms of actually performing the composition)
The possibility of using GPU where available.
My concern with using OpenGL is performance. Looking around on the web, it is my understanding that a YUV surface would be converted to RGB internally; I would like to minimize the number of colour format conversions and ensure optimal performance. Without prior OpenGL experience, I hope someone can shed some light and suggest if I'm about to venture down the wrong path.
Perhaps my concern relating to performance is less of an issue when using a dedicated GPU? Do I need to consider separate code paths:
Hardware with GPU(s)
Hardware with only CPU(s)?
Additionally, am I going to struggle when I need to process 10-bit YUV?
You should be able to treat YUV as independent channels throughout. OpenGL shaders will be calling them r, g, and b, but it's just data that can be treated as whatever you want.
Most GPUs will support 10 bits per channel (+ 2 alpha bits). Some will support 16 bits per channel for all 4 channels, but I'm a little rusty here, so I have no idea how common support for that is. Not sure about the 4:2:2 data, but you can always treat it as 3 separate surfaces.
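To illustrate why the channel names don't matter (a sketch of my own, not from the answer): an "over" composite is linear and per-channel, so the same blend applies to Y, U and V bytes as to R, G and B. The +128 chroma bias survives because the two weights always sum to 1.

```cpp
#include <cstdint>

// Per-channel "over" blend with 8-bit alpha (0..255). The +127 term
// approximates rounding; src and dst may hold RGB or YUV samples alike.
static inline uint8_t blend(uint8_t src, uint8_t dst, uint8_t a) {
    return static_cast<uint8_t>((src * a + dst * (255 - a) + 127) / 255);
}
```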
The number of graphics inputs will vary; it may (or may not) be a constant value.
This is something I'm a little less sure about. Shaders like their inputs to be predictable. If your implementation allows you to add each input iteratively, then you should be fine.
As an alternative suggestion, have you looked into OpenCL?
I'm currently implementing a D3D11 renderer, and the majority of my normal maps are in a 3-channel RGB, 8-bits-per-channel format.
Looking through the DXGI_FORMAT MSDN page, I've noticed that there's no option for this. What's the reasoning behind that? And how would I use this texture format, then?
No hardware has supported a 24-bit texture format for a good decade now.
You have a few options:
Expand to 32 bits. You retain the full quality of the bitmap at the price of extra memory, with no extra processing needed.
Use a compressed format. This is where you should head.
GPUs are not the best at reading lots of uncompressed texture data. Of course, if you are only rendering a few models, it won't matter, but if you start pushing a larger amount of data, you have to go down the compressed road.
Your compression options if you support very old hardware:
BC1: an RGB format at 4 bits per pixel, worst quality; use it only with giant textures where the savings outweigh the quality loss, or better, never use it.
BC3 with a red/alpha swap: 4 bits per pixel, just slightly better.
You should consider these instead :
BC5: 8 bits per pixel, 2 channels; you will have to rebuild Z with sqrt(1 - dot(n.xy, n.xy)) or an equivalent trick (see the sketch after this list).
BC7: 8 bits per pixel, 3/4 channels; the best choice if you need to store something else alongside your normal, like a variance. This format is great, but it is also very CPU-intensive to encode at its best quality.
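A minimal sketch of that Z reconstruction (my own illustration; in a renderer this would live in the pixel shader, but the math is the same):

```cpp
#include <cmath>

// Rebuild the Z component of a unit normal from the X and Y channels
// of a BC5 texture. x and y are assumed to already be remapped from
// [0,1] to [-1,1] (or read through an SNORM view).
float ReconstructZ(float x, float y) {
    float d = 1.0f - (x * x + y * y);
    return d > 0.0f ? std::sqrt(d) : 0.0f;  // clamp against compression error
}
```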
On a side note, do not forget to generate a mip chain down to 1x1; it matters for both performance and visual quality. Do not apply an sRGB conversion as you would for your albedo textures (0.5 really is 0.5), and with some formats you have the option of an SNORM type to skip the typical 2*n-1 remapping, but be careful with the 0 case.
I want to optimize my program, which uses the color object tracking algorithm described here. The only difference is that I am using the cvBlob library instead of cv::moments (cvBlob was faster and more accurate). Using a profiler (valgrind + KCachegrind) I found that ~29% of the time is taken by the colorspace conversion method (cv::cvtColor; I am tracking objects in three colors). I am converting from BGR to HSV.
I've read in some papers that using the YCbCr colorspace is even better for color tracking. Is it a good idea to convert from BGR to YCbCr? It should be slightly faster, as it requires fewer multiplications (I am not sure about that -- I do not know how OpenCV does it internally). Does the algorithm need changes, or can I just convert the lower and upper boundaries for each tracked color from HSV to YCbCr and then use the inRangeS method, as I did with HSV?
Is there any way to get the frame from the driver in YCbCr (or YUV)? I am not asking about HSV, because that is not supported by v4l2, AFAIR.
Do you have any other ideas? I don't want to use IPP or GPU.
Check out the OpenCV documentation for cvtColor. It covers BGR-to-YCrCb conversion using cvtColor (note that OpenCV orders the channels Y, Cr, Cb, so the flag is BGR2YCrCb).
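A minimal sketch of what that could look like (my own illustration; the threshold values below are placeholders you would tune for your objects, not values from the post):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Threshold a tracked colour in YCrCb instead of HSV. cv::inRange
// works the same way as with HSV bounds; only the boundary values
// (placeholders here) need to be re-derived in the new colorspace.
cv::Mat trackColorYCrCb(const cv::Mat& bgrFrame) {
    cv::Mat ycrcb, mask;
    cv::cvtColor(bgrFrame, ycrcb, cv::COLOR_BGR2YCrCb);
    cv::inRange(ycrcb,
                cv::Scalar(0, 140, 100),   // lower Y, Cr, Cb (placeholder)
                cv::Scalar(255, 180, 130), // upper Y, Cr, Cb (placeholder)
                mask);
    return mask;
}
```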
(Please try that and also comment here about the result, i.e. what percentage of the total time it takes in YCbCr mode, because it will help lots of people in the future.)
I am trying to compress my .jpeg image in Photoshop.
What is the best way to do this?
I am currently calculating the bpp by taking the image size in KB and converting it to bits. Then I multiply the image's width by its height to get the number of pixels. After that I divide bits by pixels to find how many bits per pixel the image has.
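For instance (a made-up worked example, not from the question): a 100 KB JPEG at 800x600 holds 100 * 1024 * 8 = 819,200 bits over 480,000 pixels, which is about 1.7 bpp.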
But how can I change this number? My guess is to change how many KB the image is, but how do I do this?
Thanks for any help!!
Yes, you can achieve a higher compression ratio than 4 bits per pixel. Images with solid colors can have a rate as low as 0.13 bpp.
In fact, 4 bpp is quite poor compression: it's the same as an uncompressed 16-color image, or half of a 256-color image, which even GIF can manage. JPEG can look decent at 1-2 bpp.
In general, you cannot "compress" a JPEG image. All you can do is reduce the image quality further in order to achieve a lower bpp value. JPEG streams are always compressed, and they use a lossy compression method: the original image can never be reconstructed from the JPEG. The smaller the file, the more information you have lost.
A specific bpp value is not, and should never be, your target, especially with lossy compression. You should always look at your current image and decide whether it is still good enough or not.
If you still have the original image, try a lossless compression format, like ZIP- or LZW-compressed TIFF, or compressed PNG. I'm sure Photoshop can handle these formats as well. Other software, like IrfanView (https://www.irfanview.com/) or XnView MP (https://www.xnview.com/en/xnviewmp/), will convert your images too.
If you want manual (i.e. full) control over your images, you should use command-line utilities like ImageMagick (https://imagemagick.org/) or NConvert (see the XnView MP link above).
If you only have the JPEG images, do not touch (edit & save) them; with every single save operation you lose another chunk of information. You should always work on copies of the files.
You should always keep your master image (the very picture you took with your phone or camera).
Of course, these rules of thumb do not answer your original question.
Pretty simple but specific question here:
I'm not entirely familiar with the JPEG standard for compressing images. Does it create a better image (that is, a smaller file size at similar quality) when the X dimension (width) is very large and the Y dimension (height) is very small, vice versa, or when the two are nearly equal?
The practical use I have for this is CSS sprites. If a website were to consist of hundreds of CSS sprites, it would be ideal to minimize the size of the sprite file to assist users on slower internet and also to reduce server load. If the JPEG standard operates really well on a single horizontal line, but moving vertically requires a lot more complexity, it would make sense for an image of 100 16x16 CSS sprites to be 1600x16.
On the other hand if the JPEG standard has a lot of complexity working horizontally but moves from row to row easily, you could make a smaller file or have higher quality by making the image 16x1600.
If the best compression occurs when the image is a perfect square, you would want the final image to be 160x160.
The MPEG/JPEG blocking mechanism would (very slightly) favor an image size that is an exact multiple of the compression block size in each dimension. However, beyond that, the format won't care if the blocks are vertical or horizontal.
So, the direct answer to your question would be "square is as good as anything", as long as your sprites divide easily into a JPEG compression block (just make sure they are 8, 16, 24 or 32 pixels wide and you'll be fine).
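A tiny helper to make the block-alignment point concrete (my own sketch; the MCU is 8x8 for a non-subsampled JPEG and effectively 16x16 with 4:2:0 chroma subsampling):

```cpp
// Round a sprite-sheet dimension up to the JPEG MCU size so that
// sprite edges land on compression-block boundaries.
int RoundUpToMcu(int dimension, int mcu = 16) {
    return ((dimension + mcu - 1) / mcu) * mcu;
}
```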
However, I would go a bit further and say that for most sprites, you are going to get a smaller file size and clearer results if the master image is GIF instead of JPEG, even more so if you can use a reduced color palette. Consider why you would need JPEG at all for hundreds of sprites.
It looks like JPEG's compression ratio isn't affected by image dimensions. However, your dimensions should be multiples of 8; in all your examples you had multiples of 16, so you should be fine there.
http://en.wikipedia.org/wiki/JPEG#JPEG_codec_example
If I remember correctly, PNG (being lossless) compresses much better when the same color appears in a horizontal stretch rather than a vertical one. Why are you making your sprites JPEG? If they use a limited color set (which is likely with 16x16 sprites, animated or not), PNG might actually yield smaller file sizes with perfect image quality.