If not, where can I find the algorithm to adjust contrast of an image. I will have to code it in C++ and have access to libjpeg and libjpeg-turbo libraries
http://en.wikipedia.org/wiki/Image_editing#Contrast_change_and_brightening
Is this a good starting point for color images?
The simplest I could think of is the ImageMagick library, or doing it yourself*.
* I know that the code in that answer is not C++, but if you know C or C++, you should be able to understand it.
You might like this one for starters: Processing in the 8-bit YUV Color Space
There you'll find the contrast adjustment. With an image whose pixels are in a YUV color space, contrast adjustment is quite easy: it is simply an update of the Y component of each pixel.
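As a rough illustration (not taken from the linked article), here is a minimal sketch of such a Y-only contrast adjustment, assuming a planar 8-bit luma buffer; the function name, the mid-gray pivot of 128 and the scaling factor are assumptions:

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>

    // Scale every luma sample away from (or toward) mid-gray (128).
    // factor > 1 increases contrast, factor < 1 decreases it.
    void adjust_contrast_y(uint8_t* y_plane, std::size_t sample_count, float factor)
    {
        for (std::size_t i = 0; i < sample_count; ++i) {
            float v = 128.0f + factor * (static_cast<float>(y_plane[i]) - 128.0f);
            y_plane[i] = static_cast<uint8_t>(std::clamp(v, 0.0f, 255.0f));
        }
    }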
libjpeg is not quite the tool for image processing, unless you are decoding/encoding JPEGs and you need some processing on the way.
I am looking for any C++ tools that will help me generate sine wave like fringe patterns onto a loaded image like so:
Any ideas using other programming approaches (scripts?) would also be useful. If any more information is needed, please let me know.
You might want to look into OpenCV:
http://opencv.itseez.com/doc/tutorials/core/basic_linear_transform/basic_linear_transform.html#brightness-and-contrast-adjustments
Looks like it might be of use, though I don't know if it is sufficient for your specific use case. You should be able to do it manually, in any case.
The rendering of a sine wave would result from local brightness adjustment: calculate the sine value for each image position relative to the period (e.g. period == image width). I don't have any real knowledge of the library, but judging from previous experience with Matlab and similar tools, the brightness would be calculated pixel-wise as
local_brightness = sin(2*pi*cur_pos/width) * local_brightness
If you know the color space and the format of the image you might as well do it manually, pixel by pixel, as described above. In that case you could read in the image with http://libav.org/ and recalculate it.
Oh and one last general idea, given you know the image format and color space:
Generate a vector that fits the width of the target image, calculate the sine signal along the x-axis, and multiply the resulting vector with the target image brightness? (A rough sketch of this follows below.)
I admit it's a long shot, but it might work for you :P
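Here is a minimal sketch of that idea, assuming an 8-bit grayscale buffer in row-major order and a period equal to the image width; the function name and the [0, 1] normalization of the sine are illustrative choices, not taken from any particular library:

    #include <cmath>
    #include <cstdint>
    #include <vector>

    void apply_sine_fringe(std::vector<uint8_t>& pixels, int width, int height)
    {
        const double pi = 3.14159265358979323846;

        // Precompute one sine value per column (the "vector that fits the width" idea).
        std::vector<double> modulation(width);
        for (int x = 0; x < width; ++x)
            modulation[x] = 0.5 * (1.0 + std::sin(2.0 * pi * x / width)); // range [0, 1]

        // Multiply each pixel's brightness by the modulation for its column.
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x) {
                double v = pixels[y * width + x] * modulation[x];
                pixels[y * width + x] = static_cast<uint8_t>(v + 0.5);
            }
    }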
You'll have to be more specific about exactly what you're looking for. Magick++, the C++ bindings for the ImageMagick library, has a lot of tools for doing various types of image processing, but depending on your needs it may or may not be able to do what you want.
I want to read the contents of every pixel in an image I have and convert it to a bit-stream (raw bits) or store it in a 2-D array. Which would be the best place to start looking for such a conversion?
Specifics of the image : Standard test image called lena.bmp
size : 256 x 256
Bit depth of pixel : 8
Also I would like to know the importance of the number of bits per pixel with regard to this question, since packing and unpacking will also be involved.
CImg is a nice simple, lightweight C++ library which can load and save a number of image formats (including BMP).
It's a single header file, so there's no need to compile or link the library. Just include the header, and you're good to go.
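For example, a minimal sketch of loading lena.bmp with CImg and copying its pixels into a 2-D array might look like this (assuming the 8-bit, single-channel 256 x 256 image from the question):

    #include "CImg.h"
    #include <vector>

    int main()
    {
        cimg_library::CImg<unsigned char> img("lena.bmp");

        // Copy pixel values into a 2-D array (a vector of rows).
        std::vector<std::vector<unsigned char>> pixels(
            img.height(), std::vector<unsigned char>(img.width()));
        for (int y = 0; y < img.height(); ++y)
            for (int x = 0; x < img.width(); ++x)
                pixels[y][x] = img(x, y);   // channel 0 of a grayscale image

        return 0;
    }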
You should investigate OpenCV: a cross-platform computer vision library. It provides a C++ API as well as a C API, and it supports many image formats including bmp.
In the C++ interface, cv::Mat is the type that represents a 2D image. A simple application that loads and displays an image can be found here.
To learn how to access the matrix elements (pixels) you can check these threads:
OpenCV get pixel information from Mat image
Pixel access in OpenCV 2.2
Common Matrix Operations in OpenCV
OpenCV's C++ interface offers a short introduction to cv::Mat. There have been many threads on Stack Overflow regarding OpenCV; there's a lot of valuable content around, and you can benefit a lot by using the search box.
This page has a collection of books/tutorials/install guides focused on OpenCV, but this is the newest official tutorial.
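To give a flavour of the API, here is a minimal sketch of loading lena.bmp into a cv::Mat and reading one pixel; the flag name below is from recent OpenCV releases (older 2.x versions spell it CV_LOAD_IMAGE_GRAYSCALE), and the file name is just the one from the question:

    #include <iostream>
    #include <opencv2/opencv.hpp>

    int main()
    {
        // Load as 8-bit grayscale.
        cv::Mat img = cv::imread("lena.bmp", cv::IMREAD_GRAYSCALE);
        if (img.empty())
            return 1;

        // img.at<uchar>(row, col) reads one pixel of an 8-bit single-channel Mat.
        std::cout << "pixel (0,0) = " << static_cast<int>(img.at<uchar>(0, 0)) << "\n";

        cv::imshow("lena", img);
        cv::waitKey(0);
        return 0;
    }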
I want to optimize my program, in which I am using color object tracking algorithm described here. The only difference is that I am using cvBlob library, instead of cv::moments (cvBlob was faster and more accurate). Using profiler (valgrind + kcachegrind) I have found that ~29% of time is taken by colorspace conversion method (cv::cvtColor; I am tracking objects in three colors). I am converting from BGR to HSV.
I've read in some papers that using the YCbCr colorspace is even better for color tracking. Is it a good idea to convert from BGR to YCbCr? It should be slightly faster, as it requires fewer multiplications (I am not sure about that -- I do not know how OpenCV does it internally). Does this algorithm need some changes, or can I just convert the lower and upper boundaries for the tracked color from HSV to YCbCr, and then use the inRangeS method, as I did with HSV?
Is there any way to get the frame from the driver in YCbCr (or YUV)? I am not asking about HSV, because that is not supported by v4l2, AFAIR.
Do you have any other ideas? I don't want to use IPP or GPU.
Check out the OpenCV documentation for cvtColor. It talks about conversion between BGR2YCbCr using cvtColor.
(Please try that and also comment here about the result, i.e. what percentage of total time it takes in YCbCr mode, because it will help lots of people in the future.)
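For reference, a minimal sketch of that conversion plus the thresholding step; note that OpenCV's constant is actually COLOR_BGR2YCrCb (CV_BGR2YCrCb in the old 2.x API), and the bounds below are placeholders, not tuned values for any particular tracked color:

    #include <opencv2/opencv.hpp>

    cv::Mat threshold_in_ycrcb(const cv::Mat& bgr_frame)
    {
        cv::Mat ycrcb;
        cv::cvtColor(bgr_frame, ycrcb, cv::COLOR_BGR2YCrCb);

        // Keep pixels whose Y, Cr, Cb fall inside the (placeholder) bounds,
        // analogous to the inRangeS call in the original HSV pipeline.
        cv::Mat mask;
        cv::inRange(ycrcb, cv::Scalar(0, 140, 100), cv::Scalar(255, 180, 130), mask);
        return mask;
    }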
I need to convert 24bppRGB to 16bppRGB, 8bppRGB, 4bppRGB, 8bpp grayscale and 4bpp grayscale. Any good links or other suggestions?
preferably using Windows/GDI+
[EDIT] speed is more critical than quality. source images are screenshots
[EDIT1] color conversion is required to minimize space
You're better off getting yourself a library, as others have suggested. Aside from ImageMagick, there are others, such as OpenCV. The benefits of leaving this to a library are:
Save yourself some time -- by cutting out dev and testing time for the algorithm
Speed. Most libraries out there are optimized to a level far greater than a standard developer (such as ourselves) could achieve
Standards compliance. There are many image formats out there, and using a library cuts the problem of standards compliance out of the equation.
If you're doing this yourself, then your problem can be divided into the following sub-problems:
Simple color quantization. As #Alf P. Steinbach pointed out, this is just "downscaling" the number of colors. RGB24 has 8 bits per R, G, B channels, each. For RGB16 you can do a number of conversions:
Equal number of bits for each of R, G, B. This typically means 4 or 5 bits each.
Favor the green channel (human eyes are more sensitive to green) and give it 6 bits. R and B get 5 bits.
You can even do the same thing for RGB24 to RGB8, but the results won't be as pretty as a palettized image:
4 bits green, 2 red, 2 blue.
3 bits green, 5 bits between red and blue
Palettization (indexed color). This is for going from RGB24 to RGB8 and RGB4. This is a hard problem to solve by yourself.
Color to grayscale conversion. Very easy. Convert your RGB24 to the Y'UV color space, and keep the Y' channel. That will give you 8bpp grayscale (a sketch of this and the RGB16 packing follows below). If you want 4bpp grayscale, then you either quantize or do palettization.
Also be sure to check out chroma subsampling. Often, you can decrease the bitrate by a third without visible losses to image quality.
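As a rough illustration of two of the easy conversions above, here is a minimal sketch of RGB24 -> RGB565 (the "favor the green channel" option) and RGB24 -> 8bpp grayscale using the usual Rec. 601 luma weights; the struct and function names are purely illustrative:

    #include <cstdint>

    struct RGB24 { uint8_t r, g, b; };

    // Pack 8-bit channels into a 16-bit 5-6-5 value (green gets the extra bit).
    uint16_t to_rgb565(RGB24 p)
    {
        return static_cast<uint16_t>(((p.r >> 3) << 11) |
                                     ((p.g >> 2) << 5)  |
                                      (p.b >> 3));
    }

    // 8bpp grayscale: keep only the luma, an integer approximation of
    // Y' = 0.299 R + 0.587 G + 0.114 B.
    uint8_t to_gray8(RGB24 p)
    {
        return static_cast<uint8_t>((299 * p.r + 587 * p.g + 114 * p.b) / 1000);
    }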
With that breakdown, you can divide and conquer. Problems 1 and 3 you can solve pretty quickly. That will allow you to see the quality you can get simply by doing coarser color quantization.
Whether or not you want to solve Problem 2 will depend on the result from above. You said that speed is more important, so if the quality from color quantization alone is good enough, don't bother with palettization.
Finally, you never mentioned WHY you are doing this. If this is for reducing storage space, then you should be looking at image compression. Even lossless compression will give you better results than reducing the color depth alone.
EDIT
If you're set on using PNG as the final format, then your options are quite limited, because neither RGB16 nor RGB8 is a valid combination in the PNG header.
So what this means is: regardless of bit depth, you will have to switch to index color if you want RGB color images below 24bpp (8 bits per channel). This means you will NOT be able to take advantage of the color quantization and chroma decimation that I mentioned above -- it's not supported in PNG. So this means you will have to solve Problem 2 -- palletization.
But before you think about that, some more questions:
What are the dimensions of your images?
What sort of ideal file-size are you after?
How close to that ideal file-size do you get with straight RGB24 + PNG compression?
What is the source of your images? You've mentioned screenshots, but since you're so concerned about disk space, I'm beginning to suspect that you might be dealing with image sequences (video). If this is so, then you could do better than PNG compression.
Oh, and if you're serious about doing things with PNG, then definitely have a look at this library.
Find yourself a copy of the ImageMagick [sic] library. It's very configurable, so you can teach it about the details of some binary format that you need to process...
See: ImageMagick, which has a very practical license.
I received acceptable results (preliminary) with GDI+ v1.1, which ships with Vista and Win7. It allows conversion to 16bpp (I used PixelFormat16bppRGB565) and to 8bpp and 4bpp using standard palettes. Better quality could be obtained with an "optimal palette" -- GDI+ would calculate an optimal palette for each screenshot, but that conversion is twice as slow. Grayscale was obtained by specifying a simple custom palette, e.g. as demonstrated here, except that I didn't need to modify pixels manually; Bitmap::ConvertFormat() did it for me.
[EDIT] the results were really acceptable until I decided to check the solution on WinXP. Surprisingly, Microsoft decided not to ship GDI+ v1.1 (required for Bitmap::ConvertFormat) with WinXP. Nice move! So I continue researching...
[EDIT] had to reimplement this with plain GDI, hard-coding the palettes from GDI+
How can I do a simple thing like manipulate an image programmatically? (with C++, I guess...)
JPG / PNG / GIF ...
Check out Boost; it has a simple image processing library called GIL (Generic Image Library). It also has extensions to import common formats.
http://www.boost.org/doc/libs/1_39_0/libs/gil/doc/index.html
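A minimal sketch using only GIL's core (no file I/O): create an 8-bit grayscale image in memory and fill it with a gradient. The header path below is the one used by recent Boost releases; older releases use <boost/gil/gil_all.hpp>:

    #include <boost/gil.hpp>
    #include <cstddef>

    int main()
    {
        boost::gil::gray8_image_t img(256, 256);
        auto view = boost::gil::view(img);

        // Write a horizontal gradient into each pixel.
        for (std::ptrdiff_t y = 0; y < view.height(); ++y)
            for (std::ptrdiff_t x = 0; x < view.width(); ++x)
                view(x, y) = boost::gil::gray8_pixel_t(static_cast<unsigned char>(x));

        return 0;
    }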
Using .NET you have two options:
GDI+ from System.Drawing namespace (Bitmap class)
WPF engine which can do a lot of things
If you want low level processing you can use unsafe code and pointers.
A Bitmap or Image is just a big array of bytes.
You need to learn:
what is a stride (extra padding bytes after each row of pixels)
how to compute the next row or a specific pixel location using width, height and stride (see the sketch after this list)
the image formats: RGB, ARGB, black & white
basic image processing functions (luminosity, midtones, contrast, color detection, edge detection, matrix convolution)
3D vector representation of an RGB color
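As a language-neutral illustration (written in C++ here), a minimal sketch of addressing a pixel in a raw buffer from its coordinates and the stride; the 4-bytes-per-pixel default assumes 32-bit ARGB, and in .NET the same arithmetic applies to the pointer obtained from Bitmap.LockBits:

    #include <cstdint>

    inline uint8_t* pixel_at(uint8_t* data, int stride_bytes, int x, int y,
                             int bytes_per_pixel = 4)
    {
        // stride_bytes is the length of one row including its padding,
        // so row y starts at data + y * stride_bytes.
        return data + y * stride_bytes + x * bytes_per_pixel;
    }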
Depending on how fancy you want to get, you may want to look at OpenCV. It's a computer vision library that has functions ranging from reading and writing images to image processing to advanced things like object detection.
Magick++ is a C++ API for the excellent ImageMagick library.
An advantage of ImageMagick is that it can be used from the command-line and a bunch of popular scripting and compiled languages too, and some of those might be more accessible to you than C++.