Converting image to pixmap using ImageMagick libraries - C++

My assignment is to get "images read into pixmaps which you will then convert to texture maps". So for the pixmap part only, hear me out and tell me if I have the right idea and if there's an easier way. Library docs I'm using: http://www.imagemagick.org/Magick++/Documentation.html
Read in image:
Image myimage;
myimage.read( "myimage.gif" );
I think this is the pixmap I need to read 'image' into:
GLubyte pixmap[TextureSize][TextureSize][3];
So I think I need a loop that, for every 'pixmap' pixel index, assigns R,G,B values from the corresponding 'image' pixel indices. I'm thinking the loop body is like this:
pixmap[i][j][0] = myimage.pixelColor(i,j).redQuantum();
pixmap[i][j][1] = myimage.pixelColor(i,j).greenQuantum();
pixmap[i][j][2] = myimage.pixelColor(i,j).blueQuantum();
But I think the above functions return Quantums where I need GLubytes, so can anyone offer help here?
-- OR --
Perhaps I can take care of both the pixmap and texture map by using OpenIL (docs here: http://openil.sourceforge.net/tuts/tut_10/index.htm). Think I could simply call these in sequence?
ilutOglLoadImage(char *FileName);
ilutOglBindTexImage(ILvoid);

You can copy the quantum values returned by pixelColor(x,y) into a ColorRGB and you will get normalized (0.0-1.0) color values, which you can then scale to GLubytes.
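A minimal sketch of that loop, assuming the image is exactly TextureSize x TextureSize (the scaling by 255 and the casts are my additions, not something the Magick++ docs prescribe):
#include <Magick++.h>
#include <GL/gl.h>

const int TextureSize = 256;                       // assumption: image dimensions
GLubyte pixmap[TextureSize][TextureSize][3];

void imageToPixmap(Magick::Image &myimage)
{
    for (int i = 0; i < TextureSize; ++i) {
        for (int j = 0; j < TextureSize; ++j) {
            Magick::ColorRGB rgb(myimage.pixelColor(i, j));   // channels normalized to [0,1]
            pixmap[i][j][0] = static_cast<GLubyte>(rgb.red()   * 255.0);
            pixmap[i][j][1] = static_cast<GLubyte>(rgb.green() * 255.0);
            pixmap[i][j][2] = static_cast<GLubyte>(rgb.blue()  * 255.0);
        }
    }
}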
If you don't have to stick with Magick++ maybe you can try OpenIL, which can load and convert your image to OpenGL texture maps without too much hassle.

Related

How to use CImg functions with pixel data?

I am using Visual Studio and looking to find a useful image processing library that will take care of basic image processing functions such as rotation so that I don't have to keep coding them manually. I came across CImg and it supports this, as well as many other useful functions, along with interpolation.
However, all the examples I've seen show CImg being used by loading and using full images. I want to work with pixel data. So my loops are the typical:
for (x=0;x<width; x++)
for (y=0;y<height; y++)
I want to perform bilinear or bicubic rotation in this instance and I see CImg supports this. It provides rotate() and get_rotate() functions, among others.
I can't find any examples online that show how to use this with pixel data. Ideally, I could simply pass it the pixel color, x, y, and interpolation method, and have it return the result.
Could anyone provide any helpful suggestions? If CImg is not the right library for this type of thing, could anyone recommend a simple, lightweight, easy-to-use one?
Thank you!
You can copy pixel data into a CImg object using iterators, and copy it back when you are done.
#include <algorithm>
#include <cstdint>
#include <vector>
#include "CImg.h"

std::vector<uint8_t> pixels_src, pixels_dst;
size_t width, height, n_colors;   // set these to match your pixel buffer
// Copy from pixel data. Note that CImg stores channels as separate planes
// (all R, then all G, then all B), so pixels_src must be laid out that way.
cimg_library::CImg<uint8_t> image(width, height, 1, n_colors);
std::copy(pixels_src.begin(), pixels_src.end(), image.begin());
// Do image processing, e.g. a rotation (see the sketch below)
// Copy back to pixel data
pixels_dst.resize(width * height * n_colors);
std::copy(image.begin(), image.end(), pixels_dst.begin());
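As an example of the processing step, a rotation of the wrapped buffer could look like this (the 30-degree angle is arbitrary):
// get_rotate() returns a rotated copy; rotate() works in place. Both also
// accept interpolation (0 = nearest, 1 = linear, 2 = cubic) and boundary
// arguments -- check your CImg.h for their exact order, since it has varied
// between releases.
cimg_library::CImg<uint8_t> rotated = image.get_rotate(30.0f);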

QImage Custom Indexed Colors Using setColorTable

In my project I need to convert an image with many colors to one that only uses any of the 144 predetermined colors I set in a custom colorTable.
Here is my code:
QImage convImage(128, 128, QImage::Format_Indexed8);
convImage.setColorCount(144);
convImage.setColorTable(colorTable); //colorTable is a const QVector with 144 qRgb values.
//scaledImage is the source image
convImage = scaledImage.convertToFormat(QImage::Format_Indexed8,Qt::ThresholdDither|Qt::AutoColor);
ui->mapView->setPixmap(QPixmap::fromImage(convImage));
I would expect convImage to only contain colors that exist in the colorTable I created; however, it seems to completely ignore the table I set and instead creates its own table with up to 256 colors.
I could index everything myself by looping through every pixel and find a way to accurately select a color from the colorTable, but I am wondering if I am just using the colorTable wrong. I couldn't find anything in the documentation that explains why a new table is being created.
Thanks for your time.
Well, ask yourself: how should the convertToFormat() call on scaledImage possibly know about the colortable you applied to convImage? It doesn't know anything about the convImage on the left-hand-side.
Fortunately, there's an overload of convertToFormat that takes a colortable and should do the job:
QImage convImage = scaledImage.convertToFormat(QImage::Format_Indexed8,
                                               colorTable,
                                               Qt::ThresholdDither | Qt::AutoColor);
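With that overload, the setColorCount()/setColorTable() calls on convImage from the original snippet become unnecessary; the whole conversion condenses to something like:
QImage convImage = scaledImage.convertToFormat(QImage::Format_Indexed8,
                                               colorTable,
                                               Qt::ThresholdDither | Qt::AutoColor);
ui->mapView->setPixmap(QPixmap::fromImage(convImage));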

How can I create a translucent copy of a QPixmap?

In the program I'm making, I need two images, which are exactly the same but one is translucent. For performance reasons, I want to create two separate QPixmaps instead of using just one and setting the opacity of a QPainter. Is there a straightforward way to do this?
No, there is no performant way to do this.
To modify the channels of a QPixmap it must be converted into a QImage, modified, and converted back to a QPixmap. Depending on your application, that round trip may make it simpler to just handle the opacity in the QPainter: http://www.qtcentre.org/threads/51158-setting-QPixmap-s-alpha-channel
However, if you can roll this into your startup time, the round trip may be reasonable and avoids repeated conversions in the QPainter.
Convert your QPixmap to a QImage: http://doc.qt.io/qt-5/qpixmap.html#toImage
If your QPixmap doesn't have an alpha channel, you'll need to add one: http://doc.qt.io/qt-5/qimage.html#convertToFormat
Then, for each pixel in your image, call setPixel: http://doc.qt.io/qt-5/qimage.html#pixel-manipulation (note that setPixel takes a QRgb, so you'll need to read the red, green, and blue channels of the pixel being modified and combine them with your desired alpha value via qRgba: http://doc.qt.io/qt-5/qcolor.html#qRgba)
Finally, use convertFromImage to get back to a QPixmap: http://doc.qt.io/qt-5/qpixmap.html#convertFromImage
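Putting those steps together, a rough sketch (the helper name and the 50% alpha are my own choices, not from the Qt docs):
#include <QColor>
#include <QImage>
#include <QPixmap>

QPixmap makeTranslucentCopy(const QPixmap &src, int alpha = 128)   // 128 ~ 50% opacity
{
    QImage img = src.toImage().convertToFormat(QImage::Format_ARGB32);
    for (int y = 0; y < img.height(); ++y) {
        for (int x = 0; x < img.width(); ++x) {
            const QRgb px = img.pixel(x, y);
            img.setPixel(x, y, qRgba(qRed(px), qGreen(px), qBlue(px), alpha));
        }
    }
    QPixmap out;
    out.convertFromImage(img);
    return out;
}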
Maybe you should read this example. I think you will need to use the composition mode for this purpose.
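For reference, the composition-mode approach usually looks something like this (my own sketch, not the example the answer refers to; the alpha value is arbitrary):
#include <QColor>
#include <QPainter>
#include <QPixmap>

QPixmap makeTranslucentByComposition(const QPixmap &src, int alpha = 128)
{
    // Paint the source onto a transparent pixmap, then scale its alpha
    // channel with CompositionMode_DestinationIn.
    QPixmap translucent(src.size());
    translucent.fill(Qt::transparent);
    QPainter p(&translucent);
    p.drawPixmap(0, 0, src);
    p.setCompositionMode(QPainter::CompositionMode_DestinationIn);
    p.fillRect(translucent.rect(), QColor(0, 0, 0, alpha));
    p.end();
    return translucent;
}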

How do I convert an RGB byte[] slice to an image.Image in go?

A C++ application running in another process passes a char[] array of three-byte pixels (red, green, blue) to a Go program. I've reconstructed this in Go as a []byte slice using cgo, but I'm unsure how to convert it to an image. I can pass the width or height as well, if that is needed (I would imagine it would be).
I'm aware of the image.RGBA type, but the documentation seems to imply that it isn't just one byte per color, and it assumes an alpha channel, which my very simplistic bitmap does not have. Would converting the 3-byte values I have into something that works with image.RGBA be a solution? If so, how should I do that?
Alternatively, I could do the conversion in C/C++ before sending the values into a format that go recognizes (jpeg, gif, png). Either way works for my uses, but I don't know how to approach either.
The image package is based on interfaces. Just define a new type that implements the image.Image methods.
Your type's ColorModel would return color.RGBAModel, Bounds would return your rectangle's borders, and At would return the color at (x, y), which you can compute if you know the image's dimensions.
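A minimal sketch of such a type (the rgbImage name and its fields are mine, purely for illustration; it assumes a tightly packed width*height*3 buffer):
import (
    "image"
    "image/color"
)

// rgbImage wraps a packed R,G,B byte slice (3 bytes per pixel) and
// implements image.Image.
type rgbImage struct {
    pix           []byte
    width, height int
}

func (m *rgbImage) ColorModel() color.Model { return color.RGBAModel }

func (m *rgbImage) Bounds() image.Rectangle { return image.Rect(0, 0, m.width, m.height) }

func (m *rgbImage) At(x, y int) color.Color {
    i := (y*m.width + x) * 3
    return color.RGBA{R: m.pix[i], G: m.pix[i+1], B: m.pix[i+2], A: 0xff}
}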

Extracting Depth images of Kinect using opencv

Does anyone know the simplest way to extract gray-level depth images from the Kinect using OpenCV and C++? Any source code in this field?
If you use the OpenNI SDK, you can simply point a cv::Mat at the buffer:
//on setup:
xn::DepthGenerator depthGenerator;
xn::DepthMetaData depthMD;
cv::Mat depthWrapper;
//on update loop,
//after context.WaitAnyUpdateAll();
depthGenerator.GetMetaData(depthMD);
depthWrapper = cv::Mat(depthMD.YRes(), depthMD.XRes(), CV_16UC1, (void*) depthMD.Data());
Note that depthWrapper only wraps OpenNI's buffer (depthMD.Data() returns const data), so you need to clone it in order to manipulate it.
The documentation has everything you need. Can't elaborate better than this.
You need to do two things (apart from reading about context, depth generator and initialization of Kinect):
1. Create a Mat of the type CV_16U:
a. context.WaitOneUpdateAll(depth_map);
b. Mdepth_original = Mat(h_depth, w_depth, CV_16U, (void*) depth_map.GetData());
c. copy the Mat, since it will be destroyed during the next read: Mdepth_original.copyTo(depth);
2. Map depth to gray or color. Color seems like a good idea (256^3 levels) but the human eye is more sensitive to changes in luminance. Even with 256 levels you can map 10,000 Kinect levels reasonably well using a histogram equalization technique. The simplest way, though, is to lose precision and just do I(x, y) = 255.0*z(x, y)/z_range (see the sketch below).
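A minimal sketch of that simple mapping (the 10,000 value is an assumption about the depth range in millimetres):
// Scale the 16-bit depth Mat from step 1 down to an 8-bit gray image:
// I(x, y) = 255 * z(x, y) / z_range.
const double z_range = 10000.0;
cv::Mat depth8u;
depth.convertTo(depth8u, CV_8UC1, 255.0 / z_range);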
Here is how histogram equalization is implemented in openNI2:
https://github.com/OpenNI/OpenNI2/blob/master/Samples/Common/OniSampleUtilities.h