Faster than GetPixel()? - c++

How would I replace GetPixel() with something faster?
Currently I am using:
temp = GetPixel(hMonitor, 1, 1);
if (pixelArray[0] != temp)
{
pixelArray[0] = temp;
counter++;
}
Above code is just a simplified example.
This is contained in a for loop that runs over all the pixels on the display. It compares one pixel (temp) against the previously stored pixel in pixelArray and, if it has changed, replaces it. However, I am finding that calling GetPixel() for every pixel on the display takes a long time.
I have been reading other questions of a similar nature such as:
Fastest method of screen capturing
Get Pixel color fastest way?
...but I am not sure which approach is better (GDI or DirectX), nor how I would implement either of them.
Update: Windows GDI (using GetObject to get at an array of the pixels) is what I needed, thank you. This is much, much faster than GetPixel().

I would suggest you retrieve a pointer to the bitmap's pixel data (assuming you have an HBITMAP handle).
This is done via GetObject(), which should return you a BITMAP structure. This is the field you are interested in:
bmBits:
A pointer to the location of the bit values for the bitmap. The
bmBits member must be a pointer to an array of character (1-byte)
values.
Then you can run your per-pixel checking logic directly on that buffer. That will be far faster than calling GetPixel().
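A minimal sketch of that approach, assuming you capture the screen into a 32-bpp DIB section (GetObject only reports bmBits for DIB sections; for other bitmaps you would use GetDIBits instead). The function name and the screenWidth/screenHeight parameters are placeholders for whatever your code already has:
#include <windows.h>
#include <cstdint>

// Grab the whole screen with one BitBlt and walk the resulting pixel buffer.
void CaptureAndScan(int screenWidth, int screenHeight)
{
    HDC hScreenDC = GetDC(nullptr);
    HDC hMemDC    = CreateCompatibleDC(hScreenDC);

    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize        = sizeof(bmi.bmiHeader);
    bmi.bmiHeader.biWidth       = screenWidth;
    bmi.bmiHeader.biHeight      = -screenHeight;        // negative height = top-down rows
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    void* bits = nullptr;
    HBITMAP hBitmap = CreateDIBSection(hScreenDC, &bmi, DIB_RGB_COLORS, &bits, nullptr, 0);
    HGDIOBJ hOld = SelectObject(hMemDC, hBitmap);

    // One BitBlt replaces width * height GetPixel() calls.
    BitBlt(hMemDC, 0, 0, screenWidth, screenHeight, hScreenDC, 0, 0, SRCCOPY);

    // For a DIB section, GetObject() fills bmBits with the same address as 'bits'.
    BITMAP bm = {};
    GetObject(hBitmap, sizeof(bm), &bm);
    const uint32_t* pixels = static_cast<const uint32_t*>(bm.bmBits);

    // Run the comparison from the question over the buffer, e.g.:
    // for (int i = 0; i < screenWidth * screenHeight; ++i)
    //     if (pixelArray[i] != pixels[i]) { pixelArray[i] = pixels[i]; ++counter; }

    SelectObject(hMemDC, hOld);
    DeleteObject(hBitmap);
    DeleteDC(hMemDC);
    ReleaseDC(nullptr, hScreenDC);
}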

Related

Memory Management of update Method in Texture Pixel Manipulation

How is the array of pixels that is passed to the update method of the SFML Texture class managed memory-wise? These are some of my guesses:
A weak pointer is saved inside the texture instance, which means you have to keep your own pointer to the pixel array and manage it yourself.
The array is copied and managed by the texture (which also means that every time the update method is called again, the previously copied data is released).
The second guess would justify this for updating a texture multiple times:
auto newPixels = new sf::Uint8[WIDTH * HEIGHT * 4];
... //do stuff to pixels
texture.update(newPixels);
Here the pixels are reallocated every time the texture is updated. Otherwise (if the pixels are just stored as a weak pointer and not managed/deallocated/allocated by the texture), a different approach would be necessary, where the pixels are managed by the user...
Thanks in advance for any answers :)
SFML is open source. You don't need to take guesses or ask here. You can just read it for yourself:
https://github.com/SFML/SFML/blob/master/src/SFML/Graphics/Texture.cpp#L390
Specifically, the pointer is passed to the OpenGL function glTexSubImage2D, which copies the pixel data into the texture at the moment update() is called. SFML does not keep your pointer, so you own the buffer and can reuse or free it afterwards.
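Since the data is copied on each call, a simple pattern is to own one buffer yourself and reuse it for every update rather than new-ing a fresh array each frame. A minimal sketch, assuming SFML 2.x and placeholder dimensions:
#include <SFML/Graphics.hpp>
#include <vector>

int main()
{
    const unsigned WIDTH = 256, HEIGHT = 256;             // placeholder size

    sf::Texture texture;
    texture.create(WIDTH, HEIGHT);

    // One RGBA buffer, owned by us, reused for every update --
    // no allocation per frame, because update() copies the data.
    std::vector<sf::Uint8> pixels(WIDTH * HEIGHT * 4, 255);

    for (int frame = 0; frame < 10; ++frame)
    {
        // ... write new RGBA values into 'pixels' here ...
        texture.update(pixels.data());                    // copied into the GL texture
    }
}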

C++: How to interpret a byte array representation of an image?

I'm trying to work with this camera SDK. Let's say the camera has a function called CameraGetImageData(BYTE* data), which I assume takes in a byte array, fills it with the image data, and then returns a status code indicating success or failure. The SDK provides no documentation whatsoever (not even code comments), so I'm just guesstimating here. Here's a code snippet of what I think works:
BYTE* data = new BYTE[10000000]; // an array of an arbitrary large size, I'm not
// sure what the exact size needs to be so I
// made it large
CameraGetImageData(data);
// Do stuff here to process/output image data
I've run the code w/ breakpoints in Visual Studio and can confirm that the CameraGetImageData function does indeed modify the array. Now my question is, is there a standard way for cameras to output data? How should I start using this data and what does each byte represent? The camera captures in 8-bit color.
Take pictures of pure red, pure green and pure blue. See what comes out.
Also, I'd make the array 100 million, not 10 million if you've got the memory, at least initially. A 10 megapixel camera using 24 bits per pixel is going to use 30 million bytes, bigger than your array. If it does something crazy like store 16 bits per colour it could take up to 60 million or 80 million bytes.
You could fill this big array with data before passing it. For example fill it with '01234567' repeated. Then it's really obvious what bytes have been written and what bytes haven't, so you can work out the real size of what's returned.
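A small sketch of that sentinel-fill idea; CameraGetImageData comes from the SDK in the question, and the buffer size and return type are guesses:
#include <cstddef>
#include <cstdio>

typedef unsigned char BYTE;                  // stand-in if the SDK header isn't included
extern int CameraGetImageData(BYTE* data);   // the undocumented SDK call

int main()
{
    const std::size_t bufSize = 100000000;   // deliberately oversized
    BYTE* data = new BYTE[bufSize];

    // Fill with a repeating '0'..'7' pattern so untouched bytes are recognisable.
    for (std::size_t i = 0; i < bufSize; ++i)
        data[i] = static_cast<BYTE>('0' + (i % 8));

    CameraGetImageData(data);

    // Scan from the end: the last byte that no longer matches the pattern
    // gives a rough measure of how much the SDK actually wrote.
    std::size_t written = 0;
    for (std::size_t i = bufSize; i-- > 0; )
        if (data[i] != static_cast<BYTE>('0' + (i % 8))) { written = i + 1; break; }

    std::printf("SDK wrote roughly %zu bytes\n", written);
    delete[] data;
}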
I don't think there is a standard, but you can try to identify which values are what by putting solid-colour images in front of the camera, so all pixels are approximately the same colour. Knowing what colour should be stored in each pixel, you can work out how colours are represented in your array. I would go with black, white, red, green and blue images.
But also consider finding an SDK that actually has documentation, because just allocating an arbitrarily large array is really bad design.
You should check the documentation on your camera SDK, since there's no "standard" or "common" way for data output. It can be raw data, it can be RGB data, it can even be already compressed. If the camera vendor doesn't provide any information, you could try to find some libraries that handle most common formats, and try to pass the data you have to see what happens.
Without even knowing the type of the camera, this question is nearly impossible to answer.
If it is a scientific camera, chances are good that it adheres to the IEEE 1394 (aka IIDC or DCAM) standard. I have personally worked with such a camera made by Hamamatsu, using this library to interface with it.
In my case the camera output was just raw data. The camera itself was monochrome and each pixel had a depth resolution of 12 bits, so each pixel intensity was stored as a 16-bit unsigned value in the result array. The size of the array was simply width * height * 2 bytes, where width and height are the image dimensions in pixels and the factor 2 accounts for the 16 bits per pixel. The width and height were known a priori from the chosen camera mode.
If you have the dimensions of the result image, try to dump your byte array into a file and load the result either in Python or Matlab and just try to visualize the content. Another possibility is to load this raw file with an image editor such as ImageJ and hope to get anything out from it.
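The dump itself is tiny; for example, for the 16-bit monochrome case described above (width, height and the captured buffer are assumed to come from your own code):
#include <cstdint>
#include <fstream>
#include <vector>

// Write a raw 16-bit monochrome frame to disk so it can be opened in
// Python/Matlab/ImageJ as a width x height, little-endian uint16 image.
void DumpRawFrame(const std::vector<uint16_t>& frame, const char* path)
{
    std::ofstream out(path, std::ios::binary);
    out.write(reinterpret_cast<const char*>(frame.data()),
              static_cast<std::streamsize>(frame.size() * sizeof(uint16_t)));
}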
Good luck!
I hope this question's solution helps you: https://stackoverflow.com/a/3340944/291372
Actually you've got an array of pixels (assume 1 byte per pixel if your camera captures in 8-bit). What you need is just to determine the width and height; after that you can try to reconstruct a bitmap image from your byte array.

Cropping an 8-bit bitmap by its palette information

I'm currently using C++ to read my 8-bit bitmap and save off its pixel data and colour table. I currently have my colour table stored in an array:
RGBQUAD* colours;
I was wondering how I would go about finding the nearest unique pixel colour in all directions and cropping the bitmap to that pixel. I'm using C++ without any external libraries.
I would recommend using readily available libraries, like ImageMagick, instead of trying to re-implement that particular wheel.
There are only two reasons why you would implement something already implemented that well elsewhere: 1) homework, or 2) you think you can actually do significantly better than the existing code.
It cannot be 1) because there is no "homework" tag, and it cannot be 2) because you wouldn't have to ask, then...
"nearest unique pixel colour" means nearest in color space? In absolute terms (R/G/B) or human sense? So, given #0002FE wou may find #0000FF in your color table?
The "standard" simple C++ method is std::min_element(), which takes a range and a predicate. In your case, that range is your color table and the predicate is the close-ness to the color you want. E.g. [targetColor](RGBQUAD tableEntry) { return abs(RGBdiff(tableEntry, targetColor)); }

A simple PNG wrapper that works. Anybody have a snippet to share?

I'm looking for a way to get a buffer of image data into a PNG file, and a way to get a PNG file into a buffer.
There are just these two things I want to do.
It would be a dead simple wrapper that uses png.h. Well, not exactly dead simple because of the horribly complex libpng API, but the concept of it is.
I tried DevIL before. It is much easier to use than libpng. Still, I have had issues with it. Also, DevIL does too much. I only need lean and mean basic PNG format support, not 20 other formats as well.
Then I found this page. I praised the Pixel Fairy and the Almighty Google for giving me an implementation on a silver platter... Then it turned out this screws up the image: in the processed image every fourth pixel in each scanline goes missing. I am fairly certain from reading the source that this is not meant to happen! It's supposed to zero out red and set green to blue. That didn't happen either.
I have also tried png++. The issue I had with it is that I couldn't get data out of a PNG in a format suitable for loading straight into OpenGL; I would have had to construct another buffer. It just looked ugly, but I will definitely try png++ again before I even think about giving DevIL another shot, because png++ worked, at least. It's also got the header-only aspect going for it. Still, it did produce a bunch of compiler warnings.
Are there any other contenders? Anybody who has worked with directly using libpng would know how to make what I am asking for: one function that takes a filename and fills a 32-bpp buffer and sets two resolution integers; one function that takes a 32-bpp buffer, two resolution integers, and a filename.
Update-edit: I found this. Might be something there.
This tutorial seems to have what you want.
From the link:
//Here's one of the pointers we've defined in the error handler section:
//Array of row pointers. One for every row.
rowPtrs = new png_bytep[imgHeight];
//Allocate a buffer with enough space.
//(Don't use the stack, these blocks get big easily)
//This pointer was also defined in the error handling section, so we can clean it up on error.
data = new char[imgWidth * imgHeight * bitdepth * channels / 8];
//This is the length, in bytes, of one row.
const unsigned int stride = imgWidth * bitdepth * channels / 8;
//A little for-loop here to set all the row pointers to the starting
//addresses for every row in the buffer
for (size_t i = 0; i < imgHeight; i++) {
    //Set the pointer to the data pointer + i times the row stride.
    //Notice that the row order is reversed with q.
    //This is how at least OpenGL expects it,
    //and how many other image loaders present the data.
    png_uint_32 q = (imgHeight - i - 1) * stride;
    rowPtrs[i] = (png_bytep)data + q;
}
//And here it is! The actual reading of the image!
//Read the image data and write it to the addresses pointed to
//by rowPtrs (in other words: our image data buffer)
png_read_image(pngPtr, rowPtrs);
I'd add CImg to the list of options. While it is an image library, its API is not as high-level as most of the others (DevIL/ImageMagick/FreeImage/GIL). It is also header-only.
The image class has simple width, height and data members with public access. Under the hood it uses libpng (if you tell it to with a preprocessor directive). The data is stored as whatever type you chose for the templated image object.
CImg<uint8_t> myRGBA("fname.png");   // needs #include "CImg.h" (types live in namespace cimg_library)
myRGBA._data[0] = 255;               // set red value of first pixel (CImg stores channels planar: all R, then G, then B)
Sean Barrett has written two public-domain, single-file libraries for PNG reading and writing: stb_image.h and stb_image_write.h.
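With those two headers, the pair of functions asked for boils down to roughly this sketch (error handling omitted; stb is told to force the data into 32-bpp RGBA):
// Exactly one translation unit must define the implementation macros.
#define STB_IMAGE_IMPLEMENTATION
#define STB_IMAGE_WRITE_IMPLEMENTATION
#include "stb_image.h"
#include "stb_image_write.h"

// Load a PNG into a freshly allocated 32-bpp RGBA buffer.
// The caller frees the returned pointer with stbi_image_free().
unsigned char* loadPng(const char* filename, int* width, int* height)
{
    int channels = 0;
    return stbi_load(filename, width, height, &channels, 4);   // force RGBA
}

// Write a 32-bpp RGBA buffer out as a PNG; returns true on success.
bool savePng(const char* filename, const unsigned char* rgba, int width, int height)
{
    return stbi_write_png(filename, width, height, 4, rgba, width * 4) != 0;
}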

Converting image to pixmap using ImageMagic libraries

My assignment is to get "images read into pixmaps which you will then convert to texture maps". So for the pixmap part only, hear me out and tell me if I have the right idea and if there's an easier way. Library docs I'm using: http://www.imagemagick.org/Magick++/Documentation.html
Read in image:
Image myimage;
myimage.read( "myimage.gif" );
I think this is the pixmap I need to read 'image' into:
GLubyte pixmap[TextureSize][TextureSize][3];
So I think I need a loop that, for every 'pixmap' pixel index, assigns R,G,B values from the corresponding 'image' pixel indices. I'm thinking the loop body is like this:
pixmap[i][j][0] = myimage.pixelColor(i,j).redQuantum();
pixmap[i][j][1] = myimage.pixelColor(i,j).greenQuantum();
pixmap[i][j][2] = myimage.pixelColor(i,j).blueQuantum();
But I think the above functions return Quantums where I need GLubytes, so can anyone offer help here?
-- OR --
Perhaps I can take care of both the pixmap and texture map by using OpenIL (docs here: http://openil.sourceforge.net/tuts/tut_10/index.htm). Think I could simply call these in sequence?
ilutOglLoadImage(char *FileName);
ilutOglBindTexImage(ILvoid);
You can copy the quantum values returned by pixelColor(x,y) into a ColorRGB and you will get normalized (0.0-1.0) colour values, which are easy to scale to GLubytes -- see the sketch below.
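A rough sketch of that conversion, reusing the names from the question (it assumes the image is at least TextureSize x TextureSize pixels):
#include <Magick++.h>
#include <GL/gl.h>

const int TextureSize = 256;                       // from the question
GLubyte pixmap[TextureSize][TextureSize][3];

void imageToPixmap(Magick::Image& myimage)
{
    for (int i = 0; i < TextureSize; ++i) {
        for (int j = 0; j < TextureSize; ++j) {
            // ColorRGB converts the quantum values to doubles in [0.0, 1.0].
            Magick::ColorRGB c = myimage.pixelColor(i, j);
            pixmap[i][j][0] = static_cast<GLubyte>(c.red()   * 255.0);
            pixmap[i][j][1] = static_cast<GLubyte>(c.green() * 255.0);
            pixmap[i][j][2] = static_cast<GLubyte>(c.blue()  * 255.0);
        }
    }
}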
If you don't have to stick with Magick++ maybe you can try OpenIL, which can load and convert your image to OpenGL texture maps without too much hassle.