How do I read the pixel color values in a PNG with png++? I don't see any way of reading values in the documentation. I need to get all the RGBA values separately and append them to a char array.
can't add a comment, so here goes :)
Actually, you want image[Y][X], since the first [] gets you to the Y-th row, and the second to the X-th column in that row.
Btw, I'm the author of PNG++. Feel free to ask more specific questions on the mailing list or at my private email, or here, if you like. :)
I've never used png++, but from reading the documentation on pixel I think you can access a pixel (X,Y) of png::image<T> image with image[Y][X], and then read the red, green, and blue values via image[Y][X].red, etc.
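As an untested sketch (assuming an RGBA image; the file name is illustrative), collecting every channel into a byte array might look like this:

#include <png++/png.hpp>
#include <cstddef>
#include <vector>

int main()
{
    png::image<png::rgba_pixel> image("input.png"); // example file name

    std::vector<unsigned char> bytes;
    bytes.reserve(image.get_height() * image.get_width() * 4);

    for (std::size_t y = 0; y < image.get_height(); ++y)
        for (std::size_t x = 0; x < image.get_width(); ++x)
        {
            png::rgba_pixel p = image[y][x]; // row index first, then column
            bytes.push_back(p.red);
            bytes.push_back(p.green);
            bytes.push_back(p.blue);
            bytes.push_back(p.alpha);
        }
}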
for (int xx = 0; xx < width / 2; xx++)
{
    for (int yy = 0; yy < height / 2; yy++)
    {
        SDL_Color kolor = getPixel(xx, yy); // we are getting each pixel in the image
        setPixel(xx + width / 2, yy + height / 2, kolor.r, kolor.g, kolor.b);
        //setPixel(xx, yy + height / 2, kolor.r, kolor.g, kolor.b);
        //setPixel(xx + width / 2, yy, kolor.r, kolor.g, kolor.b);
    }
}
I am trying, using a loop, to find the 16 most common colors in an image and get their RGB values.
I've been using a map and trying to do something with a structure, but everything was to no avail.
If you have some ideas about how to find these colors, I'll be very grateful. Thanks
If you had a 4x4 image it would be simple.
Simplify likewise: histogram each RGB channel (0-255, times three), expecting each "popular" color to show up there. Sort the mess into the top 16 used for each channel, again expecting those to be correct. Unless you have wild color gyrations, nothing else is needed.
Check a second time to see if the popular colors actually exist.
Last, you might want to group into "close enough" categories; JPEGs are lossy to a fault, so anything within 4 RGB steps gets grouped together, reducing 256 values per channel to 64, unless colors swing wildly. If you've used a paint program's magic wand, you know tolerance=0 makes huge mistakes; same idea.
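A minimal sketch of that histogram-plus-grouping idea (assuming the pixels are already available as a flat 8-bit RGB array; all names here are illustrative):

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <unordered_map>
#include <utility>
#include <vector>

// pixels: flat 8-bit RGB data, 3 bytes per pixel (assumed layout).
// Returns up to 16 quantized color keys, most frequent first.
std::vector<std::uint32_t> top16Colors(const std::vector<std::uint8_t>& pixels)
{
    std::unordered_map<std::uint32_t, int> counts;
    for (std::size_t i = 0; i + 2 < pixels.size(); i += 3)
    {
        // Drop the low 2 bits of each channel: the "within 4 steps" grouping.
        std::uint32_t key = (std::uint32_t(pixels[i] >> 2) << 12)
                          | (std::uint32_t(pixels[i + 1] >> 2) << 6)
                          |  std::uint32_t(pixels[i + 2] >> 2);
        ++counts[key];
    }

    std::vector<std::pair<std::uint32_t, int>> sorted(counts.begin(), counts.end());
    std::size_t n = std::min<std::size_t>(16, sorted.size());
    std::partial_sort(sorted.begin(), sorted.begin() + n, sorted.end(),
                      [](const std::pair<std::uint32_t, int>& a,
                         const std::pair<std::uint32_t, int>& b)
                      { return a.second > b.second; });

    std::vector<std::uint32_t> top;
    for (std::size_t i = 0; i < n; ++i)
        top.push_back(sorted[i].first); // 6 bits per channel, packed R|G|B
    return top;
}

Dropping the low two bits per channel folds near-identical JPEG colors into the same bucket; widen or narrow the shift to taste.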
Median filter. It will definitely reduce the color count by merging into an average color; sample a 3x3, 4x4, or circularly weighted area.
If all else fails, steal the 216-color web-safe palette and work from there.
SWAG (Scientific Wild-Ass Guess) method; good luck.
Okay, let's try this again,
Is there a way in Python to convert an MD5 hash into an RGB color value?
I have a number of objects, each with a unique string ID. I want to generate an RGB color value from this ID - a color ID of sorts. So, given the same ID, I always get the same color, and only that color.
I haven't worked much with colors, and at this point I'm drawing a bit of a blank. I'd be happy with pointers from folks with relevant knowledge on how to go about this.
This is what I have so far:
import hashlib
hash = hashlib.md5(obj_id).hexdigest()  # obj_id must be bytes in Python 3
int_val = int(hash, 16)
Now what?
After looking around, I found this gist and adapted it for my purposes:
https://gist.github.com/mrkmg/1607621
Posting this here in case it proves to be useful to someone.
I'm trying to work with this camera SDK. The camera has a function called CameraGetImageData(BYTE* data), which I assume takes in a byte array, fills it with the image data, and then returns a status code based on success/failure. The SDK provides no documentation whatsoever (not even code comments), so I'm just guesstimating here. Here's a code snippet of what I think works:
BYTE* data = new BYTE[10000000]; // an arbitrarily large array; I'm not sure
                                 // what the exact size needs to be, so I
                                 // made it large
CameraGetImageData(data);
// Do stuff here to process/output image data
I've run the code w/ breakpoints in Visual Studio and can confirm that the CameraGetImageData function does indeed modify the array. Now my question is, is there a standard way for cameras to output data? How should I start using this data and what does each byte represent? The camera captures in 8-bit color.
Take pictures of pure red, pure green and pure blue. See what comes out.
Also, I'd make the array 100 million, not 10 million if you've got the memory, at least initially. A 10 megapixel camera using 24 bits per pixel is going to use 30 million bytes, bigger than your array. If it does something crazy like store 16 bits per colour it could take up to 60 million or 80 million bytes.
You could fill this big array with data before passing it. For example, fill it with '01234567' repeated. Then it's really obvious which bytes have been written and which haven't, so you can work out the real size of what's returned.
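A sketch of that probing idea, using the CameraGetImageData signature assumed in the question (everything else here is illustrative):

#include <cstddef>
#include <cstdio>

typedef unsigned char BYTE;
int CameraGetImageData(BYTE* data); // the SDK call, as assumed in the question

int main()
{
    const std::size_t size = 100000000; // 100 million bytes, generously sized
    BYTE* data = new BYTE[size];

    // Fill with a known repeating pattern before the call.
    for (std::size_t i = 0; i < size; ++i)
        data[i] = "01234567"[i % 8];

    CameraGetImageData(data);

    // Scan backwards for the last byte that no longer matches the pattern.
    // (A written byte could coincidentally match, so treat this as an estimate.)
    std::size_t end = size;
    while (end > 0 && data[end - 1] == "01234567"[(end - 1) % 8])
        --end;
    std::printf("camera appears to have written %zu bytes\n", end);

    delete[] data;
    return 0;
}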
I don't think there is a standard, but you can try to identify which values are what by putting solid-color images in front of the camera, so all pixels would be approximately the same color. Knowing what color should be stored in each pixel, you can work out how the color is represented in your array. I would go with black, white, red, green, and blue images.
But also consider finding a better SDK that has documentation, because just allocating a huge array and hoping is really bad design.
You should check the documentation on your camera SDK, since there's no "standard" or "common" way for data output. It can be raw data, it can be RGB data, it can even be already compressed. If the camera vendor doesn't provide any information, you could try to find some libraries that handle most common formats, and try to pass the data you have to see what happens.
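If you want to test the "already compressed" possibility, one option (an assumption on my part, not something the SDK suggests) is to feed the buffer to a general-purpose decoder such as the single-header stb_image library:

#define STB_IMAGE_IMPLEMENTATION
#include "stb_image.h"
#include <cstdio>

// Returns true if the buffer decoded as a known format (PNG, JPEG, BMP, ...).
bool probeEncoded(const unsigned char* data, int len)
{
    int w = 0, h = 0, channels = 0;
    unsigned char* pixels = stbi_load_from_memory(data, len, &w, &h, &channels, 0);
    if (!pixels)
        return false; // not a recognized encoded format; likely raw data
    std::printf("decoded: %dx%d, %d channels\n", w, h, channels);
    stbi_image_free(pixels);
    return true;
}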
Without even knowing the type of the camera, this question is nearly impossible to answer.
If it is a scientific camera, chances are good that it adheres to the IEEE 1394 (aka IIDC or DCAM) standard. I have personally worked with such a camera made by Hamamatsu, using this library to interface with it.
In my case the camera output was just raw data. The camera itself was monochrome and each pixel had a depth resolution of 12 bits. Therefore, each pixel intensity was stored as a 16-bit unsigned value in the result array. The size of the array was simply width * height * 2 bytes, where width and height are the image dimensions in pixels and the factor 2 accounts for 16 bits per pixel. The width and height were known a priori from the chosen camera mode.
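For illustration, indexing a pixel in a buffer with that layout might look like this (the dimensions and file name are example values and must match your camera mode; byte order is assumed to match the machine):

#include <cstdint>
#include <cstdio>
#include <vector>

int main()
{
    const int width = 1344, height = 1024; // known a priori from the camera mode

    std::vector<uint16_t> frame(static_cast<std::size_t>(width) * height);
    std::FILE* f = std::fopen("frame.raw", "rb");
    if (!f)
        return 1;
    std::fread(frame.data(), sizeof(uint16_t), frame.size(), f);
    std::fclose(f);

    // Pixel (x, y) in row-major order; 12 significant bits in a 16-bit word.
    int x = 10, y = 20;
    std::printf("intensity at (%d,%d) = %u\n", x, y,
                static_cast<unsigned>(frame[y * width + x]));
    return 0;
}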
If you have the dimensions of the result image, try dumping your byte array into a file, load the result in Python or Matlab, and just try to visualize the content. Another possibility is to load this raw file with an image editor such as ImageJ and hope to get anything out of it.
Good luck!
I hope this question's solution helps you: https://stackoverflow.com/a/3340944/291372
Actually, you've got an array of pixels (assume 1 byte per pixel if your camera captures in 8-bit). What you need is just to determine the width and height; after that you can try to reconstruct a bitmap image from your byte array.
For a game I'm working on, I'd like to compare two sprites in SFML2, such as with an if() statement. For example, I could have a large 1280x1024 image with one gray pixel among all black pixels. I would then have 2 separate sprites, one is the gray pixel alone, and the other is the map. I would crop only the gray pixel from the map and compare the two, if true, do other things.
Do you see what I'm getting at here? Is this possible? If so, how?
I'm with Alex in saying there are smarter ways to check sprites:
1. Compare the file names instead; don't reference a single pixel within an image, because you'd have to load the entire image into memory to do that. At the moment you're loading about 1.3 MB into memory just to check a single pixel.
2. Store all of your resources in a resource manager and reference them via a UID; if a resource has that UID, use that resource.
Number 2 is preferable above all else, but there are many other ways.
Edit: As per the comments, you wouldn't "crop" out the pixel; you would just load the image into memory and use the Image class to get the colour of a pixel at a location. The following is an example:
// In SFML2, copyToImage() returns an sf::Image by value, and methods are lowerCamelCase.
sf::Image map = MapSprite->getTexture()->copyToImage();
if (map.getPixel(666, 666) == sf::Color::Black)
{
    // Funky stuff here
}
NOTE: You mentioned SFML2, so this is from that set of documentation; it may be different for 1.6.
Edit 2: It's been a while since I've used SFML, so hopefully the code snippet will at least give you direction.
I need to create a program that loads a .raw image (a generic 100x100 image), asks the user to select an (x, y) coordinate within the range, and displays the red, green, and blue values for that pixel using the seekg function. I'm at a loss as to how to get the RGB values from the pixel. I've gone through every chapter of the textbook that we've covered so far, and there is nothing about retrieving RGB values.
The code asking for the coordinates and giving an error message if outside the range is working fine. Only when I try to come up with the code for using seekg to get the RGB values do I run into trouble. I've looked at different questions on the site, and there is good information here, but I've not seen any answers using seekg to get the RGB values.
I'm not looking for anyone to produce the code for me, just looking for some guidance and a push in the right direction.
loc = (y * 100 + x) * 3; // code given by professor with 100 being the width of the image
imageRaw.seekg(loc, ios::beg);
And then I'm at a loss.
Any help would be greatly appreciated.
From there, you probably need to read three bytes, which will represent the red, green, and blue values. You haven't told us enough to be sure of the order; green is almost always in the middle, but RGB and BGR are both fairly common.
From a practical viewpoint, for a picture of this size you don't normally want to use seekg at all though. You'd read the entire image into memory, and lookup the values in the vector (or array, if you insist) that stores the data.
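A minimal sketch of the seekg version (assuming a headerless 100x100 file with 3 bytes per pixel in R, G, B order - the order is an assumption, as noted above - and an illustrative file name):

#include <fstream>
#include <iostream>

int main()
{
    const int width = 100;
    int x = 42, y = 17; // user-chosen coordinates (example values)

    std::ifstream imageRaw("image.raw", std::ios::binary);
    if (!imageRaw)
        return 1;

    // Same offset formula as in the question: 3 bytes per pixel, row-major.
    std::streamoff loc = (static_cast<std::streamoff>(y) * width + x) * 3;
    imageRaw.seekg(loc, std::ios::beg);

    unsigned char rgb[3];
    imageRaw.read(reinterpret_cast<char*>(rgb), 3);

    // Assuming R, G, B order; swap indices 0 and 2 if the file turns out to be BGR.
    std::cout << "R=" << static_cast<int>(rgb[0])
              << " G=" << static_cast<int>(rgb[1])
              << " B=" << static_cast<int>(rgb[2]) << '\n';
}

For a 30 KB file, though, reading the whole thing into a std::vector<unsigned char> once and indexing it with the same (y * width + x) * 3 formula is simpler, as the answer says.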