C++ RGB values from pixel selected by user, using seekg - c++

I need to create a program that loads a .raw image (a generic 100x100 image), asks the user to select an (x, y) coordinate within that range, and displays the red, green, and blue values for that pixel using the seekg function. I'm at a loss as to how to get the RGB values from the pixel. I've gone through every chapter of the textbook that we've covered so far, and there is nothing about retrieving RGB values.
The code asking for the coordinates and giving an error message if they are outside the range is working fine. Only when I try to come up with the code for using seekg to get the RGB values do I run into trouble. I've looked at different questions on this site, and there is good information here, but I've not seen any answers that use seekg to get the RGB values.
I'm not looking for anyone to produce the code for me, just looking for some guidance and a push in the right direction.
loc = (y * 100 + x) * 3; // code given by professor with 100 being the width of the image
imageRaw.seekg(loc, ios::beg);
And then I'm at a loss.
Any help would be greatly appreciated.

From there, you probably need to read three bytes, which will represent the red, green, and blue values. You haven't told us enough to be sure of the order; green is almost always in the middle, but RGB and BGR are both fairly common.
From a practical viewpoint, for a picture of this size you don't normally want to use seekg at all, though. You'd read the entire image into memory and look up the values in the vector (or array, if you insist) that stores the data.
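As a minimal sketch of the seekg approach, assuming a headerless 100x100 .raw file stored as interleaved RGB bytes (the file name and coordinates here are placeholders):

#include <fstream>
#include <iostream>

int main() {
    std::ifstream imageRaw("image.raw", std::ios::binary);
    int x = 10, y = 20;                      // user-selected coordinate
    std::streamoff loc = (y * 100 + x) * 3;  // 100 = image width, 3 bytes per pixel
    imageRaw.seekg(loc, std::ios::beg);

    unsigned char rgb[3];
    imageRaw.read(reinterpret_cast<char*>(rgb), 3);
    std::cout << "R: " << int(rgb[0])
              << " G: " << int(rgb[1])
              << " B: " << int(rgb[2]) << '\n';
}

Reading into unsigned char and casting to int for output avoids cout printing the bytes as raw (often unprintable) characters.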

Related

cimg pixel value - numerical

Is there a way to get the int value of a pixel returned with CImg? I'm in the process of building a basic ASCII art program that converts JPGs to character arrays, and I have the entire utility built out, but I cannot find a way to get the unsigned chars converted into the range of ints I need (0-255, although the specifics don't matter so long as it's a predictable interval).
Does anyone have any idea how to get a numerical pixel value from a JPG? (library suggestions or anything else are completely welcome)
Here is the pixel output (the raw pixel data printed directly as chars):
\�_b��}�HaX�gNzԴ�����p��-�u�����lqu��Lߐ_"T������{�y�sricX[[TXgZ]`a~�t91960d�BpvJ0kY#uR!BpMWb\W?j"#���dCy2+4?ڽ�TT<Tght%P%y;mhͬ�����8#1�H��)����:4lu���CY|��u&<_��ī��������������ȿF�����LP:����N���-�Q�+�2;E3(�SdRO6��NI16j{#�0((
It's already been converted to black and white, so even accessing the numerical value of one color channel off the CImg would be fine. I just can't seem to get any kind of intelligible/manipulable output from the image, even though the image itself is exactly what I'm looking for.
Cast it to an int using (int)img(x,y) and ignore the extra channels.
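For what it's worth, a short sketch of that with CImg (the file name and coordinates are placeholders; img(x, y, z, c) addresses plane z and channel c):

#include "CImg.h"
#include <iostream>
using namespace cimg_library;

int main() {
    CImg<unsigned char> img("input.jpg");
    int x = 5, y = 7;
    // For an image already converted to black and white, every channel
    // holds the same value, so channel 0 cast to int is the 0-255 intensity.
    int value = (int)img(x, y, 0, 0);
    std::cout << value << '\n';
}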

how to find the 16 most common colors in an image with extension BMP

for(int xx = 0; xx < width/2; xx++)
{
    for(int yy = 0; yy < height/2; yy++)
    {
        SDL_Color kolor = getPixel(xx, yy); // get each pixel in this quarter of the img
        setPixel(xx + width/2, yy + height/2, kolor.r, kolor.g, kolor.b);
        //setPixel(xx, yy + height/2, kolor.r, kolor.g, kolor.b);
        //setPixel(xx + width/2, yy, kolor.r, kolor.g, kolor.b);
    }
}
I am trying, using a loop, to find the 16 most common colors in an img and get their RGB values.
I've been using a map and trying to do something with a structure, but everything was to no avail.
If you have some ideas about how to find these colors, I'll be very grateful. Thanks
If you had a 4x4 img it would be simple.
Simplify likewise: histogram each RGB level (0-255, times three channels), expecting each "popular" color to show up there. Sort the mess into the top 16 used for each channel, again expecting those to be correct. Unless you have wild color gyrations, nothing else is needed.
Check a second time to see whether the popular colors actually exist in the image.
Last, you might want to group into "close enough" categories; JPEGs are lossy to a fault, so anything within 4 RGB steps gets grouped together, reducing the 0-255 range per channel to 64 levels, unless colors swing widely. If you've used a paint program's magic wand, you know tolerance=0 makes huge mistakes; same idea.
A median filter will also reduce the color count by merging neighborhoods into an average color; sample a 3x3, 4x4, or circular weighted area.
If all else fails, steal the 256-color web-safe palette and work from there.
SWAG (Scientific Wild Ass Guess) method; good luck.
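A minimal sketch of the quantize-and-count idea, assuming the pixels are already available as a flat interleaved RGB buffer (the function name and layout are illustrative, not part of SDL):

#include <algorithm>
#include <cstdint>
#include <map>
#include <vector>

// Returns the 16 most frequent colors after quantizing each channel to
// 64 levels (dropping the two low bits), which lumps "close enough"
// shades together as suggested above.
std::vector<uint32_t> top16Colors(const std::vector<uint8_t>& rgb) {
    std::map<uint32_t, int> counts;
    for (size_t i = 0; i + 2 < rgb.size(); i += 3) {
        uint32_t key = (uint32_t(rgb[i]     >> 2) << 12) |
                       (uint32_t(rgb[i + 1] >> 2) << 6)  |
                        uint32_t(rgb[i + 2] >> 2);
        ++counts[key];
    }
    std::vector<std::pair<int, uint32_t>> byFreq;
    for (const auto& kv : counts)
        byFreq.push_back({kv.second, kv.first});
    std::sort(byFreq.rbegin(), byFreq.rend());   // most frequent first
    std::vector<uint32_t> top;
    for (size_t i = 0; i < byFreq.size() && i < 16; ++i)
        top.push_back(byFreq[i].second);         // decode: r = ((key >> 12) & 63) << 2, etc.
    return top;
}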

Disparity Map Block Matching

I am writing a disparity matching algorithm using block matching, but I am not sure how to find the corresponding pixel values in the secondary image.
Given a square window of some size, what techniques exist to find the corresponding pixels? Do I need to use feature matching algorithms or is there a simpler method, such as summing the pixel values and determining whether they are within some threshold, or perhaps converting the pixel values to binary strings where the values are either greater than or less than the center pixel?
I'm going to assume you're talking about Stereo Disparity, in which case you will likely want to use a simple Sum of Absolute Differences (read that wiki article before you continue here). You should also read this tutorial by Chris McCormick before you read more here.
side note: SAD is not the only method, but it's really common and should solve your problem.
You already have the right idea. Make windows, move windows, sum pixels, find minimums. So I'll give you what I think might help:
To start:
If you have color images, first you will want to convert them to black and white. In Python you might use a simple function like this per pixel, where x is a pixel containing RGB values.
def rgb_to_bw(x):
    return int(x[0]*0.299 + x[1]*0.587 + x[2]*0.114)
You will want this to be black and white to make the SAD easier to compute. If you're wondering why you don't lose significant information from this, you might be interested in learning what a Bayer filter is. The Bayer filter, which is typically RGGB, also explains the multiplication ratios for the red, green, and blue portions of the pixel.
Calculating the SAD:
You already mentioned that you have a window of some size, which is exactly the right approach. Let's say this window is n x n in size. You would have some window WL in your left image and some window WR in your right image. The idea is to find the pair that has the smallest SAD.
So, for each left-window pixel pl at some location (x, y) in the window, you would take the absolute value of its difference from the right-window pixel pr, also located at (x, y). You would also keep a running value, which is the sum of these absolute differences. In pseudocode:
SAD = 0
from x = 0 to n:
    from y = 0 to n:
        SAD = SAD + |pl - pr|
After you calculate the SAD for this pair of windows, WL and WR you will want to "slide" WR to a new location and calculate another SAD. You want to find the pair of WL and WR with the smallest SAD - which you can think of as being the most similar windows. In other words, the WL and WR with the smallest SAD are "matched". When you have the minimum SAD for the current WL you will "slide" WL and repeat.
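A minimal C++ sketch of this inner computation, assuming grayscale images stored row-major in vectors (all names here are illustrative):

#include <cstdint>
#include <cstdlib>
#include <vector>

// SAD between the n x n window at column xl in the left image and the
// n x n window at column xr in the right image, both starting at row y.
// Images are grayscale, row-major, 'width' pixels wide.
int windowSAD(const std::vector<uint8_t>& left, const std::vector<uint8_t>& right,
              int width, int y, int xl, int xr, int n) {
    int sad = 0;
    for (int dy = 0; dy < n; ++dy)
        for (int dx = 0; dx < n; ++dx)
            sad += std::abs(int(left[(y + dy) * width + xl + dx]) -
                            int(right[(y + dy) * width + xr + dx]));
    return sad;
}

Sliding WR then just means calling this with different xr values along the same scanline and keeping the xr that minimizes the result; |xl - xr| at that minimum is the disparity.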
Disparity is calculated by the distance between the matched WL and WR. For visualization, you can scale this distance to be between 0-255 and output that to another image. I posted 3 images below to show you this.
Typical results: a left image, a right image, and the calculated disparity (from the left image). (The three example images are not reproduced here.)
You can get test images here: http://vision.middlebury.edu/stereo/data/scenes2003/

C++: How to interpret a byte array representation of an image?

I'm trying to work with this camera SDK. Let's say the camera has a function called CameraGetImageData(BYTE* data), which I assume takes in a byte array, modifies it with the image data, and then returns a status code based on success/failure. The SDK provides no documentation whatsoever (not even code comments), so I'm just guesstimating here. Here's a code snippet of what I think works:
BYTE* data = new BYTE[10000000]; // an arbitrarily large array; I'm not sure
                                 // what the exact size needs to be, so I
                                 // made it large
CameraGetImageData(data);
// Do stuff here to process/output image data
I've run the code w/ breakpoints in Visual Studio and can confirm that the CameraGetImageData function does indeed modify the array. Now my question is, is there a standard way for cameras to output data? How should I start using this data and what does each byte represent? The camera captures in 8-bit color.
Take pictures of pure red, pure green and pure blue. See what comes out.
Also, I'd make the array 100 million, not 10 million, if you've got the memory, at least initially. A 10-megapixel camera using 24 bits per pixel is going to use 30 million bytes, bigger than your array. If it does something crazy like storing 16 bits per colour, it could take up to 60 or 80 million bytes.
You could fill this big array with data before passing it. For example, fill it with '01234567' repeated. Then it's really obvious which bytes have been written and which haven't, so you can work out the real size of what's returned.
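A hedged sketch of that sentinel trick, reusing the question's CameraGetImageData and an assumed buffer size (a real image byte can coincidentally match the pattern, so treat the result as approximate):

#include <cstddef>

const size_t bufSize = 100000000;   // assumed size, per the suggestion above

size_t probeWrittenSize(BYTE* data) {
    const char pattern[] = "01234567";
    for (size_t i = 0; i < bufSize; ++i)
        data[i] = pattern[i % 8];                // pre-fill with the sentinel
    CameraGetImageData(data);                    // SDK call from the question
    size_t written = bufSize;
    // Scan back from the end: trailing bytes still matching the sentinel
    // were (almost certainly) never written by the SDK.
    while (written > 0 && data[written - 1] == pattern[(written - 1) % 8])
        --written;
    return written;
}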
I don't think there is a standard, but you can try to identify which values are what by putting some solid-color images in front of the camera, so all pixels would be approximately the same color. Having an idea of what color should be stored in each pixel, you may understand how the color is represented in your array. I would go with black, white, red, green, and blue images.
But also consider finding a better SDK that has documentation, because just guessing at a big array size is really bad design.
You should check the documentation of your camera SDK, since there's no "standard" or "common" way for data output. It can be raw data, it can be RGB data, it can even be already compressed. If the camera vendor doesn't provide any information, you could try to find some libraries that handle the most common formats and pass them the data you have to see what happens.
Without even knowing the type of the camera, this question is nearly impossible to answer.
If it is a scientific camera, chances are good that it adheres to the IEEE 1394 (aka IIDC or DCAM) standard. I have personally worked with such a camera made by Hamamatsu, using this library to interface with it.
In my case the camera output was just raw data. The camera itself was monochrome, and each pixel had a depth resolution of 12 bits. Therefore, each pixel intensity was stored as a 16-bit unsigned value in the result array. The size of the array was simply width * height * 2 bytes, where width and height are the image dimensions in pixels and the factor 2 accounts for the 16 bits per pixel. The width and height were known a priori from the chosen camera mode.
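As a small illustration of that layout (the dimensions here are made up; in the real case they come from the chosen camera mode):

#include <cstdint>
#include <iostream>
#include <vector>

int main() {
    int width = 640, height = 480;                // known a priori in practice
    std::vector<uint16_t> frame(width * height);  // width * height * 2 bytes total
    // ... copy the camera's raw bytes into 'frame' ...
    int x = 100, y = 50;
    uint16_t intensity = frame[y * width + x];    // 12-bit value in a 16-bit slot
    std::cout << intensity << '\n';
}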
If you have the dimensions of the result image, try dumping your byte array into a file and loading the result in Python or Matlab to visualize the content. Another possibility is to load this raw file with an image editor such as ImageJ and see whether anything recognizable comes out.
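Dumping the buffer is a one-liner; this sketch (file name arbitrary) produces a raw file that ImageJ can open via File > Import > Raw... once you supply the dimensions:

#include <cstddef>
#include <cstdio>

void dumpRaw(const unsigned char* data, size_t size) {
    if (FILE* f = std::fopen("frame.raw", "wb")) {
        std::fwrite(data, 1, size, f);   // write the bytes verbatim
        std::fclose(f);
    }
}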
Good luck!
I hope this question's solution helps you: https://stackoverflow.com/a/3340944/291372
Actually you've got an array of pixels (assume 1 byte per pixel if your camera captures in 8-bit). What you need is just to determine the width and height. After that you can try to restore a bitmap image from your byte array.

PNG++ Read Pixel Color Values

How do I read the pixel color values in a PNG with png++? I don't see any way of reading values in the documentation. I need to get all the RGBA values separately and append them to a char array.
can't add a comment, so here goes :)
Actually, you should want image[Y][X], since the first [] gets you to the Y-th row and the second to the X-th column in that row.
Btw, I'm the author of PNG++. Feel free to ask more specific questions on the mailing list or at my private email, or here, if you like. :)
I've never used png++, but from reading the documentation on pixels I think you can access pixel (X, Y) of a png::image<T> image with image[Y][X] and then access the red, green, and blue values via image[Y][X].red, etc.
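Putting the two answers together, a hedged sketch of collecting the RGBA values into a byte array (the file name is a placeholder; png::rgba_pixel exposes red, green, blue, and alpha members):

#include <png++/png.hpp>
#include <vector>

int main() {
    png::image<png::rgba_pixel> image("input.png");
    std::vector<unsigned char> data;
    for (size_t y = 0; y < image.get_height(); ++y)
        for (size_t x = 0; x < image.get_width(); ++x) {
            png::rgba_pixel p = image[y][x];   // row index first, as noted above
            data.push_back(p.red);
            data.push_back(p.green);
            data.push_back(p.blue);
            data.push_back(p.alpha);
        }
}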