I'm currently using C++ to read my 8-bit bitmap and save off its pixel data and colour table. I have the colour table stored in an array:
RGBQUAD* colours;
I was wondering how I would go about finding the nearest unique pixel colour in all directions and cropping the bitmap to that pixel. I'm using C++ without any external libraries.
I would recommend using readily available libraries, like ImageMagick, instead of trying to re-implement that particular wheel.
There are only two reasons why you would implement something already implemented that well elsewhere: 1) homework, or 2) you think you can actually do significantly better than the existing code.
It cannot be 1) because there is no "homework" tag, and it cannot be 2) because you wouldn't have to ask, then...
"nearest unique pixel colour" means nearest in color space? In absolute terms (R/G/B) or human sense? So, given #0002FE wou may find #0000FF in your color table?
The "standard" simple C++ method is std::min_element(), which takes a range and a predicate. In your case, that range is your color table and the predicate is the close-ness to the color you want. E.g. [targetColor](RGBQUAD tableEntry) { return abs(RGBdiff(tableEntry, targetColor)); }
for(int xx=0; xx<width/2; xx++)
{
    for(int yy=0; yy<height/2; yy++)
    {
        SDL_Color kolor = getPixel(xx,yy); //we are getting each pixel in img
        setPixel(xx+width/2,yy+height/2,kolor.r,kolor.g,kolor.b);
        //setPixel(xx,yy+height/2,kolor.r,kolor.g,kolor.b);
        //setPixel(xx+width/2,yy,kolor.r,kolor.g,kolor.b);
    }
}
I am trying, using a loop, to find the 16 most common colors in an image and get their RGB values.
I've been using a map and trying to do something with a structure, but everything was to no avail.
If you have some ideas about how to find these colors, I'll be very grateful. Thanks
If you had a 4x4 img it would be simple.
Simplify likewise: histogram each RGB level (0-255 for each of the three channels), expecting each "popular" color to show up in it. Sort the result and take the top 16 most-used entries, again expecting those to be correct. Unless you have wild color gyrations, you need nothing else (a rough sketch follows at the end of this answer).
Check a second time to see whether the popular colors actually exist in the image.
Last, you might want to group colors into "close enough" categories: JPEGs are lossy to a fault, so anything within 4 RGB steps gets grouped together, reducing 256 values per channel to 64, unless the colors swing widely. If you've used a paint program's magic wand, you know tolerance=0 makes huge mistakes; same idea.
A median filter will also reduce the color count by merging neighboring pixels into one representative color; sample a 3x3, 4x4, or circular weighted area.
If all else fails, steal the 216-color web-safe palette and work from there.
SWAG (Scientific Wild-Ass Guess) method, good luck.
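A rough sketch of the histogram-and-sort idea in C++, assuming the pixels have already been read into packed 0xRRGGBB values; topColors, quantShift, and the packing itself are assumptions made here for illustration:

#include <algorithm>
#include <cstdint>
#include <map>
#include <vector>

// Count how often each color occurs, then keep the 16 most frequent.
// quantShift > 0 groups "close enough" colors (e.g. 2 folds 0-255 down to 0-63 per channel).
std::vector<uint32_t> topColors(const std::vector<uint32_t>& pixels,
                                int quantShift = 0, std::size_t wanted = 16)
{
    std::map<uint32_t, int> histogram;
    for (uint32_t p : pixels)
    {
        uint32_t r = ((p >> 16) & 0xFF) >> quantShift;
        uint32_t g = ((p >> 8)  & 0xFF) >> quantShift;
        uint32_t b = ( p        & 0xFF) >> quantShift;
        ++histogram[(r << 16) | (g << 8) | b];
    }

    std::vector<std::pair<uint32_t, int>> byCount(histogram.begin(), histogram.end());
    std::sort(byCount.begin(), byCount.end(),
              [](const std::pair<uint32_t, int>& a, const std::pair<uint32_t, int>& b)
              { return a.second > b.second; });

    std::vector<uint32_t> result;
    for (std::size_t i = 0; i < wanted && i < byCount.size(); ++i)
    {
        uint32_t q = byCount[i].first;               // re-expand quantised channels to 0-255
        uint32_t r = ((q >> 16) & 0xFF) << quantShift;
        uint32_t g = ((q >> 8)  & 0xFF) << quantShift;
        uint32_t b = ( q        & 0xFF) << quantShift;
        result.push_back((r << 16) | (g << 8) | b);
    }
    return result;
}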
I'm trying to find ways to copy multidimensional arrays from host to device in OpenCL and thought an approach was to use an image... which can be a 1, 2, or 3 dimensional object. However I'm confused, because when reading a pixel from an image, they are using vector datatypes. Normally I would think double pointer, but it doesn't sound like that is what is meant by vector datatypes. Anyway, here are my questions:
1) What is actually meant by vector datatype, and why wouldn't we just specify 2 or 3 indices when denoting pixel coordinates? It looks like a single value such as float2 is being used to denote coordinates, but that makes no sense to me. I'm looking at the functions read_imageui and read_image.
2) Can the input image just be a subset of the entire image, and the sampler a subset of the input image? I don't understand how the coordinates are actually specified here either, since read_image() only seems to take a single value for the input and a single value for the sampler.
3) If doing linear algebra, should I just bite the bullet and translate the 1-D array data from the buffer into multi-dimensional arrays in OpenCL?
4) I'm still interested in images, so even if what I want to do is not best for images, could you still explain questions 1 and 2?
Thanks!
EDIT
I wanted to refine my question and ask: in the following Khronos documentation they define...
int4 read_imagei (
image2d_t image,
sampler_t sampler,
int2 coord)
But nowhere can I find what image2d_t's definition or structure is supposed to be. The same thing for sampler_t and int2 coord. They seem like structs to me, or pointers to structs, since OpenCL is supposed to be based on ANSI C, but what are the fields of these structs, and how do I write the coord with what looks like a scalar?! I've seen the notation (int2)(x,y), but that's not ANSI C, that looks like Scala, haha. Things seem conflicting to me. Thanks again!
In general you can read from images in three different ways:
direct pixel access, no sampling
sampling, normalized coordinates
sampling, integer coordinates
The first one is what you want, that is, you pass integer pixel coordinates like (10, 43) and it will return the contents of the image at that point, with no filtering whatsoever, as if it were a memory buffer. You can use the read_image*() family of functions which take no sampler_t param.
The second one is what most people want from images, you specify normalized image coords between 0 and 1, and the return value is the interpolated image color at the specified point (so if your coordinates specify a point in between pixels, the color is interpolated based on surrounding pixel colors). The interpolation, and the way out-of-bounds coordinates are handled, are defined by the configuration of the sampler_t parameter you pass to the function.
The third one is the same as the second one, except the texture coordinates are not normalized, and the sampler needs to be configured accordingly. In some sense the third way is closer to the first, and the only additional feature it provides is the ability to handle out-of-bounds pixel coordinates (for instance, by wrapping or clamping them) instead of you doing it manually.
Finally, the different versions of each function, e.g. read_imagef, read_imagei, read_imageui are to be used depending on the pixel format of your image. If it contains floats (in each channel), use read_imagef, if it contains signed integers (in each channel), use read_imagei, etc...
Writing to an image on the other hand is straightforward, there are write_image{f,i,ui}() functions that take an image object, integer pixel coordinates and a pixel color, all very easy.
Note that you cannot read and write to the same image in the same kernel! (I don't know if recent OpenCL versions have changed that). In general I would recommend using a buffer if you are not going to be using images as actual images (i.e. input textures that you sample or output textures that you write to only once at the end of your kernel).
About the image2d_t, sampler_t types, they are OpenCL "pseudo-objects" that you can pass into a kernel from C (they are reserved types). You send your image or your sampler from the C side into clSetKernelArg, and the kernel gets back a sampler_t or an image2d_t in the kernel's parameter list (just like you pass in a buffer object and it gets a pointer). The objects themselves cannot be meaningfully manipulated inside the kernel, they are just handles that you can send into the read_image/write_image functions, along with a few others.
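To make the notation concrete, here is a small kernel sketch (the kernel name and the copy operation are made up for illustration). It declares a sampler with integer, non-normalized coordinates, reads a pixel with read_imageui (so it assumes an unsigned-integer image format), and writes it to a second image; (int2)(x, y) is simply OpenCL C's literal syntax for building a two-component vector, not Scala:

// Sketch only: copies src into dst, one work-item per pixel.
__constant sampler_t smp = CLK_NORMALIZED_COORDS_FALSE |
                           CLK_ADDRESS_CLAMP_TO_EDGE |
                           CLK_FILTER_NEAREST;

__kernel void copy_image(__read_only image2d_t src, __write_only image2d_t dst)
{
    int2 coord = (int2)((int)get_global_id(0), (int)get_global_id(1)); // vector literal
    uint4 pixel = read_imageui(src, smp, coord);                       // 4-component vector: R, G, B, A
    write_imageui(dst, coord, pixel);
}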
As for the "actual" low-level difference between images and buffers, GPU's often have specially reserved texture memory that is highly optimized for "read often, write once" access patterns, with special texture sampling hardware and texture caches to optimize scatter reads, mipmaps, etc..
On the CPU there is probably no underlying difference between an image and a buffer, and your runtime likely implements both as memory arrays while enforcing image semantics.
I am attempting to write a piece of code that is supposed to map data to RGB values, and one of the types of visualizations I am attempting to use is a diverging color map.
I am not exactly sure what the best way is to go about applying the colors. The current algorithm I am using is:
//F is the data point being checked
if(F <= .5){
    RGB[0] = F*510;
    RGB[1] = F*510;
    RGB[2] = F*254 + 128;
}else{
    RGB[0] = 255 - (F-.5)*254;
    RGB[1] = 255 - (F-.5)*510;
    RGB[2] = 255 - (F-.5)*510;
}
Where the key points for the curve are:
F=0: (0,0,128)
F=0.5: (255,255,255)
F=1: (128, 0, 0)
Are there any suggested algorithms out there for use instead of this, or is this hacked together piecewise function alright?
This is the image generated by this current algorithm.
I think you should test your function on a gradient bar, as it would be easier to see the transition 'speed' on linear data.
Here is a really good article for using the diverging colour maps: http://www.sandia.gov/~kmorel/documents/ColorMaps/
It describes the mathematics behind it. I know it seems overkill to go through the Lab and Msh colour spaces for such a simple task, but if you want good-quality colour maps it's really worth it.
Other than that, I don't know of any 'manual' implementation of the function (i.e. one not using the already-complex functions from Matlab or R).
I think it may be more useful to use the HSV color space as opposed to RGB, and show your data using the Hue component. That way all the values of your function will map to a nice rainbow color and will be evenly saturated.
From the provided links you should be able to derive the formula for converting a Hue value to RGB; a rough sketch follows.
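A sketch of that approach, assuming the data value is already normalized to [0, 1] and mapping it onto hues from 0° (red) to 240° (blue) at full saturation and value; hueToRGB is a name made up here, and it is just the standard HSV-to-RGB conversion specialized for S = V = 1:

#include <cmath>

void hueToRGB(double f, unsigned char rgb[3])
{
    double h = f * 240.0 / 60.0;                          // hue sector, 0..4
    double x = 1.0 - std::fabs(std::fmod(h, 2.0) - 1.0);  // rising/falling secondary channel
    double r = 0.0, g = 0.0, b = 0.0;

    switch (static_cast<int>(h)) {
        case 0:  r = 1; g = x; b = 0; break;   // red    -> yellow
        case 1:  r = x; g = 1; b = 0; break;   // yellow -> green
        case 2:  r = 0; g = 1; b = x; break;   // green  -> cyan
        case 3:  r = 0; g = x; b = 1; break;   // cyan   -> blue
        default: r = x; g = 0; b = 1; break;   // h == 4 exactly (f == 1) stays blue
    }

    rgb[0] = static_cast<unsigned char>(r * 255);
    rgb[1] = static_cast<unsigned char>(g * 255);
    rgb[2] = static_cast<unsigned char>(b * 255);
}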
I'm trying to work with this camera SDK, and let's say the camera has this function called CameraGetImageData(BYTE* data), which I assume takes in a byte array, modifies it with the image data, and then returns a status code based on success/failure. The SDK provides no documentation whatsoever (not even code comments), so I'm just guesstimating here. Here's a code snippet of what I think works:
BYTE* data = new BYTE[10000000]; // an array of an arbitrary large size, I'm not
// sure what the exact size needs to be so I
// made it large
CameraGetImageData(data);
// Do stuff here to process/output image data
I've run the code w/ breakpoints in Visual Studio and can confirm that the CameraGetImageData function does indeed modify the array. Now my question is, is there a standard way for cameras to output data? How should I start using this data and what does each byte represent? The camera captures in 8-bit color.
Take pictures of pure red, pure green and pure blue. See what comes out.
Also, I'd make the array 100 million, not 10 million if you've got the memory, at least initially. A 10 megapixel camera using 24 bits per pixel is going to use 30 million bytes, bigger than your array. If it does something crazy like store 16 bits per colour it could take up to 60 million or 80 million bytes.
You could fill this big array with data before passing it, for example fill it with '01234567' repeated. Then it's really obvious which bytes have been written and which haven't, so you can work out the real size of what's returned.
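A hedged sketch of that trick in C++; CameraGetImageData is the undocumented SDK call from the question, and the pattern and buffer size are arbitrary choices:

#include <cstddef>

typedef unsigned char BYTE;
int CameraGetImageData(BYTE* data);      // the undocumented SDK call from the question

std::size_t probeImageSize()
{
    const std::size_t bufSize = 100000000;   // 100 million bytes, generously oversized
    BYTE* data = new BYTE[bufSize];

    // Pre-fill with a known repeating pattern so untouched bytes are recognisable.
    const char pattern[] = "01234567";
    for (std::size_t i = 0; i < bufSize; ++i)
        data[i] = static_cast<BYTE>(pattern[i % 8]);

    CameraGetImageData(data);

    // Scan backwards for the last byte that no longer matches the pattern;
    // that gives an estimate of how many bytes the SDK actually wrote.
    std::size_t written = 0;
    for (std::size_t i = bufSize; i > 0; --i)
    {
        if (data[i - 1] != static_cast<BYTE>(pattern[(i - 1) % 8]))
        {
            written = i;
            break;
        }
    }

    delete[] data;
    return written;
}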
I don't think there is a standard, but you can try to identify which values are what by putting some solid-color images in front of the camera, so all pixels would be approximately the same color. Having an idea of what color should be stored in each pixel, you may understand how the color is represented in your array. I would go with black, white, red, green, and blue images.
But also consider finding a better SDK that has documentation, because just allocating a huge array is really bad design.
You should check the documentation on your camera SDK, since there's no "standard" or "common" way for data output. It can be raw data, it can be RGB data, it can even be already compressed. If the camera vendor doesn't provide any information, you could try to find some libraries that handle most common formats, and try to pass the data you have to see what happens.
Without even knowing the type of the camera, this question is nearly impossible to answer.
If it is a scientific camera, chances are good that it adheres to the IEEE 1394 (aka IIDC or DCAM) standard. I have personally worked with such a camera made by Hamamatsu using this library to interface with the camera.
In my case the camera output was just raw data. The camera itself was monochrome and each pixel had a depth resolution of 12 bits. Therefore, each pixel intensity was stored as a 16-bit unsigned value in the result array. The size of the array was simply width * height * 2 bytes, where width and height are the image dimensions in pixels and the factor 2 is for 16 bits per pixel. The width and height were known a priori from the chosen camera mode.
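A small sketch of how such a raw buffer could be reinterpreted, assuming width and height are known from the camera mode and the 16-bit values are stored little-endian (toPixels is a name made up here):

#include <cstddef>
#include <cstdint>
#include <vector>

std::vector<uint16_t> toPixels(const unsigned char* raw, int width, int height)
{
    std::vector<uint16_t> pixels(static_cast<std::size_t>(width) * height);
    for (std::size_t i = 0; i < pixels.size(); ++i)
        pixels[i] = static_cast<uint16_t>(raw[2 * i] | (raw[2 * i + 1] << 8)); // low byte first
    return pixels;
}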
If you have the dimensions of the result image, try to dump your byte array into a file and load the result in either Python or Matlab and just try to visualize the content. Another possibility is to load this raw file with an image editor such as ImageJ and hope to get something out of it.
Good luck!
I hope this question's answer helps you: https://stackoverflow.com/a/3340944/291372
Actually you've got an array of pixels (assume 1 byte per pixel if your camera captures in 8-bit). What you need is just to determine the width and height. After that you can try to restore a bitmap image from your byte array.
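For example, if the camera really does produce one grayscale byte per pixel, a quick way to check a guessed width and height is to dump the buffer as a binary PGM file and open it in any image viewer (a hedged sketch; writePGM is a name made up here):

#include <fstream>

void writePGM(const char* path, const unsigned char* data, int width, int height)
{
    std::ofstream out(path, std::ios::binary);
    out << "P5\n" << width << " " << height << "\n255\n";   // binary PGM header
    out.write(reinterpret_cast<const char*>(data),
              static_cast<std::streamsize>(width) * height);
}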
For a game I'm working on, I'd like to compare two sprites in SFML2, such as with an if() statement. For example, I could have a large 1280x1024 image with one gray pixel among all black pixels. I would then have 2 separate sprites, one is the gray pixel alone, and the other is the map. I would crop only the gray pixel from the map and compare the two, if true, do other things.
Do you see what I'm getting at here? Is this possible? If so, how?
I'm with Alex in saying there are smarter ways to check sprites.
Compare the file names instead; don't reference a single pixel within an image, because you have to load the entire image into memory to do that. At the moment you are loading 1.3 MB into memory just to check a single pixel.
Store all of your resources in a Resource Manager and reference them via a UID; if a resource has that UID, use that resource.
Number 2 is preferable above all else, but there are many other ways (a rough sketch of the idea follows).
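A minimal sketch of that resource-manager idea (the class and method names are made up here, and it assumes SFML2's sf::Texture::loadFromFile): every texture is loaded once and keyed by a string UID, so "is this the same sprite?" becomes a UID comparison rather than a pixel comparison.

#include <map>
#include <stdexcept>
#include <string>
#include <SFML/Graphics.hpp>

class ResourceManager
{
public:
    // Returns the texture registered under uid, loading it from file on first use.
    const sf::Texture& get(const std::string& uid, const std::string& file)
    {
        std::map<std::string, sf::Texture>::iterator it = textures.find(uid);
        if (it != textures.end())
            return it->second;

        sf::Texture tex;
        if (!tex.loadFromFile(file))
            throw std::runtime_error("failed to load " + file);
        return textures[uid] = tex;
    }

private:
    std::map<std::string, sf::Texture> textures;
};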
Edit: As per comments, you wouldn't "crop" out the pixel; you would just load the image into memory and use the Image class to get the colour of a pixel at a location. The following would be an example:
sf::Image map = MapSprite->getTexture()->copyToImage();
if (map.getPixel(666, 666) == sf::Color::Black)
{
    //Funky stuff here
}
NOTE: You mentioned SFML2, so this is from that set of documentation; it may be different for 1.6.
Edit2: It's been a while since I've used SFML, so hopefully the code snippet will at least give you some direction.