For a game I'm working on, I'd like to compare two sprites in SFML2, such as with an if() statement. For example, I could have a large 1280x1024 image with one gray pixel among all black pixels. I would then have 2 separate sprites, one is the gray pixel alone, and the other is the map. I would crop only the gray pixel from the map and compare the two, if true, do other things.
Do you see what I'm getting at here? Is this possible? If so, how?
I'm with Alex in saying there are smarter ways to check sprites.
1. Compare the file names; don't reference a single pixel within an image, because you have to load the entire image into memory to do that. At the moment you are loading 1.3 MB into memory just to check a single pixel.
2. Store all of your resources in a Resource Manager and reference them via a UID; if a resource with that UID already exists, use that resource.
Number 2 is preferable above all else, but there are many other ways
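For example, a minimal sketch of the UID idea could look like this (the ResourceManager class and its method names here are illustrative, not part of SFML):

#include <SFML/Graphics.hpp>
#include <map>
#include <string>

// Illustrative resource manager: each texture is loaded once and referenced by a UID.
class ResourceManager
{
public:
    const sf::Texture& load(const std::string& uid, const std::string& file)
    {
        sf::Texture texture;
        texture.loadFromFile(file);      // error handling omitted for brevity
        return textures[uid] = texture;
    }

    // Two sprites refer to the same resource if they were created from the same UID.
    bool isSameResource(const std::string& uidA, const std::string& uidB) const
    {
        return uidA == uidB;
    }

private:
    std::map<std::string, sf::Texture> textures;
};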
Edit: As per the comments, you wouldn't "crop" out the pixel; you would just load the image into memory and use the Image class to get the colour of a pixel at a location. The following would be an example:
sf::Image map = MapSprite->getTexture()->copyToImage();
if (map.getPixel(666, 666) == sf::Color::Black)
{
    // Funky stuff here
}
NOTE: You mentioned SFML2, so this is from that set of documentation; it may be different for 1.6.
Edit2: It's been a while since I've used SFML, so hopefully the code snippet will at least give you direction.
We are writing a piece of software which downloads tiles from the internet from WMS servers (these are map servers, and they provide images as map data for various locations on the globe) and then displays them inside a window, using Qt and some OpenGL bindings.
Some of these servers contain data only for specific regions of the planet, and if you request an area outside of what they support, they provide just a blank white image, which we do not want to use since it occupies extra space. So the question is:
How can we identify whether an image contains only one color (white) or not?
What we have tried till now is the following:
Create a QImage, loop over every pixel of it, see if it differs from white. This is extremely slow, and since we want this to be a more or less realtime application, this idea sadly does not work.
Check if the image size is the same as an empty image size, but this also does not work, since it might happen that:
There is another image with the same size which actually contains data
It might be that tiles which are over an ocean have just one color, a light blue, and we need those tiles.
Do a "post processing" of the downloaded images and remove them from the scene later, but this looks ugly from the users' perspective that tiles are just appearing and disappearing ...
Request transparent images from the WMS servers, but due to some OpenGL mishaps, when rendering, these images appear as black on some (mostly low-end) video cards.
Any idea, library to use, direction or even code is welcome, and we need a C++ solution, since our app is C++.
Edit for those suggesting to sample pixels only from a few points in the map:
[two example tile images]
The two images above (yes, the left image contains a very tiny piece of Norway in the corner) would be wrongly eliminated if we assumed the image is entirely white based on sampling only a few points, in case none of those points happens to touch any color other than white. Link to the second image: https://wms.geonorge.no/skwms1/wms.sjokartraster2?LAYERS=all&SRS=EPSG:900913&FORMAT=image/png&SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap&BBOX=-313086.067812500,9079495.966562500,0.000000000,9392582.034375001&WIDTH=256&HEIGHT=256&TRANSPARENT=false
The correct and most reliable way would be to uncompress the PNG bytes and check each pixel in a tight loop.
The most common reason an image-processing routine is "slow" is invoking a function call per pixel. So if you are calling QImage::pixel in a nested loop for each row/column, it will not have the performance you desire.
Instead, take advantage of the fact that QImage gives you raw image bytes via the scanLine method or the bits method:
Something like this might work:
const int height = qimage.height();
const int bytes_per_line = qimage.bytesPerLine();

// A reference scanline that is all 0xFF bytes (white in 32-bit formats).
std::vector<unsigned char> white_row(bytes_per_line, 0xff);

bool allWhite = true;
for (int row = 0; allWhite && (row < height); row++)
{
    unsigned char* row_data = qimage.scanLine(row);
    allWhite = (memcmp(row_data, white_row.data(), bytes_per_line) == 0);
}
The above loop terminates pretty fast, as soon as it hits a row containing a non-white pixel.
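One caveat: the all-0xFF comparison only holds for byte layouts where white really is every byte 0xFF (e.g. Format_RGB32 or Format_ARGB32), so you may want to convert the image first, something like:

if (qimage.format() != QImage::Format_RGB32)
    qimage = qimage.convertToFormat(QImage::Format_RGB32);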
for (int xx = 0; xx < width/2; xx++)
{
    for (int yy = 0; yy < height/2; yy++)
    {
        SDL_Color kolor = getPixel(xx, yy); // we are getting each pixel in the img
        setPixel(xx + width/2, yy + height/2, kolor.r, kolor.g, kolor.b);
        //setPixel(xx, yy + height/2, kolor.r, kolor.g, kolor.b);
        //setPixel(xx + width/2, yy, kolor.r, kolor.g, kolor.b);
    }
}
I am trying, using a loop, to find the 16 most common colors in an image and get their RGB values.
I've been using a map and trying to do something with a struct, but everything was to no avail.
If you have some ideas about how to find these colors, I'll be truly grateful. Thanks
If you had a 4x4 img it would be simple.
Simplify likewise: histogram each RGB channel (0-255, x3), expecting each "popular" color to show up there. Sort the result into the top 16 used for each, again expecting those to be correct. Unless you have wild color gyrations, nothing else is needed.
Check a second time to see if the popular colors actually exist.
Last, you might want to group into "close enough" categories: JPEGs are lossy to a fault, so anything within 4 RGB steps gets grouped together, reducing the 256 values per channel to 64 buckets, unless colors swing widely. If you've used a paint program's magic wand, you know tolerance=0 makes huge mistakes; same idea.
A median filter will definitely reduce the color count by merging pixels into an averaged color; sample a 3x3, 4x4, or circularly weighted sample area.
If all else fails, steal the 216-color web-safe palette and work from there.
SWAG (Scientific Wild-Ass Guess) method; good luck.
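To make the histogram-and-grouping idea above concrete, here is a rough sketch (it assumes the asker's getPixel, width and height; the 4-step quantization merges "close enough" colors as suggested):

#include <SDL.h>
#include <algorithm>
#include <cstdint>
#include <map>
#include <vector>

// Returns up to 16 (color, count) pairs, most frequent first.
// getPixel(), width and height are assumed to exist as in the question.
std::vector<std::pair<SDL_Color, int>> topColors()
{
    std::map<uint32_t, int> counts;
    for (int yy = 0; yy < height; yy++)
        for (int xx = 0; xx < width; xx++)
        {
            SDL_Color c = getPixel(xx, yy);
            // Quantize each channel in steps of 4 so near-identical colors share a bucket.
            uint32_t key = (uint32_t(c.r / 4) << 16) | (uint32_t(c.g / 4) << 8) | uint32_t(c.b / 4);
            counts[key]++;
        }

    std::vector<std::pair<uint32_t, int>> sorted(counts.begin(), counts.end());
    std::sort(sorted.begin(), sorted.end(),
              [](const std::pair<uint32_t, int>& a, const std::pair<uint32_t, int>& b)
              { return a.second > b.second; });

    std::vector<std::pair<SDL_Color, int>> top;
    for (size_t i = 0; i < sorted.size() && i < 16; i++)
    {
        SDL_Color c = {};
        c.r = Uint8(((sorted[i].first >> 16) & 0x3F) * 4);
        c.g = Uint8(((sorted[i].first >> 8) & 0x3F) * 4);
        c.b = Uint8((sorted[i].first & 0x3F) * 4);
        top.push_back(std::make_pair(c, sorted[i].second));
    }
    return top;
}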
I have a black area around my image and I want to create a mask using OpenCV C++ that selects just this black area so that I can paint it later. How can I do that without affecting the image itself?
I tried to convert the image to grayscale and then using threshold to convert it to binary, but it affects my image since the result contains black pixels from inside the image.
Another question: if I want to crop the image instead of painting it, how can I do that?
Thanks in advance,
I would solve the problem like this:
Inverse-binarize the image with a threshold of 1 (i.e. all pixels with the value 0 are set to 1, all others to 0)
use cv::findContours to find white segments
remove segments that don't touch image borders
use cv::drawContours to draw the remaining segments to a mask.
There is probably a more efficient solution in terms of runtime efficiency, but you should be able to prototype my solution quite quickly.
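A rough sketch of those four steps with the OpenCV C++ API (flag names may need adjusting depending on your OpenCV version, and the border test here is deliberately simple):

#include <opencv2/opencv.hpp>
#include <vector>

// Build a mask that is 255 on the black border region of img and 0 elsewhere.
cv::Mat blackBorderMask(const cv::Mat& img)
{
    cv::Mat gray, bin;
    cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);

    // Inverse-binarize: pixels with value 0 become 255, everything else becomes 0.
    cv::threshold(gray, bin, 0, 255, cv::THRESH_BINARY_INV);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    cv::Mat mask = cv::Mat::zeros(img.size(), CV_8UC1);
    for (size_t i = 0; i < contours.size(); i++)
    {
        // Keep only segments whose bounding box touches the image border.
        cv::Rect box = cv::boundingRect(contours[i]);
        bool touchesBorder = box.x == 0 || box.y == 0 ||
                             box.x + box.width == img.cols ||
                             box.y + box.height == img.rows;
        if (touchesBorder)
            cv::drawContours(mask, contours, int(i), cv::Scalar(255), cv::FILLED);
    }
    return mask;
}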
I'm trying to merge/stitch 2 images together but found that the default stitcher class in OpenCV could not handle my images.
So I started to write my own..
Unfortunately the images are too large to attach to this message (they are both 12600x9000 pixels in size), so I'll try to explain as well as possible.
The 2 images are not pictures taken by a camera but TIFF files extracted from a PDF file.
The images themselves are actually CAD drawings, so there are not many gradients in them, and therefore I think the default stitcher class could not handle them.
So far, I managed to extract the features and match them.
Also I used the following well known example to stitch them together:
Mat WarpedImage;
// Warp the second image into the first image's coordinate frame.
cv::warpPerspective(img_2, WarpedImage, homography, cv::Size(2 * img_2.cols, 2 * img_2.rows));
// Copy the first image into the top-left corner of the warped canvas.
Mat half(WarpedImage, Rect(0, 0, img_1.cols, img_1.rows));
img_1.copyTo(half);
I sort of made it fit... but my problem is that in my case the 2 images could be aligned vertically or horizontally.
By default, all stitch examples on the internet assume the first image is the left image and the 2nd image is the right image.
So my first question would be:
How can I detect whether the second image is to the left, right, above or below the first image, and create a properly sized new image?
Secondly..
Currently I'm getting the proper image... however, because I don't have decent code to determine the ideal width and height of the new image, I have a lot of black/empty space in it.
What would be the best C++ code to remove those black areas?
(I'm seeing a lot of Python scripts on the net, but no C++ examples of this, and I have 0 Python skills....)
Thank you very much in advance for your help.
Greetings,
Floris.
You can reproject the corners of the second image with perspectiveTransform. With the transformed points you can find the relative position of your image and calculate the new image size that will fit both images. This will also let you deal with the black areas, since you have the boundaries of the two images.
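Something along these lines might work (a rough sketch, reusing img_1, img_2 and homography from the code above; the resulting rectangle may have negative x/y, in which case both images need to be shifted by that offset when drawing):

#include <opencv2/opencv.hpp>
#include <vector>

// Reproject the corners of img_2 and compute a canvas rectangle that fits both images.
cv::Rect computeCanvas(const cv::Mat& img_1, const cv::Mat& img_2, const cv::Mat& homography)
{
    std::vector<cv::Point2f> corners;
    corners.push_back(cv::Point2f(0.0f, 0.0f));
    corners.push_back(cv::Point2f(float(img_2.cols), 0.0f));
    corners.push_back(cv::Point2f(float(img_2.cols), float(img_2.rows)));
    corners.push_back(cv::Point2f(0.0f, float(img_2.rows)));

    std::vector<cv::Point2f> warped;
    cv::perspectiveTransform(corners, warped, homography);

    // The canvas must also contain img_1, which stays at the origin.
    warped.push_back(cv::Point2f(0.0f, 0.0f));
    warped.push_back(cv::Point2f(float(img_1.cols), float(img_1.rows)));

    return cv::boundingRect(warped);
}

Comparing the warped corners with img_1's corners also tells you whether img_2 ends up to the left, right, above or below the first image.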
I'm trying to work with this camera SDK, and let's say the camera has this function called CameraGetImageData(BYTE* data), which I assume takes in a byte array, modifies it with the image data, and then returns a status code based on success/failure. The SDK provides no documentation whatsoever (not even code comments), so I'm just guesstimating here. Here's a code snippet of what I think works:
BYTE* data = new BYTE[10000000]; // an array of an arbitrarily large size; I'm not
                                 // sure what the exact size needs to be, so I
                                 // made it large
CameraGetImageData(data);
// Do stuff here to process/output image data
I've run the code w/ breakpoints in Visual Studio and can confirm that the CameraGetImageData function does indeed modify the array. Now my question is, is there a standard way for cameras to output data? How should I start using this data and what does each byte represent? The camera captures in 8-bit color.
Take pictures of pure red, pure green and pure blue. See what comes out.
Also, I'd make the array 100 million, not 10 million if you've got the memory, at least initially. A 10 megapixel camera using 24 bits per pixel is going to use 30 million bytes, bigger than your array. If it does something crazy like store 16 bits per colour it could take up to 60 million or 80 million bytes.
You could fill this big array with data before passing it. For example, fill it with '01234567' repeated. Then it's really obvious which bytes have been written and which haven't, so you can work out the real size of what's returned.
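A small sketch of that idea (BYTE and CameraGetImageData are from the SDK as in the question; the scan from the end is only approximate, since real image bytes could coincidentally match the marker):

const size_t bufSize = 100000000;            // 100 million bytes, as suggested above
BYTE* data = new BYTE[bufSize];
for (size_t i = 0; i < bufSize; ++i)
    data[i] = BYTE('0' + (i % 8));           // fill with "01234567" repeated

CameraGetImageData(data);

size_t written = bufSize;
while (written > 0 && data[written - 1] == BYTE('0' + ((written - 1) % 8)))
    --written;                               // trailing bytes still matching the marker were untouched
// `written` is roughly the number of bytes the camera filled in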
I don't think there is a standard, but you can try to identify which values are what by putting some solid-color images in front of the camera, so all pixels would be approximately the same color. Having an idea of what color should be stored in each pixel, you may understand how the color is represented in your array. I would go with black, white, red, green and blue images.
But also consider finding a better SDK that has documentation, because just allocating a big array is really bad design.
You should check the documentation on your camera SDK, since there's no "standard" or "common" way for data output. It can be raw data, it can be RGB data, it can even be already compressed. If the camera vendor doesn't provide any information, you could try to find some libraries that handle most common formats, and try to pass the data you have to see what happens.
Without even knowing the type of the camera, this question is nearly impossible to answer.
If it is a scientific camera, chances are good that it adheres to the IEEE 1394 (aka IIDC or DCAM) standard. I have personally worked with such a camera made by Hamamatsu using this library to interface with the camera.
In my case the camera output was just raw data. The camera itself was monochrome and each pixel had a depth resolution of 12 bits. Therefore, each pixel intensity was stored as a 16-bit unsigned value in the result array. The size of the array was simply width * height * 2 bytes, where width and height are the image dimensions in pixels and the factor 2 is for 16 bits per pixel. The width and height were known a priori from the chosen camera mode.
If you have the dimensions of the result image, try to dump your byte array into a file and load the result either in Python or Matlab and just try to visualize the content. Another possibility is to load this raw file with an image editor such as ImageJ and hope to get anything out from it.
Good luck!
I hope this question's solution will help you: https://stackoverflow.com/a/3340944/291372
Actually you've got an array of pixels (assume 1 byte per pixel if your camera captures in 8-bit). What you need is just to determine the width and height. After that you can try to restore a bitmap image from your byte array.
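For example, if the camera really delivers 8-bit grayscale and you know (or guess) the width and height, a quick way to restore and inspect the bitmap is to write it as a PGM file, which most image viewers and ImageJ can open (a sketch, with data, width and height assumed to be known):

#include <cstdio>

void writePgm(const unsigned char* data, int width, int height)
{
    std::FILE* f = std::fopen("frame.pgm", "wb");
    if (!f) return;
    std::fprintf(f, "P5\n%d %d\n255\n", width, height);  // binary 8-bit grayscale header
    std::fwrite(data, 1, size_t(width) * size_t(height), f);
    std::fclose(f);
}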