Why does Python cv2.drawContours() only draw the first contour? - python-2.7

I am having trouble filling a mask using cv2.drawContours; I don't know what the cause could be or how to fix it.
I have a list whose elements are three-element sublists [coordX, coordY, contour], so for every pair of coordinates there is an if/else decision, each branch containing this statement: cv2.drawContours(mask,[l1[0][2]],0,c1[cla],-1)
[l1[0][2]]: contour, like array([[[437, 497]],[[436, 498]],[[439, 498]], [[438, 497]]])
c1[cla]: tuple, like (25,250,138)
The script runs without errors, but the resulting image is almost entirely black, with just 4 green pixels.
Any suggestion or advice?

Finally, I found what the problem was. I was passing the same contour at every step ([l1[0][2]]), so I needed to change it to use the loop index, like [l1[i][2]].
Sorry for that mistake and for your patience.
Have a good day, Jaime

Related

How to find the 16 most common colors in a BMP image

for (int xx = 0; xx < width / 2; xx++)
{
    for (int yy = 0; yy < height / 2; yy++)
    {
        SDL_Color kolor = getPixel(xx, yy); // we are getting each pixel of the img
        setPixel(xx + width / 2, yy + height / 2, kolor.r, kolor.g, kolor.b);
        //setPixel(xx, yy + height / 2, kolor.r, kolor.g, kolor.b);
        //setPixel(xx + width / 2, yy, kolor.r, kolor.g, kolor.b);
    }
}
I am trying, using a loop, to find the 16 most common colors in an image and get their RGB values.
I've been using a map and trying to do something with a struct, but it was all to no avail.
If you have some ideas about how to find these colors, I'll be really grateful. Thanks.
If you had a 4x4 img it would be simple.
Simplify likewise: histogram each RGB level (0-255, for each of the three channels), expecting each "popular" color to show up there. Sort the result into the top 16 used for each channel, again expecting those to be correct. Unless you have wild color gyrations, nothing else is needed.
Check a second time to see whether the popular colors actually exist in the image.
Last, you might want to group colors into "close enough" categories; JPEGs are lossy to a fault, so anything within 4 RGB steps gets grouped together, reducing the 0-255 range to 64 buckets, unless colors swing widely. If you've used a paint program's magic wand, you know tolerance=0 makes huge mistakes; same idea.
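A rough sketch of the histogram-plus-grouping idea (assumptions: your own getPixel() helper with the signature below, and folding each channel down to 64 levels by dropping the two low bits):

#include <algorithm>
#include <cstdint>
#include <map>
#include <utility>
#include <vector>
#include <SDL.h>

SDL_Color getPixel(int x, int y); // the poster's own helper, signature assumed

// Count quantized colors over the whole image, then keep the 16 most frequent buckets.
std::vector<std::pair<std::uint32_t, int> > top16Colors(int width, int height)
{
    std::map<std::uint32_t, int> histogram;
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
        {
            SDL_Color p = getPixel(x, y);
            // drop the two low bits of each channel so "close enough" colors share a bucket
            std::uint32_t key = ((p.r >> 2) << 12) | ((p.g >> 2) << 6) | (p.b >> 2);
            ++histogram[key];
        }
    std::vector<std::pair<std::uint32_t, int> > sorted(histogram.begin(), histogram.end());
    std::sort(sorted.begin(), sorted.end(),
              [](const std::pair<std::uint32_t, int>& a,
                 const std::pair<std::uint32_t, int>& b) { return a.second > b.second; });
    if (sorted.size() > 16)
        sorted.resize(16);
    return sorted; // key packs (r>>2, g>>2, b>>2); multiply each field by 4 to recover RGB
}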
Median filter: it will definitely reduce the color count by merging pixels into an average, messy color; sample a 3x3, 4x4, or circular weighted area.
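If you go the median-filter route, a rough per-channel sketch (again assuming the same getPixel() helper; only the red channel is shown, repeat for g and b before counting colors):

#include <algorithm>
#include <cstdint>
#include <vector>
#include <SDL.h>

SDL_Color getPixel(int x, int y); // the poster's own helper, signature assumed

// Median of the 3x3 neighbourhood around (x, y), red channel only.
std::uint8_t median3x3Red(int x, int y, int width, int height)
{
    std::vector<std::uint8_t> window;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx)
        {
            int nx = std::min(std::max(x + dx, 0), width - 1);  // clamp at the image border
            int ny = std::min(std::max(y + dy, 0), height - 1);
            window.push_back(getPixel(nx, ny).r);
        }
    std::nth_element(window.begin(), window.begin() + 4, window.end());
    return window[4]; // the middle of the 9 samples
}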
If all else fails, steal the 256-color web-safe palette and work from there.
SWAG-Scientific Wild Ass Guess method, good luck.

Refining Camera parameters and calculating errors - OpenCV

I've been trying to refine my camera parameters with CvLevMarq but after reading about it, it seems to be causing mixed results - which is exactly what I am experiencing. I read about the alternatives and came upon EIGEN - and also found this library that utilizes it.
However, the library above seems to use a stitching class that doesn't support OpenCV and will probably require me to port it to OpenCV.
Before going ahead and doing so, which will probably not be an easy task, I figured I'd ask around first and see if anyone else had the same problem?
I'm currently using:
1. Calculating features with FASTFeatureDetector
Ptr<FeatureDetector> detector = new FastFeatureDetector(5,true);
detector->detect(firstGreyImage, features_global[firstImageIndex].keypoints); // Previous picture
detector->detect(secondGreyImage, features_global[secondImageIndex].keypoints); // New picture
2. Extracting features with SIFTDescriptorExtractor
Ptr<SiftDescriptorExtractor> extractor = new SiftDescriptorExtractor();
extractor->compute(firstGreyImage, features_global[firstImageIndex].keypoints, features_global[firstImageIndex].descriptors); // Previous Picture
extractor->compute(secondGreyImage, features_global[secondImageIndex].keypoints, features_global[secondImageIndex].descriptors); // New Picture
3. Matching features with BestOf2NearestMatcher
vector<MatchesInfo> pairwise_matches;
BestOf2NearestMatcher matcher(try_use_gpu, 0.50f);
matcher(features_global, pairwise_matches);
matcher.collectGarbage();
4. CameraParams.R quaternion passed from a device (slightly inaccurate which causes the issue)
5. CameraParams.Focal == 389.0f -- Played around with this value, 389.0f is the only value that matches the images horizontally but not vertically.
6. Bundle Adjustment (cvLevMarq, calcError & calcJacobian)
Ptr<BPRefiner> adjuster = new BPRefiner();
adjuster->setConfThresh(0.80f);
adjuster->setMaxIterations(5);
(*adjuster)(features,pairwise_matches,cameras);
7. ExposureCompensator (GAIN)
8. OpenCV MultiBand Blender
What works so far:
SeamFinder - works to some extent, but it depends on the result of the cvLevMarq algorithm. I.e. if the algorithm is off, SeamFinder is going to be off too.
HomographyBasedEstimator works beautifully. However, since it "relies" on the features, it's unfortunately not the method that I'm looking for.
I wouldn't want to rely on the features since I already have the matrix; if there's a way to "refine" the current matrix instead, then that would be the desired result.
Results so far:
cvLevMarq "Russian roulette" 6/10:
This is what I'm trying to achieve 10/10 times. But 4/10 times, it looks like the picture below this one.
By simply just re-running the algorithm, the results change. 4/10 times it looks like this (or worse):
cvLevMarq "Russian roulette" 4/10:
Desired Result:
I'd like to "refine" my camera parameters with the features that I've matched - in hope that the images would align perfectly. Instead of hoping that cvLevMarq will do the job for me (which it won't 4/10 times), is there another way to ensure that the images will be aligned?
Update:
I've tried these versions:
OpenCV 3.1: Using CvLevMarq with 3.1 is like playing Russian roulette. Sometimes it aligns the images perfectly, and other times it estimates the focal length as NaN, which causes a segfault in the MultiBand Blender (ROI = 0,0,1,1 because of the NaN).
OpenCV 2.4.9/2.4.13: Using CvLevMarq with 2.4.9 or 2.4.13 is unfortunately the same thing minus the NaN issue. 6/10 times it aligns the images perfectly, but the other 4 times it's completely off.
My Speculations / Thoughts:
Template Matching using OpenCV. Maybe if I template match the ends of the images (i.e. x = 0, y = 0,height = image.height, width = 50). Any thoughts about this?
I found this interesting paper about Levenberg Marquardt applied in Homography. That looks like something that could solve my problem since the paper uses corner detection and whatnot to detect the features in the images. Any thoughts about this?
Maybe the problem isn't in CvLevMarq but instead in BestOf2NearestMatcher? However, I've searched for days and I couldn't find another method that returns the pairwise matches to pass to BPRefiner.
Hough Line Transform: detecting the lines in the first/second image and using those to align the images. Any thoughts on this? -- One concern might be: what if the images don't have any lines, e.g. an empty wall?
Maybe I'm overcomplicating something simple... or maybe I'm not? Basically, I'm trying to align a set of images so I can warp them without overlapping each other. Drop a comment if it doesn't make sense :)
Update Aug 12:
After trying all kinds of combinations, the absolute best so far is CvLevMarq. The only problem with it is the mixed results shown in the images above. If anyone has any input, I'd be forever grateful.
It seems your parameter initialization is the problem. I would use a linear estimator first, i.e. ignore your noisy sensor, and then use that estimate as the initial value for the non-linear optimizer.
A quick method is to use getAffineTransform, as you have mostly rotation.
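For instance (a rough sketch; the point values are made up, in practice you'd take three good correspondences from your pairwise_matches and feed the resulting matrix in as the initial guess instead of the sensor rotation):

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

cv::Mat estimateInitialTransform()
{
    // three matched points in the first image...
    cv::Point2f src[3] = { cv::Point2f(10.f, 10.f), cv::Point2f(200.f, 15.f), cv::Point2f(40.f, 180.f) };
    // ...and the corresponding points in the second image
    cv::Point2f dst[3] = { cv::Point2f(25.f, 12.f), cv::Point2f(214.f, 20.f), cv::Point2f(52.f, 186.f) };
    return cv::getAffineTransform(src, dst); // 2x3 affine matrix (rotation + translation + scale/shear)
}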
Maybe you want to take a look at this library: https://github.com/ethz-asl/kalibr.
Cheers
If you want to stitch the images, you should see stitching_detailed.cpp. It will probably solve your problem.
In addition, I have used the Graph Cut seam finding method with Canny edge detection for better stitching results in this code. If you want to optimize this code, see here.
Also, SIFT is fine for personal use, but you should know that SIFT is patented and will cost you if you use it for commercial purposes. Use ORB instead.
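For example, something along these lines should drop in where you currently create the FAST detector and SIFT extractor (OpenCV 2.4-style API to match your code; note that ORB produces binary descriptors, so check that the rest of your matching pipeline accepts them):

#include <opencv2/features2d/features2d.hpp>

cv::Ptr<cv::FeatureDetector> detector = new cv::ORB(1000); // detect up to 1000 keypoints
cv::Ptr<cv::DescriptorExtractor> extractor = new cv::ORB();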
Hope it helps!

find the same area between 2 images

I want to merge 2 images. How can I remove the area that the 2 images have in common?
Can you tell me an algorithm to solve this problem? Thanks.
The two images are screenshots. They have the same width and image 1 is always above image 2.
When two images have the same width and there is no X-offset at the left side this shouldn't be too difficult.
You should create two vectors of integers and store the CRC of each pixel row in the corresponding vector element. After doing this for both pictures, you find the CRC of the first line of the lower image in the first vector. This is the offset in the upper picture. Then you check that all following CRCs from both pictures are identical. If not, you have to look up the next occurrence of the initial CRC in the upper image again.
After checking that the CRCs between both pictures are identical when you apply the offset, you can use the bitblit function of your graphics library and build the composite picture.
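A rough sketch of that search, assuming you have already computed one CRC (or any row hash) per scanline into the two vectors:

#include <cstddef>
#include <cstdint>
#include <vector>

// Returns the row of the upper image where the lower image starts repeating it,
// or upper.size() if no overlap is found.
std::size_t findOverlapOffset(const std::vector<std::uint32_t>& upper,
                              const std::vector<std::uint32_t>& lower)
{
    for (std::size_t off = 0; off < upper.size(); ++off)
    {
        if (upper[off] != lower[0])
            continue; // first line of the lower image not found at this row
        bool allMatch = true;
        for (std::size_t i = 0; off + i < upper.size() && i < lower.size(); ++i)
            if (upper[off + i] != lower[i]) { allMatch = false; break; }
        if (allMatch)
            return off; // every following CRC matched as well
    }
    return upper.size();
}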
I haven't come across something similar before but I think the following might work:
Convert both to grey-scale.
Enhance the contrast, the grey box might become white for example and the text would become more black. (This is just to increase the confidence in the next step)
Apply some threshold, converting the pictures to black and white.
Afterwards, you could find the similar areas (and thus the offset of the overlap) with a good degree of confidence. To find the similar parts, you could use harper's method (which is good, but I don't know how reliable it would be without the aforementioned filtering), or you could apply some DSP operation(s) like convolution.
Hope that helps.
If your images are the same width and image 1 is always on top, I don't see how hard it could be.
Just store the bytes of the last line of image 1.
Then, from the first line to the last line of image 2, run this test:
If the current line of image 2 is not equal to the last line of image 1 -> continue
else -> break the loop
Next, you have to define a new byte container for your new image:
Just store all the lines of image 1 + all the lines of image 2 that start at (the found line + 1).
What would make you sweat here is finding the libraries to manipulate all these data structures, but after a bit of linking and documentation digging you should be able to implement it easily. A rough sketch of the idea is below.
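Something like this, assuming each image is already decoded into one vector of bytes per scanline (same width, image 1 on top):

#include <cstddef>
#include <vector>

std::vector<std::vector<unsigned char> > mergeVertically(
    const std::vector<std::vector<unsigned char> >& img1,
    const std::vector<std::vector<unsigned char> >& img2)
{
    // Find the line of image 2 that equals the last line of image 1.
    const std::vector<unsigned char>& lastLine = img1.back();
    std::size_t found = 0;
    while (found < img2.size() && img2[found] != lastLine)
        ++found;

    // All lines of image 1, then image 2 starting just after the found line
    // (or all of image 2 if no matching line was found).
    std::vector<std::vector<unsigned char> > merged(img1.begin(), img1.end());
    std::size_t start = (found < img2.size()) ? found + 1 : 0;
    for (std::size_t i = start; i < img2.size(); ++i)
        merged.push_back(img2[i]);
    return merged;
}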

Comparing Sprites?

For a game I'm working on, I'd like to compare two sprites in SFML2, such as with an if() statement. For example, I could have a large 1280x1024 image with one gray pixel among all black pixels. I would then have 2 separate sprites: one is the gray pixel alone, and the other is the map. I would crop only the gray pixel from the map and compare the two; if they match, do other things.
Do you see what I'm getting at here? Is this possible? If so, how?
I'm with Alex in saying there are smarter ways to check sprites.
1. Compare the file names; don't reference a single pixel within an image, because you have to load the entire image into memory to do that. At the moment you are loading 1.3 MB into memory just to check a single pixel.
2. Store all of your resources in a Resource Manager and reference them via a UID; if a resource has that UID, then use that resource.
Number 2 is preferable above all else, but there are many other ways; a rough sketch of option 2 is below.
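A rough sketch of option 2, where the UID is simply the file path (an assumption; use whatever identifier scheme fits your game):

#include <SFML/Graphics.hpp>
#include <map>
#include <string>

class ResourceManager
{
public:
    const sf::Texture& get(const std::string& uid)
    {
        std::map<std::string, sf::Texture>::iterator it = textures.find(uid);
        if (it == textures.end())
        {
            it = textures.insert(std::make_pair(uid, sf::Texture())).first;
            it->second.loadFromFile(uid); // here the UID doubles as the file path
        }
        return it->second;
    }

private:
    std::map<std::string, sf::Texture> textures;
};

// Two sprites then "match" when they were created from the same UID,
// e.g. if (uidOfSpriteA == uidOfSpriteB) { ... } -- no pixel data is touched.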
Edit: As per the comments, you wouldn't "crop" out the pixel; you would just load the image into memory and use the Image class to get the colour of a pixel at a location. The following would be an example:
sf::Image map = MapSprite->getTexture()->copyToImage(); // copyToImage() returns the image by value in SFML2
if (map.getPixel(666, 666) == sf::Color::Black)
{
    //Funky stuff here
}
NOTE: You mentioned SFML2 so this is from that set of Documentation, may be different for 1.6
Edit2: It's been a while since I've used SFML, so hopefully the code snippet will at least give you direction.

Perlin's Noise with OpenGL

I was studying Perlin's Noise through some examples at http://dindinx.net/OpenGL/index.php?menu=exemples&submenu=shaders and couldn't help noticing that his make3DNoiseTexture() in perlin.c uses noise3(ni) instead of PerlinNoise3D(...).
Now why is that? Isn't Perlin's Noise supposed to be a summation of different noise frequencies and amplitudes?
Question 2 is: what do ni, inci, incj, inck stand for? Why use ni instead of x,y coordinates? Why is ni incremented with
ni[0]+=inci;
inci = 1.0 / (Noise3DTexSize / frequency);
I see Hugo Elias created his Perlin2D with x,y coordinates, and so does PerlinNoise3D(...).
Thanks in advance :)
I now understand why and am going to answer my own question in hopes that it helps other people.
Perlin's Noise is actually a synthesis of gradient noises. In its production process, we must compute the dot product of a vector pointing from one of the corners flooring the input point to the input point itself with the randomly generated gradient vector.
Now if the input point were a whole number, such as the xyz coordinates of a texture you want to create, the dot product would always return 0, which would give you flat noise. So instead, we use inci, incj, inck as an alternative index. Yep, just an index, nothing else.
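To make that concrete, here is a rough sketch of how the index is stepped (the noise3() declaration and the output layout are assumptions; the real code is in perlin.c at the link above):

float noise3(float vec[3]); // gradient-noise function from perlin.c, signature assumed

void fill3DNoise(float* out, int Noise3DTexSize, float frequency)
{
    // Step a fractional index through the volume instead of using whole texel
    // coordinates, so every sample lands between lattice corners and the
    // gradient dot products are non-zero.
    float inc = 1.0f / (Noise3DTexSize / frequency);
    float ni[3] = { 0.0f, 0.0f, 0.0f };
    for (int k = 0; k < Noise3DTexSize; ++k, ni[2] += inc)
    {
        ni[1] = 0.0f;
        for (int j = 0; j < Noise3DTexSize; ++j, ni[1] += inc)
        {
            ni[0] = 0.0f;
            for (int i = 0; i < Noise3DTexSize; ++i, ni[0] += inc)
                out[(k * Noise3DTexSize + j) * Noise3DTexSize + i] = noise3(ni);
        }
    }
}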
Now returning to question 1, there are two methods to implement Perlin's Noise:
1. Calculate the noise values separately and store them in the RGBA slots of the texture
2. Synthesize the noises beforehand and store the result in one of the RGBA slots of the texture
noise3(ni) is the actual implementation of method 1, while PerlinNoise3D(...) suggests the latter.
In my personal opinion, method 1 is much better because you have much more flexibility over how you use each octave in your shaders.
My guess on the reason for using noise3(ni) in make3DNoiseTexture() instead of PerlinNoise3D(...) is that when you use that noise texture in your shader, you want to be able to replicate and modify the functionality of PerlinNoise3D(...) directly in the shader.
My guess for the reasoning behind ni, inci, incj, inck is that using the x,y,z of the volume directly doesn't give a good result, so by scaling the noise with the frequency instead it is possible to adjust the resolution of the noise independently of the volume size.