I am using OpenCV and C++. I used findContours and moments to display the center coordinates of circles in an image. However, there is a strange coordinate which appears in between the good ones: [-2147483648, -2147483648]. Does anybody know what it means?
Thanks
Chances are this is caused by the integer type used, also known as an integer overflow.
This occurs when an arithmetic operation attempts to create a numeric value that is too large for the storage type you declared. Note that -2147483648 is exactly INT_MIN, the smallest value a 32-bit signed integer can hold.
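One common way to get exactly that value in this pipeline is a degenerate contour whose area (m00) is zero, so the centroid division blows up before being rounded to an int. A minimal sketch of a guarded centroid computation, assuming the usual findContours + moments approach (the file name is a placeholder):
#include <iostream>
#include <vector>
#include <opencv2/opencv.hpp>

int main() {
    // Hypothetical input: a binary image containing the circles.
    cv::Mat binary = cv::imread("circles.png", cv::IMREAD_GRAYSCALE);
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(binary, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    for (const auto& c : contours) {
        cv::Moments m = cv::moments(c);
        if (m.m00 == 0.0) continue;  // degenerate contour: skip instead of dividing by zero
        cv::Point center(cvRound(m.m10 / m.m00), cvRound(m.m01 / m.m00));
        std::cout << center << std::endl;
    }
    return 0;
}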
However, there may be other reasons as well. From what I noticed, the values are rather big. May I ask what the size of the image is that you are trying to run the contours and moments on? I have done something similar myself, and your values look far too large.
If the image you are processing is not of humongous size (like really, really big), then there might be something wrong with the code. Please edit the question to include the code, so that the Stack Overflow community can help you.
Here are some links on how to find the centre of a circle; I hope you find them useful. I used these links when I was writing a similar program back then.
How to find the coordinates of a point w.r.t another point on an image using OpenCV
http://lfhck.com/question/278176/python-and-opencv-how-do-i-detect-all-filledcirclesround-objects-in-an-image
Sorry, it was not about the type I used; it was the image quality that had problems. I had to blur that part so that it does not detect that color there.
I have lecture notes written by a professor using a stylus.
A sample:
The width of the lines used here is making reading difficult for me. I would like to make the lines thinner. The only solution I could think of is dilating the image. This gives a passable result:
The picture above is with a uniform kernel of shape (2, 2) applied once; I've tried a bunch of kernel types, widths, and numbers of iterations to arrive at this version, which looks best to me.
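For reference, a minimal sketch of that dilation, assuming dark strokes on a light background (the file name is a placeholder); dilating grows the bright background, which thins the dark lines:
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat src = cv::imread("notes.png", cv::IMREAD_GRAYSCALE);
    cv::Mat kernel = cv::Mat::ones(2, 2, CV_8U);  // uniform (2, 2) kernel
    cv::Mat thinned;
    cv::dilate(src, thinned, kernel, cv::Point(-1, -1), 1 /* iterations */);
    cv::imwrite("notes_thinned.png", thinned);
    return 0;
}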
However, I can't help but wonder if there's maybe another applicable algorithm that I'm missing; one that could lead to even better results? I wasn't able to google any computer vision approaches to font thinning, so I would appreciate any information on the subject.
I have been looking into this kind of thing for several days. Try the thinning described here; the link is also in the references of the OpenCV-Python tutorial on morphological transforms. Taking the image gradient can help, but it will make the image grayscale, and by inverting the colors you can get black-on-white text. Try keeping the original color at the black-pixel locations when the original and final images are stacked.
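If the opencv_contrib modules are an option, a hedged sketch of such a thinning step might look like this (one possible implementation, not necessarily the tutorial's):
#include <opencv2/opencv.hpp>
#include <opencv2/ximgproc.hpp>

int main() {
    cv::Mat src = cv::imread("notes.png", cv::IMREAD_GRAYSCALE);
    cv::Mat binary;
    // Thinning expects a white foreground on black, so invert the dark text.
    cv::threshold(src, binary, 0, 255, cv::THRESH_BINARY_INV | cv::THRESH_OTSU);
    cv::Mat skeleton;
    cv::ximgproc::thinning(binary, skeleton, cv::ximgproc::THINNING_ZHANGSUEN);
    cv::Mat out;
    cv::bitwise_not(skeleton, out);  // back to black-on-white
    cv::imwrite("skeleton.png", out);
    return 0;
}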
I have grayscale images like this:
I want to detect anomalies on this kind of images. On the first image (upper-left) I want to detect three dots, on the second (upper-right) there is a small dot and a "Foggy area" (on the bottom-right), and on the last one, there is also a bit smaller dot somewhere in the middle of the image.
Normal static thresholding doesn't work well for me, and Otsu's method isn't always the best choice either. Is there any better, more robust, or smarter way to detect anomalies like this? In Matlab I was using something like Frangi filtering (eigenvalue filtering). Can anybody suggest a good processing algorithm to solve anomaly detection on surfaces like this?
EDIT: Added another image with marked anomalies:
Using @Tapio's top-hat filtering and contrast adjustment.
Since @Tapio provided us with a great idea for increasing the contrast of anomalies on surfaces like the ones I asked about at the beginning, here are some of my results. I have an image like this:
Here is the code showing how I use top-hat filtering and contrast adjustment:
// 3x3 elliptical structuring element, anchored at the top-left corner
Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(3, 3), Point(0, 0));
// Top-hat: the image minus its morphological opening, run for 3 iterations
morphologyEx(inputImage, imgFiltered, MORPH_TOPHAT, kernel, Point(0, 0), 3);
// Simple linear contrast stretch
Mat imgAdjusted = imgFiltered * 7.2;
The result is here:
There is still the question of how to segment the anomalies in the last image. So if anybody has an idea how to solve it, go ahead! :)
You should take a look at bottom-hat filtering. It's defined as the difference between the morphological closing of an image and the image itself, and it makes small details such as the ones you are looking for stand out.
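A minimal sketch of that filtering; the kernel size is an assumption to tune against the scale of the defects, and the file name is a placeholder:
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat src = cv::imread("surface.png", cv::IMREAD_GRAYSCALE);  // hypothetical input
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(15, 15));
    cv::Mat bothat;
    cv::morphologyEx(src, bothat, cv::MORPH_BLACKHAT, kernel);  // closing(src) - src
    cv::imwrite("bothat.png", bothat);
    return 0;
}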
I adjusted the contrast to make both images visible. The anomalies are much more pronounced when looking at the intensities and are much easier to segment out.
Let's take a look at the first image:
The histogram values don't represent reality exactly, due to scaling caused by the visualization tools I'm using, but the relative distances do. The thresholding range is now much larger; the target has changed from a window to a barn door.
Global thresholding (intensity > 15):
Otsu's method worked poorly here. It segmented all the small details to the foreground.
After removing noise by morphological opening:
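A minimal sketch of those two steps (the threshold value and kernel size are assumptions to tune):
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat bothat = cv::imread("bothat.png", cv::IMREAD_GRAYSCALE);  // bottom-hat result
    cv::Mat mask;
    cv::threshold(bothat, mask, 15, 255, cv::THRESH_BINARY);  // global: intensity > 15
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(3, 3));
    cv::morphologyEx(mask, mask, cv::MORPH_OPEN, kernel);     // opening removes speckle
    cv::imwrite("mask.png", mask);
    return 0;
}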
I also assumed that the black spots are the anomalies you are interested in. By setting the threshold lower, you include more of the surface details. For example, the third image does not have any particularly interesting features to my eye, but that's for you to judge. As m3h0w said, it's a good heuristic that if something is hard for your eye to judge, it's probably impossible for the computer.
@skoda23, I would try unsharp masking with finely tuned parameters for the blurring part, so that the high frequencies get emphasized, and test it thoroughly so that no important information is lost in the process. Remember that it is usually not a good idea to expect a computer to do superhuman work: if a human has doubts about where the anomalies are, the computer will too. Thus it is important to first preprocess the image so that the anomalies are obvious to the human eye. An alternative (or addition) to unsharp masking might be CLAHE. But again: remember to fine-tune it very carefully; it might bring out the texture of the board too much and interfere with your task.
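A hedged sketch of that unsharp masking; the sigma and the sharpening amount are assumptions that need the careful tuning mentioned above:
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat src = cv::imread("surface.png", cv::IMREAD_GRAYSCALE);  // hypothetical input
    cv::Mat blurred, sharpened;
    cv::GaussianBlur(src, blurred, cv::Size(0, 0), 3.0 /* sigma */);
    // sharpened = src + amount * (src - blurred), with amount = 1.5
    cv::addWeighted(src, 2.5, blurred, -1.5, 0.0, sharpened);
    cv::imwrite("sharpened.png", sharpened);
    return 0;
}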
An alternative approach to basic or Otsu thresholding would be adaptiveThreshold(), which might be a good idea since there is a difference in intensity between the different regions you want to find.
My second guess would be to first use fixed-value thresholding for the darkest dots and then try Sobel or Canny. There should exist an optimal neighborhood size at which the texture of the board will not shine through as much as the anomalies do. You can also try blurring before edge detection (once you've detected the small defects with the thresholding).
Again: it is vital to experiment a lot at every step of this approach, because fine-tuning the parameters will be crucial for eventual success. I'd recommend making friends with the trackbar to speed up the process. Good luck!
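A minimal sketch combining adaptiveThreshold() with a trackbar for that interactive tuning; the block size and the C range are assumptions:
#include <opencv2/opencv.hpp>

cv::Mat g_src;

static void onTrackbar(int c, void*) {
    cv::Mat mask;
    cv::adaptiveThreshold(g_src, mask, 255, cv::ADAPTIVE_THRESH_MEAN_C,
                          cv::THRESH_BINARY_INV, 21 /* blockSize */, c /* C */);
    cv::imshow("mask", mask);
}

int main() {
    g_src = cv::imread("surface.png", cv::IMREAD_GRAYSCALE);  // hypothetical input
    cv::namedWindow("mask");
    int c = 5;
    cv::createTrackbar("C", "mask", &c, 30, onTrackbar);
    onTrackbar(c, nullptr);
    cv::waitKey(0);
    return 0;
}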
You're basically dealing with the unfortunate fact that reality is analog. A threshold is a method to turn an analog range into a discrete (binary) range. Any threshold will do that. So what exactly do you mean with a "good enough" threshold?
Let's park that thought for a second. I see lots of anomalies: sort of thin grey worms. Apparently you ignore them, so I'm applying a different threshold than you are. This may be reasonable, but you're applying domain knowledge that I don't have.
I suspect these grey worms will be throwing off your fixed value thresholding. That's not to say the idea of a fixed threshold is bad. You can use it to find some artifacts and exclude those. Somewhat darkish patches will be missed, but can be brought out by replacing each pixel with the median value of its neighborhood, using a neighborhood size that's bigger than the width of those worms. In the dark patch, this does little, but it wipes out small local variations.
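A minimal sketch of that median idea; the kernel size (which must be odd, here 15) is an assumption chosen to exceed the worms' width, and the cutoff is hypothetical:
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat src = cv::imread("surface.png", cv::IMREAD_GRAYSCALE);  // hypothetical input
    cv::Mat medianed, mask;
    cv::medianBlur(src, medianed, 15);  // thin worms vanish, darkish patches survive
    cv::threshold(medianed, mask, 60, 255, cv::THRESH_BINARY_INV);  // hypothetical cutoff
    cv::imwrite("dark_patches.png", mask);
    return 0;
}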
I don't pretend these two types of abnormalities are the only two, but that is really an application-domain question, not a question of technique. E.g. you don't appear to have lighting artifacts (reflections), at least not in these 3 samples.
Say we have an 8x9 chessboard, and the function cv::findChessboardCorners recognizes it without problem. My question is: why does the function not recognize, in the same image, a chessboard of smaller size? I tried decrementing the size in a for-loop; the function may recognize a chessboard of, say, 5x4 and 4x5, but not 6x7, for example.
Any idea why that is happening?
I already tried debugging the program, and I didn't understand what really happens in calibinit.hpp.
Thanks in advance!
I think the main problem is that you would have ambiguities, since it is easy to find several different smaller chessboards within a larger one.
If you run corner detection on an image of a chessboard, you will find a regular grid of corners.
findChessboardCorners then needs to find a structure very similar to the given chessboard of size (x, y). It rates the different ways of mapping the chessboard onto the regular grid found by the corner detection, and these ratings end up very similar.
So it is difficult to decide which one is THE chessboard you are looking for.
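For reference, a minimal sketch of a single detection attempt for one candidate size (the file name is a placeholder, and the flags are common defaults rather than a fix for the ambiguity):
#include <iostream>
#include <vector>
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat img = cv::imread("board.png", cv::IMREAD_GRAYSCALE);
    std::vector<cv::Point2f> corners;
    cv::Size pattern(6, 7);  // counted in inner corners, not squares
    bool found = cv::findChessboardCorners(img, pattern, corners,
        cv::CALIB_CB_ADAPTIVE_THRESH | cv::CALIB_CB_NORMALIZE_IMAGE);
    std::cout << (found ? "found" : "not found") << std::endl;
    return 0;
}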
It's because the recognized board must have a light border.
OK, I am posting my conundrums of life to Stack Overflow after 4 days of mindless programming when nothing seems to go right, or at least close to right. Sorry for being a little dramatic, but I feel like a lousy programmer today.
Anyway, my problem is:
To obtain the fundamental matrix using RANSAC (N > 8).
I have two images with a wide baseline but sufficient overlap, so that an adequate number of SURF keypoints (~308) are matched correctly (I plot them).
Now comes the problem. I pass the 2D points to cv::findFundamentalMat, but I get completely baseless results. The function returns:
FundMat = [ 2.05148e-13   3.72341    -2.03671e+10
            1.6701e+26   -4.17712     4.59533e+29
            3.32414e+18   2.8843      1.91069e-26 ]
To circumvent the large dynamic range of the matrix, Hartley suggested normalising the data points (normalization in Euclidean space, not projective-space normalization). Even after doing that, the result is almost the same (10^-9 to 10^9).
I understand that FundMat is accurate only up to scale, but a spread from 10^-9 to 10^+9 is too much.
I referred to other questions here, but I don't seem to get any leads: findfundamentalmatrix-doesnt-find-fundamental-matrix
how-to-calculate-the-fundamental-matrix-for-stereo-vision
Any ideas would be great. This is a very important step when considering uncalibrated images for the rest of the software pipeline.
In case the code is helpful (it's not indented and colored, though; there is too little space here):
https://sites.google.com/site/3drecon124/
It's solved: silly human error. There was a data type conversion from double to float, and it caused data to be fetched from incorrect locations in memory. Now it's smooth, and the epipolar constraint is satisfied up to scale.
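For reference, a minimal sketch of a consistent-type call; keeping both point sets as cv::Point2f throughout avoids exactly this kind of double/float mix-up (the threshold and confidence values are assumptions):
#include <vector>
#include <opencv2/opencv.hpp>

cv::Mat computeF(const std::vector<cv::Point2f>& pts1,
                 const std::vector<cv::Point2f>& pts2) {
    // RANSAC with a 3-pixel reprojection threshold and 0.99 confidence;
    // inlierMask marks which correspondences survived.
    cv::Mat inlierMask;
    return cv::findFundamentalMat(pts1, pts2, cv::FM_RANSAC, 3.0, 0.99, inlierMask);
}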
I am currently working on a data visualization project. My aim is to produce contour lines, in other words iso-lines, from gridded data. The data can be temperature, weather data, or any other kind of environmental parameter; the only condition is that it must be regularly spaced.
I searched the internet; however, I could not find a good algorithm, pseudo-code, or source code for producing contour lines from grids.
Does anybody know of a library, source code, or an algorithm for producing contour lines from gridded data?
It would be good if your suggestion has good runtime performance; I don't want to keep my users waiting :)
Edit: thanks for the responses, but isolines have some constraints, such as that they must not intersect, so just generating Bezier curves does not accomplish my goal.
See this question: How to approximate a vector contour from an elevation raster?
It's a near duplicate, but uses quite different terminology. You'll find that cartography and computer graphics solve many of the same problems, but use different terminology for them.
There's some reasonably good contouring available in GNUplot; if you're able to use GPL code, that may help.
If your data is placed at regular intervals, this can be done fairly easily (assuming I understand your problem correctly). First you need to determine at what interval you want your contours. Next, create the grid you are going to use to store the contour information (I'm assuming just a simple on/off, or elevation-at-this-contour-level, kind of data), which should be one interval smaller than the source data.
Now the trick here is to offset the two grids by half an interval (this won't literally show up in code like that, but it's the concept I'm dealing with) and compare the four source samples surrounding the current point in the contour grid you are calculating. If any of the four points fall in a different interval range, then that 'pixel' in the contour grid should be set to true (or to the value of the contour range being crossed).
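A minimal sketch of that comparison, assuming a regular grid of doubles and a fixed contour interval (both names are placeholders):
#include <cmath>
#include <vector>

// Marks each cell of the (rows-1) x (cols-1) contour grid when its four
// surrounding samples do not all fall in the same contour band.
std::vector<std::vector<bool>> contourCells(
        const std::vector<std::vector<double>>& data, double interval) {
    size_t rows = data.size(), cols = data[0].size();
    std::vector<std::vector<bool>> cells(rows - 1, std::vector<bool>(cols - 1, false));
    for (size_t r = 0; r + 1 < rows; ++r) {
        for (size_t c = 0; c + 1 < cols; ++c) {
            // Bin each corner sample by contour interval.
            long b0 = static_cast<long>(std::floor(data[r][c] / interval));
            long b1 = static_cast<long>(std::floor(data[r][c + 1] / interval));
            long b2 = static_cast<long>(std::floor(data[r + 1][c] / interval));
            long b3 = static_cast<long>(std::floor(data[r + 1][c + 1] / interval));
            // A contour crosses this cell if the bands differ.
            cells[r][c] = !(b0 == b1 && b1 == b2 && b2 == b3);
        }
    }
    return cells;
}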
With this method, there will be a problem when the interval is too fine, which will cause several contours to overlap each other.
As the link from Paul Tomblin suggests, Bezier curves (which are a subset of B-splines) are a ripe solution for your problem. If runtime performance is an issue, Bezier curves have the added benefit of being evaluable via the very fast de Casteljau algorithm, instead of drawing them according to the parametric equations. On the off chance you're working with DirectX, it has a library function for de Casteljau, but it should not be challenging to brew one yourself using the 1001 web pages that describe it.
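For what it's worth, a minimal sketch of de Casteljau evaluation (plain structs, no particular graphics API assumed):
#include <vector>

struct Pt { double x, y; };

// Evaluates a Bezier curve with the given control points at t in [0, 1]
// by repeatedly interpolating adjacent points until one remains.
Pt deCasteljau(std::vector<Pt> pts, double t) {
    for (size_t n = pts.size(); n > 1; --n)
        for (size_t i = 0; i + 1 < n; ++i)
            pts[i] = { (1.0 - t) * pts[i].x + t * pts[i + 1].x,
                       (1.0 - t) * pts[i].y + t * pts[i + 1].y };
    return pts[0];
}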