I want to detect circular black blobs using OpenCV's cv2 Python module:
detector = cv2.SimpleBlobDetector(params)
keypoints = detector.detect(im)
I am getting error:
keypoints = detector.detect(im)
TypeError: Incorrect type of self (must be 'Feature2D' or its
derivative)
You can use this instead:
detector = cv2.SimpleBlobDetector_create(params)
This changed with OpenCV version 3 onwards: algorithm objects are now created through _create factory functions rather than constructors.
I'm working on a C++ project where I need to OCR some text fields. I'm using the Tesseract 3.02 C++ API to do this, but the OCR results differ from the text in the images.
The following image reads as "31 SW19 SQU" when I use the api.GetUTF8Text() function,
and the following image as "31 SW19 3OU".
One problem is that Tesseract misreads the characters in "3QU": it returns "SQU" for one image and "3OU" for the other.
Can someone explain why Tesseract fails on these images, or offer guidance on how to fix the issue?
I screen-grabbed that image and ran it through my setup (v5.0.0 alpha), and it got it right for me:
import cv2
import pytesseract

def TestImage2String(file):
    img = cv2.imread(file)                       # OpenCV loads images as BGR
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    _, th = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)
    out = pytesseract.image_to_string(th)
    print(out)
The output was:
SW19 3QU
It's Python, but the C++ API is very similar.
I have to compare two images of the same object taken under different lighting (for example, one in bright sunlight, the other in white light), where the camera angle is rotated and the sizes may differ. Since the images show the same object, they should match almost exactly. The comparison is to be done on the basis of color and shape, so the test image needs to be rescaled and rotated, and the difference in lighting compensated. Please tell me how to do this with OpenCV-Python. I am using OpenCV 3.0.0 and Python 2.7.
I have attached sample images of the same object to be compared.
I have not found any method that does the job well. Please help me.
Thanks in advance!
Try matching the images using an interest point detector like ORB or SIFT. The following is the result using ORB.
Code
import numpy as np
import cv2
from matplotlib import pyplot as plt

img1 = cv2.imread('im1.jpg', 0)  # query image, grayscale
img2 = cv2.imread('im2.jpg', 0)  # train image, grayscale

orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Note: for ORB's binary descriptors, cv2.BFMatcher(cv2.NORM_HAMMING)
# is usually preferred over the default L2 norm.
bf = cv2.BFMatcher()
matches = bf.knnMatch(des1, des2, k=2)

# Keep matches that pass the ratio test
good = []
for m, n in matches:
    if m.distance < 0.95 * n.distance:
        good.append([m])

img3 = cv2.drawMatchesKnn(img1, kp1, img2, kp2, good, None, flags=2)
plt.imshow(img3), plt.show()
Following the comments: for objects with no distinct features, like in this example, I would suggest using geometrical features instead:
Remove the background (Create binary image with only object vs. background)
Find the outer contour
Find its orientation, size, center
Rotate and scale the images
If you have some projection (image not taken directly from above), you could use some algorithms for shape matching to find correspondence between points and then estimate homography from them.
Then, once your images are aligned, you can use any correlation measure for the comparison. Have a look at the Histogram Comparison tutorial.
I am trying to do classification of images by combining SIFT features, Bag of Visual Words, and an SVM.
Now I am on the training part. I need to compute BoW histograms for each training image to be able to train the SVM. For this I am using BOWImgDescriptorExtractor from OpenCV. I am using OpenCV version 3.1.0.
The problem is that it computes the histogram for some images, but for others it gives me this error:
OpenCV Error: Assertion failed (queryIdx == (int)i) in compute,
file /Users/opencv-3.1.0/modules/features2d/src/bagofwords.cpp, line 200
libc++abi.dylib: terminating with uncaught exception of type
cv::Exception: /Users/opencv-3.1.0/modules/feature/src/bagofwords.cpp:200: error: (-215) queryIdx == (int)i in function compute
The training images are all the same size and have the same number of channels.
For creating the dictionary I use a different image set than for training the SVM.
Here's part of the code:
Ptr<FeatureDetector> detector(cv::xfeatures2d::SIFT::create());
Ptr<DescriptorMatcher> matcher(new BFMatcher(NORM_L2, true));
BOWImgDescriptorExtractor bow_descr(detector, matcher);
bow_descr.setVocabulary(dict);

Mat features_svm;
for (int i = 0; i < num_svm_data; ++i) {
    Mat hist;
    std::vector<KeyPoint> keypoints;
    detector->detect(data_svm[i], keypoints);
    bow_descr.compute(data_svm[i], keypoints, hist);
    features_svm.push_back(hist);
}
data_svm is a vector<Mat> containing the training images I will use for the SVM.
What could the problem be?
I ran into the same issue. I was able to fix it by changing the cross-check option of the brute-force matcher to false. I think it's because cross-checking keeps only a select few matches and removes the rest, which breaks the consecutive descriptor indexes that BOWImgDescriptorExtractor::compute expects.
Current situation: I would like to detect rectangles (or squares) inside an image, where the contours of these rectangles are not solid or consistent, like a chessboard, where the outer contours have holes.
Possible Solution: I am trying to implement an active contour algorithm, which should help me detect the outside contour of the object. I know some points outside the object, which the algorithm could shrink toward the object until the contour fits it.
Search: I have found the cvSnakeImage function of an older OpenCV version, which is no longer maintained and should not be used. I have also found an active contour C++ implementation (HiDiYANG/ActiveContour) that uses an older OpenCV and the Boost library, but I was not able to build it.
Post using cvSnake Implementation
Matlab porting to Opencv 3.0
Further articles in this topic: SNAKES: Active Contour Model
Question: Is there a current implementation of the active contour algorithm available in OpenCV? Is there a best implementation available, where I should invest time to understand the implementation?
Example Image:
I have the first image with the points on the grey border and would like to get the red rectangle (second image).
For the image you have uploaded, a simple union over the bounding boxes of the contours should give you the result you desire. 'bb_union' is a function you need to write for yourself.
import cv2

img = cv2.imread('path to your image')  # BGR image
im = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
im = 255 - im  # your contours are black, so invert the image
_, contours, hierarchy = cv2.findContours(im, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
bb = None
for cnt in contours:
    rect = cv2.boundingRect(cnt)  # (x, y, w, h)
    if bb is None:
        bb = rect
        continue
    bb = bb_union(rect, bb)
x, y, w, h = bb
cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)
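One possible way to write the 'bb_union' helper left to the reader above, for boxes in OpenCV's (x, y, w, h) convention:

```python
def bb_union(a, b):
    # Smallest (x, y, w, h) box enclosing both input boxes
    x = min(a[0], b[0])
    y = min(a[1], b[1])
    w = max(a[0] + a[2], b[0] + b[2]) - x
    h = max(a[1] + a[3], b[1] + b[3]) - y
    return (x, y, w, h)

print(bb_union((0, 0, 10, 10), (5, 5, 10, 10)))  # (0, 0, 15, 15)
```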
I have a simple program that uses BRISK detection and FLANN matching. I'm looking to improve efficiency by applying the detection to a specific region of interest in each frame. I've tried the following:
cv::Mat ROI_scene, scene;
// region of interest
cv::Rect myROI(10, 10, 100, 100);
cv::Ptr<BRISK> detector = BRISK::create();
cv::Ptr<BRISK> descriptorExtractor = BRISK::create();

while (capture.read(scene))
{
    ROI_scene = scene(myROI);
    detector->detect(ROI_scene, keypoints_scene);
    descriptorExtractor->compute(ROI_scene, keypoints_scene, descriptors_scene);
    matcher.match(descriptors_object, descriptors_scene, matches);
}
The above returns this error:
OpenCV Error: Bad argument (Only continuous arrays are supported) in
buildIndex_, file opencv/3.0.0/modules/flann/src/miniflann.cpp, line
317 libc++abi.dylib: terminating with uncaught exception of type
cv::Exception: opencv/3.0.0/modules/flann/src/miniflann.cpp:317:
error: (-5) Only continuous arrays are supported in function
buildIndex_
Abort trap: 6
I've also tried using copyTo, and then I get an "Arrays must be 2d or 3d" error.
Any insight would be really helpful.
Here is a gist of the working code:
https://gist.github.com/DevN00b1/63eba5813926e4d0ea32
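For what it's worth, the "Only continuous arrays" error usually means the ROI Mat is a non-contiguous view of the full frame, so cloning it into its own buffer (ROI_scene = scene(myROI).clone();) typically fixes it; the second error may come from BRISK's CV_8U binary descriptors, which FLANN's default float-based index does not accept. The NumPy analogue below sketches the contiguity issue (an illustration, not taken from the gist):

```python
import numpy as np

# A cropped ROI is a view into the parent array, not a contiguous
# buffer -- the likely trigger of FLANN's "Only continuous arrays" error
scene = np.zeros((480, 640), dtype=np.uint8)
roi = scene[10:110, 10:110]
print(roi.flags['C_CONTIGUOUS'])       # False

# Copying makes it contiguous, the analogue of cv::Mat::clone()
roi_copy = roi.copy()
print(roi_copy.flags['C_CONTIGUOUS'])  # True
```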