I'm currently learning OpenCV for a project I recently started on, and I need to detect 3D boxes (imagine big plastic boxes, maybe 3ft x 2ft x 2ft) in an image. I've used the inRange method to create an image which contains just the boxes I'd like to detect, but I'm not sure where to go from there. I'd like to get a 3D representation of these boxes back from OpenCV, but I can't figure out how. I've found quite a few tutorials explaining how to do this with just one object (which I have done successfully), but I don't know how I would make this work with multiple boxes in one image.
Thanks!
If you have established a method that works well with one object, you may just go with a divide-and-conquer approach: split your problem into several small ones by dividing your image with multiple boxes into several images with one object each.
1. Apply an object detector to your image. This tutorial on object detection may help you; a quick search for object detection with OpenCV also gave this.
2. Determine the bounding boxes of the objects (min/max of the x- and y-coordinates, maybe add some border margin).
3. Crop the bounding boxes to get single-object images.
4. Apply your already working method to the set of single-object images.
In case of overlap, the cropped images may need some processing to isolate a "main" object. Whether step 4 works then depends on how robust your method is to occlusions.
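Since you already have an inRange mask, steps 2 and 3 might look roughly like this in Python (a minimal sketch, assuming OpenCV 4.x; the colour range and file name are placeholders):

```python
import cv2

image = cv2.imread("boxes.jpg")                           # hypothetical input image
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (20, 80, 80), (35, 255, 255))     # placeholder colour range

# Each connected blob in the mask becomes one candidate box (OpenCV 4.x return values).
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

crops = []
margin = 10                                               # small border around each box
for cnt in contours:
    if cv2.contourArea(cnt) < 500:                        # skip tiny specks of noise
        continue
    x, y, w, h = cv2.boundingRect(cnt)
    x0, y0 = max(x - margin, 0), max(y - margin, 0)
    x1 = min(x + w + margin, image.shape[1])
    y1 = min(y + h + margin, image.shape[0])
    crops.append(image[y0:y1, x0:x1])                     # single-object sub-image

# 'crops' now holds one image per detected box; apply your single-object method to each.
```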
I stumbled upon your question while looking into object detection. It's been quite a while since you asked, but since this is a public knowledge base, a discussion on this topic might still be helpful for others.
Related
I would like to understand what solutions exist for performing object detection on a picture or in an augmented reality setting using a single, almost identical reference image.
To be more specific: I want to detect flat (i.e. 2-dimensional) and mostly rectangular objects. I have a database with "perfect" reference images (high quality, full frontal, exact colors, no alterations, etc.) of the objects to be detected but I may have only one reference for each object.
I am talking about things such as logos, famous paintings and playing cards so the reference will have exactly the same content, shape and proportions as the object. From my understanding, the only difference between the object and the reference could then be perspective and a difference in lighting conditions. Let's assume none of these are very extreme (e.g. no sharp angle or colored light).
I know that image recognition and object detection usually require many training images, but given these simplified conditions, is there a way to make it work with one or a few images (or to create several by transforming one)?
I looked here and elsewhere and the only thing I found so far was this example of the Vuforia SDK: https://www.youtube.com/watch?v=MtiUx_szKbI&t=1m10s. One image of a card in a card game is apparently enough to create an overlay so I assume there are ways. This is not my field of expertise so I hope you guys can help me out :)
If there were no perspective distortion, you could use simple normalized cross-correlation. But since there is, you probably want to use SURF. The basic algorithm to use SURF to find your reference image within a world image is:
find keypoints, such as corners, in both images.
describe the local texture of each keypoint.
use those descriptors to match keypoints between images. If there are a lot of matches, with consistent geometry, you've probably found your object.
Check out this tutorial, which walks you through doing exactly that: http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_feature2d/py_feature_homography/py_feature_homography.html#feature-homography
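A rough Python sketch of that pipeline, using ORB rather than SURF (SURF lives in the non-free module); the file names are placeholders and the match threshold is illustrative:

```python
import cv2
import numpy as np

ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)   # the "perfect" reference image
scene = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)     # the world image

# 1. + 2. detect keypoints and describe the local texture around each one
orb = cv2.ORB_create(nfeatures=2000)
kp_ref, des_ref = orb.detectAndCompute(ref, None)
kp_scene, des_scene = orb.detectAndCompute(scene, None)

# 3. match descriptors between the two images
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(bf.match(des_ref, des_scene), key=lambda m: m.distance)

if len(matches) > 10:                                      # "a lot of matches"
    src = np.float32([kp_ref[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_scene[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # consistent geometry: fit a homography with RANSAC
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is not None:
        h, w = ref.shape
        corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
        projected = cv2.perspectiveTransform(corners, H)   # reference outline in the scene
        print("Object found at:", projected.reshape(-1, 2))
```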
Actually, I'm currently working on a simple project to detect collisions between 2 specific objects in a surgery scene. The problem is that I don't have any background in such problems, so I'm really a newbie to such things and I don't yet know what to do. After a little bit of research, I found the Bullet library, which can be used as a collision detection tool, but I'm not sure yet whether it suits my case. I already checked some examples where the developer creates the objects of interest manually, which led me to think that I should first detect the objects of interest and then launch the collision detection process.
In my case, I have 2 types of data:
Video footage of the operating room
Point clouds representing the room in 3D
I need to detect the collision between two objects in the scene. Is there any way to use Bullet to achieve this? Is it common to use a video as input for a collision detection problem? (I'm wondering since I couldn't find many resources on it.)
I'm just starting, so it might be a fuzzy question; sorry in advance for any inconvenience.
EDITED:
I already checked it, but my point was to understand what options can be used before digging into the details. To me, a collision detection problem has 2 parts: the objects of interest (the 2 or more objects whose collision we're trying to detect) and the scene in which we will be trying to detect that collision. For the scene, the data I have comes in the two forms mentioned above. So I was asking which type of data should be used as input for the Bullet collision process. Should it be an image taken from the video, or should it be a list of 3D points? Or something else?
I used Bullet about half a year ago. I remember that you need to register objects with Bullet together with a collision shape. In the simple case of your points, these could probably be small spheres. In the case of your video, you need to have a 3D representation. I do not understand 100% what you mean by checking a "video" for collisions. However, to use Bullet, you need to have a collision shape associated with each object.
Further, you register a collision callback. This is a single function called for each detected collision. All callbacks are listed here: http://www.bulletphysics.org/mediawiki-1.5.8/index.php?title=Collision_Callbacks_and_Triggers
As the wiki says - and I implemented it this way - to detect a specific collision, you need to iterate over all resulting manifolds from Bullet manually. A slightly painful and, performance-wise, strange approach. So you cannot register a specific callback for one specific object colliding with another specific object!
Once the objects are registered, you run the algorithm and then you can check all manifolds in the callback.
To get started with Bullet, I used Bullet Physics Simplest Collision Example with the answers at that time.
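For orientation, here is a minimal sketch of the same idea using the pybullet Python bindings rather than the C++ callback/manifold API described above. The sphere shapes and positions are assumptions; with your point cloud you would create one small sphere per point (or a convex hull of the object):

```python
import pybullet as p

p.connect(p.DIRECT)                          # headless physics/collision world

# Register each object of interest with a collision shape (here: small spheres).
sphere = p.createCollisionShape(p.GEOM_SPHERE, radius=0.05)
tool   = p.createMultiBody(baseMass=0, baseCollisionShapeIndex=sphere,
                           basePosition=[0.0, 0.0, 0.0])
tissue = p.createMultiBody(baseMass=0, baseCollisionShapeIndex=sphere,
                           basePosition=[0.04, 0.0, 0.0])

# Run collision detection (alternatively p.stepSimulation()) and inspect the
# resulting contact points - the Python counterpart of iterating Bullet's manifolds.
p.performCollisionDetection()
contacts = p.getContactPoints(bodyA=tool, bodyB=tissue)
if contacts:
    print("Collision detected between the two objects")
```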
I'm using the OpenCV framework in iOS (Xcode, Objective-C). Is there a way that I could process the image feed from the video camera, allow the user to touch an object on the screen, and then use some functionality in OpenCV to highlight it?
Here is graphically what I mean. The first image shows an example of what the user might see in the video feed:
Then, when they tap on the screen of the iPad, I want to use OpenCV feature/object detection to process the area they've tapped and highlight it. It would look something like this if they tapped the iPad:
Any ideas on how this would be achievable in Objective-C with OpenCV?
I can see quite easily how we could achieve this using trained templates of the iPad and matching them with OpenCV algorithms, but I want to try to keep it dynamic so users can just touch anything on the screen and we'll take it from there.
Explanation: why we should use the segmentation approach
According to my understanding, the task which you are trying to solve is segmentation of objects, regardless of their identity.
The object recognition approach is one way to do it. But it has two major downsides:
It requires you to train an object classifier and to collect a dataset containing a respectable number of examples of the objects you would like to recognize. If you choose to take a classifier which is already trained - it won't necessarily work on every type of object you would like to detect.
Most of the object recognition solutions find a bounding box around the recognized object, but they don't perform a complete segmentation of it. The segmentation part requires extra effort.
Therefore, I believe that the best way for your case is to use an image segmentation algorithm. More precisely, we'll be using the GrabCut segmentation algorithm.
The GrabCut algorithm
This is an iterative algorithm with two stages:
Initial stage: the user specifies a bounding box around the object.
Given this bounding box, the algorithm estimates the color distributions of the foreground (the object) and the background using GMMs, followed by a graph cut optimization that finds the optimal boundary between the foreground and the background.
In the next stage, the user may correct the segmentation if needed by supplying scribbles of the foreground and the background. The algorithm updates the model accordingly and performs a new segmentation based on the updated information.
Using this approach has pros and cons.
The pros:
The segmentation algorithm is easy to use with OpenCV.
It enables the user to fix segmentation errors if needed.
It doesn't rely on collecting a dataset and training a classifier.
The main con is that you will need an extra source of information from the user besides a single tap on the screen. This information is a bounding box around the object, and in some cases additional scribbles will be required to correct the segmentation.
Code
Luckily, there is an implementation of this algorithm in OpenCV. Itseez created a simple and easy-to-use sample for OpenCV's GrabCut algorithm, which can be found here: https://github.com/Itseez/opencv/blob/master/samples/cpp/grabcut.cpp
Application usage:
The application receives a path to an image file as a command-line argument. It renders the image onto the screen, and the user is required to supply an initial bounding rect.
The user can press 'n' in order to perform the segmentation for the current iteration or press 'r' to revert his operation.
After choosing a rect, the segmentation is calculated. If the user wants to correct it, he may add foreground or background scribbles by pressing Shift+left-click and Ctrl+left-click respectively.
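For reference, the initial (bounding-box) stage takes only a few lines through the Python bindings as well. This is a rough sketch; the image path and rectangle are placeholders, and the scribble-correction stage of the C++ sample is omitted:

```python
import cv2
import numpy as np

img = cv2.imread("photo.jpg")                       # placeholder image path
mask = np.zeros(img.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)           # internal GMM state for background
fgd_model = np.zeros((1, 65), np.float64)           # internal GMM state for foreground

rect = (50, 50, 300, 400)                           # user-supplied bounding box (x, y, w, h)
cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Pixels marked as definite or probable foreground form the segmentation.
fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)
segmented = img * fg[:, :, None]
cv2.imwrite("segmented.png", segmented)
```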
Examples
Segmenting the iPod:
Segmenting the pen:
You can do it by training a classifier on iPad images using OpenCV Haar classifiers and then detecting iPads in a given frame.
Now, based on the coordinates of the touch, check whether that area overlaps with the detected iPad region. If it does, draw a bounding box around the detected object. From there you can proceed to process the detected iPad image.
Repeat the above procedure for each of the objects that you want to detect.
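As a rough illustration in Python (the cascade file "ipad_cascade.xml" is a hypothetical classifier you would have to train yourself, and the tap coordinates are made up):

```python
import cv2

cascade = cv2.CascadeClassifier("ipad_cascade.xml")     # hypothetical, self-trained cascade
frame = cv2.imread("camera_frame.jpg")                  # one frame from the video feed
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

detections = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

touch_x, touch_y = 420, 310                             # coordinates of the user's tap
for (x, y, w, h) in detections:
    if x <= touch_x <= x + w and y <= touch_y <= y + h: # tap falls inside a detection
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 3)
cv2.imwrite("highlighted.png", frame)
```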
The task which you are trying to solve is called "object proposal". It doesn't work very accurately yet, and these results are very new.
These two articles give you a good overview of methods for this:
https://pdollar.wordpress.com/2013/12/10/a-seismic-shift-in-object-detection/
https://pdollar.wordpress.com/2013/12/22/generating-object-proposals/
For state-of-the-art results, look for the latest CVPR papers on object proposals. Quite often they have code available to test.
I want to ask what kind of problems there would be if I use this method to extract the foreground.
The precondition for using this method is that it runs on a fixed camera, so there is not going to be any movement of the camera position.
What I'm trying to do is below.
1. Read one frame from the camera and set this frame as the background image. This is done periodically.
2. Periodically subtract the frames read afterward from the background image above. Then only moving things, colored differently from the background, will remain.
3. Isolate the moving objects using grayscale conversion, binarization and thresholding.
4. Iterate the above processes.
If I do this, would the probability of successfully detecting moving objects be high? If not... could you tell me why?
If you consider illumination changes (gradual or sudden) in the scene, you will see that your method does not work.
There are more robust solutions for these problems. One of these (maybe the best) is a Gaussian Mixture Model applied to background subtraction.
You can use BackgroundSubtractorMOG2 (an implementation of GMM-based background subtraction) in the OpenCV library.
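A minimal sketch of using it from Python (the camera index and parameter values are assumptions):

```python
import cv2

cap = cv2.VideoCapture(0)                       # fixed camera, assumed to be device 0
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)           # GMM background model is updated each frame
    fg_mask = cv2.medianBlur(fg_mask, 5)        # remove salt-and-pepper noise
    cv2.imshow("foreground", fg_mask)
    if cv2.waitKey(30) & 0xFF == 27:            # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```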
Your scheme is quite adequate for cases where the camera is fixed and the background is stationary. Indoor and man-controlled scenes are better suited to this approach than outdoor and natural scenes. I've contributed to a detection system that worked basically on the same principles you suggested, but of course the details are crucial. A few remarks based on my experience:
Your initialization step can cause very slow convergence to a normal state. You set the background to the first frames, and then pieces of background appearing from behind moving objects will be considered objects. A better approach is to take the median of the first N frames.
Simple subtraction may not be enough in cases of changing lighting conditions etc. You may find a similarity criterion that works better for your application.
Simple thresholding of the difference image may not be enough. A simple approach is to dilate the foreground so as not to update the background on pixels that were accidentally identified as background.
Your step 4 is unclear; I assume you mean that you update the background only in those places that are identified as background in the last frame. Note that with such a simple approach, pixels that are actually background may be stuck forever with a "foreground" label, as you don't update the background under them. There are many possible solutions to this.
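A hedged sketch of the first and third remarks (median initialization and a dilated foreground mask protecting the background update), assuming a video file source and illustrative parameter values:

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("room.avi")                    # placeholder video source
N = 30                                                # frames used for initialization

# Remark 1: initialize the background as the per-pixel median of the first N frames.
frames = []
for _ in range(N):
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
background = np.median(np.stack(frames), axis=0).astype(np.uint8)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, background)
    _, fg = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

    # Remark 3: dilate the foreground and only update the background outside it.
    safe_fg = cv2.dilate(fg, np.ones((15, 15), np.uint8))
    update = cv2.addWeighted(gray, 0.05, background, 0.95, 0)
    background = np.where(safe_fg == 0, update, background).astype(np.uint8)
```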
There are many ways to solve this problem, and which method is the most appropriate will really depend on the input images. It may be worth doing some reading on the topic.
The method you are suggesting may work, but it's a slightly non-standard approach to this problem. My main concern would be that subtracting several images from the background could lead to saturation, and then you may lose some detail of the motion. It may be better to take the difference between consecutive images and then apply the binarization / thresholding to these difference images.
Another (more complex) approach which has worked for me in the past is to take subregions of the image and then cross-correlate them with the new image. The peak in this correlation can be used to identify the direction of movement - it's a useful approach if more than one thing is moving.
It may also be possible to use a combination of the two approaches above, for example:
Subtract the second image from the first (background) image.
Threshold etc. to find the ROI where movement is occurring.
Use a pattern matching approach to track subsequent movement, focused on the ROI detected above.
The best approach will depend on your application, but there are lots of papers on this topic.
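As an illustration of the three-step combination above, a rough Python sketch (the file name and threshold values are assumptions):

```python
import cv2

cap = cv2.VideoCapture("sequence.avi")                 # placeholder input video
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

template, position = None, None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    if template is None:
        # Steps 1-2: difference with the previous frame, threshold, find the moving ROI.
        diff = cv2.absdiff(gray, prev_gray)
        _, binary = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
            template = gray[y:y + h, x:x + w]          # remember the moving patch
    else:
        # Step 3: track the patch in later frames via normalized cross-correlation.
        result = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)
        if max_val > 0.6:
            position = max_loc                         # current location of the moving object

    prev_gray = gray
```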
I am working on an OpenCV application that needs to count any object whose motion can be detected by the camera. The camera is static, and I implemented the object tracking with OpenCV and cvBlob by following many tutorials.
I found some similar questions:
Object counting
And I found this was similar:
http://labs.globant.com/uncategorized/peopletracker-people-and-object-tracking/
I am new to OpenCV and I've gone through the OpenCV documentation, but I couldn't find anything related to counting moving objects in video.
Can anyone please give me an idea of how to do this, especially the counting part? As I read in the article above, they count people who cross a virtual line. Is there a special algorithm for detecting an object crossing the line?
Your question might be too broad if you are asking about general techniques for counting moving objects in video sequences. I will give some hints that might help you:
As usual in computer vision, there is no single specific way to solve your problem. Try to do some research on people detection, background extraction and motion detection to get a wider point of view.
State the user requirements of your system more clearly, namely: how many people can occur in the image frame? Things get complicated when you would like to track more than one person. Furthermore, can other moving objects appear in the image (e.g. animals)? If not, and only one person is supposed to be tracked, the answer to your problem is pretty easy; see the explanation below. If yes, you will have to do more research.
Usually you cannot find a direct solution to a computer vision problem in the OpenCV API; there is no method that directly solves the problem of people counting. But there surely exist papers and references (usually scientific ones) that can be adapted to solve your problem. So there is no ready-made method that "counts people crossing a vertical line"; you have to solve the problem by merging several algorithms together.
In the link you provided, one can see that they use an algorithm for background extraction, which determines what is the non-moving background and what is the moving foreground (in our case, a walking person). We are not sure whether they use something more sophisticated, but the information about background extraction is sufficient to start solving the problem.
And here is my contribution to the solution. Assuming only one person walks in front of the fixed camera and no other object motion can be observed, do the following:
Save a frame when no person is moving in front of the camera; it will be used later as a reference for the background.
In a loop, apply a background detector to extract the parts of the image representing motion (MOG, or you can even just calculate the difference between the background and the current frame, followed by a binary threshold and blob counting; see my answer here).
From the assumption, only one blob should be detected (if not, use some metric that chooses "the best one", for example the one with the maximum area). That blob is the person we would like to track. Knowing its position in the image, compare it to the position of the "vertical line". Objects moving from left to right are exiting and objects moving from right to left are entering.
Remember that this solution will only work under the assumptions we stated.
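A minimal Python sketch of those steps (the video file, line position and threshold values are assumptions, and the first frame is assumed to be empty):

```python
import cv2

cap = cv2.VideoCapture("door_camera.avi")        # placeholder video of the monitored area
ok, background = cap.read()                      # step 1: frame with nobody in view
background = cv2.cvtColor(background, cv2.COLOR_BGR2GRAY)

line_x = 320                                     # x-position of the virtual vertical line
entered = exited = 0
prev_cx = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Step 2: difference with the reference background, threshold, pick the biggest blob.
    diff = cv2.absdiff(gray, background)
    _, fg = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        prev_cx = None
        continue
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    cx = x + w // 2                              # step 3: blob centre vs. the line

    if prev_cx is not None:
        if prev_cx < line_x <= cx:               # crossed left to right: exiting
            exited += 1
        elif prev_cx >= line_x > cx:             # crossed right to left: entering
            entered += 1
    prev_cx = cx

print("entered:", entered, "exited:", exited)
```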