I have a video sequence, one frame of which is shown below.
I am trying to use corner detection to find the corners of the rectangle on the sheet of paper.
I am using the Shi-Tomasi corner detector for this. However, it also detects a number of things I don't need in the background of the image. How can I narrow my ROI down to only the sheet of paper?
Second Question:
In the video sequence, upon detecting the corners, I need to play another video inside the rectangle. I was trying to do this with a single thread, but it led to a lot of lag and jerkiness. What can I do to improve my processing speed? Do I need to use a separate thread for each video? One video is from the webcam while the other is from the hard drive.
Here is what I did for one of my previous projects:
Find all contours in your picture and approximate each one with a 4-corner polygon (see the sketch after this list).
Find the right rectangle with your own condition, such as a rectangle with area > 1000000.
(optional) You will notice that your rectangle is not a true rectangle because of the 3D perspective. You might want to apply a perspective transformation to get a corrected rectangle.
Paint green (or whatever texture you like) onto the found rectangle, since you already have its 4 corners from the steps above.
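Here is a minimal sketch of the first three steps; the function name, the Canny thresholds, and the minArea parameter are my own placeholders, not from the original project:

#include <opencv2/opencv.hpp>

// Find a large 4-corner contour (e.g. the sheet of paper) in a frame.
std::vector<cv::Point> findQuad(const cv::Mat& frame, double minArea)
{
    cv::Mat gray, edges;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
    cv::Canny(gray, edges, 50, 150);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(edges, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    for (const auto& c : contours)
    {
        // Approximate the contour with a polygon (step 1).
        std::vector<cv::Point> approx;
        cv::approxPolyDP(c, approx, 0.02 * cv::arcLength(c, true), true);

        // Keep only large convex quadrilaterals (step 2).
        if (approx.size() == 4 && cv::isContourConvex(approx) &&
            cv::contourArea(approx) > minArea)
            return approx;  // 4 corners, ready for getPerspectiveTransform (step 3)
    }
    return {};
}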
As for the jerky playback, you might want to use not only multithreading but also GPU acceleration and hardware video decoding to improve speed.
First of all, sorry for my bad English.
I have an object like the one in the following picture, and it is always spinning around a horizontal axis. Can anybody recommend how I can take a photo that captures the full label of the tube while the tube is spinning? I can capture an image from my camera via OpenCV C++, but while the tube is spinning I can't take a sharp photo (my image comes out blurry, not clear).
The tube faces the camera directly. Its rotation speed is about 500 RPM.
Hope to get your help soon,
Thank you very much!
This is my object:
Some sample images:
Here is my image when I use an iPhone 5 camera with flash:
Motion blur
This can be improved by lowering the exposure time, but you need to increase the lighting to compensate. Most modern compact cameras cannot set the exposure time directly (so the companies can sell their expensive professional cameras instead), even though it is just a few lines of GUI code; but if you increase the light, the automatic exposure should shorten on its own.
In industry this problem is solved by special TDI cameras, such as the
HAMAMATSU TDI Line Scan Cameras
TDI means Time Delay Integration, which means the camera's CCD pixels pass their charge to the next pixel in sync with the motion. The effect is as if you moved the camera synchronously with your object's surface. The blur is still present, but much, much smaller (only a fraction of the real exposure time).
In computer vision and DIP you can deblur the image by a deconvolution process if you know the movement properties (which you do). It inverts the blur filter with the use of the FFT and an optimization process to find the inverse filter.
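As a rough illustration only, here is a Wiener-style inverse filter in OpenCV C++. It assumes a known horizontal motion blur; the PSF length len and the snr constant are hand-tuned assumptions, and a real setup would model the rotational motion more precisely:

#include <opencv2/opencv.hpp>

// Sketch: Wiener-style deconvolution for a known horizontal motion blur.
cv::Mat deblur(const cv::Mat& blurredGray, int len, double snr)
{
    // Point-spread function: a 1-pixel-high horizontal line with sum == 1,
    // embedded in a full-size image so its DFT matches the input size.
    cv::Mat psf = cv::Mat::zeros(blurredGray.size(), CV_32F);
    cv::line(psf, cv::Point(0, 0), cv::Point(len - 1, 0), cv::Scalar(1.0 / len));

    cv::Mat img, IMG, PSF;
    blurredGray.convertTo(img, CV_32F);
    cv::dft(img, IMG, cv::DFT_COMPLEX_OUTPUT);
    cv::dft(psf, PSF, cv::DFT_COMPLEX_OUTPUT);

    // Wiener filter: F = G * conj(H) / (|H|^2 + 1/snr), per frequency.
    cv::Mat planes[2], denom, num;
    cv::split(PSF, planes);
    cv::magnitude(planes[0], planes[1], denom);
    denom = denom.mul(denom) + 1.0 / snr;

    cv::mulSpectrums(IMG, PSF, num, 0, true);   // G * conj(H)
    cv::split(num, planes);
    planes[0] /= denom;
    planes[1] /= denom;
    cv::merge(planes, 2, num);

    cv::Mat result;
    cv::idft(num, result, cv::DFT_SCALE | cv::DFT_REAL_OUTPUT);
    cv::normalize(result, result, 0, 255, cv::NORM_MINMAX, CV_8U);
    return result;
}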
Out-of-focus blur
This is due to the fact that your surface is curved while the camera chip is flat, so the outer pixels are at a different distance from the chip than the center pixels. Without special optics you can handle this with line cameras. Of course, I do not expect you to have one, so you can use your camera for this too.
Just mount your camera so that one of the camera axes is parallel to your object's rotation axis (surface), for example the x axis. Then sample multiple images with a constant time step and use only the center line/slice of each image (the height of the line/slice depends on your exposure time and the object speed; the slices should overlap a bit). Then just combine these lines/slices from all the sampled images to form the focused image.
[Edit1] home-made TDI setup
So mount the camera so that its view axis is perpendicular to the surface.
Take burst shots or video with a constant frame rate.
The shorter the exposure time (the higher the frame rate), the more focused the whole image will be (due to less optical blur) and the bigger the sharp area dy left by motion blur. The higher the rotation RPM, the smaller dy will be. So find the best option for your camera, RPM, and lighting conditions (adding strong light usually helps, unless the tube has reflective surfaces).
For correct output you need to balance each parameter so that:
the exposure time is as short as possible;
the focused areas overlap between shots (if they do not, you can sample more revolutions, similar to old FDD sector reading...).
extract the focused part of the shots
You need just the focused middle part of all the shots, so empirically take a few shots with your setup and choose the dy size. Then use that as a constant later. Extract the middle part (slice) from each shot; in my example image it is the red area.
combine slices
Just copy the slices together (or average the overlapping parts). They should overlap a bit so that you do not have holes in the final image. As you can see, my final example image uses smaller slices than were acquired, to make this more obvious.
Your camera image can be off by a few pixels due to vibrations, so if that is a problem in the final image you can use SIFT/SURF + RANSAC auto-stitching for higher-precision output.
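A minimal sketch of the extract/combine steps above, where dy is the empirically chosen band height. It assumes the frames are ordered with the rotation and that the bands overlap slightly; overlap averaging and the SIFT/SURF correction are left out:

#include <opencv2/opencv.hpp>
#include <vector>

// Take the focused middle band of height dy from each burst frame and
// stack the bands into one unwrapped image of the label.
cv::Mat stitchSlices(const std::vector<cv::Mat>& frames, int dy)
{
    std::vector<cv::Mat> slices;
    for (const auto& f : frames)
    {
        int y0 = f.rows / 2 - dy / 2;                       // middle band
        slices.push_back(f(cv::Rect(0, y0, f.cols, dy)).clone());
    }
    cv::Mat out;
    cv::vconcat(slices, out);                               // stack vertically
    return out;
}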
I have an application where a rectangle is drawn in the initial frame. I want to know if it is possible to make the rectangle a part of the image from the next frame onwards.
For example, in my first frame I would draw something like this, but a bit darker:
http://imgur.com/zACIiHJ
I want it to become part of the environment, so that the next time my camera accesses that frame I should see the rectangular box. How can I do this using OpenCV?
Edit: My algorithm finds and draws the rectangle in the first frame. I'm trying to keep the rectangle in the same place as the camera moves around, and the rectangle need not always be on the whiteboard.
What you are trying to achieve is possible, but it's going to take some research and work on your part. One possible solution to your problem is to use optical flow (http://en.wikipedia.org/wiki/Optical_flow) analysis to monitor the apparent motion of objects in your camera's view. You could use the resulting optical-flow field to apply a "correction" to the positions of the corners of your rectangle between frames. Here is a link to the OpenCV documentation for its optical-flow functions:
http://docs.opencv.org/modules/gpu/doc/video.html
If the particular device you are using has a gyroscope and GPS/INS, you might also use that data to supplement the optical flow. Let me know how it goes; it sounds like a really fun project!
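For instance, a minimal sketch that carries the four corners from one frame to the next with pyramidal Lucas-Kanade flow (cv::calcOpticalFlowPyrLK); the function and variable names are mine:

#include <opencv2/opencv.hpp>

// Track the rectangle corners between two grayscale frames.
void trackCorners(const cv::Mat& prevGray, const cv::Mat& currGray,
                  std::vector<cv::Point2f>& corners)
{
    std::vector<cv::Point2f> next;
    std::vector<uchar> status;
    std::vector<float> err;
    cv::calcOpticalFlowPyrLK(prevGray, currGray, corners, next, status, err);

    // Apply the "correction": keep the tracked position where flow
    // succeeded; otherwise keep the old corner.
    for (size_t i = 0; i < corners.size(); ++i)
        if (status[i]) corners[i] = next[i];
}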
You need a static rectangle that has nothing to do with what is in the image? Just draw the rectangle on every frame you capture:
Rect r = Rect( .. );                        // position and size of the box
rectangle( imageFromCam, r, Scalar( .. ) ); // draw it onto the captured frame
See the documentation for cv::rectangle() and Rect.
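A minimal capture loop, assuming camera index 0 and placeholder coordinates and color:

#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(0);                  // assumed camera index
    cv::Rect r(100, 100, 200, 150);           // placeholder position/size
    cv::Mat frame;
    while (cap.read(frame))
    {
        cv::rectangle(frame, r, cv::Scalar(0, 255, 0), 2);  // green box
        cv::imshow("camera", frame);
        if (cv::waitKey(30) == 27) break;     // Esc quits
    }
    return 0;
}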
I've been working on this for some time now and can't find a decent solution.
I use OpenCV for image processing and my workflow is something like this:
Take a picture of a TV.
Split the image into R, G, B planes (I'm starting to test H, S, V too, and it seems a bit promising).
For each plane, threshold the image for a range of values between 0 and 255.
Reduce noise, detect edges with Canny, find the contours, and approximate them.
Select contours that contain the center of the image (I can assume that the center of the image is inside the TV screen).
Use convexHull and HoughLines to filter out and refine invalid contours.
Select contours with a certain area (between 10% and 90% of the image).
Keep only contours that have exactly 4 points (a sketch of these selection steps follows this list).
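A sketch of the three selection steps above (center containment, area band, 4 convex points); the thresholds and names are placeholders:

#include <opencv2/opencv.hpp>
#include <cmath>

// Keep a 4-point convex contour that contains the image center and
// covers 10-90% of the image area.
std::vector<cv::Point> selectScreen(
    const std::vector<std::vector<cv::Point>>& contours, cv::Size imgSize)
{
    cv::Point2f center(imgSize.width * 0.5f, imgSize.height * 0.5f);
    double imgArea = (double)imgSize.width * imgSize.height;

    for (const auto& c : contours)
    {
        std::vector<cv::Point> approx;
        cv::approxPolyDP(c, approx, 0.02 * cv::arcLength(c, true), true);
        double a = std::fabs(cv::contourArea(approx));

        if (approx.size() == 4 && cv::isContourConvex(approx) &&
            a > 0.10 * imgArea && a < 0.90 * imgArea &&
            cv::pointPolygonTest(approx, center, false) > 0)
            return approx;
    }
    return {};
}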
But this is too slow (a loop over each channel (RGB), then a loop over the thresholds, etc.), and it is not good enough, as it fails to detect many TVs.
My base code is the squares.cpp example of the OpenCV framework.
The main problems of TV screen detection are:
Images that are half dark and half bright, or that have many dark/bright items on screen.
Elements on the screen that have the same color as the TV frame.
Blurry TV edges (in some cases).
I have also searched many SO questions/answers on rectangle detection, but they are all about detecting a white page on a dark background or a fixed-color object on a contrasting background.
My final goal is to implement this on Android/iOS for near-real-time TV screen detection. My code takes up to 4 seconds on a Galaxy Nexus.
I hope someone can help. Thanks in advance!
Update 1: Just using Canny and HoughLines does not work, because there can be many, many lines, and selecting the correct ones can be very difficult. I think some sort of "cleaning" of the image should be done first.
Update 2: This question is one of the closest to the problem, but for the TV screen it didn't work.
Hopefully these points provide some insight:
1)
If you can properly segment the image into foreground and background, then you can easily set a bounding box around the foreground. Graph cuts are a very powerful method of segmenting images, and OpenCV provides an easy-to-use implementation. For example, you provide some brush strokes that cover "foreground" and "background" pixels, and your image is converted into a graph that is cut optimally to split the two. Here is a fun example:
http://docs.opencv.org/trunk/doc/py_tutorials/py_imgproc/py_grabcut/py_grabcut.html
Here is a quick example I put together to illustrate its effectiveness:
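A minimal sketch using cv::grabCut, seeded with a rough bounding rectangle instead of brush strokes; the rectangle and the iteration count are assumptions:

#include <opencv2/opencv.hpp>

// Segment the object inside roughBox from the rest of the image.
cv::Mat grabForeground(const cv::Mat& img, const cv::Rect& roughBox)
{
    cv::Mat mask, bgModel, fgModel;
    cv::grabCut(img, mask, roughBox, bgModel, fgModel, 5, cv::GC_INIT_WITH_RECT);

    // Pixels marked definite or probable foreground become the object.
    cv::Mat fg = (mask == cv::GC_FGD) | (mask == cv::GC_PR_FGD);
    cv::Mat result;
    img.copyTo(result, fg);
    return result;
}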
2)
If you decide to continue down the edge detection route, then consider using Mathematical Morphology to "clean up" the lines you detect before trying to fit a bounding box or contour around the object.
http://en.wikipedia.org/wiki/Mathematical_morphology
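For example, a morphological closing to bridge small gaps in a Canny edge map before contour fitting; the kernel size is a tuning parameter:

#include <opencv2/opencv.hpp>

// "Clean up" a binary edge map by closing small gaps.
cv::Mat cleanEdges(const cv::Mat& edges)
{
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5));
    cv::Mat closed;
    cv::morphologyEx(edges, closed, cv::MORPH_CLOSE, kernel);
    return closed;
}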
3)
You could train on a dataset containing TVs and use the Viola-Jones algorithm for object detection. Traditionally it is used for face detection, but you can adapt it to TVs given enough data. For example, you could script the download of images of living rooms with TVs as your positive class and living rooms without TVs as your negative class.
http://en.wikipedia.org/wiki/Viola%E2%80%93Jones_object_detection_framework
http://docs.opencv.org/trunk/doc/py_tutorials/py_objdetect/py_face_detection/py_face_detection.html
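A sketch of running such a cascade with cv::CascadeClassifier; "tv_cascade.xml" is a hypothetical model you would first have to train (e.g. with opencv_traincascade):

#include <opencv2/opencv.hpp>

// Detect TV-like regions with a (hypothetical) trained cascade.
std::vector<cv::Rect> detectTVs(const cv::Mat& frame)
{
    cv::CascadeClassifier cascade;
    if (!cascade.load("tv_cascade.xml"))   // hypothetical trained model
        return {};

    cv::Mat gray;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
    cv::equalizeHist(gray, gray);

    std::vector<cv::Rect> tvs;
    cascade.detectMultiScale(gray, tvs, 1.1, 3);
    return tvs;
}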
4)
You could perform image registration using cross-correlation, as this nice MATLAB example demonstrates:
http://www.mathworks.com/help/images/examples/registering-an-image-using-normalized-cross-correlation.html
As for the template TV image that would be slid across the search image, you could obtain a bunch of pictures of TVs and create "Eigenscreens", similar to how Eigenfaces are used for facial recognition, to generate an average TV image:
http://jeremykun.com/2011/07/27/eigenfaces/
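In OpenCV, normalized cross-correlation is available through cv::matchTemplate; in this sketch, avgTV stands for your hypothetical averaged "Eigenscreen" template:

#include <opencv2/opencv.hpp>

// Slide the template across the search image and return the best match.
cv::Rect locateTemplate(const cv::Mat& search, const cv::Mat& avgTV)
{
    cv::Mat response;
    cv::matchTemplate(search, avgTV, response, cv::TM_CCOEFF_NORMED);

    double maxVal; cv::Point maxLoc;
    cv::minMaxLoc(response, nullptr, &maxVal, nullptr, &maxLoc);
    return cv::Rect(maxLoc, avgTV.size());
}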
5)
OpenCV also has plenty of useful tools for describing shape and structural features, which appears to be mainly what you're interested in. Worth a look if you haven't seen this already:
http://docs.opencv.org/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html
Best of luck.
I am familiar with OpenCV, a powerful open-source library, and I am using it for a farm-industry project in which a mouse is injected with a drug and kept on a so-called stage, which is surrounded by a cylinder painted with alternating white and black stripes. I need to find out how many times the mouse rotates its head toward the rotation of the cylinder (it does this because it is under the influence of the drug). How can I achieve this? Perhaps some OpenCV experts can help me out here.
I have added an image below.
Seems an interesting one; these are my preliminary suggestions:
It depends on the resolution of the camera and how far your object (the mouse) is from the camera. Because the mouse is a small object, its image needs to cover a good number of pixels for you to differentiate head movement.
I don't think the mouse will stick to one position; it will keep moving in the cage, so you need to track the mouse.
At every position of the mouse you need to find the position of the head with respect to the body. You can do that using template matching (create templates of the head of the mouse).
Hence more info and some sample pictures are necessary to get a clear idea of the scene.
EDIT AFTER IMAGE UPLOADED
Since the camera is fixed, create a circular region of interest so that only movement inside this circle concerns you, and not the moving cylinder outside it.
Subtract the present frame from the previous frame (frame differencing) and store the absolute difference in an image:
absdiff(frameNow, framePrev, diffOfFrames);
Threshold diffOfFrames as required to get the current position of the rat.
Now the task is easier if the image clearly shows the nose. Since the nose has a pointed shape, it can be detected by template matching; however, from the image you have given it is difficult to make out the nose against the black background. I can only suggest the following process (sketched below): the green circles denote the tip of the nose; all I am trying to do is get the orientation of the head w.r.t. the body. For good results you need good images.
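Putting the circular-ROI and frame-differencing steps together in a sketch; the center, radius, and threshold value are placeholders to tune:

#include <opencv2/opencv.hpp>

// Binary mask of motion inside a circular region of interest.
cv::Mat motionMask(const cv::Mat& frameNow, const cv::Mat& framePrev)
{
    // Circular ROI around the stage (placeholder center and radius).
    cv::Mat roi = cv::Mat::zeros(frameNow.size(), CV_8U);
    cv::circle(roi, cv::Point(frameNow.cols / 2, frameNow.rows / 2),
               frameNow.rows / 3, cv::Scalar(255), cv::FILLED);

    cv::Mat diff, mask;
    cv::absdiff(frameNow, framePrev, diff);
    if (diff.channels() > 1) cv::cvtColor(diff, diff, cv::COLOR_BGR2GRAY);
    cv::threshold(diff, mask, 25, 255, cv::THRESH_BINARY);

    cv::bitwise_and(mask, roi, mask);   // keep motion inside the circle only
    return mask;
}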
I am using OpenCV with C++ and I am trying to find a moving ball under different lighting conditions. So far I am able to filter an image by thresholding it in the HSV color space. The problem with this is that it also picks up other objects that have a similar color. It is very tedious to figure out the exact HSV range every time there is a ball with a different color/background.
Is there a way for me to apply a filter on the thresholded binary image to detect only the objects that are moving? That way I would find only the ball and not other objects, since they are usually stationary.
Thank you,
Varun
The simplest approach would be frame differencing / background learning on an image sequence.
Frame differencing: subtract two successive frames; the result is the moving part (you will probably only get the edges of moving objects).
Background learning: e.g., build an average over 50 frames; this is your learned background. Then subtract the current frame from it; again, the difference is the moving part. A sketch of the background-learning variant follows.
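A minimal sketch using cv::accumulateWeighted for the running-average background; the learning rate alpha and the threshold value are assumptions to tune:

#include <opencv2/opencv.hpp>

// Update a running-average background and flag pixels that differ from it.
void updateAndDetect(const cv::Mat& frame, cv::Mat& background, cv::Mat& moving)
{
    cv::Mat gray, grayF;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
    gray.convertTo(grayF, CV_32F);

    if (background.empty()) background = grayF.clone();
    cv::accumulateWeighted(grayF, background, 0.02);   // learn the background slowly

    cv::Mat bg8, diff;
    background.convertTo(bg8, CV_8U);
    cv::absdiff(gray, bg8, diff);                      // difference = moving part
    cv::threshold(diff, moving, 30, 255, cv::THRESH_BINARY);
}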