Basically, I need to cut a foreground object out of green-screen video, either by making the green transparent or by directly cutting out the foreground object. I need to use OpenCV and C++. I found a couple of methods, but they don't work. What do I need to do?
There isn't a magical way to do this. You need to programmatically select the ROI and apply effects to each frame (i.e. to the Mat object). You may need to reduce noise, apply blur, extract individual channels, and do much more. So be patient and start experimenting.
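For instance, a minimal chroma-key sketch might threshold the green backdrop in HSV and invert the result to get a foreground mask. The file name and HSV bounds below are assumptions you would tune for your own footage:

```cpp
// Minimal chroma-key sketch: assumes a fairly uniform green backdrop.
// The HSV bounds are starting points, not tuned values.
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap("greenscreen.mp4");  // hypothetical input file
    cv::Mat frame, hsv, greenMask, foreground;

    while (cap.read(frame)) {
        cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);

        // Select the green backdrop (hue roughly 35-85 on OpenCV's 0-179 scale).
        cv::inRange(hsv, cv::Scalar(35, 80, 80), cv::Scalar(85, 255, 255), greenMask);

        // Clean up ragged mask edges before keying.
        cv::medianBlur(greenMask, greenMask, 5);

        // The foreground is everything that is NOT green.
        cv::Mat fgMask;
        cv::bitwise_not(greenMask, fgMask);
        foreground = cv::Mat::zeros(frame.size(), frame.type());
        frame.copyTo(foreground, fgMask);

        cv::imshow("keyed", foreground);
        if (cv::waitKey(30) == 27) break;  // Esc to quit
    }
    return 0;
}
```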
Related
So I am fairly inexperienced with After Effects, but I am making a title for a video project and am having a hard time figuring something out. I have some text that slides over and appears once it is in the masked area. However, this is a very hard edge and doesn't look the best. How do I soften that edge? I tried making another shape, blurring it, and then masking that over the other shape, but that didn't work. I tried Googling, but it was full of people saying you cannot feather a shape's edges in AE.
I would use the mask on my image directly with the feather option, but then that mask moves with my image, and that's not what I want.
So I learned that you can just make an image in Photoshop that does whatever you want it to do and just use that.
The mask's feathering moves your image? Can you put up a screen recording of the problem?
I want to ask what kinds of problems there might be if I use this method to extract the foreground.
The precondition for this method is that it runs on a fixed camera, so there will be no movement of the camera position.
What I'm trying to do is below:
1. Read one frame from the camera and set this frame as the background image. This is done periodically.
2. Periodically subtract each subsequent frame from the background image above. Then only the moving objects remain, colored differently from the areas that match the background image.
3. Isolate the moving object using grayscale conversion, binarization, and thresholding.
4. Iterate the above processes.
If I do this, would the probability of successfully detecting the moving object be high? If not, could you tell me why?
If you consider illumination changes (gradual or sudden) in the scene, you will see that your method does not work.
There are more robust solutions to these problems. One of them (maybe the best) is the Gaussian Mixture Model applied to background subtraction.
You can use BackgroundSubtractorMOG2 (an implementation of GMM) from the OpenCV library.
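A minimal usage sketch; the camera index is illustrative, and the parameter values shown are just OpenCV's documented defaults:

```cpp
// Minimal loop around OpenCV's GMM-based background subtractor.
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);  // fixed camera, as in the question
    auto mog2 = cv::createBackgroundSubtractorMOG2(
        500,    // history: number of frames used to build the model
        16.0,   // varThreshold: squared Mahalanobis distance to call a pixel foreground
        true);  // detectShadows: shadows are marked gray (127) in the mask

    cv::Mat frame, fgMask;
    while (cap.read(frame)) {
        mog2->apply(frame, fgMask);  // updates the model and yields the foreground mask
        cv::imshow("foreground mask", fgMask);
        if (cv::waitKey(30) == 27) break;  // Esc to quit
    }
    return 0;
}
```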
Your scheme is quite adequate for cases where the camera is fixed and the background is stationary. Indoor and man-controlled scenes are better suited to this approach than outdoor and natural scenes. I've contributed to a detection system that worked on basically the same principles you suggest, but of course the details are crucial. A few remarks based on my experience:
Your initialization step can cause very slow convergence to a normal state. You set the background from the first frame, and then pieces of background emerging from behind moving objects will be considered objects. A better approach is to take the per-pixel median of the first N frames, as sketched below.
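A brute-force sketch of that median initialization; the helper name and N are illustrative, and the triple loop is slow but simple enough for a one-off step:

```cpp
// Collect N frames, then take the per-pixel (per-channel) median as background.
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

cv::Mat medianBackground(cv::VideoCapture& cap, int N = 25) {
    std::vector<cv::Mat> frames;
    cv::Mat frame;
    while ((int)frames.size() < N && cap.read(frame))
        frames.push_back(frame.clone());
    CV_Assert(!frames.empty());

    cv::Mat background = frames[0].clone();
    const int channels = background.channels();
    std::vector<uchar> values(frames.size());

    for (int r = 0; r < background.rows; ++r)
        for (int c = 0; c < background.cols; ++c)
            for (int ch = 0; ch < channels; ++ch) {
                // Gather this pixel/channel across all collected frames.
                for (size_t i = 0; i < frames.size(); ++i)
                    values[i] = frames[i].ptr<uchar>(r)[c * channels + ch];
                // Median via partial sort.
                std::nth_element(values.begin(),
                                 values.begin() + values.size() / 2, values.end());
                background.ptr<uchar>(r)[c * channels + ch] =
                    values[values.size() / 2];
            }
    return background;
}
```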
Simple subtraction may not be enough in cases of changing light conditions, etc. You may find a similarity criterion better suited to your application.
Simple thresholding on the difference image may not be enough either. One simple improvement is to dilate the foreground, so that the background is not updated at pixels that were accidentally identified as background.
Your step 4 is unclear; I assume you mean that you update the background only at those places identified as background in the last frame. Note that with such a simple approach, pixels that are actually background may be stuck forever with a "foreground" label, since you never update the background beneath them. There are many possible solutions to this.
There are many ways to solve this problem, and which method is most appropriate will really depend on your input images. It may be worth doing some reading on the topic.
The method you are suggesting may work, but it's a slightly non-standard approach to this problem. My main concern is that subtracting several images from the background could lead to saturation, and then you may lose some detail of the motion. It may be better to take the difference between consecutive images and then apply the binarization / thresholding to those difference images.
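A sketch of that consecutive-frame differencing; the threshold value is an arbitrary starting point to tune:

```cpp
// absdiff between the current and previous frame, then a binary threshold.
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);
    cv::Mat prev, curr, gray, prevGray, diff, motionMask;

    cap.read(prev);
    cv::cvtColor(prev, prevGray, cv::COLOR_BGR2GRAY);

    while (cap.read(curr)) {
        cv::cvtColor(curr, gray, cv::COLOR_BGR2GRAY);
        cv::absdiff(gray, prevGray, diff);             // |frame_t - frame_{t-1}|
        cv::threshold(diff, motionMask, 25, 255, cv::THRESH_BINARY);
        cv::imshow("motion", motionMask);
        gray.copyTo(prevGray);                         // current becomes previous
        if (cv::waitKey(30) == 27) break;
    }
    return 0;
}
```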
Another (more complex) approach that has worked for me in the past is to take subregions of the image and cross-correlate them with the new image. The peak in this correlation can be used to identify the direction of movement; it's a useful approach when more than one thing is moving.
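One way to sketch this in OpenCV is normalized cross-correlation via matchTemplate; the helper name and ROI are illustrative:

```cpp
// Take a subregion (template) from the previous frame and find its best match
// in the new frame; the offset of the correlation peak gives the movement.
#include <opencv2/opencv.hpp>

cv::Point2i estimateShift(const cv::Mat& prevFrame, const cv::Mat& currFrame,
                          const cv::Rect& roi) {
    cv::Mat templ = prevFrame(roi);
    cv::Mat response;
    // Normalized cross-correlation; the peak marks the best match.
    cv::matchTemplate(currFrame, templ, response, cv::TM_CCORR_NORMED);

    double maxVal;
    cv::Point maxLoc;
    cv::minMaxLoc(response, nullptr, &maxVal, nullptr, &maxLoc);

    // Displacement of the subregion between the two frames.
    return cv::Point2i(maxLoc.x - roi.x, maxLoc.y - roi.y);
}
```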
It may also be possible to use a combination of the two approaches above, for example:
1. Subtract the second image from the first (the background).
2. Threshold etc. to find the ROI where movement is occurring.
3. Use a pattern-matching approach to track subsequent movement, focused on the ROI detected above.
The best approach will depend on your application, but there are lots of papers on this topic.
Is there an algorithm to separate both the foreground and the background of an image using OpenCV?
Can someone help me? I am confused about where to find the solution.
I am using OpenCV.
Yes! You can do something called background subtraction. You can find a complete tutorial on the OpenCV website here
This tutorial shows you how to:
- Read data from videos by using VideoCapture, or from image sequences by using imread;
- Create and update the background model by using the BackgroundSubtractor class;
- Get and show the foreground mask by using imshow;
- Save the output by using imwrite to quantitatively evaluate the results.
In case a clean-plate background is unavailable, making background subtraction infeasible (though you could still use an adaptive background-segmentation approach), you can go for a graph-cut-based approach such as OpenCV's grabCut function. This is an interactive segmentation algorithm which requires the user to create a bounding box around the foreground and (optionally) mark some pixels as foreground seeds; it then models the image as a graph and solves a min-cut problem to yield the segmentation. I've found the OpenCV implementation to work very well, even for challenging images, given reasonable color separability around the desired foreground borders.
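A sketch of the rectangle-initialized usage; the file name and bounding box are placeholders for real user input:

```cpp
// grabCut with a user-supplied bounding box around the foreground.
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat image = cv::imread("input.jpg");   // hypothetical input
    cv::Rect fgBox(50, 50, 300, 400);          // placeholder bounding box

    cv::Mat mask, bgModel, fgModel;
    cv::grabCut(image, mask, fgBox, bgModel, fgModel,
                5, cv::GC_INIT_WITH_RECT);     // 5 graph-cut iterations

    // Keep pixels labeled as definite or probable foreground.
    cv::Mat fgMask = (mask == cv::GC_FGD) | (mask == cv::GC_PR_FGD);
    cv::Mat foreground = cv::Mat::zeros(image.size(), image.type());
    image.copyTo(foreground, fgMask);

    // The background is simply everything else (the mask's complement).
    cv::Mat background = cv::Mat::zeros(image.size(), image.type());
    image.copyTo(background, ~fgMask);

    cv::imwrite("foreground.png", foreground);
    cv::imwrite("background.png", background);
    return 0;
}
```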
Once you've obtained the foreground, you only need to subtract this from the original image to obtain the background. Hope this helps.
I am using Qt, and I was able to create a basic MS Paint-style pencil drawing tool.
I created the pencil tool by connecting a series of points with lines.
It looks good for thin opaque lines, but with thick, transparent lines I get an alpha-transparency overlap (because the lines intersect at shared points). I have researched this, and one suggestion is to draw to a separate transparent buffer, obtain the maximum opacity there, and render the result back to the original buffer, but I don't really know how to do that in Qt.
I am not highly experienced with graphics or Qt, so I don't know the right approach. How do programs like MyPaint and Krita handle brushes to keep transparent lines looking nice, without the overlap?
What I do not want:
The effect I want:
As you've not shown any code, I'm going to assume that you're storing a set of points and then, in a paint function, using a painter to draw those points. The effect you're getting appears when you draw over an area you've already drawn.
One method you can use to prevent this is a QPainterPath object. When the mouse-down event occurs, call the QPainterPath's moveTo function; then call its lineTo function in mouse-move events.
Finally when it comes to rendering, instead of drawing the points, render the QPainterPath object.
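A minimal widget sketch of this approach (class name and members are illustrative). Because the whole stroke is stroked as a single path, overlapping segments within one stroke no longer stack their alpha:

```cpp
#include <QWidget>
#include <QMouseEvent>
#include <QPainter>
#include <QPainterPath>

class Canvas : public QWidget {
protected:
    void mousePressEvent(QMouseEvent* e) override {
        path.moveTo(e->pos());   // start a new subpath at the click point
    }
    void mouseMoveEvent(QMouseEvent* e) override {
        path.lineTo(e->pos());   // extend the stroke
        update();                // schedule a repaint
    }
    void paintEvent(QPaintEvent*) override {
        QPainter painter(this);
        painter.setRenderHint(QPainter::Antialiasing);
        QPen pen(QColor(0, 0, 255, 128), 12,   // semi-transparent, thick pen
                 Qt::SolidLine, Qt::RoundCap, Qt::RoundJoin);
        painter.setPen(pen);
        painter.drawPath(path);  // one draw call for the whole stroke
    }
private:
    QPainterPath path;
};
```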
---------- Edit --------------------------------------
Since you've added an example of the effect you want, I understand your problem better; you may not be able to use QPainterPath here, but I do recommend it for opaque lines.
However, if you work out the gradient changes before adding the lines to a QPainterPath, it may be possible to use a gradient pen with the QPainterPath and get it working the way you want. Alternatively...
You mentioned this in your original question:
draw on a separate transparent buffer and render there and obtain the maximum opacity and render it back to the original buffer.
This sounds more complicated than it is, due to the word "buffer". In actuality, you just create a separate QImage and draw to that rather than to the screen; then, when it comes to drawing the screen, you copy from that image instead. To "obtain the maximum opacity", you can either scan the image's bits and look at the alpha channel, or keep a separate struct of info that records the pressure of the pen and its location at each point. I would find the maximum and minimum values where the alpha is increasing and then decreasing and linearly interpolate between them for rendering, rather than trying to map every minute change.
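As a sketch of the buffer idea (function and variable names are illustrative): draw the stroke opaquely into its own transparent QImage, then composite that whole image onto the canvas at the stroke's opacity, so overlaps inside the stroke cannot stack:

```cpp
#include <QImage>
#include <QPainter>
#include <QPainterPath>

void compositeStroke(QImage& canvas, const QPainterPath& stroke) {
    // Off-screen buffer, fully transparent to start with.
    QImage buffer(canvas.size(), QImage::Format_ARGB32_Premultiplied);
    buffer.fill(Qt::transparent);

    {   // Draw the stroke opaquely into the buffer; overlaps can't stack alpha.
        QPainter p(&buffer);
        p.setRenderHint(QPainter::Antialiasing);
        p.setPen(QPen(QColor(0, 0, 255), 12, Qt::SolidLine,
                      Qt::RoundCap, Qt::RoundJoin));
        p.drawPath(stroke);
    }

    // Composite the whole stroke back at 50% opacity in one operation.
    QPainter p(&canvas);
    p.setOpacity(0.5);
    p.drawImage(0, 0, buffer);
}
```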
When rendering the buffer image back to the main one, I think you need to set a composition mode on the QPainter, but off the top of my head I'm not exactly sure which one. Read the documentation to see what they do, and experiment with them to see what effects they produce.
In my experience with graphics, you often need to experiment to see what works and to get a feel for what you're doing, especially when a method you're using starts to become slow and you need to optimise it to run at a reasonable frame rate.
See the answer I gave to this question. The same applies here.
For the sake of not giving a link-only answer, I will repeat it here:
You need to set the composition mode of the painter to Source; it draws both source and destination right now.
painter.setCompositionMode(QPainter::CompositionMode_Source);
If you want your transparent areas to show through underlying drawings, you need to set the composition mode of your result back to CompositionMode_SourceOver and draw it over the destination.
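Putting the two modes together in a sketch (surface and variable names are illustrative): segments drawn with Source replace the pixels at the shared joints instead of blending with them, and the finished buffer is then blended over the canvas with SourceOver:

```cpp
#include <QImage>
#include <QPainter>
#include <QVector>
#include <QPoint>

void renderStroke(QImage& canvas, const QVector<QPoint>& points) {
    QImage buffer(canvas.size(), QImage::Format_ARGB32_Premultiplied);
    buffer.fill(Qt::transparent);

    QPainter bufPainter(&buffer);
    bufPainter.setPen(QPen(QColor(0, 0, 255, 128), 12,
                           Qt::SolidLine, Qt::RoundCap, Qt::RoundJoin));
    bufPainter.setCompositionMode(QPainter::CompositionMode_Source);
    for (int i = 1; i < points.size(); ++i)
        bufPainter.drawLine(points[i - 1], points[i]);  // joints don't double up
    bufPainter.end();

    QPainter canvasPainter(&canvas);
    canvasPainter.setCompositionMode(QPainter::CompositionMode_SourceOver);
    canvasPainter.drawImage(0, 0, buffer);  // transparent areas show through
}
```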
I don't know if you're still looking for an answer, but I hope this helps someone.
After spending a while on this, I finally managed to detect the hands through thresholding. The only problem is that VERY FEW pixels in the background remain, which will mess up the next step. Any suggestions on how to get rid of these few background pixels? I don't want to go through the whole background-subtraction pipeline for just a few pixels, and background subtraction is not an option for this program, so please don't suggest it.
Thanks
It's hard to be sure without a more detailed description of your hand-detection algorithm. If the few remaining background pixels are isolated from the hands you have detected, I would suggest a morphological operation like opening to eliminate single-pixel detections in your binary mask. In OpenCV, that means eroding and then dilating.
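A sketch of that cleanup; the file name and kernel size are illustrative starting points:

```cpp
// Morphological opening (erode then dilate) to remove isolated specks
// from a binary hand mask.
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat mask = cv::imread("hand_mask.png", cv::IMREAD_GRAYSCALE);  // hypothetical

    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(3, 3));
    cv::Mat cleaned;
    // MORPH_OPEN kills blobs smaller than the kernel while restoring
    // the size of the surviving (hand) blobs.
    cv::morphologyEx(mask, cleaned, cv::MORPH_OPEN, kernel);

    cv::imshow("cleaned mask", cleaned);
    cv::waitKey(0);
    return 0;
}
```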