Isolating a subject with the Roto Brush tool and then creating a light wrap - after-effects

I am trying to create a light wrap on an isolated subject using the technique explained in this tutorial: http://www.youtube.com/watch?v=qR7V-4ZlW0E It uses a channel effect that sets the source of all channels to the background, then a Gaussian blur, an invert, and two CC Composite effects. I can't really explain why the person does that, as I am no expert. My problem is that I don't know how he got the subject isolated. I am trying to isolate the subject on the same layer using the Roto Brush tool, and it's not working right.
Does anyone know how to get an isolated subject with the Roto Brush tool, completely removing the background?

Make sure that the Roto Brush has created the transparency you want by clicking the transparency grid toggle in the Comp panel; you should see the grid wherever there is transparency. You can also use the Channel menu to view the alpha.

Related

MOG2 background subtraction Parameters

I am trying to use OpenCV's background subtractor class MOG2 to separate a person moving in front of a camera. I have everything set up and working nicely, but the resulting mask I am getting looks something like this:
(default settings)
Now what I would like to get is something like this:
(bad gimp skills :D)
I have already tried to mess around with the parameters described in the documentation, but all I managed to accomplish was something that looked like a motion blur effect...
So I was hoping somebody with a better understanding of the algorithm, or somebody who has already done something similar, might be able to help me!
Thanks in advance, Foaly
I am also working with this, and what I've seen is that this algorithm needs good calibration to accomplish that goal. You should be aware that the algorithm tries to push pixels that don't show changes into the background; e.g., in your skin the majority of the pixels have the same color, which may be the reason. I recommend using other kinds of methods (e.g., ZNCC-based matching) if you want to build an application like the one shown in your question.
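For reference, the main knobs OpenCV exposes for MOG2 are history, varThreshold, detectShadows, and the per-frame learning rate. A minimal C++ sketch of wiring them up (the values are illustrative starting points, not a calibrated setup):

    #include <opencv2/opencv.hpp>

    int main() {
        cv::VideoCapture cap(0);                // webcam; adjust index/file as needed
        if (!cap.isOpened()) return 1;

        // history: number of frames used to model the background
        // varThreshold: squared Mahalanobis distance to call a pixel foreground
        // detectShadows: marks shadows as gray (127) instead of white (255)
        auto mog2 = cv::createBackgroundSubtractorMOG2(/*history=*/500,
                                                       /*varThreshold=*/16,
                                                       /*detectShadows=*/true);
        cv::Mat frame, fgMask;
        while (cap.read(frame)) {
            // learningRate: 0 freezes the model, -1 lets OpenCV choose automatically
            mog2->apply(frame, fgMask, /*learningRate=*/-1);

            // drop shadow pixels (value 127) so only confident foreground remains
            cv::threshold(fgMask, fgMask, 200, 255, cv::THRESH_BINARY);

            cv::imshow("mask", fgMask);
            if (cv::waitKey(30) == 27) break;   // Esc to quit
        }
        return 0;
    }

Raising varThreshold makes the subtractor less sensitive (fewer speckles, but more holes in the person), while lowering the learning rate keeps slow-moving body parts from being absorbed into the background.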
So I guess this is where our image processing skills come into play. The first thing I would do is make the lines in the image thicker and join them up. We can use the following (a minimal sketch follows the list):
1) Thicken the lines using morphological operators combined with Otsu's thresholding. This paper worked for me when I did my ear biometrics: http://www4.comp.polyu.edu.hk/~csajaykr/myhome/papers/PR2011.pdf
2) Fill in connected components using OpenCV and clean the image.
3) Segment the human profile.
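A minimal OpenCV sketch of steps 1 and 2, assuming an 8-bit grayscale mask as input (the kernel size and iteration count are guesses you would tune):

    #include <opencv2/opencv.hpp>

    int main() {
        cv::Mat gray = cv::imread("mask.png", cv::IMREAD_GRAYSCALE); // hypothetical input
        if (gray.empty()) return 1;

        // 1) Binarize with Otsu, then thicken/join the lines with dilation + closing
        cv::Mat bw;
        cv::threshold(gray, bw, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);
        cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5));
        cv::dilate(bw, bw, kernel, cv::Point(-1, -1), /*iterations=*/2);
        cv::morphologyEx(bw, bw, cv::MORPH_CLOSE, kernel);

        // 2) Fill connected components: find outer contours and redraw them filled
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(bw.clone(), contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
        cv::Mat filled = cv::Mat::zeros(bw.size(), CV_8U);
        cv::drawContours(filled, contours, -1, cv::Scalar(255), cv::FILLED);

        cv::imwrite("filled.png", filled);
        return 0;
    }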

remove gradient of an image without a comparison image

I am currently having much difficulty thinking of a good method for removing the gradient from an image I received.
The image is a picture taken by a microscope camera that has a light glare in the middle. The image has a pattern that runs throughout it. However, I am supposed to remove the light glare on the image created by the camera light.
Unfortunately, due to the nature of the camera, it is not possible to take a picture of a black background with the light on to find the gradient distribution, nor do I have a comparison image without the gradient. (Note: the location of the light glare will always be consistent when the picture is taken.)
In simpler terms, it's like having a photo with a flash in it, except I want to get rid of the flash and I have no way of obtaining the image without flash to compare against, or even a black image with just the flash on it.
My current thought is to conduct edge detection and take samples at specific locations away from the edges (due to the color difference), and use those to gauge the gradient distribution, since those areas are supposed to have nearly identical colors. However, I was wondering whether there is an easier and better way to do this.
If needed, I will post an example of the image later.
At the moment I would prefer to solve this in C++ using OpenCV, if that makes it easier.
Thanks in advance for any ideas on this problem. If there is a link, tutorial, or post that may solve my problem, I would greatly appreciate it.
As you can tell, there is a light being shone on the image, visible as the white spot, and the top is lighter than the bottom because of that light. The color inside the oval is actually different when the picture is taken in color; however, the color between the box and the oval should be consistent. My original idea was to somehow sample only those areas and build a profile that I can use to remove the light, but I am unsure how effective that would be, or whether there is a better way.
EDIT :
Well, I tried out Roger's suggestion and the results were surprisingly good: a Gaussian blur with a 110-pixel kernel to estimate the illumination, with CLAHE applied on top of that (both done in OpenCV).
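Roughly, the pipeline looks like this in OpenCV (a sketch only; the kernel size and CLAHE settings below are approximations rather than my exact values):

    #include <opencv2/opencv.hpp>

    int main() {
        cv::Mat gray = cv::imread("microscope.png", cv::IMREAD_GRAYSCALE); // hypothetical path
        if (gray.empty()) return 1;

        // Estimate the illumination with a very large Gaussian blur
        // (kernel sizes must be odd in OpenCV, so 111 ~ the "110 kernel")
        cv::Mat illum;
        cv::GaussianBlur(gray, illum, cv::Size(111, 111), 0);

        // Divide out the illumination (work in float to avoid clipping)
        cv::Mat grayF, illumF, flatF, flat;
        gray.convertTo(grayF, CV_32F);
        illum.convertTo(illumF, CV_32F);
        cv::divide(grayF, illumF + 1e-6f, flatF, 128.0); // rescale to mid-gray
        flatF.convertTo(flat, CV_8U);

        // Restore local contrast with CLAHE
        cv::Ptr<cv::CLAHE> clahe = cv::createCLAHE(/*clipLimit=*/2.0, cv::Size(8, 8));
        cv::Mat out;
        clahe->apply(flat, out);

        cv::imwrite("flattened.png", out);
        return 0;
    }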
However, my colleague told me that the image doesn't look perfectly uniform, and pointed out that the area where the light used to be is slightly brighter. He suggested trying a selective Gaussian blur, where areas above a certain threshold pixel value are not blurred while the rest of the image is.
Does anyone have opinions on this, and perhaps a link, tutorial, or example of something like this being done? Most of what I find tends to be selective blur for programs like Photoshop and GIMP.
EDIT2 :
It is difficult to tell by eye alone, but I believe I have achieved relatively uniform illumination by using a simple plane-fitting algorithm: fitting a plane to the image and evaluating z = (-A*x - B*y) / C at each (x, y), where z is the pixel value. I think this could be improved by fitting, perhaps, a sinusoidal function instead? I am unsure, but I am relatively happy with the results. Many thanks to Roger for the great ideas.
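A minimal sketch of the plane fit, assuming a least-squares fit of z = a*x + b*y + c over all pixels using OpenCV's solver:

    #include <opencv2/opencv.hpp>

    int main() {
        cv::Mat gray = cv::imread("microscope.png", cv::IMREAD_GRAYSCALE); // hypothetical path
        if (gray.empty()) return 1;

        // Build the least-squares system: [x y 1] * [a b c]^T = z for every pixel
        int n = gray.rows * gray.cols;
        cv::Mat A(n, 3, CV_32F), b(n, 1, CV_32F);
        int i = 0;
        for (int y = 0; y < gray.rows; ++y)
            for (int x = 0; x < gray.cols; ++x, ++i) {
                A.at<float>(i, 0) = (float)x;
                A.at<float>(i, 1) = (float)y;
                A.at<float>(i, 2) = 1.0f;
                b.at<float>(i, 0) = (float)gray.at<uchar>(y, x);
            }
        cv::Mat coef; // [a, b, c]
        cv::solve(A, b, coef, cv::DECOMP_SVD); // least-squares solution

        // Subtract the fitted plane, keeping the mean brightness
        cv::Mat grayF, plane(gray.size(), CV_32F);
        gray.convertTo(grayF, CV_32F);
        float pa = coef.at<float>(0), pb = coef.at<float>(1), pc = coef.at<float>(2);
        for (int y = 0; y < gray.rows; ++y)
            for (int x = 0; x < gray.cols; ++x)
                plane.at<float>(y, x) = pa * x + pb * y + pc;
        cv::Mat flat = grayF - plane + cv::mean(grayF)[0];
        flat.convertTo(flat, CV_8U);
        cv::imwrite("plane_corrected.png", flat);
        return 0;
    }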
I believe using a set of pictures and averaging them would have been another good method (also suggested by Roger), but unfortunately I was not able to implement this, since I was not supplied with multiple pictures and the machine is under modification, so I could not use it.
I have done some work in this area previously and found that a large Gaussian blur kernel can produce a reasonable approximation to the background illumination. I will try to get something working on your example image but, in the meantime, here is an example of your image after Gaussian blur with radius 50 pixels, which may help you decide if it's worth progressing.
UPDATE
Just playing with this image, you can actually get a reasonable improvement using adaptive histogram equalisation (I used CLAHE) - see comparison below - any use?
I will update this answer with more details as I progress.
I would like to point you to this paper: http://www.cs.berkeley.edu/~ravir/dirtylens.pdf, but, in my opinion, without any sort of calibration/comparison image taken a priori, it is difficult to mine the ground truth out of the flared image.
However, if you are just trying to present the image minus the lens flare, disregarding the actual scientific data behind the flared part, then you switch into the domain of image inpainting. Criminisi's algorithm, described in this paper: http://research.microsoft.com/pubs/67276/criminisi_tip2004.pdf and explained/simplified in these two links: http://cs.brown.edu/courses/csci1950-g/results/final/eboswort/ http://www.cc.gatech.edu/~sooraj/inpainting/, will do a very good job of restoring texture information to the flared regions. (If you'd really like to pursue this approach, do mention that; more comprehensive help can be provided.)
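If you would like something quick to experiment with before committing to Criminisi's method, OpenCV's built-in inpainting is one line to try; note it is diffusion-based (Telea / Navier-Stokes), not exemplar-based, so it will not restore texture the way Criminisi's algorithm does. A minimal sketch, assuming you can paint the flare region into a mask:

    #include <opencv2/opencv.hpp>
    #include <opencv2/photo.hpp>

    int main() {
        cv::Mat img  = cv::imread("flared.png");                           // hypothetical input
        cv::Mat mask = cv::imread("flare_mask.png", cv::IMREAD_GRAYSCALE); // white = flare
        if (img.empty() || mask.empty()) return 1;

        // INPAINT_TELEA diffuses surrounding colors into the masked region;
        // swap in cv::INPAINT_NS to compare the Navier-Stokes variant
        cv::Mat restored;
        cv::inpaint(img, mask, restored, /*inpaintRadius=*/5, cv::INPAINT_TELEA);

        cv::imwrite("restored.png", restored);
        return 0;
    }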
However, given that we're dealing with microscopic data, I doubt you'd want to lose the scientific data contained in a particular region of the image. In that case, I really think you need to find a way to determine the flare model of the flash/light source with respect to the lens you're using.
I hope someone else can shed more light on this.

after effects: remove still background

I have a movie in After Effects that doesn't have a key color for the background, just a still background.
I want to detect the motion of two persons walking in front of this still background and bring them to the front, so I can create effects on the background.
Is that possible to do? Which effect do I use?
This works with a technique called difference matting. It never works well, even if you have a solid clean plate. Your best bet is to use the Roto Brush or just mask them out manually. If you want to try difference matting, you can set the footage layer with the people in it to the Difference blending mode and put a clean plate below it.
You can use the Roto Brush, which allows you to separate elements from a static background. It works better if:
you have a clean background
the video is good quality
the foreground object to be cut out is as big as possible
This method works well with a locked-down camera. Depending upon how active the people are and how still the background is, you can remove the people by cutting and assembling pieces of the background to create a background layer without them... Once you have a clean plate, use a difference matte to subtract the background from your clip, which will leave anything that moves (read: the people). This works well if the hue/values of the background and the people are different.
Doing this in conjunction with animated masks gives good results and can give you a head start on your Roto Brush work, improving your edges.
If all else fails, the Mocha plug-in works well, as long as you break your masks up into lots of parts to follow the planes of your actors.
I have never found this to be a single-tool solution.
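For intuition, the difference matte described above boils down to a per-pixel absolute difference against the clean plate followed by a threshold. Outside After Effects, the same idea looks roughly like this in OpenCV (illustrative only; real footage needs noise handling and careful threshold tuning):

    #include <opencv2/opencv.hpp>

    int main() {
        cv::Mat plate = cv::imread("clean_plate.png"); // hypothetical clean plate
        cv::VideoCapture cap("walk.mov");              // hypothetical footage
        if (plate.empty() || !cap.isOpened()) return 1;

        cv::Mat frame, diff, mask;
        cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5));
        while (cap.read(frame)) {
            // "Difference" blending mode: per-pixel absolute difference vs. clean plate
            cv::absdiff(frame, plate, diff);
            cv::cvtColor(diff, diff, cv::COLOR_BGR2GRAY);

            // Thresholding turns the difference into a matte; it fails wherever
            // the people and the background share similar hue/values
            cv::threshold(diff, mask, 25, 255, cv::THRESH_BINARY);
            cv::morphologyEx(mask, mask, cv::MORPH_OPEN, kernel); // knock out noise

            cv::imshow("matte", mask);
            if (cv::waitKey(30) == 27) break; // Esc to quit
        }
        return 0;
    }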
It is not impossible, but I don't think it would give you the best result.
Anyway, you should mask them out manually, but not keyframe by keyframe; there is a technique for it. To set the mask keyframes, first set three keyframes: at the start, middle, and end of your footage.
After masking those three, do the same between the first keyframe and the middle one;
that is, set a key halfway between the first and middle keyframes.
Do the same between the middle and end keys.
This technique saves time and produces a mask with fewer faults.
By the way, you can also use Mocha Pro for tracking the persons.

Face detection and image preview drawing

I'm developing application that uses DirectShow combined with C++.
Its main goal is to capture users' faces.
I have reached the phase where I capture an image from my webcam.
The problem is that I need an intelligent renderer; in fact, I need the renderer to be able to detect a face inside a rectangle.
I'm wondering if there is a filter that I can use for this purpose,
or if I need to create my own customized filter.
If so, enlighten me.
It would look like this:
I need to understand how I can draw a rectangle in my renderer in the first place, because otherwise, even if I know the algorithm, I will not be able to apply it. This is my main goal now.
I have some ideas, but I don't know if they are correct. I think I need to grab each frame separately and modify some pixels, i.e., draw onto the live render.
Have a look at OpenCV
A quick look inside, and I found this.
Making your own "filter" that works well is no easy job.
Are you talking about automatic detection of where there is something like a human face in the shot you have taken with the webcam? In that case, object detection algorithms like Viola-Jones might be interesting for you.
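For example, OpenCV ships pretrained Haar cascades for Viola-Jones detection. A minimal sketch that grabs webcam frames and draws a rectangle around each detected face (the cascade file path is an assumption; point it at the copy in your OpenCV install):

    #include <opencv2/opencv.hpp>

    int main() {
        // Path is an assumption; the XML ships in OpenCV's data directory
        cv::CascadeClassifier face_cascade;
        if (!face_cascade.load("haarcascade_frontalface_default.xml")) return 1;

        cv::VideoCapture cap(0);
        if (!cap.isOpened()) return 1;

        cv::Mat frame, gray;
        while (cap.read(frame)) {
            cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
            cv::equalizeHist(gray, gray);

            std::vector<cv::Rect> faces;
            face_cascade.detectMultiScale(gray, faces, /*scaleFactor=*/1.1,
                                          /*minNeighbors=*/3, 0, cv::Size(30, 30));

            // Draw a rectangle around each detected face
            for (const cv::Rect& face : faces)
                cv::rectangle(frame, face, cv::Scalar(0, 255, 0), 2);

            cv::imshow("faces", frame);
            if (cv::waitKey(10) == 27) break; // Esc to quit
        }
        return 0;
    }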
If a commercial package is an option, you can use the MontiVision Filter SDK, which includes filters that should do the job out of the box. They offer a free evaluation version, which is perfect for experimentation.

Augmented Reality-PC

I recently saw the virtual mirror concept on YouTube. I tried it out and researched it. It seems that the creators have used augmented reality so that people can see the output on their screens. While researching, I found out that usually a pattern is identified, onto which a 3D image is superimposed.
Question 1: How are they able to superimpose the jewellery and track the face of the person without identifying any pattern?
I also checked various libraries that I could use to make a program similar to the one they show. It seems a lot of people are using Android phones and iPhones to make apps that use augmented reality.
Question 2: Is there any way that I can use C++ to make a program that uses augmented reality?
Oh, and the most important thing, the link to the application is provided below:
http://www.boutiqueaccessories.com.au/virtual-mirror/w1/i1001664/
Do try it out. It's a good experience. :D
I'm not able to actually try the live demo, but the linked video suggests that they either use some simplified pattern recognition (getting the person's outline), or they simply track you based on the initial image (with your position/texture being determined by the outline being shown).
Following the video, it's easy to see that there's no real/advanced AR behind this. The images are simply overlaid or hidden (e.g., when it loses track of one ear because you are looking to the side), and they're not transformed (no perspective changes or resizing happening). They definitely seem to track the head (or features like the ears, neck, etc.). Depending on your background and surroundings, that's actually a rather trivial task.
Question 2: Sure! There are lots of premade toolkits out there, but you could just as well use a general image processing library such as OpenCV to do the math. Augmented reality usually uses some kind of pattern (e.g., a card or page with a known pattern) to determine the correct position and transformation for the content being added to the image. There are also approaches that use the device's orientation and perspective changes in camera images to determine depth/position (I really like this demo).
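To make the pattern idea concrete, here is a minimal sketch using OpenCV's ArUco module (it lives in opencv_contrib, and the API has shifted across OpenCV versions; this is the classic free-function form). It finds known markers in a camera frame, which is the usual first step before overlaying content:

    #include <opencv2/opencv.hpp>
    #include <opencv2/aruco.hpp> // requires opencv_contrib

    int main() {
        cv::VideoCapture cap(0);
        if (!cap.isOpened()) return 1;

        // Known pattern: a predefined 4x4 ArUco marker dictionary
        cv::Ptr<cv::aruco::Dictionary> dict =
            cv::aruco::getPredefinedDictionary(cv::aruco::DICT_4X4_50);

        cv::Mat frame;
        while (cap.read(frame)) {
            std::vector<int> ids;
            std::vector<std::vector<cv::Point2f>> corners;
            cv::aruco::detectMarkers(frame, dict, corners, ids);

            // The detected corners give you position/orientation; with camera
            // calibration you could go on to estimate the full 3D pose and
            // render content on top of the marker
            if (!ids.empty())
                cv::aruco::drawDetectedMarkers(frame, corners, ids);

            cv::imshow("ar", frame);
            if (cv::waitKey(10) == 27) break; // Esc to quit
        }
        return 0;
    }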