I am trying to do feature tracking for a white bead (circular in shape) under the microscope. I tried using a threshold, finding contours, and then using a Kalman filter, but the problem is that there is no major colour difference between the bead and the noisy background. The light from the microscope is also not uniform, so thresholding is not working well. I read about adaptive thresholding but I don't really understand it, so any help or tips on tracking the bead would be highly appreciated.
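For reference, adaptive thresholding computes a separate threshold for every pixel from its local neighbourhood, which is exactly what makes it tolerant of uneven illumination. A minimal OpenCV sketch of a threshold-plus-contours step using cv::adaptiveThreshold; the file name, block size, and offset are placeholders to tune:

```cpp
// Minimal sketch: adaptive thresholding + contour selection for a bright, circular bead.
// File name and parameter values (block size 51, offset -5, area/circularity cut-offs) are
// illustrative and need tuning for the actual microscope images.
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat gray = cv::imread("bead_frame.png", cv::IMREAD_GRAYSCALE);
    if (gray.empty()) return 1;

    // Light smoothing so sensor noise does not break the bead into fragments.
    cv::GaussianBlur(gray, gray, cv::Size(5, 5), 0);

    // The threshold is computed per pixel from a 51x51 neighbourhood, so a bright
    // corner and a dark corner of the frame each get their own threshold.
    cv::Mat bw;
    cv::adaptiveThreshold(gray, bw, 255,
                          cv::ADAPTIVE_THRESH_GAUSSIAN_C,
                          cv::THRESH_BINARY, 51, -5);

    // Keep the most circular, reasonably sized contour as the bead candidate.
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bw, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    for (const auto& c : contours)
    {
        double area  = cv::contourArea(c);
        double perim = cv::arcLength(c, true);
        if (perim <= 0) continue;
        double circularity = 4.0 * CV_PI * area / (perim * perim);  // 1.0 = perfect circle
        if (area > 50 && circularity > 0.7)
        {
            cv::Point2f center;
            float radius;
            cv::minEnclosingCircle(c, center, radius);
            // 'center' is the measurement to feed into the Kalman filter.
        }
    }
    return 0;
}
```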
Related
I want to do image processing using OpenCV and C++. When I capture an image in a dark environment, people detection seems to be hard. Changing brightness and contrast may help, but my project is about computer vision, so I want my program to identify whether brightness and contrast need to be increased or reduced. How can I identify that? I have no idea, please help.
Good solution: Use illumination so your scene is not dark.
If this is not possible you can increase exposure time and/or gain. Both methods degrade your SNR. Especially with moving people, motion blur will become a problem if your exposure time is too long.
Do not just increase image brightness or contrast by software. It makes no difference for your computer, only for you.
Read something about auto-exposure algorithms. A well-exposed image is neither underexposed nor overexposed; its histogram should be as broad as possible.
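As a rough illustration of that histogram criterion, something like the following could flag an image as poorly exposed; the cut-off values are arbitrary placeholders, not standard numbers:

```cpp
// Rough exposure check from the grayscale histogram.
// Cut-offs (mean below 40 or above 215, more than half the pixels in the
// darkest/brightest bins) are illustrative and should be tuned.
#include <opencv2/opencv.hpp>

bool isPoorlyExposed(const cv::Mat& bgr)
{
    cv::Mat gray;
    cv::cvtColor(bgr, gray, cv::COLOR_BGR2GRAY);

    // 256-bin histogram of the grayscale image.
    int histSize = 256;
    int channels[] = {0};
    float range[] = {0, 256};
    const float* ranges[] = {range};
    cv::Mat hist;
    cv::calcHist(&gray, 1, channels, cv::Mat(), hist, 1, &histSize, ranges);

    double total = static_cast<double>(gray.total());
    double darkFraction = 0.0, brightFraction = 0.0;
    for (int i = 0; i < 16; ++i)    darkFraction   += hist.at<float>(i);
    for (int i = 240; i < 256; ++i) brightFraction += hist.at<float>(i);
    darkFraction   /= total;
    brightFraction /= total;

    double mean = cv::mean(gray)[0];

    // Under-exposed: very dark mean or a large clump of pixels at the low end.
    // Over-exposed: very bright mean or many saturated pixels.
    bool under = (mean < 40.0)  || (darkFraction   > 0.5);
    bool over  = (mean > 215.0) || (brightFraction > 0.5);
    return under || over;
}
```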
I believe you can try "histogram equalization".
Here is an example image that I have used for this experiment.
Example
Source code in C++ language
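The original listing is not reproduced here, but a minimal OpenCV sketch of the idea might look like this (the file name is a placeholder; equalization is applied to the luminance channel only so colours are preserved):

```cpp
// Minimal histogram equalization sketch (input file name is a placeholder).
// Equalization is applied to the luminance channel only, so colours stay roughly intact.
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat bgr = cv::imread("dark_scene.jpg");
    if (bgr.empty()) return 1;

    cv::Mat ycrcb;
    cv::cvtColor(bgr, ycrcb, cv::COLOR_BGR2YCrCb);

    std::vector<cv::Mat> planes;
    cv::split(ycrcb, planes);
    cv::equalizeHist(planes[0], planes[0]);   // stretch the luminance histogram
    cv::merge(planes, ycrcb);

    cv::Mat result;
    cv::cvtColor(ycrcb, result, cv::COLOR_YCrCb2BGR);
    cv::imwrite("equalized.jpg", result);
    return 0;
}
```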
Please let me know if you need any more information regarding this topic.
I think you should consider using an infrared camera. See, for example, the article "Selection of a Visible-Light vs. Thermal Infrared Sensor in Dynamic Environments Based on Confidence Measures" by Cuerda and coworkers.
I'm new to OpenCV and computer vision. We are working on a robot project with ROS and a Kinect, and we want to evaluate whether the room has adequate lighting using the Kinect. Is there a way to use OpenCV to process the Kinect camera data and evaluate the environment?
Thanks in advance.
OpenCV has methods for connecting up with the Kinect, so yes, you would be able to pull the Kinect RGB image from the device.
As for determining your lighting conditions, I believe the Kinect has an auto-gain function built in. In a very dark environment, that auto-gain is going to cause a large amount of noise. So if you do some experiments in dark and bright environments and measure the noise in the imagery, you might be able to tell from the noise level whether the image (and consequently the environment) is too dark.
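As a rough sketch of that idea: combine the mean intensity with a simple noise proxy (here the standard deviation of the Laplacian) into a darkness check. The thresholds are placeholders you would calibrate with those dark/bright experiments:

```cpp
// Rough darkness/noise check on an RGB frame pulled from the Kinect (or any camera).
// Threshold values are placeholders to be calibrated against known dark/bright scenes.
#include <opencv2/opencv.hpp>

bool looksTooDark(const cv::Mat& bgr)
{
    cv::Mat gray;
    cv::cvtColor(bgr, gray, cv::COLOR_BGR2GRAY);

    // Overall brightness.
    double meanBrightness = cv::mean(gray)[0];

    // Quick noise proxy: high-frequency energy via the Laplacian.
    // Auto-gain in a dark room amplifies sensor noise, which inflates this value.
    cv::Mat lap;
    cv::Laplacian(gray, lap, CV_64F);
    cv::Scalar mu, sigma;
    cv::meanStdDev(lap, mu, sigma);
    double noiseLevel = sigma[0];

    return meanBrightness < 50.0 && noiseLevel > 20.0;  // placeholder thresholds
}
```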
You could look for differences between two images, one where you shine a light and one where you don't. I imagine the change will be minimal in a bright environment, but there will be a big difference in a dark one.
You'd have to elaborate on what would be "adequate lighting" for this to be more than a binary result.
I have been trying to do the following -
When a user uploads an image in my web app, I'd like to detect his/her face in it and extract the face (from forehead to chin and cheek to cheek).
I tried OpenCV/C++ face detection using Haar cascades, but the problem is that it only gives an approximate region for the face, so either some background ends up inside the ROI or the complete face doesn't fit inside it.
I also want to detect the eyes inside the face, and with the above technique the eye detection isn't very accurate.
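For reference, the Haar-cascade approach described above (detect the face, then search for the eyes inside the face rectangle) looks roughly like the sketch below. Restricting the eye search to the upper half of the face rectangle is a common refinement, not something from the original post; the cascade file names are the standard ones shipped with OpenCV and the input path is a placeholder:

```cpp
// Rough sketch of nested Haar-cascade face + eye detection.
// Cascade file names are the ones shipped with OpenCV; adjust paths for your install.
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::CascadeClassifier faceCascade("haarcascade_frontalface_alt.xml");
    cv::CascadeClassifier eyeCascade("haarcascade_eye.xml");
    cv::Mat img = cv::imread("upload.jpg");
    if (img.empty() || faceCascade.empty() || eyeCascade.empty()) return 1;

    cv::Mat gray;
    cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
    cv::equalizeHist(gray, gray);

    std::vector<cv::Rect> faces;
    faceCascade.detectMultiScale(gray, faces, 1.1, 3, 0, cv::Size(80, 80));

    for (const cv::Rect& face : faces)
    {
        // Search for eyes only in the upper half of the face rectangle,
        // which cuts down on false positives from the background.
        cv::Rect upperFace(face.x, face.y, face.width, face.height / 2);
        std::vector<cv::Rect> eyes;
        eyeCascade.detectMultiScale(gray(upperFace), eyes, 1.1, 3, 0, cv::Size(20, 20));
        // Eye rectangles are relative to 'upperFace'; offset by its top-left corner
        // to map them back into full-image coordinates.
    }
    return 0;
}
```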
I've read up on a new technique called Active Appearance Model (AAM). The blogs where I read up about this show that this is exactly what I want but I am lost on how to implement this.
My queries are -
Is using AAM a good idea for face detection and face feature detection?
Are there any other techniques for doing the same?
Any help on any of these is much appreciated.
Thanks !
As you noticed OpenCV's implementation of face detection is not state-of-the-art. It is a very good and robust implementation but you can do better.
Recently, Zhu and Ramanan (CVPR 2012) introduced "Face Detection, Pose Estimation and Landmark Localization in the Wild", which is considered one of the leading face detection algorithms of recent years.
Their algorithm is capable of detecting faces in both frontal and profile views AND identifying keypoints on the detected face, such as the eyes, nose, and mouth.
The authors were kind enough to publish their code along with the learned models. It is a Matlab implementation, but the main computations are done in C++, so it should not be too difficult to make a standalone C++ implementation of their method.
Currently I am having difficulty thinking of a good method for removing the illumination gradient from an image I received.
The image was taken by a microscope camera and has a light glare in the middle. The image has a pattern that runs throughout it; however, I am supposed to remove the glare created by the camera's light.
Unfortunately, due to the nature of the camera, it is not possible to take a picture of a black background with the light on to find the gradient distribution, nor do I have a comparison image without the gradient. (Note: the location of the light glare is always consistent when the picture is taken.)
In simpler terms, it's like having a photo taken with a flash and wanting to get rid of the flash. The only problem is that I have no way of obtaining the same image without the flash to compare against, or even a black image with just the flash in it.
My current thought is to run edge detection, take samples at specific locations away from the edges (because of the colour difference), and use those to estimate the gradient distribution, since those areas are supposed to have nearly identical colours. However, I was wondering whether there is an easier and better way to do this.
If needed I will post an example of the image later.
At the moment I would prefer to solve this in C++ using OpenCV, if that makes it easier.
Thanks in advance for any ideas. If there is a link, tutorial, or post that may solve my problem, I would greatly appreciate it.
As you can tell from the white spot, there is a light shining on the image, and the top is lighter than the bottom because of it. The colour inside the oval is actually different when the picture is taken in colour, but the colour between the box and the oval should be consistent. My original idea was to sample only those areas somehow and build a profile that I can use to remove the light, but I am unsure how effective that would be or whether there is a better way.
EDIT :
Well, I tried out Roger's suggestion and the results were surprisingly good: a Gaussian blur with a 110-pixel kernel to estimate the illumination, with CLAHE applied on top of that (both done in OpenCV).
However, my colleague told me that the image doesn't look perfectly uniform and pointed out that the area where the light used to be is slightly brighter. He suggested trying a selective Gaussian blur, where areas above a certain pixel-value threshold are not blurred while the rest of the image is.
Does anyone have opinions on this, and perhaps a link, tutorial, or example of something like this being done? Most of what I find is about selective blur in programs like Photoshop and GIMP.
EDIT2 :
It is difficult to tell just by eye, but I believe I have achieved reasonably uniform illumination by using a simple plane-fitting algorithm: fitting a plane to the points (x, y, z), where z is the pixel value, and evaluating it as z = (-A*x - B*y) / C. I think this could perhaps be improved by fitting a sine function instead, but I am not sure. In any case, I am relatively happy with the results. Many thanks to Roger for the great ideas.
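For reference, a least-squares version of that plane fit might look like the sketch below: it fits z = a*x + b*y + c to the pixel values and subtracts the fitted plane while keeping the mean level. This is an illustration of the idea, not the exact code used above:

```cpp
// Sketch of flattening illumination with a least-squares plane fit z = a*x + b*y + c.
// Illustration of the idea described above, not the exact code used.
#include <opencv2/opencv.hpp>

cv::Mat removePlanarGradient(const cv::Mat& gray8u)
{
    cv::Mat z;
    gray8u.convertTo(z, CV_32F);

    // Build the design matrix [x y 1] and the observation vector z for every pixel.
    cv::Mat A(z.rows * z.cols, 3, CV_32F);
    cv::Mat b(z.rows * z.cols, 1, CV_32F);
    int i = 0;
    for (int y = 0; y < z.rows; ++y)
        for (int x = 0; x < z.cols; ++x, ++i)
        {
            A.at<float>(i, 0) = static_cast<float>(x);
            A.at<float>(i, 1) = static_cast<float>(y);
            A.at<float>(i, 2) = 1.0f;
            b.at<float>(i, 0) = z.at<float>(y, x);
        }

    // Least-squares solution for (a, b, c).
    cv::Mat coeffs;
    cv::solve(A, b, coeffs, cv::DECOMP_SVD);
    float pa = coeffs.at<float>(0), pb = coeffs.at<float>(1), pc = coeffs.at<float>(2);

    // Subtract the fitted plane, keeping the overall mean so the result stays in range.
    double meanLevel = cv::mean(z)[0];
    cv::Mat flat(z.size(), CV_32F);
    for (int y = 0; y < z.rows; ++y)
        for (int x = 0; x < z.cols; ++x)
            flat.at<float>(y, x) =
                z.at<float>(y, x) - (pa * x + pb * y + pc) + static_cast<float>(meanLevel);

    cv::Mat out;
    flat.convertTo(out, CV_8U);
    return out;
}
```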
I believe averaging a set of pictures would have been another good method (also suggested by Roger), but unfortunately I was not able to implement it, since I was not supplied with multiple pictures and the machine is under modification so I could not use it myself.
I have done some work in this area previously and found that a large Gaussian blur kernel can produce a reasonable approximation to the background illumination. I will try to get something working on your example image but, in the meantime, here is an example of your image after Gaussian blur with radius 50 pixels, which may help you decide if it's worth progressing.
UPDATE
Just playing with this image, you can actually get a reasonable improvement using adaptive histogram equalisation (I used CLAHE) - see comparison below - any use?
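In OpenCV terms, the blur-then-CLAHE idea looks roughly like the sketch below. The kernel size, the divide-by-illumination step, and the CLAHE clip limit are reasonable starting points rather than the exact settings used here:

```cpp
// Rough sketch: estimate background illumination with a large Gaussian blur,
// divide it out, then apply CLAHE. Kernel size and clip limit are starting points to tune.
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat gray = cv::imread("microscope.png", cv::IMREAD_GRAYSCALE);
    if (gray.empty()) return 1;

    // Very large blur: only the slowly varying illumination survives.
    cv::Mat illumination;
    cv::GaussianBlur(gray, illumination, cv::Size(101, 101), 0);

    // Divide by the illumination estimate to flatten the image.
    cv::Mat grayF, illumF, flatF;
    gray.convertTo(grayF, CV_32F);
    illumination.convertTo(illumF, CV_32F);
    illumF += 1.0f;                                    // avoid division by zero
    cv::divide(grayF, illumF, flatF, 128.0);           // scale back to a mid-grey level
    cv::Mat flat;
    flatF.convertTo(flat, CV_8U);

    // Adaptive histogram equalisation (CLAHE) to restore local contrast.
    cv::Ptr<cv::CLAHE> clahe = cv::createCLAHE(2.0, cv::Size(8, 8));
    cv::Mat result;
    clahe->apply(flat, result);
    cv::imwrite("flattened.png", result);
    return 0;
}
```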
I will update this answer with more details as I progress.
I would like to point you to this paper: http://www.cs.berkeley.edu/~ravir/dirtylens.pdf, but, in my opinion, without any sort of calibration/comparison image taken a priori, it is difficult to mine out the ground truth from the flared image.
However, if you are just trying to present the image minus the lens flare, disregarding the actual scientific data behind the flared part, then you switch into the domain of image inpainting. Criminisi's algorithm, as described in this paper: http://research.microsoft.com/pubs/67276/criminisi_tip2004.pdf and explained/simplified in these two links: http://cs.brown.edu/courses/csci1950-g/results/final/eboswort/ and http://www.cc.gatech.edu/~sooraj/inpainting/, will do a very good job of restoring texture information to the flared-up regions. (If you'd really like to pursue this approach, do mention that; more comprehensive help can be provided.)
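Note that OpenCV itself does not ship Criminisi's exemplar-based method, but its cv::inpaint function (Telea or Navier-Stokes) is an easy way to get a first impression, given a hand-drawn mask of the flared region. A rough sketch, with placeholder file names and radius:

```cpp
// Rough inpainting sketch. cv::inpaint implements Telea / Navier-Stokes, not Criminisi's
// exemplar-based method, but it gives a quick first impression of inpainting results.
// The mask (white = flared region to fill) and the radius are placeholders.
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat img  = cv::imread("flared.png");
    cv::Mat mask = cv::imread("flare_mask.png", cv::IMREAD_GRAYSCALE);  // white where the flare is
    if (img.empty() || mask.empty()) return 1;

    cv::Mat restored;
    cv::inpaint(img, mask, restored, 5.0, cv::INPAINT_TELEA);
    cv::imwrite("restored.png", restored);
    return 0;
}
```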
However, given the fact that we're dealing with microscopic data, I doubt if you'd like to lose the scientific data contained in a particular region of an image. In that case, I really think you need to find a workaround to determine the flare model of the flash/light source w.r.t the lens you're using.
I hope someone else can shed more light on this.
I'm currently working on an application that tracks movement on a bridge and could use a few tips to make the tracking more robust. The tracking is done over a long period of time during the day, so tips on how best to deal with this are handy.
Currently I'm doing basic frame differencing and blob tracking using OpenCV and OpenFrameworks. Admittedly the question is quite open-ended in this state: I'm trying to get advice on stable tracking in outdoor conditions:
how to handle light changes
how to ignore shadows (tracking dark blobs can trigger shadows to be tracked as well)
how to isolate people (I've looked into OpenCv's HOGDescriptor but it's a bit much for my setup, I can deal with simpler/less exact data)
Also, I'm thinking of improving stability by applying a few filters like blur and high-pass to the images. Any other tricks/tips I could use?
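For reference, a simplified sketch of the differencing-and-blob step described above; the input path, threshold, and blob-size values are placeholders:

```cpp
// Simplified sketch of the current pipeline: frame differencing followed by blob extraction.
// Threshold and size values are placeholders to tune for the bridge footage.
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::VideoCapture cap("bridge.mp4");                // placeholder input
    if (!cap.isOpened()) return 1;

    cv::Mat prevGray, frame, gray, diff, mask;
    while (cap.read(frame))
    {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::GaussianBlur(gray, gray, cv::Size(5, 5), 0);   // suppress sensor noise

        if (!prevGray.empty())
        {
            cv::absdiff(gray, prevGray, diff);             // frame differencing
            cv::threshold(diff, mask, 25, 255, cv::THRESH_BINARY);
            cv::morphologyEx(mask, mask, cv::MORPH_OPEN,
                             cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5)));

            std::vector<std::vector<cv::Point>> blobs;
            cv::findContours(mask, blobs, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
            for (const auto& b : blobs)
                if (cv::contourArea(b) > 200)              // ignore tiny blobs
                {
                    cv::Rect box = cv::boundingRect(b);
                    cv::rectangle(frame, box, cv::Scalar(0, 255, 0), 2);
                }
        }
        prevGray = gray.clone();

        cv::imshow("tracking", frame);
        if (cv::waitKey(1) == 27) break;                   // Esc to quit
    }
    return 0;
}
```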