Remove gradient of an image without a comparison image - C++

I am currently having much difficulty thinking of a good method for removing the gradient from an image I received.
The image is a picture taken by a microscope camera that has a light glare in the middle. The image has a pattern that runs throughout it. However, I am supposed to remove the light glare on the image created by the camera light.
Unfortunately, due to the nature of the camera, it is not possible to take a picture on a black background with the light on to find the gradient distribution. Nor do I have a comparison image without the gradient. (Note: the location of the light glare will always be consistent when the picture is taken.)
In simpler terms, it is like having a photo with a flash in it, but I want to get rid of the flash. The only problem is I have no way of obtaining the image without the flash to compare to, or even a black image with just the flash on it.
My current thought is to conduct edge detection and obtain samples at specific locations away from the edges (due to the color difference) and use those to gauge the distribution of the gradient, since those areas are supposed to have relatively identical colors. However, I was wondering if there is an easier and better way to do this.
If needed, I will post an example of the image later.
At the moment I have a preference for solving this in C++ using OpenCV, if that makes it easier.
Thanks in advance for any possible ideas for this problem. If there is another link, tutorial, or post that may solve my problem, I would greatly appreciate it.
As you can tell from the white spot, there is a light being shone on the image, and the top is lighter than the bottom because of it. The color inside the oval is actually different when the picture is taken in color; however, the color between the box and the oval should be consistent. My original idea was to somehow sample only those areas and build a profile that I can use to remove the light, but I am unsure how effective that would be, or whether there is a better way.
EDIT :
Well, I tried out Roger's suggestion and the results were surprisingly good: a Gaussian blur with a 110-pixel kernel to estimate the illumination, with CLAHE applied on top of that (both done in OpenCV).
However, my colleague told me that the image doesn't look perfectly uniform and pointed out that the area around where the light used to be is slightly brighter. He suggested trying a selective Gaussian blur, where areas above a certain threshold pixel value are not blurred while the rest of the image is.
Does anyone have opinions regarding this, and perhaps a link, tutorial, or example of something like this being done? Most of the things I find tend to be selective blur for programs like Photoshop and GIMP.
EDIT2 :
It is difficult to tell by eye alone, but I believe I have achieved relatively close uniformization by using a simple plane-fitting algorithm: z = (-A*x - B*y) / C for each (x, y), where z is the pixel value. I think this could perhaps be improved by using a sine-fitting function instead, but I am unsure. I am relatively happy with the results. Many thanks to Roger for the great ideas.
I believe using a bunch of pictures and taking the average would have been another good method (suggested by Roger), but unfortunately I was not able to implement it, since I was not supplied with various pictures and the machine is under modification, so I was unable to use it.
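For anyone curious, here is a minimal sketch of the plane-fitting step described in EDIT2, assuming a single-channel 8-bit image and OpenCV; the function name, subsampling stride, and mean re-centering are my own choices, not taken from the original code:

#include <opencv2/opencv.hpp>
#include <vector>

// Fit z = a*x + b*y + c to the pixel intensities by least squares, then
// subtract the fitted plane so the illumination becomes (roughly) flat.
cv::Mat flattenWithPlaneFit(const cv::Mat& gray) // expects CV_8UC1
{
    // Subsample the image so the linear system stays small.
    std::vector<cv::Point3f> samples;
    for (int y = 0; y < gray.rows; y += 8)
        for (int x = 0; x < gray.cols; x += 8)
            samples.push_back(cv::Point3f((float)x, (float)y, (float)gray.at<uchar>(y, x)));

    // Solve [x y 1] * [a b c]^T = z in the least-squares sense.
    cv::Mat A((int)samples.size(), 3, CV_32F), z((int)samples.size(), 1, CV_32F);
    for (int i = 0; i < (int)samples.size(); ++i) {
        A.at<float>(i, 0) = samples[i].x;
        A.at<float>(i, 1) = samples[i].y;
        A.at<float>(i, 2) = 1.0f;
        z.at<float>(i, 0) = samples[i].z;
    }
    cv::Mat coeff; // [a, b, c]
    cv::solve(A, z, coeff, cv::DECOMP_SVD);

    // Remove the plane and shift back to the original mean brightness.
    float a = coeff.at<float>(0), b = coeff.at<float>(1), c = coeff.at<float>(2);
    float meanLevel = (float)cv::mean(gray)[0];
    cv::Mat flat(gray.size(), CV_32F);
    for (int y = 0; y < gray.rows; ++y)
        for (int x = 0; x < gray.cols; ++x)
            flat.at<float>(y, x) = gray.at<uchar>(y, x) - (a * x + b * y + c) + meanLevel;

    cv::Mat out;
    flat.convertTo(out, CV_8U); // saturates to [0, 255]
    return out;
}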

I have done some work in this area previously and found that a large Gaussian blur kernel can produce a reasonable approximation to the background illumination. I will try to get something working on your example image but, in the meantime, here is an example of your image after Gaussian blur with radius 50 pixels, which may help you decide if it's worth progressing.
UPDATE
Just playing with this image, you can actually get a reasonable improvement using adaptive histogram equalisation (I used CLAHE) - see comparison below - any use?
I will update this answer with more details as I progress.
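For anyone who wants to reproduce this, here is a minimal sketch of the blur-then-divide idea plus CLAHE in OpenCV C++; the filename, sigma, and CLAHE parameters are illustrative guesses that will need tuning on the actual image:

#include <opencv2/opencv.hpp>

int main()
{
    // "microscope.png" is a placeholder filename.
    cv::Mat img = cv::imread("microscope.png", cv::IMREAD_GRAYSCALE);
    if (img.empty()) return 1;

    // Estimate the slowly varying illumination with a very large Gaussian blur.
    cv::Mat imgF, background;
    img.convertTo(imgF, CV_32F);
    cv::GaussianBlur(imgF, background, cv::Size(0, 0), 50.0); // sigma ~ 50 px, tune this

    // Divide out the illumination (flat-field style correction), rescaled by
    // the mean background level so the result stays in a sensible range.
    cv::Mat corrected;
    cv::divide(imgF, background, corrected, cv::mean(background)[0]);
    corrected.convertTo(corrected, CV_8U);

    // Optional: boost local contrast with CLAHE.
    cv::Ptr<cv::CLAHE> clahe = cv::createCLAHE(2.0, cv::Size(8, 8));
    cv::Mat result;
    clahe->apply(corrected, result);

    cv::imwrite("corrected.png", result);
    return 0;
}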

I would like to point you to this paper: http://www.cs.berkeley.edu/~ravir/dirtylens.pdf, but, in my opinion, without any sort of calibration/comparison image taken a priori, it is difficult to mine out the ground truth from the flared image.
However, if you are trying to just present the image minus the lens flare, disregarding the actual scientific data behind the flared part, then you switch into the domain of image inpainting. Criminisi's algorithm, as described in this paper: http://research.microsoft.com/pubs/67276/criminisi_tip2004.pdf and explained/simplified in these two links: http://cs.brown.edu/courses/csci1950-g/results/final/eboswort/ http://www.cc.gatech.edu/~sooraj/inpainting/, will do a very good job of restoring texture information to the flared-up regions. (If you'd really like to pursue this approach, do mention that. More comprehensive help can be provided for this.)
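Criminisi's exemplar-based method is not part of core OpenCV, but if you just want to see how inpainting behaves on the flared region, the built-in cv::inpaint (Telea or Navier-Stokes, much simpler than Criminisi) is a quick way to try the idea; the mask and radius below are placeholders:

#include <opencv2/opencv.hpp>
#include <opencv2/photo.hpp>

// Paint over the flared region, given a binary mask where non-zero marks the flare.
cv::Mat removeFlareByInpainting(const cv::Mat& img, const cv::Mat& flareMask)
{
    cv::Mat result;
    // A 3-pixel inpainting radius is only a starting point; INPAINT_TELEA is
    // usually faster, INPAINT_NS sometimes looks smoother.
    cv::inpaint(img, flareMask, result, 3, cv::INPAINT_TELEA);
    return result;
}

Since you said the glare location is consistent, the mask could be as simple as a thresholded copy of an image with the brightest region marked.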
However, given the fact that we're dealing with microscopic data, I doubt if you'd like to lose the scientific data contained in a particular region of an image. In that case, I really think you need to find a workaround to determine the flare model of the flash/light source w.r.t the lens you're using.
I hope someone else can shed more light on this.

Related

How to find bad pixels on an image that has clusters of bad pixels?

I am working on a project in which my task is to find malfunctioning detector pixels. I thought that this problem is really similar to the problems people face when trying to detect bad pixels in an image. Right now I have maps that have good and bad detector pixels. The way to find out if a detector part is bad is the following: if it collects different data than the other, non-malfunctioning pixels around it, then it is probably malfunctioning. However, in my case the bad pixels tend to clump up next to each other, and I don't really know how I should interpret this. Can someone help me out with a good algorithm, or a book that would be helpful?
This is how a map looks:
These should be found:
If you have multiple images from the same sensor and there are bad pixels in the same place, you can detect them by comparing the images pixel by pixel. This will allow you to detect places that do not change (probably bad pixels).
Another idea would be to use something like a Gaussian filter and then compare the blurred image with the original one (see the sketch below).
A good idea would be to load some images into GIMP or Photoshop and try some filters; once you find a good way to spot the bad pixels, implement it yourself. I would recommend OpenCV for this task.
OpenCV has lots of built-in mechanisms. Some of them (edge detection? blurring?) may be interesting for you.
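As a rough sketch of the blur-and-compare idea above (the kernel size and threshold are guesses you would tune on your maps):

#include <opencv2/opencv.hpp>

// Flag pixels that deviate strongly from a smoothed version of the map.
cv::Mat findSuspectPixels(const cv::Mat& map, double threshold = 30.0) // CV_8UC1 input
{
    cv::Mat blurred, diff, mask;
    cv::medianBlur(map, blurred, 5);   // median is fairly robust to isolated outliers
    cv::absdiff(map, blurred, diff);
    cv::threshold(diff, mask, threshold, 255, cv::THRESH_BINARY);
    return mask;                       // non-zero = suspected bad pixel
}

Because your bad pixels clump together, you may need a larger median kernel (or a big Gaussian) so that a whole clump still stands out from its smoothed neighbourhood.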

I'm trying to use this method to detect moving objects. Can someone advise me on this?

I want to ask what kinds of problems there would be if I use this method to extract the foreground.
The precondition is that it runs on a fixed camera, so there is not going to be any movement of the camera position.
What I'm trying to do is below:
1. Read one frame from the camera and set this frame as the background image. This is done periodically.
2. Periodically subtract the frames read afterward from the background image above. Then only the moving things remain, colored differently from the areas that are the same as the background image.
3. Isolate the moving objects using grayscale conversion, binarization, and thresholding.
4. Iterate the above processes.
If I do this, would the probability of successfully detecting moving objects be high? If not, could you tell me why?
If you consider illumination changes (gradual or sudden) in the scene, you will see that your method does not work.
There are more robust solutions for these problems. One of these (maybe the best) is the Gaussian Mixture Model applied to background subtraction.
You can use BackgroundSubtractorMOG2 (an implementation of GMM) in the OpenCV library.
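A minimal usage sketch with the newer OpenCV API (cv::createBackgroundSubtractorMOG2); the parameters shown are just the defaults:

#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(0);                    // fixed camera
    if (!cap.isOpened()) return 1;

    // GMM-based background model.
    cv::Ptr<cv::BackgroundSubtractorMOG2> mog2 =
        cv::createBackgroundSubtractorMOG2(/*history=*/500, /*varThreshold=*/16, /*detectShadows=*/true);

    cv::Mat frame, fgMask;
    while (cap.read(frame)) {
        mog2->apply(frame, fgMask);             // learns the background as it goes
        // Clean up the mask a little before looking for moving objects.
        cv::medianBlur(fgMask, fgMask, 5);
        cv::imshow("foreground", fgMask);
        if (cv::waitKey(30) == 27) break;       // Esc to quit
    }
    return 0;
}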
Your scheme is quite adequate for cases where the camera is fixed and the background is stationary. Indoor and man-controlled scenes are more appropriate for this approach than outdoor and natural scenes. I've contributed to a detection system that worked basically on the same principles you suggested, but of course the details are crucial. A few remarks based on my experience:
Your initialization step can cause very slow convergence to a normal state. You set the background from the first frames, and then pieces of background that appear behind moving objects will be considered objects. A better approach is to take the median of the first N frames (see the sketch after these remarks).
Simple subtraction may not be enough in cases of changing light conditions, etc. You may find a similarity criterion that works better for your application.
Simple thresholding on the difference image may not be enough. A simple approach is to dilate the foreground so that the background is not updated on pixels that were accidentally identified as background.
Your step 4 is unclear. I assumed you mean that you update the background only in those places that were identified as background in the last frame. Note that with such a simple approach, pixels that are actually background may be stuck forever with a "foreground" labeling, as you don't update the background under them. There are many possible solutions to this.
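A sketch of the median-initialization idea from the first remark, assuming grayscale frames; N and the capture source are placeholders:

#include <opencv2/opencv.hpp>
#include <vector>
#include <algorithm>

// Build the initial background as the per-pixel median of the first N frames.
// Far less sensitive to objects passing through than using the very first frame.
cv::Mat medianBackground(cv::VideoCapture& cap, int N = 25)
{
    std::vector<cv::Mat> frames;
    cv::Mat frame, gray;
    while ((int)frames.size() < N && cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        frames.push_back(gray.clone());
    }
    if (frames.empty()) return cv::Mat();

    cv::Mat background(frames[0].size(), CV_8UC1);
    std::vector<uchar> values(frames.size());
    for (int y = 0; y < background.rows; ++y)
        for (int x = 0; x < background.cols; ++x) {
            for (size_t i = 0; i < frames.size(); ++i)
                values[i] = frames[i].at<uchar>(y, x);
            std::nth_element(values.begin(), values.begin() + values.size() / 2, values.end());
            background.at<uchar>(y, x) = values[values.size() / 2];
        }
    return background;
}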
There are many ways to solve this problem, and it will really depend on the input images as to which method will be the most appropriate. It may be worth doing some reading on the topic.
The method you are suggesting may work, but it's a slightly non-standard approach to this problem. My main concern would be that subtracting several images from the background could lead to saturation, and then you may lose some detail of the motion. It may be better to take the difference between consecutive images and then apply the binarization / thresholding to those images.
Another (more complex) approach which has worked for me in the past is to take subregions of the image and then cross-correlate them with the new image. The peak in this correlation can be used to identify the direction of movement - it's a useful approach if more than one thing is moving.
It may also be possible to use a combination of the two approaches above, for example:
Subtract the second image from the first background.
Threshold, etc., to find the ROI where movement is occurring.
Use a pattern-matching approach to track subsequent movement, focussed on the ROI detected above (see the sketch below).
The best approach will depend on your application, but there are lots of papers on this topic.
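As a sketch of the pattern-matching step in the combined approach above, using cv::matchTemplate on the ROI found by thresholding (the function name and ROI handling are illustrative):

#include <opencv2/opencv.hpp>

// Track a region detected in a previous frame by finding its best match
// in the current frame (simple template matching on a detected ROI).
cv::Rect trackROI(const cv::Mat& prevFrame, const cv::Rect& roi, const cv::Mat& currFrame)
{
    cv::Mat templ = prevFrame(roi);
    cv::Mat response;
    cv::matchTemplate(currFrame, templ, response, cv::TM_CCOEFF_NORMED);

    double maxVal; cv::Point maxLoc;
    cv::minMaxLoc(response, nullptr, &maxVal, nullptr, &maxLoc);
    return cv::Rect(maxLoc, roi.size());        // new location of the moving object
}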

Remove noise from the computed optical flow

I compute the optical flow on grayscale videos which contain true-white and noisy-black patches besides the useful information. I want to remove those patches because the corresponding optical flow is meaningless.
Those patches are on the edges of the image, and their sizes vary from one video to another. My goal is to extract a bounding box describing the useful information in my video, thanks to the optical flow.
How can I compute this bounding box? Or at least, how can I remove the computed optical flow in those regions?
Edit: I saw your answers. I'll try that next weekend and then come back to discuss it. Thank you!
Removing noise from optical flow can be a complicated task. A simple and naive way is to use a threshold on the optical flow vector magnitude.
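A sketch of that thresholding, assuming dense flow such as the CV_32FC2 output of cv::calcOpticalFlowFarneback; the magnitude threshold is a placeholder:

#include <opencv2/opencv.hpp>
#include <vector>

// Zero out flow vectors whose magnitude is below a threshold (likely noise).
cv::Mat cleanFlowByMagnitude(const cv::Mat& flow /* CV_32FC2 */, float minMagnitude = 1.0f)
{
    std::vector<cv::Mat> xy;
    cv::split(flow, xy);                       // xy[0] = dx, xy[1] = dy

    cv::Mat magnitude, angle;
    cv::cartToPolar(xy[0], xy[1], magnitude, angle);

    cv::Mat mask = magnitude >= minMagnitude;  // CV_8U, 255 where the flow is "real"
    cv::Mat cleaned;
    flow.copyTo(cleaned, mask);                // everything else stays zero
    return cleaned;
}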
But if you only need to find bounding boxes, why not just use simple background/moving-object segmentation, like MOG or GMG? OpenCV has nice implementations of them; they work well and are quite fast. See this tutorial.
It's a little tough to understand what the problem is. If the noise consists of true-white and noisy-black patches in a grayscale image, as you have said, then I suggest you look at eroding and dilating. More information can be found here: Eroding and Dilating
Should this not be what you are asking, do post some sample images with the patches and comment so that I can have a clearer idea on what the problem is. Cheers.
If I understand correctly, you are getting noisy optical flow in patches which are grey/white or basically uniform. A simple approach would be to divide the image into small patches and compute the entropy over each patch. Now, patches which have a very low entropy can be discarded by choosing an appropriate threshold because they do not contain much information.
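A sketch of the patch-entropy idea, assuming an 8-bit grayscale frame; the patch size and entropy threshold are guesses to tune:

#include <opencv2/opencv.hpp>
#include <cmath>

// Entropy of the grayscale histogram of one patch; low entropy = uniform patch.
static double patchEntropy(const cv::Mat& patch) // CV_8UC1
{
    int hist[256] = {0};
    for (int y = 0; y < patch.rows; ++y)
        for (int x = 0; x < patch.cols; ++x)
            ++hist[patch.at<uchar>(y, x)];

    double total = (double)patch.rows * patch.cols, entropy = 0.0;
    for (int i = 0; i < 256; ++i)
        if (hist[i] > 0) {
            double p = hist[i] / total;
            entropy -= p * std::log2(p);
        }
    return entropy;
}

// Mark patches informative enough to keep their optical flow.
cv::Mat informativeMask(const cv::Mat& gray, int patch = 32, double minEntropy = 3.0)
{
    cv::Mat mask = cv::Mat::zeros(gray.size(), CV_8UC1);
    for (int y = 0; y + patch <= gray.rows; y += patch)
        for (int x = 0; x + patch <= gray.cols; x += patch) {
            cv::Rect r(x, y, patch, patch);
            if (patchEntropy(gray(r)) >= minEntropy)
                mask(r).setTo(255);
        }
    return mask;                                 // discard flow where mask is 0
}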

C++ OpenCV sky image stitching

Some background:
Hi all! I have a project which involves cloud imaging. I take pictures of the sky using a camera mounted on a rotating platform. I then need to compute the amount of cloud present based on some color threshold. I am able to do this individually for each picture. To completely achieve my goal, I need to do the computation on the whole image of the sky. So my problem lies with stitching several images (about 44-56 images). I've tried using the stitch function on all and on some subsets of the image set, but it returns an incomplete image (some images were not stitched). This could be because of a lack of overlap or something, I don't know. Also, the output image is weirdly distorted (I am actually expecting the output to be something similar to a picture taken with a fish-eye lens).
The actual problem:
So now I'm trying to figure out the opencv stitching pipeline. Here is a link:
http://docs.opencv.org/modules/stitching/doc/introduction.html
Based on what I have researched, I think this is what I want to do. I want to map all the images to a circular shape, mainly because of the way my camera rotates, or something else that uses a fairly simple coordinate transformation. So I think I need to get some sort of fixed coordinate transform for the images. Is this what they call the homography? If so, does anyone have any idea how I can go about my problem? After this, I believe I need to get a mask for blending the images. Will I need a fixed mask like the one I want for my homography?
Am I going down a feasible path? I have some background in programming but almost none in image processing. I'm basically lost. T.T
"So I think I need get some sort of fixed coordinate transform thing for the images. Is this what they call the homography?"
Yes, the homography matrix is the transformation matrix between an original image and the ideal result. It warps one image in perspective so that it fits onto the other image when stitching.
"If so, does anyone have any idea how I can go about my problem?"
Not with the limited information you provided. It would ease the problem a lot if you knew the order of the pictures (which borders which: row and column position).
If you have no experience in image processing, I would recommend you use a tutorial covering stitching with more basic functions in detail. There is some important work behind the scenes, and it's not THAT much harder to actually do it yourself.
Start with this example. It stitches two pictures.
http://ramsrigoutham.com/2012/11/22/panorama-image-stitching-in-opencv/
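If you want to see the individual pieces rather than the high-level stitcher, here is a minimal two-image sketch along the same lines as that example, written against the OpenCV 3.x-style API and using ORB features (instead of SURF, which needs the nonfree module), cv::findHomography and cv::warpPerspective; the canvas sizing is deliberately naive:

#include <opencv2/opencv.hpp>
#include <vector>

// Warp img2 into img1's frame using a homography estimated from matched features.
cv::Mat stitchPair(const cv::Mat& img1, const cv::Mat& img2)
{
    cv::Ptr<cv::ORB> orb = cv::ORB::create(2000);
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat desc1, desc2;
    orb->detectAndCompute(img1, cv::noArray(), kp1, desc1);
    orb->detectAndCompute(img2, cv::noArray(), kp2, desc2);

    cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
    std::vector<cv::DMatch> matches;
    matcher.match(desc2, desc1, matches);       // query = img2, train = img1

    std::vector<cv::Point2f> pts2, pts1;
    for (const cv::DMatch& m : matches) {
        pts2.push_back(kp2[m.queryIdx].pt);
        pts1.push_back(kp1[m.trainIdx].pt);
    }

    // Homography that maps img2's points onto img1's plane (RANSAC rejects outliers).
    cv::Mat H = cv::findHomography(pts2, pts1, cv::RANSAC, 3.0);

    // Warp img2 onto a canvas wide enough for both, then drop img1 on top.
    cv::Mat canvas;
    cv::warpPerspective(img2, canvas, H, cv::Size(img1.cols + img2.cols, img1.rows));
    img1.copyTo(canvas(cv::Rect(0, 0, img1.cols, img1.rows)));
    return canvas;
}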

How do I do high quality scaling of an image?

I'm writing some code to scale a 32-bit RGBA image in C/C++. I have written a few attempts that have been somewhat successful, but they're slow and, most importantly, the quality of the resized image is not acceptable.
I compared the same image scaled by OpenGL (i.e. my video card) and my routine and it's miles apart in quality. I've Google Code Searched, scoured source trees of anything I thought would shed some light (SDL, Allegro, wxWidgets, CxImage, GD, ImageMagick, etc.) but usually their code is either convoluted and scattered all over the place or riddled with assembler and little or no comments. I've also read multiple articles on Wikipedia and elsewhere, and I'm just not finding a clear explanation of what I need. I understand the basic concepts of interpolation and sampling, but I'm struggling to get the algorithm right. I do NOT want to rely on an external library for one routine and have to convert to their image format and back. Besides, I'd like to know how to do it myself anyway. :)
I have seen a similar question asked on stack overflow before, but it wasn't really answered in this way, but I'm hoping there's someone out there who can help nudge me in the right direction. Maybe point me to some articles or pseudo code... anything to help me learn and do.
Here's what I'm looking for:
No assembler (I'm writing very portable code for multiple processor types).
No dependencies on external libraries.
I am primarily concerned with scaling DOWN, but will also need to write a scale up routine later.
Quality of the result and clarity of the algorithm is most important (I can optimize it later).
My routine essentially takes the following form:
DrawScaled(uint32 *src, uint32 *dst,
src_x, src_y, src_w, src_h,
dst_x, dst_y, dst_w, dst_h );
Thanks!
UPDATE: To clarify, I need something more advanced than a box resample for downscaling, which blurs the image too much. I suspect what I want is some kind of bicubic (or other) filter that is somewhat the reverse of a bicubic upscaling algorithm (i.e. each destination pixel is computed from all contributing source pixels, combined with a weighting algorithm that keeps things sharp).
Example
Here's an example of what I'm getting from the wxWidgets BoxResample algorithm vs. what I want on a 256x256 bitmap scaled to 55x55.
www.free_image_hosting.net/uploads/1a25434e0b.png
And finally:
www.free_image_hosting.net/uploads/eec3065e2f.png
the original 256x256 image
I've found the wxWidgets implementation fairly straightforward to modify as required. It is all C++ so no problems with portability there. The only difference is that their implementation works with unsigned char arrays (which I find to be the easiest way to deal with images anyhow) with a byte order of RGB and the alpha component in a separate array.
If you refer to the "src/common/image.cpp" file in the wxWidgets source tree there is a down-sampler function which uses a box sampling method "wxImage::ResampleBox" and an up-scaler function called "wxImage::ResampleBicubic".
A fairly simple and decent algorithm for resampling images is bicubic interpolation; Wikipedia alone has all the info you need to get this implemented.
Is it possible that OpenGL is doing the scaling in the vector domain? If so, there is no way that any pixel-based scaling is going to be near it in quality. This is the big advantage of vector based images.
The bicubic algorithm can be tuned for sharpness vs. artifacts - I'm trying to find a link, I'll edit it in when I do.
Edit: It was the Mitchell-Netravali work that I was thinking of, which is referenced at the bottom of this link:
http://www.cg.tuwien.ac.at/~theussl/DA/node11.html
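For reference, the Mitchell-Netravali family of cubic kernels can be written as one function of the two parameters B and C (B = C = 1/3 is the classic "Mitchell" setting; B = 0, C = 0.5 gives Catmull-Rom); this is the standard formula, written out as a sketch:

#include <cmath>

// Mitchell-Netravali cubic filter kernel.
// Weight of a source sample at distance x (in source pixels) from the
// destination sample's centre; the support is |x| < 2.
double mitchellNetravali(double x, double B = 1.0 / 3.0, double C = 1.0 / 3.0)
{
    x = std::fabs(x);
    if (x < 1.0)
        return ((12 - 9 * B - 6 * C) * x * x * x
              + (-18 + 12 * B + 6 * C) * x * x
              + (6 - 2 * B)) / 6.0;
    if (x < 2.0)
        return ((-B - 6 * C) * x * x * x
              + (6 * B + 30 * C) * x * x
              + (-12 * B - 48 * C) * x
              + (8 * B + 24 * C)) / 6.0;
    return 0.0;
}

For downscaling, the trick hinted at in the question's update is to widen the kernel's support by the scale factor so that every contributing source pixel is weighted, rather than sampling a fixed 4x4 neighbourhood.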
You might also look into Lanczos resampling as an alternative to bicubic.
Now that I see your original image, I think that OpenGL is using a nearest neighbor algorithm. Not only is it the simplest possible way to resize, but it's also the quickest. The only downside is that it looks very rough if there's any detail in your original image.
The idea is to take evenly spaced samples from your original image; in your case, 55 out of 256, or one out of every 4.6545. Just round the number to get the pixel to choose.
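For comparison, a sketch of that nearest-neighbour sampling in roughly the same shape as your DrawScaled routine (no bounds checking, tightly packed 32-bit pixels assumed):

#include <cstdint>

// Nearest-neighbour scaling: for each destination pixel, pick the single
// closest source pixel. Fast and simple, but harsh on detailed images.
void ScaleNearest(const uint32_t* src, int src_w, int src_h,
                  uint32_t* dst, int dst_w, int dst_h)
{
    for (int dy = 0; dy < dst_h; ++dy) {
        int sy = (int)((dy + 0.5) * src_h / dst_h);   // centre of the destination pixel
        for (int dx = 0; dx < dst_w; ++dx) {
            int sx = (int)((dx + 0.5) * src_w / dst_w);
            dst[dy * dst_w + dx] = src[sy * src_w + sx];
        }
    }
}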
Try using the Adobe Generic Image Library ( http://opensource.adobe.com/wiki/display/gil/Downloads ) if you want something ready and not only an algorithm.
Extract from: http://www.catenary.com/howto/enlarge.html#c
Enlarge or Reduce - the C Source Code
Requires Victor Image Processing Library for 32-bit Windows v 5.3 or higher.
int enlarge_or_reduce(imgdes *image1)
{
    imgdes timage;
    int dx, dy, rcode, pct = 83; // 83% percent of original size

    // Allocate space for the new image
    dx = (int)(((long)(image1->endx - image1->stx + 1)) * pct / 100);
    dy = (int)(((long)(image1->endy - image1->sty + 1)) * pct / 100);
    if((rcode = allocimage(&timage, dx, dy,
            image1->bmh->biBitCount)) == NO_ERROR) {
        // Resize Image into timage
        if((rcode = resizeex(image1, &timage, 1)) == NO_ERROR) {
            // Success, free source image
            freeimage(image1);
            // Assign timage to image1
            copyimgdes(&timage, image1);
        }
        else // Error in resizing image, release timage memory
            freeimage(&timage);
    }
    return(rcode);
}
This example resizes an image area and replaces the original image with the new image.
Intel has IPP libraries which provide high-speed interpolation algorithms optimized for Intel-family processors. It is very good, but it is not free. Take a look at the following link:
Intel IPP
A generic article from our beloved host: Better Image Resizing, discussing the relative qualities of various algorithms (and it links to another CodeProject article).
It sounds like what you're really having difficulty understanding is the discrete -> continuous -> discrete flow involved in properly resampling an image. A good tech report that might help give you the insight into this that you need is Alvy Ray Smith's A Pixel Is Not A Little Square.
Take a look at ImageMagick, which does all kinds of rescaling filters.
As a follow-up, Jeremy Rudd posted this article above. It implements filtered two-pass resizing. The sources are C#, but it looks clear enough that I can port it to give it a try. I found very similar C code yesterday that was much harder to understand (very bad variable names). I got it to sort of work, but it was very slow and did not produce good results, which led me to believe there was an error in my adaptation. I may have better luck writing it from scratch with this as a reference, which I'll try.
But considering how the two-pass algorithm works, I wonder if there isn't a faster way of doing it, perhaps even in one pass?
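To illustrate why the two-pass structure is attractive, here is my own sketch (not the linked article's code) of one horizontal pass of a separable resampler over a single 8-bit channel, using a simple tent kernel; swap in Mitchell-Netravali or Lanczos for sharper results. The vertical pass is the same loop run over the columns of this pass's output, so the total cost is two 1-D filters instead of one 2-D filter:

#include <cmath>

// Triangle ("tent") kernel; replace with a sharper kernel without touching the loop.
static double tent(double x) { x = std::fabs(x); return x < 1.0 ? 1.0 - x : 0.0; }

// One horizontal pass of a two-pass (separable) resampler over one 8-bit channel.
void ResampleRows(const unsigned char* src, int src_w, int src_h,
                  unsigned char* dst, int dst_w)
{
    const double scale = (double)src_w / dst_w;            // > 1 when shrinking
    const double filterScale = scale > 1.0 ? scale : 1.0;  // widen the kernel when shrinking
    const double support = 1.0 * filterScale;              // tent kernel radius is 1

    for (int y = 0; y < src_h; ++y) {
        for (int dx = 0; dx < dst_w; ++dx) {
            double centre = (dx + 0.5) * scale;             // in source coordinates
            int first = (int)std::floor(centre - support);
            int last  = (int)std::ceil(centre + support);

            double sum = 0.0, wsum = 0.0;
            for (int sx = first; sx <= last; ++sx) {
                int cx = sx < 0 ? 0 : (sx >= src_w ? src_w - 1 : sx);  // clamp at edges
                double w = tent((sx + 0.5 - centre) / filterScale);
                sum  += w * src[y * src_w + cx];
                wsum += w;
            }
            double v = wsum > 0.0 ? sum / wsum : 0.0;
            dst[y * dst_w + dx] = (unsigned char)(v < 0 ? 0 : (v > 255 ? 255 : v + 0.5));
        }
    }
}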