OpenCV: performing night vision - C++

First, I am not talking about real night vision. I am talking about the technique used to improve picture brightness when lighting conditions are poor. You can see this technique working perfectly in smartphones, and superbly in phablets. I know the idea behind the technique: take the existing light and use it to make the picture clear. But how do I do this in OpenCV? Is there any method or step-by-step process?

There are essentially two ways to brighten your image:
1. Get more photons into the camera.
2. Give each photon more 'weight'.
For approach 1, supposing that you can't control the lighting, the only way to get more photons is to expose your sensor for a longer period of time. That assumes you can change your camera's integration time. The drawback of this approach is that you may get more motion blur.
For approach 2, this amounts to applying a multiplicative gain to the input image, which makes each photon contribute more DNs (digital numbers) to the resulting image. Applying such a gain, though, presupposes that you have a priori information about the input image's brightness. If your gain value is not good, you'll get an image that is either saturated or too dark.
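As a minimal sketch, such a gain can be applied with cv::Mat::convertTo (the gain of 2.0 is arbitrary, and "input" stands for your 8-bit image):

// Multiply every pixel by a fixed gain; 8-bit values saturate at 255.
cv::Mat brightened;
input.convertTo(brightened, -1, 2.0 /* gain */, 0 /* offset */);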
To improve your image automatically, the best approach would be to use OpenCV's equalizeHist function, as described here. The operation isn't exactly a multiplicative gain, but the effect is similar.
The last step would be, as previously suggested in the comments, to apply a gamma correction as described here. Gamma correction tends to reduce the contrast in an image, but since you improved the contrast using histogram equalization, you should get good results.
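For reference, here is a minimal sketch of gamma correction through a lookup table (the helper name gammaCorrect and the gamma values are illustrative, not from the linked answer):

#include <opencv2/opencv.hpp>
#include <cmath>

// Apply gamma correction to an 8-bit image via a 256-entry lookup table (sketch).
cv::Mat gammaCorrect(const cv::Mat& src, double gamma)
{
    cv::Mat lut(1, 256, CV_8U);
    for (int i = 0; i < 256; ++i)
        lut.at<uchar>(i) = cv::saturate_cast<uchar>(std::pow(i / 255.0, gamma) * 255.0);
    cv::Mat dst;
    cv::LUT(src, lut, dst);
    return dst;
}

Gamma < 1 brightens the shadows (e.g. gammaCorrect(input, 0.5)); gamma > 1 darkens them.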

As Michel points out, try equalizeHist.
Here's a minimal example:
#include <opencv2/opencv.hpp>
#include <vector>

using namespace cv;

int main(int argc, char *argv[])
{
    namedWindow("input");
    namedWindow("output");

    Mat in = imread("yourDarkImage.jpg");
    Mat out;
    if (in.empty()) exit(1);

    // Equalize the histogram of each BGR channel separately
    std::vector<Mat> colors;
    split(in, colors);
    equalizeHist(colors[0], colors[0]);
    equalizeHist(colors[1], colors[1]);
    equalizeHist(colors[2], colors[2]);
    merge(colors, out);

    imshow("input", in);
    imshow("output", out);
    waitKey(0);
    return 0;
}
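If the per-channel equalization shifts colors too much, a common variant (my addition, not part of the answer above) is to equalize only the luminance channel in YCrCb; the snippet below reuses in and out from the example:

// Variant: equalize only the luminance (Y) channel to avoid color shifts (sketch).
Mat ycrcb;
cvtColor(in, ycrcb, COLOR_BGR2YCrCb);
std::vector<Mat> planes;
split(ycrcb, planes);
equalizeHist(planes[0], planes[0]); // Y channel only
merge(planes, ycrcb);
cvtColor(ycrcb, out, COLOR_YCrCb2BGR);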

Related

How to improve accuracy of estimateAffine2D (or estimateRigidTransform) in OpenCV?

I have two sets of points, one from time t-1 and one from the current time t. The first set was generated using goodFeaturesToTrack, and the latter using calcOpticalFlowPyrLK(). Using these two sets of points, I then estimate a transformation matrix via estimateAffinePartial2D() in order to keep track of its scale and rotation. A code snippet is listed below:
// Precompute image pyramids
maxLvl = cv::buildOpticalFlowPyramid(_imgPrev, imPyr1, _winSize, maxLvl, true);
maxLvl = cv::buildOpticalFlowPyramid(tmpImg, imPyr2, _winSize, maxLvl, true);
// Optical flow call for tracking pixels
cv::calcOpticalFlowPyrLK(imPyr1, imPyr2, _currentPoints, nextPts, status, err, _winSize, maxLvl, _terminationCriteria, 0, 0.000001);
// Get transformation matrix between the two data sets
cv::Mat H = cv::estimateAffinePartial2D(_currentPoints, nextPts, inlier_mask, cv::RANSAC, 10.0, 2000, 0.99);
Using H, I then map my masking points using perspectiveTransform(). The result seems accurate for the first few dozen frames, until I notice some drift (in terms of rotation) occurring as the object I am tracking continues to rotate (usually when the rotation becomes > M_PI). I'm honestly stumped as to where the culprit is, but my main suspicion is that my window size for optical flow might be too small, or too big. However, tweaking the window size did not seem to help: the position of my object is still accurate, but the estimated rotation (and scale) got worse. Can anyone shed some light on this?
Warm regards and thanks.
EDIT: Images attached to show drift issue
Starting Frame
First few frames -- Rotation OK
Z-Rotation Drift occurs -- see anchor line has drifted towards the red rectangle.
The Lucas-Kanade tracker needs more features. My guess is that the tracking template you provided is not good enough.
(1) Try with other feature-rich real images, e.g. the OpenCV feature tracking template image.
(2) Fix the scale. Since you are doing a simulation, you can try to anchor the size first.
calcOpticalFlowPyrLK is widely used in visual-inertial state estimation studies, such as SVO (semi-direct visual odometry) or VINS-Mono. You can look for the code inside those projects to see how other people play with the features and parameters.
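As a hedged sketch of keeping the feature set dense, you could re-run goodFeaturesToTrack whenever too many tracks are lost; the variable names reuse the question's snippet, and all thresholds below are illustrative:

// After calcOpticalFlowPyrLK: keep only successfully tracked points,
// and re-detect features when the set becomes too thin (sketch).
std::vector<cv::Point2f> kept;
for (size_t i = 0; i < nextPts.size(); ++i)
    if (status[i])
        kept.push_back(nextPts[i]);

if (kept.size() < 50) { // minimum track count, tune for your scenes
    cv::goodFeaturesToTrack(tmpImg, kept, 500 /* maxCorners */,
                            0.01 /* qualityLevel */, 10 /* minDistance */);
}
_currentPoints = kept;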

Is there any way to find out whether an image is blurry or not using the Laplacian operator?

I am working on a project where I have to automate the sharpness calculation of a camera-taken image without actually looking at the image. I have tried many detection methods, but I am finally going ahead with the Laplacian operator, using OpenCV.
Now, the Laplacian operator in OpenCV returns an image matrix. But I have to get a boolean output of whether the image is blurry or not, depending on my threshold.
Any link, algorithm or IEEE paper for the same would be helpful. Thanks!
You will find a lot of info here.
Also, the paper cited in one of the answers is quite interesting: Analysis of focus measure operators for shape from focus
Refer to this: https://stackoverflow.com/a/44579247/6302996
Mat img = imread("image.jpg");        // load the input (path is illustrative)
Mat gray, laplacianImage;
cvtColor(img, gray, COLOR_BGR2GRAY);  // the Laplacian is computed on grayscale
Laplacian(gray, laplacianImage, CV_64F);
Scalar mean, stddev;                  // val[0] is the 1st channel, val[1] the 2nd, val[2] the 3rd
meanStdDev(laplacianImage, mean, stddev, Mat());
double variance = stddev.val[0] * stddev.val[0];
double threshold = 2900;              // tune this threshold for your camera and scene
if (variance <= threshold) {
    // Blurry
} else {
    // Not blurry
}

Image Segmentation using OpenCV

I am pretty new to OpenCV and would like a little help.
So my basic idea was to use OpenCV to create a small application for interior design.
Problem
How do I differentiate between the walls and the floor in a picture (even when there is some noise in the picture)?
For example:
Now, my idea was: if I can somehow find the edges of the wall or tile, then any object used for interior decoration (for example a chair) can be placed perfectly over the floor (i.e. the two images get blended).
My approach
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

using namespace cv;
using namespace std;

int main(){
    Mat image = imread("/home/ayusun/Downloads/IMG_20140104_143443.jpg");
    Mat resized_img, gray, dst, contours, detected_edges;

    resize(image, resized_img, Size(1024, 768), 0, 0, INTER_CUBIC);
    cvtColor(resized_img, gray, COLOR_BGR2GRAY); // Canny needs a single-channel image
    threshold(gray, dst, 128, 255, THRESH_BINARY);
    //Canny(image, contours, 10, 350);

    namedWindow("resized image");
    imshow("resized image", resized_img);
    //imshow("threshold", dst);

    blur(gray, detected_edges, Size(2,2)); // light blur to suppress noise before Canny
    imshow("blurred", detected_edges);

    Canny(detected_edges, contours, 10, 350);
    imshow("contour", contours);

    waitKey(0);
    return 0;
}
I tried the Canny edge detection algorithm, but it seems to find a lot of edges. And I still don't know how to combine the floor of the room with the chair.
Thanks
Sorry for the involuntary advertisement, but IKEA has a catalog smartphone app that uses augmented reality to position objects/furniture over an image of your room. Is that what you're trying to do?
In order to achieve this you would need a "pinpoint": a fixed point to hook your objects to. That is usually what helps differentiate between walls and floor in the app above (and renders things easy).
Distinguishing walls from floors is hard even for a human if they're hanging by their feet and the walls/floors have the same texture (but we manage to do it thanks to our "gravity feeling").
Find some keypoints, or please state whether you're planning to do it with a fixed camera (i.e. one that will never be held horizontally).
OpenCV's POSIT may be useful for you (here is an example): http://opencv-users.1802565.n2.nabble.com/file/n6908580/main.cpp
Also take a look at augmented reality toolkits, ArUco for example.
For advanced methods, take a look at PTAM.
And you can find some useful links and papers here: http://www.doc.ic.ac.uk/~ajd/
Segmenting walls and floors out of a single image is possible to some extent, but it requires a lot of work; it would take quite a complex system to achieve decent results. You can probably do much better with a pair of images (stereo reconstruction).

Low quality aerial stitching with OpenCV

I've been trying to stitch low-quality, low-resolution (320x180) images taken by a quadrocopter in OpenCV recently. Here is what I got:
http://postimg.org/gallery/1rqsycyk/
The pictures are taken almost at nadir and, as you can see, they overlap a lot. Between each shot there is a translation, and I tried to place objects on the ground that keep the scene almost planar, so as not to violate the requirements for a homography. Anyway, quite a few pictures are not taken into account during the stitching process.
Here is another example (only three images are stitched together):
http://postimg.org/gallery/1wpt3lmo/
I'm using the SURF feature detector and believe that the low quality of the images is not working out right for it, but I'm not sure about that.
Here's the code I use; I found it in a similar question, OpenCV non-rotational image stitching, and decided to use it since it worked better than mine:
Mat pano;
Stitcher stitcher = Stitcher::createDefault(false);
stitcher.setWarper(new PlaneWarper());
stitcher.setFeaturesFinder(new detail::SurfFeaturesFinder(1000,3,4,3,4));
stitcher.setRegistrationResol(0.1);
stitcher.setSeamEstimationResol(0.1);
stitcher.setCompositingResol(1);
stitcher.setPanoConfidenceThresh(1);
stitcher.setWaveCorrection(true);
stitcher.setWaveCorrectKind(detail::WAVE_CORRECT_HORIZ);
stitcher.setFeaturesMatcher(new detail::BestOf2NearestMatcher(false,0.3));
stitcher.setBundleAdjuster(new detail::BundleAdjusterRay());

Stitcher::Status status = Stitcher::ERR_NEED_MORE_IMGS;
try {
    status = stitcher.stitch(picturesTaken, pano);
}
catch (const cv::Exception& e) {
    // swallowing the exception hides failures; at least inspect `status` afterwards
}
My other guess is to do the stitching process manually instead of using the Stitcher class, but I'm not sure it would change much. So the question is: how can I make the stitching process more robust despite the low quality of the images? Also: does defining ROIs only have an impact on performance, or also on the chance of a successful stitch?
The result is not that bad given the quality of the input images!
To improve the quality of the output, I would do (in priority order):
1. an estimation of the camera distortion, in order to fix it and make the matching easier (see the sketch after this list)
2. some histogram or lighting equalization before stitching
3. trying to increase the temporal gap between pictures, or using another stitcher. Part of the blur in the output is created by the stitcher when merging the images in their overlap areas.
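For the first point, a hedged sketch of undoing lens distortion, assuming the camera was calibrated offline (cameraMatrix and distCoeffs would come from cv::calibrateCamera, and "frame" stands for one input image):

// Undistort each frame before feeding it to the stitcher (sketch).
cv::Mat cameraMatrix, distCoeffs; // filled in by an offline cv::calibrateCamera run
cv::Mat undistorted;
cv::undistort(frame, undistorted, cameraMatrix, distCoeffs);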
I believe the problem is that you are taking pictures of textureless regions, and it's hard to extract good, distinctive keypoints from such smooth regions.
I found this question, which was very helpful for me. I investigated this topic, and I have some other tips for you:
About finding similar images:
You set SurfFeaturesFinder with minHessian = 1000. That is a really big value (OpenCV suggests 300; I sometimes use 100). This is why only some of your images are matched, not all of them.
You set panoConfidenceThresh to "1"; maybe you should set it to "0.8", as it will stitch more images.
About the look of stitched images:
There are some other functions in the Stitcher pipeline. Try using:
stitcher.setSeamFinder(new detail::GraphCutSeamFinder(GraphCutSeamFinderBase::COST_COLOR));
stitcher.setBlender(detail::Blender::createDefault(Blender::MULTI_BAND, false));
stitcher.setExposureCompensator(detail::ExposureCompensator::createDefault(ExposureCompensator::GAIN_BLOCKS));
Maybe this will be helpful for you!

OpenCV, how to use arrays of points for smoothing and sampling contours?

I am having trouble getting my head around smoothing and sampling contours in OpenCV (C++ API).
Let's say I have a sequence of points retrieved from cv::findContours (for instance applied to this image):
Ultimately, I want
To smooth a sequence of points using different kernels.
To resize the sequence using different types of interpolations.
After smoothing, I hope to have a result like:
I also considered drawing my contour into a cv::Mat, filtering the Mat (using blur or morphological operations) and re-finding the contours, but this is slow and suboptimal. So, ideally, I would like to do the job using exclusively the point sequence.
I read a few posts on it and naively thought that I could simply convert a std::vector<cv::Point> to a cv::Mat, and then OpenCV functions like blur/resize would do the job for me... but they did not.
Here is what I tried:
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main(int argc, char** argv){
    cv::Mat conv, ori;
    ori = cv::imread(argv[1]);
    ori.copyTo(conv);
    cv::cvtColor(ori, ori, CV_BGR2GRAY);

    std::vector<std::vector<cv::Point> > contours;
    std::vector<cv::Vec4i> hierarchy;
    cv::findContours(ori, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_NONE);

    for(int k = 0; k < 100; k += 2){
        cv::Mat smoothCont;
        smoothCont = cv::Mat(contours[0]);
        std::cout << smoothCont.rows << "\t" << smoothCont.cols << std::endl;
        /* Try smoothing: no modification of the array */
        // cv::GaussianBlur(smoothCont, smoothCont, cv::Size(k+1,1), k);
        /* Try sampling: "Assertion failed (func != 0) in resize" */
        // cv::resize(smoothCont, smoothCont, cv::Size(0,0), 1, 1);
        std::vector<std::vector<cv::Point> > v(1);
        smoothCont.copyTo(v[0]);
        cv::drawContours(conv, v, 0, cv::Scalar(255,0,0), 2, CV_AA);
        std::cout << k << std::endl;
        cv::imshow("conv", conv);
        cv::waitKey();
    }
    return 0;
}
Could anyone explain how to do this ?
In addition, since I am likely to work with much smaller contours, I was wondering how this approach would deal with border effects (e.g. when smoothing: since contours are circular, the last elements of the sequence must be used to compute the new values of the first elements...).
Thank you very much for your advice.
Edit:
I also tried cv::approxPolyDP(), but as you can see, it tends to preserve extremal points (which I want to remove):
Epsilon=0
Epsilon=6
Epsilon=12
Epsilon=24
Edit 2:
As suggested by Ben, it seems that cv::GaussianBlur() is not supported but cv::blur() is. The result looks much closer to my expectation. Here are my results using it:
k=13
k=53
k=103
To get around the border effect, I did:
cv::copyMakeBorder(smoothCont,smoothCont, (k-1)/2,(k-1)/2 ,0, 0, cv::BORDER_WRAP);
cv::blur(smoothCont, result, cv::Size(1,k),cv::Point(-1,-1));
result.rowRange(cv::Range((k-1)/2,1+result.rows-(k-1)/2)).copyTo(v[0]);
I am still looking for solutions to interpolate/sample my contour.
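One possible direction for the sampling part (my own sketch, not from any answer here) is to resample the contour uniformly by arc length, with linear interpolation between neighboring points; resampleContour is a hypothetical helper:

#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// Resample a closed contour to n points, uniformly spaced by arc length.
// Assumes no duplicated consecutive points (sketch only).
std::vector<cv::Point2f> resampleContour(const std::vector<cv::Point>& in, int n)
{
    // Cumulative arc length, including the segment that closes the contour.
    std::vector<float> cum(in.size() + 1, 0.f);
    for (size_t i = 0; i < in.size(); ++i) {
        cv::Point2f d = cv::Point2f(in[(i + 1) % in.size()]) - cv::Point2f(in[i]);
        cum[i + 1] = cum[i] + std::sqrt(d.dot(d));
    }
    float total = cum.back();

    std::vector<cv::Point2f> out(n);
    size_t seg = 0;
    for (int k = 0; k < n; ++k) {
        float target = total * k / n;
        while (cum[seg + 1] < target) ++seg;      // find the containing segment
        float t = (target - cum[seg]) / (cum[seg + 1] - cum[seg]);
        cv::Point2f a(in[seg]), b(in[(seg + 1) % in.size()]);
        out[k] = a + t * (b - a);                 // linear interpolation
    }
    return out;
}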
Your Gaussian blurring doesn't work because you're blurring in the column direction, but there is only one column. Using GaussianBlur() leads to a "feature not implemented" error in OpenCV when trying to copy the vector back to a cv::Mat (that's probably why you have that strange resize() in your code), but everything works fine using cv::blur(); no need to resize(). Try Size(0,41) for example. Using cv::BORDER_WRAP for the border issue doesn't seem to work either, but here is another thread where someone found a workaround for that.
Oh... one more thing: you said that your contours are likely to be much smaller. Smoothing your contour that way will shrink it. The extreme case is k = size_of_contour, which results in a single point. So don't choose your k too big.
Another possibility is to use the algorithm openFrameworks uses:
https://github.com/openframeworks/openFrameworks/blob/master/libs/openFrameworks/graphics/ofPolyline.cpp#L416-459
It traverses the contour and essentially applies a low-pass filter using the points around each one. It should do exactly what you want with low overhead (there's no reason to run a big filter over an image that's essentially just a contour).
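In the same spirit, here is a small sketch of such a low-pass filter applied directly to the point vector, with circular wrap-around (the helper name smoothContour and the window radius are mine, not from ofPolyline):

#include <opencv2/opencv.hpp>
#include <vector>

// Smooth a closed contour with a circular moving average of radius k (sketch).
std::vector<cv::Point2f> smoothContour(const std::vector<cv::Point>& in, int k)
{
    int n = static_cast<int>(in.size());
    std::vector<cv::Point2f> out(n, cv::Point2f(0.f, 0.f));
    for (int i = 0; i < n; ++i) {
        for (int j = -k; j <= k; ++j) {
            const cv::Point& p = in[((i + j) % n + n) % n]; // wrap around the ends
            out[i] += cv::Point2f(static_cast<float>(p.x), static_cast<float>(p.y));
        }
        out[i] *= 1.f / (2 * k + 1); // average over the window
    }
    return out;
}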
How about approxPolyDP()?
It uses this algorithm to 'smooth' a contour (basically getting rid of most of the contour's points and leaving the ones that represent a good approximation of the contour).
From the OpenCV 2.1 documentation, section Basic Structures:
template<typename T>
explicit Mat::Mat(const vector<T>& vec, bool copyData=false)
You probably want to set the 2nd parameter (copyData) to true in:
smoothCont = cv::Mat(contours[0]);
and try again (this way cv::GaussianBlur should be able to modify the data).
I know this was written a long time ago, but have you tried a big erode followed by a big dilate (an opening), and then finding the contours? It looks like a simple and fast solution, and I think it could work, at least to some degree.
Basically, the sudden changes in the contour correspond to high-frequency content. An easy way to smooth your contour would be to compute its Fourier coefficients, assuming the coordinates form a complex signal x + iy, and then eliminate the high-frequency coefficients.
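Here is a hedged sketch of that idea using cv::dft, treating the contour as a one-dimensional complex signal and zeroing all but the keep lowest frequencies at each end of the spectrum (the helper name fourierSmooth and the cutoff are mine):

#include <opencv2/opencv.hpp>
#include <vector>

// Low-pass a closed contour in the frequency domain (sketch; requires 2*keep <= N).
std::vector<cv::Point2f> fourierSmooth(const std::vector<cv::Point>& in, int keep)
{
    // Pack the contour as an Nx1 complex (2-channel float) signal: x + iy.
    cv::Mat signal(static_cast<int>(in.size()), 1, CV_32FC2);
    for (int i = 0; i < signal.rows; ++i)
        signal.at<cv::Vec2f>(i) = cv::Vec2f(static_cast<float>(in[i].x),
                                            static_cast<float>(in[i].y));

    cv::Mat freq;
    cv::dft(signal, freq, cv::DFT_COMPLEX_OUTPUT);

    // Coefficients run DC, +1, +2, ... with the negative frequencies at the end,
    // so zeroing the middle removes the high-frequency content.
    for (int i = keep; i < freq.rows - keep; ++i)
        freq.at<cv::Vec2f>(i) = cv::Vec2f(0.f, 0.f);

    cv::Mat smooth;
    cv::dft(freq, smooth, cv::DFT_INVERSE | cv::DFT_SCALE);

    std::vector<cv::Point2f> out(smooth.rows);
    for (int i = 0; i < smooth.rows; ++i) {
        cv::Vec2f v = smooth.at<cv::Vec2f>(i);
        out[i] = cv::Point2f(v[0], v[1]);
    }
    return out;
}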
My take... many years later...!
Maybe two easy ways to do it:
Loop a few times with dilate, blur, erode, and find the contours on that updated shape. I found that 6-7 iterations give good results.
Or create a bounding box of the contour, and draw an ellipse inside the bounding rectangle.
Adding the visual results below:
This works for me; the edges are smoother than before:
medianBlur(mat, mat, 7)
morphologyEx(mat, mat, MORPH_OPEN, getStructuringElement(MORPH_RECT, Size(12.0, 12.0)))
val contours = getContours(mat)
This is OpenCV4Android code.