Image Segmentation using OpenCV - C++

I am pretty new to OpenCV and would like a little help.
My basic idea is to use OpenCV to create a small application for interior design.
Problem
How do I differentiate between the walls and the floor in a picture (even when there is some noise in the picture)?
For example:
My idea is this: if I can somehow find the edges of the wall or tile, then any object used for interior decoration (for example a chair) can be placed convincingly on the floor, i.e. the two images get blended.
My approach
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace cv;
using namespace std;

int main()
{
    Mat image = imread("/home/ayusun/Downloads/IMG_20140104_143443.jpg");
    if (image.empty())
        return 1;

    // Work on a fixed-size copy so the parameters behave consistently.
    Mat resized_img;
    resize(image, resized_img, Size(1024, 768), 0, 0, INTER_CUBIC);
    namedWindow("resized image");
    imshow("resized image", resized_img);

    // Canny expects a single-channel 8-bit image; blur first to suppress noise.
    Mat gray, detected_edges, contours;
    cvtColor(resized_img, gray, COLOR_BGR2GRAY);
    blur(gray, detected_edges, Size(2, 2));
    imshow("blurred", detected_edges);

    Canny(detected_edges, contours, 10, 350);
    imshow("contour", contours);

    waitKey(0);
    return 0;
}
I tried the Canny edge detection algorithm, but it seems to find far too many edges. And I still don't know how to blend the floor of the room with the chair.
Thanks

Sorry for the involuntary advertisement, but IKEA's catalogue smartphone app uses augmented reality to position objects/furniture over an image of your room. Is that what you're trying to do?
To achieve this you need a "pinpoint": a fixed point to hook your objects to. That is usually what helps differentiate between walls and floor in the app above (and makes rendering easy).
Distinguishing walls from floors is hard even for a human if they're hanging upside down and walls and floors share the same texture (we only manage it thanks to our sense of gravity).
Find some keypoints, or please state whether you're planning to do this with a fixed camera (i.e. one that will never be held horizontally).

OpenCV's POSIT may be useful for you (here is an example): http://opencv-users.1802565.n2.nabble.com/file/n6908580/main.cpp
Also take a look at augmented reality toolkits, ArUco for example.
For more advanced methods take a look at PTAM.
And you can find some useful links and papers here: http://www.doc.ic.ac.uk/~ajd/
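If you go the marker route, OpenCV's opencv_contrib now ships an aruco module. Below is a minimal detection sketch, not a full AR pipeline; "scene.jpg", the dictionary choice and the API shown (pre-4.7 style) are assumptions, and exact signatures vary between OpenCV versions. The detected marker corners give you the fixed "pinpoint" mentioned above to anchor virtual furniture to.

#include <opencv2/opencv.hpp>
#include <opencv2/aruco.hpp>

int main()
{
    cv::Mat img = cv::imread("scene.jpg");   // placeholder path
    if (img.empty()) return 1;

    cv::Ptr<cv::aruco::Dictionary> dict =
        cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);

    std::vector<int> ids;
    std::vector<std::vector<cv::Point2f> > corners;
    cv::aruco::detectMarkers(img, dict, corners, ids);

    // Draw what was found; each marker's corners are a stable anchor
    // for placing a rendered object on the floor.
    cv::aruco::drawDetectedMarkers(img, corners, ids);
    cv::imshow("markers", img);
    cv::waitKey(0);
    return 0;
}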

Segmenting walls and floors out of a single image is possible to some extent, but it requires a lot of work; you will need quite a complex system to achieve decent results. You can probably do much better with a pair of images (stereo reconstruction).
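To illustrate the stereo idea, here is a rough sketch of computing a disparity map with block matching (OpenCV 3.x API). It assumes "left.jpg" and "right.jpg" are an already rectified stereo pair; the disparity/block-size values are arbitrary starting points, not tuned settings.

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat left  = cv::imread("left.jpg",  cv::IMREAD_GRAYSCALE);
    cv::Mat right = cv::imread("right.jpg", cv::IMREAD_GRAYSCALE);
    if (left.empty() || right.empty()) return 1;

    // 64 disparity levels, 21x21 matching block; tune for baseline/resolution.
    cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(64, 21);
    cv::Mat disparity;
    bm->compute(left, right, disparity);

    // Larger disparity = closer surface. The floor shows up as a smooth
    // depth gradient, which is far easier to separate from walls than raw edges.
    cv::Mat vis;
    cv::normalize(disparity, vis, 0, 255, cv::NORM_MINMAX, CV_8U);
    cv::imshow("disparity", vis);
    cv::waitKey(0);
    return 0;
}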

Related

Image (color?) segmentation with opencv C++

As the image shows, I'd like to input an image and get several segments as a result, like that.
It's essentially clustering the closest color segments, so I think it's close to the concept of "mean shift"?
I've searched relevant questions here but still don't know how to start or how to structure this in OpenCV C++. I'm looking for some advice, and I'd really appreciate a piece of implementation code to use as a reference! Thanks for any help!
==================================================
Edit 5/19/2015
Let me add that one of the implementations I tried is the watershed example here: http://blog.csdn.net/fdl19881/article/details/6749976.
It's not perfect, but the result is what I want. In that implementation the user has to operate it manually (draw the watershed seed lines), so I'm looking for an AUTOMATIC version of it. It sounds a little hard, but I'd appreciate any suggestion or piece of code to do it.
OpenCV documentation: see pyrMeanShiftFiltering and its parameter descriptions.
Sample code for Meanshift filtering:
#include "opencv2/core/core.hpp"
#include "opencv2/imgproc/imgproc.hpp"
using namespace cv;
using namespace std;
Mat img, res, element;
int main(int argc, char** argv)
{
namedWindow( "Meanshift", 0 );
img = imread( argv[1] );
// GaussianBlur(img, img, Size(5,5), 2, 2);
pyrMeanShiftFiltering( img, res, 20, 45, 3);
imwrite("meanshift.png", res);
imshow( "Meanshift", res );
waitKey();
return 0;
}
This is the output with your image; you might need to do some pre-processing first, or find better parameters:
EDIT: output with some Gaussian blur applied beforehand (commented line in the code):
The problem with existing segmentation approaches is that they are either implemented in Matlab (which hardly anyone outside of a university can use) or they are not automatic. An approach where the user needs to preprocess the picture by choosing objects of interest, or levels that indicate how to split colors, is not useful because it is not automatic. If you like, you can try my OpenCV-based implementation of segmentation described in this blog post. It is not perfect, but it is automatic, does most of the job, and you can download the source and try it out.
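Regarding the "automatic watershed" asked for in the edit: a common way to make watershed automatic is to generate the markers yourself instead of drawing them, typically with a threshold, a distance transform and connected components. The sketch below shows that standard seeding approach (OpenCV 3.x API); it is not the blog's method, and "input.jpg" plus the 0.5 peak threshold are assumptions to tune per image.

#include <opencv2/opencv.hpp>
#include <algorithm>

int main()
{
    cv::Mat img = cv::imread("input.jpg");
    if (img.empty()) return 1;

    cv::Mat gray, bin;
    cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
    cv::threshold(gray, bin, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);

    // Peaks of the distance transform act as "surely foreground" seeds.
    cv::Mat dist;
    cv::distanceTransform(bin, dist, cv::DIST_L2, 3);
    cv::normalize(dist, dist, 0, 1.0, cv::NORM_MINMAX);
    cv::Mat peaks;
    cv::threshold(dist, peaks, 0.5, 1.0, cv::THRESH_BINARY);
    peaks.convertTo(peaks, CV_8U);

    // Each connected peak region becomes one watershed marker.
    cv::Mat markers;
    int n = cv::connectedComponents(peaks, markers);
    cv::watershed(img, markers);   // markers: one label per segment, -1 at boundaries

    cv::Mat vis;
    markers.convertTo(vis, CV_8U, 255.0 / std::max(n, 1));
    cv::imshow("segments", vis);
    cv::waitKey(0);
    return 0;
}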

Low quality aerial stitching with OpenCV

I've recently been trying to stitch low quality, low resolution (320x180) images taken by a quadrocopter in OpenCV. Here is what I got:
http://postimg.org/gallery/1rqsycyk/
The pictures are taken almost at nadir and, as you can see, overlap a lot. There is a translation between each shot, and I tried to place objects on the ground to keep the scene almost planar so as not to violate the requirements for a homography. Even so, quite a few pictures are not taken into account during the stitching process.
Here is another example (only three images are stitched together):
http://postimg.org/gallery/1wpt3lmo/
I'm using the SURF feature detector and suspect that the low quality of the images does not work well for it, but I'm not sure about that.
Here's the code I use. I found it in a similar question, OpenCV non-rotational image stitching, and decided to use it since it worked better than mine:
// picturesTaken is a std::vector<cv::Mat> filled with the captured frames.
Mat pano;
Stitcher stitcher = Stitcher::createDefault(false);
stitcher.setWarper(new PlaneWarper());
stitcher.setFeaturesFinder(new detail::SurfFeaturesFinder(1000, 3, 4, 3, 4));
stitcher.setRegistrationResol(0.1);
stitcher.setSeamEstimationResol(0.1);
stitcher.setCompositingResol(1);
stitcher.setPanoConfidenceThresh(1);
stitcher.setWaveCorrection(true);
stitcher.setWaveCorrectKind(detail::WAVE_CORRECT_HORIZ);
stitcher.setFeaturesMatcher(new detail::BestOf2NearestMatcher(false, 0.3));
stitcher.setBundleAdjuster(new detail::BundleAdjusterRay());

Stitcher::Status status = Stitcher::ERR_NEED_MORE_IMGS;
try {
    status = stitcher.stitch(picturesTaken, pano);
}
catch (const cv::Exception& e) {
    // Don't swallow the error silently.
    std::cerr << "Stitching failed: " << e.what() << std::endl;
}
My other idea is to do the stitching manually instead of using the Stitcher class, but I'm not sure it would change much. So the question is: how can I make the stitching process more robust despite the low quality of the images? Also: does defining ROIs only have an impact on performance, or also on the chance of a successful stitch?
The result is not that bad given the quality of the input images!
To improve the quality of the output, I would (in priority order):
- estimate the camera distortion so you can correct it and make the matching easier (see the sketch after this list)
- perform some histogram or lighting equalization before stitching
- try to increase the temporal gap between pictures, or use another stitcher. Part of the blur in the output is created by the stitcher when it merges the images in their overlap areas.
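A minimal sketch of the first two points, assuming you have calibrated the camera once (e.g. with a chessboard and cv::calibrateCamera); cameraMatrix and distCoeffs here are placeholders for that calibration result, and the per-channel handling is only one possible way to equalize lighting.

#include <opencv2/opencv.hpp>
#include <vector>

// Undistort a frame and equalize its brightness before handing it to the stitcher.
cv::Mat preprocess(const cv::Mat& frame,
                   const cv::Mat& cameraMatrix,   // from a one-off calibrateCamera run
                   const cv::Mat& distCoeffs)
{
    cv::Mat fixed;
    cv::undistort(frame, fixed, cameraMatrix, distCoeffs);

    // Equalize only the luma channel so colours are preserved.
    cv::Mat ycrcb;
    cv::cvtColor(fixed, ycrcb, cv::COLOR_BGR2YCrCb);
    std::vector<cv::Mat> ch;
    cv::split(ycrcb, ch);
    cv::equalizeHist(ch[0], ch[0]);
    cv::merge(ch, ycrcb);
    cv::cvtColor(ycrcb, fixed, cv::COLOR_YCrCb2BGR);
    return fixed;
}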
I believe the problem is that you are taking pictures of textureless regions, and it's hard to extract good, distinctive keypoints from such smooth regions.
I found this question very helpful for me too. I've investigated this topic and have some other tips for you:
About finding matching images:
You set SurfFeaturesFinder with minHessian = 1000. That is a really large value (OpenCV suggests 300; I sometimes use 100). This is probably why only some of the images are matched.
You set the panorama confidence threshold to 1; if you set it to 0.8 instead, it will stitch more images.
About the look of the stitched images:
There are some other functions in the Stitcher pipeline. Try using:
stitcher.setSeamFinder(new detail::GraphCutSeamFinder(detail::GraphCutSeamFinderBase::COST_COLOR));
stitcher.setBlender(detail::Blender::createDefault(detail::Blender::MULTI_BAND, false));
stitcher.setExposureCompensator(detail::ExposureCompensator::createDefault(detail::ExposureCompensator::GAIN_BLOCKS));
Maybe this will be helpful for you!

opencv: performing night vision

First, I am not talking about real night vision. I am talking about the technique used to improve picture brightness when lighting conditions are poor. You can see this technique working well on smartphones, and superbly on phablets. I know the idea behind it: take the existing light and use it to make the picture clearer. But how do I do this in OpenCV? Any method or step-by-step process?
There are essentially 2 ways to brighten your image:
1. Get more photons into the camera
2. Give each photon more 'weight'
For approach 1, supposing that you can't control the lighting, the only way to get more photons is to expose your sensor for a longer period of time. That assumes you can change your camera's integration time. The drawback of this approach is that you can get more motion blur.
For approach 2, this amounts to applying a multiplicative gain to the input image, which makes each photon contribute more DNs to the resulting image. Applying such a gain, though, presupposes that you have a priori information about the input image's brightness. If your gain value is not good you'll get an image that's either saturated or too dark.
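In OpenCV such a gain is just convertTo with a scale factor. A minimal sketch; "dark.jpg" and the gain of 1.8 are arbitrary example values, and pixel values saturate at 255 rather than wrapping.

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat img = cv::imread("dark.jpg");   // placeholder path
    if (img.empty()) return 1;

    cv::Mat brighter;
    img.convertTo(brighter, -1, 1.8, 0);   // dst = 1.8 * src + 0, same depth

    cv::imshow("gain x1.8", brighter);
    cv::waitKey(0);
    return 0;
}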
To improve your image automatically, the best approach would be to use OpenCV's equalizeHist function, as described here. The operation isn't exactly a multiplicative gain but the effect is similar.
The last step would be, as previously suggested in the comments, to apply gamma correction as described here. Gamma correction tends to reduce the contrast of an image, but since you improved the contrast with histogram equalization, you should get good results.
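A minimal sketch of that gamma step using an 8-bit lookup table; the gamma value of 0.6 (below 1, so shadows are lifted) and "dark.jpg" are arbitrary examples.

#include <opencv2/opencv.hpp>
#include <cmath>

int main()
{
    cv::Mat img = cv::imread("dark.jpg");
    if (img.empty()) return 1;

    const double gamma = 0.6;
    cv::Mat lut(1, 256, CV_8U);
    for (int i = 0; i < 256; ++i)
        lut.at<uchar>(i) = cv::saturate_cast<uchar>(std::pow(i / 255.0, gamma) * 255.0);

    cv::Mat corrected;
    cv::LUT(img, lut, corrected);   // the same table is applied to every channel

    cv::imshow("gamma corrected", corrected);
    cv::waitKey(0);
    return 0;
}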
As Michel points out, try equalizeHist.
Here's a minimal example:
#include <opencv2/opencv.hpp>
#include <vector>

using namespace cv;

int main(int argc, char *argv[])
{
    Mat in = imread("yourDarkImage.jpg");
    if (in.empty())
        return 1;

    // Equalize the histogram of each BGR channel independently.
    std::vector<Mat> colors;
    split(in, colors);
    equalizeHist(colors[0], colors[0]);
    equalizeHist(colors[1], colors[1]);
    equalizeHist(colors[2], colors[2]);

    Mat out;
    merge(colors, out);

    namedWindow("input");
    namedWindow("output");
    imshow("input", in);
    imshow("output", out);
    waitKey(0);
    return 0;
}

How to detect the white gauge board for measuring the level of the water?

I'm working on a project where I need to measure the water level using a white gauge board. My current approach is:
1. segment the white gauge board;
2. measure the water level against the gauge board.
But I'm stuck on segmenting the gauge board. I avoid color-based segmentation since I need it to be invariant to lighting changes, so I detect edges using morphological operations instead. I've got this image:
The result of the morphological operations seems promising. The edges on the white gauge board are sharper than elsewhere. But I still have no idea how to properly segment the board. Can you suggest an algorithm to segment it? Or, if you have one, please suggest a different algorithm for measuring the water level.
Here is my code:
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>

int main()
{
    cv::Mat src = cv::imread("image.jpg");
    if (!src.data)
        return -1;

    // Grayscale + median blur to suppress small noise.
    cv::Mat bw;
    cv::cvtColor(src, bw, CV_BGR2GRAY);
    cv::medianBlur(bw, bw, 3);

    // Morphological gradient: dilation minus erosion emphasizes edges.
    cv::Mat dilated, eroded;
    cv::dilate(bw, dilated, cv::Mat());
    cv::erode(bw, eroded, cv::Mat());
    bw = dilated - eroded;

    cv::imshow("src", src);
    cv::imshow("bw", bw);
    cv::waitKey();
    return 0;
}
I'm using C++, but I'm open to other implementations in Matlab/Mathematica.
If the camera is indeed stationary, you can use this type of quick and dirty approach:
im = rgb2gray(imread('img.jpg'));
imr = imrotate(im, 1);        % straighten the gauge slightly
a = imr(100:342, 150);        % intensity profile along one column of the gauge
plot(a)
The minima shown in the plot run from 10 (left) to 1 (right) on the scale of the indicator. You can use a peak detector to locate their positions and interpolate the water level between them.
So there's no real need for fancy image processing...
Why are you segmenting the gauge board anyway? You just want to find it in the image, that's all. You don't need to find the relative locations of its segments: 5 is always going to be between 4 and 6.
As you've probably noticed, you can find the rough location of the gauge board by looking for an area with a high contrast level. Using matchTemplate you can then find the exact location. (Since the camera is fixed, you might be able to skip the first step and call matchTemplate directly.)
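A rough sketch of that matchTemplate step; "scene.jpg", "gauge_template.jpg" and the normalized-correlation choice are assumptions (a template cropped once from a reference frame of the fixed camera should work well).

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat scene = cv::imread("scene.jpg");
    cv::Mat templ = cv::imread("gauge_template.jpg");
    if (scene.empty() || templ.empty()) return 1;

    cv::Mat result;
    cv::matchTemplate(scene, templ, result, cv::TM_CCOEFF_NORMED);

    double maxVal;
    cv::Point maxLoc;
    cv::minMaxLoc(result, 0, &maxVal, 0, &maxLoc);

    // Best match = top-left corner of the gauge board in the scene.
    cv::rectangle(scene, maxLoc,
                  maxLoc + cv::Point(templ.cols, templ.rows),
                  cv::Scalar(0, 0, 255), 2);
    cv::imshow("match", scene);
    cv::waitKey(0);
    return 0;
}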

C++ - Image Conversion

I am new to C++ and would like to know how to read in a .jpg image and then convert it to a binary (black and white / bi-level / two-level) image.
Thank you.
Your best choice is probably Boost GIL.
The Boost libraries are not especially beginner-friendly, but they are often well designed.
#include <boost/gil/image.hpp>
#include <boost/gil/typedefs.hpp>
#include <boost/gil/extension/io/jpeg_io.hpp>

int main() {
    using namespace boost::gil;

    rgb8_image_t img;
    jpeg_read_image("test.jpg", img);

    // Convert to 8-bit grayscale while copying into a new image.
    gray8_image_t gray(img.dimensions());
    copy_and_convert_pixels(const_view(img), view(gray));

    jpeg_write_view("grey.jpg", const_view(gray));
}
You can use DevIL to read the image. It supports a lot of different formats.
To convert it to pure black and white, you then go through the whole image, compute the intensity or light contribution of each pixel, and output a black pixel if it falls below a certain threshold, otherwise a white pixel.
You could do it as simply as checking the RGB values of each pixel against a threshold of RGB(0.5, 0.5, 0.5), but you might get better results if you convert the image to HSI and use the intensity value of each pixel; that's more work, though. See the sketch below.
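A small sketch of that thresholding step, independent of the image loader (DevIL, stb_image, ...). It assumes an interleaved 8-bit RGB buffer; the luminance weights are the usual Rec. 601 luma coefficients, and the default threshold of 128 is an arbitrary choice (Otsu or a user setting would be better in practice).

#include <cstdint>
#include <cstddef>
#include <vector>

// Binarize an interleaved 8-bit RGB buffer on per-pixel luminance.
std::vector<std::uint8_t> toBiLevel(const std::uint8_t* rgb,
                                    std::size_t width, std::size_t height,
                                    std::uint8_t threshold = 128)
{
    std::vector<std::uint8_t> out(width * height);
    for (std::size_t i = 0; i < width * height; ++i) {
        const std::uint8_t* p = rgb + 3 * i;
        // Rec. 601 luma approximation.
        double luma = 0.299 * p[0] + 0.587 * p[1] + 0.114 * p[2];
        out[i] = (luma < threshold) ? 0 : 255;
    }
    return out;
}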
There is also the option of libpng, which has been used on many projects. For more detail on how to write a grayscale image with it, take a look at this chapter from their website.
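For reference, a rough sketch of writing an 8-bit grayscale buffer with libpng. The function name and buffer layout are assumptions, and the usual setjmp-based error handling is omitted for brevity; add it in real code.

#include <png.h>
#include <cstdio>

bool writeGray8(const char* path, const unsigned char* pixels,
                int width, int height)
{
    FILE* fp = std::fopen(path, "wb");
    if (!fp) return false;

    png_structp png = png_create_write_struct(PNG_LIBPNG_VER_STRING, 0, 0, 0);
    png_infop info = png_create_info_struct(png);
    png_init_io(png, fp);

    // 8-bit, single-channel grayscale.
    png_set_IHDR(png, info, width, height, 8, PNG_COLOR_TYPE_GRAY,
                 PNG_INTERLACE_NONE, PNG_COMPRESSION_TYPE_DEFAULT,
                 PNG_FILTER_TYPE_DEFAULT);
    png_write_info(png, info);

    for (int y = 0; y < height; ++y)
        png_write_row(png, const_cast<png_bytep>(pixels + y * width));

    png_write_end(png, 0);
    png_destroy_write_struct(&png, &info);
    std::fclose(fp);
    return true;
}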