Image (color?) segmentation with opencv C++ - c++

As the graph shows, I'd like to input an image and get several segments as a result.
It's essentially clustering the closest colors into segments, so I think it's close to the concept of "mean shift"?
I've searched relevant questions here but still don't know how to start or how to structure this in OpenCV C++. I'm looking for some advice, and I'd really appreciate a piece of implementation code to reference. Thanks for any help!
==================================================
Edit 5/19/2015
Let me add that one of the implementations I tried is the Watershed example here: http://blog.csdn.net/fdl19881/article/details/6749976.
It's not perfect, but the result is what I want. In that implementation the user has to operate manually (draw the watershed lines), so I'm looking for an AUTOMATIC version of it. It sounds a little hard, but I'd appreciate any suggestion or piece of code to do it.

Opencv Documentation: Link
Parameters: here
Sample code for Meanshift filtering:
#include "opencv2/core/core.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp" // imread, imwrite, namedWindow, imshow
using namespace cv;
using namespace std;
Mat img, res, element;
int main(int argc, char** argv)
{
namedWindow( "Meanshift", 0 );
img = imread( argv[1] );
// GaussianBlur(img, img, Size(5,5), 2, 2);
pyrMeanShiftFiltering( img, res, 20, 45, 3); // spatial radius, color radius, max pyramid level
imwrite("meanshift.png", res);
imshow( "Meanshift", res );
waitKey();
return 0;
}
This is the output with your image; you might need to apply some pre-processing first, or find better parameters:
EDIT: output with some Gaussian blur applied beforehand (see the commented-out line in the code)

The problem with looking at existing segmentation approaches is that they are either implemented in Matlab (which nobody outside of academia can use) or they are not automatic. An approach where the user needs to preprocess the picture by choosing objects of interest, or levels that indicate how to split colors, is not useful because it is not automatic. If you like, you can try my OpenCV-based implementation of segmentation described in this blog post. It is not perfect, but it is automatic, it does most of the job, and you can actually download the source and try it out.

Related

Reshape opencv image for PCA

This is probably a rather simple task, but I am uncertain on how to proceed, since I am new to opencv in C++.
I was inspired by this code.
The idea I had was then to take a single image, do PCA on the RGB intensities and visualize the projection of the RGB data onto the 3 principal components in grayscale.
The first problem I run into, is how to setup the matrix for PCA. Here is my code so far:
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;
using namespace std;
int main( )
{
// Open another image
Mat image, imageMat;
image= imread("images/0.jpg");
imageMat = convertForPCA(image);
// Do pca and visualize channels in grayscale.
// ...
return 0;
}
So could you help me implement the convertForPCA function? That function should take an image matrix and return an n-by-3 float matrix, where n is the number of pixels in the original image.
I think I can proceed with the rest, but will maybe post more questions if I get stuck and can't find an answer.
I solved my problem, and I put the solution on github in case anyone runs into this later.

polarToCart and cartToPolar functions in opencv

Hey, I have a circular image that I want to convert to Cartesian coordinates in OpenCV.
I've successfully done it in Matlab, but now I want to do it in OpenCV.
After some digging on the internet, I figured out there are functions called logPolar, polarToCart and cartToPolar. However, the official OpenCV documentation lacks information on how to use them. Since I don't really understand the parameters those functions take, I couldn't really use them.
So could someone give me (and, I think, a lot of other people looking for this) an appropriate example of how to use those functions, please?
Just in case, I am sharing my sample image too.
Thanks in advance
if you're using opencv3, you probably want linearPolar:
note that for both versions you need separate src and dst images (it does not work in-place)
#include "opencv2/opencv.hpp" // needs imgproc, imgcodecs & highgui
Mat src = imread("my.png", 0); // read a grayscale img
Mat dst; // empty.
linearPolar(src,dst, Point(src.cols/2,src.rows/2), 120, INTER_CUBIC );
imshow("linear", dst);
waitKey();
or logPolar:
logPolar(src,dst,Point(src.cols/2,src.rows/2),40,INTER_CUBIC );
[edit:]
if you're still using opencv2.4, you can only use the arcane c-api functions, and need IplImage conversions (not recommended):
Mat src=...;
Mat dst(src.size(), src.type()); // yes, you need to preallocate here
IplImage ipsrc = src; // new header, points to the same pixels
IplImage ipdst = dst;
cvLogPolar( &ipsrc, &ipdst, cvPoint2D32f(src.cols/2,src.rows/2), 40, CV_INTER_CUBIC);
// result is in dst, no need to release ipdst (and please don't do so.)
(polarToCart and cartToPolar work on point coords, not images)

Image Segmentation using OpenCV

I am pretty new to OpenCV and would like a little help.
My basic idea is to use OpenCV to create a small application for interior design.
Problem
How to differentiate between the walls and the floor of a picture (even when there is some noise in the picture).
For Ex.
Now, my idea was: if I can somehow find the edges of the wall or tile, then any object used for interior decoration (for example a chair) can be placed perfectly on the floor (i.e. the two images get blended).
My approach
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp> // resize, threshold, blur, Canny
using namespace cv;
using namespace std;
int main(){
Mat image=imread("/home/ayusun/Downloads/IMG_20140104_143443.jpg");
Mat resized_img,dst,contours,detected_edges;
resize(image, resized_img, Size(1024, 768), 0, 0, INTER_CUBIC);
threshold(resized_img, dst, 128, 255, CV_THRESH_BINARY);
//Canny(image,contours,10,350);
namedWindow("resized image");
imshow("resized image",resized_img);
//imshow("threshold",dst);
blur( resized_img, detected_edges, Size(2,2) );
imshow("blurred", detected_edges);
Canny(detected_edges,contours,10,350);
imshow("contour",contours);
waitKey(0);
return 1;
}
I tried the Canny edge detection algorithm, but it seems to find a lot of edges. And I still don't know how to combine the floor of the room with the chair.
Thanks
Sorry for the involuntary advertisement, but IKEA has a catalog smartphone app which uses augmented reality to position objects/furniture over an image of your room. Is that what you're trying to do?
In order to achieve this you would need a "pinpoint": a fixed point to hook your objects to. That is usually what helps differentiate between walls and floor in the app above (and makes things easy).
Distinguishing walls from floors is hard even for a human if they're hanging by their feet and the walls/floors have the same texture on them (we manage to do it thanks to our sense of gravity).
Find some keypoints, or please state whether you're planning to do it with a fixed camera (i.e. one that will never be held horizontally).
OpenCV's POSIT may be useful for you (here is an example): http://opencv-users.1802565.n2.nabble.com/file/n6908580/main.cpp
Also take a look at augmented-reality toolkits, ArUco for example.
For advanced methods take a look at PTAM.
And you can find some useful links and papers here: http://www.doc.ic.ac.uk/~ajd/
Segmenting walls and floors out of a single image is possible to some extent, but it requires a lot of work; you will need quite a complex system to achieve decent results. You can probably do much better with a pair of images (stereo reconstruction).

how can i read a greyimage line by line opencv c++

I have a grey image and I want to read every line from the image.
How can I write this algorithm?
Can anyone help me?
Here is the code I am using:
#include <iostream>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
using namespace cv;
int main()
{
cv::Mat img = cv::imread("capture.jpg", CV_LOAD_IMAGE_GRAYSCALE);
cv::Mat some_row = img.row(0); // gives 1st row
namedWindow( "source", CV_WINDOW_NORMAL);
imshow( "source", img);
namedWindow( "row", CV_WINDOW_NORMAL);
imshow( "row", some_row);
waitKey(0);
return 0;
}
From the image I would like to scan the colors and test whether I can find the order white, then black, then white, then black.
For this reason I would like to search row by row.
I can share the picture so that you can understand me better.
I have made an algorithm that can detect the circle from the right and the left side, and that works. I need code that searches only for the colors, not for objects, in the image (colors in the order white, black, white, black) to get the position I am searching for,
as shown in the image: https://drive.google.com/file/d/0B1WQBCaQu10geG1uTm40ZG9IVjQ/edit?usp=sharing
I suggest that you read the opencv tutorial on scanning images. You should also have a look at the documentation for the Mat class.
There is more than one way of doing what you need. Different methods have different efficiency and safety trade-offs. The more efficient methods work directly with pointers. This is fast, but you must take care to get the pointer arithmetic right, depending on the number of pixels, channels and the padding your image may have in memory.
On the other hand, using an iterator, or even something like Mat::at, is safer since these calculations are made for you. But it isn't as efficient.
Have a look at the documentation and choose what is right for your problem. Hope that helps.
You can use .row() method:
cv::Mat img = cv::imread("some_path.jpg",CV_LOAD_IMAGE_GRAYSCALE);//some gray level image
cv::Mat some_row = img.row(0); //gives 1st row
Note that this method causes data sharing with the original matrix img. If you want to have a copy operation, you can use .copyTo() method.

Blob extraction in OpenCV

I'm using OpenCV to filter an image for certain colours, so I've got a binary image of the detected regions.
Now I want to erode those areas, then get rid of the smaller ones and find the x,y coordinates of the largest 'blob'.
I was looking for recommendations as to which library would be best to use. I've seen cvBlob and cvBlobsLib, but I'm not too sure how to set them up. Do I compile them along with the project, or do I compile and install them to the system (like I did with OpenCV)?
I'm currently using the Code::Blocks IDE on Ubuntu (although that shouldn't restrict things)
I'm late to the party, but I'd just like to chime in that there is a way to do connected components in opencv, it's just not mainlined yet.
Update: It is mainlined now; it was just stuck waiting for the 3.0 release for multiple years. Link to the documentation
See http://code.opencv.org/issues/1236 and http://code.opencv.org/attachments/467/opencv-connectedcomponents.patch
Disclaimer - I'm the author.
You can use findContours to do that, see the opencv manual and a Tutorial to find connected components.
Edit: Code from the tutorial (via Archive.org)
#include <stdio.h>
#include <cv.h>
#include <highgui.h>
int main(int argc, char *argv[])
{
IplImage *img, *cc_color; /*IplImage is an image in OpenCV*/
CvMemStorage *mem;
CvSeq *contours, *ptr;
img = cvLoadImage(argv[1], 0); /* loads the image from the command line */
cc_color = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 3);
cvThreshold(img, img, 150, 255, CV_THRESH_BINARY);
mem = cvCreateMemStorage(0);
cvFindContours(img, mem, &contours, sizeof(CvContour), CV_RETR_CCOMP,
CV_CHAIN_APPROX_SIMPLE, cvPoint(0,0));
for (ptr = contours; ptr != NULL; ptr = ptr->h_next) {
CvScalar color = CV_RGB( rand()&255, rand()&255, rand()&255 );
cvDrawContours(cc_color, ptr, color, CV_RGB(0,0,0), -1, CV_FILLED, 8, cvPoint(0,0));
}
cvSaveImage("result.png", cc_color);
cvReleaseImage(&img);
cvReleaseImage(&cc_color);
return 0;
}
Unfortunately OpenCV doesn't have any connected-component labelling functionality, which seems like a serious omission for a computer vision library. Anyway, I had a similar requirement recently, so I implemented my own CCL routine. There are a couple of different algorithms described on the CCL Wikipedia page, and they are both pretty simple to implement.
I think the best and easiest option for working with blobs in OpenCV is the cvBlob library. It's a complementary library to OpenCV and it's very easy to use.