Reshape OpenCV image for PCA - C++

This is probably a rather simple task, but I am uncertain how to proceed, since I am new to OpenCV in C++.
I was inspired by this code.
The idea I had was then to take a single image, do PCA on the RGB intensities and visualize the projection of the RGB data onto the 3 principal components in grayscale.
The first problem I've run into is how to set up the matrix for PCA. Here is my code so far:
#include <opencv2/opencv.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main()
{
    // Open another image
    Mat image, imageMat;
    image = imread("images/0.jpg");
    imageMat = convertForPCA(image); // convertForPCA is the helper this question asks about

    // Do PCA and visualize channels in grayscale.
    // ...

    return 0;
}
So could you help me implement the convertForPCA function? That function should take an image matrix and return an n-by-3 float matrix, where n is the number of pixels in the original image.
I think I can proceed with the rest, but I may post more questions if I get stuck and can't find an answer.

I solved my problem, and I put the solution on GitHub in case anyone runs into this later.
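For anyone who lands here without following the link, a minimal sketch of what convertForPCA can look like (this assumes a continuous 8-bit BGR image, as returned by imread, and is not necessarily the GitHub solution):
// Flatten an 8-bit, 3-channel image into an n x 3 float matrix,
// one row per pixel and one column per channel.
Mat convertForPCA(const Mat& image)
{
    // reshape needs continuous data; images loaded with imread are continuous
    Mat flat = image.reshape(1, (int)image.total()); // n x 3, still 8-bit
    Mat flatFloat;
    flat.convertTo(flatFloat, CV_32F); // PCA expects floating-point input
    return flatFloat;
}
You can then feed the result to something like cv::PCA pca(flatFloat, cv::Mat(), CV_PCA_DATA_AS_ROW, 3) and project each row onto the principal components.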

Related

Merging RGB and depth images from a Kinect

I'm creating a vision algorithm that is implemented in a Simulink S-function (which is C++ code). I have accomplished everything I wanted except the alignment of the color and depth images.
My question is: how can I make the two images correspond to each other? In other words, how can I make a 3D image with OpenCV?
I know my question might be a little vague, so I will include my code, which should explain the question.
#include "opencv2/opencv.hpp"
using namespace cv;
int main(int argc, char** argv)
{
// reading in the color and depth image
Mat color = imread("whitepaint_col.PNG", CV_LOAD_IMAGE_UNCHANGED);
Mat depth = imread("whitepaint_dep.PNG", CV_LOAD_IMAGE_UNCHANGED);
// show bouth the color and depth image
namedWindow("color", CV_WINDOW_AUTOSIZE);
imshow("color", color);
namedWindow("depth", CV_WINDOW_AUTOSIZE);
imshow("depth", depth);
// thershold the color image for the color white
Mat onlywhite;
inRange(color, Scalar(200, 200, 200), Scalar(255, 255, 255), onlywhite);
//display the mask
namedWindow("onlywhite", CV_WINDOW_AUTOSIZE);
imshow("onlywhite", onlywhite);
// apply the mask to the depth image
Mat nocalibration;
depth.copyTo(nocalibration, onlywhite);
//show the result
namedWindow("nocalibration", CV_WINDOW_AUTOSIZE);
imshow("nocalibration", nocalibration);
waitKey(0);
destroyAllWindows;
return 0;
}
Output of the program:
As can be seen in the output, when I apply the onlywhite mask to the depth image, the quadcopter body does not consist of one color. The reason for this is that there is a mismatch between the two images.
I know that I need the calibration parameters of my camera, and I got these from the last person who worked with this setup. They did the calibration in Matlab, which resulted in the following.
Matlab calibration results:
I have spent a lot of time reading the OpenCV page about Camera Calibration and 3D Reconstruction (I cannot include the link because of my Stack Exchange level).
But I cannot for the life of me figure out how I could accomplish my goal of adding the correct depth value to each colored pixel.
I tried using reprojectImageTo3D() but I cannot figure out the Q matrix.
I also tried a lot of other functions from that page, but I cannot seem to get my inputs right.
As far as I know, Matlab has very good support for Kinect (especially for v1). You may use a function named alignColorToDepth, as follows:
[alignedFlippedImage,flippedDepthImage] = alignColorToDepth(depthImage,colorImage,depthDevice)
The returned values are alignedFlippedImage (the registered RGB image) and flippedDepthImage (the registered depth image). These two images are aligned and ready for you to process them.
You can find more at this MathWorks documentation page.
Hope it's what you need :)
As far as I can tell, you are missing the transformation between camera coordinate frames. The Kinect (v1 and v2) uses two separate camera systems to capture the depth and RGB data, and so there is a translation and rotation between them. You may be able to assume no rotation, but you will have to account for the translation to fix the misalignment you are seeing.
Try starting with this thread.
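To make that concrete, here is a minimal sketch (not the poster's code) of reprojecting each depth pixel into the color camera, assuming pinhole intrinsics for both cameras and an extrinsic rotation R and translation t from the depth frame to the color frame. All the calibration constants below are placeholders; substitute the values from your Matlab calibration, and note it assumes a CV_16U depth image:
#include <opencv2/opencv.hpp>
using namespace cv;

// Placeholder intrinsics: replace with your own calibration results.
const double fx_d = 580, fy_d = 580, cx_d = 320, cy_d = 240; // depth camera
const double fx_c = 525, fy_c = 525, cx_c = 320, cy_c = 240; // color camera

// Remap a CV_16U depth image into the color camera's pixel grid.
Mat registerDepthToColor(const Mat& depth, const Mat& R, const Mat& t)
{
    Mat registered = Mat::zeros(depth.size(), CV_16U);
    for (int v = 0; v < depth.rows; ++v)
    {
        for (int u = 0; u < depth.cols; ++u)
        {
            double z = depth.at<unsigned short>(v, u);
            if (z == 0) continue; // no depth measurement at this pixel
            // back-project the pixel into the depth camera frame
            Mat p = (Mat_<double>(3, 1) << (u - cx_d) * z / fx_d,
                                           (v - cy_d) * z / fy_d,
                                           z);
            // transform into the color camera frame (R: 3x3, t: 3x1, CV_64F)
            Mat q = R * p + t;
            double zq = q.at<double>(2);
            if (zq <= 0) continue;
            // project into the color image plane
            int uc = cvRound(fx_c * q.at<double>(0) / zq + cx_c);
            int vc = cvRound(fy_c * q.at<double>(1) / zq + cy_c);
            if (uc >= 0 && uc < depth.cols && vc >= 0 && vc < depth.rows)
                registered.at<unsigned short>(vc, uc) = (unsigned short)zq;
        }
    }
    return registered;
}
After this, registered(v, u) holds the depth of the surface seen by color pixel (u, v), so your inRange mask and the depth image should line up much better.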

Image (color?) segmentation with opencv C++

As the image shows, I'd like to input an image and get several segments like that as a result.
It's much like clustering the closest color segments, so I think it's close to the concept of "mean shift"?
I've searched relevant questions here but still don't know how to start or how to structure this in OpenCV C++. I'm looking for some advice, and I'd really appreciate a piece of implementation code to reference! Thanks for any help!
==================================================
Edit 5/19/2015
Let me add that one implementation I have tried is the watershed here: http://blog.csdn.net/fdl19881/article/details/6749976.
It's not perfect, but it gives the kind of result I want. In that implementation, the user has to operate it manually (draw the watershed lines), so I'm looking for an AUTOMATIC version of it. It sounds a little hard, but I'd appreciate any suggestion or piece of code for doing it.
OpenCV documentation: Link
Parameters: here
Sample code for Meanshift filtering:
#include "opencv2/core/core.hpp"
#include "opencv2/imgproc/imgproc.hpp"
using namespace cv;
using namespace std;
Mat img, res, element;
int main(int argc, char** argv)
{
namedWindow( "Meanshift", 0 );
img = imread( argv[1] );
// GaussianBlur(img, img, Size(5,5), 2, 2);
pyrMeanShiftFiltering( img, res, 20, 45, 3);
imwrite("meanshift.png", res);
imshow( "Meanshift", res );
waitKey();
return 0;
}
This is the output with your image; you might need to do some pre-processing first, or perhaps find better parameters:
EDIT: Output with some Gaussian blur applied beforehand (see the commented-out line in the code)
The problem with looking at existing segmentation approaches is that they are either implemented in Matlab (which nobody outside of a university can use) or they are not automatic. An approach where the user needs to preprocess the picture by choosing objects of interest, or levels that indicate how to split colors, is not useful because it is not automatic. If you like, you can try my OpenCV-based implementation of segmentation described in this blog post. It is not perfect, but it is automatic, does most of the job, and you can actually download the source and try it out.
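If you want to avoid drawing the watershed lines by hand, one common automatic variant (along the lines of the OpenCV watershed tutorial, not the blog code above) seeds the markers from a distance transform. A rough sketch; the Otsu threshold assumes the objects separate cleanly from the background, and the 0.5 peak threshold is a tuning knob:
#include "opencv2/opencv.hpp"

using namespace cv;
using namespace std;

int main(int argc, char** argv)
{
    Mat img = imread(argv[1]);
    Mat gray, bin;
    cvtColor(img, gray, CV_BGR2GRAY);

    // Otsu threshold: rough foreground mask
    threshold(gray, bin, 0, 255, THRESH_BINARY | THRESH_OTSU);

    // Distance transform: local maxima sit near object centers
    Mat dist;
    distanceTransform(bin, dist, CV_DIST_L2, 3);
    normalize(dist, dist, 0, 1.0, NORM_MINMAX);

    // Keep only strong peaks as seeds
    Mat peaks;
    threshold(dist, peaks, 0.5, 1.0, THRESH_BINARY);
    peaks.convertTo(peaks, CV_8U);

    // Each connected peak becomes one numbered watershed seed
    vector<vector<Point> > contours;
    findContours(peaks, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
    Mat markers = Mat::zeros(img.size(), CV_32S);
    for (size_t i = 0; i < contours.size(); i++)
        drawContours(markers, contours, (int)i, Scalar((int)i + 1), -1);

    // Flood from the seeds; boundaries between segments get label -1
    watershed(img, markers);
    img.setTo(Scalar(0, 0, 255), markers == -1); // paint boundaries red
    imwrite("watershed.png", img);
    return 0;
}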

How can I read a grey image line by line in OpenCV C++

I have a grey image and I want to read it line by line.
How can I write this algorithm? Can anyone help me?
Here is the code I'm using:
//#include "stdafx.h"
#include <stdio.h>
#include <cv.h>
#include <iostream>
#include <conio.h>
#include <cxcore.h>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <stdlib.h>
using namespace cv;
Mat dst;
void main()
{
cv::Mat img = cv::imread("capture.jpg",CV_LOAD_IMAGE_GRAYSCALE);
cv::Mat some_row = img.row(0); //gives 1st row
namedWindow( "source",CV_NORMAL);
imshow( "source",img);
namedWindow( "img",CV_NORMAL);
imshow( "img",dst);
waitKey(0);
From the image I would like to sort the colors and test whether I can find them in the order white, then black, then white, then black.
For this reason I'd like to search row by row.
I can share the picture so that you can understand me better.
I have made an algorithm that can detect the circle from the right and the left side, and that works. I need code that searches only for the colors, not for objects, in the image (colors in the order white, black, white, black) to get the correct position I'm searching for,
as shown in the image: https://drive.google.com/file/d/0B1WQBCaQu10geG1uTm40ZG9IVjQ/edit?usp=sharing
I suggest that you read the opencv tutorial on scanning images. You should also have a look at the documentation for the Mat class.
There is more than one way of doing what you need, and different methods have different efficiency and safety trade-offs. The more efficient methods work directly with pointers. This is fast, but you must take care to get the pointer arithmetic right, depending on the number of pixels, the number of channels, and any padding your image may have in memory.
On the other hand, using an iterator, or even something like Mat::at, is safer since these calculations are done for you. But it isn't as efficient.
Have a look at the documentation and choose what is right for your problem. Hope that helps.
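As an illustration of the pointer-based approach, here is a sketch, assuming an 8-bit single-channel image like the one loaded in your code:
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat img = imread("capture.jpg", CV_LOAD_IMAGE_GRAYSCALE);
    if (img.empty()) return -1; // bail out if the load failed

    for (int y = 0; y < img.rows; ++y)
    {
        const uchar* row = img.ptr<uchar>(y); // pointer to the start of row y
        for (int x = 0; x < img.cols; ++x)
        {
            uchar intensity = row[x];
            // ... classify intensity here, e.g. as white or black,
            // to look for your white-black-white-black pattern
        }
    }
    return 0;
}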
You can use .row() method:
cv::Mat img = cv::imread("some_path.jpg",CV_LOAD_IMAGE_GRAYSCALE);//some gray level image
cv::Mat some_row = img.row(0); //gives 1st row
Note that this method causes data sharing with the original matrix img. If you want to have a copy operation, you can use .copyTo() method.
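For example:
cv::Mat row_copy;
img.row(0).copyTo(row_copy); // deep copy, no longer shares data with img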

Extracting Background Image Using GrabCut

I have an image (a .jpg), and I want to extract the background from it. I've googled a lot but have only found tutorials on extracting the foreground.
I've taken the code from another Stack Overflow question. The code is working fine for me, and I've successfully extracted the foreground (as per my requirements). Now I want to completely remove this foreground from the original image. I want it to be something like this:
Background = Original Image - Foreground
The empty space can be filled with black or white color. How can I achieve this?
I've tried this technique:
Mat background = image2 - foreground;
but it gives a completely black image.
Code:
#include <opencv2/opencv.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main()
{
    // Open another image
    Mat image;
    image = cv::imread("images/abc.jpg");
    Mat image2 = image.clone();

    // define bounding rectangle
    cv::Rect rectangle(40, 90, image.cols - 80, image.rows - 170);

    cv::Mat result; // segmentation result (4 possible values)
    cv::Mat bgModel, fgModel; // the models (internally used)

    // GrabCut segmentation
    cv::grabCut(image,    // input image
        result,           // segmentation result
        rectangle,        // rectangle containing foreground
        bgModel, fgModel, // models
        1,                // number of iterations
        cv::GC_INIT_WITH_RECT); // use rectangle

    cout << "oks pa dito" << endl; // debug marker ("still ok here")

    // Get the pixels marked as likely foreground
    cv::compare(result, cv::GC_PR_FGD, result, cv::CMP_EQ);

    // Generate output image
    cv::Mat foreground(image.size(), CV_8UC3, cv::Scalar(255, 255, 255));
    //cv::Mat background(image.size(), CV_8UC3, cv::Scalar(255, 255, 255));
    image.copyTo(foreground, result); // bg pixels not copied

    // draw rectangle on original image
    cv::rectangle(image, rectangle, cv::Scalar(255, 255, 255), 1);
    imwrite("img_1.jpg", image);
    imwrite("Foreground.jpg", foreground);

    Mat background = image2 - foreground;
    imwrite("Background.jpg", background);
    return 0;
}
Note: I'm an OpenCV beginner and don't have much knowledge of it right now. I would be very thankful if you could either post the complete code (as required by me) or just post the lines of code and tell me where they should be placed. Thanks.
P.S. This is my second question on StackOverflow.com. Apologies if I'm not following any convention.
Instead of copying all the pixels that are foreground, copy all the pixels that are not foreground. You can do this by using ~, which inverts the mask:
image.copyTo(background,~result);
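Putting it together with the code in the question (using image2, the untouched clone, and uncommenting the background Mat), a minimal version might be:
cv::Mat background(image2.size(), CV_8UC3, cv::Scalar(255, 255, 255)); // white fill
image2.copyTo(background, ~result); // copy everything that is NOT foreground
imwrite("Background.jpg", background);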
What if you get the pixels marked as likely background instead:
// Get the pixels marked as likely background
cv::compare(result,cv::GC_PR_BGD,result,cv::CMP_EQ);
Edit: The above code misses the GC_BGD pixels. Although a more efficient answer has been given, let's finish what we started:
cv::Mat result_a, result_b;
// Get the pixels marked as background
cv::compare(result, cv::GC_BGD, result_a, cv::CMP_EQ);
// Get the pixels marked as likely background
cv::compare(result, cv::GC_PR_BGD, result_b, cv::CMP_EQ);
// Final result: the union of both background masks
result = result_a + result_b;
Just a small suggestion: @William's answer can be written more concisely as:
result = result & 1;
in order to get the binary foreground mask, since GC_FGD (1) and GC_PR_FGD (3) both have the lowest bit set, while GC_BGD (0) and GC_PR_BGD (2) do not.
Maybe another example will help; in it, I assumed that the middle portion of the image is definitely foreground. So try this link:
Example

Assertion failed in unknown function

I am using this program just to read and display an image.
I don't know why it is showing this odd error:
assertion failed (scn==3 || scn==4) in unknown function, file ......\src\modules\imgproc\src\color.cpp line 3326
I changed some images; sometimes it runs without error, but even when it runs, it shows the window without the image in it. What is wrong?
#include "stdafx.h"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
using namespace cv;
using namespace std;
void main()
{
Mat leftImg,frame=imread("C:\\Users\\user\\Downloads\\stereo_progress.png");
leftImg=imread("C:\\Users\\user\\Downloads\\dm_sl.gif");//add of left camera
cvtColor(leftImg,leftImg,CV_BGR2GRAY);
imwrite("imreadtest.txt",leftImg);
imshow("cskldnsl",leftImg);
getchar();
}
As answered by others, make sure the first parameter of cvtColor is not a single-channel image; check it with type(). It should be CV_8UC3 or similar.
Put waitKey after imshow, and the image will show up.
I do not know why you are saving leftImg to imreadtest.txt. [Though that is not causing the error.]
First, make sure that the image was correctly loaded by testing for leftImg.data != 0.
Then, you can force the number of channels by passing as second parameter to cv::imread() the value CV_LOAD_IMAGE_GRAYSCALE or CV_LOAD_IMAGE_COLOR in order to ensure that you load a grayscale (1 channel) or color (3 channels) image, whatever the type of the image file is.
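For example (note that imread does not support GIF, so the dm_sl.gif load in the question most likely returns an empty matrix, and an empty matrix is exactly what makes the scn==3 || scn==4 assertion fail):
Mat leftImg = imread("C:\\Users\\user\\Downloads\\dm_sl.gif", CV_LOAD_IMAGE_COLOR);
if (!leftImg.data) // equivalently: leftImg.empty()
{
    cout << "Failed to load image" << endl;
    return; // or handle the error
}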
Avoid using the same matrix for both the input and the output when calling cvtColor(). If you don't need the colored image later on, passing a copy is a straightforward solution:
cvtColor(leftImg.clone(), leftImg, CV_BGR2GRAY);
Another solution is using a fresh output matrix:
Mat leftImgGray;
cvtColor(leftImg, leftImgGray, CV_BGR2GRAY);
imshow("cskldnsl",leftImgGray);