Assertion failed in unknown function - c++

I am using this program to just read and display an image.
I don't know why it is showing this odd error:
assertion failed (scn==3 || scn ==4) in unknown function,file......\src\modules\imgproc\src\color.cpp line 3326
I tried some other images; sometimes it runs without the error, but even when it runs, it shows the window without the image in it. What is wrong?
#include "stdafx.h"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
using namespace cv;
using namespace std;
void main()
{
Mat leftImg,frame=imread("C:\\Users\\user\\Downloads\\stereo_progress.png");
leftImg=imread("C:\\Users\\user\\Downloads\\dm_sl.gif");//add of left camera
cvtColor(leftImg,leftImg,CV_BGR2GRAY);
imwrite("imreadtest.txt",leftImg);
imshow("cskldnsl",leftImg);
getchar();
}

As others have answered, make sure the first argument to cvtColor() is not a single-channel image. Check it with type(); it should be CV_8UC3 or similar.
Put waitKey() after imshow(), and the image will show up.
I do not know why you are saving leftImg to imreadtest.txt, though that is not what is causing the error.
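Putting those fixes together, a minimal corrected sketch of the question's code might look like this (the path and window name are taken from the question; the load check is explained in the next answer):
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"

using namespace cv;

int main()
{
    Mat leftImg = imread("C:\\Users\\user\\Downloads\\dm_sl.gif");
    if (leftImg.empty())                      // the load failed, nothing to show
        return -1;
    if (leftImg.type() == CV_8UC3)            // only convert 3-channel images
        cvtColor(leftImg, leftImg, CV_BGR2GRAY);
    imwrite("imreadtest.png", leftImg);       // save to an image format, not .txt
    imshow("cskldnsl", leftImg);
    waitKey(0);                               // lets HighGUI actually draw the window
    return 0;
}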

First, make sure that the image was correctly loaded by testing for leftImg.data != 0.
Then, you can force the number of channels by passing CV_LOAD_IMAGE_GRAYSCALE or CV_LOAD_IMAGE_COLOR as the second parameter to cv::imread(), which guarantees that you load a grayscale (1-channel) or color (3-channel) image regardless of the type of the image file.
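Applied to the loading line from the question, that would look roughly like this (a fragment meant to replace the imread() call inside main()):
// Force a 3-channel load and verify it before calling cvtColor().
cv::Mat leftImg = cv::imread("C:\\Users\\user\\Downloads\\dm_sl.gif", CV_LOAD_IMAGE_COLOR);
if (leftImg.data == 0)   // loading failed; do not pass an empty Mat to cvtColor()
    return -1;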

You cannot use the same matrix for both the input matrix and the output matrix when using cvtColor(). If you don't need the colored image later on, passing a copy is a straightforward solution:
cvtColor(leftImg.clone(), leftImg, CV_BGR2GRAY);
Another solution is using a fresh output matrix:
Mat leftImgGray;
cvtColor(leftImg, leftImgGray, CV_BGR2GRAY);
imshow("cskldnsl",leftImgGray);

Related

Reshape opencv image for PCA

This is probably a rather simple task, but I am not sure how to proceed, since I am new to OpenCV in C++.
I was inspired by this code.
The idea I had was to take a single image, run PCA on the RGB intensities, and visualize the projection of the RGB data onto the three principal components in grayscale.
The first problem I ran into is how to set up the matrix for PCA. Here is my code so far:
#include <opencv2/opencv.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main( )
{
    // Open another image
    Mat image, imageMat;
    image = imread("images/0.jpg");
    imageMat = convertForPCA(image);

    // Do PCA and visualize channels in grayscale.
    // ...

    return 0;
}
So could you help me implement the convertForPCA function? It should take an image matrix and return an n-by-3 float matrix, where n is the number of pixels in the original image.
I think I can proceed with the rest, but I may post more questions if I get stuck and can't find an answer.
I solved my problem, and I put the solution on github in case anyone runs into this later.
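For anyone who lands here first, one possible shape for such a conversion (only a sketch, not necessarily the solution that ended up on GitHub) is to flatten the image with reshape() and convert it to float:
#include <opencv2/opencv.hpp>

// Sketch of a convertForPCA-style helper: flatten an H x W, 3-channel image
// into an (H*W) x 3 single-channel float matrix, one pixel per row.
cv::Mat convertForPCA(const cv::Mat& image)
{
    // reshape(1, rows*cols) keeps the same data but views it as one channel
    // with H*W rows and 3 columns (one column per original channel).
    cv::Mat flat = image.reshape(1, image.rows * image.cols);
    cv::Mat flatFloat;
    flat.convertTo(flatFloat, CV_32F);
    return flatFloat;
}

int main()
{
    cv::Mat image = cv::imread("images/0.jpg");
    if (image.empty()) return -1;

    cv::Mat data = convertForPCA(image);
    // Each row is one pixel, so PCA treats rows as observations.
    cv::PCA pca(data, cv::Mat(), cv::PCA::DATA_AS_ROW, 3);
    cv::Mat projected = pca.project(data);   // n x 3 projection onto the components
    return 0;
}
Each column of projected can then be cloned, reshaped back to image.rows x image.cols, and normalized to 0-255 for display in grayscale.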

How to increase brightness in a specific part of an image using Histogram Equalization

Here is my code for increasing brightness using histogram equalization, but it changes the overall brightness of the image, and I need to increase brightness only in a specific location.
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
using namespace cv;
using namespace std;
int main( int argc, const char** argv {
Mat img = imread("MyPic.JPG", CV_LOAD_IMAGE_COLOR);
if (img.empty())
{
cout << "Image cannot be loaded..!!" << endl;
return -1;
}
cvtColor(img, img, CV_BGR2GRAY);
Mat img_hist_equalized;
equalizeHist(img, img_hist_equalized); //equalize the histogram
namedWindow("Original Image", CV_WINDOW_AUTOSIZE);
namedWindow("Histogram Equalized", CV_WINDOW_AUTOSIZE);
imshow("Original Image", img);
imshow("Histogram Equalized", img_hist_equalized);
waitKey(0); //wait for key press
destroyAllWindows(); //destroy all open windows
return 0;}
Input image:
Output I get:
But this is the output I expected:
The above code is based on histogram equalization. Is there another approach that can increase brightness only in the specified region?
The answer to your question lies in the method used to generate the last image with the expected output. Which commands of a graphical editor did you use to produce that result? Just repeat them with OpenCV.
You can try something like this:
increase contrast,
reduce number of colors to 4,
replace all three "near black" colors by real black, and replace one
"near white" color by real white.
You will have something like this (top image after second step, bottom image after third step):
Is it good enough for your task? Do you need better quality? In any case, try to process your image with a graphical editor before coding, and only once you understand which operations you need to apply to your image, implement them with OpenCV.
To understand the problem better, let's look at the histogram of your initial image:
We can see that the difference between the "expected black" and "expected white" colors is very small. Moreover, inside the "white circle" we can see pixels with "expected black" colors. So changing the histogram alone is not enough to fix this mix of pixels. We need to analyze the surroundings of each pixel (in fact, I did that in the first step, when I increased the contrast of the image). We can discuss methods for this, but first of all we need more information and effort from you. Editing the palette without analyzing the surroundings will give you strange results like this:
So you need to open a graphical editor and find a way to convert your input to the expected output.
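If it helps to get started, here is a rough OpenCV sketch of the steps suggested above (contrast stretch, coarse quantization, remapping near-black/near-white to pure values). The gain, offset and threshold values are guesses for illustration only and would need tuning on the real image:
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"

using namespace cv;

int main()
{
    Mat img = imread("MyPic.JPG", CV_LOAD_IMAGE_GRAYSCALE);
    if (img.empty()) return -1;

    // 1. Increase contrast with a simple linear stretch (gain/offset are guesses).
    Mat contrasted;
    img.convertTo(contrasted, -1, 1.8, -60);

    // 2. Reduce to 4 gray levels by keeping only the two top bits.
    Mat quantized = contrasted & Scalar(0xC0);

    // 3. Send the three darkest levels to black and the brightest one to white.
    Mat remapped;
    threshold(quantized, remapped, 128, 255, THRESH_BINARY);

    imshow("Quantized", quantized);
    imshow("Remapped", remapped);
    waitKey(0);
    return 0;
}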

Image (color?) segmentation with opencv C++

As shown in the graph, I'd like to input an image and get several segments like that as a result.
It's like clustering the closest color segments, so I think it's close to the concept of "mean shift"?
I've searched the relevant questions here but still don't know how to start and structure this in OpenCV C++. I'm looking for some advice, and I'd really appreciate a piece of implementation code to reference! Thanks for any help!
==================================================
Edit 5/19/2015
Let me add that one of the implementations I have tried is the watershed described here: http://blog.csdn.net/fdl19881/article/details/6749976.
It's not perfect, but the result is what I want. In that implementation the user has to intervene manually (draw the watershed lines), so I'm looking for an AUTOMATIC version of it. That sounds a little hard, but I'd appreciate any suggestions or a piece of code to do it.
OpenCV Documentation: Link
Parameters: here
Sample code for Meanshift filtering:
#include "opencv2/core/core.hpp"
#include "opencv2/imgproc/imgproc.hpp"
using namespace cv;
using namespace std;
Mat img, res, element;
int main(int argc, char** argv)
{
namedWindow( "Meanshift", 0 );
img = imread( argv[1] );
// GaussianBlur(img, img, Size(5,5), 2, 2);
pyrMeanShiftFiltering( img, res, 20, 45, 3);
imwrite("meanshift.png", res);
imshow( "Meanshift", res );
waitKey();
return 0;
}
This is the output with your image; you might need to apply some pre-processing first or find better parameters:
EDIT: output with some Gaussian blur applied beforehand (see the commented line in the code):
The problem with looking at existing segmentation approaches is that they are either implemented in Matlab (which hardly anyone outside a university can use) or they are not automatic. An approach where the user needs to preprocess the picture by choosing objects of interest or levels that indicate how to split the colors is not useful, because it is not automatic. If you like, you can try my OpenCV-based implementation of segmentation described in this blog post. It is not perfect, but it is automatic, does most of the job, and you can actually download the source and try it out.
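For completeness, one common way to make the watershed automatic (this is a generic sketch, not the method from the blog post above) is to derive the markers from a distance transform instead of drawing them by hand; all threshold values here are guesses that would need tuning:
#include "opencv2/core/core.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <vector>

using namespace cv;

int main(int argc, char** argv)
{
    Mat img = imread(argv[1]);
    if (img.empty()) return -1;

    // 1. Rough foreground mask via Otsu thresholding on the gray image.
    Mat gray, mask;
    cvtColor(img, gray, CV_BGR2GRAY);
    threshold(gray, mask, 0, 255, THRESH_BINARY | THRESH_OTSU);

    // 2. Distance transform + threshold gives one blob per object core.
    Mat dist;
    distanceTransform(mask, dist, CV_DIST_L2, 3);
    normalize(dist, dist, 0, 1.0, NORM_MINMAX);
    threshold(dist, dist, 0.5, 1.0, THRESH_BINARY);

    // 3. Each blob becomes a numbered seed (marker) for the watershed.
    Mat dist8u;
    dist.convertTo(dist8u, CV_8U, 255);
    std::vector<std::vector<Point> > contours;
    findContours(dist8u, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
    Mat markers = Mat::zeros(dist.size(), CV_32S);
    for (size_t i = 0; i < contours.size(); i++)
        drawContours(markers, contours, (int)i, Scalar((int)i + 1), -1);

    // 4. Run the watershed; region boundaries end up as -1 in 'markers'.
    watershed(img, markers);
    Mat boundaries = (markers == -1);

    imshow("boundaries", boundaries);
    waitKey();
    return 0;
}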

How can I read a grey image line by line in OpenCV C++?

I have a grey image and I want to read every line (row) of the image.
How can I write this algorithm?
Can anyone help me?
Here is the code I am using:
//#include "stdafx.h"
#include <stdio.h>
#include <cv.h>
#include <iostream>
#include <conio.h>
#include <cxcore.h>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <stdlib.h>
using namespace cv;
Mat dst;
void main()
{
cv::Mat img = cv::imread("capture.jpg",CV_LOAD_IMAGE_GRAYSCALE);
cv::Mat some_row = img.row(0); //gives 1st row
namedWindow( "source",CV_NORMAL);
imshow( "source",img);
namedWindow( "img",CV_NORMAL);
imshow( "img",dst);
waitKey(0);
From the image I would like to sort the colors and test whether I can find the order white, then black, then white, then black.
For this reason I would like to search row by row.
I can share the picture so that you can understand me better.
I have made an algorithm that can detect the circle from the right and the left side, and that works. I need code that searches only for the colors, not for objects in the image (colors in the order white, black, white, black), to get the right position I am searching for,
as shown in the image: https://drive.google.com/file/d/0B1WQBCaQu10geG1uTm40ZG9IVjQ/edit?usp=sharing
I suggest that you read the OpenCV tutorial on scanning images. You should also have a look at the documentation for the Mat class.
There is more than one way of doing what you need, and the different methods have different efficiency and safety trade-offs. The more efficient methods work directly with pointers. This is fast, but you must take care to get the pointer arithmetic right, depending on the number of pixels, channels and padding your image may have in memory.
On the other hand, using an iterator or even something like Mat::at is safer, since these calculations are done for you, but it isn't as efficient.
Have a look at the documentation and choose what is right for your problem. Hope that helps.
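As an illustration of the pointer-based approach, here is a minimal sketch for an 8-bit single-channel image (the file name is just a placeholder):
#include <opencv2/highgui/highgui.hpp>
#include <cstdio>

int main()
{
    // Scan an 8-bit grayscale image row by row using row pointers.
    cv::Mat img = cv::imread("capture.jpg", CV_LOAD_IMAGE_GRAYSCALE);
    if (img.empty()) return -1;

    for (int y = 0; y < img.rows; y++)
    {
        const uchar* row = img.ptr<uchar>(y);   // pointer to the first pixel of row y
        int whiteCount = 0;
        for (int x = 0; x < img.cols; x++)
            if (row[x] > 128)                   // treat bright pixels as "white"
                whiteCount++;
        std::printf("row %d: %d white pixels\n", y, whiteCount);
    }
    return 0;
}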
You can use the .row() method:
cv::Mat img = cv::imread("some_path.jpg",CV_LOAD_IMAGE_GRAYSCALE);//some gray level image
cv::Mat some_row = img.row(0); //gives 1st row
Note that this method shares data with the original matrix img. If you want an actual copy, use the .copyTo() method, as sketched below.
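Continuing the snippet above, the copy could be made like this (a two-line sketch):
// Make an independent copy of the first row instead of a shared view.
cv::Mat row_copy;
img.row(0).copyTo(row_copy);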

Extracting Background Image Using GrabCut

I have an image (a .jpg), and I want to extract the background from the original image. I've googled a lot but have only found tutorials on extracting the foreground.
I've taken the code from another Stack Overflow question. The code works fine for me, and I've successfully extracted the foreground (as per my requirements). Now I want to completely remove this foreground from the original image. I want something like this:
Background = Original Image - Foreground
The empty space can be filled with black or white. How can I achieve this?
I've tried this technique:
Mat background = image2 - foreground;
but it gives a completely black image.
Code:-
#include <opencv2/opencv.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main( )
{
    // Open another image
    Mat image;
    image = cv::imread("images/abc.jpg");
    Mat image2 = image.clone();

    // define bounding rectangle
    cv::Rect rectangle(40, 90, image.cols - 80, image.rows - 170);

    cv::Mat result;           // segmentation result (4 possible values)
    cv::Mat bgModel, fgModel; // the models (internally used)

    // GrabCut segmentation
    cv::grabCut(image,                  // input image
                result,                 // segmentation result
                rectangle,              // rectangle containing foreground
                bgModel, fgModel,       // models
                1,                      // number of iterations
                cv::GC_INIT_WITH_RECT); // use rectangle
    cout << "oks pa dito" << endl;

    // Get the pixels marked as likely foreground
    cv::compare(result, cv::GC_PR_FGD, result, cv::CMP_EQ);

    // Generate output image
    cv::Mat foreground(image.size(), CV_8UC3, cv::Scalar(255, 255, 255));
    //cv::Mat background(image.size(), CV_8UC3, cv::Scalar(255, 255, 255));
    image.copyTo(foreground, result); // bg pixels not copied

    // draw rectangle on original image
    cv::rectangle(image, rectangle, cv::Scalar(255, 255, 255), 1);
    imwrite("img_1.jpg", image);
    imwrite("Foreground.jpg", foreground);

    Mat background = image2 - foreground;
    imwrite("Background.jpg", background);

    return 0;
}
Note: I'm an OpenCV beginner and don't have much knowledge of it right now. I would be very thankful if you could either post the complete code (as required by me) or just post the relevant lines and tell me where to place them. Thanks.
P.S. This is my second question on StackOverflow.com. Apologies if I'm not following any conventions.
Instead of copying all the pixels that are foreground, copy all the pixels that are not foreground. You can do this by using ~, which negates the mask:
image.copyTo(background,~result);
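In the context of the question's code, that might look like this (a sketch; it slots in after the compare() call, image2 is the untouched clone from the question, and black fills the removed area):
// Build the background image from everything that is NOT foreground.
cv::Mat background(image.size(), CV_8UC3, cv::Scalar(0, 0, 0));
image2.copyTo(background, ~result);   // foreground pixels stay black
imwrite("Background.jpg", background);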
What if you get the pixels marked as likely background instead:
// Get the pixels marked as likely background
cv::compare(result,cv::GC_PR_BGD,result,cv::CMP_EQ);
Edit: the above code misses the GC_BGD pixels. Although a more efficient answer has been given, let's finish what we started:
// Get the pixels marked as background
cv::compare(result,cv::GC_BGD,result_a,cv::CMP_EQ);
// Get the pixels marked as likely background
cv::compare(result,cv::GC_PR_BGD,result_b,cv::CMP_EQ);
// Final results
result=result_a+result_b;
Just a small suggestion: @William's answer can be written more concisely as:
result = result & 1;
in order to get the binary mask.
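This works because the GrabCut labels are GC_BGD = 0, GC_FGD = 1, GC_PR_BGD = 2 and GC_PR_FGD = 3, so the lowest bit is set exactly for the (probable) foreground labels. A short sketch of using it:
// Collapse the four GrabCut labels into 0/1 masks.
cv::Mat fgMask = result & 1;      // 1 for GC_FGD / GC_PR_FGD, 0 otherwise
cv::Mat bgMask = (fgMask == 0);   // 255 where the pixel is background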
Maybe another example helps, in which I assumed that the middle portion of the image is definitely foreground.
So try the example at this link.