Weird pixel values when printing out? - c++

I know this question has been answered, but when I tried the solutions, it did not take me anywhere.
Below is the code I have written to get the left-most and right-most boundaries of an image that was thresholded using OpenCV's Canny edge detector.
#include<iostream>
#include<vector>
#include<math.h>
#include<string>
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
using namespace cv;
using namespace std;
int main(int argc, char **argv)
{
    int thresh = 100, rows = 0, cols = 0;
    Mat src, src_gray, canny_output;
    src = imread( argv[1] );
    cvtColor( src, src_gray, CV_BGR2GRAY );
    blur( src_gray, src_gray, Size(3,3) );
    Canny( src_gray, canny_output, thresh, thresh*3, 3 );
    Mat boundary_image = Mat::zeros( canny_output.size(), CV_8UC1 );
    rows = canny_output.rows;
    cols = canny_output.cols;
    cout<<rows<<endl<<cols<<endl;
    for(int i=0;i<rows;i++)
    {
        for(int j=0;j<cols;j++)
        {
            cout<<canny_output.at<uchar>(i,j)<<endl;
            if(canny_output.at<uchar>(i,j) == 255)
            {
                boundary_image.at<uchar>(i,j) = 255;
                break;
            }
        }
        for(int k = cols;k>0;k--)
        {
            if(canny_output.at<uchar>(i,k) == 255)
            {
                boundary_image.at<uchar>(i,k) = 255;
                break;
            }
        }
    }
    imshow("boundary_image",boundary_image);
    waitKey(0);
    return 0;
}
Whether the algorithm works or not is secondary, but I am not able to view the values of the Canny edge-detected image. It prints symbols or empty values. Can you please tell me where I am going wrong?
Note: the question is about the print statement
cout<<canny_output.at<uchar>(i,j)<<endl;
which is not giving me any reasonable pixel values at the output. Similar questions were posted and the answer was to use uchar as the data type, but in my case it is not working. It may sound rudimentary, but your help is greatly appreciated.

To correctly cout an unsigned char / uchar, you should cast it first, e.g.,
cout << (int)canny_output.at<uchar>(i,j) << endl;
To read further, check out Why "cout" works weird for "unsigned char"?
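As a minimal, self-contained sketch of why the cast matters (it uses a hypothetical 2x2 matrix standing in for canny_output, not the asker's image):
#include <iostream>
#include "opencv2/core/core.hpp"
using namespace cv;
using namespace std;
int main()
{
    // Hypothetical single-channel 8-bit matrix with one "edge" pixel.
    Mat m = Mat::zeros(2, 2, CV_8UC1);
    m.at<uchar>(0, 0) = 255;
    cout << m.at<uchar>(0, 0) << endl;        // streams a raw char: shows an unreadable symbol
    cout << (int)m.at<uchar>(0, 0) << endl;   // prints the numeric value 255
    cout << +m.at<uchar>(0, 0) << endl;       // unary + also promotes the uchar to int, prints 255
    return 0;
}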

Related

How to relate an image with another image in OpenCV

I'm doing a project in OpenCV C++ where I make the reflection of a given image, just like the flip function but using the coordinates of each pixel. The problem is that the output image I get is all blue with a horizontal line; I believe my code is only affecting the first channel.
I tried imageReflectionFinal.at<Vec3b>(r,c) = image.at<Vec3b>(r,c); to solve it, but nothing changed. I'll leave the code below, thanks in advance.
Mat image = imread("image_dir/image.jpg");
Mat imageReflectionFinal = Mat::zeros(image.size(), image.type());
for(unsigned int r=0; r<image.rows; r++) {
    for(unsigned int c=0; c<image.cols; c++) {
        imageReflectionFinal.at<Vec3b>(r,c) = image.at<Vec3b>(r,c);
        Vec3b sourcePixel = image.at<Vec3b>(r,c);
        imageReflectionFinal.at<Vec3b>(r, c) = (uchar)(c, -r + (220)/2);
    }
}
If you don't want to use the flip function, you can mirror the x-coordinates (columns) of each row. Here is the code:
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
using namespace cv;
int main() {
    // You can change "Mat1b" to "Mat3b" for 3-channel images
    Mat1b image = imread("/ur/image/directory/image.jpg", IMREAD_GRAYSCALE); // IMREAD_GRAYSCALE replaces the deprecated CV_LOAD_IMAGE_GRAYSCALE
    Mat1b imageReflectionFinal = Mat::zeros(image.size(), image.type());
    for (int r = 0; r < image.rows; r++) {
        for (int c = 0; c < image.cols; c++) {
            // The y-axis (r) doesn't change; only the x-axis (cols) is mirrored.
            imageReflectionFinal(r, c) = image(r, image.cols - 1 - c);
        }
    }
    imshow("Result", imageReflectionFinal);
    waitKey(0);
    return 0;
}
This answer is also my reference.
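For the asker's 3-channel case, here is a minimal sketch under the same idea (it assumes a BGR image at the question's image_dir/image.jpg path); copying the whole Vec3b pixel avoids the "only the first channel" problem:
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
using namespace cv;
int main() {
    // Hypothetical path from the question; assumes a 3-channel BGR image on disk.
    Mat3b image = imread("image_dir/image.jpg", IMREAD_COLOR);
    Mat3b mirrored(image.size());
    for (int r = 0; r < image.rows; r++) {
        for (int c = 0; c < image.cols; c++) {
            // Copy the whole BGR pixel from the mirrored column, not a single channel.
            mirrored(r, c) = image(r, image.cols - 1 - c);
        }
    }
    // flip(image, mirrored, 1); // built-in one-liner that performs the same horizontal mirroring
    imshow("Result", mirrored);
    waitKey(0);
    return 0;
}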

Assertion Failed during debug in the 'for' loop, for the if condition statement

I want to set the pixels of grayimg12 (a grayscale image) to 0 at every location where the Mask image has a pixel value of 0. When I put the for loop in a try-catch block I get an assertion failure; without the try-catch the error is "Unhandled Exception at 0x755b0f22" and "cv::Exception at memory location 0x004af338". I am using OpenCV 3.0.0 beta and Visual Studio 2010.
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/calib3d.hpp>
#include <iostream>
#include <sstream>
using namespace cv;
using namespace std;
int main()
{
    // Reading Mask and Creating New Image
    Mat grayimg, grayimg12, input, Mask; int keyboard;
    input = imread("peter.jpg");
    cvtColor(input, grayimg, COLOR_BGR2GRAY);
    grayimg.copyTo(grayimg12, grayimg);
    namedWindow("Gray Converted Frame");
    imshow("Gray Converted Frame", grayimg);
    int r = input.rows; int c = input.cols;
    Mask = grayimg > 100;
    namedWindow("Binary Image");
    imshow("Binary Image", Mask);
    try
    {
        for (int i=1;i<=r;i++)
        {
            for (int j=1;j<=c; j++)
            {
                if (Mask.at<uchar>(i,j) == 0)
                {
                    grayimg12.at<uchar>(i,j) = 0;
                }
                else
                    grayimg12.at<uchar>(i,j) = grayimg.at<uchar>(i,j);
            }
        }
    }
    catch(Exception)
    {
        cout<<"Hi..";
    }
    namedWindow("Gray Output Image");
    imshow("Gray Output Image", grayimg12);
    keyboard = waitKey( 10000 );
    return 0;
}
Your loop indices are off by one, so you get an exception when you try to access memory beyond the image bounds. Change:
for (int i=1;i<=r;i++)
{
    for (int j=1;j<=c; j++)
    {
to:
for (int i=0;i<r;i++)      // for i = 0 to r-1
{
    for (int j=0;j<c; j++)  // for j = 0 to c-1
    {
Note that in C, C++ and related languages, arrays are zero-based. So the valid index range for an array of size N is from 0 to N-1 inclusive.
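As an aside (not part of the fix above), the same masking can be done without an explicit loop by passing Mask to copyTo. A minimal sketch using the question's peter.jpg and threshold:
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
using namespace cv;
int main()
{
    Mat input = imread("peter.jpg");                 // same file as in the question
    Mat grayimg, grayimg12, Mask;
    cvtColor(input, grayimg, COLOR_BGR2GRAY);
    Mask = grayimg > 100;                            // 255 where gray > 100, 0 elsewhere
    grayimg12 = Mat::zeros(grayimg.size(), grayimg.type());
    grayimg.copyTo(grayimg12, Mask);                 // copy only where Mask is non-zero
    imshow("Gray Output Image", grayimg12);
    waitKey(0);
    return 0;
}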

C++ OpenCV How to eliminate small edges or small area of contours

I am working on a project that should work as horizon detection. I am using Canny edges and contours for the horizon detection. It works quite well, but I have a problem with small areas of edges/contours that weren't eliminated by the high Canny threshold and the morphological operations. If I use a higher Canny threshold I start to lose some of the horizon edges.
So the question is: how do I get rid of the small areas of edges/contours? Or how do I display only the single biggest contour?
This picture shows how it should look:
http://i.stack.imgur.com/f4USX.png
This picture shows the small contour areas which I need to eliminate:
http://i.stack.imgur.com/TQi0v.jpg
And here is my code:
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <sstream>
#include <string>
#include <iostream>
#include <opencv\highgui.h>
#include <opencv\cv.h>
#include <opencv\ml.h>
using namespace cv;
using namespace std;
vector<Vec4i> lines;
vector<vector<Point> > contours0;
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
int MAX_KERNEL_LENGTH = 31;
int main(int argc, char** argv)
{
    string filename = "test.avi";
    VideoCapture cap(filename);
    if(!cap.isOpened())
        return -1;
    Mat edges,grey;
    namedWindow("edges",1);
    for(;;)
    {
        Mat frame;
        cap >> frame;
        cvtColor(frame, grey, CV_BGR2GRAY);
        GaussianBlur(grey, grey, Size(5,5),0);
        Mat erodeElement = getStructuringElement( MORPH_RECT,Size(10,10));
        Mat dilateElement = getStructuringElement( MORPH_RECT,Size(10,10));
        erode(grey,grey,erodeElement);
        dilate(grey,grey,dilateElement);
        Canny(grey, edges, 150,300, 3);
        findContours( edges, contours0, hierarchy,
                      CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE );
        contours.resize(contours0.size());
        for( size_t k = 0; k < contours0.size(); k++ ){
            approxPolyDP(Mat(contours0[k]), contours[k], 5, true);
        }
        int idx = 0;
        for( ; idx >= 0; idx = hierarchy[idx][0] )
        {
            drawContours( frame, contours, idx, Scalar(128,255,255), 5, 8, hierarchy );
        }
        imshow("frame", frame);
        imshow("grey", grey);
        imshow("edges", edges);
        if(waitKey(30) >= 0) break;
    }
    return 0;
}
You can filter your contours by their length using the arcLength function (http://docs.opencv.org/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html#arclength).
You can either keep only the contours that are longer than a certain threshold, or keep just the single longest contour; a sketch of both options follows.
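A minimal sketch of both options (filterShortContours, longestContour, and minLength are hypothetical helper names, not part of the asker's pipeline):
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>
using namespace cv;
using namespace std;
// Remove contours whose perimeter is below minLength.
void filterShortContours(vector<vector<Point> >& contours, double minLength)
{
    vector<vector<Point> > kept;
    for (size_t i = 0; i < contours.size(); i++)
    {
        // closed=true: treat the contour as a closed curve when measuring its length
        if (arcLength(contours[i], true) >= minLength)
            kept.push_back(contours[i]);
    }
    contours = kept;
}
// Return the index of the longest contour, or -1 if the list is empty.
int longestContour(const vector<vector<Point> >& contours)
{
    int best = -1;
    double bestLen = 0;
    for (size_t i = 0; i < contours.size(); i++)
    {
        double len = arcLength(contours[i], true);
        if (len > bestLen) { bestLen = len; best = (int)i; }
    }
    return best;
}
You could then draw only the contour returned by longestContour instead of iterating over the whole hierarchy.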

Negative image is completely black

Here is my code, which uses OpenCV 2.4.5
Histogram1D.h
#ifndef HISTOGRAM1D_H
#define HISTOGRAM1D_H
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
using namespace std;
using namespace cv;
class Histogram1D
{
public:
    Histogram1D();
    // Histogram generators
    MatND getHistogram(Mat );
    Mat getHistogramImage(Mat );
    // Generate Negative Image
    Mat applyLookup(Mat ,Mat );
    // Generate improved image with equalized histogram
    Mat equalize(Mat image);
private:
    int histSize[1];        // Number of bins
    float hRanges[2];       // Max and Min pixel values
    const float *ranges[1];
    int channels[1];        // Only one channel will be used
};
#endif // HISTOGRAM1D_H
Histogram1D.cpp
#include "Histogram1D.h"
Histogram1D::Histogram1D()
{
    histSize[0] = 256;
    hRanges[0] = 0.0;
    hRanges[1] = 255.0;
    ranges[0] = hRanges;
    channels[0] = 0;
}

MatND Histogram1D::getHistogram(Mat image)
{
    MatND hist;
    cv::calcHist(&image,1,channels,Mat(),hist,1,histSize,ranges);
    return hist;
}

Mat Histogram1D::getHistogramImage(Mat image)
{
    MatND histo = getHistogram(image);
    // Get minimum and maximum value bins
    double minVal = 0;
    double maxVal = 0;
    minMaxLoc(histo,&minVal,&maxVal,0,0);
    // Image on which to display histogram
    Mat histImage(histSize[0],histSize[0],CV_8U,Scalar(255));
    // Set highest point at 90% of nbins
    int hpt = static_cast<int>(0.9,histSize[0]);
    // Draw a vertical line for each bin
    for(int i=0;i<histSize[0];i++)
    {
        float binVal = histo.at<float>(i);
        int intensity = static_cast<int>(binVal*hpt/maxVal);
        line(histImage,Point(i,histSize[0]),Point(i,histSize[0]-intensity),Scalar::all(0));
    }
    return histImage;
}

Mat Histogram1D::applyLookup(Mat image,Mat lookup)
{
    Mat result;
    cv::LUT(image,lookup,result);
    return result;
}

Mat Histogram1D::equalize(Mat image)
{
    Mat result;
    cv::equalizeHist(image,result);
    return result;
}
HistogramMain.cpp
#include "Histogram1D.h"
int main()
{
Histogram1D h;
Mat image = imread("C:/Users/Public/Pictures/Sample Pictures/Penguins.jpg",CV_LOAD_IMAGE_GRAYSCALE);
cout << "Number of Channels: " << image.channels() << endl;
namedWindow("Image");
imshow("Image",image);
Mat histogramImage = h.getHistogramImage(image);
namedWindow("Histogram");
imshow("Histogram",histogramImage);
Mat thresholded;
threshold(image,thresholded,60,255,THRESH_BINARY);
namedWindow("Binary Image");
imshow("Binary Image",thresholded);
Mat negativeImage;
int dim(256);
negativeImage = h.applyLookup(image,Mat(1,&dim,CV_8U));
namedWindow("Negative Image");
imshow("Negative Image",negativeImage);
Mat equalizedImage;
equalizedImage = h.equalize(image);
namedWindow("Equalized Image");
imshow("Equalized Image",equalizedImage);
waitKey(0);
return 0;
}
When you run this code, the negative image is 100% black! The most amazing thing is that if you remove all other code from HistogramMain.cpp and keep only the code below, which is related to the negative image, you get the correct negative image! Why is this?
I am using the latest version of Qt, which uses the VS 2010 compiler.
Mat negativeImage;
int dim(256);
negativeImage = h.applyLookup(image,Mat(1,&dim,CV_8U));
namedWindow("Negative Image");
imshow("Negative Image",negativeImage);
Your primary difficulty is that the expression Mat(1,&dim,CV_8U) allocates memory for a cv::Mat but does not initialize any values. It is possible that your environment fills uninitialized memory with zeros, which would explain the black image after calling applyLookup(). In any case, you should initialize the values in your lookup table in order to get correct results. For inverting the image, this is easy:
int dim(256);
cv::Mat tab(1,&dim,CV_8U);
uchar* ptr = tab.ptr();
for (size_t i = 0; i < tab.total(); ++i)
{
ptr[i] = 255 - i;
}
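With the table filled in this way, the existing call in HistogramMain.cpp just needs to receive it (tab is the name used in the snippet above):
Mat negativeImage = h.applyLookup(image, tab);  // the LUT now maps every value v to 255 - v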
There are a few other issues with your code:
The line
int hpt = static_cast<int>(0.9,histSize[0]);
should be
int hpt = static_cast<int>(0.9*histSize[0]);
to do what your comment indicates. Pay attention to your compiler warnings!
You also have a problem with your histogram ranges: for an 8-bit image the upper bound passed to calcHist should be 256, not 255, because the upper limit of a uniform range is exclusive (with hRanges[1] = 255.0 the value 255 never falls into the last bin).
By the way, with OpenCV's Python bindings images are NumPy arrays, so taking the negative of an 8-bit grayscale image in Python is simply:
img = 255 - img
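For completeness (an aside, not part of the original answer), the same one-liner negative in C++ can be written without a lookup table:
Mat negativeImage;
bitwise_not(image, negativeImage);              // per-pixel 255 - v for an 8-bit image
// or, using matrix expressions:
// Mat negativeImage2 = Scalar::all(255) - image;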

Colour reduction in images not working

Please have a look at the following code
#include <iostream>
#include <opencv2\highgui\highgui.hpp>
#include <opencv2\core\core.hpp>
using namespace std;
using namespace cv;
void reduceColor(Mat&,int=64);
int main()
{
    Mat image = imread("C:/Users/Public/Pictures/Sample Pictures/Koala.jpg");
    namedWindow("Image");
    imshow("Image",image);
    //reduceColor(image,64);
    waitKey(0);
}

void reduceColor(Mat &image,int division)
{
    int numberOfRows = image.rows;
    int numberOfColumns = image.cols * image.channels();
    for(int i=0;i<numberOfRows;i++)
    {
        uchar *data = image.ptr<uchar>(i);
        for(int pixel=0;pixel<numberOfColumns;pixel++)
        {
            data[i] = data[i]/division*division + division/2;
        }
    }
    namedWindow("Image2");
    imshow("Image2",image);
}
This is computer vision. I am trying to read an image and reduce its colours by iterating over all the pixels and channels. But the colour is not reduced! It simply displays the original image! Please help!
Variable i is never incremented in your nested for loop, but you're setting data[i]. So in all likelihood, a few pixels in the first column are changing after the function call, but nothing else is.
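A minimal sketch of the corrected function (the same body as the question's, with only the index fixed); note also that in the posted main() the call reduceColor(image,64); is commented out, so the function is never invoked at all:
void reduceColor(Mat &image, int division)
{
    int numberOfRows = image.rows;
    int numberOfColumns = image.cols * image.channels();
    for (int i = 0; i < numberOfRows; i++)
    {
        uchar *data = image.ptr<uchar>(i);
        for (int pixel = 0; pixel < numberOfColumns; pixel++)
        {
            // Index with the inner counter (pixel), not the row counter (i):
            // quantize each channel value to the centre of its bucket.
            data[pixel] = data[pixel] / division * division + division / 2;
        }
    }
    namedWindow("Image2");
    imshow("Image2", image);
}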