How to relate an image with another image in OpenCV - C++

I'm doing a project in OpenCV C++ where I make the reflection of a given image, just like the flip function but using the coordinates of each pixel. The problem is that the output image I get is all blue with a horizontal line; I believe my code is only affecting the first channel.
I tried doing imageReflectionFinal.at<Vec3b>(r,c) = image.at<Vec3b>(r,c); in order to solve it, but nothing changed. I'll leave the code below, thanks in advance.
Mat image = imread("image_dir/image.jpg");
Mat imageReflectionFinal = Mat::zeros(image.size(), image.type());
for (unsigned int r = 0; r < image.rows; r++) {
    for (unsigned int c = 0; c < image.cols; c++) {
        imageReflectionFinal.at<Vec3b>(r, c) = image.at<Vec3b>(r, c);
        Vec3b sourcePixel = image.at<Vec3b>(r, c);
        imageReflectionFinal.at<Vec3b>(r, c) = (uchar)(c, -r + (220)/2);
    }
}

If you don't want to use the flip function, you can mirror the x-coordinates (columns) of each row. Here is the code:
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
using namespace cv;

int main() {
    // You can change this to "Mat3b" for 3-channel images
    Mat1b image = imread("/ur/image/directory/image.jpg", CV_LOAD_IMAGE_GRAYSCALE); // IMREAD_GRAYSCALE in OpenCV 3+
    Mat1b imageReflectionFinal = Mat::zeros(image.size(), image.type());
    for (int r = 0; r < image.rows; r++) {
        for (int c = 0; c < image.cols; c++) {
            // The y-axis (r) doesn't change; only the x-axis (cols) is mirrored
            imageReflectionFinal(r, c) = image(r, image.cols - 1 - c);
        }
    }
    imshow("Result", imageReflectionFinal);
    waitKey(0);
    return 0;
}
This answer is also my reference.
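Following up on the comment in the code above: for a 3-channel image, a minimal sketch of the same mirroring (assuming the same placeholder path) just swaps Mat1b for Mat3b, so whole Vec3b pixels are copied and all three channels move together:

#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/imgcodecs.hpp>
using namespace cv;

int main() {
    // Load in colour this time; Mat3b gives typed Vec3b element access
    Mat3b image = imread("/ur/image/directory/image.jpg", IMREAD_COLOR);
    Mat3b mirrored = Mat3b::zeros(image.rows, image.cols);
    for (int r = 0; r < image.rows; r++) {
        for (int c = 0; c < image.cols; c++) {
            // Copying the whole Vec3b avoids the "only one channel" problem
            mirrored(r, c) = image(r, image.cols - 1 - c);
        }
    }
    imshow("Result", mirrored);
    waitKey(0);
    return 0;
}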

Related

Segmentation error when trying to equalize an image

I've been reading about OpenCV and doing some exercises. In this case I want to perform an image equalization. I have implemented the following code, but when I execute it I get the following error:
"Segmentation fault (core dumped)"
So I have no idea what it is due to.
The formula I am trying to use is the following:
eqIm(i, j) = image(i, j) + m - green(i, j), where m is the mean of the image
The code is the following:
#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <stdio.h>
using namespace cv;
using namespace std;

void equalization(cv::Mat &image, cv::Mat &green, int m) {
    Mat eqIm;
    int nl = image.rows;                    // number of lines
    int nc = image.cols * image.channels();
    for (int j = 0; j < nl; j++) {
        uchar* data  = image.ptr<uchar>(j);
        uchar* data2 = green.ptr<uchar>(j);
        uchar* eqIm  = green.ptr<uchar>(j);
        for (int i = 0; i < nc; i++) {
            eqIm[i] = data[i] + m - data2[i];
        }
    }
    cv::imshow("Image", eqIm);
    imwrite("eqIm.png", eqIm);
}
float mean(cv::Mat &image) {
    cv:Scalar tempVal = mean(image);
    float myMAtMean = tempVal.val[0];
    cout << "The value is " << myMAtMean;
}

int main(int argc, char** argv) {
    Mat dst;
    Mat image = cv::imread("img.jpg");
    Mat green = cv::imread("green.jpg");
    cv::imshow("Image", image);
    float m = mean(image);
    equalization(image, green, m);
    cv::namedWindow("Image");
    cv::imshow("Image", image);
    imwrite("equalizated.png", dst);
    waitKey(0);
    return 0;
}
Also, the equalized image that is written out contains nothing.
You never initialized Mat eqIm, so when you reach cv::imshow("Image", eqIm); and imwrite("eqIm.png", eqIm); there is nothing in the Mat. See https://docs.opencv.org/2.4/doc/tutorials/core/mat_the_basic_image_container/mat_the_basic_image_container.html
Also, note that you have two variables named eqIm (the Mat and the row pointer inside the loop); that may be part of the confusion.
One last thing: your mean function may end up calling itself recursively. You should qualify which mean function you are using inside the one you create, i.e.

float mean(cv::Mat &image) {
    cv::Scalar tempVal = cv::mean(image);
    float myMAtMean = tempVal.val[0];
    cout << "The value is " << myMAtMean;
    return myMAtMean;
}
The following is something closer to what you are looking for in your equalization function.
void equalization(cv::Mat &image, cv::Mat &green, int m) {
    Mat eqIm(image.rows, image.cols, image.type());
    int nl = image.rows;                    // number of lines
    int nc = image.cols * image.channels();
    for (int j = 0; j < nl; j++) {          // j is each row
        for (int ec = 0; ec < nc; ec++) {   // ec is each col and channel
            eqIm.data[j*image.cols*image.channels() + ec] =
                image.data[j*image.cols*image.channels() + ec] + m
                - green.data[j*image.cols*image.channels() + ec];
        }
    }
    cv::imshow("Image", eqIm);
    imwrite("eqIm.png", eqIm);
}
I do j*image.cols*image.channels() to step past j full rows, since each row holds the number of columns times the number of channels per pixel.
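As an aside, here is a loop-free sketch of the same eqIm = image + m - green computation, using OpenCV's saturating matrix arithmetic so the uchar maths can't wrap around (the helper name equalizationNoLoop is just for illustration, and it assumes image and green share size and type; note the two saturating steps can clip slightly differently from the exact formula at the extremes):

#include <opencv2/opencv.hpp>
using namespace cv;

Mat equalizationNoLoop(const Mat& image, const Mat& green, int m) {
    Mat eqIm;
    add(image, Scalar::all(m), eqIm); // eqIm = image + m, saturated at 255
    subtract(eqIm, green, eqIm);      // eqIm -= green, saturated at 0
    return eqIm;
}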

create a random colour image in OpenCV

I am new to OpenCV. I am trying to create a random colour image. First I tried to create a random grayscale image; the code is attached below.
void random_colour(Mat image) {
    for (int i = 0; i < image.rows; i++)
        for (int j = 0; j < image.cols; j++)
            image.at<uchar>(i, j) = rand() % 255;
    imwrite("output.tif", image);
}
int main(int argc, char** argv)
{
    Mat img = Mat::zeros(100, 100, CV_8UC1);
    random_colour(img);
    waitKey(0);
    return 0;
}
The output obtained is a random grayscale image, as expected. I then changed the code above to create a random colour image:
void random_colour(Mat image) {
    for (int i = 0; i < image.rows; i++)
    {
        for (int j = 0; j < image.cols; j++)
        {
            image.at<Vec3b>(i, j)[0] = rand() % 255;
            image.at<Vec3b>(i, j)[1] = rand() % 255;
            image.at<Vec3b>(i, j)[2] = rand() % 255;
        }
    }
    imwrite("output.tif", image);
}
The main function remains the same. While doing so, I get a runtime error. Please help me understand what I should do. My understanding is that each pixel in colour space has three components (R, G and B), so I am changing all three components of each pixel, but I am not getting the output I wanted.
No need to reinvent the wheel; please use cv::randu():
#include "opencv2/core.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/highgui.hpp"
using namespace cv;
int main(int argc, char** argv)
{
Mat img(100, 100, CV_8UC3);
randu(img, Scalar(0, 0, 0), Scalar(255, 255, 255));
imshow("random colors image", img);
waitKey();
return 0;
}
This line creates a grayscale image; it doesn't have three channels:
Mat img = Mat::zeros(100, 100, CV_8UC1);
That means this line is going to crash, because it reads three bytes per pixel from a single-channel image:
image.at<Vec3b>(i, j)[0] = rand() % 255;
You need to avoid CV_8UC1, because that creates one channel (C1); try CV_8UC3 instead:
Mat img = Mat::zeros(100, 100, CV_8UC3);
FYI: 8U means 8-bit, so values 0 to 255, and Cx is the number of channels in the image. If you want BGR you need C3.
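Putting both points together, here is a sketch of the fixed per-pixel version, if you'd rather keep the manual loop than switch to cv::randu:

#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <cstdlib>
using namespace cv;

void random_colour(Mat& image) {
    for (int i = 0; i < image.rows; i++) {
        for (int j = 0; j < image.cols; j++) {
            // rand() % 256 covers the full 0..255 range (% 255 never yields 255)
            image.at<Vec3b>(i, j)[0] = (uchar)(rand() % 256);
            image.at<Vec3b>(i, j)[1] = (uchar)(rand() % 256);
            image.at<Vec3b>(i, j)[2] = (uchar)(rand() % 256);
        }
    }
    imwrite("output.tif", image);
}

int main() {
    Mat img = Mat::zeros(100, 100, CV_8UC3); // three channels, to match the Vec3b access
    random_colour(img);
    return 0;
}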

OpenCV: can't get segmentation of image using k-means

Before going into the details of my question, I want you to know that I've read other posts on this forum, but none of them addresses my problem.
In particular, the post here answers the question "how to do this?" with k-means, while I already know that I have to use it and I'd like to know why my implementation doesn't work.
I want to use k-means algorithm to divide pixels of an input image into clusters, according to their color. Then, after completing such task, I want each pixel to have the color of the center of the cluster it's been assigned to.
Taking as reference the OpenCV examples and other stuff retrieved on the web, I've designed the following code:
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <iostream>
using namespace std;
using namespace cv;
int main( int argc, char** argv )
{
Mat src = imread( argv[1], 1 );
// reshape matrix
Mat resized(src.rows*src.cols, 3, CV_8U);
int row_counter = 0;
for(int i = 0; i<src.rows; i++)
{
for(int j = 0; j<src.cols; j++)
{
Vec3b channels = src.at<Vec3b>(i,j);
resized.at<char>(row_counter,0) = channels(0);
resized.at<char>(row_counter,1) = channels(1);
resized.at<char>(row_counter,2) = channels(2);
row_counter++;
}
}
//cout << src << endl;
// change data type
resized.convertTo(resized, CV_32F);
// determine termination criteria and number of clusters
TermCriteria criteria(TermCriteria::COUNT + TermCriteria::EPS, 10, 1.0);
int K = 8;
// apply k-means
Mat labels, centers;
double compactness = kmeans(resized, K, labels, criteria, 10, KMEANS_RANDOM_CENTERS, centers);
// change data type in centers
centers.convertTo(centers, CV_8U);
// create output matrix
Mat result = Mat::zeros(src.rows, src.cols, CV_8UC3);
row_counter = 0;
int matrix_row_counter = 0;
while(row_counter < result.rows)
{
for(int z = 0; z<result.cols; z++)
{
int index = labels.at<char>(row_counter+z, 0);
//cout << index << endl;
Vec3b center_channels(centers.at<char>(index,0),centers.at<char>(index,1), centers.at<char>(index,2));
result.at<Vec3b>(matrix_row_counter, z) = center_channels;
}
row_counter += result.cols;
matrix_row_counter++;
}
cout << "Labels " << labels.rows << " " << labels.cols << endl;
//cvtColor( src, gray, CV_BGR2GRAY );
//gray.convertTo(gray, CV_32F);
imshow("Result", result);
waitKey(0);
return 0;
}
Anyway, at the end of the computation I simply get a black image.
Do you know why?
Strangely, if I initialize the result matrix as
Mat result(src.size(), src.type())
then at the end of the algorithm it displays exactly the input image, without any segmentation.
In particular, I have two doubts:
1) Is it correct to lay the RGB values of a pixel on each row of the resized matrix the way I've done it? Is there a way to do it without a loop?
2) What exactly does centers contain after the kmeans function finishes? It's a 3-column matrix; does it contain the RGB values of the clusters' centers?
Thanks for your support.
- The OpenCV program posted below assigns a user-preferred colour value to a particular pixel of an image.
- ScanImageAndReduceC() is the pixel-scanning method from the OpenCV tutorials that goes through all the pixels of an image.
- I.at<uchar>(10, 10) = 255; is used to access a particular pixel value of an image.
Here is the code:
Mat& ScanImageAndReduceC(Mat& I)
{
    // accept only char type matrices
    CV_Assert(I.depth() == CV_8U);
    int channels = I.channels();
    int nRows = I.rows;
    int nCols = I.cols * channels;
    if (I.isContinuous())
    {
        nCols *= nRows;
        nRows = 1;
    }
    int i, j;
    uchar* p;
    for (i = 0; i < nRows; ++i)
    {
        p = I.ptr<uchar>(i);
        for (j = 0; j < nCols; ++j)
        {
            I.at<uchar>(10, 10) = 255;
        }
    }
    return I;
}
-------Main Program-------
Calling the above method in our main program:
    diff = ScanImageAndReduceC(diff);
    namedWindow("Difference", WINDOW_AUTOSIZE); // Create a window for display.
    imshow("Difference", diff);                 // Show our image inside it.
    waitKey(0);                                 // Wait for a keystroke in the window
    return 0;
}
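The answer above is about pixel access rather than the asker's two doubts, so for completeness here is a sketch of the usual reshape-based approach. For doubt 1), cv::Mat::reshape builds the N x 3 sample matrix without any loop; for doubt 2), centers comes back as a K x 3 CV_32F matrix whose rows are the cluster-centre colours. Note too that kmeans writes labels as 32-bit ints, so they must be read with at<int>, not at<char>:

#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/highgui.hpp>
using namespace cv;

int main(int argc, char** argv)
{
    Mat src = imread(argv[1], IMREAD_COLOR);

    // Doubt 1: one pixel per row, one channel per column (N x 3), no loop needed
    Mat samples = src.reshape(1, src.rows * src.cols);
    samples.convertTo(samples, CV_32F);

    Mat labels, centers; // Doubt 2: centers is K x 3, CV_32F, one BGR centre per row
    TermCriteria criteria(TermCriteria::COUNT + TermCriteria::EPS, 10, 1.0);
    kmeans(samples, 8, labels, criteria, 10, KMEANS_RANDOM_CENTERS, centers);

    // Paint every pixel with its cluster centre; labels are CV_32S
    Mat result(src.size(), src.type());
    for (int p = 0; p < labels.rows; p++) {
        int idx = labels.at<int>(p, 0);
        result.at<Vec3b>(p / src.cols, p % src.cols) =
            Vec3b(saturate_cast<uchar>(centers.at<float>(idx, 0)),
                  saturate_cast<uchar>(centers.at<float>(idx, 1)),
                  saturate_cast<uchar>(centers.at<float>(idx, 2)));
    }
    imshow("Result", result);
    waitKey(0);
    return 0;
}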

Negative image is completely black

Here is my code, which uses OpenCV 2.4.5
Histogram1D.h
#ifndef HISTOGRAM1D_H
#define HISTOGRAM1D_H
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
using namespace std;
using namespace cv;

class Histogram1D
{
public:
    Histogram1D();
    // Histogram generators
    MatND getHistogram(Mat);
    Mat getHistogramImage(Mat);
    // Generate negative image
    Mat applyLookup(Mat, Mat);
    // Generate improved image with equalized histogram
    Mat equalize(Mat image);
private:
    int histSize[1];        // Number of bins
    float hRanges[2];       // Max and min pixel values
    const float *ranges[1];
    int channels[1];        // Only one channel will be used
};
#endif // HISTOGRAM1D_H
Histogram1D.cpp
#include "Histogram1D.h"
Histogram1D::Histogram1D()
{
histSize[0] = 256;
hRanges[0] = 0.0;
hRanges[1] = 255.0;
ranges[0] = hRanges;
channels[0] = 0;
}
MatND Histogram1D::getHistogram(Mat image)
{
MatND hist;
cv::calcHist(&image,1,channels,Mat(),hist,1,histSize,ranges);
return hist;
}
Mat Histogram1D::getHistogramImage(Mat image)
{
MatND histo = getHistogram(image);
//Get minimum and maximum value bins
double minVal = 0;
double maxVal = 0;
minMaxLoc(histo,&minVal,&maxVal,0,0);
//Image on which to display histogram
Mat histImage(histSize[0],histSize[0],CV_8U,Scalar(255));
//Set highest point at 90% of nbins
int hpt = static_cast<int>(0.9,histSize[0]);
//Draw a vertical line for each bin
for(int i=0;i<histSize[0];i++)
{
float binVal = histo.at<float>(i);
int intensity = static_cast<int>(binVal*hpt/maxVal);
line(histImage,Point(i,histSize[0]),Point(i,histSize[0]-intensity),Scalar::all(0));
}
return histImage;
}
Mat Histogram1D::applyLookup(Mat image,Mat lookup)
{
Mat result;
cv::LUT(image,lookup,result);
return result;
}
Mat Histogram1D::equalize(Mat image)
{
Mat result;
cv::equalizeHist(image,result);
return result;
}
HistogramMain.cpp
#include "Histogram1D.h"
int main()
{
Histogram1D h;
Mat image = imread("C:/Users/Public/Pictures/Sample Pictures/Penguins.jpg",CV_LOAD_IMAGE_GRAYSCALE);
cout << "Number of Channels: " << image.channels() << endl;
namedWindow("Image");
imshow("Image",image);
Mat histogramImage = h.getHistogramImage(image);
namedWindow("Histogram");
imshow("Histogram",histogramImage);
Mat thresholded;
threshold(image,thresholded,60,255,THRESH_BINARY);
namedWindow("Binary Image");
imshow("Binary Image",thresholded);
Mat negativeImage;
int dim(256);
negativeImage = h.applyLookup(image,Mat(1,&dim,CV_8U));
namedWindow("Negative Image");
imshow("Negative Image",negativeImage);
Mat equalizedImage;
equalizedImage = h.equalize(image);
namedWindow("Equalized Image");
imshow("Equalized Image",equalizedImage);
waitKey(0);
return 0;
}
When you run this code, the negative image is 100% black! The most amazing thing is that if you remove all the other code from HistogramMain.cpp but keep the code below, which is related to the negative image, you get the correct negative image! Why is this?
I am using the latest version of Qt, which uses the VS 2010 compiler.
Mat negativeImage;
int dim(256);
negativeImage = h.applyLookup(image, Mat(1, &dim, CV_8U));
namedWindow("Negative Image");
imshow("Negative Image", negativeImage);
Your primary difficulty is that the expression Mat(1,&dim,CV_8U) allocates memory for a cv::Mat, but does not initialize any values. It is possible that your environment may fill uninitialized memory with zeros, which would explain the black image after calling applyLookup(). In any case, you should initialize the values in your lookup table in order to achieve correct results. For inverting the image, it is easy:
int dim(256);
cv::Mat tab(1, &dim, CV_8U);
uchar* ptr = tab.ptr();
for (size_t i = 0; i < tab.total(); ++i)
{
    ptr[i] = 255 - i;
}
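With the table filled in, the original call site only needs to pass tab instead of the anonymous Mat (a sketch, reusing the h and image from the question's main):

Mat negativeImage = h.applyLookup(image, tab); // cv::LUT maps each pixel v to tab[v] = 255 - v
namedWindow("Negative Image");
imshow("Negative Image", negativeImage);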
There are a few other issues with your code:
The line
int hpt = static_cast<int>(0.9,histSize[0]);
should be
int hpt = static_cast<int>(0.9*histSize[0]);
to do what your comment indicates. Pay attention to your compiler warnings!
You also have problems with your histogram ranges: calcHist treats the upper boundary as exclusive, so hRanges[1] should be 256.0, not 255.0.
By the way, with OpenCV's Python bindings (cv2), images are NumPy arrays, so negating a grayscale 8-bit image in Python is simply:
img = 255 - img
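The C++ side has an equally short equivalent via saturating Mat expressions, e.g.:

Mat neg = Scalar::all(255) - image; // per-pixel 255 - v
// or, for 8-bit images: cv::bitwise_not(image, neg);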

Colour reduction in images not working

Please have a look at the following code
#include <iostream>
#include <opencv2\highgui\highgui.hpp>
#include <opencv2\core\core.hpp>
using namespace std;
using namespace cv;

void reduceColor(Mat&, int = 64);

int main()
{
    Mat image = imread("C:/Users/Public/Pictures/Sample Pictures/Koala.jpg");
    namedWindow("Image");
    imshow("Image", image);
    //reduceColor(image,64);
    waitKey(0);
}

void reduceColor(Mat &image, int division)
{
    int numberOfRows = image.rows;
    int numberOfColumns = image.cols * image.channels();
    for (int i = 0; i < numberOfRows; i++)
    {
        uchar *data = image.ptr<uchar>(i);
        for (int pixel = 0; pixel < numberOfColumns; pixel++)
        {
            data[i] = data[i]/division*division + division/2;
        }
    }
    namedWindow("Image2");
    imshow("Image2", image);
}
This is computer vision. I am trying to read an image and reduce its colours by iterating through all the pixels and channels. But the colour is not reduced! It simply displays the original image! Please help!
Variable i is never incremented in your inner loop, but you're reading and setting data[i]. So in all likelihood a few pixels in the first column change after the function call, but nothing else does.
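In other words, the inner loop should index with pixel, not i. A sketch of the corrected function (same quantisation formula, just the right index):

void reduceColor(Mat &image, int division)
{
    int numberOfRows = image.rows;
    int numberOfColumns = image.cols * image.channels();
    for (int i = 0; i < numberOfRows; i++)
    {
        uchar *data = image.ptr<uchar>(i);
        for (int pixel = 0; pixel < numberOfColumns; pixel++)
        {
            // Snap each channel value to the centre of its quantisation bucket
            data[pixel] = data[pixel] / division * division + division / 2;
        }
    }
    namedWindow("Image2");
    imshow("Image2", image);
}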