I am new to OpenCV. I am trying to create a random colour image. First I tried to create a random grayscale image. The code is attached below:
void random_colour(Mat image) {
    for (int i = 0; i < image.rows; i++)
        for (int j = 0; j < image.cols; j++)
            image.at<uchar>(i,j) = rand()%255;
    imwrite("output.tif", image);
}
int main( int argc, char** argv )
{
    Mat img=Mat::zeros(100,100,CV_8UC1);
    random_colour(img);
    waitKey(0);
    return 0;
}
The output obtained is a random grayscale (noise) image.
Now I changed the above code to create a random colour image, as follows:
void random_colour(Mat image) {
    for (int i = 0; i < image.rows; i++)
    {
        for (int j = 0; j < image.cols; j++)
        {
            image.at<Vec3b>(i,j)[0] = rand()%255;
            image.at<Vec3b>(i,j)[1] = rand()%255;
            image.at<Vec3b>(i,j)[2] = rand()%255;
        }
    }
    imwrite("output.tif",image);
}
The main function remains the same. While doing so, I get a runtime error. Please help me with what I should do. My understanding is that each pixel in colour space has three components (RGB), so I am changing all three components of each pixel. I am not getting the output I wanted.
No need to reinvent the wheel; please use cv::randu():
#include "opencv2/core.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/highgui.hpp"
using namespace cv;
int main(int argc, char** argv)
{
    Mat img(100, 100, CV_8UC3);
    randu(img, Scalar(0, 0, 0), Scalar(255, 255, 255));
    imshow("random colors image", img);
    waitKey();
    return 0;
}
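The same call also covers the single-channel (grayscale) case from the first attempt; a minimal sketch, assuming the 100x100 size and output file name from the question:

Mat gray(100, 100, CV_8UC1);
randu(gray, Scalar(0), Scalar(256));   // upper bound is exclusive, so this covers 0..255
imwrite("output.tif", gray);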
This line creates a greyscale image; it doesn't have three channels:
Mat img=Mat::zeros(100,100,CV_8UC1);
That means that when you use this line, it's going to crash:
image.at<Vec3b>(i,j)[0] = rand()%255;
You need to not use CV_8UC1, because that creates one channel (C1); try it with CV_8UC3 instead:
Mat img=Mat::zeros(100,100,CV_8UC3);
FYI: 8U means 8-bit, so values 0 to 255; CX is the number of channels in the image. If you want BGR you need C3.
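With CV_8UC3, the original per-pixel loop also works; here is a minimal sketch (assumptions: the same 100x100 size and output file name as in the question, rand() taken from <cstdlib>):

#include <opencv2/opencv.hpp>
#include <cstdlib>
using namespace cv;

void random_colour(Mat image) {
    for (int i = 0; i < image.rows; i++) {
        for (int j = 0; j < image.cols; j++) {
            // each pixel of a CV_8UC3 Mat is a Vec3b holding B, G and R
            Vec3b &pixel = image.at<Vec3b>(i, j);
            pixel[0] = rand() % 256;   // blue
            pixel[1] = rand() % 256;   // green
            pixel[2] = rand() % 256;   // red
        }
    }
    imwrite("output.tif", image);
}

int main() {
    Mat img = Mat::zeros(100, 100, CV_8UC3);   // three channels, so Vec3b access is valid
    random_colour(img);
    return 0;
}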
I'm doing a project in OpenCV C++ where I make the reflection of a given image, just like the flip function but using the coordinates of each pixel. The problem is that the output image I get is all blue with a horizontal line; I believe my code is only affecting the first channel.
I tried to do imageReflectionFinal.at<Vec3b>(r,c) = image.at<Vec3b>(r,c); in order to solve it, but nothing changed. I'll leave the code below; thanks in advance.
Mat image = imread("image_dir/image.jpg");
Mat imageReflectionFinal = Mat::zeros(image.size(), image.type());
for (unsigned int r = 0; r < image.rows; r++) {
    for (unsigned int c = 0; c < image.cols; c++) {
        imageReflectionFinal.at<Vec3b>(r,c) = image.at<Vec3b>(r,c);
        Vec3b sourcePixel = image.at<Vec3b>(r,c);
        imageReflectionFinal.at<Vec3b>(r, c) = (uchar)(c, -r + (220)/2);
    }
}
If you don't want to use the flip function, you can mirror the x-coordinates (cols) of each row. Here is the code:
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
using namespace cv;
int main() {
    // You can change "Mat1b" to "Mat3b" for 3-channel images
    Mat1b image = imread("/ur/image/directory/image.jpg", CV_LOAD_IMAGE_GRAYSCALE);
    Mat1b imageReflectionFinal = Mat::zeros(image.size(), image.type());
    for (unsigned int r = 0; r < image.rows; r++) {
        for (unsigned int c = 0; c < image.cols; c++) {
            // the y-axis (r) doesn't change; only the x-axis (cols) is mirrored
            imageReflectionFinal(r, c) = image(r, image.cols - 1 - c);
        }
    }
    imshow("Result", imageReflectionFinal);
    waitKey(0);
    return 0;
}
This answer is also my reference.
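For the original 3-channel colour image, the same mirroring works with Vec3b pixels so that all channels are copied at once; a minimal sketch, assuming the question's image path:

#include <opencv2/opencv.hpp>
using namespace cv;

int main() {
    Mat3b image = imread("image_dir/image.jpg");
    Mat3b mirrored(image.size());
    for (int r = 0; r < image.rows; r++) {
        for (int c = 0; c < image.cols; c++) {
            // copy the whole BGR pixel from the mirrored column, not a single channel
            mirrored(r, c) = image(r, image.cols - 1 - c);
        }
    }
    imwrite("mirrored.jpg", mirrored);
    return 0;
}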
I've been reading about OpenCV and doing some exercises. In this case I want to perform an image equalization. I have implemented the following code, but when I execute it I get the following error:
"Segmentation fault (core dumped)"
I have no idea what causes it.
The formula I am trying to use is the following:
(equalization formula: eqIm = image + m - green, where m is the mean of the input image)
The code is the following:
#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <stdio.h>
using namespace cv;
using namespace std;
void equalization(cv::Mat &image, cv::Mat &green, int m) {
    Mat eqIm;
    int nl = image.rows;                     // number of lines
    int nc = image.cols * image.channels();
    for (int j = 0; j < nl; j++) {
        uchar* data  = image.ptr<uchar>(j);
        uchar* data2 = green.ptr<uchar>(j);
        uchar* eqIm  = green.ptr<uchar>(j);
        for (int i = 0; i < nc; i++) {
            eqIm[i] = data[i] + m - data2[i];
        }
    }
    cv::imshow("Image", eqIm);
    imwrite("eqIm.png", eqIm);
}

float mean(cv::Mat &image) {
    cv:Scalar tempVal = mean( image );
    float myMAtMean = tempVal.val[0];
    cout << "The value is " << myMAtMean;
}

int main(int argc, char** argv) {
    Mat dst;
    Mat image = cv::imread("img.jpg");
    Mat green = cv::imread("green.jpg");
    cv::imshow("Image", image);
    float m = mean(image);
    equalization(image, green, m);
    cv::namedWindow("Image");
    cv::imshow("Image", image);
    imwrite("equalizated.png", dst);
    waitKey(0);
    return 0;
}
and the image "Equalization.png" that is written contains nothing
You never initialized Mat eqIm, so when you do cv::imshow("Image", eqIm); and imwrite("eqIm.png", eqIm); there is nothing in the Mat. See https://docs.opencv.org/2.4/doc/tutorials/core/mat_the_basic_image_container/mat_the_basic_image_container.html
Also, I should note that you have two variables named eqIm (the Mat and the row pointer inside the loop). That may be part of the confusion.
One last thing: in your mean function you may end up with infinite recursion. You should specify which mean function you are calling inside the mean function you create, i.e.
float mean(cv::Mat &image) {
    cv::Scalar tempVal = cv::mean(image);
    float myMAtMean = tempVal.val[0];
    cout << "The value is " << myMAtMean;
    return myMAtMean;
}
The following is something closer to what you are looking for in your equalization function.
void equalization(cv::Mat &image, cv::Mat &green, int m) {
    Mat eqIm(image.rows, image.cols, image.type());
    int nl = image.rows;                     // number of lines
    int nc = image.cols * image.channels();
    for (int j = 0; j < nl; j++) {           // j is each row
        for (int ec = 0; ec < nc; ec++) {    // ec is each col and channel
            int idx = j*image.cols*image.channels() + ec;
            eqIm.data[idx] = image.data[idx] + m - green.data[idx];
        }
    }
    cv::imshow("Image", eqIm);
    imwrite("eqIm.png", eqIm);
}
I use j*image.cols*image.channels() to step over j full lines (each line holding the number of columns times the number of channels per pixel).
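As a side note (not part of the original answer), the same formula can also be written without raw pointer loops by using OpenCV's saturating matrix arithmetic; a minimal sketch, reusing the question's file names:

#include <opencv2/opencv.hpp>
using namespace cv;

int main() {
    Mat image = imread("img.jpg");
    Mat green = imread("green.jpg");
    if (image.empty() || green.empty()) return -1;

    double m = mean(image)[0];                     // mean of the first channel
    Mat eqIm = image + Scalar::all(m) - green;     // per-element, saturated to [0, 255]
    imwrite("eqIm.png", eqIm);
    return 0;
}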
Before going deep into my question, I want you to know that I've read other posts on this forum, but none addresses my problem. In particular, the post here answers the question "how to do this?" with k-means, while I already know that I have to use it; I'd like to know why my implementation doesn't work.
I want to use k-means algorithm to divide pixels of an input image into clusters, according to their color. Then, after completing such task, I want each pixel to have the color of the center of the cluster it's been assigned to.
Taking as reference the OpenCV examples and other stuff retrieved on the web, I've designed the following code:
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <iostream>
using namespace std;
using namespace cv;
int main( int argc, char** argv )
{
    Mat src = imread( argv[1], 1 );

    // reshape matrix
    Mat resized(src.rows*src.cols, 3, CV_8U);
    int row_counter = 0;
    for (int i = 0; i < src.rows; i++)
    {
        for (int j = 0; j < src.cols; j++)
        {
            Vec3b channels = src.at<Vec3b>(i,j);
            resized.at<char>(row_counter,0) = channels(0);
            resized.at<char>(row_counter,1) = channels(1);
            resized.at<char>(row_counter,2) = channels(2);
            row_counter++;
        }
    }

    //cout << src << endl;

    // change data type
    resized.convertTo(resized, CV_32F);

    // determine termination criteria and number of clusters
    TermCriteria criteria(TermCriteria::COUNT + TermCriteria::EPS, 10, 1.0);
    int K = 8;

    // apply k-means
    Mat labels, centers;
    double compactness = kmeans(resized, K, labels, criteria, 10, KMEANS_RANDOM_CENTERS, centers);

    // change data type in centers
    centers.convertTo(centers, CV_8U);

    // create output matrix
    Mat result = Mat::zeros(src.rows, src.cols, CV_8UC3);
    row_counter = 0;
    int matrix_row_counter = 0;
    while (row_counter < result.rows)
    {
        for (int z = 0; z < result.cols; z++)
        {
            int index = labels.at<char>(row_counter+z, 0);
            //cout << index << endl;
            Vec3b center_channels(centers.at<char>(index,0), centers.at<char>(index,1), centers.at<char>(index,2));
            result.at<Vec3b>(matrix_row_counter, z) = center_channels;
        }
        row_counter += result.cols;
        matrix_row_counter++;
    }

    cout << "Labels " << labels.rows << " " << labels.cols << endl;

    //cvtColor( src, gray, CV_BGR2GRAY );
    //gray.convertTo(gray, CV_32F);

    imshow("Result", result);
    waitKey(0);
    return 0;
}
Anyway, at the end of the computation I simply get a black image.
Do you know why?
Strangely, if I initialize the result matrix as
Mat result(src.size(), src.type())
at the end of the algorithm it displays exactly the input image, without any segmentation.
In particular, I have two doubts:
1) Is it correct to lay the RGB values of a pixel on each row of the resized matrix the way I've done it? Is there a way to do it without a loop?
2) What exactly is the content of centers once the kmeans function has finished? Is it a 3-column matrix containing the RGB values of the clusters' centers?
Thanks for your support.
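Not one of the posted answers, but a minimal sketch of the reshape-and-remap approach described in the question (assumptions: a BGR image loaded with imread and K = 8 as above; Mat::reshape replaces the manual copy loop, and centers comes back as a K x 3 CV_32F matrix whose rows are the cluster-centre colours):

#include <opencv2/opencv.hpp>
using namespace cv;

int main(int argc, char** argv)
{
    Mat src = imread(argv[1], 1);

    // one row per pixel, 3 columns (B, G, R); kmeans needs CV_32F samples
    Mat samples = src.reshape(1, src.rows * src.cols);
    samples.convertTo(samples, CV_32F);

    Mat labels, centers;
    int K = 8;
    TermCriteria criteria(TermCriteria::COUNT + TermCriteria::EPS, 10, 1.0);
    kmeans(samples, K, labels, criteria, 10, KMEANS_RANDOM_CENTERS, centers);

    // give every pixel the (float) colour of its cluster centre, then fold back to image shape
    Mat mapped(src.rows * src.cols, 3, CV_32F);
    for (int i = 0; i < mapped.rows; i++)
        centers.row(labels.at<int>(i, 0)).copyTo(mapped.row(i));

    Mat result = mapped.reshape(3, src.rows);   // back to rows x cols, 3 channels
    result.convertTo(result, CV_8U);

    imshow("Result", result);
    waitKey(0);
    return 0;
}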
- The OpenCV program posted below assigns a user-preferred value to a particular pixel of an image.
- ScanImageAndReduceC() is the pixel-scanning function from the OpenCV tutorials, used to iterate over all the pixels of an image.
- I.at<uchar>(10, 10) = 255; is used to access (and set) a particular pixel value of the image.
Here is the code:
Mat& ScanImageAndReduceC(Mat& I)
{
    // accept only char type matrices
    CV_Assert(I.depth() == CV_8U);

    int channels = I.channels();
    int nRows = I.rows;
    int nCols = I.cols * channels;

    if (I.isContinuous())
    {
        nCols *= nRows;
        nRows = 1;
    }

    int i, j;
    uchar* p;
    for (i = 0; i < nRows; ++i)
    {
        p = I.ptr<uchar>(i);
        for (j = 0; j < nCols; ++j)
        {
            I.at<uchar>(10, 10) = 255;
        }
    }
    return I;
}
-------Main Program-------
Calling the above method in our main program
diff = ScanImageAndReduceC(diff);
namedWindow("Difference", WINDOW_AUTOSIZE);// Create a window for display.
imshow("Difference", diff); // Show our image inside it.
waitKey(0); // Wait for a keystroke in the window
return 0;
}
I am new to OpenCV, and I am trying to find and save the largest cluster of a k-means clustered image. I have:
clustered the image following the method provided by Mercury and Bill the Lizard in the following post (Color classification with k-means in OpenCV),
determined the largest cluster by finding the largest label count from the kmeans output (bestLables)
tried to store the position of the pixels that constitute the largest cluster in an array of Point2i
However, the mystery is that I found myself with a number of stored points that is significantly less than the count obtained when trying to find the largest cluster. In other words: inc < max. Moreover, the number given by inc does not even correspond to any other cluster's point count.
What did I do wrong? Or is there a better way to do what I'm trying to do? Any input will be much appreciated.
Thanks in advance for your precious help!!
#include <iostream>
#include "opencv2/opencv.hpp"
#include<opencv2/highgui/highgui.hpp>
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
using namespace cv;
using namespace std;
int main(int argc, char** argv)
{
    Mat img = imread("pic.jpg", CV_LOAD_IMAGE_COLOR);
    if (!img.data)
    {
        cout << "Could not open or find the image" << std::endl;
        return -1;
    }
    //imshow("img", img);

    Mat imlab;
    cvtColor(img, imlab, CV_BGR2Lab);

    /* Cluster image */
    vector<cv::Mat> imgRGB;
    int k = 5;
    int n = img.rows * img.cols;
    Mat img3xN(n, 3, CV_8U);

    split(imlab, imgRGB);
    for (int i = 0; i != 3; ++i)
        imgRGB[i].reshape(1, n).copyTo(img3xN.col(i));
    img3xN.convertTo(img3xN, CV_32F);

    Mat bestLables;
    kmeans(img3xN, k, bestLables, cv::TermCriteria(), 10, cv::KMEANS_RANDOM_CENTERS);

    /*bestLables = bestLables.reshape(0, img.rows);
    cv::convertScaleAbs(bestLables, bestLables, int(255/k));
    cv::imshow("result", bestLables);*/

    /* Find the largest cluster */
    int max = 0, indx = 0, id = 0;
    int clusters[5];
    for (int i = 0; i < bestLables.rows; i++)
    {
        id = bestLables.at<int>(i, 0);
        clusters[id]++;
        if (clusters[id] > max)
        {
            max = clusters[id];
            indx = id;
        }
    }

    /* save largest cluster */
    int cluster = 1, inc = 0;
    Point2i shape[2000];
    for (int y = 0; y < imlab.rows; y++)
    {
        for (int x = 0; x < imlab.cols; x++)
        {
            if (bestLables.data[y + x*imlab.cols] == cluster) shape[inc++] = { y, x };
        }
    }

    waitKey(0);
    return 0;
}
You are pretty close, but there are a few errors. The code below should work as expected. I also added a small piece of code to show the classification result, where pixels of the largest cluster are red and the others are shades of green.
You never initialized int clusters[5];, so it will contain random numbers at the beginning, compromising its use as an accumulator. I recommend using a vector<int> instead.
You access bestLabels with wrong indices. Instead of bestLables.data[y + x*imlab.cols], it should be bestLables.data[y*imlab.cols + x]. That caused your inc < max issue. In the code below I used a vector<int> to contain indices, since it's easier to see the content of the vector. So I access bestLabels a little differently, i.e. bestLables[y*imlab.cols + x] instead of bestLables.data[y*imlab.cols + x], but the result is the same.
You had Point2i shape[2000];. I used a vector<Point>. Note that Point is just a typedef of Point2i. Since you don't know how many points will be there, it's better to use a dynamic array. If you know that there will be, say, 2000 points, you'd better call reserve to avoid reallocations, but that's not mandatory. With Point2i shape[2000];, if you have more than 2000 points you'll go out of bounds; with a vector you're safe. I used emplace_back to avoid a copy when appending the point (just like you did with the initializer list). Note that the constructor of Point is (x,y), not (y,x).
Using vector<Point> you don't need inc, since you append the value at the end. If you need inc to store the number of points in the largest cluster, simply call int inc = shape.size();
You initialized int cluster = 1. That's an error, you should initialize it with the index of the largest cluster, i.e. int cluster = indx;.
You are calling the vector of planes imgRGB, but you're working on Lab. You'd better change the name, but it's not an issue per se. Also, remember that RGB values are stored in OpenCV as BGR, not RGB (reversed order).
I prefer Mat1b, Mat3b, etc... where possible over Mat. It allows easier access and is more readable (in my opinion). That's not an issue, but you'll see that in my code.
Here we go:
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;
int main(int argc, char** argv)
{
    Mat3b img = imread("path_to_image");
    if (!img.data)
    {
        std::cout << "Could not open or find the image" << std::endl;
        return -1;
    }

    Mat3b imlab;
    cvtColor(img, imlab, CV_BGR2Lab);

    /* Cluster image */
    vector<cv::Mat1b> imgRGB;   // single-channel planes
    int k = 5;
    int n = img.rows * img.cols;
    Mat img3xN(n, 3, CV_8U);

    split(imlab, imgRGB);
    for (int i = 0; i != 3; ++i)
        imgRGB[i].reshape(1, n).copyTo(img3xN.col(i));
    img3xN.convertTo(img3xN, CV_32F);

    vector<int> bestLables;
    kmeans(img3xN, k, bestLables, cv::TermCriteria(), 10, cv::KMEANS_RANDOM_CENTERS);

    /* Find the largest cluster */
    int max = 0, indx = 0, id = 0;
    vector<int> clusters(k, 0);
    for (size_t i = 0; i < bestLables.size(); i++)
    {
        id = bestLables[i];
        clusters[id]++;
        if (clusters[id] > max)
        {
            max = clusters[id];
            indx = id;
        }
    }

    /* Save largest cluster */
    int cluster = indx;
    vector<Point> shape;
    shape.reserve(2000);
    for (int y = 0; y < imlab.rows; y++)
    {
        for (int x = 0; x < imlab.cols; x++)
        {
            if (bestLables[y*imlab.cols + x] == cluster)
            {
                shape.emplace_back(x, y);
            }
        }
    }
    int inc = shape.size();

    // Show results
    Mat3b res(img.size(), Vec3b(0, 0, 0));
    vector<Vec3b> colors;
    for (int i = 0; i < k; ++i)
    {
        if (i == indx) {
            colors.push_back(Vec3b(0, 0, 255));
        } else {
            colors.push_back(Vec3b(0, 255 / (i + 1), 0));
        }
    }

    for (int r = 0; r < img.rows; ++r)
    {
        for (int c = 0; c < img.cols; ++c)
        {
            res(r, c) = colors[bestLables[r*imlab.cols + c]];
        }
    }

    imshow("Clustering", res);
    waitKey(0);
    return 0;
}
Here is my code, which uses OpenCV 2.4.5
Histogram1D.h
#ifndef HISTOGRAM1D_H
#define HISTOGRAM1D_H

#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

using namespace std;
using namespace cv;

class Histogram1D
{
public:
    Histogram1D();
    // Histogram generators
    MatND getHistogram(Mat);
    Mat getHistogramImage(Mat);
    // Generate negative image
    Mat applyLookup(Mat, Mat);
    // Generate improved image with equalized histogram
    Mat equalize(Mat image);

private:
    int histSize[1];        // Number of bins
    float hRanges[2];       // Max and min pixel values
    const float *ranges[1];
    int channels[1];        // Only one channel will be used
};

#endif // HISTOGRAM1D_H
Histogram1D.cpp
#include "Histogram1D.h"
Histogram1D::Histogram1D()
{
    histSize[0] = 256;
    hRanges[0] = 0.0;
    hRanges[1] = 255.0;
    ranges[0] = hRanges;
    channels[0] = 0;
}

MatND Histogram1D::getHistogram(Mat image)
{
    MatND hist;
    cv::calcHist(&image, 1, channels, Mat(), hist, 1, histSize, ranges);
    return hist;
}

Mat Histogram1D::getHistogramImage(Mat image)
{
    MatND histo = getHistogram(image);
    // Get minimum and maximum bin values
    double minVal = 0;
    double maxVal = 0;
    minMaxLoc(histo, &minVal, &maxVal, 0, 0);
    // Image on which to display histogram
    Mat histImage(histSize[0], histSize[0], CV_8U, Scalar(255));
    // Set highest point at 90% of nbins
    int hpt = static_cast<int>(0.9,histSize[0]);
    // Draw a vertical line for each bin
    for (int i = 0; i < histSize[0]; i++)
    {
        float binVal = histo.at<float>(i);
        int intensity = static_cast<int>(binVal*hpt/maxVal);
        line(histImage, Point(i, histSize[0]), Point(i, histSize[0]-intensity), Scalar::all(0));
    }
    return histImage;
}

Mat Histogram1D::applyLookup(Mat image, Mat lookup)
{
    Mat result;
    cv::LUT(image, lookup, result);
    return result;
}

Mat Histogram1D::equalize(Mat image)
{
    Mat result;
    cv::equalizeHist(image, result);
    return result;
}
HistogramMain.cpp
#include "Histogram1D.h"
int main()
{
    Histogram1D h;

    Mat image = imread("C:/Users/Public/Pictures/Sample Pictures/Penguins.jpg", CV_LOAD_IMAGE_GRAYSCALE);
    cout << "Number of Channels: " << image.channels() << endl;
    namedWindow("Image");
    imshow("Image", image);

    Mat histogramImage = h.getHistogramImage(image);
    namedWindow("Histogram");
    imshow("Histogram", histogramImage);

    Mat thresholded;
    threshold(image, thresholded, 60, 255, THRESH_BINARY);
    namedWindow("Binary Image");
    imshow("Binary Image", thresholded);

    Mat negativeImage;
    int dim(256);
    negativeImage = h.applyLookup(image, Mat(1, &dim, CV_8U));
    namedWindow("Negative Image");
    imshow("Negative Image", negativeImage);

    Mat equalizedImage;
    equalizedImage = h.equalize(image);
    namedWindow("Equalized Image");
    imshow("Equalized Image", equalizedImage);

    waitKey(0);
    return 0;
}
When you run this code, the negative image is 100% black! The most amazing thing is, if you remove all other code from HistogramMain.cpp but keep the code below, which is related to the negative image, you will get the correct negative image! Why is this?
I am using the latest version of Qt, which uses the VS 2010 compiler.
Mat negativeImage;
int dim(256);
negativeImage = h.applyLookup(image,Mat(1,&dim,CV_8U));
namedWindow("Negative Image");
imshow("Negative Image",negativeImage);
Your primary difficulty is that the expression Mat(1,&dim,CV_8U) allocates memory for a cv::Mat, but does not initialize any values. It is possible that your environment may fill uninitialized memory with zeros, which would explain the black image after calling applyLookup(). In any case, you should initialize the values in your lookup table in order to achieve correct results. For inverting the image, it is easy:
int dim(256);
cv::Mat tab(1,&dim,CV_8U);
uchar* ptr = tab.ptr();
for (size_t i = 0; i < tab.total(); ++i)
{
    ptr[i] = 255 - i;
}
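Then pass the initialised table to your existing helper instead of the empty Mat; a usage sketch with the question's variables:

negativeImage = h.applyLookup(image, tab);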
There are a few other issues with your code:
The line
int hpt = static_cast<int>(0.9,histSize[0]);
should be
int hpt = static_cast<int>(0.9*histSize[0]);
to do what your comment indicates. Pay attention to your compiler warnings!
You also have problems with your histogram ranges: calcHist treats the upper boundary of each range as exclusive, so hRanges[1] should be 256.0 rather than 255.0 if pixel value 255 is to be counted in the last bin.
By the way, with the OpenCV 2 Python bindings images are NumPy arrays, so to negate a grey 8-bit image in Python it's simply:
img = 255 - img
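In C++ the equivalent one-liner (a sketch, independent of the lookup-table approach above) uses matrix expressions or bitwise_not:

Mat negativeImage = Scalar::all(255) - image;   // element-wise, works for grey or colour 8-bit images
// or equivalently: bitwise_not(image, negativeImage);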