Mat cells set to NULL in OpenCV? - c++

Quick summary:
I create a cv::Mat by
cv::Mat m = cv::Mat::zeros(MAP_HEIGHT, MAP_WIDTH, CV_8UC1);
My approach after this is to check whether I have any polygons in a list of polygons and, if I do, fill them in; lastly I assign m to my public cv::Mat map (defined in the header file).
What happens is basically:
cv::Mat m = cv::Mat::zeros(MAP_HEIGHT, MAP_WIDTH, CV_8UC1);
// possibly fill polygons with 1's. Nothing happens if there are no polygons
map = m;
The logic of my program is that a position (x, y) is allowed if a 0 occupies that cell. So with no polygons, the whole map should be 'legit'.
I have defined this method to check whether a given x-y coordinate is allowed.
bool Map::isAllowed(bool res, int x, int y) {
    unsigned char allowed = 0;
    res = (map.ptr<unsigned char>(y)[x] == allowed);
}
Now the mystery begins.
cout << cv::countNonZero(map) << endl; // prints 0, meaning all cells are 0
for(int i = 0; i < MAP_HEIGHT; i++) {
    unsigned char* c = map.ptr<unsigned char>(i);
    for(int j = 0; j < MAP_WIDTH; j++) {
        cout << c[j] << endl;
    }
} // will print nothing, only outputs empty lines, followed by a newline.
If I print (c[j] == NULL) it prints 1.
If I print the entire Mat I see only 0's flashing over my screen, so they are clearly there.
Why does isAllowed(bool, x, y) return false for (0,0), when there is clearly a 0 there?
Let me know if any more information is needed, thanks!

Problem is solved now, here are my mistakes for future reference:
1: When printing, #Miki pointed out that for unsigned chars the ASCII character gets printed, not the numerical representation.
2: In isAllowedPosition(bool res, int x, int y), res has a primitive type, i.e. it is pushed on the stack and is not a reference to a memory location. When writing to it, I write to the local copy and not to the one passed in as an argument.
Two possible fixes: either pass in a pointer to a memory location and write to that, or simply return the result.
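For future readers, a minimal sketch of the second fix (returning the result; this assumes map is the CV_8UC1 member from above):
bool Map::isAllowed(int x, int y) {
    // a cell is free when it holds 0
    return map.ptr<unsigned char>(y)[x] == 0;
}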

Since your data type is uchar (aka unsigned char), you're printing the ASCII value. Use
cout << int(c[j]) << endl;
to print the actual value.
Also, map.ptr<unsigned char>(y)[x] can be rewritten simply as map.at<uchar>(y,x) or, if you use Mat1b, as map(y,x).
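Putting both points together, a small sketch (again assuming map is the CV_8UC1 matrix from the question):
cout << int(map.at<uchar>(y, x)) << endl; // cast to int so the number prints, not the ASCII character
cv::Mat1b map1b = map; // typed header over the same data, no copy
cout << int(map1b(y, x)) << endl; // same element via operator()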

Related

C++ : Create 3D array out of stacking 2D arrays

In Python I normally use functions like vstack, stack, etc to easily create a 3D array by stacking 2D arrays one onto another.
Is there any way to do this in C++?
In particular, I have loaded a image into a Mat variable with OpenCV like:
cv::Mat im = cv::imread("image.png", 0);
I would like to make a 3D array/Mat of N layers by stacking copies of that Mat variable.
EDIT: This new 3D matrix has to be "travellable" by adding an integer to any of its components, such that if I am at position (x1,y1,1) and I add +1 to the last component, I arrive at (x1,y1,2). Similarly for any of the coordinates/components of the 3D matrix.
SOLVED: Both answers from #Aram and #Nejc do exactly what was expected. I set #Nejc's answer as the correct one for its shorter code.
The NumPy function vstack returns a contiguous array. Any C++ solution that produces vectors or arrays of cv::Mat objects does not reflect the behaviour of vstack in this regard, because separate "layers" belonging to individual cv::Mat objects will not be stored in a contiguous buffer (unless a careful allocation of the underlying buffers is done in advance, of course).
I present a solution that copies all arrays into a three-dimensional cv::Mat object with a contiguous buffer. As far as the idea goes, this answer is similar to Aram's answer, but instead of assigning pixel values one by one, I take advantage of OpenCV functions. At the beginning I allocate a matrix of size N x ROWS x COLS, where N is the number of 2D images I want to "stack" and ROWS x COLS are the dimensions of each of these images.
Then I make N steps. On every step, I obtain the pointer to the location of the first element along the "outer" dimension. I pass that pointer to the constructor of a temporary Mat object that acts as a kind of wrapper around the memory chunk of size ROWS x COLS (no copies are made) that begins at the address the pointer points at. I then use the copyTo method to copy the i-th image into that memory chunk. Code for N = 2:
cv::Mat img0 = cv::imread("image0.png", cv::IMREAD_GRAYSCALE);
cv::Mat img1 = cv::imread("image1.png", cv::IMREAD_GRAYSCALE);
cv::Mat images[2] = {img0, img1}; // you can also use a vector or some other container
int dims[3] = { 2, img0.rows, img0.cols }; // dimensions of the new image
cv::Mat joined(3, dims, CV_8U); // same element type (CV_8U) as the input images
for(int i = 0; i < 2; ++i)
{
    uint8_t* ptr = &joined.at<uint8_t>(i, 0, 0); // pointer to the first element of slice i
    cv::Mat destination(img0.rows, img0.cols, CV_8U, (void*)ptr); // no data copy, see documentation
    images[i].copyTo(destination);
}
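This layout also satisfies the "travellable" requirement from the edit: adding 1 to the outer index moves to the next layer. A short sketch (assuming joined from above; x1 and y1 are just example coordinates):
CV_Assert(joined.isContinuous()); // the whole stack lives in one contiguous buffer
int x1 = 10, y1 = 20; // example coordinates
uint8_t a = joined.at<uint8_t>(0, y1, x1); // position (x1, y1) in layer 0
uint8_t b = joined.at<uint8_t>(1, y1, x1); // outer index + 1 -> same position in layer 1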
This answer is in response to this part of the question above:
In Python I normally use functions like vstack, stack, etc to easily create a 3D array by stacking 2D arrays one onto another.
This is certainly possible: you can add matrices into a vector, which would be your "stack".
For instance you could use a
std::vector<cv::Mat>
This gives you a vector of mats, where each mat is one slice; you "layer" the stack by pushing more slices into the vector.
If you then want to have multiple stacks you can add that vector into another vector:
std::vector<std::vector<cv::Mat>>
To add a matrix to the vector you do:
myVector.push_back(matrix);
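Put together, a minimal sketch of this approach (N and im are assumed from the question: the number of layers and the loaded image):
std::vector<cv::Mat> stack;
for (int i = 0; i < N; ++i) {
    stack.push_back(im.clone()); // clone() so each slice owns its pixels; push_back(im) would share one buffer
}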
Edit for question below
In such case, could I travel from one position (x1, y1, z1) to an immediately upper position doing (x1,y1,z1+1), such that my new position in the matrix would be (x1,y1,z2)?
You'll end up with something that looks a lot like that, but the slices stay independent: the matrix at element [1] of your vector doesn't really have any relationship to element [2], except for the fact that you added it at that point. If you want to build relationships between slices, you will need to code that in yourself.
You can actually create a 3D or ND Mat with OpenCV; you need to use the constructor that takes the dimensions as input, then copy each matrix into (in this case) the 3D array.
#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;
int main() {
    // Dimensions for the constructor... set dims[0..2] to what you want
    int dims[] = {5, 5, 5}; // 5x5x5 3d mat
    Mat m = Mat::zeros(5, 5, CV_8UC1);
    for (int i = 0; i < 5; i++) {
        for (int k = 0; k < 5; k++) {
            m.at<uchar>(i, k) = i + k;
        }
    }
    // Mat constructor specifying 3 dimensions, with the dimension sizes in dims.
    Mat m2(3, dims, CV_8UC1);
    // We fill our 3d mat.
    for (int i = 0; i < m2.size[0]; i++) {
        for (int k = 0; k < m2.size[1]; k++) {
            for (int j = 0; j < m2.size[2]; j++) {
                m2.at<uchar>(i, k, j) = m.at<uchar>(k, j);
            }
        }
    }
    // We print it to show the 5x5x5 array.
    for (int i = 0; i < m2.size[0]; i++) {
        for (int k = 0; k < m2.size[1]; k++) {
            for (int j = 0; j < m2.size[2]; j++) {
                std::cout << (int) m2.at<uchar>(i, k, j) << " ";
            }
            std::cout << endl;
        }
        std::cout << endl;
    }
    return 0;
}
Based on the question and comments, I think you are looking for something like this:
std::vector<cv::Mat> vec_im;
// Inside your for loop:
vec_im.push_back(im);
Then, you can access it by:
Scalar intensity_1 = vec_im[z1].at<uchar>(y, x);
Scalar intensity_2 = vec_im[z2].at<uchar>(y, x);
This assumes that the image is single channel.
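If the images were 3-channel (CV_8UC3) instead, the same lookup would go through Vec3b (a sketch):
Vec3b bgr = vec_im[z1].at<Vec3b>(y, x); // bgr[0], bgr[1], bgr[2] hold B, G, R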

How to reassign an individual element of a 2D parallel vector with a 1D vector?

Hi, I am working on an assignment for my introduction to C++ class and I am completely stumped on a certain part. Basically, the assignment is to open a file that contains individual integers (the data represents a grid of elevation averages), populate a 2D vector with those values, find the min and max values of the vector, convert each element of the vector to a 1D parallel vector containing the RGB representation of that value (in grayscale), and export the data as a PPM file. I have successfully reached the point where I am supposed to convert the values of the vector to the RGB parallel vectors.
My issue is that I am not entirely sure how to assign the new RGB vector to the original element of the vector. Here is the code I have currently:
#include <iostream>
#include <string>
#include <fstream>
#include <vector>
using namespace std;
int main() {
    // initialize inputs
    int rows;
    int columns;
    string fname;
    // input options
    cout << "Enter number of rows" << endl;
    cin >> rows;
    cout << "Enter number of columns" << endl;
    cin >> columns;
    cout << "Enter file name to load" << endl;
    cin >> fname;
    ifstream inputFS(fname);
    // initialize variables
    int variableIndex;
    vector<vector<int>> dataVector (rows, vector<int> (columns));
    int minVal = 0;
    int maxVal = 0;
    // if file is open, populate vector with data from file
    if (inputFS.is_open()) {
        for (int i = 0; i < dataVector.size(); i++) {
            for (int j = 0; j < dataVector.at(0).size(); j++) {
                inputFS >> variableIndex;
                dataVector.at(i).at(j) = variableIndex;
            }
        }
    }
    // find max and min value within data set
    for (int i = 0; i < dataVector.size(); i++) {
        for (int j = 0; j < dataVector.at(0).size(); j++) {
            if (dataVector.at(i).at(j) < minVal) {
                minVal = dataVector.at(i).at(j);
            }
            if (dataVector.at(i).at(j) > maxVal) {
                maxVal = dataVector.at(i).at(j);
            }
        }
    }
    // initialize variables and new color vector
    // -------PART I NEED HELP ON-----------
    int range = maxVal - minVal;
    int remainderCheck = 0;
    double color = 0;
    vector<int> colorVector(3);
    for (int i = 0; i < dataVector.size(); i++) {
        for (int j = 0; j < dataVector.at(0).size(); j++) {
            remainderCheck = dataVector.at(i).at(j) - minVal;
            if (remainderCheck / range == 0) {
                cout << "Color 0 error" << endl;
                // still need to find the RGB value for these cases
            }
            else {
                color = remainderCheck / range;
                fill(colorVector.begin(), colorVector.end()+3, color);
                dataVector.at(i).at(j) = colorVector; // <-- DOESN'T WORK
            }
        }
    }
}
My knowledge of C++ is very limited, so any help would be greatly appreciated. Also, if you have any advice about the / operator issues in the same chunk of code, that too would be incredibly appreciated.
Here are the actual instructions for this specific part:
Step 3 - Compute the color for each part of the map and store
The input data file contains the elevation value for each cell in the map. Now you need to compute the color (in a gray scale between white and black) to use to represent these elevation values. The shade of gray should be scaled to the elevation of the map.
Traditionally, images are represented and displayed in electronic systems (such as TVs and computers) through the RGB color model, an additive color model in which red, green, and blue light are added together in various ways to reproduce a broad array of colors. In this model, colors are represented through three integers (R, G, and B) values between 0 and 255. For example, (0, 0, 255) represents blue and (255, 255, 0) represents yellow. In RGB color, if each of the three RGB values are the same, we get a shade of gray. Thus, there are 256 possible shades of gray from black (0,0,0) to middle gray (128,128,128), to white (255,255,255).
To make the shade of gray, you should use the min and max values in the 2D vector to scale each integer (elevation data) to a value between 0 and 255 inclusive. This can be done with the following equation:
color = (elevation - min elevation) / (max elevation - min elevation) * 255
Check your math to ensure that you are scaling correctly. Check your code to make sure that your arithmetic operations are working as you want. Recall that if a and b are variables declared as integers, the expression a/b will be 0 if a==128 and b==256.
As you compute the shade of grey, store that value in three parallel vectors for R, G and B. Putting the same value for R, G and B will result in grey. The structure of the vector should mirror the vector with the elevation data.
Your professor is asking you to make three additional vector<vector<int>>s, one for each of R, G, and B. (I do not know why you need three separate vectors: they will have identical values, since for grayscale R==G==B for every element. Still, follow the instructions.)
typedef std::vector <int> row_type;
typedef std::vector <row_type> image_type;
image_type dataVector( rows, row_type( columns ) );
image_type R ( rows, row_type( columns ) );
image_type G ( rows, row_type( columns ) );
image_type B ( rows, row_type( columns ) );
Also, be careful whenever you do something like fill(foo.begin(),foo.end()...). Attempting to fill beyond the end of the container (foo.end()+3) is undefined behavior.
Load your dataset into dataVector as before, find your min and max, then for each element find the grayscale value (in [0,255]). Assign that value to each corresponding element of R, G, and B.
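A sketch of that loop, using the containers defined above and assuming dataVector, minVal, and maxVal are already filled in (the double cast avoids the integer-division pitfall your instructions warn about):
double range = double(maxVal - minVal);
for (int i = 0; i < rows; i++) {
    for (int j = 0; j < columns; j++) {
        // scale the elevation into [0, 255]; guard against a flat map (range == 0)
        int shade = (range == 0) ? 0 : int((dataVector.at(i).at(j) - minVal) / range * 255);
        R.at(i).at(j) = shade;
        G.at(i).at(j) = shade;
        B.at(i).at(j) = shade;
    }
}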
Once you have those three square vectors, you can use them to create your PPM file.

Why do I get different values when using different datatypes when accessing pixels in a matrix?

I have a single channel grayscale image (slice).
cout << "num" << slice.channels() << ends; //outputs 1
for(int x = 0; x <= slice.cols; x++) {
    for(int y = 0; y <= slice.rows; y++) {
        Vec3b currentPoint = slice.at<Vec3b>(x,y);
        cout << currentPoint;
    }
}
However, when I try to access a pixel, I expect currentPoint to be a single int, since it is a single-channel image. Instead I get [32, 36, 255], which is odd, as it implies three channels. I appreciate that I am using a type that says Vec3b, but even so, where is it getting the other two elements from?
So I replaced Vec3b with uchar; then I get lots of \377. That is even more confusing.
Even when I do have a 3-channel image, I get odd outputs when trying to access a single element of the Vec3b (I get more \377).
How can this make sense? I must be misunderstanding how the at() method is used.
Firstly, how do I get a single output for each pixel (0-255)?
Also, where am I going wrong when i see \377?
A lot of stuff for a few lines of code...
Since your image is a grayscale image, you should access it with at<uchar>.
Pay attention that the at<> function accepts (rows, cols), which is the opposite of (x,y).
It's faster to scan by line, since the matrix is stored row-wise in memory.
To print out the value of a uchar, you need to cast to int, or you get the ASCII coded character.
The loops should not be <=, but instead <, or you go out of bounds.
So:
for(int y = 0; y < slice.rows; y++) {
    for(int x = 0; x < slice.cols; x++) {
        uchar currentPoint = slice.at<uchar>(y,x);
        cout << int(currentPoint) << " ";
    }
    cout << "\n";
}
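For a genuine 3-channel (CV_8UC3) image, per-pixel access goes through Vec3b, with the same int cast when printing (a sketch; img3 here stands for a hypothetical CV_8UC3 Mat):
Vec3b p = img3.at<Vec3b>(y, x);
cout << int(p[0]) << " " << int(p[1]) << " " << int(p[2]) << "\n"; // B, G, R order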

Opencv convolution matrix gives unusual results

So I have a program that is trying to apply a simple 3x3 convolution matrix to an image.
This is the function that is doing the work:
Mat process(Mat image) {
    int x = 2;
    int y = 2;
    Mat nimage(image); //just a new mat to put the resulting image on
    while (y < image.rows-2) {
        while (x < image.cols-2) {
            nimage.at<uchar>(y,x) = //apply matrix to pixel
                image.at<char>(y-1,x-1)*matrix[0]+
                image.at<char>(y-1,x)*matrix[1]+
                image.at<char>(y-1,x+1)*matrix[2]+
                image.at<char>(y,x-1)*matrix[3]+
                image.at<char>(y,x)*matrix[4]+
                image.at<char>(y,x+1)*matrix[5]+
                image.at<char>(y+1,x-1)*matrix[6]+
                image.at<char>(y+1,x)*matrix[7]+
                image.at<char>(y+1,x+1)*matrix[8];
            //if (total < 0) total = 0;
            //if (total > 255) total = 255;
            //cout << (int)total << ": " << x << "," << y << endl;
            x++;
        }
        x = 0;
        y++;
    }
    cout << "done" << endl;
    return nimage;
}
And the matrix looks like this:
double matrix[9] = {-1, 0, 0,
                     0, 2, 0,
                     0, 0, 0};
And the image that is used as input looks like this:
The desired output (I ran the same matrix on the input image in GIMP):
And the result is... weird:
I think this has to do with the data type I use when I set a pixel of the new image (nimage.at<uchar>(y,x) = ...), because whenever I change it I get a different, yet still incorrect result.
From the OpenCV documentation about the copy constructor of Mat, emphasis mine:
m – Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m. If you want to have an independent copy of the sub-array, use Mat::clone().
So
Mat nimage(image); //just a new mat to put the resulting image on
doesn't actually create a new matrix; it creates a new Mat object, but that object still refers to the same matrix. From then on nimage.at(y,x) acts like image.at(y,x).
To copy the image, use
Mat nimage(image.clone()); //just a new mat to put the resulting image on
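As an aside, OpenCV's built-in filter2D performs this whole operation in one call; a sketch with the kernel above (note that filter2D computes correlation, i.e. it does not flip the kernel, so flip it with cv::flip first if true convolution matters for your kernel):
cv::Mat kernel = (cv::Mat_<double>(3, 3) << -1, 0, 0,
                                             0, 2, 0,
                                             0, 0, 0);
cv::Mat result;
cv::filter2D(image, result, -1, kernel); // ddepth = -1 keeps the source depth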

Finding Local Maxima Grayscale Image opencv

I am trying to create my personal blob detection algorithm.
As far as I know, I first must create different Gaussian kernels with different sigmas (which I am doing using Mat kernel = getGaussianKernel(x,y);), then get the Laplacian of that kernel, and then filter the image with it to build my scale space. Now I need to find the local maxima in each resulting image of the scale space, but I cannot seem to find a proper way to do so. My code so far is
vector<Point> GetLocalMaxima(const cv::Mat Src, int MatchingSize, int Threshold)
{
    vector<Point> vMaxLoc(0);
    if ((MatchingSize % 2 == 0)) // MatchingSize has to be "odd" and > 0
    {
        return vMaxLoc;
    }
    vMaxLoc.reserve(100); // Reserve place for fast access
    Mat ProcessImg = Src.clone();
    int W = Src.cols;
    int H = Src.rows;
    int SearchWidth = W - MatchingSize;
    int SearchHeight = H - MatchingSize;
    int MatchingSquareCenter = MatchingSize/2;
    uchar* pProcess = (uchar *) ProcessImg.data; // The pointer to image data
    int Shift = MatchingSquareCenter * (W + 1);
    int k = 0;
    for(int y = 0; y < SearchHeight; ++y)
    {
        int m = k + Shift;
        for(int x = 0; x < SearchWidth; ++x)
        {
            if (pProcess[m++] >= Threshold)
            {
                Point LocMax;
                Mat mROI(ProcessImg, Rect(x, y, MatchingSize, MatchingSize));
                minMaxLoc(mROI, NULL, NULL, NULL, &LocMax);
                if (LocMax.x == MatchingSquareCenter && LocMax.y == MatchingSquareCenter)
                {
                    vMaxLoc.push_back(Point(x + LocMax.x, y + LocMax.y));
                    // imshow("W1", mROI); cvWaitKey(0); // For debug
                }
            }
        }
        k += W;
    }
    return vMaxLoc;
}
which I found in another thread; it supposedly returns a vector of points where the maxima are. It does return a vector of points, but all the x and y coordinates of each point are always -17891602... What should I do?
Please, if you are going to point me toward something other than correcting my code, be informative, because I know nothing about OpenCV. I am just learning.
The problem here is that your LocMax point is declared inside the inner loop and never initialized, so it's returning garbage data every time. If you look back at the StackOverflow question you linked, you'll see that their similar variable Point maxLoc(0,0) is declared at the top and constructed to point at the middle of the search window. It only needs to be initialized once. Subsequent loop iterations will replace the value with the minMaxLoc function result.
In summary, remove this line in your inner loop:
Point LocMax; // delete this
And add a slightly altered version near the top:
vector <Point> vMaxLoc(0); // This was your original first line
Point LocMax(0,0); // your new second line
That should get you started anyway.
I found it, guys. The problem was that my threshold was too high. I do not understand why it gave me negative points instead of zero points, but lowering the threshold worked.