How to reassign an individual element of a 2D parallel vector with a 1D vector? - c++

Hi, I am working on an assignment for my introduction to C++ class and I am completely stumped on a certain part. Basically, the assignment is to open a file that contains individual integers (the data represents a grid of elevation averages), populate a 2D vector with those values, find the min and max values of the vector, convert each element of the vector to a 1D parallel vector containing the RGB representation of that value (in grey scale), and export the data as a PPM file. I have successfully reached the point where I am supposed to convert the values of the vector to the RGB parallel vectors.
My issue is that I am not entirely sure how to assign the new RGB vector to the original element of the vector. Here is the code I have currently:
#include <iostream>
#include <string>
#include <fstream>
#include <vector>
using namespace std;
int main () {
    // initialize inputs
    int rows;
    int columns;
    string fname;
    // input options
    cout << "Enter number of rows" << endl;
    cin >> rows;
    cout << "Enter number of columns" << endl;
    cin >> columns;
    cout << "Enter file name to load" << endl;
    cin >> fname;
    ifstream inputFS(fname);
    // initialize variables
    int variableIndex;
    vector<vector<int>> dataVector (rows, vector<int> (columns));
    int minVal = 0;
    int maxVal = 0;
    // if file is open, populate vector with data from file
    if(inputFS.is_open()) {
        for (int i = 0; i < dataVector.size(); i++) {
            for (int j = 0; j < dataVector.at(0).size(); j++) {
                inputFS >> variableIndex;
                dataVector.at(i).at(j) = variableIndex;
            }
        }
    }
    // find max and min value within data set
    for (int i = 0; i < dataVector.size(); i++) {
        for (int j = 0; j < dataVector.at(0).size(); j++) {
            if (dataVector.at(i).at(j) < minVal) {
                minVal = dataVector.at(i).at(j);
            }
            if (dataVector.at(i).at(j) > minVal) {
                maxVal = dataVector.at(i).at(j);
            }
        }
    }
    // initialize variables and new color vector
    // -------PART I NEED HELP ON-----------
    int range = maxVal - minVal;
    int remainderCheck = 0;
    double color = 0;
    vector<int> colorVector = 3;
    for (int i = 0; i < dataVector.size(); i++) {
        for (int j = 0; j < dataVector.at(0).size(); j++) {
            remainderCheck = dataVector.at(i).at(j) - minVal;
            if (remainderCheck / range == 0) {
                cout << "Color 0 error" << endl;
                // still need to find the RGB value for these cases
            }
            else {
                color = remainderCheck / range;
                fill(colorVector.begin(),colorVector.end()+3,color);
                dataVector.at(i).at(j) = colorVector; // <-- DOESN'T WORK
            }
        }
    }
}
My knowledge of C++ is very limited, so any help would be greatly appreciated. Also, if you have any advice about the other comment dealing with the / operator issue in the same chunk of code, that too would be incredibly appreciated.
Here are the actual instructions for this specific part:
Step 3 - Compute the color for each part of the map and store
The input data file contains the elevation value for each cell in the map. Now you need to compute the color (in a gray scale between white and black) to use to represent these elevation values. The shade of gray should be scaled to the elevation of the map.
Traditionally, images are represented and displayed in electronic systems (such as TVs and computers) through the RGB color model, an additive color model in which red, green, and blue light are added together in various ways to reproduce a broad array of colors. In this model, colors are represented through three integers (R, G, and B) values between 0 and 255. For example, (0, 0, 255) represents blue and (255, 255, 0) represents yellow. In RGB color, if each of the three RGB values are the same, we get a shade of gray. Thus, there are 256 possible shades of gray from black (0,0,0) to middle gray (128,128,128), to white (255,255,255).
To make the shade of gray, you should use the min and max values in the 2D vector to scale each integer (elevation data) to a value between 0 and 255 inclusive. This can be done with the following equation:
color = (elevation - min elevation) / (max elevation - min elevation) * 255
Check your math to ensure that you are scaling correctly. Check your code to make sure that your arithmetic operations are working as you want. Recall that if a and b are variables declared as integers, the expression a/b will be 0 if a==128 and b==256.
As you compute the shade of grey, store that value in three parallel vectors for R, G and B. Putting the same value for R, G and B will result in grey. The structure of the vector should mirror the vector with the elevation data.

Your professor is asking you to make three additional vector<vector<int>>s: 1 for each of R, G, and B. (I do not know why you need three separate vectors: they will have identical values, since for grayscale R==G==B for every element. Still, follow instructions.)
typedef std::vector <int> row_type;
typedef std::vector <row_type> image_type;
image_type dataVector( rows, row_type( columns ) );
image_type R ( rows, row_type( columns ) );
image_type G ( rows, row_type( columns ) );
image_type B ( rows, row_type( columns ) );
Also, be careful whenever you do something like fill(foo.begin(),foo.end()...). Attempting to fill beyond the end of the container (foo.end()+3) is undefined behavior.
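For example, if you did want to fill all three elements of colorVector with a shade value, a corrected sketch of those two lines from the question would be (note that vector<int> colorVector = 3; does not compile; a three-element vector needs parentheses):
vector<int> colorVector(3);
fill(colorVector.begin(), colorVector.end(), color);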
Load your dataset into dataVector as before, find your min and max, then for each element find the grayscale value (in [0,255]). Assign that value to each corresponding element of R, G, and B.
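A minimal sketch of that conversion loop, reusing dataVector, minVal, and maxVal from the question and the R, G, B vectors declared above (the cast to double avoids the integer-division issue the assignment warns about), could look like:
double range = maxVal - minVal;
for (int i = 0; i < dataVector.size(); i++) {
    for (int j = 0; j < dataVector.at(0).size(); j++) {
        int shade = 0;
        if (range > 0) {
            // scale elevation into [0,255] using floating-point division
            shade = static_cast<int>((dataVector.at(i).at(j) - minVal) / range * 255);
        }
        R.at(i).at(j) = shade;
        G.at(i).at(j) = shade;
        B.at(i).at(j) = shade;
    }
}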
Once you have those three parallel vectors, you can use them to create your PPM file.
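As a rough sketch of that last step, assuming a plain-text (P3) PPM and an already-opened output stream named outFS (a name I am making up here): the header is the magic number, the image width and height, and the maximum color value, followed by one R G B triple per pixel.
outFS << "P3" << endl;
outFS << columns << " " << rows << endl;
outFS << 255 << endl;
for (int i = 0; i < rows; i++) {
    for (int j = 0; j < columns; j++) {
        outFS << R.at(i).at(j) << " " << G.at(i).at(j) << " " << B.at(i).at(j) << " ";
    }
    outFS << endl;
}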

Related

C++ : Create 3D array out of stacking 2D arrays

In Python I normally use functions like vstack, stack, etc to easily create a 3D array by stacking 2D arrays one onto another.
Is there any way to do this in C++?
In particular, I have loaded a image into a Mat variable with OpenCV like:
cv::Mat im = cv::imread("image.png", 0);
I would like to make a 3D array/Mat of N layers by stacking copies of that Mat variable.
EDIT: This new 3D matrix has to be "travellable" by adding an integer to any of its components, such that if I am in the position (x1,y1,1) and I add +1 to the last component, I arrive to (x1,y1,2). Similarly for any of the coordinates/components of the 3D matrix.
SOLVED: Both answers from @Aram and @Nejc do exactly what was expected. I set @Nejc's answer as the correct one for his shorter code.
The Numpy function vstack returns a contiguous array. Any C++ solution that produces vectors or arrays of cv::Mat objects does not reflect the behaviour of vstack in this regard, because the separate "layers" belonging to individual cv::Mat objects will not be stored in a contiguous buffer (unless a careful allocation of underlying buffers is done in advance, of course).
I present the solution that copies all arrays into a three-dimensional cv::Mat object with a contiguous buffer. As far as the idea goes, this answer is similar to Aram's answer. But instead of assigning pixel values one by one, I take advantage of OpenCV functions. At the beginning I allocate the matrix which has a size N X ROWS X COLS, where N is the number of 2D images I want to "stack" and ROWS x COLS are dimensions of each of these images.
Then I make N steps. On every step, I obtain the pointer to the location of the first element along the "outer" dimension. I pass that pointer to the constructor of a temporary Mat object that acts as a kind of wrapper around the memory chunk of size ROWS x COLS (no copies are made) that begins at the address pointed to by that pointer. I then use the copyTo method to copy the i-th image into that memory chunk. Code for N = 2:
cv::Mat img0 = cv::imread("image0.png", cv::IMREAD_GRAYSCALE);
cv::Mat img1 = cv::imread("image1.png", cv::IMREAD_GRAYSCALE);
cv::Mat images[2] = {img0, img1}; // you can also use vector or some other container
int dims[3] = { 2, img0.rows, img0.cols }; // dimensions of new image
cv::Mat joined(3, dims, CV_8U); // same element type (CV_8U) as input images
for(int i = 0; i < 2; ++i)
{
uint8_t* ptr = &joined.at<uint8_t>(i, 0, 0); // pointer to first element of slice i
cv::Mat destination(img0.rows, img0.cols, CV_8U, (void*)ptr); // no data copy, see documentation
images[i].copyTo(destination);
}
This answer is in response to the question above of:
In Python I normally use functions like vstack, stack, etc to easily create a 3D array by stacking 2D arrays one onto another.
This is certainly possible, you can add matrices into a vector which would be your "stack"
For instance you could use a
std::vector<cv::Mat>
This would give you a vector of Mats, where each Mat is one slice, and you could "layer" them by adding more slices to the vector.
If you then want to have multiple stacks you can add that vector into another vector:
std::vector<std::vector<cv::Mat>>
To add a matrix to the vector you do:
myVector.push_back(matrix);
Edit for question below
In such case, could I travel from one position (x1, y1, z1) to an immediately upper position doing (x1,y1,z1+1), such that my new position in the matrix would be (x1,y1,z2)?
You'll end up with something that looks a lot like this: if you have a matrix at element [1] in your vector, it doesn't really have any relationship to element [2] except for the fact that you added it at that point. If you want to build relationships, then you will need to code that in yourself.
You can actually create a 3D or ND Mat with OpenCV; you need to use the constructor that takes the dimensions as input. Then copy each matrix into (in this case) the 3D array:
#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;
int main() {
    // Dimensions for the constructor... set dims[0..2] to what you want
    int dims[] = {5, 5, 5}; // 5x5x5 3d mat
    Mat m = Mat::zeros(5, 5, CV_8UC1);
    for (int i = 0; i < 5; i++) {
        for (int k = 0; k < 5; k++) {
            m.at<uchar>(i, k) = i + k;
        }
    }
    // Mat with constructor specifying 3 dimensions, with dimension sizes in dims.
    Mat mat3d = Mat(3, dims, CV_8UC1);
    // We fill our 3d mat.
    for (int i = 0; i < mat3d.size[0]; i++) {
        for (int k = 0; k < mat3d.size[1]; k++) {
            for (int j = 0; j < mat3d.size[2]; j++) {
                mat3d.at<uchar>(i, k, j) = m.at<uchar>(k, j);
            }
        }
    }
    // We print it to show the 5x5x5 array.
    for (int i = 0; i < mat3d.size[0]; i++) {
        for (int k = 0; k < mat3d.size[1]; k++) {
            for (int j = 0; j < mat3d.size[2]; j++) {
                std::cout << (int) mat3d.at<uchar>(i, k, j) << " ";
            }
            std::cout << endl;
        }
        std::cout << endl;
    }
    return 0;
}
Based on the question and comments, I think you are looking for something like this:
std::vector<cv::Mat> vec_im;
//In side for loop:
vec_im.push_back(im);
Then, you can access it by:
Scalar intensity_1 = vec_im[z1].at<uchar>(y, x);
Scalar intensity_2 = vec_im[z2].at<uchar>(y, x);
This assumes that the image is single channel.

Mat cells set to NULL in OpenCV?

Quick summary:
I create a cv::Mat by
cv::Mat m = cv::Mat::zeros(MAP_HEIGHT, MAP_WIDTH, CV_8UC1)
My approach after this is to see if I have any polygons in a list of polygons, and if I do, fill them in, and lastly I assign m to my public cv::Mat map (defined in the header file).
What happens is basically:
cv::Mat m = cv::Mat::zeros(MAP_HEIGHT, MAP_WIDTH, CV_8UC1);
// possibly fill polygons with 1's. Nothing happens if there are no polygons
map = m;
The logic of my program is that position x,y is allowed if a 0 is occupying the cell. So no polygons => all map should be 'legit'.
I have defined this method to check whether a given x-y coordinate is allowed.
bool Map::isAllowed(bool res, int x, int y) {
unsigned char allowed = 0;
res = (map.ptr<unsigned char>(y)[x] == allowed);
}
Now the mystery begins.
cout << cv::countNonZero(map) << endl; // prints 0, meaning all cells are 0
for(int i = 0; i < MAP_HEIGHT; i++) {
unsigned char* c = map.ptr<unsigned char>(i);
for(int j = 0; j < MAP_WIDTH; j++) {
cout << c[j] << endl;
}
} // will print nothing, only outputs empty lines, followed by a newline.
If I print (c[j] == NULL) it prints 1.
If I print the entire Mat I see only 0's flashing over my screen, so they are clearly there.
Why does isAllowed(bool, x, y) return false for (0,0), when there is clearly a 0 there?
Let me know if any more information is needed, thanks!
Problem is solved now, here are my mistakes for future reference:
1: When printing, @Miki pointed out that for unsigned characters the ASCII character gets printed, not the numerical representation.
2: In isAllowedPosition(bool res, int x, int y), res has a primitive type, i.e. it is pushed on the stack and is not a reference to a memory location. When writing to it, I write to the local copy and not to the one passed in as an argument.
Two possible fixes: either pass in a pointer to a memory location and write to that, or simply return the result.
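For reference, a minimal sketch of the second fix (returning the result instead of writing to the by-value parameter), using the same accessor as the method above, could look like:
bool Map::isAllowed(int x, int y) {
    const unsigned char allowed = 0;
    return map.ptr<unsigned char>(y)[x] == allowed;
}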
Since your data type is uchar (aka unsigned char), you're printing the ASCII value. Use
cout << int(c[j]) << endl;
to print the actual value.
Also, map.ptr<unsigned char>(y)[x] can be rewritten simply as map.at<uchar>(y,x), or, if you use Mat1b, as map(y,x).
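For instance, a small sketch of the Mat1b variant, assuming map holds the same CV_8UC1 data:
cv::Mat1b map1b = map;              // no copy, just a typed header over the same data
bool allowed = (map1b(y, x) == 0);  // element access with operator()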

How to evenly distribute numbers 0 to n into m different containers

I am trying to write an algorithm for a program to draw an even, vertical gradient across an image. That is, I want to change the pixel color from 0 to 255 along the m rows of an image, but I cannot find a good generic algorithm to do so.
I've tried to implement something like this using OpenCV, but it does not seem to work:
#include <opencv2/opencv.hpp>
int main(){
//Make the image white.
cv::Mat Img(w_height, w_width, CV_8U);
for (int y = 0; y < Img.rows; y += 1) {
for (int x = 0; x < Img.cols; x++) {
Img.at<uchar>(y, x) = 255;//White
}
}
// try to create an even, vertical gradient
for(int row = 0; row < Img.rows; row++ ){
for(int col = 0; col < Img.cols; col++){
Img.at<uchar>(row, col) = col % 256;
}
}
cv::imshow("Window", Img);
cv::waitKey(0);
return 0;
}
Solving this problem requires the knowledge of three simple tricks:
1. Interpolation:
The process of gradually changing from one value to another is called interpolation. There are multiple ways of interpolating color values: the simplest one is to interpolate each component linearly, i.e. in the form of:
interpolated = start * (1-t) + dest * t.
Where
start is the value you are interpolating from towards the value dest.
t denotes how close the interpolated value should be to the destination value dest on a scale of 0 to 1 with 0 being the pure start color and 1 being the pure dest color.
You will find that linear interpolation in the RGB color space doesn't produce natural color paths. As an advanced step, you could utilise the HSV color space instead. See this question for further information about color interpolation.
2. Discretisation:
Unfortunately, interpolation produces real numbers. Thus, we have to discretise them to be able to use them as integer color values. The best way to do this is to round to the nearest integer by using e.g. round() in C++.
3. Finding the interpolation point:
Now, we just need a real-valued interpolation point t at each row of our image. We can deduce a formula for this by analysing what output we want to see:
For the bottommost row (row 1) we want to have t == 0 since that is where we want our pure start color to appear.
For the topmost row (row m) we want to have t == 1 since that is where we want the pure destination color to appear.
For every other row we want t to scale linearly with the distance to the bottommost row.
A formula to achieve this result is:
t = rowIndex / m
The approach can readily be adapted to other gradient directions by changing this formula appropriately.
Sample code (using linear interpolation, C++):
#include <algorithm>
#include <cmath>
Color interpolateRGB(Color from, Color to, float t)
{
    // Clamp __t__ to range [0,1]
    t = std::max(0.f, std::min(t, 1.f));
    // Interpolate each RGB component
    uint8_t r = std::roundf(from.r * (1-t) + to.r * t);
    uint8_t g = std::roundf(from.g * (1-t) + to.g * t);
    uint8_t b = std::roundf(from.b * (1-t) + to.b * t);
    return Color(r, g, b);
}
void fillWithGradient(Image& img, Color from, Color to)
{
    for(size_t row = 0; row < img.numRows(); ++row)
    {
        // cast to float so the interpolation point is not truncated by integer division
        Color value = interpolateRGB(from, to, static_cast<float>(row) / (img.numRows() - 1));
        // Set all pixels of this row to __value__
        for(size_t col = 0; col < img.numCols(); ++col)
        {
            img.setPixel(row, col, value);
        }
    }
}
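Applied to the OP's single-channel cv::Mat, the same idea could be sketched (for a black-to-white vertical gradient, assuming an image with at least two rows) as:
#include <cmath>
#include <opencv2/opencv.hpp>
int main() {
    cv::Mat img(256, 512, CV_8U);                        // arbitrary example size
    for (int row = 0; row < img.rows; row++) {
        // interpolation point for this row, in [0,1]
        float t = static_cast<float>(row) / (img.rows - 1);
        uchar value = static_cast<uchar>(std::round(t * 255.0f));
        for (int col = 0; col < img.cols; col++) {
            img.at<uchar>(row, col) = value;             // same shade across the whole row
        }
    }
    cv::imshow("Window", img);
    cv::waitKey(0);
    return 0;
}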
The basic idea would be to use the remainder r of the division n/(m-1) and add it to n on each iteration:
#include <iostream>
#include <vector>
using namespace std;
vector<int> gradient( int n, int m ) {
div_t q { 0, 0 };
vector<int> grad(m);
for( int i=1 ; i<m ; ++i ) {
q = div( n + q.rem, m-1 );
grad[i] = grad[i-1] + q.quot;
}
return grad;
}
int main() {
for( int i : gradient(255,10) ) cout << i << ' ';
cout << '\n';
return 0;
}
Output:
0 28 56 85 113 141 170 198 226 255
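To get the OP's vertical gradient from this, one rough sketch (assuming the Img matrix from the question) is to compute the ramp once per image and assign one value per row:
vector<int> grad = gradient(255, Img.rows);
for (int row = 0; row < Img.rows; row++) {
    for (int col = 0; col < Img.cols; col++) {
        Img.at<uchar>(row, col) = static_cast<uchar>(grad[row]);
    }
}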

Opencv Mat vector assignment to a row of a matrix, fastest way?

What is the fastest way of assigning a vector to a matrix row in a loop? I want to fill a data matrix along its rows with vectors. These vectors are computed in a loop, which runs until all the entries of the data matrix are filled with those vectors.
Currently I am using the cv::Mat::at<>() method to access the elements of the matrix and fill them with the vector; however, this process seems quite slow. I have tried another way, using X.row(index) = data_vector; it runs fast but fills my matrix X with some garbage values which I cannot explain.
I read that there is another way of using pointers (the fastest way); however, I am not able to understand it. Can somebody explain how to use them, or other methods?
Here is a part of my code:
#define OFFSET 2
cv::Mat im = cv::imread("001.png", CV_LOAD_IMAGE_GRAYSCALE);
cv::Mat X = cv::Mat((im.rows - 2*OFFSET)*(im.cols - 2*OFFSET), 25, CV_64FC1); // Holds the training data. Data contains image patches
cv::Mat patch = cv::Mat(5, 5, im.type()); // Holds a cropped image patch
typedef cv::Vec<float, 25> Vec25f;
int ind = 0;
for (int row = 0; row < (im.rows - 2*OFFSET); row++){
for (int col = 0; col < (im.cols - 2*OFFSET); col++){
cv::Mat temp_patch = im(cv::Rect(col, row, 5, 5)); // crop an image patch (5x5) at each pixel
patch = temp_patch.clone(); // Needs to do this because temp_patch is not continuous in memory
patch.convertTo(patch, CV_64FC1);
Vec25f data_vector = patch.reshape(0, 1); // make it row vector (1X25).
for (int i = 0; i < 25; i++)
{
X.at<float>(ind, i) = data_vector[i]; // Currently I am using this way (quite slow).
}
//X_train.row(ind) = patch.reshape(0, 1); // Tried this but it assigns some garbage values to the data matrix!
ind += 1;
}
}
To do it the regular OpenCV way you could do:
ImageMat.row(RowIndex) = RowMat.clone();
or
RowMat.copyTo(ImageMat.row(RowIndex));
Haven't tested for correctness or speed.
Just a couple of edits in your code
double * xBuffer = X.ptr<double>(0);
for (int row = 0; row < (im.rows - 2*OFFSET); row++){
for (int col = 0; col < (im.cols - 2*OFFSET); col++){
cv::Mat temp_patch = im(cv::Rect(col, row, 5, 5)); // crop an image patch (5x5) at each pixel
patch = temp_patch.clone(); // Needs to do this because temp_patch is not continuous in memory
patch.convertTo(patch, CV_64FC1);
memcpy(xBuffer, patch.data, 25*sizeof(double));
xBuffer += 25;
}
}
Also, you don't seem to do any computation in patch, just extract grey level values, so you can create X with the same type as im and convert it to double at the end. In this way, you could memcpy each row of your patch, the address in memory being unsigned char* buffer = im.ptr(row) + col.
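A rough sketch of that suggestion (keeping X in the image's uchar type and converting once at the end; the names X8, dst, and numPatches are illustrative, and this is untested):
int numPatches = (im.rows - 2*OFFSET) * (im.cols - 2*OFFSET);
cv::Mat X8(numPatches, 25, im.type());           // same type as im (CV_8U), freshly allocated so continuous
uchar* dst = X8.ptr<uchar>(0);
for (int row = 0; row < (im.rows - 2*OFFSET); row++){
    for (int col = 0; col < (im.cols - 2*OFFSET); col++){
        for (int r = 0; r < 5; r++){
            // copy one 5-pixel row of the patch straight out of im
            memcpy(dst, im.ptr<uchar>(row + r) + col, 5);
            dst += 5;
        }
    }
}
cv::Mat X;
X8.convertTo(X, CV_64FC1);                       // convert the whole matrix to double once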
According to the docs:
if you need to process a whole row of a matrix, the most efficient way is to get the pointer to the row first, and then just use the plain C operator []:
// compute sum of positive matrix elements
// (assuming that M is double-precision matrix)
double sum=0;
for(int i = 0; i < M.rows; i++)
{
const double* Mi = M.ptr<double>(i);
for(int j = 0; j < M.cols; j++)
sum += std::max(Mi[j], 0.);
}

accessing image pixels as float array

I want to access image pixels as a float array in OpenCV. I've done the following:
Mat input = imread("Lena.jpg",CV_LOAD_IMAGE_GRAYSCALE);
int height = input.rows;
int width = input.cols;
Mat out;
input.convertTo(input, CV_32FC1);
copyMakeBorder(input, input, 3, 3, 3, 3, 0);
out = Mat(height, width, input.type());
float *outdata = (float*)out.data;
float *indata = (float*)input.data;
for(int j = 0; j < height; j++){
for(int i =0; i < width; i++){
outdata[j*width + i] = indata[(j* width + i)];
}
}
normalize(out, out,0,255,NORM_MINMAX,CV_8UC1);
imshow("output", out);
waitKey();
This should return the original image in "out"; however, I'm getting a weird image instead. Can anyone explain what's wrong with the code? I think I need to use some step size (widthStep). Thanks.
the line
copyMakeBorder(input, input, 3, 3, 3, 3, 0);
changes the dimensions of input: it adds 6 rows and 6 columns to the image. That means your height and width variables are holding the wrong values when you define out and try to loop over the values in input.
if you change the order to
copyMakeBorder(input, input, 3, 3, 3, 3, 0);
int height = input.rows;
int width = input.cols;
it should work fine.
Some ideas:
Something like outdata[j*width + i] is a more standard pattern for this sort of thing.
According to the opencv documentation, there is a templated Mat::at(int y, int x) method that allows you to access individual elements of a matrix.
float f = input.at<float>(0, 0);
Note that this requires that your underlying matrix type is float -- it won't do a conversion for you.
Alternatively, you could access the data row-by-row, as in this example that sums up the positive elements of a matrix M of type double:
double sum=0;
for(int i = 0; i < M.rows; i++)
{
const double* Mi = M.ptr<double>(i);
for(int j = 0; j < M.cols; j++)
sum += std::max(Mi[j], 0.);
}
If none of these work, I'd suggest creating a small matrix with known values (e.g. a 2x2 matrix with 1 black pixel and 3 white pixels) and use that to help debug your code.
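For example, a tiny known input for that kind of debugging (one black pixel and three white pixels, sketched as a 2x2 float Mat) could be:
cv::Mat test = (cv::Mat_<float>(2, 2) << 0.f, 255.f, 255.f, 255.f);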
To really make it apparent what the problem is, imagine a 16 by 16 image. Now think of pixel number 17 in the linear representation.
17 is a prime number. There is no j*i that will index your source image at pixel 17 if the row or column width is 16. Thus elements like 17, 19, 23 and so on will be uninitialized or at best 0, resulting in a "weird" output.
How about pixel 8 in the linear representation? That one, in contrast, will get hit by your loop four times, i.e. by 1x8, 2x4, 4x2, and 8x1!
The indexing @NateKohl presents in his answer will fix it, since he multiplies a row position by the length of the row and then simply walks along the columns.
You can try this loop...
for(int row=0;row<height;row++)
{
for(int col=0;col<width;col++)
{
float float_data = input.at<float>(row,col);
// do some processing with value of float_data
out.at<float>(row,col) = float_data;
}
}
Is there a need to cast the uchar pointers of input and out Mats to float pointers?