C++/OpenCV: Can't initialize 3D Mat

I have a problem initializing a 3D Mat with OpenCV.
I would like to create a 3D matrix of size (rows x cols x 16), where rows and cols are the dimensions of an image loaded earlier in the program. I have tried more methods than I can count, and they all give more or less the same result: the reported dimensions of my matrix are 0 or -858993460 (the MSVC debug-build fill pattern for uninitialized memory).
My code lines:
Mat image_Conv;
int rows = imageBicubic.rows;
int cols = imageBicubic.cols;
image_Conv = Mat::zeros(rows, cols, CV_32FC(16));
Can you tell me why I have this problem? I have of course read all the related posts and the OpenCV documentation for the Mat class, but nothing works; I still get the same problem. Note that the data in the Mat will be float.
The code:
// Include standard headers
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <vector>
#include <ctime>
#include <iostream>
using namespace std;
//#include <opencv.hpp>
#include <opencv/cv.h>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv/highgui.h>
using namespace cv;
////////////////////////////////////////
// main file
int main()
{
    string fileName = "myImage.jpg";
    Mat imageSrc = cv::imread(fileName, CV_LOAD_IMAGE_UNCHANGED); // Read the file
    if (!imageSrc.data) // Check for invalid input
    {
        cout << "Could not open or find the image\n";
        return 1;
    }
    cout << "Loaded " << fileName << " (" << imageSrc.channels() << " channels)\n";
    //int colorTransform = (imageSrc.channels() == 4) ? CV_BGRA2RGBA : (imageSrc.channels() == 3) ? CV_BGR2RGB : CV_GRAY2RGB;
    //cv::cvtColor(imageSrc, imageSrc, colorTransform);
    imageSrc.convertTo(imageSrc, CV_32F, 1 / 255.0, 0.0);
    int SliceSizeWidth = imageSrc.cols / 2;
    int sliceShiftWidth = imageSrc.cols / 4;
    int sliceWidthNumber = (imageSrc.cols / sliceShiftWidth) - 1;
    int SliceSizeHeight = imageSrc.rows / 2;
    int sliceShiftHeight = imageSrc.rows / 4;
    int sliceHeightNumber = (imageSrc.rows / sliceShiftHeight) - 1;
    for (int sliceIndexHeight = 0; sliceIndexHeight < sliceHeightNumber; sliceIndexHeight++)
    {
        for (int sliceIndexWidth = 0; sliceIndexWidth < sliceWidthNumber; sliceIndexWidth++)
        {
            Mat patchImage = imageSrc(Rect(sliceIndexWidth*sliceShiftWidth, sliceIndexHeight*sliceShiftHeight, SliceSizeWidth, SliceSizeHeight));
            Mat patchImageCopy;
            patchImage.copyTo(patchImageCopy); // Deep copy => data are contiguous in patchImageCopy
            Mat imageBicubic;
            resize(patchImageCopy, imageBicubic, Size(2 * patchImage.cols, 2 * patchImage.rows), INTER_CUBIC);
            Mat image_Padding;
            int padding = 1;
            copyMakeBorder(imageBicubic, image_Padding, padding, padding, padding, padding, BORDER_CONSTANT, Scalar(0));
            Mat image_Conv;
            int rows = imageBicubic.rows;
            int cols = imageBicubic.cols;
            image_Conv = Mat::zeros(rows, cols, CV_32FC(16));
            /* rest of the code I have to write */
            image_Conv.convertTo(image_Conv, CV_8U, 255.0, 0.0);
            string nameBase = fileName.substr(0, fileName.find('.'));
            string nameExt = fileName.substr(fileName.find('.'), fileName.length() - nameBase.length());
            string strH = to_string(sliceIndexHeight);
            string strW = to_string(sliceIndexWidth);
            string outFileName = nameBase + "_H" + strH + "W" + strW + nameExt;
            imwrite(outFileName, image_Conv);
        }
    }
    return 0;
}
PS: Most of the code is not mine; I have to use it for my internship and can only edit between the lines:
resize(patchImageCopy, imageBicubic, Size(2 * patchImage.cols, 2 * patchImage.rows), INTER_CUBIC);
and
image_Conv.convertTo(image_Conv, CV_8U, 255.0, 0.0);
Thank you for your help!
EDIT: My first problem is solved, but it seems that it didn't work after all. I assume that Mat::zeros sets all the Mat elements to 0, right? But if I write
cout << image_Conv.at<float>(0,0,0) << endl;
I get the error: "Unhandled exception at 0x000007FEFD4FA06D in xxxxxx.exe: Microsoft C++ exception: cv::Exception at memory location 0x000000000023E540.".
I don't know what the problem is with the memory or how to fix it.
My goal is to fill my matrix element by element with several for loops that each perform several operations before the result is written to the corresponding element of my Mat. I have already done this with 3D and 4D arrays, and maybe the easiest solution is to do all the calculations with plain arrays, but I can't find how to go from a 3D array to a 3D Mat, or from a 3D Mat back to a 3D array.
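(For reference, a minimal sketch of going from a plain 3D array of values to a 3D cv::Mat and back, assuming float data; the sizes and names are illustrative, not taken from the program above:)
#include <opencv2/core/core.hpp>
#include <iostream>

int main()
{
    const int rows = 4, cols = 4, depth = 16;
    int sz[3] = { rows, cols, depth };
    cv::Mat M(3, sz, CV_32F, cv::Scalar(0)); // 3D Mat, one float per element
    // 3D array -> 3D Mat: write each computed value element by element
    for (int i = 0; i < rows; i++)
        for (int j = 0; j < cols; j++)
            for (int k = 0; k < depth; k++)
                M.at<float>(i, j, k) = 1.0f; // stand-in for the real computation
    // 3D Mat -> plain array: the data is one contiguous block of rows*cols*depth floats
    const float* raw = reinterpret_cast<const float*>(M.data);
    std::cout << raw[0] << std::endl; // same element as M.at<float>(0, 0, 0)
    return 0;
}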

Just tested this on Visual Studio 2015 with OpenCV 3.4:
cv::Mat mat = cv::Mat::zeros(5, 5, CV_32FC(16));
This works fine.
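Regarding the EDIT: image_Conv.at<float>(0,0,0) most likely throws because the three-index at() overload expects a genuinely 3-dimensional Mat, whereas Mat::zeros(rows, cols, CV_32FC(16)) creates a 2-dimensional Mat with 16 channels. A minimal sketch (with a small fixed size) of how such a Mat can be read instead:
#include <opencv2/core/core.hpp>
#include <iostream>

int main()
{
    cv::Mat image_Conv = cv::Mat::zeros(4, 4, CV_32FC(16)); // 2D Mat, 16 float channels
    // Channel k of element (i, j), read through a Vec of 16 floats...
    float a = image_Conv.at<cv::Vec<float, 16> >(0, 0)[0];
    // ...or through a raw row pointer (16 floats per element).
    const float* row = image_Conv.ptr<float>(0);
    float b = row[0 * 16 + 0];
    std::cout << a << " " << b << std::endl; // both print 0
    return 0;
}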

You should be able to create a multi-dimensional matrix filled with 0-values using:
int size[3] = { 5, 4, 3 };
cv::Mat M(3, size, CV_32F, cv::Scalar(0));
You can iterate over the matrix with M.at<float>(i,j,k) (this works only for a 3D matrix created as above):
for (int i = 0; i < size[0]; i++) {
    for (int j = 0; j < size[1]; j++) {
        for (int k = 0; k < size[2]; k++) {
            M.at<float>(i,j,k) = i*12+j*3+k;
        }
    }
}
for (int i = 0; i < size[0]; i++) {
    for (int j = 0; j < size[1]; j++) {
        for (int k = 0; k < size[2]; k++) {
            std::cout << "M(" << i << ", " << j << ", " << k << "): " << M.at<float>(i,j,k) << std::endl;
        }
    }
}
Alternatively, you should be able to create a 2D matrix with multiple channels with:
cv::Mat M(5, 4, CV_32FC(3), cv::Scalar(0));
To iterate over the 2D matrix and over the channels:
for (int i = 0; i < M.rows; i++) {
    for (int j = 0; j < M.cols; j++) {
        for (int k = 0; k < M.channels(); k++) {
            M.at<cv::Vec<float, 3> >(i,j)[k] = i*M.cols*M.channels()+j*M.channels()+k;
        }
    }
}
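If you need to move between these two representations, cv::Mat::reshape can reinterpret the same data without copying. A minimal sketch, assuming a recent OpenCV where the n-dimensional reshape overload is available and the data is continuous:
int sz3[3] = { 5, 4, 3 };
cv::Mat flat(5, 4, CV_32FC(3), cv::Scalar(0));   // 2D, 3 channels
cv::Mat cube = flat.reshape(1, 3, sz3);          // 3D, 1 channel, same underlying data
cube.at<float>(0, 0, 2) = 7.0f;                  // same element as flat.at<cv::Vec<float, 3> >(0, 0)[2]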

nppi resize function with 3 channels getting strange output

I'm getting a strange error when using the nppi geometry transform functions from the nppi CUDA libraries. The code is here:
#include <nppi.h>
#include <nppi_geometry_transforms.h>
#include <iostream>
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/imgcodecs.hpp>
#include <vector>
void write(const cv::Mat &mat1, const std::string &path) {
    auto mat2 = cv::Mat(mat1.rows, mat1.cols, CV_8UC4);
    for (int i = 0; i < mat1.rows; i++) {
        for (int j = 0; j < mat1.cols; j++) {
            auto &bgra = mat2.at<cv::Vec4b>(i, j);
            auto &rgb = mat1.at<cv::Vec3b>(i, j);
            bgra[0] = rgb[2];
            bgra[1] = rgb[1];
            bgra[2] = rgb[0];
            bgra[3] = UCHAR_MAX;
        }
    }
    std::vector<int> compression_params;
    compression_params.push_back(cv::IMWRITE_PNG_COMPRESSION);
    compression_params.push_back(9);
    cv::imwrite(path, mat2, compression_params);
}
int main() {
    std::cout << "Hello, World!" << std::endl;
    auto mat = cv::Mat(256, 256, CV_8UC3);
    for (int i = 0; i < mat.rows; i++) {
        for (int j = 0; j < mat.cols; j++) {
            auto &rgb = mat.at<cv::Vec3b>(i, j);
            rgb[0] = (uint8_t)j;
            rgb[1] = (uint8_t)i;
            rgb[2] = (uint8_t)(UCHAR_MAX - j);
        }
    }
    write(mat, "./test.png");
    uint8_t *gpuBuffer1;
    uint8_t *gpuBuffer2;
    cudaMalloc(&gpuBuffer1, mat.total());
    cudaMalloc(&gpuBuffer2, mat.total());
    cudaMemcpy(gpuBuffer1, mat.data, mat.total(), cudaMemcpyHostToDevice);
    auto status = nppiResize_8u_C3R(
        gpuBuffer1, mat.cols * 3, {.width = mat.cols, .height = mat.rows},
        {.x = 0, .y = 0, .width = mat.cols, .height = mat.rows}, gpuBuffer2,
        mat.cols * 3, {.width = mat.cols, .height = mat.rows},
        {.x = 0, .y = 0, .width = mat.cols, .height = mat.rows},
        NPPI_INTER_NN);
    if (status != NPP_SUCCESS) {
        std::cerr << "Error executing Resize -- code: " << status << std::endl;
    }
    auto mat2 = cv::Mat(mat.rows, mat.cols, CV_8UC3);
    cudaMemcpy(mat2.data, gpuBuffer2, mat.total(), cudaMemcpyDeviceToHost);
    write(mat2, "./test1.png");
}
Basically I generate a rainbow picture, write it to the GPU, resize it to the EXACT same size, copy it back to the host, and write it out again. What I get is garbled data in about two thirds of the returned picture.
The first picture (omitted) is the input picture; the second is the output picture.
I expect both pictures to be the same.
If I adjust the ROI with offsets and change the width and height of the destination buffer, the pixels in the top 1/3 of the resized picture actually move and resize correctly, but the rest of the picture is garbled. Not sure what's wrong. Does anyone with experience with the CUDA nppi libraries, or image processing in general, have an idea what's going on?
The CMake file is included below for convenience for anyone who wants to compile it. You have to have OpenCV and the CUDA toolkit installed as C++ libs:
cmake_minimum_required(VERSION 3.18)
project(test_nppi)
enable_language(CUDA)
set(CMAKE_CXX_STANDARD 17)
find_package(CUDAToolkit REQUIRED)
find_package(OpenCV)
message(STATUS ${CUDAToolkit_INCLUDE_DIRS})
add_executable(test_nppi main.cu)
target_link_libraries(test_nppi ${OpenCV_LIBS} CUDA::nppig)
target_include_directories(test_nppi PUBLIC ${OpenCV_INCLUDE_DIRS} ${CUDAToolkit_INCLUDE_DIRS})
set_target_properties(test_nppi PROPERTIES
CUDA_SEPARABLE_COMPILATION ON)
I've used the nppi resize function for single-channel pictures before and I don't have this issue. The 3-channel nppi resize function is giving weird output, and I think I'm not completely understanding the input parameters. The step is multiplied by 3 because of the 3 color channels, but all the other sizes measure the dimensions in pixels, and the sizes of source and destination are the same... not sure what I'm not understanding here.
The issue is that mat.total() equals the total number of pixels, and not the total number of bytes.
According to OpenCV documentation:
total () const
Returns the total number of array elements.
In your code sample, mat.total() equals 256*256, while the total number of bytes equals 256*256*3 (RGB uses 3 bytes per pixel).
(In OpenCV terminology an "array element" is equivalent to an image pixel.)
cudaMemcpy(gpuBuffer1, mat.data, mat.total()... therefore copies only 1/3 of the total image bytes, so only the upper 1/3 of the image data is valid.
According to this post, the correct way to compute the number of bytes is:
size_t mat_size_in_bytes = mat.step[0] * mat.rows;
In most cases for CV_8UC3, mat.step[0] = mat.cols*3, but to cover all cases we had better use mat.step[0].
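(Side note: for a continuous Mat the same byte count can also be computed from the element size; a quick sketch:)
// Both give the total byte count for a continuous CV_8UC3 Mat
size_t bytes_by_step = mat.step[0] * mat.rows;       // row stride * rows
size_t bytes_by_elem = mat.total() * mat.elemSize(); // pixels * bytes-per-pixel (3)
CV_Assert(bytes_by_step == bytes_by_elem);           // holds when mat.isContinuous()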
Corrected code sample:
#include "nppi.h"
#include "nppi_geometry_transforms.h"
#include <iostream>
#include "opencv2/core.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/imgcodecs.hpp"
#include <vector>
void write(const cv::Mat& mat1, const std::string& path) {
    auto mat2 = cv::Mat(mat1.rows, mat1.cols, CV_8UC4);
    for (int i = 0; i < mat1.rows; i++) {
        for (int j = 0; j < mat1.cols; j++) {
            auto& bgra = mat2.at<cv::Vec4b>(i, j);
            auto& rgb = mat1.at<cv::Vec3b>(i, j);
            bgra[0] = rgb[2];
            bgra[1] = rgb[1];
            bgra[2] = rgb[0];
            bgra[3] = UCHAR_MAX;
        }
    }
    std::vector<int> compression_params;
    compression_params.push_back(cv::IMWRITE_PNG_COMPRESSION);
    compression_params.push_back(9);
    cv::imwrite(path, mat2, compression_params);
}
int main() {
    std::cout << "Hello, World!" << std::endl;
    auto mat = cv::Mat(256, 256, CV_8UC3);
    auto mat2 = cv::Mat(mat.rows, mat.cols, CV_8UC3);
    for (int i = 0; i < mat.rows; i++) {
        for (int j = 0; j < mat.cols; j++) {
            auto& rgb = mat.at<cv::Vec3b>(i, j);
            rgb[0] = (uint8_t)j;
            rgb[1] = (uint8_t)i;
            rgb[2] = (uint8_t)(UCHAR_MAX - j);
        }
    }
    write(mat, "./test.png");
    uint8_t* gpuBuffer1;
    uint8_t* gpuBuffer2;
    size_t mat_size_in_bytes = mat.step[0] * mat.rows; // https://stackoverflow.com/questions/26441072/finding-the-size-in-bytes-of-cvmat
    size_t mat2_size_in_bytes = mat2.step[0] * mat2.rows;
    cudaMalloc(&gpuBuffer1, mat_size_in_bytes);
    cudaMalloc(&gpuBuffer2, mat2_size_in_bytes);
    cudaMemcpy(gpuBuffer1, mat.data, mat_size_in_bytes, cudaMemcpyHostToDevice);
    NppiSize oSrcSize = { mat.cols, mat.rows };
    NppiRect oSrcRectROI = { 0, 0, mat.cols, mat.rows };
    NppiSize oDstSize = { mat2.cols, mat2.rows };
    NppiRect oDstRectROI = { 0, 0, mat2.cols, mat2.rows };
    auto status = nppiResize_8u_C3R(
        gpuBuffer1, mat.step[0], oSrcSize,
        oSrcRectROI, gpuBuffer2,
        mat2.step[0], oDstSize,
        oDstRectROI,
        NPPI_INTER_NN);
    if (status != NPP_SUCCESS) {
        std::cerr << "Error executing Resize -- code: " << status << std::endl;
    }
    cudaMemcpy(mat2.data, gpuBuffer2, mat2_size_in_bytes, cudaMemcpyDeviceToHost);
    write(mat2, "./test1.png");
}
Output: (image omitted)

opencv slicing of a vector Mat

I am new to OpenCV. I am working in Visual Studio 2017 and use the Image Watch plugin to inspect OpenCV Mat objects.
What I've done:
I have to read a binary file to get 1000 images (256*320 pixels, uint16, so 2 bytes per pixel) into an array of double. After this, I wanted to look at my data with Image Watch to be sure everything is okay, so I convert the first image to 8-bit uchar to visualize it. Here is my code (you don't need to read most of it, just skip to the end):
#include "stdafx.h"
#include <iostream>
#include "stdio.h"
#include <fstream>
#include <stdint.h>
#include "windows.h"
#include <opencv2/core/core.hpp> // cv::Mat
#include <math.h>
#include <vector>
using namespace std;
using namespace cv;
template<class T>
T my_ntoh_little(unsigned char* buf) {
    const auto s = sizeof(T);
    T value = 0;
    for (unsigned i = 0; i < s; i++)
        value |= buf[i] << CHAR_BIT * i;
    return value;
}
int main()
{
    ifstream is("Filename", ifstream::binary);
    if (is) {
        // Reading size of the file and initialising variables
        is.seekg(0, is.end);
        int length = is.tellg();
        int main_header_size = 3000;
        int frame_header_size = 1000;
        int width = 320, height = 256, count_frames = 1000;
        int buffer_image = width * height * 2;
        unsigned char *data_char = new unsigned char[length]; // Variable which will contain all the data
        // Initializing 3D array for storing all images
        double ***data;
        data = new double**[count_frames];
        for (unsigned i = 0; i < count_frames; i++) {
            data[i] = new double*[height];
            for (unsigned j = 0; j < height; j++)
                data[i][j] = new double[width];
        }
        // Reading the file once
        is.seekg(0, is.beg);
        is.read(reinterpret_cast<char*>(data_char), length);
        // Convert pixel by pixel uchar into uint16 (using pointer on data_char)
        int indice, minid = 65536.0, maxid = 0.0;
        for (unsigned count = 0; count < count_frames; count++) {
            // Initialize pointer address
            indice = main_header_size + count * (frame_header_size + buffer_image) + frame_header_size;
            for (unsigned i = 0; i < height; i++) {
                for (unsigned j = 0; j < width; j++) {
                    data[count][i][j] = my_ntoh_little<uint16_t>(data_char + indice);
                    // Search for min/max to normalize afterwards
                    if (data[count][i][j] < minid and count == 0)
                        minid = data[count][i][j];
                    if (data[count][i][j] > maxid and count == 0)
                        maxid = data[count][i][j];
                    // Updating pointer to next pixel
                    indice += 2;
                }
            }
        }
        // Get back first image, normalize between 0-255, cast into uchar for the future Mat object
        uchar *dataImRGB = new uchar[width * height * 3];
        int image_display = 900;
        int pixel_norm;
        for (unsigned i = 0; i < height; i++) {
            for (unsigned j = 0; j < width; j++) {
                pixel_norm = round((data[image_display][i][j] - double(minid)) / double(maxid - minid) * 255);
                dataImRGB[i * 320 * 3 + 3 * j] = static_cast<uchar>(pixel_norm);
                dataImRGB[i * 320 * 3 + 3 * j + 1] = static_cast<uchar>(pixel_norm);
                dataImRGB[i * 320 * 3 + 3 * j + 2] = static_cast<uchar>(pixel_norm);
            }
        }
        // Create Mat object (it is imageRGB8 I can see in Image Watch)
        Mat imageRGB8 = Mat(width, height, CV_8UC3, dataImRGB);
        // Creating a list of Mat and adding the first Mat
        vector<Mat> listImages;
        listImages.push_back(imageRGB8);
        // -----------------------------------------------------------------------------------------
        // -----------------------------------------------------------------------------------------
        // Future : directly keep the uchar read in the original file and import it on a Mat object
        // But how to get the pixel at (0,0) of the first Mat on the vector ?
        // -----------------------------------------------------------------------------------------
        // -----------------------------------------------------------------------------------------
        // De-Allocate memory to prevent memory leak
        for (int i = 0; i < count_frames; ++i) {
            for (int j = 0; j < height; ++j)
                delete[] data[i][j];
            delete[] data[i];
        }
        delete[] data;
    }
    return 0;
}
Where I am stuck:
I don't know how to work with this vector or how to manipulate the data. For example, if I want to take the mean of all images, that is, the mean of all Mat objects in the vector, how do I do this? Or simply, how do I get the first pixel of the third image in the vector? These examples are meant to show me how slicing works with this type of data, because I know how it works with vectors of double, but not with OpenCV objects.
Thank you in advance for any help/advice.
Assuming that you have got all of your images properly packed into your image list you can do the following:
This will get the mean of all images in your list:
cv::Scalar meansum(0.0f, 0.0f, 0.0f);
size_t length = listImages.size();
for (size_t i = 0; i < length; i++) {
    // mu == mean of current image
    cv::Scalar mu = cv::mean(listImages[i]);
    meansum += mu;
}
float means[3] = { static_cast<float>(meansum[0] / length), static_cast<float>(meansum[1] / length), static_cast<float>(meansum[2] / length) };
std::cout << "Means " << means[0] << " " << means[1] << " " << means[2] << std::endl;
To get the first pixel in your third image you can use the at() method or a row pointer. (Row pointers are faster, but don't have any guards against accessing out of bounds memory locations.)
Mat third_image = listImages[2];
// using at() -- for a 2D, 3-channel Mat, index the pixel and then the channel
uchar first_pixel_blue_value = third_image.at<cv::Vec3b>(0, 0)[0];
std::cout << (int)first_pixel_blue_value << std::endl;
// using row pointer
uchar* row = third_image.ptr<uchar>(0); // pointer to row 0
std::cout << "blue: "  << (int)row[0];
std::cout << " green: " << (int)row[1];
std::cout << " red: "   << (int)row[2];
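If instead you want a per-pixel mean image (one Mat, averaged element-wise over the whole vector) rather than one scalar per image, a sketch along these lines should work, assuming every Mat in listImages has the same size and CV_8UC3 type:
cv::Mat acc = cv::Mat::zeros(listImages[0].size(), CV_32FC3);
for (size_t i = 0; i < listImages.size(); i++) {
    cv::Mat asFloat;
    listImages[i].convertTo(asFloat, CV_32FC3); // widen so the sum cannot overflow uchar
    acc += asFloat;
}
acc /= static_cast<double>(listImages.size()); // per-pixel mean, still CV_32FC3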
More info can be found here:
https://docs.opencv.org/3.1.0/d2/de8/group__core__array.html (under functions)
and here:
https://docs.opencv.org/trunk/d3/d63/classcv_1_1Mat.html

Unable to predict in SVM OpenCV 3.0

I have been able to train my SVM. The program runs until it comes to the prediction step, where I get an error for the SVM prediction with the testing images.
What have I missed in the code? Can anybody help me?
OpenCV Error: Assertion failed (samples.cols == var_count && samples.type() == CV_32F) in cv::ml::SVMImpl::predict, file C:\buildslave64\win64_amdocl\master_PackSlave-win64-vc14-shared\opencv\modules\ml\src\svm.cpp, line 1930
My prediction code is found below:
#include <opencv2/core.hpp>
#include <opencv2/opencv.hpp>
#include <opencv2/imgproc.hpp>
#include "opencv2/imgcodecs.hpp"
#include <opencv2/highgui.hpp>
#include <opencv2/ml.hpp>
#include <iostream>
#include <fstream>
#include <string.h>
using namespace std;
using namespace cv;
using namespace cv::ml;
int main(int, char**)
{
    HOGDescriptor hog(cv::Size(64, 128), cv::Size(16, 16), cv::Size(8, 8), cv::Size(8, 8), 9, 1, -1, 0, 0.2, true, HOGDescriptor::DEFAULT_NLEVELS);
    vector<cv::Point> locations;
    std::vector<float> extractedFeature;
    vector<vector<float>> features;
    vector<Mat> testingImages;
    vector<int> testingLabels;
    int numFiles = 11; // no. of rows in matrix
    int img_area = 320 * 240; // no. of columns - area of image 76800
    FileStorage myfile("features.xml", FileStorage::READ);
    const char* path = "C:/Testing Set/Extracted_Frames/image";
    // set up labels for each training image
    float label = 1.0; // positive image +1
    Mat testingMat(img_area, numFiles, CV_32FC1); // 1D training matrix
    cout << testingMat.rows << endl;
    cout << testingMat.cols << endl;
    Mat res; // output
    // set up labels for each training image
    Mat labels(testingMat.rows, 1, CV_32SC1, label); // flatten 1D label matrix
    Ptr<ml::SVM> svm = Algorithm::load<ml::SVM>("test.xml");
    std::cout << "Model Loaded" << std::endl;
    for (int i = 0; i < labels.rows; i++) {
        labels.at<int>(i, 0) = labels.at<int>(i, 0);
    }
    for (int file_num = 0; file_num < numFiles; file_num++)
    {
        stringstream ss(stringstream::in | stringstream::out);
        ss << path << file_num << ".jpg";
        cout << "read path = " << ss.str() << endl;
        myfile["Descriptors" + ss.str()] >> extractedFeature;
        Mat img = imread(ss.str());
        int ii = 0; // Current column in training_mat
        for (int i = 0; i < img.rows; i++) {
            for (int j = 0; j < img.cols; j++) {
                testingMat.at<float>(ii++, file_num) = img.at<uchar>(i, j);
                Mat sampleMat = (Mat_<float>(1, 2) << i, j);
                float response = svm->predict(sampleMat); // error here
            }
        }
        features.push_back(extractedFeature);
        testingImages.push_back(img);
        testingLabels.push_back(1);
        testingLabels.push_back(file_num);
        myfile.release();
    }
    labels.at<int>(1, 0) = -1;
}
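For context, the assertion message itself states the requirement: the Mat passed to predict() must be CV_32F and have exactly var_count columns, i.e. the same feature length the SVM was trained on. The 1x2 sampleMat built above cannot match an SVM trained on HOG descriptors. A hedged sketch of the expected shape (the names and the descriptor length are illustrative, not from the original code):
// predict() wants one CV_32F row per sample, with as many columns as the
// training feature vectors (var_count). For HOG features:
std::vector<float> descriptor(3780, 0.0f);     // e.g. filled by hog.compute(); 3780 is the usual 64x128 HOG length
cv::Mat sample(1, (int)descriptor.size(), CV_32F, descriptor.data());
float response = svm->predict(sample);         // now samples.cols == var_count and type is CV_32F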

memset function does not work in my C++ dynamic array initialization

These are some parts of my OpenCV image processing code. In it, I generate two dynamic arrays to store the total number of black points per column/row in a binary image.
Here is the code:
#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;
int main()
{
    Mat srcImg = imread("oura.bmp");
    int width = srcImg.cols - 2;
    int height = srcImg.rows - 2;
    Mat srcGrey;
    Mat srcRoi(srcImg, Rect(1, 1, width, height));
    cvtColor(srcRoi, srcGrey, COLOR_BGR2GRAY);
    int thresh = 42;
    int maxval = 255;
    Mat srcRoiBina;
    threshold(srcGrey, srcRoiBina, thresh, maxval, THRESH_BINARY);
    int *count_cols = new int[width]();
    int *count_rows = new int[height]();
    for (int i = 0; i < width; i++)
    {
        cout << count_cols[i] << endl;
    }
    for (int i = 0; i < height; i++)
    {
        uchar *data = srcRoiBina.ptr<uchar>(i);
        for (int j = 0; j < width; j++)
        {
            if (data[j] == 0)
            {
                count_cols[j]++;
                count_rows[i]++;
            }
        }
    }
    delete[] count_cols;
    delete[] count_rows;
    return 0;
}
My question is: if I use the following code
int *count_cols = new int[width];
int *count_rows = new int[height];
memset(count_cols, 0, sizeof(count_cols));
memset(count_rows, 0, sizeof(count_rows));
for (int i = 0; i < width; i++)
{
    cout << count_cols[i] << endl;
}
to replace the corresponding code above, why are the dynamic arrays not initialized to zero? It seems that memset does not work.
Platform: Visual Studio 2013 + OpenCV 3.0.0
Could you please help me?
Additionally, the original image oura.bmp is 2592*1944, so the length of the dynamic array count_cols is 2590 (i.e., 2592-2). Are there any potential problems?
count_cols is of type int*, so sizeof(count_cols) will be 8 (64-bit) or 4 (32-bit). You'll want to use sizeof(int) * width instead (and similarly for the rows).
sizeof(count_rows) is returning the size of the pointer, not the size of the array.
Use height * sizeof(int) instead. Same applies for the columns too.
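For reference, a short sketch of the working alternatives (assuming width and height as in the question, plus <cstring> for memset and <vector>; each variant zeroes the whole array):
int *count_cols = new int[width];
memset(count_cols, 0, width * sizeof(int)); // byte count of the array, not sizeof(pointer)
int *count_rows = new int[height]();        // value-initialization, as in the original code
std::vector<int> cols(width, 0);            // or sidestep manual new/delete entirely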

vector<Mat> opencv issues

I am trying to read images of size 19x19 into a vector. I have 2429 such images. But when I run my code, I am sure some Mat images are not read into the vector. Is it a memory issue? If yes, can anyone help me? I confirmed this with assert statements in my code. Thank you for the help. EDIT: I removed all the if-else statements and replaced them with a format specifier. When I am building the design matrix X_train, the assertion fails at exactly ex = 1703. I checked my image set around those ex values and they look fine. I cannot understand where I am going wrong.
#include <iostream>
#include <vector>
#include <istream>
#include <fstream>
#include <random>
#include <algorithm>
#include "opencv2/opencv.hpp"
#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#define NO_OF_IMAGES 2429
using namespace std;
using namespace cv;
static int colSize = 0;
vector<Mat> read_faces() {
    vector<Mat> training_images;
    string images_path = "images/train/face";
    string suffix = ".pgm";
    Mat img(19, 19, CV_8UC1);
    for (int i = 0; i < NO_OF_IMAGES; i++) {
        img = imread(cv::format("%s%05d.pgm", images_path.c_str(), i), 0);
        training_images.push_back(img);
    }
    return training_images;
}
vector<Mat> extract_train_test_set(
        vector<Mat> faces /**< [in] vector of faces or matrices */,
        vector<Mat> &test_set /**< [out] 10% of images */) {
    /**
     * Randomly select 90% of these images and collect them into a set training_set and
     * the rest 10% in test_set.
     */
    int percentage_train = (0.9f * NO_OF_IMAGES);
    vector<Mat> training_set;
    for (int i = 0; i < percentage_train; i++) {
        Mat img = faces[i];
        assert(img.empty() == false);
        training_set.push_back(img);
    }
    for (int i = percentage_train; i < NO_OF_IMAGES; i++) {
        Mat img = faces[i];
        assert(img.empty() == false);
        test_set.push_back(img);
    }
    return training_set;
}
int main(int argc, char **argv) {
    vector<Mat> faces = read_faces(); /**< Reading faces into a vector of matrices. */
    random_shuffle(faces.begin(), faces.end()); /**< Shuffle the faces vector for creating a training set */
    cout << faces.size() << endl; /**< Size of the vector of faces is 2429 */
    vector<Mat> training_set; /**< 90% of images, i.e. 2186, are training images. */
    vector<Mat> test_set; /**< 10% of images, i.e. 243, are test images. */
    training_set = extract_train_test_set(faces, test_set);
    cout << " Training set size " << training_set.size() << endl;
    cout << " Test set size " << test_set.size() << endl;
    int dim = training_set[0].rows * training_set[0].cols; /**< 361 dimension vector. */
    Mat X_train(dim, training_set.size(), CV_8UC1); /**< 361 rows and 2186 columns. */
    Mat m(19, 19, CV_8UC1);
    int ex = 0; /**< Counter for indexing the images */
    while (ex < training_set.size()) {
        m = training_set[ex]; /**< Retrieve the image from training vector. */
        for (int i = 0; i < 19; i++) {
            for (int j = 0; j < 19; j++) {
                assert(m.empty() == false);
                X_train.at<uchar>(colSize, ex) = m.at<uchar>(i, j); // each image is a 361 element vector
                colSize++;
            }
        }
        ex++; /**< Continue to next image. */
        colSize = 0; /**< Set to zero so as to continue to next image. That is, reset the row index for the next image. */
    }
    ofstream file_handle("images/train.dat", ios::trunc);
    file_handle << X_train;
    file_handle.close();
    cout << "Height " << X_train.rows << " Width " << X_train.cols << endl;
    waitKey(0);
    return 0;
}
I got it working. Instead of looping over the image with hard-coded 19 rows and 19 cols (given that each image is 19x19), I used the Mat class members 'rows' and 'cols'. The solution I found is simply to replace the loop with the following:
while (ex < training_set.size()) {
    m = training_set[ex]; /**< Retrieve the image from training vector. */
    cout << "Fine!! " << ex << endl;
    assert(m.empty() == false);
    for (int i = 0; i < m.rows; i++) {
        for (int j = 0; j < m.cols; j++) {
            X_train.at<uchar>(colSize, ex) = m.at<uchar>(i, j); // each image is a 361 element vector
            colSize++;
        }
    }
    ex++; /**< Continue to next image. */
    colSize = 0; /**< Set to zero so as to continue to next image. That is, reset the row index for the next image. */
}
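One more defensive tweak worth considering: imread returns an empty Mat when a file fails to load, so checking at read time makes an unreadable file show up immediately instead of as an assert deep in the training loop. A hedged sketch of the loop inside read_faces:
for (int i = 0; i < NO_OF_IMAGES; i++) {
    Mat img = imread(cv::format("%s%05d.pgm", images_path.c_str(), i), 0);
    if (img.empty()) {
        cerr << "Could not read image " << i << endl;
        continue; // or abort, depending on how strict you want to be
    }
    training_images.push_back(img);
}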