OpenCV 3 C++ Mat fetching with pointer goes random

I'm quite new to OpenCV and I'm now using version 3.4.1 with the C++ implementation. I'm still exploring, so this question is not specific to a project; it's more of a "try to understand how it works" exercise. In the same spirit, please consider that I know I'm somehow "reinventing the wheel" with this code, but I wrote this example to understand HOW IT WORKS.
The idea is:
Read an RGB image
Make it binary
Find Connected areas
Colour each area differently
As an example I'm using a 5x5 pixel RGB image saved as BMP. The image is a white box with black pixels all around its contour.
Up to the point where I get the connectedComponents matrix, the Mat named Labels, it all goes fine. If I print the matrix I see exactly what I expect:
11111
10001
10001
10001
11111
Remember that I've inverted the threshold so it is correct to get 1 on the edges...
I then create a Mat with the same size as Labels but with 3 channels, to colour it with RGB. This one is named ColoredLabels.
The next step is to instantiate a pointer that runs through Labels and, for each position where the value is 1, fills the corresponding position in ColoredLabels with a colour.
HERE THINGS GO VERY WRONG! The pointer does not fetch Labels row by row as I would expect, but follows some other order.
Questions:
Am I doing something wrong, or is it "obvious" that the pointer fetching follows some "unpredictable" order?
How could I set the values of a matrix (ColoredLabels) based on the values of another matrix (Labels)?
#include "opencv2\highgui.hpp"
#include "opencv2\opencv.hpp"
#include <stdio.h>
using namespace cv;
int main(int argc, char *argv[]) {
char* FilePath = "";
Mat Img;
Mat ImgGray;
Mat ImgBinary;
Mat Labels;
uchar *P;
uchar *CP;
// Image acquisition
if (argc < 2) {
printf("Missing argument");
return -1;
}
FilePath = argv[1];
Img = imread(FilePath, CV_LOAD_IMAGE_COLOR);
if (Img.empty()) {
printf("Invalid image");
return -1;
}
// Convert to Gray...I know I could convert it right away while loading....
cvtColor(Img, ImgGray, CV_RGB2GRAY);
// Threshold (inverted) to obtain black background and white blobs-> it works
threshold(ImgGray, ImgBinary, 170, 255, CV_THRESH_BINARY_INV);
// Find Connected Components and put the 1/0 result in Mat::Labels
int BlobsNum = connectedComponents(ImgBinary, Labels, 8, CV_16U);
// Just to see what comes out with a 5x5 image. I get:
// 11111
// 10001
// 10001
// 10001
// 11111
std::cout << Labels << "\n";
// Prepare to fetch the Mat(s) with pointer to be fast
int nRows = Labels.rows;
int nCols = Labels.cols * Labels.channels();
if (Labels.isContinuous()) {
nCols *= nRows;
nRows = 1;
}
// Prepare a Mat as big as LAbels but with 3 channels to color different blobs
Mat ColoredLabels(Img.rows, Img.cols, CV_8UC3, cv::Scalar(127, 127, 127));
int ColoredLabelsNumChannels = ColoredLabels.channels();
// Fetch Mat::Labels and Mat::ColoredLabes with the same for cycle...
for (int i = 0; i < nRows; i++) {
// !!! HERE SOMETHING GOES WRONG !!!!
P = Labels.ptr<uchar>(i);
CP = ColoredLabels.ptr<uchar>(i);
for (int j = 0; j < nCols; j++) {
// The coloring operation does not work
if (P[j] > 0) {
CP[j*ColoredLabelsNumChannels] = 0;
CP[j*ColoredLabelsNumChannels + 1] = 0;
CP[j*ColoredLabelsNumChannels + 2] = 255;
}
}
}
std::cout << "\n" << ColoredLabels << "\n";
namedWindow("ColoredLabels", CV_WINDOW_NORMAL);
imshow("ColoredLabels", ColoredLabels);
waitKey(0);
printf("Execution completed succesfully");
return 0;
}

You used the connectedComponents function with the CV_16U parameter. This means that each element of the label image consists of 16 bits (hence '16') and has to be interpreted as an unsigned integer (hence 'U'). And since ptr returns a pointer, you have to dereference it to get the value.
Therefore you should access label image elements in the following way:
unsigned short val = *Labels.ptr<unsigned short>(i); // or uint16_t
unsigned short val = Labels.at<unsigned short>(y, x);
Regarding your second question, it is as simple as that, but of course you have to understand which type casts result in loss of precision or overflows and which ones don't.
mat0.at<int>(y, x) = mat1.at<int>(y, x); // both matrices have CV_32S types
mat2.at<int>(y, x) = mat3.at<char>(y,x); // CV_32S and CV_8S
// Implicit cast occurs. Possible information loss: assigning 32-bit integer values to 8-bit ints
// mat4.at<unsigned char>(y, x) = mat5.at<unsigned int>(y, x); // CV_8U and CV_32U

Related

Apply Mask in OpenCV

I start out with this image:
for which I want to color in the lane markings directly in front of the vehicle (yes, this is for a Udacity online class; they want me to do it in Python, but I'd rather do it in C++).
Finding the right markers is easy:
This works for coloring the markers:
cv::MatIterator_<cv::Vec3b> output_pix_it = output.begin<cv::Vec3b>();
cv::MatIterator_<cv::Vec3b> output_end = output.end<cv::Vec3b>();
cv::MatIterator_<cv::Vec3b> mask_pix_it = lane_markers.begin<cv::Vec3b>();
//auto t1 = std::chrono::high_resolution_clock::now();
while (output_pix_it != output_end)
{
if((*mask_pix_it)[0] == 255)
{
(*output_pix_it)[0] = 0;
(*output_pix_it)[1] = 0;
(*output_pix_it)[2] = 255;
}
++output_pix_it;
++mask_pix_it;
}
correctly producing
however I was a little surprised that it seemed to be kind of slow, taking 1-2 ms (on a Core i7-7700HQ with 16 GB RAM, compiled with -O3) for a 960 x 540 image.
Following "the efficient way" here: https://docs.opencv.org/2.4/doc/tutorials/core/how_to_scan_images/how_to_scan_images.html#howtoscanimagesopencv
I came up with:
unsigned char *o; // pointer to first element in output Mat
unsigned char *m; // pointer to first element in mask Mat
o = output.data;
m = lane_markers.data;
size_t pixel_elements = output.rows * output.cols * output.channels();
for (size_t i = 0; i < pixel_elements; i += 3)
{
    if (m[i] == 255)
    {
        o[i] = 0;
        o[i+1] = 0;
        o[i+2] = 255;
    }
}
which is about 3x faster... but doesn't produce the correct results:
All cv::Mat objects are of type CV_8UC3 (standard BGR pixel format).
As far as I can tell, the underlying data of the Mat objects should be an array of unsigned chars with length pixel width * pixel height * num channels. But it seems like I'm missing something. isContinuous() is true for both the output and mask matrices. I'm using OpenCV 3.4.4 on Ubuntu 18.04. What am I missing?
The typical way of setting a masked area of a Mat to a specific value is to use the Mat::setTo function:
cv::Mat mask;
cv::cvtColor(lane_markers, mask, cv::COLOR_BGR2GRAY); // the mask Mat has to be 8UC1
output.setTo(cv::Scalar(0, 0, 255), mask);
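As a hedged side note: if (and this is an assumption on my part, not stated in the question) the lane_markers mask marks lane pixels as pure white in all three channels, the 8UC1 mask could also be built directly with inRange:
cv::Mat mask;
// select pixels that are exactly (255, 255, 255) -> single-channel 0/255 mask
cv::inRange(lane_markers, cv::Scalar(255, 255, 255), cv::Scalar(255, 255, 255), mask);
output.setTo(cv::Scalar(0, 0, 255), mask); // paint the masked pixels red (BGR)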

Make 32x32 sections on an image in C++ OpenCV?

I want to take a grayscale image and divide it into 32x32 sections. Each section will contain pixels, and based on their intensity and volume they will be considered a 1 or a 0.
My thought is that I would name the sections like "(x,y)". For example:
Section(1,1) contains this many pixels that are within this range of intensity so this is a 1.
Does that make sense? I tried looking for the answer to this question, but dividing an image up into sections doesn't seem to yield any results in the OpenCV community. Keep in mind I don't want to change the way the image looks, just divide it up into a 32x32 table with (x,y) being a "section" of the picture.
Yes you can do that. Here is the code. It is rough around the edges, but it does what you request. See comments in the code for explanations.
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
struct BradleysImage
{
int rows;
int cols;
cv::Mat data;
int intensity_threshold;
int count_threshold;
cv::Mat buff = cv::Mat(32, 32, CV_8UC1);
// When we call the operator with arguments y and x, we check
// the region(y,x). We then count the number of pixels within
// that region that are greater than some threshold. If the
// count is greater than desired number, we return 255, else 0.
int operator()(int y, int x) const
{
int j = y*32;
int i = x*32;
auto window = cv::Rect(i, j, 32, 32);
// threshold window contents
cv::threshold(data(window), buff, intensity_threshold, 1, CV_THRESH_BINARY);
int num_over_threshold = cv::countNonZero(buff);
return num_over_threshold > count_threshold ? 255 : 0;
}
};
int main() {
// Input image
cv::Mat img = cv::imread("walken.jpg", CV_8UC1);
// I resize it so that I get dimensions divisible
// by 32 and get better looking result
cv::Mat resized;
cv::resize(img, resized, cv::Size(3200, 3200));
BradleysImage b; // I had no idea how to name this so I used your nick
b.rows = resized.rows / 32;
b.cols = resized.cols / 32;
b.data = resized;
b.intensity_threshold = 128; // just some threshold
b.count_threshold = 512;
cv::Mat result(b.rows -1, b.cols-1, CV_8UC1);
for(int y = 0; y < result.rows; ++y)
for(int x = 0; x < result.cols; ++x)
result.at<uint8_t>(y, x) = b(y, x);
imwrite("walken.png", result);
return 0;
}
I used Christopher Walken's image from Wikipedia and obtained this result:

Bit planes of a 1-plane image in OpenCV only work for 1/3 of the image

I'm trying to learn OpenCV by doing a few things on my own. In this particular case, I wanted to take the bit planes of a grayscale image. The code seems to have worked, but it only works well for bits 7 and 6, not so much for the remaining six, as it only shows a good result for about 1/3 of the image. I just haven't found what's wrong with it yet. I'd greatly appreciate some help on the matter, as I'm just writing my first code with these libraries.
Here's what I get for the first bit:
And here it is for the 7th bit:
And here's my code:
#include <opencv2\opencv.hpp>
#include <math.h>
using namespace cv;
using namespace std;
int main( int argc, char** argv ) {
Mat m1 = imread("grayscalerose.jpg");
imshow("Original",m1);
int cols, rows, x, y;
cols = m1.cols;
rows = m1.rows;
printf("%d %d \n",m1.rows,m1.cols);
Mat out1(rows, cols, CV_8UC1, Scalar(0));
out1 = (m1/128); //Here's where I divide by either 1,2,4,8,16,32,64, or 128 to get the corresponding bit planes
for (int y = 0; y < rows; y++){
for (int x = 0; x < cols; x++){
out1.at<uchar>(y,x) = (out1.at<uchar>(y,x) % 2);
} }
out1 = out1*255;
imshow("out1",out1);
waitKey(0);
destroyWindow( "out1" );
}
Thanks in advance. I hope my explanation wasn't too messy.
First let's read the image in as grayscale only (as mentioned by user3896254).
Then, let's prepare a mask image, where only the least significant bit is set -- i.e. all the values are 1.
Then the algorithm is simple. Let's avoid per-pixel manipulation (the two nested for loops), and try to take advantage of the optimized operations provided by OpenCV.
For each bit (0..7):
Mask out the lowest order bit in the work image.
Scale the masked image by 255 to make it black/white.
Store the output.
Divide values in work image by 2 -- i.e. shift all bits by 1 position to the right.
Code:
#include <opencv2\opencv.hpp>
#include <cstdint>
int main(int argc, char** argv)
{
cv::Mat input_img(cv::imread("peppers.png", 0));
int32_t rows(input_img.rows), cols(input_img.cols);
cv::Mat bit_mask(cv::Mat::ones(rows, cols, CV_8UC1));
cv::Mat work_img(input_img.clone());
std::string file_name("peppers_bit0.png");
for (uint32_t i(0); i < 8; ++i) {
cv::Mat out;
cv::bitwise_and(work_img, bit_mask, out);
out *= 255;
cv::imwrite(file_name, out);
work_img = work_img / 2;
file_name[11] += 1;
}
}
We can develop an even shorter (and probably faster) version using a single matrix expression.
We calculate the appropriate divisor with the expression (1<<i), divide every element by this value to shift the bits, mask each element by ANDing it with 1, and then scale all the elements by 255:
#include <opencv2\opencv.hpp>
#include <cstdint>
int main(int argc, char** argv)
{
cv::Mat input_img(cv::imread("peppers.png", 0));
std::string file_name("peppers_bit0.png");
for (uint32_t i(0); i < 8; ++i) {
cv::Mat out(((input_img / (1<<i)) & 1) * 255);
cv::imwrite(file_name, out);
file_name[11] += 1;
}
}
Sample run: the input image, followed by the extracted planes for bits 0 through 7.
When you divide 15 (binary 00001111) by 2 (binary 00000010) you get 7 (binary 00000111), which is not what you expect. You can check whether a bit is set like this: 15 & 2, which produces 0 if the second bit is not set, and a value greater than 0 otherwise. The same applies to other values.
Try the following code. Note that:
you need to load the image as grayscale (using IMREAD_GRAYSCALE in imread)
you can directly write either 0 or 255 when you test the bit
Code:
#include <opencv2/opencv.hpp>
using namespace cv;
int main()
{
Mat m1 = imread("path_to_image", IMREAD_GRAYSCALE);
imshow("Original", m1);
int cols, rows, x, y;
cols = m1.cols;
rows = m1.rows;
printf("%d %d \n", m1.rows, m1.cols);
Mat out1(rows, cols, CV_8UC1, Scalar(0));
for (int y = 0; y < rows; y++){
for (int x = 0; x < cols; x++){
out1.at<uchar>(y, x) = (m1.at<uchar>(y, x) & uchar(64)) ? uchar(255) : uchar(0); //Here's where I AND by either 1,2,4,8,16,32,64, or 128 to get the corresponding bit planes
}
}
imshow("out1", out1);
waitKey(0);
destroyWindow("out1");
return 0;
}
By default, cv::imread reads the image as a BGR matrix, but you index the matrix as if it were single-channel.
Just change the reading line to Mat m1 = imread("grayscalerose.jpg", 0); and it will work fine.
With Mat out(in / (1<<i)), the division produces an integer value using a "round" operation: for example, 6/5 will give 2. But bit slicing needs a floor operation instead of rounding, so 6/5 should give 1, not 2. In some cases the results will be quite similar, but in others they can be really different, especially for bit planes near the MSB (most significant bit). CMIIW.
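A minimal sketch of a rounding-free variant (my own suggestion, not from the answers above): mask the bit with a bitwise AND and a comparison, so no integer division happens at all:
cv::Mat masked = input_img & (1 << i); // keep only bit i of every pixel
cv::Mat plane = masked > 0;            // CV_8U: 255 where bit i was set, 0 elsewhere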

Convert Mat to Array/Vector in OpenCV

I am a novice in OpenCV. Recently, I have had trouble finding OpenCV functions to convert from Mat to Array. I researched the .ptr and .at methods available in the OpenCV APIs, but I could not get proper data. I would like to have a direct conversion from Mat to an array (if available; if not, to a vector). I need OpenCV functions because the code has to undergo high-level synthesis in Vivado HLS. Please help.
If the memory of the Mat mat is continuous (all its data is continuous), you can directly get its data to a 1D array:
std::vector<uchar> array(mat.rows*mat.cols*mat.channels());
if (mat.isContinuous())
array = mat.data;
Otherwise, you have to get its data row by row, e.g. to a 2D array:
uchar **array = new uchar*[mat.rows];
for (int i = 0; i < mat.rows; ++i) {
    array[i] = new uchar[mat.cols*mat.channels()];
    std::memcpy(array[i], mat.ptr<uchar>(i), mat.cols*mat.channels()); // copy each row (needs <cstring>)
}
UPDATE: It will be easier if you're using std::vector, where you can do it like this:
std::vector<uchar> array;
if (mat.isContinuous()) {
    // array.assign(mat.datastart, mat.dataend); // <- has problems for sub-matrix like mat = big_mat.row(i)
    array.assign(mat.data, mat.data + mat.total()*mat.channels());
} else {
    for (int i = 0; i < mat.rows; ++i) {
        array.insert(array.end(), mat.ptr<uchar>(i), mat.ptr<uchar>(i) + mat.cols*mat.channels());
    }
}
p.s.: For cv::Mats of other types, like CV_32F, you should do it like this:
std::vector<float> array;
if (mat.isContinuous()) {
    // array.assign((float*)mat.datastart, (float*)mat.dataend); // <- has problems for sub-matrix like mat = big_mat.row(i)
    array.assign((float*)mat.data, (float*)mat.data + mat.total()*mat.channels());
} else {
    for (int i = 0; i < mat.rows; ++i) {
        array.insert(array.end(), mat.ptr<float>(i), mat.ptr<float>(i) + mat.cols*mat.channels());
    }
}
UPDATE2: For OpenCV Mat data continuity, it can be summarized as follows:
Matrices created by imread(), clone(), or a constructor will always be continuous.
The only time a matrix will not be continuous is when it borrows data from an existing matrix (i.e. it was created out of an ROI of a big mat), unless the borrowed region is itself continuous within the big matrix, e.g. a single row, or multiple rows spanning the full original width.
Please check out this code snippet for demonstration.
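For illustration, a minimal sketch of these rules:
cv::Mat big = cv::Mat::zeros(10, 10, CV_8UC1); // created by a constructor -> continuous
cv::Mat roi = big(cv::Rect(2, 2, 4, 4));       // borrows a sub-rectangle -> not continuous
cv::Mat row = big.row(3);                      // a single borrowed row -> still continuous
CV_Assert(big.isContinuous() && !roi.isContinuous() && row.isContinuous());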
Can be done in two lines :)
Mat to array
uchar * arr = image.isContinuous()? image.data: image.clone().data;
uint length = image.total()*image.channels();
Mat to vector
cv::Mat flat = image.reshape(1, image.total()*image.channels());
std::vector<uchar> vec = image.isContinuous()? flat : flat.clone();
Both work for any general cv::Mat.
Explanation with a working example
cv::Mat image;
image = cv::imread(argv[1], cv::IMREAD_UNCHANGED);   // Read the file
cv::namedWindow("cvmat", cv::WINDOW_AUTOSIZE);       // Create a window for display.
cv::imshow("cvmat", image);                          // Show our image inside it.

// flatten the mat.
uint totalElements = image.total()*image.channels(); // Note: image.total() == rows*cols.
cv::Mat flat = image.reshape(1, totalElements);      // flat mat of 1 channel, O(1) operation
if (!image.isContinuous()) {
    flat = flat.clone();                             // O(N)
}
// flat.data is your array pointer
auto * ptr = flat.data;                              // usually it's uchar*
// You have your array, its length is flat.total() [rows = totalElements, cols = 1]

// Converting to vector
std::vector<uchar> vec(flat.data, flat.data + flat.total());

// Testing by reconstruction of cvMat
cv::Mat restored = cv::Mat(image.rows, image.cols, image.type(), ptr); // OR vec.data() instead of ptr
cv::namedWindow("reconstructed", cv::WINDOW_AUTOSIZE);
cv::imshow("reconstructed", restored);
cv::waitKey(0);
Extended explanation:
Mat is stored as a contiguous block of memory, if created using one of its constructors or when copied to another Mat using clone() or similar methods. To convert to an array or vector we need the address of its first block and array/vector length.
Pointer to internal memory block
Mat::data is a public uchar pointer to its memory.
But this memory may not be contiguous. As explained in other answers, we can check whether mat.data points to contiguous memory using mat.isContinuous(). Unless you need extreme efficiency, you can obtain a continuous version of the mat using mat.clone() in O(N) time (N = number of elements from all channels). However, when dealing with images read by cv::imread(), we will rarely ever encounter a non-continuous mat.
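A minimal sketch of that pattern, keeping the clone alive in a named Mat so the pointer stays valid:
cv::Mat cont = mat.isContinuous() ? mat : mat.clone();
uchar* ptr = cont.data; // valid for as long as cont exists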
Length of array/vector
Q: Should it be rows*cols*channels, right?
A: Not always. For an N-dimensional Mat it can be rows*cols*x*y*channels.
Q: Should it be equal to mat.total()?
A: True for a single-channel mat, but not for a multi-channel mat.
The length of the array/vector is slightly tricky because of OpenCV's poor documentation. The public member Mat::size stores only the dimensions of the Mat, without channels: for an RGB image, Mat.size = [rows, cols] and not [rows, cols, channels]. Mat.total() returns the total number of elements in a single channel of the mat, which is equal to the product of the values in mat.size. For an RGB image, total() = rows*cols. Thus, for any general Mat, the length of the continuous memory block is mat.total()*mat.channels().
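A small sketch of those relationships, with illustrative dimensions of my own choosing:
cv::Mat bgr(480, 640, CV_8UC3);               // 3-channel BGR image
CV_Assert(bgr.total() == 480 * 640);          // total() counts per-channel elements
size_t length = bgr.total() * bgr.channels(); // full buffer length: 921600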
Reconstructing Mat from array/vector
Apart from the array/vector we also need the original Mat's mat.size [array-like] and mat.type() [int]. Then, using one of the constructors that take a data pointer, we can obtain the original Mat. The optional step argument is not required because our data pointer points to continuous memory. I used this method to pass a Mat as a Uint8Array between Node.js and C++. This avoided writing C++ bindings for cv::Mat with node-addon-api.
References:
Create memory continuous Mat
OpenCV Mat data layout
Mat from array
Here is another possible solution, assuming the matrix has one column (you can reshape the original Mat to a one-column Mat via reshape):
Mat matrix= Mat::zeros(20, 1, CV_32FC1);
vector<float> vec;
matrix.col(0).copyTo(vec);
None of the examples provided here work for the generic case, which is N-dimensional matrices. Anything using "rows" assumes there are only columns and rows; a 4-dimensional matrix might have more.
Here is some example code that copies a non-continuous N-dimensional matrix into a continuous memory block, then converts it back into a cv::Mat:
#include <iostream>
#include <cstdint>
#include <cstring>
#include <opencv2/opencv.hpp>
int main(int argc, char**argv)
{
if ( argc != 2 )
{
std::cerr << "Usage: " << argv[0] << " <Image_Path>\n";
return -1;
}
cv::Mat origSource = cv::imread(argv[1],1);
if (!origSource.data) {
std::cerr << "Can't read image";
return -1;
}
// this will select a subsection of the original source image - WITHOUT copying the data
// (the header will point to a region of interest, adjusting data pointers and row step sizes)
cv::Mat sourceMat = origSource(cv::Range(origSource.size[0]/4,(3*origSource.size[0])/4),cv::Range(origSource.size[1]/4,(3*origSource.size[1])/4));
// correctly copy the contents of an N dimensional cv::Mat
// works just as fast as copying a 2D mat, but has much more difficult to read code :)
// see http://stackoverflow.com/questions/18882242/how-do-i-get-the-size-of-a-multi-dimensional-cvmat-mat-or-matnd
// copy this code in your own cvMat_To_Char_Array() function which really OpenCV should provide somehow...
// keep in mind that even Mat::clone() aligns each row at a 4 byte boundary, so uneven sized images always have stepgaps
size_t totalsize = sourceMat.step[sourceMat.dims-1];
const size_t rowsize = sourceMat.step[sourceMat.dims-1] * sourceMat.size[sourceMat.dims-1];
size_t coordinates[sourceMat.dims-1] = {0};
std::cout << "Image dimensions: ";
for (int t=0;t<sourceMat.dims;t++)
{
// calculate total size of multi dimensional matrix by multiplying dimensions
totalsize*=sourceMat.size[t];
std::cout << (t>0?" X ":"") << sourceMat.size[t];
}
// Allocate destination image buffer
uint8_t * imagebuffer = new uint8_t[totalsize];
size_t srcptr=0,dptr=0;
std::cout << std::endl;
std::cout << "One pixel in image has " << sourceMat.step[sourceMat.dims-1] << " bytes" <<std::endl;
std::cout << "Copying data in blocks of " << rowsize << " bytes" << std::endl ;
std::cout << "Total size is " << totalsize << " bytes" << std::endl;
while (dptr<totalsize) {
// we copy entire rows at once, so lowest iterator is always [dims-2]
// this is legal since OpenCV does not use 1 dimensional matrices internally (a 1D matrix is a 2d matrix with only 1 row)
std::memcpy(&imagebuffer[dptr],&(((uint8_t*)sourceMat.data)[srcptr]),rowsize);
// destination matrix has no gaps so rows follow each other directly
dptr += rowsize;
// src matrix can have gaps so we need to calculate the address of the start of the next row the hard way
// see *brief* text in opencv2/core/mat.hpp for address calculation
coordinates[sourceMat.dims-2]++;
srcptr = 0;
for (int t=sourceMat.dims-2;t>=0;t--) {
if (coordinates[t]>=sourceMat.size[t]) {
if (t==0) break;
coordinates[t]=0;
coordinates[t-1]++;
}
srcptr += sourceMat.step[t]*coordinates[t];
}
}
// this constructor assumes that imagebuffer is gap-less (if not, a complete array of step sizes must be given, too)
cv::Mat destination=cv::Mat(sourceMat.dims, sourceMat.size, sourceMat.type(), (void*)imagebuffer);
// and just to proof that sourceImage points to the same memory as origSource, we strike it through
cv::line(sourceMat,cv::Point(0,0),cv::Point(sourceMat.size[1],sourceMat.size[0]),CV_RGB(255,0,0),3);
cv::imshow("original image",origSource);
cv::imshow("partial image",sourceMat);
cv::imshow("copied image",destination);
while (cv::waitKey(60)!='q');
}
Instead of getting the image row by row, you can put it directly into an array. For a CV_8U image you can use a byte array; for other types check here. (Note that this snippet uses the OpenCV Java API.)
Mat img; // Should be CV_8U for using byte[]
int size = (int) img.total() * img.channels();
byte[] data = new byte[size];
img.get(0, 0, data); // Gets all pixels
uchar * matToBytes(cv::Mat image)
{
    size_t size = image.total() * image.elemSize();
    uchar * bytes = new uchar[size]; // delete[] later
    std::memcpy(bytes, image.data, size);
    return bytes;
}
You can use iterators:
Mat matrix = ...;
std::vector<float> vec(matrix.begin<float>(), matrix.end<float>());
cv::Mat m;
m.create(10, 10, CV_32FC3);
float *array = (float *)malloc(3*sizeof(float)*10*10);
cv::MatConstIterator_<cv::Vec3f> it = m.begin<cv::Vec3f>();
for (unsigned i = 0; it != m.end<cv::Vec3f>(); it++) {
    for (unsigned j = 0; j < 3; j++) {
        *(array + i) = (*it)[j];
        i++;
    }
}
Now you have a float array. In case of 8 bit, simply change float to uchar, Vec3f to Vec3b and CV_32FC3 to CV_8UC3.
If you know that your img is a 3-channel Mat, then you can try this code:
Vec3b* dados = new Vec3b[img.rows*img.cols];
for (int i = 0; i < img.rows; i++)
    for (int j = 0; j < img.cols; j++)
        dados[i*img.cols + j] = img.at<Vec3b>(i, j);
If you want to check the (i,j) Vec3b you can write:
std::cout << (Vec3b)img.at<Vec3b>(i, j) << std::endl;
std::cout << (Vec3b)dados[i*img.cols + j] << std::endl;
Since the answer above is not very accurate, as mentioned in its comments, but its edit queue is full, I have to add the correct one-liners.
Mat(uchar, 1 channel) to vector(uchar):
std::vector<uchar> vec = (image.isContinuous() ? image : image.clone()).reshape(1, 1); // data copy here
vector(any type) to Mat(the same type):
Mat m(vec, false); // false(by default) -- do not copy data

Converting to Floating Point Image from .tif

I am relatively new to C++ and coding in general, and have run into a problem when attempting to convert an image to a floating point image. I am attempting to do this to eliminate round-off errors when calculating the mean and standard deviation of pixel intensity for images, as it starts to affect the data quite substantially. My code is below.
Mat img = imread("Cells2.tif");
cv::namedWindow("stuff", CV_WINDOW_NORMAL);
cv::imshow("stuff", img);
CvMat cvmat = img;
Mat dst = cvCreateImage(cvGetSize(&cvmat), IPL_DEPTH_32F, 1);
cvConvertScale(&cvmat, &dst);
cvScale(&dst, &dst, 1.0/255);
cvNamedWindow("Test", CV_WINDOW_NORMAL);
cvShowImage("Test", &dst);
And I am running into this error
OpenCV Error: Bad argument (Array should be CvMat or IplImage) in an unknown function, file ......\modules\core\src\array.cpp, line 1238
I've looked everywhere, and everyone was saying to convert img to CvMat, which I attempted above.
When I did that, as the above code shows, I get
OpenCV Error: Bad argument (Unknown array type) in unknown function, file ......\modules\core\src\matrix.cpp line 697
Thanks for your help in advance.
Just use the C++ OpenCV interface instead of the C interface, and use the convertTo function to convert between data types:
Mat img = imread("Cells2.tif");
cv::imshow("source", img);

Mat dst; // destination image

// check if we have an RGB or grayscale image
if (img.channels() == 3) {
    // convert 3-channel (RGB) 8-bit uchar image to 32-bit float
    img.convertTo(dst, CV_32FC3);
}
else if (img.channels() == 1) {
    // convert 1-channel (grayscale) 8-bit uchar image to 32-bit float
    img.convertTo(dst, CV_32FC1);
}

// display output; note that to display the dst image correctly
// we have to divide each element of dst by 255 to keep
// the pixel values in the range [0,1].
cv::imshow("output", dst/255);
waitKey();
Second part of the question: to calculate the mean of all elements in dst:
cv::Scalar avg_pixel;
double avg;
// note that Scalar is a vector.
// If your image is RGB, Scalar will contain 3 values,
// representing color values for each channel.
avg_pixel = cv::mean(dst);
if (dst.channels() == 3) {
    // if 3 channels
    avg = (avg_pixel[0] + avg_pixel[1] + avg_pixel[2]) / 3;
}
if (dst.channels() == 1) {
    avg = avg_pixel[0];
}
cout << "average element of m: " << avg << endl;
Here is my code for calculating the average in C++ OpenCV.
int NumPixels = img.total();
double avg;
double c = 0;
for (int y = 0; y < img.cols; y++)
    for (int x = 0; x < img.rows; x++)
        c += img.at<uchar>(x, y);
avg = c/NumPixels;
cout << "Avg Value\n" << 255*avg;
For MATLAB I just load the image and take Q = mean(img(:)); which returns 1776.23.
And for the return of 1612.36 I used cv::Scalar z = mean(dst);