I'm trying to convert an image from the IplImage container to a Mat object using cvarrToMat.
I noticed that the converted Mat image displays a number of seemingly random data bytes at the end (i.e. uninitialized bytes from memory), but I don't understand why this happens or how to fix it. See the code and results below.
I'm using OpenCV 2.4.13.7 and working in Visual Studio 2017 (Visual C++ 2017).
I filled a data array with pixel-wise recognizable values for a 3*4 resolution image with a depth of 8 bits and 3 color channels. When the data of the converted image is printed, it turns out that a pixel (3 bytes) is skipped at the end of each row.
#include "pch.h"
#include <iostream>
#include "cv.hpp"
#include "highgui.hpp"
using namespace std;
using namespace cv;
int main()
{
IplImage* ipl = NULL;
const char* windowName = "Mat image";
int i = 0;
ipl = cvCreateImage(cvSize(3, 4), IPL_DEPTH_8U, 3);
char array[3* 4 * 3] = { 11,12,13, 21,22,23, 31,32,33, 41,42,43, 51, 52, 53, 61, 62, 63, 71, 72, 73, 81, 82, 83, 91, 92, 93, 101, 102, 103, 111, 112, 113, 121, 122, 123 };
ipl->imageData = array;
printf("ipl->imageData = [ ");
for (i = 0; i < (ipl->width*ipl->height*ipl->nChannels); i++) {
printf("%u, ", ipl->imageData[i]);
}
printf("]\n\n");
Mat ipl2 = cvarrToMat(ipl);
cout << "ipl2 = " << endl << " " << ipl2 << endl << endl;
//display dummy image in window to use waitKey function
Mat M(3, 3, CV_8UC3, Scalar(0, 0, 255));
namedWindow(windowName, CV_WINDOW_AUTOSIZE);
imshow(windowName, M);
waitKey(0);
cvReleaseImage(&ipl);
}
Result:
Console window output for 3*4 resolution image
If the same is done for a 2*2 pixel resolution image, then only two bytes are skipped at the end of each row. I cannot explain this either.
Console window output for same code only with 2*2 resolution image
The reason I would like to do this conversion is that I have a working C routine that imports image data from a file (long story about old image file formats with raw image data) into an IplImage for further processing, and I would like to keep it for now. However, I would like to start processing the images as Mat, since that seems more widely supported and simpler to use in general. At least it did until I saw this.
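For reference, a minimal sketch (my assumption, not the original import routine) of how the packed array could be copied into the IplImage row by row while respecting widthStep; cvCreateImage pads each row to a 4-byte boundary, so a 3-pixel, 3-channel row occupies 12 bytes rather than 9, and a 2-pixel row 8 bytes rather than 6, which matches the skipped bytes described above (memcpy is from <cstring>):
// Sketch: copy the tightly packed array into the buffer allocated by
// cvCreateImage, row by row, instead of reassigning ipl->imageData.
int rowBytes = ipl->width * ipl->nChannels;   // 9 payload bytes per row here
for (int r = 0; r < ipl->height; r++) {
    memcpy(ipl->imageData + r * ipl->widthStep, array + r * rowBytes, rowBytes);
}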
Disclaimer: This is not an answer to the question itself, but it should help the author investigate the problem further. Also, see the comments beneath the question.
As a small test, I use this 3x3 image (you can hardly see it; have a look at the "raw" input of my question for the link):
In Image Watch (Visual Studio extension), it'll look like this:
Let's try the following code:
// Read input image.
cv::Mat img = cv::imread("test.png", cv::IMREAD_COLOR);

// Output pixel values.
for (int x = 0; x < img.cols; x++)
{
    for (int y = 0; y < img.rows; y++)
    {
        printf("%d ", img.at<cv::Vec3b>(y, x)[0]);
        printf("%d ", img.at<cv::Vec3b>(y, x)[1]);
        printf("%d \n", img.at<cv::Vec3b>(y, x)[2]);
    }
}
We'll get this output:
0 255 255
0 255 255
255 255 255
255 0 255
255 255 255
255 0 255
0 0 255
0 255 255
255 0 255
Now, you could use nested loops to check whether the image data (or better: the pixel values) are identical in your IplImage ipl and in your Mat ipl2.
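For example, such a check could look like this (a sketch, assuming the 8-bit, 3-channel images from the question; note that IplImage addresses rows via widthStep, which includes any row padding):
// Sketch: compare every channel value of the IplImage and the converted Mat.
bool identical = true;
for (int y = 0; y < ipl->height; y++) {
    for (int x = 0; x < ipl->width; x++) {
        for (int c = 0; c < ipl->nChannels; c++) {
            uchar a = (uchar)ipl->imageData[y * ipl->widthStep + x * ipl->nChannels + c];
            uchar b = ipl2.at<cv::Vec3b>(y, x)[c];
            if (a != b) identical = false;
        }
    }
}
printf(identical ? "pixel values are identical\n" : "pixel values differ\n");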
Related
Question: how to load an image into OpenCV using a pointer.
Input: Pointer to image
Required Output: cv::Mat image
Explanation: you can do this (below) if the picture is in a directory:
String imageName("C:/Images/1.jpg");
Mat image;
image = imread(samples::findFile(imageName), IMREAD_COLOR);
I am trying to achieve the same, but using a pointer.
Thank you in advance for your attention to my question :)
There are a few cv::Mat constructors that create/initialize matrix headers for already existing user data, e.g. this one:
cv::Mat::Mat(int rows, int cols, int type, void* data, size_t step = AUTO_STEP)
Please see the given link for the complete parameter description. Nevertheless, regarding the data parameter, you must pay attention to the following:
Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it.
A very small example is given by this code snippet:
// Set up byte array: 3 rows, 3 columns, BGR values.
uint8_t data[3][3][3] = {
    {
        { 100, 110, 120 },   // row 0, col 0, BGR
        { 130, 140, 150 },   // row 0, col 1, BGR
        { 160, 170, 180 }    // row 0, col 2, BGR
    },
    {
        { 190, 200, 210 },   // row 1, col 0, BGR
        { 220, 230, 240 },   // row 1, col 1, BGR
        { 100, 120, 140 }    // row 1, col 2, BGR
    },
    {
        { 160, 180, 200 },   // row 2, col 0, BGR
        { 100, 130, 160 },   // row 2, col 1, BGR
        { 190, 220, 250 }    // row 2, col 2, BGR
    },
};
// Create cv::Mat header pointing to the image data.
cv::Mat image = Mat(3, 3, CV_8UC3, *data);
Inspecting image at runtime (here: Visual Studio, Image Watch) shows a proper result:
I am trying to figure out what the input buffer for a TensorFlow Lite model should look like.
My input should be a (1, 224, 224, 3) image buffer.
When I fill the input buffer with 0 or 255 (black or white images), I get the same answer.
uchar* in_data = new uchar[224 * 224 * 3];
for (int i = 0; i < 224 * 224 * 3; i++) {
    // in_data[i] = 0;
    in_data[i] = 255;
}

uchar* input_1 = interpreter_stage1->typed_input_tensor<uchar>(0);
input_1 = in_data;
This code gives me the same answer for any data I put in as input.
How should a proper input be constructed for the case where the model dimensions are (1, 224, 224, 3)?
For the easy case, when I only have a (1, 128) single-dimensional vector, everything works fine. But with this multidimensional case I don't know how to proceed.
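For reference, a minimal sketch of how the input could be filled (an assumption on my side: interpreter_stage1 is a tflite::Interpreter whose tensors have been allocated and whose input 0 is a uint8 tensor of shape 1x224x224x3). The pixel data has to be copied into the buffer returned by typed_input_tensor; reassigning the returned pointer, as in the snippet above, does not change the tensor contents:
// Sketch: copy interleaved HWC data into the input tensor's own buffer.
uchar* input_1 = interpreter_stage1->typed_input_tensor<uchar>(0);
memcpy(input_1, in_data, 224 * 224 * 3);   // memcpy from <cstring>
interpreter_stage1->Invoke();              // run inference on the copied data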
I'm quite new to OpenCV and I'm now using version 3.4.1 with the C++ implementation. I'm still exploring, so this question is not specific to a project, but is more of a "try to understand how it works" exercise. In the same spirit, I know that I'm somehow "reinventing the wheel" with this code, but I wrote this example to understand HOW IT WORKS.
The idea is:
Read an RGB image
Make it binary
Find Connected areas
Colour each area differently
As an example I'm using a 5x5 pixel RGB image saved as BMP. The image is a white box with black pixels all around its contour.
Up to the point where I get the ConnectedComponents matrix, named Mat::Labels, it all goes fine. If I print the Matrix I see exactly what I expect:
11111
10001
10001
10001
11111
Remember that I've inverted the threshold so it is correct to get 1 on the edges...
I then create a Mat with the same size as Mat::Labels but 3 channels, to colour it with RGB. This one is named Mat::ColoredLabels.
The next step is to instantiate a pointer that runs through Mat::Labels and, for each position where the value is 1, fill the corresponding Mat::ColoredLabels position with a colour.
HERE THINGS GO VERY WRONG! The pointer does not fetch Mat::Labels row by row as I would expect, but follows some other order.
Questions:
Am I doing something wrong, or is it "obvious" that the pointer fetching follows some "unpredictable" order?
How can I set the values of one matrix (Mat::ColoredLabels) based on the values of another matrix (Mat::Labels)?
#include "opencv2\highgui.hpp"
#include "opencv2\opencv.hpp"
#include <stdio.h>
using namespace cv;
int main(int argc, char *argv[]) {
char* FilePath = "";
Mat Img;
Mat ImgGray;
Mat ImgBinary;
Mat Labels;
uchar *P;
uchar *CP;
// Image acquisition
if (argc < 2) {
printf("Missing argument");
return -1;
}
FilePath = argv[1];
Img = imread(FilePath, CV_LOAD_IMAGE_COLOR);
if (Img.empty()) {
printf("Invalid image");
return -1;
}
// Convert to Gray...I know I could convert it right away while loading....
cvtColor(Img, ImgGray, CV_RGB2GRAY);
// Threshold (inverted) to obtain black background and white blobs-> it works
threshold(ImgGray, ImgBinary, 170, 255, CV_THRESH_BINARY_INV);
// Find Connected Components and put the 1/0 result in Mat::Labels
int BlobsNum = connectedComponents(ImgBinary, Labels, 8, CV_16U);
// Just to see what comes out with a 5x5 image. I get:
// 11111
// 10001
// 10001
// 10001
// 11111
std::cout << Labels << "\n";
// Prepare to fetch the Mat(s) with pointer to be fast
int nRows = Labels.rows;
int nCols = Labels.cols * Labels.channels();
if (Labels.isContinuous()) {
nCols *= nRows;
nRows = 1;
}
// Prepare a Mat as big as LAbels but with 3 channels to color different blobs
Mat ColoredLabels(Img.rows, Img.cols, CV_8UC3, cv::Scalar(127, 127, 127));
int ColoredLabelsNumChannels = ColoredLabels.channels();
// Fetch Mat::Labels and Mat::ColoredLabes with the same for cycle...
for (int i = 0; i < nRows; i++) {
// !!! HERE SOMETHING GOES WRONG !!!!
P = Labels.ptr<uchar>(i);
CP = ColoredLabels.ptr<uchar>(i);
for (int j = 0; j < nCols; j++) {
// The coloring operation does not work
if (P[j] > 0) {
CP[j*ColoredLabelsNumChannels] = 0;
CP[j*ColoredLabelsNumChannels + 1] = 0;
CP[j*ColoredLabelsNumChannels + 2] = 255;
}
}
}
std::cout << "\n" << ColoredLabels << "\n";
namedWindow("ColoredLabels", CV_WINDOW_NORMAL);
imshow("ColoredLabels", ColoredLabels);
waitKey(0);
printf("Execution completed succesfully");
return 0;
}
You used the connectedComponents function with the CV_16U parameter. This means that each element of the label image consists of 16 bits (hence "16") and has to be interpreted as an unsigned integer (hence "U"). And since ptr returns a pointer, you have to dereference it to get the value.
Therefore you should access label image elements in the following way:
unsigned short val = *Labels.ptr<unsigned short>(i); // or uint16_t
unsigned short val = Labels.at<unsigned short>(y, x);
Regarding your second question, it is as simple as that, but of course you have to understand which type casts result in loss of precision or overflow and which ones do not.
mat0.at<int>(y, x) = mat1.at<int>(y, x);  // both matrices have CV_32S type
mat2.at<int>(y, x) = mat3.at<char>(y, x); // CV_32S and CV_8S; implicit cast occurs
// Possible information loss when assigning 32-bit integer values to 8-bit ints:
// mat4.at<unsigned char>(y, x) = mat5.at<unsigned int>(y, x); // CV_8U and CV_32U
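Applied to the coloring loop from the question, a minimal sketch (assuming Labels is CV_16U and ColoredLabels is CV_8UC3, as above) could look like this:
for (int y = 0; y < Labels.rows; y++) {
    // One row of 16-bit labels, one row of 3-channel 8-bit pixels.
    const unsigned short* labelRow = Labels.ptr<unsigned short>(y);
    Vec3b* colorRow = ColoredLabels.ptr<Vec3b>(y);
    for (int x = 0; x < Labels.cols; x++) {
        if (labelRow[x] > 0) {
            colorRow[x] = Vec3b(0, 0, 255); // BGR: red
        }
    }
}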
I have recently started manipulating images with OpenCV.
As far as I know, cv::max(input1, input2, output) is used for finding the element-wise maximum BGR values of 2 images. I'd like to take the maximum within one colour channel only; see the following example for 2 BGR mats (mat size 2x2):
input1= [110, 100, 90, 109, 99, 89;
111, 99, 89, 108, 98, 88]
input2= [97, 141, 158, 95, 138, 157;
98, 149, 169, 97, 148, 168]
I want to take the maximum only over the blue channel values, and carry over to my output mat whatever green and red values are associated with them; so I want the result to be like this:
output= [110, 100, 90, 109, 99, 89;
111, 99, 89, 108, 98, 88]
Yes, in this example the output mat happens to become a copy of input1, but please notice that running
cv::max(input1, input2, output);
gives
output= [110, 141, 158, 109, 138, 157;
111, 149, 169, 108, 148, 168]
which mixes the channels of the 2 mats.
Sorry for writing at such length; I just wanted to be clear. Thank you.
UPDATE: I have already implemented a solution using C++ for loops. Honestly, it works, but I'm looking for something faster and simpler, if available.
UPDATE2: From 2 input images, I need the maximum value from, say, the blue channel, stored into an output together with its associated green and red values.
You can split the images, calculate the max for the channel you want, and then merge the result into an output array. Not much simpler, but I think it is better than a classical for-loop approach. This is what I wrote in the limited time available, and it works as you want.
int ssz[2] = {2,2};
double data1[12] = {110,100,90,109,99,89,111,99,89,108,98,88};
double data2[12] = {120,141,158,95,138,157,98,149,169,97,148,168};
Mat in1(2,ssz,CV_64FC3,data1);
Mat in2(2,ssz,CV_64FC3,data2);
Mat out = in1.clone();
Mat c1[3]; // index 0 will be the blue channel
Mat c2[3];
Mat oc[3];
split(in1,c1);
split(in2,c2);
split(out,oc);
cv::max(c1[0],c2[0],oc[0]);
cv::merge(oc,3,out);
To show that it works, I changed input2's first element to 120. I tested and debugged it; it works.
I made this using classic C++ for loops, which is all I'm good for :D However, I'm still expecting a better answer, at least one that avoids .at.
int imdepth = CV_32F;
Mat outBlue (2, 2, imdepth);
Mat outGreen (2, 2, imdepth);
Mat outRed (2, 2, imdepth);

for (int i = 0; i < 2; i++) {
    for (int j = 0; j < 2; j++) {
        float blue1 = in1b.at<float>(i,j); // in1b, in2b, in1g, etc. are
        float blue2 = in2b.at<float>(i,j); // channels from img1 and img2
        if (blue1 > blue2) {
            outBlue.at<float>(i,j) = blue1;
            outGreen.at<float>(i,j) = in1g.at<float>(i,j);
            outRed.at<float>(i,j) = in1r.at<float>(i,j);
        } else {
            outBlue.at<float>(i,j) = blue2;
            outGreen.at<float>(i,j) = in2g.at<float>(i,j);
            outRed.at<float>(i,j) = in2r.at<float>(i,j);
        }
    }
}

vector<Mat> out(3);
out[0] = outBlue;
out[1] = outGreen;
out[2] = outRed;

Mat output (2, 2, CV_32FC3);
merge (out, output);
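One way to avoid .at entirely (a sketch, assuming in1 and in2 are the CV_32FC3 input mats and in1b / in2b their blue channels, as above) is to build a mask from the blue-channel comparison and copy whole pixels with copyTo:
Mat mask = in2b > in1b;    // 8-bit mask, 255 where in2 has the larger blue value
Mat output = in1.clone();  // start from in1's pixels (B, G and R)
in2.copyTo(output, mask);  // overwrite whole pixels where in2 wins on blue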
We are writing a program that takes input from a webcam, subtracts all color except the green values, finds the separated BLOBs and enumerates them. Eventually this will be used as input for a video game, but that is irrelevant right now.
The code in question is not the code that actually does all of this, but rather a rewritten segment of it, used to test how findContours actually works.
Usually, in image processing, we are taught that an image is read from the top left to the bottom right, but after some testing it seems that findContours does the exact opposite, starting at the lower right corner and moving towards the upper left!
So the question here is: in which order does findContours find its contours? Am I right in my assumption, or is my own code confusing me?
Input: Blobtest06
"Components" window
Console
#include <opencv2/opencv.hpp>
#include <iostream>
#include <opencv2/core/mat.hpp>
#include <Windows.h> // for sleep function

using namespace cv;
using namespace std;

void IsolateGreen(Mat mIn, Mat& mOut)
{
    Mat inImg (mIn.rows, mIn.cols, CV_8UC3, Scalar(1,2,3));
    inImg.data = mIn.data;

    Mat channelRed (inImg.rows, inImg.cols, CV_8UC1);
    Mat channelGreen (inImg.rows, inImg.cols, CV_8UC1);
    Mat channelBlue (inImg.rows, inImg.cols, CV_8UC1);

    Mat outImg[] = { channelRed, channelGreen, channelBlue };
    int fromTo[] = { 0,2, 1,1, 2,0 };
    mixChannels( &inImg, 1, outImg, 3, fromTo, 3);

    mOut = (channelGreen) - (channelRed + channelBlue);

    threshold(mOut, mOut, 5, 255, THRESH_BINARY);
    erode(mOut, mOut, Mat(), Point (-1,-1), 1);
    dilate(mOut, mOut, Mat(), Point(-1,-1), 2);
}

void FindContours(Mat& mDst, Mat mGreenScale, vector<vector<Point>>& vecContours, vector<Vec4i>& vecHierarchy, Mat img)
{
    // This is empty at all times. We need it to avoid crashes.
    vector<Vec4i> vecHierarchy2;

    // mGreenScale = mGreenScale > 1; //// MIGHT be entirely unneeded
    mDst = img > 1;

    findContours( mGreenScale, vecContours, vecHierarchy,
                  CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE );

    /* Colors, in order:
       1st. = Red
       2nd. = Dark red
       3rd. = Purple
       4th. = Blue
       5th. = Baby blue
       6th. = Green
       7th. = Olive green
       8th. = Dark green
    */
    int aRed[]   = {255, 100, 128,   0, 191,   0, 202,   0};
    int aGreen[] = {  0,   0,   0,   0, 239, 255, 255, 100};
    int aBlue[]  = {  0,   0, 128, 255, 255,   0, 112,   0};
    string sColor[] = {"Red", "Dark red", "Purple", "Blue", "Baby blue", "Green", "Light green", "Dark green"};

    // It's important that we check if there is anything in vecHierarchy (else) {crash} :P
    // The function drawContours cannot handle an empty vecHierarchy.
    if (vecHierarchy != vecHierarchy2)
    {
        // iterate through all the top-level contours,
        for(int idx = 0; idx >= 0; idx = vecHierarchy[idx][0] )
        {
            // draw each connected component with its own FIXED color
            Scalar color( aBlue[idx], aGreen[idx], aRed[idx] );
            drawContours( mDst, vecContours, idx, color, /*1*/ CV_FILLED, 8, vecHierarchy );
            cout << vecContours[idx][0] << " - - " << sColor[idx] << " - - Index: " << idx << endl;
        }
    }

    cout << "Objects: ";
    cout << vecContours.size();
    cout << endl;
}

int main()
{
    Mat img = imread("Blobtest06.png");
    Mat mGreenScale;

    // These next 5 instances tie to contour finding
    cvtColor(img, mGreenScale, CV_8UC3);  // sets the right rows and cols
    vector<vector<Point>> vecContours;    // points to each pixel in a contour
    vector<Vec4i> vecHierarchy;           // A hierarchy for the functions
    Mat mDst = Mat::zeros(mGreenScale.rows, mGreenScale.cols, CV_8UC3); // mDst image

    IsolateGreen(img, mGreenScale);
    FindContours(mDst, mGreenScale, vecContours, vecHierarchy, img);

    namedWindow( "Components", 1 );
    imshow( "Components", mDst );
    namedWindow( "Source", 1 );
    imshow( "Source", mGreenScale );

    waitKey();
    return 0;
}
PS: Sorry for the horrible syntax. The site is being difficult and it's just about lunchtime.
If you care about the implementation details of OpenCV, which is an open-source library by the way, you can always download the source and read it yourself.
Warning: the C++ API uses the C API for some things, including findContours(). So if you check the file modules/imgproc/src/contours.cpp at line 1472, you'll see the C++ implementation of this function:
1472 void cv::findContours( InputOutputArray _image, OutputArrayOfArrays _contours,
1473                        OutputArray _hierarchy, int mode, int method, Point offset )
1474 {
1475     Mat image = _image.getMat();
1476     MemStorage storage(cvCreateMemStorage());
1477     CvMat _cimage = image;
1478     CvSeq* _ccontours = 0;
1479     if( _hierarchy.needed() )
1480         _hierarchy.clear();
1481     cvFindContours(&_cimage, storage, &_ccontours, sizeof(CvContour), mode, method, offset);
1482     if( !_ccontours )
1483     {
1484         _contours.clear();
1485         return;
1486     }
This calls cvFindContours(), which is from the C API and is defined in the same file at line 1424.
The scanning process itself happens in cvFindNextContour(), located at line 794:
793 CvSeq *
794 cvFindNextContour( CvContourScanner scanner )
795 {
and a little further down you can clearly see the row-by-row scan (top to bottom, left to right):
824     for( ; y < height; y++, img += step )
825     {
826         for( ; x < width; x++ )
827         {