How to copy a cv::Mat image to an unsigned char* image? (OpenCV, C++)

How to copy "Mat& imgSrc" to "unsigned char* imgSrc"?
void BGR2NV21(unsigned char* bgrData, int w, int h, unsigned char* nv21Data)
{ ............. }

int enhanceRGB(cv::Mat& fieldFaceImgSrc, int iWidth, int iHeight, cv::Mat& faceImgDst)
{
    cv::imwrite("enhanceInput.jpg", fieldFaceImgSrc);   // it's ok
    cv::Mat faceBackupData;
    faceBackupData = fieldFaceImgSrc.clone();            // it's ok
    cv::imwrite("enhanceput.jpg", faceBackupData);       // it's ok
    unsigned char* pcfieldFaceDataNV21 = (unsigned char*)faceBackupData.data;
    BGR2NV21(pcfieldFaceDataNV21,                        // it's bad: pcfieldFaceDataNV21 holds bad data
             iWidth, iHeight, pcfieldFaceDataNV21);
}
Thanks for your help.

Is there an error message or is the image just "wrong"?
Two obvious things to check: have you passed the image width and height the right way around? That is a constant source of pain when moving images between camera APIs (x, y) and image-processing matrix APIs (row, col).
Also, OpenCV may pad the rows of an image to 32-bit (4-byte) boundaries; see "Stride on image using OpenCV C++".
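A quick way to check for that padding before doing a raw memcpy (just a sketch; img stands for whatever Mat you are about to copy):

// Sketch: detect whether a flat width*height*channels memcpy is safe for this Mat
if (!img.isContinuous() || img.step[0] != img.cols * img.elemSize())
    std::cout << "rows are padded (or this is a submatrix): copy row by row using img.step instead\n";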

One reason it may fail is how the channels are managed.
OpenCV's Mat stores its data contiguously as long as it is not a submatrix.
The storage follows row-major ordering, which means the matrix:
|1 2 3|
|4 5 6|
|7 8 9|
is laid out behind a linear pointer in the order 1,2,3,4,5,6,7,8,9.
The same is true for the channels, i.e. suppose two pixels with the values [255,64,128] and [64,12,34] at row = 0, col = 0 and row = 0, col = 1; they are stored contiguously in memory (the first six values of the data will be 255,64,128,64,12,34).
Now the easiest way to copy the data of a Mat to another data pointer is:
std::memcpy(dst, src, rows * cols * elemSize);
or, if you pass the Mat object:
if (src.isContinuous())
    std::memcpy(dst, src.ptr(), src.rows * src.step); // for a one-channel 8-bit Mat src.step == src.cols; in general src.step == src.cols * src.elemSize() when there is no padding
else
    for (int r = 0; r < src.rows; r++, dst += src.cols * src.elemSize())
        std::memcpy(dst, src.ptr(r), src.cols * src.elemSize()); // copy only the payload of each row, skipping any padding

Sorry, there are still problems; the code is:
int enhanceRGB(cv::Mat fieldFaceImgSrc, int iWidth, int iHeight, cv::Mat& faceImgDst)
{
    cv::imwrite("enhanceInput.jpg", fieldFaceImgSrc);   // it's ok
    cv::Mat faceBackupData;
    faceBackupData = fieldFaceImgSrc.clone();
    cv::imwrite("enhanceput.jpg", faceBackupData);       // it's ok
    unsigned char* testdata = (unsigned char*)malloc(iWidth * iHeight * 3 * sizeof(unsigned char));
    memcpy(testdata, faceBackupData.ptr(), iWidth * iHeight * 3);
    cv::Mat FaceData(iHeight, iWidth, CV_8UC3, testdata);
    cv::imwrite("testput.jpg", FaceData);                 // it's bad
}
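It may simply be that iWidth/iHeight do not match the Mat you are copying from, or that its rows are padded. Below is a minimal sketch of a copy that guards against both, using the variable names from the snippet above; it assumes the image really is CV_8UC3 and has not been tested against your data:

size_t rowBytes = faceBackupData.cols * faceBackupData.elemSize();     // tight row size, no padding
unsigned char *testdata = (unsigned char*)malloc(faceBackupData.rows * rowBytes);
if (faceBackupData.isContinuous()) {
    memcpy(testdata, faceBackupData.ptr(), faceBackupData.rows * rowBytes);
} else {
    unsigned char *dst = testdata;
    for (int r = 0; r < faceBackupData.rows; ++r, dst += rowBytes)
        memcpy(dst, faceBackupData.ptr(r), rowBytes);                   // row-by-row copy, skipping padding
}
// Use the Mat's own dimensions rather than iWidth/iHeight, in case they were swapped
cv::Mat FaceData(faceBackupData.rows, faceBackupData.cols, CV_8UC3, testdata);
cv::imwrite("testput.jpg", FaceData);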

Related

Read a 3D Dicom image with DCMTK and convert it to OpenCV Mat

I have a 3D DICOM image which is [512, 512, 5] (rows, cols, slices). I want to read it with the DCMTK toolkit and convert it to an OpenCV Mat object. The image is 16-bit unsigned int.
My question is: does anyone know the correct way to convert this DICOM image into a Mat object? How do I properly read all the slices with the method getOutputData?
Based on the comments of @Alan Birtles, it is possible to specify which frame you want to read in the getOutputData method. After reading each frame, you simply merge the Mat objects into a single Mat.
I wrote this code to get the whole volume:
DicomImage *image = new DicomImage(file);
// Get the information
unsigned int nRows = image->getHeight();
unsigned int nCols = image->getWidth();
unsigned int nImgs = image->getFrameCount();
vector<Mat> slices(nImgs);
// Loop over each slice
for (unsigned int k = 0; k < nImgs; k++) {
    Uint16 *pixelData = (Uint16 *)(image->getOutputData(16 /* bits */, k /* slice */));
    slices[k] = Mat(nRows, nCols, CV_16U, pixelData).clone();
}
Mat img;
// Merge the slices into a single img
merge(slices, img);
cout << img.size() << endl;
cout << img.channels() << endl;
// Output:
// [512 x 512]
// 5
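Note that merge() stacks the slices as the channels of one 2-D Mat rather than building a true 3-D volume. If you later need slice k back as its own CV_16U image, something along these lines should work (untested sketch):

Mat slice_k;
extractChannel(img, slice_k, k);   // k in [0, nImgs)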

OpenCV 3 C++ Mat fetching with pointer goes random

I'm quite new to OpenCV and I'm now using version 3.4.1 with the C++ interface. I'm still exploring, so this question is not specific to a project but is more of a "try to understand how it works". Please also consider that I know I'm somewhat "reinventing the wheel" with this code, but I wrote this example to understand how it works.
The idea is:
Read an RGB image
Make it binary
Find Connected areas
Colour each area differently
As an example I'm using a 5x5 pixel RGB image saved as BMP. The image is a white box with black pixels all around its contour.
Up to the point where I get the connected-components matrix, named Mat::Labels, it all goes fine. If I print the matrix I see exactly what I expect:
11111
10001
10001
10001
11111
Remember that I've inverted the threshold so it is correct to get 1 on the edges...
I then create a Mat with the same size as Mat::Labels but with 3 channels, to colour it with RGB. This is named Mat::ColoredLabels.
The next step is to instantiate a pointer that runs through Mat::Labels and, for each position where the value is 1, fills the corresponding Mat::ColoredLabels position with a colour.
HERE THINGS GO VERY WRONG! The pointer does not fetch Mat::Labels row by row as I would expect, but follows some other order.
Questions:
Am I doing something wrong, or is it "obvious" that the pointer fetching follows some "unpredictable" order?
How could I set the values of one matrix (Mat::ColoredLabels) based on the values of another matrix (Mat::Labels)?
#include "opencv2\highgui.hpp"
#include "opencv2\opencv.hpp"
#include <stdio.h>
using namespace cv;
int main(int argc, char *argv[]) {
char* FilePath = "";
Mat Img;
Mat ImgGray;
Mat ImgBinary;
Mat Labels;
uchar *P;
uchar *CP;
// Image acquisition
if (argc < 2) {
printf("Missing argument");
return -1;
}
FilePath = argv[1];
Img = imread(FilePath, CV_LOAD_IMAGE_COLOR);
if (Img.empty()) {
printf("Invalid image");
return -1;
}
// Convert to Gray...I know I could convert it right away while loading....
cvtColor(Img, ImgGray, CV_RGB2GRAY);
// Threshold (inverted) to obtain black background and white blobs-> it works
threshold(ImgGray, ImgBinary, 170, 255, CV_THRESH_BINARY_INV);
// Find Connected Components and put the 1/0 result in Mat::Labels
int BlobsNum = connectedComponents(ImgBinary, Labels, 8, CV_16U);
// Just to see what comes out with a 5x5 image. I get:
// 11111
// 10001
// 10001
// 10001
// 11111
std::cout << Labels << "\n";
// Prepare to fetch the Mat(s) with pointer to be fast
int nRows = Labels.rows;
int nCols = Labels.cols * Labels.channels();
if (Labels.isContinuous()) {
nCols *= nRows;
nRows = 1;
}
// Prepare a Mat as big as LAbels but with 3 channels to color different blobs
Mat ColoredLabels(Img.rows, Img.cols, CV_8UC3, cv::Scalar(127, 127, 127));
int ColoredLabelsNumChannels = ColoredLabels.channels();
// Fetch Mat::Labels and Mat::ColoredLabes with the same for cycle...
for (int i = 0; i < nRows; i++) {
// !!! HERE SOMETHING GOES WRONG !!!!
P = Labels.ptr<uchar>(i);
CP = ColoredLabels.ptr<uchar>(i);
for (int j = 0; j < nCols; j++) {
// The coloring operation does not work
if (P[j] > 0) {
CP[j*ColoredLabelsNumChannels] = 0;
CP[j*ColoredLabelsNumChannels + 1] = 0;
CP[j*ColoredLabelsNumChannels + 2] = 255;
}
}
}
std::cout << "\n" << ColoredLabels << "\n";
namedWindow("ColoredLabels", CV_WINDOW_NORMAL);
imshow("ColoredLabels", ColoredLabels);
waitKey(0);
printf("Execution completed succesfully");
return 0;
}
You used the connectedComponents function with the CV_16U parameter. This means that a single element of the label image consists of 16 bits (hence '16') and you have to interpret them as an unsigned integer (hence 'U'). And since ptr returns a pointer, you have to dereference it to get the value.
Therefore you should access the label image elements in the following way:
unsigned short val = *Labels.ptr<unsigned short>(i); // or uint16_t; ptr(i) points to the start of row i
unsigned short val = Labels.at<unsigned short>(y, x);
Regarding your second question, it is as simple as that, but of course you have to understand which type casts result in loss of precision or overflow and which do not.
mat0.at<int>(y, x) = mat1.at<int>(y, x);   // both matrices have type CV_32S
mat2.at<int>(y, x) = mat3.at<char>(y, x);  // CV_32S and CV_8S: implicit cast, no information lost
// mat4.at<unsigned char>(y, x) = mat5.at<unsigned int>(y, x); // 8-bit and 32-bit unsigned
// Implicit cast occurs here too. Possible information loss: assigning 32-bit values to 8-bit ones
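Putting the two points together, here is a minimal sketch of the colouring loop with the correct element types (assuming, as in the code above, that Labels is CV_16U and ColoredLabels is CV_8UC3):

for (int i = 0; i < Labels.rows; i++) {
    const unsigned short *P = Labels.ptr<unsigned short>(i);   // 16-bit labels, one value per pixel
    cv::Vec3b *CP = ColoredLabels.ptr<cv::Vec3b>(i);           // one Vec3b per BGR pixel
    for (int j = 0; j < Labels.cols; j++) {
        if (P[j] > 0)
            CP[j] = cv::Vec3b(0, 0, 255);                      // paint labelled pixels red (BGR)
    }
}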

How to use cv::Mat in OpenCL

Problem
I have OpenCL code which must take a cv::Mat as input and return a cv::Mat as output.
For now I convert the input to a regular array of chars, pass it to OpenCL, and convert the output (which is a char array) back to a cv::Mat.
What I have
I tried to use the cv::Mat raw data directly, but there are some gaps in the data. For that reason I copy the cv::Mat to a contiguous array, but I'm sure that I could force OpenCL to use data with gaps.
Question
Can someone explain how I can avoid copying data to and from the array and directly use the cv::Mat as input and output?
Array to Mat: you can use the "constructor for matrix headers pointing to user-allocated data" (see this answer):
Mat(int rows, int cols, int type, void* data, size_t step=AUTO_STEP);
Mat to Ptr: you can use the data attribute (from this answer)
unsigned char *input = (unsigned char*)(img.data);
for (int j = 0; j < img.rows; j++) {
    for (int i = 0; i < img.cols; i++) {
        unsigned char b = input[img.step * j + img.channels() * i    ];
        unsigned char g = input[img.step * j + img.channels() * i + 1];
        unsigned char r = input[img.step * j + img.channels() * i + 2];
    }
}
Of course, you need to adapt this to your data type.
Maybe this answer helps you with your question: How to launch custom OpenCL kernel in OpenCV (3.0.0) OCL?
You could maybe use the UMat class that OpenCV provides.
cv::Mat mat = ...;
// Upload the input mat
cv::UMat input_gpu = mat.getUMat(cv::ACCESS_READ, cv::USAGE_ALLOCATE_DEVICE_MEMORY);
// Create the output mat on the GPU
cv::UMat output_gpu(mat.size(), CV_32F, cv::USAGE_ALLOCATE_DEVICE_MEMORY);
// Download the output mat
cv::Mat output = output_gpu.getMat(cv::ACCESS_READ);
From what I understand you should be able to pass the UMat directly to your kernel
using cv::ocl::KernelArg::ReadWrite(output_gpu). The kernel argument for that mat would then be __global uchar*. I'm not sure though, I have only used OpenCV in combination with CUDA so far.
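For completeness, here is a rough, untested sketch of what launching a custom kernel on such a UMat could look like with the T-API helpers. The kernel source, its name and the argument list are made up for illustration; KernelArg::PtrReadWrite passes only the raw __global pointer, which keeps the kernel signature simple.

// Sketch: build and run a trivial in-place kernel on input_gpu (assumes an OpenCL device is available)
cv::ocl::ProgramSource source(
    "__kernel void invert(__global uchar* data, int total) {"
    "    int i = get_global_id(0);"
    "    if (i < total) data[i] = 255 - data[i];"
    "}");
cv::String errmsg;
cv::ocl::Kernel kernel("invert", source, "", &errmsg);
kernel.args(cv::ocl::KernelArg::PtrReadWrite(input_gpu),
            (int)(input_gpu.total() * input_gpu.channels()));
size_t globalsize = input_gpu.total() * input_gpu.channels();
kernel.run(1, &globalsize, NULL, true);   // 1-D launch, synchronous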

Transpose a 1D byte-wise array to a 2D array

I am reading a 64x128 grayscale image into an array, one line at a time. Each pixel is 8 bits wide. The read operation is done in a byte-addressable manner. Now I want to transpose each line and store it in a 2D array. This architecture is aimed at memory optimization on a specific device. Once the 2D array is filled, I need to read it byte by byte such that each of the 8 bits lies in a different row of the array. Can anyone give some sample code?
Thanks!
A sample:
#include <stdio.h>
#include <string.h>

int main(void){
    unsigned char a[64*128];
    int i, r, c;
    for (i = 0; i < 64*128; i++){
        a[i] = i;
    }
    // no copy: reinterpret the flat 1-D array as a 64x128 2-D array
    unsigned char (*b)[64][128] = (unsigned char (*)[64][128])a;
    for (r = 0; r < 64; ++r){
        for (c = 0; c < 128; ++c){
            printf("%i,", (*b)[r][c]);
        }
        printf("\n");
    }
    // with copy: fill a real 2-D array from the flat buffer
    unsigned char b2[64][128];
    memcpy(b2, a, sizeof(b2));
    printf("\nFill ver\n");
    for (r = 0; r < 2; ++r){
        for (c = 0; c < 16; ++c){
            printf("%i,", b2[r][c]);
        }
        printf("\n");
    }
    return 0;
}
just construct a Mat from your array:
uchar array[w*h];
// read bytes:
Mat m( h, w, CV_8U, array );
or start with a Mat in the 1st place, and read into its data member:
Mat m(h, w, CV_8U);
// read into m.data
if you have a 1d Mat, you can just reshape it:
Mat m(1, w*h, CV_8U);
// make it 2d HxW now:
m = m.reshape(1, h);
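For the 64x128 case from the question, a small sketch tying this together (it assumes the bytes have already been read into a flat buffer, and uses OpenCV's transpose rather than a hand-rolled loop):

uchar buffer[64 * 128];
// ... read the image into buffer one line at a time ...
Mat img2d(64, 128, CV_8U, buffer);   // just a header over buffer, no copy
Mat transposed = img2d.t();          // 128x64: each input row becomes a column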

Moving my array to Mat and showing the image with OpenCV

I am having problems using OpenCV to display an image. As my code currently works, I have a function that loads 78 images of size 710x710 of unsigned shorts into a single array. I have verified this works by writing the data to a file and reading it with ImageJ. I am now trying to extract a single image frame from the array and load it into a Mat in order to perform some processing on it. Right now I have tried two ways to do this. The code will compile and run if I do not try to read the output, but it fails if I cout << the Mat.
My question is: how do I extract the data from my large 1-D array of 78 images of size 710x710 into single Mat images? Or is there a more efficient way where I can load the images into a 3-D Mat of dimensions 710x710x78 and operate on each 710x710 slice as needed?
int main(int argc, char *argv[])
{
    Mat OriginalMat, TestImage;
    long int VImageSize = 710*710;
    int NumberofPlanes = 78;
    int FrameNum = 150;
    unsigned short int *PlaneStack = new unsigned short int[NumberofPlanes*VImageSize];
    unsigned short int *testplane = new unsigned short int[VImageSize];
    /////Load PlaneStack/////
    Load_Vimage(PlaneStack, Path, NumberofPlanes);
    // Here I try to extract a single plane image into testplane; I try it two different ways with the same results
    memcpy(testplane, &PlaneStack[710*710*40], VImageSize*sizeof(unsigned short int));
    //copy(&PlaneStack[VImageSize*40], &PlaneStack[VImageSize*41], testplane);
    // move single plane to a mat file
    OriginalMat = Mat(710, 710, CV_8U, &testplane);
    //cout<<OriginalMat;
    namedWindow("Original");
    imshow("Original", OriginalMat);
}
The problem is you are using the constructor Mat::Mat(int rows, int cols, int type, void* data) with a pointer to 16 bit data (unsigned short int) but you are specifying the type CV_8U (8 bit).
Therefore the first byte of your 16 bit pixel becomes the first pixel in OriginalMat, and the second byte of the first pixel becomes the second pixel in OriginalMat, etc.
You need to create a 16 bit Mat, then convert it to 8 bit if you want to display it, e.g.:
int main(int argc, char *argv[])
{
    long int VImageSize = 710*710;
    int NumberofPlanes = 78;
    int FrameNum = 150;
    /////Load PlaneStack/////
    unsigned short int *PlaneStack = new unsigned short int[NumberofPlanes*VImageSize];
    Load_Vimage(PlaneStack, Path, NumberofPlanes);
    // Get a pointer to the plane we want to view
    unsigned short int *testplane = &PlaneStack[710*710*40];
    // "move" single plane to a mat file
    // actually nothing gets moved, OriginalMat will just contain a pointer to your data.
    Mat OriginalMat(710, 710, CV_16UC1, testplane);
    double scale_factor = 1.0 / 256.0;
    Mat DisplayMat;
    OriginalMat.convertTo(DisplayMat, CV_8UC1, scale_factor);
    namedWindow("Original");
    imshow("Original", DisplayMat);
    waitKey(0);   // needed so the window is actually drawn and stays open
}
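A note on the scale factor: 1.0/256.0 squeezes the full 16-bit range into 8 bits, so if your pixel values only occupy a small part of that range the displayed image will look almost black. A sketch of an alternative that stretches whatever range is actually present:

Mat DisplayMat;
normalize(OriginalMat, DisplayMat, 0, 255, NORM_MINMAX, CV_8UC1);   // min..max mapped to 0..255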