Understanding why it doesn't copy correctly using memcpy - C++

I have some misunderstanding about OpenCV 4.1.0 and memcpy in C++. The question is: why does the image come out zoomed in a lot?
I read an image like this:
Mat img = imread("lena512.bmp", 1); // Black and White Image
namedWindow("Display window", WINDOW_AUTOSIZE);
imshow("Display window", img);
After this I have two byte arrays:
int inputSize = width * height * channels;
byte* pixels = new byte[width * height * channels];
byte* out = new byte[width * height * channels];
I copy img to the pixels array:
memcpy(pixels, img.data, inputSize * sizeof(byte));
And then I want to check whether the retrieved image is the same as the input:
Mat image = Mat(width, height , CV_8U);
memcpy(image.data, out, inputSize * sizeof(byte));

Mat img = imread("lena512.bmp", 1); // Black and White Image
That's the problem: the comment is a lie, and by using a magic number instead of a named constant, you can't easily tell that's the case. 1 in this context means IMREAD_COLOR -- i.e. the image is always read as a 3-channel BGR image.
However, after the shenanigans with memcpy and raw pointers, you create a new Mat in the following manner:
Mat image = Mat(width, height , CV_8U);
Note that CV_8U is equivalent to CV_8UC1. Hence, you create a single channel (grayscale) Mat, but give it 3-channel data.
Getting garbage as a result is the lesser issue. The much more serious issue is that you copy 3x as much data as the target pixel buffer can hold -- basically you clobber half a megabyte of memory that doesn't belong to the Mat. That can either end with a segfault, or some really hard to find bugs (in case you overwrite some memory used by other data structures).
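Concretely, for a 512x512 input: 512 * 512 * 3 = 786,432 bytes get copied into a Mat buffer that holds only 512 * 512 = 262,144 bytes, clobbering 524,288 bytes (exactly that half megabyte) past its end.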
Update: There's another issue that I've missed (thanks to @Micka for catching that). The order of parameters of the cv::Mat constructor is rows, columns, data type. It appears you switched width and height, although since your input image appears to be square (i.e. width == height), it didn't matter.
The correct way to allocate the second Mat would be
Mat image = Mat(height, width, CV_8UC3);
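Putting it together, a minimal sketch of a consistent round trip (assuming lena512.bmp really is grayscale, and using your byte typedef for unsigned char):
Mat img = imread("lena512.bmp", IMREAD_GRAYSCALE); // named constant, genuinely single-channel
int width = img.cols, height = img.rows;
size_t inputSize = (size_t)width * height; // 1 channel

byte* pixels = new byte[inputSize];
memcpy(pixels, img.data, inputSize); // imread output is continuous, so this is safe

Mat image = Mat(height, width, CV_8UC1); // rows first, then columns; type matches the data
memcpy(image.data, pixels, inputSize);

imshow("Round trip", image); // should look identical to the input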

Related

Creating a cv::Mat image from a 3d array

Suppose I have an array
uint8_t img[1000][1200][3]
where the first 2 dimensions represent the size of the image (height, width),
and the third one the channels (BGR).
E.g.
img[200][100][1]
gives the green channel value of the pixel at coordinates (row 200, column 100).
How can I convert this array to a cv::Mat image?
I tried
cv::Mat my_image(1000, 1200, CV_8UC3, img);
but I am not sure if the result I am getting is correct. Any hints?
Not an expert in C++, but the idea is the following:
uint8_t img[1000][1200][3];
uint8_t *p = &img[0][0][0]; // pointer to the first byte of the array
// The constructor takes a pointer to the image data; rows (height) come first, then columns (width), so 1000 x 1200 is already correct here.
cv::Mat my_image(1000, 1200, CV_8UC3, (void*)p);
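As a quick sanity check (a sketch with a made-up value), you can confirm that the Mat wraps the array without copying and that the indexing agrees:
uint8_t img[1000][1200][3] = {};
img[200][100][1] = 42; // green channel of the pixel at row 200, column 100

cv::Mat my_image(1000, 1200, CV_8UC3, (void*)img); // rows = 1000, cols = 1200

CV_Assert(my_image.data == (uchar*)img); // no copy was made
CV_Assert(my_image.at<cv::Vec3b>(200, 100)[1] == 42); // at<>(row, col) matches img[row][col]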

C++: Get BGR image (cv::Mat) from GPU memory (cudaMemcpy2D)

I am working on image processing and have developed camera wrappers with OpenCV for an RGB and a monochrome camera. Now I have to use an existing CUDA algorithm to process those two camera image streams. For that I have to copy the Mat images to the device (the algorithm does not take GpuMat). I use cv::Mat::ptr to access the data of the images. When I use cudaMemcpy2D to get the image back to the host, I receive a dark image (zeros only) for the RGB image. It fails even when I just load it to the device with cudaMemcpy2D and bring it back in the next step, without any image processing in between. It works fine for the mono image though:
width = 1920; // image dimensions are the same for mono and BGR
height = 1080;
Mat mat_mono(height, width, CV_8UC1);
Mat mat_mono_disp(height, width, CV_8UC1);
size_t pitch_mono;
uint8_t* image_mono_gpu;
size_t matrixLenMono = width;
cudaMallocPitch((void**)&image_mono_gpu, &pitch_mono, width, height);
mat_mono = MonoCamera.CaptureMat(1); // wrapper for the mono camera that grabs the image
// copy to device
cudaMemcpy2D(image_mono_gpu, pitch_mono, mat_mono.ptr(), width, matrixLenMono, height, cudaMemcpyHostToDevice);
// copy back to host
cudaMemcpy2D(mat_mono_disp.ptr(), matrixLenMono, image_mono_gpu, pitch_mono, matrixLenMono, height, cudaMemcpyDeviceToHost);
namedWindow("Display window", WINDOW_AUTOSIZE);
imshow("Display window", mat_mono_disp);
This is the code for the RGB (or rather BGR) image, where I only receive a dark image after retrieving the image from the device:
Mat mat_BGR(height, width, CV_8UC3);
Mat mat_BGR_disp(height, width, CV_8UC3);
size_t pitch_BGR;
uint8_t* image_BGR_gpu;
size_t matrixLenBGR = width * 3;
cudaMallocPitch((void**)&image_BGR_gpu, &pitch_BGR, matrixLenBGR, height);
mat_BGR = RGBCamera.CaptureMat(1); // wrapper for the RGB camera that grabs the image
// copy to device
cudaMemcpy2D(image_BGR_gpu, pitch_BGR, mat_BGR.ptr(), width, matrixLenBGR, height, cudaMemcpyHostToDevice);
// copy back to host
cudaMemcpy2D(mat_BGR_disp.ptr(), matrixLenBGR, image_BGR_gpu, pitch_BGR, matrixLenBGR, height, cudaMemcpyDeviceToHost);
namedWindow("Display window", WINDOW_AUTOSIZE);
imshow("Display window", mat_BGR_disp);
Does this mean that using cv::Mat::ptr with a mono image works because it is a special case? I don't know what I have to consider additionally when using the BGR image instead.
As pointed out in a previous answer, when performing a 2D memory copy of an OpenCV Mat to device memory allocated with cudaMallocPitch (or any strided 2D memory), we have to use the step member of the OpenCV Mat to specify the stride (in bytes) of each row.
In the provided code, the correct way would be to use mat_BGR.step instead of width in the 4th argument of cudaMemcpy2D.
cudaMemcpy2D(image_BGR_gpu, pitch_BGR, mat_BGR.ptr(), mat_BGR.step, matrixLenBGR, height, cudaMemcpyHostToDevice);
                                                      ^^^^^^^^^^^^
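The same reasoning applies to the copy back: the destination pitch should come from the destination Mat. A minimal sketch of the corrected BGR round trip (variables as declared in the question):
// Host -> device: the source pitch is the Mat's step (bytes per row, including any padding)
cudaMemcpy2D(image_BGR_gpu, pitch_BGR, mat_BGR.ptr(), mat_BGR.step,
             matrixLenBGR, height, cudaMemcpyHostToDevice);

// Device -> host: the destination pitch is the destination Mat's step;
// the copied width (matrixLenBGR = width * 3) stays in bytes
cudaMemcpy2D(mat_BGR_disp.ptr(), mat_BGR_disp.step, image_BGR_gpu, pitch_BGR,
             matrixLenBGR, height, cudaMemcpyDeviceToHost);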

OpenCV C++ resize function: new width should be multiplied by 3

I am programming in a Qt environment and I have a Mat image of size 2592x2048 that I want to resize to the size of a "label" that I have. But when I show the image, I have to multiply the width by 3 for the image to be displayed at its correct size. Is there any explanation for that?
This is my code:
//Here I get image from the a buffer and save it into a Mat image.
//img_width is 2592 and img_height is 2048
Mat image = Mat(cv::Size(img_width, img_height), CV_8UC3, (uchar*)img, Mat::AUTO_STEP);
Mat cimg;
double r; int n_width, n_height;
//Get the width of label (lbl) into which I want to show the image
n_width = ui->lbl->width();
r = (double)(n_width)/img_width;
n_height = r*(img_height);
cv::resize(image, cimg, Size(n_width*3, n_height), 0, 0, INTER_AREA);
Thanks.
The resize function works well, because if you save the resized image to a file, it is displayed correctly. Since you want to display it on a QLabel, I assume you have to transform your image to QImage first and then to QPixmap. I believe the problem lies either in the step or in the image format.
If we ensure the image data passed in
Mat image = Mat(cv::Size(img_width, img_height), CV_8UC3, (uchar*)img, Mat::AUTO_STEP);
are indeed an RGB image, then the code below should work:
ui->lbl->setPixmap(QPixmap::fromImage(QImage(cimg.data, cimg.cols, cimg.rows, *cimg.step.p, QImage::Format_RGB888 )));
Finally, instead of using OpenCV, you could construct a QImage object using the constructor
QImage((uchar*)img, img_width, img_height, QImage::Format_RGB888)
and then use the scaledToWidth method to do the resize (beware though that this method returns the scaled image and does not perform the resize operation on the image itself).
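A minimal sketch of that Qt-only route (assuming img points to tightly packed RGB888 data of size img_width x img_height):
QImage qimg((uchar*)img, img_width, img_height,
            img_width * 3, // bytes per line; assumes no row padding
            QImage::Format_RGB888);

// scaledToWidth returns a new, scaled image; qimg itself is left untouched
QImage scaled = qimg.scaledToWidth(ui->lbl->width(), Qt::SmoothTransformation);
ui->lbl->setPixmap(QPixmap::fromImage(scaled));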

Initializing cv::Mat with negative step to flip image vertically

I have a vertically flipped RGBA image stored in uchar[] raw_data, but I need it as a grayscale cv::Mat. This can easily be achieved with the following code:
cv::Mat src(height, width, CV_8UC4, raw_data), tmp, dst;
cvtColor(src, tmp, CV_RGBA2GRAY);
flip(tmp, dst, 0);
However, I found out that following code is up to two times faster:
int linesize = width * 4; // 4 bytes per RGBA pixel
uchar *data_ptr = raw_data + linesize * (height-1); // ptr to last line
cv::Mat tmp(height, width, CV_8UC4, data_ptr, -linesize), dst;
cvtColor(tmp, dst, CV_RGBA2GRAY);
The trick is quite obvious: tmp is created with a pointer to the last line and a negative line size, so it moves back in memory when iterating over lines. As a result, cvtColor performs the vertical flip as a side effect. The image data is iterated over only once instead of twice, which gives the aforementioned boost. I've tested it, it works, end of story.
The question is: is there any reason to do it the first way? I'm aware that the step parameter of the cv::Mat constructor used here has size_t type, so in fact this relies on unsigned wrap-around. The code goes to different devices, including smartphones and tablets, so performance is important. On the other hand, it will be compiled for different architectures (x86, ARM), so portability must be preserved.
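For completeness, a quick equivalence check (a sketch; raw_data, width and height as above) -- both paths should produce identical grayscale output:
cv::Mat src(height, width, CV_8UC4, raw_data), tmp1, dst1;
cv::cvtColor(src, tmp1, CV_RGBA2GRAY);
cv::flip(tmp1, dst1, 0);

int linesize = width * 4;
uchar *last_line = raw_data + linesize * (height - 1);
cv::Mat tmp2(height, width, CV_8UC4, last_line, (size_t)-linesize), dst2;
cv::cvtColor(tmp2, dst2, CV_RGBA2GRAY);

CV_Assert(cv::countNonZero(dst1 != dst2) == 0); // no differing pixels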
Thanks in advance!

Count the black pixels using OpenCV

I'm working with OpenCV 2.4.0 and C++.
I'm trying to do an exercise that says I should load an RGB image, convert it to grayscale and save the new image. The next step is to turn the grayscale image into a binary image and store that image. This much I have working.
My problem is in counting the amount of black pixels in the binary image.
So far I've searched the web and looked in the book. The most useful method I've found is:
int TotalNumberOfPixels = width * height;
int ZeroPixels = TotalNumberOfPixels - cvCountNonZero(cv_image);
But I don't know how to store these values and use them in cvCountNonZero(). When I pass the image I want counted to this function, I get an error.
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat rgbImage, grayImage, resizedImage, bwImage, result;
    rgbImage = imread("C:/MeBGR.jpg");
    cvtColor(rgbImage, grayImage, CV_RGB2GRAY);
    resize(grayImage, resizedImage, Size(grayImage.cols/3, grayImage.rows/4),
           0, 0, INTER_LINEAR);
    imwrite("C:/Jakob/Gray_Image.jpg", resizedImage);
    bwImage = imread("C:/Jakob/Gray_Image.jpg");
    threshold(bwImage, bwImage, 120, 255, CV_THRESH_BINARY);
    imwrite("C:/Jakob/Binary_Image.jpg", bwImage);
    imshow("Original", rgbImage);
    imshow("Resized", resizedImage);
    imshow("Resized Binary", bwImage);
    waitKey(0);
    return 0;
}
So far this code is very basic but it does what it's supposed to for now. Some adjustments will be made later to clean it up :)
You can use countNonZero to count the number of pixels that are not black (>0) in an image. If you want to count the number of black (==0) pixels, you need to subtract the number of pixels that are not black from the number of pixels in the image (the image width * height).
This code should work:
int TotalNumberOfPixels = bwImage.rows * bwImage.cols;
int ZeroPixels = TotalNumberOfPixels - countNonZero(bwImage);
cout<<"The number of pixels that are zero is "<<ZeroPixels<<endl;
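Note that countNonZero expects a single-channel image, which is most likely why passing a 3-channel Mat raised an error. A minimal end-to-end sketch (paths as in the question; loading with IMREAD_GRAYSCALE guarantees one channel):
Mat gray = imread("C:/Jakob/Gray_Image.jpg", IMREAD_GRAYSCALE); // force single channel
Mat bw;
threshold(gray, bw, 120, 255, CV_THRESH_BINARY);

int TotalNumberOfPixels = bw.rows * bw.cols;
int ZeroPixels = TotalNumberOfPixels - countNonZero(bw);
cout << "The number of pixels that are zero is " << ZeroPixels << endl;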