Initializing cv::Mat with negative step to flip image vertically - c++

I have a vertically flipped RGBA image stored in a uchar[] raw_data buffer, but I need it as a grayscale cv::Mat. This can be easily achieved with the following code:
cv::Mat src(height, width, CV_8UC4, raw_data), tmp, dst;
cvtColor(src, tmp, CV_RGBA2GRAY);
flip(tmp, dst, 0);
However, I found out that the following code is up to two times faster:
int linesize = width * 4; // 4 bytes per RGBA pixel
uchar *data_ptr = raw_data + linesize * (height-1); // ptr to last line
cv::Mat tmp(height, width, CV_8UC4, data_ptr, -linesize), dst;
cvtColor(tmp, dst, CV_RGBA2GRAY);
The trick is quite obvious: tmp is created with a pointer to the last line and a negative line size, so it moves backwards in memory when iterating over lines. As a result, cvtColor performs the vertical flip as a side effect. The image data is iterated over only once instead of twice, which gives the speedup mentioned above. I've tested it, it works, end of story.
The question is: is there any reason to do it the first way? I'm aware that the step parameter of the cv::Mat constructor has type size_t, so the trick in fact relies on unsigned wrap-around of the negative step. The code goes to different devices, including smartphones and tablets, so performance is important. On the other hand, it will be compiled for different architectures (x86, ARM), so portability must be preserved.
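If the wrap-around is a concern, a portable middle ground is to keep the single pass over the data but do the mirroring explicitly, converting each source row into the mirrored destination row. A minimal sketch, assuming the same width, height and raw_data as above:
cv::Mat dst(height, width, CV_8UC1);
const size_t linesize = static_cast<size_t>(width) * 4; // 4 bytes per RGBA pixel
for (int y = 0; y < height; ++y) {
    // wrap the source row that should end up at destination row y (no copy)
    cv::Mat srcRow(1, width, CV_8UC4, raw_data + linesize * (height - 1 - y));
    cv::Mat dstRow = dst.row(y);
    cvtColor(srcRow, dstRow, CV_RGBA2GRAY); // size and type already match, so it writes in place
}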
Thanks in advance!

Related

uint8_t buffer to cv::Mat conversion results in distorted image

I have a Mipi camera that captures frames and stores them into the struct buffer that you can see below. Once a frame is stored, I want to convert it into a cv::Mat; the problem is that the Mat ends up looking distorted, like the first picture.
The var buf.index is just part of the V4L2 API, useful to understand which buffer I'm using.
// The structure where the data is stored
struct buffer {
    void  *start;
    size_t length;
};
struct buffer *buffers;
//buffer->mat
cv::Mat im = cv::Mat(cv::Size(width, height), CV_8UC3, ((uint8_t*)buffers[buf.index].start));
At first I thought that the data might be corrupted but storing the image with lodepng results in a nice image without any distortion.
unsigned char* out_buf = (unsigned char*)malloc( width * height * 3);
for(int pix = 0; pix < width*height; ++pix) {
memcpy(out_buf + pix*3, ((uint8_t*)buffers[buf.index].start)+4*pix+1, 3);
}
lodepng_encode24_file(filename, out_buf, width, height);
I bet it's something really silly.
The picture you posted has oddly colored pixels, and the patterns look like there's more information than simply 24 bits per pixel.
After inspecting the data, it appears that V4L gives you four bytes per pixel, and the first byte is always 0xFF (let's call that X). Further, the channel order seems to be XRGB.
Create a cv::Mat using CV_8UC4 to contain the data.
To use the picture in OpenCV, you need BGR order. cv::split the received data into its four color planes, which are X, R, G, B. Use cv::merge to reassemble the B, G, R planes into a picture that OpenCV can handle, or reassemble R, G, B to create a Mat for other purposes (the other library you seem to use).
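A minimal sketch of that split/merge approach, assuming buffers[buf.index].start holds width*height packed XRGB pixels as described:
cv::Mat xrgb(height, width, CV_8UC4, buffers[buf.index].start);
std::vector<cv::Mat> planes;
cv::split(xrgb, planes); // planes[0]=X, planes[1]=R, planes[2]=G, planes[3]=B
cv::Mat bgr;
cv::merge(std::vector<cv::Mat>{planes[3], planes[2], planes[1]}, bgr); // B,G,R order for OpenCV
cv::imshow("frame", bgr);
cv::waitKey(1);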

Understanding why it doesn't copy correctly using memcpy

I have some misunderstanding about OpenCV 4.1.0 and memcpy in C++. The question is: why does the image end up looking zoomed in so much?
I read an image like this:
Mat img = imread("lena512.bmp", 1); // Black and White Image
namedWindow("Display window", WINDOW_AUTOSIZE);
imshow("Display window", img);
After this I have two byte arrays:
int inputSize = width * height * channels;
byte* pixels = new byte[width * height * channels];
byte* out = new byte[width * height * channels];
I copy img into the pixels array:
memcpy(pixels, img.data, inputSize * sizeof(byte));
And then I want to check whether the retrieved image is the same as the input:
Mat image = Mat(width, height , CV_8U);
memcpy(image.data, out, inputSize * sizeof(byte));
Mat img = imread("lena512.bmp", 1); // Black and White Image
That's the first problem: the comment is a lie, and because a magic number is used instead of a named constant, you can't easily tell that's the case. In this context 1 means IMREAD_COLOR -- i.e. the image is always read as a 3-channel BGR image.
However, after the shenanigans with memcpy and raw pointers, you create a new Mat in the following manner:
Mat image = Mat(width, height , CV_8U);
Note that CV_8U is equivalent to CV_8UC1. Hence, you create a single channel (grayscale) Mat, but give it 3-channel data.
Getting garbage as a result is the lesser issue. The much more serious issue is that you copy 3x as much data as the target pixel buffer can hold -- basically you clobber half a megabyte of memory that doesn't belong to the Mat. That can either end with a segfault, or some really hard to find bugs (in case you overwrite some memory used by other data structures).
Update: There's another issue that I missed (thanks to @Micka for catching it). The order of parameters of the cv::Mat constructor is rows, columns, datatype. It appears you switched width and height, although since your input image appears to be square (i.e. width == height) it didn't matter.
The correct way to allocate the second Mat would be
Mat image = Mat(height, width, CV_8UC3);
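Putting those fixes together, a sketch of the corrected round trip (assuming byte is an 8-bit unsigned typedef as in the question, and leaving out whatever processing happens between the two buffers):
Mat img = imread("lena512.bmp", IMREAD_COLOR); // 3-channel BGR
size_t inputSize = img.total() * img.elemSize(); // width * height * 3 bytes; img from imread is continuous
byte* pixels = new byte[inputSize];
byte* out = new byte[inputSize];
memcpy(pixels, img.data, inputSize); // Mat -> buffer
memcpy(out, pixels, inputSize); // (processing would go here)
Mat image = Mat(img.rows, img.cols, CV_8UC3); // rows first, 3 channels
memcpy(image.data, out, inputSize); // buffer -> Mat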

opencv Mat CV_8UC1 type (uchar) to *unsigned short (*UINT16)

This is mainly a C++ variable/pointer handling/casting question.
I am trying to apply one of the OpenCV library image filters to a depth image from the Kinect v2 SDK (16-bit grayscale, values between 0 and 8092).
I want to do this after getting the depth image but BEFORE using the Kinect SDK to do RGB-depth registration and conversion to a point cloud. Therefore I want the final filtered image/array to be of the same type as the one I received before filtering, so I can pass it back to the Kinect SDK.
Initial code:
Get the Kinect depth frame as a pointer:
UINT nBufferSize = nDepthFrameHeight * nDepthFrameWidth;
hr = pDepthFrame->CopyFrameDataToArray(nBufferSize, pDepth);
Create two matrices and convert from 16-bit to 8-bit (the OpenCV inpainting below works on 8-bit greyscale):
Mat depthMat(height, width, CV_16UC1, depth); // from kinect
Mat depthf(height, width, CV_8UC1);
depthMat.convertTo(depthf, CV_8UC1, 255.0/2048.0);
imshow("original-depth", depthf);
const unsigned char noDepth = 0; // change to 255 if "no depth" is encoded as the max value
Mat temp, temp2;
Step 1 - downsize for performance, using a smaller version of the depth image:
Mat small_depthf;
resize(depthf, small_depthf, Size(), 0.2, 0.2);
Step 2 - inpaint only the masked "unknown" pixels:
cv::inpaint(small_depthf, (small_depthf == noDepth), temp, 5.0, INPAINT_TELEA);
Step 3 - upscale to the original size and replace the inpainted regions in the original depth image:
resize(temp, temp2, depthf.size());
temp2.copyTo(depthf, (depthf == noDepth)); // add to the original signal
imshow("depth-inpaint", depthf); // show results
Problematic Part:
When I try to reverse the process (even with loss of information for now)
cv::Mat newDepth(nDepthFrameHeight, nDepthFrameWidth, CV_16UC1);
depthf.convertTo(newDepth, CV_16UC1, 8092.0 / 255.0);
I have found no way to convert these cv::Mat types back to a ushort* (UINT16* in this case).
I have tried things like reinterpret_cast, depthf.data and depthf.ptr(), but the final data keeps showing up as uchar when I hover over it, unless I force the type as in the ptr() case above, in which case it crashes.
Any ideas?
P.S.: The code works flawlessly if I don't try to filter the depth. Also, the crash occurs when the SDK tries to map color and depth and uses pDepth in
pCoordinateMapper->MapColorFrameToDepthSpace(nDepthFrameWidth * nDepthFrameHeight, pDepth, nColorFrameWidth * nColorFrameHeight, (DepthSpacePoint*)pDepthSpacePoints);
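For getting the 16-bit result back into the buffer the SDK expects, one option (a sketch, assuming pDepth points to nDepthFrameWidth * nDepthFrameHeight UINT16 values as in the question) is to let convertTo produce the CV_16UC1 Mat and then copy its data out with a typed pointer rather than fighting the uchar* that Mat::data exposes:
cv::Mat newDepth(nDepthFrameHeight, nDepthFrameWidth, CV_16UC1);
depthf.convertTo(newDepth, CV_16UC1, 8092.0 / 255.0); // back to the original value range
// newDepth is freshly allocated, hence continuous; copy its 16-bit data into the SDK buffer
memcpy(pDepth, newDepth.ptr<UINT16>(0), newDepth.total() * sizeof(UINT16));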

OpenCV - Filling empty spaces in object using c++

How can I fill the empty spaces of an object in OpenCV?
Let me clarify my question.
I have an image below
Now I want to fill all the gaps in the image, like this:
In Matlab I have done it with a convex hull, but I don't know how to do it in C++.
Thanks.
Try morphological operations. If you go this way, note that you may vary either the kernel size (increase it to reduce the number of iterations needed), the number of iterations (more iterations will eliminate empty space even if the kernel is small), or both.
cv::Mat img = cv::imread("cwyX5.jpeg");
cv::imshow("image", img);
cv::Size kernelSize(5, 5);
cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, kernelSize);
cv::Mat result;
int iterations = 3;
cv::morphologyEx(img, result, cv::MORPH_OPEN, kernel, cv::Point(-1,-1), iterations);
cv::imshow("result", result);
cv::waitKey();
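To illustrate the trade-off mentioned above, roughly the same effect can be obtained with a larger kernel and a single pass (a sketch; the right balance depends on your image):
cv::Mat kernelBig = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(15, 15));
cv::Mat resultBig;
cv::morphologyEx(img, resultBig, cv::MORPH_OPEN, kernelBig); // default anchor, one iteration
cv::imshow("result (large kernel)", resultBig);
cv::waitKey();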

grayscale image creation 16 bits

I am using OpenCV for the first time, with OpenCV 3 and Xcode. I want to create a 16-bit grayscale image, and the data I have is defined such that 4000 is the pixel value for white and 0 for black. I have the information for these pixels in an array of type int. How can I create a Mat and assign the values in the array to the Mat?
short data[] = { 0,0,4000,4000,0,0,4000, ...};
Mat gray16 = Mat(h, w, CV_16S, data);
Again, the types must match: for 16-bit you need CV_16S and a short* array, for 8-bit CV_8U and a uchar* array, for float CV_32F and a float* array, and so on.
You can create your Mat with
cv::Mat m(rows, cols, CV_16UC1);
but to my knowledge there is no way to define a custom value for "white"; you'll have to multiply m by std::numeric_limits<unsigned short>::max() / 4000. However, this is only necessary when displaying the image.
A lookup table could do the same (though potentially slower); see cv::LUT. However, it apparently only supports 8-bit images.
Edit: OK, I missed the part about assigning existing array values; see berak's answer. I hope this answer is still useful.
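Putting the two answers together, a sketch: the names values, w and h stand in for the question's int array and image dimensions (they are not defined in the original code). The data is first copied into an unsigned short buffer so the element type matches CV_16UC1, and the scaling is only applied for display, mapping 4000 to full white:
std::vector<unsigned short> pixels(values, values + w * h); // narrow the int data to 16-bit
cv::Mat gray16(h, w, CV_16UC1, pixels.data()); // header over the buffer, no copy
cv::Mat display;
gray16.convertTo(display, CV_16UC1, 65535.0 / 4000.0); // scale so 4000 becomes white
cv::imshow("gray16", display);
cv::waitKey();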