Suppose I have an array
uint8_t img[1000][1200][3]
where the first 2 dimensions represent the size of the image (height, width),
and the third one the channels (BGR).
E.g.
img[200][100][1]
gives the value of the green channel of the pixel at row 200, column 100.
How can I convert this array to a cv::Mat image?
I tried
cv::Mat my_image(1000, 1200, CV_8UC3, img);
but I am not sure if the result I am getting is correct. Any hints?
Not an expert in C++, but the idea is the following:
uint8_t img[1000][1200][3];
uint8_t *p = &img[0][0][0];
//The constructor takes a pointer to the image data; the size arguments are rows (height) first, then columns (width), and the buffer is wrapped without being copied.
cv::Mat my_image(1000, 1200, CV_8UC3, (void*)p);
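For completeness, a minimal sketch of the whole round trip (the fill value is just for illustration). Since the constructor wraps the existing buffer rather than copying it, take a clone() if the Mat must outlive the array:

#include <opencv2/core.hpp>
#include <cstdint>

int main() {
    static uint8_t img[1000][1200][3] = {};  // static: ~3.6 MB is too large for the stack
    img[200][100][1] = 255;                  // green channel of the pixel at row 200, col 100

    // rows (height) first, then cols (width); the buffer is wrapped, not copied
    cv::Mat my_image(1000, 1200, CV_8UC3, img);
    // my_image.at<cv::Vec3b>(200, 100)[1] now reads 255

    cv::Mat owned = my_image.clone();        // deep copy, safe to use after img is gone
    return 0;
}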
I am trying to convert a float image that I get from a simulated depth camera to CV_16UC1. The camera publishes the depth in CV_32FC1 format. I tried many ways but the result was not reasonable.
cv::Mat depth_cv(512, 512, CV_32FC1, depth);
cv::Mat depth_converted;
depth_cv.convertTo(depth_converted, CV_16UC1);
The result is a black image. If I use a scale factor, the image will be white.
I also tried to do it this way:
float depthValueF[512 * 512];
for (int i = 0; i < resolution[1]; i++) {        // go through the rows (y)
    for (int j = 0; j < resolution[0]; j++) {    // go through the columns (x)
        float depthValueOfPixel = depth[i * resolution[0] + j];  // this is location j/i, i.e. x/y
        depthValueF[i * resolution[0] + j] = depthValueOfPixel * 65535.0f;
    }
}
It was not successful either.
Try using cv::normalize instead, which will not only convert the image into the proper data type, but will also do the scaling for you under the hood.
Therefore:
cv::Mat depth_cv(512, 512, CV_32FC1, depth);
cv::Mat depth_converted;
cv::normalize(depth_cv, depth_converted, 0, 65535, cv::NORM_MINMAX, CV_16UC1);
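One caveat: NORM_MINMAX stretches whatever value range is present in each frame to the full 0..65535 span, so the output scale changes with the scene. If the depth has a known physical range, a fixed-scale convertTo keeps the units stable; a minimal sketch, assuming (purely as an example) a 10-meter upper bound:

const float max_depth = 10.0f;  // assumed sensor maximum; adjust to your camera
cv::Mat depth_cv(512, 512, CV_32FC1, depth);
cv::Mat depth_converted;
depth_cv.convertTo(depth_converted, CV_16UC1, 65535.0 / max_depth);  // fixed scale, stable units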
I have some misunderstanding about OpenCV 4.1.0 and memcpy in C++. The question is: why does the image appear zoomed in so much?
I read an image like this:
Mat img = imread("lena512.bmp", 1); // Black and White Image
namedWindow("Display window", WINDOW_AUTOSIZE);
imshow("Display window", img);
After this I have two byte arrays:
int inputSize = width * height * channels;
byte* pixels = new byte[width * height * channels];
byte* out = new byte[width * height * channels];
I copy img into the pixels array:
memcpy(pixels, img.data, inputSize * sizeof(byte));
And then I want to check whether the retrieved image is the same as the input:
Mat image = Mat(width, height , CV_8U);
memcpy(image.data, out, inputSize * sizeof(byte));
Mat img = imread("lena512.bmp", 1); // Black and White Image
That's the problem: the comment is a lie, and by using a magic number instead of a named constant, you can't easily tell that's the case. 1 in this context means IMREAD_COLOR -- i.e. the image is always read as a 3-channel BGR image.
However, after the shenanigans with memcpy and raw pointers, you create a new Mat in the following manner:
Mat image = Mat(width, height , CV_8U);
Note that CV_8U is equivalent to CV_8UC1. Hence, you create a single channel (grayscale) Mat, but give it 3-channel data.
Getting garbage as a result is the lesser issue. The much more serious issue is that you copy 3x as much data as the target pixel buffer can hold -- basically you clobber half a megabyte of memory that doesn't belong to the Mat. That can either end with a segfault, or some really hard to find bugs (in case you overwrite some memory used by other data structures).
Update: There's another issue that I missed (thanks to @Micka for catching it). The order of parameters of the cv::Mat constructor is rows, columns, datatype. It appears you switched width and height, although since your input image is square (i.e. width == height) it didn't matter.
The correct way to allocate the second Mat would be
Mat image = Mat(height, width, CV_8UC3);
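Putting the pieces together, a sketch of a consistent round trip, using the file name from the question (the one up-front decision is whether you want color or grayscale):

#include <opencv2/opencv.hpp>
#include <cstring>
#include <vector>

int main() {
    // Read as color (3 channels); use cv::IMREAD_GRAYSCALE for a true 1-channel image.
    cv::Mat img = cv::imread("lena512.bmp", cv::IMREAD_COLOR);
    if (img.empty()) return 1;

    const size_t inputSize = img.total() * img.elemSize();  // bytes, channels included
    std::vector<uchar> pixels(inputSize);
    std::memcpy(pixels.data(), img.data, inputSize);        // Mats from imread are continuous

    // Rebuild with matching geometry and type: rows (height) first, then cols (width).
    cv::Mat image(img.rows, img.cols, img.type());
    std::memcpy(image.data, pixels.data(), inputSize);

    cv::imshow("Round trip", image);
    cv::waitKey(0);
    return 0;
}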
I am programming in a Qt environment and I have a Mat image of size 2592x2048 that I want to resize to the size of a "label" that I have. But when I want to show the image, I have to multiply the width by 3 for the image to be shown at its correct size. Is there any explanation for that?
This is my code:
//Here I get the image from a buffer and save it into a Mat image.
//img_width is 2592 and img_height is 2048
Mat image = Mat(cv::Size(img_width, img_height), CV_8UC3, (uchar*)img, Mat::AUTO_STEP);
Mat cimg;
double r; int n_width, n_height;
//Get the width of the label (lbl) in which I want to show the image
n_width = ui->lbl->width();
r = (double)(n_width)/img_width;
n_height = r*(img_height);
cv::resize(image, cimg, Size(n_width*3, n_height), INTER_AREA);
Thanks.
The resize function works well, because if you save the resized image to a file, it is displayed correctly. Since you want to display it in a QLabel, I assume you have to transform your image to a QImage first and then to a QPixmap. I believe the problem lies either in the step or in the image format.
If we ensure that the image data passed to
Mat image = Mat(cv::Size(img_width, img_height), CV_8UC3, (uchar*)img, Mat::AUTO_STEP);
is indeed an RGB image, then the code below should work:
ui->lbl->setPixmap(QPixmap::fromImage(QImage(cimg.data, cimg.cols, cimg.rows, *cimg.step.p, QImage::Format_RGB888 )));
Finally, instead of using OpenCV, you could construct a QImage object using the constructor
QImage((uchar*)img, img_width, img_height, QImage::Format_RGB888)
and then use the scaledToWidth method to do the resize (beware, though, that this method returns the scaled image and does not perform the resize operation on the image in place).
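For reference, a sketch of the QLabel path, assuming cimg is the resized CV_8UC3 Mat from above in OpenCV's default BGR order: pass the row stride explicitly and swap the channels, instead of multiplying the width by 3.

// The row stride comes from cimg.step; never assume it equals cols * 3.
QImage qimg(cimg.data, cimg.cols, cimg.rows,
            static_cast<int>(cimg.step), QImage::Format_RGB888);
// Format_RGB888 expects RGB, so swap B and R; rgbSwapped() returns a deep copy,
// which also keeps the pixmap valid after cimg is released.
ui->lbl->setPixmap(QPixmap::fromImage(qimg.rgbSwapped()));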
I have a 32-bit integer array containing the pixel values of a 3450x3450 image that I want to create a Mat image with. I tried the following:
int *image_array;
image_array = (int *)malloc( 3450*3450*sizeof(int) );
memset( (char *)image_array, 0, sizeof(int)*3450*3450 );
image_array[0] = intensity_of_first_pixel;
...
image_array[11902499] = intensity_of_last_pixel;
Mat M(3450, 3450, CV_32FC1, image_array);
and upon displaying the image I get a black screen. I should also note the array contains a 16-bit grayscale image.
I guess you should try creating the Mat with a type that matches the input image, which I assume is in grayscale or RGB[A] format, using:
cv::Mat m(3450, 3450, CV_8UC1, image_array) // For GRAY image
cv::Mat m(3450, 3450, CV_8UC3, image_array) // For RGB image
cv::Mat m(3450, 3450, CV_8UC4, image_array) // For RGBA image
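Since the question states that the array actually holds 16-bit grayscale intensities stored one per 32-bit int, here is another sketch worth trying (an assumption about the layout, not something the code above confirms): wrap the buffer with the element type it really has, then convert.

// image_array as allocated in the question: one 32-bit signed int per pixel
cv::Mat m32(3450, 3450, CV_32SC1, image_array);  // wraps the buffer; no copy, no byte reinterpretation
cv::Mat m16;
m32.convertTo(m16, CV_16UC1);                    // per-pixel saturating cast to 16-bit unsigned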
I'm new to OpenCV and I'm trying out some sample code.
In one example:
Mat gr(row1, col1, CV_8UC1, Scalar(0));
int x = gr.at<uchar>(row, col);
And in another one,
Mat grHistrogram(301,260,CV_8UC1,Scalar(0,0,0));
line(grHistrogram,pt1,pt2,Scalar(255,255,255),1,8,0);
Now my question: if I use Scalar(0) instead of Scalar(0,0,0) in the second code, the code doesn't work.
1. Why is this happening, since both create a Mat image structure?
2. What is the purpose of const cv::Scalar &_s?
I searched the documentation on the OpenCV site (opencv.pdf, opencv2refman.pdf) and O'Reilly's OpenCV book, but couldn't find an explained answer.
I think I'm using the Mat(int _rows, int _cols, int _type, const cv::Scalar &_s) constructor.
First, you need the following information to create the image:
Width: 301 pixels
Height: 260 pixels
Each pixel value (intensity) is 0 ~ 255: an 8-bit unsigned integer
Supports all RGB colors: 3 channels
Initial color: black = (B, G, R) = (0, 0, 0)
You can create the image using cv::Mat:
Mat grHistogram(260, 301, CV_8UC3, Scalar(0, 0, 0));
The 8U means an 8-bit unsigned integer, C3 means 3 channels for RGB color, and Scalar(0, 0, 0) is the initial value for each pixel. Similarly,
line(grHistrogram,pt1,pt2,Scalar(255,255,255),1,8,0);
is to draw a line on grHistogram from point pt1 to point pt2. The color of the line is white (255, 255, 255), with 1-pixel thickness, 8-connected line type, and 0 shift.
Sometimes you don't need an RGB color image, but a simple grayscale one; that is, one channel instead of three. The type can then be changed to CV_8UC1, and you only need to specify the intensity for one channel, Scalar(0) for example.
Back to your problem,
Why is this happening, since both create a Mat image structure?
Because you need to specify the type of the Mat. Is it a color image CV_8UC3 or a grayscale image CV_8UC1? They are different. Your program may not work as you think if you use Scalar(255) on a CV_8UC3 image.
What is the purpose of const cv::Scalar &_s?
cv::Scalar is used to specify the intensity value for each pixel. For example, Scalar(255, 0, 0) is blue and Scalar(0, 0, 0) is black if the type is CV_8UC3, and Scalar(0) is black if it's a CV_8UC1 grayscale image. Avoid mixing them up.
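The same rule applies to pixel access, as in the gr.at<uchar>(row, col) line from the question: the template argument of at<> has to match the Mat type just like the Scalar does. A small sketch (sizes and names are illustrative):

Mat gray(100, 100, CV_8UC1, Scalar(0));          // one value per pixel
Mat color(100, 100, CV_8UC3, Scalar(0, 0, 0));   // three values per pixel

uchar g = gray.at<uchar>(10, 20);                // matches CV_8UC1
Vec3b bgr = color.at<Vec3b>(10, 20);             // matches CV_8UC3; bgr[1] is green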
You can create a single-channel or a multi-channel image.
Creating a single-channel image: Mat img(500, 1000, CV_8UC1, Scalar(70));
Creating a multi-channel image: Mat img1(500, 1000, CV_8UC3, Scalar(10, 100, 150));
You can see more examples and details on the following page:
https://progtpoint.blogspot.com/2017/01/tutorial-3-create-image.html