I have a uchar* raw from an API which represents raw image data. The width, height and number of channels of this raw data are already known. I have already pre-allocated a cv::Mat (OpenCV) with this width and height.
My question is: how can I set raw into this cv::Mat? I would like to copy raw into the cv::Mat instead of just switching pointers. Is there a function to accomplish this, or do I need to do it manually myself?
I guess it isn't the most sophisticated way, but it should work:
uchar* raw; // filled in by the API
cv::Mat image(size, type, raw); // wraps raw; no copy yet
image = image.clone(); // clone() makes a deep copy of the pixel data
cv::Mat mat(cv::Size(width, height), CV_8UC1, raw, cv::Mat::AUTO_STEP); // CV_8UC1 assumes one channel; use e.g. CV_8UC3 for three
cv::Mat copiedImage = mat.clone(); // deep copy of the wrapped data
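Since the question mentions a pre-allocated cv::Mat, here is a minimal sketch (preallocated is a hypothetical name for your existing Mat) that copies into the existing buffer instead of allocating a new one, assuming its size and type already match the raw data:
cv::Mat wrapper(preallocated.rows, preallocated.cols, preallocated.type(), raw); // wraps raw; no copy
wrapper.copyTo(preallocated); // deep-copies the pixels; reuses preallocated's buffer because size and type match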
Related
If I have an opencv image that I read from a png file like this:
cv::Mat img = cv::imread("/to/path/test.png");
how do I get that image in bytes? I know using img.data returns an unsigned char* but that is not what I need. Any suggestions?
If I understood your question right, you want, for example, a 250*250 image to return a 250*250 matrix, so I would suggest using grayscale instead of BGR:
imgData = cv2.imread(path, 0)
I believe this is written in C++ like this:
cv::Mat img = cv::imread(file_name); // returns a matrix object
cv::Mat graymat;
cv::cvtColor(img, graymat, cv::COLOR_BGR2GRAY);
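If by "bytes" you mean a contiguous byte buffer rather than a matrix, a minimal sketch (assuming the Mat is continuous, which is true for a freshly loaded image):
// total() * elemSize() is the number of bytes in the pixel buffer
std::vector<uchar> bytes(graymat.data, graymat.data + graymat.total() * graymat.elemSize());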
I have some misunderstanding about OpenCV 4.1.0 and memcpy in C++. The question is: why does the image end up looking zoomed in a lot?
I read an image like this:
Mat img = imread("lena512.bmp", 1); // Black and White Image
namedWindow("Display window", WINDOW_AUTOSIZE);
imshow("Display window", img);
After this I have two byte arrays:
int inputSize = width * height * channels;
byte* pixels = new byte[width * height * channels];
byte* out = new byte[width * height * channels];
I copy the img to pixels array:
memcpy(pixels, img.data, inputSize * sizeof(byte));
And then I want to check whether the retrieved image is the same as the input:
Mat image = Mat(width, height , CV_8U);
memcpy(image.data, out, inputSize * sizeof(byte));
Mat img = imread("lena512.bmp", 1); // Black and White Image
That's the problem: the comment is a lie, and by using a magic number instead of a named constant, you can't easily tell that's the case. 1 in this context means IMREAD_COLOR -- i.e. the image is always read as a 3-channel BGR image.
However, after the shenanigans with memcpy and raw pointers, you create a new Mat in the following manner:
Mat image = Mat(width, height , CV_8U);
Note that CV_8U is equivalent to CV_8UC1. Hence, you create a single channel (grayscale) Mat, but give it 3-channel data.
Getting garbage as a result is the lesser issue. The much more serious issue is that you copy 3x as much data as the target pixel buffer can hold -- basically you clobber half a megabyte of memory that doesn't belong to the Mat. That can either end with a segfault, or some really hard to find bugs (in case you overwrite some memory used by other data structures).
Update: There's another issue that I've missed (thanks to #Micka for catching that). The order of parameters of the cv::Mat constructor is rows, columns, datatype. It appears you switched width and height, although since your input image appears to be square (i.e. width == height) it didn't matter.
The correct way to allocate the second Mat would be
Mat image = Mat(height, width, CV_8UC3);
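Putting the fixes together, a sketch of the corrected round trip, using the named constant and deriving the buffer size from the Mat itself rather than from separate width/height variables:
Mat img = imread("lena512.bmp", IMREAD_COLOR); // explicitly a 3-channel BGR image
size_t inputSize = img.total() * img.elemSize(); // rows * cols * 3 bytes
byte* pixels = new byte[inputSize];
memcpy(pixels, img.data, inputSize);
Mat image = Mat(img.rows, img.cols, CV_8UC3); // rows (height) first, then columns (width)
memcpy(image.data, pixels, inputSize);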
I am programming in a Qt environment and I have a Mat image of size 2592x2048 that I want to resize to the size of a "label" I have. But when I show the image, I have to multiply the width by 3 for the image to appear at its correct size. Is there any explanation for that?
This is my code:
//Here I get the image from a buffer and save it into a Mat image.
//img_width is 2592 and img_height is 2048
Mat image = Mat(cv::Size(img_width, img_height), CV_8UC3, (uchar*)img, Mat::AUTO_STEP);
Mat cimg;
double r; int n_width, n_height;
//Get the width of label (lbl) into which I want to show the image
n_width = ui->lbl->width();
r = (double)(n_width)/img_width;
n_height = r*(img_height);
cv::resize(image, cimg, Size(n_width*3, n_height), 0, 0, INTER_AREA);
Thanks.
The resize function works well, because if you save the resized image to a file, it is displayed correctly. Since you want to display it on a QLabel, I assume you transform your image to a QImage first and then to a QPixmap. I believe the problem lies either in the step or in the image format.
If we ensure the image data passed in
Mat image = Mat(cv::Size(img_width, img_height), CV_8UC3, (uchar*)img, Mat::AUTO_STEP);
is indeed an RGB image, then the code below should work:
ui->lbl->setPixmap(QPixmap::fromImage(QImage(cimg.data, cimg.cols, cimg.rows, static_cast<int>(cimg.step), QImage::Format_RGB888)));
Finally, instead of using OpenCV, you could construct a QImage object using the constructor
QImage((uchar*)img, img_width, img_height, QImage::Format_RGB888)
and then use the scaledToWidth method to do the resize (beware though that this method returns the scaled image and does not perform the resize on the image itself).
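A sketch of that alternative, assuming img really is tightly packed RGB888 data (passing bytesPerLine explicitly as 3 * img_width avoids QImage's default scanline alignment):
QImage qimg((uchar*)img, img_width, img_height, 3 * img_width, QImage::Format_RGB888);
ui->lbl->setPixmap(QPixmap::fromImage(qimg.scaledToWidth(ui->lbl->width())));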
I've created a filter extending QAbstractVideoFilter and QVideoFilterRunnable, and I've overridden the
QVideoFrame run(QVideoFrame* input, const QVideoSurfaceFormat &surfaceFormat, RunFlags flags)
method.
The problem is that QVideoFrame format is Format_YUV420P and has no handle. I need to convert it into a CV_8UC1 in order to use OpenCV algorithms.
Which is the best way to accomplish this?
First you need to create a cv::Mat, which has a constructor for initializing it from a data pointer:
cv::Mat img = cv::Mat(rows * 3 / 2, cols, CV_8UC1, yuvData /*change this to point to the first element of the array containing the YUV color info*/);
Note the rows * 3 / 2 and CV_8UC1: YUV420P is planar data, a full-resolution Y plane followed by quarter-resolution U and V planes in one single-channel buffer. Now that img wraps the YUV color data, you may use the various cvtColor modes to convert it to other formats; for converting it to grayscale you may try:
cv::Mat gray;
cv::cvtColor(img, gray, cv::COLOR_YUV2GRAY_I420);
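For completeness, a sketch of getting a valid data pointer out of the frame in the first place (Qt 5 API; assumes the planes are tightly packed, i.e. bytesPerLine == width):
QVideoFrame frame(*input); // shallow copy of the input frame
frame.map(QAbstractVideoBuffer::ReadOnly); // makes bits() valid
cv::Mat yuv(frame.height() * 3 / 2, frame.width(), CV_8UC1, frame.bits());
cv::Mat gray;
cv::cvtColor(yuv, gray, cv::COLOR_YUV2GRAY_I420);
frame.unmap();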
IplImage* img = cvLoadImage("something.jpg");
IplImage* src = cvLoadImage("src.jpg");
cvSub(src, img, img);
But the size of the source image is different from img.
Is there any opencv function to resize it to the img size?
You can use cvResize. Or, better, use the C++ interface (e.g. cv::Mat instead of IplImage and cv::imread instead of cvLoadImage) and then use cv::resize, which handles memory allocation and deallocation itself.
The two functions you need are documented here:
imread: read an image from disk.
resize: scale an image to any target size.
In short:
// Load images in the C++ format
cv::Mat img = cv::imread("something.jpg");
cv::Mat src = cv::imread("src.jpg");
// Resize src so that is has the same size as img
cv::resize(src, src, img.size());
And please, please, stop using the old and completely deprecated IplImage* classes.
For your information, the Python equivalent is:
imageBuffer = cv.LoadImage( strSrc )
nW = 100  # new width
nH = 100  # new height
smallerImage = cv.CreateImage( (nW, nH), imageBuffer.depth, imageBuffer.nChannels )
cv.Resize( imageBuffer, smallerImage, interpolation=cv.CV_INTER_CUBIC )
cv.SaveImage( strDst, smallerImage )
Make a useful function like this:
IplImage* img_resize(IplImage* src_img, int new_width, int new_height)
{
    // allocate a destination image with the same depth and channel count as the source
    IplImage* des_img = cvCreateImage(cvSize(new_width, new_height), src_img->depth, src_img->nChannels);
    cvResize(src_img, des_img, CV_INTER_LINEAR);
    return des_img; // caller must release it with cvReleaseImage
}
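Usage for the original cvSub problem might then look like this (the caller owns the returned image and must release it):
IplImage* resized_src = img_resize(src, img->width, img->height);
cvSub(resized_src, img, img); // sizes now match
cvReleaseImage(&resized_src);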
You can use CvInvoke.Resize for Emgu.CV 3.0, e.g.:
CvInvoke.Resize(inputImage, outputImage, new System.Drawing.Size(100, 100), 0, 0, Inter.Cubic);
Details are here