I want to convert an SVG graphic to an OpenCV Mat object. To do that, the SVG graphic is loaded into a QSvgRenderer object and rendered into a QImage, whose raw data I then use to create my final Mat object:
void scaleSvg(const cv::Mat &in, QSvgRenderer &svg, cv::Mat &out)
{
    if (!svg.isValid())
    {
        return;
    }

    QImage image(in.cols, in.rows, QImage::Format_ARGB32);
    // Get a QPainter that paints into the image
    QPainter painter(&image);
    svg.render(&painter);

    std::cout << "Image byte count: " << image.byteCount() << std::endl;
    std::cout << "Image bits: " << (int*)image.constBits() << std::endl;
    std::cout << "Image depth: " << image.depth() << std::endl;

    uchar *data = new uchar[image.byteCount()];
    memcpy(data, image.constBits(), image.byteCount());
    out = cv::Mat(image.height(), image.width(), CV_8UC4, data, CV_AUTOSTEP);

    std::cout << "New byte count: " << out.size() << std::endl;
    std::cout << "New depth: " << out.depth() << std::endl;
    std::cout << "First bit: " << out.data[0] << std::endl;
}
Unfortunately, I get a "memory access violation" error when writing my resulting object into a file:
std::cout << (int*)out.data << std::endl; // pointer can still be accessed without errors
cv::imwrite("scaled.png", out); // memory access error
The file that is written only reaches a size of 33 bytes, no more (header data only?).
There is some explanation of pointer ownership in cv::Mat on the Internet, and I thought the data would only be released after the last reference to it is released, which should not happen here since "out" is a reference. By the way, another way to convert an SVG into a cv::Mat is always welcome. As OpenCV does not seem to support SVGs, this looked like a simple way to get it done.
As constBits() indeed does not work here, and it is not always safe to assume that the number of bytes per line is simply width times channels (that assumption causes a segfault for me), I found the following suggestion in StereoMatching's answer:
cv::Mat(img.height(), img.width(), CV_8UC4, img.bits(), img.bytesPerLine()).clone()
Baradé's concern about using bits() is valid, but because you clone the result, the Mat copies the data from bits(), so it is not an issue.
Usually cv::Mat is refcounted, but this case is special: if you construct a Mat around an external / borrowed data pointer, you have to clone() it to make sure it owns its own copy of the pixels. Otherwise, as soon as you leave the scope of scaleSvg(), the Mat 'out' holds a dangling pointer.
You tried to 'new' the data to be copied; unfortunately, that does not solve it (it only adds another problem).
You would also have to delete[] the uchar* data pixels yourself, and you don't, so your code currently combines the worst of all worlds.
Instead, try:
out = cv::Mat(image.height(), image.width(), CV_8UC4, image.constBits(), CV_AUTOSTEP).clone();
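For reference, a minimal sketch of the whole function with that fix applied (untested; it additionally fills the QImage before rendering, since QImage pixel memory is not zero-initialized, and uses a const_cast because constBits() returns a const pointer):
cv::Mat scaled;
void scaleSvg(const cv::Mat &in, QSvgRenderer &svg, cv::Mat &out)
{
    if (!svg.isValid())
    {
        return;
    }

    QImage image(in.cols, in.rows, QImage::Format_ARGB32);
    image.fill(Qt::transparent);   // QImage pixel memory is not zero-initialized

    QPainter painter(&image);
    svg.render(&painter);
    painter.end();

    // clone() copies the pixels, so 'out' owns its data after this function
    // returns. bytesPerLine() is passed explicitly because QImage rows may be
    // padded, so AUTO_STEP is not always safe.
    out = cv::Mat(image.height(), image.width(), CV_8UC4,
                  const_cast<uchar*>(image.constBits()),
                  image.bytesPerLine()).clone();
}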
Related
I'm struggling with OpenCV's Mat because of unexpected results. Here is an example:
cv::Mat local_mat = cv::Mat::eye(cv::Size(1000, 1000), CV_8UC1);
qDebug() << "1. local_mat.data: " << local_mat.data;
cv::Mat sobel_img_ = cv::Mat::eye(cv::Size(1000, 1000), CV_8UC1);
qDebug() << "2. sobel_img_.data: " << sobel_img_.data;
sobel_img_ = local_mat; // copy address but no clone()
qDebug() << "3. sobel_img_.data: " << sobel_img_.data;
sobel_img_ = cv::Mat::eye(cv::Size(1000, 1000), CV_8UC1); // renew
qDebug() << "4. sobel_img_.data: " << sobel_img_.data;
1. local_mat.data: 0x55aa19a53e40
2. sobel_img_.data: 0x55aa19b480c0
3. sobel_img_.data: 0x55aa19a53e40
4. sobel_img_.data: 0x55aa19a53e40
1 and 2 should be different because I create a new Mat, so that is fine.
However, 3 and 4 are the same even though I create a new Mat after copying local_mat into sobel_img_.
I run into many problems like this when I use OpenCV's Mat.
Could you explain why this happens and how I can solve it?
The initialization with cv::Mat::eye is a form of matrix expression (MatExpr).
cv::Mat has several overloads of operator=. One of them takes a MatExpr as its argument, and its documentation says:
Assigned matrix expression object. As opposite to the first form of
the assignment operation, the second form can reuse already allocated
matrix if it has the right size and type to fit the matrix expression
result. It is automatically handled by the real function that the
matrix expressions is expanded to. For example, C=A+B is expanded to
add(A, B, C), and add takes care of automatic C reallocation.
The key phrase is "can reuse already allocated matrix if it has the right size and type": that is exactly what happens in your case. The memory that is already allocated (and now shared with local_mat) is reused to build the identity matrix produced by cv::Mat::eye.
You can force the MatExpr to be evaluated into a new cv::Mat just by casting:
sobel_img_ = (cv::Mat)cv::Mat::eye(cv::Size(1000, 1000), CV_8UC1); // renew
Then sobel_img_ will refer to a newly allocated matrix.
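A minimal sketch of the difference (assuming any reasonably recent OpenCV; the pointer comparison stands in for the qDebug output above):
#include <opencv2/core.hpp>
#include <iostream>

int main()
{
    cv::Mat a = cv::Mat::eye(cv::Size(1000, 1000), CV_8UC1);
    cv::Mat b = a;                                   // b shares a's buffer

    // MatExpr assignment: the existing (shared) buffer fits, so it is reused
    // and 'a' is overwritten as well.
    b = cv::Mat::eye(cv::Size(1000, 1000), CV_8UC1);
    std::cout << (a.data == b.data) << std::endl;    // 1: still the same buffer

    // Forcing evaluation into a fresh Mat detaches b from a.
    b = (cv::Mat)cv::Mat::eye(cv::Size(1000, 1000), CV_8UC1);
    std::cout << (a.data == b.data) << std::endl;    // 0: b now owns new memory
}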
I'm learning CUDA and came across a course that is helping me, even though the code is very old and I'm having problems running it; I'm still trying to understand it. The instructor reads images using OpenCV's imread, which gives a Mat object, I guess, but the data is stored as a uchar*:
cv::Mat image = cv::imread(filename.c_str(), CV_LOAD_IMAGE_COLOR);
Then I got stuck converting uchar to uchar4, and while reading the teacher's code I found this:
cv::Mat image = cv::imread(filename.c_str(), CV_LOAD_IMAGE_COLOR);
if (image.empty()) {
    std::cerr << "Couldn't open file: " << filename << std::endl;
    exit(1);
}

cv::cvtColor(image, imageInputRGBA, CV_BGR2RGBA);

// allocate memory for the output
imageOutputRGBA.create(image.rows, image.cols, CV_8UC4);

// This shouldn't ever happen given the way the images are created
// at least based upon my limited understanding of OpenCV, but better to check
if (!imageInputRGBA.isContinuous() || !imageOutputRGBA.isContinuous()) {
    std::cerr << "Images aren't continuous!! Exiting." << std::endl;
    exit(1);
}

*h_inputImageRGBA  = (uchar4 *)imageInputRGBA.ptr<unsigned char>(0);
*h_outputImageRGBA = (uchar4 *)imageOutputRGBA.ptr<unsigned char>(0);
Are the last two lines the ones where he subtly converts from uchar to uchar4?
h_inputImageRGBA and h_outputImageRGBA are both of type uchar4**.
Can somebody help me understand the code? Here is the link to the source; the function name is Preprocess.
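As a hedged illustration (not from the original post): a CV_8UC4 Mat stores its pixels as contiguous groups of four uchars, and CUDA's uchar4 is a struct of four uchars with the same size and layout, so those two lines do not convert anything; they just reinterpret the same memory. A minimal sketch, with uchar4 redefined locally so it compiles without the CUDA headers:
#include <opencv2/core.hpp>
#include <iostream>

// Stand-in for CUDA's uchar4: four unsigned chars, 4 bytes total.
struct uchar4 { unsigned char x, y, z, w; };

int main()
{
    // 2x2 RGBA-style image where every pixel is (10, 20, 30, 40).
    cv::Mat rgba(2, 2, CV_8UC4, cv::Scalar(10, 20, 30, 40));

    // No copy, no conversion: the same bytes viewed as uchar4 structs.
    uchar4 *pixels = reinterpret_cast<uchar4*>(rgba.ptr<unsigned char>(0));

    std::cout << (int)pixels[0].x << " "        // 10
              << (int)pixels[0].y << " "        // 20
              << (int)pixels[0].z << " "        // 30
              << (int)pixels[0].w << std::endl; // 40
}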
Edit: in trying to give a straightforward example of the problem, it appears I left out what was causing the real issue. I have modified the example to illustrate it.
I am trying to use OpenCV to perform operations on a cv::Mat that wraps external data.
Consider this example:
unsigned char *extern_data = new unsigned char[1280*720*3];
cv::Mat mat = cv::Mat(1280, 720, CV_8UC3, extern_data); //Create cv::Mat external
//Edit - Added cv::imdecode
mat = cv::imdecode(mat,1);
//In real implementation it would be mat = cv::imdecode(image,'1')
// where image is a cv::Mat of an image stored in a mmap buffer
mat.data[100] = 99;
std::cout << "External array: " << static_cast<int>(extern_data[100]) << std::endl;
std::cout << "cv::Mat array: " << static_cast<int>(mat.data[100]) << std::endl;
The result of this is:
> External array: 0
> cv::Mat array: 100
Clearly, the external array is not being modified, so new memory is being allocated for the cv::Mat data. From my understanding this was not supposed to happen! There should have been no copy operation, and mat.data should be a pointer to &extern_data[0].
What am I misunderstanding?
So far, the way I have gotten my program to work is to use std::copy. I am still wondering whether there is a way to write the result of cv::imdecode() directly into the external data.
Currently I am using:
unsigned char *extern_data = new unsigned char[1280*720*3];
cv::Mat mat = cv::Mat(1280, 720, CV_8UC3, extern_data); //Create cv::Mat external
mat = cv::imdecode(mat,1);
std::copy(mat.data, mat.data + 1280*720*3, extern_data);
I just wish I could figure out how to have cv::imdecode() write directly into extern_data without the additional std::copy line!
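Not from the original post, but one hedged possibility: cv::imdecode has an overload that takes a destination Mat pointer, cv::imdecode(buf, flags, &dst). If dst already wraps the external buffer and the decoded image has exactly the same size and type, Mat::create keeps the existing data pointer and the decode lands in extern_data; if the size or type differs, OpenCV silently reallocates and the external buffer is bypassed again, so this only helps when the dimensions are known up front. A rough sketch of that idea (the encoded JPEG here is just a placeholder source):
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    // Placeholder source: a 720x1280 BGR image encoded to JPEG in memory.
    cv::Mat original(720, 1280, CV_8UC3, cv::Scalar(0, 0, 255));
    std::vector<unsigned char> encoded;
    cv::imencode(".jpg", original, encoded);

    // External buffer we want the decoded pixels to end up in.
    unsigned char *extern_data = new unsigned char[720 * 1280 * 3];

    // dst wraps extern_data; note rows = 720, cols = 1280 (height first).
    cv::Mat dst(720, 1280, CV_8UC3, extern_data);

    // The three-argument overload decodes into dst. Because dst already has
    // the right size and type, create() keeps the external pointer.
    cv::imdecode(encoded, cv::IMREAD_COLOR, &dst);

    // Should print 1: the decoded pixels were written into extern_data.
    std::cout << (dst.data == extern_data) << std::endl;

    delete[] extern_data;
}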
I have an image buffer stored as a linear array[640*480] of unsigned integer type, and I want to save this array as a bitmap image that can be viewed. I have captured an image from my camera and retrieved its image buffer over a GigE connection in C++ code. So please tell me how to write an integer array of RGB values to a bitmap in C++, along with the header files required. I have the stream buffer as:
if (Result.Succeeded())
{
    // Grabbing was successful, process image
    cout << "Image #" << n << " acquired!" << endl;
    cout << "Size: " << Result.GetSizeX() << " x "
         << Result.GetSizeY() << endl;

    // Get the pointer to the image buffer
    const unsigned int *pImageBuffer = (int *) Result.Buffer();
pImageBuffer is the image buffer; please ignore the functions, as they belong to a custom compiler. I just want to convert the RGB values to a bitmap image and save it.
Also, pImageBuffer gives me R = G = B because the photo is monochrome.
Save the pixel data together with a simple BMP-file header, appropriately initialized. See the format description here.
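To make that concrete, here is a minimal sketch for the monochrome case described above (24-bit BGR, bottom-up rows, each row padded to a multiple of 4 bytes). The buffer name and the 640x480 size are taken from the question; everything else is just the standard BITMAPFILEHEADER/BITMAPINFOHEADER layout written field by field, so it does not depend on Windows headers:
#include <cstdint>
#include <cstdio>
#include <vector>

// Writes a 24-bit BMP. 'pixels' holds one value per pixel (R = G = B here),
// row 0 at the top, width * height entries.
bool writeBmp(const char *path, const unsigned int *pixels, int width, int height)
{
    const int rowSize = (width * 3 + 3) & ~3;            // rows padded to 4 bytes
    const uint32_t dataSize = rowSize * height;
    const uint32_t fileSize = 54 + dataSize;             // 14 + 40 byte headers

    std::vector<uint8_t> header(54, 0);
    header[0] = 'B'; header[1] = 'M';
    auto put32 = [&](int off, uint32_t v) {
        header[off]     = v & 0xFF;         header[off + 1] = (v >> 8) & 0xFF;
        header[off + 2] = (v >> 16) & 0xFF; header[off + 3] = (v >> 24) & 0xFF;
    };
    put32(2, fileSize);                // total file size
    put32(10, 54);                     // offset of pixel data
    put32(14, 40);                     // BITMAPINFOHEADER size
    put32(18, width);
    put32(22, height);                 // positive height => bottom-up rows
    header[26] = 1;                    // planes
    header[28] = 24;                   // bits per pixel
    put32(34, dataSize);

    FILE *f = std::fopen(path, "wb");
    if (!f) return false;
    std::fwrite(header.data(), 1, header.size(), f);

    std::vector<uint8_t> row(rowSize, 0);
    for (int y = height - 1; y >= 0; --y) {              // bottom-up
        for (int x = 0; x < width; ++x) {
            uint8_t v = static_cast<uint8_t>(pixels[y * width + x]);
            row[x * 3 + 0] = v;                          // B
            row[x * 3 + 1] = v;                          // G
            row[x * 3 + 2] = v;                          // R
        }
        std::fwrite(row.data(), 1, row.size(), f);
    }
    std::fclose(f);
    return true;
}

// Usage with the buffer from the question (names assumed):
//   writeBmp("frame.bmp", pImageBuffer, 640, 480);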
I have a function that I would like to apply to each pixel of a YUN image (call it src). I would like the output to be saved to a separate image (call it dst).
I know I can achieve this through pointer arithmetic and accessing the underlying matrix of the image. I was wondering if there was an easier way, say a predefined "map" function that lets me map a function over all the pixels?
Thanks,
Since I don't know what a YUN image is, I'll assume you know how to convert RGB to that format.
I'm not aware of an easy way to do the map function you mentioned. Anyway, OpenCV has a few predefined functions to do image conversion, including
cvCvtColor(color_frame, gray_frame, CV_BGR2GRAY);
which you might want to take a closer look at.
If you would like to roll your own, you need to access each pixel of the image individually, and the code below shows how to do it (it skips all error and return checks for the sake of simplicity):
// Loading the src image
IplImage* src_img = cvLoadImage("input.png", CV_LOAD_IMAGE_UNCHANGED);
int width = src_img->width;
int height = src_img->height;
int bpp = src_img->nChannels;

// Temporary buffer to hold the modified image
// (assumes rows are tightly packed, i.e. widthStep == width * bpp)
char* buff = new char[width * height * bpp];

// Loop to iterate over each pixel of the original image
for (int i = 0; i < width * height * bpp; i += bpp)
{
    /* Perform the per-pixel operation inside this loop */

    if (!(i % (width * bpp))) // print an empty line per row for readability
        std::cout << std::endl;

    // OpenCV stores color images in BGR order
    std::cout << std::dec << "B:" << (int)(unsigned char) src_img->imageData[i] <<
                             " G:" << (int)(unsigned char) src_img->imageData[i+1] <<
                             " R:" << (int)(unsigned char) src_img->imageData[i+2] << " ";

    /* Let's say you wanted to do a lazy grayscale conversion */
    char gray = (char)(((unsigned char) src_img->imageData[i] +
                        (unsigned char) src_img->imageData[i+1] +
                        (unsigned char) src_img->imageData[i+2]) / 3);
    buff[i]   = gray;
    buff[i+1] = gray;
    buff[i+2] = gray;
}

IplImage* dst_img = cvCreateImage(cvSize(width, height), src_img->depth, bpp);
dst_img->imageData = buff;

if (!cvSaveImage("output.png", dst_img))
{
    std::cout << "ERROR: Failed cvSaveImage" << std::endl;
}
Basically, the code loads an RGB image from disk and performs a grayscale conversion on each pixel, saving the result to a temporary buffer. It then creates another IplImage with the grayscale data and saves it to a file on disk.
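As a side note, newer OpenCV versions (3.x and later) expose something much closer to the "map" the question asked for: cv::Mat::forEach, which applies a functor to every pixel, optionally in parallel. A minimal sketch of the same lazy grayscale idea with the C++ API (filenames are placeholders):
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat src = cv::imread("input.png", cv::IMREAD_COLOR);   // BGR
    cv::Mat dst = src.clone();

    // Apply a per-pixel function; forEach may run it in parallel.
    dst.forEach<cv::Vec3b>([](cv::Vec3b &px, const int * /*pos*/) {
        unsigned char gray = (px[0] + px[1] + px[2]) / 3;       // lazy average
        px[0] = px[1] = px[2] = gray;
    });

    cv::imwrite("output.png", dst);
}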