I am working on a project where I want to process my images in C++ with OpenCV.
For simplicity's sake, I just want to convert Uint8List to cv::Mat and back.
Following this tutorial, I managed to make a pipeline that doesn't crash the app. Specifically:
I created a function in a .cpp that takes the pointer to my Uint8List, rawBytes, and encodes it as a .jpg:
int encodeIm(int h, int w, uchar *rawBytes, uchar **encodedOutput) {
    cv::Mat img = cv::Mat(h, w, CV_8UC3, rawBytes); // CV_8UC3
    vector<uchar> buf;
    cv::imencode(".jpg", img, buf); // save output into buf. Note that Dart Image.memory can process either .png or .jpg, which is why we're doing this encoding
    *encodedOutput = (unsigned char *) malloc(buf.size());
    for (int i = 0; i < buf.size(); i++)
        (*encodedOutput)[i] = buf[i];
    return (int) buf.size();
}
Then I wrote a function in a .dart file that calls my C++ encodeIm(int h, int w, uchar *rawBytes, uchar **encodedOutput):
//allocate memory heap for the image
Pointer<Uint8> imgPtr = malloc.allocate(imgBytes.lengthInBytes);
//allocate just 8 bytes to store a pointer that will be malloced in C++ that points to our variably sized encoded image
Pointer<Pointer<Uint8>> encodedImgPtr = malloc.allocate(8);
//copy the image data into the memory heap we just allocated
imgPtr.asTypedList(imgBytes.length).setAll(0, imgBytes);
//c++ image processing
//image in memory heap -> processing... -> processed image in memory heap
int encodedImgLen = _encodeIm(height, width, imgPtr, encodedImgPtr);
//
//retrieve the image data from the memory heap
Pointer<Uint8> cppPointer = encodedImgPtr.elementAt(0).value;
Uint8List encodedImBytes = cppPointer.asTypedList(encodedImgLen);
//myImg = Image.memory(encodedImBytes);
return encodedImBytes;
//free memory heap
//malloc.free(imgPtr);
//malloc.free(cppPointer);
//malloc.free(encodedImgPtr); // always frees 8 bytes
}
Then I linked C++ with Dart via:
final DynamicLibrary nativeLib = Platform.isAndroid
? DynamicLibrary.open("libnative_opencv.so")
: DynamicLibrary.process();
final int Function(int height, int width, Pointer<Uint8> bytes, Pointer<Pointer<Uint8>> encodedOutput)
_encodeIm = nativeLib
.lookup<NativeFunction<Int32 Function(Int32 height, Int32 width,
Pointer<Uint8> bytes, Pointer<Pointer<Uint8>> encodedOutput)>>('encodeIm').asFunction();
And finally I show the result in Flutter via:
Image.memory(...)
Now, the pipeline doesn't crash, which means I haven't goofed up memory handling completely, but it doesn't return the original image either, which means I did mess up somewhere.
Original image:
Pipeline output:
Thanks to Richard Heap's guidance in the comments, I managed to fix the pipeline by changing my matrix definition from
cv::Mat img = cv::Mat(h, w, CV_8UC3, rawBytes);
to
vector<uint8_t> buffer(rawBytes, rawBytes + inBytesCount);
Mat img = imdecode(buffer, IMREAD_COLOR);
where inBytesCount is the length of imgBytes.
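For completeness, a minimal sketch of the revised native function under that fix might look like the following. I'm assuming here that the Dart side now also passes inBytesCount (the length of imgBytes) instead of the height and width, and that the function is exported with C linkage so the FFI lookup by name works; the exact signature in the project may differ.

#include <opencv2/opencv.hpp>
#include <cstdlib>
#include <cstring>
#include <vector>

// Sketch only: parameter names and signature are illustrative.
extern "C" int encodeIm(int inBytesCount, uchar *rawBytes, uchar **encodedOutput) {
    // Treat the incoming bytes as an already-encoded image and decode them,
    // instead of wrapping them as a raw CV_8UC3 buffer.
    std::vector<uint8_t> buffer(rawBytes, rawBytes + inBytesCount);
    cv::Mat img = cv::imdecode(buffer, cv::IMREAD_COLOR);

    std::vector<uchar> buf;
    cv::imencode(".jpg", img, buf);            // re-encode as .jpg for Image.memory

    *encodedOutput = (uchar *) malloc(buf.size());
    std::memcpy(*encodedOutput, buf.data(), buf.size());
    return (int) buf.size();
}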
Related
I'm very new to C++ and I'm trying to create a DLL which uses the OpenCV library.
My DLL gets a raw image from another application and creates a Mat from that application's memory buffer. I send the buffer's address, which holds a raw image, to the DLL and load the raw image into OpenCV. This part works.
But after processing image with OpenCV, I can't write raw image to same memory address.
This is the code snippet that I've tried:
fn_export double createImage(char* address, double width, double height) {
unsigned char* pBuffer = (unsigned char*)address;
memcpy(&pBuffer,&address, sizeof(pBuffer));
cv::Mat img = cv::Mat(height,width, CV_8UC4, pBuffer);
cv::imshow("Original", img);
memcpy(&address, &img.data[0], sizeof(address));
return 1;
}
char* address is the memory address from my application. The other application's buffer doesn't change this way. Does anybody have any advice about this situation?
OK, I solved this issue:
Mat img = Mat(height, width, CV_8UC4, address);
cv::imshow("Image from GM", img);
// same image copy to buffer back;
memcpy(&address[0], &img.data[0], width*height*4.);
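Putting it together, a minimal sketch of the corrected export could look like this. fn_export is assumed to be the DLL export macro from the question, and the GaussianBlur call is purely illustrative processing, not part of the original code.

#include <opencv2/opencv.hpp>
#include <cstring>

// fn_export is the export macro from the question (assumed defined elsewhere).
fn_export double createImage(char* address, double width, double height) {
    int w = (int) width;
    int h = (int) height;

    // Wrap the caller's buffer directly; no copy is made here.
    cv::Mat img(h, w, CV_8UC4, address);
    cv::imshow("Image from GM", img);

    // Illustrative processing into a separate Mat.
    cv::Mat result;
    cv::GaussianBlur(img, result, cv::Size(5, 5), 0);

    // Copy the processed pixels back into the caller's buffer.
    std::memcpy(address, result.data, (size_t) w * h * 4);
    return 1;
}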
I am new to C++ (as well as CUDA and OpenCV), so I am sorry for any mistakes on my side.
I have an existing code base that uses CUDA. Until recently it worked with a decoded .png as input, but now I use a camera to generate live images. These images are the new input for the code. Here it is:
using namespace cv;
INT height = 2160;
INT width = 3840;
Mat image(height, width, CV_8UC3);
size_t pitch;
uint8_t* image_gpu;
// capture image
VideoCapture camera(0);
camera.set(CAP_PROP_FRAME_WIDTH, width);
camera.set(CAP_PROP_FRAME_HEIGHT, height);
camera.read(image);
// here I checked if image is definitely still a CV_8UC3 Mat with the initial height and width; and it is
cudaMallocPitch(&image_gpu, &pitch, width * 4, height);
// here I use cv::Mat::data to get the pointer to the data of the image:
cudaMemcpy2D(image_gpu, pitch, image.data, width*4, width*4, height, cudaMemcpyHostToDevice);
The code compiles but I get an "Exception Thrown" at the last line (cudaMemcpy2D) with the following error code:
Exception thrown at 0x00007FFE838D6660 (nvcuda.dll) in realtime.exe: 0xC0000005: Access violation reading location 0x000001113AE10000.
Google did not give me an answer and I do not know how to proceed from here.
Thanks for any hints!
A rather generic way to copy an OpenCV Mat to device memory allocated with cudaMallocPitch is to use the step member of the Mat object. While allocating the device memory, it also helps to keep a clear picture of how that memory will be laid out and how the Mat object will be copied into it. Here is a simple example demonstrating the procedure for a video frame captured using VideoCapture.
#include<iostream>
#include<cuda_runtime.h>
#include<opencv2/opencv.hpp>
using std::cout;
using std::endl;
size_t getPixelBytes(int type)
{
switch(type)
{
case CV_8UC1:
case CV_8UC3:
return sizeof(uint8_t);
break;
case CV_16UC1:
case CV_16UC3:
return sizeof(uint16_t);
break;
case CV_32FC1:
case CV_32FC3:
return sizeof(float);
break;
case CV_64FC1:
case CV_64FC3:
return sizeof(double);
break;
default:
return 0;
}
}
int main()
{
cv::VideoCapture cap(0);
cv::Mat frame;
if(cap.grab())
{
cap.retrieve(frame);
}
else
{
cout<<"Cannot read video"<<endl;
return -1;
}
uint8_t* gpu_image;
size_t gpu_pitch;
//Get the number of bytes occupied by a single channel element. VideoCapture mostly returns CV_8UC3 frames, in which case this is 1, but check just in case.
size_t pixelBytes = getPixelBytes(frame.type());
//Number of actual data bytes occupied by a row.
size_t frameRowBytes = frame.cols * frame.channels() * pixelBytes;
//Allocate pitch linear memory on device
cudaMallocPitch(&gpu_image, &gpu_pitch, frameRowBytes , frame.rows);
//Copy memory from frame to device memory
cudaMemcpy2D(gpu_image, gpu_pitch, frame.ptr(), frame.step, frameRowBytes, frame.rows, cudaMemcpyHostToDevice);
//Rest of the code ...
return 0;
}
Disclaimer:
Code is written in the browser. Not tested yet. Please add CUDA error checking as required
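As a follow-up to that disclaimer, a possible error-checking helper (a common pattern, not part of the answer's original code) could look like this:

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Wrap every CUDA runtime call so failures are reported at the call site
// instead of surfacing later as an access violation.
#define CUDA_CHECK(call)                                                   \
    do {                                                                   \
        cudaError_t err = (call);                                          \
        if (err != cudaSuccess) {                                          \
            std::fprintf(stderr, "CUDA error '%s' at %s:%d\n",             \
                         cudaGetErrorString(err), __FILE__, __LINE__);     \
            std::exit(EXIT_FAILURE);                                       \
        }                                                                  \
    } while (0)

// Usage with the calls from the answer above:
// CUDA_CHECK(cudaMallocPitch(&gpu_image, &gpu_pitch, frameRowBytes, frame.rows));
// CUDA_CHECK(cudaMemcpy2D(gpu_image, gpu_pitch, frame.ptr(), frame.step,
//                         frameRowBytes, frame.rows, cudaMemcpyHostToDevice));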
I am trying to convert an RGB image into YUV.
I am loading the image using OpenCV.
I am calling the function as follows:
//I know IplImage is outdated
IplImage* im = cvLoadImage("1.jpg", 1);
//....
bgr2yuv(im->imageData, dst, im->width, im->height);
The function to convert the color image to a YUV image is given below.
I am using FFmpeg to do that.
void bgr2yuv(unsigned char *src, unsigned char *dest, int w, int h)
{
AVFrame *yuvIm = avcodec_alloc_frame();
AVFrame *rgbIm = avcodec_alloc_frame();
avpicture_fill(rgbIm, src, PIX_FMT_BGR24, w, h);
avpicture_fill(yuvIm, dest, PIX_FMT_YUV420P, w, h);
av_register_all();
struct SwsContext * imgCtx = sws_getCachedContext(imgCtx,
w, h,(::PixelFormat)PIX_FMT_BGR24,
w, h,(::PixelFormat)PIX_FMT_YUV420P,
SWS_BICUBIC, NULL, NULL, NULL);
sws_scale(imgCtx, rgbIm->data, rgbIm->linesize,0, h, yuvIm->data, yuvIm->linesize);
av_free(yuvIm);
av_free(rgbIm);
}
I am getting wrong output after the conversion.
I think this is due to padding in the IplImage
(my input image width is not a multiple of 4).
I updated the linesize variable, but even after that I am not getting correct output.
It works fine when I use images whose width is a multiple of 4.
Can anybody tell me what the problem in the code is?
Check IplImage::align or IplImage::widthStep and use these to set AVFrame::linesize. For the RGB frame, for example, you would set:
frame->linesize[0] = img->widthStep;
The layout of the dst array can be whatever you want, it depends on how you're using it afterwards.
We need to do the following:
rgbIm->linesize[0] = im->widthStep;
But I think the output data from sws_scale() is not padded to a multiple of 4.
So when you copy this data (dest) back into an IplImage, it will
create problems in displaying, saving, etc.
So we need to set widthStep = width as follows:
IplImage* yuvImage = cvCreateImageHeader(cvGetSize(im), 8, 1);
yuvImage->widthStep = yuvImage->width;
yuvImage->imageData = dest;
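Putting the two answers together, a sketch of bgr2yuv with the widthStep fix applied might look like this. It keeps the same (now deprecated) FFmpeg API used in the question; the extra srcWidthStep parameter is my addition, intended to be passed as im->widthStep from the caller.

extern "C" {
#include <libavcodec/avcodec.h>
#include <libswscale/swscale.h>
}

void bgr2yuv(unsigned char *src, unsigned char *dest, int w, int h, int srcWidthStep)
{
    AVFrame *yuvIm = avcodec_alloc_frame();
    AVFrame *rgbIm = avcodec_alloc_frame();
    avpicture_fill((AVPicture *) rgbIm, src, PIX_FMT_BGR24, w, h);
    avpicture_fill((AVPicture *) yuvIm, dest, PIX_FMT_YUV420P, w, h);

    // Account for the row padding of the IplImage source.
    rgbIm->linesize[0] = srcWidthStep;

    struct SwsContext *imgCtx = sws_getCachedContext(NULL,
            w, h, PIX_FMT_BGR24,
            w, h, PIX_FMT_YUV420P,
            SWS_BICUBIC, NULL, NULL, NULL);
    sws_scale(imgCtx, rgbIm->data, rgbIm->linesize, 0, h,
              yuvIm->data, yuvIm->linesize);

    sws_freeContext(imgCtx);
    av_free(yuvIm);
    av_free(rgbIm);
}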
So I'm trying to use the WebP API to encode images. Right now I'm going to be using OpenCV to open and manipulate the images, then I want to save them off as WebP. Here's the source I'm using:
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include <cv.h>
#include <highgui.h>
#include <webp/encode.h>
int main(int argc, char *argv[])
{
IplImage* img = 0;
int height,width,step,channels;
uchar *data;
int i,j,k;
if (argc<2) {
printf("Usage:main <image-file-name>\n\7");
exit(0);
}
// load an image
img=cvLoadImage(argv[1]);
if(!img){
printf("could not load image file: %s\n",argv[1]);
exit(0);
}
// get the image data
height = img->height;
width = img->width;
step = img->widthStep;
channels = img->nChannels;
data = (uchar *)img->imageData;
printf("processing a %dx%d image with %d channels \n", width, height, channels);
// create a window
cvNamedWindow("mainWin", CV_WINDOW_AUTOSIZE);
cvMoveWindow("mainWin",100,100);
// invert the image
for (i=0;i<height;i++) {
for (j=0;j<width;j++) {
for (k=0;k<channels;k++) {
data[i*step+j*channels+k] = 255-data[i*step+j*channels+k];
}
}
}
// show the image
cvShowImage("mainWin", img);
// wait for a key
cvWaitKey(0);
// release the image
cvReleaseImage(&img);
float qualityFactor = .9;
uint8_t** output;
FILE *opFile;
size_t datasize;
printf("encoding image\n");
datasize = WebPEncodeRGB((uint8_t*)data,width,height,step,qualityFactor,output);
printf("writing file out\n");
opFile=fopen("output.webp","w");
fwrite(output,1,(int)datasize,opFile);
}
When I execute this, I get this:
nato#ubuntu:~/webp/webp_test$ ./helloWorld ~/Pictures/mars_sunrise.jpg
processing a 2486x1914 image with 3 channels
encoding image
Segmentation fault
It displays the image just fine, but segfaults on the encoding. My initial guess was that it's because I'm releasing the img before I try to write out the data, but it doesn't seem to matter whether I release it before or after I try the encoding. Is there something else I'm missing that might cause this problem? Do I have to make a copy of the image data or something?
The WebP API docs are... sparse. Here's what the README says about WebPEncodeRGB:
The main encoding functions are available in the header src/webp/encode.h
The ready-to-use ones are:
size_t WebPEncodeRGB(const uint8_t* rgb, int width, int height,
int stride, float quality_factor, uint8_t** output);
The docs specifically do not say what the 'stride' is, but I'm assuming that it's the same as the 'step' from OpenCV. Is that reasonable?
Thanks in advance!
First, don't release the image if you use it later. Second, your output argument points to an uninitialized address. This is how to pass a properly initialized output address:
uint8_t* output;
datasize = WebPEncodeRGB((uint8_t*)data, width, height, step, qualityFactor, &output);
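For context, a minimal sketch of the corrected tail of main() under those two fixes might look like this; the 0..100 quality value, the binary file mode, and freeing the encoder's buffer are my additions, not part of the original code.

// Encode before releasing img, and pass &output so WebPEncodeRGB can
// store the address of the buffer it allocates.
float qualityFactor = 90.0f;                     // WebP quality is on a 0..100 scale
uint8_t* output = NULL;
size_t datasize = WebPEncodeRGB((uint8_t*) data, width, height, step,
                                qualityFactor, &output);

FILE* opFile = fopen("output.webp", "wb");       // binary mode
if (opFile != NULL && datasize > 0) {
    fwrite(output, 1, datasize, opFile);
    fclose(opFile);
}

free(output);                                    // or WebPFree(output) in newer libwebp
cvReleaseImage(&img);                            // release only after encoding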
You release the image with cvReleaseImage before you try to use the pointer to the image data for the encoding. That release function probably frees the image buffer, so your data pointer no longer points to valid memory.
This might be the reason for your segfault.
So it looks like the problem was here:
// load an image
img=cvLoadImage(argv[1]);
The function cvLoadImage takes an extra parameter:
cvLoadImage(const char* filename, int iscolor=CV_LOAD_IMAGE_COLOR)
and when I changed it to
img=cvLoadImage(argv[1],1);
the segfault went away.
I am trying to acquire single images from a camera, do some processing to them and release the used memory. I've been doing it for quite some time with a code similar to the one that follows:
char* img_data = new char[ len ]; // I retrieve len from the camera.
// Grab the actual image from the camera:
// It fills the previous buffer with the image data.
// It gives width and height.
CvSize size;
size.width = width;
size.height = height;
IplImage* img = cvCreateImageHeader( size, 8, 1 );
img->imageData = img_data;
// Do the processing
cvReleaseImage( &img );
This code runs fine. I have recently read here (in the imageData description) that I shouldn't be assigning data directly to img->imageData but should use cvSetData() instead, like so:
cvSetData( img, img_data, width );
However, when I do it like this, I get a Segmentation fault at the cvReleaseImage() call.
What am I doing wrong?
Thank you.
EDIT: I have tried to compile and run the program that @karlphillip suggested, and I DO get a segmentation fault using cvSetData, but it runs fine when assigning the data directly.
I'm using Debian 6 and OpenCV 2.3.1.
I had this problem as well, but I believe it was fixed based on what I gathered in
this comment; namely, use cvReleaseImageHeader() and not cvReleaseImage().
For example:
unsigned int width = 100;
unsigned int height = 100;
unsigned int channels = 1;
unsigned char* imageData = (unsigned char*)malloc(width*height*channels);
// set up the image data
IplImage *img = cvCreateImageHeader(cvSize(width, height), IPL_DEPTH_8U, channels);
cvSetData(img, imageData, width*channels);
// use img
cvReleaseImageHeader(&img);
// free(imageData) when finished
The problem is that you are allocating memory the C++ way (using new), while using the C interface of OpenCV, which tries to free that memory block with free() inside cvReleaseImage(). Memory allocations of C and C++ can't be mixed together.
Solution: use malloc() to allocate the memory:
char* img_data = (char*) malloc(len * sizeof(char));
// casting the return of malloc might not be necessary
// if you are using a C++ compiler
EDIT: (due to OP's comment that is still crashing)
Something else that you are not showing us is crashing your application! I seriously recommend that you write a complete/minimal application that reproduces the problem you are having.
The following application works fine in my Mac OS X with OpenCV 2.3.1.
#include <cv.h>
#include <highgui.h>
int main()
{
char* img_data = (char*) malloc(625);
CvSize size;
size.width = 25;
size.height = 25;
IplImage* img = cvCreateImageHeader( size, 8, 1 );
//img->imageData = img_data;
cvSetData( img, img_data, size.width );
cvReleaseImage( &img );
return 0;
}
Generally, if you allocate a header and set data manually, you should deallocate only the header and free the data yourself:
// allocate img_data
IplImage* img = cvCreateImageHeader( size, 8, 1 );
img->imageData = img_data;
cvReleaseImageHeader( &img ); // frees only the header
// free img_data
If you call cvReleaseImage, it will also try to free the data, so you are relying on the OpenCV implementation to do it, which failed in your case because it uses free() while you allocated with new, and the two are incompatible.
The other option is to allocate with malloc and call cvReleaseImage.
I want to add a detail to this discussion. There seems to be some sort of bug in the
cvReleaseImageHeader() function.
You have to release the header first and then free the image data.
If you do it the other way around, i.e., free the image data and then call cvReleaseImageHeader(), it deceptively appears to work, but internally it leaks memory and after some time your application will crash.