I have a curl callback that contains some header info followed by a JPEG image.
I want to copy the JPEG image out of this data and save it to a file.
I have never used malloc or memcpy before, but I have done the following:
//data = the data that curl has returned
//datalength = the length of the data that curl has returned
//startpos = the starting position of the jpeg image in data
//imageLength = the length of the jpeg image
//example data:
//datalength = 13209
//startpos = 62
//imageLength = 13127
bool SaveImage( void* data, size_t datalength, int startpos, int imageLength)
{
    //1. Allocate a buffer to store the jpeg image
    BYTE* image = (BYTE*)malloc(sizeof(BYTE)*imageLength);
    if( image != nullptr)
    {
        //2. Copy out the image info to the buffer
        BYTE* imageStartPos = (BYTE*)data + startpos;
        memcpy( image, imageStartPos, imageLength);

        //3. Save the image to file
        FILE* pFile;
        fopen_s(&pFile, "image.jpeg", "w");
        if(pFile != NULL)
        {
            fwrite(image, sizeof(BYTE), imageLength, pFile);
            fclose(pFile);
        }
    }
}
The result is that I get a JPEG file of about 13 KB, but I cannot open it in MS Paint as it says it's corrupt. I assume I have made a mistake in my pointer calculations above.
Any ideas anyone as to what I'm doing wrong?
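One likely culprit is the file mode rather than the pointer arithmetic: "w" opens the file in text mode, and on Windows that translates line-ending bytes and corrupts binary data, so a JPEG must be written with "wb". A minimal sketch of the save step under that assumption (the intermediate malloc/memcpy is optional, since fwrite can read straight from the offset into the curl buffer):
bool SaveImage(void* data, size_t datalength, int startpos, int imageLength)
{
    if (data == nullptr || startpos + imageLength > (int)datalength)
        return false;

    FILE* pFile = nullptr;
    // "wb": binary mode, so no newline translation mangles the JPEG bytes
    if (fopen_s(&pFile, "image.jpeg", "wb") != 0 || pFile == NULL)
        return false;

    size_t written = fwrite((BYTE*)data + startpos, sizeof(BYTE), imageLength, pFile);
    fclose(pFile);
    return written == (size_t)imageLength;
}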
Related
I want to read an image from a database. The image column is of type MYSQL_TYPE_BLOB, and I read the column using the code below. Currently, the BLOB image is converted to a char* array.
//Get the total number of fields
int fieldCount = mysql_num_fields(result);
//Get field information of a row of data
MYSQL_FIELD *fields = mysql_fetch_fields(result);
while (m_row = mysql_fetch_row(result))
{
    for (int i = 0; i < fieldCount; ++i)
    {
        if (fields[i].type == MYSQL_TYPE_BLOB)
        {
            unsigned long length = mysql_fetch_lengths(result)[i];
            char* buffer = new char[length + 1];
            //note: sizeof(buffer) would only be the size of the pointer, so zero the whole allocation
            memset(buffer, 0x00, length + 1);
            memcpy(buffer, m_row[i], length);
        }
    }
}
In order to do some tests on the image, I need to know the image dimensions without writing the image to disk and reading it back again. How can I do that?
Another way to read data from the MySQL database is:
res = stmt->executeQuery("MY QUERY TO DATABASE");
while (res->next())
{
    std::istream *blobData = res->getBlob("image");
    std::istreambuf_iterator<char> isb = std::istreambuf_iterator<char>(*blobData);
    std::string blobString = std::string(isb, std::istreambuf_iterator<char>());
    tempFR.image = blobString;

    blobData->seekg(0, std::ios::end);
    tempFR.imageSize = blobData->tellg();

    //fetch the blob a second time and read it into a raw buffer
    std::istream *blobIn;
    char buffer[tempFR.imageSize];   //note: variable-length arrays are a compiler extension
    memset(buffer, '\0', tempFR.imageSize);
    blobIn = res->getBlob("image");
    blobIn->read((char*)buffer, tempFR.imageSize);
}
Notice:
imageSize and length are the overall image size in bytes, for example 1000.
Update #1: How is the image reconstructed when writing it to disk?
In the first case it's possible to write the BLOB image to disk with this code:
stringstream pic_name;
pic_name << "car.jpeg";
ofstream outfile(pic_name.str(), ios::binary);
outfile.write(buffer, length);
and in the second case:
std::ofstream outfile ("car.jpeg",std::ofstream::binary);
outfile.write (buffer, tempFR.imageSize);
outfile.close();
In both cases the image is written to disk correctly. But how can I find the image dimensions without writing it to disk and reading it again?
By decoding the buffered image:
length = mysql_fetch_lengths(result)[i];
buffer = new char[length + 1];
memset(buffer, 0x00, length + 1);   //zero the whole allocation, not sizeof(buffer)
memcpy(buffer, m_row[i], length);
matImg = cv::imdecode(cv::Mat(1, length, CV_8UC1, buffer), cv::IMREAD_UNCHANGED);
First copy the array into the buffer, then wrap it in a cv::Mat and finally decode it. The result is a cv::Mat image.
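Since the decode produces a cv::Mat, the dimensions the question asks for are then available directly in memory, with no disk round trip; a minimal sketch continuing from the snippet above:
// Sketch only: the decoded cv::Mat already carries the image dimensions.
if (!matImg.empty())
{
    int width    = matImg.cols;       // image width in pixels
    int height   = matImg.rows;       // image height in pixels
    int channels = matImg.channels(); // e.g. 3 for a colour JPEG
}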
I am working on a project, where I want to process my images using C++ OpenCV.
For simplicity's sake, I just want to convert Uint8List to cv::Mat and back.
Following this tutorial, I managed to make a pipeline that doesn't crash the app. Specifically:
I created a function in a .cpp that takes the pointer to my Uint8List, rawBytes, and encodes it as a .jpg:
int encodeIm(int h, int w, uchar *rawBytes, uchar **encodedOutput) {
    cv::Mat img = cv::Mat(h, w, CV_8UC3, rawBytes); //CV_8UC3
    vector<uchar> buf;
    cv::imencode(".jpg", img, buf); // save output into buf. Note that Dart Image.memory can process either .png or .jpg, which is why we're doing this encoding
    *encodedOutput = (unsigned char *) malloc(buf.size());
    for (int i = 0; i < buf.size(); i++)
        (*encodedOutput)[i] = buf[i];
    return (int) buf.size();
}
Then I wrote a function in a .dart that calls my c++ encodeIm(int h, int w, uchar *rawBytes, uchar **encodedOutput):
//allocate memory heap for the image
Pointer<Uint8> imgPtr = malloc.allocate(imgBytes.lengthInBytes);
//allocate just 8 bytes to store a pointer that will be malloced in C++ that points to our variably sized encoded image
Pointer<Pointer<Uint8>> encodedImgPtr = malloc.allocate(8);
//copy the image data into the memory heap we just allocated
imgPtr.asTypedList(imgBytes.length).setAll(0, imgBytes);
//c++ image processing
//image in memory heap -> processing... -> processed image in memory heap
int encodedImgLen = _encodeIm(height, width, imgPtr, encodedImgPtr);
//
//retrieve the image data from the memory heap
Pointer<Uint8> cppPointer = encodedImgPtr.elementAt(0).value;
Uint8List encodedImBytes = cppPointer.asTypedList(encodedImgLen);
//myImg = Image.memory(encodedImBytes);
return encodedImBytes;
//free memory heap
//malloc.free(imgPtr);
//malloc.free(cppPointer);
//malloc.free(encodedImgPtr); // always frees 8 bytes
}
Then I linked c++ with dart via:
final DynamicLibrary nativeLib = Platform.isAndroid
? DynamicLibrary.open("libnative_opencv.so")
: DynamicLibrary.process();
final int Function(int height, int width, Pointer<Uint8> bytes, Pointer<Pointer<Uint8>> encodedOutput)
_encodeIm = nativeLib
.lookup<NativeFunction<Int32 Function(Int32 height, Int32 width,
Pointer<Uint8> bytes, Pointer<Pointer<Uint8>> encodedOutput)>>('encodeIm').asFunction();
And finally I show the result in Flutter via:
Image.memory(...)
Now, the pipeline doesn't crash, which means I haven't goofed up memory handling completely, but it doesn't return the original image either, which means I did mess up somewhere.
Original image:
Pipeline output:
Thanks to Richard Heap's guidance in the comments, I managed to fix the pipeline by changing my matrix definition from
cv::Mat img = cv::Mat(h, w, CV_8UC3, rawBytes);
to
vector<uint8_t> buffer(rawBytes, rawBytes + inBytesCount);
Mat img = imdecode(buffer, IMREAD_COLOR);
where inBytesCount is the length of imgBytes.
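Putting the fix together, a sketch (under my own assumptions, not the author's exact code) of how the revised encodeIm could look if inBytesCount is passed in from the Dart side and imdecode is left to work out the dimensions:
#include <opencv2/opencv.hpp>
#include <cstdint>
#include <cstdlib>
#include <cstring>
#include <vector>

extern "C" int encodeIm(int inBytesCount, uchar *rawBytes, uchar **encodedOutput) {
    // Treat the incoming bytes as an already-encoded image and decode it,
    // instead of reinterpreting them as a raw h x w x 3 pixel buffer.
    std::vector<uint8_t> buffer(rawBytes, rawBytes + inBytesCount);
    cv::Mat img = cv::imdecode(buffer, cv::IMREAD_COLOR);
    if (img.empty())
        return 0; // decoding failed, nothing to hand back to Dart

    std::vector<uchar> buf;
    cv::imencode(".jpg", img, buf); // re-encode as JPEG for Dart's Image.memory

    *encodedOutput = (uchar *) std::malloc(buf.size());
    std::memcpy(*encodedOutput, buf.data(), buf.size());
    return (int) buf.size();
}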
I am using Video4Linux2 to open a connection to the camera connected to my machine. I can request either YUV or MJPEG data from the camera device. Increasing the requested resolution while requesting YUV slows the program past the camera's refresh rate (presumably because there is too much data to transfer in that time), so I need to use the MJPEG data from the camera. I have been stuck for a while and have found very few resources online on how to decode an MJPEG.
By the way, I have all of the following data:
unsigned char *data; // pointing to the data for the most current mjpeg frame from v4l2
size_t data_size; // the size (in bytes) of the mjpeg frame received from v4l2
unsigned char *r, *g, *b; // three heap allocated arrays in which to store the resulting data
// Can easily be altered to represent an array of structs holding all 3 components,
// as well as using yuv at different rates.
All I need is the ability to convert my MJPEG frame into raw data on the fly, either RGB or YUV.
I have heard of libraries like libjpeg, mjpegtools, nvjpeg, etc., but I have not been able to find much on how to use them to decode an MJPEG from where I am. Any help whatsoever would be greatly appreciated!
I figured it out via the sources linked in the comments. My working example is as follows:
// needs: #include <jpeglib.h> (and linking against libjpeg or libjpeg-turbo)

// variables:
struct jpeg_decompress_struct cinfo;
struct jpeg_error_mgr jerr;
// frame dimensions, assumed to be known from the V4L2 format negotiation
unsigned int width, height;
// data points to the mjpeg frame received from v4l2.
unsigned char *data;
size_t data_size;
// a *to be allocated* heap array to put the data for
// all the pixels after conversion to RGB.
unsigned char *pixels;

// ... In the initialization of the program:
cinfo.err = jpeg_std_error(&jerr);
jpeg_create_decompress(&cinfo);
// Pixel is assumed to be a 3-byte RGB struct, so this allocates width * height * 3 bytes
pixels = new unsigned char[width * height * sizeof(Pixel)];

// ... Every frame:
if (data != nullptr && data_size > 0) {
    jpeg_mem_src(&cinfo, data, data_size);
    int rc = jpeg_read_header(&cinfo, TRUE);
    jpeg_start_decompress(&cinfo);
    while (cinfo.output_scanline < cinfo.output_height) {
        // read one RGB scanline directly into the matching row of 'pixels'
        unsigned char *temp_array[] = {pixels + (cinfo.output_scanline) * width * 3};
        jpeg_read_scanlines(&cinfo, temp_array, 1);
    }
    jpeg_finish_decompress(&cinfo);
}
If this still does not work for anyone trying to figure out the same thing, try incorporating the "Huffman tables", which some cameras require, as mentioned in the second comment:
https://github.com/jamieguinan/cti/blob/master/jpeg_misc.c#L234
https://github.com/jamieguinan/cti/blob/master/jpeghufftables.c
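For completeness, a sketch (not part of the original answer) of how the interleaved RGB bytes in pixels could be split into the separate r, g and b arrays from the question; it assumes the decompressor produced 3-component RGB output (cinfo.output_components == 3) and that each plane was allocated with width * height bytes. jpeg_destroy_decompress should also be called once at shutdown:
// De-interleave the decoded RGBRGB... bytes into separate colour planes.
for (unsigned int i = 0; i < width * height; ++i) {
    r[i] = pixels[3 * i + 0];
    g[i] = pixels[3 * i + 1];
    b[i] = pixels[3 * i + 2];
}

// ... At program shutdown:
jpeg_destroy_decompress(&cinfo);
delete[] pixels;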
Recently I have been having trouble converting a Mat frame captured from my webcam by OpenCV into a normal JPEG unsigned char array. I've tried one or two approaches I found on Google, but the result does not seem to be a correct JPEG uchar array. Here is a piece of my code:
VideoCapture cap(0);
if(!cap.isOpened())
return -1;
Mat frame;
cap >> frame;
if( frame.empty())
return -1;
int size = frame.total() * frame.elemSize();
unsigned char* buffer = new unsigned char[size];
memcpy(buffer, frame.data, size * sizeof(unsigned char));
Then I used fwrite to write that buffer into a file.jpg (it looks silly, but it does work if the buffer is correct), but the file cannot be opened or recognized as a JPEG image.
Can anyone help me figure this out?
Check out the OpenCV function imencode(). It will fill a buffer with data encoded as the correct image type (based on the file type argument) so that it can be written to a file and other programs will know what to do with it.
The problem with your current approach is that you are attempting to write raw image data as a JPEG, but JPEG is a compressed data format, so programs won't know what to do with the data you've written. It would be the equivalent of taking an arbitrary binary file and just saving it as a JPEG: the file won't have the right headers to be decoded as an image, and the data otherwise likely won't match the JPEG format anyway.
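A minimal sketch of that encoding step, reusing the frame from the question (error handling kept to a minimum):
std::vector<uchar> jpegBuffer;
if (cv::imencode(".jpg", frame, jpegBuffer))
{
    // jpegBuffer now holds a complete JPEG byte stream, headers included,
    // so writing it out produces a file any image viewer can open.
    FILE* f = fopen("file.jpg", "wb");
    if (f != nullptr)
    {
        fwrite(jpegBuffer.data(), 1, jpegBuffer.size(), f);
        fclose(f);
    }
}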
A program I am using is reading some bitmaps, and expects 32FC1 images.
I am trying to create these images
cv::Mat M1(255, 255, CV_32FC1, cv::Scalar(0,0,0));
cv::imwrite( "my_bitmap.bmp", M1 );
but when I check the depth - it is always CV_8U
How can I create the files so that they contain the correct info?
Update: It makes no difference if I use a different file extension - e.g. tif or png
I am reading it - using code that is already implemented - with cvLoadImage.
I am trying to CREATE the files that the existing code - that checks for the image type - can use.
I cannot convert files in the existing code. The existing code does not try to read an arbitrary image type and convert it to the desired type; it checks that the files are of the type it needs.
I found out - thank you for the answers - that cv::imwrite only writes integer type images.
Is there another way - either using OpenCV or something else - to write the images so that I end up with CV_32F type?
Update again:
The code to read image... if into a cv::Mat:
cv::Mat x = cv::imread(x_files, CV_LOAD_IMAGE_ANYDEPTH|CV_LOAD_IMAGE_ANYCOLOR);
The existing code:
IplImage *I = cvLoadImage(x_files.c_str(), CV_LOAD_IMAGE_ANYDEPTH|CV_LOAD_IMAGE_ANYCOLOR);
The cv::imwrite() .bmp encoder assumes 8-bit channels.
If you only need to write .bmp files with OpenCV, you can convert your 32FC1 image to 8UC4, then use cv::imwrite() to write it, and you will get a 32-bits-per-pixel .bmp file.
I am guessing that your program that reads the file will interpret the 32-bit pixels as 32FC1.
The .bmp format doesn't have an explicit channel structure, just a number of bits per pixel. Therefore you should be able to write 32-bit pixels as 4 channels of 8 bits in OpenCV and read them as single-channel 32-bit pixels in another program; if you do this, you need to be aware of endianness assumptions made by the reader. Something like the following should work:
cv::Mat m1(rows, cols, CV_32FC1);
... // fill m1
cv::Mat m2(rows, cols, CV_8UC4, m1.data); // provide different view of m1 data
// depending on endianess of reader, you may need to swap byte order of m2 pixels
cv::imwrite("my_bitmap.bmp", m2);
You will not be able to read the files you created back properly in OpenCV, because the .bmp decoder in OpenCV assumes the file has 1 or 3 channels of 8-bit data (i.e. it can't read 32-bit pixels).
EDIT
Probably a much better option would be to use the OpenEXR format, for which OpenCV has a codec. I assume you just need to save your files with a .exr extension.
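For example, assuming your OpenCV build includes the OpenEXR codec, writing the float matrix with an .exr extension should preserve the 32F depth; a minimal sketch:
cv::Mat M1(255, 255, CV_32FC1, cv::Scalar(0));
cv::imwrite("my_bitmap.exr", M1);                 // stored as floating point
cv::Mat back = cv::imread("my_bitmap.exr", cv::IMREAD_ANYDEPTH | cv::IMREAD_ANYCOLOR);
// back.depth() should now be CV_32F rather than CV_8U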
Your problem is that bitmaps store data internally as integers, not floats. If your problem is rounding error when saving, you will need to either use a different file format or scale your data up before saving and back down after loading. If you just want to convert the matrix you get after reading the file to float, you can use cv::Mat::convertTo.
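For reference, a small sketch of that conversion (the method is cv::Mat::convertTo); the scale factor is optional and only needed if you want values in [0,1]:
cv::Mat loaded = cv::imread("my_bitmap.bmp", cv::IMREAD_GRAYSCALE);
cv::Mat asFloat;
loaded.convertTo(asFloat, CV_32FC1, 1.0 / 255.0);  // 8-bit [0,255] -> float [0,1]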
I was struggling with the same problem. In the end I decided it would just be easier to write a custom function that can write and load an arbitrary cv::Mat.
bool writeRawImage(const cv::Mat& image, const std::string& filename)
{
std::ofstream file;
file.open(filename, std::ios::out | std::ios::binary);
if (!file.is_open())
return false;
file.write(reinterpret_cast<const char *>(&image.rows), sizeof(int));
file.write(reinterpret_cast<const char *>(&image.cols), sizeof(int));
const int depth = image.depth();
const int type = image.type();
const int channels = image.channels();
file.write(reinterpret_cast<const char *>(&depth), sizeof(depth));
file.write(reinterpret_cast<const char *>(&type), sizeof(type));
file.write(reinterpret_cast<const char *>(&channels), sizeof(channels));
int sizeInBytes = image.step[0] * image.rows;
file.write(reinterpret_cast<const char *>(&sizeInBytes), sizeof(int));
file.write(reinterpret_cast<const char *>(image.data), sizeInBytes);
file.close();
return true;
}
bool readRawImage(cv::Mat& image, const std::string& filename)
{
// 'data' receives the pixel payload size in bytes that writeRawImage stored
int rows, cols, data, depth, type, channels;
std::ifstream file(filename, std::ios::in | std::ios::binary);
if (!file.is_open())
return false;
try {
file.read(reinterpret_cast<char *>(&rows), sizeof(rows));
file.read(reinterpret_cast<char *>(&cols), sizeof(cols));
file.read(reinterpret_cast<char *>(&depth), sizeof(depth));
file.read(reinterpret_cast<char *>(&type), sizeof(type));
file.read(reinterpret_cast<char *>(&channels), sizeof(channels));
file.read(reinterpret_cast<char *>(&data), sizeof(data));
image = cv::Mat(rows, cols, type);
file.read(reinterpret_cast<char *>(image.data), data);
} catch (...) {
file.close();
return false;
}
file.close();
return true;
}
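A short usage sketch of the pair above, assuming a 32FC1 matrix and an arbitrary file name:
cv::Mat original(255, 255, CV_32FC1, cv::Scalar(0.5f));
if (writeRawImage(original, "my_bitmap.raw"))
{
    cv::Mat restored;
    if (readRawImage(restored, "my_bitmap.raw"))
    {
        // restored.type() == CV_32FC1 and its pixel data matches 'original'
    }
}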