I can use imwrite() to write the image (like "face.jpg") to disk and then use fstream to read that JPEG back into an array; that array is what I want.
But how can I get it quickly, from memory instead of going through the disk?
I thought the image data lived in Mat.data with length Mat.cols*Mat.rows, but I wasn't sure, so I wrote that buffer to disk with fstream and opened the file with an image viewer: nothing. Something must be wrong.
Mat frame;
VideoCapture cap(0);
if (!cap.isOpened())
{
return -1;
}
cap.set(CV_CAP_PROP_FRAME_WIDTH, 160);
cap.set(CV_CAP_PROP_FRAME_HEIGHT, 120);
cap >> frame;
if(frame.empty()){
return -2;
}
//I just want the pointer and length of the image data; the code below only tests
//whether frame.data and len are what I think they are, but it does not work.
FILE *fp = fopen("face.jpg", "wb");
if (NULL==fp)
{
return -1;
}
int len = frame.cols*frame.rows; //or 3*frame.cols*frame.rows
fwrite(frame.data, len, sizeof(char), fp);
fclose(fp);
namedWindow("face", 1);
imshow("face", frame);
waitKey(1000);
I'm new to OpenCV and I just want to get the image data. Thanks for any help!
Have you checked the dimensions before you write it to disk? It'll be helpful for the others to see your code here. In the case of Mat, unless your data is grayscale, the size will be more than cols * rows. You should verify whether the format is RGB, RGBA, YUV, etc. In the case of JPEG it will most likely be RGBX, so you should really check that your stream size is either 3 * cols * rows or 4 * cols * rows.
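For reference, OpenCV can report the raw buffer size directly. A minimal sketch, assuming the frame Mat from the question:
size_t rawLen = frame.total() * frame.elemSize(); //e.g. 3 * cols * rows for a CV_8UC3 frame
//note: writing these raw bytes to "face.jpg" still won't produce a valid JPEG, because the
//buffer has no JPEG header; that is why the image viewer shows nothing.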
I solved this with imencode(), thanks to @ZdaR.
vector<uchar> buff;
vector<int> param = vector<int>(2);
param[0] = CV_IMWRITE_JPEG_QUALITY;
param[1] = 95;
imencode(".jpg", frame, buff, param);
int len = buff.size();
FILE *fout;
fout = fopen("555.jpg", "wb");
if(NULL==fout){
return -3;
}
fwrite(&buff[0], 1, len*sizeof(uchar), fout);
fclose(fout);
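For completeness, here is a hedged sketch of going the other way (decoding the in-memory JPEG back into a Mat), which is also a quick way to verify the buffer without touching the disk:
//decode the in-memory JPEG back into a Mat to verify the buffer
Mat decoded = imdecode(buff, IMREAD_COLOR);
if (decoded.empty()) {
return -4; //the buffer does not hold a valid encoded image
}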
Related
I want to read an image from a database. The image column is of MYSQL_TYPE_BLOB type, and I read the column using this code. Currently the BLOB image is converted to a char* array:
//Get the total number of fields
int fieldCount = mysql_num_fields(result);
//Get field information of a row of data
MYSQL_FIELD *fields = mysql_fetch_fields(result);
while (m_row = mysql_fetch_row(result))
{
for (int i = 0;i < fieldCount; ++i)
{
if (fields[i].type == MYSQL_TYPE_BLOB)
{
unsigned long length = mysql_fetch_lengths(result)[i];
char* buffer = new char[length + 1];
memset(buffer, 0x00, length + 1); //sizeof(buffer) would only clear pointer-size bytes
memcpy(buffer, m_row[i], length);
}
}
}
In order to run some tests on the image, how can I find the image dimensions without writing the image to disk and reading it back again?
Another way to read the data from the MySQL database is:
res = stmt->executeQuery("MY QUERY TO DATABASE");
while (res->next())
{
std::istream *blobData = res->getBlob("image");
std::istreambuf_iterator<char> isb = std::istreambuf_iterator<char>(*blobData);
std::string blobString = std::string(isb, std::istreambuf_iterator<char>());
tempFR.image = blobString;
blobData->seekg(0, ios::end);
tempFR.imageSize = blobData->tellg();
std::istream *blobIn;
char buffer[tempFR.imageSize];
memset(buffer, '\0', tempFR.imageSize);
blobIn = res->getBlob("image");
blobIn->read((char*)buffer, tempFR.imageSize);
}
Notice:
imageSize and length hold the overall image size in bytes, for example 1000.
Update #1: how the image is reconstructed when writing it to disk.
In the first case it's possible to write the BLOB image to disk with this code:
stringstream pic_name;
pic_name << "car.jpeg";
ofstream outfile(pic_name.str(), ios::binary);
outfile.write(buffer, length);
and in the second one:
std::ofstream outfile ("car.jpeg",std::ofstream::binary);
outfile.write (buffer, tempFR.imageSize);
outfile.close();
In both cases the image is written to disk correctly. But how can I find the image dimensions without writing it to disk and reading it again?
By decoding the buffered image:
length = mysql_fetch_lengths(result)[i];
buffer = new char[length + 1];
memset(buffer, 0x00, length + 1); //clear the whole allocation, not just sizeof(pointer) bytes
memcpy(buffer, m_row[i], length);
matImg = cv::imdecode(cv::Mat(1, length, CV_8UC1, buffer), cv::IMREAD_UNCHANGED);
First copy the array into a buffer, then wrap it in a cv::Mat and finally decode it. The result is a cv::Mat image.
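As a usage note, once imdecode succeeds the dimensions can be read straight from the decoded Mat. A small sketch, assuming matImg from the snippet above:
if (!matImg.empty())
{
int width = matImg.cols; //image width in pixels
int height = matImg.rows; //image height in pixels
int channels = matImg.channels(); //e.g. 3 for a BGR JPEG
}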
I am using cvtColor to convert an image from YUYV format to RGB24. The output is fine as far as color is concerned, but half of the image is cut off. The image is a 640x480 YUYV buffer without any headers. I am using the following code:
FILE* fd = fopen("imgdump", "r+b");
char buffer[640*480*2]; // Each pixel takes two bytes in YUYV
if (fd != NULL)
{
fread(buffer, sizeof(char), 640*480*2, fd);
fclose(fd);
}
Mat s_sImageMat = Mat(640, 480, CV_8UC2);
Mat s_sConvertedImageMat;
cout << "before conversion\n";
s_sImageMat.data = (uchar*) buffer;
cvtColor(s_sImageMat, s_sConvertedImageMat, CV_YUV2RGB_YUYV);
cout << "after conversion\n";
FILE* fw = fopen("converted", "w+b");
if (fw != NULL)
{
fwrite((char*)s_sConvertedImageMat.data, sizeof(char), 640*480*2, fw);
fclose(fw);
}
Original file: https://drive.google.com/file/d/0B0YG1rjiNkBUQ0ZuaWN6Y1E2LUU/view?usp=sharing
Additional info: I am using OpenCV 3.2.
The issue seems to be in the following line:
fwrite((char*)s_sConvertedImageMat.data, sizeof(char), 640*480*2, fw);
For RGB24, it should be:
fwrite((char*)s_sConvertedImageMat.data, sizeof(char), 640*480*3, fw);
Each pixel takes 3 bytes in RGB24.
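A hedged alternative is to avoid hard-coding the size at all and ask the Mat itself; a sketch assuming the s_sConvertedImageMat from the question (which is continuous here, since cvtColor allocated it):
//write exactly as many bytes as the converted Mat holds (640*480*3 for RGB24)
size_t outLen = s_sConvertedImageMat.total() * s_sConvertedImageMat.elemSize();
fwrite(s_sConvertedImageMat.data, sizeof(char), outLen, fw);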
Is there any way to convert an OpenCV Mat object to base64?
I was using the URL below for base64 encoding and decoding:
http://www.adp-gmbh.ch/cpp/common/base64.html
Below is the code snippet:
const unsigned char* inBuffer = reinterpret_cast<const unsigned char*>(image.data);
There you go! (C++11)
Encode img -> jpg -> base64:
std::vector<uchar> buf;
cv::imencode(".jpg", img, buf);
auto *enc_msg = reinterpret_cast<unsigned char*>(buf.data());
std::string encoded = base64_encode(enc_msg, buf.size());
Decode base64 -> jpg -> img:
string dec_jpg = base64_decode(encoded);
std::vector<uchar> data(dec_jpg.begin(), dec_jpg.end());
cv::Mat img = cv::imdecode(cv::Mat(data), 1);
Note that you can change JPEG compression quality by setting the IMWRITE_JPEG_QUALITY flag.
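For example, a sketch of passing the quality flag to imencode (values range 0-100; 95 is the default):
//encode at JPEG quality 80 instead of the default 95
std::vector<int> params = {cv::IMWRITE_JPEG_QUALITY, 80};
cv::imencode(".jpg", img, buf, params);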
I'm encountering nearly the same problem, but I'm trying to encode a Mat into JPEG format and then convert it into base64.
The code on that page works fine!
So here is my code:
VideoCapture cam(0);
cam>>img;
vector<uchar> buf;
imencode(".jpg", img, buf);
uchar *enc_msg = new uchar[buf.size()];
for(int i=0; i < buf.size(); i++) enc_msg[i] = buf[i];
string encoded = base64_encode(enc_msg, buf.size());
If you just want to convert a Mat into base64, you need to take the Mat size and channels into account. For a single-channel CV_8UC1 Mat, this will work:
string encoded = base64_encode(img.data, img.rows * img.cols);
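A hedged generalization for multi-channel Mats, assuming the data should form one contiguous block in memory (clone the Mat first if it is not continuous):
cv::Mat m = img.isContinuous() ? img : img.clone(); //make sure the pixels are contiguous
std::string encoded = base64_encode(m.data, m.total() * m.elemSize());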
I have created an example for this using Qt5 and OpenCV:
cv::Mat1b image;
this->cameraList[i]->getImage(image);
std::vector<uint8_t> buffer;
cv::imencode(".png", image, buffer);
QByteArray byteArray = QByteArray::fromRawData((const char*)buffer.data(), buffer.size());
QString base64Image(byteArray.toBase64());
base64ImageList.append(base64Image);
I was looking for a solution to the same problem. Using Jean-Christophe's answer above, this worked for me:
cv::Mat image = cv::imread("path/to/file");
std::vector<uchar> buffer;
buffer.resize(static_cast<size_t>(image.rows) * static_cast<size_t>(image.cols)); //optional: imencode() resizes the buffer itself
cv::imencode(".jpg", image, buffer);
std::string encoding = base64_encode(buffer.data(), buffer.size());
Also, the C++ standard library does not have a base64_encode implementation, so you can look at this answer, which aggregates a number of implementations.
Without using OpenCV, we can convert the image file to base64: read the file byte by byte, store it in a buffer, and base64-encode it. Cheers!
FILE* f = fopen(imagePath, "rb");
fseek(f, 0, SEEK_END);
size_t length = ftell(f);
rewind(f);
unsigned char* buffer = (unsigned char*)malloc(length + 2);
size_t i = 0;
while (!feof(f)) {
unsigned char c;
if (fread(&c, 1, 1, f) == 0) break; //read the image file byte by byte
buffer[i++] = c;
}
fclose(f);
string base64String = base64_encode(buffer, i); //encode exactly the i bytes that were read
free(buffer);
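A shorter equivalent using standard C++ streams, as a sketch that assumes the same imagePath and base64_encode helper from the snippet above:
#include <fstream>
#include <iterator>
#include <vector>

std::ifstream file(imagePath, std::ios::binary);
std::vector<unsigned char> bytes((std::istreambuf_iterator<char>(file)),
                                 std::istreambuf_iterator<char>());
std::string base64String = base64_encode(bytes.data(), bytes.size());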
I have an array double dc[][] and want to convert it to an IplImage* image and further into a video frame.
What I had to do: I was given a video, I extracted some features from it, and now I need to make a new video of the extracted features.
My approach was to divide the video into frames, extract the features from each frame, and then do the update like this; in each frame iteration I get a new dc:
double dc[48][44];
for(int i=0;i<48;i++)
{
for(int j=0;j<44;j++)
{
dc[i][j]=max1[i][j]/(1+max2[i][j]);
}
}
Now I need to save this dc in such a way that I can reconstruct the video. Can anybody help me with this?
Thanks in advance.
If you're okay with using Mat, then you can make a Mat for existing user-allocated memory. One of the Mat constructors has the signature:
Mat::Mat(int rows, int cols, int type, void* data, size_t step=AUTO_STEP)
where the parameters are:
rows: the memory height,
cols: the width,
type: one of the OpenCV data types (e.g. CV_8UC3),
data: pointer to your data,
step: (optional) stride of your data
I'd encourage you to take a look at the documentation for Mat here
EDIT: Just to make things more concrete, here's an example of making a Mat from some user-allocated data
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
//allocate and initialize your user-allocated memory
const int nrows = 10;
const int ncols = 10;
double data[nrows][ncols];
int vals = 0;
for (int i = 0; i < nrows; i++)
{
for (int j = 0; j < ncols; j++)
{
data[i][j] = vals++;
}
}
//make the Mat from the data (with default stride)
cv::Mat cv_data(nrows, ncols, CV_64FC1, data);
//print the Mat to see for yourself
std::cout << cv_data << std::endl;
}
You can save a Mat to a video file via the OpenCV VideoWriter class. You just need to create a VideoWriter, open a video file, and write your frames (as Mat). You can see an example of using VideoWriter here
Here's a short example of using the VideoWriter class:
//fill-in a name for your video
const std::string filename = "...";
const double FPS = 30;
VideoWriter outputVideo;
//opens the output video file using an MPEG-1 codec, 30 frames per second, color frames of width x height
outputVideo.open(filename, CV_FOURCC('P','I','M','1'), FPS, Size(width, height));
Mat frame;
//do things with the frame
// ...
//writes the frame out to the video file
outputVideo.write(frame);
The tricky part of the VideoWriter is the opening of the file, as you have a lot of options. You can see the names for different codecs here
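Putting the two pieces together for the 48x44 dc array from the question, a minimal sketch might look like the following. It assumes the values in dc need rescaling to 8-bit and that an MJPG-capable backend is available; adjust the codec, FPS, and frame count to your setup:
#include <opencv2/opencv.hpp>

int main()
{
const int rows = 48, cols = 44;
double dc[rows][cols] = {}; //recomputed for every frame in the real code

cv::VideoWriter writer;
//grayscale output: the last argument false means single-channel frames
writer.open("features.avi", CV_FOURCC('M','J','P','G'), 30, cv::Size(cols, rows), false);
if (!writer.isOpened())
return -1;

for (int f = 0; f < 100; f++) //one iteration per video frame
{
//... recompute dc for this frame ...

//wrap the array without copying, then rescale to 8-bit for the writer
cv::Mat dcMat(rows, cols, CV_64FC1, dc);
cv::Mat frame8u;
cv::normalize(dcMat, frame8u, 0, 255, cv::NORM_MINMAX, CV_8U);

writer.write(frame8u);
}
return 0;
}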
I'm starting to write my first program in C++ with OpenCV, and I would like to represent a set of images (stored in my project and labelled "brain_mri_001.jpg" -> "brain_mri_015.jpg") as vectors of length LxL, where L is the number of pixels in the x (y) direction.
Here is my code:
#include "stdafx.h"
#include "cv.h"
#include "highgui.h"
using namespace std;
int main()
{
//load images
for(int i=1; i<=25; i++)
{
char filename[50];
sprintf( filename, "brain_mri_%d.jpg", i );
IplImage *img=cvLoadImage( filename, CV_LOAD_IMAGE_GRAYSCALE);
if (!img)
{
printf("Error: Image not found.\n");
return 2; //error : not found image
}
cvNamedWindow("Projet Image", CV_WINDOW_AUTOSIZE);// create a window
IplImage *img2=cvCloneImage(img); //clone img
cvShowImage("Projet Image", img2); // display the image in a window
cvWaitKey(0); //wait for a key press
cvDestroyWindow("Projet Image"); // destroy the window
cvReleaseImage(&img); // memory
return 0; //finish with success
//convert IplImage -> Matrix
int height = img->height;
int width = img->width;
CvMat *mat = cvCreateMat(height,width,CV_32FC3);
//convert Matrix -> Vector
//CvMat row_header, *row;
//row = cvReshape(mat, &row_header, 0, 1);
CvMat vector_header;
cvReshape(img, &vector_header, 0, 1);
//check the height and width of vector_header
if(vector_header.height != 1)
{
fprintf(stderr, "vector_header's height is %d\n", vector_header.height);
}
if(vector_header.width != width*height)
{
fprintf(stderr, "vector_header's width is %d\n", vector_header.width);
}
}
}
I must have made a mistake somewhere, but I don't know where :(
I would be grateful if anyone can answer me!
P.S. Excuse my bad English...
I think what you need to do is a) release your images with cvReleaseImage only when you're completely done with them, and b) your reshape code should be something like this:
CvMat vector_header;
cvReshape(img2, &vector_header, 0, 1); /* same # of channels, 1 row */
Note that cvReshape doesn't copy the data, so vector_header doesn't need to be allocated with cvCreateMat. I'm not sure how to test this, except maybe to try loading in a very small image and plotting its values to stdout. As a sanity check, you could check the height and width of vector_header, something like
if(vector_header.height != 1)
{
fprintf(stderr, "vector_header's height is %d\n", vector_header.height);
}
if(vector_header.width != width*height)
{
fprintf(stderr, "vector_header's width is %d\n", vector_header.width);
}
I might have that backwards in terms of height and width. I'm also not 100% sure how the reshape will go, i.e. if your image is [1 2; 3 4], will the resulting vector be [1 2 3 4] or [1 3 2 4]? If you're doing something like PCA (e.g., the eigenfaces algorithm), then it might not matter as long as it's consistent.
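To test the reshape order empirically, along the lines suggested above, here is a small sketch using the old C API with a hand-built 2x2 matrix standing in for a loaded image (an assumption for illustration):
#include "cv.h"
#include <stdio.h>

int main()
{
//build a tiny 2x2 single-channel matrix: [1 2; 3 4]
float vals[] = {1, 2, 3, 4};
CvMat tiny = cvMat(2, 2, CV_32FC1, vals);

//reshape to a single row without copying the data
CvMat row_header;
CvMat* row = cvReshape(&tiny, &row_header, 0, 1);

//prints 1 2 3 4, i.e. the reshape walks the matrix row by row
for (int j = 0; j < row->cols; j++)
printf("%g ", CV_MAT_ELEM(*row, float, 0, j));
printf("\n");
return 0;
}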