To be honest, I'm surprised nobody has run into this so far.
I'm loading a picture from OpenCV into cv::Mat, which I want to base64 encode before I send it over a socket.
For base64 I am using libb64, as it is native to Debian/Ubuntu and is easy to use and very fast. The encoding function reads from an std::ifstream and writes to an std::ofstream.
#include <opencv2/opencv.hpp>
#include <b64/encode.h>
#include <fstream>
using namespace cv;
Mat image;
image = imread( "picture.jpg", CV_LOAD_IMAGE_COLOR );
if ( image.data )
{
std::ifstream instream( ???, std::ios_base::in | std::ios_base::binary);
std::ofstream outstream;
// Convert Matrix to ifstream
// ...
base64::encoder E;
E.encode( instream, outstream );
// Now put it in a string, and send it over a socket...
}
I don't really know how to populate the instream from the cv::Mat.
Googling around, I found that I can iterate over a cv::Mat by rows and columns and get each pixel's (I am assuming) RGB values:
for ( int j = 0; j < image.rows; j++ )
{
for ( int i = 0; i < image.cols; i++ )
{
unsigned char b = image.data[ image.step * j + i * 3 ];     // blue (OpenCV stores BGR)
unsigned char g = image.data[ image.step * j + i * 3 + 1 ]; // green
unsigned char r = image.data[ image.step * j + i * 3 + 2 ]; // red
}
}
Is this the right way of going on about it? Is there some more elegant way?
In order to be able to send an image via HTTP, you also need to encode its width, height and type. You need to serialize the Mat into a stream and encode that stream with libb64. On the other side you need to decode that stream and deserialize the image to retrieve it.
I implemented a small test program that does this serialization and deserialization using std::stringstream as a buffer. I chose it because it extends both std::istream and std::ostream which libb64 uses.
The serialize function serializes a cv::Mat into a std::stringstream. In it, I write the image width, height, type, size of the buffer and the buffer itself.
The deserialize function does the reverse. It reads the width, height, type, size of the buffer and the buffer. It's not as efficient as it could be because it needs to allocate a temporary buffer to read the data from the stringstream. Also, it needs to clone the image so that it does not rely on the temporary buffer and it will handle its own memory allocation. I'm sure that with some tinkering it can be made more efficient.
The main function loads an image, serializes it, encodes it using libb64, then decodes it, deserializes it, and displays it in a window. This should simulate what you are trying to do.
// Serialize a cv::Mat to a stringstream
stringstream serialize(Mat input)
{
// We will need to also serialize the width, height, type and size of the matrix
int width = input.cols;
int height = input.rows;
int type = input.type();
size_t size = input.total() * input.elemSize();
// Initialize a stringstream and write the data
stringstream ss;
ss.write((char*)(&width), sizeof(int));
ss.write((char*)(&height), sizeof(int));
ss.write((char*)(&type), sizeof(int));
ss.write((char*)(&size), sizeof(size_t));
// Write the whole image data
ss.write((char*)input.data, size);
return ss;
}
// Deserialize a Mat from a stringstream
Mat deserialize(stringstream& input)
{
// The data we need to deserialize
int width = 0;
int height = 0;
int type = 0;
size_t size = 0;
// Read the width, height, type and size of the buffer
input.read((char*)(&width), sizeof(int));
input.read((char*)(&height), sizeof(int));
input.read((char*)(&type), sizeof(int));
input.read((char*)(&size), sizeof(size_t));
// Allocate a buffer for the pixels
char* data = new char[size];
// Read the pixels from the stringstream
input.read(data, size);
// Construct the image (clone it so that it won't need our buffer anymore)
Mat m = Mat(height, width, type, data).clone();
// Delete our buffer
delete[] data;
// Return the matrix
return m;
}
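// Side note: the temporary buffer and the clone() above can be avoided by
// allocating the Mat first and reading the pixel bytes straight into its own
// storage. A minimal sketch of that variant (same header layout as serialize()):
Mat deserializeDirect(stringstream& input)
{
// The same header fields written by serialize()
int width = 0;
int height = 0;
int type = 0;
size_t size = 0;
input.read((char*)(&width), sizeof(int));
input.read((char*)(&height), sizeof(int));
input.read((char*)(&type), sizeof(int));
input.read((char*)(&size), sizeof(size_t));
// Allocate the matrix, then read the pixel data directly into it
Mat m(height, width, type);
input.read((char*)m.data, size);
return m;
}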
int main()
{
// Read a test image
Mat input = imread("D:\\test\\test.jpg");
// Serialize the input image to a stringstream
stringstream serializedStream = serialize(input);
// Base64 encode the stringstream
base64::encoder E;
stringstream encoded;
E.encode(serializedStream, encoded);
// Base64 decode the stringstream
base64::decoder D;
stringstream decoded;
D.decode(encoded, decoded);
// Deserialize the image from the decoded stringstream
Mat deserialized = deserialize(decoded);
// Show the retrieved image
imshow("Retrieved image", deserialized);
waitKey(0);
}
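To get the result into a std::string for the socket send mentioned in the question, you can simply pull the text out of the encoded stringstream; a minimal sketch (the socket call itself is only a hypothetical placeholder):
// The base64 text now sits in 'encoded'; a plain string copy is what you
// would hand to your socket layer.
std::string payload = encoded.str();
// sendOverSocket(payload); // hypothetical send routine, not part of this example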
Related
I want to read an image from a database. The image column is of type MYSQL_TYPE_BLOB, and I read the column using the code below. Currently the blob image is converted to a char* array:
//Get the total number of fields
int fieldCount = mysql_num_fields(result);
//Get field information of a row of data
MYSQL_FIELD *fields = mysql_fetch_fields(result);
while (m_row = mysql_fetch_row(result))
{
for (int i = 0;i < fieldCount; ++i)
{
if (fields[i].type == MYSQL_TYPE_BLOB)
{
unsigned long length = mysql_fetch_lengths(result)[i];
char* buffer = new char[length + 1];
memset(buffer, 0x00, length + 1);
memcpy(buffer, m_row[i], length);
}
}
}
In order to run some tests on the image, I need to know the image dimensions without writing the image to disk and reading it back. How can I do that?
Another way to read data from the MySQL database is:
res = stmt->executeQuery("MY QUERY TO DATABASE");
while (res->next())
{
std::istream *blobData = res->getBlob("image");
std::istreambuf_iterator<char> isb = std::istreambuf_iterator<char>(*blobData);
std::string blobString = std::string(isb, std::istreambuf_iterator<char>());
tempFR.image = blobString;
blobData->seekg(0, ios::end);
tempFR.imageSize = blobData->tellg();
std::istream *blobIn;
char buffer[tempFR.imageSize];
memset(buffer, '\0', tempFR.imageSize);
blobIn = res->getBlob("image");
blobIn->read((char*)buffer, tempFR.imageSize);
}
Note:
imageSize and length are the overall image size in bytes, for example 1000.
Update #1: How can the image be reconstructed in memory, given that writing it to disk works?
In the first case it's possible to write the blob image to disk with this code:
stringstream pic_name;
pic_name << "car.jpeg";
ofstream outfile(pic_name.str(), ios::binary);
outfile.write(buffer, length);
and in the second case:
std::ofstream outfile ("car.jpeg",std::ofstream::binary);
outfile.write (buffer, tempFR.imageSize);
outfile.close();
In both cases the image is written to disk correctly, but I want to find the image dimensions without writing it to disk and reading it back.
You can do this by decoding the buffered image:
length = mysql_fetch_lengths(result)[i];
buffer = new char[length + 1];
memset(buffer, 0x00, length + 1);
memcpy(buffer, m_row[i], length);
matImg = cv::imdecode(cv::Mat(1, length, CV_8UC1, buffer), cv::IMREAD_UNCHANGED);
First copy the array into the buffer, then wrap it in a cv::Mat, and finally decode it. The result is a cv::Mat image.
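Once imdecode has produced matImg, the dimensions asked about above are available directly from the Mat, with no round trip through the disk; a minimal sketch using the variables from the snippet above:
// Width, height and channel count straight from the decoded matrix.
int width    = matImg.cols;
int height   = matImg.rows;
int channels = matImg.channels();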
I'm trying to make a real-time video streaming app.
Right now I'm trying to speed up my application, and I have a question:
How can I speed up the "for" loop here:
boost::array<uchar, 30000> RECV_DATA; // array for receive all data from socket
size_t ImageSize = image_recver.read_some(
boost::asio::buffer(RECV_DATA), ignored_error); // complete image size
vector<uchar> Img (ImageSize); // the new array, will contains only image data
for (int i = 0; i < ImageSize; i++) {
Img[i] = RECV_DATA[i]; // Image array filling
}
You can use the std::vector range constructor to copy RECV_DATA:
std::vector<uchar> Img(RECV_DATA.begin(), RECV_DATA.begin() + ImageSize);
Or, better, read directly into std::vector<uchar>:
std::vector<uchar> RECV_DATA(image_recver.available());
size_t imageSize = image_recver.read_some(boost::asio::buffer(RECV_DATA), ignored_error);
RECV_DATA.resize(imageSize);
I have some image data as a uchar*. I need to run processing on it as a std::vector<uchar>, and then convert it back. I am using this code:
unsigned char* buffer = inputImg.data; //Image data from cv::Mat
std::vector<uchar> vec;
size_t size_of_buffer = sizeof(buffer);
vec.assign(buffer, buffer + size_of_buffer);
uchar* _compressed = reinterpret_cast<uchar*>(vec.data());
When I then view the result with:
cv::Mat mat = cv::Mat(_height, _width, inputImg.type(), _compressed );
this results in a black image. Where am I going wrong?
EDIT:
Based on the comments below, I have changed the code to:
//from Mat
int COLOR_COMPONENTS = inputImg.channels();
int _width = inputImg.cols;
int _height = inputImg.rows;
//to std::vector and back
std::vector<uchar> vec;
size_t size_of_buffer = _width * _height*COLOR_COMPONENTS;
vec.assign(buffer, buffer + size_of_buffer);
uchar* _compressed = reinterpret_cast<uchar*>(vec.data());
As in the answer below, this works.
This code works for me and displays the image correctly:
int main()
{
cv::Mat input = cv::imread("C:/StackOverflow/Input/Lenna.png");
cv::Mat inputImg = input;
int COLOR_COMPONENTS = inputImg.channels();
int _width = inputImg.cols;
int _height = inputImg.rows;
//to std::vector and back
std::vector<uchar> vec;
size_t size_of_buffer = _width * _height*COLOR_COMPONENTS;
unsigned char* buffer = inputImg.data;
vec.assign(buffer, buffer + size_of_buffer);
uchar* _compressed = reinterpret_cast<uchar*>(vec.data());
cv::Mat mat = cv::Mat(_height, _width, inputImg.type(), _compressed);
cv::imshow("output", mat);
cv::waitKey(0);
return 0;
}
Visual Studio 2013 with OpenCV 3.4
sizeof(buffer) yields the size of the pointer to the buffer, not the amount of data inside the buffer. You must get the buffer size from somewhere else.
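To make the difference concrete, here is a small sketch reusing inputImg and buffer from the code above:
unsigned char* buffer = inputImg.data;
// Size of the pointer itself, typically 4 or 8 bytes; says nothing about the pixels.
size_t pointer_size = sizeof(buffer);
// Actual amount of pixel data, taken from the Mat that owns the buffer.
size_t data_size = inputImg.total() * inputImg.elemSize();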
Is there any way to convert an OpenCV Mat object to base64?
I was using the URL below for base64 encoding and decoding:
http://www.adp-gmbh.ch/cpp/common/base64.html
Below is the code snippet:
const unsigned char* inBuffer = reinterpret_cast<const unsigned char*>(image.data);
There you go! (C++11)
Encode img -> jpg -> base64 :
std::vector<uchar> buf;
cv::imencode(".jpg", img, buf);
auto *enc_msg = reinterpret_cast<unsigned char*>(buf.data());
std::string encoded = base64_encode(enc_msg, buf.size());
Decode base64 -> jpg -> img :
string dec_jpg = base64_decode(encoded);
std::vector<uchar> data(dec_jpg.begin(), dec_jpg.end());
cv::Mat img = cv::imdecode(cv::Mat(data), 1);
Note that you can change JPEG compression quality by setting the IMWRITE_JPEG_QUALITY flag.
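For instance, a short sketch of passing that flag to imencode (quality runs from 0 to 100; the value 80 here is just an example):
// Encode with an explicit JPEG quality instead of the library default.
std::vector<int> params = { cv::IMWRITE_JPEG_QUALITY, 80 };
std::vector<uchar> buf;
cv::imencode(".jpg", img, buf, params);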
I'm encountering nearly the same problem, but I'm trying to encode a Mat into jpeg format and then convert it into base64.
The code on that page works fine!
So here is my code:
VideoCapture cam(0);
cam>>img;
vector<uchar> buf;
imencode(".jpg", img, buf);
uchar *enc_msg = new uchar[buf.size()];
for(int i=0; i < buf.size(); i++) enc_msg[i] = buf[i];
string encoded = base64_encode(enc_msg, buf.size());
If you just want to convert a Mat into base64 directly, you need to take the Mat's size and number of channels into account. For a CV_8UC1 image, this will work:
string encoded = base64_encode(img.data, img.rows * img.cols);
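For a multi-channel or non-continuous Mat, the raw-data variant needs the element size taken into account and the data made contiguous first; a hedged sketch, still assuming the base64_encode from the page linked earlier:
// Make sure the pixel data is one contiguous block, then account for the
// bytes per element (channels * depth) when computing the length.
cv::Mat continuous = img.isContinuous() ? img : img.clone();
std::string encoded = base64_encode(continuous.data, continuous.total() * continuous.elemSize());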
I have created an example for this using Qt5 and OpenCV:
cv::Mat1b image;
this->cameraList[i]->getImage(image);
std::vector<uint8_t> buffer;
cv::imencode(".png", image, buffer);
QByteArray byteArray = QByteArray::fromRawData((const char*)buffer.data(), buffer.size());
QString base64Image(byteArray.toBase64());
base64ImageList.append(base64Image);
I was looking for a solution to the same problem. Using Jean-Christophe's answer above, this worked for me:
cv::Mat image = cv::imread("path/to/file");
std::vector<uchar> buffer;
buffer.resize(static_cast<size_t>(image.rows) * static_cast<size_t>(image.cols));
cv::imencode(".jpg", image, buffer);
std::string encoding = base64_encode(buffer.data(), buffer.size());
Also, the C++ standard library does not have a base64_encode implementation, so you can look at this answer, which aggregates a number of implementations.
Without using OpenCV, we can convert the image (or any file) to base64. Read the file byte by byte, store it in a buffer, and base64 encode it. Cheers!
FILE* f = fopen(imagePath, "rb");
fseek(f, 0, SEEK_END);
size_t length = ftell(f);
rewind(f);
BYTE* buffer = (BYTE*)malloc(length + 2);
size_t i = 0;
while (!feof(f)) {
BYTE c;
if (fread(&c, 1, 1, f) == 0) break; // read the image file byte by byte
buffer[i++] = c;
}
fclose(f);
string base64String = base64_encode(&buffer[0], i);
free(buffer);
I have an array double dc[][] and want to convert it to an IplImage* image and then to a video frame.
The task was: I was given a video, I extracted some features from it, and I now have to make a new video of the extracted features.
My approach was to divide the video into frames, extract the features from each frame, and then do the update like this, so in each frame iteration I get a new dc:
double dc[48][44];
for(int i=0;i<48;i++)
{
for(int j=0;j<44;j++)
{
dc[i][j]=max1[i][j]/(1+max2[i][j]);
}
}
Now I need to save this dc in such a way that I can reconstruct the video. Can anybody help me with this?
Thanks in advance.
If you're okay with using Mat, then you can make a Mat for existing user-allocated memory. One of the Mat constructors has the signature:
Mat::Mat(int rows, int cols, int type, void* data, size_t step=AUTO_STEP)
where the parameters are:
rows: the memory height,
cols: the width,
type: one of the OpenCV data types (e.g. CV_8UC3),
data: pointer to your data,
step: (optional) stride of your data
I'd encourage you to take a look at the documentation for Mat here
EDIT: Just to make things more concrete, here's an example of making a Mat from some user-allocated data
#include <opencv2/opencv.hpp>
#include <iostream>
int main()
{
//allocate and initialize your user-allocated memory
const int nrows = 10;
const int ncols = 10;
double data[nrows][ncols];
int vals = 0;
for (int i = 0; i < nrows; i++)
{
for (int j = 0; j < ncols; j++)
{
data[i][j] = vals++;
}
}
//make the Mat from the data (with default stride)
cv::Mat cv_data(nrows, ncols, CV_64FC1, data);
//print the Mat to see for yourself
std::cout << cv_data << std::endl;
}
You can save a Mat to a video file via the OpenCV VideoWriter class. You just need to create a VideoWriter, open a video file, and write your frames (as Mat). You can see an example of using VideoWriter here
Here's a short example of using the VideoWriter class:
//fill-in a name for your video
const std::string filename = "...";
const double FPS = 30;
VideoWriter outputVideo;
//opens the output video file using an MPEG-1 codec, 30 frames per second, of size width x height and in color
outputVideo.open(filename, CV_FOURCC('P','I','M','1'), FPS, Size(width, height));
Mat frame;
//do things with the frame
// ...
//writes the frame out to the video file
outputVideo.write(frame);
The tricky part of the VideoWriter is the opening of the file, as you have a lot of options. You can see the names for different codecs here
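To connect this back to the dc[48][44] array from the question, here is a rough sketch of turning one frame's worth of dc into a Mat that VideoWriter will accept; the min/max scaling to 8-bit is an assumption about your value range, and the frame size must match the Size passed to open.
// Wrap the double array in a Mat header (no copy), scale it to 8-bit,
// convert the single channel to BGR and write it as one video frame.
Mat dcMat(48, 44, CV_64FC1, dc);
Mat gray, frameBgr;
normalize(dcMat, gray, 0, 255, NORM_MINMAX, CV_8U);
cvtColor(gray, frameBgr, CV_GRAY2BGR);
outputVideo.write(frameBgr);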