Using GTK+ 3.6 I would like to display an image from a buffer in memory, not a file on disk. I have a const char *data with the image data, and I'm trying to create a GTK image from it.
So far I have tried two approaches which I thought could work. Both use GdkPixbuf, and thus require the image data to be guchar* (unsigned char*).
With that requirement I have to cast the data:
guchar *gudata = reinterpret_cast<guchar*>(const_cast<char*>(data));
I then tried the following:
Writing the data into a GdkPixbufLoader with gdk_pixbuf_loader_write. Here I get the error "Unrecognized image file format", or, if I create the loader with a specific type ("jpeg"), I get an error saying that it's not a JPEG file (and it is, as explained below).
EDIT: A bit of code:
guchar *gudata = reinterpret_cast<guchar*>(const_cast<char*>(data));
int stride = ((1056 * 32 + 31) & ~31)/8;
GdkPixbufLoader *loader = gdk_pixbuf_loader_new();
GError *error = NULL;
if(!gdk_pixbuf_loader_write(loader, gudata, data_size, &error))
{
printf("Error:\n%s\n", error->message);
}
EDIT 03/01/2013:
Removed stride parameter from write function - misprint.
A Cairo surface does not work either; it shows a black screen and noise.
Initializing the pixbuf with gdk_pixbuf_new_from_data; the image then just looks like TV noise, which would indicate that either the data is wrong (and it has been cast), or that the other parameters were wrong (image row stride, but it's not :) ).
After the errors I just tried writing the data to a file foo.jpg using ofstream, and yes, I get a properly working image file. The file command in the terminal confirms that it is a JPEG image, and with a simple block of code I've created a GdkPixbuf from that foo.jpg to check its row stride value, which matches the value I pass to the aforementioned function.
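A minimal sketch of that check (a reconstruction on my part, not the original snippet):
GError *err = NULL;
GdkPixbuf *check = gdk_pixbuf_new_from_file("foo.jpg", &err);
if (check != NULL)
    printf("rowstride: %d\n", gdk_pixbuf_get_rowstride(check)); // compare with the value passed to gdk_pixbuf_new_from_data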
Does the image data become corrupt with the cast, and if so, how can I address that? I get the image data as a const char*. I have looked at QPixmap and it also loads unsigned char*.
Do I need to use a separate library? (libjpeg?)
I have libgtk3-dev installed.
Thank you!
03/01/2013 UPDATE:
Here's a simple working app that loads a "test.jpg" file next to it (the file size must be < 100000 bytes).
#include <glib.h>
#include <gdk-pixbuf/gdk-pixbuf.h>
#include <gtk/gtk.h>
void on_destroy (GtkWidget *widget G_GNUC_UNUSED, gpointer user_data G_GNUC_UNUSED)
{
gtk_main_quit ();
}
int main (int argc, char *argv[])
{
FILE *f;
guint8 buffer[100000];
gsize length;
GdkPixbufLoader *loader;
GdkPixbuf *pixbuf;
GtkWidget *window;
GtkWidget *image;
gtk_init (&argc, &argv);
f = fopen ("test.jpg", "r");
length = fread (buffer, 1, sizeof(buffer), f);
fclose (f);
loader = gdk_pixbuf_loader_new ();
gdk_pixbuf_loader_write (loader, buffer, length, NULL);
gdk_pixbuf_loader_close (loader, NULL); /* signal end of data so the loader finishes parsing */
pixbuf = gdk_pixbuf_loader_get_pixbuf (loader);
window = gtk_window_new (GTK_WINDOW_TOPLEVEL);
image = gtk_image_new_from_pixbuf (pixbuf);
gtk_container_add (GTK_CONTAINER (window), image);
gtk_widget_show_all (GTK_WIDGET (window));
g_signal_connect (window, "destroy", G_CALLBACK(on_destroy), NULL);
gtk_main ();
return 0;
}
Original Answer:
The char * or unsigned char * here has little importance.
gdk_pixbuf_new_from_data will only read uncompressed RGB data (the only colorspace supported is GDK_COLORSPACE_RGB) with an alpha channel (RGBA) or without it (RGB). No wonder passing it JPEG fails.
Calling gdk_pixbuf_loader_write looks like a better option, but we'd need some code to see what you may be doing wrong. Check however that you have the JPEG pixbuf loader installed by running the gdk-pixbuf-query-loaders command in a shell and verifying that JPEG is listed.
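The same check can also be done from code; here is a minimal sketch (mine, not from the original answer) that lists every format the installed pixbuf loaders understand:
GSList *formats = gdk_pixbuf_get_formats ();
for (GSList *l = formats; l != NULL; l = l->next)
{
    gchar *name = gdk_pixbuf_format_get_name ((GdkPixbufFormat *) l->data);
    g_print ("loader: %s\n", name); /* look for "jpeg" here */
    g_free (name);
}
g_slist_free (formats); /* free only the list; the formats belong to gdk-pixbuf */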
Related
I'm working on a C++ image viewer for Linux which is created using GTK+3 (gtkmm) for the GUI and Magick++ for image handling. My goal is to support as many image file formats as possible, including animated GIFs.
What is the best approach to take a Magick++ Image and draw it in a GTK+3 widget, such that it would work for (just about) any image file format?
As long as ImageMagick has the format delegate, you should be able to draw the image in a GtkWidget.
image = Gtk::manage(new Gtk::Image());
// Load image into ImageMagick
Magick::Image img("wizard:");
/// Calculate how much memory to allocate
size_t to_allocate = img.columns() * img.rows() * 3;
// Create a buffer
guint8 * buffer = new guint8[to_allocate];
// Write pixel data to buffer.
img.write(0, 0, img.columns(), img.rows(), "RGB", Magick::CharPixel, buffer);
// Build a Pixbuf from pixel data in memory.
Glib::RefPtr<Gdk::Pixbuf> pBuff = Gdk::Pixbuf::create_from_data(buffer, Gdk::COLORSPACE_RGB, false, 8, img.columns(), img.rows(), img.columns()*3 );
// Set GtkImage from Pixbuf
image->set(pBuff);
Original Answer mixing C/C++ methods.
Use the following Magick++ method signature to export the pixel data into memory.
Magick::Image.write(const ssize_t x_,
const ssize_t y_,
const size_t columns_,
const size_t rows_,
const std::string &map_, //<= Usually "RGB"
const StorageType type_, //<= Usually CharType
void *pixels_) //<= Be sure to allocate _all_ the memory required (size of storage * number of channels * columns * rows)
Create a GdkPixbuf from the pixels exported above with the following GTK method.
GdkPixbuf *
gdk_pixbuf_new_from_bytes (GBytes *data, //<= Wrap pixels_ in a GBytes (g_bytes_new).
GdkColorspace colorspace, //<= Match colorspace channels from map_.
gboolean has_alpha, //<= Usually no.
int bits_per_sample, //<= Match StorageType bits
int width, //<= Same as columns_.
int height, //<= Same as rows_.
int rowstride); //<= size of data-type * number of channels * width.
Finally, build a GtkImage from the GdkPixbuf with the following method.
GtkWidget * gtk_image_new_from_pixbuf (GdkPixbuf *pixbuf);
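Putting the three steps together, a rough sketch (assuming img is the Magick::Image; the g_bytes_new wrapping is my addition, needed because gdk_pixbuf_new_from_bytes takes a GBytes rather than a raw pointer):
size_t size = img.columns() * img.rows() * 3;
guint8 *pixels = new guint8[size];
img.write(0, 0, img.columns(), img.rows(), "RGB", Magick::CharPixel, pixels);
GBytes *bytes = g_bytes_new(pixels, size); // copies the buffer
GdkPixbuf *pixbuf = gdk_pixbuf_new_from_bytes(bytes, GDK_COLORSPACE_RGB,
                                              FALSE,          // no alpha
                                              8,              // bits per sample
                                              img.columns(), img.rows(),
                                              img.columns() * 3); // rowstride
GtkWidget *gtkimage = gtk_image_new_from_pixbuf(pixbuf);
g_bytes_unref(bytes);  // the pixbuf keeps its own reference
delete[] pixels;       // safe: g_bytes_new made a copy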
I suppose @emcconville is right for the ImageMagick part, as I have no clue about it.
For the GTKmm part, though, you'll want to stick to the C++ API, so use Gdk::Pixbuf::create_from_data to read the image from ImageMagick.
Also, as you're creating an image viewer, you will want to change the image shown by a Gtk::Image. So at startup just use an empty Gtk::Image created with Gtk::Image::Image (or Glade and Gtk::Builder), and later change the image displayed in it with Gtk::Image::set, passing it your pixbuf.
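A minimal sketch of that flow (buffer, width, and height are placeholders, not names from the answer):
Gtk::Image image; // empty at startup, e.g. placed via Glade and Gtk::Builder
Glib::RefPtr<Gdk::Pixbuf> pixbuf = Gdk::Pixbuf::create_from_data(
    buffer,               // RGB pixels exported by Magick++
    Gdk::COLORSPACE_RGB,
    false,                // no alpha channel
    8,                    // bits per sample
    width, height,
    width * 3);           // rowstride of tightly packed RGB
image.set(pixbuf);        // swap the displayed picture at runtime
Note that create_from_data does not copy the pixel data, so the buffer has to stay alive for as long as the pixbuf is in use.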
My code:
camera = new RaspiCam_Cv(); // Raspberry Pi camera library
camera->set(CV_CAP_PROP_FORMAT,CV_8UC1); //this is monochrome 8 bit format
camera->set(CV_CAP_PROP_FRAME_WIDTH, 960);
camera->set(CV_CAP_PROP_FRAME_HEIGHT,720);
while (1){
camera->grab();//for linux
unsigned char* buff = camera->getImageBufferData();
QPixmap pic = QPixmap::fromImage(QImage( buff, camWidth_, camHeight_, camWidth_ * 1, QImage::Format_Indexed8 ));
label->setPixmap(pic);
}
The problem is bad quality! I found out that the problem happens when using QImage; when using OpenCV's Mat, everything is good!
The same thing happens in other Qt-based programs, like this one (same bad quality): https://code.google.com/p/qt-opencv-multithreaded/
Here is a pic where the problem is shown. There is a white page in front of the camera, so if all went as it should, you should see a clean gray image.
You are resizing the image using pixmap and label transformations, which are worse than QImage's. This is because a pixmap is optimized for display, not for anything else. The pixmap size should be the same as the label's to avoid any further resizing.
QImage img =QImage(
buff,
camWidth_,
camHeight_,
camWidth_ * 1,
QImage::Format_Indexed8 ).scaled(label->size());
label->setPixmap(QPixmap::fromImage(img));
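A hedged aside, not part of the original answer: QImage::Format_Indexed8 carries no palette by default, so for an 8-bit monochrome buffer it is safer to attach an explicit grayscale color table (or, on Qt 5.5 and later, use QImage::Format_Grayscale8):
QImage img(buff, camWidth_, camHeight_, camWidth_ * 1, QImage::Format_Indexed8);
QVector<QRgb> gray(256);
for (int i = 0; i < 256; ++i)
    gray[i] = qRgb(i, i, i); // map index i to gray level i
img.setColorTable(gray);
label->setPixmap(QPixmap::fromImage(img.scaled(label->size())));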
This is not an answer, but it's too hard to share code in the comments.
Can you please test this code and tell me whether the result is good or bad?
int main(int argc, char** argv)
{
RaspiCam_Cv *camera = new RaspiCam_Cv();
camera->set(CV_CAP_PROP_FORMAT , CV_8UC1) ;
camera->set(CV_CAP_PROP_FRAME_WIDTH, 960);
camera->set(CV_CAP_PROP_FRAME_HEIGHT,720);
namedWindow("Output",CV_WINDOW_AUTOSIZE);
while (1)
{
Mat frame;
camera->grab();
//camera->retrieve(frame);
unsigned char* buff = camera->getImageBufferData();
frame = cv::Mat(720, 960, CV_8UC1, buff);
imshow("Output", frame);
if (waitKey(30) == 27)
{ cout << "Exit" << endl; break; }
}
delete camera;
return 0;
}
Your provided images look like the color depth is only 16 bit.
For comparison, here's the provided captured image:
and here's the same image, transformed to 16-bit color space in IrfanView (without Floyd-Steinberg dithering).
In the comments we found out that the Raspberry Pi output buffer was set to 16 bit, and setting it to 24 bit helped.
But I can't explain why rendering the image on the Pi with OpenCV's cv::imshow produced good-looking images on the monitor/TV...
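For reference, one common way to set that (an assumption on my part; the comment thread does not show the exact setting) is the framebuffer depth in /boot/config.txt on the Pi:
# /boot/config.txt
framebuffer_depth=24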
I'm using DirectShow to access a video stream, and then using the SampleGrabber filter and interface to get samples from each frame for further image processing. I'm using a callback, so it gets called after each new frame. I've basically just worked from the PlayCap sample application and added a Sample Grabber filter to the graph.
The problem I'm having is that I'm trying to display the grabbed samples on a different OpenCV window. However, when I try to cast the information in the buffer to an IplImage, I get a garbled mess of pixels. The code for the BufferCB call is below, sans any proper error handling:
STDMETHODIMP BufferCB(double Time, BYTE *pBuffer, long BufferLen)
{
AM_MEDIA_TYPE type;
g_pGrabber->GetConnectedMediaType(&type);
VIDEOINFOHEADER *pVih = (VIDEOINFOHEADER *)type.pbFormat;
BITMAPINFO* bmi = (BITMAPINFO *)&pVih->bmiHeader;
BITMAPINFOHEADER* bmih = &(bmi->bmiHeader);
int channels = bmih->biBitCount / 8;
bmih->biPlanes = 1;
bmih->biBitCount = 24;
bmih->biCompression = BI_RGB;
IplImage *Image = cvCreateImage(cvSize(bmih->biWidth, bmih->biHeight), IPL_DEPTH_8U, channels);
Image->imageSize = BufferLen;
CopyMemory(Image->imageData, pBuffer, BufferLen);
cvFlip(Image);
//openCV Mat creation
Mat cvMat = Mat(Image, true);
imshow("Display window", cvMat); // Show our image inside it.
waitKey(2);
return S_OK;
}
My question is, am I doing something wrong here that will make the image displayed look like this:
Am I missing header information or something?
The quoted code is part of the solution. Here you create an image object of a certain width/height with 8-bit pixel data and an unknown channel/component count, then copy data into it from another buffer of unknown format.
The only chance for this to work is that all the unknowns happen to match without any effort on your part. So you basically need to start by checking exactly what media type is on the Sample Grabber's input pin. Then, if it is not what you wanted, update your code accordingly. It may also matter what the downstream connection of the Sample Grabber is, and in particular whether it is connected to a video renderer.
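A hedged sketch of that check (mine; the answer itself contains no code), reusing g_pGrabber from the question:
AM_MEDIA_TYPE mt;
if (SUCCEEDED(g_pGrabber->GetConnectedMediaType(&mt)))
{
    if (mt.formattype == FORMAT_VideoInfo && mt.subtype == MEDIASUBTYPE_RGB24)
    {
        // 24-bit RGB: the IplImage copy above can work as-is.
    }
    else
    {
        // Anything else (YUY2, NV12, ...): force RGB24 with
        // ISampleGrabber::SetMediaType before building the graph,
        // or convert the pixels yourself in BufferCB.
    }
    FreeMediaType(mt); // helper from the DirectShow base classes
}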
Given the following:
- Bitmap raw image data in a char array
- Image width and height
- Path wzAppDataDirectory in std::wstring, generated using the following code:
// Get a good path.
wchar_t wzAppDataDirectory[MAX_PATH];
wcscpy_s( wzAppDataDirectory, MAX_PATH, Windows::Storage::ApplicationData::Current->LocalFolder->Path->Data() );
wcscat_s( wzAppDataDirectory, MAX_PATH, (std::wstring(L"\\") + fileName).c_str() );
How can we save the image as a JPG? (Include the encoding step as well, since the char array is in raw bitmap form.)
Code example is very much appreciated.
You'll need to use a library to encode the JPEG. Some possibilities are the Independent JPEG Group's jpeglib, stb_image, or DevIL.
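For illustration, a minimal sketch using stb_image_write, the single-header writer companion to the stb_image library mentioned above (rgb, width, and height are assumed to describe tightly packed 24-bit pixels):
#define STB_IMAGE_WRITE_IMPLEMENTATION
#include "stb_image_write.h"
int save_jpg(const char *path, const unsigned char *rgb, int width, int height)
{
    // the last argument is the JPEG quality, 1..100
    return stbi_write_jpg(path, width, height, 3, rgb, 90);
}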
This is example code which I obtained from my friend.
It uses OpenCV's Mat data structure. Note that you need to ensure the unsigned char data array within cv::Mat is in continuous form; cv::cvtColor will do the trick (or cv::Mat::clone).
Take note: do not use OpenCV's imwrite. As of this writing, imwrite doesn't pass the Windows Store Certification Test, because it uses several APIs that are prohibited in WinRT.
void SaveMatAsJPG(const cv::Mat& mat, const std::wstring fileName)
{
cv::Mat tempMat;
cv::cvtColor(mat, tempMat, CV_BGR2BGRA);
Platform::String^ pathName = ref new Platform::String(fileName.c_str());
task<StorageFile^>(ApplicationData::Current->LocalFolder->CreateFileAsync(pathName, CreationCollisionOption::ReplaceExisting)).
then([=](StorageFile^ file)
{
return file->OpenAsync(FileAccessMode::ReadWrite);
}).
then([=](IRandomAccessStream^ stream)
{
return BitmapEncoder::CreateAsync(BitmapEncoder::JpegEncoderId, stream);
}).
then([=](BitmapEncoder^ encoder)
{
const Platform::Array<unsigned char>^ pixels = ref new Platform::Array<unsigned char>(tempMat.data, tempMat.total() * tempMat.channels());
encoder->SetPixelData(BitmapPixelFormat::Bgra8, BitmapAlphaMode::Ignore, tempMat.cols, tempMat.rows, 96.0, 96.0, pixels);
encoder->FlushAsync();
});
}
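A hypothetical call site (frame is assumed to be a 3-channel BGR cv::Mat, matching the cvtColor above):
SaveMatAsJPG(frame, L"capture.jpg"); // written under the app's LocalFolder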
So I'm trying to use the WebP API to encode images. Right now I'm going to be using OpenCV to open and manipulate the images, then I want to save them off as WebP. Here's the source I'm using:
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include <cv.h>
#include <highgui.h>
#include <webp/encode.h>
int main(int argc, char *argv[])
{
IplImage* img = 0;
int height,width,step,channels;
uchar *data;
int i,j,k;
if (argc<2) {
printf("Usage:main <image-file-name>\n\7");
exit(0);
}
// load an image
img=cvLoadImage(argv[1]);
if(!img){
printf("could not load image file: %s\n",argv[1]);
exit(0);
}
// get the image data
height = img->height;
width = img->width;
step = img->widthStep;
channels = img->nChannels;
data = (uchar *)img->imageData;
printf("processing a %dx%d image with %d channels \n", width, height, channels);
// create a window
cvNamedWindow("mainWin", CV_WINDOW_AUTOSIZE);
cvMoveWindow("mainWin",100,100);
// invert the image
for (i=0;i<height;i++) {
for (j=0;j<width;j++) {
for (k=0;k<channels;k++) {
data[i*step+j*channels+k] = 255-data[i*step+j*channels+k];
}
}
}
// show the image
cvShowImage("mainWin", img);
// wait for a key
cvWaitKey(0);
// release the image
cvReleaseImage(&img);
float qualityFactor = .9;
uint8_t** output;
FILE *opFile;
size_t datasize;
printf("encoding image\n");
datasize = WebPEncodeRGB((uint8_t*)data,width,height,step,qualityFactor,output);
printf("writing file out\n");
opFile=fopen("output.webp","w");
fwrite(output,1,(int)datasize,opFile);
}
When I execute this, I get this:
nato#ubuntu:~/webp/webp_test$ ./helloWorld ~/Pictures/mars_sunrise.jpg
processing a 2486x1914 image with 3 channels
encoding image
Segmentation fault
It displays the image just fine, but segfaults on the encoding. My initial guess was that it's because I'm releasing the img before I try to write out the data, but it doesn't seem to matter whether I release it before or after I try the encoding. Is there something else I'm missing that might cause this problem? Do I have to make a copy of the image data or something?
The WebP API docs are... sparse. Here's what the README says about WebPEncodeRGB:
The main encoding functions are available in the header src/webp/encode.h
The ready-to-use ones are:
size_t WebPEncodeRGB(const uint8_t* rgb, int width, int height,
int stride, float quality_factor, uint8_t** output);
The docs specifically do not say what the 'stride' is, but I'm assuming it's the same as the 'step' from OpenCV. Is that reasonable?
Thanks in advance!
First, don't release the image if you use it later. Second, your output argument points to an uninitialized address. This is how to pass a properly initialized pointer for the output:
uint8_t* output;
datasize = WebPEncodeRGB((uint8_t*)data, width, height, step, qualityFactor, &output);
You release the image with cvReleaseImage before you try to use the pointer to the image data for the encoding. That release function most likely frees the image buffer, so your data pointer no longer points to valid memory.
This might be the reason for your segfault.
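Putting both answers together, a hedged sketch of the corrected tail of main (the quality value and the cleanup comments are my additions):
uint8_t *output = NULL;
size_t datasize = WebPEncodeRGB((uint8_t *)data, width, height, step,
                                90.0f, &output); // quality factor runs 0..100
if (datasize > 0) {
    FILE *opFile = fopen("output.webp", "wb");
    fwrite(output, 1, datasize, opFile);
    fclose(opFile);
    free(output); // newer libwebp versions provide WebPFree() for this
}
cvReleaseImage(&img); // only now is it safe to drop the pixel data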
So it looks like the problem was here:
// load an image
img=cvLoadImage(argv[1]);
The function cvLoadImage takes an extra parameter
cvLoadImage(const char* filename, int iscolor=CV_LOAD_IMAGE_COLOR)
and when I changed to
img=cvLoadImage(argv[1],1);
the segfault went away.