FFMPEG Frame to DirectX Surface - c++

Given a pointer to an AVFrame from FFMPEG's avcodec_decode_video() function, how do I copy the image to a DirectX surface? (Assume I have a pointer to an appropriately sized DX X8R8G8B8 surface.)
Thanks.
John.

You can use FFMPEG's img_convert() function to simultaneously copy the image to your surface and convert it to RGB format. Here are a few lines of code from a recent project of mine that did a similar thing (although I was using SDL instead of DirectX):
AVFrame *frame = avcodec_alloc_frame();
avcodec_decode_video(_ffcontext, frame, etc...);

lockYourSurface();
uint8_t *buf = getPointerToYourSurfacePixels();

// Create an AVPicture structure which contains a pointer to the RGB surface.
AVPicture pict;
memset(&pict, 0, sizeof(pict));
avpicture_fill(&pict, buf, PIX_FMT_RGB32,
               _ffcontext->width, _ffcontext->height);

// Convert the image into RGB and copy it to the surface.
img_convert(&pict, PIX_FMT_RGB32, (AVPicture *)frame,
            _ffcontext->pix_fmt, _ffcontext->width, _ffcontext->height);

unlockYourSurface();
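Note that img_convert() has since been removed from FFmpeg; in current versions the same copy-and-convert step is done with libswscale. Here is a minimal sketch under that assumption, reusing _ffcontext, frame, and buf from the snippet above, with error handling omitted:
extern "C" {
#include <libswscale/swscale.h>
}

// Build a conversion context: source format -> 32-bit RGB at the same size.
SwsContext *sws = sws_getContext(
    _ffcontext->width, _ffcontext->height, _ffcontext->pix_fmt,
    _ffcontext->width, _ffcontext->height, AV_PIX_FMT_RGB32,
    SWS_BILINEAR, NULL, NULL, NULL);

// Destination: a single packed plane pointing at the locked surface pixels.
uint8_t *dstData[4]     = { buf, NULL, NULL, NULL };
int      dstLinesize[4] = { _ffcontext->width * 4, 0, 0, 0 };

sws_scale(sws, frame->data, frame->linesize,
          0, _ffcontext->height, dstData, dstLinesize);

sws_freeContext(sws);
In a real video loop you would create the SwsContext once (or use sws_getCachedContext) and reuse it per frame, and the destination linesize should be the surface's actual pitch rather than width * 4 if the two differ.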

Related

uint8_t buffer to cv::Mat conversion results in distorted image

I have a MIPI camera that captures frames and stores them into the struct buffer that you can see below. Once the frame is stored, I want to convert it into a cv::Mat, but the Mat ends up looking distorted, like the first pic.
The var buf.index is just part of the V4L2 API, useful to understand which buffer I'm using.
// The structure where the data is stored
struct buffer {
    void   *start;
    size_t  length;
};
struct buffer *buffers;

// buffer -> Mat
cv::Mat im = cv::Mat(cv::Size(width, height), CV_8UC3,
                     (uint8_t*)buffers[buf.index].start);
At first I thought that the data might be corrupted but storing the image with lodepng results in a nice image without any distortion.
unsigned char* out_buf = (unsigned char*)malloc(width * height * 3);
for (int pix = 0; pix < width * height; ++pix) {
    // Skip the first byte of each 4-byte pixel and copy the remaining 3.
    memcpy(out_buf + pix * 3,
           ((uint8_t*)buffers[buf.index].start) + 4 * pix + 1, 3);
}
lodepng_encode24_file(filename, out_buf, width, height);
I bet it's something really silly.
The picture you posted has oddly colored pixels, and the patterns look like there's more information than simply 24 bits per pixel.
After inspecting the data, it appears that V4L gives you four bytes per pixel, and the first byte is always 0xFF (let's call that X). Further, the channel order seems to be XRGB.
Create a cv::Mat using CV_8UC4 to contain the data.
To use the picture in OpenCV, you need BGR order. cv::split the received data into its four color planes, which are X, R, G, B. Use cv::merge to reassemble the B, G, R planes into a picture that OpenCV can handle, or reassemble into R, G, B to create a Mat for other purposes (that other library you seem to use).
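Here is a minimal sketch of that split/merge fix, assuming the XRGB layout described above and reusing the question's variable names (buffers, buf, width, height):
#include <opencv2/opencv.hpp>
#include <vector>

// Wrap the 4-byte-per-pixel buffer without copying it.
cv::Mat xrgb(cv::Size(width, height), CV_8UC4,
             (uint8_t*)buffers[buf.index].start);

// Separate the four planes: X (always 0xFF), R, G, B.
std::vector<cv::Mat> planes;
cv::split(xrgb, planes);

// Reassemble as B, G, R for OpenCV (reverse the order for RGB consumers).
cv::Mat bgr;
cv::merge(std::vector<cv::Mat>{planes[3], planes[2], planes[1]}, bgr);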

What is the preferred approach to display a Magick++ Image in a GTK+3 application?

I'm working on an C++ image viewer for Linux which is created using GTK+3 (gtkmm) for the GUI and Magick++ for image handling. My goal is to support as many image file formats as possible, including animated GIFs.
What is the best approach to take a Magick++ Image and draw it in a GTK+3 widget, such that it would work for (just about) any image file format?
As long as ImageMagick has a delegate for the format, you should be able to draw the image in a GTK widget.
image = Gtk::manage(new Gtk::Image());

// Load the image with ImageMagick.
Magick::Image img("wizard:");

// Calculate how much memory to allocate.
size_t to_allocate = img.columns() * img.rows() * 3;

// Create a buffer. Note it must outlive the Pixbuf, since
// create_from_data does not copy the pixel data.
guint8 *buffer = new guint8[to_allocate];

// Write pixel data to the buffer.
img.write(0, 0, img.columns(), img.rows(), "RGB", Magick::CharPixel, buffer);

// Build a Pixbuf from the pixel data in memory.
Glib::RefPtr<Gdk::Pixbuf> pBuff = Gdk::Pixbuf::create_from_data(
    buffer, Gdk::COLORSPACE_RGB, false, 8,
    img.columns(), img.rows(), img.columns() * 3);

// Set the GtkImage from the Pixbuf.
image->set(pBuff);
Original Answer mixing C/C++ methods.
Use the following Magick++ method signature to export the pixel data into memory.
Magick::Image::write(const ssize_t x_,
                     const ssize_t y_,
                     const size_t columns_,
                     const size_t rows_,
                     const std::string &map_,  // <= Usually "RGB"
                     const StorageType type_,  // <= Usually CharPixel
                     void *pixels_);           // <= Be sure to allocate _all_ the memory required
                                               //    (size of storage * number of channels * columns * rows)
Create a GdkPixbuf from the pixels exported above with the following GDK method.
GdkPixbuf *
gdk_pixbuf_new_from_bytes (GBytes *data,             // <= Same as pixels_.
                           GdkColorspace colorspace, // <= Match colorspace channels from map_.
                           gboolean has_alpha,       // <= Usually FALSE.
                           int bits_per_sample,      // <= Match StorageType bits.
                           int width,                // <= Same as columns_.
                           int height,               // <= Same as rows_.
                           int rowstride);           // <= size of data type * number of channels * width.
Finally, build a GtkImage from the PixBuf with the following method.
GtkWidget * gtk_image_new_from_pixbuf (GdkPixbuf *pixbuf);
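Glued together, the C-side version might look like this sketch (assuming img is a Magick::Image that has already been read, as in the C++ snippet above):
size_t size = img.columns() * img.rows() * 3;
guint8 *pixels = new guint8[size];
img.write(0, 0, img.columns(), img.rows(), "RGB", Magick::CharPixel, pixels);

// g_bytes_new copies the data, so the local buffer can be freed afterwards.
GBytes *bytes = g_bytes_new(pixels, size);
delete[] pixels;

GdkPixbuf *pixbuf = gdk_pixbuf_new_from_bytes(
    bytes, GDK_COLORSPACE_RGB,
    FALSE,               // no alpha in an "RGB" export
    8,                   // CharPixel = 8 bits per sample
    img.columns(), img.rows(),
    img.columns() * 3);  // rowstride: 1 byte * 3 channels * width
g_bytes_unref(bytes);    // the pixbuf holds its own reference

GtkWidget *image = gtk_image_new_from_pixbuf(pixbuf);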
I suppose @emcconville is right about the ImageMagick part, as I have no clue about it.
For the GTKmm part, though, you'll want to stick to the C++ API, so use Gdk::Pixbuf::create_from_data to read the image data from ImageMagick.
Also, as you're creating an image viewer, you will want to change the image shown by a Gtk::Image. So at startup just use an empty Gtk::Image created with Gtk::Image::Image (or Glade and Gtk::Builder), and later change the image displayed in it with Gtk::Image::set, passing it your pixbuf.
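A minimal sketch of that viewer flow, with a hypothetical load_pixbuf() helper standing in for the Magick++-to-Pixbuf conversion shown earlier:
Gtk::Image image;  // starts empty; could also come from Glade via Gtk::Builder

void on_open_file(const std::string &path)
{
    // load_pixbuf() is assumed to do the Magick++ read + write +
    // Gdk::Pixbuf::create_from_data steps from the answer above.
    Glib::RefPtr<Gdk::Pixbuf> pixbuf = load_pixbuf(path);

    // Swap the displayed picture; the widget redraws itself.
    image.set(pixbuf);
}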

How to go from raw Bitmap data to SDL Surface or Texture?

I'm using a library called Awesomium and it has the following function:
void Awesomium::BitmapSurface::CopyTo(unsigned char *dest_buffer, // output
                                      int dest_row_span,     // input that I can select
                                      int dest_depth,        // input that I can select
                                      bool convert_to_rgba,  // input that I can select
                                      bool flip_y            // input that I can select
                                      ) const
Copy this bitmap to a certain destination. Will also set the dirty bit to False.
Parameters:
dest_buffer: A pointer to the destination pixel buffer.
dest_row_span: The number of bytes per row of the destination.
dest_depth: The depth (number of bytes per pixel; usually 4 for BGRA surfaces and 3 for BGR surfaces).
convert_to_rgba: Whether or not we should convert BGRA to RGBA.
flip_y: Whether or not we should invert the bitmap vertically.
This is great because it gives me an unsigned char * dest_buffer which contains raw bitmap data. I've been trying for several hours to convert this raw bitmap data into some sort of usable format that I can use in SDL, but I'm having trouble. =[ Is there any way I can load it into an SDL texture or surface? It would be ideal to have examples for both, but if I only get one example (either texture or surface), that is sufficient and I will be very grateful. :) I tried to use SDL_LoadBMP_RW, but that crashed. I'm not even sure if I should be using that method.
SDL_LoadBMP_RW is for loading an image in the BMP file format. And it expects an SDL_RWops*, which is a file stream, not a pixel buffer. The function you want is SDL_CreateRGBSurfaceFrom. I believe this call should work for your purposes:
SDL_Surface* surface =
    SDL_CreateRGBSurfaceFrom(
        pixels, // dest_buffer from CopyTo
        width,  // in pixels
        height, // in pixels
        depth,  // in bits, so should be dest_depth * 8
        pitch,  // dest_row_span from CopyTo
        Rmask,  // RGBA masks, see docs
        Gmask,
        Bmask,
        Amask
    );
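As a hedged sketch, here is what those arguments might look like for the 4-byte BGRA layout the CopyTo docs describe, on a little-endian machine, plus the texture route the question also asked about (renderer is assumed to be an existing SDL_Renderer*):
SDL_Surface* surface = SDL_CreateRGBSurfaceFrom(
    pixels,
    width, height,
    32,             // dest_depth of 4 bytes * 8 bits
    width * 4,      // dest_row_span for a tightly packed buffer
    0x00FF0000,     // Rmask: BGRA bytes read as a
    0x0000FF00,     // Gmask: little-endian 32-bit word
    0x000000FF,     // Bmask
    0xFF000000);    // Amask

// For the texture route (SDL2), build a texture from the surface:
SDL_Texture* texture = SDL_CreateTextureFromSurface(renderer, surface);
SDL_FreeSurface(surface);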

Decode PNG image with Magick++

I need to put decoded RGBA data (from a 32-bit PNG) into a cl::Image2D, then (after some processing) write it back to a Magick++ image with enqueueReadImage().
However, at the moment I do not see any way to access the RGBA data directly in a Magick++ image object. Is this possible? If not, what's the best way to get data in RGBA format out of a Magick++ object?
You can use the Magick::Image::write function:
Magick::Image im;
// read image ....

// Only for RGBA!
size_t im_size = im.columns() * im.rows() * 4;
uint8_t *pixels = new uint8_t[im_size];
im.write(0, 0, im.columns(), im.rows(), "RGBA", ::Magick::CharPixel, pixels);
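From there, a hedged sketch of the OpenCL side, assuming an existing cl::Context and cl::CommandQueue named context and queue and the classic cl.hpp wrapper (error handling omitted):
#include <CL/cl.hpp>

// Upload the RGBA bytes as a 2D image (one byte per channel, normalized).
cl::ImageFormat format(CL_RGBA, CL_UNORM_INT8);
cl::Image2D clImage(context, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                    format, im.columns(), im.rows(), 0, pixels);

// ... enqueue kernels that process clImage ...

// Read the processed image back into the same host buffer.
cl::size_t<3> origin;              // zero-initialized: (0, 0, 0)
cl::size_t<3> region;
region[0] = im.columns();
region[1] = im.rows();
region[2] = 1;
queue.enqueueReadImage(clImage, CL_TRUE, origin, region, 0, 0, pixels);
The buffer can then go back into Magick++ with the matching Magick::Image::read(columns, rows, "RGBA", Magick::CharPixel, pixels) overload.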

Depth feed from Kinect not updating

I'm using a combination of OpenKinect and OpenCV libraries to apply Haar-like feature recognition to both RGB and depth images.
I can get the live feed and successfully detect objects using the RGB feed however the depth is giving me massive problems.
After the initial frame the depth frame does not seem to update at all.
The depth callback function that provides the raw data is as follows:
// Depth callback function
void depth_cb(freenect_device *dev, void *v_depth, uint32_t timestamp)
{
    if (got_depth == 0) {
        pthread_mutex_lock(&buf_mutex);

        // Copy to the OpenCV buffer (640 x 480 pixels of 16-bit depth).
        memcpy(depthMat.data, v_depth, (640 * 480 * 2));
        // depthMat.convertTo(depthFrame, CV_8UC1, 256.0/2048.0);
        got_depth++;

        pthread_cond_signal(&frame_cond);
        pthread_mutex_unlock(&buf_mutex);
    }
}
The Mats used are initialised like so:
cv::Mat depthMat(cv::Size(640, 480), CV_16UC1);
cv::Mat depthFrame(cv::Size(640, 480), CV_8UC1);
And in the main function I try to use them like so:
depthMat.convertTo(depthFrame, CV_8UC1, 255.0 / 2048.0);

imshow("rgb", rgbMat);
imshow("depth-pre-conversion", depthMat);
imshow("depth", depthFrame);

IplImage depthImage = depthFrame;
IplImage rgbImage = rgbMat;
detect_and_draw(&depthImage);
'depth-pre-conversion' is an almost black frame; you can just about make out the depth image in it. It doesn't update.
'depth' is the lighter version once converted to 8 bits; it also doesn't move.
'rgb' is the live RGB feed, which works with no problem (although it is BGR rather than RGB; I'll get around to fixing that at some point, as it's less important right now).
I'd appreciate any advice and help you can offer.