I am trying to pass the output from the librealsense library function rs_get_frame_data on to GStreamer. Before I do that, I need to copy the output to a local memory char buffer and then send that buffer over to GStreamer. I am having an issue with typecasting the output; the data that GStreamer receives does not look right.
char *buffer = malloc (...)
....
struct rgb *rgb = rs_get_frame_data (...);
....
memcpy (buffer, rgb, sizeof(rgb));
Pass buffer to GStreamer.
Is anything wrong with the above?
I think what you have done copies only one pointer's worth of bytes (4 or 8 bytes), or at best one RGB struct, because sizeof(rgb) is taken on the pointer, not on the frame. If you want to copy the whole frame, you have to multiply the size of one pixel by the number of pixels in the frame.
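A minimal sketch of what that copy could look like, assuming the stream was enabled as RS_FORMAT_RGB8 (3 bytes per pixel) and that width and height are the dimensions you requested when enabling the stream; dev and err are placeholders for whatever device handle and error pointer you already have:

size_t frame_bytes = (size_t) width * height * 3;                 /* RGB8: 3 bytes per pixel */
char *buffer = (char *) malloc (frame_bytes);
const void *frame = rs_get_frame_data (dev, RS_STREAM_COLOR, &err);
memcpy (buffer, frame, frame_bytes);                              /* copy the whole frame, not sizeof(pointer) */
/* now hand buffer (frame_bytes bytes) to GStreamer */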
I am trying to decode a video with libav by following the demo code: here
I need to be able to control where the frame data in pFrame->data[0] is stored. I have tried setting pFrame->data to my own buffer as follows:
// Determine required buffer size and allocate buffer
int numBytes = av_image_get_buffer_size(pixFmt, width, height, 1);
uint8_t *dataBuff = (uint8_t *) malloc (numBytes * sizeof(uint8_t));
// Assign buffer to image planes in pFrame
av_image_fill_arrays(pFrame->data, pFrame->linesize, dataBuff, pixFmt, width,
                     height, 1);
While this does set pFrame->data to be dataBuff (if I print their addresses, they are the same), this call ret = avcodec_receive_frame(pCodecContext, pFrame) to receive the decoded data always writes the data to a different address. It seems to manage its own memory somewhere in the underlying API and ignores the dataBuff that I assigned to pFrame right before.
So I'm stuck: how can I tell libav to write decoded frame data to memory that I pre-allocate? I've seen people ask similar questions online and in the libav forum, but I haven't been able to find an answer.
Many thanks~
I found that the proper way to do it is to supply your own custom allocator via the get_buffer2 callback on the codec context, as this answer demonstrates:
FFMPEG: While decoding video, is possible to generate result to user's provided buffer?
Further documentation is here!
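For reference, here is a rough, untested sketch of what such a callback can look like. It assumes the codec supports custom allocation (AV_CODEC_CAP_DR1), and my_pool_alloc / my_pool_free are placeholders for your own allocator (my_pool_free must match av_buffer_create's free-callback signature, void (*)(void *, uint8_t *)):

#include <libavcodec/avcodec.h>
#include <libavutil/imgutils.h>

static int my_get_buffer(AVCodecContext *ctx, AVFrame *frame, int flags)
{
    // Ask libav how much memory this frame needs at a given alignment.
    int size = av_image_get_buffer_size((enum AVPixelFormat) frame->format,
                                        frame->width, frame->height, 32);
    if (size < 0)
        return size;

    // Wrap memory you own in an AVBufferRef so the decoder writes into it.
    uint8_t *mem = (uint8_t *) my_pool_alloc(size);
    frame->buf[0] = av_buffer_create(mem, size, my_pool_free, NULL, 0);
    if (!frame->buf[0])
        return AVERROR(ENOMEM);

    // Point the frame's planes into that memory.
    int ret = av_image_fill_arrays(frame->data, frame->linesize, mem,
                                   (enum AVPixelFormat) frame->format,
                                   frame->width, frame->height, 32);
    return ret < 0 ? ret : 0;
}

// Install it before avcodec_open2():
// pCodecContext->get_buffer2 = my_get_buffer;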
I'm currently combining the Qt library with the Kinect API, and attempting to show the video from the sensor in a QImage (being shown by a QLabel).
In my Kinect handling library, the function that receives the data from the video sensor is emitting the data as a BYTE* (pointing to something with RGB32 values).
In another little corner of my program, I have a slot receiving that BYTE* and attempting to update the QImage with the RGB32 data as follows:
videoCanvas->loadFromData(reinterpret_cast<const uchar*>(pBuffer), QImage::Format_RGB32);
Where pBuffer is passed into the slot by the signal, and is the aforementioned BYTE*.
This is not working for me, and I'm still stuck with a gray box where the image should be. I imagine that the issue is in the casting, because I have researched the data type and apparently QImage::Format_RGB32 is correct.
How should I proceed with this? :)
loadFromData expects the buffer you are passing to have all the necessary information about the image...including its width and height. So the format should be things like JPEG or PNG, not a QImage::Format constant.
In fact, I'm surprised that line compiled as written...because loadFromData expects a char* for its specification of the "format", and you're passing an integer:
http://developer.qt.nokia.com/doc/qt-4.8/qimage.html#Format-enum
If what you have is a raw byte array of data, and know the width and height from some other source, you probably want to use the appropriate QImage constructor for that:
http://developer.qt.nokia.com/doc/qt-4.8/qimage.html#QImage-5
That's where you use the QImage::Format values...
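As a rough sketch, assuming the Kinect frame is 640x480 (substitute whatever dimensions your sensor actually delivers) and that pBuffer stays valid for as long as the image is used, the slot body could look like this (myLabel stands in for whatever QLabel shows the video):

// inside the slot receiving pBuffer
QImage image(reinterpret_cast<const uchar*>(pBuffer),
             640, 480,                         // frame width and height from the sensor
             QImage::Format_RGB32);
myLabel->setPixmap(QPixmap::fromImage(image)); // display it

Note that this constructor does not copy the pixel data, so the buffer must outlive the QImage (or call image.copy() if it doesn't).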
Is there a possibility of saving an image to a string instead of a file with OpenCV?
I have found just this function in the docs:
int cvSaveImage(const char* filename, const CvArr* image)
And it can only save to files.
How do you want to encode the image in the string?
You could just store the red, green and blue values as comma-separated strings, but encoding the binary image data as something like base64 would be more compact.
Edit: if you mean a JPEG format in memory, see - OpenCV to use in memory buffers or file pointers
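A minimal sketch of the in-memory route with the C++ API (this assumes you can use cv::Mat rather than the C IplImage interface; imageToString is just an illustrative name):

#include <opencv2/opencv.hpp>
#include <string>
#include <vector>

std::string imageToString(const cv::Mat &img)
{
    std::vector<uchar> buf;
    cv::imencode(".jpg", img, buf);               // encode to JPEG in memory
    return std::string(buf.begin(), buf.end());   // the raw JPEG bytes as a std::string
}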
Maybe you can simply create a char* buffer and copy your IplImage data structure there?
But be careful: you have to handle the pointer members, such as imageData and imageDataOrigin, specially, since copying the pointer values themselves is meaningless on the receiver's side.
So the idea is to put everything into your buffer in a way that lets you decode it on the receiver's side; see the sketch below.
OpenCV is also an open-source library, so you can simply copy the body of OpenCV's save-file function and replace the writes to a file with writes to your buffer.
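A rough sketch of that serialization (it assumes the receiver knows this layout and that imageSize already accounts for any row padding in widthStep):

// Pack a small header followed by the pixel data into one flat buffer.
int header[4] = { img->width, img->height, img->depth, img->nChannels };
size_t dataSize = (size_t) img->imageSize;                  // bytes of pixel data
char *buf = (char *) malloc (sizeof(header) + dataSize);
memcpy (buf, header, sizeof(header));
memcpy (buf + sizeof(header), img->imageData, dataSize);
// Send sizeof(header) + dataSize bytes; the receiver rebuilds the header
// and attaches the data with cvCreateImageHeader()/cvSetData().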
I'm looking for a simple C/C++ lib that would allow me to extract the first frame of a video as a uchar array, and that has a simple function to access the next one.
I know of FFMPEG, but it requires playing with packets and things like that, and I'm surprised that nowhere on the net can I find a lib that allows something like:
Video v = openVideo("path");
uchar* data = v.getFrame();
v.nextFrame();
I just need to extract frames of a video to use them as textures...no need for re-encoding afterwards or anything...
Of course something that would read as many formats as possible would be great, something built upon libavcodec for example ;p
And I'm using Windows 7.
Thanks!
Here's an example with OpenCV:
#include <cv.h>
#include <highgui.h>
int
main(int argc, char **argv)
{
    cv::VideoCapture capture(argv[1]);
    if (capture.grab())
    {
        cv::Mat_<unsigned char> frame;
        capture.retrieve(frame);
        //
        // Convert to your byte array here
        //
    }
    return 0;
}
It's untested, but I cannibalized it from some existing working code, so it shouldn't take you long to get it working.
The cv::Mat_<unsigned char> is essentially a byte array. If you really need something that's explicitly of type unsigned char*, then you can malloc space of the appropriate size and copy into it by iterating over the matrix, either via pixel positions (cv::Mat_::at()) or via iterators (cv::Mat_::begin() and friends).
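A small sketch of that copy, assuming frame holds 8-bit data (for a colour frame from VideoCapture, elemSize() already accounts for the three channels):

size_t nbytes = frame.total() * frame.elemSize();         // rows * cols * channels
unsigned char *data = (unsigned char *) malloc (nbytes);
if (frame.isContinuous()) {
    memcpy (data, frame.data, nbytes);                    // one flat copy
} else {
    size_t rowBytes = frame.cols * frame.elemSize();
    for (int r = 0; r < frame.rows; ++r)                  // copy row by row
        memcpy (data + r * rowBytes, frame.ptr(r), rowBytes);
}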
There are many reasons why libraries rarely expose image data as a simple pointer, such as:
It implies the entire image must occupy a contiguous space in memory. This is a big deal when dealing with large images.
It requires committing to a certain ordering of the data (packed vs. planar: are the RGB values stored interleaved or in separate planes?) and reduces flexibility.
Dereferencing pointers is a cause for bugs (buffer overruns, etc.).
So if you want your pointer, you have to do a bit of work for it.
You can use OpenCV; it is very simple, and they have a useful manual on their site: http://opencv.willowgarage.com/wiki/
Use DirectShow. Here's an article: http://www.codeproject.com/KB/audio-video/framegrabber.aspx
There are DirectShow 'add-ons' to decode different video formats. Here's one of the sites where you can grab free DirectShow filter packs to decode some common formats that are not directly supported by DirectShow: http://www.free-codecs.com/download/DirectShow_FilterPack.htm
This one is for all you ALSA guys. I need a sanity check here. I am using the alsa-lib API to play sounds, and the function that I am using to write the data to the driver is
snd_pcm_sframes_t snd_pcm_writei (snd_pcm_t* pcm,
const void* buffer,
snd_pcm_uframes_t size);
For the third parameter, should it be the frame count or the size of the buffer in bytes? I'm asking because I have seen numerous examples where the size in bytes is passed in. One example is included in the documentation.
According to the documentation, it's the number of frames, not bytes.
In the example you linked to, the values just happen to be the same because it uses 8-bit samples and one channel, and one frame of one-channel 8-bit data is one byte.
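For illustration, a minimal sketch of a case where the two would differ, assuming 16-bit signed samples (S16_LE) and two channels on an already-configured handle pcm:

// 2 channels * 2 bytes per sample = 4 bytes per frame
const snd_pcm_uframes_t frames = 1024;
char buffer[1024 * 2 * 2];                    // frames * channels * bytes per sample
/* ... fill buffer with interleaved audio ... */
snd_pcm_sframes_t written = snd_pcm_writei(pcm, buffer, frames);  // pass frames, not sizeof(buffer)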