How to generate a QVideoFrame from a PylonImage buffer - C++

I have a stream of Pylon images that I would like to display to a user in a QML app. How can I convert the PylonImage to a QVideoFrame so I can display it?
I am using PixelType_YUV422planar, since it is supported by both PylonImages and QVideoFrames.
Yet I'm clueless about how to get the QVideoFrame from the Pylon image.
I will experiment a bit with memcpy, but I would like to know if there's any other way.
Edit:
Copying the buffer of the PylonImage to the QVideoFrame using memcpy results in a distorted image.

The memcpy approach works very well. Please refer to:
https://blog.katastros.com/a?ID=9f708708-c5b3-4cb3-bbce-400cc8b8000c
The interesting code snippet:
QVideoFrame f(size, QSize(width, height), width, QVideoFrame::Format_YUV420P);
if (f.map(QAbstractVideoBuffer::WriteOnly)) {
    memcpy(f.bits(), data, size);
    f.setStartTime(0);
    f.unmap();
    emit newFrameAvailable(f);
}
I made the memcpy example work myself for a v4l2 video capture, but memcpy is too costly: it causes my frame rate to drop from 34 fps to 6 fps, and CPU usage sits at 100% (4K live video).
Could you find any alternatives to memcpy?
Thanks
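A distorted image after a single whole-buffer memcpy is usually a stride mismatch: the Pylon buffer may pad each row, while the QVideoFrame above expects tightly packed rows. Below is a minimal sketch of a stride-aware, plane-by-plane copy, assuming the Qt 5 API from the snippet above; the srcStrideY/srcStrideC parameters are hypothetical and would come from the Pylon image. It is not the only option (a custom QAbstractVideoBuffer subclass can avoid the copy entirely), just the simplest fix to try first.

#include <QVideoFrame>
#include <cstring>

// Builds a YUV420P frame from three source planes, copying row by row so
// that source line padding (stride) does not shear the picture.
QVideoFrame makeFrame(const uchar *y, const uchar *u, const uchar *v,
                      int width, int height, int srcStrideY, int srcStrideC)
{
    const int size = width * height * 3 / 2;   // YUV420P: 12 bits per pixel
    QVideoFrame f(size, QSize(width, height), width, QVideoFrame::Format_YUV420P);
    if (!f.map(QAbstractVideoBuffer::WriteOnly))
        return QVideoFrame();

    uchar *dst = f.bits();
    for (int r = 0; r < height; ++r)           // Y plane, full resolution
        std::memcpy(dst + r * width, y + r * srcStrideY, width);
    dst += width * height;
    for (int r = 0; r < height / 2; ++r)       // U plane, quarter size
        std::memcpy(dst + r * (width / 2), u + r * srcStrideC, width / 2);
    dst += (width / 2) * (height / 2);
    for (int r = 0; r < height / 2; ++r)       // V plane, quarter size
        std::memcpy(dst + r * (width / 2), v + r * srcStrideC, width / 2);

    f.unmap();
    return f;
}

The same row-by-row pattern applies to YUV422planar; only the plane sizes change (the chroma planes are width/2 x height there).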

Related

C++ (MFC) using Pylon: How to draw an image which is in an array

Hello, I am working on a Pylon application, and I want to know how to draw an image that is stored in an array.
Basler's Pylon SDK, on-memory image saving
This link shows that I can save image data in an array (I guess): Pylon::CImageFormatConverter::Convert(Pylon::CPylonImage[i],Pylon::CGrabResultPtr)
But the thing is, I cannot figure out how to draw those images from the array.
I think it should be an easy problem, but I'd appreciate your patience, as this is my first time doing this.
For quick'n'dirty display of Pylon buffers, the Pylon SDK provides a helper class you can drop into the buffer grabbing loop:
//initialization
CPylonImage img;
CPylonImageWindow wnd;
//somewhere in grabbing loop
wnd.SetImage(img);
wnd.Show();
If you want to use the buffer in your own display procedures (via either GDI or third-party libraries like OpenCV), you can get the image buffer with the GetBuffer() method:
CGrabResultPtr ptrGrabResult;
camera.RetrieveResult( 5000, ptrGrabResult, TimeoutHandling_ThrowException);
const uint8_t *pImageBuffer = (uint8_t *) ptrGrabResult->GetBuffer();
Of course, you have to be aware of your target pixel alignment, and optionally use CImageFormatConverter to get the output pixel format of your choice.
CImageFormatConverter documentation
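As a short sketch of that converter (assuming the Pylon C++ API; the variable names are illustrative), converting a grab result to packed BGR makes it straightforward to hand the buffer to GDI or OpenCV:

#include <pylon/PylonIncludes.h>
using namespace Pylon;

CImageFormatConverter converter;
converter.OutputPixelFormat = PixelType_BGR8packed;  // convenient for GDI/OpenCV

CPylonImage converted;
// ptrGrabResult obtained from camera.RetrieveResult() as above
converter.Convert(converted, ptrGrabResult);

// converted.GetBuffer() now points at packed BGR pixels, e.g. for OpenCV:
// cv::Mat view(converted.GetHeight(), converted.GetWidth(), CV_8UC3,
//              (void *) converted.GetBuffer());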

How to render YUV420P frames using DirectX?

I'm using FFmpeg in C++ to decode a video file.
I decoded the frames in YUV420P format into system memory.
As the next step, I want to render these frames using DirectX.
I have searched many times, and I know that I have to use a Texture2D, but there is no code sample for my case:
how to create a Texture2D from YUV420P (or RGB instead) in system memory and render it to the screen.
Performance is also important, so I need the best way to do it.
Can someone show me some sample code?
Many thanks!
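One common approach is to upload each plane into its own single-channel texture and do the YUV-to-RGB conversion in a pixel shader. Here is a minimal Direct3D 11 sketch under that assumption (device and context are assumed to exist already; the helper names are illustrative):

#include <d3d11.h>
#include <cstdint>
#include <cstring>

// One DXGI_FORMAT_R8_UNORM texture per Y/U/V plane.
ID3D11Texture2D *createPlane(ID3D11Device *device, UINT w, UINT h)
{
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width = w;
    desc.Height = h;
    desc.MipLevels = 1;
    desc.ArraySize = 1;
    desc.Format = DXGI_FORMAT_R8_UNORM;   // one byte per texel
    desc.SampleDesc.Count = 1;
    desc.Usage = D3D11_USAGE_DYNAMIC;     // CPU writes every frame
    desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;

    ID3D11Texture2D *tex = nullptr;
    device->CreateTexture2D(&desc, nullptr, &tex);
    return tex;
}

// Copies one plane row by row, respecting both the source pitch and the
// pitch the driver reports for the mapped texture.
void uploadPlane(ID3D11DeviceContext *context, ID3D11Texture2D *tex,
                 const uint8_t *src, UINT w, UINT h, UINT srcPitch)
{
    D3D11_MAPPED_SUBRESOURCE mapped;
    if (SUCCEEDED(context->Map(tex, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped))) {
        for (UINT row = 0; row < h; ++row)
            std::memcpy((uint8_t *) mapped.pData + row * mapped.RowPitch,
                        src + row * srcPitch, w);
        context->Unmap(tex, 0);
    }
}

For YUV420P the U and V planes are width/2 x height/2. The pixel shader then samples all three planes through shader resource views and applies the BT.601 (or BT.709) YUV-to-RGB matrix. Alternatives include DXGI_FORMAT_NV12 textures or hardware video processors, but the three-plane variant is the simplest to get running.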

Extracting RGB channels in OpenCV under C++

I'm using OpenCV to convert image data captured using an IDS uEye camera into a useful format, using the following code:
IplImage* tmpImg = cvCreateImage(cvSize(width,height),IPL_DEPTH_8U,3);
tmpImg->imageData = pFrameBuffer[k];
frame = cv::cvarrToMat(tmpImg);
This works perfectly - I can then use imwrite(filename,frame); further downstream to write the processed images out in a sensible format. I would ideally like to be able to save the RGB channels as separate 'grayscale' image files, but I don't understand the OpenCV documentation regarding single-channel operations. Can anyone suggest a means of accomplishing this? Preferably something not overly computationally expensive (looping over an image pixel-by-pixel isn't an option - I'm working with 60-230 fps video at up to 1280x1064, and all the processing has to be done at the point of capture).
Running the latest Debian Testing if that makes any difference (I don't think it should).
Once you have a cv::Mat object it's pretty simple:
std::vector<cv::Mat> grayPlanes;
cv::split(frame, grayPlanes);
cv::imwrite("blue.png", grayPlanes[0]);
cv::imwrite("green.png", grayPlanes[1]);
cv::imwrite("red.png", grayPlanes[2]);
The split function can write directly to a standard vector, so you don't really have to think about memory management and other details.

Capture image frames from Kinect and save to hard drive

My aim is to capture all the frames (RGB) from the Kinect at 30 fps and save them to my hard drive. To do this I took the following approach.
Get the frames from the Kinect and store them in an array buffer. Since writing to disk (using imwrite()) takes a bit of time and I may miss some frames while doing so, instead of saving them directly to disk I store them in an array. A second, parallel thread then accesses this array and writes the individual frames to disk as images.
I have used a static array of size 3000 and type Mat. This will suffice, since I need to store frames for 1.5-minute videos (1.5 minutes at 30 fps = 2700 frames). I have declared the array as follows:
#define NUM_FRAMES 3000
Mat rgb[NUM_FRAMES];
I have already tested this limit by reading images and saving them to the array using the following code :
for (int i = 0; i < NUM_FRAMES; i++)
{
    Mat img = imread("image.jpg", CV_LOAD_IMAGE_COLOR);
    rgb[i] = img;
    imshow("Image", img);
    cvWaitKey(10);
}
The above code executed flawlessly.
But one problem is that the code I am using to capture images from the Kinect delivers the image as an IplImage. Thus I need to convert the image to cv::Mat format before using it. I convert it using the following:
IplImage* color = cvCreateImageHeader(cvSize(COLOR_WIDTH, COLOR_HEIGHT), IPL_DEPTH_8U, 4);
cvSetData(color, colorBuffer, colorLockedRect.Pitch); // colorBuffer and colorLockedRect.Pitch is something that Kinect uses. Not related to OpenCv
rgb[rgb_read++] = Mat(color, FLAG);
Now here lies my problem. Whenever I set #define FLAG true, it causes memory leaks and gives me the error OpenCV Error: Insufficient memory (failed to allocate 1228804 bytes).
But if I use #define FLAG false it works correctly, yet the frames that I get are erroneous, as shown below (three consecutive frames).
I was moving my arm around, and the image got cut in between, as can be seen above.
Can someone please point out the reason for this weird behavior, or suggest another way of obtaining the desired result? I have been struggling with this for a few days now. Please ask if any further clarification is required.
I am using OpenCV 2.4.8, Kinect SDK for Windows version-1.8.0 and Microsoft Visual Studio 2010.
Also, can someone please explain to me the role of the copyData parameter in Mat::Mat? I have already gone through this link, but it's still not completely clear. Maybe that's why I could not solve the above error in the first place, since how it works is not very clear to me.
Thanks in advance.
First, do not use IplImages; stick with cv::Mat, please.
The equivalent code for that would be:
Mat img_borrowed = Mat( height, width, CV_8UC4, colorBuffer, colorLockedRect.Pitch );
Note that this does not do any allocation of its own - it still points at the Kinect's pixels - so you will have to clone() it:
rgb[rgb_read++] = img_borrowed.clone();
This is the same as setting the flag in your code above to true (it deep-copies the data).
[edit] Maybe it's a good idea to skip the useless 4th channel (less memory required, too), so instead of the above you could do:
cvtColor( img_borrowed, rgb[rgb_read++], CV_BGRA2BGR ); // will make a 'deep copy', too.
So, here's the bummer: if you don't save a deep copy in your array, you'll end up with garbled (and all identical!) images, probably even with undefined behaviour due to the locking/unlocking of the Kinect buffer. If you do copy it (and you must), you will need a lot of memory.
It is unlikely that you can keep 3000 * 1024 * 786 * 4 = 9,658,368,000 bytes in memory, so you'll have to cut it down one way or another.
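To make the buffering part concrete, here is a minimal sketch of the grab-thread/write-thread split described in the question, assuming C++11; the names (frameQueue, writer) are illustrative:

#include <opencv2/opencv.hpp>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

std::queue<cv::Mat> frameQueue;   // frames waiting to be written
std::mutex m;
std::condition_variable cond;
bool done = false;

// Writer thread: pops frames and saves them without blocking the grabber.
void writer()
{
    int n = 0;
    for (;;) {
        std::unique_lock<std::mutex> lock(m);
        cond.wait(lock, [] { return !frameQueue.empty() || done; });
        if (frameQueue.empty() && done)
            break;
        cv::Mat frame = std::move(frameQueue.front());
        frameQueue.pop();
        lock.unlock();            // write to disk without holding the lock
        cv::imwrite(cv::format("frame_%05d.png", n++), frame);
    }
}

// In the grab loop: push a deep copy so the Kinect can reuse its buffer.
//   { std::lock_guard<std::mutex> lock(m); frameQueue.push(img_borrowed.clone()); }
//   cond.notify_one();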

Create video from rendered OpenGL frames

I am searching for a way to create a video from a series of frames I have rendered with OpenGL and transferred to RAM as an int array using the glGetTexImage function. Is it possible to do this directly in RAM (~10 seconds of video), or do I have to save each frame to the hard disk and encode the video afterwards?
I found this sample http://cekirdek.pardus.org.tr/~ismail/ffmpeg-docs/api-example_8c-source.html in another SO question, but is this still the best way to do it?
Today I got a hint to use this example http://git.videolan.org/?p=ffmpeg.git;a=blob;f=doc/examples/decoding_encoding.c;h=cb63294b142f007cf78436c5d81c08a49f3124be;hb=HEAD to see how H.264 encoding could be achieved. The problem when you want to encode frames rendered by OpenGL is that they are in RGB(A), while most codecs require YUV. For the conversion you can use swscale (ffmpeg) - an example of how to use it can be found here: http://web.me.com/dhoerl/Home/Tech_Blog/Entries/2009/1/22_Revised_avcodec_sample.c.html
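For reference, a short sketch of that swscale conversion, assuming the frames arrive as tightly packed RGBA from glGetTexImage (modern libswscale API; the destination YUV420P buffers dstData/dstStride are assumed to be allocated elsewhere, e.g. with av_image_alloc):

extern "C" {
#include <libswscale/swscale.h>
#include <libavutil/pixfmt.h>
}
#include <cstdint>

// One conversion context can be reused for every frame of the same size.
SwsContext *ctx = sws_getContext(width, height, AV_PIX_FMT_RGBA,
                                 width, height, AV_PIX_FMT_YUV420P,
                                 SWS_BILINEAR, NULL, NULL, NULL);

const uint8_t *srcSlice[1] = { rgbaPixels };   // one packed source plane
int srcStride[1] = { 4 * width };              // 4 bytes per RGBA pixel

// dstData/dstStride describe the YUV420P planes allocated elsewhere.
sws_scale(ctx, srcSlice, srcStride, 0, height, dstData, dstStride);
sws_freeContext(ctx);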
As ananthonline stated, direct encoding of the frames is very CPU-intensive, but you can also write your frames with ffmpeg in the rawvideo format, which supports the rgb24 pixel format, and convert them offline with ffmpeg's command-line tools.
If you want a cross-platform way of doing it, you're probably going to have to use ffmpeg/libavcodec, but on Windows you can write an AVI quite simply using the resources here.