C++ (MFC) using Pylon: How to draw an image which is in an array - c++

Hello, I am working on a Pylon application, and I want to know how to draw an image which is stored in an array.
Basler's Pylon SDK, on memory Image saving
In this link, it shows that I can save image data in an array (I guess) with Pylon::CImageFormatConverter::Convert(Pylon::CPylonImage[i], Pylon::CGrabResultPtr).
But the thing is, I cannot figure out how to draw the images which are in that array.
I think it is probably an easy problem, but I'd appreciate your understanding, since this is my first time doing this.

For quick'n'dirty displaying of Pylon buffers, the Pylon SDK provides a helper class you can use somewhere in your buffer-grabbing loop:
//initialization
CPylonImage img;
CPylonImageWindow wnd;
//somewhere in grabbing loop
img.AttachGrabResultBuffer(ptrGrabResult); // wrap the grab result without copying
wnd.SetImage(img);
wnd.Show();
If you want to utilize it in your own display procedures (via either GDI or 3rd-party libraries like OpenCV etc.), you can get the image buffer with the GetBuffer() method:
CGrabResultPtr ptrGrabResult;
camera.RetrieveResult( 5000, ptrGrabResult, TimeoutHandling_ThrowException);
const uint8_t *pImageBuffer = (uint8_t *) ptrGrabResult->GetBuffer();
Of course, you have to be aware of your target pixel alignment, and optionally use CImageFormatConverter to get a proper output pixel format of your choice.
CImageFormatConverter documentation

Related

How to generate a QVideoFrame from PylonImage buffer

I have a stream of Pylon images that I would like to display to a user in a QML app. How can I convert the CPylonImage to a QVideoFrame so I can display it?
I am using PixelType_YUV422planar, since it is supported by both Pylon images and QVideoFrames.
Yet I'm clueless about how to get the QVideoFrame from the Pylon image.
I will experiment a bit with memcpy, but I would like to know if there's any other way.
Edit:
Copying the buffer of the CPylonImage to the QVideoFrame using memcpy results in a distorted image.
The memcpy approach works very well. Please refer to:
https://blog.katastros.com/a?ID=9f708708-c5b3-4cb3-bbce-400cc8b8000c
The interesting code snippet:
QVideoFrame f(size, QSize(width, height), width, QVideoFrame::Format_YUV420P);
if (f.map(QAbstractVideoBuffer::WriteOnly)) {
    memcpy(f.bits(), data, size);
    f.setStartTime(0);
    f.unmap();
    emit newFrameAvailable(f);
}
I made the memcpy example work myself for a v4l2 video capture; however, memcpy is too costly: it causes my frame rate to drop from 34 fps to 6 fps, and CPU usage is at 100% (4K live video).
Could you find any alternatives to memcpy?
Thanks

How to access individual frames of an animated GIF loaded into a ID3D11ShaderResourceView?

I used CreateWICTextureFromFile() from DirectXTK to load an animated GIF texture.
ID3D11Resource* Resource;
ID3D11ShaderResourceView* View;
hr = CreateWICTextureFromFile(d3dDevice, L"sample.gif",
&Resource, &View);
Then I displayed it on an ImageButton in dear IMGUI library:
ImGui::ImageButton((void*)View, ImVec2(width, height));
But it only displays a still image (the first frame of the GIF file).
I think I have to give it the texture of each frame separately. But I don't know how. Can you give me a hint?
The CreateWICTextureFromFile function in DirectX Tool Kit (a.k.a. the 'light-weight' version in the WICTextureLoader module) only loads a single 2D texture, not multi-frame images like animated GIF or TIFF.
The DirectXTex function LoadFromWICFile can load multiframe images if you give it the WIC_FLAGS_ALL_FRAMES flag. Because the library is focused on DirectX resources, it will resize them all to match the first image size.
That said, what WIC is going to return to you is a bunch of raw frames. You have to query the metadata from WIC to actually get the animation information, and it's a little complicated to reconstruct. I have a simple implementation in the DirectXTex texassemble tool you can reference here. I focused on converting the animated GIF into a 'flip-book' style 2D texture array, which is quite a bit larger.
The sample I referenced can be found on GitHub

cv::Mat detect PixelFormat

I'm trying to use pictureBox->Image (Windows Forms) to display a cv::Mat image (OpenCV). I want to do that without saving the image as a file (because I want to refresh the image every 100 ms).
I just found this topic here: How to display a cv::Mat in a Windows Form application?
When I use this solution, the image appears to be white only. I guess I picked the wrong PixelFormat.
So how do I figure out the PixelFormat I need? I haven't seen any method in cv::Mat to get info about that. Or does this depend on the image source I used to create the cv::Mat?
Thanks so far :)
Here I took a screenshot. It's not completely white, so I guess there is some color info. But maybe I'm wrong.
Screen
cv::Mat::depth() returns the pixel type, e.g. CV_8U for 8-bit unsigned etc.,
and cv::Mat::channels() returns the number of channels, typically 3 for a colour image.
For 'regular' images OpenCV generally uses 8-bit BGR colour order, which should map directly onto a Windows bitmap or most Windows libs.
So you probably want System.Drawing.Imaging.PixelFormat.Format24bppRgb. I don't know which order (RGB or BGR) it uses, but you can use OpenCV's cv::cvtColor() with CV_BGR2RGB to convert to RGB.
Thanks for the help! The problem was something else:
I just did not allocate memory for the image object, so there was nothing to display.
Using cv::Mat img; in the header file and img = new cv::Mat(120,160,0); in the constructor (ocvcamimg class) got it to work. (ocvcamimg->captureCamMatImg() returns img.)

loadRawData Memory issue in ogre while load opencv frames

I am capturing images in real time using OpenCV, and I want to show these images in the OGRE window as a background. So, for each frame the background will change.
I am trying to use MemoryDataStream along with loadRawData to load the images into an OGRE window, but I am getting the following error:
OGRE EXCEPTION(2:InvalidParametersException): Stream size does not
match calculated image size in Image::loadRawData at
../../../../../OgreMain/src/OgreImage.cpp (line 283)
An image comes from OpenCV with a size of 640x480, and frame->buffer is of type Mat in OpenCV 2.3. Also, the pixel format that I used in OpenCV is CV_8UC3 (i.e., each channel is 8 bits and each pixel contains 3 channels (B8G8R8)).
Ogre::MemoryDataStream* videoStream = new Ogre::MemoryDataStream((void*)frame->buffer.data, 640*480*3, true);
Ogre::DataStreamPtr ptr(videoStream, Ogre::SPFM_DELETE);
ptr->seek(0);
Ogre::Image* image = new Ogre::Image();
image->loadRawData(ptr, 640, 480, Ogre::PF_B8G8R8);
texture->unload();
texture->loadImage(*image);
Why am I always getting this memory error?
Quick idea: maybe memory 4-byte alignment issues?
See Link 1 and
Link 2
I'm not an Ogre expert, but does it work if you use loadDynamicImage instead?
EDIT: Just for grins, try using the Mat fields to set up the buffer:
Ogre::Image* image = new Ogre::Image();
image->loadDynamicImage((uchar*)frame->buffer.data, frame->buffer.cols, frame->buffer.rows, 1, Ogre::PF_B8G8R8); // depth is 1 for a 2D image; the format already implies 3 channels
This will avoid copying the image data, and should let the Mat delete its contents later.
I had similar problems getting image data into OGRE; in my case the data came from ROS (see ros.org). The thing is that your data in frame->buffer is not raw, but has a file header etc.
I think my solution was to search the data stream for the beginning of the image (by finding the appropriate indicator in the data block, e.g. 0x4D 0x00), and inserting the data from that point on.
You would have to find out where in your buffer the header ends and where your data begins.

C/C++ library to blend PNG (including Alpha) with raw ARGB buffer

I have a PNG with an encoded alpha channel that I want to blend with a raw ARGB image in memory that is stored interleaved. The PNG is of a different resolution to the image buffer and needs to be resized accordingly (preferably with interpolation).
Whilst I appreciate it's not particularly difficult to do this by hand (once the PNG image is loaded into an appropriate structure), I was hoping to find a good open source image processing library to do the work for me.
I've looked at a few including:
libGD
libPNG
openCV
ImageMagick
CxImage
Intel Integrated Performance Primitives (IPP)
But none of them seems to handle all the requirements: loading PNGs, resizing the PNG image, alpha-blending into the image data, and handling the ARGB format (as opposed to RGBA).
Performance is a concern so reducing the passes over the image data would be beneficial, especially being able to hold the ARGB data in place rather than having to copy it to a different data structure to perform the blending.
Does anyone know of any libraries that may be able to help or whether I've missed something in one of the above?
You can do this with SDL by using SDL_gfx and SDL_Image.
// load images using SDL_Image
SDL_Surface *image1, *image2;
image1=IMG_Load("front.png");
image2=IMG_Load("back.png");
// blit images onto a surface using SDL_gfx
SDL_gfxBlitRGBA ( image1, rect, screen, rect );
SDL_gfxBlitRGBA ( image2, rect, screen, rect );
You need to pair a file-format library (libPNG or ImageMagick) with an image manipulation library. Boost.GIL would be good here. If you can load the ARGB buffer (4 bytes per pixel) into memory, you can create a GIL image with interleaved_view, reinterpret_casting your buffer pointer to a boost::gil::argb32_ptr_t.
With ImageMagick it is a very easy thing to do using the appendImages function.
Like this:
#include <list>
#include <Magick++.h>
using namespace std;
using namespace Magick;

int main(int /*argc*/, char** /*argv*/)
{
    list<Image> imageList;
    readImages(&imageList, "test_image_anim.gif");
    Image appended;
    appendImages(&appended, imageList.begin(), imageList.end());
    appended.write("appended_image.miff");
    return 0;
}