I'm trying to generate ImageMagick images from SDL pixel data. Here's what the GIF looks like so far. (This GIF is slower than the one below on purpose.)
http://www.starlon.net/images/combo.gif
Here's what it's supposed to look like. Notice that in the image above the pixels seem to be overlaid on top of other pixels.
http://www.starlon.net/images/combo2.gif
Here's where the GIF is actually created.
void DrvSDL::WriteGif() {
    std::list<Magick::Image> gif;
    for (std::list<Magick::Blob>::iterator it = image_.begin(); it != image_.end(); ++it) {
        Magick::Geometry geo(cols_ * pixels.x, rows_ * pixels.y);
        Magick::Image image(*it, geo, 32, "RGB");
        gif.push_back(image);
        LCDError("image");
    }
    for_each(gif.begin(), gif.end(), Magick::animationDelayImage(ani_speed_));
    Magick::writeImages(gif.begin(), gif.end(), gif_file_);
}
And here's where the Blob is packed.
image_.push_back(Magick::Blob(surface_->pixels, rows_ * pixels.y * surface_->pitch));
And here's how I initialize the SDL surface.
surface_ = SDL_SetVideoMode(cols_ * pixels.x, rows_ * pixels.y, 32, SDL_SWSURFACE);
The top image is typically caused by a misaligned buffer. The SDL buffer is probably not DWORD-aligned, and the ImageMagick routines expect the buffer to be aligned on a DWORD boundary. This is very common in bitmap processing. The popular image processing library Leadtools, for example, commonly requires DWORD-aligned data. This is mostly the case with monochrome and 32-bit color, but it can be the case for any color depth.
What you need to do is write out a DWORD-aligned bitmap from your SDL buffer, or at least create a buffer that is DWORD-aligned.
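One way to do that, sketched under the assumption that the surface is the 32-bit SDL_SWSURFACE initialized above (needs <vector> and <cstdint>), is to repack the surface row by row into a tightly packed buffer, dropping any per-row pitch padding before the Blob is built:

// Copy each row's visible pixels only, skipping the pitch padding,
// so the Blob holds tightly packed 32-bit pixels.
std::vector<uint8_t> packed;
packed.reserve(size_t(surface_->w) * surface_->h * 4);

SDL_LockSurface(surface_);  // a no-op for most software surfaces, but safe
const uint8_t *src = static_cast<const uint8_t *>(surface_->pixels);
for (int y = 0; y < surface_->h; ++y)
    packed.insert(packed.end(),
                  src + size_t(y) * surface_->pitch,
                  src + size_t(y) * surface_->pitch + size_t(surface_->w) * 4);
SDL_UnlockSurface(surface_);

// Magick::Blob copies the data, so 'packed' may go out of scope afterwards.
image_.push_back(Magick::Blob(packed.data(), packed.size()));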
The ImageMagick API documentation may be able to help clarify this further.
Another thing you might want to try is clearing the buffers to make sure there isn't any stale data in them. I don't really know IM's API, but pixels overlaid on top of other pixels usually indicate a dirty buffer.
Related
Hello, I am working on a Pylon application, and I want to know how to draw an image that is stored in an array.
Basler's Pylon SDK, on-memory image saving
In this link, it shows that I can save image data in an array (I guess): Pylon::CImageFormatConverter::Convert(Pylon::CPylonImage[i],Pylon::CGrabResultPtr)
But the thing is, I cannot figure out how to draw those images from the array.
I think it should be an easy problem, but I'd appreciate your understanding, because it's my first time doing this.
For quick'n'dirty displaying of Pylon buffers, there is a helper class available in the Pylon SDK that you can drop into your buffer-grabbing loop:
// initialization
CPylonImage img;
CPylonImageWindow wnd;

// somewhere in the grabbing loop:
img.AttachGrabResultBuffer(ptrGrabResult); // attach the grabbed buffer (no copy)
wnd.SetImage(img);
wnd.Show();
If you want to use the buffer in your own display procedures (via either GDI or 3rd-party libraries like OpenCV, etc.), you can get the image buffer with the GetBuffer() method:
CGrabResultPtr ptrGrabResult;
camera.RetrieveResult( 5000, ptrGrabResult, TimeoutHandling_ThrowException);
const uint8_t *pImageBuffer = (uint8_t *) ptrGrabResult->GetBuffer();
Of course, you have to be aware of your target pixel alignment and optionally use CImageFormatConverter for proper output pixel format of your choice.
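For instance, a minimal sketch assuming OpenCV as the display library (the converter usage follows Basler's standard samples; cv::Mat wraps the converted buffer without copying it):

// Convert the grab result to packed BGR8, then view it as an OpenCV Mat.
CImageFormatConverter converter;
converter.OutputPixelFormat = PixelType_BGR8packed;

CPylonImage converted;
converter.Convert(converted, ptrGrabResult);  // ptrGrabResult from RetrieveResult above

cv::Mat frame(converted.GetHeight(), converted.GetWidth(), CV_8UC3,
              (void *) converted.GetBuffer());
cv::imshow("Pylon frame", frame);
cv::waitKey(1);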
CImageFormatConverter documentation
What's the alternative to D3DXCreateTextureFromFileInMemory and D3DXCreateTextureFromFileEx in D3D11? Simply put, how can I load an image into a texture buffer (it looks like the ID3D11Texture2D data type) so that I'm able to render it?
That's kind of a broad question, but hopefully I can help point you in the right direction.
At the highest level, "loading" a texture involves the following steps:
Place image data in some form into memory (loaded from a file, generated algorithmically, etc.).
Convert the image data into the raw form required by the texture. This will depend on the texture format you require. For example, most color (albedo) textures will be in the DXGI_FORMAT_R8G8B8A8_UNORM_SRGB format. This step may involve decompression of a source image file (e.g. if it's JPEG or PNG), and possibly some form of conversion if the formats use different data types, etc.
(Optional) generate the mip chain for the texture. Generally, having a full mip chain is a good idea for visual and performance reasons.
Copy the raw pixel data into the texture. Format conversion could be done during this step (it really depends on the implementation).
For the conversion part, there are plenty of libraries that will load and convert image files to raw pixel data. One such library is the Windows Imaging Component (WIC). There are others out there, too - a Google search will yield lots of results.
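As a rough sketch of the first two steps with WIC (error handling omitted; assumes COM is already initialized via CoInitializeEx, and the output format here is hard-coded to 32-bit RGBA):

#include <wincodec.h>
#include <vector>
#pragma comment(lib, "windowscodecs.lib")

// Decode any WIC-supported file and convert it to tightly packed 32-bit RGBA.
std::vector<BYTE> LoadImageRGBA(const wchar_t *path, UINT &w, UINT &h)
{
    IWICImagingFactory *factory = nullptr;
    CoCreateInstance(CLSID_WICImagingFactory, nullptr, CLSCTX_INPROC_SERVER,
                     IID_PPV_ARGS(&factory));

    IWICBitmapDecoder *decoder = nullptr;
    factory->CreateDecoderFromFilename(path, nullptr, GENERIC_READ,
                                       WICDecodeMetadataCacheOnDemand, &decoder);

    IWICBitmapFrameDecode *frame = nullptr;
    decoder->GetFrame(0, &frame);

    // Convert whatever the source format is into 32bpp RGBA.
    IWICFormatConverter *conv = nullptr;
    factory->CreateFormatConverter(&conv);
    conv->Initialize(frame, GUID_WICPixelFormat32bppRGBA,
                     WICBitmapDitherTypeNone, nullptr, 0.0,
                     WICBitmapPaletteTypeCustom);

    conv->GetSize(&w, &h);
    std::vector<BYTE> pixels(size_t(w) * h * 4);
    conv->CopyPixels(nullptr, w * 4, UINT(pixels.size()), pixels.data());

    conv->Release(); frame->Release(); decoder->Release(); factory->Release();
    return pixels;
}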
For mip generation, you can do this yourself, or some of the third-party imaging libraries will do it for you. D3DX will also generate mips. Another option is to have D3D generate them for you (not ideal, but it can work as a stop-gap) via the ID3D11DeviceContext::GenerateMips call.
To copy raw pixel data into the texture, assuming it's static (unchanging, or "immutable") data, you should create your texture like so:
D3D11_TEXTURE2D_DESC tdesc;
// ...
// Fill out width, height, mip levels, format, etc...
// ...
tdesc.Usage = D3D11_USAGE_IMMUTABLE;
tdesc.BindFlags = D3D11_BIND_SHADER_RESOURCE; // Add D3D11_BIND_RENDER_TARGET if you want to go
// with the auto-generate mips route.
tdesc.CPUAccessFlags = 0;
tdesc.MiscFlags = 0; // or D3D11_RESOURCE_MISC_GENERATE_MIPS for auto-mip gen.
D3D11_SUBRESOURCE_DATA srd; // (or an array of these if you have more than one mip level)
srd.pSysMem = pointer_to_raw_pixel_data; // This data should be in raw pixel format
srd.SysMemPitch = width_of_row_in_bytes; // Sometimes pixel rows may be padded so this might not be as simple as width * pixel_size_in_bytes.
srd.SysMemSlicePitch = 0;
ID3D11Texture2D * texture;
pDevice->CreateTexture2D(&tdesc, &srd, &texture);
This will create the texture and populate it with your pixel data in one go. You can also create the texture with the D3D11_USAGE_DYNAMIC usage flag (plus D3D11_CPU_ACCESS_WRITE) and use the ID3D11DeviceContext::Map/Unmap calls to fill it after creation, or with D3D11_USAGE_DEFAULT and ID3D11DeviceContext::UpdateSubresource (useful if you'll be changing the texture contents occasionally).
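To actually render with the texture you will also need a shader resource view; a minimal sketch (pContext here is assumed to be your ID3D11DeviceContext):

// Create a view over the texture so it can be bound to a shader stage.
ID3D11ShaderResourceView *srv = nullptr;
HRESULT hr = pDevice->CreateShaderResourceView(texture, nullptr, &srv);
// A null desc makes the view match the texture's own format and full mip range.
if (SUCCEEDED(hr)) {
    // If you went the auto-mip route (D3D11_RESOURCE_MISC_GENERATE_MIPS):
    // pContext->GenerateMips(srv);
    pContext->PSSetShaderResources(0, 1, &srv);  // bind to pixel shader slot 0
}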
This is a kind of rough overview of the basics - there's a ton of stuff out there on the web going into the dirty details of how all this works, best practices, etc. I think the best thing I can recommend is to find some sample code and experiment with it.
I have an animation/sprite created using SDL2. The animation works fine when it is being rendered to a screen. But now I also want it to be recorded into a video file (locally stored). For this, I am planning on using FFmpeg APIs, to which I'll be sending a raw RGB pixel data array.
My problem is with fetching the data from SDL2 APIs.
What I've tried is:
// From http://stackoverflow.com/questions/30157164/sdl-saving-window-as-bmp
SDL_Surface *sshot = SDL_CreateRGBSurface(0, 750, 750, 32, 0x00ff0000, 0x0000ff00, 0x000000ff, 0xff000000);
SDL_RenderReadPixels(gRenderer, NULL, SDL_PIXELFORMAT_ARGB8888, sshot->pixels, sshot->pitch);
// From https://wiki.libsdl.org/SDL_RWFromMem
char fName[50];
sprintf(fName, "/tmp/a/ss%03d.bmp", fileCnt);
char bitmap[310000];
SDL_RWops *rw = SDL_RWFromMem(bitmap, sizeof(bitmap));
SDL_SaveBMP_RW(sshot, rw, 1);
The above does not work. But dumping a single frame to a file with the following code works:
SDL_SaveBMP(sshot, "/tmp/alok1/ss.bmp")
This obviously is not an acceptable solution - writing out thousands of BMPs and then using FFmpeg from the command line to create a video.
What am I doing wrong? How do you extract data from SDL_RWops? Is the use of SDL_RWFromMem the right approach to my problem statement?
Your buffer is too small to fit the specified image, hence it cannot be saved there. Increase the buffer size to at least the actual image size plus the BMP header (width*height*bytes_per_pixel + 54), and note that row padding needs to be counted too (what SDL_Surface refers to as pitch).
Note that taking 3 MB from the stack may get you dangerously close to an overflow (it could still be fine, depending on what happened in the functions prior to the one in question). Chain-calling several functions that each take a big chunk of stack can deplete it very quickly. It is likely you don't really need any extra buffer or BMP conversion at all - e.g. create an AVFrame and copy the pixels directly into it from the SDL_Surface.
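A minimal sketch of that direct-copy idea, assuming FFmpeg's libavutil is available (SDL_PIXELFORMAT_ARGB8888 is BGRA in memory on a little-endian machine):

extern "C" {
#include <libavutil/frame.h>
#include <libavutil/pixfmt.h>
}
#include <SDL.h>
#include <cstring>
#include <cstdint>

// Copy an SDL surface into a freshly allocated AVFrame, row by row,
// honoring both SDL's pitch and FFmpeg's linesize (either may be padded).
AVFrame *SurfaceToFrame(SDL_Surface *s)
{
    AVFrame *frame = av_frame_alloc();
    if (!frame)
        return nullptr;
    frame->width  = s->w;
    frame->height = s->h;
    frame->format = AV_PIX_FMT_BGRA;
    if (av_frame_get_buffer(frame, 0) < 0) {
        av_frame_free(&frame);
        return nullptr;
    }
    const uint8_t *src = static_cast<const uint8_t *>(s->pixels);
    for (int y = 0; y < s->h; ++y)
        std::memcpy(frame->data[0] + y * frame->linesize[0],
                    src + y * s->pitch,
                    size_t(s->w) * 4);
    return frame;
}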
Also, in terms of performance, this kind of readback from the renderer is not great (but the video compression itself is probably much heavier anyway).
I have a PNG with an encoded alpha channel that I want to blend with a raw ARGB image in memory that is stored interleaved. The PNG is of a different resolution to the image buffer and needs to be resized accordingly (preferably with interpolation).
Whilst I appreciate it's not particularly difficult to do this by hand (once the PNG image is loaded into an appropriate structure), I was hoping to find a good open source image processing library to do the work for me.
I've looked at a few including:
libGD
libPNG
openCV
ImageMagick
CxImage
Intel Integrated Performance Primitives (IPP)
But none of them seems to handle all the requirements: loading PNGs, resizing the PNG image, alpha-blending into the image data, and handling the ARGB format (as opposed to RGBA).
Performance is a concern so reducing the passes over the image data would be beneficial, especially being able to hold the ARGB data in place rather than having to copy it to a different data structure to perform the blending.
Does anyone know of any libraries that may be able to help or whether I've missed something in one of the above?
You can do this with SDL by using SDL_gfx and SDL_Image.
// load images using SDL_Image
SDL_Surface *image1, *image2;  // note: both must be pointers
image1 = IMG_Load("front.png");
image2 = IMG_Load("back.png");
// blit images onto a surface using SDL_gfx
// ('screen' and 'rect' are assumed to be declared elsewhere)
SDL_gfxBlitRGBA(image1, rect, screen, rect);
SDL_gfxBlitRGBA(image2, rect, screen, rect);
You need to pair a file-format library (libPNG or ImageMagick) with an image manipulation library. Boost.GIL would be good here. If you can load the ARGB buffer (4 bytes per pixel) into memory, you can create a GIL image with interleaved_view, reinterpret_casting your buffer pointer to a boost::gil::argb32_ptr_t.
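A sketch of that wrapping step, assuming 8-bit channels (GIL's argb8_* typedefs) for the 4-byte-per-pixel buffer:

#include <boost/gil.hpp>  // <boost/gil/gil_all.hpp> in older Boost releases
using namespace boost::gil;

// Wrap an existing w*h interleaved ARGB buffer in place - no copy is made.
void process_argb(unsigned char *buf, std::ptrdiff_t w, std::ptrdiff_t h)
{
    argb8_view_t view = interleaved_view(
        w, h,
        reinterpret_cast<argb8_ptr_t>(buf),
        w * 4);  // row stride in bytes (tightly packed rows)
    // 'view' is readable and writable through GIL algorithms,
    // so the blend can happen directly in the original buffer.
}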
With ImageMagick it is a very easy thing to do using the appendImages function.
Like this :
#include <list>
#include <Magick++.h>
using namespace std;
using namespace Magick;
int main(int /*argc*/, char ** /*argv*/)
{
    list<Image> imageList;
    readImages(&imageList, "test_image_anim.gif");
    Image appended;
    appendImages(&appended, imageList.begin(), imageList.end());
    appended.write("appended_image.miff");
    return 0;
}
I have a pointer to a COLORREF buffer, something like: COLORREF* buf = new COLORREF[x*y];
A subroutine fills this buffer with color-information. Each COLORREF represents one pixel.
Now I want to draw this buffer to a device context. My current approach works, but it is pretty slow (~200 ms per draw, depending on the size of the image):
for (size_t i = 0; i < pixelpos; ++i)
{
// Get X and Y coordinates from 1-dimensional buffer.
size_t y = i / wnd_size.cx;
size_t x = i % wnd_size.cx;
::SetPixelV(hDC, x, y, buf[i]);
}
Is there a way to do this faster; all at once, not one pixel after another?
I am not really familiar with GDI. I have heard about a lot of APIs like CreateDIBitmap(), BitBlt(), HBITMAP, CImage, and all that stuff, but I have no idea how to apply them. It all seems pretty complicated...
MFC is also welcome.
Any ideas?
Thanks in advance.
(Background: the subroutine I mentioned above is an OpenCL kernel - the GPU calculates a Mandelbrot image and saves it in the COLORREF buffer.)
EDIT:
Thank you all for your suggestions. The answers (and links) gave me some insight into Windows graphics programming. The performance is now acceptable - semi-realtime scrolling into the Mandelbrot set works :)
I ended up with the following solution (MFC):
...
CDC dcMemory;
dcMemory.CreateCompatibleDC(pDC);
CBitmap mandelbrotBmp;
mandelbrotBmp.CreateBitmap(clientRect.Width(), clientRect.Height(), 1, 32, buf);
CBitmap* oldBmp = dcMemory.SelectObject(&mandelbrotBmp);
pDC->BitBlt(0, 0, clientRect.Width(), clientRect.Height(), &dcMemory, 0, 0, SRCCOPY);
dcMemory.SelectObject(oldBmp);
mandelbrotBmp.DeleteObject();
So basically CBitmap::CreateBitmap() saved me from using the raw API (which I still do not fully understand). The example in the documentation of CDC::CreateCompatibleDC was also helpful.
My Mandelbrot set is now blue - with SetPixelV() it was red. I guess that is because COLORREF stores channels as 0x00BBGGRR while a 32-bpp bitmap pixel is laid out as 0x00RRGGBB, so CBitmap::CreateBitmap() effectively swaps my red and blue channels; not really important.
I might try the OpenGL suggestion because it would have been the much more logical choice and I wanted to try OpenCL under Linux anyway.
Under the circumstances, I'd probably use a DIB section (which you create with CreateDIBSection). A DIB section is a bitmap that allows you to access the contents directly as an array, but still use it with all the usual GDI functions.
I think that'll give you the best performance of anything GDI-based. If you need better, then @Kornel is basically correct - you'll need to switch to something with more direct support for hardware acceleration (DirectX or OpenGL - though IMO, OpenGL is a much better choice for the job than DirectX).
Given that you're currently doing the calculation in OpenCL and depositing the output in a color buffer, OpenGL would be the really obvious choice. In particular, you can have OpenCL deposit the output in an OpenGL texture, then you have OpenGL draw a quad using that texture. Alternatively, since you're just putting the output on screen anyway, you could just do the calculation in an OpenGL fragment shader (or, of course, a DirectX pixel shader), so you wouldn't put the output into memory off-screen just so you can copy the result onto the screen. If memory serves, the Orange book has a Mandelbrot shader as one of its examples.
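A minimal sketch of the DIB-section route (untested; width, height, buf and hDC are assumed from the question's context, and note the COLORREF channel-order caveat from the edit above):

BITMAPINFO bmi = {};
bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
bmi.bmiHeader.biWidth       = width;
bmi.bmiHeader.biHeight      = -height;        // negative height = top-down rows
bmi.bmiHeader.biPlanes      = 1;
bmi.bmiHeader.biBitCount    = 32;
bmi.bmiHeader.biCompression = BI_RGB;

void *bits = nullptr;
HBITMAP hDib = CreateDIBSection(hDC, &bmi, DIB_RGB_COLORS, &bits, nullptr, 0);

// Fill the section directly; remember COLORREF is 0x00BBGGRR while a
// 32-bpp DIB pixel is 0x00RRGGBB, so swap R and B during (or after) the copy.
memcpy(bits, buf, size_t(width) * height * 4);

HDC memDC = CreateCompatibleDC(hDC);
HGDIOBJ old = SelectObject(memDC, hDib);
BitBlt(hDC, 0, 0, width, height, memDC, 0, 0, SRCCOPY);
SelectObject(memDC, old);
DeleteDC(memDC);
DeleteObject(hDib);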
Yes, sure, that's slow. You are making a round-trip through the kernel and the video device driver for each individual pixel. You make it fast by drawing to memory first, then updating the screen in one fell swoop. That takes, say, CreateDIBitmap, CreateCompatibleDC() and BitBlt().
This isn't a good time and place for an extensive tutorial on graphics programming. It is well covered by any introductory text on GDI and/or Windows API programming. Everything you'll need to know you can find in Petzold's seminal Programming Windows.
Since you already have an array of pixels, you can directly use BitBlt to transfer it to the window's DC. See this link for a partial example:
http://msdn.microsoft.com/en-us/library/aa928058.aspx
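For a pixel array specifically, SetDIBitsToDevice can also copy the buffer straight to the DC without an intermediate bitmap. A sketch reusing buf and wnd_size from the question, with the same top-down 32-bpp BITMAPINFO as in the DIB-section sketch above (and the same COLORREF channel-order caveat):

SetDIBitsToDevice(hDC, 0, 0, wnd_size.cx, wnd_size.cy,  // destination rect
                  0, 0,                                 // source origin
                  0, wnd_size.cy,                       // first scan line, line count
                  buf, &bmi, DIB_RGB_COLORS);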