How to use SDL_MapRGB with SDL 2.0 - c++

I am trying to get a 24-bit color from RGB values. I want to use SDL_MapRGB, but I don't know what the pixel format is. Since it's SDL 2.0, I am using SDL_Window and SDL_Renderer.

SDL_Surface* surface = /* however you created your surface */;
SDL_PixelFormat* myPixelFormat = surface->format;
This is from the page https://wiki.libsdl.org/SDL_PixelFormat, which you'll want to look over for more information.

Take a look at the window (or maybe it's called a "surface"; it's been a while and it was SDL 1.x): it includes a pixel format specification for drawing on that window, which you should use.
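For example, a minimal sketch, assuming you draw to the window's surface rather than through a renderer (the window title and fill color are made up):

SDL_Window* window = SDL_CreateWindow("demo", SDL_WINDOWPOS_CENTERED,
                                      SDL_WINDOWPOS_CENTERED, 640, 480, 0);
SDL_Surface* screen = SDL_GetWindowSurface(window); // the surface belonging to the window

// SDL_MapRGB packs the r, g, b components into that surface's pixel format
Uint32 color = SDL_MapRGB(screen->format, 255, 128, 0);

SDL_FillRect(screen, NULL, color); // e.g. fill the whole surface with it
SDL_UpdateWindowSurface(window);

If you draw through an SDL_Renderer instead, you usually don't need SDL_MapRGB at all: SDL_SetRenderDrawColor takes the r, g, b components directly.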

Related

C++ image loading with SDL2 and SDL_image

I have a question about image loading in SDL2 with the SDL_image library.
Unfortunately, I can't find any information about IMG_LoadTexture in the SDL_image documentation.
So what is the difference between using IMG_LoadTexture to directly load a PNG image as a texture and using IMG_Load to load a surface and then convert it to a texture with SDL_CreateTextureFromSurface? Is there any benefit in using one over the other?
When in doubt, check the source (easily browsable at https://hg.libsdl.org/) - https://hg.libsdl.org/SDL_image/file/ca95d0e31aec/IMG.c#l209 :
SDL_Texture *IMG_LoadTexture(SDL_Renderer *renderer, const char *file)
{
    SDL_Texture *texture = NULL;
    SDL_Surface *surface = IMG_Load(file);
    if (surface) {
        texture = SDL_CreateTextureFromSurface(renderer, surface);
        SDL_FreeSurface(surface);
    }
    return texture;
}
I've just finished writing a GUI in SDL. IMG_Load is the old way to load images in SDL, from SDL 1 I believe. The way it used to work is that you'd have SDL surfaces, merge them together, and blit them to the screen, or blit sections of surfaces onto other surfaces, using masks etc. The problem is that some things, for example drawing lines, now require a renderer.
Renderers pull in the new features of SDL2. This also means that you can't necessarily just blit your surfaces to a texture without converting them first.
So, in summary, if you can get away with using IMG_Load and the old SDL features, do so, because it's more intuitive. If you are planning to draw any lines at all, or anything else that uses the SDL renderer, then you'll need to learn how to convert between surfaces and textures!
Regarding your original question (because I realise I'm not answering it very well), normally it's best to use the right function call, such as IMG_LoadTexture, directly, rather than IMG_Load followed by a conversion to a texture. SDL talks to the hardware directly and has a surprising amount of optimisation. Converting the surface to a texture presumably involves blitting, which means copying a substantial amount of memory.
However, it seems that in this case, at the time of writing, there is absolutely no difference at all: the function IMG_LoadTexture does exactly the same thing.
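In other words, at the time of writing, these two approaches end up doing the same thing (a sketch with error handling omitted; the file name is made up):

// One call:
SDL_Texture* tex1 = IMG_LoadTexture(renderer, "image.png");

// Two calls, same result:
SDL_Surface* surf = IMG_Load("image.png");
SDL_Texture* tex2 = SDL_CreateTextureFromSurface(renderer, surf);
SDL_FreeSurface(surf);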
But once again, check: you might not need textures at all, and if not, you could save yourself some work ;)

Rendering VTK visualization using OpenCV instead

Is it possible to get a rendered frame from a VTK visualization and pass it to OpenCV as an image without actually rendering a VTK window?
Looks like I should be able to follow this answer to get the rendered VTK frame from a window and then pass it to OpenCV code, but I don't want to render the VTK window. (I want to render a PLY mesh using VTK to control the camera pose, then output the rendered view to OpenCV so I can distort it for an Oculus Rift application).
Can I do this using the vtkRenderer class and not the vtkRenderWindow class?
Also, I'm hoping to do this all using the OpenCV VTK module if that is possible.
EDIT: I'm starting to think I should just be doing this with VTK functions alone since there is plenty of attention being paid to VTK and Oculus Rift paired together. I would still prefer to use OpenCV since that side of the code is complete and works nicely already.
You must set your render window to render offscreen, like this:
renderWindow->SetOffScreenRendering( 1 );
Then use a vtkWindowToImageFilter:
vtkSmartPointer<vtkWindowToImageFilter> windowToImageFilter =
    vtkSmartPointer<vtkWindowToImageFilter>::New();
windowToImageFilter->SetInput(renderWindow);
windowToImageFilter->Update();
This is called Offscreen Rendering in VTK. Here is a complete example
You can render the image offscreen as mentioned by El Marce and then convert the image to an OpenCV cv::Mat using:
int dims[3];
windowToImageFilter->GetOutput()->GetDimensions(dims); // dims[0] = width, dims[1] = height
unsigned char* Ptr = static_cast<unsigned char*>(windowToImageFilter->GetOutput()->GetScalarPointer(0, 0, 0));
cv::Mat RGBImage(dims[1], dims[0], CV_8UC3, Ptr);
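Note that VTK hands you the pixels in RGB order with the origin at the bottom-left, while OpenCV conventionally works with top-down BGR images, so you will likely also want something like this (a small sketch; the constant is cv::COLOR_RGB2BGR in OpenCV 3+, CV_RGB2BGR in 2.x):

cv::Mat BGRImage;
cv::cvtColor(RGBImage, BGRImage, cv::COLOR_RGB2BGR); // RGB -> BGR
cv::flip(BGRImage, BGRImage, 0);                     // flip vertically, since VTK's origin is bottom-left

Also note that RGBImage wraps VTK's buffer without copying, so clone it if the data needs to outlive the filter output.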

Convert EMF to BMP (Metafile to Bitmap) using Windows Imaging Component

I have an .emf file that I want to convert to a bitmap in legacy VC++ 6.0 code.
I've been looking through the WIC documentation and I'm surprised I haven't seen a way to do this.
Am I missing something?
If WIC ends up not supporting this, is there a way to programmatically load an .emf file into a CBitmap object?
There's no need for WIC. It's built into the core of Windows itself in the form of PlayEnhMetaFile.
So, to get the picture into a BMP, you select your BMP into a DC, then call PlayEnhMetaFile on that DC, and the result will go into the BMP.
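A rough sketch with plain GDI, assuming a fixed target size (the function and variable names are illustrative):

// Rasterize an .emf file into a device-dependent bitmap.
HBITMAP RenderEmfToBitmap(const char* emfPath, int width, int height)
{
    HENHMETAFILE hEmf = GetEnhMetaFileA(emfPath);
    if (!hEmf) return NULL;

    HDC screenDC = GetDC(NULL);
    HDC memDC = CreateCompatibleDC(screenDC);
    HBITMAP hBmp = CreateCompatibleBitmap(screenDC, width, height);
    HGDIOBJ oldBmp = SelectObject(memDC, hBmp);

    RECT rc = { 0, 0, width, height };
    FillRect(memDC, &rc, (HBRUSH)GetStockObject(WHITE_BRUSH)); // white background

    PlayEnhMetaFile(memDC, hEmf, &rc); // render the metafile into the selected bitmap

    SelectObject(memDC, oldBmp);
    DeleteDC(memDC);
    ReleaseDC(NULL, screenDC);
    DeleteEnhMetaFile(hEmf);
    return hBmp; // caller owns it; attach to a CBitmap or DeleteObject when done
}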
Note that this isn't really converting a metafile into a BMP; it's rendering the metafile into the BMP. That is to say, a metafile is (usually) resolution independent. For example, it may specify a line from logical coordinate (0,0) to (100, 100). When you render that into a BMP, you get the line rasterized at a specific resolution. If you later wanted the same picture at a higher resolution, the metafile could provide it, but the rendering in the BMP couldn't.

Save OpenGL Rendering to Video

I have an OpenGL game, and I want to save what's shown on the screen to a video.
How can I do that? Is there any library or how-to-do-it?
I don't care about compression; I need the most efficient way, so hopefully the FPS won't drop.
EDIT:
It's OpenGL 1.1 and it's running on Mac OS X, though I need it to be portable.
There is certainly great video capture software out there that you could use to capture your screen, even when running a full-screen OpenGL game.
If you are using a newer version of OpenGL, you can use PBOs, as genpfault has mentioned. If you are using legacy OpenGL (version 1.x), here's how you can capture the screen:
glFinish(); // Make sure everything is drawn
glReadBuffer(GL_FRONT);
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glPixelStorei(GL_PACK_ROW_LENGTH, 0);
glPixelStorei(GL_PACK_SKIP_ROWS, 0);
glPixelStorei(GL_PACK_SKIP_PIXELS, 0);
glReadPixels(blx, bly, w, h, mode, GL_UNSIGNED_BYTE, data);
where blx and bly are the bottom-left coordinates of the part of the screen you want to capture (in your case (0, 0)), w and h are the width and height of the box to be captured, mode is the pixel format you want back (for example GL_BGRA), and data points to a buffer large enough to hold the result (w * h * 4 bytes for GL_BGRA). See the reference for glReadPixels for more info.
Writing the captured screen (at your desired rate, for example 24 fps) to a video file is then a simple matter of choosing the file format you want (for example raw video), writing the header of the video, and writing the images (image by image if raw, or image differences in some other format, etc.).
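For instance, a minimal sketch that just appends raw BGRA frames to a file, which a tool such as ffmpeg's rawvideo demuxer can later wrap into a real video (the file name and function name are made up):

#include <cstdio>

// Append one captured frame to an already-open raw-video file.
void writeRawFrame(FILE* out, const unsigned char* pixels, int w, int h)
{
    // Raw video has no per-frame header: just dump w * h * 4 bytes of BGRA.
    fwrite(pixels, 1, (size_t)w * h * 4, out);
}

// Later, for example:
//   ffmpeg -f rawvideo -pixel_format bgra -video_size 640x480 -framerate 24 -i frames.raw output.mp4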
Use Pixel Buffer Objects (PBOs).
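A rough sketch of double-buffered PBO readback, assuming an OpenGL 2.1+ context; handleFrame is a hypothetical callback that passes the pixels to your encoder:

// One-time setup: two pixel-pack buffers, each large enough for one BGRA frame.
GLuint pbo[2];
int pboIndex = 0;

void initCapture(int w, int h)
{
    glGenBuffers(2, pbo);
    for (int i = 0; i < 2; ++i) {
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[i]);
        glBufferData(GL_PIXEL_PACK_BUFFER, w * h * 4, NULL, GL_STREAM_READ);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}

// Call once per frame, after rendering.
void captureFrame(int w, int h)
{
    int next = (pboIndex + 1) % 2;

    // Start an asynchronous copy of the current frame into one PBO...
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[pboIndex]);
    glReadPixels(0, 0, w, h, GL_BGRA, GL_UNSIGNED_BYTE, 0);

    // ...and map the other PBO, which holds the previous frame, so the
    // GPU-to-CPU transfer overlaps with rendering instead of stalling it.
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[next]);
    void* data = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
    if (data) {
        handleFrame(data, w * h * 4); // hypothetical: hand the pixels to the encoder
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    pboIndex = next;
}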

C++ Spin Image Resources

Does anyone know of a good resource that will show me how to load an image with C++ and spin it?
What I mean by spin is an actual animation of the image rotating, not physically rotating the image and saving it.
If I am not clear on what I am asking, please ask for clarification before downvoting.
Thanks
I would definitely default to OpenGL for this type of task: you can load the image into a texture, then either redraw the image at different angles or, better yet, just spin the 'camera' in the OpenGL engine. There are loads of OpenGL tutorials around; a quick Google search will get you everything you need.
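For the "redraw at different angles" approach, here is a legacy-OpenGL sketch, assuming the image has already been uploaded as textureId and a 2D projection is set up:

// Draw a textured quad rotated by 'angle' degrees about its centre.
void drawSpinningImage(GLuint textureId, float angle)
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, textureId);

    glPushMatrix();
    glRotatef(angle, 0.0f, 0.0f, 1.0f); // spin around the z axis

    glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex2f(-0.5f, -0.5f);
        glTexCoord2f(1.0f, 0.0f); glVertex2f( 0.5f, -0.5f);
        glTexCoord2f(1.0f, 1.0f); glVertex2f( 0.5f,  0.5f);
        glTexCoord2f(0.0f, 1.0f); glVertex2f(-0.5f,  0.5f);
    glEnd();

    glPopMatrix();
}

Increment 'angle' a little every frame in your render loop to animate the spin.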
You could use SDL and the extension sdl_image and/or sdl_gfx
On Windows, using GDI+, you could show a rotated image in the following way:
// Requires <gdiplus.h> and GDI+ to be initialised first with GdiplusStartup.
Graphics graphics( GetSafeHwnd() ); // initialize from window handle
// You can construct Image objects from a variety of
// file types including BMP, ICON, GIF, JPEG, Exif, PNG, TIFF, WMF, and EMF.
Image image( L"someimage.bmp" );
graphics.RotateTransform( 30.0f ); // 30 - angle, in degrees
graphics.DrawImage( &image, 0, 0 ); // draw the rotated image
You can read a more detailed explanation here.
A second solution is to use DirectX. You could create a texture from the file and later render it. It is not a trivial solution, but it uses hardware acceleration and will give you the best performance.
On Windows 7 there is a new API available called Direct2D. I have not used it yet, but it looks promising.
Direct2D provides Win32 developers with the ability to perform 2-D graphics rendering tasks with superior performance and visual quality.
I agree with DeusAduro. OpenGL is a good way of doing this.
You can also do this with Qt.
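A hedged Qt sketch (a widget subclass; 'angle' would be advanced by a QTimer that also calls update()):

#include <QWidget>
#include <QPainter>
#include <QImage>

class SpinWidget : public QWidget {
public:
    QImage image;     // load your image into this
    double angle = 0; // current rotation in degrees; drive it from a QTimer
protected:
    void paintEvent(QPaintEvent*) override {
        QPainter p(this);
        p.setRenderHint(QPainter::SmoothPixmapTransform);
        p.translate(width() / 2.0, height() / 2.0); // rotate about the widget centre
        p.rotate(angle);
        p.drawImage(QPointF(-image.width() / 2.0, -image.height() / 2.0), image);
    }
};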
The "roll-your-own" solution is difficult.
I'd suggest looking into WPF - it might have some nice options in an image control.