SDL_SetVideoMode vs SDL_CreateRGBSurface - sdl

Should be a simple question for SDL experts. I am confused about the following two seemingly equivalent functions and wonder when to use which:
SDL_Surface * SDL_SetVideoMode (int width, int height, int bpp, Uint32 flags);
SDL_Surface * SDL_CreateRGBSurface (Uint32 flags,
int width, int height, int depth,
Uint32 Rmask, Uint32 Gmask, Uint32 Bmask, Uint32 Amask);
What's the fundamental difference between the above two?
It's mentioned here that SDL_CreateRGBSurface has to be called after SDL_SetVideoMode. Why is that so?

They are completely different functions.
SDL_SetVideoMode creates the video surface (a.k.a. the application screen) and shows it to the user.
SDL_CreateRGBSurface creates an empty off-screen surface.
After a successful call to SDL_SetVideoMode, a window is shown to the user and you have the video surface, i.e. the screen surface (returned by the function, or obtainable later via SDL_GetVideoSurface).
SDL_CreateRGBSurface, by contrast, simply creates an empty surface that you can draw on.
A typical usage example: your application starts and you initialize the video, then you create an empty surface and manipulate it somehow, and finally you blit it onto the video surface so the user sees the result (remember to flip the screen surface with SDL_Flip).
It's important to know what an SDL_Surface is. Since you didn't ask, I assume you already know.

SDL_SetVideoMode creates a window. This surface will be visible on the screen.
SDL_CreateRGBSurface creates an off-screen image. It is used, for example, when you load images from disk; you need to blit them to the screen surface in order to see them.

Related

OPENGL glReadPixels how to get larger window content?

I want to get the window content from OpenGL into OpenCV. I use the code below:
unsigned char* buffer = new unsigned char[ Win_width * Win_height * 4];
glReadPixels(0, 0, Win_width, Win_height, GL_BGRA, GL_UNSIGNED_BYTE, buffer);
cv::Mat image_flip(Win_height, Win_width, CV_8UC4, buffer);
When the window size is small, everything is fine.
But when Win_width and Win_height are larger than 1080p, the image is resized to 1080p and the remaining area is padded with grey.
Render to and read from a FBO so you don't run afoul of the pixel ownership test:
Because the Default Framebuffer is owned by a resource external to
OpenGL, it is possible that particular pixels of the default
framebuffer are not owned by OpenGL. And therefore, OpenGL cannot
write to those pixels. Fragments aimed at such pixels are therefore
discarded at this stage of the pipeline.
Generally speaking, if the window you are rendering to is partially
obscured by another window, the pixels covered by the other window are
no longer owned by OpenGL and thus fail the ownership test. Any
fragments that cover those pixels will be discarded. This also
includes framebuffer clearing operations.
Note that this test only affects rendering to the default framebuffer.
When rendering to a Framebuffer Object, all fragments pass this test.
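Once the pixels are safely read back (from an FBO or otherwise), note that glReadPixels returns the bottom row first while cv::Mat stores the top row first, so the readback usually needs a vertical flip before handing it to OpenCV. A minimal sketch of that flip on a plain buffer (no GL calls; the function name is mine):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Reverse the row order of a tightly packed pixel buffer, turning
// OpenGL's bottom-up readback into the top-down layout cv::Mat expects.
std::vector<uint8_t> flip_rows(const std::vector<uint8_t>& src,
                               int width, int height, int channels) {
    std::vector<uint8_t> dst(src.size());
    const size_t stride = static_cast<size_t>(width) * channels;
    for (int y = 0; y < height; ++y) {
        const uint8_t* from = src.data() + static_cast<size_t>(y) * stride;
        uint8_t* to = dst.data() + static_cast<size_t>(height - 1 - y) * stride;
        std::copy(from, from + stride, to);
    }
    return dst;
}
```

Alternatively, cv::flip(image, image, 0) does the same job after constructing the Mat.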

SDL_RenderPresent vs SDL_UpdateWindowSurface

I have successfully created and drawn both a bitmap image and drawn a green line using a renderer to my SDL window. The problem is I am unsure of how to do both at once on the same window.
void draw::image(){
SDL_Surface *bmp = SDL_LoadBMP("C:\\Users\\Joe\\Documents\\Visual Studio 2013\\Projects\\SDL_APP1\\map1.bmp");
SDL_BlitSurface(bmp, 0, SDL_GetWindowSurface(_window), 0);
SDL_Renderer * renderer = SDL_CreateRenderer(_window, -1, 0);
// draw green line across screen
SDL_SetRenderDrawColor(renderer, 0, 255, 0, 255);
SDL_RenderDrawLine(renderer, 0, 0, 640, 320);
SDL_RenderPresent(renderer);
SDL_UpdateWindowSurface(_window);
SDL_Delay(20000);
// free resources
SDL_DestroyRenderer(renderer);
SDL_DestroyWindow(_window);
}
This version of my code draws the bmp file to the window because SDL_UpdateWindowSurface(); is after SDL_RenderPresent(); however when I flip these it draws a green line to the screen. How would I draw the green line over my BMP?
If you store your images in RAM and use the CPU for rendering (this is called software rendering), you use SDL_UpdateWindowSurface.
By calling this function you tell the CPU to update the screen using software rendering.
You store such images in RAM as SDL_Surface objects and issue draw calls with SDL_BlitSurface, but software rendering is inefficient.
SDL_UpdateWindowSurface is equivalent to the SDL 1.2 API SDL_Flip().
On the other side, when you use the GPU to render textures and store them on the GPU (this is called hardware-accelerated rendering), which you should, you use SDL_RenderPresent.
This function tells the GPU to render to the screen.
You store textures on the GPU as SDL_Texture objects.
With this approach you issue draw calls using SDL_RenderCopy, or SDL_RenderCopyEx if you want transformations.
Therefore, when using SDL's rendering API, one does all drawing intended for the frame, and then calls this function once per frame to present the final drawing to the user.
You should use hardware rendering; it's far more efficient than software rendering! Even if the user running the program has no GPU (which is rare, because most CPUs have an integrated GPU), SDL will fall back to software rendering by itself!
By the way, using the SDL_image library (made by the creators of SDL) you can load an image directly as an SDL_Texture, without first loading it as an SDL_Surface and converting it. You should use SDL_image anyway, because it supports several image formats, not just BMP like pure SDL.
Just use IMG_LoadTexture from SDL_image!
You cannot use both methods at the same time and must choose one or the other. I would recommend going with SDL_Renderer. Create an SDL_Texture from your SDL_Surface and render it with SDL_RenderCopy.

Get pixel info from SDL2 texture

I'm currently writing a simple program using SDL2 where you can drag some shapes (square, circle, triangle, etc) into a canvas and rotate them and move them around. Each shape is represented visually by a SDL texture that is created from a PNG file (using the IMG_LoadTexture function from the SDL_image library).
The thing is that I would like to know whether a certain pixel from the texture is transparent, so that when someone clicks on the image I could determine if I have to do some action (because the click is on the non transparent area) or not.
Because this is a school assignment I'm facing some restrictions: I can only use SDL2 libraries, and I can't precompute a map telling me whether each pixel is transparent, because the images are selected dynamically. Furthermore, I thought about creating SDL surfaces from the original images for this task, but since the shapes are rotated through the texture, that wouldn't work.
You can accomplish this by using Render Targets.
SDL_SetRenderTarget(renderer, target);
... render your textures rotated, flipped, translated using SDL_RenderCopyEx
SDL_RenderReadPixels(renderer, rect, format, pixels, pitch);
With the last step you read the pixels from the render target using SDL_RenderReadPixels and then you have to figure out if the alpha channel of the desired pixel is zero (transparent) or not. You can read just the one pixel you want from the render target, or the whole texture, which option you take depends on the number of hit tests you have to perform, how often the texture is rotated/moved around, etc.
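Once SDL_RenderReadPixels has filled the buffer, the hit test itself is just bit math. A sketch, assuming the pixels were requested as SDL_PIXELFORMAT_RGBA8888 (in that packing, red sits in the top byte and alpha in the low byte; for other formats use SDL_GetRGBA instead):

```cpp
#include <cstdint>

// For a pixel read back as SDL_PIXELFORMAT_RGBA8888, the 32-bit value
// packs R in the highest byte and A in the lowest byte, so the
// transparency check reduces to testing the low 8 bits.
bool is_transparent(uint32_t rgba8888_pixel) {
    return (rgba8888_pixel & 0xFFu) == 0;
}
```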
You need to create your texture using the SDL_TEXTUREACCESS_STREAMING flag and lock your texture before being able to manipulate pixel data. To tell if a certain pixel is transparent in a texture make sure that you call
SDL_SetTextureBlendMode(t, SDL_BLENDMODE_BLEND);
this allows the texture to recognize an alpha channel.
Try something like this:
SDL_Texture *t;
int main()
{
// initialize SDL, window, renderer, texture
int pitch, w, h;
void *pixels;
SDL_SetTextureBlendMode(t, SDL_BLENDMODE_BLEND);
SDL_QueryTexture(t, NULL, NULL, &w, &h);
SDL_LockTexture(t, NULL, &pixels, &pitch);
Uint32 *upixels = (Uint32 *) pixels;
// you will need to know the color of the pixel even if it's transparent
Uint32 transparent = SDL_MapRGBA(SDL_GetWindowSurface(window)->format, r, g, b, 0x00);
// inspect pixels (this loop assumes the rows are tightly packed, i.e. pitch == w * 4)
for (int i = 0; i < w * h; i++)
{
if (upixels[i] == transparent)
{
// do stuff
}
}
// upixels points directly into the locked pixel data, so any writes made
// through it are already in place; no copy back is needed before unlocking
SDL_UnlockTexture(t);
return 0;
}
If you have any questions, please feel free to ask, although I am no expert on this topic.
For further reading and tutorials, check out http://lazyfoo.net/tutorials/SDL/index.php. Tutorial 40 deals with pixel manipulation specifically.
I apologize if there are any errors in method names (I wrote this off the top of my head).
Hope this helped.

Read pixel on game (OpenGL or DirectX) screen

I want to read the color of a pixel at a given position in a game (so OpenGL or DirectX), by a third-party application (this is not my game).
I tried to do it in C#; the code works great for reading the color of the desktop, of windows, etc., but when I launch the game, I only get #000000, a black pixel. I think this is because I don't read at the correct "location", or something like that.
Does someone know how to do this? I mentioned C# but C/C++ would be fine too.
In basic steps: grab the rendered screen with the appropriate OpenGL or DirectX command if the game is fullscreen.
For example with glReadPixels you can get the pixel value at window relative pixel coordinates from current bound framebuffer.
If you are not full screen, you must combine the window position with the window relative pixel coordinates.
A loose example:
glBindFramebuffer(GL_FRAMEBUFFER, yourScreenFramebuffer);
// read a single pixel at window-relative coordinates (x, y)
GLubyte pixel[4];
glReadPixels(x, y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);
On Windows there is, for example, GDI (Graphics Device Interface): with GDI you can get the Device Context easily using HDC dc = GetDC(NULL); and then read pixel values with COLORREF color = GetPixel(dc, x, y);. But take care: you have to release the Device Context afterwards (when all GetPixel operations of your program are finished) with ReleaseDC(NULL, dc); - otherwise you leak the handle.
See also here for further details.
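For reference, the COLORREF that GetPixel returns packs the channels as 0x00bbggrr, with red in the low byte. These helpers mirror the GetRValue/GetGValue/GetBValue macros from <windows.h>, re-declared here so the sketch builds without the Windows headers:

```cpp
#include <cstdint>

// COLORREF layout: 0x00bbggrr (red in the lowest byte).
uint8_t get_r(uint32_t c) { return  c        & 0xFF; }  // like GetRValue
uint8_t get_g(uint32_t c) { return (c >> 8)  & 0xFF; }  // like GetGValue
uint8_t get_b(uint32_t c) { return (c >> 16) & 0xFF; }  // like GetBValue
```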
However, for tasks like this I suggest you use AutoIt.
It's easy, simple to use and pretty much straightforward (after all, it's designed for operations like this).
Local $color = PixelGetColor(200, 300)
MsgBox(0, "The color is ", $color )

Blit SDL_Surface onto another SDL_Surface and apply a colorkey

I want to load an SDL_Surface into an OpenGL texture with padding (so that NPOT becomes POT) and apply a color key on the surface afterwards. I either end up colorkeying all pixels, regardless of their color, or not colorkeying anything at all. I have tried a lot of different things, but none of them seem to work.
Here's the relevant snippet of my code. I use a custom color class for the colorkey (range [0-1]):
// Create an empty surface with the same settings as the original image
SDL_Surface* paddedImage = SDL_CreateRGBSurface(image->flags, width, height,
image->format->BitsPerPixel,
#if SDL_BYTEORDER == SDL_BIG_ENDIAN
0xff000000,
0x00ff0000,
0x0000ff00,
0x000000ff
#else
0x000000ff,
0x0000ff00,
0x00ff0000,
0xff000000
#endif
);
// Map RGBA color to pixel format value
Uint32 colorKeyPixelFormat = SDL_MapRGBA(paddedImage->format,
static_cast<Uint8>(colorKey.R * 255),
static_cast<Uint8>(colorKey.G * 255),
static_cast<Uint8>(colorKey.B * 255),
static_cast<Uint8>(colorKey.A * 255));
SDL_FillRect(paddedImage, NULL, colorKeyPixelFormat);
// Blit the image onto the padded image
SDL_BlitSurface(image, NULL, paddedImage, NULL);
SDL_SetColorKey(paddedImage, SDL_SRCCOLORKEY, colorKeyPixelFormat);
Afterwards, I generate an OpenGL texture from paddedImage using similar code to the SDL+OpenGL texture loading code found online (I'll post if necessary). This code works if I just want the texture with or without padding, and is likely not the problem.
I realize that I set all pixels in paddedImage to have alpha zero which causes the first problem I mentioned, but I can't seem to figure out how to do this. Should I just loop over the pixels and set the appropriate colors to have alpha zero?
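To answer that last question: yes, looping over the pixels and zeroing the alpha of every colorkey-colored pixel is a workable fallback. A sketch on a plain buffer, assuming an RGBA8888-style packing with alpha in the low byte (the masks are assumptions for the sketch, not taken from the surface's actual format, which SDL_MapRGBA/SDL_GetRGBA would handle properly):

```cpp
#include <cstddef>
#include <cstdint>

// Walk a buffer of RGBA8888-packed pixels (R in the top byte, A in the
// low byte) and clear the alpha of every pixel whose RGB matches the
// colorkey - effectively what SDL_SetColorKey arranges at blit time.
void apply_colorkey(uint32_t* pixels, size_t count, uint32_t key_rgb) {
    for (size_t i = 0; i < count; ++i) {
        if ((pixels[i] & 0xFFFFFF00u) == (key_rgb & 0xFFFFFF00u)) {
            pixels[i] &= 0xFFFFFF00u;  // zero the alpha byte
        }
    }
}
```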
PARTIAL SOLUTION:
Create paddedImage as above
SDL_FillRect the paddedImage with the colorkey
Generate the texture "as usual"
Manually copy the image (SDL_Surface*) pixels to the paddedImage (OGL texture)
This almost always works, except in some cases where the image has 3 color components (i.e. no alpha channel). I'm trying to fix that now by converting those images to 4 color components.
I think this could be used together with OpenGL if you can convert SDL_Surfaces into OGL textures; then you could use the blit function to combine your surfaces and manipulate things using the SDL workflow.
I don't know exactly what you want to achieve: to transfer one surface to an OGL texture and preserve the colorkey, or to apply the colorkeyed surface to another surface which you then convert into an OGL texture.
Also, you don't have to use per-pixel alpha, as SDL gives you the ability to use per-surface alpha, but it's quite complex as to which alphas and colorkeys can be combined and used together.
As this is a complex topic, please refer to the SDL reference; this tutorial may be helpful too (though it doesn't handle OGL stuff):
http://www.sdltutorials.com/the-ins-and-outs-and-overlays-of-alpha-blending