I have successfully drawn both a bitmap image and a green line (using a renderer) to my SDL window. The problem is that I am unsure how to do both at once on the same window.
void draw::image(){
    SDL_Surface *bmp = SDL_LoadBMP("C:\\Users\\Joe\\Documents\\Visual Studio 2013\\Projects\\SDL_APP1\\map1.bmp");
    SDL_BlitSurface(bmp, 0, SDL_GetWindowSurface(_window), 0);
    SDL_Renderer * renderer = SDL_CreateRenderer(_window, -1, 0);
    // draw green line across screen
    SDL_SetRenderDrawColor(renderer, 0, 255, 0, 255);
    SDL_RenderDrawLine(renderer, 0, 0, 640, 320);
    SDL_RenderPresent(renderer);
    SDL_UpdateWindowSurface(_window);
    SDL_Delay(20000);
    // free resources
    SDL_DestroyRenderer(renderer);
    SDL_DestroyWindow(_window);
}
This version of my code draws the BMP file to the window because SDL_UpdateWindowSurface() comes after SDL_RenderPresent(); however, when I swap these two calls it draws the green line instead. How would I draw the green line over my BMP?
If you store your images in RAM and use the CPU for rendering (this is called software rendering), you use SDL_UpdateWindowSurface.
By calling this function you tell the CPU to update the screen and draw using software rendering.
You can store your textures in RAM by using SDL_Surface, but software rendering is inefficient. You issue draw calls by using SDL_BlitSurface.
SDL_UpdateWindowSurface is equivalent to the SDL 1.2 API SDL_Flip().
On the other hand, when you use the GPU to render and you store your textures on the GPU (this is called hardware-accelerated rendering), which you should, you use SDL_RenderPresent.
This function tells the GPU to render to the screen.
You store textures on the GPU using SDL_Texture.
When using this, you issue draw calls with SDL_RenderCopy, or SDL_RenderCopyEx if you want transformations.
Therefore, when using SDL's rendering API, one does all drawing intended for the frame, and then calls this function once per frame to present the final drawing to the user.
You should use hardware rendering; it's far more efficient than software rendering! Even if the user running the program doesn't have a GPU (which is rare, because most CPUs have an integrated GPU), SDL will fall back to software rendering by itself!
By the way, you can load an image directly as an SDL_Texture, without first loading it as an SDL_Surface and converting it, by using the SDL_image library. You should do that, because it supports several image formats, not just BMP like plain SDL. (SDL_image is made by the creators of SDL.)
Just use IMG_LoadTexture from SDL_image!
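For example (a minimal sketch, assuming an SDL_Renderer *renderer already exists and the image file from the question):
SDL_Texture *map = IMG_LoadTexture(renderer, "map1.bmp");   // SDL_image detects the file format automatically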
You cannot use both methods at the same time and must choose one or the other. I would recommend going with SDL_Renderer. Create an SDL_Texture from your SDL_Surface and render it with SDL_RenderCopy.
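A minimal sketch of that approach, using the _window handle and BMP path from the question (error checks omitted for brevity):
SDL_Renderer *renderer = SDL_CreateRenderer(_window, -1, SDL_RENDERER_ACCELERATED);
SDL_Surface *bmp = SDL_LoadBMP("C:\\Users\\Joe\\Documents\\Visual Studio 2013\\Projects\\SDL_APP1\\map1.bmp");
SDL_Texture *map = SDL_CreateTextureFromSurface(renderer, bmp);   // upload the image to the GPU
SDL_FreeSurface(bmp);                                             // the surface is no longer needed

SDL_RenderClear(renderer);
SDL_RenderCopy(renderer, map, NULL, NULL);            // draw the bitmap first
SDL_SetRenderDrawColor(renderer, 0, 255, 0, 255);
SDL_RenderDrawLine(renderer, 0, 0, 640, 320);         // then the green line on top
SDL_RenderPresent(renderer);                          // present once per frame
SDL_DestroyTexture(map);                              // when done with it
Note that SDL_UpdateWindowSurface() is not called at all here; once you use the renderer, it alone presents the frame.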
Related
I am using OpenGL for a 2D-based game which has been developed for a resolution of 640x480 pixels. Thus, I setup my OpenGL doublebuffer like this:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, 640, 480, 0, 0, 1);
glDisable(GL_DEPTH_TEST);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
This works really well and I can draw all my sprites and background scrollers using hardware-accelerated GL textures. Now I'd like to support other window sizes as well, i.e. the user should be able to run the game in 800x600, 1024x768, etc. So all graphics should be scaled to the new resolution. Of course I could do this by simply applying scaling factors to all my vertices when drawing the textures as quads. But I don't think I'd be able to achieve pixel-perfect positioning that way, and pixel-perfect positioning is of course very important for 2D games!
Thus, I'd like to ask if there's a possibility to work with a static 640x480 doublebuffer and have it scaled only just before it is drawn to the screen, i.e. something like this:
1) My doublebuffer will always be 640x480 pixels, no matter what the real output window size is.
2) Once I call glfwSwapBuffers() the 640x480 doublebuffer should be scaled to the actual window size which can be smaller or larger than 640x480.
Is this possible somehow? I think this would be the easiest solution for my game because manually scaling all vertices is likely to give me some problems when it comes to pixel-perfect positioning, isn't it?
Thanks!
I setup my OpenGL doublebuffer like this:
I think you don't know what "doublebuffer" means. It means that you perform drawing on an invisible buffer which is then revealed to the user once the drawing is finished, so that the user doesn't see the drawing process.
The code snippet you have there is the projection setup. And hardcoding the dimensions in pixel units there is just wrong.
but pixel-perfect positioning is of course very important for 2D games!
No, not really. Instead of "pixel" units (which don't really exist in OpenGL except for texture image indexing and the viewport) you should use something like world units. For example, in a simple jump-and-run platformer like SMW,
you could say that each block is one unit high. The Yoshi sprite would be 2 units high, Mario 1.5, and so on.
The important thing is, that you can keep your sprite rendering dimensions independent of screen resolution. This is especially important with respect to all the varying screen resolutions and aspect ratios out there. Don't force the user on resolutions you think are appropriate. People have computers with big screens and they want to use them.
Also the appearance of your sprites depends largely on the texture images and filtering method you use. If you want to achieve a pixelated look, just make the texture images low resolution and use a GL_NEAREST magnification filter, OpenGL will do the rest (however you should provide minification mipmaps and use GL_LINEAR_MIPMAP_LINEAR for minification, so that things don't look awful on small resolutions).
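For instance, a short sketch of that filtering setup for the currently bound sprite texture:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);              // crisp, pixelated look when scaled up
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR); // trilinear filtering when scaled down (requires mipmaps)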
Thus, I'd like to ask if there's a possibility to work with a static 640x480 doublebuffer and have it scaled only just before it is drawn to the screen, i.e. something like this:
Yes, you can use a framebuffer object for this. Create a set of textures (color and depth-stencil) of the rendering dimensions (like 640×480), render to those, then, when finished, draw the color texture to a viewport-filling quad on the main framebuffer.
Like before, render at 640x480 but to an offscreen texture. Then render a screen-sized (800x600, 1024x768,...) quad with this texture applied to it.
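A rough sketch of that setup, assuming GL 3.0+ (or EXT_framebuffer_object) and hypothetical windowWidth/windowHeight variables holding the actual window size:
GLuint fbo, colorTex;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 640, 480, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);

// Each frame: render the 640x480 scene into the FBO...
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glViewport(0, 0, 640, 480);
// ... draw sprites here ...

// ...then draw colorTex on a quad covering the real window.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glViewport(0, 0, windowWidth, windowHeight);
glBindTexture(GL_TEXTURE_2D, colorTex);
// draw a window-filling textured quad here (fixed-function or shader)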
I would like to create a modularly designed game using SDL, but so far I fail to get my sprites displayed. Precisely, I am trying to implement tile sets, which are a bunch of equally sized sprites collected in one single PNG file (in my case).
My data structure should hold an array of sprites available in the tile set which can then be drawn with a method, like draw(Tile *, Position, Layer);.
As indicated, I want to feature multiple layers of surfaces to later on implement multiple independent background layers and a foreground layer. Similarly, I have an array of layers that are blitted onto my screen surface (created with SDL_SetVideoMode) in a pre-defined order.
However, I don't get what's going wrong in my code.
While a tile set is loaded, I can successfully blit the currently loaded tile onto a layer surface, like in this snippet:
this->graphic = SDL_CreateRGBSurface(SDL_HWSURFACE | SDL_SRCALPHA,
tile_widths, tile_heights, bit_depth,
rmask, gmask, bmask, amask);
SDL_BlitSurface(tileset_graphic, &tile_position, this->graphic, nullptr);
SDL_BlitSurface(tileset_graphic, &tile_position,
((VideoController::get_instance())->layers)[0], &tile_position);
In the first line, I try to blit the part of the tileset_graphic that corresponds to a sprite onto an SDL_Surface * that is held by my Tile structure, to be used later on.
However, I cannot use this surface to blit onto a layer.
The second (test) statement just copies the considered region of the tileset_graphic to the most bottom of my layers. Furthermore, I have performed several test commands in my main method.
My findings during testing:
I can blit a tileset_graphic piece to a layer and it is correctly shown (see above)
I can blit a Tile directly onto the screen:
SDL_BlitSurface(
tile->graphic,
nullptr,
(VideoController::the_instance)->screen,
&relative_position);
However, I cannot blit a Tile onto a layer:
SDL_BlitSurface(
tile->graphic,
nullptr,
(VideoController::the_instance)->layers[0],
&relative_position);
To be more precise, when I blit the whole tileset_graphic for testing and then blit a Tile onto a region that is already occupied due to this test, I can partly see my sprite. But then again, I have absolutely no clue why this is the case...
Summary:
I try to blit several SDL_Surfaces onto another, but seem to fail only by trying this (desired) chain of surfaces:
graphic --> layer --> screen
Does anyone have a clue what may go wrong, or is able to tell me which additional information is needed by you guys?
EDIT
I was able to find that blitting on the initialized (SDL_CreateRGBSurface), but still "untouched" layer surfaces seems to fail somehow. When I use SDL_FillRect on my layer surfaces before blitting a tile onto the layer, it is displayed correctly. However, I am losing transparency of layers this way...
I figured out the solution.
I had to explicitly reset the SDL_SRCALPHA flag of my graphic by doing
SDL_SetAlpha(this->graphic, 0, SDL_ALPHA_OPAQUE );
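With that in place, the intended chain works. A minimal sketch using the names from the question (SDL 1.2, error checks omitted):
SDL_SetAlpha(tile->graphic, 0, SDL_ALPHA_OPAQUE);                      // copy the alpha channel instead of blending with it
SDL_BlitSurface(tile->graphic, NULL, layers[0], &relative_position);   // graphic -> layer
SDL_BlitSurface(layers[0], NULL, screen, NULL);                        // layer   -> screen
SDL_Flip(screen);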
I'm writing a simple software renderer: load a 3D model, process vertices, rasterization, texturing, lighting - all those things, all done.
I used SDL (not its OpenGL mode) to draw pixels onto an SDL_Surface and called SDL_Flip, so one frame appears.
For some reason I now don't want to use SDL; I just need a double buffer to draw pixels into.
I know there are some ways to do this - OpenGL, Direct3D, GDI - but maybe they're too "advanced" for this project. What is the most direct (or fastest) way to draw pixels to a back buffer on Win32?
I'd recommend using a graphics API for this (OpenGL or Direct3D), but GDI would be the easiest option. You can create a DIB (Device Independent Bitmap) using the CreateDIBSection function which returns a pointer to the bitmap's memory. You can then modify the pixels of the bitmap however you please and draw it to your window's client area. See chapter 15 of Programming Windows (5th) by Charles Petzold for source code and explanation of this technique.
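A minimal sketch of that technique (assuming an existing window handle hwnd and chosen width/height; error handling omitted):
BITMAPINFO bmi = {0};
bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
bmi.bmiHeader.biWidth       = width;
bmi.bmiHeader.biHeight      = -height;          // negative height = top-down pixel rows
bmi.bmiHeader.biPlanes      = 1;
bmi.bmiHeader.biBitCount    = 32;
bmi.bmiHeader.biCompression = BI_RGB;

void *pixels = NULL;
HDC windowDC = GetDC(hwnd);
HDC memDC    = CreateCompatibleDC(windowDC);
HBITMAP dib  = CreateDIBSection(memDC, &bmi, DIB_RGB_COLORS, &pixels, NULL, 0);
HBITMAP old  = (HBITMAP)SelectObject(memDC, dib);

// Write pixels directly into the buffer (0x00RRGGBB), then present with BitBlt.
((UINT32 *)pixels)[10 * width + 10] = 0x0000FF00;   // one green pixel at (10, 10)
BitBlt(windowDC, 0, 0, width, height, memDC, 0, 0, SRCCOPY);

// Cleanup when the back buffer is no longer needed.
SelectObject(memDC, old);
DeleteObject(dib);
DeleteDC(memDC);
ReleaseDC(hwnd, windowDC);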
I want to read the color of a pixel at a given position in a game (so OpenGL or DirectX), by a third-party application (this is not my game).
I tried to do it in C#; the code works great for reading the color of the desktop, of windows, etc., but when I launch the game, I only get #000000, a black pixel. I think this is because I don't read at the correct "location", or something like that.
Does someone know how to do this? I mentioned C# but C/C++ would be fine too.
In basic steps: grab the texture of the rendered screen with the appropriate OpenGL or DirectX command if the game is fullscreen.
For example with glReadPixels you can get the pixel value at window relative pixel coordinates from current bound framebuffer.
If you are not full screen, you must combine the window position with the window relative pixel coordinates.
Some loose example:
glBindFramebuffer(GL_FRAMEBUFFER, yourScreenFramebuffer);
GLubyte pixel[4];                                                       // RGBA result
glReadPixels(pixelX, pixelY, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);   // read a 1x1 region at (pixelX, pixelY)
On Windows there is, for example, GDI (Graphics Device Interface): with GDI you can get the device context easily using HDC dc = GetDC(NULL); and then read pixel values with COLORREF color = GetPixel(dc, x, y);. But take care: you have to release the device context afterwards (when all GetPixel operations of your program are finished) with ReleaseDC(NULL, dc); - otherwise you would leak memory.
See also here for further details.
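Put together, the GDI approach looks roughly like this (x and y are the screen coordinates you want to sample):
HDC dc = GetDC(NULL);                          // device context for the whole screen
COLORREF color = GetPixel(dc, x, y);
BYTE r = GetRValue(color), g = GetGValue(color), b = GetBValue(color);
ReleaseDC(NULL, dc);                           // release the DC when done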
However, for tasks like this I suggest you use AutoIt.
It's easy, simple to use and pretty much straightforward (after all, it's designed exactly for operations like this).
Local $color = PixelGetColor(200, 300)
MsgBox(0, "The color is ", $color )
I'm doing double-buffering by creating a render target with its associated depth and stencil buffer, drawing to it, and then drawing a fullscreen, possibly stretched, quad with the back buffer as the texture.
To do this I'm using a CreateTexture() call to create the back buffer texture, and then a GetSurfaceLevel() call to get its surface to render to. This works fine.
However, I'd like to use CreateRenderTarget() directly. It returns a Surface. But then I need a Texture to draw a quad to the front buffer.
The problem is, I can't find a function to get a texture from a surface. I've searched the DX8.1 doc again and again with no luck. Does such function even exist?
You can create an empty texture matching the size and color format of the surface, then copy the contents of the surface to the surface of the texture.
Here is a snippet from my DirectX 9 code, without error handling and other complications. It actually creates a mipmap chain.
Note StretchRect, which does the actual copying by stretching the source surface to match the geometry of the destination surface.
IDirect3DSurface9* srcSurface = renderTargetSurface;
IDirect3DTexture9* tex = textureFromRenderTarget;
int levels = tex->GetLevelCount();
for (int i = 0; i < levels; i++)
{
    IDirect3DSurface9* destSurface = 0;
    tex->GetSurfaceLevel(i, &destSurface);
    // StretchRect scales the render-target surface onto this mip level
    pd3dd->StretchRect(srcSurface, NULL, destSurface, NULL, D3DTEXF_LINEAR);
    destSurface->Release();   // GetSurfaceLevel adds a reference; release it to avoid a leak
}
But of course, this is for DirectX 9. For 8.1 you can try CopyRects, which does a plain (non-stretching) copy.
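A hedged sketch of that copy for 8.1, using the Direct3D 8 counterparts of the names above (tex, pd3dd and srcSurface are assumed to be set up as before):
IDirect3DSurface8 *dest = 0;
tex->GetSurfaceLevel(0, &dest);
pd3dd->CopyRects(srcSurface, NULL, 0, dest, NULL);   // NULL rects/points = copy the whole surface
dest->Release();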
On Dx9 there is ID3DXRenderToSurface that can use surface from texture directly. I am not sure if that's possible with Dx8.1, but above copy-method should work.
If backwards compatibility is the reason you're using D3D8, try using SwiftShader instead. http://transgaming.com/business/swiftshader
It's a software implementation of D3D9. You can utilize it when you don't have a video card. It costs about 12k though.