SDL: Combination of Tile sets and layered surfaces does not work - c++

I would like to create a modularly designed game using SDL, but so far I have failed to get my sprites displayed. To be precise, I am trying to implement tile sets, which are a bunch of equally sized sprites collected in a single PNG file (in my case).
My data structure should hold an array of the sprites available in the tile set, which can then be drawn with a method like draw(Tile *, Position, Layer);.
As indicated, I want to support multiple layers of surfaces so that I can later implement several independent background layers and a foreground layer. Accordingly, I have an array of layers that are blitted onto my screen surface (created with SDL_SetVideoMode) in a pre-defined order.
However, I don't get what's going wrong in my code.
While a tile set is loaded, I can successfully blit the currently loaded tile onto a layer surface, like in this snippet:
this->graphic = SDL_CreateRGBSurface(SDL_HWSURFACE | SDL_SRCALPHA,
                                     tile_widths, tile_heights, bit_depth,
                                     rmask, gmask, bmask, amask);
SDL_BlitSurface(tileset_graphic, &tile_position, this->graphic, nullptr);
SDL_BlitSurface(tileset_graphic, &tile_position,
                ((VideoController::get_instance())->layers)[0], &tile_position);
With the first blit, I try to copy the part of the tileset_graphic that corresponds to a sprite to an SDL_Surface * that is held by my Tile structure, to be used later on.
However, I cannot use this surface to blit onto a layer.
The second (test) blit just copies the considered region of the tileset_graphic to the bottommost of my layers. Furthermore, I have performed several test commands in my main method.
My findings during testing:
I can blit a tileset_graphic piece to a layer and it is correctly shown (see above)
I can blit a Tile directly onto the screen:
SDL_BlitSurface(
    tile->graphic,
    nullptr,
    (VideoController::the_instance)->screen,
    &relative_position);
However, I cannot blit a Tile onto a layer:
SDL_BlitSurface(
    tile->graphic,
    nullptr,
    (VideoController::the_instance)->layers[0],
    &relative_position);
To be more precise, when I blit the whole tileset_graphic for testing and then blit a Tile onto a region that is already occupied due to this test, I can partly see my sprite. But then again, I have absolutely no clue why this is the case...
Summary:
I try to blit several SDL_Surfaces onto one another, but it only seems to fail with this (desired) chain of surfaces:
graphic --> layer --> screen
Does anyone have a clue what might be going wrong, or can you tell me what additional information you need?
EDIT
I found that blitting onto the initialized (SDL_CreateRGBSurface) but still "untouched" layer surfaces seems to fail somehow. When I use SDL_FillRect on my layer surfaces before blitting a tile onto the layer, the tile is displayed correctly. However, I lose the transparency of the layers this way...

I figured out the solution.
I had to explicitly clear the SDL_SRCALPHA flag of my graphic surface by calling
SDL_SetAlpha(this->graphic, 0, SDL_ALPHA_OPAQUE );
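For completeness, here is a minimal sketch (SDL 1.2, untested) of how that fix fits into the tile-loading code above; tileset_graphic, tile_position, this->graphic, the layers array and relative_position are the same names used in the snippets, everything else mirrors them:

// Create the tile surface and copy the sprite's region out of the tileset.
this->graphic = SDL_CreateRGBSurface(SDL_HWSURFACE | SDL_SRCALPHA,
                                     tile_widths, tile_heights, bit_depth,
                                     rmask, gmask, bmask, amask);
SDL_BlitSurface(tileset_graphic, &tile_position, this->graphic, nullptr);

// Clear SDL_SRCALPHA on the tile surface: with the flag cleared, a later blit
// copies the tile's alpha channel into the layer instead of blending it
// against the layer's (still fully transparent) pixels.
SDL_SetAlpha(this->graphic, 0, SDL_ALPHA_OPAQUE);

// Now blitting the tile onto a layer keeps its per-pixel alpha intact.
SDL_BlitSurface(this->graphic, nullptr,
                (VideoController::get_instance())->layers[0], &relative_position);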

Related

SDL_LockTexture returning black pixels

I'm currently following LazyFoo's SDL tutorial and I've reached the Texture Manipulation part but I'm running into a problem. The first time I call SDL_LockTexture it works fine to update the texture. However, after SDL_UnlockTexture, if I attempt to lock the texture again it now returns a pointer to a set of zeroed pixels, instead of what they were set to after the last unlock.
SDL_LockTexture(newTexture, NULL, &pixels, &pitch);
memcpy(pixels, formattedSurface->pixels, formattedSurface->pitch * formattedSurface->h);
SDL_UnlockTexture(newTexture);
pixels = NULL;
This is the initial code that copies the pixels of a loaded surface to an empty texture, and if rendered at this point it displays the correct image.
However, if the texture is locked and unlocked again, the image displayed is a black box (the value of each pixel being 0).
I'm completely stumped. Please let me know if any more is needed.
As stated on the wiki here, the pixel data made available by SDL_LockTexture is intended only for writing data to a texture. It may not actually contain the current data of that texture. If you need a copy of a texture's pixel data, you need to keep it stored somewhere else.
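In practice that means keeping an authoritative copy of the pixels on the CPU side and treating the locked pointer as write-only. A minimal sketch (untested, needs <vector> and <cstring>) using the names newTexture and formattedSurface from the snippet above; it assumes, like the original code, that the texture pitch matches the surface pitch:

// Keep our own copy of the pixel data; the pointer SDL_LockTexture hands back
// is write-only and may point at zeroed memory.
std::vector<Uint8> pixelCopy(formattedSurface->pitch * formattedSurface->h);
memcpy(pixelCopy.data(), formattedSurface->pixels, pixelCopy.size());

void* pixels = nullptr;
int pitch = 0;

// Whenever the texture needs updating, write the whole buffer again.
SDL_LockTexture(newTexture, NULL, &pixels, &pitch);
memcpy(pixels, pixelCopy.data(), pixelCopy.size());
SDL_UnlockTexture(newTexture);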

Zoom window in OpenGL

I've implemented Game of Life using OpenGL buffers (as specified here: http://www.glprogramming.com/red/chapter14.html#name20). In this implementation each pixel is a cell in the game.
My program receives the initial state of the game (a 2D array). The array size, in my implementation, equals the window size. This of course makes the game "unplayable" if the array is 5x5 or some other small size.
At each iteration I'm reading the content of the framebuffer into a 2D array (its size is the window size):
glReadPixels(0, 0, win_x, win_y, GL_RGB, GL_UNSIGNED_BYTE, image);
Then, I'm doing the necessary steps to calculate the living and dead cells, and then drawing a rectangle which covers the whole window, using:
glRectf(0, 0, win_x, win_y);
I want to zoom (or enlarge) the window without affecting the correctness of my code. If I resize the window, then the framebuffer content won't fit inside image (the array). Is there a way of zooming the window (so that each pixel is drawn as several pixels) without affecting the framebuffer?
First, you seem to be learning OpenGL 2. I would suggest learning a newer version instead, as it is more powerful and efficient. A good tutorial can be found here: http://www.opengl-tutorial.org/
If I understand this correctly, you read in an initial state and draw it, then continuously read the pixels back from the screen, update the array based on the Game of Life logic, and draw it again? That seems overly complicated.
Reading the pixels back from the screen is unnecessary, and it will cause complications if you try to enlarge the rects to more than a pixel.
I would say a good solution would be to keep a bit array (1 is an organism, 0 is not), possibly as a 2D array in memory, updating the logic every, say, 30 iterations (for 30 fps), then drawing all the rects to the screen, black for 1, white for 0, using glColor4f(r, g, b, a) tied to an if statement in a nested for loop.
Then, if you give your rects a negative z coord, you can "zoom in" using glTranslatef(x, y, z) triggered by a keyboard button.
Of course, in a newer version of OpenGL, vertex buffers would make the code much cleaner and more efficient.
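A rough sketch of the drawing part (legacy fixed-function GL, untested): GRID_W, GRID_H, cells and cellSize are made-up names, and an orthographic projection in window coordinates (as already implied by the glRectf call in the question) is assumed. Instead of the glTranslatef trick, this variant simply scales the rectangle size, which enlarges the board the same way:

const int GRID_W = 64, GRID_H = 64;   // illustrative grid dimensions

void drawGrid(const unsigned char cells[GRID_H][GRID_W], float cellSize)
{
    // One cell = cellSize x cellSize pixels, so increasing cellSize "zooms"
    // the board without touching the simulation or the framebuffer.
    glClear(GL_COLOR_BUFFER_BIT);
    for (int y = 0; y < GRID_H; ++y) {
        for (int x = 0; x < GRID_W; ++x) {
            if (cells[y][x])
                glColor3f(0.0f, 0.0f, 0.0f);   // living cell: black
            else
                glColor3f(1.0f, 1.0f, 1.0f);   // dead cell: white
            glRectf(x * cellSize, y * cellSize,
                    (x + 1) * cellSize, (y + 1) * cellSize);
        }
    }
}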
You can't store your game state directly in the window framebuffer and then resize it for rendering, since what is stored in the framebuffer is by definition what is about to be rendered. (You could overwrite it, but then you lose your game state...) The simplest solution would be to just store the game state in an array (on the client side) and then update a texture based on that. Thus for each block that was set, you could set a pixel in a texture to the appropriate color. Each frame, you then render a full-screen quad with that texture (with GL_NEAREST filtering).
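A minimal sketch of that client-side approach (legacy GL, untested): state, tex, GRID_W and GRID_H are illustrative names, win_x/win_y are the window size from the question, and the texture is assumed to have been allocated once with glTexImage2D at GRID_W x GRID_H:

// One byte per cell: 255 = alive, 0 = dead.
glBindTexture(GL_TEXTURE_2D, tex);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);                      // rows tightly packed
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, GRID_W, GRID_H,
                GL_LUMINANCE, GL_UNSIGNED_BYTE, state);

// Draw the texture over the whole window; GL_NEAREST keeps the cells as
// crisp squares when the window is larger than the grid.
glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(0,     0);
    glTexCoord2f(1, 0); glVertex2f(win_x, 0);
    glTexCoord2f(1, 1); glVertex2f(win_x, win_y);
    glTexCoord2f(0, 1); glVertex2f(0,     win_y);
glEnd();
glDisable(GL_TEXTURE_2D);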
However, if you want to take advantage of your GPU, there are some tricks that could massively speed up the simulation by using a fragment shader to generate the texture. In this case you would actually have two textures that you ping-pong between: one containing the current game state, and the other containing the next game state. Each frame you would use your fragment shader (along with an FBO) to generate the next-state texture from the current-state texture. Afterwards, the two textures are swapped, making the next state become the current state. The current-state texture would then be rendered to the screen the same way as above.
I tried to give an overview of how you might be able to offload the computation onto the GPU, but if I was unclear anywhere just ask! For a more detailed explanation feel free to ask another question.

Render nodes in single drawing pass with spritekit?

While looking around for information about SpriteKit textures, I stumbled upon this quote:
If all of the children of a node use the same blend mode and texture atlas, then Sprite Kit can usually draw these sprites in a single drawing pass. On the other hand, if the children are organized so that the drawing mode changes for each new sprite, then Sprite Kit might need to perform one drawing pass per sprite, which is quite inefficient.
But check this out:
Tiles with same texture (I assure you it's a texture, not just a color)
Tiles with their own texture
The draw count has a difference of 40, although all of the textures used came from the same atlas.
Am I interpreting the word 'atlas' wrong?
This is where I store my images:
Is my example a 'texture atlas,' or is the definition of 'atlas' here a single .png that contains all the images needed, and individual tiles are sliced from it?
Or is the problem probably in how I am loading/something else?
Thanks!
The problem here is most likely the node graph itself. Say you have 100 sprites as children of the scene, with no sub-children of their own, and they all use textures from the same atlas and the default blend mode; then Sprite Kit will batch-draw these 100 sprites in a single draw call.
However if these tile sprites have children of their own, perhaps shape or label nodes for debugging, then this will "interrupt" the batch drawing operation.
Check your node graph. Make sure all tile sprites are children of the same parent, have no children of their own, use the same blend mode, and use textures from the same atlas. Then batch drawing will definitely work.
If that doesn't work, verify that the tiles.atlas folder is correctly converted to a texture atlas in the bundle. If you open the compiled app bundle, you should find a folder named 'tiles.atlasc' with a plist and one or more PNG files in it, containing all the individual images of the folder. Furthermore, none of these individual images should appear in the bundle - if they are added to the bundle as individual files using the same names as in the atlas, then Sprite Kit will default to loading the individual image files rather than obtaining them from the texture atlas.

What exactly is a buffer in OpenGL, and how can I use multiple ones to my advantage?

Not long ago, I tried out a program from an OpenGL guidebook that was said to be double buffered; it displayed a spinning rectangle on the screen. Unfortunately, I don't have the book anymore, and I haven't found a clear, straightforward definition of what a buffer is in general. My guess is that it is a "place" to draw things, where using a lot of them could be like layering?
If that is the case, I am wondering if I can use multiple buffers to my advantage for a polygon clipping program. I have a nice little window that allows the user to draw polygons on the screen, plus a utility to drag and draw a selection box over the polygons. When the user has drawn the selection rectangle and lets go of the mouse, the polygons will be clipped based on the rectangle boundaries.
That is doable enough, but I also want the user to be able to start over: when the escape key is pressed, the clip box should disappear, and the original polygons should be restored. Since I am doing things pixel-by-pixel, it seems very difficult to figure out how to change the rectangle pixel colors back to either black like the background or the color of a particular polygon, depending on where they were drawn (unless I find a way to save the colors when each polygon pixel is drawn, but that seems overboard). I was wondering if it would help to give the rectangle its own buffer, in the hopes that it would act like a sort of transparent layer that could easily be cleared off (?) Is this the way buffers can be used, or do I need to find another solution?
OpenGL knows several kinds of buffers:
Framebuffers: Portions of memory to which drawing operations are directed, changing pixel values in the buffer. By default OpenGL has the on-screen buffers, which can be split into a front and a back buffer, where drawing operations happen invisibly on the back buffer and are swapped to the front when finished. In addition to that, OpenGL uses a depth buffer for depth testing (Z sorting) and a stencil buffer used to limit rendering to cut-out (= stencil) portions of the framebuffer. There used to be auxiliary and accumulation buffers as well. However, those have been superseded by so-called framebuffer objects, which are user-created objects combining several textures or renderbuffers into new framebuffers that can be rendered to.
Renderbuffers: User-created render targets, to be attached to framebuffer objects.
Buffer Objects (Vertex and Pixel): User-defined data storage. Used for geometry and image data.
Textures: Textures are a sort of buffer, i.e. they hold data which can be sourced in drawing operations.
The usual approach with OpenGL is to re-render the whole scene whenever something changes. If you want to save those drawing operations, you can copy the contents of the framebuffer to a texture, then just draw that texture as a single quad and overdraw it with your selection rubber-band rectangle.
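A small sketch of that last idea (legacy GL, untested): savedTex, winW, winH and drawSelectionRectangle() are illustrative names, the texture is assumed to already exist, and on very old GL the snapshot size may need to be a power of two:

// Snapshot the polygons once, right after they have been drawn.
glBindTexture(GL_TEXTURE_2D, savedTex);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, winW, winH, 0);

// Every frame afterwards: restore the snapshot, then draw the rubber-band
// box on top. Pressing Escape simply means skipping the rectangle, which
// brings back the untouched polygons.
glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(0,    0);
    glTexCoord2f(1, 0); glVertex2f(winW, 0);
    glTexCoord2f(1, 1); glVertex2f(winW, winH);
    glTexCoord2f(0, 1); glVertex2f(0,    winH);
glEnd();
glDisable(GL_TEXTURE_2D);
drawSelectionRectangle();   // hypothetical helper that draws the clip box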

Copying SDL_Surfaces with Alpha Channels

I've run into issues preserving the alpha channels of surfaces being copied/clip blitted (blitting sections of a surface onto a smaller surface; they're sprite sheets). I've tried various solutions, but the end result is that any surface that's supposed to have transparency ends up becoming fully opaque (the alpha mask becomes white).
So my question is, how does one copy one RGBA SDL_Surface to another new surface (also RGBA), including the alpha channel? And if it's any different, how does one copy a section of an RGBA surface to a new RGBA surface (the same size as the clipped portion of the source surface), à la tile sheet blitting?
It seems that SDL_BlitSurface blends the alpha channels, so when, for example, I want to copy a tile from my tile sheet surface to a new surface (which is of course blank; I'm assuming SDL fills surfaces with black or white by default), it ends up losing its alpha mask, so that when that tile is finally blitted to the screen, it doesn't blend with whatever is on the screen.
SDL_DisplayFormatAlpha works great for copying surfaces with an alpha mask, but it doesn't take clip parameters; it's only intended to copy the entire surface, not a portion of it, hence my problem.
If anyone is still wondering after all these years:
Before blitting the surface, you need to make sure that the blend mode of the source (which is an SDL_Surface) is set to SDL_BLENDMODE_NONE, as described in the documentation: SDL_SetSurfaceBlendMode(). It should look something simple like this:
SDL_SetSurfaceBlendMode(source, SDL_BLENDMODE_NONE);
SDL_BlitSurface(source, sourceRect, destination, destinationRect);
I had this problem before and have not come to an official answer yet.
However, I think the only way to do it will be to write your own copy function.
http://www.libsdl.org/docs/html/sdlpixelformat.html
This page will help you understand how SDL_Surface stores color information. Note that there is a huge difference between how colors are stored at or below 8 bits per pixel (palettized) and above (packed pixel values).
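If anyone needs this for the old SDL 1.2 API the question is written against, one possible copy helper (a sketch, untested, assuming the source is a 32-bit RGBA surface) exploits the fact that a blit from a surface without the SDL_SRCALPHA flag copies the alpha channel verbatim instead of blending it; the function name and variables are illustrative:

// Copy the 'clip' region of an RGBA surface into a new RGBA surface of the
// same size, preserving the alpha channel (SDL 1.2).
SDL_Surface* copy_with_alpha(SDL_Surface* src, SDL_Rect clip)
{
    SDL_Surface* dst = SDL_CreateRGBSurface(SDL_SWSURFACE, clip.w, clip.h, 32,
        src->format->Rmask, src->format->Gmask,
        src->format->Bmask, src->format->Amask);

    // With SDL_SRCALPHA cleared on the source, SDL_BlitSurface copies the
    // RGBA data straight across instead of alpha-blending it onto the blank
    // (fully transparent) destination.
    SDL_SetAlpha(src, 0, SDL_ALPHA_OPAQUE);
    SDL_BlitSurface(src, &clip, dst, NULL);

    // Restore per-pixel alpha blending for normal drawing afterwards.
    SDL_SetAlpha(src, SDL_SRCALPHA, SDL_ALPHA_OPAQUE);
    SDL_SetAlpha(dst, SDL_SRCALPHA, SDL_ALPHA_OPAQUE);
    return dst;
}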