I'm currently developing a 2D mobile game, and I want the camera to be able to pan around and zoom in and out. One method would be to render the entire level to a backbuffer and simply draw a region of it, scaled up or down, to the device screen. Angry Birds has the effect I'm looking for, where the user can zoom out and see the whole level.
This would take a huge amount of memory, though, since it needs a surface for the entire level. Another method would be to scale down each sprite and draw it in the view, but this may take more rendering power than resizing a single surface.
Another option is similar to the first: I could render the images onto a large fixed-size surface instead of keeping a surface for the whole level. Again, this would take a huge amount of memory, especially if the user wanted to zoom out really far.
I've never really developed for mobile before. Would scaling each individual image be too expensive? Should I scale each sprite and then render, or render a large portion of the map to a surface and downscale/upscale that surface?
You shouldn't have to mess with the size of the buffer you're drawing to. What you can do instead is adjust the viewport size, thereby fitting more of the world into your viewport (and thus onto the buffer). Sprites will of course be smaller, and if it becomes too slow you could implement some level of detail (LOD), for example "when the viewport is larger, draw the sprite using a smaller, pre-loaded image". What will still make the solution slow/"inefficient" is that you'll be drawing more sprites, but that's not something you can get around.
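As a rough, hypothetical sketch of that idea (the struct names and LOD thresholds below are made up, not from any particular engine), a single zoom factor can both scale the visible world rectangle and select a pre-scaled sprite image:

```cpp
// Hypothetical sketch: one zoom factor scales the visible world rectangle and
// also picks which pre-loaded, pre-scaled sprite image to draw (a crude LOD scheme).
struct Camera { float centerX, centerY, zoom; };   // zoom > 1.0 means "zoomed out"
struct Rect   { float x, y, w, h; };

Rect visibleWorldRect(const Camera& cam, float screenW, float screenH) {
    float w = screenW * cam.zoom;
    float h = screenH * cam.zoom;
    return { cam.centerX - w * 0.5f, cam.centerY - h * 0.5f, w, h };
}

// Index into an array of pre-loaded sprite images (full, half, quarter resolution).
int lodIndexForZoom(float zoom) {
    if (zoom < 1.5f) return 0;   // close in: full-size image
    if (zoom < 3.0f) return 1;   // mid zoom: half-size image
    return 2;                    // far out: quarter-size image
}
```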
I want to create an app in Cocos2d/Cocos2d-x in which I have an image that is not visible, but when I move my finger on the device it starts drawing: only the part of the image where I move my finger gets drawn.
Thanks in Advance
There are two ways I can think of to draw an image like this.
The first way would be like a brush. You would use a RenderTexture and draw/visit a brush sprite into that texture. If you only need to draw with solid colors (which can have opacity), you could also use the primitive draw commands (drawCircle, drawPoly, drawSegment). You will need a high rate of touch tracking, and will likely want to draw segments or Bezier curves between touch positions to catch fast movements.
http://discuss.cocos2d-x.org/t/using-rendertexture-to-render-one-sprite-multiple-times/16332/3
http://discuss.cocos2d-x.org/t/freehand-drawing-app-with-cocos2d-x-v3-3-using-rendertexture/17567/9
Searching on how other drawing games work would be useful.
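As a minimal sketch of the brush approach in Cocos2d-x v3, placed inside your layer's init() (the asset name "brush.png" and the capture setup are placeholders; for fast strokes you would also interpolate extra stamps between the previous and current touch position):

```cpp
// Create a render texture the size of the screen and add it to the scene.
auto visibleSize = Director::getInstance()->getVisibleSize();
auto canvas = RenderTexture::create(visibleSize.width, visibleSize.height);
canvas->setPosition(visibleSize.width / 2, visibleSize.height / 2);
this->addChild(canvas);

// "brush.png" is a hypothetical small, soft, white dot image.
auto brush = Sprite::create("brush.png");
brush->retain();   // kept alive manually since it is never added to the scene graph

auto listener = EventListenerTouchOneByOne::create();
listener->onTouchBegan = [](Touch*, Event*) { return true; };
listener->onTouchMoved = [=](Touch* touch, Event*) {
    // Stamp the brush into the render texture at the current touch position.
    canvas->begin();
    brush->setPosition(touch->getLocation());
    brush->visit();
    canvas->end();
};
_eventDispatcher->addEventListenerWithSceneGraphPriority(listener, this);
```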
The second way I can envision is more of a reveal effect using an inverse mask: the image is already set, but hidden, and you reveal it by drawing.
http://www.raywenderlich.com/4428/how-to-mask-a-sprite-with-cocos2d-2-0
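One alternative to the shader-based mask in that tutorial is a ClippingNode whose stencil you draw into as the finger moves; a rough sketch, again inside init(), with "picture.png" as a hypothetical asset:

```cpp
// The DrawNode acts as the stencil: wherever something is drawn into it,
// the child sprite of the ClippingNode becomes visible.
auto stencil = DrawNode::create();
auto clipper = ClippingNode::create(stencil);
clipper->setAlphaThreshold(0.05f);               // treat stamped pixels as "visible"
clipper->addChild(Sprite::create("picture.png")); // the hidden image to reveal
this->addChild(clipper);

auto listener = EventListenerTouchOneByOne::create();
listener->onTouchBegan = [](Touch*, Event*) { return true; };
listener->onTouchMoved = [=](Touch* touch, Event*) {
    // Each dot added to the stencil reveals that part of the hidden picture.
    stencil->drawDot(clipper->convertToNodeSpace(touch->getLocation()),
                     20.0f, Color4F::WHITE);
};
_eventDispatcher->addEventListenerWithSceneGraphPriority(listener, this);
```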
There are more elaborate ways to handle the drawing into the RenderTexture in order to have the brush design tile correctly and repeat based on a given size, but that'll involve making something closer to an image editing tool.
Using OpenGL, I am making a simple animation in which a small triangle moves along a path I have created with the mouse (glutMotionFunc).
The problem is: how can I animate the small triangle without redrawing the whole path every time I call glutSwapBuffers()?
Also, how can I rotate only that triangle?
I don't want to use an overlay, as switching between the two layers takes too much time.
If redrawing the whole path is really too expensive, you can do your rendering to an off-screen framebuffer. The mechanism for this in OpenGL is called a Framebuffer Object (FBO).
Explaining how to use FBOs in detail is beyond the scope of an answer here, but you should be able to find tutorials. You will be using functions like:
glGenFramebuffers()
glBindFramebuffer()
glFramebufferRenderbuffer() or glFramebufferTexture()
This way, you can draw just the additional triangle to your FBO whenever a new triangle is added. To show your rendering on screen, you can copy the current content of the FBO to the primary framebuffer using glBlitFramebuffer().
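A minimal sketch of that setup, assuming an OpenGL 3.x context is already created; width/height are the window size, and drawNewSegment()/drawTriangle() are hypothetical stand-ins for your own drawing code:

```cpp
GLuint fbo = 0, colorTex = 0;

void initFbo(int width, int height) {
    // Texture that will accumulate the path.
    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    // FBO with the texture as its color attachment.
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex, 0);
    glClear(GL_COLOR_BUFFER_BIT);              // start with an empty path texture
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}

void drawFrame(int width, int height) {
    // Draw only the newly added piece of the path into the FBO.
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    drawNewSegment();                          // hypothetical helper

    // Copy the accumulated path to the window, then draw the triangle on top.
    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
    glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);
    drawTriangle();                            // hypothetical helper
    glutSwapBuffers();
}
```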
You can't, because it just doesn't make sense!
A computer screen works the same way as film: FPS, frames per second. There is no such thing as "animation" on a screen; it is just a fast series of static images, but because our eyes cannot follow such rapid changes, it looks like motion.
This means that every time something changes in what you want to draw, you need to create a new "static image" of that state, which is what all the glVertex and related code does. Once you finish drawing, you put it on the screen by swapping your buffers.
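As a minimal sketch of that pattern with GLUT (the path points and triangle state are hypothetical, filled in by your motion callback elsewhere), note that the rotation is applied only to the triangle by wrapping it in its own matrix push/pop:

```cpp
#include <GL/glut.h>
#include <utility>
#include <vector>

// Hypothetical state, updated by the glutMotionFunc callback elsewhere.
std::vector<std::pair<float, float>> pathPoints;
float triX = 0.0f, triY = 0.0f, triAngle = 0.0f;

void display() {
    glClear(GL_COLOR_BUFFER_BIT);

    // Redraw the whole path every frame.
    glBegin(GL_LINE_STRIP);
    for (const auto& p : pathPoints)
        glVertex2f(p.first, p.second);
    glEnd();

    // Rotation affects only the triangle, via its own modelview transform.
    glPushMatrix();
    glTranslatef(triX, triY, 0.0f);
    glRotatef(triAngle, 0.0f, 0.0f, 1.0f);
    glBegin(GL_TRIANGLES);
    glVertex2f(-5.0f, -5.0f);
    glVertex2f( 5.0f, -5.0f);
    glVertex2f( 0.0f,  8.0f);
    glEnd();
    glPopMatrix();

    glutSwapBuffers();   // present the finished frame
}
```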
Background:
I am creating a game that presents the world in an isometric perspective, achieved by drawing isometric tiles. My current implementation is naive: the painter's algorithm, drawing from back to front and from bottom to top, using surface blits from tile images.
The Problem:
I'm concerned (maybe unduly so, please let me know if this is the case) about overdraw. Here's a small snapshot of a single layer of tiles:
The areas highlighted in pink are the areas where the back-to-front, bottom-to-top method blits pixels to the canvas more than once. This is a small and contrived example, but in practice I hope to accomplish something more along the lines of this:
(image credit eBoy)
With an image as complex as this, and a tile-based implementation, each screen pixel is drawn to several times before the final image is composited, which feels like it's really inefficient. Since these are just 2D images with, in the end, one-bit alpha masks, there aren't as many concerns as there would be with 3D (e.g. no wasted lighting or transform math) but it still seems there should be a more elegant way of determining whether a pixel should be drawn or not based on whether or not it would be occluded in the final composition.
Solutions?
The best solution I've come up with so far (a rough sketch follows the list) is to:
Reverse the drawing order and draw front-to-back, top-to-bottom.
Keep a single bit per pixel fake z buffer that records whether or not a pixel has been drawn yet.
Only draw a tile if some of the pixels it covers haven't been drawn yet.
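A rough sketch of that coverage-buffer idea in plain C++ (one byte per pixel instead of a packed bit array, for simplicity; tile contents and the actual blit are placeholders, and a real version would only test/mark the opaque pixels of each tile's alpha mask):

```cpp
#include <cstdint>
#include <vector>

struct Tile { int x, y, w, h; };   // screen-space rectangle the tile would cover

bool coversNewPixels(const std::vector<uint8_t>& covered, int screenW, const Tile& t) {
    for (int y = t.y; y < t.y + t.h; ++y)
        for (int x = t.x; x < t.x + t.w; ++x)
            if (!covered[y * screenW + x])
                return true;       // at least one pixel is still unwritten
    return false;
}

void markCovered(std::vector<uint8_t>& covered, int screenW, const Tile& t) {
    for (int y = t.y; y < t.y + t.h; ++y)
        for (int x = t.x; x < t.x + t.w; ++x)
            covered[y * screenW + x] = 1;
}

// Usage, drawing front-to-back, top-to-bottom (tiles assumed clipped to the screen):
//   std::vector<uint8_t> covered(screenW * screenH, 0);
//   for (const Tile& t : tilesFrontToBack)
//       if (coversNewPixels(covered, screenW, t)) { blit(t); markCovered(covered, screenW, t); }
```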
Is there a better way to do this? Are blit operations so efficient that I'm tilting at windmills here?
Windmills. Especially if you're using OpenGL-accelerated SDL2 blits.
So what I'm trying to do is copy the current front buffer to a texture and use this during a resize to mimic what a normal window resize would do. I'm doing this because the scene is too expensive to render during a resize and I want to provide a fluid resize.
The texture copying is fine, but I'm struggling to work out the maths to get the texture to scale and translate appropriately (I know there will be borders visible when magnifying beyond the largest image dimension).
Can anyone help me?
but I'm struggling to work out the maths to get the texture to scale and translate appropriately
Well, this depends on which axis your field of view is based on. If it's the vertical axis, then increasing the width/height ratio will inevitably lead to letterboxing left and right. Similarly, if your FOV is based on the horizontal axis, increasing height/width will letterbox top and bottom. If the aspect ratio changes in the opposite direction, you get no letterboxing, because no additional picture information is needed.
There's unfortunately no one-size-fits-all solution for this. Either you live with some borders, or you stretch the image without keeping its aspect ratio, or, before resizing the window, you render with a much larger FOV to a square texture of which you show only a subset.
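For the first option (living with borders), a small sketch of the scaling maths, assuming the FOV is fixed on the vertical axis and the cached copy was taken at the pre-resize window size:

```cpp
// Scale a full-screen quad in normalized device coordinates so the cached
// (texW x texH) front-buffer copy keeps its aspect ratio inside the resized
// (winW x winH) window, letterboxing whatever is left over.
void quadScaleForResize(int texW, int texH, int winW, int winH,
                        float& sx, float& sy) {
    float texAspect = float(texW) / float(texH);
    float winAspect = float(winW) / float(winH);
    sx = 1.0f;
    sy = 1.0f;
    if (winAspect > texAspect)
        sx = texAspect / winAspect;   // window got relatively wider: bars left/right
    else
        sy = winAspect / texAspect;   // window got relatively taller: bars top/bottom
    // Draw the textured quad with corners (-sx, -sy) .. (sx, sy) in NDC,
    // keeping texture coordinates at (0, 0) .. (1, 1).
}
```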
I am using a QGraphicsView with hundreds of large QGraphicsItems in it. The QGraphicsItems are essentially QGraphicsPixmapItems: I read raw data from a file, convert it into a QImage, and then paint it with QPainter::drawImage().
The problem is, after I start getting a certain number of these items in my scene, OR just a single, very large one, the scene becomes really slow to respond. If I move the items, or try to zoom in/out on them, etc, the scene just takes forever to refresh. I would like it to be more interactive, instead of constantly waiting on the scene to refresh once all the data is loaded.
Can OpenGL help me here? How would I go about doing this? Creating a 2D rectangle and painting a texture of my raw data on it? I have all of my QGraphicsItems in a QGraphicsItemGroup, with them essentially making up one large image. If I am zoomed out far enough to see all of my hundreds of QGraphicsItems, the big "image" of the tiles is at least 32000x32000 pixels. Can OpenGL even handle that if I were mapping these as textures on rectangles?
A 32000×32000 texture is roughly 1 gigapixel. With RGBA color at 4 bytes per pixel, that's about 4 GB of video memory.
You said your dimensions are even more than that, so do you have enough video RAM? Don't forget to include space for mipmap levels.
You could probably speed this up with careful use of OpenGL, but you're going to need to manage textures intelligently and not expect OpenGL to create a texture that large.
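As a small sketch of what "not expecting OpenGL to create a texture that large" means in practice, you can query the driver's per-texture limit and split the image into tiles accordingly (the 32000 figure is just the number from the question; a current OpenGL context, e.g. from the view's OpenGL viewport, is assumed):

```cpp
// Query the per-texture size limit before assuming one huge texture will work.
GLint maxTexSize = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTexSize);   // commonly 8192 or 16384 on desktop GPUs

// A 32000x32000 image would have to be split into roughly this many tiles per side,
// each uploaded as its own texture and drawn (or streamed in) only when its
// region of the scene is actually visible.
int tilesPerSide = (32000 + maxTexSize - 1) / maxTexSize;
```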