This is my first attempt at a 3D game. My basic idea is that the player stands inside a cube, with each wall (all 6 faces) rendered in 2D using a FrameBuffer and then drawn into 3D space. I am currently using decals, but I get weird bugs and I'm starting to think there may be a better approach. Is there?
Here is what it looks like so far:
Each wall is a 2D texture created using an FBO. For now only a solid color is drawn to each wall, but eventually there will be 2D sprites and such.
Sometimes the blue wall renders completely, and other times (like in the photo) black boxes show up. The red wall to the right is mostly fine, but black boxes appear on it too, giving it a weird illusion.
So basically, I just need a fast way (60 fps minimum) to render dynamic 2D textures in a 3D environment.
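To make the setup concrete, here is roughly the per-wall idea in plain OpenGL terms (a simplified sketch, not my exact code; the texture size, screen size, and the drawTexturedWallQuad helper are placeholders):

```cpp
// Sketch only: assumes a GL 3.x context and function loader are already set up.
GLuint fbo, wallTex;
glGenTextures(1, &wallTex);
glBindTexture(GL_TEXTURE_2D, wallTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, wallTex, 0);

// Each frame: draw the wall's 2D contents into the texture...
glViewport(0, 0, 512, 512);
glClearColor(0.0f, 0.0f, 1.0f, 1.0f);   // e.g. the blue wall
glClear(GL_COLOR_BUFFER_BIT);
// ...2D sprite drawing for this wall would go here...

// ...then draw that texture on the wall's quad in the 3D scene.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glViewport(0, 0, screenW, screenH);      // screenW/screenH: window size
glBindTexture(GL_TEXTURE_2D, wallTex);
drawTexturedWallQuad();                  // placeholder: one textured quad per cube face
```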
Thanks in advance.
I am working on a small 2D game where my wizard casts a spell, and I want to create an effect where the world warps, as if the spell were bending light much like hot air around a fire does. Right now I have a vertex shader warping the points of the rectangles I use to draw the world. There are two problems. The first is that there are not enough polygons in my simple 2D game for this to work seamlessly. The second is that my terrain is composed of hex tiles in a hex grid. Because the four corners of the rectangle polygons do not line up with the six points where the hex tiles join, warping the polygons causes the world to break apart and gaps appear between the tiles. I could change the world to use six-point hex polygons instead of rectangles with hex textures, but that would be out of scope.
Would it be possible to render my world somewhere offscreen, then grab the offscreen frame as a texture and render it again with a higher polygon count? At that point I would apply my warp vertex shader.
Also, is there another way to do this?
You want to do this as a post-processing effect in the pixel shader: take your previous render target and use it as input to the post-processing pass.
Guide for Rendering to Texture
http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-14-render-to-texture/
Fire does refraction, but I'd learn from the code in the swirl example below and modify it to affect the screen the way you want. Doing true refraction is a bit more difficult, but you can emulate it with the ideas in there by manipulating how you sample the UVs with noise.
http://www.geeks3d.com/20110428/shader-library-swirl-post-processing-filter-in-glsl/
This should get you pointed somewhat in the right direction.
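To sketch the idea (the uniform names and the fullscreen-pass plumbing here are my assumptions, not something from your code), the whole warp can live in the fragment shader of a single fullscreen pass over the previous render target:

```cpp
// Fragment shader for the fullscreen post-process pass, kept as a C string.
// The vertex stage is assumed to be a standard fullscreen quad that passes UVs.
const char* warpFragSrc = R"(
#version 330 core
uniform sampler2D sceneTex;   // the render target the scene was drawn into
uniform float time;           // seconds, for animating the shimmer
in vec2 uv;
out vec4 fragColor;

void main() {
    // Nudge the sample position with a small animated wobble. Swapping the
    // sin/cos terms for a noise-texture lookup gives a more fire-like shimmer,
    // along the lines of the swirl article linked above.
    vec2 offset = vec2(sin(uv.y * 40.0 + time * 5.0),
                       cos(uv.x * 40.0 + time * 5.0)) * 0.004;
    fragColor = texture(sceneTex, uv + offset);
}
)";
```

Because the distortion happens per pixel rather than per vertex, the hex tiles can stay as plain quads; the scene's polygon count no longer matters.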
I need some help with surface-area selection on a 3D model rendered in OpenGL, picking points with the mouse. I know how to get a point in world coordinates but can't find a way to select an area. Later I need to remesh that selected area and map an image over it, which I know how to do.
Well, OpenGL by itself can't help you there. OpenGL is a drawing API: you draw things, but once the drawing commands have been executed, all that's left are pixels in a framebuffer, and OpenGL has no recollection of the geometry whatsoever.
You can use OpenGL to implement image-based area-selection algorithms, for example by drawing each face with a unique index color into an off-screen framebuffer. Then, by looking at which values appear there, you know which faces are present in a given area.
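A rough sketch of that approach (pickFBO, drawFaceFlat, and the selection rectangle variables are placeholders for whatever your renderer provides):

```cpp
#include <set>
#include <vector>

// Pass 1: draw every face flat-shaded with its index encoded as an RGB color.
glBindFramebuffer(GL_FRAMEBUFFER, pickFBO);        // off-screen, window-sized
glClearColor(1.0f, 1.0f, 1.0f, 1.0f);              // white = "no face here"
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
for (unsigned i = 0; i < faceCount; ++i) {
    // 24-bit face index -> one unique color (no lighting, blending, or AA).
    float r = ((i      ) & 0xFF) / 255.0f;
    float g = ((i >>  8) & 0xFF) / 255.0f;
    float b = ((i >> 16) & 0xFF) / 255.0f;
    drawFaceFlat(i, r, g, b);                      // placeholder helper
}

// Pass 2: read back the selected screen rectangle and decode the indices.
std::vector<unsigned char> px(selW * selH * 4);
glReadPixels(selX, selY, selW, selH, GL_RGBA, GL_UNSIGNED_BYTE, px.data());
std::set<unsigned> selectedFaces;
for (std::size_t p = 0; p < px.size(); p += 4) {
    unsigned idx = px[p] | (px[p + 1] << 8) | (px[p + 2] << 16);
    if (idx != 0xFFFFFF)                           // skip the background color
        selectedFaces.insert(idx);                 // every face visible in the area
}
```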
"Later I need to remesh"
This is called topology modification and is completely outside the scope of OpenGL.
"that selected area and map an image over it, which I know how to do"
You can use an image-based approach for this as well, but you must first decide how you want to map images onto faces. If you want to unwrap the mesh, OpenGL is of no help. However, if you want the user to be able to "directly draw" onto the mesh, this can be done by drawing texture coordinates into another off-screen framebuffer, thereby reverse-mapping screen coordinates to texture coordinates.
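For the "directly draw" route, the reverse-mapping pass can be as small as this (a sketch: render the mesh once with it into a floating-point attachment such as GL_RG32F, and a glReadPixels at the cursor then yields the texture coordinate under the mouse):

```cpp
// Fragment shader for the reverse-mapping pass, kept as a C string.
const char* uvFragSrc = R"(
#version 330 core
in vec2 texCoord;   // the mesh's ordinary texture coordinates
out vec4 fragColor;
void main() {
    // Write (u, v) into the red/green channels of the off-screen target.
    fragColor = vec4(texCoord, 0.0, 1.0);
}
)";
```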
I'm making a 2D game that uses DirectX. Currently, I have a background texture (with more to come) that I draw to the screen. However, I only want a portion of the texture drawn. I know that I could use a source rectangle with the draw function, but I need a greater degree of control. Is there a way to draw several triangles (using custom vertices) to the screen from my texture? I've looked around the internet and this site, but I just can't seem to find what I want. I can give more information/code if needed. Thank you!
I've been working on a new game and finally reached the point where I started coding the motion of my main character, but I have doubts about how to do it.
Previously, I made two games in Allegro, where spritesheets are fairly easy to handle: I work out each frame's position on the image and save every frame into a separate bitmap. But I know that doing it that way in OpenGL isn't necessary and costs a bit more.
So I've been thinking about how to store my spritesheet and use it in my program, and I have only one idea.
I load the image and turn it into a texture, and in my animation helper function I simply grab a portion of the texture to draw, instead of storing every single frame as its own texture.
Is this the best way to do it?
Thanks in advance for the help.
You're on the right track.
Things to consider:
- Leave enough dead space around each sprite so that the video card does not blend in texels from adjacent sprites at small scales.
- Set texture min/mag filtering appropriately. GL_NEAREST is OK if you're going for the blocky look.
- If you want to be fancy and save some texture memory, there's no reason the sprites have to be laid out in a regular grid. Smaller sprites can be packed closer together in the texture.
- If your sprites are rendered from 3D models, you could output normal and displacement maps from the model into another texture, then combine them in a fragment shader for some awesome lighting and self-shadowing.
You've got the right idea: if you have a bunch of sprites, it is much better to stick them all in one big texture. Just draw your sprites as textured quads whose texture coordinates index into the sprite's frame. There are a few optimizations you can make, but most of them revolve around getting the most out of your texture memory and packing the sprites closely together without blending issues.
"I know that doing it that way in OpenGL isn't necessary and costs a bit more."
Why not? There are no real downsides to putting a lot of sprites into a single texture. All you need to do is change the texture coordinates to pick the region in question out of the texture.
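To make both answers concrete, here is a sketch of the texture-coordinate math for a regular grid atlas (the half-texel inset implements the dead-space advice from the list above; the 512 px atlas size is an assumption, adjust to yours):

```cpp
struct UVRect { float u0, v0, u1, v1; };

// Texture-coordinate rectangle of frame `i` in a cols x rows grid atlas.
// The inset pulls the UVs in by half a texel so linear filtering does not
// bleed in texels from the neighboring sprites.
UVRect frameUV(int i, int cols, int rows, float inset = 0.5f / 512.0f) {
    float fw = 1.0f / cols;
    float fh = 1.0f / rows;
    int cx = i % cols;
    int cy = i / cols;
    return { cx * fw + inset,       cy * fh + inset,
             (cx + 1) * fw - inset, (cy + 1) * fh - inset };
}
// A frame is then drawn as a single quad whose four vertices get
// (u0,v0), (u1,v0), (u1,v1), (u0,v1) as their texture coordinates.
```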
I am currently using glLogicOp() with a cube, which I render twice: once with glFrontFace(GL_CW) and then with glFrontFace(GL_CCW). This lets me see which area of the other 3D object my cube overlaps.
But I want to change the negative color to something else, say a 50% transparent blue.
How can this be done? Sorry about the title; I don't know the name of this technique.
--
Also, I am having a problem when my camera is inside the cube: I need to fill the screen with the negative coloring. Is there any way other than switching to 2D mode and drawing a quad with glLogicOp() enabled? There is also a chance of seeing buggy rendering when the camera sits right at the edge of the cube surface; any ideas for preventing this reliably?
You should look into the "Carmack's reverse" algorithm and stencil shadow algorithms in general, as your problem is closely related to them (your cube is effectively a shadow-volume object). You will not get away with glLogicOp() if you want colors other than black and white.
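A sketch of the stencil-volume version, using the depth-fail variant, which also stays correct when the camera is inside the cube and so addresses your second problem (drawCube and drawFullscreenQuad are placeholders; a stencil buffer must have been requested for the framebuffer, and the cube must be closed and not clipped by the far plane):

```cpp
// The scene has already been rendered normally, so the depth buffer is filled.
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_FALSE);                                  // read depth, don't write it
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);    // pass 1 touches stencil only
glEnable(GL_STENCIL_TEST);
glClear(GL_STENCIL_BUFFER_BIT);
glStencilFunc(GL_ALWAYS, 0, ~0u);

// Depth-fail ("Carmack's reverse"): count volume faces hidden behind the scene.
glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT);                                   // draw back faces only
glStencilOp(GL_KEEP, GL_INCR_WRAP, GL_KEEP);            // +1 where the depth test fails
drawCube();
glCullFace(GL_BACK);                                    // draw front faces only
glStencilOp(GL_KEEP, GL_DECR_WRAP, GL_KEEP);            // -1 where the depth test fails
drawCube();

// Pass 2: tint every pixel inside the volume with 50% transparent blue.
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDisable(GL_DEPTH_TEST);                               // the quad covers the whole screen
glStencilFunc(GL_NOTEQUAL, 0, ~0u);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
drawFullscreenQuad(0.0f, 0.0f, 1.0f, 0.5f);             // r, g, b, a
glDisable(GL_STENCIL_TEST);
glDepthMask(GL_TRUE);
```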