I'm in the process of converting the rendering of my top down 2d game from the SDL2 draw functions to direct OpenGL, so that I can take advantage of batch rendering and shaders.
Currently, for drawing my various sprites, I use an array of layers, and each layer has an array of gameObjects. When rendering, I simply loop through the layers and all the gameObjects in each layer so that everything is drawn in the right order, i.e. layer 0 is background objects, layer 1 is the main level, and layer 2 is foreground.
While designing the OpenGL code, I'm realizing that I now have the option to draw on the Z axis. So I'm wondering: does it make sense to get rid of my layer arrays and instead give each gameObject a layer value that I use as its Z coordinate? I could have Z values 3, 2 and 1, and just let the OpenGL draw calls (with depth testing) decide what goes on top of or in front of what.
Some of my textures WILL have transparent parts, so I'll need that to work as well.
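Roughly what I have in mind (an untested sketch, assuming a GL 3.3+ core profile, one textured quad per sprite, and a layer-to-Z mapping of my own choosing):

    // Enable once at startup: the depth buffer then resolves draw order by Z/layer.
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    // Fragment shader (embedded as a C++ raw string): cut-out transparency via discard,
    // so fully transparent texels never write depth and never hide sprites behind them.
    const char* spriteFragSrc = R"glsl(
        #version 330 core
        in vec2 vUV;
        out vec4 fragColor;
        uniform sampler2D uTexture;
        void main()
        {
            vec4 texel = texture(uTexture, vUV);
            if (texel.a < 0.1)
                discard;
            fragColor = texel;
        }
    )glsl";

    // Each sprite's vertex Z could simply be float(layer), with an orthographic
    // projection whose near/far range covers those values.

I understand that truly semi-transparent pixels would still need back-to-front drawing, but for simple cut-out sprites the discard should be enough.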
At the moment I have a function where I pass in a vector2 for position and two ints for the width and height of the image that I want to draw.
It draws a triangle strip and uses the FreeImage library to apply a texture to this square.
However, I have gotten to the point where I want to perform more advanced stuff to these objects, such as setting up Child/Parent relationships, or rotating and scaling the objects.
All of this will become 100x easier if I can figure out how to draw squares with a matrix instead of my current way of doing it.
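I'm picturing something like this (a rough sketch using GLM; the uniform name "uModel" is made up):

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>
    #include <glm/gtc/type_ptr.hpp>

    // Build a model matrix for a unit quad: position, rotation and size all become
    // matrix operations instead of baked-in vertex coordinates.
    glm::mat4 makeModel(glm::vec2 pos, glm::vec2 size, float angleRadians)
    {
        glm::mat4 m(1.0f);
        m = glm::translate(m, glm::vec3(pos, 0.0f));                    // position
        m = glm::rotate(m, angleRadians, glm::vec3(0.0f, 0.0f, 1.0f));  // rotate about Z
        m = glm::scale(m, glm::vec3(size, 1.0f));                       // width/height
        return m;
    }

    // Parent/child: the child's world matrix is the parent's world matrix times its own.
    // glm::mat4 childWorld = parentWorld * makeModel(childPos, childSize, childAngle);

    // Upload to the shader before drawing the quad:
    // glUniformMatrix4fv(glGetUniformLocation(program, "uModel"), 1, GL_FALSE,
    //                    glm::value_ptr(childWorld));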
Let's say I have this image, and in it is an object (a cube). That object is being tracked (with labels) and I manage to render a virtual cube onto it (augmented reality). Now that I can render a virtual cube onto it, I want to be able to make the object 'disappear' with a really basic diminished-reality technique called "inpainting". The inpainting in question is pretty simple (it has to be, or the FPS will suffer) and it requires me to do some operations on pixels and their neighbors (as with Gaussian blur or other basic image processing).
To do that I first need:
A mask: black background with a white cube in it.
Access each pixel of the initial image (at coordinates x and y) as well as its neighborhood and do stuff based on the pixel value of the mask at the same x and y coordinates. So basically the mask serves as a way to say ignore this pixel or use this pixel.
How do I do this using OpenGL? I want to be able to access pixel values 1 by 1 preferably in 2D because of the neighbors.
Do I use FBOs or PBOs? I've read many things about buffers and methods like glDrawPixels(), but I'm having trouble putting them all together. The paper I saw this method in used the GL_BACK buffer, but mine is already used. Some sample code (C++) would be really appreciated, with all the formalities (OpenGL calls), since I'm still a beginner in OpenGL.
I'm even thinking of using OpenCV if pixel manipulation is too hard in OpenGL since my AR library (Aruco) works on top of OpenCV. In that case I will still need to get the mask (white cube on black background), convert it to a cv::Mat and then do my processing.
I know this approach is inefficient (going back and forth from the GPU/CPU) but my goal (for now) is to at least make the basics work.
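For that OpenCV fallback, I'm imagining something like this (untested; assumes the GL context is current and GLEW has been initialized):

    #include <GL/glew.h>
    #include <opencv2/opencv.hpp>

    // Read the current framebuffer back into a cv::Mat for CPU-side processing.
    cv::Mat readFramebufferToMat(int width, int height)
    {
        cv::Mat img(height, width, CV_8UC3);
        glPixelStorei(GL_PACK_ALIGNMENT, 1);                           // rows tightly packed
        glReadPixels(0, 0, width, height, GL_BGR, GL_UNSIGNED_BYTE, img.data);
        cv::flip(img, img, 0);                                         // GL's origin is bottom-left
        return img;
    }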
Set up a framebuffer object to render your original image + virtual cube into. Here's a tutorial.
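Roughly like this (error handling and cleanup omitted; width and height are your image size):

    // Create a colour texture and attach it to a new framebuffer object.
    GLuint fbo, colorTex;
    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        // handle the error
    }
    glBindFramebuffer(GL_FRAMEBUFFER, 0);   // back to the default framebuffer

    // Later: bind fbo, render the camera image + virtual cube into it, unbind, and
    // colorTex holds the result to sample in the next pass.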
Next you can attach that framebuffer texture as an input (sampler) texture of your next stage and render a quad (two triangles) that covers your mask.
In the fragment shader you can get your "screen coordinate" by reading the built-in variable gl_FragCoord and dividing it by the texture size. With the texture filter functions set to GL_NEAREST you hit the exact texel, and the neighboring pixels are one texel away (deltaX = 1/Width, deltaY = 1/Height).
Using a previous framebuffer texture as source is mandatory, as the currently active framebuffer is write only.
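As a sketch, the second-pass fragment shader could look something like this (GLSL 3.30; the uniform names are placeholders, and the "inpainting" here is just a 4-neighbour average to show the mask test and the neighbour offsets):

    const char* inpaintFragSrc = R"glsl(
        #version 330 core
        out vec4 fragColor;
        uniform sampler2D uScene;   // FBO texture: original image + virtual cube
        uniform sampler2D uMask;    // black background, white cube
        uniform vec2 uTexelSize;    // vec2(1.0/width, 1.0/height)

        void main()
        {
            vec2 uv = gl_FragCoord.xy * uTexelSize;     // pixel -> texture coordinates
            if (texture(uMask, uv).r < 0.5) {
                fragColor = texture(uScene, uv);        // outside the mask: pass through
                return;
            }
            // Inside the mask: average the 4 direct neighbours (replace with your inpainting).
            vec4 sum = texture(uScene, uv + vec2( uTexelSize.x, 0.0))
                     + texture(uScene, uv + vec2(-uTexelSize.x, 0.0))
                     + texture(uScene, uv + vec2(0.0,  uTexelSize.y))
                     + texture(uScene, uv + vec2(0.0, -uTexelSize.y));
            fragColor = sum * 0.25;
        }
    )glsl";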
I'm doing a 3D asteroids game in Windows (using OpenGL and GLUT) where you move in space through a bunch of obstacles and survive. I'm looking for a way to set an image background instead of the boring background color options. I'm new to OpenGL and all I can think of is to texture map a sphere and set it to a ridiculously large radius. What is the standard way of setting an image background in a 3D game?
The standard method is to draw two texture-mapped triangles whose coordinates are x,y = ±1, z = 0, w = 1, with both the camera and perspective matrices set to the identity matrix.
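For example, in fixed-function style (which fits a GLUT project), assuming backgroundTexture was loaded earlier; draw this first each frame, then the rest of the scene on top:

    void drawBackground(GLuint backgroundTexture)
    {
        glMatrixMode(GL_PROJECTION);
        glPushMatrix();
        glLoadIdentity();
        glMatrixMode(GL_MODELVIEW);
        glPushMatrix();
        glLoadIdentity();

        glDepthMask(GL_FALSE);                     // the background never writes depth
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, backgroundTexture);

        glBegin(GL_TRIANGLE_STRIP);                // full-screen quad at x,y = +-1
        glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
        glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
        glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
        glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
        glEnd();

        glDepthMask(GL_TRUE);
        glMatrixMode(GL_PROJECTION);
        glPopMatrix();
        glMatrixMode(GL_MODELVIEW);
        glPopMatrix();
    }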
Of course, in the context of a 'space' game where one might want the background to rotate, the natural choice is to render a cube with a cubemap (perhaps showing galaxies). As depth buffering is turned off during the background rendering, the cube doesn't even have to be "infinitely" large. A unit cube will do, since there is no way to tell how close the camera is to it.
I want to know if it is possible to have multiple layers which can be manipulated independently and displayed in an overlapping manner.
Here is what I want to do. I'm implementing a Turtle Graphics API and I want to animate the turtle's movement. I was wondering if I could have all the graphics in one layer and the turtle (which I'm representing with a small isosceles triangle) alone in another layer, so that I can erase the turtle by clearing its layer, without affecting the graphics layer, and redraw the turtle at another location/orientation on the turtle plane.
OpenGL is not a scene graph.
OpenGL is (generally) not a classic 2D framebuffer where you want to try to minimize redraws. With OpenGL you'll generally be redrawing the entire scene each frame after clearing the depth and color buffers.
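In other words, the typical per-frame structure is simply this (the function names are placeholders for your own code):

    void display()
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        drawTurtlePath();    // everything the turtle has drawn so far
        drawTurtle();        // the triangle at its current position/heading
        glutSwapBuffers();   // or whatever swap call your windowing toolkit uses
    }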
You have several options:
1) Disable the depth buffer/depth check and render your layers back to front.
2) Make sure each of your layers has an appropriate Z coordinate and render them in whatever order, letting the Z buffer take care of getting the layering right. Won't work if your layers are translucent.
3) Render your turtle path to a texture via whatever method you feel like supporting (glCopyPixels(), PBOs, FBOs, cairo). Render a screen-sized textured quad and your turtle on top.
4) Redraw your turtle path in full each frame, oldest point first. This shouldn't be slow unless your line count is in the hundreds of thousands. Make sure to use vertex arrays or VBOs (see the sketch below).
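A rough sketch of option 4, assuming the path points live in a std::vector<float> of interleaved x,y pairs and pathVbo was created earlier with glGenBuffers():

    // Redraw the whole path as one line strip from a VBO, then draw the turtle on top.
    glBindBuffer(GL_ARRAY_BUFFER, pathVbo);
    glBufferData(GL_ARRAY_BUFFER, points.size() * sizeof(float), points.data(), GL_DYNAMIC_DRAW);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, nullptr);
    glDrawArrays(GL_LINE_STRIP, 0, GLsizei(points.size() / 2));
    glDisableClientState(GL_VERTEX_ARRAY);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    // ...then draw the turtle triangle at its current position/orientation.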
I want to render a fire effect in OpenGL based on a particle simulation. I have hundreds of particles, each with a position and a temperature (and therefore a color), along with all their other properties. Simply rendering a solid sphere per particle with GLUT doesn't look very realistic, as the particles are spread too wide. How can I draw the fire based on the particles' information?
If you are just trying to create a realistic fire effect, I would use some kind of pre-existing library, as recommended in other answers. But it seems to me that you are after a display of the simulation itself.
A direct solution worth trying might be to replace your current spheres with billboards (i.e. images that always face toward the camera) which are solid white in the middle and fade to transparent towards the edges, obviously positioning and colouring the images according to your particles.
A better solution, I feel, is to approach the flame as a set of 2D grids on which you can control the transparency and colour of each vertex. One could do this in OpenGL by constructing a plane from quads and using your particle system to calculate (via interpolation from the nearest particles) the colour and transparency of each vertex. OpenGL will interpolate each pixel between vertices for you and give you a smooth-looking picture of the 'average particles in the area'.
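A rough sketch of that grid idea in immediate mode (Colour, colourAt(), gridX() and gridY() are stand-ins for your own particle interpolation and grid layout):

    void drawFlameGrid(int gridW, int gridH)
    {
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE);     // additive blending suits fire
        glDepthMask(GL_FALSE);                 // translucent pass: don't write depth
        glBegin(GL_QUADS);
        for (int i = 0; i < gridW - 1; ++i)
            for (int j = 0; j < gridH - 1; ++j)
                for (int k = 0; k < 4; ++k) {
                    int di = (k == 1 || k == 2) ? 1 : 0;   // walk the quad's 4 corners
                    int dj = (k >= 2) ? 1 : 0;
                    Colour c = colourAt(i + di, j + dj);   // interpolated from nearby particles
                    glColor4f(c.r, c.g, c.b, c.a);
                    glVertex3f(gridX(i + di), gridY(j + dj), 0.0f);
                }
        glEnd();
        glDepthMask(GL_TRUE);
    }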
You probably want to use a particle system to render a fire effect; here's a NeHe tutorial on how to do just that: http://nehe.gamedev.net/data/lessons/lesson.asp?lesson=19