If I were to place a texture on the surface of a 3D object, for example a cube, I could use the vertices of that cube to describe the placement of this texture.
But what if I want to place multiple separate images on the same flat surface? Or suppose it is just one image, but I don't want it to appear at the edges of the surface, where the vertices are, but rather small and somewhere in the middle of the surface. I want the actual images to be chosen and placed dynamically at runtime; otherwise I could just combine them offline into a single texture.
I have an approach but I want to seek advice as to whether there is a better method, or if this is perfectly acceptable:
My guess is to create multiple separate 2D quads (with a depth of 0), each with its own texture applied (they could of course share a texture atlas and differ only in texture coordinates).
Then, I transform these quads such that they appear to be on the surface of a 3D object, such as a cube. Of course I'd have to maintain a matrix hierarchy so these quads are transformed appropriately whenever the cube is transformed, such that they appear to be attached to the cube.
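In rough pseudocode, what I have in mind looks something like this (the types and helper functions here are just placeholders, not a real API):

// Sketch only: a small textured quad that follows its parent cube.
struct DecalQuad {
    GLuint texture;      // or an atlas texture plus a sub-rectangle of UVs
    Mat4   localToCube;  // places the quad somewhere on one face of the cube
};

struct Cube {
    Mat4 modelMatrix;               // the cube's own transform
    std::vector<DecalQuad> decals;  // quads "attached" to its surface
};

// Each frame the quad's final transform is parent * local, so moving the
// cube automatically moves the decals with it.
void drawDecals(const Cube& cube) {
    for (const DecalQuad& d : cube.decals) {
        Mat4 world = cube.modelMatrix * d.localToCube;
        glBindTexture(GL_TEXTURE_2D, d.texture);
        drawUnitQuad(world);  // placeholder draw call
    }
}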
While this isn't necessarily hard, I am new to texturing and would like to know if this is a normal practice for something like this.
You could try rendering the scene to a texture and then using that texture on the surface.
Check out glCopyTexImage2D() or glCopyTexSubImage2D().
Or perhaps try using frame buffer objects.
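For the FBO route, a minimal render-to-texture setup looks roughly like this (error checking omitted; the 512x512 size is arbitrary):

// Create the texture that will receive the rendered image.
GLuint colorTex, fbo;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// Attach it to a framebuffer object and render into it.
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);

// ... draw the images/sub-scene here; the result ends up in colorTex ...

glBindFramebuffer(GL_FRAMEBUFFER, 0);
// Now bind colorTex as the texture of the cube face.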
But what if I want to place multiple separate images on the same flat surface?
Use multiple textures, maybe each with its own set of texture coordinates. Your OpenGL implementation will offer you a number of texture units. Each of them can supply a different texture.
glActiveTexture(GL_TEXTURE0 + i);
glBindTexture(…);
glUniform1i(texturesampler[i], i); // texturesampler[i] contains the sampler uniform location of the bound program.
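On the shader side, the bound units end up in separate samplers. A minimal fragment shader that layers two of them could look like this (the sampler and varying names are just an example):

// GLSL, embedded as a string for illustration.
const char* fragmentSrc = R"(
    #version 330 core
    uniform sampler2D tex0;   // texture unit 0
    uniform sampler2D tex1;   // texture unit 1
    in vec2 uv0;
    in vec2 uv1;
    out vec4 fragColor;
    void main() {
        vec4 base  = texture(tex0, uv0);
        vec4 decal = texture(tex1, uv1);
        fragColor = mix(base, decal, decal.a);  // draw the decal over the base
    }
)";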
Or suppose it is just one image, but I don't want it to appear at the edges of the surface, where the vertices are, but rather somewhere small and in the middle of the surface.
That's where GL_CLAMP… texture wrap modes get their use.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_{S,T,R}, GL_CLAMP[_TO_{EDGE,BORDER}]);
With those you can give the vertices texture coordinates outside the [0, 1] interval; instead of repeating, the image then shows up only once, with only its edge pixels (or the border color) extended beyond it. If you make those pixels transparent, it's as if there were no image there.
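Concretely, for an image that should appear only once, somewhere in the middle of the face, a sketch could be (decalTex stands for your image's texture object):

// Everything outside [0, 1] samples a transparent border instead of repeating.
glBindTexture(GL_TEXTURE_2D, decalTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
const GLfloat transparent[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, transparent);

// With texture coordinates running from, say, -2 to 3 across the face, the
// image occupies only the middle fifth of the surface.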
I have a texture atlas that gets generated during gameplay. Textures are loaded and unloaded depending on which textures are visible when the player moves around.
The problem arises when there are too many textures to fit in the atlas and I need to resize it: if I simply resize the atlas by copying it into a larger texture, then I would need to update the UV coordinates of every quad and re-render everything.
Is it possible to somehow create an OpenGL texture and define its width and height in UV coordinates to be something other than 1?
One solution would be to use texel coordinates rather than normalized coordinates (assuming you are not going to scale or move the images around within the atlas when you make it bigger). If you can live with their limitations (no wrapping, no mipmaps, only linear filtering), you could just use Rectangle Textures and call it a day. Alternatively, you could simply apply scaling factors to the texture coordinates in your vertex shader, either by passing them as uniforms or directly querying the texture dimensions in the shader using the textureSize() GLSL function.
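As a sketch of that last option, the vertex shader can take UVs stored in texels and divide by whatever size the atlas currently has, so enlarging the atlas never invalidates them (the names here are just an example):

// GLSL vertex shader, embedded as a string for illustration.
const char* atlasVS = R"(
    #version 330 core
    layout(location = 0) in vec2 position;
    layout(location = 1) in vec2 uvTexels;   // UVs stored in texels, not [0, 1]
    uniform sampler2D atlas;
    out vec2 uv;
    void main() {
        vec2 atlasSize = vec2(textureSize(atlas, 0));  // current atlas dimensions
        uv = uvTexels / atlasSize;                     // normalize at draw time
        gl_Position = vec4(position, 0.0, 1.0);
    }
)";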
Alternatively, you might consider using an Array Texture to hold multiple atlases. Instead of enlarging an existing atlas to make room for more subimages, you could just start a new atlas in another array slice. That way, the texture coordinates of all existing subimages would stay the same…
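A sketch of the array-texture idea (array textures need GL 3.0+; glTexStorage3D needs GL 4.2+ or ARB_texture_storage; sizes and the variables x, y, page, w, h, pixels are placeholders supplied by your atlas allocator):

// One texture object holding several 1024x1024 atlas "pages" as array layers.
GLuint atlasArray;
glGenTextures(1, &atlasArray);
glBindTexture(GL_TEXTURE_2D_ARRAY, atlasArray);
glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGBA8, 1024, 1024, 4);  // 4 layers up front

// Upload a new subimage into position (x, y) of layer "page"; the other
// layers, and the coordinates of everything already stored, are untouched.
glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0,
                x, y, page,
                w, h, 1,
                GL_RGBA, GL_UNSIGNED_BYTE, pixels);

// In GLSL the lookup becomes texture(sampler2DArray, vec3(uv, page)).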
As part of my 2D game engine, I would like to be able to set the order (from back to front) in which my sprites are rendered on screen by manipulating their z-index. This z-index sprite property is a floating-point value which can range between the far and near planes of my orthographic projection (in the range of (-1.0, 1.0]).
Currently, in order to minimize unnecessary texture switching, I store my sprites in an unordered dictionary, where the keys are textures and the values are the corresponding ordered lists of sprite quads using that particular texture. This dictionary is traversed every frame to populate a giant VBO with all of the appropriate per-vertex attributes (position, texcoords, and a mat4 modelview matrix). This is great since I then only need to make one texture bind for each texture, and I have been pretty happy with its performance.
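For reference, the batching layout is essentially this (simplified sketch; Sprite stands for my engine's sprite type, and the real vertex also carries the mat4 modelview matrix as attributes):

struct SpriteVertex {
    float position[3];   // z is the sprite's z-index
    float texcoord[2];
};

// One ordered sprite list per texture; the dictionary itself is unordered.
std::unordered_map<GLuint, std::vector<Sprite>> batches;

// Each frame: walk the map, append every sprite's four vertices to one big
// CPU-side array, upload it with glBufferData, then issue one draw per texture.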
While this z-ordering issue has always been there in my code, it became really obvious when I was working with translucent textures and enabled alpha blending, as rendering sprites in the wrong order resulted in some sprites appearing opaque instead! After some testing, I have come to the following conclusions:
The order of my texture keys matters as this determines which corresponding sprite lists are written to my giant VBO first. This basically means that all of the sprites of the first texture appear below all of the sprites of the second texture.
If two sprites have the same texture, then the relative order of the two sprites in the sprite list associated with that texture matters (first sprite appears on the bottom again).
If I call glEnable(GL_DEPTH_TEST), then I need to set the z-index in increasing order to match the order of the sprite list; otherwise I get incorrectly opaque sprites.
If I call glDisable(GL_DEPTH_TEST), then the z-index values I set for the sprites are (obviously) ignored, so only rules 1 and 2 apply.
My question is, given a set of translucent and opaque sprites with various textures, how can I best order my sprites in my giant VBO every frame so as to minimize texture changes? Are the rules different for handling opaque and translucent sprites (and should they be handled in separate passes)? I also read that alpha blending in OpenGL is order-dependent and that there are apparently some order-independent transparency techniques to get around this, but that went over my head, so any light that could be shed on those kinds of techniques would be appreciated as well.
Here is a task that every GIS application can do: given some polygons, fill each polygon with a chosen color. Like this: [image: polygons filled with different colors]
What is the best way of doing this repeatedly in OpenGL? That is, the polygons do not change, and I want to vary the data used for coloring to produce different renderings.
Redrawing polygons for each rendering is the most straightforward solution, but it seems to be a waste, since the geometries do not change at all.
Or is it better to create a stencil for each polygon, and stencil print the entire map? If there are too many polygons, will doing hundreds or thousands of rendering passes create a problem?
Map a color to each vertex of a polygon. That means that when you send the data to the shaders, each vertex carries two attributes: a position vector, which is needed in the vertex shader, and a color vector, which will be used for the fragment color. That is the simplest way.
For example, think of a triangle drawn in OpenGL: if you send its vertices to the vertex shader and output a color from the fragment shader, every vertex that enters the pipeline is positioned accordingly and drawn on screen with the color given by the fragment shader.
The technique I am (poorly, sorry) explaining is the one used in the classic colored-triangle example, where the colors interpolate: red mapped to one corner, green to another, and blue to the last. If you instead map red to every corner, you get a uniformly red triangle. That is the basic principle. You also draw the minimum number of triangles, and you only need one pair of shaders.
Note: a polygon is made out of N triangles, and you need to map the same color to every vertex of every triangle drawn for that polygon.
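As a sketch, the data setup for one flat-colored triangle could look like this (the attribute locations and the vbo handle are assumptions; the vertex shader just passes the color through and the fragment shader outputs it):

// Interleaved position (x, y) + color (r, g, b); every vertex of the
// polygon's triangles gets the same color, so the fill comes out flat.
const GLfloat triangle[] = {
//    x      y      r     g     b
    -0.5f, -0.5f,  0.2f, 0.6f, 0.9f,
     0.5f, -0.5f,  0.2f, 0.6f, 0.9f,
     0.0f,  0.5f,  0.2f, 0.6f, 0.9f,
};

glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(triangle), triangle, GL_STATIC_DRAW);

const GLsizei stride = 5 * sizeof(GLfloat);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, stride, (void*)0);                     // position
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, stride, (void*)(2 * sizeof(GLfloat))); // color
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);

Since only the colors change between renderings, they could also live in their own separate VBO, so recoloring is just one glBufferSubData call while the positions stay untouched.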
I think a bigger issue will be that OpenGL doesn't support polygons or vector drawing in general, but there are libraries for this. You'll have to use an existing solution for vector drawing, or failing that, you'll have to convert from your GIS data (usually a list of points for a polygon) to triangles. This is likely the biggest obstacle.
The fact that the geometry doesn't change isn't really an issue: you would generally store the geometry in one or more buffers, then add logic to draw only what is visible inside your view area, perhaps even going as far as generating geometry only for the visible area.
See also this question and its answers.
Rendering Vector Graphics in OpenGL?
I am trying to create shadow maps of many objects in a sceneRoom, with their shadows being projected onto the sceneRoom. Until now I've been able to project the shadows of the sceneRoom onto itself, but I want to project the shadows of other objects in the sceneRoom onto the sceneRoom's floor.
Is it possible to create multiple depth textures in one framebuffer? Or should I use several framebuffers, each with one depth texture?
There is only one GL_DEPTH_ATTACHMENT point, so you can have at most one depth buffer attached at any time. You have to use some other method.
No, there is only one attachment point (well, technically two if you count GL_DEPTH_STENCIL_ATTACHMENT) for depth in an FBO. You can only attach one thing to the depth, but that does not mean you are limited to a single image.
You can use an array texture to store multiple depth images and then attach this array texture to GL_DEPTH_ATTACHMENT.
However, the only way to draw into a specific array layer while the whole array texture is attached is to use a Geometry Shader to do layered rendering. Since it sounds like each of these depth images you are interested in is actually a completely different set of geometry, this does not sound like the approach you want: if you used a Geometry Shader to do this, you would process the same set of geometry for each layer.
One thing you could consider is actually using a single depth buffer, but packing your shadow maps into an atlas. If each of your shadow maps is 512x512, you could store 4 of them in a single texture with dimensions 1024x1024 and adjust texture coordinates (and viewport when you draw into the atlas) appropriately. The reason you might consider doing this is because changing the render target (FBO state) tends to be the most expensive thing you would do between draw calls in a series of depth-only draws. You might change a few uniforms or vertex pointers, but those are dirt cheap to change.
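A sketch of the atlas idea: one 1024x1024 depth texture attached to the FBO, with the viewport selecting a 512x512 quadrant per shadow map (shadowFbo is assumed to be created elsewhere):

// One big depth texture shared by four 512x512 shadow maps.
GLuint shadowAtlas;
glGenTextures(1, &shadowAtlas);
glBindTexture(GL_TEXTURE_2D, shadowAtlas);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 1024, 1024, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, NULL);

glBindFramebuffer(GL_FRAMEBUFFER, shadowFbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, shadowAtlas, 0);
glDrawBuffer(GL_NONE);   // depth-only rendering
glReadBuffer(GL_NONE);

// Render each shadow map into its own quadrant without ever switching FBOs.
for (int i = 0; i < 4; ++i) {
    glViewport((i % 2) * 512, (i / 2) * 512, 512, 512);
    // ... set light i's view/projection and draw its shadow casters ...
}

When sampling, each map's texture coordinates are then offset and scaled into its quadrant of the atlas.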
I recently completed my system for loading an array of quads into VBOs. This system allows quads to share vertices in order to save a substantial amount of memory. For example, an array of 100x100 quads would normally use 100x100x4 = 40000 vertices (4 vertices per quad), but with this system it only uses 101x101 = 10201 vertices. That is a huge saving once you get to even larger scales.
My problem is that in order to texture each quad individually, each vertex needs a "UV" coordinate pair (or "ST" coordinate) to map part of the texture to it. This leads to the problem: how do I texture each quad independently of the others? Even if two quads with the same texture are next to each other, I cannot use the same texture coordinates for both of them. This is illustrated below:
*Each quad being 16x16 pixels in dimension and the texture coordinates having a range of 0 to 1.
To make things even more complicated, some quads in the array might not even be there (because that part of the terrain is just an empty block). So as you might have guessed, this is for a rendering engine for those 2D tile games everyone is trying to make.
Is there a way to texture the quads while keeping this vertex-sharing technique, or will I just have to scrap it and use the far less efficient approach?
You can't.
Vertices in OpenGL are a collection of data. They may contain positions, but they can also contain texture coordinates and other attributes. A vertex is that whole combination of position/texcoord/etc. So if you need to pair the same position with different texture coordinates, you have two different vertices.
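A tiny illustration (the positions and UVs are made up): the two tiles below share the position x = 1, but because they need different texture coordinates there, that corner must be stored as two separate vertices.

struct Vertex { float x, y, u, v; };

// Same position, different attributes => different vertices.
Vertex cornerForLeftTile  = { 1.0f, 0.0f,  1.0f, 0.0f };  // right edge of its tile image
Vertex cornerForRightTile = { 1.0f, 0.0f,  0.0f, 0.0f };  // left edge of its tile image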