I am using OpenGL 1.3 for 2D sprite rendering, supporting both POTS (power-of-two size) textures and NPOTS (non-power-of-two size) textures with TEXTURE_2D and TEXTURE_RECTANGLE_ARB respectively.
I started with POTS textures (using TEXTURE_2D), which worked fine, but now I am adding NPOTS textures (using TEXTURE_RECTANGLE_ARB). This addition has caused the POTS textures (with TEXTURE_2D) to break.
By break I mean that the POTS textures are rendered as a grayscale gradient, ranging from gray in the bottom left corner to white in the top right.
An extra point (discovered whilst trying to fix this error): one big difference between TEXTURE_RECTANGLE_ARB and TEXTURE_2D is that the former uses non-normalised texture coordinates, whereas TEXTURE_2D uses normalised coordinates ([0.0, 1.0]). I decided to check and replace TEXTURE_2D's normalised coordinates with non-normalised ones, and this removed the grayscale problem by creating another: it was rendering the wrong texture!
I.e. when using a POTS and an NPOTS texture, my POTS texture tries to render the NPOTS texture.
Does anyone have any idea why this might be happening? Thank you!
Okay, so it turns out that it was a rather silly mistake found in the original TEXTURE_2D code. I had forgotten to end the rendering with the matching glDisable!
I.e. the rendering code began with glEnable(targetType), but did not end with glDisable(targetType) [where targetType was the correct choice of GL_TEXTURE_2D or GL_TEXTURE_RECTANGLE_ARB].
I guess, somehow, the two rendering environments got intertwined.
Lesson: when you begin with glEnable, make sure you end with the matching glDisable.
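To illustrate the lesson, here is a minimal sketch (not the original poster's code) of a sprite-drawing helper where the enable/disable calls are paired; targetType is whichever of GL_TEXTURE_2D or GL_TEXTURE_RECTANGLE_ARB the texture was created with, and GL_TEXTURE_RECTANGLE_ARB itself comes from the extension header:

    #include <GL/gl.h>
    #include <GL/glext.h>   /* for GL_TEXTURE_RECTANGLE_ARB */

    /* smax/tmax are 1.0/1.0 for TEXTURE_2D and width/height in pixels
       for TEXTURE_RECTANGLE_ARB */
    void draw_sprite(GLenum targetType, GLuint texture,
                     float x, float y, float w, float h,
                     float smax, float tmax)
    {
        glEnable(targetType);
        glBindTexture(targetType, texture);

        glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex2f(x,     y);
        glTexCoord2f(smax, 0.0f); glVertex2f(x + w, y);
        glTexCoord2f(smax, tmax); glVertex2f(x + w, y + h);
        glTexCoord2f(0.0f, tmax); glVertex2f(x,     y + h);
        glEnd();

        glDisable(targetType);  /* the forgotten call: leaving a rectangle
                                   target enabled lets it take precedence
                                   over TEXTURE_2D in later draws */
    }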
Edit: Considering the comment below, I did a little digging and found out about target precedence. The idea is presented at the beginning of this article:
http://math.hws.edu/graphicsnotes/c4/s5.html
"At most one texture target will be used when a surface is rendered. If several targets are enabled, 3D textures have precedence over 2D textures, and 2D textures have precedence over 1D. Except for one example later in this section, we will work only with 2D textures."
In terms of TEXTURE_RECTANGLE_ARB's precedence, this is described in section 10 of the specification: http://www.opengl.org/registry/specs/ARB/texture_rectangle.txt
I did not know about target priority or precedence at the time of the bug, so thanks to #datenwolf!
Even if you have solved your problem, I'd like to clarify a concept that might solve problems you may run into in the near future.
Here is what is wrong:
"now I am adding NPOTS textures (using TEXTURE_RECTANGLE_ARB)"
The GL_texture_rectangle OpenGL extension is not meant to support non-power-of-two (NPOT) textures; the correct OpenGL extension to query is GL_texture_non_power_of_two.
GL_texture_non_power_of_two cannot break existing applications which expect power-of-two (POT) textures, because it only relaxes the specification to accept textures of any width/height/depth (within the implementation limits, of course). Here is a quote from the extension:
"There is no additional procedural or enumerant api introduced by this extension except that an implementation which exports the extension string will allow an application to pass in texture dimensions for the 1D, 2D, cube map, and 3D targets that may or may not be a power of two."
Instead, GL_texture_rectangle allows you to address the texture by pixel coordinates over its extents (width and height), rather than by the usual floating-point coordinates normalised to the range [0.0, 1.0]. Additionally, rectangle textures do not support mipmapping.
Additionally, if you read the GL_texture_rectangle specification, it is not meant to support NPOT textures, because rectangle textures are affected by the same restrictions as 2D textures.
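To make the coordinate difference concrete, here is a small sketch (names like potTex, npotTex, x, y, w, h are illustrative) of drawing the same w x h quad with each target; GL_TEXTURE_RECTANGLE_ARB comes from the extension header:

    /* TEXTURE_2D: coordinates normalised to [0.0, 1.0] */
    glBindTexture(GL_TEXTURE_2D, potTex);
    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(x,     y);
    glTexCoord2f(1.0f, 0.0f); glVertex2f(x + w, y);
    glTexCoord2f(1.0f, 1.0f); glVertex2f(x + w, y + h);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(x,     y + h);
    glEnd();

    /* TEXTURE_RECTANGLE_ARB: coordinates in [0, width] x [0, height] texels */
    glBindTexture(GL_TEXTURE_RECTANGLE_ARB, npotTex);
    glBegin(GL_QUADS);
    glTexCoord2f(0.0f,     0.0f);     glVertex2f(x,     y);
    glTexCoord2f((float)w, 0.0f);     glVertex2f(x + w, y);
    glTexCoord2f((float)w, (float)h); glVertex2f(x + w, y + h);
    glTexCoord2f(0.0f,     (float)h); glVertex2f(x,     y + h);
    glEnd();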
Related
As we all know, OpenGL uses a pixel-data orientation with 0/0 at the bottom left, whereas the rest of the world (including virtually all image formats) uses the top left.
This has been a source of endless worries (at least for me) for years, and I still have not been able to come up with a good solution.
In my application I want to support the following image data as textures:
image data from various image sources (including still-images, video-files and live-video)
image data acquired via copying the framebuffer to main memory (glReadPixels)
image data acquired via grabbing the framebuffer to texture (glCopyTexImage)
(Case #1 delivers images with top-down orientation in about 98% of the cases; for the sake of simplicity let's assume that all "external images" have top-down orientation. Cases #2 and #3 have bottom-up orientation.)
I want to be able to apply all of these textures onto various arbitrarily complex objects (e.g. 3D models read from disk that have texture-coordinate information stored).
Thus I want a single representation of the texture coords of an object. When rendering the object, I do not want to be bothered with the orientation of the image source.
(Until now, I have always carried a top-down flag alongside the texture id, which gets used when the texture coordinates are actually set. I want to get rid of this clumsy hack!)
Basically I see three ways to solve the problem:
1. make sure all image data is in the "correct" (in OpenGL terms this is upside down) orientation, converting all the "incorrect" data before passing it to OpenGL
2. provide different texture coordinates depending on the image orientation (0..1 for bottom-up images, 1..0 for top-down images)
3. flip the images on the gfx card
In the olde times I did #1, but it turned out to be too slow. We want to avoid the copy of the pixel buffer at all cost.
So I switched to #2 a couple of years ago, but it is way too complicated to maintain. I don't really understand why I should carry metadata of the original image around once I have transferred the image to the gfx card and have a nice little abstract "texture" object.
I'm in the process of finally converting my code to VBOs, and would like to avoid having to update my texcoord arrays just because I'm using an image of the same size but with a different orientation!
Which leaves #3, which I never managed to get working (but I believe it must be quite simple).
Intuitively I thought about using something like glPixelZoom().
This works great with glDrawPixels() (but who is using that in real life?), and AFAIK it should also work with glReadPixels().
The latter is great as it allows me to at least force a reasonably fast homogeneous pixel orientation (top-down) for all images in main memory.
However, it seems that glPixelZoom() has no effect on data transferred via glTexImage2D(), let alone glCopyTexImage2D(), so the textures generated from main-memory pixels will all be upside down (which I could live with, as it only means that I have to convert all incoming texcoords to top-down when loading them).
Now the remaining problem is that I haven't found a way yet to copy a framebuffer to a texture (using glCopyTex(Sub)Image) that can be used with those top-down texcoords (that is: how do I flip the image when using glCopyTexImage()?).
Is there a solution for this simple problem? Something that is fast, easy to maintain and runs on OpenGL 1.1 through 4.x?
Ah, and ideally it would work with both power-of-two and non-power-of-two (or rectangle) textures (as far as that is possible...).
"Is there a solution for this simple problem? Something that is fast, easy to maintain and runs on OpenGL 1.1 through 4.x?"
No.
There is no method to change the orientation of pixel data at upload time. There is no method to change the orientation of a texture in situ. The only method for changing the orientation of a texture (besides downloading, flipping and re-uploading it) is to use an upside-down framebuffer blit from a framebuffer containing the source texture to a framebuffer containing the destination texture. And glBlitFramebuffer is not available on any hardware that's so old it doesn't support GL 2.x.
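For reference, an upside-down blit would look roughly like the sketch below. This assumes srcFbo and dstFbo are already complete framebuffer objects with the source and destination textures attached (both width x height); the only interesting part is the swapped destination Y extents:

    glBindFramebuffer(GL_READ_FRAMEBUFFER, srcFbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, dstFbo);
    /* swapping dstY0 and dstY1 flips the image vertically during the copy */
    glBlitFramebuffer(0, 0, width, height,
                      0, height, width, 0,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);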
So you're going to have to do what everyone else does: flip your textures before uploading them. Or better yet, flip the textures on disk, then load them without flipping them.
However, if you really, really want to not flip data, you could simply have all of your shaders take a uniform that tells them whether or not to invert the Y of their texture coordinate data. Inversion shouldn't be anything more than a multiply/add operation. This could be done in the vertex shader to minimize processing time.
Or, if you're coding in the dark ages of fixed-function, you can apply a texture matrix that inverts the Y.
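A minimal fixed-function sketch of that texture-matrix approach, assuming a hypothetical per-texture flag textureIsTopDown; the matrix maps t to 1 - t, so the stored texture coordinates never have to change:

    glMatrixMode(GL_TEXTURE);
    glLoadIdentity();
    if (textureIsTopDown) {
        glTranslatef(0.0f, 1.0f, 0.0f);  /* then add 1:   t' = 1 - t */
        glScalef(1.0f, -1.0f, 1.0f);     /* first negate t           */
    }
    glMatrixMode(GL_MODELVIEW);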
Why don't you change the way you map the texture to the polygon?
I use these mapping coordinates { 0, 1, 1, 1, 0, 0, 1, 0 } for origin top left
and these mapping coordinates { 0, 0, 1, 0, 0, 1, 1, 1 } for origin bottom left.
Then you don't need to manually switch your pictures.
More details about mapping textures to a polygon can be found here:
http://iphonedevelopment.blogspot.de/2009/05/opengl-es-from-ground-up-part-6_25.html
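As a small sketch of how those two coordinate sets could be selected at draw time with plain OpenGL 1.1 client arrays (the triangle-strip vertex order here matches the arrays above; originIsTopLeft is an illustrative flag, and binding/enabling the texture is assumed to happen elsewhere):

    static const GLfloat texTopLeft[]    = { 0, 1,  1, 1,  0, 0,  1, 0 };
    static const GLfloat texBottomLeft[] = { 0, 0,  1, 0,  0, 1,  1, 1 };
    static const GLfloat quad[]          = { -1, -1,  1, -1,  -1, 1,  1, 1 };

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, quad);
    glTexCoordPointer(2, GL_FLOAT, 0,
                      originIsTopLeft ? texTopLeft : texBottomLeft);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);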
What is the best way to texture terrain made from quads in OpenGL? I have around 30 different textures I want to use for my terrain (one texture per terrain type, so 30 terrain types) and would like smooth transitions between any two of them.
I have been doing some browsing on the web and found that there are many different methods, including 3D texturing, alpha channels, blending, and shaders. However, which of these is the most efficient and can handle the number of textures I am looking to use? For example, this popular answer describes how to use some techniques, but its mixmap only has 4 channels (RGBA) and so can only support 4 textures.
I should also note that I know nothing about shaders, so non-shader required techniques would be preferable.
Since you linked to an answer that describes texture splatting, and its question mentions the game Oblivion, I can provide some additional insight into that.
Basic texture splatting with an RGBA mixmap only supports four textures per terrain quad, but you can use different sets of textures for different quads. Oblivion divides its terrain into squares (called "cells") of 32 grid points (192 feet) per side, and each cell defines its own set of four terrain textures. So you can't have lots of texture diversity within a small area, but you can easily vary your textures over larger regions. If you prefer, you can define texture sets for smaller regions, even individual quads, at the expense of using more memory.
If you really need more than four textures in a quad, you can use multiple mixmaps. For each additional one, you just do another texture lookup to get four more blending factors, and blend in four more textures on top of the results from the previous mixmap. You can scale up to as many textures as you want, again at the expense of memory.
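As a rough illustration of the multiple-mixmap idea (not code from the linked answer), here is a GLSL 1.20 fragment shader stored as a C string; all uniform and varying names are made up, and the second mixmap simply adds four more weighted lookups on top of the first:

    static const char *splat_fs =
        "#version 120\n"
        "uniform sampler2D mixmap0, mixmap1;        /* RGBA blend weights  */\n"
        "uniform sampler2D tex0, tex1, tex2, tex3;  /* first four terrains */\n"
        "uniform sampler2D tex4, tex5, tex6, tex7;  /* next four terrains  */\n"
        "varying vec2 uv;      /* tiled terrain texture coordinate */\n"
        "varying vec2 cellUv;  /* 0..1 across the quad, for the mixmaps */\n"
        "void main() {\n"
        "    vec4 w0 = texture2D(mixmap0, cellUv);\n"
        "    vec4 w1 = texture2D(mixmap1, cellUv);\n"
        "    vec4 c  = texture2D(tex0, uv) * w0.r + texture2D(tex1, uv) * w0.g\n"
        "            + texture2D(tex2, uv) * w0.b + texture2D(tex3, uv) * w0.a;\n"
        "    c      += texture2D(tex4, uv) * w1.r + texture2D(tex5, uv) * w1.g\n"
        "            + texture2D(tex6, uv) * w1.b + texture2D(tex7, uv) * w1.a;\n"
        "    gl_FragColor = c;\n"
        "}\n";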
Texture splatting can be tricky to combine with LOD techniques on the height map, because when a single low-detail terrain quad represents a group of high-detail quads, you have to sample several different mixmaps for different regions of the big quad. Oblivion sidesteps that problem by using texture splatting only for full-detail terrain; distant cells, rendered at lower resolution, use precomputed textures produced by the editor, which does the splatting and downscaling in advance.
One alternative to texture splatting is to use a clipmap to render a "megatexture". With this approach, you have a single large texture that represents your entire terrain, and you avoid filling up your RAM by loading different parts of it with only as much detail as is actually needed to render it based on the viewer's current position. (Distant parts of the terrain can't be seen at full detail, so there's no need to load them at full detail.)
The advantage of this approach is its artistic freedom: you can place fine details anywhere you want in the texture, without regard to the vertex grid. The disadvantage is that it's rather complex to implement, and the entire clipmap has to be stored somewhere, probably in a big file on disk, so that you can load parts of it into RAM as needed.
I'm creating a tile-based game in C# with OpenGL and I'm trying to optimize my code as best as possible.
I've read several articles and sections in books and all come to the same conclusion (as you may know) that use of VBOs greatly increases performance.
I'm not quite sure, however, how they work exactly.
My game will have tiles on the screen, some will change and some will stay the same. To use a VBO for this, I would need to add the coordinates of each tile to an array, correct?
Also, to texture these tiles, would I have to create a separate VBO?
I'm not quite sure what the code would look like for tiling these coordinates if I've got tiles that are animated and tiles that will be static on the screen.
Could anyone give me a quick rundown of this?
I plan on using a texture atlas of all of my tiles. I'm not sure where to begin to use this atlas for the textured tiles.
Would I need to compute the coordinates of the tile in the atlas to be applied? Is there any way I could simply use the coordinates of the atlas to apply a texture?
If anyone could clear up these questions it would be greatly appreciated. I could even possibly reimburse someone for their time & help if wanted.
Thanks,
Greg
OK, so let's split this into parts. You didn't specify which version of OpenGL you want to use - I'll assume GL 3.3.
VBO
Vertex buffer objects, when considered as an alternative to client vertex arrays, mostly save bandwidth to the GPU, and a tile map is not really a lot of geometry. However, in recent GL versions vertex buffer objects are the only way of specifying vertices (which makes a lot of sense), so we cannot really talk about "increasing performance" here. If you mean "compared to deprecated vertex specification methods like immediate mode or client-side arrays", then yes, you'll get a performance boost, but you'd probably only feel it with 10k+ vertices per frame, I suppose.
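As a minimal sketch of what that looks like (C-style GL calls, which map one-to-one onto OpenTK's GL class; tileVerts and the layout are illustrative), one buffer can hold interleaved positions and texture coordinates, so the tiles do not need a separate VBO just for texturing:

    GLuint vbo;
    GLfloat tileVerts[] = {
        /* x, y, u, v for one tile quad; real code would append
           four such vertices for every tile of the map */
        0.0f, 0.0f,  0.0f, 0.0f,
        1.0f, 0.0f,  1.0f, 0.0f,
        1.0f, 1.0f,  1.0f, 1.0f,
        0.0f, 1.0f,  0.0f, 1.0f,
    };

    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(tileVerts), tileVerts, GL_STATIC_DRAW);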
Texture atlases
Texture atlases are indeed a nice feature to save on texture switching. However, on GL3 (and DX10)-capable GPUs you can save yourself a LOT of the trouble characteristic of this technique, because a more modern and convenient approach is available. Check the GL reference docs for TEXTURE_2D_ARRAY - you'll like it. If GL3 cards are your target, forget texture atlases. If not, have a google to see which older cards support texture arrays as an extension; I'm not familiar with the details.
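A small sketch of allocating such an array texture (GL 3.0+, or EXT_texture_array on older cards); the 64x64 tile size, 32 layers and variable names are just placeholders:

    GLuint texArray;
    glGenTextures(1, &texArray);
    glBindTexture(GL_TEXTURE_2D_ARRAY, texArray);
    glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8,
                 64, 64,   /* width/height of one tile image      */
                 32,       /* number of layers, one per tile type */
                 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    /* upload each tile image into its own layer */
    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0,
                    0, 0, layerIndex, 64, 64, 1,
                    GL_RGBA, GL_UNSIGNED_BYTE, tilePixels);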
Rendering
So how do you draw a tile map efficiently? Let's focus on the data. There are lots of tiles, and each tile has the following information:
grid position (x,y)
material (let's call it "material" rather than "texture" because, as you said, the image might be animated and change over time; "material" would then be interpreted as "one texture, or a set of textures that change over time", or anything you want).
That should be all the "per-tile" data you'd need to send to the GPU. You want to render each tile as a quad or triangle strip, so you have two alternatives:
send 4 vertices (x,y),(x+w,y),(x+w,y+h),(x,y+h) instead of (x,y) per tile,
use a geometry shader to calculate the 4 points along with texture coords for every 1 point sent.
Pick your favourite. Also note that this directly corresponds to what your VBO is going to contain - the latter solution would make it 4x smaller.
For the material, you can pass it as a symbolic integer, and in your fragment shader - based on the current time (passed as a uniform variable) and the material ID for a given tile - you can decide which texture ID from the texture array to use. In this way you can make a simple texture animation.
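A rough GLSL 3.30 sketch of that idea, stored as a C string; the "4 animation frames per material at 4 fps" layout and all names (tiles, material, uv) are assumptions, not a prescribed scheme:

    static const char *tile_fs =
        "#version 330 core\n"
        "uniform sampler2DArray tiles;   /* one layer per animation frame */\n"
        "uniform float time;             /* seconds, updated every frame  */\n"
        "flat in int material;           /* per-tile material id          */\n"
        "in vec2 uv;\n"
        "out vec4 color;\n"
        "void main() {\n"
        "    int frame = int(mod(time * 4.0, 4.0));  /* 4 frames, 4 fps */\n"
        "    int layer = material * 4 + frame;\n"
        "    color = texture(tiles, vec3(uv, float(layer)));\n"
        "}\n";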
I have a program in which I need to apply a 2-dimensional texture (simple image) to a surface generated using the marching-cubes algorithm. I have access to the geometry and can add texture coordinates with relative ease, but the best way to generate the coordinates is eluding me.
Each point in the volume represents a single unit of data, and each unit of data may have different properties. To simplify things, I'm looking at sorting them into "types" and assigning each type a texture (or portion of a single large texture atlas).
My problem is I have no idea how to generate the appropriate coordinates. I can store the location of the type's texture in the type class and use that, but then seams will be horribly stretched (if two neighboring points use different parts of the atlas). If possible, I'd like to blend the textures on seams, but I'm not sure the best manner to do that. Blending is optional, but I need to texture the vertices in some fashion. It's possible, but undesirable, to split the geometry into parts for each type, or to duplicate vertices for texturing purposes.
I'd like to avoid using shaders if possible, but if necessary I can use a vertex and/or fragment shader to do the texture blending. If I do use shaders, what would be the most efficient way of telling them which texture or portion to sample? It seems like passing the type through a parameter would be the simplest way, but possibly slow.
My volumes are relatively small, 8-16 points in each dimension (I'm keeping them smaller to speed up generation, but there are many on-screen at a given time). I briefly considered making the isosurface twice the resolution of the volume, so each point has more vertices (8, in theory), which may simplify texturing. It doesn't seem like that would make blending any easier, though.
To build the surfaces, I'm using the Visualization Library for OpenGL and its marching cubes and volume system. I have the geometry generated fine, just need to figure out how to texture it.
Is there a way to do this efficiently, and if so what? If not, does anyone have an idea of a better way to handle texturing a volume?
Edit: Just to note, the texture isn't simply a gradient of colors. It's actually a texture, usually with patterns. Hence the difficulty in mapping it, a gradient would've been trivial.
Edit 2: To help clarify the problem, I'm going to add some examples. They may just confuse things, so consider everything above definite fact and these just as help if they can.
My geometry is in cubes, always (loaded, generated and saved in cubes). If shape influences possible solutions, that's it.
I need to apply textures, consisting of patterns and/or colors (unique ones depending on the point's "type") to the geometry, in a technique similar to the splatting done for terrain (this isn't terrain, however, so I don't know if the same techniques could be used).
Shaders are a quick and easy solution, although I'd like to avoid them if possible, as I mentioned before. Something usable in a fixed-function pipeline is preferable, mostly for the minor increase in compatibility and development time. Since it's only a minor increase, I will go with shaders and multipass rendering if necessary.
Not sure if any other clarification is necessary, but I'll update the question as needed.
On the texture combination part of the question:
Have you looked into 3D textures? As we're talking marching cubes, I should probably say immediately that I'm explicitly not talking about volumetric textures. Instead you stack all your 2D textures into a 3D texture. You then encode each texture coordinate as the 2D position it would normally be, with the texture it references as the third coordinate. It works best if your textures are generally of the type where, logically, to transition from one type of pattern to another you have to go through the intermediaries.
An obvious use example is texture mapping to a simple height map — you might have a snow texture on top, a rocky texture below that, a grassy texture below that and a water texture at the bottom. If a vertex that references the water is next to one that references the snow then it is acceptable for the geometry fill to transition through the rock and grass texture.
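A small sketch of building such a stack as a GL_TEXTURE_3D (sizes and the four-material layout are illustrative); with linear filtering and clamped R, a coordinate of r = (i + 0.5) / depth lands exactly on slice i, while values in between blend neighbouring materials:

    GLuint tex3d;
    glGenTextures(1, &tex3d);
    glBindTexture(GL_TEXTURE_3D, tex3d);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
    glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA8,
                 256, 256,  /* width/height of each stacked 2D texture */
                 4,         /* number of stacked material textures     */
                 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    /* upload one slice per material with glTexSubImage3D, then specify
       coordinates per vertex with glTexCoord3f(s, t, r) */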
An alternative is to do it in multiple passes using additive blending. For each texture, draw every face that uses that texture and draw a fade to transparent extending across any faces that switch from one texture to another.
You'll probably want to prep the depth buffer with a complete draw (with the colour masks all set to reject changes to the colour buffer) then switch to a GL_EQUAL depth test and draw again with writing to the depth buffer disabled. Drawing exactly the same geometry through exactly the same transformation should produce exactly the same depth values irrespective of issues of accuracy and precision. Use glPolygonOffset if you have issues.
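Sketched out, the two passes might look like this (drawGeometry is a hypothetical helper that issues the exact same geometry both times):

    /* pass 1: prime the depth buffer only */
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LESS);
    glDepthMask(GL_TRUE);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    drawGeometry();

    /* passes 2..n: one additive pass per texture, no depth writes */
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthFunc(GL_EQUAL);
    glDepthMask(GL_FALSE);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE);  /* additive blending */
    drawGeometry();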
On the coordinates part:
Popular and easy mappings are cylindrical, box and spherical. Conceptualise that your shape is bounded by a cylinder, box or sphere with a well defined mapping from surface points to texture locations. Then for each vertex in your shape, start at it and follow the normal out until you strike the bounding geometry. Then grab the texture location that would be at that position on the bounding geometry.
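For instance, a spherical mapping can be approximated by treating the (normalized) vertex normal as the direction onto the bounding sphere and converting it to (s, t); this small helper is only a sketch of that idea, with made-up names:

    #include <math.h>

    void sphere_map(const float n[3], float *s, float *t)
    {
        const float PI = 3.14159265358979f;
        *s = 0.5f + atan2f(n[2], n[0]) / (2.0f * PI);
        *t = 0.5f - asinf(n[1]) / PI;
    }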
I guess there's a potential problem that normals tend not to be brilliant after marching cubes, but I'll wager you know more about that problem than I do.
This is a hard and interesting problem.
The simplest way is to avoid the issue completely by using 3D texture maps, especially if you just want to add some random surface detail to your isosurface geometry. Perlin noise based procedural textures implemented in a shader work very well for this.
The difficult way is to look into various algorithms for conformal texture mapping (also known as conformal surface parametrization), which aim to produce a mapping between 2D texture space and the surface of the 3D geometry which is in some sense optimal (least distorting). This paper has some good pictures. Be aware that the topology of the geometry is very important; it's easy to generate a conformal mapping to map a texture onto a closed surface like a brain, considerably more complex for higher genus objects where it's necessary to introduce cuts/tears/joins.
You might want to try making a UV Map of a mesh in a tool like Blender to see how they do it. If I understand your problem, you have a 3D field which defines a solid volume as well as a (continuous) color. You've created a mesh from the volume, and now you need to UV-map the mesh to a 2D texture with texels extracted from the continuous color space. In a tool you would define "seams" in the 3D mesh which you could cut apart so that the whole mesh could be laid flat to make a UV map. There may be aliasing in your texture at the seams, so when you render the mesh it will also be discontinuous at those seams (ie a triangle strip can't cross over the seam because it's a discontinuity in the texture).
I don't know any formal methods for flattening the mesh, but you could imagine cutting it along the seams and then treating the whole thing as a spring/constraint system that you drop onto a flat surface. I'm all about solving things the hard way. ;-)
Due to the issues with texturing and some of the constraints I have, I've chosen to write a different algorithm to build the geometry and handle texturing directly in that as it produces surfaces. It's somewhat less smooth than the marching cubes, but allows me to apply the texcoords in a way that works for my project (and is a bit faster).
For anyone interested in texturing marching cubes, or just blending textures, Tommy's answer is a very interesting technique and the links timday posted are excellent resources on flattening meshes for texturing. Thanks to both of them for their answers, hopefully they can be of use to others. :)
My game renders lots of cubes, each of which randomly has 1 of 12 textures. I already Z-order the geometry, so I can't just render all the cubes with texture 1, then 2, then 3, etc., because that would defeat the Z ordering. I already keep track of the previous texture and, if they are equal, I do not call glBindTexture, but there are still way too many calls to it. What else can I do?
Thanks
The ultimate and fastest way would be to have an array of textures (normal ones or cubemaps), then dynamically fetch the texture slice according to an ID stored in each cube's instance data (or cube-face data, if you want a different texture per cube face) using the GLSL built-ins gl_InstanceID or gl_PrimitiveID.
With this implementation you would bind your texture array just once.
This would of course require use of the gpu_shader4 and texture_array extensions:
http://developer.download.nvidia.com/opengl/specs/GL_EXT_gpu_shader4.txt
http://developer.download.nvidia.com/opengl/specs/GL_EXT_texture_array.txt
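A rough sketch of the shaders involved, written here against GL 3.3 rather than the older gpu_shader4 syntax and stored as C strings; the uniform array textureIds, its size and the sampler name cubeTextures are all illustrative:

    static const char *cube_vs =
        "#version 330 core\n"
        "layout(location = 0) in vec3 position;\n"
        "layout(location = 1) in vec2 texcoord;\n"
        "uniform mat4 mvp;\n"
        "uniform int textureIds[256];   /* one texture id per cube instance */\n"
        "flat out int texIndex;\n"
        "out vec2 uv;\n"
        "void main() {\n"
        "    texIndex = textureIds[gl_InstanceID];\n"
        "    uv = texcoord;\n"
        "    gl_Position = mvp * vec4(position, 1.0);\n"
        "}\n";

    static const char *cube_fs =
        "#version 330 core\n"
        "uniform sampler2DArray cubeTextures;  /* bound once for all cubes */\n"
        "flat in int texIndex;\n"
        "in vec2 uv;\n"
        "out vec4 fragColor;\n"
        "void main() {\n"
        "    fragColor = texture(cubeTextures, vec3(uv, float(texIndex)));\n"
        "}\n";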
I have used this mechanism (using D3D10, but the principle applies too) and it worked very well.
I had to map different textures onto sprites (3D points of a constant screen size, 9x9 or 15x15 pixels IIRC), each texture indicating a different meaning for the user.
Edit:
If you don't feel comfy with all the shader stuff, I would simply sort the cubes by texture instead of Z ordering the geometry, and then measure the performance gain.
Also I would try adding a pre-Z pass where you render all your cubes into the Z buffer only, then render the normal scene, and see if it speeds things up (if you are fragment bound, it could help).
You can pack your textures into one texture and offset the texture coordinates accordingly.
glMatrixMode(GL_TEXTURE) will also allow you to perform transformations in texture space (to avoid changing all the texture coords).
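For example (a sketch only, assuming the atlas is a regular 4x3 grid of equally sized sub-textures and that col/row identify the wanted tile), the texture matrix can shrink and shift the existing 0..1 coordinates into one cell of the atlas:

    glMatrixMode(GL_TEXTURE);
    glLoadIdentity();
    glTranslatef(col / 4.0f, row / 3.0f, 0.0f);  /* move to the cell's corner */
    glScalef(1.0f / 4.0f, 1.0f / 3.0f, 1.0f);    /* shrink 0..1 to one cell   */
    glMatrixMode(GL_MODELVIEW);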
Also from NVIDIA:
Bindless Graphics