I have a 512x512 texture which holds a number of images that I want to use in my application. After adding the image data to the texture I save the texture coords for the individual images. Later I apply these to some quads that I am drawing. The texture has mipmapping enabled.
When I take a screenshot of the rendered scene at exactly the same instant in two different runs of the application, I notice that there are differences in the image, but only among those quads textured using this mipmapped texture. Can mipmapping cause such an issue?
My best guess is that it has to do with precision in your shader. Check out this problem that I had (and fought with for a while) and my solution:
opengl texture mapping off by 5-8 pixels
It is probably a combination of mipmapping's automatic scaling of your texture atlas and the precision hints in your shader code.
Also see the other linked question:
Why is a texture coordinate of 1.0 getting beyond the edge of the texture?
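As a rough illustration (not taken from the linked answers; the atlas layout and all names here are assumptions), one common mitigation is to inset each sub-image's texture rectangle by half a texel so that filtering near the borders doesn't pull in pixels from neighbouring images in the atlas:

    /* Hypothetical helper: compute UV bounds for a sub-image inside a square
     * atlas, inset by half a texel to reduce bleeding from neighbouring
     * images when the texture is filtered/mipmapped. */
    typedef struct { float u0, v0, u1, v1; } UVRect;

    static UVRect atlas_uvs(int x, int y, int w, int h, int atlas_size)
    {
        const float half_texel = 0.5f / (float)atlas_size;
        UVRect r;
        r.u0 = (float)x       / atlas_size + half_texel;
        r.v0 = (float)y       / atlas_size + half_texel;
        r.u1 = (float)(x + w) / atlas_size - half_texel;
        r.v1 = (float)(y + h) / atlas_size - half_texel;
        return r;
    }

Note that this only helps at the base level; deeper mipmap levels of a shared atlas can still blend neighbouring images together.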
I've currently got a scene rendering using OpenGL / LWJGL which creates a texture from some rendering to a FrameBuffer, and then renders that texture to a generated quad - all working nicely.
My question is what would be the best way to instead apply this generated texture to a face of a model that I've imported?
This model already has UVs generated for another texture that applies to the remaining faces of the model, but I'm not clear on how I could separately apply the FBO texture to one face.
The face would be rectangular and the FBO texture should simply fit to the face the same as if it was filling a separate quad - so in theory the mapping should be straightforward, if I understood how to do it.
An alternative idea is to still render to a separate quad and try to position this quad relative to the model, slightly above the desired face, but this seems super awkward to position correctly and involves extra work should the model change.
Your question is more complicated than it needs to be. The fact that the texture's data happens to come from a previous rendering operation to an FBO is utterly irrelevant. Your question is really "how do I use a particular texture with a particular face?"
At the end of the day, you have one option: render multiple meshes. One draws the faces that use one texture, and the other draws the faces that use the second texture. It's simply a matter of separating out which faces go to which texture.
If you're using some externally loaded model, then the model has to be built in pieces, with each piece being used with a different texture.
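If it helps, here is a minimal sketch of that idea in C-style GL calls (LWJGL's bindings mirror them one-to-one). The function, the parameter names, and the assumption that the model's indices are already grouped by texture are mine, not something your loader necessarily provides:

    #include <GL/glew.h>  /* or whatever loader the app already uses */

    /* Hypothetical draw routine: the VAO, textures and index ranges are
     * assumed to have been set up when the model was loaded. */
    void draw_model(GLuint vao, GLuint modelTexture, GLuint fboTexture,
                    GLsizei mainCount, GLsizei faceCount, size_t faceOffset)
    {
        glBindVertexArray(vao);

        /* Faces that use the model's ordinary texture. */
        glBindTexture(GL_TEXTURE_2D, modelTexture);
        glDrawElements(GL_TRIANGLES, mainCount, GL_UNSIGNED_INT, (void *)0);

        /* The one rectangular face that should show the render-to-texture result. */
        glBindTexture(GL_TEXTURE_2D, fboTexture);
        glDrawElements(GL_TRIANGLES, faceCount, GL_UNSIGNED_INT,
                       (void *)(faceOffset * sizeof(GLuint)));
    }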
I have an RGBA pixmap (e.g. an antialiased circular 4x4 dot) that I want to draw over a texture in a way similar to a brush stroke. The obvious solution of using glTexSubImage2D just overwrites a rectangular area with no respect to alpha value. Is there a better solution than the obvious one of maintaining a mirrored version of the texture in local RAM, doing the blending there, and then using glTexSubImage2D to upload it, preferably an OpenGL/GPU-based one? Is an FBO the way to go?
Also, is using an FBO for this efficient, both in terms of maintaining 1:1 graphics quality (no artifacts, interpolation, etc.) and in terms of speed? With a 4x4 object in RAM, doing the CPU blending basically means transforming a 4x4 matrix with basic float arithmetic, totalling 16 simple math iterations and one glTexSubImage2D call... is setting up an FBO, switching rendering contexts and doing the rendering still faster?
Benchmarking data would be much appreciated, as well as MCVEs/pseudocode for proposed solutions.
Note: creating separate alpha-blended quads for each stroke is not an option, mainly due to the very high number of strokes used. Go science!
You can render to a texture with a framebuffer object (FBO).
At the start of your program, create an FBO and attach the texture to it. Whenever you need to draw a stroke, bind the FBO and draw the stroke as if you were drawing it to the screen (with triangles). The stroke gets written to the attached texture.
For your main draw loop, unbind the FBO, bind the attached texture, and draw a quad over the entire screen (from -1,-1 to 1,1 without using any matrices).
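A rough sketch of that setup, assuming a current GL 3+ context; the names (strokeTexture and so on) and the particular blend function are my assumptions, not requirements:

    #include <GL/glew.h>  /* or your existing loader */

    /* One-time setup: attach the stroke texture to an FBO. */
    GLuint create_stroke_fbo(GLuint strokeTexture)
    {
        GLuint fbo;
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, strokeTexture, 0);
        /* In real code, check glCheckFramebufferStatus() here. */
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        return fbo;
    }

    /* Per stroke: redirect rendering into the texture, draw the brush, restore. */
    void add_stroke(GLuint fbo, int texWidth, int texHeight)
    {
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glViewport(0, 0, texWidth, texHeight);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);  /* respect the brush alpha */
        /* ... draw the small textured brush quad here ... */
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
    }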
Also, is using FBO for this efficient both in terms of maintaining 1:1 graphics quality (no artifacts, interpolation etc) and in terms of speed?
Yes.
If the attached texture is as big as the window, then there are no artifacts.
You only need to switch to the FBO when adding a new stroke, after which you can forget about the stroke since it's already rendered to the texture.
The GPU does all of the sampling, interpolation, blending, etc., and it's much better at it than the CPU (after all, it's what the GPU is designed for).
Switching FBOs isn't that expensive. Modern games can switch FBOs for render-to-texture several times a frame while still pumping out thousands of triangles; one FBO switch per frame isn't going to kill a 2D app, even on a mobile platform.
I want to render two textures on the screen at the same time at different positions, but, I'm confused about the vertex coordinates.
How could I write a vertex shader to meet my goal?
Just to address the "two images to the screen separately" bit...
A texture maps image colours onto geometry. To be pedantic, you can't draw a texture, but you can blit it, and you can draw geometry with a mapped texture (using per-vertex texture coordinates).
You can bind two textures at once while drawing, but you'll need both a second set of texture coordinates and to handle how they blend (or don't in your case). Even then the shader will be quite specific and because the images are separate there'll be unnecessary code running for each pixel to handle the other image. What happens when you want to draw 3 images, or 100?
Instead, just draw a quad with one image twice (binding each texture in turn before drawing). The overhead will be tiny unless you're drawing lots, at which point you might look at texture atlases and drawing all the geometry with one draw call (really getting towards the "at the same time" part of the question).
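A minimal sketch of the "draw the quad twice" approach, assuming a shader that adds a vec2 offset uniform to the quad's position; every name here is illustrative:

    #include <GL/glew.h>  /* or your existing loader */

    /* Hypothetical: a shader with a vec2 "offset" uniform, a unit-quad VAO,
     * and the two already-loaded textures. */
    void draw_two_images(GLuint program, GLint offsetLoc, GLuint quadVao,
                         GLuint textureA, GLuint textureB)
    {
        glUseProgram(program);
        glBindVertexArray(quadVao);

        glBindTexture(GL_TEXTURE_2D, textureA);
        glUniform2f(offsetLoc, -0.5f, 0.0f);     /* place first image on the left  */
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

        glBindTexture(GL_TEXTURE_2D, textureB);
        glUniform2f(offsetLoc, 0.5f, 0.0f);      /* place second image on the right */
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    }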
Currently I have a texture atlas that is 2048 x 2048 pixels set up with three 512 x 512 textures stored, and I am only applying one texture to the object. So I used the following code to position the texture coordinates (from zero to 1) to the correct position on the texture atlas for that texture:
color = texture2D(tex_0, vec2(0.0, 1024.0/2048.0) + mod(texture_coordinate*vec2(40.0), vec2(1.0))*vec2(512.0/2048.0));
The problem is that when I apply this, there is a black border around the texture. I presume that this is because OpenGL can't blend the two pixels at the place of that border.
So how do I get rid of the border?
Edit:
I have already tried to move the starting and ending boundaries in toward the center of the texture and that didn't work.
Edit:
I found the source of the problem: the automatic mipmap generation is blending the textures in the texture atlas together. This means I have to write my own mipmapping function (as far as I can tell).
If anyone has any better ideas, please do post.
Instead of using a normal 2D texture as the texture atlas with a grid of textures, I used the GL_TEXTURE_2D_ARRAY functionality to create an array texture that mipmaps correctly and repeats correctly. That way the textures do not blend together at higher mipmap levels.
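For reference, a sketch of how such an array texture might be created, assuming three 512x512 RGBA images already decoded in memory (the function and variable names are mine):

    #include <GL/glew.h>  /* or your existing loader */

    /* Upload three 512x512 RGBA images as layers of one array texture.
     * Mipmaps are generated per layer, so neighbours never blend. */
    GLuint create_atlas_array(const unsigned char *pixels[3])
    {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
        glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8,
                     512, 512, 3,              /* width, height, layer count */
                     0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        for (int i = 0; i < 3; ++i)
            glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0,
                            0, 0, i,           /* x, y, layer */
                            512, 512, 1,
                            GL_RGBA, GL_UNSIGNED_BYTE, pixels[i]);
        glGenerateMipmap(GL_TEXTURE_2D_ARRAY);
        glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_S, GL_REPEAT);
        glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_T, GL_REPEAT);
        glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER,
                        GL_LINEAR_MIPMAP_LINEAR);
        return tex;
    }

In the shader you then sample it with a sampler2DArray, passing the layer index as the third coordinate, e.g. texture(atlas, vec3(uv, layer)).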
First example:
You can take a huge rock shaped mesh and put a tiled rock texture all over it.
Now, some places need to be covered with a grass texture (or other vegetation).
Another example:
Usually, terrain is built from tiled textures. In order to achieve a less obviously tiled look, you can apply a tiled texture that is 4 times bigger (or 16, and so on) on top of it, and by that you'll get a nice "random"-looking tiled result (I've seen that in the UDK docs).
Blender (the 3d graphics app) is OpenGL based, and it allows you to assign multiple materials to a single mesh.
How can I do it in my own OpenGL application?
Thanks,
Amir
P.S:
I'm looking for a better solution than rendering 50 tris with tex A and 3 more tris with tex B.
What you're looking for is called multitexturing. Modern graphics cards have several texture units that each can sample a different texture. When you render your rock you specify vertices that have UV coordinates for each texture you want to render.
In OpenGL you can use glActiveTexture to select your active texture unit so that you can bind a texture to it and use it in subsequent rendering. Your vertices will need additional texture coordinate pairs; one pair per texture you intend to render.
The modern way to do multitexturing is using shaders (GLSL in OpenGL, typically). Load and bind each texture to a different texture unit, set your shader's sampler uniforms to the texture unit indices you're using (0 for texture unit 0, etc.), sample each texture, and blend using the desired blending function to get your output colour.
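As a sketch of the C-side setup (the sampler uniform names and the mix() blend are assumptions; the actual blend lives in your GLSL fragment shader):

    #include <GL/glew.h>  /* or your existing loader */

    /* Bind two textures to units 0 and 1 and point the shader's samplers at them. */
    void bind_multitexture(GLuint program, GLuint rockTexture, GLuint grassTexture)
    {
        glUseProgram(program);

        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, rockTexture);
        glUniform1i(glGetUniformLocation(program, "rockTex"), 0);   /* unit 0 */

        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_2D, grassTexture);
        glUniform1i(glGetUniformLocation(program, "grassTex"), 1);  /* unit 1 */

        /* In the fragment shader, blend however you like, e.g.
         *   color = mix(texture(rockTex, uv), texture(grassTex, uv2), mask); */
    }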