When drawing a rectangle in OpenGL, if you give the corners texture coordinates of (0,0), (1,0), (1,1) and (0,1), you get the standard mapping.
However, if you turn it into something that's not rectangular, you get a weird stretching effect, like the following:
I saw from the page below that this can be fixed, but the solution given only works for trapezoids. Also, I have to do this over many quads.
So the question is: what is the proper, and most efficient, way to get the right "4D" texture coordinates for drawing stretched quads?
Implementations are allowed to decompose quads into two triangles, and if you visualize the quad as two triangles you can immediately see why it interpolates texture coordinates the way it does. That texture mapping is correct ... for two independent triangles.
That diagonal seam coincides with the edge of two independently interpolated triangles.
Projective texturing can help as you already know, but ultimately the real problem here is simply interpolation across two triangles instead of a single quad. You will find that while modifying the Q coordinate may help with mapping a texture onto your quadrilateral, interpolating other attributes such as colors will still have serious issues.
If you have access to fragment shaders and instanced vertex arrays (probably rules out OpenGL ES), there is a full implementation of quadrilateral vertex attribute interpolation here. (You can modify the shader to work without "instanced arrays", but it will require either 4x as much data in your vertex array or a geometry shader).
Incidentally, texture coordinates in OpenGL are always "4D". It just happens that if you use something like glTexCoord2f (s, t), r is assigned 0.0 and q is assigned 1.0. That behavior applies to all vertex attributes; vertex attributes are all 4D whether you explicitly define all 4 of the coordinates or not.
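For the trapezoid case mentioned above, the q trick looks roughly like this; a minimal sketch in immediate mode, where topWidth, bottomWidth, and the vertex positions are hypothetical, and k is the ratio of the parallel edge widths:

/* pre-multiply (s, t) by q so the per-fragment divide s/q, t/q
   lands back on [0,1] smoothly across the whole trapezoid */
float k = topWidth / bottomWidth;

glBegin(GL_QUADS);
glTexCoord4f(0.0f, 0.0f, 0.0f, 1.0f); glVertex2f(x0, y0); /* bottom-left  */
glTexCoord4f(1.0f, 0.0f, 0.0f, 1.0f); glVertex2f(x1, y0); /* bottom-right */
glTexCoord4f(k,    k,    0.0f, k);    glVertex2f(x2, y1); /* top-right    */
glTexCoord4f(0.0f, k,    0.0f, k);    glVertex2f(x3, y1); /* top-left     */
glEnd();

As the answer says, this fixes the texture for trapezoids but does nothing for other interpolated attributes, and it does not generalize to arbitrary quads.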
I have to support some legacy code which draws point clouds using the following code:
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, (float*)cloudGlobal.data());
glEnableClientState(GL_NORMAL_ARRAY);
glNormalPointer(GL_FLOAT, 0, (float*)normals.data());
glDrawArrays(GL_POINTS, 0, (int)cloudGlobal.size());
glFinish();
This code renders all vertices regardless of the angle between the normal and the "line of sight". What I need is to draw only the vertices whose normals are directed towards us.
For faces this would be called "culling", but I don't know how to enable anything similar for mere vertices. Please suggest.
You could try to use the lighting system (unless you already need it for shading). Set the ambient color alpha to zero, and then simply use the alpha test to discard the points with zero alpha. You will probably need to set quite a high alpha in the diffuse color in order to avoid half-transparent points, in case alpha blending is required to antialias the points (to render discs instead of squares).
This assumes that the vertices have normals (but since you are talking about "facing away", I assume they do).
EDIT:
As correctly pointed out by @derhass, this will not work.
If you have cube-map textures, perhaps you can copy the normal to a texcoord and perform a lookup of alpha from a cube map (also in combination with the texture matrix to take camera and point cloud transformations into account).
Actually, if your normals are normalized, you can scale them with the texture matrix to [-0.49, +0.49] and then use a simple 1D (or 2D) bar texture (half white, half black, including alpha). Note that, counterintuitively, this requires the texture wrap mode to be left at its default GL_REPEAT (not clamp).
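A minimal sketch of that bar-texture idea under fixed-function OpenGL 1.3+, assuming normalized normals and a hypothetical two-texel GL_ALPHA texture barTex (texel 0 alpha = 1, texel 1 alpha = 0, GL_NEAREST filtering, default GL_REPEAT wrap):

/* generate (s,t,r) from the eye-space normal; a normal pointing
   at the viewer has positive z in eye space */
glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_NORMAL_MAP);
glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_NORMAL_MAP);
glTexGeni(GL_R, GL_TEXTURE_GEN_MODE, GL_NORMAL_MAP);
glEnable(GL_TEXTURE_GEN_S);
glEnable(GL_TEXTURE_GEN_T);
glEnable(GL_TEXTURE_GEN_R);

/* texture matrix: s' = 0.49 * nz, so with GL_REPEAT a negative nz
   wraps around into the alpha = 0 half of the bar texture */
GLfloat m[16] = { 0.0f };   /* column-major */
m[8]  = 0.49f;              /* s' takes 0.49 * r (r = nz) */
m[15] = 1.0f;               /* keep q unchanged */
glMatrixMode(GL_TEXTURE);
glLoadMatrixf(m);
glMatrixMode(GL_MODELVIEW);

glEnable(GL_TEXTURE_1D);
glBindTexture(GL_TEXTURE_1D, barTex);

/* discard the points that sampled the alpha = 0 half */
glAlphaFunc(GL_GREATER, 0.5f);
glEnable(GL_ALPHA_TEST);

Note this compares the normal against the view axis rather than the true per-point line of sight, so it is an approximation.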
If your point clouds have the shape of some closed objects, you can still get similar behavior even without cube-map textures by drawing a dummy mesh with glColorMask(0, 0, 0, 0) (it will only write depth) that "covers" the points that are facing away. You can generate this mesh as a group of quads placed behind the points, in the opposite direction of their normals, visible only from the other side than the one the points are supposed to be visible from, thus covering them.
Note that this will only lead to visual improvement (it will look like the points are culled), not performance improvement.
Just out of curiosity - what's your application and why do you need to avoid shaders?
I'm drawing some 3D structures in a Fl_Gl_Window in FLTK's implementation of OpenGL. These structures are drawn and rotated, so the code looks something like
glTranslatef(-xshift,-yshift,-zshift);
glRotatef(angle, ang1, ang2, ang3); /* glRotatef takes four arguments, an angle plus a rotation axis; 'angle' is assumed here */
glTranslatef(xshift,yshift,zshift);
glColor4f((120.0/256.0),(120.0/256.0),(120.0/256.0),0.2);
for (int side = 0; side < num_sides; side++) {
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glEnable(GL_BLEND);
    glBegin(GL_TRIANGLES);
    //draw shape
    glEnd();
    glDisable(GL_BLEND);
}
and it almost works, except that at certain angles the transparency doesn't work properly. For example, if I draw a cube, from one side it will look transparent all the way through, without being able to discern the two sides, but from the other side one face will appear darker, as it is supposed to. It's as if it calculates the transparency too 'early', as in before the rotation. Am I doing something wrong? Should I move the rotation below the transparency effects (i.e. before them in execution), or does the order of the triangles matter?
The order of the triangles matters. To get the desired effect for transparency you need to render the triangles in back-to-front order, because hardware blending works by reading the color already in the framebuffer for that fragment and blending it with the fragment currently being shaded. That's why you get different results when you rotate your cube: you are not changing the order of the triangles in the cube. You may also want to look into order-independent transparency techniques.
Depending on how many triangles you have, sorting them every frame can get really expensive. One approximation technique is to presort the triangles along the x, y, and z axes and then choose the sorted order that most closely matches your viewing direction. This only works to a certain extent. One popular order-independent transparency technique is depth peeling. Here's a tutorial with some code for implementing it: http://mmmovania.blogspot.com/2010/11/order-independent-transparency.html?m=1. You might also want to read the original paper to get a better understanding of the technique: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.18.9286&rep=rep1&type=pdf.
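A minimal back-to-front sorting sketch (hypothetical Triangle and Vec3 types; viewDir is the direction the camera looks along, so a larger dot product means farther away):

#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };
struct Triangle { Vec3 a, b, c; };

// depth of the triangle's centroid along the view direction
static float depthAlong(const Triangle &t, const Vec3 &dir)
{
    float cx = (t.a.x + t.b.x + t.c.x) / 3.0f;
    float cy = (t.a.y + t.b.y + t.c.y) / 3.0f;
    float cz = (t.a.z + t.b.z + t.c.z) / 3.0f;
    return cx * dir.x + cy * dir.y + cz * dir.z;
}

// sort farthest-first, then draw the triangles in this order
void sortBackToFront(std::vector<Triangle> &tris, const Vec3 &viewDir)
{
    std::sort(tris.begin(), tris.end(),
              [&](const Triangle &l, const Triangle &r) {
                  return depthAlong(l, viewDir) > depthAlong(r, viewDir);
              });
}

Centroid sorting can still fail for intersecting or interleaved triangles, which is where the order-independent techniques above come in.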
Currently I have a 2048 x 2048 pixel texture atlas with three 512 x 512 textures stored in it, and I am applying only one texture to the object. So I used the following code to map the texture coordinates (from zero to one) onto the correct position in the texture atlas for that texture:
color = texture2D(tex_0, vec2(0.0, 1024.0/2048.0) + mod(texture_coordinate*vec2(40.0), vec2(1.0))*vec2(512.0/2048.0));
The problem is that when I apply this, there is a black border around the texture. I presume that this is because OpenGL can't blend the two pixels at the place of that border.
So how do I get rid of the border?
EDIT:
I have already tried to move the starting and ending boundaries in toward the center of the texture and that didn't work.
EDIT:
I found the source of the problem: the automatic mipmap generation is blending the textures in the texture atlas together. This means I have to write my own mipmapping function, as far as I can tell.
If anyone has any better ideas, please do post.
Instead of using a normal 2D texture as the texture atlas with a grid of textures, I used GL_TEXTURE_2D_ARRAY to create an array texture that mipmapped correctly and repeated correctly. That way the textures do not blend together at higher mipmap levels.
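A minimal sketch of that setup, assuming desktop GL 3.0+ (or EXT_texture_array) and three 512 x 512 RGBA images already loaded into a hypothetical pixels[3] array:

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D_ARRAY, tex);

/* allocate 3 layers, then upload each atlas entry into its own layer */
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8, 512, 512, 3, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
for (int layer = 0; layer < 3; ++layer)
    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, layer,
                    512, 512, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixels[layer]);

/* each layer wraps and mipmaps on its own, so nothing bleeds across */
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glGenerateMipmap(GL_TEXTURE_2D_ARRAY);

In the shader the atlas index becomes the third coordinate: sample with texture(atlas, vec3(uv, layer)) through a sampler2DArray.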
How can I repeat a selection of a texture atlas?
For example, my sprite (selection) is within the texture coordinates:
GLfloat textureCoords[] =
{
    .1f, .1f,
    .3f, .1f,
    .1f, .3f,
    .3f, .3f
};
Then I want to repeat that sprite N times across a triangle strip (or quad) defined by:
GLfloat vertices[] =
{
    -100.f, -100.f,
    100.f, -100.f,
    -100.f, 100.f,
    100.f, 100.f
};
I know it has something to do with GL_REPEAT and textureCoords going past the range [0,1]. This, however, doesn't work (trying to repeat N = 10):
GLfloat textureCoords[] =
{
    10.1f, 10.1f,
    10.3f, 10.1f,
    10.1f, 10.3f,
    10.3f, 10.3f
};
We're seeing our full texture atlas repeated...
How would I do this the right way?
I'm not sure you can do that. I think OpenGL's texture coordinate modes only apply to the entire texture. When using an atlas, you're using "sub-textures", so your texture coordinates never come close to 0 and 1, the normal limits where wrapping and clamping occur.
There might be extensions to deal with this, I haven't checked.
EDIT: Normally, to repeat a texture, you'd draw a polygon that is "larger" than your texture implies. For instance, if you had a square texture that you wanted to repeat a number of times (say six) over a bigger area, you'd draw a rectangle that's six times as wide as it is tall. Then you'd set the texture coordinates to (0,0)-(6,1), and the texture mode to "repeat". When interpolating across the polygon, the texture coordinate that goes beyond 1 will, due to repeat being enabled, "wrap around" in the texture, causing the texture to be mapped six times across the rectangle.
This is a bit crude to explain without images.
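A minimal sketch of that standard full-texture case, assuming a hypothetical square texture tex drawn on a 600 x 100 rectangle:

glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f,   0.0f);
glTexCoord2f(6.0f, 0.0f); glVertex2f(600.0f, 0.0f);
glTexCoord2f(6.0f, 1.0f); glVertex2f(600.0f, 100.0f);
glTexCoord2f(0.0f, 1.0f); glVertex2f(0.0f,   100.0f);
glEnd();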
Anyway, when you're texturing using just a part of the texture, there's no way to specify that larger texture coordinate in a way that makes OpenGL repeat it inside only the sub-rectangle.
None of the texture wrap modes support the kind of operation you are looking for, i.e. they all map to the full [0,1] range, not some arbitrary subset. You basically have two choices: Either create a new texture that only has the sprite you need from the existing texture or write a GLSL pixel program to map the texture coordinates appropriately.
While this may be an old topic, here's how I ended up doing it:
A workaround would be to create multiple meshes, glued together, each containing the sub-range of texture UVs.
E.g.:
I have a laser texture contained within a larger texture atlas, at U [0.05, 0.1] and V [0.05, 0.1].
I would then construct N meshes, each having U [0.05, 0.1] and V [0.05, 0.1] coordinates.
(N = length / texture.height, where height is the dimension of the texture I would like to repeat. Or simpler: the number of times I want to repeat the texture.)
This solution would be more cost-effective than having to reload texture after texture.
Especially if you batch all render calls (as you should).
(OpenGL ES 1.0,1.1,2.0 - Mobile Hardware 2011)
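A sketch of that glued-mesh workaround using client-side vertex arrays (available even on ES 1.x); the sprite range and sizes are hypothetical:

#include <string.h> /* memcpy */

#define N 10 /* number of repeats */
GLfloat verts[N * 6 * 2];
GLfloat uvs[N * 6 * 2];
const float u0 = 0.05f, v0 = 0.05f, u1 = 0.10f, v1 = 0.10f;
const float x0 = -100.0f, x1 = 100.0f, yStart = -100.0f;
const float step = 200.0f / N;

for (int i = 0; i < N; ++i) {
    float ya = yStart + i * step;
    float yb = ya + step;
    /* one quad as two triangles, all reusing the same sub-range UVs */
    GLfloat quadV[12] = { x0, ya, x1, ya, x1, yb, x0, ya, x1, yb, x0, yb };
    GLfloat quadT[12] = { u0, v0, u1, v0, u1, v1, u0, v0, u1, v1, u0, v1 };
    memcpy(&verts[i * 12], quadV, sizeof(quadV));
    memcpy(&uvs[i * 12], quadT, sizeof(quadT));
}

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, verts);
glTexCoordPointer(2, GL_FLOAT, 0, uvs);
glDrawArrays(GL_TRIANGLES, 0, N * 6);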
This can be done with a modulo of your texture coordinates in the shader. The mod will repeat your sub-range coordinates.
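A minimal GLSL (ES 2.0-style) fragment shader sketch of that idea; atlas, uvMin, uvMax, repeats, and vTexCoord are hypothetical names:

uniform sampler2D atlas;
uniform vec2 uvMin;     // e.g. vec2(0.1, 0.1)
uniform vec2 uvMax;     // e.g. vec2(0.3, 0.3)
uniform vec2 repeats;   // e.g. vec2(10.0)
varying vec2 vTexCoord; // plain [0,1] coords across the quad

void main()
{
    vec2 tiled = fract(vTexCoord * repeats); // fract(x) == mod(x, 1.0)
    vec2 uv = mix(uvMin, uvMax, tiled);      // remap into the sub-rect
    gl_FragColor = texture2D(atlas, uv);
}

One caveat: with mipmapping, the derivative jump at each wrap can bleed along tile seams, which is what the HLSL answer below works around.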
I ran into your question while working on the same issue, although in HLSL and DirectX. I also needed mipmapping, and had to solve the related texture bleeding too.
I solved it this way:
min16float4 sample_atlas(Texture2D<min16float4> atlasTexture, SamplerState samplerState, float2 uv, AtlasComponent atlasComponent)
{
    //Get LOD
    //Never wrap these as that will cause the LOD value to jump on wrap
    //xy is left-top, zw is width-height of the atlas texture component
    float2 lodCoords = atlasComponent.Extent.xy + uv * atlasComponent.Extent.zw;
    uint lod = ceil(atlasTexture.CalculateLevelOfDetail(samplerState, lodCoords));

    //Get texture size
    float2 textureSize;
    uint levels;
    atlasTexture.GetDimensions(lod, textureSize.x, textureSize.y, levels);

    //Calculate component size and calculate edge thickness - this is to avoid bleeding
    //Note my atlas components are well behaved, that is they are all power of 2 and mostly similar size, they are tightly packed, no gaps
    float2 componentSize = textureSize * atlasComponent.Extent.zw;
    float2 edgeThickness = 0.5 / componentSize;

    //Calculate texture coordinates
    //We only support wrap for now
    float2 wrapCoords = clamp(wrap(uv), edgeThickness, 1 - edgeThickness);
    float2 texCoords = atlasComponent.Extent.xy + wrapCoords * atlasComponent.Extent.zw;

    return atlasTexture.SampleLevel(samplerState, texCoords, lod);
}
Note the limitation is that the mip levels are blended this way, but in our use case that is completely fine.
Can't be done...