Deferred Rendering with OpenGL, experiencing heavy pixelization near lit boundaries on surfaces

Problem Explanation
I am currently implementing point lights for a deferred renderer and am having trouble determining where the heavy pixelization/triangulation, which is only noticeable near the borders of lights, is coming from.
The problem appears to be caused by a loss of precision somewhere, but I have been unable to track down the precise source. Normals are an obvious possibility, but I have a classmate who is using DirectX and handling his normals in a similar manner with no issues.
From about 2 meters away in our game's units (64 units/meter):
A few centimeters away. Note that the "pixelization" does not change size in the world as I approach it. However, it will appear to swim if I change the camera's orientation:
A comparison with a closeup from my forward renderer which demonstrates the spherical banding that one would expect with a RGBA8 render target (only 0-255 possible values for each color). Note that in my deferred picture the back walls exhibit normal spherical banding:
The light volume is shown here as the green wireframe:
As can be seen the effect isn't visible unless you get close to the surface (around one meter in our game's units).
Position reconstruction
First, I should mention that I am using a spherical mesh to render only the portion of the screen that the light overlaps. I render only the back-faces, and only where their depth is greater than or equal to the value in the depth buffer, as suggested here.
To reconstruct the camera space position of a fragment I am taking the vector from the camera space fragment on the light volume, normalizing it, and scaling it by the linear depth from my gbuffer. This is sort of a hybrid of the methods discussed here (using linear depth) and here (spherical light volumes).
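For reference, a minimal GLSL sketch of that reconstruction (the names u_dist_tex and v_view_pos are assumptions, and it assumes the 32-bit target stores camera-space distance, as the e_dist_32f name below suggests):
// Hypothetical names; assumes the G-buffer stores camera-space distance.
uniform sampler2D u_dist_tex;  // the e_dist_32f target
in vec3 v_view_pos;            // camera-space position of the light-volume fragment

vec3 reconstruct_position(vec2 screen_uv)
{
    float dist = texture(u_dist_tex, screen_uv).r;  // linear depth (distance) from the G-buffer
    vec3 ray   = normalize(v_view_pos);             // direction from the eye through this pixel
    return ray * dist;                              // camera-space position of the shaded surface
}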
Geometry Buffer
My gBuffer setup is:
enum render_targets { e_dist_32f = 0, e_diffuse_rgb8, e_norm_xyz8_specpow_a8, e_light_rgb8_specintes_a8, num_rt };
//...
GLint internal_formats[num_rt] = { GL_R32F,  GL_RGBA8, GL_RGBA8, GL_RGBA8 };
GLint formats[num_rt]          = { GL_RED,   GL_RGBA,  GL_RGBA,  GL_RGBA  };
GLint types[num_rt]            = { GL_FLOAT, GL_FLOAT, GL_FLOAT, GL_FLOAT };
for(uint i = 0; i < num_rt; ++i)
{
    glBindTexture(GL_TEXTURE_2D, _render_targets[i]);
    glTexImage2D(GL_TEXTURE_2D, 0, internal_formats[i], _width, _height, 0, formats[i], types[i], nullptr);
}
// Separate non-linear depth buffer used for depth testing
glBindTexture(GL_TEXTURE_2D, _depth_tex_id);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32, _width, _height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);

Normal Precision
The problem was that my normals just didn't have enough precision. At 8 bits per component, there are only 256 discrete possible values. Examining the normals in my G-buffer overlaid on top of the lighting showed a 1:1 correspondence between normal values and lit "pixel" values.
I am unsure why my classmate does not get the same issue (he is going to investigate further).
After some more research I found that a term for this is quantization. Another example of it can be seen here with a specular highlight on page 19.
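To make the quantization concrete, here is roughly what an RGBA8 target does to a unit normal (a sketch; GL actually rounds rather than floors, but the step size is the point):
vec3 stored  = floor((n * 0.5 + 0.5) * 255.0) / 255.0;  // n is the unit normal being written to the 8-bit target
vec3 decoded = stored * 2.0 - 1.0;                      // read back: snapped to steps of 2/255 per axis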
Solution
After changing my normal render target to RG16F the problem is resolved.
Using the method suggested here to store and retrieve normals, I get the following results:
I now need to store my normals more compactly (I only have room for 2 components). This is a good survey of techniques if anyone finds themselves in the same situation.
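One of the two-component encodings covered in that survey is the spheremap transform; a sketch of it in GLSL, assuming view-space normals (not necessarily the exact variant I ended up using):
vec2 encode_normal(vec3 n)        // n is a normalized view-space normal
{
    float p = sqrt(n.z * 8.0 + 8.0);
    return n.xy / p + 0.5;        // written to the two-component target
}

vec3 decode_normal(vec2 enc)
{
    vec2 fenc = enc * 4.0 - 2.0;
    float f = dot(fenc, fenc);
    float g = sqrt(1.0 - f / 4.0);
    return vec3(fenc * g, 1.0 - f / 2.0);
}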
[EDIT 1]
As both Andon and GuyRT have pointed out in the comments, 16 bits is a bit overkill for what I need. I've switched to RGB10_A2 as they suggested and it gives very satisfactory results, even on rounded surfaces. The extra 2 bits help a lot (256 vs 1024 discrete values).
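For reference, the corresponding change to the allocation would look something like this (a sketch, following the G-buffer code above):
// Normals now go in a 10-bit-per-component target; the 2 alpha bits are spare.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB10_A2, _width, _height, 0, GL_RGBA, GL_FLOAT, nullptr);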
Here's what it looks like now.
It should also be noted (for anyone that references this post in the future) that the image I posted for RG16F has some undesirable banding from the method I was using to compress/decompress the normal (there was some error involved).
[EDIT 2]
After discussing the issue some more with a classmate (who is using RGB8 with no ill effects), I think it is worth mentioning that I might just have the perfect combination of elements to make this appear. The game I'm building this renderer for is a horror game that places you in pitch-black environments with a sonar-like ability. Normally a scene would have a number of lights at different angles (my classmate's environments are all very well lit - they're making an outdoor racing game). That, combined with the fact that the artifact only appears on very round objects relatively close up, might be why I provoked this. This is all just a (slightly educated) guess on my part.

Related

Can OpenGL be used to draw real-valued triangles into a buffer?

I need to implement an image reconstruction that involves drawing triangles into a buffer representing the pixels of an image. Each triangle is assigned a floating-point value it should be filled with. If triangles are drawn such that they overlap, the values of the overlapping regions must be added together.
Is it possible to accomplish this with OpenGL? I would like to take advantage of the fact that rasterizing triangles is a basic graphics task that can be accelerated on the graphics card. I have a CPU-only implementation of this algorithm already, but it is not fast enough for my purposes, due to the huge number of triangles that need to be drawn.
Specifically my questions are:
Can I draw triangles with a real value using OpenGL? (Or can I come up with a hack using color, etc.?)
Can OpenGL add the values where triangles overlap? (Once again I could deal with a hack, like color mixing)
Can I recover the real values for the pixels as an array of floats or similar to be further processed?
Do I have misconceptions about the idea that drawing in OpenGL -> using GPU to draw -> likely faster execution?
Additionally, I would like to run this code on a virtual machine, so getting acceleration to work with OpenGL is more feasible than rolling my own implementation in something like CUDA, as far as I understand. Is this true?
EDIT: Is an accumulation buffer an option here?
If 32-bit floats are sufficient then it looks like the answer is yes: http://www.opengl.org/wiki/Image_Format#Required_formats
Even under the old fixed pipeline you could use a blending mode with the function GL_FUNC_ADD, though I'm sure fragment shaders can do it more easily now.
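A minimal sketch of that blend state (additive accumulation into the bound color buffer):
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);   // the default, shown for clarity
glBlendFunc(GL_ONE, GL_ONE);    // dst = dst + src, so overlapping triangles add up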
glReadPixels() will get you the data back out of the buffer after drawing.
There are software implementations of OpenGL, but you get to choose when you set up the context. Using the GPU should be much faster than the CPU.
No idea. I've never used OpenGL or CUDA on a VM. I've never used CUDA at all.
I guess giving pieces of code as an answer wouldn't be appropriate here as your question is extremely broad. So I'll simply answer your questions individually with bits of hints.
Yes, drawing triangles with OpenGL is a piece of cake. You provide three vertices per triangle, and with the proper shaders you can draw triangles, filled or just edges, whatever you want. You seem to require a large range of values (bigger than [0, 255]), since a lot of triangles may overlap and the value of each may be bigger than one. This is not a problem: you can render into a single-channel, 32-bit float framebuffer. In your case one channel should suffice.
Yes, blending has been available in OpenGL forever. Whatever version of OpenGL you choose to use, there will be a way to add up the values of overlapping triangles.
Yes. Depending on how you implement it, you may have to use glGetBufferSubData(), glReadPixels(), or something else. However, depending on the size of the buffer you're filling, downloading the full buffer may take a while (a 2000x1000, one-channel, 32-bit buffer would be around 4-5 ms). It may be more efficient to do all your processing on the GPU and extract only the few valuable results instead of continuing the processing on the CPU.
The execution will undoubtedly be faster. However, downloading data from GPU memory is often not as well optimized as uploading, so the time you win on processing may be lost on the download. I've never worked with OpenGL on a VM, so the additional loss of performance is unknown to me.
//Struct definition: one entry per vertex (2D position + intensity)
struct Triangle {
    float position[2];
    float intensity;
};
//Init
glGenBuffers(1, &m_buffer);
glBindBuffer(GL_ARRAY_BUFFER, m_buffer);
glBufferData(GL_ARRAY_BUFFER,
             triangleVector.size() * sizeof(Triangle),
             triangleVector.data(),
             GL_DYNAMIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glGenVertexArrays(1, &m_vao);
glBindVertexArray(m_vao);
glBindBuffer(GL_ARRAY_BUFFER, m_buffer);
glEnableVertexAttribArray(POSITION);
glEnableVertexAttribArray(INTENSITY);
glVertexAttribPointer(
    POSITION,
    2,
    GL_FLOAT,
    GL_FALSE,
    sizeof(Triangle),
    (void*)0);
glVertexAttribPointer(
    INTENSITY,
    1,
    GL_FLOAT,
    GL_FALSE,
    sizeof(Triangle),
    (void*)(sizeof(float) * 2));
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
// Single-channel 32-bit float render target
glGenTextures(1, &m_texture);
glBindTexture(GL_TEXTURE_2D, m_texture);
glTexImage2D(
    GL_TEXTURE_2D,
    0,
    GL_R32F,
    width,
    height,
    0,
    GL_RED,
    GL_FLOAT,
    NULL);
glBindTexture(GL_TEXTURE_2D, 0);
glGenFramebuffers(1, &m_frameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, m_frameBuffer);
glFramebufferTexture(
    GL_FRAMEBUFFER,
    GL_COLOR_ATTACHMENT0,
    m_texture,
    0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
After that you just need to write your render function. A simple glDraw*() call should be enough; just remember to bind your buffers correctly and to enable blending with the proper blend function. You might also want to disable anti-aliasing for your case. At first glance I'd say you need an orthographic projection, but I don't have all the elements of your problem, so it's up to you.
Long story short, if you have never worked with OpenGL, the piece of code above will only be relevant once you have read some documentation/tutorials on OpenGL/GLSL.
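For completeness, a sketch of what that render function could look like, reusing the (hypothetical) names from the snippet above; m_program is assumed to be a compiled shader program, and each element of triangleVector is assumed to describe one vertex, as the attribute layout implies:
glBindFramebuffer(GL_FRAMEBUFFER, m_frameBuffer);
glViewport(0, 0, width, height);
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT);

glDisable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_ONE, GL_ONE);                 // accumulate overlapping triangles

glUseProgram(m_program);
glBindVertexArray(m_vao);
glDrawArrays(GL_TRIANGLES, 0, (GLsizei)triangleVector.size());
glBindVertexArray(0);
glUseProgram(0);

// Reading the accumulated values back as floats:
std::vector<float> result(width * height);
glBindFramebuffer(GL_READ_FRAMEBUFFER, m_frameBuffer);
glReadPixels(0, 0, width, height, GL_RED, GL_FLOAT, result.data());
glBindFramebuffer(GL_FRAMEBUFFER, 0);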

"Culling" for single vertices - glDrawArrays(GL_POINTS)

I have to support some legacy code which draws point clouds using the following code:
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, (float*)cloudGlobal.data());
glEnableClientState(GL_NORMAL_ARRAY);
glNormalPointer(GL_FLOAT, 0, (float*)normals.data());
glDrawArrays(GL_POINTS, 0, (int)cloudGlobal.size());
glFinish();
This code renders all vertices regardless of the angle between the normal and the "line of sight". What I need is to draw only the vertices whose normals are directed towards us.
For faces this would be called "culling", but I don't know how to enable this option for mere vertices. Please suggest.
You could try to use the lighting system (unless you already need it for shading). Set the ambient color alpha to zero, and then simply use the alpha test to discard the points with zero alpha. You will probably need to set quite a high alpha in the diffuse color in order to avoid half-transparent points, in case alpha blending is required to antialias the points (to render discs instead of squares).
This assumes that the vertices have normals (but since you are talking about "facing away", I assume they do).
EDIT:
As correctly pointed out by @derhass, this will not work.
If you have cube-map textures, perhaps you can copy normal to texcoord and perform lookup of alpha from a cube-map (also in combination with the texture matrix to take camera and point cloud transformations into account).
Actually in case your normals are normalized, you can scale them using the texture matrix to [-0.49, +0.49] and then use a simple 1D (or 2D) bar texture (half white, half black - incl. alpha). Note that counterintuitively, this requires texture wrap mode to be left as default GL_REPEAT (not clamp).
If your point clouds have the shape of some closed objects, you can still get similar behavior even without cube-map textures by drawing a dummy mesh with glColorMask(0, 0, 0, 0) (it will only write depth) that "covers" the points that are facing away. You can generate this mesh as a group of quads placed behind the points, in the opposite direction of their normals, that are only visible from the side opposite to where the points are supposed to be visible, thus covering them.
Note that this will only lead to visual improvement (it will look like the points are culled), not performance improvement.
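A rough sketch of that depth-only "cover" pass, staying within the fixed-function calls the legacy code already uses (drawCoverMesh() is a hypothetical placeholder for the dummy geometry described above):
glEnable(GL_DEPTH_TEST);

// Pass 1: write only depth for the cover mesh.
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
drawCoverMesh();                                      // hypothetical: the quads/mesh hiding back-facing points

// Pass 2: draw the points; those behind the cover mesh fail the depth test.
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, (float*)cloudGlobal.data());
glDrawArrays(GL_POINTS, 0, (int)cloudGlobal.size());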
Just out of curiosity - what's your application and why do you need to avoid shaders?

Layered images as framebuffer with high number of layers

I want to use a layered image (a 3D texture with 10-1000 z resolution) as the texture for a framebuffer.
I set the texture for the framebuffer via:
glGenTextures(1, &textureName);
glBindTexture(GL_TEXTURE_3D, textureName);
glTexStorage3D(GL_TEXTURE_3D, 1, GL_R32F, width, height, depth);
glBindTexture(GL_TEXTURE_3D, 0);
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, textureName, 0);
Then I create geometry for the layers in a geometry shader
void main() {
    for (int layer = 0; layer < textureDepth; ++layer) {
        gl_Layer = layer;
        //generate and emit vertices
        EndPrimitive();
    }
}
For small texture depth (e.g. 10) this seems to work but for bigger numbers the result seems to be wrong. There are many places where things could go wrong, so I wanted to make sure that this is working.
Am I setting things up right?
Are there limits to the number of layers that I have to query (glGetInteger)?
Do you have any experience with the performance of highly layered images as framebuffers?
Note that my main problem is the lack of information on this topic.
The documentation is very short, and the part on non-cube-map layered images is even shorter. I would be happy about any tutorial that covers this topic (not the cube-map case).
There are a number of limitations you could be hitting in this situation, but the one that comes to mind first is gl_MaxGeometryOutputVertices. OpenGL implementations can restrict you to as few as 256 vertices output in a single geometry shader invocation. You can split your geometry shader into multiple invocations if you are hitting this limitation. In fact, for heavily layered rendering you should be using GS invocations anyway.
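For the limits question, the relevant queries would be along these lines (a sketch; which limit actually bites depends on your setup):
GLint maxOutputVertices = 0, maxOutputComponents = 0, max3DSize = 0;
glGetIntegerv(GL_MAX_GEOMETRY_OUTPUT_VERTICES, &maxOutputVertices);
glGetIntegerv(GL_MAX_GEOMETRY_TOTAL_OUTPUT_COMPONENTS, &maxOutputComponents);
glGetIntegerv(GL_MAX_3D_TEXTURE_SIZE, &max3DSize);   // bounds the depth of the 3D texture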
If you update your question with more details, particularly your full geometry shader, I can give you a more detailed answer, including how to set up GS invocations if you are not already familiar with them.
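For reference, GS invocations (instanced geometry shaders, GL 4.0+) look roughly like this; a sketch that simply passes the incoming triangle through to one layer per invocation (32 is an arbitrary count here):
#version 400
layout(triangles, invocations = 32) in;
layout(triangle_strip, max_vertices = 3) out;

void main() {
    gl_Layer = gl_InvocationID;            // each invocation writes a different layer
    for (int i = 0; i < 3; ++i) {
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}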

How to make fading-to-black effect with OpenGL?

I'm trying to achieve a fade-to-black effect, but I don't know how to do it. I tried several things, but they fail due to how OpenGL works.
I will explain how it would work:
If I draw one white pixel and move it one pixel in some direction each frame, then each frame the screen pixels should lose one R/G/B value (of the 0-255 range), so after 255 frames the white pixel would be fully black. So if I move the white pixel around, I would see a gradient trail going from white to black, with an even one-value color difference compared to the previous pixel color.
Edit: I would prefer to know a non-shader way of doing this, but if that's not possible then I can accept a shader way too.
Edit 2: Since there is some confusion around here, I would like to point out that I can already do this kind of effect by drawing a black transparent quad over my whole scene. BUT, this does not work as I want it to: there is a limit on how dark the pixels can get, so it will always leave some of the pixels "visible" (above zero color value), because 1*0.9 = 0.9 gets rounded back to 1, and so on. I can "fix" this by making the trail shorter, but I want to be able to adjust the trail length as much as possible, and instead of a percentage-based (multiplicative) fade I want a linear one (so it would always subtract 1 from each R/G/B value on the 0-255 scale, instead of using a percentage).
Edit 3: There is still some confusion, so let's be clear: I want to improve on the effect you get by leaving GL_COLOR_BUFFER_BIT out of glClear(). I don't want the pixels to stay on my screen FOREVER, so I want to make them darker over time by drawing a quad over my scene that reduces each pixel's color value by 1 (on the 0-255 scale).
Edit 4: I'll make it simple: I want an OpenGL method for this, and the effect should use as little power, memory, and bandwidth as possible. The effect is supposed to work without clearing the screen pixels, so if I draw a transparent quad over my scene, the previously drawn pixels get darker, and so on. But as explained above a few times, that does not work very well. The big NOs are: 1) reading pixels from the screen, modifying them one by one in a loop, and uploading them back; 2) rendering my objects X times with different darknesses to emulate the trail effect; 3) multiplying the color values, since that won't make the pixels black; they will stay on the screen forever at a certain brightness (see the explanation somewhere above).
If I draw one white pixel and move it one pixel in some direction each frame, then each frame the screen pixels should lose one R/G/B value (of the 0-255 range), so after 255 frames the white pixel would be fully black. So if I move the white pixel around, I would see a gradient trail going from white to black, with an even one-value color difference compared to the previous pixel color.
Before I explain how to do this, I would like to say that the visual effect you're going for is a terrible visual effect and you should not use it. Subtracting a value from each of the RGB components will produce a different color, not a darker version of the same color. The RGB color (255, 128, 0), if you subtract 1 from each component 128 times, will become (127, 0, 0). The first color is brown, the second is a dark red. These are not the same.
Now, since you haven't really explained this very well, I have to make some guesses. I am assuming that there are no "objects" in what you are rendering. There is no state. You're simply drawing stuff at arbitrary locations, and you don't remember what you drew where, nor do you want to remember what was drawn where.
To do what you want, you need two off-screen buffers. I recommend using FBOs and screen-sized textures for these. The basic algorithm is simple. You render the previous frame's image to the current image, using a blend mode that "subtracts 1" from the colors you write. Then you render the new stuff you want to the current image. Then you display that image. After that, you switch which image is previous and which is current, and do the process all over again.
Note: The following code will assume OpenGL 3.3 functionality.
Initialization
So first, during initialization (after OpenGL is initialized), you must create your screen-sized textures. You also need two screen-sized depth buffers.
GLuint screenTextures[2];
GLuint screenDepthBuffers[2];
GLuint fbos[2]; //Put these definitions somewhere useful.
glGenTextures(2, screenTextures);
glGenRenderbuffers(2, screenDepthBuffers);
glGenFramebuffers(2, fbos);
for(int i = 0; i < 2; ++i)
{
    glBindTexture(GL_TEXTURE_2D, screenTextures[i]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, SCREEN_WIDTH, SCREEN_HEIGHT, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glBindTexture(GL_TEXTURE_2D, 0);
    glBindRenderbuffer(GL_RENDERBUFFER, screenDepthBuffers[i]);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, SCREEN_WIDTH, SCREEN_HEIGHT);
    glBindRenderbuffer(GL_RENDERBUFFER, 0);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbos[i]);
    glFramebufferTexture(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, screenTextures[i], 0);
    glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, screenDepthBuffers[i]);
    if(glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        //Error out here.
    }
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
}
Drawing Previous Frame
The next step will be drawing the previous frame's image to the current image.
To do this, we need to have the concept of a previous and current FBO. This is done by having two variables: currIndex and prevIndex. These values are indices into our GLuint arrays for textures, renderbuffers, and FBOs. They should be initialized (during initialization, not for each frame) as follows:
currIndex = 0;
prevIndex = 1;
In your drawing routine, the first step is to draw the previous frame, subtracting one (again, I strongly suggest using a real blend here).
This won't be full code; there will be pseudo-code that I expect you to fill in.
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbos[currIndex]);
glClearColor(...);
glClearDepth(...);
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT|GL_STENCIL_BUFFER_BIT);
glActiveTexture(GL_TEXTURE0 + 0);
glBindTexture(GL_TEXTURE_2D, screenTextures[prevIndex]);
glUseProgram(BlenderProgramObject); //The shader will be talked about later.
RenderFullscreenQuadWithTexture();
glUseProgram(0);
glBindTexture(GL_TEXTURE_2D, 0);
The RenderFullscreenQuadWithTexture function does exactly what it says: renders a quad the size of the screen, using the currently bound texture. The program object BlenderProgramObject is a GLSL shader that does our blend operation. It fetches from the texture and does the blend. Again, I'm assuming you know how to set up a shader and so forth.
The fragment shader would have a main function that looks something like this:
shaderOutput = texture(prevImage, texCoord) - (1.0/255.0);
Again, I strongly advise this:
shaderOutput = texture(prevImage, texCoord) * (0.05);
If you don't know how to use shaders, then you should learn. But if you don't want to, then you can get the same effect using a glTexEnv function. And if you don't know what those are, I suggest learning shaders; it's so much easier in the long run.
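If you do want the subtract-by-one expressed purely as blend state rather than in the shader, it can be done with a constant blend color; a sketch (note this darkens the bound 8-bit color buffer in place by 1/255 per full-screen pass, a slight variation on the copy-and-darken pass above):
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_REVERSE_SUBTRACT);              // result = dst - src * srcFactor
glBlendFunc(GL_CONSTANT_COLOR, GL_ONE);                 // srcFactor = blend color, dstFactor = 1
glBlendColor(1.0f / 255.0f, 1.0f / 255.0f, 1.0f / 255.0f, 0.0f);
// ...then draw a full-screen quad whose fragment color is vec4(1.0).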
Draw Stuff As Normal
Now, you just render everything you would as normal. Just don't unbind the FBO; we still want to render to it.
Display the Rendered Image on Screen
Normally, you would use a swapbuffer call to display the results of your rendering. But since we rendered to an FBO, we can't do that. Instead, we have to do something different. We must blit our image to the backbuffer and then swap buffers.
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbos[currIndex]);
glBlitFramebuffer(0, 0, SCREEN_WIDTH, SCREEN_HEIGHT, 0, 0, SCREEN_WIDTH, SCREEN_HEIGHT, GL_COLOR_BUFFER_BIT, GL_NEAREST);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
//Do OpenGL swap buffers as normal
Switch Images
Now we need to do one more thing: switch the images that we're using. The previous image becomes current and vice versa:
std::swap(currIndex, prevIndex);
And you're done.
You may want to render a black rectangle with alpha going from 1.0 to 0.0 using glBlendFunc (GL_ONE, GL_SRC_ALPHA).
Edit in response to your comment (reply doesn't fit in a comment):
You cannot fade single pixels depending on their age with a simple fade-to-black operation. Usually a render target does not "remember" what has been drawn to it in previous frames. I could think of a way to do this by alternately rendering to one of a pair of FBOs and using their alpha channel for it, but you would need a shader there. What you would do is first render the FBO containing the pixels at their previous positions, decreasing their alpha value by one, dropping them when alpha == 0 and otherwise darkening them as their alpha decreases, then render the pixels at their current positions with alpha == 255.
If you only have moving pixels:
render FBO 2 to FBO 1, darkening each pixel in it by a scale (skip during first pass)
render moving pixels to FBO 1
render FBO 1 to FBO 2 (FBO 2 is the "age" buffer)
render FBO 2 to screen
If you want to modify some scene (i.e. have a scene and moving pixels in it):
set glBlendFunc (GL_ONE, GL_ZERO)
render FBO 2 to FBO 1, reducing each alpha > 0.0 in it by a scale (skip during first pass)
render moving pixels to FBO 1
render FBO 1 to FBO 2 (FBO 2 is the "age" buffer)
render the scene to screen
set glBlendFunc (GL_ONE, GL_SRC_ALPHA)
render FBO 2 to screen
Actually the scale should be (float) / 255.0 / 255.0 to make the components equally fade away (and not one that started at a lower value become zero before the others do).
If you only have a few moving pixels, you could re-render the pixel at all previous positions up to 255 "ticks" back.
Since you need to re-render each of the pixels anyway, just render each one with the proper color gradient: the older the pixel, the darker. If you have a really large number of pixels, the dual-FBO approach might work.
I am writing ticks, and not frames, because frames can take a varying amount of time depending on renderer and hardware, but you probably want the pixel trail to fade away within a constant time. That means you need to dim each pixel only after so-and-so many milliseconds, keeping its color for the frames in between.
One non-shader way of doing this, especially if the fade to black is the only thing going on on the screen, is to grab the contents of the screen via glReadPixels (IIRC), put those into a texture, and draw a screen-sized rectangle with that texture; you can then modulate the color of the rectangle towards black to achieve the effect you want.
It is the drivers; Windows itself does not support OpenGL beyond a low version, I think 1.5. All newer versions come with the drivers from ATI, NVIDIA, Intel, etc.
Are you using different cards?
What version of OpenGL are you effectively using?
It's situations like this that make it so I cannot use pure OpenGL. I am not sure if your project has room for it (which it may not if you're using another windowing API), or if the added complexity would be worth it, but adding a 2D library like SDL which works with OpenGL would allow you to directly work with the display surface's pixels in a reasonable fashion, as well as just pixels in general, which OpenGL generally doesn't make easy.
Then all you would need to do is run through the display surface's pixels before OpenGL renders its geometry, and subtract 1 from each RGB component.
That's the easiest solution I can see anyway, if using additional libraries with OpenGL is an option.

How to get rid of texture wrapping seam?

Take a look at the following image - you will see the clouds in the background have a very annoying seam:
http://simoneschbach.com/seam.png
The seam appears where the wrap-around occurs, as I am supplying texture coordinates programmatically with the following code:
gBackgroundPos += 0.0003f; // gBackgroundPos climbs indefinitely...
GLfloat bgCoords[] = { gBackgroundPos, 1.0,
gBackgroundPos + 0.5f, 1.0,
gBackgroundPos, 0.0,
gBackgroundPos + 0.5f, 0.0 };
I have enabled texture wrapping during texture init as follows:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
What can I do here to get rid of the very visible seam?
The problem you have is exactly solved by this technique, which is extremely simple to implement:
http://vcg.isti.cnr.it/~tarini/no-seams/
There is an open-source demo at that link, which exposes the used fragment shader.
The trick is easy to adopt even without a complete understanding of why it works, and it is fully explained in the Journal of Graphics Tools article:
"Cylindrical and Toroidal Parameterizations Without Vertex Seams"
which can be found, for example, at
http://vcg.isti.cnr.it/Publications/2012/Tar12/.
Unfortunately, the other solutions listed here won't work:
GL_REPEAT (as the GL_TEXTURE_WRAP mode), alone, does not do what you need. The problem, as noted, is that a triangle connecting points with S = 0.9 and S = 0.1 interpolates all the way back across the cylinder, not forward across the seam.
Replicating vertices on the "cut" (the texture seam) would work on static geometries, where texture coordinates are sent as attributes (but even then the drawbacks are many: it introduces replication, and the seams break the geometry; the two sides of the texture cut become topologically disconnected). In this specific case, texture coordinates are produced procedurally, so that is not even an option.
This is an old question, but I recently had to deal with the same issue.
When a texture wraps around, say, a cylinder, a natural seam occurs where the edges of the map meet.
The strip of triangles that crosses that boundary winds up having texture coordinates that cause the renderer to squeeze the entire texture, backwards, into those triangles.
Just looking at the U coordinate: a triangle that does not cross the texture edge will have its texture coordinates in counter-clockwise order. However, when a triangle crosses the border, you wind up with the opposite winding order, because the point that crosses the border gets mapped back to the opposite side of the texture. You'll typically see a triangle that has one or two texture coordinates in the 0.9-0.9999 U range and the remaining coordinates in the 0-0.1 range.
When the renderer sees that, it does exactly what it's supposed to: it interpolates the face's texture coordinates from 0.9 down to 0.1, which includes most of the texture. Your seam is just what the texture looks like when it's squeezed backwards into a small space.
The solution is to split the edges that cross the border, so that each of the affected vertices appears twice in the list. The first one will have texture coordinates on the left end of the map, and the other will have all of its texture coordinates on the other end, so that no single edge spans the texture.
Note that you're not changing the XYZ values for the vertex: just UV.
Also note that this doesn't happen with surfaces that don't share vertices between disparate edges. Planes are immune. Your example image isn't online anymore so I can't verify that this is what's happening to you, but it seems likely based on your cursory description.
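A sketch of that fix for a non-indexed triangle list, where shifting U on the seam-crossing triangles has the same effect as duplicating the shared vertices (Vertex and Triangle here are hypothetical types holding XYZ plus UV; GL_REPEAT takes care of U values above 1.0):
#include <algorithm>  // std::min, std::max

for (Triangle& tri : triangles) {
    float uMin = std::min({ tri.v[0].u, tri.v[1].u, tri.v[2].u });
    float uMax = std::max({ tri.v[0].u, tri.v[1].u, tri.v[2].u });
    if (uMax - uMin > 0.5f) {             // this triangle straddles the texture seam
        for (Vertex& v : tri.v) {
            if (v.u < 0.5f)
                v.u += 1.0f;              // same XYZ, U moved to the "other copy" of the texture
        }
    }
}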
I'd vote for Shezan Baig if that comment was an answer.
GL_REPEAT is meant to do exactly what you want. If it does not, it's very likely because your texture itself has the seam baked in, or alternatively because the toolchain that loads the texture introduces the seam (for example, because the source texture is not a power-of-two size).
You might be able to take advantage of texture borders (2^m+1 x 2^n+1 textures, rather than 2^m x 2^n) and copy data from the opposite side into the border pixels to make the texture cyclic.
You'll also want to change GL_TEXTURE_MAG_FILTER and GL_TEXTURE_MIN_FILTER to use linear interpolation or better (maybe GL_LINEAR_MIPMAP_LINEAR for best results).