OpenGL Y texture coordinates behaving oddly - C++

So basically I am making a 2D game with OpenGL/C++. I have a quad with a texture mapped onto it, and because I can't use non-power-of-two images (or at least I shouldn't), I have placed my image inside a power-of-two image and I want to crop away the excess with texture mapping.
GLfloat quadTexcoords[] = {
    0.0,     0.0,
    0.78125, 0.0,      // 0.78125 = 800 / 1024
    0.78125, 0.8789,   // 0.8789 ~= 450 / 512
    0.0,     0.8789
};
glGenBuffers(1, &VBO_texcoords);
glBindBuffer(GL_ARRAY_BUFFER, VBO_texcoords);
glBufferData(GL_ARRAY_BUFFER, sizeof(quadTexcoords), quadTexcoords, GL_DYNAMIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
This is my texture coordinate code. Nothing special, I know. The X coordinate (0.78125) works fine and removes the excess image on the right side. However, the Y coordinate does not work. I have debugged my game and found that the correct coordinate is sent to the shader, but it won't map correctly. It seems to be working in reverse sometimes, but it is very unclear.
If I give it a Y coordinate like 20, the texture repeats multiple times but still leaves a little white line at the top of the quad. I haven't got the faintest idea what it could be.
Other details: the image I am trying to map is 800 by 450 and it is padded out to an image of size 1024 by 512. I scale the quad by the 800 by 450 aspect ratio. I doubt this makes a difference, but you never know!
Thanks for your time.
EDIT: here is an example of what's happening.
This is the full image mapped fully (0 to 1 in both X and Y). The blue portion is 200 pixels high and the full image is 300 pixels high.
The second image is the image mapped to two thirds of the Y axis (i.e. 0 to 0.6666 in Y). This should remove the white at the top, but that is not what is happening. I don't think the coordinates are back to front, as I took the mapping from several tutorials online.

It seems to be working in reverse sometimes but it is very unclear.
OpenGL assumes the viewport origin is in the lower left, and texture coordinates run along the flat-memory texel order in the S, then T direction. In essence this means that, with the usual mappings, textures have their origin in the lower left, contrary to the upper-left origin used by most image manipulation programs.
So in your case the white margin you see is simply the padding, which you probably applied at the bottom of the texture image instead of at the top, where it should go. Why can't you use NPOT textures anyway? They're widely supported.
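To make the flipped origin concrete, here is a minimal sketch of computing the sub-image coordinates, assuming the 800x450 image sits at the top-left of the 1024x512 canvas in an image editor, the loader delivers rows top-to-bottom, and the quad's vertices run bottom-left, bottom-right, top-right, top-left:
GLfloat quadTexcoords[] = {
    // Sketch only: with top-to-bottom row order, the first row in memory
    // (the image's top edge) lands at T = 0 in OpenGL, so T is flipped here
    // to keep the image upright while still cropping away the padding.
    // sMax = 800/1024 = 0.78125, tMax = 450/512 ~= 0.8789
    0.0f,     0.8789f,   // bottom-left of the quad samples the image's bottom row
    0.78125f, 0.8789f,   // bottom-right
    0.78125f, 0.0f,      // top-right of the quad samples the image's top row
    0.0f,     0.0f       // top-left
};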

Not a real solution to your problem, but maybe a way to work around it:
You can scale the image to 1024x1024 (which deforms the image) and use 0-to-1 texture coordinates. Because your quad has the 800x450 aspect ratio, the image will still be displayed correctly.

Related

OpenGL depth sorting fails on single mesh, when rendering to framebuffer instead of screen

As a simple demonstration of my problem, I am trying to render a large but simple mesh to a texture to be used later. Strangely, the parts of the mesh that are further away from the camera are displayed in front of the parts that are closer to it when viewed from a specific angle, despite the fact that I do, beyond all doubt, use depth testing:
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
As an example, I am trying to render a subdivided grid (on the xz plane) centered on the origin, with a smooth "hill" in the middle of the grid.
When rendered to the screen, no errors occur and the mesh looks like this (rendered using an orthographic projection, with the greyscale colour representing depth; furthermore, no error occurs even if the mesh is viewed from any side):
Rendering to the screen is of course done by making sure the framebuffer is set to 0 (glBindFramebuffer(GL_FRAMEBUFFER, 0);), but I need to render this scene to another framebuffer, which is not my screen, in order to use the result as a texture.
So I have set up another framebuffer and an output texture, and now I am rendering this scene to that framebuffer (with absolutely nothing changed except the framebuffer and the viewport size, which is set to match the output texture). To demonstrate the error I am experiencing, I am then rendering this texture onto a plane which is displayed on the screen.
When the mesh is viewed from the positive x axis and rotated around the y axis (centred on its origin) between -0.5 π rad and 0.5 π rad, the rendered texture looks exactly identical to the result of rendering to the screen, as seen in the image above.
However, when the rotation around the y axis is greater than 0.5 π rad or less than -0.5 π rad, the closer-to-the-camera hill is rendered behind the further-away-from-the-camera plane (the fact that the hill is closer to the camera can be verified by looking at the colour, which represents depth):
(whoops got the title wrong on the window, ignore that)
In the border regions, with a rotation around the y axis close to 0.5 π rad or -0.5 π rad, the scene looks like this.
(whoops got the title wrong on the window again, ignore that again)
To recap: this error with the depth sorting happens only when rendering to a texture using a framebuffer, and only when the object is viewed from a specific angle. When the object is rendered directly to the screen, no error occurs. My question is therefore: why does this happen, and how (if at all) can I avoid it?
If this problem only happens when you're rendering to the texture framebuffer, you probably don't have a depth attachment properly linked to it.
Make sure that during FBO init you are linking it to a depth texture as well.
There's a good example of how to do this here.
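For reference, a minimal sketch of what such an attachment could look like (the FBO is assumed to already be bound, and texWidth/texHeight are placeholders for the output texture's size):
// Attach a depth buffer to the FBO; without one, depth testing has nothing
// to test against when rendering off-screen. A depth texture attached via
// glFramebufferTexture2D works the same way.
GLuint depthRbo;
glGenRenderbuffers(1, &depthRbo);
glBindRenderbuffer(GL_RENDERBUFFER, depthRbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, texWidth, texHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, depthRbo);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // handle the incomplete framebuffer
}

// Remember to clear the new depth buffer every frame as well.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);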
Also, check all of the matrices you're using to render -- I've had several cases in the past where improper matrices have thrown off depth calculations.

OpenGL: Drawing very thin triangles with TriangleList turn into points

I'm using a triangle list to output my primitives. Most of the time I need to draw rectangles, triangles and circles, but from time to time I need to draw very thin triangles (2 px wide, for example). I thought they should look like a line (almost a line), but they look like separate points :)
The following picture shows what I'm talking about:
The first picture on the left shows how I draw a rectangle (counter-clockwise, from the top-right corner). And then you can see the "width" of the rectangle, which I call "dx".
How do I avoid this behaviour? I would like it to look like a straight (almost straight) line, not like points :)
As #BrettHale mentions, this is an aliasing problem. For example,
Without super/multisampling, the triangle only covers the centre of the bottom-right pixel, so only that pixel will receive colour. Real pixels have area and, in a perfect situation, would receive a portion of the colour equal to the area covered. "Antialiasing" techniques reduce the aliasing effects caused by not integrating colour across pixels.
Getting it to look right without being incredibly slow is hard. OpenGL provides GL_POLYGON_SMOOTH, which conservatively rasterizes triangles and draws the correct percentages of colour to each pixel using blending. This works well until you have overlapping triangles and you hit the problem of transparency sorting where order-independent transparency is needed. A simple and more brute force solution is to render to a much bigger texture and then downsample. This is essentially what supersampling does, except the samples can be "anisotropic" (irregular) which gives a nicer result. Multisampling techniques are adaptive and a bit more efficient, e.g. supersample pixels only at triangle edges. It is fairly straightforward to set this up with OpenGL.
However, as the triangle's area approaches zero, its coverage does too, and it will still disappear entirely even with antialiasing (although it will fade out rather than become pixelated). Although not physically correct, you may instead be after a minimum 1-pixel-wide triangle, so you get the lines you want even when the triangle is really thin. This is where doing your own conservative rasterization may be of interest.
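As a rough illustration of the GL_POLYGON_SMOOTH route mentioned above (a sketch of legacy, fixed-function-era state; it only resolves correctly when geometry is drawn in sorted order because the coverage is applied through blending):
// Classic polygon smoothing: the driver computes per-pixel coverage and the
// saturate blend accumulates it. Not available in core profiles.
glEnable(GL_POLYGON_SMOOTH);
glHint(GL_POLYGON_SMOOTH_HINT, GL_NICEST);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA_SATURATE, GL_ONE);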
This is the problem of skinny triangles in general. For example, in adaptive subdivision, when you have skinny T-junctions, it happens all the time. One solution is to draw the edges (you can use GL_LINE_STRIP) with antialiasing enabled. You can set:
glShadeModel(GL_SMOOTH);
glEnable(GL_LINE_SMOOTH);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glHint(GL_LINE_SMOOTH_HINT, GL_DONT_CARE);
before drawing the lines so you get lines when your triangle is very small...
This is called a subpixel feature, when geometry gets smaller than a single pixel. If you animated the very thin triangle, you would see the pixels pop in and out.
Try turning multisampling on. Most GL windowing libraries support a multisampled back buffer. You can also force it on in your graphics driver settings.
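For example, with GLFW (used here purely as an illustration of the windowing-library side; other libraries expose an equivalent hint):
// Request an 8x multisampled default framebuffer before creating the window.
glfwWindowHint(GLFW_SAMPLES, 8);
GLFWwindow* window = glfwCreateWindow(800, 600, "MSAA demo", nullptr, nullptr);

// Multisampling is typically on by default once the buffer has samples,
// but enabling it explicitly does no harm.
glEnable(GL_MULTISAMPLE);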
If the triangle is generated by a geometry shader, then you can adjust the triangle's size dynamically.
For example, you can make the triangle always wider than 1 px.
// NDC coordinates range from -1.0 to 1.0 and the screen width is 1920 pixels.
float pixel_unit = 2.0 / 1920.0;
vec2 center = 0.5 * (triangle[0].xy + triangle[1].xy);
// Remember to divide by the w component to get the width in NDC.
float triangle_width = length(triangle[0].xy - center) / triangle[0].w;
float scale_ratio = pixel_unit / triangle_width;
// Push the two vertices apart if the triangle is narrower than one pixel.
if (scale_ratio > 1.0) {
    triangle[0].xy = (triangle[0].xy - center) * scale_ratio + center;
    triangle[1].xy = (triangle[1].xy - center) * scale_ratio + center;
}
This issue can also be addressed via conservative rasterisation. The following summary is reproduced from the documentation for the NV_conservative_raster OpenGL extension:
This extension adds a "conservative" rasterization mode where any pixel
that is partially covered, even if no sample location is covered, is
treated as fully covered and a corresponding fragment will be shaded.
Similar extensions exist for the other major graphics APIs.
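Where the extension is exposed, turning it on is a single enable around the affected draw calls (a sketch; check for NV_conservative_raster in the extension list first, and note that the draw call name here is just a placeholder):
// Only valid when the NV_conservative_raster extension is present.
glEnable(GL_CONSERVATIVE_RASTERIZATION_NV);
drawThinTriangles();   // placeholder for the existing triangle-list draw
glDisable(GL_CONSERVATIVE_RASTERIZATION_NV);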

Holes on Heightmap based terrain using Directx11

I'm currently working on a cylinder-shaped terrain produced from a height map.
What happens in the program is simple: there is one texture for the colours of the terrain, whose alpha value marks the regions I want to be invisible, and another ARGB texture in which A is the greyscale for the heights and RGB is the normal for the lighting.
The height texture's A value goes from 1 to 255, and I'm reserving 0 for the regions with holes, meaning I don't want them to exist.
So in theory, no problem: I'm making those regions invisible based on the first texture. But in practice the program treats the 0 as the minimum height and, even with the texture on top, creates lines running towards these 0 regions, as if trying to build their triangles but never getting there, because I cut the next vertex off by making it invisible.
Notice the lines going to the centre of the cylinder.
This is how it looks when I stop making those vertices invisible.
So, just to be clear, I used the clip() function in the pixel shader to make them invisible.
Basically, what I need help with:
Is it possible, in the same way I use clip() in the pixel shader, to do something similar in the vertex shader and get rid of the unwanted vertices?
Basically, is it possible to just tell it to ignore the value 0?
Any ideas to fix this? I'm thinking of making every vertex that is 0 take the value of its neighbour; that way those lines wouldn't go to the centre but would stay on the same surface as the cylinder itself.
Another thing we can see is that the program interpolates the values from one vertex to the next, which is why the cut happens halfway to the invisible vertex.
I'm working with the DirectX 11 API in C++ and the program uses tessellation.
Thank you for your time; I will be very glad for any input on this!
Well, I did resolve this issue to some extent.
I passed the texture with the height values through a modifier that creates another texture in which each zero value is substituted either by a neighbouring pixel with a non-zero value or by 128.0f.
With that, the direction of the weird lines became more accurate: they no longer go to the centre of the cylinder but run along the surface.
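A rough sketch of that kind of preprocessing, assuming the height channel is available on the CPU as a width x rows array of bytes in which 0 marks a hole (the function name and layout are illustrative, not the poster's actual code):
#include <cstdint>
#include <vector>

// Replace hole markers (0) with the left neighbour's height, or with a
// mid-level value of 128 when no non-zero neighbour is available, so the
// tessellated vertices no longer collapse towards height 0.
void fillHoles(std::vector<uint8_t>& heights, int width, int rows)
{
    for (int y = 0; y < rows; ++y) {
        for (int x = 0; x < width; ++x) {
            uint8_t& h = heights[y * width + x];
            if (h != 0)
                continue;
            const uint8_t left = (x > 0) ? heights[y * width + x - 1] : 0;
            h = (left != 0) ? left : 128;
        }
    }
}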

Rasterizer not picking up GL_LINES as I would want it to

So I'm rendering this diagram each frame:
https://dl.dropbox.com/u/44766482/diagramm.png
Basically, each second it moves everything one pixel to the left and every frame it updates the rightmost pixel column with current data. So a lot of changes are made.
It is completely constructed from GL_LINES, always from bottom to top.
However, those black missing columns are not intentional at all; it's just the rasterizer not picking them up.
I'm using integers for positions and bytes for colours, and the projection matrix is exactly 1:1 (translating by 1 means moving 1 pixel). Orthographic.
So my problem is: how do I get rid of the black lines? I suppose I could write the data to a texture, but that seems expensive. Currently I use a VBO.
Render your columns as quads with a width of 1 pixel instead; OpenGL's rasterization rules will make sure you have no holes that way.
I realize the question is already closed, but you can also get the effect you want by drawing your lines centred at 0.5. A pixel's centre is at 0.5, and a line drawn there will always be picked up by the rasterizer in the right place.
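A small sketch of what "centred at 0.5" means with the 1:1 orthographic projection described above (x and columnHeight are placeholders for the column being drawn):
// With one unit equal to one pixel, placing the vertical line on the pixel
// centre (x + 0.5) keeps the rasterizer from skipping whole columns.
GLfloat column[] = {
    x + 0.5f, 0.0f,          // bottom of the column
    x + 0.5f, columnHeight   // top of the column
};
// ...upload into the existing VBO and draw with GL_LINES as before.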

Double buffered sprite issue in OpenGL

Unfortunately, taking a screenshot does not replicate the problem, so I'll have to explain.
My character is a quad with a texture bound to it. When I move this character in any direction, the "back end" of the pixels has a green and red "after-glow", or trailing strip of pixels. Very hard to explain, but I am assuming it is a problem with the double buffering. Is there a known issue associated with moving sprites and trailing pixels?
My only guess at this point is that you are only using a subset of the texture (i.e. your UVs are not just 0 and 1), that you have some coloured pixels outside the rect you're drawing, and that due to bilinear filtering you catch a glimpse of them.
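If that is the cause, one common mitigation is to inset the UVs by half a texel so the filter never reaches the neighbouring pixels (a sketch; texW/texH and the subU/subV bounds are placeholders for the texture size and the sprite's uninset UV rectangle):
// Pull each UV edge in by half a texel so bilinear filtering stays inside
// the sprite's sub-rectangle.
const float halfU = 0.5f / texW;
const float halfV = 0.5f / texH;
GLfloat spriteUVs[] = {
    subU0 + halfU, subV0 + halfV,
    subU1 - halfU, subV0 + halfV,
    subU1 - halfU, subV1 - halfV,
    subU0 + halfU, subV1 - halfV
};

// Alternatively, GL_NEAREST filtering avoids the blend entirely:
// glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);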
When creating textures with alpha, be sure to create an outline around the visible part of the texture with the same color (i.e. if your texture is a brown wooden fence, make sure that all transparent pixels near the fence are brown too).
NOTE that some texture compression algorithms will remove the color value from a pixel if it is entirely transparent, so if necessary, write a test pixel shader that ignores alpha to make sure that your texture made it through the pipeline intact.