I'm working on a 2D engine. It already works quite well, but I keep getting pixel errors.
For example, my window is 960x540 pixels, and I draw a line from (0, 0) to (959, 0). I would expect every pixel on scan line 0 to be set to a color, but no: the right-most pixel is not drawn. The same problem occurs when I draw vertically to pixel 539. I actually have to draw to (960, 0) or (0, 540) to get those pixels drawn.
As I was born in the pixel era, I am convinced that this is not the correct result. When my screen was 320x200 pixels, I could draw from 0 to 319 and from 0 to 199, and my screen would be full. Now I end up with a screen whose right/bottom pixel is not drawn.
This could be due to several things:
- Where I expect the OpenGL line primitive to be drawn from one pixel to another pixel inclusive, is that last pixel actually exclusive? Is that it?
- Is my projection matrix incorrect?
- Am I under the false assumption that a backbuffer of 960x540 actually has one pixel more?
- Something else?
Can someone please help me? I have been looking into this problem for a long time now, and every time I thought it was OK, I saw after a while that it actually wasn't.
Here is some of my code; I tried to strip it down as much as possible. When I call my line function, 0.375 is added to each coordinate to make it correct on both ATI and NVIDIA adapters.
int width = resX();
int height = resY();
for (int i = 0; i < height; i += 2)
rm->line(0, i, width - 1, i, vec4f(1, 0, 0, 1));
for (int i = 1; i < height; i += 2)
rm->line(0, i, width - 1, i, vec4f(0, 1, 0, 1));
// when I do this, one pixel to the right remains undrawn
void rendermachine::line(int x1, int y1, int x2, int y2, const vec4f &color)
{
... some code to decide what std::vector the coordinates should be pushed into
// m_z is a z-coordinate, I use z-buffering to preserve correct drawing orders
// vec2f(0, 0) is a texture-coordinate, the line is drawn without texturing
target->push_back(vertex(vec3f((float)x1 + 0.375f, (float)y1 + 0.375f, m_z), color, vec2f(0, 0)));
target->push_back(vertex(vec3f((float)x2 + 0.375f, (float)y2 + 0.375f, m_z), color, vec2f(0, 0)));
}
void rendermachine::update(...)
{
... render target object is queried for width and height, in my test it is just the back buffer so the window client resolution is returned
mat4f mP;
mP.setOrthographic(0, (float)width, (float)height, 0, 0, 8000000);
... all vertices are copied to video memory
... drawing
if (there are lines to draw)
glDrawArrays(GL_LINES, (int)offset, (int)lines.size());
...
}
// And the (very simple) shader to draw these lines
// Vertex shader
#version 120
attribute vec3 aVertexPosition;
attribute vec4 aVertexColor;
uniform mat4 mP;
varying vec4 vColor;
void main(void) {
gl_Position = mP * vec4(aVertexPosition, 1.0);
vColor = aVertexColor;
}
// Fragment shader
#version 120
#ifdef GL_ES
precision highp float;
#endif
varying vec4 vColor;
void main(void) {
gl_FragColor = vColor;
}
In OpenGL, lines are rasterized using the "Diamond Exit" rule. This is almost the same as saying that the end coordinate is exclusive, but not quite...
This is what the OpenGL spec has to say:
http://www.opengl.org/documentation/specs/version1.1/glspec1.1/node47.html
Also have a look at the OpenGL FAQ, http://www.opengl.org/archives/resources/faq/technical/rasterization.htm, item "14.090 How do I obtain exact pixelization of lines?". It says "The OpenGL specification allows for a wide range of line rendering hardware, so exact pixelization may not be possible at all."
Many will argue that you should not use lines in OpenGL at all. Their behaviour is based on how ancient SGI hardware worked, not on what makes sense. (And lines with widths >1 are nearly impossible to use in a way that looks good!)
Note that OpenGL coordinate space has no notion of integers: everything is a float, and the "centre" of an OpenGL pixel is really at (0.5, 0.5), not at its top-left corner. Therefore, if you want a 1px wide line from 0,0 to 10,10 inclusive, you really have to draw a line from 0.5,0.5 to 10.5,10.5.
This will be especially apparent if you turn on anti-aliasing: if you try to draw from 50,0 to 50,100 with anti-aliasing enabled, you may see a blurry 2px wide line, because the line falls in between two pixels.
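If you would rather apply that half-pixel centring in one place instead of sprinkling offsets through the C++ line code, it can go in the vertex shader. Here is a minimal sketch against the question's own shader (aVertexPosition, mP and vColor are taken from the post; the vec2(0.5) offset is the pixel-centring idea above, not a guaranteed cure for diamond-exit behaviour):
#version 120
attribute vec3 aVertexPosition; // endpoints pushed as integer pixel coordinates
attribute vec4 aVertexColor;
uniform mat4 mP; // the same orthographic projection as in the question
varying vec4 vColor;
void main(void) {
    // Centre each endpoint on its pixel; this replaces the CPU-side 0.375 offsets.
    gl_Position = mP * vec4(aVertexPosition.xy + vec2(0.5), aVertexPosition.z, 1.0);
    vColor = aVertexColor;
}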
Related
Here's my situation: I need to draw a rectangle on the screen for my game's GUI. I don't really care how big this rectangle is or might be; I want to be able to handle any situation. The way I'm doing it right now is to store a single VAO containing only a very basic quad, then re-draw this quad using uniforms to modify its size and position on the screen each time.
The VAO contains 4 vec4 vertices:
0, 0, 0, 0;
1, 0, 1, 0;
0, 1, 0, 1;
1, 1, 1, 1;
And then I draw it as a GL_TRIANGLE_STRIP. The XY of each vertex is its position, and the ZW is its texture coordinates*. I pass in the rect for the GUI element I'm currently drawing as a uniform vec4, which offsets the vertex positions in the vertex shader like so:
vertex.xy *= guiRect.zw;
vertex.xy += guiRect.xy;
And then I convert the vertex from screen pixel co-ordinates into OpenGL NDC co-ordinates:
gl_Position = vec4(((vertex.xy / screenSize) * 2) -1, 0, 1);
This changes the range from [0, screenWidth | screenHeight] to [-1, 1].
My problem comes in when I want to do texture wrapping. Simply passing vTexCoord = vertex.zw; is fine when I want to stretch a texture, but not for wrapping. Ideally, I want to modify the texture coordinates such that 1 pixel on the screen is equal to 1 texel in the GUI texture. Texture coordinates going beyond [0, 1] are fine at this stage, and are in fact exactly what I'm looking for.
I plan to implement texture atlases for my GUI textures, but managing the offsets and bounds of the appropriate sub-texture will be handled in the fragment shader - as far as the vertex shader is concerned, our quad is using one solid texture with [0, 1] coordinates, and wrapping accordingly.
*Note: I'm aware that this particular vertex format isn't necessarily useful for this particular case; I could be using vec2 vertices instead. For the sake of convenience I'm using the same vertex format for all of my 2D rendering, and other objects (e.g. text) actually do need those ZW components. I might change this in the future.
TL/DR: Given the size of the screen, the size of a texture, and the location/size of a quad, how do you calculate texture co-ordinates in a vertex shader such that pixels and texels have a 1:1 correspondence, with wrapping?
That is really very easy math: you just need to relate the two spaces in some way. And you have already formulated a rule which allows you to do so: a window-space pixel is to map to a texel.
Let's assume we have both vec2 screenSize and vec2 texSize which are the unnormalized dimensions in pixels/texels.
I'm not 100% sure what exactly you want to achieve. There is something missing: you actually did not specify where the origin of the texture should lie. Should it always be at the bottom-left corner of the quad? Or should it be globally at the bottom-left corner of the viewport? I'll assume the latter here, but it should be easy to adjust for the first case.
What we now need is a mapping from the [-1,1]^2 NDC range in x and y to s and t. Let's first map it to [0,1]^2. If we have that, we can simply multiply the coords by screenSize/texSize to get the desired effect. So in the end, you get
vec2 texcoords = ((gl_Position.xy * 0.5) + 0.5) * screenSize/texSize;
You have of course already calculated ((gl_Position.xy * 0.5) + 0.5) * screenSize implicitly, so this can be simplified to:
vec2 texcoords = vertex.xy / texSize;
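Pieced together with the question's own names (guiRect, screenSize, texSize and the vec4 vertex attribute from the posts above), the whole vertex shader might look roughly like this; treat it as a sketch rather than drop-in code:
#version 120
attribute vec4 vertex;   // xy = unit-quad corner, zw = unused here
uniform vec4 guiRect;    // xy = quad origin in pixels, zw = quad size in pixels
uniform vec2 screenSize; // window size in pixels
uniform vec2 texSize;    // texture size in texels
varying vec2 vTexCoord;
void main(void) {
    // scale and offset the unit quad into window pixel space
    vec2 pos = vertex.xy * guiRect.zw + guiRect.xy;
    // pixel space -> NDC
    gl_Position = vec4((pos / screenSize) * 2.0 - 1.0, 0.0, 1.0);
    // one window pixel per texel; values beyond [0,1] wrap as intended
    vTexCoord = pos / texSize;
}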
I need to write a shader where the color of a pixel is black when the following equation is true:
(x-coordinate of pixel) mod 2 == 1
If it is false, the pixel should be white. I searched the web for this, but could not get it to work.
More information:
I have an OpenGL scene at 800x600 resolution with a teapot in it. The teapot is red. Now I need to create that zebra look.
Here is some code I wrote, but it didn't work:
FragmentShader:
void main(){
if (mod(gl_FragCoord.x * 800.0, 2.0) == 0.0){
gl_FragColor = vec4(1.0,1.0,1.0,1.0);
}else{
gl_FragColor = vec4(0.0,0.0,0.0,1.0);
}
}
VertexShader:
void main(void)
{
gl_Position = ftransform();
gl_TexCoord[0] = gl_MultiTexCoord0;
}
As far as I know, gl_FragCoord.x is in the range (0, 1), therefore I need to multiply it by the width.
Interesting that you mention the need to multiply by the width; have you tried without the * 800.0 in there? The range of gl_FragCoord is such that the distance between adjacent pixels is 1.0, for example [0.0, 800.0] or possibly [0.5, 800.5].
Remove the width multiplication and see if it works.
Instead of comparing directly to 0, try doing a test against 1.0, e.g.
void main(){
if (mod(gl_FragCoord.x, 2.0) >= 1.0){
gl_FragColor = vec4(1.0,1.0,1.0,1.0);
}else{
gl_FragColor = vec4(0.0,0.0,0.0,1.0);
}
}
That'll avoid precision errors and the cost of rounding.
As emackey points out, gl_FragCoord is specified in window coordinates, which:

"... result from scaling and translating Normalized Device Coordinates by the viewport. The parameters to glViewport() and glDepthRange() control this transformation. With the viewport, you can map the Normalized Device Coordinate cube to any location in your window and depth buffer."
So you also don't actually want to multiply by 800 — the incoming coordinates are already in pixels.
Okay first up I am using:
DirectX 10
C++
Okay, this is a bit of a bizarre one to me. I wouldn't usually ask the question, but I've been forced by circumstance: I have two triangles (not a quad, for reasons I won't go into!) covering the full screen, aligned to the screen by the fact that they are not transformed.
In the DirectX vertex declaration I am passing a 3-component float (Pos x, y, z) and a 2-component float (Texcoord x, y). Texcoord z is reserved for Texture2D arrays, which I'm currently defaulting to 0 in the pixel shader.
I wrote this to achieve the simple task:
float fStartX = -1.0f;
float fEndX = 1.0f;
float fStartY = 1.0f;
float fEndY = -1.0f;
float fStartU = 0.0f;
float fEndU = 1.0f;
float fStartV = 0.0f;
float fEndV = 1.0f;
vmvUIVerts.push_back(CreateVertex(fStartX, fStartY, 0, fStartU, fStartV));
vmvUIVerts.push_back(CreateVertex(fEndX, fStartY, 0, fEndU, fStartV));
vmvUIVerts.push_back(CreateVertex(fEndX, fEndY, 0, fEndU, fEndV));
vmvUIVerts.push_back(CreateVertex(fStartX, fStartY, 0, fStartU, fStartV));
vmvUIVerts.push_back(CreateVertex(fEndX, fEndY, 0, fEndU, fEndV));
vmvUIVerts.push_back(CreateVertex(fStartX, fEndY, 0, fStartU, fEndV));
IA Layout: (Update)
D3D10_INPUT_ELEMENT_DESC ieDesc[2] = {
{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D10_INPUT_PER_VERTEX_DATA, 0 },
{ "TEXCOORD", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0,12, D3D10_INPUT_PER_VERTEX_DATA, 0 }
};
Data reaches the vertex shader in the following format: (Update)
struct VS_INPUT
{
float3 fPos :POSITION;
float3 fTexcoord :TEXCOORD0;
};
Within my vertex and pixel shaders, not a lot is happening for this particular draw call; the pixel shader does most of the work, sampling from a texture using the specified UV coordinates. However, this isn't working quite as expected: it appears that I am getting only 1 pixel of the sampled texture.
The workaround was to do the following in the pixel shader: (Update)
sampler s0 : register(s0);
Texture2DArray<float4> meshTex : register(t0);
float4 psMain(in VS_OUTPUT vOut) : SV_TARGET
{
float4 Color;
vOut.fTexcoord.z = 0;
vOut.fTexcoord.x = vOut.fPos.x * 0.5f;
vOut.fTexcoord.y = vOut.fPos.y * 0.5f;
vOut.fTexcoord.x += 0.5f;
vOut.fTexcoord.y += 0.5f;
Color = meshTex.Sample(s0, vOut.fTexcoord);
Color.a = 1.0f;
return Color;
}
It is also worth noting that this worked with the following VS output struct defined in the shaders:
struct VS_OUTPUT
{
float4 fPos :POSITION0; // SV_POSITION won't work in this case
float3 fTexcoord :TEXCOORD0;
};
Now I have a texture that's stretched to fit the entire screen, and both triangles already cover it, but why did the texture UVs not get used as expected?
To clarify: I am using a point sampler and have tried both clamped and wrapped UV addressing.
I was a bit curious and found the solution / workaround mentioned above, but I'd prefer not to have to use it if anyone knows why this is happening?
What semantics are you specifying for your vertex type? Are they properly aligned with your vertices and with your shader? If you are using a D3DXVECTOR4, D3DXVECTOR3 setup (as shown in your VS code), this could be a problem if your CreateVertex() returns a struct built from a D3DXVECTOR3 and a D3DXVECTOR2.
It would be reassuring to see your pixel shader code, too.
Okay well, for one, texture coordinates outside the 0..1 range get clamped. I made the mistake of assuming that because clip space runs from -1 to +1, the texture coordinates would too. This is not the case; they still go from 0.0 to 1.0.
The reason the code in the pixel shader worked is that it uses the clip-space x, y, z coordinates to overwrite those texture coordinates; an oversight on my part. However, the pixel shader code results in texture stretch on a full-screen 'quad', so it might be useful to someone somewhere ;)
I'm trying to render colored text to the screen. I've got a texture containing a black (RGBA 0, 0, 0, 255) representation of the text to display, and I've got another texture containing the color pattern I want to render the text in. This should be a fairly simple multitexturing exercise, but I can't seem to get the second texture to work. Both textures are Rectangle textures, because the integer coordinate values are easier to work with.
Rendering code:
glActiveTextureARB(GL_TEXTURE0_ARB);
glEnable(GL_TEXTURE_RECTANGLE_ARB);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, TextHandle);
glActiveTextureARB(GL_TEXTURE1_ARB);
glEnable(GL_TEXTURE_RECTANGLE_ARB);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, ColorsHandle);
glBegin(GL_QUADS);
glMultiTexCoord2iARB(GL_TEXTURE0_ARB, 0, 0);
glMultiTexCoord2iARB(GL_TEXTURE1_ARB, colorRect.Left, colorRect.Top);
glVertex2f(x, y);
glMultiTexCoord2iARB(GL_TEXTURE0_ARB, 0, textRect.Height);
glMultiTexCoord2iARB(GL_TEXTURE1_ARB, colorRect.Left, colorRect.Top + colorRect.Height);
glVertex2f(x, y + textRect.Height);
glMultiTexCoord2iARB(GL_TEXTURE0_ARB, textRect.Width, textRect.Height);
glMultiTexCoord2iARB(GL_TEXTURE1_ARB, colorRect.Left + colorRect.Width, colorRect.Top + colorRect.Height);
glVertex2f(x + textRect.Width, y + textRect.Height);
glMultiTexCoord2iARB(GL_TEXTURE0_ARB, textRect.Width, 0);
glMultiTexCoord2iARB(GL_TEXTURE1_ARB, colorRect.Left + colorRect.Width, colorRect.Top);
glVertex2f(x + textRect.Width, y);
glEnd();
Vertex shader:
void main()
{
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
gl_TexCoord[0] = gl_MultiTexCoord0;
gl_TexCoord[1] = gl_MultiTexCoord1;
}
Fragment shader:
uniform sampler2DRect texAlpha;
uniform sampler2DRect texRGB;
void main()
{
float alpha = texture2DRect(texAlpha, gl_TexCoord[0].st).a;
vec3 rgb = texture2DRect(texRGB, gl_TexCoord[1].st).rgb;
gl_FragColor = vec4(rgb, alpha);
}
This seems really straightforward, but it ends up rendering solid black text instead of colored text. I get the exact same result if the last line of the fragment shader reads gl_FragColor = texture2DRect(texAlpha, gl_TexCoord[0].st);. Changing the last line to gl_FragColor = texture2DRect(texRGB, gl_TexCoord[1].st); causes it to render nothing at all.
Based on this, it appears that calling texture2DRect on texRGB always returns (0, 0, 0, 0). I've made sure that GL_MULTISAMPLE is enabled, and bound the texture on unit 1, but for whatever reason I don't seem to actually get access to it inside my fragment shader. What am I doing wrong?
Overall this looks fine. It is possible that your texcoords for unit 1 are messed up, causing sampling outside the colored portion of your texture.
Is your color texture fully filled with color?
What do you mean by "causes it to render nothing at all"? That should not happen unless the alpha channel of your color texture is set to 0.
Did you try the following code, to override the alpha channel?
gl_FragColor = vec4( texture2DRect(texRGB, gl_TexCoord[1].st).rgb, 1.0 );
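If that renders correctly, another quick sanity check is to visualize the unit-1 coordinates themselves instead of sampling with them. This is purely a hypothetical debug shader; texRGBSize is a made-up uniform you would set to your color texture's dimensions:
uniform vec2 texRGBSize; // hypothetical: color texture size in texels
void main()
{
    // A smooth red/green gradient across the quad means the unit-1
    // coordinates are sane; solid black means they collapse to (0, 0).
    gl_FragColor = vec4(gl_TexCoord[1].st / texRGBSize, 0.0, 1.0);
}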
Are you sure the font outline texture contains valid alpha values? You said that the texture is black and white, but you are using the alpha value! Instead of using the a component, try using the r one.
Blending affects fragment shader output: it blends the fragment color with the corresponding color already in the framebuffer.
I am doing some really basic experiments around some 2D work in GL. I'm trying to draw a "picture frame" around a rectangular area. I'd like the frame to have a consistent gradient all the way around, so I'm constructing it with geometry that looks like four quads, one on each side of the frame, tapered in to make trapezoids that effectively have miter joins.
The vert coords are the same on the "inner" and "outer" rectangles, and the colors are the same for all inner and all outer as well, so I'd expect to see perfect blending at the edges.
But notice in the image below how there appears to be a "seam" in the corner of the join that's lighter than it should be.
I feel like I'm missing something conceptually in the math that explains this. Is this artifact somehow a result of the gradient slope? If I change all the colors to opaque blue (say), I get a perfect solid blue frame as expected.
Update: code added below. Sorry, it's kind of verbose. I'm using 2-triangle fans for the trapezoids instead of quads.
Thanks!
glClearColor(1.0, 1.0, 1.0, 1.0);
glClear(GL_COLOR_BUFFER_BIT);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
// Prep the color array. This is the same for all trapezoids.
// 4 verts * 4 components/color = 16 values.
GLfloat colors[16] = {
    0.0, 0.0, 1.0, 1.0, // vert 0 (outer edge): blue
    0.0, 0.0, 1.0, 1.0, // vert 1 (outer edge): blue
    1.0, 1.0, 1.0, 1.0, // vert 2 (inner edge): white
    1.0, 1.0, 1.0, 1.0  // vert 3 (inner edge): white
};
// Draw the trapezoidal frame areas. Each one is two triangle fans.
// Fan of 2 triangles = 4 verts = 8 values
GLfloat vertices[8];
float insetOffset = 100;
float frameMaxDimension = 1000;
// Bottom
vertices[0] = 0;
vertices[1] = 0;
vertices[2] = frameMaxDimension;
vertices[3] = 0;
vertices[4] = frameMaxDimension - insetOffset;
vertices[5] = 0 + insetOffset;
vertices[6] = 0 + insetOffset;
vertices[7] = 0 + insetOffset;
glVertexPointer(2, GL_FLOAT , 0, vertices);
glColorPointer(4, GL_FLOAT, 0, colors);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
// Left
vertices[0] = 0;
vertices[1] = frameMaxDimension;
vertices[2] = 0;
vertices[3] = 0;
vertices[4] = 0 + insetOffset;
vertices[5] = 0 + insetOffset;
vertices[6] = 0 + insetOffset;
vertices[7] = frameMaxDimension - insetOffset;
glVertexPointer(2, GL_FLOAT , 0, vertices);
glColorPointer(4, GL_FLOAT, 0, colors);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
/* top & right would be as expected... */
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
As #Newbie posted in the comments,
@quixoto: open your image in a paint program, click with the fill tool somewhere in the seam, and you'll see it makes a 90-degree angle line there... meaning there's only one color, nothing brighter anywhere in the "seam". It's just an illusion.
True. While I'm not familiar with this part of the math behind OpenGL, I believe this is the implicit result of how colors are interpolated between the triangle vertices: linearly (barycentrically) across each triangle, so the gradient direction changes at the shared edge.
So what to do to solve that? One possibility is to use a texture and just draw a textured quad (or several textured quads).
However, it should be easy to generate such a border in a fragment shader.
A nice solution using a GLSL shader...
Assume you're drawing a rectangle with the bottom-left corner having texture coords equal to (0,0), and the top-right corner with (1,1).
Then generating the "miter" procedurally in a fragment shader would look like this, if I'm correct:
varying vec2 coord;
uniform vec2 insetWidth; // width of the border in %, max would be 0.5
void main() {
vec3 borderColor = vec3(0,0,1);
vec3 backgroundColor = vec3(1,1,1);
// x and y inset, 0..1, 1 means border, 0 means centre
vec2 insets = max(-coord + insetWidth, vec2(0,0)) / insetWidth;
If I'm correct so far, then for every pixel insets.x now has a value in the range [0..1] determining how deep a given point is into the border horizontally, and insets.y has the analogous value for vertical depth.
The left vertical bar has insets.y == 0, the bottom horizontal bar has insets.x == 0, and the lower-left corner has the pair (insets.x, insets.y) covering the whole 2D range from (0,0) to (1,1). See the pic for clarity.
Now we want a transformation which, for a given (x, y) pair, will give us ONE value in [0..1] determining how to mix the background and foreground colors. 1 means 100% border, 0 means 0% border. And this can be done in several ways!
The function should obey these requirements:
- 0 if x==0 and y==0,
- 1 if either x==1 or y==1,
- smooth values in between.
Assume a function like this:
float bias = max(insets.x,insets.y);
It satisfies those requirements. Actually, I'm pretty sure this function would give you the same "sharp" edge you have above. Try calculating it on paper for a selection of coordinates inside that bottom-left rectangle.
If we want to have a smooth, round miter there, we just need another function here. I think that something like this would be sufficient:
float bias = min( length(insets) , 1 );
The length() function here is just sqrt(insets.x*insets.x + insets.y*insets.y). What's important: This translates to: "the farther away (in terms of Euclidean distance) we are from the border, the more visible the border should be", and the min() is just to make the result not greater than 1 (= 100%).
Note that our original function adheres to exactly the same definition - but the distance is calculated according to the Chessboard (Chebyshev) metric, not the Euclidean metric.
This implies that by using, for example, the Manhattan metric instead, you'd get a third possible miter shape! It would be defined like this:
float bias = min(insets.x+insets.y, 1);
I predict that this one would also have a visible "diagonal line", but the diagonal would be in the other direction ("\").
OK, so for the rest of the code: once we have the bias in [0..1], we just need to mix the background and foreground colors:
vec3 finalColor = mix(backgroundColor, borderColor, bias); // note the order: bias = 1 must give the border color
gl_FragColor = vec4(finalColor, 1); // return the calculated RGB, and set alpha to 1
}
And that's it! Using GLSL with OpenGL makes life simpler. Hope that helps!
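For convenience, here are the pieces above assembled into one shader. This is a sketch under the same assumptions as the snippets (a varying coord in [0,1]^2 and a border generated for the bottom/left region only), using the Euclidean bias for a round miter:
#version 120
varying vec2 coord;      // (0,0) at the bottom-left of the quad, (1,1) at the top-right
uniform vec2 insetWidth; // width of the border as a fraction, max 0.5
void main() {
    vec3 borderColor = vec3(0.0, 0.0, 1.0);
    vec3 backgroundColor = vec3(1.0, 1.0, 1.0);
    // 1 at the outer edge, falling to 0 once we are insetWidth deep into the quad
    vec2 insets = max(-coord + insetWidth, vec2(0.0)) / insetWidth;
    // Euclidean metric -> round miter; max(insets.x, insets.y) gives the sharp
    // one, and insets.x + insets.y the Manhattan one
    float bias = min(length(insets), 1.0);
    gl_FragColor = vec4(mix(backgroundColor, borderColor, bias), 1.0);
}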
I think that what you're seeing is a Mach band. Your visual system is very sensitive to changes in the 1st derivative of brightness. To get rid of this effect, you need to blur your intensities. If you plot intensity along a scanline which passes through this region, you'll see that there are two lines which meet at a sharp corner. To keep your visual system from highlighting this area, you'll need to round this join over. You can do this with either a post processing blur or by adding some more small triangles in the corner which ease the transition.
I had that in the past, and it's very sensitive to geometry. For example, if you draw them separately as triangles, in separate operations, instead of as a triangle fan, the problem is less severe (or, at least, it was in my case, which was similar but slightly different).
One thing I also tried was to draw the triangles separately, slightly overlapping one another, with the right composition mode (or OpenGL blending) so you don't get the effect. It worked, but I didn't end up using it because it was only a tiny part of the final product and not worth it.
I'm sorry that I have no idea what the root cause of this effect is, however :(