Rendering a circle with a pixel shader in DirectX - hlsl

I would like to render a circle onto a triangle pair using a pixel shader written in HLSL. There is some pseudocode for this here, but I am running into one problem after another while implementing it. This will be running on Windows RT, so I am limited to the DirectX feature level 9.3 subset of the Direct3D 11 API.
What is the general shape of the shader? For example, what parameters should be passed to the main method?
Any advice or working code would be greatly appreciated!

You should use the texture coordinates as inputs to render the parametric circle.
Here is a question that I asked about anti-aliasing a circle rendered in HLSL. Here is the code from that question, with a minor change to make the use of TEXCOORD0 clearer:
float4 PixelShaderFunction(float2 texCoord : TEXCOORD0) : COLOR0
{
    // Squared distance from the centre; comparing against the squared
    // radius avoids a sqrt.
    float dist = texCoord.x * texCoord.x
               + texCoord.y * texCoord.y;
    if (dist < 1)
        return float4(0, 0, 0, 1); // inside the circle: black
    else
        return float4(1, 1, 1, 1); // outside: white
}
It uses the equation of a circle, r² = x² + y², with the constant 1 for the radius squared (i.e. a circle of radius 1, as measured in texture coordinates). All points inside the circle are coloured black, and those outside, white.
To use it, provide triangles with texture coordinates in the range [-1, 1]. If you want to use the more traditional [0, 1] range, you will have to include some code in the shader to scale and offset the coordinates, or you will get a quarter-circle.
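For example, a minimal sketch of that scale-and-offset step, assuming the incoming coordinates are in [0, 1]:
float4 PixelShaderFunction(float2 texCoord : TEXCOORD0) : COLOR0
{
    // Remap [0, 1] texture coordinates to [-1, 1] so the circle is
    // centred on the quad.
    float2 p = texCoord * 2 - 1;
    float distSq = p.x * p.x + p.y * p.y;
    if (distSq < 1)
        return float4(0, 0, 0, 1);
    else
        return float4(1, 1, 1, 1);
}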
Once you have this up and running, you can experiment to add other features (for example: anti-aliasing as per my linked question).
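To give a flavour of that, here is a hedged sketch of an anti-aliased variant, assuming your target profile supports the fwidth derivative intrinsic (see the linked question for the approach it is based on):
float4 PixelShaderFunction(float2 texCoord : TEXCOORD0) : COLOR0
{
    float distSq = dot(texCoord, texCoord);  // squared distance from the centre
    float edge = fwidth(distSq);             // roughly one pixel, in distSq units
    // Fade from black to white across a one-pixel band at the rim.
    float t = smoothstep(1 - edge, 1 + edge, distSq);
    return lerp(float4(0, 0, 0, 1), float4(1, 1, 1, 1), t);
}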

Related

Implementing correct texture mapping in triangles with glTexCoord4f() in a Doom-like engine

A while ago I asked a question similar to this one, but in that case I was trying to correct the perspective texture mapping of a trapezoid whose horizontal edges stay parallel; with glTexCoord4f() that is relatively simple. Now, however, I'm trying to fix the texture mapping of the floor and ceiling in my engine. The problem is that both depend on the shape of the map, so I need to use triangles to fill in the polygonal shapes the map may contain.
I tried a few variations of the method I used for correct texture mapping on the trapezoids. The attempt with the most acceptable results was when I calculated the length of each of the triangle's edges (in screen coordinates) and used each result as the 'q' in the corresponding glTexCoord4f(); that is how the code currently stands.
With that in mind, how can I fix this while using glTexCoord4f()?
Here is the code I used to correct the texture mapping of the walls (functional):
float u, v;
glEnable(GL_TEXTURE_2D);
glEnable(GL_DEPTH_TEST);
float sza = wyaa - wyab; // Size of the first vertical edge on the wall
float szb = wyba - wybb; // Size of the second vertical edge on the wall
// Does the wall have stretched textures?
if (!(*wall).streechTexture) {
    u = -texLength;
    v = -texHeight;
} else {
    u = -1;
    v = -1;
}
glBindTexture(GL_TEXTURE_2D, texture.at((*wall).texture));
glBegin(GL_TRIANGLE_STRIP);
    glTexCoord4f(0, 0, 0, sza);
    glVertex3f(wxa, wyaa + shearing, -tza * 0.001953);
    glTexCoord4f(u * szb, 0, 0, szb);
    glVertex3f(wxb, wyba + shearing, -tzb * 0.001953);
    glTexCoord4f(0, v * sza, 0, sza);
    glVertex3f(wxa, wyab + shearing, -tza * 0.001953);
    glTexCoord4f(u * szb, v * szb, 0, szb);
    glVertex3f(wxb, wybb + shearing, -tzb * 0.001953);
glEnd();
glDisable(GL_TEXTURE_2D);
And here is the current code that renders both the floor and the ceiling (which needs to be fixed):
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texture.at((*floor).texture));
float difA, difB, difC;
difA = vectorMag(Vertex(fxa, fyaa), Vertex(fxb, fyba)); // Size of the first edge on the triangle
difB = vectorMag(Vertex(fxb, fyba), Vertex(fxc, fyca)); // Size of the second edge on the triangle
difC = vectorMag(Vertex(fxc, fyca), Vertex(fxa, fyaa)); // Size of the third edge on the triangle
glBegin(GL_TRIANGLE_STRIP); // Rendering the floor
    glTexCoord4f(ua * difA, va * difA, 0, difA);
    glVertex3f(fxa, fyaa + shearing, -tza * 0.001953);
    glTexCoord4f(ub * difB, vb * difB, 0, difB);
    glVertex3f(fxb, fyba + shearing, -tzb * 0.001953);
    glTexCoord4f(uc * difC, vc * difC, 0, difC);
    glVertex3f(fxc, fyca + shearing, -tzc * 0.001953);
glEnd();
glBegin(GL_TRIANGLE_STRIP); // Rendering the ceiling
    glTexCoord4f(uc, vc, 0, 1);
    glVertex3f(fxc, fycb + shearing, -tzc * 0.001953);
    glTexCoord4f(ub, vb, 0, 1);
    glVertex3f(fxb, fybb + shearing, -tzb * 0.001953);
    glTexCoord4f(ua, va, 0, 1);
    glVertex3f(fxa, fyab + shearing, -tza * 0.001953);
glEnd();
glDisable(GL_TEXTURE_2D);
Here is a picture of how it looks visually (for comparison purposes, the floor shows the failed attempt at correct texture mapping, while the ceiling has affine texture mapping):
I understand that it would be easier if I just set a normal perspective view, but that would simply defeat the whole purpose of the engine.
This is an issue only for the floor and ceiling (unless your camera can tilt), so you can render your walls as you are doing now. But for floors and ceilings you have these basic options (as I mentioned in your old duplicate post):
Rasterize scan line on your own
So instead of rendering triangles (which old ray casters did not do), you render vertical lines pixel by pixel, using points instead of triangles. That will be much slower, of course, as GL is more suited to polygonal primitives. See the draw_scanline functions here:
Efficient floor/ceiling rendering in Raycaster
Use perspective view and pass z coordinate
It looks like you have added the z coordinate already, so now you just need to set a perspective view that matches your wall rendering; OpenGL will do the rest on its own. You should add something like gluPerspective for your GL_PROJECTION matrix, but just for your floors/ceilings ...
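A minimal sketch of that idea; the field-of-view, near and far values here are placeholders you would match to your wall-rendering setup:
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
gluPerspective(60.0, (double)width / (double)height, 0.1, 1000.0); // placeholder values
glMatrixMode(GL_MODELVIEW);
// ... render floors and ceilings here ...
glMatrixMode(GL_PROJECTION);
glPopMatrix(); // restore the projection used for the walls
glMatrixMode(GL_MODELVIEW);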
Pass z coordinate and override fragment shader
So you just write a fragment shader that computes the perspective-correct texture mapping in it and outputs the wanted texel colour, +/- some lighting. Here is an example of shader usage:
complete GL+GLSL+VAO/VBO C++ example
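As a rough sketch of what such a shader might look like (assuming the compatibility profile, so the (s, t, 0, q) coordinates set with glTexCoord4f arrive in gl_TexCoord[0]), the fragment stage simply performs the s/q, t/q division per fragment:
#version 120
uniform sampler2D tex;
void main()
{
    // Dividing by q per fragment gives the perspective-correct lookup.
    vec2 uv = gl_TexCoord[0].st / gl_TexCoord[0].q;
    gl_FragColor = texture2D(tex, uv);
}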
For more info see:
Ray Casting with different height size

Wrapping texture co-ordinates on a variable-size quad?

Here's my situation: I need to draw a rectangle on the screen for my game's GUI. I don't really care how big this rectangle is or might be; I want to be able to handle any situation. The way I'm doing it right now is to store a single VAO that contains only a very basic quad, then re-draw this quad using uniforms to modify its size and position on the screen each time.
The VAO contains 4 vec4 vertices:
0, 0, 0, 0;
1, 0, 1, 0;
0, 1, 0, 1;
1, 1, 1, 1;
And then I draw it as a GL_TRIANGLE_STRIP. The XY of each vertex is its position, and the ZW is its texture co-ordinates*. I pass in the rect for the GUI element I'm currently drawing as a uniform vec4, which offsets the vertex positions in the vertex shader like so:
vertex.xy *= guiRect.zw;
vertex.xy += guiRect.xy;
And then I convert the vertex from screen pixel co-ordinates into OpenGL NDC co-ordinates:
gl_Position = vec4(((vertex.xy / screenSize) * 2) -1, 0, 1);
This changes the range from [0, screenWidth | screenHeight] to [-1, 1].
My problem comes in when I want to do texture wrapping. Simply passing vTexCoord = vertex.zw; is fine when I want to stretch a texture, but not for wrapping. Ideally, I want to modify the texture co-ordinates such that 1 pixel on the screen is equal to 1 texel in the gui texture. Texture co-ordinates going beyond [0, 1] is fine at this stage, and is in fact exactly what I'm looking for.
I plan to implement texture atlases for my GUI textures, but managing the offsets and bounds of the appropriate sub-texture will be handled in the fragment shader; as far as the vertex shader is concerned, our quad is using one solid texture with [0, 1] co-ordinates, and wrapping accordingly.
*Note: I'm aware that this particular vertex format isn't necessarily useful for this particular case; I could be using vec2 vertices instead. For the sake of convenience I'm using the same vertex format for all of my 2D rendering, and other objects (i.e. text) actually do need those ZW components. I might change this in the future.
TL;DR: Given the size of the screen, the size of a texture, and the location/size of a quad, how do you calculate texture co-ordinates in a vertex shader such that pixels and texels have a 1:1 correspondence, with wrapping?
That is really very easy math: you just need to relate the two spaces in some way. And you have already formulated a rule which allows you to do so: a window-space pixel is to map to a texel.
Let's assume we have both vec2 screenSize and vec2 texSize which are the unnormalized dimensions in pixels/texels.
I'm not 100% sure what exactly you want to achieve. There is something missing: you actually did not specify where the origin of the texture should lie. Should it always be at the bottom-left corner of the quad? Or should it be globally at the bottom-left corner of the viewport? I'll assume the latter here, but it should be easy to adjust this for the first case.
What we now need is a mapping between the [-1,1]^2 NDC in x and y to s and t. Let's first map it to [0,1]^2. If we have that, we can simply multiply the coords by screenSize/texSize to get the desired effect. So in the end, you get
vec2 texcoords = ((gl_Position.xy * 0.5) + 0.5) * screenSize/texSize;
You have of course already calculated ((gl_Position.xy * 0.5) + 0.5) * screenSize implicitly, so this can be simplified to:
vec2 texcoords = vertex.xy / texSize;
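Putting the pieces together, a sketch of the complete vertex shader might look like this (the attribute/uniform names are assumptions based on the snippets above):
#version 120
attribute vec4 vertex;    // xy = quad corner in [0, 1], zw = base texcoords
uniform vec4 guiRect;     // xy = position, zw = size, in screen pixels
uniform vec2 screenSize;  // viewport size in pixels
uniform vec2 texSize;     // texture size in texels
varying vec2 vTexCoord;
void main()
{
    vec2 pixelPos = vertex.xy * guiRect.zw + guiRect.xy;  // pixel space
    gl_Position = vec4((pixelPos / screenSize) * 2.0 - 1.0, 0.0, 1.0);
    vTexCoord = pixelPos / texSize; // one screen pixel per texel; wraps beyond [0, 1]
}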

fwidth(uv) giving strange results in glsl

I checked the result of the filter-width GLSL function by coloring it in red on a plane around the camera.
The result is a bizarre pattern. I thought it would be a circular gradient on the plane, extending around the camera as a function of distance, since pixels further away should uniformly cover a larger span of UV coordinates between neighbouring pixels.
Why isn't fwidth(UV) a simple gradient as a function of distance from the camera? I don't understand how it would work properly if it isn't, because I want to anti-alias pixels as a function of the span of UV coordinates between them.
float width = fwidth(i.uv)*.2;
return float4(width,0,0,1)*(2*i.color);
UVs that are close = black, and far = red.
Result:
The above pattern from fwidth is axis-aligned and has one axis of symmetry. It couldn't anti-alias a two-axis checkerboard, an n-axis texture of Perlin noise, or a radial checkerboard:
float2 xy0 = float2(i.uv.x, i.uv.z) + float2(-0.5, -0.5);
float c0 = length(xy0);                       // sqrt of xx+yy; polar-coordinate radius
float r0 = atan2(i.uv.x - .5, i.uv.z - .5);   // polar-coordinate angle
float ww = round(sin(c0 * freq) * sin(r0 * 50) * .5 + .5);
Axis independent aliasing pattern:
The mipmapping and filtering parameters are determined by the partial derivatives of the texture coordinates in screen space, not by the distance (in fact, as soon as the fragment stage kicks in, there is no such thing as distance any more).
I suggest you replace the fwidth visualisation with a procedurally generated checkerboard (i.e. (mod(uv.s * k, 1) > 0.5) * (mod(uv.t * k, 1) < 0.5), where k is a scaling parameter); you'll see that the "density" of the checkerboard (and the aliasing artifacts) is highest where you've got the most red in your picture.
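A small sketch of that visualisation as a fragment shader (the varying and uniform names here are assumptions):
#version 120
varying vec2 uv;
uniform float k; // checker frequency
void main()
{
    // Two square waves, one per axis, combined into a checkerboard.
    float cx = step(0.5, mod(uv.s * k, 1.0));
    float cy = step(0.5, mod(uv.t * k, 1.0));
    float checker = abs(cx - cy); // XOR of the two waves
    gl_FragColor = vec4(vec3(checker), 1.0);
}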

OpenGL pixel perfect 2D drawing

I'm working on a 2D engine. It already works quite well, but I keep getting pixel errors.
For example, my window is 960x540 pixels and I draw a line from (0, 0) to (959, 0). I would expect every pixel on scan-line 0 to be set to a colour, but no: the right-most pixel is not drawn. The same problem occurs when I draw vertically to pixel 539; I really need to draw to (960, 0) or (0, 540) to have it drawn.
As I was born in the pixel era, I am convinced that this is not the correct result. When my screen was 320x200 pixels big, I could draw from 0 to 319 and from 0 to 199, and my screen would be full. Now I end up with a screen whose right/bottom pixel is not drawn.
This can be due to different things:
where I expect the OpenGL line primitive to be drawn from one pixel to another pixel inclusively, is that last pixel actually exclusive? Is that it?
my projection matrix is incorrect?
am I under a false assumption, and a backbuffer of 960x540 actually has one more pixel in each direction?
Something else?
Can someone please help me? I have been looking into this problem for a long time now, and every time I thought it was okay, I saw after a while that it actually wasn't.
Here is some of my code; I tried to strip it down as much as possible. When I call my line function, 0.375 is added to each coordinate to make it correct on both ATI and NVIDIA adapters.
int width = resX();
int height = resY();
for (int i = 0; i < height; i += 2)
    rm->line(0, i, width - 1, i, vec4f(1, 0, 0, 1));
for (int i = 1; i < height; i += 2)
    rm->line(0, i, width - 1, i, vec4f(0, 1, 0, 1));
// when I do this, one pixel to the right remains undrawn

void rendermachine::line(int x1, int y1, int x2, int y2, const vec4f &color)
{
    // ... some code to decide which std::vector the coordinates should be pushed into
    // m_z is a z-coordinate; I use z-buffering to preserve correct drawing order
    // vec2f(0, 0) is a texture coordinate; the line is drawn without texturing
    target->push_back(vertex(vec3f((float)x1 + 0.375f, (float)y1 + 0.375f, m_z), color, vec2f(0, 0)));
    target->push_back(vertex(vec3f((float)x2 + 0.375f, (float)y2 + 0.375f, m_z), color, vec2f(0, 0)));
}

void rendermachine::update(...)
{
    // ... the render target object is queried for width and height; in my test it is just
    // the back buffer, so the window client resolution is returned
    mat4f mP;
    mP.setOrthographic(0, (float)width, (float)height, 0, 0, 8000000);
    // ... all vertices are copied to video memory
    // ... drawing
    if (there are lines to draw)
        glDrawArrays(GL_LINES, (int)offset, (int)lines.size());
    // ...
}
// And the (very simple) shader to draw these lines
// Vertex shader
#version 120
attribute vec3 aVertexPosition;
attribute vec4 aVertexColor;
uniform mat4 mP;
varying vec4 vColor;
void main(void) {
    gl_Position = mP * vec4(aVertexPosition, 1.0);
    vColor = aVertexColor;
}

// Fragment shader
#version 120
#ifdef GL_ES
precision highp float;
#endif
varying vec4 vColor;
void main(void) {
    gl_FragColor = vColor;
}
In OpenGL, lines are rasterized using the "Diamond Exit" rule. This is almost the same as saying that the end coordinate is exclusive, but not quite...
This is what the OpenGL spec has to say:
http://www.opengl.org/documentation/specs/version1.1/glspec1.1/node47.html
Also have a look at the OpenGL FAQ, http://www.opengl.org/archives/resources/faq/technical/rasterization.htm, item "14.090 How do I obtain exact pixelization of lines?". It says "The OpenGL specification allows for a wide range of line rendering hardware, so exact pixelization may not be possible at all."
Many will argue that you should not use lines in OpenGL at all. Their behaviour is based on how ancient SGI hardware worked, not on what makes sense. (And lines with widths >1 are nearly impossible to use in a way that looks good!)
Note that OpenGL's coordinate space has no notion of integers; everything is a float, and the "centre" of an OpenGL pixel is really at (0.5, 0.5) rather than its top-left corner. Therefore, if you want a 1px-wide line from (0, 0) to (10, 10) inclusive, you really have to draw a line from (0.5, 0.5) to (10.5, 10.5).
This is especially apparent if you turn on anti-aliasing: if you try to draw from (50, 0) to (50, 100), you may see a blurry 2px-wide line, because the line falls in between two pixels.
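Following that advice, a small sketch (assuming an orthographic projection that maps one unit to one pixel, as the question's setOrthographic call does):
// Cover pixels (0,0) .. (10,10) inclusive by addressing pixel centres.
glBegin(GL_LINES);
glVertex2f(0.5f, 0.5f);
glVertex2f(10.5f, 10.5f);
glEnd();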

Gradient "miter" in OpenGL shows seams at the join

I am doing some really basic experiments around some 2D work in GL. I'm trying to draw a "picture frame" around a rectangular area. I'd like the frame to have a consistent gradient all the way around, so I'm constructing it from geometry that looks like four quads, one on each side of the frame, tapered to make trapezoids that effectively have miter joins.
The vert coords are the same on the "inner" and "outer" rectangles, and the colors are the same for all inner and all outer as well, so I'd expect to see perfect blending at the edges.
But notice in the image below how there appears to be a "seam" in the corner of the join that's lighter than it should be.
I feel like I'm missing something conceptually in the math that explains this. Is this artifact somehow a result of the gradient slope? If I change all the colors to opaque blue (say), I get a perfect solid blue frame as expected.
Update: code added below. Sorry, it's kind of verbose. I'm using fans of two triangles for the trapezoids instead of quads.
Thanks!
glClearColor(1.0, 1.0, 1.0, 1.0);
glClear(GL_COLOR_BUFFER_BIT);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
// Prep the color array. This is the same for all trapezoids.
// 4 verts * 4 components/color = 16 values.
GLfloat colors[16] = {
    0.0, 0.0, 1.0, 1.0,   // first two vertices: blue
    0.0, 0.0, 1.0, 1.0,
    1.0, 1.0, 1.0, 1.0,   // last two vertices: white
    1.0, 1.0, 1.0, 1.0,
};
// Draw the trapezoidal frame areas. Each one is a fan of two triangles.
// Fan of 2 triangles = 4 verts = 8 values
GLfloat vertices[8];
float insetOffset = 100;
float frameMaxDimension = 1000;
// Bottom
vertices[0] = 0;
vertices[1] = 0;
vertices[2] = frameMaxDimension;
vertices[3] = 0;
vertices[4] = frameMaxDimension - insetOffset;
vertices[5] = 0 + insetOffset;
vertices[6] = 0 + insetOffset;
vertices[7] = 0 + insetOffset;
glVertexPointer(2, GL_FLOAT, 0, vertices);
glColorPointer(4, GL_FLOAT, 0, colors);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
// Left
vertices[0] = 0;
vertices[1] = frameMaxDimension;
vertices[2] = 0;
vertices[3] = 0;
vertices[4] = 0 + insetOffset;
vertices[5] = 0 + insetOffset;
vertices[6] = 0 + insetOffset;
vertices[7] = frameMaxDimension - insetOffset;
glVertexPointer(2, GL_FLOAT, 0, vertices);
glColorPointer(4, GL_FLOAT, 0, colors);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
/* top & right would be as expected... */
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
As #Newbie posted in the comments,
#quixoto: open your image in Paint program, click with fill tool somewhere in the seam, and you see it makes 90 degree angle line there... means theres only 1 color, no brighter anywhere in the "seam". its just an illusion.
True. While I'm not familiar with this part of the math under OpenGL, I believe this is the implicit result of how the colors are interpolated between the triangle vertices: linear (barycentric) interpolation across each triangle.
So what to do to solve that? One possibility is to use a texture and just draw a textured quad (or several textured quads).
However, it should be easy to generate such a border in a fragment shader.
A nice solution using a GLSL shader...
Assume you're drawing a rectangle with the bottom-left corner having texture coords equal to (0,0), and the top-right corner with (1,1).
Then generating the "miter" procedurally in a fragment shader would look like this, if I'm correct:
varying vec2 coord;
uniform vec2 insetWidth; // width of the border in %, max would be 0.5
void main() {
    vec3 borderColor = vec3(0, 0, 1);
    vec3 backgroundColor = vec3(1, 1, 1);
    // x and y inset, 0..1; 1 means border, 0 means centre
    vec2 insets = max(-coord + insetWidth, vec2(0, 0)) / insetWidth;
If I'm correct so far, then for every pixel the value of insets.x lies in the range [0..1], determining how deep a given point is into the border horizontally, and insets.y has the corresponding value for vertical depth. The left vertical bar has insets.y == 0, the bottom horizontal bar has insets.x == 0, and the lower-left corner has the pair (insets.x, insets.y) covering the whole 2D range from (0,0) to (1,1). See the pic for clarity:
Now we want a transformation which for a given (x,y) pair will give us ONE value [0..1] determining how to mix background and foreground color. 1 means 100% border, 0 means 0% border. And this can be done in several ways!
The function should obey the requirements:
0 if x==0 and y==0
1 if either x==1 or y==1
smooth values in between.
Assume such function:
    float bias = max(insets.x, insets.y);
It satisfies those requirements. Actually, I'm pretty sure that this function would give you the same "sharp" edge as you have above. Try to calculate it on paper for a selection of coordinates inside that bottom-left rectangle.
If we want to have a smooth, round miter there, we just need another function here. I think that something like this would be sufficient:
    float bias = min(length(insets), 1.0);
The length() function here is just sqrt(insets.x*insets.x + insets.y*insets.y). What's important: This translates to: "the farther away (in terms of Euclidean distance) we are from the border, the more visible the border should be", and the min() is just to make the result not greater than 1 (= 100%).
Note that our original function adheres to exactly the same definition - but the distance is calculated according to the Chessboard (Chebyshev) metric, not the Euclidean metric.
This implies that using, for example, Manhattan metric instead, you'd have a third possible miter shape! It would be defined like this:
    float bias = min(insets.x + insets.y, 1.0);
I predict that this one would also have a visible "diagonal line", but the diagonal would be in the other direction ("\").
OK, so for the rest of the code: once we have the bias in [0..1], we just need to mix the background and foreground colors:
    vec3 finalColor = mix(backgroundColor, borderColor, bias); // bias == 1 means 100% border
    gl_FragColor = vec4(finalColor, 1); // return the calculated RGB, and set alpha to 1
}
And that's it! Using GLSL with OpenGL makes life simpler. Hope that helps!
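For reference, here are the pieces assembled into a single fragment shader, as a sketch: it handles the bottom-left borders exactly as derived above, and you can swap in whichever bias metric you prefer:
varying vec2 coord;
uniform vec2 insetWidth; // border width as a fraction of the quad, max 0.5
void main() {
    vec3 borderColor = vec3(0, 0, 1);
    vec3 backgroundColor = vec3(1, 1, 1);
    vec2 insets = max(-coord + insetWidth, vec2(0, 0)) / insetWidth;
    // float bias = max(insets.x, insets.y);        // Chebyshev: sharp miter
    // float bias = min(insets.x + insets.y, 1.0);  // Manhattan: "\" diagonal
    float bias = min(length(insets), 1.0);          // Euclidean: rounded miter
    gl_FragColor = vec4(mix(backgroundColor, borderColor, bias), 1.0);
}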
I think that what you're seeing is a Mach band. Your visual system is very sensitive to changes in the 1st derivative of brightness. To get rid of this effect, you need to blur your intensities. If you plot intensity along a scanline which passes through this region, you'll see that there are two lines which meet at a sharp corner. To keep your visual system from highlighting this area, you'll need to round this join over. You can do this with either a post processing blur or by adding some more small triangles in the corner which ease the transition.
I have run into this in the past, and it's very sensitive to geometry. For example, if you draw the triangles separately, in separate operations, instead of as a triangle fan, the problem is less severe (or, at least, it was in my case, which was similar but slightly different).
One thing I also tried was to draw the triangles separately, slightly overlapping one another, with the right composition mode (or OpenGL blending) so you don't get the effect. It worked, but I didn't end up using it because it was only a tiny part of the final product and not worth the trouble.
I'm sorry that I have no idea what the root cause of this effect is, however :(