Before I start my question, a little bit of background: I started learning OpenGL not so long ago, and I have learned most of what I know about it here. I have only really gotten past two tutorials, and yes, I know I will eventually have to learn about matrices, but for now, nothing fancy. So let's get on with it.
Okay, so, I simplified my program just a bit, but no worries, it still reproduces the same problem. For this example, we are making a purple triangle. I do the usual, initializing GLFW and GLEW, and create a window with the following hints:
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
glfwWindowHint(GLFW_SAMPLES, 8);
And then I create my window:
GLFWwindow* Window = glfwCreateWindow(640, 480, "Foo", NULL, NULL);
glfwMakeContextCurrent(Window);
glfwSwapInterval(0);
These are my vertices:
float Vertices[] = {
     0.0f,  0.5f, 1.0f,
     0.5f, -0.5f, 1.0f,
    -0.5f, -0.5f, 1.0f
};
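(The buffer setup isn't shown here; for context, a minimal sketch of how these vertices would typically be uploaded and wired to the vp attribute in a core-profile context, assuming vp ends up at attribute location 0, e.g. via glBindAttribLocation before linking:)
GLuint vao, vbo;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(Vertices), Vertices, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);                                 // assumes "vp" lands at location 0
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0); // 3 floats per vertex, tightly packed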
My shaders:
const char* vertex_shader =
"#version 330\n"
"in vec3 vp;"
"void main () {"
" gl_Position = vec4 (vp, 1.0);"
"}";
const char* fragment_shader =
"#version 330\n"
"out vec4 frag_colour;"
"void main () {"
" frag_colour = vec4 (0.5, 0.0, 0.5, 1.0);"
"}";
All is good, I compile the whole program, and voila! Purple triangle!
The yellow counter on the top left is FRAPS, by the way.
So, anyway, my brain gets this awesome idea (not really): what if I do this: vec4(vp, vp.z) in the vertex shader? Then I could get some sort of depth just by changing the z values in my buffer, I thought. Note that I wasn't thinking of replacing a perspective matrix; it was just a sort of experiment. Please don't hate me.
And it worked: by changing the values, I got something that looked like depth, as in it looked like the vertex was getting farther into the distance. Take a look; I changed the top vertex's z from 1.0 to 6.0:
Now here's the problem: I change the value to 999999999 (9 nines), and I get this:
Seems to work. Little difference from z = 6 though. Change it to 999999999999999999999999 (24 nines)? No difference. Take a look for yourself:
So this is weird: a big difference in numbers, yet little difference visually. Accuracy issues, maybe? Multiply the 24 nines by 349 and I get the same result. The kicker: multiply the 24 nines by 350 and the triangle disappears. This is a surprise to me because I thought the change would be visible and gradual. It clearly wasn't. However, changing w manually in the vertex shader, instead of using vp.z, does seem to give a gradual result rather than a sudden disappearance. I hope someone can shed light on this. If you got this far, you're one awesome person for reading through all my crap; for that, I thank you.
Your model can be seen as a simple form of a pinhole camera where the vanishing point for the depth direction is the window center. So, lines that are parallel to the z-axis meet in the center if they are extended. The window center represents the point of infinite depth.
Changing a vertex's z (or w) component from 1 to 6 is a very large change in depth (the vertex is 6 times farther away from the camera than before). That's why the resulting vertex is closer to the screen center than before. If you double the z component again, it will move a bit closer to the screen center (the distance will be halved). But it is obviously already very close to the center, so this change is hardly recognizable. The same applies to the 999999... depth value.
You can observe this property on most natural images, especially with roads:
[Image source: http://www.benetemps.com/road-warriors.htm]
If you walk along the road for, let's say, 5 meters, you'll end up somewhere near the bottom of the image. If you walk five more meters, you move on toward the image center. After another five meters you're even closer. But you can see that the distance covered on screen gets shorter and shorter the farther away you are.
Nico gave you a great visual explanation of what happens. The same thing can also be explained with some simple math, using the definition of homogeneous coordinates.
Your input coordinates have the form:
(x, y, z)
By using vec4(vp, vp.z) in the vertex shader, you map these coordinates to:
(x, y, z, z)
After the division by w that happens when converting from clip coordinates to normalized device coordinates, this is mapped to:
(x / z, y / z, 1.0f)
As long as z has the value 1.0f, this is obviously still the same as (x, y, z), which explains why the second and third vertices don't change in your experiment.
Now, applying this to your first vertex as you vary the z value, it gets mapped as:
(0.0f, 0.5f, z) --> (0.0f, 0.5f / z, 1.0f)
As you approach infinity with the z value, the y coordinate converges towards 0.5f / infinity, which is 0.0f. Since the center of the screen is at (0.0f, 0.0f), the mapped vertex converges towards the center of the screen.
Also, the vertex moves less and less as the z value increases. Picking a few values:
z = 1.0f --> y = 0.5f
z = 10.0f --> y = 0.05f
z = 100.0f --> y = 0.005f
z = 1000.0f --> y = 0.0005f
For example, when you change z from 100.0f to 1000.0f, y changes by 0.0045, which is only a little more than 0.2% of your window height (NDC covers a range of 2, from -1 to 1). With a window height of 500 pixels, that would be just about 1 pixel.
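(If you want to watch this convergence numerically, here is a quick plain-C++ sketch, nothing OpenGL-specific, that just evaluates the mapping derived above:)
#include <cstdio>

int main() {
    // y_ndc = y_clip / w, with y_clip = 0.5f and w = z, as derived above.
    for (float z = 1.0f; z <= 1e9f; z *= 10.0f)
        std::printf("z = %-10g ->  y = %g\n", z, 0.5f / z);
    return 0;
}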
Why the triangle disappears completely at a certain value is somewhat more mysterious. I suspect that it must be some kind of overflow/rounding issue during clipping.
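(A rough plausibility check on that suspicion, assuming 32-bit floats: the coordinate values themselves are still representable, but the product of two values that large is not, so any internal computation that multiplies coordinates of that magnitude together, squares, edge functions, and the like, would blow up to infinity. A sketch:)
#include <cfloat>
#include <cstdio>

int main() {
    float z = 999999999999999999999999.0f;        // 24 nines, roughly 1e24
    std::printf("z         = %g\n", z);
    std::printf("z * 350   = %g\n", z * 350.0f);  // roughly 3.5e26, still finite
    std::printf("FLT_MAX   = %g\n", FLT_MAX);     // roughly 3.4e38
    std::printf("(z*350)^2 = %g\n", (z * 350.0f) * (z * 350.0f)); // overflows to inf
    return 0;
}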
Introduction
Let's say I have the following vertices:
const VERTEX World::vertices[ 4 ] = {
{ -960.0f, -600.0f, 0.0f, 0.0f, 0.0f, -1.0f, 0.0f, 0.0f }, // side 1 screen coordinates centred
{ 960.0f, -600.0f, 0.0f, 0.0f, 0.0f, -1.0f, 1.0f, 0.0f },
{ -960.0f, 600.0f, 0.0f, 0.0f, 0.0f, -1.0f, 0.0f, 1.0f },
{ 960.0f, 600.0f, 0.0f, 0.0f, 0.0f, -1.0f, 1.0f, 1.0f }
};
You may have guessed that 960 * 2 is 1920, which is the width of my screen, and the same goes for 600 * 2 being 1200, the height.
These vertices represent a rectangle that will fill up my ENTIRE screen where the origin is in the centre of my screen.
Issue
So up until now, I have been using an orthographic view without a perspective projection:
matView = XMMatrixOrthographicLH( Window::width, Window::height, -1.0, 1.0 );
Any matrix that was being sent to the screen was multiplied by matView, and it seemed to work great. More specifically, my image, using the above vertices array, fit snugly in my screen and was 1:1 pixel-for-pixel with its original form.
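(For reference, XMMatrixOrthographicLH(w, h, zn, zf) scales x by 2/w and y by 2/h, which is exactly why the ±960/±600 vertices land on the edges of the [-1, 1] clip range; a tiny sketch of the same mapping by hand:)
// The orthographic x/y mapping, written out by hand:
float w = 1920.0f, h = 1200.0f;
float x_ndc = 960.0f * 2.0f / w;  // = 1.0, the right edge of the viewport
float y_ndc = 600.0f * 2.0f / h;  // = 1.0, the top edge of the viewport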
Unfortunately, I need 3D now... and I just realised I'm going to need some projection... so I prepared this little puppy:
XMVECTOR vecCamPosition = XMVectorSet( 0.0f, 0.0f, -1.0f, 0 ); // I did try setting z to -100.0f but that didn't work well as I can't scale it back any more... and it's not accurate to the 1:1 pixel I want
XMVECTOR vecCamLookAt = XMVectorSet( 0.0f, 0.0f, 0.0f, 0.0f );
XMVECTOR vecCamUp = XMVectorSet( 0.0f, 1.0f, 0.0f, 0.0f );
matView = XMMatrixLookAtLH( vecCamPosition, vecCamLookAt, vecCamUp );
matProjection = XMMatrixPerspectiveFovLH(
XMConvertToRadians( 45 ), // the field of view
Window::width / Window::height, // aspect ratio
1.0f, // the near view-plane
100.0f ); // the far view-plane
You may already know what the problem is... but if not: I have just set my field of view to 45 degrees. This will make a nice perspective and do 3D stuff great, but my vertices array is no longer going to cut the mustard, because with this fov and screen aspect the vertices are far too huge for the current view I am looking at (see image).
I was thinking that I need to apply some scaling to the output matrix, to bring the huge coordinates back down to the size my fov and screen aspect are now asking for.
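(Roughly the kind of thing I mean, as a hypothetical, untested sketch using DirectXMath; the 2/width and 2/height factors are placeholders rather than worked-out values:)
// Hypothetical: shrink the pixel-sized coordinates before the view/projection step.
XMMATRIX matScale = XMMatrixScaling( 2.0f / Window::width, 2.0f / Window::height, 1.0f );
world->constantBuffer.Final = flipY * matScale * worldTranslation * matView * matProjection;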
What must I do to use the vertices array as it is (1:1 pixel ratio to the original image size) while still allowing 3D stuff to happen, like having a fly fly around the screen and rotate and go closer and further into the frustum, etc.?
Goal
I'm trying to avoid changing every single object's vertex array to a roughly scaled prediction of what the original image would look like in world space.
Some extra info
I just wanted to clarify what kind of matrix operations I am currently doing to the world and then how I render using those changes... so this is me doing some translations on my big old background image:
// flipY just turns the textures back around
// worldTranslation is actually the position for the background, so always 0,0,0... as you can see, the 0.5 was there to make sure I ordered things that were drawn correctly when using the orthographic view
XMMATRIX worldTranslation = XMMatrixTranslation( 0.0f, 0.0f, 0.5f );
world->constantBuffer.Final = flipY * worldTranslation * matView;
// My thought is that somehow I need to add a scaling to this matrix...
// but if I add scaling here... it's going to mean every other game object
// (players, enemies, pickups) is going to need to be scaled... and really I just
// want to have to scale the whole lot once.. right?
And finally, this is where it is drawn to the screen:
// Background
d3dDeviceContext->PSSetShaderResources( 0, 1, world->textures[ 0 ].GetAddressOf( ) ); // Set up texture of background
d3dDeviceContext->IASetVertexBuffers( 0, 1, world->vertexbuffer.GetAddressOf( ), &stride, &offset ); // Set up vertex buffer (do I need the scaling here?)
d3dDeviceContext->IASetPrimitiveTopology( D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST ); // How the vertices will be drawn
d3dDeviceContext->IASetIndexBuffer( world->indexbuffer.Get( ), DXGI_FORMAT_R16_UINT, 0 ); // Set up index buffer
d3dDeviceContext->UpdateSubresource( constantbuffer.Get( ), 0, 0, &world->constantBuffer, 0, 0 ); // set the new values for the constant buffer
d3dDeviceContext->OMSetBlendState( blendstate.Get( ), 0, 0xffffffff ); // DONT FORGET IF YOU DISABLE THIS AND YOU WANT COLOUR, * BY Color.a!!!
d3dDeviceContext->DrawIndexed( ARRAYSIZE( world->indices ), 0, 0 ); // draw
And then here is what I have done to apply my matProjection, which has supersized all my vertices:
world->constantBuffer.Final = flipY * worldTranslation * matView * matProjection; // My new lovely projection and view that make everything hugeeeee... I literally see like 1 pixel of the background brah!
Please feel free to take a copy of my game and run it as is (a Windows 8 application, Visual Studio 2013 Express project) in the hope that you can help me out with putting this all into 3D: https://github.com/jimmyt1988/FlyGame/tree/LeanerFramework/Game
It's me again. Let me try to clear a few things up.
1
Here is a little screenshot from an editor of mine:
I have edited in little black boxes to illustrate something. The axis circles you see around the objects are all exactly the same size in world space. However, they are rendered through a perspective projection. As you can see, the one on the far left appears something like twice as large as the one in the center. This is due purely to the nature of a projection like that. If this is unacceptable, you must use a non-perspective projection.
2
The only way to maintain a 1:1 ratio of screen space to UV space is to render the object at one pixel on screen per one pixel of the texture. There is nothing more to it than that. However, what you can do is change your texture filter options. Filter options are designed specifically for rendering at non-1:1 ratios. For example, this code:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
Tells OpenGL: "If you are told to sample a pixel, don't interpolate anything. Just take the nearest value on the texture and paste it on the screen."
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
This code, however, does something much better: it interpolates between pixels. I think this may be what you want, if you aren't already doing it.
These pictures (taken from http://www.opengl-tutorial.org/beginners-tutorials/tutorial-5-a-textured-cube/) show this:
Nearest:
Linear:
Here is what it comes down to. You request that:
What must I do to use the vertices array as it is (1:1 pixel ratio to the original image size) while still allowing 3D stuff to happen, like having a fly fly around the screen and rotate and go closer and further into the frustum, etc.?
What you are requesting is by definition not possible, so you have to look for alternative solutions. I hope that helps somewhat.
I am trying to learn some OpenGL basics by reading the OpenGL SuperBible.
I am at the beginning of the 4th chapter and I have a question about the transformations.
Firstly, the relevant link: http://www.songho.ca/opengl/gl_transform.html
If I understand this pipeline (so to speak) correctly, then if my code had something like this:
const float vertexPositions[] = {
0.75f, 0.75f, 0.0f, 1.0f,
0.75f, -0.75f, 0.0f, 1.0f,
-0.75f, -0.75f, 0.0f, 1.0f,
};
those coordinates are in so-called object-space coordinates, and I can specify each value as something in the [-1,1] range.
After applying the model-view matrix, the vertex coordinates can be any numbers; they are now in so-called eye coordinates.
After applying the projection matrix (say, a perspective projection), we are in clip space, and the numbers can still have any possible value.
Now here is the point I am wondering about. On that page it is said that each vertex's x, y, z coordinates are divided by the fourth value w, which is present because we are using a homogeneous coordinate system, and that after the division x, y, z are in the range [-1,1].
My question is, how can I be sure that after all those transformations the value of w will be such that dividing x, y, z by it gives something in the range [-1,1]?
… object-space coordinates, and I can specify each value as something in the [-1,1] range.
You're not limited to any particular range for object coordinates.
My question is, how can I be sure that after all those transformations the value of w will be such that dividing x, y, z by it gives something in the range [-1,1]?
The range [-1, 1] is what ends up in the viewport after the transformation. Everything outside that range is outside the viewport and hence clipped. There's nothing to ensure here: if things are in range, they are visible; if not, they fall outside the viewport.
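(To make that concrete, here is a small sketch of the y coordinate only, using the terms of a standard perspective matrix: the matrix sets w_clip = -z_eye, and its y scale is chosen so that points on the frustum boundary come out of the divide at exactly ±1. The projection matrix and the w division are designed together to put exactly the visible region into that range:)
#include <cmath>
#include <cstdio>

int main() {
    float fovy = 60.0f * 3.14159265f / 180.0f;
    float f = 1.0f / std::tan(fovy / 2.0f);        // the y-scale of a standard perspective matrix

    float z_eye = -10.0f;                          // some depth in front of the camera
    float y_eye = -z_eye * std::tan(fovy / 2.0f);  // a point exactly on the top frustum plane

    float y_clip = f * y_eye;                      // y after the projection matrix
    float w_clip = -z_eye;                         // a perspective matrix sets w_clip = -z_eye

    std::printf("y_ndc = %g\n", y_clip / w_clip);  // prints 1: the frustum edge maps to +1
    return 0;
}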
I am doing some really basic experiments with 2D work in GL. I'm trying to draw a "picture frame" around a rectangular area. I'd like the frame to have a consistent gradient all the way around, so I'm constructing it from geometry that looks like four quads, one on each side of the frame, tapered to make trapezoids that effectively have miter joins.
The vert coords are the same on the "inner" and "outer" rectangles, and the colors are the same for all inner and all outer as well, so I'd expect to see perfect blending at the edges.
But notice in the image below how there appears to be a "seam" in the corner of the join that's lighter than it should be.
I feel like I'm missing something conceptually in the math that explains this. Is this artifact somehow a result of the gradient slope? If I change all the colors to opaque blue (say), I get a perfect solid blue frame as expected.
Update: code added below. Sorry, it's kind of verbose. I'm using two-triangle fans for the trapezoids instead of quads.
Thanks!
glClearColor(1.0, 1.0, 1.0, 1.0);
glClear(GL_COLOR_BUFFER_BIT);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
// Prep the color array. This is the same for all trapezoids.
// 4 verts * 4 components/color = 16 values.
// The first two verts (the outer edge) are blue; the last two (the inner edge) are white.
GLfloat colors[16] = {
    0.0, 0.0, 1.0, 1.0,
    0.0, 0.0, 1.0, 1.0,
    1.0, 1.0, 1.0, 1.0,
    1.0, 1.0, 1.0, 1.0
};
// Draw the trapezoidal frame areas. Each one is two triangle fans.
// Fan of 2 triangles = 4 verts = 8 values
GLfloat vertices[8];
float insetOffset = 100;
float frameMaxDimension = 1000;
// Bottom
vertices[0] = 0;
vertices[1] = 0;
vertices[2] = frameMaxDimension;
vertices[3] = 0;
vertices[4] = frameMaxDimension - insetOffset;
vertices[5] = 0 + insetOffset;
vertices[6] = 0 + insetOffset;
vertices[7] = 0 + insetOffset;
glVertexPointer(2, GL_FLOAT , 0, vertices);
glColorPointer(4, GL_FLOAT, 0, colors);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
// Left
vertices[0] = 0;
vertices[1] = frameMaxDimension;
vertices[2] = 0;
vertices[3] = 0;
vertices[4] = 0 + insetOffset;
vertices[5] = 0 + insetOffset;
vertices[6] = 0 + insetOffset;
vertices[7] = frameMaxDimension - insetOffset;
glVertexPointer(2, GL_FLOAT , 0, vertices);
glColorPointer(4, GL_FLOAT, 0, colors);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
/* top & right would be as expected... */
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
As #Newbie posted in the comments,
#quixoto: open your image in a paint program, click with the fill tool somewhere in the seam, and you'll see it makes a 90-degree angle line there... meaning there's only one color, nothing brighter anywhere in the "seam". It's just an illusion.
True. While I'm not deeply familiar with this part of the math under OpenGL, I believe this is the implicit result of how colors are interpolated between the triangle vertices, which is linear (barycentric) interpolation across each triangle rather than bilinear interpolation.
So what to do to solve that? One possibility is to use a texture and just draw a textured quad (or several textured quads).
However, it should be easy to generate such a border in a fragment shader.
A nice solution using a GLSL shader...
Assume you're drawing a rectangle with the bottom-left corner having texture coords equal to (0,0), and the top-right corner with (1,1).
Then generating the "miter" procedurally in a fragment shader would look like this, if I'm correct:
varying vec2 coord;
uniform vec2 insetWidth; // width of the border in %, max would be 0.5
void main() {
vec3 borderColor = vec3(0,0,1);
vec3 backgroundColor = vec3(1,1,1);
// x and y inset, 0..1, 1 means border, 0 means centre
vec2 insets = max(-coord + insetWidth, vec2(0,0)) / insetWidth;
If I'm correct so far, then for every pixel, insets.x now has a value in the range [0..1] determining how deep a given point is into the border horizontally, and insets.y holds the similar value for the vertical depth.
The left vertical bar has insets.y == 0, the bottom horizontal bar has insets.x == 0, and the lower-left corner has the pair (insets.x, insets.y) covering the whole 2D range from (0,0) to (1,1). See the pic for clarity:
Now we want a transformation which for a given (x,y) pair will give us ONE value [0..1] determining how to mix background and foreground color. 1 means 100% border, 0 means 0% border. And this can be done in several ways!
The function should obey the requirements:
0 if x==0 and y==0
1 if either x==1 or y==1
smooth values in between.
Consider this function:
float bias = max(insets.x,insets.y);
It satisfies those requirements. Actually, I'm pretty sure this function would give you the same "sharp" edge you have above. Try calculating it on paper for a selection of coordinates inside that bottom-left rectangle.
If we want to have a smooth, round miter there, we just need another function here. I think that something like this would be sufficient:
float bias = min( length(insets) , 1 );
The length() function here is just sqrt(insets.x*insets.x + insets.y*insets.y). What's important: this translates to "the deeper we are into the border region (in terms of Euclidean distance from its inner edge), the more visible the border should be", and the min() just caps the result at 1 (= 100%).
Note that our original function adheres to exactly the same definition - but the distance is calculated according to the Chessboard (Chebyshev) metric, not the Euclidean metric.
This implies that by using, for example, the Manhattan metric instead, you'd get a third possible miter shape! It would be defined like this:
float bias = min(insets.x+insets.y, 1);
I predict that this one would also have a visible "diagonal line", but the diagonal would be in the other direction ("\").
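(If you want to compare the three corner shapes without writing a shader, here is a quick CPU-side C++ sketch that samples all three bias functions over the corner region and marks where each one crosses the mid-gradient contour:)
#include <algorithm>
#include <cmath>
#include <cstdio>

int main() {
    // Sample the lower-left corner region: insets.x and insets.y each in [0, 1].
    // Each cell prints the three metrics side by side (Chebyshev, Euclidean, Manhattan);
    // '#' marks bias >= 0.5 (the mid-gradient contour), '.' marks bias < 0.5.
    for (int row = 10; row >= 0; --row) {
        for (int col = 0; col <= 10; ++col) {
            float x = col / 10.0f;
            float y = row / 10.0f;
            float chebyshev = std::max(x, y);                           // sharp "/" corner
            float euclidean = std::min(std::sqrt(x * x + y * y), 1.0f); // rounded corner
            float manhattan = std::min(x + y, 1.0f);                    // "\" diagonal
            std::printf("%c%c%c ",
                        chebyshev >= 0.5f ? '#' : '.',
                        euclidean >= 0.5f ? '#' : '.',
                        manhattan >= 0.5f ? '#' : '.');
        }
        std::printf("\n");
    }
    return 0;
}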
OK, so for the rest of the code, when we have the bias [0..1], we just need to mix the background and foreground color:
vec3 finalColor = mix(backgroundColor, borderColor, bias); // mix(x, y, a) = x*(1-a) + y*a, so bias == 1 gives 100% border color
gl_FragColor = vec4(finalColor, 1); // return the calculated RGB, and set alpha to 1
}
And that's it! Using GLSL with OpenGL makes life simpler. Hope that helps!
I think that what you're seeing is a Mach band. Your visual system is very sensitive to changes in the 1st derivative of brightness. To get rid of this effect, you need to blur your intensities. If you plot intensity along a scanline which passes through this region, you'll see that there are two lines which meet at a sharp corner. To keep your visual system from highlighting this area, you'll need to round this join over. You can do this with either a post processing blur or by adding some more small triangles in the corner which ease the transition.
I have run into this in the past, and it's very sensitive to the geometry. For example, if you draw the triangles separately, in separate operations, instead of as a triangle fan, the problem is less severe (or at least it was in my case, which was similar but slightly different).
One thing I also tried is drawing the triangles separately, slightly overlapping one another, with the right composition mode (or OpenGL blending) so you don't get the effect. It worked, but I didn't end up using it because it was only a tiny part of the final product and not worth it.
I'm sorry that I have no idea what the root cause of this effect is, however. :(
I'm using glColor4f(1.0f, 1.0f, 1.0f, alpha_); to set the transparency of the primitives I'm drawing.
However, I'd like to be able to read back the current OpenGL alpha value. Is that possible?
e.g.
float current_alpha = glGetAlpha(); //???
glColor4f(1.0f, 1.0f, 1.0f, alpha_*current_alpha);
Either you store the last alpha value you sent using glColor4f, or you use:
float currentColor[4];
glGetFloatv(GL_CURRENT_COLOR,currentColor);
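(The alpha is then just the fourth component, which gives you the equivalent of the hypothetical glGetAlpha from the question:)
float current_alpha = currentColor[3]; // GL_CURRENT_COLOR is returned as RGBA
glColor4f(1.0f, 1.0f, 1.0f, alpha_ * current_alpha);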
Do you mean the alpha value of the fragment you're drawing on (which would explain why you want alpha_ * current_alpha)? If so, remember that reading a fragment back from the pipeline is expensive.
If you're rendering back to front, consider using the GL_SRC_ALPHA + GL_ONE_MINUS_SRC_ALPHA trick.
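(That is the standard back-to-front alpha-blending setup:)
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // source weighted by its alpha, destination by 1 - alpha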