I'm trying to draw a simple square with DX11, but the order of the indices of each triangle determines whether or not it shows up. I set the cull mode to none in the rasterizer state, but that doesn't seem to change anything.
If I specify the indices of the first triangle as 0, 1, 2 instead of 2, 1, 0, the triangle doesn't show up. So my question is: do I need to order the vertices of a triangle in a specific way regardless of the cull mode?
P.S. I'm drawing a triangle list and not a strip.
UINT indices[] = {2, 1, 0,
                  1, 3, 0};
MeshVertex vertices[] =
{
{ Vector3(1.0f, 1.0f, 0.0f)}, //Top Right
{ Vector3(-1.0f, -1.0f, 0.0f)}, //Bottom Left
{ Vector3(1.0f, -1.0f, 0.0f)}, //Bottom Right
{ Vector3(-1.0f, 1.0f, 0.0f)} //Top Left
};
mesh = new Mesh(vertices, indices, 4, 6, shader);
So my question is, do I need to order the vertices of a triangle in a specific way regardless of the cull mode?
**Define/draw your vertices in clockwise order, and don't change the default cull mode.**
To determine whether a triangle is displayed, you have to consider two factors:
The front face, also called the winding order. By default, Direct3D treats a triangle whose vertices are defined in clockwise order as a front face; all faces other than front faces are back faces.
The culling mode. By default, Direct3D culls the back faces (the faces whose vertices are defined in counter-clockwise order).
You can set the front face and the culling mode in the D3D11_RASTERIZER_DESC structure.
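For example, a minimal sketch of creating and binding a rasterizer state (device and context are assumed to be your existing ID3D11Device and ID3D11DeviceContext):
// device/context: your existing ID3D11Device / ID3D11DeviceContext (assumed).
// Treat clockwise triangles as front faces (the D3D default) and cull back faces.
D3D11_RASTERIZER_DESC rd = {};
rd.FillMode = D3D11_FILL_SOLID;
rd.CullMode = D3D11_CULL_BACK;        // use D3D11_CULL_NONE to disable culling entirely
rd.FrontCounterClockwise = FALSE;     // FALSE = clockwise is the front face (default)
rd.DepthClipEnable = TRUE;
ID3D11RasterizerState* rasterState = nullptr;
device->CreateRasterizerState(&rd, &rasterState);
context->RSSetState(rasterState);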
The reason you didn't see a difference when you changed the culling mode to NONE is that both triangles are wound in the same order, and D3D11_CULL_NONE means no faces are culled at all.
If triangle 1 is in clockwise order and triangle 2 is in counter-clockwise order, you will only see one triangle when the cull mode is D3D11_CULL_FRONT or D3D11_CULL_BACK, and both when it is D3D11_CULL_NONE.
Related
I’m trying to do an ortho projection onto a plane, which represents a map – think “floor plan”. I’m running into trouble because OpenGL 4 is new to me (I last used 1.1, and the world has changed) and because what I’m trying to do isn’t much like the common examples online. My problem is scaling and translating.
The data that describes the map is a series of lines whose endpoints are in what I’ll call “dungeon coordinate units”. When I render the image I want a fixed rule of “1 unit is 1 pixel”.
My coordinates are all in the first quadrant, with (0,0) representing the lower left of the map. I’d like (0,0) to show up in the lower left of the screen.
Now for the tricky bits. When I render the “floor” in the fragment shader, I’m being handed gl_FragCoord, which is ideal. It’s effectively a pixel location, which for my purposes means it is equivalent to a dungeon coordinate. I can look up all the information I passed to the shader (also in dungeon coordinates) and figure out how to paint (or discard) that pixel. It works, except… it puts (0,0) in the center of the screen, not the lower left.
Worse, there are some things, like lines (“walls”), that I render with skinny triangles in dungeon coordinates in a second pass. They don’t show up where I want them. (In fact I’m pretty sure that the triangles I’m using to tile the floor are also wrong and are only covering the screen by coincidence.)
I really, really need OpenGL to use a coordinate system that puts (0,0) at the lower left of the image and lets me specify triangle vertices in my units, which happen to map straight to pixels.
This seems like a simple case of scaling and translating. But I’m obviously applying the scale and translate incorrectly.
The vertex code is simple:
#version 430
layout (location = 0) in vec3 Position;
uniform mat4 gWorld;
out vec4 Color; //unused; the fragment shader calculates all colors
void main()
{
gl_Position = gWorld * vec4(Position, 1.0);
}
Building the 2 triangles for the map floor (a simple rectangle for now) seems simple:
Vector3f Vertices[4];
Vertices[0] = Vector3f(0.f, 0.f, 0.0f);
Vertices[1] = Vector3f(0.f, mapEdges.maxs.y, 0.0f);
Vertices[2] = Vector3f(mapEdges.maxs.x, 0.f, 0.0f);
Vertices[3] = Vector3f(mapEdges.maxs.x, mapEdges.maxs.y, 0.0f);
glGenBuffers(1, &VBO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(Vertices), Vertices, GL_STATIC_DRAW);
unsigned int Indices[] = { 0, 1, 2,
1, 2, 3 };
glGenBuffers(1, &IBO);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IBO);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(Indices), Indices, GL_STATIC_DRAW);
and I use an indexed draw for them.
The C++ code (using glm) sets up the world matrix:
glUseProgram(ShaderProgram); //this selects the shader
gWorldLocation = glGetUniformLocation(ShaderProgram, "gWorld");
assert(gWorldLocation != 0xFFFFFFFF);
...and when rendering…
//try to fix openGL’s desire to think my buffer is -1 to 1 across
float scale = 1/1024.f; //test map is about 1024 units across
glm::mat4 sm = glm::scale(
glm::mat4( 1.0f ),
glm::vec3( scale, scale, 1.0f )
);
glm::mat4 ts = glm::translate(
sm,
glm::vec3( -512.0f, -512.0f, 0.0f ) //shove left and down
);
glUniformMatrix4fv(gWorldLocation, 1, GL_TRUE, &ts[0][0]);
Since my test map is about 1024 units across, I’d have thought this would have shoved things into position. But no. The floor (which, remember, is using gl_FragCoord to decide where and what to draw) is painted from screen center and up and right, though it otherwise looks as I’d expect. The walls, which are painted by skinny triangles in dungeon coordinates, are nowhere to be seen, probably scaled off into the aether somewhere.
Basically I’m not convincing OpenGL that I want x=0 to be the left edge of the image, and my scaling is obviously completely wrong. Sadly I had one version that (incorrectly) drew some walls on the screen at one point, but I don’t have that code anymore. Still, it tells me that I’m not completely off in generating the walls, just in laying them down.
How do I get OpenGL to use my units?
You transpose the matrix when you set the matrix uniform (the GL_TRUE argument). Since the vector is multiplied by the matrix from the right in your shader program, this is wrong. See GLSL Programming/Vector and Matrix Operations.
Change
glUniformMatrix4fv(gWorldLocation, 1, GL_TRUE, &ts[0][0]);
to
glUniformMatrix4fv(gWorldLocation, 1, GL_FALSE, &ts[0][0]);
Instead of scaling and translating the vertices yourself, you can set up an orthographic projection matrix with glm::ortho:
glm::mat4 projection = glm::ortho(0.0f, 1024.0f, 0.0f, 1024.0f, -1.0f, 1.0f);
glUniformMatrix4fv(gWorldLocation, 1, GL_FALSE, glm::value_ptr(projection));
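If the window is not exactly 1024 units across, you can build the projection from the actual framebuffer size so that one dungeon unit always maps to one pixel. A minimal sketch, assuming fbWidth and fbHeight hold the framebuffer size in pixels obtained from your windowing code:
#include <glm/gtc/matrix_transform.hpp> // glm::ortho
#include <glm/gtc/type_ptr.hpp>         // glm::value_ptr

// fbWidth/fbHeight: framebuffer size in pixels (assumed, from your windowing code).
// Map x = 0..fbWidth and y = 0..fbHeight to the screen, origin at the lower left.
glm::mat4 projection = glm::ortho(0.0f, float(fbWidth),
                                  0.0f, float(fbHeight),
                                  -1.0f, 1.0f);
glUniformMatrix4fv(gWorldLocation, 1, GL_FALSE, glm::value_ptr(projection));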
I'm rendering a quad with an outline. I do this by rendering a quad of the required size in black, then rendering the "inner" quad smaller than the first. I also want this outlined quad to be transparent.
My current attempt renders like so:
However the problem I have is that because I'm essentially rendering two quads on top of each other, the black "main" quad blends through to the smaller "inner" quad. I don't want this to happen as I want the blending to only blend the background and the "inner" quad colors.
Other than rendering separate quads/triangles for the outline, is there a way (ideally not using shaders, as I'm using the fixed pipeline) to make the blending ignore/not render the "main" quad when blending the "inner" quad?
Relevant code:
Blend mode:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glEnable( GL_BLEND );
Rendering the box:
struct vertex
{
GLfloat x,y;
};
void draw_box()
{
glPushMatrix();
glTranslatef( -3.0f, 1.12f, 1.0f );
// Draw a huge black quad
const vertex vertices2[] =
{
{ -1.0f, -1.0f },
{ 0.0f, 0.0f },
{ -1.0f, 0.0f },
{ -1.0f, -1.0f },
{ 0.0f, 0.0f },
{ 0.0f, -1.0f },
};
glVertexPointer(2, GL_FLOAT, 0, vertices2);
glColor4f(0.0f, 0.0f, 0.0f, 0.5f);
glDrawArrays(GL_TRIANGLES, 0, 6);
// Draw a smaller grey quad
const GLfloat space = 0.04f;
const vertex vertices3[] =
{
{ -1.0f+space, -1.0f+space },
{ 0.0f-space, 0.0f-space },
{ -1.0f+space, 0.0f-space },
{ -1.0f+space, -1.0f+space },
{ 0.0f-space, 0.0f-space },
{ 0.0f-space, -1.0f+space },
};
glVertexPointer(2, GL_FLOAT, 0, vertices3);
glColor4f(0.5f, 0.5f, 0.5f, 0.5f);
glDrawArrays(GL_TRIANGLES, 0, 6);
glPopMatrix();
}
The best way to do this is probably to split up the border into four trapezoidal quads, but since you explicitly mentioned you didn't want to go down that path, there are a couple of other options you could pursue.
The next best method is probably to use the depth buffer: glEnable(GL_DEPTH_TEST). Draw the inside quad first (with blending enabled). Then draw the outer quad, but using coordinates such that its depth is greater than the first quad's (it is "farther away"). If depth testing is enabled when the border quad is drawn, it will fail wherever you already drew the inner quad, since those pixels have a smaller depth value. Note that with this method, if you wrote to the depth buffer when rendering the things behind these quads, you may run into issues with parts of the background being considered closer than the new quads. You can solve this for the inner quad by configuring the depth test to always pass: glDepthFunc(GL_ALWAYS). This won't help for the outer quad, however; in that case the next method might be helpful:
You can use the stencil buffer in a similar manner to the depth buffer. When rendering the inside quad, you set the stencil value (or one bit of it at least) to a specific value:
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 1, GLuint(-1));
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
Then when you draw the outer quad, configure the stencil test to succeed only if a fragment does not have that value:
glStencilFunc(GL_NOTEQUAL, 1, 1);
Essentially, this is the same thing the version above was doing with the depth buffer, but uses the stencil buffer instead. You may want to take a look at the documentation for glStencilFunc and glStencilOp.
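Putting it together, a minimal sketch of the stencil approach, assuming your context was created with a stencil buffer and reusing vertices2/vertices3 from draw_box() (note the inner quad is drawn first here, the reverse of the original order):
glClear(GL_STENCIL_BUFFER_BIT);
glEnable(GL_STENCIL_TEST);
// Inner quad: always pass, and write 1 into the stencil buffer where it is drawn.
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
glVertexPointer(2, GL_FLOAT, 0, vertices3);
glColor4f(0.5f, 0.5f, 0.5f, 0.5f);
glDrawArrays(GL_TRIANGLES, 0, 6);
// Outer quad: only draw where the stencil value is not 1, i.e. outside the inner quad.
glStencilFunc(GL_NOTEQUAL, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
glVertexPointer(2, GL_FLOAT, 0, vertices2);
glColor4f(0.0f, 0.0f, 0.0f, 0.5f);
glDrawArrays(GL_TRIANGLES, 0, 6);
glDisable(GL_STENCIL_TEST);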
There may be other ways to achieve this behavior, but all the other solutions I can think of involve using custom shaders, which you indicated is not acceptable. Personally, I'd just split up the border into 4 quads. Sure, it's 3 more quads than the other methods, but it doesn't require any extra OpenGL features or state changes.
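For completeness, a sketch of that four-quad split using the coordinates from draw_box() (immediate mode for brevity, with the glPushMatrix/glTranslatef wrapper omitted): the border is drawn as four thin rectangles around the inner quad, so no pixel is ever covered twice and each pixel blends with the background exactly once.
// Hypothetical helper: draw an axis-aligned rectangle from (x0, y0) to (x1, y1).
void draw_rect(GLfloat x0, GLfloat y0, GLfloat x1, GLfloat y1)
{
    glBegin(GL_QUADS);
    glVertex2f(x0, y0);
    glVertex2f(x1, y0);
    glVertex2f(x1, y1);
    glVertex2f(x0, y1);
    glEnd();
}

void draw_box_split()
{
    const GLfloat space = 0.04f;
    // Black border: bottom, top, left and right bars (no overlap with the inner quad).
    glColor4f(0.0f, 0.0f, 0.0f, 0.5f);
    draw_rect(-1.0f, -1.0f, 0.0f, -1.0f + space);
    draw_rect(-1.0f, -space, 0.0f, 0.0f);
    draw_rect(-1.0f, -1.0f + space, -1.0f + space, -space);
    draw_rect(-space, -1.0f + space, 0.0f, -space);
    // Grey inner quad.
    glColor4f(0.5f, 0.5f, 0.5f, 0.5f);
    draw_rect(-1.0f + space, -1.0f + space, -space, -space);
}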
I am trying to understand how to specify texture coordinates for a GL_QUAD_STRIP.
I have managed to get things working for one quad:
float vertices[] = { 0.0f, 0.0f, 1.0f, +1.0f, 0.0f, 0.0f, // bottom line
0.0f, 1.0f, 1.0f, +1.0f, 1.0f, 0.0f}; // top line
unsigned int indices[] = {2, 0, // x = 0
3, 1}; // x = +1
float textureCoordinates[] = { 1.0f, 0.0f,
0.0f, 0.0f,
1.0f, 1.0f,
0.0f, 1.0f};
...
glBindBuffer(GL_ARRAY_BUFFER, 0); // unbinds any buffer object previously bound
glTexCoordPointer(2, GL_FLOAT, 0, textureCoordinates);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibufferid);
glDrawElements(GL_QUAD_STRIP, 4, GL_UNSIGNED_INT, BUFFER_OFFSET(0));
And here is how the result looks (white rectangle with image, rest is drawn on to help explain):
However I do not understand the logic behind the choice of textureCoordinates[] :-(.
The first texture coordinate is (1,0); I would assume that this corresponds to lower right corner?
Also I would assume that when OpenGL reads the first index: 2, it uses this to look up the vertex: (0,1,1): upper left corner. Next it reads the first texture coordinate: (1,0).
But as mentioned above I would assume this to be the lower right corner of the texture !?
However the texture is shown unrotated so this can not be the case!?
Just like the vertices, the texture coordinates are also selected based on the indices used by glDrawElements(). So the first texture coordinate is not (1,0), but (1,1) because the first index is 2. Vertices and coordinates would be according to the following table, where i = index, v = vertex and t = texture coordinate. (I'll only take the x and y coordinates into consideration for the vertices, as the z coordinate doesn't really matter in this case.)
i   v       t
2   (0,1)   (1,1)
0   (0,0)   (1,0)
3   (1,1)   (0,1)
1   (1,0)   (0,0)
If we draw this on a piece of paper, we can see that the coordinates make more sense once the indices are taken into account. (I recommend that you do this! I had to, to understand what was going on.) Notice in the table how the y coordinates match perfectly between the vertex and texture coordinate for a given index. But the x coordinates don't match: when the vertex has x = 0, the texture coordinate has x = 1 and vice versa. I assume this would make the image appear mirrored around the y axis rather than rotated in any way. What does the original image look like? Is it mirrored compared to what we see in the image you posted, so that the building is on the left? If so, the texture coordinates would be the explanation. In that case, the x components of the texture coordinates should be flipped: texture coordinates 0 and 1 should switch places, and likewise 2 and 3.
In case you are curious, you could take a look at the OpenGL 2.1 specification on page 18, Figure 2.5(a), to see why the vertex indices were selected as they were. It would create a quad with vertices specified in a counterclockwise direction when projected on the screen. This is good because the initial value for glFrontFace() is GL_CCW, which means we see the front face of the polygons in the rendered image and the polygons would not have been culled if culling was enabled (see glCullFace()). (Culling is not enabled by default though, so it may or may not have mattered in your case.)
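For reference (not strictly needed here, since culling is disabled by default), this is how the winding convention and back-face culling could be set explicitly in fixed-function OpenGL:
glFrontFace(GL_CCW);      // counter-clockwise polygons are front faces (the default)
glEnable(GL_CULL_FACE);   // face culling is off by default; this turns it on
glCullFace(GL_BACK);      // cull back-facing polygons (the default)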
I hope this helped. Do comment if something is unclear!
I am looking for a technique in OpenGL that I can use to map colored points onto a surface.
Each point defines a display color and three coordinates (X, Y, Z).
In the main use case the surface on which to map the data is built from all the points' coordinates (a complex shape), but it can also be a standard shape such as a cone or a sphere.
Since there are gaps between the points (for example a one-millimetre step between two points along the X axis), the point data also needs to be interpolated across the surface.
I am thinking about building bitmaps from the points and then applying those bitmaps to my surfaces, but I am wondering if OpenGL has a feature that allows doing this in a "smart" way.
It sounds to me like what you are asking for is basic OpenGL behaviour.
If you draw a triangle:
glBegin(GL_TRIANGLES);
glColor3f(1.0f,0.0f,0.0f); // Red
glVertex3f( 0.0f, 1.0f, 0.0f); // Top vertex
glColor3f(0.0f,1.0f,0.0f); // Green
glVertex3f(-1.0f,-1.0f, 0.0f); // Bottom left vertex
glColor3f(0.0f,0.0f,1.0f); // Blue
glVertex3f( 1.0f,-1.0f, 0.0f); // Bottom right vertex
glEnd();
The result is a smoothly (if garishly) coloured solid triangle.
So your problem is to construct a series of polygons (possibly just triangles) which cover your surface and have the point set as vertices.
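A minimal sketch of that idea, assuming the measured points are stored in a regular rows × cols grid (the Point struct and grid array below are hypothetical): drawing one triangle strip per pair of rows lets OpenGL interpolate the colours across the gaps between points for you.
for (int r = 0; r + 1 < rows; ++r) {
    glBegin(GL_TRIANGLE_STRIP);
    for (int c = 0; c < cols; ++c) {
        const Point& a = grid[r][c];      // hypothetical Point: x, y, z, red, green, blue
        const Point& b = grid[r + 1][c];  // the point directly "above" it in the next row
        glColor3f(a.red, a.green, a.blue);
        glVertex3f(a.x, a.y, a.z);
        glColor3f(b.red, b.green, b.blue);
        glVertex3f(b.x, b.y, b.z);
    }
    glEnd();
}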
For a great intro to OpenGL, see NeHe's tutorials, including the above example.
I'm trying to put a texture on one surface of a cube (if facing the XY plane the texture would be facing you).
No texture is getting drawn, only the wireframe, and I'm wondering what I'm doing wrong. I think it's the vertex coordinates?
Here's some code:
struct paperVertex {
D3DXVECTOR3 pos;
DWORD color; // The vertex color
D3DXVECTOR2 texCoor;
paperVertex(D3DXVECTOR3 p, DWORD c, D3DXVECTOR2 t) {pos = p; color = c; texCoor = t;}
paperVertex() {pos = D3DXVECTOR3(0,0,0); color = 0; texCoor = D3DXVECTOR2(0,0);}
};
D3DCOLOR color1 = D3DCOLOR_XRGB(255, 255, 255);
D3DCOLOR color2 = D3DCOLOR_XRGB(200, 200, 200);
vertices[0] = paperVertex(D3DXVECTOR3(-1.0f, -1.0f, -1.0f), color1, D3DXVECTOR2(1,0)); // bottom left corner of tex
vertices[1] = paperVertex(D3DXVECTOR3(-1.0f, 1.0f, -1.0f), color1, D3DXVECTOR2(0,0)); // top left corner of tex
vertices[2] = paperVertex(D3DXVECTOR3( 1.0f, 1.0f, -1.0f), color1, D3DXVECTOR2(0,1)); // top right corner of tex
vertices[3] = paperVertex(D3DXVECTOR3(1.0f, -1.0f, -1.0f), color1, D3DXVECTOR2(1,1)); // bottom right corner of tex
vertices[4] = paperVertex(D3DXVECTOR3(-1.0f, -1.0f, 1.0f), color1, D3DXVECTOR2(0,0));
vertices[5] = paperVertex(D3DXVECTOR3(-1.0f, 1.0f, 1.0f), color2, D3DXVECTOR2(0,0));
vertices[6] = paperVertex(D3DXVECTOR3(1.0f, 1.0f, 1.0f), color2, D3DXVECTOR2(0,0));
vertices[7] = paperVertex(D3DXVECTOR3(1.0f, -1.0f, 1.0f), color1, D3DXVECTOR2(0,0));
D3DXCreateTextureFromFile( md3dDev, "texture.bmp", &gTexture);
md3dDev->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_LINEAR);
md3dDev->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR);
md3dDev->SetTexture(0, gTexture);
md3dDev->SetStreamSource(0, mVtxBuf, 0, sizeof(paperVertex));
md3dDev->SetVertexDeclaration(paperDecl);
md3dDev->SetRenderState(D3DRS_FILLMODE, D3DFILL_WIREFRAME);
md3dDev->SetIndices(mIndBuf);
md3dDev->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0, VTX_NUM, 0, NUM_TRIANGLES);
Disclaimer: I have no Direct3D experience, but solid OpenGL and general computer graphics experience. Since the underlying concepts don't really differ, I'll attempt an answer whose correctness I'm 99% sure of.
You call md3dDev->SetRenderState(D3DRS_FILLMODE, D3DFILL_WIREFRAME) immediately before rendering and wonder why only the wireframe is drawn?
Keep in mind that using a texture doesn't magically turn a wireframe model into a solid model. It is still a wireframe model, with the texture applied only to the wireframe. You can only draw the whole primitive as wireframe or not.
Likewise, using texture coordinates of (0,0) does not magically disable texturing for individual faces. You can only draw the whole primitive textured or not, though you might play with the texture coordinates and the texture's wrapping mode (and maybe the texture border) to make the "non-textured" faces use a uniform color from the texture and thus look untextured.
In general, to achieve such differing render styles (textured/non-textured, but especially wireframe/solid) within a single primitive, you won't get around splitting the primitive into multiple ones and drawing each with its dedicated render style.
EDIT: Regarding your comment: if you don't need wireframe, why enable it at all? Besides disabling wireframe, note that with your current texture coordinates the other faces won't just show a single color from the texture but some strangely distorted version of it. This is because your vertices (and their texture coordinates) are shared between different faces, but the texture coordinates at the moment are chosen only so that the front face looks reasonable.
In such a situation you won't get around duplicating vertices, so that each face uses a set of 4 unique vertices. In the case of a cube you then won't actually need an index array anymore, because each face needs its own vertices anyway. This is due to the fact that a vertex conceptually represents all of its attributes (position, color, texCoord, ...), and you cannot have two vertices sharing a position but having different texture coordinates (well, you can, but then they are two distinct vertices). Once you've duplicated the vertices accordingly, you can give each face's corner vertices their respective texture coordinates: the usual [0,1] quad if you want the face textured normally, or all 0s if you want it to have a single color, in this case the color of the texture's top-left corner (which is what (0,0) addresses in D3D).
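A sketch of what that could look like for just the front face (the one facing the XY plane), reusing the question's paperVertex struct and color1; the exact texture coordinates depend on how you want the image oriented:
// Four vertices dedicated to the front face, with a full [0,1] texture quad.
// In D3D, (0,0) is the top-left corner of the texture.
paperVertex front[4] =
{
    paperVertex(D3DXVECTOR3(-1.0f, -1.0f, -1.0f), color1, D3DXVECTOR2(0.0f, 1.0f)), // bottom left
    paperVertex(D3DXVECTOR3(-1.0f,  1.0f, -1.0f), color1, D3DXVECTOR2(0.0f, 0.0f)), // top left
    paperVertex(D3DXVECTOR3( 1.0f,  1.0f, -1.0f), color1, D3DXVECTOR2(1.0f, 0.0f)), // top right
    paperVertex(D3DXVECTOR3( 1.0f, -1.0f, -1.0f), color1, D3DXVECTOR2(1.0f, 1.0f)), // bottom right
};
// The other five faces get their own four vertices each (24 vertices in total),
// so the cube can be drawn as 12 triangles without any face sharing texture
// coordinates with another face.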
The same problem arises if you want to light the cube and need per-face normals instead of interpolated per-vertex normals. In that case you also have to introduce duplicate vertices that differ only in their normal attribute. Always keep in mind that a vertex conceptually consists of all of its attributes, and if two vertices have the same position but a different color/normal/texCoord/..., they are conceptually (and practically) different vertices.