I'm learning DirectX 9 from www.directxtutorial.com, using Visual Studio 2012 on Windows 8.
d3dx9 (D3DX) has been replaced by other headers such as DirectXMath, so I replaced everything that was needed, but one problem remains: converting an XMMATRIX to a D3DMATRIX.
The problem code (the problematic lines are marked with /*problem!*/):
void render_frame(void)
{
    // clear the window to a deep blue
    d3ddev->Clear(0, NULL, D3DCLEAR_TARGET, D3DCOLOR_XRGB(0, 0, 0), 1.0f, 0);
    d3ddev->BeginScene(); // begins the 3D scene
    // select which vertex format we are using
    d3ddev->SetFVF(CUSTOMFVF);
    // SET UP THE PIPELINE
    DirectX::XMMATRIX matRotateY; // a matrix to store the rotation information
    static float index = 0.0f;
    index += 0.05f; // an ever-increasing float value
    // build a matrix to rotate the model based on the increasing float value
    matRotateY = DirectX::XMMatrixRotationY(index);
    D3DMATRIX D3DMatRotateY = matRotateY.r;
    // tell Direct3D about our matrix
    d3ddev->SetTransform(D3DTS_WORLD, &matRotateY); /*problem!*/
    DirectX::XMMATRIX matView; // the view transform matrix
    DirectX::XMVECTOR CameraPosition = {0.0f, 0.0f, 10.0f};
    DirectX::XMVECTOR LookAtPosition = {0.0f, 0.0f, 0.0f};
    DirectX::XMVECTOR TheUpDirection = {0.0f, 1.0f, 0.0f};
    matView = DirectX::XMMatrixLookAtLH(CameraPosition,  // the camera position
                                        LookAtPosition,  // the look-at position
                                        TheUpDirection); // the up direction
    d3ddev->SetTransform(D3DTS_VIEW, &matView); /*problem!*/ // set the view transform to matView
    DirectX::XMMATRIX matProjection; // the projection transform matrix
    DirectX::XMMatrixPerspectiveFovLH(&matProjection,
                                      DirectX::XMConvertToRadians(45), // the horizontal field of view
                                      1.0f,    // the near view-plane
                                      100.0f); // the far view-plane
    d3ddev->SetTransform(D3DTS_PROJECTION, &matProjection); /*problem!*/ // set the projection
    // select the vertex buffer to display
    d3ddev->SetStreamSource(0, v_buffer, 0, sizeof(CUSTOMVERTEX));
    // copy the vertex buffer to the back buffer
    d3ddev->DrawPrimitive(D3DPT_TRIANGLELIST, 0, 1);
    d3ddev->EndScene(); // ends the 3D scene
    d3ddev->Present(NULL, NULL, NULL, NULL); // displays the created frame on the screen
}
You can use XMStoreFloat4x4 to convert an XMMATRIX to an XMFLOAT4X4.
You should then be able to pass the XMFLOAT4X4 to SetTransform by casting, since it has the same memory layout as D3DMATRIX (sixteen row-major floats):
DirectX::XMMATRIX matProjection;
DirectX::XMFLOAT4X4 projectionMatrix;
matProjection = DirectX::XMMatrixPerspectiveFovLH(DirectX::XMConvertToRadians(45), 1.0f /* aspect ratio */, 1.0f, 100.0f);
DirectX::XMStoreFloat4x4(&projectionMatrix, matProjection);
d3ddev->SetTransform(D3DTS_PROJECTION, (D3DMATRIX*)&projectionMatrix); // set the projection
Note that the DirectXMath version of XMMatrixPerspectiveFovLH returns the matrix instead of writing through a pointer, and takes the aspect ratio as its second argument.
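If the conversion is needed in several places (the world and view matrices hit the same problem), a small helper can wrap it. This is an illustrative sketch, not part of the tutorial; the function name is mine:

#include <d3d9.h>
#include <DirectXMath.h>

// Convert an XMMATRIX to a D3DMATRIX by storing it into an XMFLOAT4X4,
// which shares D3DMATRIX's memory layout (16 row-major floats).
inline D3DMATRIX ToD3DMatrix(DirectX::FXMMATRIX m)
{
    DirectX::XMFLOAT4X4 f;
    DirectX::XMStoreFloat4x4(&f, m);
    return *reinterpret_cast<D3DMATRIX*>(&f);
}

// usage:
// D3DMATRIX world = ToD3DMatrix(matRotateY);
// d3ddev->SetTransform(D3DTS_WORLD, &world);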
I have what I believed to be a basic need: from the 2D position of the mouse on the screen, I need to get the closest 3D point in the 3D world. It looks like a common ray-tracing problem (even if it's not exactly mine).
I googled and read a lot: the topic is messy, and lots of things quickly get intricate. My initial problem involves lots of 3D points whose values I do not know (meshes or point clouds from the internet), so it's impossible to know what result to expect! Thus, I decided to create simple shapes (triangle, quadrangle, cube) with points that I know (each coordinate of each point is 0.f or 0.5f in the local frame), and see if I can "recover" the 3D point positions from the mouse cursor as I move it over the screen.
Note: all coordinates of all points of all shapes are known values like 0.f or 0.5f. For example, with the triangle:
float vertices[] = {
-0.5f, -0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
0.0f, 0.5f, 0.0f
};
What I do
I have a 3D OpenGL renderer to which I added a GUI, to have controls on the rendered scene.
Transformations: tx, ty, tz, rx, ry, rz are controls that change the model matrix. In code:
// create transformations: model represents local to world transformation
model = glm::mat4(1.0f); // initialize matrix to identity matrix first
model = glm::translate(model, glm::vec3(tx, ty, tz));
model = glm::rotate(model, glm::radians(rx), glm::vec3(1.0f, 0.0f, 0.0f));
model = glm::rotate(model, glm::radians(ry), glm::vec3(0.0f, 1.0f, 0.0f));
model = glm::rotate(model, glm::radians(rz), glm::vec3(0.0f, 0.0f, 1.0f));
ourShader.setMat4("model", model);
model changes only the position of the shape in the world and has no connection with the position of the camera (that's what I understand from the tutorials).
Camera: from here, I ended up with a camera class that holds the view and proj matrices. In code:
// get view and projection from camera
view = cam.getViewMatrix();
ourShader.setMat4("view", view);
proj = cam.getProjMatrix((float)SCR_WIDTH, (float)SCR_HEIGHT, near, 100.f);
ourShader.setMat4("proj", proj);
The camera is a fly-like camera that can be moved with the mouse or the keyboard arrows; it does not act on model, only on view and proj (that's what I understand from the tutorials).
The shader then uses model, view and proj this way:
uniform mat4 model;
uniform mat4 view;
uniform mat4 proj;
void main()
{
// note that we read the multiplication from right to left
gl_Position = proj * view * model * vec4(aPos.x, aPos.y, aPos.z, 1.0);
Screen to world: as glm::unProject didn't always return the results I expected, I added a control to bypass it (back-projecting by hand). In code, first I get the mouse cursor position frame3DPos following this:
// glfw: whenever the mouse moves, this callback is called
// -------------------------------------------------------
void mouseCursorCallback(GLFWwindow* window, double xposIn, double yposIn)
{
// screen to world transformation
xposScreen = xposIn;
yposScreen = yposIn;
int windowWidth = 0, windowHeight = 0; // size in screen coordinates.
glfwGetWindowSize(window, &windowWidth, &windowHeight);
int frameWidth = 0, frameHeight = 0; // size in pixel.
glfwGetFramebufferSize(window, &frameWidth, &frameHeight);
glm::vec2 frameWinRatio = glm::vec2(frameWidth, frameHeight) /
glm::vec2(windowWidth, windowHeight);
glm::vec2 screen2DPos = glm::vec2(xposScreen, yposScreen);
glm::vec2 frame2DPos = screen2DPos * frameWinRatio; // window / frame sizes may be different.
frame2DPos = frame2DPos + glm::vec2(0.5f, 0.5f); // shift to GL's center convention.
glm::vec3 frame3DPos = glm::vec3(0.0f, 0.0f, 0.0f);
frame3DPos.x = frame2DPos.x;
frame3DPos.y = frameHeight - 1.0f - frame2DPos.y; // GL's window origin is at the bottom left
frame3DPos.z = 0.f;
glReadPixels((GLint) frame3DPos.x, (GLint) frame3DPos.y, // CAUTION: cast to GLint.
1, 1, GL_DEPTH_COMPONENT,
GL_FLOAT, &zbufScreen); // CAUTION: GL_DOUBLE is NOT supported.
frame3DPos.z = zbufScreen; // z-buffer.
And then I call glm::unProject or not (back-projecting by hand), according to the controls in the GUI:
glm::vec3 world3DPos = glm::vec3(0.0f, 0.0f, 0.0f);
if (screen2WorldUsingGLM) {
glm::vec4 viewport(0.0f, 0.0f, (float) frameWidth, (float) frameHeight);
world3DPos = glm::unProject(frame3DPos, view * model, proj, viewport);
} else {
glm::mat4 trans = proj * view * model;
glm::vec4 frame4DPos(frame3DPos, 1.f);
frame4DPos = glm::inverse(trans) * frame4DPos;
world3DPos.x = frame4DPos.x / frame4DPos.w;
world3DPos.y = frame4DPos.y / frame4DPos.w;
world3DPos.z = frame4DPos.z / frame4DPos.w;
}
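For comparison, here is a minimal sketch of what glm::unProject does internally (assuming GLM's default OpenGL depth convention): window coordinates are first remapped to normalized device coordinates in [-1, 1] before the inverse transform and the perspective divide. The hand-rolled branch above skips this remapping, which by itself is enough to produce wildly different results:

glm::mat4 trans = proj * view * model;
// remap window coordinates to NDC in [-1, 1]
glm::vec4 ndc((frame3DPos.x / (float)frameWidth) * 2.0f - 1.0f,
              (frame3DPos.y / (float)frameHeight) * 2.0f - 1.0f,
              frame3DPos.z * 2.0f - 1.0f, // depth buffer value is in [0, 1]
              1.0f);
glm::vec4 obj = glm::inverse(trans) * ndc;
world3DPos = glm::vec3(obj) / obj.w; // perspective divide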
Question: the glm::unProject doc says it maps the specified window coordinates (win.x, win.y, win.z) into object coordinates, but I am not sure I understand what object coordinates are. Does "object coordinates" refer to the local, world, view or clip space described here?
Z-buffering is always enabled, whether the shape is 2D (triangle, quadrangle) or 3D (cube). In code:
glEnable(GL_DEPTH_TEST); // Enable z-buffer.
while (!glfwWindowShouldClose(window)) {
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // also clear the z-buffer
In pictures, here is the setup I get:
The camera is positioned at (0., 0., 0.) and looks "ahead" (front = -z, since the z-axis points from the screen toward me). The shape is positioned (using tx, ty, tz, rx, ry, rz) "in front of the camera" with tz = -5 (5 units along the camera's front vector).
What I get
Triangle in initial setting
I get correct xpos and ypos in the world frame, but an incorrect zpos = 0. (z-buffering is enabled). I expected zpos = -5 (as tz = -5).
Question: why is zpos incorrect?
If I do not use glm::unProject, I get outer-space results.
Question: why doesn't back-projecting by hand return results consistent with glm::unProject? Is this logical? Are they different operations? (I believed they should be equivalent, but they are obviously not.)
Triangle moved with translation
After a translation of about tx = 0.5, I still get the same coordinates (local frame), where I expected the previous coordinates translated along the x-axis. Not using glm::unProject returns outer-space results here too...
Question: why is the translation (applied by model, not view nor proj) ignored?
Cube in initial setting
I get correct xpos, ypos and zpos?!... So why does this not work the same way with the "2D" triangle (which is a "3D" one to me, so they should behave the same)?
Cube moved with translation
A translation along ty this time seems to have no effect (I still get the same coordinates, in the local frame).
Question: as with the triangle, why is the translation ignored?
What I'd like to get
The main question is: why is the model transformation ignored? If this is to be expected, I'd like to understand why.
If there's a way to recover the "true" position of the shape in the world (including the model transformation) from the position of the mouse cursor, I'd like to understand how.
As I am new to OpenGL, I didn't get that "object coordinates" in the glm::unProject doc is another way to refer to local space. Solution: pass view*model to glm::unProject and apply model again, or pass only view to glm::unProject, as explained here: Screen Coordinates to World Coordinates.
This fixes all the weird behaviors I observed.
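In code, the solution reads (a minimal sketch reusing the variables from the question):

glm::vec4 viewport(0.0f, 0.0f, (float)frameWidth, (float)frameHeight);
// passing only `view` yields world-space coordinates directly
glm::vec3 worldPos = glm::unProject(frame3DPos, view, proj, viewport);
// equivalently: pass `view * model`, which yields local-space
// ("object") coordinates, then re-apply `model`
glm::vec3 localPos = glm::unProject(frame3DPos, view * model, proj, viewport);
glm::vec3 worldPosAgain = glm::vec3(model * glm::vec4(localPos, 1.0f));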
I am trying to add a stop-sign texture to a triangle in OpenGL.
But for some reason the image comes up weird, as if I had the coordinates in the wrong order: the image is (1) mirrored and (2) not angled correctly.
I believe I set the texture coordinates correctly, but I am unsure. Have I got the texture coordinates in the wrong order?
Here is the code I have for it:
#include <glm/glm.hpp>
#include <graphics_framework.h>
#include <memory>
using namespace std;
using namespace graphics_framework;
using namespace glm;
mesh m;
effect eff;
target_camera cam;
texture tex;
bool load_content() {
// Construct geometry object
geometry geom;
// Create triangle data
// Positions
vector<vec3> positions{vec3(0.0f, 1.0f, 0.0f), vec3(-1.0f, -1.0f, 0.0f), vec3(1.0f, -1.0f, 0.0f)};
// *********************************
// Define texture coordinates for triangle
vector<vec2> tex_coords{ vec2(0.0f, 0.0f), vec2(1.0f, 0.0f), vec2(0.5f, 1.0f) };
// *********************************
// Add to the geometry
geom.add_buffer(positions, BUFFER_INDEXES::POSITION_BUFFER);
// *********************************
// Add texture coordinate buffer to geometry
geom.add_buffer(tex_coords, BUFFER_INDEXES::TEXTURE_COORDS_0);
// *********************************
// Create mesh object
m = mesh(geom);
// Load in texture shaders here
eff.add_shader("27_Texturing_Shader/simple_texture.vert", GL_VERTEX_SHADER);
eff.add_shader("27_Texturing_Shader/simple_texture.frag", GL_FRAGMENT_SHADER);
// *********************************
// Build effect
eff.build();
// Load texture "textures/sign.jpg"
tex = texture("textures/sign.jpg");
// *********************************
// Set camera properties
cam.set_position(vec3(10.0f, 10.0f, 10.0f));
cam.set_target(vec3(0.0f, 0.0f, 0.0f));
auto aspect = static_cast<float>(renderer::get_screen_width()) / static_cast<float>(renderer::get_screen_height());
cam.set_projection(quarter_pi<float>(), aspect, 2.414f, 1000.0f);
return true;
}
bool update(float delta_time) {
// Update the camera
cam.update(delta_time);
return true;
}
bool render() {
// Bind effect
renderer::bind(eff);
// Create MVP matrix
auto M = m.get_transform().get_transform_matrix();
auto V = cam.get_view();
auto P = cam.get_projection();
auto MVP = P * V * M;
// Set MVP matrix uniform
glUniformMatrix4fv(eff.get_uniform_location("MVP"), // Location of uniform
1, // Number of values - 1 mat4
GL_FALSE, // Transpose the matrix?
value_ptr(MVP)); // Pointer to matrix data
// *********************************
// Bind texture to renderer
renderer::bind(tex, 0);
// Set the texture value for the shader here
glUniform1i(eff.get_uniform_location("tex"), 0);
// *********************************
// Render the mesh
renderer::render(m);
return true;
}
int main() {
// Create application
app application("27_Texturing_Shader");
// Set load content, update and render methods
application.set_load_content(load_content);
application.set_update(update);
application.set_render(render);
// Run application
application.run();
}
That's because the orders of your vertex positions and texture coordinates do not match. The first position is the top corner (assuming y is up), but the first texture coordinate is the bottom-left corner of the image. Moving the last texture coordinate to the front should do the trick.
Your texture is probably also mirrored along the y-axis. OpenGL expects the texture's rows bottom-up, but most image libraries provide them top-down; it depends on what library you use to load the image. And of course, it will also depend on the side from which you view the triangle.
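Putting the first answer into code: reorder the list so each texture coordinate matches its vertex (top, bottom-left, bottom-right). A minimal sketch, assuming the image itself is loaded the right way up:

// top vertex -> top-center of the image, then bottom-left, bottom-right
vector<vec2> tex_coords{ vec2(0.5f, 1.0f), vec2(0.0f, 0.0f), vec2(1.0f, 0.0f) };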
I am trying to draw a triangle with user-defined coordinates using DirectX 11:
void triangle(float anchorx, float anchory, float x1, float y1, float x2, float y2, XMFLOAT4 _color)
{
XMMATRIX scale;
XMMATRIX translate;
XMMATRIX world;
simplevertex framevertices[3] = { XMFLOAT3(anchorx, ui::height - anchory, 1.0f), XMFLOAT2(0.0f, 0.0f),
XMFLOAT3(x1, ui::height - y1, 1.0f), XMFLOAT2(1.0f, 0.0f),
XMFLOAT3(x2, ui::height - y2, 1.0f), XMFLOAT2(1.0f, 1.0f) };
world = XMMatrixIdentity();
dx11::generalcb.world = XMMatrixTranspose(world);// XMMatrixIdentity();
dx11::generalcb.fillcolor = _color;
dx11::generalcb.projection = XMMatrixOrthographicOffCenterLH( 0.0f, ui::width, 0.0f, ui::height, 0.01f, 100.0f );
// copy the vertices into the buffer
D3D11_MAPPED_SUBRESOURCE ms;
dx11::context->Map(dx11::vertexbuffers::trianglevertexbuffer, NULL, D3D11_MAP_WRITE_DISCARD, NULL, &ms);
memcpy(ms.pData, framevertices, sizeof(simplevertex) * 3);
dx11::context->Unmap(dx11::vertexbuffers::trianglevertexbuffer, NULL);
dx11::context->VSSetShader(dx11::shaders::simplevertexshader, NULL, 0);
dx11::context->IASetVertexBuffers(0, 1, &dx11::vertexbuffers::trianglevertexbuffer, &dx11::verticestride, &dx11::verticeoffset);
dx11::context->IASetInputLayout(dx11::shaders::simplevertexshaderlayout);
dx11::context->PSSetShader(dx11::shaders::panelpixelshader, NULL, 0);
dx11::context->UpdateSubresource(dx11::general_cb, 0, NULL, &dx11::generalcb, 0, 0);
dx11::context->IASetIndexBuffer(dx11::indexbuffers::triangleindexbuffer, DXGI_FORMAT_R16_UINT, 0);
dx11::context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);
dx11::context->DrawIndexed(4, 0, 0);
//dx11::context->Draw(3, 0);
};
Subtraction on the Y axis when filling the pos.y data:
XMFLOAT3(anchorx, ui::height - anchory, 1.0f)
XMFLOAT3(x1, ui::height - y1, 1.0f)
XMFLOAT3(x2, ui::height - y2, 1.0f)
My orthographic projection puts the coordinate origin at the bottom-left of the screen, so I subtract the passed Y coordinate from the window height to get the proper position on the y-axis. I'm not sure this can affect my problem, because it worked well with all the other primitives (rectangles, textures, filled rectangles, lines and circles).
Index order defined in index buffer:
unsigned short triangleindices[6] = { 0, 1, 2, 0, 2, 3 };
I'm actually using this index buffer to render rectangles, so it's set up to render 2 triangles forming a quad; I didn't bother to create a separate triangle index buffer.
Trianglevertexbuffer contains array 4 of simplevertex:
//A VERTEX STRUCT
struct simplevertex
{
XMFLOAT3 pos; // pixel coordinates
XMFLOAT2 uv; // texture coordinates for this vertex (where to sample the color)
};
I am not using the UV data in the function above, I just filled it with random values, because the color is passed via the constant buffer. As you can see, I also memcpy only the first 3 array entries to the vertex buffer, since a triangle requires only 3.
VERTEX SHADER:
// SIMPLE VERTEX SHADER
cbuffer buffer : register(b0) // constant buffer, set up as 0 in set constant buffers command
{
matrix world; // world matrix
matrix projection; // projection matrix, is orthographic for now
float4 bordercolor;
float4 fillcolor;
float blendindex;
};
// simple vertex shader
float4 main(float4 input : POSITION) : SV_POSITION
{
float4 output = (float4)0; // setting variable fields to zero, may be skipped?
output = mul(input, world); // multiplying vertex shader output by world matrix passed in the constant buffer
output = mul(output, projection); // multiplying on by projection passed in the constant buffer (2d for now), resulting in final output data(.pos)
return output;
}
PIXEL SHADER:
// SIMPLE PIXEL SHADER RETURNING THE COLOR PASSED IN THE CONSTANT BUFFER
cbuffer buffer : register(b0) // constant buffer, set up as 0 in set constant buffers command
{
matrix world; // world matrix
matrix projection; // projection matrix, is orthographic for now
float4 bordercolor;
float4 fillcolor;
float blendindex;
};
// pixel shader returning preset color in constant buffer
float4 main(float4 input : SV_POSITION) : SV_Target
{
return fillcolor; // and we just return the color passed to constant buffer
}
Layout used in the function:
{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 };
Then I added the following lines to the rendering cycle:
if (ui::keys[VK_F1]) tools::triangle(700, 100, 400, 300, 850, 700, colors::blue);
if (ui::keys[VK_F2]) tools::triangle(400, 300, 700, 100, 850, 700, colors::blue);
if (ui::keys[VK_F3]) tools::triangle(700, 100, 850, 700, 400, 300, colors::blue);
if (ui::keys[VK_F4]) tools::triangle(850, 700, 400, 300, 700, 100, colors::blue);
if (ui::keys[VK_F5]) tools::triangle(850, 700, 700, 100, 400, 300, colors::blue);
if (ui::keys[VK_F6]) tools::triangle(400, 300, 850, 700, 700, 100, colors::blue);
The point of this setup is to feed the triangle's coordinates in any random order, and that is actually the point of this question: with some coordinate orders the triangle was not rendered, so I came to the conclusion that it comes from the TOPOLOGY. As you can see, at the moment I have:
dx11::context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);
dx11::context->DrawIndexed(4, 0, 0);
This is the only combination that draws all the triangles, but I honestly don't understand how that happens: from what I know, STRIP topology is used with the context->Draw function, while LIST works with an index buffer setup. I tried the following:
dx11::context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
dx11::context->DrawIndexed(4, 0, 0);
Triangles F1, F5, F6 were not drawn. Alright, next:
dx11::context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);
//dx11::context->DrawIndexed(4, 0, 0);
dx11::context->Draw(3, 0);
Same story: F1, F5 and F6 are not rendered.
I cannot understand what's going on. You may find the code a bit primitive, but I only want to know why I get a working result only with the combination of STRIP topology and the DrawIndexed function. I hope I provided enough information; sorry if not, I'll correct it on demand. Thank you :)
Got it.
First of all, in my rasterizer settings I had rasterizerState.CullMode = D3D11_CULL_FRONT (I don't remember why; maybe I just copy-pasted someone else's working code), so I changed it to D3D11_CULL_NONE and everything worked as intended:
dx11::context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);
dx11::context->Draw(3, 0);
Second, I found out that whether a primitive faces the view depends on the winding order of its vertices: drawn in one order a primitive is considered front-facing, drawn in the other we see its "back" instead (by default, Direct3D 11 treats clockwise winding as front-facing, which is why D3D11_CULL_FRONT discarded some of my triangles).
So I decided to keep D3D11_CULL_NONE instead of adding some math logic to fix the order of the vertices (sketched below); I don't know for sure whether D3D11_CULL_NONE costs performance, though.
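For reference, the "math logic" would be small: the sign of the 2D cross product of two triangle edges gives the winding order, so the vertices could be reordered before drawing instead of disabling culling. A hedged sketch (the helper name is mine):

// Signed area of a 2D triangle: > 0 for counter-clockwise winding,
// < 0 for clockwise (with y pointing up, as in this orthographic setup).
float signedArea2D(const XMFLOAT3& a, const XMFLOAT3& b, const XMFLOAT3& c)
{
    return (b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y);
}

// If the winding is wrong for the current cull mode, swap two vertices:
// if (signedArea2D(framevertices[0].pos, framevertices[1].pos,
//                  framevertices[2].pos) < 0.0f)
//     std::swap(framevertices[1], framevertices[2]);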
My 3D world draws perfectly every time, but the 2D text never draws. The code below shows my latest effort, using a tutorial from Lighthouse3D. I get the feeling it's something stupidly simple and I'm just not seeing it.
Rendering code:
void ScreenGame::draw(SDL_Window * window)
{
glClearColor(0.5f,0.5f,0.5f,1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Set up projection matrix
glm::mat4 projection(1.0);
projection = glm::perspective(60.0f,800.0f/600.0f,1.0f,150.0f);
rt3d::setUniformMatrix4fv(shaderProgram, "projection", glm::value_ptr(projection));
GLfloat scale(1.0f); // just to allow easy scaling of complete scene
glm::mat4 modelview(1.0); // set base position for scene
mvStack.push(modelview);
mvStack.top() = glm::lookAt(camera->getEye(),camera->getAt(),camera->getUp());
glm::vec4 tmp = mvStack.top()*lightPos;
light0.position[0] = tmp.x;
light0.position[1] = tmp.y;
light0.position[2] = tmp.z;
rt3d::setLightPos(shaderProgram, glm::value_ptr(tmp));
glUseProgram(skyBoxShader); // Switch shaders, reset uniforms for skybox
rt3d::setUniformMatrix4fv(skyBoxShader, "projection", glm::value_ptr(projection));
glDepthMask(GL_FALSE); // make sure depth test is off
glm::mat3 mvRotOnlyMat3 = glm::mat3(mvStack.top());
mvStack.push( glm::mat4(mvRotOnlyMat3) );
skyBox->draw(mvStack); // drawing skybox
mvStack.pop();
glDepthMask(GL_TRUE); // make sure depth test is on
mvStack.top() = glm::lookAt(camera->getEye(),camera->getAt(),camera->getUp());
glUseProgram(shaderProgram); // Switch back to normal shader program
rt3d::setUniformMatrix4fv(shaderProgram, "projection", glm::value_ptr(projection));
rt3d::setLightPos(shaderProgram, glm::value_ptr(tmp));
rt3d::setLight(shaderProgram, light0);
// Draw all visible objects...
Ball->draw(mvStack);
ground->draw(mvStack);
building1->draw(mvStack);
building2->draw(mvStack);
setOrthographicProjection();
glPushMatrix();
glLoadIdentity();
renderBitmapString(5,30,1,GLUT_BITMAP_HELVETICA_18,"Text Test");
glPopMatrix();
restorePerspectiveProjection();
SDL_GL_SwapWindow(window); // swap buffers
}
using the following methods:
void setOrthographicProjection() {
// switch to projection mode
glMatrixMode(GL_PROJECTION);
// save previous matrix which contains the
//settings for the perspective projection
glPushMatrix();
// reset matrix
glLoadIdentity();
// set a 2D orthographic projection
glOrtho(0.0F, 800, 600, 0.0F, -1.0F, 1.0F);
// switch back to modelview mode
glMatrixMode(GL_MODELVIEW);
}
void restorePerspectiveProjection() {
glMatrixMode(GL_PROJECTION);
// restore previous projection matrix
glPopMatrix();
// get back to modelview mode
glMatrixMode(GL_MODELVIEW);
}
void renderBitmapString(
float x,
float y,
int spacing,
void *font,
const char *string) {
const char *c;
float x1 = x; // keep this a float; an int here would truncate the position
for (c = string; *c != '\0'; c++) {
glRasterPos2f(x1, y);
glutBitmapCharacter(font, *c);
x1 = x1 + glutBitmapWidth(font, *c) + spacing;
}
}
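One thing worth checking here (an assumption on my part, not something the code confirms): shaderProgram is still bound when the text is drawn, and while a program is active, fixed-function calls such as glRasterPos are routed through the vertex shader. A minimal sketch that unbinds the program for the text pass:

glUseProgram(0); // assumption: return to the fixed-function pipeline for GLUT text
setOrthographicProjection();
glPushMatrix();
glLoadIdentity();
renderBitmapString(5, 30, 1, GLUT_BITMAP_HELVETICA_18, "Text Test");
glPopMatrix();
restorePerspectiveProjection();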
For some strange reason my depth buffer is not working: triangles drawn later always appear on top, regardless of their position.
I have these presenter parameters
D3DPRESENT_PARAMETERS d3dpp;
ZeroMemory(&d3dpp, sizeof(d3dpp));
d3dpp.Windowed = TRUE;
d3dpp.SwapEffect = D3DSWAPEFFECT_DISCARD;
d3dpp.hDeviceWindow = mWindow;
d3dpp.BackBufferFormat = D3DFMT_X8R8G8B8;
d3dpp.BackBufferWidth = mScreenWidth;
d3dpp.BackBufferHeight = mScreenHeight;
d3dpp.EnableAutoDepthStencil = TRUE;
d3dpp.AutoDepthStencilFormat = D3DFMT_D16;
and these render states:
d3dDevice->SetRenderState(D3DRS_LIGHTING, TRUE); // turn on the 3D lighting
d3dDevice->SetRenderState(D3DRS_ZENABLE, TRUE); // turn on the z-buffer
d3dDevice->SetRenderState(D3DRS_NORMALIZENORMALS, TRUE);
d3dDevice->SetRenderState(D3DRS_AMBIENT, D3DCOLOR_XRGB(50, 50, 50)); // ambient light
Edit:
Thanks for replying. This is the rendering code:
d3dDevice->Clear(0, NULL, D3DCLEAR_TARGET, D3DCOLOR_XRGB(0, 0, 0), 1.0f, 0);
d3dDevice->Clear(0, NULL, D3DCLEAR_ZBUFFER, D3DCOLOR_XRGB(0, 0, 0), 1.0f, 0);
d3dDevice->BeginScene();
// View transform
D3DXMATRIX matView;
D3DXMatrixLookAtLH(&matView,
&PlayerPos, // the camera position
&(LookAtRelative + PlayerPos), // the look-at position
&D3DXVECTOR3 (0.0f, 1.0f, 0.0f)); // the up direction
d3dDevice->SetTransform(D3DTS_VIEW, &matView);
// Projection transform
D3DXMATRIX matProjection;
D3DXMatrixPerspectiveFovLH(&matProjection,
D3DXToRadian(45), // the horizontal field of view
(FLOAT)mScreenWidth / (FLOAT)mScreenHeight, // aspect ratio
0.0f, // the near view-plane
1000.0f); // the far view-plane
d3dDevice->SetTransform(D3DTS_PROJECTION, &matProjection);
for (unsigned int i=0; i < mModels.size(); i++) {
mModels[i]->Draw();
}
d3dDevice->EndScene();
d3dDevice->Present(NULL, NULL, NULL, NULL);
and the Model::Draw() code is this:
void Model :: Draw () {
// Setup the world transform matrix
D3DXMATRIX matScale;
D3DXMATRIX matRotate;
D3DXMATRIX matTranslate;
D3DXMATRIX matWorldTransform;
D3DXMatrixScaling(&matScale, mScale->x, mScale->y, mScale->z);
D3DXMatrixRotationY(&matRotate, 0);
D3DXMatrixTranslation(&matTranslate, mPosition->x, mPosition->y, mPosition->z);
matWorldTransform = matScale * matRotate * matTranslate;
d3dDevice->SetTransform(D3DTS_WORLD, &matWorldTransform);
d3dDevice->SetFVF(CUSTOMFVF);
d3dDevice->SetStreamSource(0, vertexBuffer, 0, sizeof(CUSTOMVERTEX));
d3dDevice->SetIndices(indexBuffer);
d3dDevice->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0, vertexCount, 0, indexCount/3);
}
where vertexBuffer and indexBuffer, together with their counts, are attributes of the class.
Here are some screenshots:
1) http://img822.imageshack.us/img822/1705/dx2010080913182262.jpg this is the situation
2) http://img691.imageshack.us/img691/7358/dx2010080913183790.jpg this is the (correct) view when the cube is in front (the cube is drawn later)
3) http://img340.imageshack.us/img340/4720/dx2010080913184509.jpg But when I have the truncated pyramid in front, the cube still overlaps
It's easier to see when you move the camera yourself...
Now that's a gotcha. The problem was that I set the near view-plane to 0.0f; when I changed it to something like 0.001f, the z-buffer suddenly started to work. (With a near plane of 0, the projection maps every point to the same depth value, so the depth test has nothing to compare.)
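In code, the fix is a single changed argument in the projection setup shown above:

D3DXMatrixPerspectiveFovLH(&matProjection,
    D3DXToRadian(45), // the field of view
    (FLOAT)mScreenWidth / (FLOAT)mScreenHeight, // aspect ratio
    0.001f, // the near view-plane: must be greater than 0
    1000.0f); // the far view-plane
d3dDevice->SetTransform(D3DTS_PROJECTION, &matProjection);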