I am creating a game that will have 2D pictures inside a 3D world.
I originally started off by not caring about my images being stretched to a square while I learnt more about how game mechanics work, but it's now time to get my textures to display in the correct ratio and size.
Just a side note: I have played with orthographic left-handed projections, but I noticed that 3D doesn't really work in them (I guess that makes sense, but I could be wrong; when I rotated my image, it went all stretchy and weird).
The nature of my game is as follows:
In the image it says -1.0 to 1.0. I'm not fussed if the coordinates are instead:
top left = 0, 0, 0
bottom right = 1920, 1200, 0
But if that's the solution, then fine. (P.S. the game is not currently set up so that -1.0 and 1.0 are the left and right of the screen; in fact, I'm not sure how I'm going to make the screen edges the boundaries, but that's a question for another day.)
Question:
The issue I am having is that the image for my player (2D) is 128 x 64 pixels. After the world matrix multiplication (I think that's what it is), the vertices I put in scale my texture hugely. That makes sense, but it looks ugly, and I don't want to just whack a massive scaling matrix into the mix, because it would be difficult to work out how to make the texture 1:1 with my screen pixels (although maybe you will tell me that is actually how you do it, and that there is a clever formula to work out what the scaling should be).
Basically, I want the vertices to hold a 1:1 pixel size of my image, unstretched.
So I assume I need to convert my world coordinates to screen coordinates before outputting my textures and vertices? I'm not sure how it works.
Anyway, here are my vertices; you may notice what I've done:
struct VERTEX
{
    float X, Y, Z;
    //float R, G, B, A;
    float NX, NY, NZ;
    float U, V; // texture coordinates
};
const unsigned short SquareVertices::indices[ 6 ] = {
    0, 1, 2, // side 1
    2, 1, 3
};
const VERTEX SquareVertices::vertices[ 4 ] = {
    //{ -1.0f, -1.0f, 0.0f, 0.0f, 0.0f, -1.0f, 0.0f, 0.0f }, // side 1
    //{ 1.0f, -1.0f, 0.0f, 0.0f, 0.0f, -1.0f, 1.0f, 0.0f },
    //{ -1.0f, 1.0f, 0.0f, 0.0f, 0.0f, -1.0f, 0.0f, 1.0f },
    //{ 1.0f, 1.0f, 0.0f, 0.0f, 0.0f, -1.0f, 1.0f, 1.0f }
    { -64.0f, -32.0f, 0.0f, 0.0f, 0.0f, -1.0f, 0.0f, 0.0f }, // side 1
    { 64.0f, -32.0f, 0.0f, 0.0f, 0.0f, -1.0f, 1.0f, 0.0f },
    { -64.0f, 32.0f, 0.0f, 0.0f, 0.0f, -1.0f, 0.0f, 1.0f },
    { 64.0f, 32.0f, 0.0f, 0.0f, 0.0f, -1.0f, 1.0f, 1.0f } // note: Y here should be 32, not 64, to keep the quad rectangular
};
(128 pixels / 2 = 64, and 64 / 2 = 32, because the centre is at 0.0.) But what do I need to do to the projection and world transformations to get from world coordinates to screen coordinates?
My current setup looks like this:
// called 1st
void Game::SetUpViewTransformations( )
{
    XMVECTOR vecCamPosition = XMVectorSet( 0.0f, 0.0f, -20.0f, 0 );
    XMVECTOR vecCamLookAt = XMVectorSet( 0, 0, 0, 0 );
    XMVECTOR vecCamUp = XMVectorSet( 0, 1, 0, 0 );
    matView = XMMatrixLookAtLH( vecCamPosition, vecCamLookAt, vecCamUp );
}
// called 2nd
void Game::SetUpMatProjection( )
{
    matProjection = XMMatrixPerspectiveFovLH(
        XMConvertToRadians( 45 ),   // the field of view
        windowWidth / windowHeight, // aspect ratio
        1,                          // the near view-plane
        100 );                      // the far view-plane
}
And here is a sneaky look at my update and render methods:
// called 3rd
void Game::Update( )
{
    world->Update( );
    worldRotation = XMMatrixRotationY( world->rotation );
    player->Update( );
    XMMATRIX matTranslate = XMMatrixTranslation( player->x, player->y, 0.0f );
    //XMMATRIX matTranslate = XMMatrixTranslation( 0.0f, 0.0f, 1.0f );
    matWorld[ 0 ] = matTranslate;
}
// called 4th
void Game::Render( )
{
    // set our new render target object as the active render target
    d3dDeviceContext->OMSetRenderTargets( 1, rendertarget.GetAddressOf( ), zbuffer.Get( ) );
    // clear the back buffer to a deep blue
    float color[ 4 ] = { 0.0f, 0.2f, 0.4f, 1.0f };
    d3dDeviceContext->ClearRenderTargetView( rendertarget.Get( ), color );
    // clear the depth buffer
    d3dDeviceContext->ClearDepthStencilView( zbuffer.Get( ), D3D11_CLEAR_DEPTH, 1.0f, 0 );
    CBUFFER cBuffer;
    cBuffer.DiffuseVector = XMVectorSet( 0.0f, 0.0f, 1.0f, 0.0f );
    cBuffer.DiffuseColor = XMVectorSet( 0.5f, 0.5f, 0.5f, 1.0f );
    cBuffer.AmbientColor = XMVectorSet( 0.2f, 0.2f, 0.2f, 1.0f );
    cBuffer.Final = worldRotation * matWorld[ 0 ] * matView * matProjection;
    cBuffer.Rotation = XMMatrixRotationY( world->rotation );
    // calculate the view transformation
    SetUpViewTransformations( );
    SetUpMatProjection( );
    //matFinal[ 0 ] = matWorld[ 0 ] * matView * matProjection;
    UINT stride = sizeof( VERTEX );
    UINT offset = 0;
    d3dDeviceContext->PSSetShaderResources( 0, 1, player->texture.GetAddressOf( ) ); // set up texture
    d3dDeviceContext->IASetVertexBuffers( 0, 1, player->vertexbuffer.GetAddressOf( ), &stride, &offset ); // set up vertex buffer
    d3dDeviceContext->IASetPrimitiveTopology( D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST ); // how the vertices are drawn
    d3dDeviceContext->IASetIndexBuffer( player->indexbuffer.Get( ), DXGI_FORMAT_R16_UINT, 0 ); // set up index buffer
    d3dDeviceContext->UpdateSubresource( constantbuffer.Get( ), 0, 0, &cBuffer, 0, 0 ); // set the new values for the constant buffer
    d3dDeviceContext->OMSetBlendState( blendstate.Get( ), 0, 0xffffffff ); // DON'T FORGET: if you disable this and you want colour, multiply by Color.a!
    d3dDeviceContext->DrawIndexed( ARRAYSIZE( player->indices ), 0, 0 ); // draw
    swapchain->Present( 1, 0 );
}
Just to clarify: if I make my vertices use 2 and 1 (matching the 2:1 ratio of my 128 x 64 image), I get a normal-looking image, and yet at 0, 0, 0 it's still not 1:1 with my screen pixels. What's up with that?
{ -2.0f, -1.0f, 0.0f, 0.0f, 0.0f, -1.0f, 0.0f, 0.0f }, // side 1
{ 2.0f, -1.0f, 0.0f, 0.0f, 0.0f, -1.0f, 1.0f, 0.0f },
{ -2.0f, 1.0f, 0.0f, 0.0f, 0.0f, -1.0f, 0.0f, 1.0f },
{ 2.0f, 1.0f, 0.0f, 0.0f, 0.0f, -1.0f, 1.0f, 1.0f } // note: Y here should be 1, not 2, to keep the quad rectangular
Desired outcome (to the max):
Cool picture, isn't it? :D
A helpful comment:
I'm not familiar with DirectX, but as far as I can see, the thing with your image is that screen coordinates run from -1.0 to +1.0 on both x and y. So the total length on each axis equals 2, and your image gets scaled by 2. Try to account for this scale in the camera matrix.
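Building on that comment, here is a minimal sketch of one common fix: replace the perspective projection with an orthographic projection whose view volume is the back-buffer size in pixels, so one world unit maps to one screen pixel. (This assumes windowWidth and windowHeight hold the back-buffer dimensions; the names are taken from the question's code.)
// Sketch: a pixel-sized orthographic projection. With this in place, the
// +/-64 x +/-32 quad above rasterizes at exactly 128 x 64 screen pixels.
void Game::SetUpMatProjection( )
{
    matProjection = XMMatrixOrthographicLH(
        (float)windowWidth,  // view volume width  = back-buffer width in pixels
        (float)windowHeight, // view volume height = back-buffer height in pixels
        1.0f,                // near plane
        100.0f );            // far plane
}
With an orthographic projection, translating along Z no longer changes the on-screen size, which is exactly the 1:1 behaviour asked for; it also makes the comment concrete: the projection is what decides how many pixels one world unit covers.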
Related
I tried to make a cube in OpenGL and render a default texture on each side. I've been messing around with it for days, but I can't get it to work. I really don't know what the problem is, as I am convinced that my vertices and texture coordinates are right. What am I doing wrong?
These are my vertices, UVs and indices:
vertices = {
// front face
0.0f, 0.0f, 0.0f,
length, 0.0f, 0.0f,
length, height, 0.0f,
0.0f, height, 0.0f,
// back face
0.0f, 0.0f, width,
length, 0.0f, width,
length, height, width,
0.0f, height, width,
// left face
0.0f, 0.0f, 0.0f,
0.0f, 0.0f, width,
0.0f, height, width,
0.0f, height, 0.0f,
// right face
length, 0.0f, 0.0f,
length, 0.0f, width,
length, height, width,
length, height, 0.0f,
// top face
0.0f, height, 0.0f,
length, height, 0.0f,
length, height, width,
0.0f, height, width,
// bottom face
0.0f, 0.0f, 0.0f,
length, 0.0f, 0.0f,
length, 0.0f, width,
0.0f, 0.0f, width
};
uvs = {
// front face
0.0f, 0.0f,
1.0f, 0.0f,
1.0f, 1.0f,
0.0f, 1.0f,
// back face
0.0f, 0.0f,
1.0f, 0.0f,
1.0f, 1.0f,
0.0f, 1.0f,
// left face
0.0f, 0.0f,
0.0f, 0.0f,
0.0f, 1.0f,
0.0f, 1.0f,
// right face
1.0f, 0.0f,
1.0f, 0.0f,
1.0f, 1.0f,
1.0f, 1.0f,
// top face
0.0f, 1.0f,
1.0f, 1.0f,
1.0f, 1.0f,
0.0f, 1.0f,
// bottom face
0.0f, 0.0f,
1.0f, 0.0f,
1.0f, 0.0f,
0.0f, 0.0f
};
indices = {
// front face
0, 1, 2,
2, 3, 0,
// right face
1, 5, 6,
6, 2, 1,
// back face
7, 6, 5,
5, 4, 7,
// left face
4, 0, 3,
3, 7, 4,
// bottom face
4, 5, 1,
1, 0, 4,
// top face
3, 2, 6,
6, 7, 3
};
This is my render method:
void Mesh::render() {
    // Render the cube using OpenGL
    view = glm::lookAt(Camera::getInstance().cameraPos, Camera::getInstance().cameraPos + Camera::getInstance().cameraFront, Camera::getInstance().cameraUp);
    mvp = projection * view * model;
    // Attach to program_id
    glUseProgram(programId);
    // Send mvp
    glUniformMatrix4fv(uniformMvp, 1, GL_FALSE, glm::value_ptr(mvp));
    // Bind vao and texture
    glBindVertexArray(vao);
    glBindTexture(GL_TEXTURE_2D, textureId);
    // note: the count parameter is the number of indices, not a byte size
    glDrawElements(GL_TRIANGLES, indices.size(), GL_UNSIGNED_SHORT, 0);
    glBindVertexArray(0);
}
Fragment shader:
#version 430 core
in vec2 UV;
uniform sampler2D texsampler;
layout(location = 0) out vec4 gl_FragColor;
void main()
{
    // Sample the texture for this fragment
    vec3 test = texture2D(texsampler, UV).rgb;
    // Write final color to the framebuffer
    gl_FragColor = vec4(test, 1.0);
}
And the vertex shader:
#version 430 core
// Uniform matrices
uniform mat4 mv;
uniform mat4 projection;
// Per-vertex inputs
in vec3 position;
// UV
in vec2 uv;
out vec2 UV;
void main()
{
    // Calculate view-space coordinate
    vec4 P = mv * vec4(position, 1.0);
    // Calculate the clip-space position of each vertex
    gl_Position = projection * P;
    UV = uv;
}
Image of the cube:
I think this is enough information. Only the front and the back are textured normally; the rest looks like what you see in the image.
Your texture coordinates are wrong, as commented:
// left face
0.0f, 0.0f,
0.0f, 0.0f,
0.0f, 1.0f,
0.0f, 1.0f,
// right face
1.0f, 0.0f,
1.0f, 0.0f,
1.0f, 1.0f,
1.0f, 1.0f,
This tells the computer to take a single line of pixels and stretch them across the entire face. The U coordinate is the same for the whole face. It does not advance from left to right across the texture.
Same for the top and bottom faces:
// top face
0.0f, 1.0f,
1.0f, 1.0f,
1.0f, 1.0f,
0.0f, 1.0f,
// bottom face
0.0f, 0.0f,
1.0f, 0.0f,
1.0f, 0.0f,
0.0f, 0.0f
The V does not advance, so the computer keeps reading the first or last row of the texture over and over. If you want to use the entire texture, then V should be 0 on one side of the face and 1 on the other side. Also note that the direction in which U changes must be different from the direction in which V changes - i.e. they can't change together - or else the computer only reads the diagonal pixels where U and V are equal.
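For example, the left and right faces could reuse the front face's progression, so that U and V each sweep the full 0 to 1 range along different edges of the face (a sketch; any orientation with that property works):
// left face: U advances across the face, V advances up the face
0.0f, 0.0f,
1.0f, 0.0f,
1.0f, 1.0f,
0.0f, 1.0f,
// right face: same progression
0.0f, 0.0f,
1.0f, 0.0f,
1.0f, 1.0f,
0.0f, 1.0f,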
Looking at the OP's vertices, I noticed that there are 24 of them, although a cube has only 8 corners. That's not surprising, as the same corner position may need a distinct vertex, with its own texture coordinates, for each face it belongs to.
Hence, it makes sense to define coordinates and corresponding texture coordinates per face, i.e. 6 faces with 4 corners each -> 24 vertices.
I enriched the OP's code with an enumeration of the vertices:
vertices = {
// front face
0.0f, 0.0f, 0.0f, // 0
length, 0.0f, 0.0f, // 1
length, height, 0.0f, // 2
0.0f, height, 0.0f, // 3
// back face
0.0f, 0.0f, width, // 4
length, 0.0f, width, // 5
length, height, width, // 6
0.0f, height, width, // 7
// left face
0.0f, 0.0f, 0.0f, // 8
0.0f, 0.0f, width, // 9
0.0f, height, width, // 10
0.0f, height, 0.0f, // 11
// right face
length, 0.0f, 0.0f, // 12
length, 0.0f, width, // 13
length, height, width, // 14
length, height, 0.0f, // 15
// top face
0.0f, height, 0.0f, // 16
length, height, 0.0f, // 17
length, height, width, // 18
0.0f, height, width, // 19
// bottom face
0.0f, 0.0f, 0.0f, // 20
length, 0.0f, 0.0f, // 21
length, 0.0f, width, // 22
0.0f, 0.0f, width // 23
};
uvs = {
// front face
0.0f, 0.0f, // 0
1.0f, 0.0f, // 1
1.0f, 1.0f, // 2
0.0f, 1.0f, // 3
// back face
0.0f, 0.0f, // 4
1.0f, 0.0f, // 5
1.0f, 1.0f, // 6
0.0f, 1.0f, // 7
// left face
0.0f, 0.0f, // 8
0.0f, 0.0f, // 9
0.0f, 1.0f, // 10
0.0f, 1.0f, // 11
// right face
1.0f, 0.0f, // 12
1.0f, 0.0f, // 13
1.0f, 1.0f, // 14
1.0f, 1.0f, // 15
// top face
0.0f, 1.0f, // 16
1.0f, 1.0f, // 17
1.0f, 1.0f, // 18
0.0f, 1.0f, // 19
// bottom face
0.0f, 0.0f, // 20
1.0f, 0.0f, // 21
1.0f, 0.0f, // 22
0.0f, 0.0f // 23
};
But then I took a closer look at what the indices look up:
indices = {
// ...
// right face
1, 5, 6, // -> UV: { 1.0f, 0.0f }, { 1.0f, 0.0f }, { 1.0f, 1.0f }
6, 2, 1, // -> UV: { 1.0f, 1.0f }, { 1.0f, 1.0f }, { 1.0f, 0.0f }
// ...
}
There are only two distinct texture coordinate values, but there should be four of them. Hence, it's no surprise that the texture projection on that right face looks strange.
The OP used the wrong indices here. This doesn't show up in the geometry, because the wrong indices happen to address vertex coordinates with identical values. For the texture coordinates (uvs), however, these indices are simply wrong.
According to the added index values, I corrected the indices for the right face:
indices = {
// ...
// right face
12, 13, 14,
14, 15, 12,
// ...
}
The indices of the top face are defined correctly, but the other faces have to be checked as well. (I leave this as "homework" for the OP. Or, as a colleague of mine used to say: not to punish, just to practice.) ;-)
On second glance, I realized that the OP's texture coordinates are wrong as well.
To understand how texture coordinates work:
There is a uv coordinate system applied to the image with
(0, 0) … the lower left corner
(1, 0) … the lower right corner
(0, 1) … the upper left corner
of the image.
(Illustration taken from opengl-tutorial – Tutorial 5: A Textured Cube.)
Hence, using my
uvs = {
// ...
// right face
1.0f, 0.0f, // 12
1.0f, 0.0f, // 13
1.0f, 1.0f, // 14
1.0f, 1.0f, // 15
// ...
};
provides the lower right corner twice and the upper right corner twice. The result of such a texture projection is stripes instead of bricks.
A better result should be achieved by repeating the texture coordinates of the front face 6 times:
uvs = {
// ...
// right face
0.0f, 0.0f, // 12
1.0f, 0.0f, // 13
1.0f, 1.0f, // 14
0.0f, 1.0f, // 15
// ...
};
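Equivalently, the whole array can be generated by repeating the front face's progression six times (a sketch, assuming uvs is a std::vector<float>, which the assignment syntax in the question suggests):
#include <vector>

std::vector<float> uvs;
for (int face = 0; face < 6; ++face) {
    // the same 0..1 sweep of U and V on every face
    uvs.insert(uvs.end(), {
        0.0f, 0.0f,  // lower left
        1.0f, 0.0f,  // lower right
        1.0f, 1.0f,  // upper right
        0.0f, 1.0f   // upper left
    });
}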
I have a black and white image, and I would like to replace the black pixels with red pixels. I've tried:
Gdiplus::Graphics* g = Gdiplus::Graphics::FromImage(filename);
Gdiplus::ImageAttributes ia;
Gdiplus::ColorMatrix m = {
0.0f, 0.0f, 0.0f, 0.0f, 0.0f,
0.0f, 0.0f, 0.0f, 0.0f, 0.0f,
0.0f, 0.0f, 0.0f, 0.0f, 0.0f,
0.0f, 0.0f, 0.0f, 1.0f, 0.0f,
1.0f, 0.0f, 0.0f, 0.0f, 1.0f
};
ia.SetColorMatrix(&m);
g->DrawImage(org, r, 0, 0, org->GetWidth(), org->GetHeight(), Gdiplus::UnitPixel, &ia);
But it's making the entire bitmap red.
Do not use a matrix to perform this transformation. Your matrix will always output the following vector:
[1.0 0.0 0.0 currentAlpha 1.0]
That's why you have a red image.
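Working it through: GDI+ multiplies each pixel, written as the row vector [R G B A 1], by the 5x5 matrix, so with the matrix above:
R' = R*0 + G*0 + B*0 + A*0 + 1*1 = 1   // the fifth row adds 1 to red
G' = 0
B' = 0
A' = A*1 = A                           // the fourth row passes alpha through
Every pixel therefore comes out as fully saturated red with its original alpha, regardless of its input colour.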
Visit https://msdn.microsoft.com/en-us/library/ms533875%28v=vs.85%29.aspx
Use this instead
ImageAttributes ia;
ColorMap blackToRed;
blackToRed.oldColor = Color(255, 0, 0, 0); // black
blackToRed.newColor = Color(255, 255, 0, 0);// red
ia.SetRemapTable(1, &blackToRed);
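Then draw with those attributes, reusing the call from the question (org, r and g are the names from the question's code):
g->DrawImage(org, r, 0, 0, org->GetWidth(), org->GetHeight(),
             Gdiplus::UnitPixel, &ia);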
I wrote a basic OpenGL 2.1/ES example for the intended target platform, using the Qt 4.7.1 library on Windows. The target is some kind of Linux with at most Qt 4.8 available, and no glm or similar libraries. The embedded GPU supports ES 1.0 or OpenGL 2.1 only. The example is the "classic" textured cube that you meet in various OpenGL examples, but those examples use direct calls to OpenGL functions, which aren't available to me for lack of proper headers and GLEW, both on the development and the target platform. The development platform is Windows 7.
Geometry
static const int vertexDataCount = 6 * 4 * 4;
static const float vertexData[vertexDataCount] = {
// Left face
-0.5f, -0.5f, -0.5f, 1.0f,//0
-0.5f, -0.5f, 0.5f, 1.0f,//1
-0.5f, 0.5f, 0.5f, 1.0f,//2
-0.5f, 0.5f, -0.5f, 1.0f,//3
// Top face
-0.5f, 0.5f, -0.5f, 1.0f, //4
-0.5f, 0.5f, 0.5f, 1.0f, //5
0.5f, 0.5f, 0.5f, 1.0f, //6
0.5f, 0.5f, -0.5f, 1.0f, //7
// Right face
0.5f, 0.5f, -0.5f, 1.0f,//8
0.5f, 0.5f, 0.5f, 1.0f,//9
0.5f, -0.5f, 0.5f, 1.0f,//10
0.5f, -0.5f, -0.5f, 1.0f,//11
// Bottom face
0.5f, -0.5f, -0.5f, 1.0f,//12
0.5f, -0.5f, 0.5f, 1.0f,//13
-0.5f, -0.5f, 0.5f, 1.0f,//14
-0.5f, -0.5f, -0.5f, 1.0f,//15
// Front face
0.5f, -0.5f, 0.5f, 1.0f,//16
0.5f, 0.5f, 0.5f, 1.0f,//17
-0.5f, 0.5f, 0.5f, 1.0f,//18
-0.5f, -0.5f, 0.5f, 1.0f,//19
// Back face
0.5f, 0.5f, -0.5f, 1.0f,//20
0.5f, -0.5f, -0.5f, 1.0f,//21
-0.5f, -0.5f, -0.5f, 1.0f,//22
-0.5f, 0.5f, -0.5f, 1.0f //23
};
// Normal vectors
static const int normalDataCount = 6 * 4 * 3;
static const float normalData[normalDataCount] = {
// Left face
-1.0f, 0.0f, 0.0f,
-1.0f, 0.0f, 0.0f,
-1.0f, 0.0f, 0.0f,
-1.0f, 0.0f, 0.0f,
// Top face
0.0f, 1.0f, 0.0f,
0.0f, 1.0f, 0.0f,
0.0f, 1.0f, 0.0f,
0.0f, 1.0f, 0.0f,
// Right face
1.0f, 0.0f, 0.0f,
1.0f, 0.0f, 0.0f,
1.0f, 0.0f, 0.0f,
1.0f, 0.0f, 0.0f,
// Bottom face
0.0f, -1.0f, 0.0f,
0.0f, -1.0f, 0.0f,
0.0f, -1.0f, 0.0f,
0.0f, -1.0f, 0.0f,
// Front face
0.0f, 0.0f, 1.0f,
0.0f, 0.0f, 1.0f,
0.0f, 0.0f, 1.0f,
0.0f, 0.0f, 1.0f,
// Back face
0.0f, 0.0f, -1.0f,
0.0f, 0.0f, -1.0f,
0.0f, 0.0f, -1.0f,
0.0f, 0.0f, -1.0f
};
// Texture coords
static const int textureCoordDataCount = 6 * 4 * 2;
static const float textureCoordData[textureCoordDataCount] = {
1.0f, 0.0f,
1.0f, 1.0f,
0.0f, 1.0f,
0.0f, 0.0f,
1.0f, 0.0f,
1.0f, 1.0f,
0.0f, 1.0f,
0.0f, 0.0f,
1.0f, 0.0f,
1.0f, 1.0f,
0.0f, 1.0f,
0.0f, 0.0f,
1.0f, 0.0f,
1.0f, 1.0f,
0.0f, 1.0f,
0.0f, 0.0f,
1.0f, 0.0f,
1.0f, 1.0f,
0.0f, 1.0f,
0.0f, 0.0f,
1.0f, 0.0f,
1.0f, 1.0f,
0.0f, 1.0f,
0.0f, 0.0f
};
// Indices
//
// 3 indices per triangle
// 2 triangles per face
// 6 faces
static const int indexDataCount = 6 * 3 * 2;
static const unsigned int indexData[indexDataCount] = {
0, 1, 2, 0, 2, 3, // Left face
4, 5, 6, 4, 6, 7, // Top face
8, 9, 10, 8, 10, 11, // Right face
12, 14, 15, 12, 13, 14, // Bottom face
16, 17, 18, 16, 18, 19, // Front face
20, 22, 23, 20, 21, 22 // Back face
};
This is how I load the texture:
glEnable(GL_TEXTURE_2D);
m_texture = bindTexture(QImage("cube.png"));
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
if(m_shaderProgram)
m_shaderProgram->setUniformValue("texture", 0); // texture unit 0, assuming that we used
Vertex shader
#version 120
uniform mat4 projectionMatrix;
uniform mat4 modelViewMatrix;
attribute vec4 vertex;
attribute vec3 normal;
attribute vec2 texturecoord;
varying vec3 fragmentNormal;
varying vec2 outtexture;
void main( void )
{
    // Transform the normal vector
    fragmentNormal = ( modelViewMatrix * vec4( normal, 0.0 ) ).xyz;
    // Calculate the clip-space coordinates
    gl_Position = projectionMatrix * modelViewMatrix * vertex;
    outtexture = texturecoord;
}
Fragment shader
#version 120
// in
uniform sampler2D texture;
varying vec2 outtexture;
varying vec3 fragmentNormal;
// out
// gl_FragColor
void main( void )
{
    // Calculate intensity as the dot product of fragmentNormal and the
    // eye direction (0,0,1), clamped to a minimum of 0.15.
    float intensity;
    intensity = max( dot( fragmentNormal, vec3( 0.0, 0.0, 1.0 ) ), 0.15 );
    gl_FragColor = intensity * texture2D( texture, outtexture ); // vec4( 1.0, 0.0, 0.0, 1.0 );
}
I bind buffers this way (prepareBufferObject is a little helper function I took from a Qt sample):
// Prepare the vertex, normal and index buffers
m_vertexBuffer = new QGLBuffer( QGLBuffer::VertexBuffer );
if ( !prepareBufferObject( m_vertexBuffer, QGLBuffer::StaticDraw, vertexData, sizeof(vertexData) ) )
    return;
m_normalBuffer = new QGLBuffer( QGLBuffer::VertexBuffer );
if ( !prepareBufferObject( m_normalBuffer, QGLBuffer::StaticDraw, normalData, sizeof(normalData) ) )
    return;
m_texBuffer = new QGLBuffer( QGLBuffer::IndexBuffer );
if ( !prepareBufferObject( m_texBuffer, QGLBuffer::StaticDraw, textureCoordData, sizeof(textureCoordData) ) )
    return;
m_indexBuffer = new QGLBuffer( QGLBuffer::IndexBuffer );
if ( !prepareBufferObject( m_indexBuffer, QGLBuffer::StaticDraw, indexData, sizeof(indexData) ) )
    return;
loadShaders( "vertexshader120.glsl", "fragshader120.glsl" );
// Enable the "vertex" attribute to bind it to our vertex buffer
m_vertexBuffer->bind();
m_shaderProgram->setAttributeBuffer( "vertex", GL_FLOAT, 0, 4 ); //xyzw
m_shaderProgram->enableAttributeArray( "vertex" );
// Enable the "normal" attribute to bind it to our normals buffer
m_normalBuffer->bind();
m_shaderProgram->setAttributeBuffer( "normal", GL_FLOAT, 0, 3 ); //xyz
m_shaderProgram->enableAttributeArray( "normal" );
m_texBuffer->bind();
m_shaderProgram->setAttributeBuffer( "texturecoord", GL_FLOAT, 0, 2 ); //uv
m_shaderProgram->enableAttributeArray( "texturecoord" );
// Bind the index buffer ready for drawing
m_indexBuffer->bind();
Finally, the paintGL method:
void GWidget::paintGL()
{
    QMatrix4x4 model;
    model.setToIdentity();
    model.rotate(m_rotation);
    QMatrix4x4 mv = m_view * model;
    // MVP = projection * view * model
    // upload the matrices into the shader (could check whether MVP was updated since the last redraw)
    m_shaderProgram->setUniformValue("modelViewMatrix", mv);
    m_shaderProgram->setUniformValue("projectionMatrix", m_projection);
    // set up to render the scene
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // Draw stuff
    glDrawElements( GL_TRIANGLES,    // type of primitive to draw
                    indexDataCount,  // the number of indices in our index buffer we wish to draw
                    GL_UNSIGNED_INT, // the element type of the index buffer
                    0 );             // offset from the start of our index buffer of where to begin
}
Everything works, except that the texture looks misaligned and skewed, both on the development and the target platform. I checked the UVs and that they correspond to the proper vertices, yet it looks like the order of the texture coordinates is wrong. Where is the error?
For reference: source code
This is my first attempt at using the programmable pipeline, so I could be doing something dumb there.
You're setting up your texture coordinate buffer as an index buffer:
m_texBuffer = new QGLBuffer(QGLBuffer::IndexBuffer );
Since it contains vertex attribute data, it should be created as:
m_texBuffer = new QGLBuffer(QGLBuffer::VertexBuffer);
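That leaves the four buffers as follows; only the element indices belong in an index buffer, while everything sourced as a per-vertex attribute goes into vertex buffers:
m_vertexBuffer = new QGLBuffer(QGLBuffer::VertexBuffer); // positions
m_normalBuffer = new QGLBuffer(QGLBuffer::VertexBuffer); // normals
m_texBuffer    = new QGLBuffer(QGLBuffer::VertexBuffer); // texture coords (was IndexBuffer)
m_indexBuffer  = new QGLBuffer(QGLBuffer::IndexBuffer);  // element indices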
First of all, here are the important parts of my code.
Creating the vertices.
D3DVertexTexture Vertices[] =
{
{-1.0f, 1.0f, 0.0f, 0.0f, 0.0f, },
{ 1.0f, 1.0f, 0.0f, 1.0f, 0.0f, },
{ 1.0f, -1.0f, 0.0f, 1.0f, 1.0f, },
{-1.0f, -1.0f, 0.0f, 0.0f, 1.0f, },
};
Creating the vertex buffer.
D3DDevice->CreateVertexBuffer(sizeof(Vertices),
0,
D3DFVF_CUSTOMVERTEXTEXTURE,
D3DPOOL_MANAGED,
&vb,
NULL);
Copying the vertices into the buffer.
void* pVoid;
vb->Lock(0, sizeof(Vertices), (void**) &pVoid, 0); // lock the whole array (sizeof(pVoid) would only be the size of a pointer)
memcpy(pVoid, Vertices, sizeof(Vertices));
vb->Unlock();
Loading the texture.
D3DXCreateTextureFromFile(D3DDevice, "images/tex.png", &t);
Rendering.
D3DDevice->SetFVF(D3DFVF_CUSTOMVERTEXTEXTURE);
D3DDevice->SetTexture(0, t);
D3DDevice->SetStreamSource(0, vb, 0, sizeof(D3DVertexTexture));
D3DDevice->DrawPrimitive(D3DPT_TRIANGLESTRIP, 0, 2);
Here is my problem:
It shows a square, but the left side of the square is missing in a triangular shape, like this.
Vertices A,B,C,D in a triangle strip will produce two triangles: A,B,C and B,C,D
A -- B      A--B         B
|    |       \ |        /|
|    |        \|       / |
D -- C          C   D--C
Look at that diagram and picture those two triangles...
Then go and put your vertices in the right order - triangle strips should 'zig-zag', not proceed in clockwise or anti-clockwise order.
If you order them: A,B,D,C - the quad will draw correctly.
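Concretely, with the question's coordinates and UVs, that A,B,D,C order looks like this (a sketch labelling the corners as in the diagram above):
D3DVertexTexture Vertices[] =
{
    {-1.0f,  1.0f, 0.0f, 0.0f, 0.0f, }, // A: top left
    { 1.0f,  1.0f, 0.0f, 1.0f, 0.0f, }, // B: top right
    {-1.0f, -1.0f, 0.0f, 0.0f, 1.0f, }, // D: bottom left
    { 1.0f, -1.0f, 0.0f, 1.0f, 1.0f, }, // C: bottom right
};
As a strip this produces triangles A,B,D and B,D,C, which together cover the whole quad.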
Have you tried defining your vertices in this order?
D3DVertexTexture Vertices[] =
{
    { 1.0f, -1.0f, 0.0f, 1.0f, 1.0f, },
    {-1.0f, -1.0f, 0.0f, 0.0f, 1.0f, },
    { 1.0f,  1.0f, 0.0f, 1.0f, 0.0f, },
    {-1.0f,  1.0f, 0.0f, 0.0f, 0.0f, },
};
A triangle strip consumes its vertices in zig-zag order, so each new vertex forms a triangle with the previous two. Defining the vertices around the outline of the quad, as in the question, is what leaves one corner uncovered.
Hello. I have been rendering sprites with DirectX the same way for a long time, but here I am rendering the scene to a texture and then rendering that texture to the screen as one big sprite.
For the camera I use this:
vUpVec=D3DXVECTOR3(0,1,0);
vLookatPt=D3DXVECTOR3(0,0,0);
vFromPt=D3DXVECTOR3(0,0,-1);
D3DXMatrixLookAtRH( &matView, &vFromPt, &vLookatPt, &vUpVec );
g_pd3dDevice->SetTransform( D3DTS_VIEW, &matView );
D3DXMatrixOrthoRH( &matProj, 1,1, 0.5f, 20 );
g_pd3dDevice->SetTransform( D3DTS_PROJECTION, &matProj );
And to render the sprite:
CUSTOMVERTEX* v;
spritevb->Lock( 0, 0, (void**)&v, 0 );
v[0].position = D3DXVECTOR3(-0.5f,-0.5f,0); v[0].u=0; v[0].v=1;
v[1].position = D3DXVECTOR3(-0.5f,0.5f,0); v[1].u=0; v[1].v=0;
v[2].position = D3DXVECTOR3(0.5f,-0.5f,0); v[2].u=1; v[2].v=1;
v[3].position = D3DXVECTOR3(0.5f,0.5f,0); v[3].u=1; v[3].v=0;
spritevb->Unlock();
g_pd3dDevice->DrawPrimitive( D3DPT_TRIANGLESTRIP, 0, 2 );
This is very basic and works; my sprite fills the screen.
But looking closer, I can see a thin diagonal line across the screen (between the two polygons) - not a coloured one, but as if the triangles weren't perfectly positioned.
I thought about filtering and tried turning everything off, but maybe I'm forgetting something.
Thanks
The best way to render full screen is to not define any camera transform at all.
If you use these positions as input:
SimpleVertex vertices[] =
{
{ XMFLOAT3( -1.0f, 1.0f, 0.5f ), XMFLOAT2( 0.0f, 0.0f ) },
{ XMFLOAT3( 1.0f, 1.0f, 0.5f ), XMFLOAT2( 1.0f, 0.0f ) },
{ XMFLOAT3( 1.0f, -1.0f, 0.5f ), XMFLOAT2( 1.0f, 1.0f ) },
{ XMFLOAT3( -1.0f, -1.0f, 0.5f ), XMFLOAT2( 0.0f, 1.0f ) },
};
and in the vertex shader do:
VS_OUTPUT RenderSceneVS( VS_INPUT input )
{
    VS_OUTPUT Output;
    Output.Position = input.Position;
    Output.TextureUV = input.TextureUV;
    return Output;
}
you get a full-screen render without having to worry about the viewing frustum. Using this, I never saw any lines between the two triangles.