TRIANGLESTRIP works as required with DrawIndexed, so why not with Draw? - C++

I was trying to draw triangles with user-defined coordinates using DirectX 11:
void triangle(float anchorx, float anchory, float x1, float y1, float x2, float y2, XMFLOAT4 _color)
{
    simplevertex framevertices[3] = {
        { XMFLOAT3(anchorx, ui::height - anchory, 1.0f), XMFLOAT2(0.0f, 0.0f) },
        { XMFLOAT3(x1,      ui::height - y1,      1.0f), XMFLOAT2(1.0f, 0.0f) },
        { XMFLOAT3(x2,      ui::height - y2,      1.0f), XMFLOAT2(1.0f, 1.0f) }
    };

    XMMATRIX world = XMMatrixIdentity();
    dx11::generalcb.world = XMMatrixTranspose(world);
    dx11::generalcb.fillcolor = _color;
    dx11::generalcb.projection = XMMatrixOrthographicOffCenterLH(0.0f, ui::width, 0.0f, ui::height, 0.01f, 100.0f);

    // copy the vertices into the buffer
    D3D11_MAPPED_SUBRESOURCE ms;
    dx11::context->Map(dx11::vertexbuffers::trianglevertexbuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &ms);
    memcpy(ms.pData, framevertices, sizeof(simplevertex) * 3);
    dx11::context->Unmap(dx11::vertexbuffers::trianglevertexbuffer, 0);

    dx11::context->VSSetShader(dx11::shaders::simplevertexshader, NULL, 0);
    dx11::context->IASetVertexBuffers(0, 1, &dx11::vertexbuffers::trianglevertexbuffer, &dx11::verticestride, &dx11::verticeoffset);
    dx11::context->IASetInputLayout(dx11::shaders::simplevertexshaderlayout);
    dx11::context->PSSetShader(dx11::shaders::panelpixelshader, NULL, 0);
    dx11::context->UpdateSubresource(dx11::general_cb, 0, NULL, &dx11::generalcb, 0, 0);
    dx11::context->IASetIndexBuffer(dx11::indexbuffers::triangleindexbuffer, DXGI_FORMAT_R16_UINT, 0);
    dx11::context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);
    dx11::context->DrawIndexed(4, 0, 0);
    //dx11::context->Draw(3, 0);
}
Subtraction on the Y axis when filling pos.y data:
XMFLOAT3(anchorx, ui::height - anchory, 1.0f)
XMFLOAT3(x1, ui::height - y1, 1.0f)
XMFLOAT3(x2, ui::height - y2, 1.0f)
My orthographic projection puts the origin at the bottom-left of the screen, so I subtract the passed Y coordinate from the window height to get the proper position on the Y axis. I'm not sure this can affect my problem, because it worked well with all the other primitives (rectangles, textures, filled rectangles, lines and circles).
Index order defined in the index buffer:
unsigned short triangleindices[6] = { 0, 1, 2, 0, 2, 3 };
I'm actually using this index buffer to render rectangles, so it's set up to render two triangles forming a quad; I didn't bother to create a separate triangle index buffer.
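For completeness, a 16-bit index buffer like that is typically created along these lines (a minimal sketch; the dx11::device name is an assumption chosen to match the naming above):

D3D11_BUFFER_DESC ibd = {};
ibd.Usage = D3D11_USAGE_IMMUTABLE;          // the indices never change
ibd.ByteWidth = sizeof(unsigned short) * 6;
ibd.BindFlags = D3D11_BIND_INDEX_BUFFER;

D3D11_SUBRESOURCE_DATA initdata = {};
initdata.pSysMem = triangleindices;         // the { 0, 1, 2, 0, 2, 3 } array above

ID3D11Buffer* triangleindexbuffer = nullptr;
dx11::device->CreateBuffer(&ibd, &initdata, &triangleindexbuffer);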
trianglevertexbuffer contains an array of 4 simplevertex elements:
// A VERTEX STRUCT
struct simplevertex
{
    XMFLOAT3 pos; // pixel coordinates
    XMFLOAT2 uv;  // corresponding texture coordinates (where to sample the color)
};
I don't use the UV data in the function above, I just fill it with placeholder values, because the color is passed via the constant buffer. As you can see, I also memcpy only the first 3 elements of the array into the vertex buffer, since a triangle requires only 3.
VERTEX SHADER:
// SIMPLE VERTEX SHADER
cbuffer buffer : register(b0) // constant buffer, bound to slot 0
{
    matrix world;      // world matrix
    matrix projection; // projection matrix, orthographic for now
    float4 bordercolor;
    float4 fillcolor;
    float blendindex;
};

// simple vertex shader
float4 main(float4 input : POSITION) : SV_POSITION
{
    float4 output = (float4)0;        // zero-initialize (may be skipped, it is fully overwritten below)
    output = mul(input, world);       // multiply the vertex position by the world matrix from the constant buffer
    output = mul(output, projection); // then by the projection matrix (orthographic for now), giving the final clip-space position
    return output;
}
PIXEL SHADER:
// SIMPLE PIXEL SHADER RETURNING THE COLOR PASSED IN THE CONSTANT BUFFER
cbuffer buffer : register(b0) // same constant buffer layout, bound to slot 0
{
    matrix world;      // world matrix
    matrix projection; // projection matrix, orthographic for now
    float4 bordercolor;
    float4 fillcolor;
    float blendindex;
};

// pixel shader returning the preset color from the constant buffer
float4 main(float4 input : SV_POSITION) : SV_Target
{
    return fillcolor; // just return the color passed in the constant buffer
}
Layout used in the function (it declares only POSITION; the input assembler fills the missing w of the shader's float4 POSITION input with 1.0, and as long as the stride passed to IASetVertexBuffers is sizeof(simplevertex), the uv bytes are simply skipped):
{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 };
Then I added the following lines to the rendering loop:
if (ui::keys[VK_F1]) tools::triangle(700, 100, 400, 300, 850, 700, colors::blue);
if (ui::keys[VK_F2]) tools::triangle(400, 300, 700, 100, 850, 700, colors::blue);
if (ui::keys[VK_F3]) tools::triangle(700, 100, 850, 700, 400, 300, colors::blue);
if (ui::keys[VK_F4]) tools::triangle(850, 700, 400, 300, 700, 100, colors::blue);
if (ui::keys[VK_F5]) tools::triangle(850, 700, 700, 100, 400, 300, colors::blue);
if (ui::keys[VK_F6]) tools::triangle(400, 300, 850, 700, 700, 100, colors::blue);
The point of this setup is to accept the three corner coordinates in any order and still get the triangle rendered, and this is actually the point of this question: with some coordinate orders the triangle did not get rendered, so I came to the conclusion that it comes from the TOPOLOGY. As you can see, at the moment I have:
dx11::context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);
dx11::context->DrawIndexed(4, 0, 0);
This is the only combination that draws all the triangles, but I honestly don't understand how that happens: from what I know, the STRIP topology is used with the context->Draw function, while LIST works with an index-buffer setup. I tried the following:
dx11::context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
dx11::context->DrawIndexed(4, 0, 0);
Triangles F1, F5 and F6 were not drawn. Alright, next:
dx11::context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);
//dx11::context->DrawIndexed(4, 0, 0);
dx11::context->Draw(3, 0);
Same story, F1, F5 and F6 are not rendered.
I cannot understand what is going on. You may find the code a bit primitive, but I only want to know why I get a working result only with the combination of STRIP topology and the DrawIndexed function. I hope I provided enough information; sorry if not, I'll correct it on demand. Thank you :)

Got it.
First of all, in my rasterizer settings I had rasterizerState.CullMode = D3D11_CULL_FRONT (I don't remember why, maybe I just copy-pasted someone else's working code). I changed it to D3D11_CULL_NONE and everything worked as intended:
dx11::context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);
dx11::context->Draw(3, 0);
Second, I found out that whether a primitive is treated as facing the view depends on the winding order of its vertices on screen; which winding counts as front-facing is controlled by the FrontCounterClockwise field of the rasterizer state (with the Direct3D 11 default of FALSE, triangles wound clockwise face the viewer, and counter-clockwise ones show their "back" instead). This also explains why STRIP topology with DrawIndexed(4, 0, 0) always worked: with the indices { 0, 1, 2, 0 } the strip assembles the triangles (0,1,2) and (1,0,2), the same triangle with both winding orders, so one of the two always survived the culling.
So I decided to keep D3D11_CULL_NONE instead of adding math logic to enforce a particular vertex order; I don't know for sure whether D3D11_CULL_NONE costs performance, though.
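For reference, a minimal sketch of the state change described above (only CullMode actually matters here; the other fields and the dx11::device name are assumptions to keep the sketch self-contained):

D3D11_RASTERIZER_DESC rd = {};
rd.FillMode = D3D11_FILL_SOLID;
rd.CullMode = D3D11_CULL_NONE;    // was D3D11_CULL_FRONT
rd.FrontCounterClockwise = FALSE; // D3D11 default: clockwise winding = front face
rd.DepthClipEnable = TRUE;

ID3D11RasterizerState* rasterizerState = nullptr;
if (SUCCEEDED(dx11::device->CreateRasterizerState(&rd, &rasterizerState)))
    dx11::context->RSSetState(rasterizerState);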

Related

OpenGL: Unexpected behaviour when trying to draw a line to visualise a mouse ray cast

I was experimenting with ray casting to implement mouse picking in my application.
What I wanted to do is literally draw the line that is cast from the screen when the mouse is clicked.
Here's what I did:
float xNormalised = ((float)200/w.getWidth())*2 -1;
float yNormalised = -(((float)300/w.getHeight())*2 -1);
glm::vec4 nearP = glm::vec4(xNormalised,yNormalised,-1,1);
glm::vec4 farP = glm::vec4(xNormalised,yNormalised,1,1);
For simplicity I hard-code the mouse coordinates from which the ray should be cast,
map them to normalised device coordinates, and then create two vectors representing two points: the point nearest to the screen and the farthest point on the same line.
worldNear = getInversedPoint(nearP,proj, view);
worldFar = getInversedPoint(farP,proj, view);
Then I call this function on each of the points to basically reverse the pipeline and get their world-space versions. Here's what it does:
glm::vec4 getInversedPoint(glm::vec4 point, glm::mat4& proj, glm::mat4 view)
{
    //glm::mat4 inv = inverse(proj*view);
    point = point * inverse(proj);
    auto v = vec4(point.x / point.w, point.y / point.w, point.z / point.w, 1);
    //auto v = vec4(point.x/point.w, point.y/point.w, point.z/point.w, point.w);
    point = v * inverse(view);
    return vec4(point.x, point.y, point.z, point.w);
}
After this I should have the two points in world space, if I didn't get it wrong,
so I put the two points in a buffer and call glDraw* to draw the line that connects them.
What I'm expecting is basically a red dot at the coordinates I clicked, because the line should go straight from the click point to the point directly behind it... but instead I get a strange line more or less in the center of the screen, which is not where I specified.
What am I doing wrong?
You can see the red line in the center, near the sideways triangle.
My vertex shader:
#version 330 core
layout(location = 1) in vec4 pos;
uniform mat4 projectMat;
uniform mat4 viewMat;
void main() {
    gl_Position = pos;
}
I also tried multiplying the points by the two matrices as well, but it only slightly changed the result.
You can use glm::unProject to get your near and far points:
glm::vec3 near = glm::unProject(glm::vec3(mouseX, winSize.y - mouseY, 0), // the screen-space coordinate on the near plane (depth 0)
                                camera.view,
                                camera.projection,
                                glm::vec4(0, 0, winSize)); // your viewport
glm::vec3 far = glm::unProject(glm::vec3(mouseX, winSize.y - mouseY, 1), // the screen-space coordinate on the far plane (depth 1)
                               camera.view,
                               camera.projection,
                               glm::vec4(0, 0, winSize)); // your viewport
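From those two points the picking ray follows directly (a small sketch using the names above):

glm::vec3 rayOrigin = near;
glm::vec3 rayDirection = glm::normalize(far - near); // unit direction from the near plane into the scene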
You could also get the point under the mouse directly by reading the framebuffer :
/** read depth buffer and get the point under the cursor **/
float depth;
glReadPixels(mouseX, winSize.y - mouseY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);
if (depth < 1.0)
    point_on_plane = glm::unProject(glm::vec3(mouseX, winSize.y - mouseY, depth),
                                    camera.view,
                                    camera.projection,
                                    glm::vec4(0, 0, winSize));

How to convert XMMATRIX to D3DMATRIX in DirectX 9?

I'm learning DirectX (DirectX 9) from www.directxtutorial.com, using Visual Studio 2012 on Windows 8.
d3dx9 (D3DX) has been replaced by other headers like DirectXMath, so I replaced everything that was needed, but there is one remaining problem: converting XMMATRIX to D3DMATRIX.
The problem code (the problem lines are marked with /*problem!*/):
void render_frame(void)
{
    // clear the window to a deep blue
    d3ddev->Clear(0, NULL, D3DCLEAR_TARGET, D3DCOLOR_XRGB(0, 0, 0), 1.0f, 0);
    d3ddev->BeginScene(); // begins the 3D scene

    // select which vertex format we are using
    d3ddev->SetFVF(CUSTOMFVF);

    // SET UP THE PIPELINE
    DirectX::XMMATRIX matRotateY; // a matrix to store the rotation information
    static float index = 0.0f;
    index += 0.05f; // an ever-increasing float value
    // build a matrix to rotate the model based on the increasing float value
    matRotateY = DirectX::XMMatrixRotationY(index);
    D3DMATRIX D3DMatRotateY = matRotateY.r;
    // tell Direct3D about our matrix
    d3ddev->SetTransform(D3DTS_WORLD, &matRotateY); /*problem!*/

    DirectX::XMMATRIX matView; // the view transform matrix
    DirectX::XMVECTOR CameraPosition = { 0.0f, 0.0f, 10.0f };
    DirectX::XMVECTOR LookAtPosition = { 0.0f, 0.0f, 0.0f };
    DirectX::XMVECTOR TheUpDirection = { 0.0f, 1.0f, 0.0f };
    matView = DirectX::XMMatrixLookAtLH(CameraPosition, // the camera position
                                        LookAtPosition, // the look-at position
                                        TheUpDirection); // the up direction
    d3ddev->SetTransform(D3DTS_VIEW, &matView); /*problem!*/ // set the view transform to matView

    DirectX::XMMATRIX matProjection; // the projection transform matrix
    DirectX::XMMatrixPerspectiveFovLH(&matProjection,
                                      DirectX::XMConvertToRadians(45), // the field of view
                                      1.0f, // the near view-plane
                                      100.0f); // the far view-plane
    d3ddev->SetTransform(D3DTS_PROJECTION, &matProjection); /*problem!*/ // set the projection

    // select the vertex buffer to display
    d3ddev->SetStreamSource(0, v_buffer, 0, sizeof(CUSTOMVERTEX));
    // copy the vertex buffer to the back buffer
    d3ddev->DrawPrimitive(D3DPT_TRIANGLELIST, 0, 1);

    d3ddev->EndScene(); // ends the 3D scene
    d3ddev->Present(NULL, NULL, NULL, NULL); // displays the created frame on the screen
}
You can use XMStoreFloat4x4 to convert an XMMATRIX to an XMFLOAT4X4.
You should then be able to pass the XMFLOAT4X4 to SetTransform with a cast, since XMFLOAT4X4 and D3DMATRIX have the same 4x4 float layout. Note that in DirectXMath XMMatrixPerspectiveFovLH returns the matrix instead of filling an output pointer, and it also needs an aspect-ratio argument (missing from your call):
DirectX::XMMATRIX matProjection = DirectX::XMMatrixPerspectiveFovLH(
    DirectX::XMConvertToRadians(45), // the vertical field of view
    aspectRatio,                     // back-buffer width / height
    1.0f,                            // the near view-plane
    100.0f);                         // the far view-plane
DirectX::XMFLOAT4X4 projectionMatrix;
DirectX::XMStoreFloat4x4(&projectionMatrix, matProjection);
d3ddev->SetTransform(D3DTS_PROJECTION, (D3DMATRIX*)&projectionMatrix); // set the projection
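The same pattern applies to the world and view transforms from the question; for example, a sketch reusing the matView variable from above:

DirectX::XMFLOAT4X4 viewMatrix;
DirectX::XMStoreFloat4x4(&viewMatrix, matView);
d3ddev->SetTransform(D3DTS_VIEW, (D3DMATRIX*)&viewMatrix); // set the view transform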

Depth value read by glReadPixels is always 1

I'm using glReadPixels to get the depth value of a selected pixel, but I always get 1. How can I solve this? Here is the code:
glEnable(GL_DEPTH_TEST);
..
glReadPixels(x, viewport[3] - y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, z);
Am I missing anything? My rendering code is shown below. I use different shaders to draw different parts of the scene, so what is the correct way to read the depth value from the buffer?
void onDisplay(void)
{
    // Clear the window and the depth buffer
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // calculate the view matrix.
    GLFrame eyeFrame;
    eyeFrame.MoveUp(gb_eye_height);
    eyeFrame.RotateWorld(gb_eye_theta * 3.1415926 / 180.0, 1.0, 0.0, 0.0);
    eyeFrame.RotateWorld(gb_eye_phi * 3.1415926 / 180.0, 0.0, 1.0, 0.0);
    eyeFrame.MoveForward(-gb_eye_radius);
    eyeFrame.GetCameraMatrix(gb_hit_modelview);
    gb_modelViewMatrix.PushMatrix(gb_hit_modelview);

    // draw coordinate system
    if (gb_bCoord)
    {
        DrawCoordinateAxis();
    }

    if (gb_bTexture)
    {
        GLfloat vEyeLight[] = { -100.0f, 100.0f, 150.0f };
        GLfloat vAmbientColor[] = { 0.2f, 0.2f, 0.2f, 1.0f };
        GLfloat vDiffuseColor[] = { 1.0f, 1.0f, 1.0f, 1.0f };
        glUseProgram(normalMapShader);
        glUniform4fv(locAmbient, 1, vAmbientColor);
        glUniform4fv(locDiffuse, 1, vDiffuseColor);
        glUniform3fv(locLight, 1, vEyeLight);
        glUniform1i(locColorMap, 0);
        glUniform1i(locNormalMap, 1);
        gb_treeskl.Display(SetGeneralColor, SetSelectedColor, 0);
    }
    else
    {
        if (!gb_bOnlyVoxel)
        {
            if (gb_bPoints)
            {
                //GLfloat vPointColor[] = { 1.0, 1.0, 0.0, 0.6 };
                GLfloat vPointColor[] = { 0.2, 0.0, 0.0, 0.9 };
                gb_shaderManager.UseStockShader(GLT_SHADER_FLAT, gb_transformPipeline.GetModelViewProjectionMatrix(), vPointColor);
                gb_treeskl.Display(NULL, NULL, 1);
            }
            if (gb_bSkeleton)
            {
                GLfloat vEyeLight[] = { -100.0f, 100.0f, 150.0f };
                glUseProgram(adsPhongShader);
                glUniform3fv(locLight, 1, vEyeLight);
                gb_treeskl.Display(SetGeneralColor, SetSelectedColor, 0);
            }
        }
        if (gb_bVoxel)
        {
            GLfloat vEyeLight[] = { -100.0f, 100.0f, 150.0f };
            glUseProgram(adsPhongShader);
            glUniform3fv(locLight, 1, vEyeLight);
            SetVoxelColor();
            glPolygonMode(GL_FRONT, GL_LINE);
            glLineWidth(1.0f);
            gb_treeskl.DisplayVoxel();
            glPolygonMode(GL_FRONT, GL_FILL);
        }
    }

    //glUniformMatrix4fv(locMVP, 1, GL_FALSE, gb_transformPipeline.GetModelViewProjectionMatrix());
    //glUniformMatrix4fv(locMV, 1, GL_FALSE, gb_transformPipeline.GetModelViewMatrix());
    //glUniformMatrix3fv(locNM, 1, GL_FALSE, gb_transformPipeline.GetNormalMatrix());
    //gb_sphereBatch.Draw();
    gb_modelViewMatrix.PopMatrix();

    glutSwapBuffers();
}
I think you are reading it correctly; the only problem is that you are not linearizing the depth from the buffer back into the <znear...zfar> range, hence the ~1 value for the whole screen: because of the logarithmic-like distribution of depth-buffer values, almost all of them are very close to 1.
I am doing it like this:
double glReadDepth(double x, double y, double *per = NULL) // x,y [pixels], per[16]
{
    GLfloat _z = 0.0; double m[16], z, zFar, zNear;
    if (per == NULL) { per = m; glGetDoublev(GL_PROJECTION_MATRIX, per); } // use actual perspective matrix if not passed
    zFar = 0.5 * per[14] * (1.0 - ((per[10] - 1.0) / (per[10] + 1.0)));    // compute zFar from perspective matrix
    zNear = zFar * (per[10] + 1.0) / (per[10] - 1.0);                      // compute zNear from perspective matrix
    glReadPixels(x, y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &_z);           // read depth value
    z = _z;                                                                // logarithmic
    z = (2.0 * z) - 1.0;                                                   // logarithmic NDC
    z = (2.0 * zNear * zFar) / (zFar + zNear - (z * (zFar - zNear)));      // linear <zNear,zFar>
    return -z;
}
Do not forget that x,y are in pixels and that (0,0) is the bottom-left corner!!! The returned depth is in the range <zNear,zFar>. The function assumes you are using a perspective transform like this:
void glPerspective(double fovy, double aspect, double zNear, double zFar)
{
    double per[16], f;
    for (int i = 0; i < 16; i++) per[i] = 0.0;
    // original gluPerspective
    // f = divide(1.0, tan(0.5*fovy*deg));
    // per[ 0] = f/aspect;
    // per[ 5] = f;
    // corrected gluPerspective (here deg is a degrees-to-radians constant
    // and divide() a zero-safe division helper)
    f = divide(1.0, tan(0.5 * fovy * deg * aspect));
    per[ 0] = f;
    per[ 5] = f * aspect;
    // z range
    per[10] = divide(zFar + zNear, zNear - zFar);
    per[11] = -1.0;
    per[14] = divide(2.0 * zFar * zNear, zNear - zFar);
    glLoadMatrixd(per);
}
Beware: without a linear depth buffer, the depth accuracy will only be good for objects close to the camera. For more info see:
How to correctly linearize depth in OpenGL ES in iOS?
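A quick usage sketch of the glReadDepth function above (mouseX, mouseY and windowHeight are assumptions from your windowing code; note the Y flip mentioned earlier):

double depth = glReadDepth(mouseX, windowHeight - mouseY); // linearized depth under the cursor, in <zNear,zFar>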
If the problem persists, there might be another reason for it. Do you have a depth buffer in your pixel format? On Windows you can check like this:
Getting a window's pixel format
A missing depth buffer could explain why the value is always exactly 1 (rather than something like ~0.997). In that case you need to change your window initialization to enable some bits for the depth buffer (16/24/32). See:
What is the proper OpenGL initialisation on Intel HD 3000?
For more detailed info about using this technique (with a C++ example) see:
OpenGL 3D-raypicking with high poly meshes
Well, you missed pasting the really relevant parts of the code. Also, the status of the depth-testing unit has no influence on what glReadPixels delivers. How about posting your rendering code as well?
Update
After a buffer swap (SwapBuffers) the contents of the back buffer are undefined, and the default state for framebuffer reads is to read from the back buffer. Technically, double buffering happens only for the color component, not the depth and stencil components, but you might be running into a driver issue there.
I suggest two tests to rule those out (a sketch follows the list):
Do a read of the depth buffer with glReadBuffer(GL_BACK); right before the SwapBuffers.
Select the front buffer for reading with glReadBuffer(GL_FRONT); after the SwapBuffers.
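A minimal sketch of both tests (x, y and viewportHeight are assumptions from your windowing code):

GLfloat depth = 0.0f;

// Test 1: explicitly read the back buffer right before the swap
glReadBuffer(GL_BACK);
glReadPixels(x, viewportHeight - y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);
glutSwapBuffers();

// Test 2: read the front buffer right after the swap
glReadBuffer(GL_FRONT);
glReadPixels(x, viewportHeight - y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);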
Also, please specify in which context (program flow, not just the OpenGL context, though that too) you perform your glReadPixels when this problem occurs. Also check whether you can read color values correctly.

How can I copy parts of an image from the buffer into a texture to render?

I have been searching around for a simple solution, but I have not found anything. Currently I am loading a texture from a file and rendering it into the buffer, using C++ (Visual Studio 2012 Express) and DirectX 9. What I want to do is be able to copy parts of the buffer and use the copied part as the texture, instead of the loaded one.
I want to be able to copy/select like a map editor would do.
EDIT: Problem solved :) It was just dumb mistakes.
You can use the StretchRect function (see documentation).
You should copy a subset of the source buffer into the whole destination buffer (which is the new texture's buffer in your case). Something like this:
LPDIRECT3DTEXTURE9 pTexSrc, // source texture
                   pTexDst; // new texture (a subset of the source texture)
// create the textures
// ...
LPDIRECT3DSURFACE9 pSrc, pDst;
pTexSrc->GetSurfaceLevel(0, &pSrc);
pTexDst->GetSurfaceLevel(0, &pDst);

RECT rect; // (x0, y0, x1, y1) - coordinates of the subset to copy
rect.left = x0;
rect.right = x1;
rect.top = y0;
rect.bottom = y1;

pd3dDevice->StretchRect(pSrc, &rect, pDst, NULL, D3DTEXF_NONE);
// the last parameter could also be D3DTEXF_POINT or D3DTEXF_LINEAR

pSrc->Release();
pDst->Release(); // remember to release the surfaces when done !!!
EDIT:
OK, I've just gone through the tons of your code and I think the best solution would be to use UV coordinates instead of copying subsets of the palette texture. You should calculate the appropriate UV coordinates for a given tile in game_class::game_gui_add_current_graphic and use them in the CUSTOMVERTEX structure:
float width;        // the width of the palette texture
float height;       // the height of the palette texture
float tex_x, tex_y; // the coordinates of the upper left corner of the
                    // palette texture's subset to use for the current tile
float tex_w, tex_h; // the width and height of the above mentioned subset

float u0, u1, v0, v1;
u0 = tex_x / width;
v0 = tex_y / height;
u1 = u0 + tex_w / width;
v1 = v0 + tex_h / height;

// create the vertices using the CUSTOMVERTEX struct
CUSTOMVERTEX vertices[] = {
    {  0.0f, 32.0f, 1.0f, u0, v1, D3DCOLOR_XRGB(255, 0, 0), },
    {  0.0f,  0.0f, 1.0f, u0, v0, D3DCOLOR_XRGB(255, 0, 0), },
    { 32.0f, 32.0f, 1.0f, u1, v1, D3DCOLOR_XRGB(0, 0, 255), },
    { 32.0f,  0.0f, 1.0f, u1, v0, D3DCOLOR_XRGB(0, 255, 0), } };
Example: your palette consists of 3 rows and 4 columns, giving 12 possible cell textures, each 32 x 32. So tex_w = tex_h = 32;, width = 4 * tex_w; and height = 3 * tex_h;. Suppose you want the UV coordinates of a tile textured with the image in the second row and third column of the palette. Then tex_x = (3-1)*tex_w; and tex_y = (2-1)*tex_h;. Finally, you calculate the UVs as in the code above (in this example you'll get {u0,v0,u1,v1} = {(3-1)/4, (2-1)/3, 3/4, 2/3} = {0.5, 0.33, 0.75, 0.66}).
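That calculation generalizes to a small helper like this (a sketch; the 1-based row/column convention matches the example above, and all the names are hypothetical):

struct UVRect { float u0, v0, u1, v1; };

// compute the UV rectangle of a palette cell (1-based row/column)
UVRect paletteCellUV(int row, int col, int rows, int cols)
{
    UVRect r;
    r.u0 = (float)(col - 1) / cols;
    r.v0 = (float)(row - 1) / rows;
    r.u1 = (float)col / cols;
    r.v1 = (float)row / rows;
    return r;
}

// paletteCellUV(2, 3, 3, 4) yields { 0.5f, 0.333f, 0.75f, 0.667f },
// matching the worked example above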

DirectX Clip space texture coordinates

Okay first up I am using:
DirectX 10
C++
Okay, this is a bit of a bizarre one to me. I wouldn't usually ask the question, but I've been forced by circumstance. I have two full-screen triangles (not a quad, for reasons I won't go into!), aligned to the screen by virtue of not being transformed.
In the DirectX vertex declaration I am passing a 3-component float (Pos x,y,z) and a 2-component float (Texcoord x,y). Texcoord z is reserved for texture2D arrays, and I'm currently defaulting it to 0 in the pixel shader.
I wrote this to achieve the simple task:
float fStartX = -1.0f;
float fEndX = 1.0f;
float fStartY = 1.0f;
float fEndY = -1.0f;
float fStartU = 0.0f;
float fEndU = 1.0f;
float fStartV = 0.0f;
float fEndV = 1.0f;
vmvUIVerts.push_back(CreateVertex(fStartX, fStartY, 0, fStartU, fStartV));
vmvUIVerts.push_back(CreateVertex(fEndX, fStartY, 0, fEndU, fStartV));
vmvUIVerts.push_back(CreateVertex(fEndX, fEndY, 0, fEndU, fEndV));
vmvUIVerts.push_back(CreateVertex(fStartX, fStartY, 0, fStartU, fStartV));
vmvUIVerts.push_back(CreateVertex(fEndX, fEndY, 0, fEndU, fEndV));
vmvUIVerts.push_back(CreateVertex(fStartX, fEndY, 0, fStartU, fEndV));
IA Layout: (Update)
D3D10_INPUT_ELEMENT_DESC ieDesc[2] = {
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0,  0, D3D10_INPUT_PER_VERTEX_DATA, 0 },
    { "TEXCOORD", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 12, D3D10_INPUT_PER_VERTEX_DATA, 0 }
};
Data reaches the vertex shader in the following format: (Update)
struct VS_INPUT
{
    float3 fPos      :POSITION;
    float3 fTexcoord :TEXCOORD0;
};
Within my vertex and pixel shaders not a lot is happening for this particular draw call; the pixel shader does most of the work, sampling from a texture using the specified UV coordinates. However, this isn't working quite as expected: it appears that I am getting only 1 pixel of the sampled texture.
The workaround was to do the following in the pixel shader: (Update)
sampler s0 : register(s0);
Texture2DArray<float4> meshTex : register(t0);

float4 psMain(in VS_OUTPUT vOut) : SV_TARGET
{
    float4 Color;
    vOut.fTexcoord.z = 0;
    vOut.fTexcoord.x = vOut.fPosObj.x * 0.5f;
    vOut.fTexcoord.y = vOut.fPosObj.y * 0.5f;
    vOut.fTexcoord.x += 0.5f;
    vOut.fTexcoord.y += 0.5f;
    Color = meshTex.Sample(s0, vOut.fTexcoord);
    Color.a = 1.0f;
    return Color;
}
It is also worth noting that this worked with the following VS output struct defined in the shaders:
struct VS_OUTPUT
{
    float4 fPos      :POSITION0; // SV_POSITION won't work in this case
    float3 fTexcoord :TEXCOORD0;
};
Now I have a texture that's stretched to fit the entire screen; both triangles already cover the screen, but why didn't the texture UVs get used as expected?
To clarify, I am using a point sampler and have tried both clamp and wrap addressing for the UVs.
I was a bit curious and found the solution / workaround mentioned above, but I'd prefer not to have to use it, if anyone knows why this is happening?
What semantics are you specifying for your vertex type? Are they properly aligned with your vertices and with your shader? If you are using a D3DXVECTOR4, D3DXVECTOR3 setup (as shown in your VS code), this could be a problem if your CreateVertex() returns a D3DXVECTOR3, D3DXVECTOR2 struct.
It would be reassuring to see your pixel-shader code too.
Okay, well, for one, texture coordinates outside the 0..1 range get clamped. I made the mistake of assuming that by going to clip space -1 to +1, the texture coordinates would do the same. This is not the case: they still go from 0.0 to 1.0.
The reason the code in the pixel shader worked is that it used the clip-space x,y,z coordinates to overwrite those texture coordinates automatically; an oversight on my part. However, the pixel-shader code results in texture stretch on a full-screen 'quad', so it might be useful to someone somewhere ;)
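In other words, the UVs have to be supplied (or computed) separately from the clip-space positions; the mapping from a clip-space corner back to the original UVs is u = x * 0.5 + 0.5 and v = 1 - (y * 0.5 + 0.5). A sketch reproducing the question's hard-coded values:

// V is flipped because NDC +Y points up while texture V grows downward
float fStartU = fStartX * 0.5f + 0.5f;          // -1 -> 0
float fEndU   = fEndX   * 0.5f + 0.5f;          // +1 -> 1
float fStartV = 1.0f - (fStartY * 0.5f + 0.5f); // +1 -> 0
float fEndV   = 1.0f - (fEndY   * 0.5f + 0.5f); // -1 -> 1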