Are HLSL loops broken?

I've been messing around with HLSL and Direct3D12 and encountered something strange. Here's a minimal reproducible example:
float4 main() : SV_TARGET
{
    uint indices[3];
    indices[0] = 0;
    uint l = 0;
    [loop] while (true) {
        if (indices[l] == 1)
            return float4(1.0f, 0.0f, 0.0f, 1.0f);
        indices[l + 1] = 1;
        if (++l == 2)
            break;
    }
    return float4(0.0f, 1.0f, 0.0f, 1.0f);
}
The code is completely pointless, but at least in my eyes there shouldn't be a problem with it, and it should return red. However, compiling and running it (with shader model 5_0, with and without optimizations) strangely produces neither red nor green; it doesn't seem to return anything at all, because even though I have all blending disabled, nothing is drawn.
Changing seemingly meaningless things, like removing the [loop] attribute or shrinking indices to 2 elements, makes the strange behavior disappear.
Does anyone know what's going on here? Are HLSL loops simply broken?

Related

Trying to Make Endless Runner C++ with OpenGL

I have an array of cube objects initialised like so (index 0 not used here as that's for the player):
game_object[1] = new GameObject();
game_object[1]->setPosition(vec3(7.0f, 0.0f, 0.0f));
game_object[2] = new GameObject();
game_object[2]->setPosition(vec3(14.0f, 0.0f, 0.0f));
game_object[3] = new GameObject();
game_object[3]->setPosition(vec3(21.0f, 0.0f, 0.0f));
game_object[4] = new GameObject();
game_object[4]->setPosition(vec3(36.0f, 0.0f, 0.0f));
game_object[5] = new GameObject();
game_object[5]->setPosition(vec3(42.0f, 0.0f, 0.0f));
I have a render function in which they are drawn:
glDrawElements(GL_TRIANGLES, 3 * INDICES, GL_UNSIGNED_INT, NULL);
In my update they move to the left as expected. To do this I am just adding another vector to their positions:
for (int i = 1; i < MAX_CUBES; i++)
{
    game_object[i]->setPosition(game_object[i]->getPosition() + vec3(-0.03, 0.0, 0.00));
}
However, I want the cubes to repeat this until the user exits the game. I made a reset function to send them back to their starting positions:
void Game::reset()
{
    game_object[0]->setPosition(vec3(0.0f, 0.0f, 0.0f));
    game_object[1]->setPosition(vec3(7.0f, 0.0f, 0.0f));
    game_object[2]->setPosition(vec3(14.0f, 0.0f, 0.0f));
    game_object[3]->setPosition(vec3(21.0f, 0.0f, 0.0f));
    game_object[4]->setPosition(vec3(36.0f, 0.0f, 0.0f));
    game_object[5]->setPosition(vec3(42.0f, 0.0f, 0.0f));
}
This function gets called in the update when the final cube's position is off screen to the left:
if (game_object[5]->getPosition().x <= 0.0)
{
    reset();
}
However, this isn't working. Nothing resets after the last cube goes to the left.
Not sure how you are using game_object here, but it looks very error-prone. If you have MAX_CUBES = 5 (as you do have 5 cubes), then that for-loop will miss the last one. Adding further objects (e.g. for gaps, vertical rules, hazards, etc.) will make it even more so.
for (int i = 1; i < MAX_CUBES; i++)
{
    game_object[i]->setPosition(game_object[i]->getPosition() + vec3(-0.03, 0.0, 0.00));
}
If MAX_CUBES = 5, then it will move indices 1, 2, 3 and 4, but not 5, which is the one you check in the condition. Index 5 will just stay at 42 permanently (is that off-screen?).
Stepping through the code in a debugger will make a problem like this pretty clear regardless, and it is an essential tool for programming. Maybe the code just never reaches the if (game_object[5]->getPosition().x <= 0.0) check in the first place? Is there any return in that update function, or is that condition nested inside another one of some sort? A corrected sketch of the update follows below.
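Here is a minimal sketch of the fix, assuming MAX_CUBES is the total number of entries (6 here: the player at index 0 plus 5 cubes) and the same GameObject/vec3 API as your snippets:
const int MAX_CUBES = 6; // player + 5 cubes

void Game::update()
{
    // Move every cube, including game_object[5], which the reset condition checks.
    for (int i = 1; i < MAX_CUBES; i++)
    {
        game_object[i]->setPosition(game_object[i]->getPosition() + vec3(-0.03f, 0.0f, 0.0f));
    }

    // Reset once the last cube has moved past the threshold.
    if (game_object[MAX_CUBES - 1]->getPosition().x <= 0.0f)
    {
        reset();
    }
}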
Because in your comment you noted that game_object[5]->getPosition().x returns a correct value, the most likely problem is with your reset() function and the setPosition function you are using.
1. Check whether setPosition is working in the first place
Perhaps there is an error with setPosition().
After you set the position using setPosition() and then log the object's coordinates using getPosition(), do you get the position you expect?
If not, something is wrong with setPosition.
If so, then...
2. You probably changed the position but failed to render it!
This is a very common problem: there is a very high chance you changed the position of the object BUT didn't update what's shown on the screen! A sketch of what the per-frame draw might look like follows below.
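For example, a sketch only (assuming GLM, as your vec3 usage suggests, and a per-object model matrix; modelLoc and the uniform upload are hypothetical names, not from your code):
for (int i = 0; i < MAX_CUBES; i++)
{
    // Rebuild the transform from the object's *current* position each frame...
    mat4 model = translate(mat4(1.0f), game_object[i]->getPosition());
    // ...and re-upload it before drawing; otherwise the cubes keep rendering
    // with stale transforms even though setPosition() succeeded.
    glUniformMatrix4fv(modelLoc, 1, GL_FALSE, value_ptr(model));
    glDrawElements(GL_TRIANGLES, 3 * INDICES, GL_UNSIGNED_INT, NULL);
}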
3. Side note for scalability
There is a much more efficient and scalable way of doing a reset if you eventually have more than 5 objects: place their reset values in an array and loop through them:
#define MAX_CUBES 6
double resetPositions_x[MAX_CUBES] = { 0.0, 7.0, 14.0, 21.0, 36.0, 42.0 };

void Game::reset()
{
    for (int i = 0; i < MAX_CUBES; i++) {
        game_object[i]->setPosition(vec3(resetPositions_x[i], 0.0f, 0.0f));
    }
}
(Also, it seems every x reset position is a multiple of 7 except 36.0; is that a mistake?)

Problems sampling D3D11 depth buffer

I'm getting everything ready in a little DirectX 11.0 project of mine for a deferred rendering pipeline. However, I've been having quite a lot of trouble with sampling the depth buffer from within a pixel shader.
First I define the depth texture and its shader resource view:
D3D11_TEXTURE2D_DESC depthTexDesc;
ZeroMemory(&depthTexDesc, sizeof(depthTexDesc));
depthTexDesc.Width = nWidth;
depthTexDesc.Height = nHeight;
depthTexDesc.Format = DXGI_FORMAT_R32_TYPELESS;
depthTexDesc.Usage = D3D11_USAGE_DEFAULT;
depthTexDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;
depthTexDesc.MipLevels = 1;
depthTexDesc.ArraySize = 1;
depthTexDesc.SampleDesc.Count = 1;
depthTexDesc.SampleDesc.Quality = 0;
depthTexDesc.CPUAccessFlags = 0;
depthTexDesc.MiscFlags = 0;
hresult = d3dDevice_->CreateTexture2D(&depthTexDesc, nullptr, &depthTexture_);
D3D11_DEPTH_STENCIL_VIEW_DESC DSVDesc;
ZeroMemory(&DSVDesc, sizeof(DSVDesc));
DSVDesc.Format = DXGI_FORMAT_D32_FLOAT;
DSVDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
DSVDesc.Texture2D.MipSlice = 0;
hresult = d3dDevice_->CreateDepthStencilView(depthTexture_, &DSVDesc, &depthView_);
D3D11_SHADER_RESOURCE_VIEW_DESC gbDepthTexDesc;
ZeroMemory(&gbDepthTexDesc, sizeof(gbDepthTexDesc));
gbDepthTexDesc.Format = DXGI_FORMAT_R32_FLOAT;
gbDepthTexDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
gbDepthTexDesc.Texture2D.MipLevels = 1;
gbDepthTexDesc.Texture2D.MostDetailedMip = -1;
d3dDevice_->CreateShaderResourceView(depthTexture_, &gbDepthTexDesc, &gbDepthView_);
Here's the relevant part of my rendering function:
float clearColor[4] = { 0.0f, 0.0f, 0.0f, 1.0f };
d3dContext_->ClearRenderTargetView(backBufferTarget_, clearColor);
d3dContext_->ClearDepthStencilView(depthView_, D3D11_CLEAR_DEPTH, 1.0f, 0);
// GBuffer packing pass (in the future): /////////////////////////////////////////
d3dContext_->OMSetRenderTargets(1, &backBufferTarget_, depthView_);
unsigned int nStride = sizeof(Vertex);
unsigned int nOffset = 0;
d3dContext_->IASetInputLayout(inputLayout_);
d3dContext_->IASetVertexBuffers(0, 1, &vertexBuffer_, &nStride, &nOffset);
d3dContext_->IASetIndexBuffer(indexBuffer_, DXGI_FORMAT_R32_UINT, 0);
d3dContext_->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
d3dContext_->VSSetShader(firstVS_, 0, 0);
d3dContext_->PSSetShader(firstPS_, 0, 0);
d3dContext_->DrawIndexed(nIndexCount_, 0, 0);
d3dContext_->OMSetRenderTargets(1, &backBufferTarget_, nullptr);
d3dContext_->VSSetShader(secondVS_, 0, 0);
d3dContext_->PSSetShader(secondPS_, 0, 0);
d3dContext_->PSGetShaderResources(0, 1, &gbDepthView_);
d3dContext_->PSSetSamplers(0, 1, &colorMapSampler_);
d3dContext_->DrawIndexed(nIndexCount_, 0, 0);
swapChain_->Present(0, 0);
In this temporary implementation, firstVS_ and secondVS_ are identical, and their only function is to do all the transforms and pass on the data to the PSs.
And finally, here are firstPS_ and secondPS_:
// firstPS_
float4 main(PS_Input frag) : SV_TARGET
{
    return float4(1.0f, 1.0f, 1.0f, 1.0f);
}

// secondPS_
Texture2D<float> depthMap_ : register(t0);
SamplerState colorSampler_ : register(s0);

float4 main(PS_Input frag) : SV_TARGET
{
    float4 psOut;
    psOut.xyz = depthMap_.Sample(colorSampler_, frag.tex0).xxx;
    psOut.w = 1.0f;
    return psOut;
}
So, my actual questions:
1) All this code compiles without any issues, but when I sample the depth buffer, it just turns out black. I read this could be caused by having your depth/stencil view still bound via ID3D11DeviceContext::OMSetRenderTargets() at the time you sample the depth buffer. I fixed that, but the buffer is still black. I checked the graphics debugger, with no success. So, is my depth buffer not being written correctly, or am I sampling it the wrong way? firstPS_ works fine.
2) Speaking of sampling, the book I'm using just says "we'll be using a point sampler," but I have no idea what exactly is meant. Right now I'm just using a standard texture-map sampler, but is there something else I should sample with?
3) Also, the book uses the SamplerState.Gather() function in secondPS_, but when I tried that it complained that "the expression could not be mapped to pixel shader instruction set." Is Gather() an error in the book, or is it my GPU (D3D feature level 11.0) that doesn't understand it? Is Sample() good enough for what I want to do? The original use of Gather() was in the context of creating a silhouette around objects in the depth buffer.
4) I tried to get secondVS_ to draw nothing but a full-screen quad, but FXC complained that my use of SV_VertexID was "invalid," saying that my type should be integral, even though it already was. I read somewhere that SV_VertexID can only be used by the first VS in a pipeline. Is that the problem here? How do I solve it in this particular case? In my current inefficient solution, is the problem being caused by the UVs?
1) You've called PSGetShaderResources instead of PSSetShaderResources. Also, MostDetailedMip should be 0, not -1 (see the sketch after this list).
2) A "point sampler" is just a texture sampler with the Filter field set to something like D3D11_FILTER_MIN_MAG_MIP_POINT.
3) Gather is a method on the Texture2D, not on the SamplerState as you stated, e.g. depthMap_.Gather(colorSampler_, frag.tex0).
4) You get this error if you compile using vs_4_0; try vs_5_0.
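A minimal sketch of points 1) and 2), reusing the question's variable names (pointSampler_ is a hypothetical name for the new sampler state):
// 1) MostDetailedMip must be 0, and the SRV has to be set, not fetched:
gbDepthTexDesc.Texture2D.MostDetailedMip = 0;
d3dDevice_->CreateShaderResourceView(depthTexture_, &gbDepthTexDesc, &gbDepthView_);
d3dContext_->PSSetShaderResources(0, 1, &gbDepthView_); // was PSGetShaderResources

// 2) A point sampler is an ordinary sampler state created with point filtering:
D3D11_SAMPLER_DESC samplerDesc;
ZeroMemory(&samplerDesc, sizeof(samplerDesc));
samplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_POINT; // no filtering between texels
samplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_CLAMP;
samplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_CLAMP;
samplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_CLAMP;
samplerDesc.ComparisonFunc = D3D11_COMPARISON_NEVER;
samplerDesc.MinLOD = 0.0f;
samplerDesc.MaxLOD = D3D11_FLOAT32_MAX;
hresult = d3dDevice_->CreateSamplerState(&samplerDesc, &pointSampler_);
d3dContext_->PSSetSamplers(0, 1, &pointSampler_); // bind in place of colorMapSampler_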

Oscillating colors in OpenGL

I am trying to complete a basic example in OpenGL from the OpenGL SuperBible. Basically, I am going to make the background shift from red to orange to green and back.
Here is the code that I am using:
typedef float F32;

void Example1::Render(void) {
    time += 0.20f;
    const GLfloat color[] = { (F32)sin(time) * 0.5f + 0.5f,
                              (F32)cos(time) * 0.5f + 0.5f,
                              0.0f, 1.0f };
    glClearBufferfv(GL_COLOR, 0, color);
}
I have a precision timer that measures the delta time of the last frame, but any time I call the sin and cos functions with anything less than 1, the screen just stays green. However, if I hard-code the increment, as I have here, and raise it to 1 or more, it flashes between the colors very quickly (like a rave). I am not sure why the functions won't work with floating-point numbers. I am using Visual Studio and have included the math.h header. Has anyone seen anything like this before?
Update: Based on the suggestions, I have tried a few things with the code. I got the program to produce the effect I was looking for by manually adding the following:
In the constructor:
Example1(void): red(0.0f), green(1.0f), interval(0.002f), redUp(true), greenUp(false).....
In the render loop
if (red >= 1.0f) { redUp = false; }
else if (red <= 0.0f) { redUp = true; }
if (green >= 1.0f) { greenUp = false; }
else if (green <= 0.0f) { greenUp = true; }
if (redUp) { red += interval; }
else { red -= interval; }
if (greenUp) { green += interval; }
else { green -= interval; }
const GLfloat color[] = { red, green, 0.0f, 1.0f };
It does what it's supposed to, but using the sin and cos functions with the floating-point numbers has no effect. I am baffled by this; I had assumed that passing sin and cos the floating-point time values would work. I have tried counting time manually, incrementing it by 1/60th of a second, but any time I use sin and cos with anything less than 1.0f, the screen just remains green.
It looks like your time increment is much too large. If you're just assuming you're running at 60 fps (it could be hundreds if you're not restricting it), then the delta time should be 0.01667 (1/60) seconds per frame. Incrementing by 0.2 every frame (especially if your refresh rate is over 60 fps) will result in strobing.
If you're using C++11, I'd suggest using the <chrono> library to get exact numbers; that's well documented in this post. A minimal sketch follows.
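For instance, a sketch only, assuming C++11 (the DeltaSeconds name and the static state are just for illustration):
#include <chrono>

using Clock = std::chrono::steady_clock;

// Returns the seconds elapsed since the previous call,
// suitable for something like: time += DeltaSeconds();
float DeltaSeconds()
{
    static Clock::time_point lastFrame = Clock::now();
    Clock::time_point now = Clock::now();
    std::chrono::duration<float> dt = now - lastFrame; // seconds, as float
    lastFrame = now;
    return dt.count();
}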
Once you're using the actual time, remember that sin and cos take radians, not degrees, and their period is only 2π. So, in addition to passing in just your time variable, multiply it by some small factor and play around with that number:
time += 0.01667f;
F32 adjustedTime = time * 0.02f; // oscillation speed
const GLfloat color[] = { (F32)sin( adjustedTime ) * 0.5f + 0.5f,
                          (F32)cos( adjustedTime ) * 0.5f + 0.5f,
                          0.0f, 1.0f };
glClearBufferfv(GL_COLOR, 0, color);
Also, I don't know if you just neglected to include it in this question, but don't forget to call glClear(GL_COLOR_BUFFER_BIT); after the clear color is set.
I think you need to refresh the screen with glutSwapBuffers();
I have this code and I can change the background color without difficulties:
void RenderScene(void)
{
    // Clear the window with current clearing color
    //glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
    time += 0.2;
    const GLfloat color[] = { sin(time) * 0.5f + 0.5f, cos(time) * 0.5f + 0.5f, 0.0f, 1.0f };
    glClearBufferfv(GL_COLOR, 0, color);

    GLfloat vRed[] = { 1.0f, 0.0f, 0.0f, 1.0f };
    shaderManager.UseStockShader(GLT_SHADER_IDENTITY, vRed);
    triangleBatch.Draw();

    // Perform the buffer swap to display the back buffer
    glutSwapBuffers();
}
With this code the background color changes as expected.
Thanks for all the input. After much digging, I was able to find my issue. I thought it was a logic problem, but it was not; it was a syntax error. Here is the code that produces the same results you are seeing:
const GLfloat color[] = { F32(sin(curTime)) * 0.5f + 0.5f,
F32(cos(curTime)) * 0.5f + 0.5f,
0.0f, 1.0f };
glClearBufferfv(GL_COLOR, 0, color);
The difference is using (F32)sin(... versus F32(sin(... . I am not 100% sure why this works, but I think it has something to do with how the conversion is applied through the typedef I set up for the F32 type, which I used in the first place because of advice in the book Game Engine Architecture by Jason Gregory. Anyway, thanks for working through it with me.

Changing the color of a triangle

Very, very new to DirectX; I'm just messing with this tutorial I found, trying to learn how things work. I figured out what controls the color of the triangle it draws, and I find the behavior extremely weird.
Here's why:
-when I change the number to the left of the decimal point of any of the variables (which are floats), NOTHING happens. I can change it to 40000000 or 4 or 3 or 400000000 or 10 or 9 and NOTHING changes at all.
-when I change a variable from positive to negative, or vice versa, it DOES change the color.
-when I change any of the variables to 0.0f, it changes the color.
So I'm really trying to figure out the logic here; I mean, how can the number NOT affect its color value? Here's some code that will hopefully make my question make more sense.
SimpleVertexShader.hlsl
float4 SimplePixelShader(PixelShaderInput input) : SV_TARGET
{
    // Draw the entire triangle yellow.
    return float4(4.0f, 0.0f, 2.0f, 6.0f);
}
Main.cpp
auto vertexShaderBytecode = reader->ReadData("SimpleVertexShader.cso");
ComPtr<ID3D11VertexShader> vertexShader;
DX::ThrowIfFailed(
    m_d3dDevice->CreateVertexShader(
        vertexShaderBytecode->Data,
        vertexShaderBytecode->Length,
        nullptr,
        &vertexShader
    )
);
return float4(4.0f, 0.0f, 2.0f, 6.0f);
You are returning the pixel color, and the valid range for each color component is [0.0, 1.0]; any value outside this range is clamped (saturated) to it:
values < 0.0 are treated as 0
values > 1.0 are treated as 1
That's why you didn't see any change when you set the values to anything bigger than 1.0.
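A tiny C++ sketch of that behavior (Saturate is a hypothetical stand-in for the clamp a UNORM render target applies, not DirectX API code):
#include <cstdio>

// Model of the per-component clamp applied before the value is stored.
static float Saturate(float v)
{
    return v < 0.0f ? 0.0f : (v > 1.0f ? 1.0f : v);
}

int main()
{
    // 4, 10, and 40000000 all saturate to 1.0, so the drawn color is identical,
    // while 0.0 and negative values clamp to 0.0, which is why a sign flip or
    // zeroing a component visibly changes the color.
    float values[] = { 4.0f, 10.0f, 40000000.0f, 0.0f, -2.0f };
    for (float v : values)
        std::printf("%g -> %g\n", v, Saturate(v));
    return 0;
}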

D3D11: How to draw a simple pixel aligned line?

I tried to draw a line between two vertices with D3D11. I have some experience with D3D9 and D3D11, but in D3D11 it seems to be a problem to draw a line which starts at one given pixel and ends at another.
What I did:
I added 0.5f to the pixel coordinates of each vertex to fit the texel/pixel coordinate system (I read the Microsoft pages on the differences between the D3D9 and D3D11 coordinate systems):
f32 fOff = 0.5f;
ColoredVertex newVertices[2] =
{
    { D3DXVECTOR3(fStartX + fOff, fStartY + fOff, 0), vecColorRGB },
    { D3DXVECTOR3(fEndX + fOff,   fEndY + fOff,   0), vecColorRGB }
};
Generated an orthographic projection matrix to fit the render target:
D3DXMatrixOrthoOffCenterLH(&MatrixOrthoProj,0.0f,(f32)uRTWidth,0.0f,(f32)uRTHeight,0.0f,1.0f);
D3DXMatrixTranspose(&cbConstant.m_matOrthoProjection,&MatrixOrthoProj);
Set RasterizerState, BlendState, Viewport, ...
Draw Vertices as D3D11_PRIMITIVE_TOPOLOGY_LINELIST
Problem:
The line seems to be one pixel too short. It starts at the given pixel coordinate and fits it perfectly. The direction of the line looks correct, but the pixel where I want the line to end is still not colored. It looks like the line is just one pixel too short...
Is there any tutorial explaining this problem, or does anybody have the same problem? As I remember, it wasn't this difficult in D3D9.
Please ask if you need further information.
Thanks, Stefan
EDIT: I found the rasterization rules for D3D10 (they should be the same for D3D11):
http://msdn.microsoft.com/en-us/library/cc627092%28v=vs.85%29.aspx#Line_1
I hope this will help me understand...
According to the rasterization rules (link in the question above), I might have found a solution that should work:
sort the vertices so that StartX < EndX and StartY < EndY
add (0.5, 0.5) to the start vertex (as I did before) to move it to the center of its pixel
add (1.0, 1.0) to the end vertex to move it to the lower-right corner of its pixel
This is needed to tell the rasterizer that the last pixel of the line should be drawn.
f32 fXStartOff = 0.5f;
f32 fYStartOff = 0.5f;
f32 fXEndOff   = 1.0f;
f32 fYEndOff   = 1.0f;

ColoredVertex newVertices[2] =
{
    { D3DXVECTOR3((f32)fStartX + fXStartOff, (f32)fStartY + fYStartOff, 0), vecColorRGB },
    { D3DXVECTOR3((f32)fEndX   + fXEndOff,   (f32)fEndY   + fYEndOff,   0), vecColorRGB }
};
If you know a better solution, please let me know.
I don't know D3D11, but your problem sounds a lot like the D3DRS_LASTPIXEL render state from D3D9; maybe there's an equivalent for D3D11 you need to look into.
I encountered the exact same issue, and I fixed it thanks to this discussion.
My vertices are stored in a D3D11_PRIMITIVE_TOPOLOGY_LINELIST vertex buffer.
Thanks for this useful post; you helped me fix this bug today. It was REALLY trickier than I thought at the start.
Here are a few lines of my code.
// projection matrix code
float width = 1024.0f;
float height = 768.0f;
DirectX::XMMATRIX offsetedProj = DirectX::XMMatrixOrthographicRH(width, height, 0.0f, 10.0f);
DirectX::XMMATRIX proj = DirectX::XMMatrixMultiply(DirectX::XMMatrixTranslation(-width / 2, height / 2, 0), offsetedProj);

// view matrix code
// screen top-left pixel is (0,0) and bottom-right is (1023,767)
DirectX::XMMATRIX viewMirrored = DirectX::XMMatrixLookAtRH(eye, at, up);
DirectX::XMMATRIX mirrorYZ = DirectX::XMMatrixScaling(1.0f, -1.0f, -1.0f);
DirectX::XMMATRIX view = DirectX::XMMatrixMultiply(mirrorYZ, viewMirrored);

// draw-line code in my visual debug tool
void TVisualDebug::DrawLine2D(int2 const& parStart,
                              int2 const& parEnd,
                              TColor parColorStart,
                              TColor parColorEnd,
                              float parDepth)
{
    FLine2DsDirty = true;

    // D3D11_PRIMITIVE_TOPOLOGY_LINELIST
    float2 const startFloat(parStart.x() + 0.5f, parStart.y() + 0.5f);
    float2 const endFloat(parEnd.x() + 0.5f, parEnd.y() + 0.5f);
    float2 const diff = endFloat - startFloat;

    // Returns the normalized difference, or float2(1.0f, 1.0f) if the distance
    // between the points is null, then multiplies the result by something a
    // little bigger than 0.5f; 0.5f alone is not enough.
    float2 const diffNormalized = diff.normalized_replace_if_null(float2(1.0f, 1.0f)) * 0.501f;

    size_t const currentIndex = FLine2Ds.size();
    FLine2Ds.resize(currentIndex + 2);
    render::vertex::TVertexColor* baseAddress = FLine2Ds.data() + currentIndex;
    render::vertex::TVertexColor& v0 = baseAddress[0];
    render::vertex::TVertexColor& v1 = baseAddress[1];

    v0.FPosition = float3(startFloat.x(), startFloat.y(), parDepth);
    v0.FColor = parColorStart;
    v1.FPosition = float3(endFloat.x() + diffNormalized.x(), endFloat.y() + diffNormalized.y(), parDepth);
    v1.FColor = parColorEnd;
}
I tested several DrawLine2D calls, and it seems to work well.