I've been working on a Minecraft clone recently and I've been able to generate simple infinite worlds using noise for height maps, but the problem I'm facing is the textures. As you can see in the image below, the textures have some kind of border; they aren't seamless. I use a sprite sheet to send a single texture to the GPU and then use different texture coordinates for different block types. I'm using Vulkan as the rendering backend, and here are some texturing details. I would really appreciate some insight on how to tackle this problem.
VkSamplerCreateInfo sInfo{};
sInfo.sType = VK_STRUCTURE_TYPE_SAMPLER_CREATE_INFO;
sInfo.pNext = nullptr;
sInfo.magFilter = VK_FILTER_NEAREST;
sInfo.minFilter = VK_FILTER_LINEAR;
sInfo.addressModeU = VK_SAMPLER_ADDRESS_MODE_REPEAT;
sInfo.addressModeV = VK_SAMPLER_ADDRESS_MODE_REPEAT;
sInfo.addressModeW = VK_SAMPLER_ADDRESS_MODE_REPEAT;
sInfo.anisotropyEnable = VK_TRUE;
VkPhysicalDeviceProperties Props;
vkGetPhysicalDeviceProperties(Engine::Get().GetGpuHandle(), &Props);
sInfo.maxAnisotropy = Props.limits.maxSamplerAnisotropy;
sInfo.borderColor = VK_BORDER_COLOR_FLOAT_OPAQUE_BLACK;
sInfo.unnormalizedCoordinates = VK_FALSE;
sInfo.compareEnable = VK_FALSE;
sInfo.compareOp = VK_COMPARE_OP_ALWAYS;
sInfo.mipmapMode = VK_SAMPLER_MIPMAP_MODE_LINEAR;
sInfo.mipLodBias = 0.f;
sInfo.minLod = 0.f;
sInfo.maxLod = 1;
VkImageCreateInfo iInfo{};
iInfo.sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO;
iInfo.pNext = nullptr;
iInfo.arrayLayers = 1;
iInfo.extent = { extent.width,extent.height,1 };
iInfo.imageType = VK_IMAGE_TYPE_2D;
iInfo.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
iInfo.mipLevels = 1;
iInfo.samples = samples;
iInfo.flags = 0;
iInfo.tiling = VK_IMAGE_TILING_OPTIMAL;
iInfo.usage = VK_IMAGE_USAGE_SAMPLED_BIT | VK_IMAGE_USAGE_TRANSFER_DST_BIT;
iInfo.format = format;
iInfo.sharingMode = VK_SHARING_MODE_EXCLUSIVE;
VkImageViewCreateInfo vInfo{};
vInfo.sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO;
vInfo.pNext = nullptr;
vInfo.image = m_Image;
vInfo.viewType = VK_IMAGE_VIEW_TYPE_2D;
vInfo.format = format;
vInfo.flags = 0;
vInfo.subresourceRange.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
vInfo.subresourceRange.baseMipLevel = 0;
vInfo.subresourceRange.levelCount = 1;
vInfo.subresourceRange.baseArrayLayer = 0;
vInfo.subresourceRange.layerCount = 1;
First I thought it was an issue with my sprite sheet, so I tried different ones, but that wasn't the problem, it seems. Then I tried several other sampler parameter combinations, but still no luck.
The issue you are facing with texture borders is a common problem when using texture atlases (sprite sheets) in games. The reason for this is that texture coordinates are not always perfectly aligned with the pixel boundaries of the texture, which can result in sampling from adjacent pixels and introducing seams or artifacts.
There are several techniques that can be used to address this issue, for example:
1. Texture padding: duplicate each tile's edge pixels into a gutter around it, so that filtering never reads from a neighboring tile.
2. Texture coordinate offsetting: inset each tile's UV rectangle by half a texel, so that linear filtering stays inside the tile.
In your code you are using VK_SAMPLER_ADDRESS_MODE_REPEAT, which can exacerbate texture bleeding: coordinates that land exactly on a tile edge can wrap around and sample the opposite side of the atlas. Note that the sInfo.borderColor member of VkSamplerCreateInfo only takes effect with VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_BORDER, so setting it has no effect with REPEAT; VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_EDGE is usually the safer choice for an atlas. Also, since your minFilter is VK_FILTER_LINEAR, minified samples will average in texels from adjacent tiles unless the tiles are padded or the coordinates are inset.
Overall, it may be helpful to experiment with different combinations of texture padding and coordinate offsetting to find the best solution for your specific case. Additionally, you may want to consider using a tool such as TexturePacker or Sprite Sheet Packer to generate optimized texture atlases with padding and other tweaks that reduce visible seams.
I know this problem may seem silly, but I'm having trouble using alpha channel values to blend textures. If my alpha value goes from 1.0 down to roughly 0.501, the object fades slowly; once it reaches 0.5 or lower, it simply vanishes. Here are two screenshots that show it:
Alpha set to 0.501
Alpha set to 0.5
I want to be able to see the tree above even at around 0.1 alpha, barely visible and mostly transparent, instead of it vanishing suddenly. Here is my current code for my blend state:
D3D11_BLEND_DESC bd;
ZeroMemory(&bd, sizeof(D3D11_BLEND_DESC));
bd.RenderTarget[0].BlendEnable = true;
bd.AlphaToCoverageEnable = true;
bd.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;
bd.RenderTarget[0].SrcBlend = D3D11_BLEND_SRC_ALPHA;
bd.RenderTarget[0].DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
bd.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_INV_DEST_ALPHA;
bd.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_ONE;
bd.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
bd.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;
hr = m_pDevice->CreateBlendState(&bd, &m_pBlendStateON);
if (FAILED(hr))
return Log("Failed to create blend state."); // Log is just a function to register errors on my app.
My blend factor is defined as follows: float BlendFactor[4] = { 0,0,0,0 };, and my sample mask uses the default value UINT SampleMask = 0xffffffff;
If anyone knows what I could do to make the transparency fade smoothly from 1.0f down to 0.0f, it would be a great help.
EDIT: I've found that if I disable AlphaToCoverageEnable, it will not cull pixels until alpha reaches 0.0f, but I don't know what to do, because I need AlphaToCoverageEnable, or else the squares from the tree's branches show up. Is there any way to change the threshold so alpha-to-coverage only culls pixels when alpha is actually 0.0f?
I am totally lost now. I have been trying to read the back buffer inside a vertex shader for days with no luck whatsoever.
I'm trying to read the vertex's position from the back buffer, along with its neighboring pixels. (I'm trying to count how many black pixels are around a vertex, and if there are any, color that vertex red in the pixel shader.) I've created a separate ID3D11Texture2D and an SRV to go with the back buffer. I copy the back buffer into this SRV's resource and bind the SRV using VSSetShaderResources, but I just can't seem to be able to read from it inside the vertex shader.
I will share some code from the creation of these elements, as well as some RenderDoc screenshots showing that the SRV is bound to the VS stage and has the right texture associated with it. But every Load, [] operator, tex2Dlod, or SampleLevel (I bound a SamplerState too)
just keeps returning a single 1.0 value, with the rest of the float4 never being returned, meaning I only get a float1 back. I will also include a RenderDoc capture file if anyone wants to take a look.
This is a simple scene from Tutorial 42 on the rastertek.com site; there is a ground plane with a cube and a sphere on it:
https://i.imgur.com/cbVC48E.gif
// Here is some code for creating the secondary texture and SRV that houses a copy of the back buffer
// Get the pointer to the back buffer.
result = m_swapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), (LPVOID*)&backBufferPtr);
if(FAILED(result))
{
MessageBox((*(hwnd)), L"Get the pointer to the back buffer FAILED", L"Error", MB_OK);
return false;
}
// Create another texture2d that we will use to make an SRV out of, and this texture2d will be used to copy the backbuffer to so we can read it in a shader
D3D11_TEXTURE2D_DESC bbDesc;
backBufferPtr->GetDesc(&bbDesc);
bbDesc.MipLevels = 1;
bbDesc.ArraySize = 1;
bbDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
bbDesc.Usage = D3D11_USAGE_DEFAULT;
bbDesc.MiscFlags = 0;
bbDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
result = m_device->CreateTexture2D(&bbDesc, NULL, &m_backBufferTx2D);
if (FAILED(result))
{
MessageBox((*(m_hwnd)), L"Create a Tx2D for backbuffer SRV FAILED", L"Error", MB_OK);
return false;
}
D3D11_SHADER_RESOURCE_VIEW_DESC descSRV;
ZeroMemory(&descSRV, sizeof(descSRV));
descSRV.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
descSRV.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
descSRV.Texture2D.MipLevels = 1;
descSRV.Texture2D.MostDetailedMip = 0;
result = GetDevice()->CreateShaderResourceView(m_backBufferTx2D, &descSRV, &m_backBufferSRV);
if (FAILED(result))
{
MessageBox((*(m_hwnd)), L"Creating BackBuffer SRV FAILED.", L"Error", MB_OK);
return false;
}
// Create the render target view with the back buffer pointer.
result = m_device->CreateRenderTargetView(backBufferPtr, NULL, &m_renderTargetView);
First I render the scene in all white, then I copy that to the SRV's texture and bind it for the next shader that's supposed to sample it. I'm expecting a float4(1.0, 1.0, 1.0, 1.0) to be returned when I sample the back buffer at the vertex's on-screen position.
https://i.imgur.com/N9CYg8c.png
As shown in the top left of the event browser, there were three DrawIndexed calls for rendering everything in white, followed by a CopyResource.
I've selected the next (fourth) DrawIndexed, and on the right side, outlined in red, are the inputs for this next shader, clearly showing that the back buffer has been successfully bound to the vertex shader.
And now for the part that's giving me trouble
https://i.imgur.com/ENuXk0n.png
I'm going to be debugging the top-left vertex shown in the screenshot.
The vertex shader has a
Texture2D prevBackBuffer : register(t0);
declaration written at the top.
https://i.imgur.com/8cihNsq.png
When trying to sample the left neighboring pixel,
this line of code returns newCoord = float2(158, 220).
When I enter these pixel values in the texture view, I get this pixel:
https://i.imgur.com/DT72Fl1.png
So the coordinates are OK so far, and, as outlined, I'm expecting a float4(0.0, 0.0, 0.0, 1.0) to be returned when I sample this pixel
(again, I'm counting how many black pixels are around a vertex, and if there are any, coloring that vertex red in the pixel shader).
AND YET, when I sample that pixel, right after altering the pixel coordinates (since Load counts pixels from the bottom left, I need
newCoord = float2(158, 379)), I get this:
https://i.imgur.com/8SuwOzz.png
Why is this? Even if it's out of range, Load should return all zeros. Since I'm not sure about the whole Load-counting-from-the-bottom-left thing, I also tried sampling using the top-left coordinates (158, 220), but ended up getting 0.0, ?, ?, ?.
I'm completely stumped and have no idea what to try next. I've tried using a sampler state:
// Create a clamp texture sampler state description.
samplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
samplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_CLAMP;
samplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_CLAMP;
samplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_CLAMP;
samplerDesc.MipLODBias = 0.0f;
samplerDesc.MaxAnisotropy = 1;
samplerDesc.ComparisonFunc = D3D11_COMPARISON_ALWAYS;
samplerDesc.BorderColor[0] = 0;
samplerDesc.BorderColor[1] = 0;
samplerDesc.BorderColor[2] = 0;
samplerDesc.BorderColor[3] = 0;
samplerDesc.MinLOD = 0;
samplerDesc.MaxLOD = D3D11_FLOAT32_MAX;
// Create the texture sampler state.
result = device->CreateSamplerState(&samplerDesc, &m_sampleStateClamp);
but I still never get a proper float4 back when reading the texture.
Any ideas or suggestions? I'll take anything at this point.
Oh and here's a RenderDoc file of the frame i was examining :
http://www.mediafire.com/file/1bfiqdpjkau4l0n/my_capture.rdc/file
So from my experience, reading from the back buffer is not really an operation that you want to be doing in the first place. If you have to do any operation on the rendered scene, the best way to do that is to render the scene to an intermediate texture, perform the operation on that texture, then render the final scene to the back buffer. This is generally how things like dynamic shadows are done - the scene is rendered from the perspective of the light, and the resulting buffer is interpreted to get a shadow value that is then applied to the final scene (this is also why dynamic light sources are limited in commercial game engines - they're rather expensive to use).
A similar idea can be applied here. First, render the whole scene to an intermediate texture, bound as a render target view (where the pixel format is specified by you, the programmer). Next, rebind that intermediate texture as a shader resource view, and render the scene again, using the edge detection shader and the real back buffer (where the pixel format is defined by the hardware).
This, fundamentally, is what I believe the issue is - a back buffer is a device dependent resource, and its format can change depending on the hardware. Therefore, using it from a shader is not safe, as you don't always know what the format will be. A device independent resource, on the other hand, will always have the same format, and you can safely use it however you like from a shader.
I wasn't able to get sampling an SRV in the vertex shader to work,
but what I was able to get working
is using backBuffer.SampleLevel inside a compute shader.
I also had to change the sampler to something like this:
D3D11_SAMPLER_DESC samplerDesc;
samplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_POINT;
samplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_BORDER;
samplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_BORDER;
samplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_BORDER;
samplerDesc.MipLODBias = 0.0f;
samplerDesc.MaxAnisotropy = 1;
samplerDesc.ComparisonFunc = D3D11_COMPARISON_ALWAYS;
samplerDesc.BorderColor[0] = 0.5f;
samplerDesc.BorderColor[1] = 0.5f;
samplerDesc.BorderColor[2] = 0.5f;
samplerDesc.BorderColor[3] = 0.5f;
samplerDesc.MinLOD = 0;
samplerDesc.MaxLOD = 0;
For some reason, I cannot specify the DXGI_FORMAT_R32G32B32_FLOAT format when creating a 2D texture in Direct3D 11. I can do it just fine in OpenGL, however. It also works fine when using DXGI_FORMAT_R32G32B32A32_FLOAT. I am using these textures as render targets for the G-buffer.
// create gbuffer textures/rendertargets
D3D11_TEXTURE2D_DESC textureDesc;
ZeroMemory(&textureDesc, sizeof(D3D11_TEXTURE2D_DESC));
textureDesc.Width = swapChainDesc.BufferDesc.Width;
textureDesc.Height = swapChainDesc.BufferDesc.Height;
textureDesc.ArraySize = 1;
textureDesc.MipLevels = 1;
textureDesc.Format = DXGI_FORMAT_R32G32B32_FLOAT; // <-- doesn't like this; returns E_INVALIDARG
textureDesc.SampleDesc.Count = 1;
textureDesc.Usage = D3D11_USAGE_DEFAULT;
textureDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
for (uint32_t index = 0; index < GBuffer::GBUFFER_NUM_RENDERTARGETS; index++)
{
DXCALL(device->CreateTexture2D(&textureDesc, NULL, &mGeometryTextures[index]));
DXCALL(device->CreateRenderTargetView(mGeometryTextures[index], NULL, &mRenderTargets[index]));
}
Why can't I use DXGI_FORMAT_R32G32B32_FLOAT when creating a 2D texture in DirectX 11?
I do not need the extra float in my texture, hence I'd rather have just three channels than four.
Not all hardware supports using R32G32B32_FLOAT as a render-target and shader-resource (it's optional). You can verify whether the hardware supports the format for those uses by calling CheckFormatSupport. If it is succeeding on the same hardware with OpenGL, this likely means OpenGL is padding the resource out to the full 4-channel variant behind the scenes.
DXGI_FORMAT_R32G32B32_FLOAT support for render targets is optional: http://msdn.microsoft.com/en-us/library/windows/desktop/ff471325(v=vs.85).aspx#RenderTarget
If you think that this format should be supported by your device then turn on debug output as MooseBoys suggested. This should explain why you're getting E_INVALIDARG.
I have a sprite texture and was curious as to what the texture sampler settings should be for sampling it. I am using DirectX 11, though if you know what they should be for DX9/10, I believe it's transferable.
I tried
AddressU = D3D11_TEXTURE_ADDRESS_WRAP
AddressV = D3D11_TEXTURE_ADDRESS_WRAP
AddressW = D3D11_TEXTURE_ADDRESS_WRAP
ComparisonFunc = D3D11_COMPARISON_NEVER
Filter = D3D11_FILTER_MIN_MAG_MIP_POINT
MaxAnisotropy = 1;
MaxLOD = D3D11_FLOAT32_MAX;
MinLOD = 0;
MipLODBias = 0;
Although when rendering, there appeared to be artifacts, and it did not seem as clear as it should be.
This is an example of what the artifacts look like. In the top text, with the light blue background, you can see artifacts (for example, on the A and C). The bottom text, with the black background, is the original image.