Seamless Textures in Voxel Worlds - C++

I've been working on a Minecraft clone recently and I've been able to generate simple infinite worlds with some noise for height maps etc., but the problem I'm facing is the textures. As you can see in the image below, the textures have some kind of border; they aren't seamless. I use a sprite sheet to send a single texture to the GPU and then use different texture coordinates for different block types. I'm using Vulkan as the rendering backend, and here are the relevant texturing details. I would really appreciate some insight on how to tackle this problem.
VkSamplerCreateInfo sInfo{};
sInfo.sType = VK_STRUCTURE_TYPE_SAMPLER_CREATE_INFO;
sInfo.pNext = nullptr;
sInfo.magFilter = VK_FILTER_NEAREST;
sInfo.minFilter = VK_FILTER_LINEAR;
sInfo.addressModeU = VK_SAMPLER_ADDRESS_MODE_REPEAT;
sInfo.addressModeV = VK_SAMPLER_ADDRESS_MODE_REPEAT;
sInfo.addressModeW = VK_SAMPLER_ADDRESS_MODE_REPEAT;
sInfo.anisotropyEnable = VK_TRUE;
VkPhysicalDeviceProperties Props;
vkGetPhysicalDeviceProperties(Engine::Get().GetGpuHandle(), &Props);
sInfo.maxAnisotropy = Props.limits.maxSamplerAnisotropy;
sInfo.borderColor = VK_BORDER_COLOR_FLOAT_OPAQUE_BLACK;
sInfo.unnormalizedCoordinates = VK_FALSE;
sInfo.compareEnable = VK_FALSE;
sInfo.compareOp = VK_COMPARE_OP_ALWAYS;
sInfo.mipmapMode = VK_SAMPLER_MIPMAP_MODE_LINEAR;
sInfo.mipLodBias = 0.f;
sInfo.minLod = 0.f;
sInfo.maxLod = 1;
VkImageCreateInfo iInfo{};
iInfo.sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO;
iInfo.pNext = nullptr;
iInfo.arrayLayers = 1;
iInfo.extent = { extent.width,extent.height,1 };
iInfo.imageType = VK_IMAGE_TYPE_2D;
iInfo.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
iInfo.mipLevels = 1;
iInfo.samples = samples;
iInfo.flags = 0;
iInfo.tiling = VK_IMAGE_TILING_OPTIMAL;
iInfo.usage = VK_IMAGE_USAGE_SAMPLED_BIT | VK_IMAGE_USAGE_TRANSFER_DST_BIT;
iInfo.format = format;
iInfo.sharingMode = VK_SHARING_MODE_EXCLUSIVE;
VkImageViewCreateInfo vInfo{};
vInfo.sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO;
vInfo.pNext = nullptr;
vInfo.image = m_Image;
vInfo.viewType = VK_IMAGE_VIEW_TYPE_2D;
vInfo.format = format;
vInfo.flags = 0;
vInfo.subresourceRange.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
vInfo.subresourceRange.baseMipLevel = 0;
vInfo.subresourceRange.levelCount = 1;
vInfo.subresourceRange.baseArrayLayer = 0;
vInfo.subresourceRange.layerCount = 1;
First I thought it was an issue with my sprite sheet, so I tried different ones, but that doesn't seem to be the problem. Then I tried several other sampler parameter combinations, but still no luck.

The issue you are facing with texture borders is a common problem when using texture atlases (sprite sheets) in games. It happens because sampling near the edge of a tile is not confined to that tile: with linear filtering (and especially with mipmapping), the hardware blends in texels from the adjacent tile, which introduces seams or artifacts.
There are several techniques that can be used to address this issue, some of which are:
1. Texture coordinate offsetting: inset each tile's UVs so that filtering never reaches the neighbouring tile.
2. Texture padding in the atlas: duplicate (extrude) each tile's edge texels into a small gutter around it.
In your code you are using VK_SAMPLER_ADDRESS_MODE_REPEAT, which can exacerbate texture bleeding: a lookup that lands slightly outside the atlas wraps around to the opposite edge, and with linear minification any lookup near a tile border is filtered together with texels from the neighbouring tile. Note also that sInfo.borderColor is only used with VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_BORDER, so setting it has no effect while the address mode is REPEAT.
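As a starting point for experimentation, here is a minimal sketch of a sampler that clamps instead of repeating and uses nearest filtering for both minification and magnification (your snippet currently mixes VK_FILTER_NEAREST and VK_FILTER_LINEAR); this is one configuration to try, not a guaranteed fix:

// Sketch only: clamp so lookups can never wrap across the atlas, and use
// nearest filtering for both min and mag so neighbouring texels are not
// blended in while you isolate the problem. Mipmapping is left off here
// for the same reason.
VkSamplerCreateInfo sInfo{};
sInfo.sType            = VK_STRUCTURE_TYPE_SAMPLER_CREATE_INFO;
sInfo.magFilter        = VK_FILTER_NEAREST;
sInfo.minFilter        = VK_FILTER_NEAREST;
sInfo.mipmapMode       = VK_SAMPLER_MIPMAP_MODE_NEAREST;
sInfo.addressModeU     = VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_EDGE;
sInfo.addressModeV     = VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_EDGE;
sInfo.addressModeW     = VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_EDGE;
sInfo.anisotropyEnable = VK_FALSE;
sInfo.maxAnisotropy    = 1.0f;
sInfo.minLod           = 0.0f;
sInfo.maxLod           = 0.0f;

Clamping only prevents wrapping across the atlas as a whole; bleeding between neighbouring tiles inside the atlas still has to be handled with padding or coordinate offsetting.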
Overall, it may be helpful to experiment with different combinations of texture padding and coordinate offsetting to find the best solution for your specific case. Additionally, you may want to consider using a tool such as TexturePacker or Sprite Sheet Packer to generate atlases with padding/extrusion already baked in, which reduces visible seams.
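As a concrete illustration of the coordinate offsetting idea, here is a minimal sketch that computes the UV rectangle for one tile of a uniform grid atlas with a half-texel inset on each side; the tile and atlas sizes are made-up parameters, not values from your project:

#include <cstdint>

struct UvRect { float u0, v0, u1, v1; };

// Sketch: UVs for tile (tileX, tileY), pulled in by half a texel on every side
// so that bilinear filtering at the base mip level never reaches the
// neighbouring tile.
UvRect TileUvs(uint32_t tileX, uint32_t tileY,
               uint32_t tileSizePx, uint32_t atlasSizePx)
{
    const float texel = 1.0f / static_cast<float>(atlasSizePx);
    const float half  = 0.5f * texel;
    UvRect r;
    r.u0 = tileX * tileSizePx * texel + half;
    r.v0 = tileY * tileSizePx * texel + half;
    r.u1 = (tileX + 1) * tileSizePx * texel - half;
    r.v1 = (tileY + 1) * tileSizePx * texel - half;
    return r;
}

If you enable mipmaps later, the required inset grows with each mip level, which is why baking padding (duplicated edge texels) into the atlas itself is usually the more robust long-term fix.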

Related

Sampling Back Buffer in Vertex Shader always returns 0 and float1 instead of float4

I am totally lost now. I have been trying to read the back buffer inside a vertex shader for days with no luck whatsoever.
I'm trying to read the vertex's position from the back buffer along with its neighboring pixels. (I'm trying to count how many black pixels are around a vertex, and if there are any, color that vertex red in the pixel shader.) I've created a separate ID3D11Texture2D and an SRV to go with the back buffer. I copy the back buffer into this SRV's resource and bind the SRV using VSSetShaderResources, but I just can't seem to be able to read from it inside the vertex shader.
I will share some code here from the creation of these elements, as well as some RenderDoc screenshots that keep showing that the SRV is bound to the VS stage and has the right texture associated with it, but every Load, [] operator, tex2Dlod or SampleLevel (I bound a SamplerState too) just keeps returning a single 1.0 value with the rest of the float4 never being returned, meaning I only get a float1 back. I will also include a RenderDoc capture file if anyone wants to take a look.
This is a simple scene from tutorial 42 on the rastertek.com site, there is a ground plane with a cube and a sphere on it :
https://i.imgur.com/cbVC48E.gif
// Here is some code for creating the secondary texture and SRV that houses a copy of the back buffer
// Get the pointer to the back buffer.
result = m_swapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), (LPVOID*)&backBufferPtr);
if (FAILED(result))
{
    MessageBox((*(hwnd)), L"Get the pointer to the back buffer FAILED", L"Error", MB_OK);
    return false;
}
// Create another texture2d that we will use to make an SRV out of, and this texture2d will be used to copy the backbuffer to so we can read it in a shader
D3D11_TEXTURE2D_DESC bbDesc;
backBufferPtr->GetDesc(&bbDesc);
bbDesc.MipLevels = 1;
bbDesc.ArraySize = 1;
bbDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
bbDesc.Usage = D3D11_USAGE_DEFAULT;
bbDesc.MiscFlags = 0;
bbDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
result = m_device->CreateTexture2D(&bbDesc, NULL, &m_backBufferTx2D);
if (FAILED(result))
{
    MessageBox((*(m_hwnd)), L"Create a Tx2D for backbuffer SRV FAILED", L"Error", MB_OK);
    return false;
}
D3D11_SHADER_RESOURCE_VIEW_DESC descSRV;
ZeroMemory(&descSRV, sizeof(descSRV));
descSRV.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
descSRV.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
descSRV.Texture2D.MipLevels = 1;
descSRV.Texture2D.MostDetailedMip = 0;
result = GetDevice()->CreateShaderResourceView(m_backBufferTx2D, &descSRV, &m_backBufferSRV);
if (FAILED(result))
{
    MessageBox((*(m_hwnd)), L"Creating BackBuffer SRV FAILED.", L"Error", MB_OK);
    return false;
}
// Create the render target view with the back buffer pointer.
result = m_device->CreateRenderTargetView(backBufferPtr, NULL, &m_renderTargetView);
First I render the scene in all white, then I copy that into the SRV's texture and bind it for the next shader that's supposed to sample it. I'm expecting a float4(1.0, 1.0, 1.0, 1.0) value to be returned when I sample the back buffer at the vertex's on-screen position.
https://i.imgur.com/N9CYg8c.png
As shown on the top left in the event browser, there were three DrawIndexed calls for rendering everything in white, followed by a CopyResource.
I've selected the next (fourth) DrawIndexed and on the right side outlined in red are the inputs for this next shader clearly showing that the backBuffer has been successfully bound to the vertex shader.
And now for the part that's giving me trouble
https://i.imgur.com/ENuXk0n.png
I'm going to be debugging this top-left vertex, as shown on the screenshot. The vertex shader has a
Texture2D prevBackBuffer: register(t0);
declaration written at the top.
https://i.imgur.com/8cihNsq.png
When trying to sample the left neighboring pixel, this line of code returns newCoord = float2(158, 220).
When entering these pixel values in the texture view, I get this pixel:
https://i.imgur.com/DT72Fl1.png
So the coordinates are OK so far, and as outlined I'm expecting a float4(0.0, 0.0, 0.0, 1.0) to be returned when I sample this pixel (I'm trying to count how many black pixels are around a vertex, and if there are any, color that vertex red in the pixel shader).
And yet, when I sample that pixel right after altering the pixel coordinates (since Load counts pixels from the bottom left, I need newCoord = float2(158, 379)), I get this:
https://i.imgur.com/8SuwOzz.png
Why is this? Even if the coordinate were out of range, Load should return all zeros. Since I'm not sure about the whole Load-counts-from-the-bottom-left thing, I also tried sampling using the top-left coordinates (158, 220) but ended up getting 0.0, ?, ?, ?.
I'm completely stumped and have no idea what to try next. I've tried using a sampler state:
// Create a clamp texture sampler state description.
samplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
samplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_CLAMP;
samplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_CLAMP;
samplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_CLAMP;
samplerDesc.MipLODBias = 0.0f;
samplerDesc.MaxAnisotropy = 1;
samplerDesc.ComparisonFunc = D3D11_COMPARISON_ALWAYS;
samplerDesc.BorderColor[0] = 0;
samplerDesc.BorderColor[1] = 0;
samplerDesc.BorderColor[2] = 0;
samplerDesc.BorderColor[3] = 0;
samplerDesc.MinLOD = 0;
samplerDesc.MaxLOD = D3D11_FLOAT32_MAX;
// Create the texture sampler state.
result = device->CreateSamplerState(&samplerDesc, &m_sampleStateClamp);
but still never get a proper float4 back when reading the texture.
Any ideas, suggestions, I'll take anything at this point.
Oh, and here's a RenderDoc file of the frame I was examining:
http://www.mediafire.com/file/1bfiqdpjkau4l0n/my_capture.rdc/file
So from my experience, reading from the back buffer is not really an operation that you want to be doing in the first place. If you have to do any operation on the rendered scene, the best way to do that is to render the scene to an intermediate texture, perform the operation on that texture, then render the final scene to the back buffer. This is generally how things like dynamic shadows are done - the scene is rendered from the perspective of the light, and the resulting buffer is interpreted to get a shadow value that is then applied to the final scene (this is also why dynamic light sources are limited in commercial game engines - they're rather expensive to use).
A similar idea can be applied here. First, render the whole scene to an intermediate texture, bound as a render target view (where the pixel format is specified by you, the programmer). Next, rebind that intermediate texture as a shader resource view, and render the scene again, using the edge detection shader and the real back buffer (where the pixel format is defined by the hardware).
This, fundamentally, is what I believe the issue is - a back buffer is a device dependent resource, and its format can change depending on the hardware. Therefore, using it from a shader is not safe, as you don't always know what the format will be. A device independent resource, on the other hand, will always have the same format, and you can safely use it however you like from a shader.
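A minimal sketch of what creating such an intermediate target might look like on the C++ side; width, height and the variable names are placeholders, and error handling is omitted:

// Sketch: an intermediate texture that can be both rendered to and sampled,
// with a format chosen by the application rather than by the swap chain.
D3D11_TEXTURE2D_DESC desc = {};
desc.Width            = width;               // match the back buffer size
desc.Height           = height;
desc.MipLevels        = 1;
desc.ArraySize        = 1;
desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage            = D3D11_USAGE_DEFAULT;
desc.BindFlags        = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

ID3D11Texture2D*          sceneTex = nullptr;
ID3D11RenderTargetView*   sceneRTV = nullptr;
ID3D11ShaderResourceView* sceneSRV = nullptr;
m_device->CreateTexture2D(&desc, nullptr, &sceneTex);
m_device->CreateRenderTargetView(sceneTex, nullptr, &sceneRTV);
m_device->CreateShaderResourceView(sceneTex, nullptr, &sceneSRV);

// Pass 1: bind sceneRTV with OMSetRenderTargets and draw the all-white scene.
// Pass 2: unbind sceneRTV, bind sceneSRV to the shader that needs to read it,
//         and render the final pass to the real back buffer RTV.

This also removes the need for the CopyResource step, since the first pass writes directly into a texture you are allowed to sample.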
I wasn't able to get sampling an SRV in the vertex shader to work, but what I was able to get working was using backBuffer.SampleLevel inside a compute shader.
I also had to change the sampler to something like this:
D3D11_SAMPLER_DESC samplerDesc;
samplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_POINT;
samplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_BORDER;
samplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_BORDER;
samplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_BORDER;
samplerDesc.MipLODBias = 0.0f;
samplerDesc.MaxAnisotropy = 1;
samplerDesc.ComparisonFunc = D3D11_COMPARISON_ALWAYS;
samplerDesc.BorderColor[0] = 0.5f;
samplerDesc.BorderColor[1] = 0.5f;
samplerDesc.BorderColor[2] = 0.5f;
samplerDesc.BorderColor[3] = 0.5f;
samplerDesc.MinLOD = 0;
samplerDesc.MaxLOD = 0;
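For reference, the C++ side of dispatching that compute shader might look roughly like this; m_deviceContext, m_edgeCS, m_backBufferSRV, m_pointBorderSampler and m_outputUAV are placeholder names for illustration, not code from the project:

// Sketch: bind the copied back buffer and the point/border sampler to the
// compute stage, then launch one thread per pixel (assuming an 8x8 thread
// group declared in the shader).
m_deviceContext->CSSetShader(m_edgeCS, nullptr, 0);
m_deviceContext->CSSetShaderResources(0, 1, &m_backBufferSRV);           // t0
m_deviceContext->CSSetSamplers(0, 1, &m_pointBorderSampler);             // s0
m_deviceContext->CSSetUnorderedAccessViews(0, 1, &m_outputUAV, nullptr); // u0
m_deviceContext->Dispatch((width + 7) / 8, (height + 7) / 8, 1);

// Unbind the SRV afterwards so the texture can be written/copied again.
ID3D11ShaderResourceView* nullSRV = nullptr;
m_deviceContext->CSSetShaderResources(0, 1, &nullSRV);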

D3D11: Rendering (depth) to texture results in red square, normal rendering works

I'm currently working on a D3D project and want to implement directional shadow mapping. I set everything up according to the Microsoft Guide, but it just doesn't work.
I've created a 2D texture object, a depth stencil view and a shader resource view and set them up using the following descriptions:
D3D11_TEXTURE2D_DESC shadowMapDesc;
ZeroMemory(&shadowMapDesc, sizeof(D3D11_TEXTURE2D_DESC));
shadowMapDesc.Width = width;
shadowMapDesc.Height = height;
shadowMapDesc.MipLevels = 1;
shadowMapDesc.ArraySize = 1;
shadowMapDesc.Format = DXGI_FORMAT_R24G8_TYPELESS;
shadowMapDesc.SampleDesc.Count = 1;
shadowMapDesc.SampleDesc.Quality = 0;
shadowMapDesc.Usage = D3D11_USAGE_DEFAULT;
shadowMapDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;
shadowMapDesc.CPUAccessFlags = 0;
shadowMapDesc.MiscFlags = 0;
ID3D11Device& d3ddev = dev.getD3DDevice();
uint32_t *initData = new uint32_t[width * height];
ZeroMemory(initData, sizeof(uint32_t) * width * height);
D3D11_SUBRESOURCE_DATA data;
ZeroMemory(&data, sizeof(D3D11_SUBRESOURCE_DATA));
data.pSysMem = initData;
data.SysMemPitch = sizeof(uint32_t) * width;
data.SysMemSlicePitch = 0;
HRESULT hr = d3ddev.CreateTexture2D(&shadowMapDesc, &data, &texture_);
D3D11_DEPTH_STENCIL_VIEW_DESC depthStencilViewDesc;
ZeroMemory(&depthStencilViewDesc, sizeof(D3D11_DEPTH_STENCIL_VIEW_DESC));
depthStencilViewDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
depthStencilViewDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
depthStencilViewDesc.Texture2D.MipSlice = 0;
hr = d3ddev.CreateDepthStencilView(texture_, &depthStencilViewDesc, &stencilView_);
D3D11_SHADER_RESOURCE_VIEW_DESC shaderResourceViewDesc;
ZeroMemory(&shaderResourceViewDesc, sizeof(D3D11_SHADER_RESOURCE_VIEW_DESC));
shaderResourceViewDesc.Format = DXGI_FORMAT_R24_UNORM_X8_TYPELESS;
shaderResourceViewDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
shaderResourceViewDesc.Texture2D.MipLevels = 1;
shaderResourceViewDesc.Texture2D.MostDetailedMip = 0;
hr = d3ddev.CreateShaderResourceView(texture_, &shaderResourceViewDesc, &shaderView_);
Between these steps there is additional error checking, but all the create-functions return successfully. I then bind the texture, render my scene and unbind the texture using the following functions:
void D3DDepthTexture2D::bindAsTarget(D3DDevice& dev)
{
    dev.getDeviceContext().ClearDepthStencilView(stencilView_, D3D11_CLEAR_DEPTH | D3D11_CLEAR_STENCIL, 1.0f, 0);
    // Bind target
    dev.getDeviceContext().OMSetRenderTargets(0, 0, stencilView_);
    // Set viewport
    dev.setViewport(static_cast<float>(width_), static_cast<float>(height_), 0.0f, 0.0f);
}

void D3DDepthTexture2D::unbindAsTarget(D3DDevice& dev, float width, float height)
{
    // Unbind target
    dev.resetRenderTarget();
    // Reset viewport
    dev.setViewport(width, height, 0.0f, 0.0f);
}
My render-to-depth-texture routine basically looks like this (removing all the unnecessary details):
camera = buildCameraFromLight(light);
setCameraCBuffer(camera);
bindTexture();
activateShader();
for(Object j : objects) {setTransformationCBuffer(j); renderObject(j);}
deactivateShader();
unbindTexture();
Rendering the scene from the light's perspective to the normal render target (screen) results in the proper image (both the actual image and just rendering the depth values). I use a simple vertex shader that just transforms the vertices and a pixel shader that does nothing at all OR returns the depth values (I tried both, doesn't change anything about the end result since we don't care about the color buffer).
After clearing the texture and rendering to it, I render it onto a quad to my screen, but all I get is a red square - so the depth value is 1.0f, the value I've cleared the texture to. I'm really at a loss for what to do, I tried everything, implemented every possible solution from online tutorials or changed things around on my own, but nothing helps. Here's a list of all the things I already checked:
All FAILED(hr)-calls return false, no error message is printed to the console
I tested whether the geometry gets transformed properly by rendering the geometry and their depth values (z / w) to screen, which worked and looked correct
I tested calculating the depth values in the fragment shader and rendering to a normal render target (basically trying to render my color buffer to texture) instead of a depth stencil texture, but that didn't work either, red square
I tested different formats and format combinations for the shadow map and the views, which either caused the creation to fail or didn't change a thing
I checked whether any call between setting and unsetting my texture as the render target during the render call reset the depth stencil target to something else - not the case
I debugged my texture-to-screen/quad rendering routine already and it works properly with other textures, so I am in fact seeing what the depth texture looks like
I changed the geometry and camera perspective around to see whether that makes anything visible in the depth texture - it doesn't
I came across this similar StackOverflow problem and checked whether my default depth stencil buffer had the same dimensions, AA settings etc. as my texture - and it does (count 1, quality 0)
I really don't know what's up, I've been trying to debug this for hours and hours. I hope someone here can give me any advice on what I'm doing wrong or what I could try to fix this. I'm using C++11 with Direct3D11.
Note: I can't debug any of this using NSight or any Visual Studio tools since they don't seem to work properly with my system right now and I don't have any administrative rights to fix any of it. I just have to deal with it for now. I hope the given information and code samples are enough to provide a rough idea of what I could also try to make this work.
Thanks in advance.
I got NSight to work and debugged the whole thing with that. Turns out the depth texture was properly created and filled with the depth and stencil data and I just forgot that all the depth information is stored in the first channel - so I ignored the g and b data and used 1.0 for a and it worked. Using the g and b channels somehow made the whole thing red (maybe someone wants to add to this and explain why).
Debugging this got much easier once I could observe the texture that is present in the shader. I should've used a debugging tool like NSight or RenderDoc way earlier. Thanks to @EgorShkorov for the advice.

Transparent spectrogram selection overlays

I'm trying to create transparent selection overlays on top of a spectrogram but it doesn't quite work. I mean the result is not really satisfactory. In contrast, the overlays painted on top of a waveform work well but I need to support both the waveform as well as the spectrogram view (and maybe other views in the future)
The selection overlay works fine in the waveform view
Here's the selection overlay in the spectrogram view (the selection looks really bad and obscures parts of the spectrogram)
The code (VCL) is the same for both views
void TWaveDisplayContainer::DrawSelectedRegion()
{
    if (selRange.selStart.x == selRange.selEnd.x)
    {
        DrawCursorPosition(selRange.selStart.x);
        return;
    }
    Graphics::TBitmap *pWaveBmp = eContainerView == WAVEFORM ? pWaveBmpLeft : pSfftBmpLeft;
    TRect selRect(selRange.selStart.x, 0, selRange.selEnd.x, pWaveLeft->Height);
    TCanvas *pCanvas = pWaveLeft->Canvas;
    int copyMode = pCanvas->CopyMode;

    pCanvas->Draw(0, 0, pWaveBmp);
    pCanvas->Brush->Color = clActiveBorder;
    pCanvas->CopyMode = cmSrcAnd;
    pCanvas->Rectangle(selRect);
    pCanvas->CopyRect(selRect, pWaveBmp->Canvas, selRect);
    pCanvas->CopyMode = copyMode;

    if (numChannels == 2)
    {
        TCanvas *pOtherCanvas = pWaveRight->Canvas;
        pWaveBmp = eContainerView == WAVEFORM ? pWaveBmpRight : pSfftBmpRight;

        pOtherCanvas->Draw(0, 0, pWaveBmp);
        pOtherCanvas->Brush->Color = clActiveBorder;
        pOtherCanvas->CopyMode = cmSrcAnd;
        pOtherCanvas->Rectangle(selRect);
        pOtherCanvas->CopyRect(selRect, pWaveBmp->Canvas, selRect);
        pOtherCanvas->CopyMode = copyMode;
    }
}
So I'm using the cmSrcAnd copy mode and the CopyRect method to do the actual painting/drawing (TCanvas corresponds to a device context, HDC, on Windows). I think that since a spectrogram, unlike a waveform, doesn't really have a single background colour, using simple mixing copy modes isn't going to work well in most cases.
Note that I can still accomplish what I want, but that would require messing with the individual pixels, which is something I'd like to avoid if possible.
I'm basically looking for an API (VCL wraps GDI so even WINAPI is fine) able to do this.
Any help is much appreciated
I'm going to answer my own question and hopefully this will prove useful to some people. Since there's apparently no way this can be achieved in either plain VCL or using WINAPI (except in some situations), I've written a simple function that blends a bitmap (32bpp / 24bpp) with an overlay colour (any colour).
The actual result will also depend on the weights (w0, w1) given to the red, green and blue components of an individual pixel. Changing these will produce an overlay that leans more toward the spectrogram colour or the overlay colour respectively.
The code
Graphics::TBitmap *TSelectionOverlay::GetSelectionOverlay(Graphics::TBitmap *pBmp, TColor selColour,
                                                          TRect &rect, EChannel eChannel)
{
    Graphics::TBitmap *pSelOverlay = eChannel == LEFT ? pSelOverlayLeft : pSelOverlayRight;

    const unsigned cGreenShift = 8;
    const unsigned cBlueShift = 16;
    const unsigned overlayWidth = abs(rect.right - rect.left);
    const unsigned overlayHeight = abs(rect.bottom - rect.top);

    pSelOverlay->Width = pBmp->Width;
    pSelOverlay->Height = pBmp->Height;

    const unsigned startOffset = rect.right > rect.left ? rect.left : rect.right;
    pSelOverlay->Assign(pBmp);

    unsigned char cRed0, cGreen0, cBlue0, cRed1, cGreen1, cBlue1, bRedColor0, bGreenColor0, bBlueColor0;
    cBlue0 = selColour >> cBlueShift;
    cGreen0 = (selColour >> cGreenShift) & 0xFF;
    cRed0 = selColour & 0xFF;

    unsigned *pPixel;
    for (int i = 0; i < overlayHeight; i++)
    {
        pPixel = (unsigned*)pSelOverlay->ScanLine[i]; // provides access to the pixel array
        for (int j = 0; j < overlayWidth; j++)
        {
            unsigned pixel = pPixel[startOffset + j];
            cBlue1 = pixel >> cBlueShift;
            cGreen1 = (pixel >> cGreenShift) & 0xFF;
            cRed1 = pixel & 0xFF;

            // Blend the current bitmap pixel with the overlay colour.
            const float w0 = 0.5f; // these weights influence the appearance of the overlay (here we use 50%)
            const float w1 = 0.5f;
            bRedColor0 = cRed0 * w0 + cRed1 * w1;
            bGreenColor0 = cGreen0 * w0 + cGreen1 * w1;
            bBlueColor0 = cBlue0 * w0 + cBlue1 * w1;

            pPixel[startOffset + j] = ((bBlueColor0 << cBlueShift) | (bGreenColor0 << cGreenShift)) | bRedColor0;
        }
    }
    return pSelOverlay;
}
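For completeness, here is a hedged sketch of how DrawSelectedRegion might consume this function; m_selectionOverlay is an invented TSelectionOverlay member used purely for illustration:

// Sketch only: replacing the cmSrcAnd CopyRect path with the blended overlay.
TRect selRect(selRange.selStart.x, 0, selRange.selEnd.x, pWaveLeft->Height);
Graphics::TBitmap *pSrcBmp  = eContainerView == WAVEFORM ? pWaveBmpLeft : pSfftBmpLeft;
Graphics::TBitmap *pOverlay = m_selectionOverlay->GetSelectionOverlay(pSrcBmp, clActiveBorder, selRect, LEFT);
pWaveLeft->Canvas->Draw(0, 0, pOverlay);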
Note that for some reason, CopyRect used with a CopyMode value of cmSrcCopy didn't work well so I used Draw instead.
pCanvas->CopyMode = cmSrcCopy;
pCanvas->CopyRect(dstRect, pSelOverlay->Canvas, srcRec);//this still didn't work well--possibly a bug
so I used
pCanvas->Draw(0,0, pSelOverlay);
The result

Directx11 Texture2D formats

For some reason, I cannot specify the DXGI_FORMAT_R32G32B32_FLOAT format when creating a 2D texture in Direct3D 11. I can do it just fine in OpenGL, however, and it also works fine when using DXGI_FORMAT_R32G32B32A32_FLOAT. I am using these textures as render targets for the g-buffer.
// create gbuffer textures/rendertargets
D3D11_TEXTURE2D_DESC textureDesc;
ZeroMemory(&textureDesc, sizeof(D3D11_TEXTURE2D_DESC));
textureDesc.Width = swapChainDesc.BufferDesc.Width;
textureDesc.Height = swapChainDesc.BufferDesc.Height;
textureDesc.ArraySize = 1;
textureDesc.MipLevels = 1;
textureDesc.Format = DXGI_FORMAT_R32G32B32_FLOAT; // <----- doesn't like this; returns E_INVALIDARG
textureDesc.SampleDesc.Count = 1;
textureDesc.Usage = D3D11_USAGE_DEFAULT;
textureDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
for (uint32_t index = 0; index < GBuffer::GBUFFER_NUM_RENDERTARGETS; index++)
{
    DXCALL(device->CreateTexture2D(&textureDesc, NULL, &mGeometryTextures[index]));
    DXCALL(device->CreateRenderTargetView(mGeometryTextures[index], NULL, &mRenderTargets[index]));
}
Why can't I use DXGI_FORMAT_R32G32B32_FLOAT when creating a 2D texture in DirectX 11?
I do not need the extra float in my texture, so I'd rather have just three channels than four.
Not all hardware supports using R32G32B32_FLOAT as a render-target and shader-resource (it's optional). You can verify whether the hardware supports the format for those uses by calling CheckFormatSupport. If it is succeeding on the same hardware with OpenGL, this likely means OpenGL is padding the resource out to the full 4-channel variant behind the scenes.
DXGI_FORMAT_R32G32B32_FLOAT support for render targets is optional: http://msdn.microsoft.com/en-us/library/windows/desktop/ff471325(v=vs.85).aspx#RenderTarget
If you think that this format should be supported by your device then turn on debug output as MooseBoys suggested. This should explain why you're getting E_INVALIDARG.
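A hedged sketch of such a support check before creating the g-buffer textures, using the same device pointer as above:

// Sketch: ask the device whether R32G32B32_FLOAT is usable as both a render
// target and a shader resource, and fall back to the 4-channel format if not.
UINT support = 0;
HRESULT hr = device->CheckFormatSupport(DXGI_FORMAT_R32G32B32_FLOAT, &support);

const UINT needed = D3D11_FORMAT_SUPPORT_RENDER_TARGET |
                    D3D11_FORMAT_SUPPORT_SHADER_SAMPLE;

DXGI_FORMAT gbufferFormat = DXGI_FORMAT_R32G32B32A32_FLOAT; // safe fallback
if (SUCCEEDED(hr) && (support & needed) == needed)
    gbufferFormat = DXGI_FORMAT_R32G32B32_FLOAT;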

How should my texture sampler be for rendering a bitmap font/sprite?

I have a texture and was curious what the texture sampler should be for sampling the sprite texture. I am using DirectX 11, though if you know what it should be for DX9/10, I believe it is transferable.
I tried
AddressU = D3D11_TEXTURE_ADDRESS_WRAP
AddressV = D3D11_TEXTURE_ADDRESS_WRAP
AddressW = D3D11_TEXTURE_ADDRESS_WRAP
ComparisonFunc = D3D11_COMPARISON_NEVER
Filter = D3D11_FILTER_MIN_MAG_MIP_POINT
MaxAnisotropy = 1;
MaxLOD = D3D11_FLOAT32_MAX;
MinLOD = 0;
MipLODBias = 0;
However, when rendering, there appeared to be artifacts and the result did not seem as clear as it should be.
This is an example of what the artifacts look like: in the top text with the light blue background you can see artifacts (for example, on the A and C). The bottom text with the black background is the original image.