Load a .obj model with ASSIMP in DirectX9 - c++

This is my first time posting. I'm having an issue with the 3D model loading library ASSIMP while trying to integrate it into a sample Direct3D 9 app. I'm an experienced C++ programmer and have written several D3D9 apps in the past that render manually built primitives, but now that I'm trying to render an .obj model loaded with ASSIMP, NOTHING is rendered at all, not even one poly. This is very frustrating, as I have spent a week trying to fix this one problem and searching on Google returns no useful results. Here is my code; please take a look and help me understand what I am doing wrong. Also, if you know of a link to a DirectX 9 ASSIMP example, that would be appreciated, as Google only shows OpenGL. Any help will be much appreciated, thanks :)
bool Mesh::LoadMesh(const std::string& Filename)
{
Assimp::Importer Importer;
const aiScene *pScene = NULL;
const aiMesh *pMesh = NULL;
pScene = Importer.ReadFile(Filename.c_str(), aiProcess_Triangulate | aiProcess_ConvertToLeftHanded | aiProcess_ValidateDataStructure | aiProcess_FindInvalidData);
if (!pScene)
{
printf("Error parsing '%s': '%s'\n", Filename.c_str(), Importer.GetErrorString());
return false;
}
pMesh = pScene->mMeshes[0];
if (!pMesh)
{
printf("Error Finding Model In file. Did you export an empty scene?");
return false;
}
for (unsigned int i = 0; i < pMesh->mNumFaces; i++)
{
if (pMesh->mFaces[i].mNumIndices == 3)
{
m_NumIndices = m_NumIndices + 3;
}
else
{
printf("Error parsing Faces. Try to Re-Export model from 3d package!");
return false;
}
}
m_NumFaces = pMesh->mNumFaces;
m_NumVertecies = pMesh->mNumVertices;
ZeroMemory(&m_pVB, sizeof(m_pVB));
m_pRenderDevice->CreateVertexBuffer(sizeof(Vertex) * m_NumVertecies, 0, VertexFVF, D3DPOOL_DEFAULT, &m_pVB, NULL);
m_pVB->Lock(0, 0, (void**)&m_pVertecies, 0);
for (int i = 0; i < pMesh->mNumVertices; i++)
{
Vertex *pvertex = new Vertex(D3DXVECTOR3(pMesh->mVertices[i].x, pMesh->mVertices[i].y, pMesh->mVertices[i].z), D3DXVECTOR2(pMesh->mTextureCoords[0][i].x, pMesh->mTextureCoords[0][i].y), D3DXVECTOR3(pMesh->mNormals[i].x, pMesh->mNormals[i].y, pMesh->mNormals[i].z));
m_pVertecies[i] = pvertex;
}
m_pVB->Unlock();
return true;
}
void Mesh::Render()
{
m_pRenderDevice->SetStreamSource(0, m_pVB, 0, sizeof(Vertex));
m_pRenderDevice->SetFVF(VertexFVF);
m_pRenderDevice->DrawPrimitive(D3DPT_TRIANGLELIST, 0, m_NumFaces);
}
void Render()
{
D3DCOLOR Color = D3DCOLOR_ARGB(255, 0, 0, 255);
//Clear the Z and Back buffers
g_pRenderDevice->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, Color, 1.0f, 0);
g_pRenderDevice->BeginScene();
InitializeViewMatrix();
D3DXMATRIX Scale;
D3DXMatrixScaling(&Scale, CameraScaleX, CameraScaleY, CameraScaleZ);
D3DXMATRIX Rotation;
CameraRotX += 0.025;
D3DXMatrixRotationYawPitchRoll(&Rotation, CameraRotX, CameraRotY, CameraRotZ);
g_pRenderDevice->SetTransform(D3DTS_WORLD, &D3DXMATRIX(Scale * Rotation));
if (pMesh)
{
pMesh->Render();
}
g_pRenderDevice->EndScene();
g_pRenderDevice->Present(NULL, NULL, NULL, NULL);
}

I might be getting old, but I can't find anything wrong in this code. Are you sure your pointers are all pointing where they should?
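For instance, one thing worth double-checking is the lock/copy loop: it news up a Vertex and stores the pointer, rather than copying the vertex data itself into the locked buffer. A minimal sketch of what that part usually looks like (assuming m_pVertecies is declared as Vertex* and Vertex matches VertexFVF; this is a sketch, not a drop-in fix):
Vertex* pVertices = nullptr;
m_pVB->Lock(0, 0, reinterpret_cast<void**>(&pVertices), 0);
for (unsigned int i = 0; i < pMesh->mNumVertices; ++i)
{
    // Copy the data by value instead of storing a pointer to a heap allocation.
    pVertices[i] = Vertex(
        D3DXVECTOR3(pMesh->mVertices[i].x, pMesh->mVertices[i].y, pMesh->mVertices[i].z),
        D3DXVECTOR2(pMesh->mTextureCoords[0][i].x, pMesh->mTextureCoords[0][i].y),
        D3DXVECTOR3(pMesh->mNormals[i].x, pMesh->mNormals[i].y, pMesh->mNormals[i].z));
}
m_pVB->Unlock();
Also note that DrawPrimitive(D3DPT_TRIANGLELIST, 0, m_NumFaces) reads 3 * m_NumFaces vertices straight from the vertex buffer, so unless the mesh is already laid out per-face you'd want an index buffer and DrawIndexedPrimitive instead.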

Related

Why is FreeType outputting the glyph bitmap weirdly?

I'm still new to FreeType, and at the moment, I'm just trying to save each glyph's bitmap into a png. I can loop through each glyph and save the png. But my main issue is just the fact that the bitmaps look really weird. I honestly don't know if it's meant to be like that (maybe it's a format), or it's just not reading the bitmap correctly.
FT_Library library;
FT_Face face;
auto error = FT_Init_FreeType(&library);
if (error) {
LOGGER_LOG("FONT_RESOURCE", "Failed to initialize freetype library.");
return;
}
error = FT_New_Face(library, path, 0, &face);
if (error == FT_Err_Unknown_File_Format) {
LOGGER_LOG("FONT_RESOURCE", "Failed To Load Font, Unsupported File Format Detected.");
return;
}
FT_Set_Pixel_Sizes(face, 0, 64);
FT_GlyphSlot slot = face->glyph;
for (unsigned int i = 0; i < face->num_glyphs; i++) {
if (FT_Load_Char(face, i, FT_LOAD_RENDER)) continue;
if (FT_Render_Glyph(slot, FT_RENDER_MODE_NORMAL)) continue;
lodepng_encode32_file(std::string("CharacterTests/" + std::to_string(i) + ".png").c_str(), slot->bitmap.buffer, slot->bitmap.width, slot->bitmap.rows);
}
FT_Done_Face(face);
FT_Done_FreeType(library);
Here's all of my current code. It saves fine, just the result looks weird. For example, here's the A character:
If someone could please just explain why this is happening, or even be able to help to push me in the right direction, that would be much appreciated. My next steps after this is to make a font atlas, but I think I already know how I'm gonna do that.
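One thing worth double-checking (an observation about the two libraries rather than a tested fix): lodepng_encode32_file expects 32-bit RGBA pixels, while FT_RENDER_MODE_NORMAL produces an 8-bit grayscale bitmap whose rows are slot->bitmap.pitch bytes apart, so handing the glyph buffer straight to the encoder could explain the scrambled output. A minimal sketch of expanding a glyph to RGBA first (names and the output path are placeholders mirroring your loop):
// Expand FreeType's 8-bit grayscale coverage into RGBA for lodepng (needs <vector>).
const FT_Bitmap& bmp = slot->bitmap;
std::vector<unsigned char> rgba(bmp.width * bmp.rows * 4);
for (unsigned int y = 0; y < bmp.rows; ++y) {
    for (unsigned int x = 0; x < bmp.width; ++x) {
        unsigned char v = bmp.buffer[y * bmp.pitch + x]; // note: pitch, not width
        unsigned char* px = &rgba[(y * bmp.width + x) * 4];
        px[0] = px[1] = px[2] = 255; // white glyph
        px[3] = v;                   // coverage becomes alpha
    }
}
lodepng_encode32_file(("CharacterTests/" + std::to_string(i) + ".png").c_str(), rgba.data(), bmp.width, bmp.rows);
It may also be worth noting that FT_Load_Char takes a character code, while the loop variable here is a glyph index; FT_Load_Glyph is the index-based call.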

Assimp GLTF: Meshes not properly scaled (?)

I am currently trying to load the Sponza model for my PBR renderer. It is in the glTF format. I have been able to load smaller models quite successfully, but when I try to load the Sponza scene, some meshes aren't properly scaled. This image demonstrates my issue (I am not actually doing the PBR calculations here, that's just the albedo of the model). The wall is there, but it's only a 1x1 quad with the wall texture, even though it's supposed to be a lot bigger and stretch across the entire model. The same goes for every wall and every floor in the model. The model itself is not broken, as Blender and the default Windows model viewer can load it correctly. I am even applying mNode->mTransformation, but it still doesn't work. My model loading code looks kind of like this:
void Model::LoadModel(const fs::path& filePath)
{
Assimp::Importer importer;
std::string pathString = filePath.string();
m_Scene = importer.ReadFile(pathString, aiProcess_Triangulate | aiProcess_GenNormals | aiProcess_CalcTangentSpace);
if (!m_Scene || m_Scene->mFlags & AI_SCENE_FLAGS_INCOMPLETE || !m_Scene->mRootNode)
{
// Error handling
}
ProcessNode(m_Scene->mRootNode, m_Scene, glm::mat4(1.0f));
}
void Model::ProcessNode(aiNode* node, const aiScene* m_Scene, glm::mat4 parentTransformation)
{
glm::mat4 transformation = AiMatrix4x4ToGlm(&node->mTransformation);
glm::mat4 globalTransformation = transformation * parentTransformation;
for (int i = 0; i < node->mNumMeshes; i++)
{
aiMesh* assimpMesh = m_Scene->mMeshes[node->mMeshes[i]];
// This will just get the vertex data, indices, tex coords, etc.
Mesh neroMesh = ProcessMesh(assimpMesh, m_Scene);
neroMesh.SetTransformationMatrix(globalTransformation);
m_Meshes.push_back(neroMesh);
}
for (int i = 0; i < node->mNumChildren; i++)
{
ProcessNode(node->mChildren[i], m_Scene, globalTransformation);
}
}
glm::mat4 Model::AiMatrix4x4ToGlm(const aiMatrix4x4* from)
{
glm::mat4 to;
to[0][0] = (GLfloat)from->a1; to[0][1] = (GLfloat)from->b1; to[0][2] = (GLfloat)from->c1; to[0][3] = (GLfloat)from->d1;
to[1][0] = (GLfloat)from->a2; to[1][1] = (GLfloat)from->b2; to[1][2] = (GLfloat)from->c2; to[1][3] = (GLfloat)from->d2;
to[2][0] = (GLfloat)from->a3; to[2][1] = (GLfloat)from->b3; to[2][2] = (GLfloat)from->c3; to[2][3] = (GLfloat)from->d3;
to[3][0] = (GLfloat)from->a4; to[3][1] = (GLfloat)from->b4; to[3][2] = (GLfloat)from->c4; to[3][3] = (GLfloat)from->d4;
return to;
}
I don't think the ProcessMesh function is necessary for this, but if it is I can post it as well.
Does anyone see any issues? I am really getting desperate over this...
Turns out I am the dumbest human being on earth. I implemented Parallax Mapping in my code, but disabled it immediately after as it was not working correctly on every model. BUT I still had this line in my shader code:
if (texCoords.x > 1.0 || texCoords.y > 1.0 || texCoords.x < 0.0 || texCoords.y < 0.0)
discard;
Which ended up removing some parts of my model.
If I had to rate my stupidity on a scale from 1 to 10, I'd give it a 100000000000.

How to Render To Texture in DirectX12 & C++? What is the process?

I have been trying to figure out how to render the entire scene to a texture in DX12. I know how to do this in OpenGL, but I'm having trouble figuring it out in DirectX12. Plus, there aren't many resources online on how it's done.
(Currently we have a 3D model rendering in the scene with a texture applied.)
Would anyone be able to point me towards some resources or good websites that I can use to learn about render targets and rendering to a texture in DX12?
Any help is much appreciated.
Kind regards,
Charlie
OpenGL is more like Direct3D 11, where Direct3D 12 and Vulkan are more alike in terms of design/usage and level of graphics knowledge needed to use them effectively. As such, you may find it easier to start with Direct3D 11 before jumping into Direct3D 12 rendering. The concepts and HLSL programming are all very similar between 11 & 12, so it can be a good place to start.
The biggest thing to know about DirectX 12 is that it makes the application (i.e. the programmer) responsible for many aspects that were handled by the Direct3D 11 runtime: CPU/GPU synchronization, memory management, resource scheduling, etc. DirectX 12 is intended to give the experienced graphics programmer more control, and therefore the ability to achieve higher levels of CPU-side performance for the same complexity of rendering. This additional control and responsibility, however, can be overwhelming for someone new to graphics or DirectX. It's much easier in DX12 to write something that 'works on my machine' but won't run, or even crashes, on other people's machines.
With all that said, some good resources for starting with Direct3D 12:
There is a new 'landing page' for DirectX here with many useful links and resources for DirectX 12 development: https://devblogs.microsoft.com/directx/landing-page/
Official DirectX 12 samples written by the DirectX graphics team are at DirectX-Graphics-Samples.
Public samples written by the Xbox Advanced Technology Group are at Xbox-ATG-Samples. In particular, see the IntroGraphics samples which offer many basic samples in both DX11 & DX12 form.
The DirectX Tool Kit is an open-source C++ library that provides helpers for getting going with Direct3D development. There are both DirectX 11 and DirectX 12 versions. If you learn the DX 11 version first, it's pretty simple to move over to DX 12 from there as it handles a number of the 'house-keeping' tasks for you as you learn the new API.
As for the question of 'rendering-to-texture' in DirectX 12, there are some specific samples to look at:
SimpleMSAA does render-to-texture.
This HDR rendering tutorial for DirectX Tool Kit for DX12 does render-to-texture.
The second one uses this helper class: h / cpp.
class RenderTexture
{
public:
RenderTexture(DXGI_FORMAT format) noexcept;
void SetDevice(_In_ ID3D12Device* device, D3D12_CPU_DESCRIPTOR_HANDLE srvDescriptor, D3D12_CPU_DESCRIPTOR_HANDLE rtvDescriptor);
void SizeResources(size_t width, size_t height);
void ReleaseDevice() noexcept;
void TransitionTo(_In_ ID3D12GraphicsCommandList* commandList, D3D12_RESOURCE_STATES afterState);
void BeginScene(_In_ ID3D12GraphicsCommandList* commandList)
{
TransitionTo(commandList, D3D12_RESOURCE_STATE_RENDER_TARGET);
}
void EndScene(_In_ ID3D12GraphicsCommandList* commandList)
{
TransitionTo(commandList, D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE);
}
void SetClearColor(DirectX::FXMVECTOR color)
{
DirectX::XMStoreFloat4(reinterpret_cast<DirectX::XMFLOAT4*>(m_clearColor), color);
}
ID3D12Resource* GetResource() const noexcept { return m_resource.Get(); }
D3D12_RESOURCE_STATES GetCurrentState() const noexcept { return m_state; }
void SetWindow(const RECT& rect);
DXGI_FORMAT GetFormat() const noexcept { return m_format; }
private:
Microsoft::WRL::ComPtr<ID3D12Device> m_device;
Microsoft::WRL::ComPtr<ID3D12Resource> m_resource;
D3D12_RESOURCE_STATES m_state;
D3D12_CPU_DESCRIPTOR_HANDLE m_srvDescriptor;
D3D12_CPU_DESCRIPTOR_HANDLE m_rtvDescriptor;
float m_clearColor[4];
DXGI_FORMAT m_format;
size_t m_width;
size_t m_height;
};
RenderTexture::RenderTexture(DXGI_FORMAT format) noexcept :
m_state(D3D12_RESOURCE_STATE_COMMON),
m_srvDescriptor{},
m_rtvDescriptor{},
m_clearColor{},
m_format(format),
m_width(0),
m_height(0)
{
}
void RenderTexture::SetDevice(_In_ ID3D12Device* device, D3D12_CPU_DESCRIPTOR_HANDLE srvDescriptor, D3D12_CPU_DESCRIPTOR_HANDLE rtvDescriptor)
{
if (device == m_device.Get()
&& srvDescriptor.ptr == m_srvDescriptor.ptr
&& rtvDescriptor.ptr == m_rtvDescriptor.ptr)
return;
if (m_device)
{
ReleaseDevice();
}
{
D3D12_FEATURE_DATA_FORMAT_SUPPORT formatSupport = { m_format, D3D12_FORMAT_SUPPORT1_NONE, D3D12_FORMAT_SUPPORT2_NONE };
if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_FORMAT_SUPPORT, &formatSupport, sizeof(formatSupport))))
{
throw std::runtime_error("CheckFeatureSupport");
}
UINT required = D3D12_FORMAT_SUPPORT1_TEXTURE2D | D3D12_FORMAT_SUPPORT1_RENDER_TARGET;
if ((formatSupport.Support1 & required) != required)
{
#ifdef _DEBUG
char buff[128] = {};
sprintf_s(buff, "RenderTexture: Device does not support the requested format (%u)!\n", m_format);
OutputDebugStringA(buff);
#endif
throw std::runtime_error("RenderTexture");
}
}
if (!srvDescriptor.ptr || !rtvDescriptor.ptr)
{
throw std::runtime_error("Invalid descriptors");
}
m_device = device;
m_srvDescriptor = srvDescriptor;
m_rtvDescriptor = rtvDescriptor;
}
void RenderTexture::SizeResources(size_t width, size_t height)
{
if (width == m_width && height == m_height)
return;
if (width > UINT32_MAX || height > UINT32_MAX)
{
throw std::out_of_range("Invalid width/height");
}
if (!m_device)
return;
m_width = m_height = 0;
auto heapProperties = CD3DX12_HEAP_PROPERTIES(D3D12_HEAP_TYPE_DEFAULT);
D3D12_RESOURCE_DESC desc = CD3DX12_RESOURCE_DESC::Tex2D(m_format,
static_cast<UINT64>(width),
static_cast<UINT>(height),
1, 1, 1, 0, D3D12_RESOURCE_FLAG_ALLOW_RENDER_TARGET);
D3D12_CLEAR_VALUE clearValue = { m_format, {} };
memcpy(clearValue.Color, m_clearColor, sizeof(clearValue.Color));
m_state = D3D12_RESOURCE_STATE_RENDER_TARGET;
// Create a render target
ThrowIfFailed(
m_device->CreateCommittedResource(&heapProperties, D3D12_HEAP_FLAG_ALLOW_ALL_BUFFERS_AND_TEXTURES,
&desc,
m_state, &clearValue,
IID_GRAPHICS_PPV_ARGS(m_resource.ReleaseAndGetAddressOf()))
);
SetDebugObjectName(m_resource.Get(), L"RenderTexture RT");
// Create RTV.
m_device->CreateRenderTargetView(m_resource.Get(), nullptr, m_rtvDescriptor);
// Create SRV.
m_device->CreateShaderResourceView(m_resource.Get(), nullptr, m_srvDescriptor);
m_width = width;
m_height = height;
}
void RenderTexture::ReleaseDevice() noexcept
{
m_resource.Reset();
m_device.Reset();
m_state = D3D12_RESOURCE_STATE_COMMON;
m_width = m_height = 0;
m_srvDescriptor.ptr = m_rtvDescriptor.ptr = 0;
}
void RenderTexture::TransitionTo(_In_ ID3D12GraphicsCommandList* commandList, D3D12_RESOURCE_STATES afterState)
{
TransitionResource(commandList, m_resource.Get(), m_state, afterState);
m_state = afterState;
}
void RenderTexture::SetWindow(const RECT& output)
{
// Determine the render target size in pixels.
auto width = size_t(std::max<LONG>(output.right - output.left, 1));
auto height = size_t(std::max<LONG>(output.bottom - output.top, 1));
SizeResources(width, height);
}
You'd use it like this:
// Setup
m_scene = std::make_unique<DX::RenderTexture>( /* format that matches your resource and your Pipeline State Objects you will use to render */ );
m_scene->SetClearColor( /* color value you use to clear */ );
m_scene->SetDevice(m_device,
/* CPU descriptor handle for your scene as a SRV texture */,
/* CPU descriptor handle for your scene as a RTV texture */);
m_scene->SetWindow( /* provide viewport size for your render texture */ );
// Reset command list and allocator.
// Transition the backbuffer target into the correct state to allow for rendering
// Clear the render texture
CD3DX12_CPU_DESCRIPTOR_HANDLE rtvDescriptor(
/* CPU descriptor handle for your scene as a RTV texture */,
static_cast<INT>(m_backBufferIndex), m_rtvDescriptorSize);
CD3DX12_CPU_DESCRIPTOR_HANDLE dsvDescriptor(m_dsvDescriptorHeap->GetCPUDescriptorHandleForHeapStart());
m_commandList->OMSetRenderTargets(1, &rtvDescriptor, FALSE, &dsvDescriptor);
m_commandList->ClearRenderTargetView(rtvDescriptor, /* clear color */, 0, nullptr);
m_commandList->ClearDepthStencilView(dsvDescriptor, D3D12_CLEAR_FLAG_DEPTH, 1.0f, 0, 0, nullptr);
// Set the viewport and scissor rect.
D3D12_VIEWPORT viewport = { 0.0f, 0.0f, /* width/height of your render texture */, D3D12_MIN_DEPTH, D3D12_MAX_DEPTH };
D3D12_RECT scissorRect = { 0, 0, /* width/height of your render texture */ };
m_commandList->RSSetViewports(1, &viewport);
m_commandList->RSSetScissorRects(1, &scissorRect);
// Tell helper we are starting the render
m_scene->BeginScene(m_commandList);
/* Do rendering to m_commandList */
m_scene->EndScene(m_commandList);
Here we've scheduled the transition to render target resource state, populated all the draw calls, and then inserted a barrier back to the pixel shader resource state. At that point, you can use the descriptor handle to your render-texture's SRV to render. As with all things DirectX 12, nothing happens until you actually close the command-list and submit it for execution.
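A rough sketch of the follow-up pass that actually puts the render texture on screen (names like m_rootSignature, m_fullscreenPSO, m_srvDescriptorHeap and m_sceneSrvGpuHandle are placeholders for whatever your app uses; it assumes the swap-chain render target, viewport and scissor have already been set for the final pass):
// Draw a fullscreen triangle that samples the scene render texture.
m_commandList->SetGraphicsRootSignature(m_rootSignature.Get());
m_commandList->SetPipelineState(m_fullscreenPSO.Get());
ID3D12DescriptorHeap* heaps[] = { m_srvDescriptorHeap.Get() };
m_commandList->SetDescriptorHeaps(_countof(heaps), heaps);
m_commandList->SetGraphicsRootDescriptorTable(0, m_sceneSrvGpuHandle);
m_commandList->IASetPrimitiveTopology(D3D_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
m_commandList->DrawInstanced(3, 1, 0, 0); // vertex shader generates the fullscreen triangle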

SDL2 load certain texture make SDL_RenderFillRect color weird

I made a program that has two different states: one for menu display ("Menu State") and the other for drawing some stuff ("Draw State").
But I came across a weird thing: if I load certain PNGs as textures and copy them to the renderer for display, then leave the "Menu State" to enter the "Draw State", the textures will somehow make the rectangle color not display properly (for example, green goes dark).
In my code, switching to a new state (which invokes MenuState::onExit()) erases the texture map (a map of texture smart pointers indexed by std::string),
so the loaded textures don't even exist in the "Draw State".
I couldn't figure out what went wrong. Here is some of my code:
void TextureManager::DrawPixel(int x, int y, int width, int height, SDL_Renderer *pRenderer)
{
SDL_Rect rect;
rect.x = x;
rect.y = y;
rect.w = width;
rect.h = height;
SDL_SetRenderDrawColor(pRenderer, 0, 255, 0, 255);//same color value
SDL_RenderFillRect(pRenderer, &rect);
}
static bool TextureManagerLoadFile(std::string filename, std::string id)
{
return TextureManager::Instance().Load(filename, id, Game::Instance().GetRenderer());
}
bool TextureManager::Load(std::string filename, std::string id, SDL_Renderer *pRenderer)
{
if(m_textureMap.count(id) != 0)
{
return false;
}
SDL_Surface *pTempSurface = IMG_Load(filename.c_str());
SDL_Texture *pTexutre = SDL_CreateTextureFromSurface(pRenderer, pTempSurface);
SDL_FreeSurface(pTempSurface);
if(pTexutre != 0)
{
m_textureMap[id] = std::make_unique<textureData>(pTexutre, 0, 0);
SDL_QueryTexture(pTexutre, NULL, NULL, &m_textureMap[id]->width, &m_textureMap[id]->height);
return true;
}
return false;
}
void TextureManager::ClearFromTextureMap(std::string textureID)
{
m_textureMap.erase(textureID);
}
bool MenuState::onEnter()
{
if(!TextureManagerLoadFile("assets/Main menu/BTN PLAY.png", "play_button"))
{
return false;
}
if(!TextureManagerLoadFile("assets/Main menu/BTN Exit.png", "exit_button"))
//replacing this with a different png file will also affect the outcome
{
return false;
}
if(!TextureManagerLoadFile("assets/Main menu/BTN SETTINGS.png", "setting_button"))
{
return false;
}
int client_w,client_h;
SDL_GetWindowSize(Game::Instance().GetClientWindow(),&client_w, &client_h);
int playBtn_w = TextureManager::Instance().GetTextureWidth("play_button");
int playBtn_h = TextureManager::Instance().GetTuextureHeight("play_button");
int center_x = (client_w - playBtn_w) / 2;
int center_y = (client_h - playBtn_h) / 2;
ParamsLoader pPlayParams(center_x, center_y, playBtn_w, playBtn_h, "play_button");
int settingBtn_w = TextureManager::Instance().GetTextureWidth("setting_button");
int settingBtn_h = TextureManager::Instance().GetTuextureHeight("setting_button");
ParamsLoader pSettingParams(center_x , center_y + (playBtn_h + settingBtn_h) / 2, settingBtn_w, settingBtn_h, "setting_button");
int exitBtn_w = TextureManager::Instance().GetTextureWidth("exit_button");
int exitBtn_h = TextureManager::Instance().GetTuextureHeight("exit_button");
ParamsLoader pExitParams(10, 10, exitBtn_w, exitBtn_h, "exit_button");
m_gameObjects.push_back(std::make_shared<MenuUIObject>(&pPlayParams, s_menuToPlay));
m_gameObjects.push_back(std::make_shared<MenuUIObject>(&pSettingParams, s_menuToPlay));
m_gameObjects.push_back(std::make_shared<MenuUIObject>(&pExitParams, s_menuExit));
//changing the order of the 3 lines of code above,
//or using a different png for the exit button, will make the rectangle color different
std::cout << "Entering Menu State" << std::endl;
return true;
}
bool MenuState::onExit()
{
for(auto i : m_gameObjects)
{
i->Clean();
}
m_gameObjects.clear();
TextureManager::Instance().ClearFromTextureMap("play_button");
TextureManager::Instance().ClearFromTextureMap("exit_button");
TextureManager::Instance().ClearFromTextureMap("setting_button");
std::cout << "Exiting Menu State" << std::endl;
return true;
}
void Game::Render()
{
SDL_SetRenderDrawColor(m_pRenderer, 255, 255, 255, 255);
SDL_RenderClear(m_pRenderer);
m_pGameStateMachine->Render();
SDL_RenderPresent(m_pRenderer);
}
(Screenshots: menu state figure, correct color, wrong color.)
Edit: Also, I found out that this weird phenomenon only happens when the renderer was created with the SDL_RENDERER_ACCELERATED flag and a -1 or 0 driver index; i.e. SDL_CreateRenderer(m_pWindow, 1, SDL_RENDERER_ACCELERATED); or SDL_CreateRenderer(m_pWindow, -1, SDL_RENDERER_SOFTWARE); works fine!
I have been plagued by this very same issue. The link provided by ekodes is how I resolved it, as the order of operations had no effect for me.
I was able to pull the D3D9 device via SDL_RenderGetD3D9Device(), then call SetTextureStageState as described in ekodes' d3d blending link.
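In case it saves someone a search, a sketch of what that ends up looking like (the exact stage states come from the linked post, so treat this as an approximation rather than a verified fix; pRenderer is a placeholder for your SDL_Renderer):
#include <SDL_system.h> // SDL_RenderGetD3D9Device, Windows/direct3d backend only
#include <d3d9.h>
IDirect3DDevice9* d3dDevice = SDL_RenderGetD3D9Device(pRenderer);
if (d3dDevice)
{
    // Make untextured primitives take their color from the diffuse (draw) color
    // instead of being modulated by whatever texture was bound last.
    d3dDevice->SetTextureStageState(0, D3DTSS_COLOROP, D3DTOP_SELECTARG2);
    d3dDevice->SetTextureStageState(0, D3DTSS_COLORARG2, D3DTA_DIFFUSE);
}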
I was having the same issue. I got a vibrant green color when trying to render a light gray.
The combination of parameters that fixes the issue for you pertains to the driver to be used: -1 selects the first driver that meets the criteria, in this case one that is accelerated.
Using SDL_GetRendererInfo I was able to see this happens when using the "direct3d" driver.
I found this question talking about blending in direct3d.
I figured it out eventually. In addition to alpha blending there is color blending, so DirectX merges the color of the last texture with the last primitive.
The question does provide a fix for this in DirectX, however I'm not sure how to apply it in regards to SDL. I also have not been able to find a solution for this problem in SDL.
I was drawing Green text with SDL_ttf, which uses a texture. Then drawing a gray rectangle for another component elsewhere on the screen.
What's strange is it doesn't seem to happen all the time. However, mine seems to predominantly happen with SDL_ttf. At first I thought it might be a byproduct of TTF_RenderText_Blended; however, it happens with the other render functions as well. It also does not appear to be affected by the blend mode of the texture generated by those functions.
So in my case, I was able to change the order of the operations to get the correct color.
Alternatively, using the OpenGL driver appeared to fix this as well. Similar to what you mentioned. (This was driver index 1 for me)
I'm not sure this classifies as an "Answer" but hopefully it helps someone out or points them in the right direction.
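As a footnote, if you want to prefer the OpenGL backend without hard-coding a driver index, one option (a sketch, not tested against the original poster's setup; pWindow is a placeholder) is to set the render-driver hint before creating the renderer:
// Ask SDL to prefer the OpenGL render driver instead of "direct3d".
SDL_SetHint(SDL_HINT_RENDER_DRIVER, "opengl");
SDL_Renderer* pRenderer = SDL_CreateRenderer(pWindow, -1, SDL_RENDERER_ACCELERATED);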

Vulkan failing to clear depth

I've been working with Vulkan for the past couple of weeks and I've run into a problem that only happens on AMD cards, specifically the AMD 7970M. I've run my project on GTX 700 and 900 series cards with no problem. I've even run on Windows and Linux (SteamOS) with Nvidia cards without a hitch. The problem only shows up on AMD cards and only with my project; all the samples and projects from Sascha Willems run with no problem.
Right now I am drawing a textured Raptor model and spinning it in place. I render that off to a texture and then apply that texture to a fullscreen triangle; basic offscreen rendering. However, the depth doesn't seem to clear correctly on my 7970M. Instead I get this weird artifacting, as if the depth isn't being cleared properly:
Of course I tried digging into this with RenderDoc, and the depth is totally wrong. Both the Raptor and the fullscreen triangle it's drawn onto are just a mess:
I've tried comparing my code to the Offscreen example from Sascha Willems and I appear to be doing almost everything the same way. I thought maybe something was wrong with the way I created my depth image, but it seems fine in comparison to all the examples I've seen.
Here are some debug views of where I am creating the depth image and view:
Here's the whole method:
bool VKRenderTarget::setupFramebuffer(VKRenderer* renderer)
{
VkDevice device = renderer->GetVKDevice();
VkCommandBuffer setupCommand;
m_colorFormat = renderer->GetPreferredImageFormat();
m_depthFormat = renderer->GetPreferredDepthFormat();
renderer->CreateSetupCommandBuffer();
setupCommand = renderer->GetSetupCommandBuffer();
VkResult err;
//Color attachment
VkImageCreateInfo imageInfo = {};
imageInfo.sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO;
imageInfo.pNext = nullptr;
imageInfo.format = m_colorFormat;
imageInfo.imageType = VK_IMAGE_TYPE_2D;
imageInfo.extent.width = m_width;
imageInfo.extent.height = m_height;
imageInfo.mipLevels = 1;
imageInfo.arrayLayers = 1;
imageInfo.samples = VK_SAMPLE_COUNT_1_BIT;
imageInfo.tiling = VK_IMAGE_TILING_OPTIMAL;
imageInfo.usage = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT | VK_IMAGE_USAGE_TRANSFER_SRC_BIT;
imageInfo.flags = 0;
VkMemoryAllocateInfo memAllocInfo = {};
memAllocInfo.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO;
VkMemoryRequirements memReqs;
err = vkCreateImage(device, &imageInfo, nullptr, &m_color.image);
assert(!err);
if (err != VK_SUCCESS)
{
#ifdef _DEBUG
Core::DebugPrintF("VKRenderTarget::VPrepare(): Error creating color image!\n");
#endif
return false;
}
vkGetImageMemoryRequirements(device, m_color.image, &memReqs);
memAllocInfo.allocationSize = memReqs.size;
renderer->MemoryTypeFromProperties(memReqs.memoryTypeBits, VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT, &memAllocInfo.memoryTypeIndex);
err = vkAllocateMemory(device, &memAllocInfo, nullptr, &m_color.memory);
assert(!err);
if (err != VK_SUCCESS)
{
#ifdef _DEBUG
Core::DebugPrintF("VKRenderTarget::VPrepare(): Error allocating color image memory!\n");
#endif
return false;
}
err = vkBindImageMemory(device, m_color.image, m_color.memory, 0);
if (err != VK_SUCCESS)
{
#ifdef _DEBUG
Core::DebugPrintF("VKRenderTarget::VPrepare(): Error binding color image memory!\n");
#endif
return false;
}
renderer->SetImageLayout(setupCommand, m_color.image, VK_IMAGE_ASPECT_COLOR_BIT,
VK_IMAGE_LAYOUT_UNDEFINED, VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL);
VkImageViewCreateInfo viewInfo = {};
viewInfo.sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO;
viewInfo.pNext = nullptr;
viewInfo.viewType = VK_IMAGE_VIEW_TYPE_2D;
viewInfo.format = m_colorFormat;
viewInfo.flags = 0;
viewInfo.subresourceRange = {};
viewInfo.subresourceRange.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
viewInfo.subresourceRange.baseMipLevel = 0;
viewInfo.subresourceRange.levelCount = 1;
viewInfo.subresourceRange.baseArrayLayer = 0;
viewInfo.subresourceRange.layerCount = 1;
viewInfo.image = m_color.image;
err = vkCreateImageView(device, &viewInfo, nullptr, &m_color.view);
if (err != VK_SUCCESS)
{
#ifdef _DEBUG
Core::DebugPrintF("VKRenderTarget::VPrepare(): Error creating color image view!\n");
#endif
return false;
}
//We can reuse the same info structs to build the depth image
imageInfo.format = m_depthFormat;
imageInfo.usage = VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT;
err = vkCreateImage(device, &imageInfo, nullptr, &(m_depth.image));
assert(!err);
if (err != VK_SUCCESS)
{
#ifdef _DEBUG
Core::DebugPrintF("VKRenderTarget::VPrepare(): Error creating depth image!\n");
#endif
return false;
}
viewInfo.format = m_depthFormat;
viewInfo.subresourceRange.aspectMask = VK_IMAGE_ASPECT_DEPTH_BIT | VK_IMAGE_ASPECT_STENCIL_BIT;
vkGetImageMemoryRequirements(device, m_depth.image, &memReqs);
memAllocInfo.allocationSize = memReqs.size;
renderer->MemoryTypeFromProperties(memReqs.memoryTypeBits, VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT, &memAllocInfo.memoryTypeIndex);
err = vkAllocateMemory(device, &memAllocInfo, nullptr, &m_depth.memory);
assert(!err);
if (err != VK_SUCCESS)
{
#ifdef _DEBUG
Core::DebugPrintF("VKRenderTarget::VPrepare(): Error allocating depth image memory!\n");
#endif
return false;
}
err = vkBindImageMemory(device, m_depth.image, m_depth.memory, 0);
if (err != VK_SUCCESS)
{
#ifdef _DEBUG
Core::DebugPrintF("VKRenderTarget::VPrepare(): Error binding depth image memory!\n");
#endif
return false;
}
renderer->SetImageLayout(setupCommand, m_depth.image,
VK_IMAGE_ASPECT_DEPTH_BIT | VK_IMAGE_ASPECT_STENCIL_BIT,
VK_IMAGE_LAYOUT_UNDEFINED,
VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL);
viewInfo.image = m_depth.image;
err = vkCreateImageView(device, &viewInfo, nullptr, &m_depth.view);
if (err != VK_SUCCESS)
{
#ifdef _DEBUG
Core::DebugPrintF("VKRenderTarget::VPrepare(): Error creating depth image view!\n");
#endif
return false;
}
renderer->FlushSetupCommandBuffer();
//Finally create internal framebuffer
VkImageView attachments[2];
attachments[0] = m_color.view;
attachments[1] = m_depth.view;
VkFramebufferCreateInfo framebufferInfo = {};
framebufferInfo.sType = VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO;
framebufferInfo.pNext = nullptr;
framebufferInfo.flags = 0;
framebufferInfo.renderPass = *((VKRenderPass*)m_renderPass)->GetVkRenderPass();
framebufferInfo.attachmentCount = 2;
framebufferInfo.pAttachments = attachments;
framebufferInfo.width = m_width;
framebufferInfo.height = m_height;
framebufferInfo.layers = 1;
err = vkCreateFramebuffer(device, &framebufferInfo, nullptr, &m_framebuffer);
if (err != VK_SUCCESS)
{
#ifdef _DEBUG
Core::DebugPrintF("VKRenderTarget::VPrepare(): Error creating framebuffer!\n");
#endif
return false;
}
return true;
}
If anyone wants more info on the code feel free to ask and I will provide it. There's a LOT of lines of code for this project so I don't want everyone to have to wade through it all. If you'd like to though all the code can be found at http://github.com/thirddegree/HatchitGraphics/tree/dev
Edit: After a bit more poking around I've found that even the color doesn't really clear properly. RenderDoc shows that each frame only renders the cutout of the raptor and doesn't clear the rest of the frame. Is this a driver problem?
Edit: Some more info. I've found that if I draw NOTHING, just begin and end a render pass not even drawing my fullscreen triangle, the screen will clear. However if I draw just the triangle, the depth is wrong (even if I don't blit anything from offscreen or apply any sort of texture).
Edit: More specifically, the color will clear but the depth does not. If I don't draw anything, the depth will stay black (all 0s). I'm not sure why the fullscreen triangle causes the weird static in the depth.
This is exactly what happened to me when I started to get my Vulkan examples work on AMD hardware:
Their GPUs rely heavily on correct image transitions (which are mostly ignored by e.g. NVIDIA) and I think the corruption you see in your screenshots is the result of a missing pre-present barrier.
The pre-present barrier (see here) transitions the image layout of your color attachment into a presentation format for presenting it to the swap chain.
This has to be done after you have finished rendering to your color attachment to make sure that the attachment is completed before presenting it.
You can see an example of this in the draw routine of my examples.
On rendering the next frame you need to transition the color attachment's image layout back in order to be able to render to it again.
To sum it up:
Before rendering to your color attachment transition your image from VK_IMAGE_LAYOUT_PRESENT_SRC_KHR to VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL (aka "post present")
Do your rendering
Transition your color attachment image from VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL to VK_IMAGE_LAYOUT_PRESENT_SRC_KHR and present that to the swap chain
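For reference, a minimal sketch of such a pre-present barrier (not taken from the poster's code; commandBuffer and swapchainImage are placeholders), recorded after the last render pass of the frame:
VkImageMemoryBarrier prePresentBarrier = {};
prePresentBarrier.sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER;
prePresentBarrier.srcAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
prePresentBarrier.dstAccessMask = VK_ACCESS_MEMORY_READ_BIT;
prePresentBarrier.oldLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
prePresentBarrier.newLayout = VK_IMAGE_LAYOUT_PRESENT_SRC_KHR;
prePresentBarrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
prePresentBarrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
prePresentBarrier.image = swapchainImage;
prePresentBarrier.subresourceRange = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 };
vkCmdPipelineBarrier(commandBuffer,
    VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT, VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT,
    0, 0, nullptr, 0, nullptr, 1, &prePresentBarrier);
The post-present transition at the start of the next frame is the same barrier with the layouts (and access masks) swapped.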
Thanks to Sascha and some extra errors that popped up with the new 1.0.5 LunarG SDK I've managed to fix the problem. The commit with the fixing changes (and a couple other little things) can be found here: https://github.com/thirddegree/HatchitGraphics/commit/515d0303f45a8e9c00f67a74c824530ea37b687a
It was a combination of a few things:
I needed to set the depth image on the framebuffer attachment of the swapchain to VK_IMAGE_ASPECT_DEPTH_BIT | VK_IMAGE_ASPECT_STENCIL_BIT rather than just VK_IMAGE_ASPECT_DEPTH_BIT
For pretty much every image memory barrier I forgot to specify the baseArrayLayer of the subresourceRange. This did not produce an error until version 1.0.5.
Another error that didn't pop up until 1.0.5, which affected my texture generation and might help you track down a similar bug: before I mapped device memory for a texture to host memory, I needed to transition it from VK_IMAGE_LAYOUT_UNDEFINED to VK_IMAGE_LAYOUT_GENERAL and submit that command, then map the memory, and then transition it from GENERAL to VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL (don't forget to submit that command too). Again, this is only for textures that you want to sample, but I guess the moral here is "actually submit your image transitions".
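For anyone hitting the same validation errors, here's what a fully specified subresource range looks like in one of those depth transitions (a generic sketch, not the exact code from the commit; depthImage is a placeholder):
VkImageMemoryBarrier barrier = {};
barrier.sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER;
barrier.oldLayout = VK_IMAGE_LAYOUT_UNDEFINED;
barrier.newLayout = VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL;
barrier.image = depthImage;
barrier.subresourceRange.aspectMask = VK_IMAGE_ASPECT_DEPTH_BIT | VK_IMAGE_ASPECT_STENCIL_BIT;
barrier.subresourceRange.baseMipLevel = 0;
barrier.subresourceRange.levelCount = 1;
barrier.subresourceRange.baseArrayLayer = 0; // the field that was missing
barrier.subresourceRange.layerCount = 1;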