I am rendering different models. Big models (more than a thousand triangles) render without issue; however, small models (a single triangle, or 12 triangles) flicker.
The way I am currently rendering involves 2 passes, so I assume the flickering comes from an absence of proper barriers.
The execution goes:
start render pass,
render to texture (image attachment of the FB),
end pass,
start render pass,
render to main display,
end pass.
There is only a single global graphics queue. Following the Vulkan tutorial, I have a command buffer for every image in my swapchain, and I record commands to each buffer in that list, like so:
size_t index = 0;
for (auto &cmd : cmd_buffers)
{
    std::array<vk::ClearValue, 2> clears = {};
    clears[0].color = vk::ClearColorValue(std::array<float, 4>({0, 0, 0, 0}));
    clears[1].depthStencil = vk::ClearDepthStencilValue(1.0f, 0.0f);

    vk::RenderPassBeginInfo render_pass_info(
        *render_pass, *(framebuffers[index]), {{0, 0}, extent},
        static_cast<uint32_t>(clears.size()), clears.data());

    cmd.beginRenderPass(&render_pass_info, vk::SubpassContents::eInline);
    index++;
}
/* submit more instructions to the command buffers later */
For the off-screen target (the FB) I only record a single command buffer.
I call my first render pass as described above, and then, immediately before calling the second render pass, I try waiting on an image barrier:
void RenderTarget::WaitForImage(VulkanImage* image)
{
    vk::ImageMemoryBarrier barrier = {};
    // TODO(undecided): maybe this isn't always true
    barrier.oldLayout           = vk::ImageLayout::eShaderReadOnlyOptimal;
    barrier.newLayout           = vk::ImageLayout::eShaderReadOnlyOptimal;
    barrier.srcAccessMask       = vk::AccessFlagBits::eShaderWrite;
    barrier.dstAccessMask       = vk::AccessFlagBits::eShaderRead;
    barrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
    barrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
    barrier.image               = image->GetImage();
    barrier.subresourceRange =
        vk::ImageSubresourceRange(vk::ImageAspectFlagBits::eColor, 0, 1, 0, 1);

    vk::CommandBuffer command_buffer = h_interface->BeginSingleTimeCommands();
    command_buffer.pipelineBarrier(vk::PipelineStageFlagBits::eAllGraphics,
                                   vk::PipelineStageFlagBits::eFragmentShader,
                                   {}, 0, nullptr, 0, nullptr, 1, &barrier);
    h_interface->EndSingleTimeCommands(command_buffer);
}
And then somewhere else in the code I call it:
display_target.WaitForImage(&image);
active_target_id = 0;
display_target.StartFrame();
render_presentation.AddDrawData(quad_buffer, {uniform_data},
{{&image, 1}});
display_target.EndFrame();
Based on the behaviour I have seen, I assume my issue is a synchronization problem; however, the image barrier code doesn't seem to solve the flickering.
Am I using the barrier wrong? Should I look into other possible sources of error (and if so, which)?
I'm using OpenGL to load a lot of different textures. Because I'm loading so many textures, I have around 50 lines that are all calls to the same function, and I was wondering if there is a way to condense this or make it more efficient.
For reference here is what the function call and a section of the code looks like:
//Load the barrier texture image
LoadTexture("barrier/ConcreteBarrier_Texture_stripes.jpg", textureId3, true);
//Load the shed texture image
LoadTexture("shed/shed1.jpg", textureId4, true);
//Load the banana texture image
LoadTexture("banana/Banana_D01.png", textureId5, true);
//Load the UI button 1 image
LoadTexture("ui/button1.png", textureIdB1, false);
//Load the UI button 2 image
LoadTexture("ui/button2.png", textureIdB2, false);
You can put the values into an array, and then loop through the array calling the function, eg:
struct paramInfo {
    std::string file;
    GLuint id;
    bool b;
};

paramInfo params[] = {
    {"barrier/ConcreteBarrier_Texture_stripes.jpg", textureId3, true},
    {"shed/shed1.jpg", textureId4, true},
    {"banana/Banana_D01.png", textureId5, true},
    {"ui/button1.png", textureIdB1, false},
    {"ui/button2.png", textureIdB2, false},
    ...
};

for (auto &p : params) {
    LoadTexture(p.file.c_str(), p.id, p.b);
}
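If LoadTexture actually generates the texture and hands the id back through its second parameter (the call sites suggest this, since textureId3 and friends are passed as bare variables, but the function's signature is not shown in the question), the struct can hold a pointer to each id variable instead, so the original named ids still get filled in. A minimal sketch under that assumption:

struct paramInfo {
    const char* file;
    GLuint*     id;   // points at the original textureIdX variable so it still gets written
    bool        b;
};

paramInfo params[] = {
    {"barrier/ConcreteBarrier_Texture_stripes.jpg", &textureId3, true},
    {"shed/shed1.jpg", &textureId4, true},
    {"banana/Banana_D01.png", &textureId5, true},
    {"ui/button1.png", &textureIdB1, false},
    {"ui/button2.png", &textureIdB2, false},
};

for (auto &p : params) {
    LoadTexture(p.file, *p.id, p.b);
}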
I am coding a 2D Game using DirectX11 and DirectXTK.
I wrote a class Framework that initializes both the window displayed for the game and DirectX. These initializations work correctly. Then I decided to draw some backgrounds, etc. in the window, but after a while it exits on an exception. I added a try { ... } catch () { } block, which tells me that "Texture cannot be null". However, I could not find which texture it is talking about, even by debugging and checking all the values.
I decided to separate the different elements I was drawing in the window, to see where the problem might come from... So now I have 3 draw methods:
Draw(DWORD &elapsedTime);
DrawBackground(DWORD &elapsedTime);
DrawCharacter(DWORD &elapsedTime);
The Draw(DWORD &elapsedTime) method calls both the DrawBackground() and DrawCharacter() methods.
Here is my Draw method:
void Framework::Draw(DWORD * elapsedTime)
{
    // Clearing the back buffer
    immediateContext->ClearRenderTargetView(renderTargetView, Colors::Aquamarine);
    // Clearing the depth buffer to max depth (1.0)
    immediateContext->ClearDepthStencilView(depthStencilView, D3D11_CLEAR_DEPTH, 1.0f, 0); // immediateContext is a ID3D11DeviceContext*

    CommonStates states(d3dDevice); // d3dDevice is a ID3D11Device*
    sprites.reset(new SpriteBatch(immediateContext));
    sprites->Begin(SpriteSortMode_Deferred, states.NonPremultiplied());

    DrawBackground1(elapsedTime);
    DrawCharacter(elapsedTime);

    sprites->End();

    // Presenting the back buffer to the front buffer
    swapChain->Present(0, 0);
}
By debugging, I am almost sure that the exception comes from both DrawBackground() and DrawCharacter(). Indeed, when I comment them out in the Draw method, I have no error, but as soon as I put one back, the exception is thrown after displaying what I want for a few seconds.
Here is the method DrawBackground(), for example:
void Framework::DrawBackground1(DWORD * elapsedTime)
{
    RECT *try1 = new RECT();
    try1->bottom = 0; try1->left = 0; try1->right = (int)WIDTH; try1->bottom = (int)HEIGHT;

    ID3D11ShaderResourceView * texture2 = nullptr;
    ID3D11ShaderResourceView * textureRV = nullptr;
    CreateDDSTextureFromFile(d3dDevice, L"../Images/backgrounds/set2_background.dds", nullptr, &textureRV);
    CreateDDSTextureFromFile(d3dDevice, L"../Images/backgrounds/set3_tiles.dds", nullptr, &texture2);

    sprites->Draw(textureRV, XMFLOAT2(0, 0), try1, Colors::White);
    sprites->Draw(texture2, XMFLOAT2(0, 0), try1, Colors::CornflowerBlue);
}
So as soon as I uncomment this method (or DrawCharacter(), which follows the same steps), the window displays what I expect for a few seconds, but then I get the exception "Texture cannot be null". I also noticed that DrawCharacter() lets the window keep displaying longer than DrawBackground(), whose textures are much bigger than the character's.
I'm not sure if this information is useful, but I think this might be linked to the size of the textures?
Do you notice anything I did wrong in this code? Why would a texture be considered null while it is displayed for a while? I've been looking for answers for a few hours now; some help would be amazing, please!
Thank you
I noticed that you create two new ID3D11ShaderResourceView objects every frame without releasing the old ones. You could try creating the ShaderResourceViews only once and storing them as global variables, or you could try calling ->Release() on them after the sprites->Draw(...) calls.
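For the first suggestion, here is a minimal sketch (not the asker's code) of loading the views once and reusing them every frame; the names backgroundRV, tilesRV and LoadTextures are made up, and Microsoft::WRL::ComPtr is used so the views are released automatically:

#include <wrl/client.h>          // Microsoft::WRL::ComPtr
using Microsoft::WRL::ComPtr;

// Hypothetical globals (or Framework members), created once instead of every frame:
ComPtr<ID3D11ShaderResourceView> backgroundRV;
ComPtr<ID3D11ShaderResourceView> tilesRV;

void Framework::LoadTextures()   // call once, after d3dDevice has been created
{
    // Check the returned HRESULTs in real code.
    CreateDDSTextureFromFile(d3dDevice,
        L"../Images/backgrounds/set2_background.dds", nullptr, backgroundRV.GetAddressOf());
    CreateDDSTextureFromFile(d3dDevice,
        L"../Images/backgrounds/set3_tiles.dds", nullptr, tilesRV.GetAddressOf());
}

void Framework::DrawBackground1(DWORD * elapsedTime)
{
    RECT dest = { 0, 0, (LONG)WIDTH, (LONG)HEIGHT };
    // Reuse the already-loaded views; nothing is created or leaked per frame.
    sprites->Draw(backgroundRV.Get(), XMFLOAT2(0, 0), &dest, Colors::White);
    sprites->Draw(tilesRV.Get(), XMFLOAT2(0, 0), &dest, Colors::CornflowerBlue);
}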
I'd like to render basic 3D shapes without any aliasing/smoothing with a PGraphics instance using the P3D renderer, but noSmooth() doesn't seem to work.
In OF I remember calling setTextureMinMagFilter(GL_NEAREST,GL_NEAREST); on a texture.
What would be the equivalent in Processing ?
I tried to use PGL:
PGL.TEXTURE_MIN_FILTER = PGL.NEAREST;
PGL.TEXTURE_MAG_FILTER = PGL.NEAREST;
but I get a black image as the result.
If I comment PGL.TEXTURE_MIN_FILTER = PGL.NEAREST; I can see the render, but it's interpolated, not sharp.
Here's a basic test sketch with a few things I've tried:
PGraphics buffer;
PGraphicsOpenGL pgl;
void setup() {
  size(320, 240, P3D);
  noSmooth();
  //hint(DISABLE_TEXTURE_MIPMAPS);
  //((PGraphicsOpenGL)g).textureSampling(0);
  //PGL pgl = beginPGL();
  //PGL.TEXTURE_MIN_FILTER = PGL.NEAREST;
  //PGL.TEXTURE_MAG_FILTER = PGL.NEAREST;
  //endPGL();

  buffer = createGraphics(width/8, height/8, P3D);
  buffer.noSmooth();
  buffer.beginDraw();
  //buffer.hint(DISABLE_TEXTURE_MIPMAPS);
  //((PGraphicsOpenGL)buffer).textureSampling(0);
  PGL bpgl = buffer.beginPGL();
  //PGL.TEXTURE_MIN_FILTER = PGL.NEAREST;//commenting this back in results in a blank buffer
  PGL.TEXTURE_MAG_FILTER = PGL.NEAREST;
  buffer.endPGL();
  buffer.background(0);
  buffer.stroke(255);
  buffer.line(0, 0, buffer.width, buffer.height);
  buffer.endDraw();
}

void draw() {
  image(buffer, 0, 0, width, height);
}
(I've also posted on the Processing Forum, but no luck so far)
You were actually on the right track; you were just passing the wrong value to textureSampling().
Since the documentation on PGraphicsOpenGL::textureSampling() is a bit scarce, to say the least, I decided to peek into it using a decompiler, which led me to Texture::usingMipmaps(). There I was able to see the values and what they reflected (in the decompiled code):
2 = POINT
3 = LINEAR
4 = BILINEAR
5 = TRILINEAR
PGraphicsOpenGL's default textureSampling value is 5 (TRILINEAR).
I also later found this old comment on an issue equally confirming it.
So to get point/nearest filtering you only need to call noSmooth() on the application itself, and call textureSampling() on your PGraphics.
size(320, 240, P3D);
noSmooth();
buffer = createGraphics(width/8, height/8, P3D);
((PGraphicsOpenGL) buffer).textureSampling(2);
So considering the above, and only including the code you used to draw the line and to draw the buffer to the application, you get the desired result.
I needed to combine both GL_LINEAR and GL_NEAREST within one shader, so ((PGraphicsOpenGL) buffer).textureSampling(2); was not an option.
It took some digging, but this works for me:
PGL pgl = beginPGL();
// Get the GL texture backing the PImage and override its filtering directly.
Texture ascii_map_tex = ((PGraphicsOpenGL)g).getTexture(ascii_map);
pgl.bindTexture(PGL.TEXTURE_2D, ascii_map_tex.glName);
pgl.texParameteri(PGL.TEXTURE_2D, PGL.TEXTURE_MIN_FILTER, PGL.NEAREST);
pgl.texParameteri(PGL.TEXTURE_2D, PGL.TEXTURE_MAG_FILTER, PGL.NEAREST);
pgl.bindTexture(PGL.TEXTURE_2D, 0);
endPGL();
I'm getting a 'vector subscript out of range' error. I know this is caused by an indexing issue where the index exceeds the valid range of the array/collection. However, I can't figure out how it gets to that stage, as I only ever increment the value by one, in a single place in the entire project, and if it becomes larger than the size of the array, I reset it to 0. This relates to the frames of an animation in SDL. The index variable in question is m_currentFrame.
Here is the 'Process' method for the animated sprite; this is the only place in the entire project that calls 'm_currentFrame++' (I did a Ctrl+F search for it):
void
AnimatedSprite::Process(float deltaTime) {
    // If not paused...
    if (!m_paused) {
        // Count the time elapsed.
        m_timeElapsed += deltaTime;

        // If the time elapsed is greater than the frame speed.
        if (m_timeElapsed > (float) m_frameSpeed) {
            // Move to the next frame.
            m_currentFrame++;
            // Reset the time elapsed counter.
            m_timeElapsed = 0.0f;

            // If the current frame is greater than the number
            // of frames in this animation...
            if (m_currentFrame > frameCoordinates.size()) {
                // Reset to the first frame.
                m_currentFrame = 0;

                // Stop the animation if it is not looping...
                if (!m_loop) {
                    m_paused = true;
                }
            }
        }
    }
}
Here is the method (AnimatedSprite::Draw()) that is throwing the error:
void
AnimatedSprite::Draw(BackBuffer& backbuffer) {
    // frame width
    int frameWidth = m_frameWidth;

    backbuffer.DrawAnimatedSprite(*this, frameCoordinates[m_currentFrame], m_frameWidth, m_frameHeight, this->GetTexture());
}
if (m_currentFrame > frameCoordinates.size()){
// Reset to the first frame.
m_currentFrame = 0;
You already need to reset when m_currentFrame == frameCoordinates.size(), because the highest valid index of an array is its size minus one (counting begins at 0).
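In other words, a sketch of the corrected check (the same code as above, only the comparison changes):

// Reset as soon as the index reaches the container's size, so that
// frameCoordinates[m_currentFrame] is always a valid subscript.
if (m_currentFrame >= frameCoordinates.size()) {
    // Reset to the first frame.
    m_currentFrame = 0;
    // Stop the animation if it is not looping...
    if (!m_loop) {
        m_paused = true;
    }
}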
I'm implementing a custom cursor in DirectX/C++ that is drawn on a transparent window on top of the desktop.
I have stripped it down to a basic example. The magic of executing Direct3D on the DWM is based on this article on Code Project.
The problem is that when using a very big window (e.g. 2560x1440) as a base for the DirectX rendering, it will give up to 40% GPU Load according to GPU-Z. Even if the only thing I am displaying is a static 128x128 sprite, or nothing at all. If I use an area like 256x256, the GPU Load is around 1-3%.
Basically this loop would make the GPU go crazy on a big window while it's smooth sailing on a small window:
while (true) {
    g_pD3DDevice->PresentEx(NULL, NULL, NULL, NULL, NULL);
    Sleep(10);
}
So it seems like it re-renders the whole screen whether anything changes or not; am I right? Can I tell Direct3D to only re-render specific parts that need to be updated?
EDIT:
I have found a way to tell Direct3D to render a specific part by providing RGNDATA Dirty region information to PresentEx. It is now 1% GPU Load instead of 20-40%.
std::vector<RECT> dirtyRects;
// Fill dirtyRects with previous and new cursor boundaries

DWORD size = dirtyRects.size() * sizeof(RECT) + sizeof(RGNDATAHEADER);
RGNDATA *rgndata = (RGNDATA *)HeapAlloc(GetProcessHeap(), 0, size);

// Copy the dirty rects into the region's buffer and compute their bounding box.
RECT *pRectInitial = (RECT *)rgndata->Buffer;
RECT rectBounding = dirtyRects[0];
for (size_t i = 0; i < dirtyRects.size(); i++)
{
    RECT rectCurrent = dirtyRects[i];
    rectBounding.left   = min(rectBounding.left,   rectCurrent.left);
    rectBounding.right  = max(rectBounding.right,  rectCurrent.right);
    rectBounding.top    = min(rectBounding.top,    rectCurrent.top);
    rectBounding.bottom = max(rectBounding.bottom, rectCurrent.bottom);

    *pRectInitial = dirtyRects[i];
    pRectInitial++;
}

// Preparing the rgndata header
RGNDATAHEADER header;
header.dwSize   = sizeof(RGNDATAHEADER);
header.iType    = RDH_RECTANGLES;
header.nCount   = dirtyRects.size();
header.nRgnSize = dirtyRects.size() * sizeof(RECT);
header.rcBound.left   = rectBounding.left;
header.rcBound.top    = rectBounding.top;
header.rcBound.right  = rectBounding.right;
header.rcBound.bottom = rectBounding.bottom;
rgndata->rdh = header;

// Update display
g_pD3DDevice->PresentEx(NULL, NULL, NULL, rgndata, 0);
But there is something I do not understand: it will only give 1% GPU load if I add the following:
SetLayeredWindowAttributes(hWnd, 0, 180, LWA_ALPHA);
I want it transparent anyway, so that's fine, but I get some weird tearing effects after a while. It is more noticeable the faster I move the cursor. Where does that come from? It looks like the image provided. I am sure I have set the dirty rects perfectly accurately.
The above tearing seems to differ from computer to computer.