I am having an OpenGL VBO problem. I downloaded the old VBO example called Lesson45 from NeHe and I modified it to check something.
My end goal is to create about 9 tiles, one of them being the origin. Then, as the player moves around the screen, the top/bottom rows/columns update their data. But for now I want something basic:
I create one VBO and then I want to update the data in another thread. While the data is being uploaded, I do not want to draw the VBO because that would cause problems.
Here I create the VBO:
glGenBuffersARB( 1, &m_nVBOVertices );
glBindBufferARB(GL_ARRAY_BUFFER_ARB, m_nVBOVertices);
glBufferDataARB(GL_ARRAY_BUFFER_ARB, m_nVertexCount*3*sizeof(float), m_pVertices, GL_DYNAMIC_DRAW_ARB);
I create a thread, set up an OpenGL context, and share lists. Then I process the data when the user presses "R" on the keyboard:
while(TerrainThreadRun)
{
    //look for R
    if(window.keys->keyDown[82] == TRUE && keyactivated == false)
    {
        keyactivated = true;
        window.keys->keyDown[82] = FALSE;
    }
    if(keyactivated)
    {
        for(int i = 0; i < g_pMesh->m_nVertexCount; i++)
        {
            g_pMesh->m_pVertices[i].y = 800.0f;
        }
        while(!wglMakeCurrent(window.hDCThread, window.hRCThread)) //This was removed
            Sleep(5);                                              //This was removed
        glBindBufferARB(GL_ARRAY_BUFFER_ARB, g_pMesh->m_nVBOVertices);
        glBufferSubDataARB(GL_ARRAY_BUFFER_ARB, 0, g_pMesh->m_nVertexCount*3*sizeof(float), g_pMesh->m_pVertices);
        keyactivated = false;
    }
}
To draw the data:
if(!keyactivated)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, g_pMesh->m_nVBOVertices);
    glVertexPointer(3, GL_FLOAT, 0, (char*)NULL);
    glDrawArrays(GL_TRIANGLES, 0, g_pMesh->m_nVertexCount);
    glDisableClientState(GL_VERTEX_ARRAY);
}
I know that using ARB extensions is not recommended, but this is just for a quick basic example.
The problem is that when I first press "R", the data does not get updated; the VBO draws the same as before. The second time I press "R", the data updates. What can I do to force the update? Am I doing something wrong?
Does the data need to be explicitly pushed to the video card? Am I missing something?
Update: I looked over my code and now I use wglMakeCurrent only once, when the context is initialized. In the thread, I use it after sharing the lists and on the main thread as soon as the lists are shared, like this:
window->hRC = wglCreateContext (window->hDC);
if (window->hRC ==0)
{
// Failed
}
TerrainThreadRun = true;
TerrainThread = CreateThread(NULL, NULL,(LPTHREAD_START_ROUTINE)TerrainThreadProc, 0, NULL, NULL);
while(!sharedContext)
Sleep(100);
if (wglMakeCurrent (window->hDC, window->hRC) == FALSE)
And in the thread:
if (!(window.hRCThread = wglCreateContext(window.hDCThread)))
{
    //Error
}
while (wglShareLists(window.hRC, window.hRCThread) == 0)
{
    DWORD err = GetLastError();
    Sleep(5);
}
sharedContext = true;
int cnt = 0;
while (!wglMakeCurrent(window.hDCThread, window.hRCThread))
    Sleep(5);
while (TerrainThreadRun)
{
    //look for R
Second update: I tried using glMapBuffer instead of glBufferSubData, but the application behaves the same. Here is the code:
void* ptr = (void*)glMapBuffer(GL_ARRAY_BUFFER_ARB, GL_READ_WRITE_ARB);
if (ptr)
{
    memcpy(ptr, g_pMesh->m_pVertices, g_pMesh->m_nVertexCount*3*sizeof(float));
    glUnmapBuffer(GL_ARRAY_BUFFER_ARB);
}
Update three:
I was doing some things wrong, so I modified them, but the problem remains the same. Here is how I do everything now:
When the application loads, I create two windows, each with its own HWND. Based on them, I create two device contexts.
Then I share the lists between the two rendering contexts:
wglShareLists(window.hRC, window.hRCThread);
This is done only once when I initialize.
After that I show the OpenGL window, which does the rendering, and make its context current. Then I load the function pointers and create the VBO.
After the main rendering setup is done, I create the thread. When the thread starts, I make its rendering context current.
Then we do normal stuff.
So my question is: Do I need to update the function pointers for each device context? Could this be my problem?
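For reference, re-querying the entry points once per rendering context would look roughly like the sketch below (assuming the standard PFNGL...PROC typedefs from glext.h; whether separate pointers are strictly required depends on the driver, so treat this as the conservative option rather than a known fix):
PFNGLGENBUFFERSARBPROC    pglGenBuffersARB    = NULL;
PFNGLBINDBUFFERARBPROC    pglBindBufferARB    = NULL;
PFNGLBUFFERSUBDATAARBPROC pglBufferSubDataARB = NULL;

void LoadVBOEntryPoints() // call while the target rendering context is current
{
    pglGenBuffersARB    = (PFNGLGENBUFFERSARBPROC)wglGetProcAddress("glGenBuffersARB");
    pglBindBufferARB    = (PFNGLBINDBUFFERARBPROC)wglGetProcAddress("glBindBufferARB");
    pglBufferSubDataARB = (PFNGLBUFFERSUBDATAARBPROC)wglGetProcAddress("glBufferSubDataARB");
}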
As an update: if I run my test app in gDEBugger, first press "R" and then pause, it doesn't display correctly. I take a look at the memory (Textures, Buffers and Image Viewers) and GLContext1 (I think the main rendering context) has the OLD data, while GLContext2 (Shared-GL1) (I think the thread's context) has the correct data.
The odd part: if I look back at GLContext1 with the program still paused, it now displays the new data, as if it had been "refreshed" somehow. Then, if I press play, it starts drawing correctly.
I found the solution: I need to call glFinish() in the worker thread after calling glUnmapBuffer(). This solves the problem and everything renders just fine.
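For completeness, here is a minimal sketch of the worker-thread upload with that fix applied, reusing the names from the snippets above:
// Sketch: worker-thread upload with the glFinish() fix (names as in the snippets above).
glBindBufferARB(GL_ARRAY_BUFFER_ARB, g_pMesh->m_nVBOVertices);
void* ptr = (void*)glMapBuffer(GL_ARRAY_BUFFER_ARB, GL_READ_WRITE_ARB);
if (ptr)
{
    memcpy(ptr, g_pMesh->m_pVertices, g_pMesh->m_nVertexCount*3*sizeof(float));
    glUnmapBuffer(GL_ARRAY_BUFFER_ARB);
}
glFinish();           // make sure the upload has completed before the sharing context draws
keyactivated = false; // only now let the render thread use the VBO again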
OK, so basically I am trying to inject a DLL into a game for an external menu for debugging. My hook works completely fine: I can render a normal square to the screen, but when I try to render ImGui, some games' DirectX just dies and in others nothing renders at all. The issue makes no sense because I've tried everything: I've switched libraries, tried different compile settings, and just started doing random things, still to no avail. The library I am using for hooking is MinHook (I was using kiero, but while trying to figure out the issue I switched to manually getting the D3D device).
My hooks work entirely fine, as I said earlier; I can render a square to the screen without issues, but I can't render ImGui (and yes, I checked that it is the DX9 version of ImGui). Code:
long __stdcall EndSceneHook(IDirect3DDevice9* pDevice) // Our hooked EndScene
{
    D3DRECT BarRect = { 100, 100, 200, 200 };
    pDevice->Clear(1, &BarRect, D3DCLEAR_TARGET, D3DCOLOR_ARGB(255, 0, 255, 0), 0.0f, 0);
    if (!EndSceneInit) {
        ImGui::CreateContext();
        ImGuiIO& io = ImGui::GetIO();
        ImGui_ImplWin32_Init(TrackmaniaWindow);
        ImGui_ImplDX9_Init(pDevice);
        EndSceneInit = true;
        return OldEndScene(pDevice);
    }
    ImGui_ImplDX9_NewFrame();
    ImGui_ImplWin32_NewFrame();
    ImGui::NewFrame();
    ImGui::ShowDemoWindow();
    ImGui::EndFrame();
    ImGui::Render();
    ImGui_ImplDX9_RenderDrawData(ImGui::GetDrawData());
    return OldEndScene(pDevice); // Call the original EndScene so the game can draw
}
And if you are considering saying that I forgot to hook Reset: I did hook it, but the game pretty much never calls it, so I probably did that wrong too. Code for that:
long __stdcall ResetHook(IDirect3DDevice9* pDevice, D3DPRESENT_PARAMETERS Parameters) {
    /* Delete imgui to avoid errors */
    ImGui_ImplDX9_Shutdown();
    ImGui_ImplWin32_Shutdown();
    ImGui::DestroyContext();
    /* Check if its actually being called */
    if (!ResetInit) {
        std::cout << "Reset called correctly" << std::endl;
        ResetInit = true;
    }
    /* Return old function */
    return OldReset(pDevice, Parameters);
}
Just in case I did mess up the hooking process for one of the functions, I will also include the code I used to actually hook them:
if (MH_CreateHook(vTable[42], EndSceneHook, (void**)&OldEndScene) != MH_OK)
ThrowError(MinHook_Hook_Creation_Failed);
if (MH_CreateHook(vTable[16],OldReset,(void**)&OldReset)!=MH_OK)
ThrowError(MinHook_Hook_Creation_Failed);
MH_EnableHook(MH_ALL_HOOKS);
OK, so I solved the issue already, but just in case anyone else needs help, here are a few fixes for why it would crash or not render.
The first one is EnumWindows(): if you are using EnumWindows() to get your target process's HWND, then that is likely part or all of your issue.
For internal cheats, use GetForegroundWindow() once the game is loaded, or you can use FindWindow(0, "Window Name") (works for both external and internal; the game needs to be loaded).
void MainThread() {
    HWND ProcessWindow = 0;
    WaitForProcessToLoad(GameHandle); // This is just an example of waiting for the game to load
    ProcessWindow = GetForegroundWindow(); // We got the HWND
    // or (note: FindWindow takes the window title, not the executable name)
    ProcessWindow = FindWindow(0, "Window Title");
}
For the second possible issue: make sure the replacement functions for the functions you're hooking actually take the right arguments (this applies if your hook instantly crashes), and make sure you are returning the original function.
Make sure your WndProc function is working correctly (if you don't know how, look up DX9 hooking tutorials and copy their code for that); a minimal sketch of the usual approach follows.
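The sketch below is only an illustration: it assumes the hook is installed with SetWindowLongPtr and that g_OriginalWndProc and TargetWindow are hypothetical globals you set up yourself. ImGui_ImplWin32_WndProcHandler comes from the Win32 backend but has to be forward-declared manually:
extern IMGUI_IMPL_API LRESULT ImGui_ImplWin32_WndProcHandler(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam);

static WNDPROC g_OriginalWndProc = nullptr; // filled in when the hook is installed

LRESULT CALLBACK WndProcHook(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    // Give ImGui first pick of the message so its windows receive mouse/keyboard input.
    if (ImGui_ImplWin32_WndProcHandler(hWnd, msg, wParam, lParam))
        return TRUE;
    // Everything else goes to the game's original window procedure.
    return CallWindowProc(g_OriginalWndProc, hWnd, msg, wParam, lParam);
}

// Installed once, e.g. right after ImGui_ImplWin32_Init(TargetWindow):
// g_OriginalWndProc = (WNDPROC)SetWindowLongPtr(TargetWindow, GWLP_WNDPROC, (LONG_PTR)WndProcHook);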
The last fix concerns how you are rendering ImGui to the screen. If ImGui still isn't rendering after the first fix, it is likely because you aren't calling a function that is required. This is an example of correctly written ImGui rendering:
long __stdcall EndSceneHook(IDirect3DDevice9* pDevice) // Our hooked EndScene
{
    if (!EndSceneInit) {
        ImGui::CreateContext();
        ImGuiIO& io = ImGui::GetIO();
        ImGui::StyleColorsDark();
        ImGui_ImplWin32_Init(Window);
        ImGui_ImplDX9_Init(pDevice);
        EndSceneInit = true;
        return OldEndScene(pDevice);
    }
    ImGui_ImplDX9_NewFrame();
    ImGui_ImplWin32_NewFrame();
    ImGui::NewFrame();
    ImGui::ShowDemoWindow();
    ImGui::EndFrame();
    ImGui::Render();
    ImGui_ImplDX9_RenderDrawData(ImGui::GetDrawData());
    return OldEndScene(pDevice); // Call the original EndScene so the game can draw
}
If none of these fixes worked, then search for the specific error or look up DX9 hooking tutorials on YouTube.
I'll do my best to replicate this issue, since it's quite complicated. As an overview, I have a program that displays graphs using OpenGL. Rendering is thread specific, so I know it is only done on one thread. My other thread queries a database and stores the data in a copy vector. Once that thread is finished, it swaps its data with the data the OpenGL thread is using (after joining the thread with the main one). In theory there is nothing about this that should make the program run so slowly.
The extremely odd part is how it eventually "warms up" and runs much faster after a while (it varies quite a bit: sometimes almost instantaneously, sometimes after 30 s of runtime). For comparison, the program begins running at about 30-60 fps while querying the data (as in, constantly loading it, swapping it, and joining the threads), but once it has warmed up it runs at 1000 fps.
I have tested some things out, beginning with making the query take a long time to run. During this, the fps is at its maximum (3000+). It is only when the data is constantly being changed (swapping vectors) that it starts to run very slowly. It doesn't make sense that this alone is causing the performance hit, since it runs very well after it has "warmed up".
Edit:
I've managed to make a reasonable minimal reproducible example, and I've found some interesting results.
Here is the code:
#include <iostream>
#include <string>
#include <thread>
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include "ImGui/imgui.h"
#include "ImGui/imgui_impl_glfw.h"
#include "ImGui/imgui_impl_opengl3.h"

bool querying = false;
std::thread thread;
int m_Window_Width = 1280;
int m_Window_Height = 720;

static void LoadData()
{
    querying = true;
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    querying = false;
}

int main()
{
    glfwInit();
    const char* m_GLSL_Version = "#version 460";
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 6);
    GLFWwindow* m_Window = glfwCreateWindow(m_Window_Width, m_Window_Height, "Program", NULL, NULL);
    glfwMakeContextCurrent(m_Window);
    glfwSwapInterval(0); // vsync off
    glewInit();

    IMGUI_CHECKVERSION();
    ImGui::CreateContext();
    ImGui::StyleColorsClassic();
    // Setup Platform/Renderer backends
    ImGui_ImplGlfw_InitForOpenGL(m_Window, true);
    ImGui_ImplOpenGL3_Init(m_GLSL_Version);

    thread = std::thread(LoadData);

    while (!glfwWindowShouldClose(m_Window))
    {
        glfwPollEvents();
        ImGui_ImplOpenGL3_NewFrame();
        ImGui_ImplGlfw_NewFrame();
        ImGui::NewFrame();

        char fps[12];
        sprintf_s(fps, "%f", ImGui::GetIO().Framerate);
        glfwSetWindowTitle(m_Window, fps);

        //Load the data
        if (thread.joinable() == false && querying == false) {
            thread = std::thread(LoadData);
        }
        //Swap the data after thread is finished
        if (thread.joinable() == true && querying == false) {
            thread.join();
        }

        // Rendering
        ImGui::Render();
        glfwGetFramebufferSize(m_Window, &m_Window_Width, &m_Window_Height);
        glViewport(0, 0, m_Window_Width, m_Window_Height);
        glClearColor(0.45f, 0.55f, 0.60f, 1.00f);
        glClear(GL_COLOR_BUFFER_BIT);
        ImGui_ImplOpenGL3_RenderDrawData(ImGui::GetDrawData());
        glfwSwapBuffers(m_Window);
    }

    ImGui_ImplOpenGL3_Shutdown();
    ImGui_ImplGlfw_Shutdown();
    ImGui::DestroyContext();
    glfwDestroyWindow(m_Window);
    glfwTerminate();
    return 0;
}
Now, the interesting thing here is playing around with std::this_thread::sleep_for(). I put this in so I can simulate the time the query actually takes against the main database. What is interesting is that it causes the main thread to stop running and freezes it. These threads should be separate and not impact one another, yet that is not the case here. Is there any explanation for this? This seems to be the root issue of my main program, boiled down to a minimal case.
Edit 2
To use the libraries (in Visual Studio), download the binaries from https://www.glfw.org/download.html and from http://glew.sourceforge.net/, and lastly get ImGui from https://github.com/ocornut/imgui.
Preprocessor: GLEW_STATIC; WIN32;
Linker: glfw3.lib;glew32s.lib;opengl32.lib;Gdi32.lib;Shell32.lib;user32.lib
This may or may not be your issue, but here:
//Load the data
if (thread.joinable() == false && querying == false) {
    thread = std::thread(LoadData);
}
//Swap the data after thread is finished
if (thread.joinable() == true && querying == false) {
    thread.join();
}
it is possible that you start the thread in the first if block and then reach the second one before LoadData has set that bool, causing the main thread to wait in join() for that thread to finish.
I would set querying = true; in the main thread, right after you create the LoadData thread. Also, I would use some kind of synchronization, for example declaring querying as std::atomic<bool>.
EDIT:
It appears that you do not need to check joinable() - you know when the thread is joinable: when you enter the loop, and after you re-start that thread. This looks cleaner:
std::atomic<bool> querying = true;

void LoadData()
{
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    querying = false;
}
and later in your loop:
//Swap the data after thread is finished
if (!querying) {
    thread.join();
    querying = true;
    thread = std::thread(LoadData);
}
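One further detail, which is my own observation rather than part of the fix above: because thread is restarted on every iteration, it should also be joined one last time after the main loop, since destroying a still-joinable std::thread calls std::terminate().
// After the main loop, before shutting down:
if (thread.joinable())
    thread.join();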
I'm new to SFML and have been trying to build a multi-threaded game system (all of the game logic on the main thread, and the rendering in a dedicated thread using sf::Thread; mainly for practicing with threads) as explained in this page ("Drawing with threads" section):
Unfortunately, my program has a long processing time during its update(), which leaves the rendering process completely uncontrolled, showing some frames painted and others completely empty. In other words, my rendering thread is trying to paint something that hasn't even been calculated yet, producing this flickering effect.
What I'm looking for is to allow the thread to render only when the main logic has been calculated. Here's what I have so far:
void renderThread()
{
    while (window->isOpen())
    {
        //some other gl stuff
        //window clear
        //window draw
        //window display
    }
}

void update()
{
    while (window->isOpen() && isRunning)
    {
        while (window->pollEvent(event))
        {
            if (event.type == sf::Event::Closed || sf::Keyboard::isKeyPressed(sf::Keyboard::Escape))
            {
                isRunning = false;
            }
            else if (event.type == sf::Event::Resized)
            {
                glViewport(0, 0, event.size.width, event.size.height);
            }
        }
        // really resource intensive process here
        time = clock.getElapsedTime();
        clock.restart().asSeconds();
    }
}
Thanks in advance.
I guess the errors happen because you manipulate elements that are being rendered at the same time in parallel. You need to look into mutexes.
A mutex locks the element you want to manipulate (or draw in the other thread) for as long as the manipulation takes and frees it afterwards.
While the element is locked, it cannot be accessed by another thread.
Example in pseudo-code:
updateThread(){
    renderMutex.lock();
    globalEntity.manipulate();
    renderMutex.unlock();
}

renderThread(){
    renderMutex.lock();
    window.draw(globalEntity);
    renderMutex.unlock();
}
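A concrete C++ version of that pseudo-code could look like the sketch below, using std::mutex and std::lock_guard (sf::Mutex and sf::Lock from SFML work the same way). Entity is a hypothetical shared type, assumed here to derive from sf::Drawable:
#include <mutex>
#include <SFML/Graphics.hpp>

std::mutex renderMutex;

void updateThread(Entity& globalEntity)
{
    std::lock_guard<std::mutex> lock(renderMutex); // held until the end of this scope
    globalEntity.manipulate();
}

void renderThread(sf::RenderWindow& window, const Entity& globalEntity)
{
    std::lock_guard<std::mutex> lock(renderMutex); // blocks while an update holds the lock
    window.draw(globalEntity);
}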
I have developed an OpenGL ES 2.0 Win32 application that works fine in a single thread. But I also understand that the UI thread and the rendering thread should be separate.
Currently my game loop looks something like this:
done = 0;
while(!done)
{
    msg = GetMessage(..); // getting messages from OS
    if(msg == QUIT) // the window has been closed
    {
        done = 1;
    }
    DispatchMessage(msg,..); //Calling KeyDown, KeyUp events to handle user input
    DrawCall(...); //Render a frame
    Update(..); // Update
}
Please view this as pseudo-code; I don't want to bother you with details at this point.
So my next step was to turn done into a std::atomic_int and create a function
RenderThreadMain()
{
    while(!done.load())
    {
        Draw(...);
    }
}
and to create a std::unique_ptr<std::thread> m_renderThread variable. As you can guess, nothing has worked for me so far, so I made my code as simple as possible in order to make sure I don't break anything with the order in which I call methods. Right now my game loop works like this:
done.store(0);
bool created = false;
while(!done)
{
    msg = GetMessage(..); // getting messages from OS
    if(msg == QUIT) // the window has been closed
    {
        done.store(1);
    }
    DispatchMessage(msg,..); //Calling KeyDown, KeyUp events to handle user input
    // to make sure that my problem is not related to the fact that I'm rendering too early
    if(!created)
    {
        m_renderThread = std::make_unique<std::thread>(RenderThreadMain, ...);
        created = true;
    }
    Update(..); // Update
}
But this doesn't work. On every draw call, when I try to access or use my buffers, textures, or anything else, I get the GL_INVALID_OPERATION error code.
So my guess would be that the problem is that I call glGenBuffers(mk_bufferNumber, m_bufferIds); in the main thread during initialization and then call glBindBuffer(GL_ARRAY_BUFFER, m_bufferIds[0]); in the render thread during the draw call (the same applies to every OpenGL object I have).
But I don't know if I'm right or wrong.
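For what it's worth, if the real issue is that the context is never made current on the render thread, one arrangement that avoids GL_INVALID_OPERATION looks like the sketch below. It assumes an EGL-based GLES2 setup with hypothetical globals g_display, g_surface and g_context; with a desktop WGL context the same idea applies via wglMakeCurrent. A context can be current on only one thread at a time, so the main thread has to release it first.
void RenderThreadMain()
{
    // The main thread must have released the context beforehand, e.g. with
    // eglMakeCurrent(g_display, EGL_NO_SURFACE, EGL_NO_SURFACE, EGL_NO_CONTEXT);
    eglMakeCurrent(g_display, g_surface, g_surface, g_context);

    while (!done.load())
    {
        Draw(...);                            // all GL calls now happen on this thread
        eglSwapBuffers(g_display, g_surface);
    }

    eglMakeCurrent(g_display, EGL_NO_SURFACE, EGL_NO_SURFACE, EGL_NO_CONTEXT);
}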
I am working on a rendering library for SDL2 and I am running into a segfault when I try to do anything with the renderer. Through debugging I have determined that it is created correctly from the window (now...). However, I cannot figure out why SDL2 is segfaulting when I call SDL_RenderClear(data->renderer).
The main thread calls:
int RenderThread::start(std::string title, int x, int y, int w, int h, Uint32 flags) {
    data.window = SDL_CreateWindow(title.c_str(), x, y, w, h, flags);
    if(data.window == NULL) return -2;
    data.renderer = SDL_CreateRenderer(data.window, -1, 0);
    if(data.renderer == NULL) return -3;
    data.rlist->setRenderer(data.renderer);
    data.run = true;
    if(thread == NULL)
        thread = SDL_CreateThread(renderThread, "RenderThread", (void*)(&data));
    else return 1;
    return 0;
}
Then the actual thread is:
int RenderThread::renderThread(void* d) {
    RenderData* data = (RenderData*)d;
    data->rlist->render(true);
    SDL_SetRenderDrawColor(data->renderer, 0xFF, 0xFF, 0xFF, 0xFF);
    SDL_RenderClear(data->renderer);
    while(data->run) {
        data->rlist->render();
        SDL_RenderPresent(data->renderer);
        SDL_Delay(data->interval);
    }
    return 0;
}
If you need to see more of the code it is all on github.
Some platforms (e.g. Windows) don't allow interacting with windows from threads other than the one that created them.
The documentation explicitly says this:
NOTE: You should not expect to be able to create a window, render, or receive events on any thread other than the main one.
From a design perspective, trying to render from another thread becomes the source of many problems. For instance:
Is it desirable to (unpredictably) update an object more than once per frame? What's preventing a logic thread from trying to make many updates that can't be rendered?
Is it desirable to risk re-rendering without having the chance to update an object?
Will you lock the entire scene while the update happens? Or will each object get its own lock, so you don't try to render an object that's in the middle of an update? Is it desirable for the frame rate to be unpredictable, due to other threads locking the objects?
Not to mention the costs of synchronization primitives.
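If the goal is simply to keep heavy work off the rendering path, the usual restructuring is the opposite of the code above: keep the window and every SDL_Render* call on the main thread, and push only the non-rendering work onto a worker. A minimal sketch (the names here are illustrative and not taken from the library in question):
#include <SDL.h>
#include <atomic>

std::atomic<bool> running{true};

static int WorkerThread(void*) // game logic, asset loading, etc. -- no SDL_Render* calls here
{
    while (running)
        SDL_Delay(1); // placeholder for the real work
    return 0;
}

int main(int, char**)
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window* window = SDL_CreateWindow("demo", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 640, 480, 0);
    SDL_Renderer* renderer = SDL_CreateRenderer(window, -1, 0);
    SDL_Thread* worker = SDL_CreateThread(WorkerThread, "worker", NULL);

    SDL_Event e;
    while (running)
    {
        while (SDL_PollEvent(&e))
            if (e.type == SDL_QUIT)
                running = false;

        SDL_SetRenderDrawColor(renderer, 0, 0, 0, 255);
        SDL_RenderClear(renderer);
        // draw the scene here, on the same thread that created the window
        SDL_RenderPresent(renderer);
    }

    SDL_WaitThread(worker, NULL);
    SDL_DestroyRenderer(renderer);
    SDL_DestroyWindow(window);
    SDL_Quit();
    return 0;
}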