Handle OS window resizing when using Dear ImGui (DX11 backend) - c++

I am trying to make a simple ImGui Win32 app with Visual C++.
I noticed a problem: when I try to resize my host window,
the ImGui window gets stretched and the widgets (and other drawing, such as lines) are deformed.
An example:
I am drawing a line from 0, 0 to 200, 200.
In theory, this line should always sit at a 45-degree angle to the bottom of the window, but instead it stretches around as the window is resized.
I am pretty sure that this is not a D3D11 issue, because I have already worked with D3D11 a couple of times before and I know that it handles window resizing without any additional work/code.
I have been looking through the ImGui docs for a few days now, but I couldn't find any answer to my problem. This is really frustrating, as I thought this would be a trivial thing for such a well-known framework.
Here's the rendering code:
std::optional<int> Window::beginRender() noexcept
{
    if (const auto code = Window::processMessages())
        return *code;

    ImGui_ImplDX11_NewFrame();
    ImGui_ImplWin32_NewFrame();
    ImGui::NewFrame();
    return {};
}
void Window::render(const char* windowTitle) noexcept
{
    ImGui::SetNextWindowPos({ 0, 0 });
    ImGui::SetNextWindowSize({ static_cast<float>(m_width), static_cast<float>(m_height) });
    ImVec2 test{ ImGui::GetWindowSize() };
    ImVec2 test2{ ImGui::GetContentRegionAvail() };
    ImGui::Begin(windowTitle, nullptr, /*ImGuiWindowFlags_NoResize |*/ ImGuiWindowFlags_AlwaysAutoResize | ImGuiWindowFlags_NoSavedSettings | ImGuiWindowFlags_NoCollapse | ImGuiWindowFlags_NoMove | ImGuiWindowFlags_NoBackground | ImGuiWindowFlags_NoTitleBar);
    ImGui::ShowDemoWindow();
    ImGui::GetBackgroundDrawList()->AddLine(ImVec2{ 0.0f, 0.0f }, ImVec2{ 200.0f, 200.0f }, 0xFF0000FF);
    ImGui::End();
}
void Window::endRender() noexcept
{
    ImGui::EndFrame();
    ImGui::Render();
    constexpr float color[]{ 0.0f, 0.0f, 0.0f, 0.0f };
    m_pContext->OMSetRenderTargets(1, &m_pRTV, nullptr);
    m_pContext->ClearRenderTargetView(m_pRTV, color);
    updateTargetData();
    MoveWindow(m_hWnd, m_position.x, m_position.y, m_width, m_height, false);
    //ImGuiIO& io{ ::ImGui::GetIO() };
    //io.DisplaySize.x = m_width;
    //io.DisplaySize.y = m_height;
    ImGui_ImplDX11_RenderDrawData(ImGui::GetDrawData());
    m_pSwapChain->Present(1, 0);
}
(About the app: it's tracking another window (notepad.exe), so I am actually resizing the target window and then using MoveWindow() to follow it. But I guess this doesn't make any difference.)
So how do I fix this issue? How do I correctly handle window resizing with ImGui?

In your window procedure, when the window is resized, you'll need to resize the swap chain buffers and recreate the render target view, like so:
case WM_SIZE:
{
    if (pd3dDevice != nullptr && wParam != SIZE_MINIMIZED)
    {
        pMainRenderTargetView->Release();
        pSwapChain->ResizeBuffers(0, LOWORD(lParam), HIWORD(lParam), DXGI_FORMAT_UNKNOWN, 0);
        ID3D11Texture2D* pBackBuffer;
        pSwapChain->GetBuffer(0, IID_PPV_ARGS(&pBackBuffer));
        pd3dDevice->CreateRenderTargetView(pBackBuffer, NULL, &pMainRenderTargetView);
        pBackBuffer->Release();
        // Change the viewport here as well (see below).
    }
    return 0;
}
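For the "change the viewport" part, here is a minimal sketch; pContext stands in for your ID3D11DeviceContext (a name assumed here, not taken from the question):
D3D11_VIEWPORT vp = {};
vp.TopLeftX = 0.0f;
vp.TopLeftY = 0.0f;
vp.Width = static_cast<float>(LOWORD(lParam));
vp.Height = static_cast<float>(HIWORD(lParam));
vp.MinDepth = 0.0f;
vp.MaxDepth = 1.0f;
pContext->RSSetViewports(1, &vp); // draws now map to the new client area
Also note that ImGui_ImplWin32_NewFrame() already sets io.DisplaySize from the window's client rectangle every frame, so the commented-out DisplaySize code in endRender() should not be necessary.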

Related

I'm trying to use OpenGL with the Windows API on different threads

So basically I am using the Windows API to create an empty window, and then I use OpenGL to draw to that window from different threads. I managed to do this with just one thread, but getting and dispatching system messages so that the window stays usable was lowering the frame rate I was able to get, so I'm trying to get another thread to do that in parallel while I draw in the main thread.
To do this I have a second thread which creates an empty window and enters an infinite loop to handle the Windows message loop. Before entering the loop it passes the HWND of the empty window back to the main thread so OpenGL can be initialised. To do that I use the PostThreadMessage function with the message code WM_USER and the window handle in the wParam of the message. Here is the code for that secondary thread:
bool t2main(DWORD parentThreadId, int x = 0, int y = 0, int w = 256, int h = 256, int pixelw = 2, int pixelh = 2, const char* windowName = "Window") {
    // Basic drawing values
    int sw = w, sh = h, pw = pixelw, ph = pixelh;
    int ww = 0; int wh = 0;
    // Windows API window handler
    HWND windowHandler;
    // Calculate total window dimensions
    ww = sw * pw; wh = sh * ph;
    // Create the window handler
    WNDCLASS wc;
    wc.hIcon = LoadIcon(NULL, IDI_APPLICATION);
    wc.hCursor = LoadCursor(NULL, IDC_ARROW);
    wc.style = CS_HREDRAW | CS_VREDRAW | CS_OWNDC;
    wc.hInstance = GetModuleHandle(nullptr);
    wc.lpfnWndProc = DefWindowProc;
    wc.cbClsExtra = 0;
    wc.cbWndExtra = 0;
    wc.lpszMenuName = nullptr;
    wc.hbrBackground = nullptr;
    wc.lpszClassName = "windowclass";
    RegisterClass(&wc);
    DWORD dwExStyle = WS_EX_APPWINDOW | WS_EX_WINDOWEDGE;
    DWORD dwStyle = WS_CAPTION | WS_SYSMENU | WS_VISIBLE | WS_THICKFRAME;
    RECT rWndRect = { 0, 0, ww, wh };
    AdjustWindowRectEx(&rWndRect, dwStyle, FALSE, dwExStyle);
    int width = rWndRect.right - rWndRect.left;
    int height = rWndRect.bottom - rWndRect.top;
    windowHandler = CreateWindowEx(dwExStyle, "windowclass", windowName, dwStyle, x, y, width, height, NULL, NULL, GetModuleHandle(nullptr), NULL);
    if(windowHandler == NULL) { return false; }
    PostThreadMessageA(parentThreadId, WM_USER, (WPARAM) windowHandler, 0);
    for(;;) {
        MSG msg;
        PeekMessageA(&msg, NULL, 0, 0, PM_REMOVE);
        DispatchMessageA(&msg);
    }
}
This function gets called from the main entry point, which correctly receives the window handle and then tries to set up OpenGL with it. Here is the code:
int main() {
    // Basic drawing values
    int sw = 256, sh = 256, pw = 2, ph = 2;
    int ww = 0; int wh = 0;
    const char* windowName = "Window";
    // Thread stuff
    DWORD t1Id, t2Id;
    HANDLE t1Handler, t2Handler;
    // Pixel array
    Pixel* pixelBuffer = nullptr;
    // OpenGL device context to draw
    HDC glDeviceContext;
    HWND threadWindowHandler;
    t1Id = GetCurrentThreadId();
    std::thread t = std::thread(&t2main, t1Id, 0, 0, sw, sh, pw, ph, windowName);
    t.detach();
    t2Handler = t.native_handle();
    t2Id = GetThreadId(t2Handler);
    while(true) {
        MSG msg;
        PeekMessageA(&msg, NULL, WM_USER, WM_USER + 100, PM_REMOVE);
        if(msg.message == WM_USER) {
            threadWindowHandler = (HWND) msg.wParam;
            break;
        }
    }
    // Initialise OpenGL with the window handle that we just created
    glDeviceContext = GetDC(threadWindowHandler);
    PIXELFORMATDESCRIPTOR pfd = {
        sizeof(PIXELFORMATDESCRIPTOR), 1,
        PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER,
        PFD_TYPE_RGBA, 32, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        PFD_MAIN_PLANE, 0, 0, 0, 0
    };
    int pf = ChoosePixelFormat(glDeviceContext, &pfd);
    SetPixelFormat(glDeviceContext, pf, &pfd);
    HGLRC glRenderContext = wglCreateContext(glDeviceContext);
    wglMakeCurrent(glDeviceContext, glRenderContext);
    // Create an OpenGL buffer
    GLuint glBuffer;
    glEnable(GL_TEXTURE_2D);
    glGenTextures(1, &glBuffer);
    glBindTexture(GL_TEXTURE_2D, glBuffer);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);
    // Create a pixel buffer to hold the screen data and allocate space for it
    pixelBuffer = new Pixel[sw * sh];
    for(int32_t i = 0; i < sw * sh; i++) {
        pixelBuffer[i] = Pixel();
    }
    // Test a pixel
    pixelBuffer[10 * sw + 10] = Pixel(255, 255, 255);
    // Push the current buffer into view
    glViewport(0, 0, ww, wh);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, sw, sh, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixelBuffer);
    glBegin(GL_QUADS);
    glTexCoord2f(0.0, 1.0); glVertex3f(-1.0f, -1.0f, 0.0f);
    glTexCoord2f(0.0, 0.0); glVertex3f(-1.0f, 1.0f, 0.0f);
    glTexCoord2f(1.0, 0.0); glVertex3f(1.0f, 1.0f, 0.0f);
    glTexCoord2f(1.0, 1.0); glVertex3f(1.0f, -1.0f, 0.0f);
    glEnd();
    SwapBuffers(glDeviceContext);
    for(;;) {}
}
To hold the pixel information I'm using this struct:
struct Pixel {
    union {
        uint32_t n = 0xFF000000; // Default 255 alpha
        struct {
            uint8_t r; uint8_t g; uint8_t b; uint8_t a;
        };
    };
    Pixel() {
        r = 0;
        g = 0;
        b = 0;
        a = 255;
    }
    Pixel(uint8_t red, uint8_t green, uint8_t blue, uint8_t alpha = 255) {
        r = red;
        g = green;
        b = blue;
        a = alpha;
    }
};
When I try to run this code I don't get the desired pixel output; instead I just get the empty window, as if OpenGL hadn't initialised correctly. When I use the same code but all in one thread, I get the empty window with the pixel in it. What am I doing wrong here? Is there something I need to do before I initialise OpenGL in another thread? I appreciate all kinds of feedback. Thanks in advance.
There are several issues here. Let's address them in order.
First, let's recall the rules of OpenGL and threads.
The basic rules about OpenGL with regard to windows, device context and threads are:
An OpenGL context is not associated with a particular window or device context.
You can make an OpenGL context "current" on any device context (HDC, usually associated with a window) that is compatible with the device context for which the OpenGL context was originally created.
An OpenGL context can be "current" on only one thread at a time, or not be active at all.
To move an OpenGL context's "current" state from one thread to another, you do the following:
first: unmake the context "current" on the thread it is currently used on
second: make it "current" on the thread you want it to be current on.
More than one thread (up to and including all threads) in a process can have an OpenGL context "current" at the same time.
Multiple OpenGL contexts (up to and including all of them) – which, per the rules above, must be current in different threads – can be current with the same device context (HDC) at the same time.
There are no defined rules for drawing commands happening concurrently on different threads that are current on the same HDC. Ordering must be enforced by the user, by placing appropriate locks or by using OpenGL synchronization primitives. Until the introduction of explicit, fine-grained synchronization objects into OpenGL, the only synchronization available was glFinish and the implicitly synchronizing OpenGL calls (e.g. glReadPixels).
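As a minimal sketch of the hand-off rule in code (hDC and hGLRC are placeholder names, and the signaling between threads is elided):
// Thread A: release the context before handing it over.
wglMakeCurrent(NULL, NULL);
// ... signal thread B (event, condition variable, ...) ...

// Thread B: once signaled, take over the very same context.
wglMakeCurrent(hDC, hGLRC);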
Misconceptions in your understanding of what OpenGL does
This comes from reading the comments in your code:
int main() {
Why is your thread function called main? main is a reserved name, to be used exclusively for the process entry function. Even if your entry point is WinMain, you must not use main as a function name.
// Pixel array
Pixel* pixelBuffer = nullptr;
It's unclear what the pixelBuffer is meant for later on. You will upload it into a texture, but apparently don't set up the drawing to use a texture.
t1Id = GetCurrentThreadId();
std::thread t = std::thread(&t2main, t1Id, 0, 0, sw, sh, pw, ph, windowName);
t.detach();
t2Handler = t.native_handle();
t2Id = GetThreadId(t2Handler);
What, I don't even... What is this supposed to do in the first place? First things first: don't mix the Win32 threads API and C++ std::thread. Decide on one, and stick with it.
while(true) {
MSG msg;
PeekMessageA(&msg, NULL, WM_USER, WM_USER + 100, PM_REMOVE);
if(msg.message == WM_USER) {
threadWindowHandler = (HWND) msg.wParam;
break;
}
}
Why the hell are you passing the window handle through a thread message? This is so wrong on so many levels. Threads all live in the same address space, so you could use a queue, or a global variable, or pass it as a parameter to the thread entry function, etc.
Furthermore, you could simply have created the OpenGL context in the main thread and then passed it over.
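For illustration, a sketch of one such alternative using std::promise/std::future (hypothetical code, not the poster's):
#include <future>
#include <thread>
#include <windows.h>

std::promise<HWND> windowPromise;
std::future<HWND> windowFuture = windowPromise.get_future();

std::thread t([&windowPromise]() {
    HWND hWnd = nullptr; // CreateWindowEx(...) as in t2main
    windowPromise.set_value(hWnd); // hand the handle to the main thread
    // ... run the message loop here ...
});
t.detach(); // or keep it joinable and join at shutdown

HWND threadWindowHandler = windowFuture.get(); // blocks until set_value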
wglMakeCurrent(glDeviceContext, glRenderContext);
// Create an OpenGl buffer
GLuint glBuffer;
glEnable(GL_TEXTURE_2D);
glGenTextures(1, &glBuffer);
That doesn't create an OpenGL buffer object; it creates a texture name.
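For the record, the two calls do different things (and on Windows, the buffer-object entry points must first be loaded through an extension loader):
GLuint tex;
glGenTextures(1, &tex); // reserves a texture *name*; storage is only created
                        // later, e.g. by glTexImage2D
GLuint buf;
glGenBuffers(1, &buf);  // this is the call that creates a buffer object name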
glBindTexture(GL_TEXTURE_2D, glBuffer);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);
// Create a pixel buffer to hold the screen data and allocate space for it
pixelBuffer[10 * sw + 10] = Pixel(255, 255, 255);
Uhh, no, you don't supply drawable buffers to OpenGL in that way. Heck, you don't even supply draw buffers to OpenGL explicitly at all (this is not D3D12, Metal or Vulkan, where you do).
// Push the current buffer into view
glViewport(0, 0, ww, wh);
Noooo. That's not what glViewport does!
glViewport is part of the transformation pipeline state and ultimately sets the destination rectangle of where inside a drawable the clip-space volume will be mapped to. It does absolutely nothing with respect to the drawable's buffers.
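To illustrate (width and height here stand for the drawable's size; this is not the poster's code):
// Map the clip-space volume to the lower-left quarter of the window.
// No buffer is created, resized or selected by this call.
glViewport(0, 0, width / 2, height / 2);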
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, sw, sh, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixelBuffer);
I think you don't understand what a texture is for. What this call does is copy over the contents of pixelBuffer into the currently bound texture. After that, OpenGL is no longer concerned with pixelBuffer at all.
glBegin(GL_QUADS);
glTexCoord2f(0.0, 1.0); glVertex3f(-1.0f, -1.0f, 0.0f);
glTexCoord2f(0.0, 0.0); glVertex3f(-1.0f, 1.0f, 0.0f);
glTexCoord2f(1.0, 0.0); glVertex3f(1.0f, 1.0f, 0.0f);
glTexCoord2f(1.0, 1.0); glVertex3f(1.0f, -1.0f, 0.0f);
glEnd();
Here you draw something, but never enabled the use of the texture in the first place. So all that ado about setting up the texture is for nothing.
SwapBuffers(glDeviceContext);
for(;;) {}
}
So after swapping the window buffers, you make the thread spin forever. Two problems with that: there is still the main message loop over in the other thread that handles other messages for the window, including maybe WM_PAINT; and depending on whether you've set a background brush and/or how you handle WM_ERASEBKGND, whatever you just drew might instantly vanish thereafter.
And by spinning the thread you're consuming CPU time for no reason whatsoever. You could just as well end the thread.
I solved the problem primarily with the help of datenwolf's answer. Firstly, I used a shared pointer variable to pass data between threads, which removed the need for PostThreadMessageA, which was the main reason why I was using the Win32 threads API in the first place. I also changed the OpenGL code a bit and finally got what I wanted.

QPixmap runs over my glScissor(...) setting

I apologize if this isn't exact. I'm doing the best I can to copy code by hand from one computer to another, and the destination computer doesn't have a compiler (don't ask).
Header file
#ifndef MYOPENGLWIDGET_H
#define MYOPENGLWIDGET_H

#include <qopenglwidget.h>

class MyOpenGlWidget : public QOpenGLWidget
{
    Q_OBJECT
public:
    explicit MyOpenGlWidget(QWidget *parent = 0, Qt::WindowFlags f = Qt::WindowFlags());
    virtual ~MyOpenGlWidget();
protected:
    // these are supposed to be overridden, so use the "override" keyword to compiler check yourself
    virtual void initializeGL() override;
    virtual void resizeGL(int w, int h) override;
    virtual void paintGL() override;
private:
    QPixmap *_foregroundPixmap;
};

#endif
Source file
QOpenGLFunctions_2_1 *f = 0;

MyOpenGlWidget::MyOpenGlWidget(QWidget *parent, Qt::WindowFlags f) :
    QOpenGLWidget(parent, f)
{
    _foregroundPixmap = 0;
    QPixmap *p = new QPixmap("beveled_texture.tiff");
    if (!p->isNull())
    {
        _foregroundPixmap = p;
    }
}

MyOpenGlWidget::~MyOpenGlWidget()
{
    delete _foregroundPixmap;
}
void MyOpenGlWidget::initializeGL()
{
    // getting a deprecated set of functions because such is my work environment
    // Note: Also, QOpenGLWidget doesn't support these natively.
    f = QOpenGLContext::currentContext()->versionFunctions<QOpenGLFunctions_2_1>();
    f->glClearColor(0.0f, 1.0f, 0.0f, 1.0f); // clearing to green
    f->glEnable(GL_DEPTH_TEST);
    f->glEnable(GL_CULL_FACE); // implicitly culling front face
    f->glEnable(GL_SCISSOR_TEST);
    // it is either copy the matrix and viewport code from resizeGL or just call the method
    this->resizeGL(this->width(), this->height());
}

void MyOpenGlWidget::resizeGL(int w, int h)
{
    // make the viewport square
    int sideLen = qMin(w, h);
    int x = (w - sideLen) / 2;
    int y = (h - sideLen) / 2;
    // the widget is 400x400, so this random demonstration square will show up inside it
    f->glViewport(50, 50, 100, 100);
    f->glMatrixMode(GL_PROJECTION);
    f->glLoadIdentity();
    f->glOrtho(-2.0f, +2.0f, -2.0f, +2.0f, 1.0f, 15.0f); // magic numbers left over from a demo
    f->glMatrixMode(GL_MODELVIEW);
    // queue up a paint event
    // Note: QGLWidget used updateGL(), but QOpenGLWidget uses update().
    this->update();
}

void MyOpenGlWidget::paintGL()
{
    f->glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // I want to draw a texture with beveled edges the size of this widget, so I can't
    // have the background clearing all the way to the edges
    f->glScissor(50, 50, 200, 200); // more magic numbers just for demonstration
    // clears to green in just the scissored area (unless a QPainter is created)
    f->glClearColor(0.0f, 1.0f, 0.0f, 1.0f);
    // loading identity matrix, doing f->glTranslatef(...) and f->glRotatef(...), drawing triangles
    // done drawing, so now draw the beveled foreground
    if (_foregroundPixmap != 0)
    {
        // QPixmap apparently draws such that culling the back face will cull the entire
        // pixmap, so have to switch culling for the duration of pixmap drawing
        f->glCullFace(GL_FRONT);
        QPainter painter(this);
        painter.drawPixmap(0, 0, _foregroundPixmap->scaled(this->size()));
        // done, so switch back to culling the front face
        f->glCullFace(GL_BACK);
    }
}
The problem is this code from paintGL():
QPainter painter(this);
As soon as a QPainter object is created, the glScissor(...) call that I made earlier in the function is overrun and some kind of glClearColor(...) call is made (possibly from QPainter's constructor) that clears the entire viewport to the background color that I set just after glScissor(...). Then the pixmap draws my beveled texture just fine.
I don't want QPainter to overrun my scissoring.
The closest I got to an explanation was two QPainter methods, beginNativePainting() and endNativePainting(). According to the documentation, scissor testing is disabled between these two, but in their example they re-enable it. I tried using this "native painting" code, but I couldn't stop QPainter's mere existence from ignoring GL's scissoring and clearing my entire viewport.
Why is this happening and how do I stop this?
Note: This work computer has network policies that prevent me from going to entertainment sites like imgur to upload "what I want" and "what I get" pictures, so I have to make do with text.
Why is this happening
The OpenGL context is a shared resource and you have to share it with other players.
and how do I stop this?
You can't. Just do the proper thing and set the viewport, scissor rectangle, and all the other drawing-related state at the right moment: right before you are going to draw something that relies on those settings. Don't set them aeons (in computer terms) earlier, somewhere in some "initialization" function or a reshape handler. And expect that any function you call in drawing code that makes use of OpenGL may leave some garbage behind.
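What you can do is re-establish the state right where you need it, inside QPainter's native-painting bracket. A sketch, reusing the scissor values from the question:
void MyOpenGlWidget::paintGL()
{
    QPainter painter(this);
    painter.beginNativePainting();
    // QPainter may have trampled our GL state; set it again right before use.
    f->glEnable(GL_SCISSOR_TEST);
    f->glScissor(50, 50, 200, 200);
    f->glClearColor(0.0f, 1.0f, 0.0f, 1.0f);
    f->glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // ... GL drawing that relies on the scissor rectangle ...
    painter.endNativePainting();
    // QPainter work (the pixmap) comes after the native block.
    painter.drawPixmap(0, 0, _foregroundPixmap->scaled(this->size()));
}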

Trouble converting from SDL 1.2 to 2.0; don't know how to initialize the surface properly

I'm writing a basic 2D game in C++ with SDL, and just yesterday I converted from 1.2 to 2.0.
I wanted to create a moving camera, and I realized that I should switch from using SDL_Surfaces to SDL_Textures to use some of 2.0's new features. My previous setup involved blitting a bunch of sprite surfaces to a main surface called buffer, and then flipping that to the screen, and it worked well.
So, I created a renderer and assigned it to the window. Then I used SDL_CreateTextureFromSurface to create a texture from my main SDL_Surface. My surface is currently set to nullptr at this point, and I realize that I should probably initialize it to something, but I can't figure out what. Here is the relevant code.
void Game::Init() {
    // Initialize SDL.
    SDL_Init(SDL_INIT_EVERYTHING);
    window = nullptr;
    buffer = nullptr;
    camera = { player.GetLocation().x - 50, player.GetLocation().y - 50, player.GetLocation().x + 50, player.GetLocation().y + 50 };
    fullscreen = false;
    screenWidth = 640, screenHeight = 480;
    if (fullscreen) {
        window = SDL_CreateWindow("Rune", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, screenWidth, screenHeight, SDL_WINDOW_FULLSCREEN_DESKTOP);
        renderer = SDL_CreateRenderer(window, -1, SDL_RENDERER_SOFTWARE);
    }
    else {
        // Initialize our window.
        window = SDL_CreateWindow("Rune", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, screenWidth, screenHeight, SDL_WINDOW_SHOWN);
        renderer = SDL_CreateRenderer(window, -1, SDL_RENDERER_SOFTWARE);
        // Assign the buffer surface to the window.
    }
    texture = SDL_CreateTextureFromSurface(renderer, buffer);
    SDL_SetRenderTarget(renderer, texture);
    // Run the game loop.
    Run();
}
In Run, there's normal working code that involves a game loop, but in the game loop I have a method, Render(), which is causing problems. Here it is.
void Game::Render() {
    // Clear the buffer.
    SDL_FillRect(buffer, NULL, 0x000000);
    // Draw everything to the buffer.
    level.Draw(buffer);
    if (enemy.IsAlive()) enemy.Draw(buffer);
    player.Draw(buffer);
    SDL_UpdateTexture(texture, NULL, buffer->pixels, buffer->pitch);
    SDL_RenderClear(renderer);
    SDL_RenderCopy(renderer, texture, NULL, NULL);
    SDL_RenderPresent(renderer);
    // Draw buffer to the screen.
    //SDL_UpdateWindowSurface(window);
}
Running this code gives me the error that buffer is nullptr, which comes up on the SDL_UpdateTexture line. If I allocate memory for buffer in the Init() function, I get a memory access error. Can anyone offer a solution? Thank you.
After changing all my SDL_Surfaces to Textures, I realized that in my header file I had been calling constructors that relied on the renderer to have already been initialized, which it wasn't. I set them to blank constructors and then initialized them in the Game.cpp file, after the renderer, and now everything works. Thanks Benjamin for your help!
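For reference, one way to give buffer real pixels before the texture is created — a sketch assuming a 32-bit ARGB setup, not the poster's final code:
// A plain software surface to keep blitting sprites into, as under SDL 1.2.
buffer = SDL_CreateRGBSurface(0, screenWidth, screenHeight, 32,
                              0x00FF0000, 0x0000FF00, 0x000000FF, 0xFF000000);
// A streaming texture matches the SDL_UpdateTexture call in Render().
texture = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_ARGB8888,
                            SDL_TEXTUREACCESS_STREAMING,
                            screenWidth, screenHeight);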

I want to write a bitmap image file with what I have already rendered through OpenGL

I'm trying to write bitmap files for every frame that I render through OpenGL.
Please note that I'm not going to read bitmaps; I'm going to WRITE NEW BITMAP files.
Here is part of my C++ code:
void COpenGLWnd::ShowinWnd(int ID)
{
    if(m_isitStart == 1)
    {
        m_hDC = ::GetDC(m_hWnd);
        SetDCPixelFormat(m_hDC);
        m_hRC = wglCreateContext(m_hDC);
        VERIFY(wglMakeCurrent(m_hDC, m_hRC));
        m_isitStart = 0;
    }
    GLRender();
    CDC* pDC = CDC::FromHandle(m_hDC);
    //pDC->FillSolidRect(0, 0, 100, 100, RGB(100, 100, 100));
    CRect rcClient;
    GetClientRect(&rcClient);
    SaveBitmapToDirectFile(pDC, rcClient, _T("a.bmp"));
    SwapBuffers(m_hDC);
}
"GLRender" is the function which can render on the MFC window.
"SaveBitmapToDirectFile" is the function that writes a new bitmap image file from the parameter pDC, and I could check that it works well if I erase that double slash on the second line, because only gray box on left top is drawn at "a.bmp"
So where has m_hDC gone? I have no idea why rendered scene wasn't written on "a.bmp".
Here is GLRender codes, but I don't think that this function was the problem, because it can render image and print it out well on window.
void COpenGLWnd::GLFadeinRender()
{
    glViewport(0, 0, m_WndWidth, m_WndHeight);
    glOrtho(0, m_WndWidth, 0, m_WndHeight, 0, 100);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glClear(GL_COLOR_BUFFER_BIT);
    glEnable(GL_BLEND);
    glBlendFunc(m_BlendingSrc, m_BlendingDest);
    glPixelTransferf(GL_ALPHA_SCALE, (GLfloat)(1 - m_BlendingAlpha));
    glPixelZoom((GLfloat)m_WndWidth / (GLfloat)m_w1, -(GLfloat)m_WndHeight / (GLfloat)m_h1);
    glRasterPos2f(0, m_WndHeight);
    glDrawPixels((GLsizei)m_w1, (GLsizei)m_h1, GL_BGR_EXT, GL_UNSIGNED_BYTE, m_pImageA);
    glPixelTransferf(GL_ALPHA_SCALE, (GLfloat)m_BlendingAlpha);
    glPixelZoom((GLfloat)m_WndWidth / (GLfloat)m_w2, -(GLfloat)m_WndHeight / (GLfloat)m_h2);
    glRasterPos2f(0, m_WndHeight);
    glDrawPixels((GLsizei)m_w2, (GLsizei)m_h2, GL_BGR_EXT, GL_UNSIGNED_BYTE, m_pImageB);
    glFlush();
}
I'm guessing you're using MFC or Windows API functions to capture the bitmap from the window. The problem is that you need to use glReadPixels to get the image from a GL context -- winapi isn't able to do that.
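A sketch of that route — SaveFrameAsBmp is a hypothetical helper, and the BMP writing uses the standard Win32 header structs:
#include <windows.h>
#include <gl/GL.h>
#include <fstream>
#include <vector>

// Read the current framebuffer into client memory and write a 24-bit BMP.
// Call this while the GL context is current, before SwapBuffers.
void SaveFrameAsBmp(const char* path, int w, int h)
{
    const int rowSize = ((w * 3 + 3) / 4) * 4; // BMP rows are 4-byte aligned
    std::vector<BYTE> pixels(rowSize * h);
    glPixelStorei(GL_PACK_ALIGNMENT, 4); // matches the BMP row padding
    glReadPixels(0, 0, w, h, GL_BGR_EXT, GL_UNSIGNED_BYTE, pixels.data());

    BITMAPFILEHEADER bfh = {};
    bfh.bfType = 0x4D42; // "BM"
    bfh.bfOffBits = sizeof(BITMAPFILEHEADER) + sizeof(BITMAPINFOHEADER);
    bfh.bfSize = bfh.bfOffBits + static_cast<DWORD>(pixels.size());

    BITMAPINFOHEADER bih = {};
    bih.biSize = sizeof(BITMAPINFOHEADER);
    bih.biWidth = w;
    bih.biHeight = h; // positive height: bottom-up rows, same order as glReadPixels
    bih.biPlanes = 1;
    bih.biBitCount = 24;
    bih.biCompression = BI_RGB;

    std::ofstream file(path, std::ios::binary);
    file.write(reinterpret_cast<const char*>(&bfh), sizeof(bfh));
    file.write(reinterpret_cast<const char*>(&bih), sizeof(bih));
    file.write(reinterpret_cast<const char*>(pixels.data()), pixels.size());
}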

viewport or projection not resizing correctly on window resize in wxWidgets

I have a wxWidgets application using the wxGLCanvas widget from the contrib package. Every time the widget is refreshed, I recreate the projection matrix and the viewport before rendering the scene. The problem is, after the frame is resized, whether by dragging or maximizing or whatever, the widget is correctly resized, but the viewport or projection (can't really discern which) is not resized, so the scene ends up being cropped.
Now for some code.
The constructor of my wxGLCanvas widget looks like this:
int attrib_list[] = { WX_GL_RGBA, WX_GL_DOUBLEBUFFER, 0 }; // attribute lists must be zero-terminated

RenderCanvas::RenderCanvas(wxFrame* parent) : wxGLCanvas(parent, idRender, wxPoint(200, -1), wxSize(800, 600), wxFULL_REPAINT_ON_RESIZE, wxT("GLCanvas"), attrib_list)
{
    Connect(idRender, wxEVT_PAINT, wxPaintEventHandler(RenderCanvas::PaintIt));
    Connect(idRender, wxEVT_SIZE, wxSizeEventHandler(RenderCanvas::Resize));
    Connect(idRender, wxEVT_LEFT_DOWN, wxMouseEventHandler(RenderCanvas::OnMouseLeftDown));
    Connect(idRender, wxEVT_LEFT_UP, wxMouseEventHandler(RenderCanvas::OnMouseLeftUp));
    Connect(idRender, wxEVT_MOTION, wxMouseEventHandler(RenderCanvas::OnMouseMotion));
}
I overrode the PaintIt method like so:
void RenderCanvas::PaintIt(wxPaintEvent& event)
{
    SetCurrent();
    wxPaintDC(this);
    Render();
    Refresh(false);
}
and the resize method:
void RenderCanvas::Resize(wxSizeEvent& event)
{
    Refresh();
}
The Render method used above looks like this:
void RenderCanvas::Render()
{
    Prepare2DViewport(0, GetClientSize().y, GetClientSize().x, 0);
    glClear(GL_COLOR_BUFFER_BIT);
    //Renders stuff...
    glFlush();
    SwapBuffers();
}
And lastly, the Prepare2DViewport, where the glViewport and glOrtho functions are being called is:
void RenderCanvas::Prepare2DViewport(int x, int h, int w, int y)
{
    glClearColor(0.7f, 0.7f, 0.7f, 1.0f); // Grey Background
    glEnable(GL_TEXTURE_2D); // textures
    glEnable(GL_BLEND);
    glEnable(GL_LINE_SMOOTH);
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glViewport(x, y, w, -h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(x, (float)w, y, (float)h, -1.0f, 1.0f);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}
It should also be noted that I have tried using GetSize() instead of GetClientSize(), but the effect is the same. Also, I am on Ubuntu 12.04 and wxWidgets is using GTK.
UPDATE:
It should also be noted that I have click/drag functionality implemented as well (not shown), and every time the mouse is clicked or dragged, the widget is refreshed. With that said, I tried disconnecting the wxEVT_SIZE event, but after resizing and then clicking and dragging, the same effect persists.
From a quick glance at your code, I notice that your resize event handler does nothing more than ask for another refresh. This does not seem correct to me. A resize handler would normally do some calculation, call the resize handlers of one or more child windows, and so on, before asking for a refresh. So I think your problem is that the canvas has not been resized BEFORE being repainted, which would explain why your scene is clipped. Not sure what the fix would be - hopefully this will be enough of a hint so you can look in the right direction.
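As a sketch of what such a resize handler might look like (an illustration under the assumption that the GL context can be made current here; this is not the poster's eventual fix, which follows below):
void RenderCanvas::Resize(wxSizeEvent& event)
{
    event.Skip(); // let wxGLCanvas do its own size processing first
    int w, h;
    GetClientSize(&w, &h);
    SetCurrent();
    glViewport(0, 0, w, h); // the viewport must track the new client area
    Refresh();
}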
Okay, so it turned out to be a very, very stupid mistake on my part.
In this line:
glViewport(x, y, w, -h);
I accidentally set the height to a negative value. Removing the negative sign fixed the issue. I am not sure why it was causing a crop; I would have thought a negative height would prevent me from seeing anything at all, but it works, so I can't complain I guess. Thanks again all for the help!