Given that I am programming within another program already using OpenGL (let's say theoretically that I have no idea how they are using it).
Can I just set up my context however I want and push/pop it from the stack and all should work as expected, or MUST I know how my (calling) program is using OpenGL in order to avoid accidentally screwing things up?
Also, how would I go about "initializing" OpenGL when it might have already been initialized?
Thanks for any advice you might have!
To answer your first question, you probably could get away with calling glPushAttrib(GL_ALL_ATTRIB_BITS) before calling the program's functions, and calling glPopAttrib() afterward. Note that this can be slow if you do it frequently (say, every frame in a loop).
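For illustration, the bracket looks roughly like this (a minimal sketch assuming a legacy, compatibility-profile context; hostProgramRender() is a hypothetical stand-in for whatever GL work you want to isolate):

glPushAttrib(GL_ALL_ATTRIB_BITS);               // save enable flags, blend mode, and other attribute state
glPushClientAttrib(GL_CLIENT_ALL_ATTRIB_BITS);  // save client-side state (vertex arrays, pixel store) if you touch it

hostProgramRender();  // hypothetical: the GL calls being isolated

glPopClientAttrib();
glPopAttrib();

Keep in mind that the matrix stacks are not covered by the attribute stack (they have their own glPushMatrix/glPopMatrix), so anything outside the attribute groups still has to be saved and restored by hand.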
I'm not sure what you mean by initializing OpenGL. Do you mean setting up the rendering context? Setting viewport and projection? Disabling or enabling certain features? You can always check if certain states are enabled (using glGet functions), but the rest depends on how your program and the other program work.
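For example, a few standard queries can show what the host has already set up:

GLboolean depthOn = glIsEnabled(GL_DEPTH_TEST);      // is depth testing currently on?

GLint viewport[4];
glGetIntegerv(GL_VIEWPORT, viewport);                // x, y, width, height of the current viewport

GLint boundTex = 0;
glGetIntegerv(GL_TEXTURE_BINDING_2D, &boundTex);     // which 2D texture is currently bound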
I am told that theoretically OpenGL should be able to work within any context as long as you restore the prior context afterwards.
What exactly do you want to do with OpenGL? Are you trying to draw into the same window? Do you want to draw into your own window? If you want to just draw into your own window, and the fact that the existing app uses OpenGL is just a coincidence, then you can probably get away with just creating a completely new context and ignoring the existing stuff. The only gotcha is that you will need to make the existing context current whenever you finish what you are doing, and make your context current whenever you want to do something with it. The existing code won't be expecting to need to make its own context current, and may wind up randomly drawing into your context if you aren't careful.
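On Windows, for example, the save/restore dance for the "own window, own context" case looks roughly like this (WGL shown only as an example; GLX and EGL have equivalent calls, and myDC, myContext, and drawMyStuff are placeholders for your own objects and code):

HDC   prevDC  = wglGetCurrentDC();        // remember whatever the host had current
HGLRC prevCtx = wglGetCurrentContext();

wglMakeCurrent(myDC, myContext);          // switch to our own window's context
drawMyStuff();                            // our rendering
SwapBuffers(myDC);

wglMakeCurrent(prevDC, prevCtx);          // hand control back to the host's context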
If you want to draw into the existing window, then you almost certainly need some idea of what the existing code is doing.
I have a vulkan application that uses GLFW as its window manager.
Upon a window resize event, Vulkan needs to update its drawable area. I have seen two ways this can be handled. One is to recreate the swapchain alongside all of the other objects tied to it, and the other is to use dynamic state for the viewport so that recreation is not needed.
What is the difference between these two and when should I prefer one over the other?
If the window is resized to a smaller size, the display engine may not force you to change your swapchain image sizes. It may inform you of this through the VK_SUBOPTIMAL_KHR return code (though it may not give you even that if presentation performance is not affected). However, if the window is resized to be larger, the display engine may return VK_ERROR_OUT_OF_DATE_KHR. That is not something you can ignore, nor is it something a display engine can promise to never give you.
This means your code must be able to do swapchain rebuilding. Since you have to make allowances for this regardless, the only question is whether you do it whenever the window is resized or just when the display engine forces you to.
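In practice the forced case shows up as a return code from acquire/present, so the handling can be as small as this (recreateSwapchain() is a hypothetical helper that destroys and rebuilds the swapchain, its image views, and the framebuffers tied to it; device, swapchain, imageAvailableSemaphore, and imageIndex are assumed to be your own objects):

VkResult res = vkAcquireNextImageKHR(device, swapchain, UINT64_MAX,
                                     imageAvailableSemaphore, VK_NULL_HANDLE, &imageIndex);
if (res == VK_ERROR_OUT_OF_DATE_KHR) {
    recreateSwapchain();    // mandatory: the old swapchain can no longer be presented
    return;                 // skip this frame and try again
} else if (res == VK_SUBOPTIMAL_KHR) {
    // still presentable; you may rebuild now or defer it
}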
I would say that, if the display engine doesn't make you rebuild the swapchain, then it's probably faster not to. Using dynamic state isn't particularly slower than pipeline state, and it's not like you're going to be changing it mid-frame. Indeed, you shouldn't rebuild all of your pipelines just because the swapchain was resized, so you should be using dynamic state for the viewport anyway.
In short: you ought to do both.
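A minimal sketch of the dynamic-viewport part (swapchainExtent and commandBuffer are assumed to be your own objects):

VkDynamicState dynamicStates[] = { VK_DYNAMIC_STATE_VIEWPORT, VK_DYNAMIC_STATE_SCISSOR };

VkPipelineDynamicStateCreateInfo dynamicInfo{};
dynamicInfo.sType             = VK_STRUCTURE_TYPE_PIPELINE_DYNAMIC_STATE_CREATE_INFO;
dynamicInfo.dynamicStateCount = 2;
dynamicInfo.pDynamicStates    = dynamicStates;
// point VkGraphicsPipelineCreateInfo::pDynamicState at dynamicInfo when creating the pipeline

// each frame, after beginning the render pass, set the current extent:
VkViewport viewport{ 0.0f, 0.0f, (float)swapchainExtent.width, (float)swapchainExtent.height, 0.0f, 1.0f };
VkRect2D   scissor { {0, 0}, swapchainExtent };
vkCmdSetViewport(commandBuffer, 0, 1, &viewport);
vkCmdSetScissor(commandBuffer, 0, 1, &scissor);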
I am planning on making a 3D scene editor app using C++ and OpenGL. I have to keep track of the currently loaded project and the different scenes it contains, as well as the user preferences and other things. The best solution I can think of is to wrap them in a context class which will be a singleton. Is there a better way of doing this?
Probably not a good idea. One reason is technical: the actual OpenGL context's lifetime is limited and far shorter than the app itself, and is usually tied to the surface you're outputting to. You would need to initialize the context after your visualization window is ready and de-initialize it before the window is gone; trying to use it once the window is gone may lead to undefined behaviour, depending on the platform. In some cases you might need several contexts.
Another reason is that it doesn't look like proper separation of responsibilities. User settings aren't part of the context, and some of them may affect only a single render pass (out of several). You would likely keep Preferences, a Renderer (acting as an interface to a context manager), Geometries, and Textures (or materials) separate, plus a Scene Manager as well (think of the scene tree in Blender or DAZ Studio, where each item in the scene can have its own user settings for how to visualize it).
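A rough outline of that separation might look like this (all names are illustrative, not a prescribed design):

struct Preferences { /* user settings, loaded and saved independently of any GL state */ };

class Renderer {        // owns the GL context(s); created after the window exists,
public:                 // destroyed before the window goes away
    void beginFrame();
    void endFrame();
};

class SceneManager {    // scene tree plus per-item display settings
public:
    void addScene(/* ... */);
};

class Project {         // the currently loaded project ties the data together
    Preferences  prefs;
    SceneManager scenes;
    // the Renderer is owned by the editor window, not by the project
};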
I'm using the SDL library in C to learn some game dev; however, I'm quite confused as to what SDL_RenderClear does and when we use it. I did check the SDL documentation, but I still wasn't able to understand where exactly we would use it and what its purpose is.
It clears the rendering target to the background color you have set via another API call, namely SDL_SetRenderDrawColor, as you already found in the documentation.
Imagine you have rendered several things on screen. To begin a new frame, you need to clear it and start over. Under the hood, SDL_RenderClear wraps whatever native API your application runs on top of (OpenGL, DirectX, etc.). It calls the platform-specific function for you, so you can clear your screen without needing to know the low-level API, while still using SDL2 for everything else: windowing, input from keyboard/mouse/joysticks, sound, and even rendering-related utility functions to aid your own rendering implementation.
To add a bit more: SDL2 provides a minimal but optimized rendering capability, and SDL_RenderClear is one of the several functions in its rendering category. You can also decide to integrate directly with whatever you prefer (OpenGL, DirectX, Vulkan, etc.) yourself.
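As a concrete picture, a typical frame with SDL2's built-in renderer looks like this (renderer is assumed to come from SDL_CreateRenderer):

SDL_SetRenderDrawColor(renderer, 0, 0, 0, 255);   // color used by the next clear (and draw calls)
SDL_RenderClear(renderer);                        // wipe the whole render target

// ... draw this frame's content (SDL_RenderCopy, SDL_RenderFillRect, ...) ...

SDL_RenderPresent(renderer);                      // show the finished frame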
As I mentioned in a question before, I am trying to make a simple game engine in C++ with OpenGL.
I am currently using GLFW for creating the window and OpenGL context, and I chose it because I heard it's one of the faster ones out there. However, it doesn't support widgets and I really don't want to write them myself. So I decided to get into Qt a bit, because it would allow me to have a pane for the render context and various handy bars, as well as all the fancy elements for editing a world map, setting OpenGL rules, etc.
I want to use GLFW on the exported version of that game, though. Is that possible without an abstraction layer of some kind?
Thanks in advance! :)
Yes, it is definitely possible; in fact, I'm writing a 3D engine that is not coupled to any windowing library and can be used with Qt, SDL, or whatever.
You of course just have to wrap regular GL calls into a higher-level layer; this requires that you don't call "SwapBuffers" inside your GL code.
If by "abstraction layer" you mean "inversion of control" (you don't want to override a "Render/Update" method), that's exactly what I did. If by "abstraction layer" you mean you want to use GL directly, then it is still possible.
Basically every windowing system has "some place" where you can make your GL calls (between MakeCurrent and SwapBuffers). Just read the documentation of your windowing system.
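To make the idea concrete, here is a rough sketch of such a decoupled engine (names are illustrative only):

class Engine {
public:
    void renderFrame() {
        // plain OpenGL calls only; no SwapBuffers or MakeCurrent in here
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // ... draw the scene ...
    }
};

// GLFW host (the shipped game):
//   glfwMakeContextCurrent(window);
//   engine.renderFrame();
//   glfwSwapBuffers(window);
//
// Qt host (the editor), inside QOpenGLWidget::paintGL(), where the context is
// already current and Qt swaps the buffers for you:
//   engine.renderFrame();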
I've just started learning DX, so I know almost nothing about it, although I do know OpenGL (to a certain extent). I'm following a tutorial (http://www.rastertek.com/tutdx11.html) and I have a working window rendering just a white background (clear).
Now - how do I actually switch from windowed mode to fullscreen and vice versa? I know there are many tutorials, some even provide a code for doing that but since I'm a newbie that's not really helpful. Why? Because every code sample is different and trying to find a pattern in all of them is apparently too difficult for me.
So I don't ask for code - instead I would like you to tell me what things I need to release/recreate/change to toggle correctly (and all of them). I know I need to change the display settings, I know I have to change something about the swap chain and release/recreate some buffers - but not really sure which exactly.
You can use SetFullscreenState on your swap chain:
swapChain->SetFullscreenState(TRUE, NULL);
See the IDXGISwapChain::SetFullscreenState documentation on MSDN for details.
The main thing you have to do is release all references to the swap chain's buffers (e.g. your render target view), call ResizeBuffers, then re-create everything.
Since Win32 sends the WM_SIZE message upon window initialization, it's entirely possible to do all of the following:
Clear the previous window-size-specific context
If the swap chain already exists, resize it, otherwise create one
Obtain the backbuffer for this window which will be the final 3D rendertarget.
Create a view interface on the rendertarget to use on bind.
Allocate a 2-D surface as the depth/stencil buffer and create a DepthStencil view on this surface to use on bind.
Create a viewport descriptor of the full window size.
Set the current viewport using the descriptor.
inside a single static function (unless WinMain has an object from which to call), and call that function whenever the WM_SIZE message is triggered; a rough sketch of these steps follows below.
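Something along these lines (a sketch only, with error handling omitted; the g_* variables are hypothetical globals holding the device, context, swap chain, and size-dependent views):

#include <d3d11.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

ComPtr<ID3D11Device>           g_device;
ComPtr<ID3D11DeviceContext>    g_context;
ComPtr<IDXGISwapChain>         g_swapChain;
ComPtr<ID3D11RenderTargetView> g_renderTargetView;
ComPtr<ID3D11DepthStencilView> g_depthStencilView;

void OnResize(UINT width, UINT height)
{
    // 1. Drop everything that references the old back buffer
    g_context->OMSetRenderTargets(0, nullptr, nullptr);
    g_renderTargetView.Reset();
    g_depthStencilView.Reset();

    // 2. Resize the existing swap chain buffers (create the swap chain here instead if it doesn't exist yet)
    g_swapChain->ResizeBuffers(0, width, height, DXGI_FORMAT_UNKNOWN, 0);

    // 3. Re-acquire the back buffer and rebuild the render target view
    ComPtr<ID3D11Texture2D> backBuffer;
    g_swapChain->GetBuffer(0, IID_PPV_ARGS(&backBuffer));
    g_device->CreateRenderTargetView(backBuffer.Get(), nullptr, &g_renderTargetView);

    // 4. Recreate the depth/stencil surface and its view at the new size
    CD3D11_TEXTURE2D_DESC depthDesc(DXGI_FORMAT_D24_UNORM_S8_UINT, width, height,
                                    1, 1, D3D11_BIND_DEPTH_STENCIL);
    ComPtr<ID3D11Texture2D> depthBuffer;
    g_device->CreateTexture2D(&depthDesc, nullptr, &depthBuffer);
    g_device->CreateDepthStencilView(depthBuffer.Get(), nullptr, &g_depthStencilView);

    // 5. Set a viewport covering the full window
    CD3D11_VIEWPORT viewport(0.0f, 0.0f, (float)width, (float)height);
    g_context->RSSetViewports(1, &viewport);
}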
You can check out how the DirectXTK does it here:
https://directxtk.codeplex.com/