Can I do anything I want between glPushAttrib(GL_ALL_ATTRIB_BITS) and glPopAttrib() without harming other code that sets GL state before and after? In other words, do these functions provide complete isolation?
No, that only pushes the server-side states onto the stack.
There are client-side states too, like pixel store and vertex arrays. For them, you have to use glPushClientAttrib (...).
While pixel store states are infrequently changed, vertex array state changes are very common in deprecated code. So if you want to do this properly, you need to save and restore both server and client state.
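A minimal sketch of pairing the two, assuming a compatibility-profile context where both attribute stacks are still available:

// Save server-side state (enables, blend func, depth test, ...) and
// client-side state (pixel store, vertex array pointers, ...).
glPushAttrib(GL_ALL_ATTRIB_BITS);
glPushClientAttrib(GL_CLIENT_ALL_ATTRIB_BITS);

// ... change state and draw as needed ...

// Restore in reverse order.
glPopClientAttrib();
glPopAttrib();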
I am trying to update my vertex buffer data with the Map function in DX. It does update the data once, but if I iterate over it the model disappears. I am actually trying to manipulate vertices in real time via user input, and to do so I have to update the vertex buffer every frame while a vertex is selected.
Perhaps this happens because the Map function disables GPU access to the vertices until the Unmap function is called. So if access is blocked every frame, it kind of makes sense that it cannot render the mesh. However, when I update the vertex every frame and then stop after some time, the mesh should theoretically show up again, but it doesn't.
I know that the proper way to update data every frame is to use constant buffers, but manipulating vertices with constant buffers might not be a good idea, and I don't think there is any other way to update the vertex data. I expect dynamic vertex buffers to be able to handle being updated every frame.
D3D11_MAPPED_SUBRESOURCE mappedResource;
ZeroMemory(&mappedResource, sizeof(D3D11_MAPPED_SUBRESOURCE));
// Disable GPU access to the vertex buffer data.
pRenderer->GetDeviceContext()->Map(pVBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource);
// Update the vertex buffer here.
memcpy((Vertex*)mappedResource.pData + index, pData, sizeof(Vertex));
// Reenable GPU access to the vertex buffer data.
pRenderer->GetDeviceContext()->Unmap(pVBuffer, 0);
As it has already been answered that the key issue is your use of Discard (which means you won't be able to retrieve the contents from the GPU), I thought I would add a little in terms of options.
The question I have is whether you require performance or the convenience of having the data in one location.
There are a few configurations you can try.
1. Set up your buffer to have both CPU read and write access. This, though, means you will be pushing and pulling your buffer up and down the bus, and it also causes performance issues on the GPU such as blocking (waiting for the data to be moved back onto the GPU). I personally don't use this in my editor.
2. If memory is not an issue, keep a copy of your buffer on the CPU side, map with Discard each frame and block-copy the data across. This is performant, but also memory intensive, and you obviously have to manage the data partitioning and indexing into this space. I don't use this; I toyed with it, but it was too much effort!
3. Bite the bullet: map the buffer as per 2 and write each vertex object into the mapped buffer. I do this, and unless the buffer is freaking huge, I haven't had issues with it in my own editor.
4. Use a compute shader to update the buffer: create a shader resource view and an unordered access view and pass the updates via a constant buffer. A bit of a sledgehammer to crack a walnut, and it still doesn't change the fact that you may need to pull the data back off the GPU, as per item 1.
There are some variations on managing the buffer that you can also play with, such as interleaving (two copies, one on the GPU while the other is being written to). There are also more ornate mechanisms, such as building the contents of the buffer in another thread and then flagging the update.
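As a rough sketch of the interleaving variant, under the assumption of two dynamic buffers alternated each frame (the names pVBuffer, writeIndex and UpdateAndDraw are illustrative, not from the original code; Vertex is the question's vertex struct):

#include <d3d11.h>
#include <cstring>

ID3D11Buffer* pVBuffer[2]; // both created with D3D11_USAGE_DYNAMIC / D3D11_CPU_ACCESS_WRITE
UINT writeIndex = 0;       // the buffer that receives this frame's data

void UpdateAndDraw(ID3D11DeviceContext* ctx, const Vertex* cpuData, UINT vertexCount)
{
    // Refill the "write" buffer while the other one may still be in flight.
    D3D11_MAPPED_SUBRESOURCE mapped = {};
    if (SUCCEEDED(ctx->Map(pVBuffer[writeIndex], 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
    {
        memcpy(mapped.pData, cpuData, sizeof(Vertex) * vertexCount);
        ctx->Unmap(pVBuffer[writeIndex], 0);
    }

    UINT stride = sizeof(Vertex), offset = 0;
    ctx->IASetVertexBuffers(0, 1, &pVBuffer[writeIndex], &stride, &offset);
    ctx->Draw(vertexCount, 0);

    writeIndex = 1 - writeIndex; // alternate buffers each frame
}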
At the end of the day, DX 11 doesn't offer the ability (someone might know better) to edit the data in GPU memory directly; there is a lot of shifting between CPU and GPU.
Good luck with whichever technique you choose.
Mapping a buffer with the D3D11_MAP_WRITE_DISCARD flag causes the entire buffer content to become invalid, so you cannot use it to update just a single vertex. Keep a copy of the buffer on the CPU side instead, and then update the entire buffer on the GPU side once per frame.
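A minimal sketch of that pattern, keeping an authoritative copy of the vertices in system memory (std::vector<Vertex> and the function names here are assumptions for illustration; Vertex and pVBuffer come from the question):

#include <d3d11.h>
#include <cstring>
#include <vector>

std::vector<Vertex> cpuVertices; // CPU-side master copy of the mesh

// Edit single vertices on the CPU...
void SetVertex(size_t index, const Vertex& v) { cpuVertices[index] = v; }

// ...then re-upload the whole buffer once per frame.
void UploadVertices(ID3D11DeviceContext* ctx, ID3D11Buffer* pVBuffer)
{
    D3D11_MAPPED_SUBRESOURCE mapped = {};
    // WRITE_DISCARD invalidates the previous contents, so everything is rewritten.
    if (SUCCEEDED(ctx->Map(pVBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
    {
        memcpy(mapped.pData, cpuVertices.data(), cpuVertices.size() * sizeof(Vertex));
        ctx->Unmap(pVBuffer, 0);
    }
}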
If you develop for UWP, use of Map/Unmap may result in synchronization problems, because ID3D11DeviceContext methods are not thread safe: https://learn.microsoft.com/en-us/windows/win32/direct3d11/overviews-direct3d-11-render-multi-thread-intro.
If you update the buffer from one thread and render from another, you may get various errors. In that case you must use some synchronization mechanism, such as critical sections. There is an example here: https://developernote.com/2015/11/synchronization-mechanism-in-directx-11-and-xaml-winrt-application/
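A sketch of one possible approach, using std::mutex in place of a Win32 critical section (illustrative only; the names are assumptions):

#include <d3d11.h>
#include <mutex>

std::mutex gContextMutex; // guards every use of the immediate device context

// Update thread: perform the Map/memcpy/Unmap sequence under the lock.
void UpdateFromInputThread(ID3D11DeviceContext* ctx, ID3D11Buffer* buf)
{
    std::lock_guard<std::mutex> lock(gContextMutex);
    // ... Map(buf, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped), memcpy, Unmap ...
}

// Render thread: take the same lock around drawing and presenting.
void RenderFrame(ID3D11DeviceContext* ctx)
{
    std::lock_guard<std::mutex> lock(gContextMutex);
    // ... IASetVertexBuffers / Draw / Present ...
}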
I have a texture whose texture wrapping needs to change based on the view.
I am using bindless textures, so I made it resident.
I understand I can't call glTexParameter/glTextureParameter while the handle is resident, but this does not work either:
makeNonResident()
glTextureParameter(....) -> invalid_operation
makeResident()
What am I missing? Strangely enough, I am not even rendering yet; this is just after creating the texture and making it resident.
Once you call glGetTextureHandleARB to retrieve a handle from a texture, that texture becomes immutable. Not just immutable in its storage, but completely immutable.
You cannot change any of its parameters. Ever again. There is no undo.
The reason for this is that the handle internally stores all of the texture's parameters. Changing those parameters would not affect the handle's copy of them, and allowing such changes to propagate to every handle that references the texture would place an undue burden on performance and synchronization.
What you really want is to use glGetTextureSamplerHandleARB to get a new handle from a texture/sampler pair. So you can create a sampler with whatever sampling parameters you want, then get a new handle for it and the original texture. The sampler's parameters will override those from the texture, and you'll get a new handle out of it that encodes both the texture and sampler's parameters.
Now, you don't want to keep creating handle after handle for these sorts of things. So you should plan out exactly which texture/sampler pairs you need, and create them up-front.
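A hedged sketch of that approach with ARB_bindless_texture (assuming 'texture' is the texture object you already created; sampler objects require GL 3.3 or ARB_sampler_objects):

// Create a sampler object with the wrap mode needed for this view.
GLuint sampler = 0;
glGenSamplers(1, &sampler);
glSamplerParameteri(sampler, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glSamplerParameteri(sampler, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glSamplerParameteri(sampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glSamplerParameteri(sampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// Get a handle for the texture/sampler pair; the sampler's parameters
// override the texture's own parameters for this handle.
GLuint64 handle = glGetTextureSamplerHandleARB(texture, sampler);
glMakeTextureHandleResidentARB(handle);

// Pass 'handle' to the shader (e.g. through a UBO/SSBO) and sample with it there.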
I'm working on a plugin for a scripting language that allows the user to access the OpenGL 1.1 command set. On top of that, all functions of the scripting language's own gfx command set are transparently redirected to appropriate OpenGL calls. Normally, the user should use either the OpenGL command set or the scripting language's inbuilt gfx command set which basically contains just your typical 2D drawing commands like DrawLine(), DrawRectangle(), DrawPolygon(), etc.
Under certain conditions, however, the user might want to mix calls to the OpenGL and the inbuilt gfx command sets. This leads to the problem that my OpenGL implementations of inbuilt commands like DrawLine(), DrawRectangle(), DrawPolygon(), etc. have to be able to deal with whatever state the OpenGL state machine might currently be in.
Therefore, my idea was to first save all state information on the stack, then prepare a clean OpenGL context needed for my implementations of commands like DrawLine(), etc. and then restore the original state. E.g. something like this:
glPushAttrib(GL_ALL_ATTRIB_BITS);
glPushClientAttrib(GL_CLIENT_ALL_ATTRIB_BITS);
glPushMatrix();
....prepare OpenGL context for my needs.... --> problem: see below #2
....do drawing....
glPopMatrix();
glPopClientAttrib();
glPopAttrib();
Doing it like this, however, leads to several problems:
1. glPushAttrib() doesn't push all attributes; for example, pixel pack and unpack state, render mode state, and select and feedback state are not pushed. Extension states are not saved either, but that is not important as my plugin is not designed to support extensions. Saving and restoring the remaining information (pixel pack and unpack) could probably be implemented manually using glGet().
2. Big problem: how should I prepare the OpenGL context after having saved all the state information? I could save a copy of a "clean" state on the stack right after OpenGL's initialization and then try to pop that entry, but for this I'd need a function to move data inside the stack, i.e. a function to copy or move a saved state from the bottom of the stack to the top so that I can pop it. I haven't seen a function that can accomplish this.
This is probably going to be very slow, but that is something I can live with because the user is not supposed to mix OpenGL and inbuilt gfx calls. If he does so nevertheless, he will have to live with very poor performance.
After these introductory considerations I'd finally like to present my question: Is it possible to "beat" the OpenGL state machine somehow? By "beating" I mean the following: Is it possible to completely save all current state information, then restore the default state and prepare it for my needs and do the drawing, and then finally restore the complete previous state again so that everything is exactly as it was before. For example, an OpenGL based version of the scripting language's DrawLine() command would do something like this then:
1. Save all current state information
2. Restore default state, set up a 2D projection matrix
3. Draw the line
4. Restore all saved state information so that the state is exactly the same as before
Is that possible somehow? It doesn't matter if it's very slow as long as it is 100% guaranteed to put the state into exactly the same state as it was before.
You can simply use different contexts, especially if you do not care about performance. Just keep one context for your internal gfx operations and another one the user may mess with, and bind the appropriate one to your window (and thread).
The way you describe it, it looks like you never want to share objects with the user's GL stuff, so simple "unshared" contexts will do fine. All you seem to want to share is the framebuffer, and the GL framebuffer (including back and front color buffers, depth buffer, stencil, etc.) is part of the drawable/window, not the context, so you get access to it with any context once you make that context current. Changing contexts mid-frame is not a problem.
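A rough sketch of how that could look on Windows with WGL (the handle names are assumptions; GLX or EGL would be the equivalent elsewhere):

#include <windows.h>
#include <GL/gl.h>

HDC   hdc;          // device context of the shared window/drawable
HGLRC userContext;  // the context the user's OpenGL calls run in
HGLRC gfxContext;   // private, unshared context for the inbuilt gfx commands

void InitContexts()
{
    userContext = wglCreateContext(hdc);
    gfxContext  = wglCreateContext(hdc); // unshared: has its own state machine
}

void DrawLineInternal(/* ... */)
{
    HGLRC previous = wglGetCurrentContext();
    wglMakeCurrent(hdc, gfxContext);     // switch to the clean, private state
    // ... set up a 2D projection and draw into the shared framebuffer ...
    wglMakeCurrent(hdc, previous);       // hand control back to the user's context
}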
I want to write a general purpose utility function that will use an OpenGL Framebuffer Object to create a texture that can be used by some OpenGL program for whatever purpose a third party programmer would like.
Let's say, for argument's sake, the function looks like:
void createSpecialTexture(GLuint textureID)
{
    MyOpenGLState state;
    saveOpenGLState(state);
    setStateToDefault();
    doMyDrawing();
    restoreOpenGLState(state);
}
What should MyOpenGLState, saveOpenGLState(state), setStateToDefault() and restoreOpenGLState(state) look like to ensure that doMyDrawing will behave correctly and that nothing that I do in doMyDrawing will affect anything else that a third party developer might be doing?
The problem that has been biting me is that OpenGL has a lot of implicit state and I am not sure I am capturing it all.
Update: my main concern is OpenGL ES 2.0, but I thought I would ask the question more generally.
Don't use a framebuffer object to render your texture. Rather, create a new EGLContext and EGLSurface. You can then use eglBindTexImage to turn your EGLSurface into a texture. This way you are guaranteed that state from doMyDrawing will not pollute the main GL context and vice versa.
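A rough sketch of that approach, assuming an existing EGLDisplay/EGLConfig and an OpenGL ES 2.0 setup (error handling omitted; display, config, mainSurface and mainCtx are assumed to exist already, and createSpecialTexture/doMyDrawing are the names from the question):

#include <EGL/egl.h>
#include <GLES2/gl2.h>

void createSpecialTexture(GLuint textureID)
{
    // A pbuffer surface that can later be bound as a texture.
    const EGLint pbufferAttribs[] = {
        EGL_WIDTH,          256,
        EGL_HEIGHT,         256,
        EGL_TEXTURE_FORMAT, EGL_TEXTURE_RGBA,
        EGL_TEXTURE_TARGET, EGL_TEXTURE_2D,
        EGL_NONE
    };
    const EGLint ctxAttribs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };

    EGLSurface pbuffer = eglCreatePbufferSurface(display, config, pbufferAttribs);
    EGLContext drawCtx = eglCreateContext(display, config, EGL_NO_CONTEXT, ctxAttribs);

    // Render into the pbuffer with the private context; its state never touches the main context.
    eglMakeCurrent(display, pbuffer, pbuffer, drawCtx);
    doMyDrawing();

    // Switch back to the main context and expose the pbuffer's colour buffer as a texture.
    eglMakeCurrent(display, mainSurface, mainSurface, mainCtx);
    glBindTexture(GL_TEXTURE_2D, textureID);
    eglBindTexImage(display, pbuffer, EGL_BACK_BUFFER);
}

Note that the EGLConfig has to be chosen with EGL_BIND_TO_TEXTURE_RGBA for the bind to succeed, and in practice the pbuffer and context would be created once and reused rather than per call.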
As for saving and restoring, glPushAttrib and glPopAttrib will get you very far.
You cannot, however, reset GL to a default state. But since doMyDrawing() uses and/or modifies only state that should be known to you, you can simply set that state to the values you need.
That depends a lot on what you do in doMyDrawing(); basically you have to restore every state that you change in that function. Without seeing what goes on inside doMyDrawing(), it is impossible to guess what you have to restore.
If modifications to the projection or modelview matrix are done inside doMyDrawing(), remember to initially push both the GL_PROJECTION and GL_MODELVIEW matrices with glPushMatrix and restore them after the drawing with glPopMatrix. Any other state that you modify can be pushed and popped through glPushAttrib with the right attribute bits. Remember also to unbind any texture, FBO, PBO, etc. that you might bind inside doMyDrawing().
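A minimal fixed-function sketch of that pattern (compatibility profile assumed; note that glPushMatrix only affects the currently selected matrix stack, so each stack is pushed explicitly, and bindings not covered by the attribute stack are saved by hand):

GLint prevFbo = 0;
glGetIntegerv(GL_FRAMEBUFFER_BINDING, &prevFbo); // attribute stack does not cover FBO bindings

glPushAttrib(GL_ALL_ATTRIB_BITS);
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glMatrixMode(GL_MODELVIEW);
glPushMatrix();

doMyDrawing(); // may change matrices, enables, bindings, ...

// Rebind what was saved manually, then restore the stacks in reverse order.
glBindFramebuffer(GL_FRAMEBUFFER, prevFbo);
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glPopAttrib();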
I have some objects in the scene, some may occlude others. When I click the mouse or drag-select to get a selection rectangle, I want to select/pick only the objects that I can see from this perspective. The application currently uses GL_SELECT render mode but as we know, this selects occluded objects too. Also, I read that this is deprecated in OpenGL 3.
There are two methods that currently appeal to me. The first is object selection using the back buffer (Red Book, chapter 14): setting the colour of each object to its object id and reading the colour of pixels from the back frame buffer. The second is occlusion queries (SuperBible, 4th ed., chapter 13).
Other approaches I have ruled out are looking at the min/max z values in the selection buffer and doing custom ray/object detection outside of GL.
I have some questions:
1) If GL_SELECT is deprecated in recent OpenGL, what alternatives are developers supposed to be using?
2) I've only ever read about occlusion queries being employed to speed up rendering. Can they be used for selection/picking, and are there drawbacks?
3) The existing application has a handful of glColorXXX calls. If I went the back buffer route, and used glColorMask(FALSE,FALSE,FALSE,FALSE), will this effectively turn the glColourXXX calls into calls that have no effect, thereby letting me control the colour in a single place when rendering in select mode?
4) Which route is best/canonical?
I decided to implement the selection using the back buffer. Here's my attempt to answer my questions:
1) If GL_SELECT is deprecated in recent OpenGL, what alternatives are developers supposed to be using?
I think it's best not to employ OpenGL for this task at all but to use spatial acceleration structures, as user chamber85 suggested in comments to the original question.
2) I've only ever read about occlusion queries being employed to speed up rendering. Can they be used for selection/picking, and are there drawbacks?
I'm sure they could be, but one would need to know all the objects to query for occlusion before the draw. With the back buffer and colour selection, one can simply see what is under the cursor or within a rectangular region and filter from there.
3) The existing application has a handful of glColorXXX calls. If I went the back buffer route, and used glColorMask(FALSE,FALSE,FALSE,FALSE), will this effectively turn the glColorXXX calls into calls that have no effect, thereby letting me control the colour in a single place when rendering in select mode?
The answer is no. Calling glColorMask() with all GL_FALSE parameters does not mean that glColor3ub() calls (for example) will not be honoured. It simply specifies a mask applied to colours just before they are written to the colour buffer. The original idea was to set the colour to the object id and then call glColorMask() to ignore all subsequent glColorXXX() calls; this strategy is doomed, as the colour representing the object id would also be masked out.
4) Which route is best/canonical?
I would say back-buffer colour selection is generally best, as it doesn't require setting up the occlusion queries before/during the draw.
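For reference, a minimal legacy-GL sketch of the colour-id approach (objectCount and drawObject are placeholders; a modern-GL version would write the id from a shader instead):

// Render each pickable object in a unique flat colour that encodes its id.
void renderForPicking(unsigned objectCount)
{
    glDisable(GL_LIGHTING);
    glDisable(GL_TEXTURE_2D);
    glDisable(GL_DITHER); // keep the written colour exact

    for (unsigned id = 0; id < objectCount; ++id)
    {
        glColor3ub(id & 0xFF, (id >> 8) & 0xFF, (id >> 16) & 0xFF);
        drawObject(id);   // geometry only, no per-vertex colours
    }
}

// Read the id back from the back buffer under the cursor (y flipped to GL coordinates).
unsigned pickAt(int x, int y, int windowHeight)
{
    unsigned char pixel[3] = { 0, 0, 0 };
    glReadBuffer(GL_BACK);
    glReadPixels(x, windowHeight - 1 - y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, pixel);
    return pixel[0] | (pixel[1] << 8) | (pixel[2] << 16);
}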