In my spare time, I am working on a 3D engine using D3D11. To get the 3D effect, I use the typical model-view-projection matrix multiplication in my HLSL shaders. These matrices are uploaded to a D3D11 constant buffer. The projection matrix only changes when the viewport is resized, but the model and view matrices can change on a frame-by-frame basis (when the model or camera is moved). These changes must be uploaded to the same constant buffer in order to be used in the shaders. When uploading them, the projection matrix is (generally) unchanged, so I do not want to re-upload it. In short, I need to partially update my constant buffer by only updating specific parts (offsets) of the buffer.
In OpenGL, we have uniform buffers, and these work (I think) in the same way as a D3D11 constant buffer does. However, if you want to update a specific part of a uniform buffer, you can use the OpenGL function glBufferSubData. I tried looking for a similar way to do this in D3D11 but I did not find anything. I did find someone with a similar issue, but he was using D3D11.1.
Link to the original post: How to partially update constant buffer in DirectX 11.1. Someone also said that in D3D11 you need to upload all the data to the constant buffer if you want to change a specific portion. But that could mean keeping an entire copy of my buffer on the CPU side (RAM). There must be a better way, right?
tl;dr: How can I update my D3D11 constant buffer at a specific offset?
You have several options to manage this.
The simplest one is to create one constant buffer per update rate (so you would have one buffer for world matrices, which changes per object, one for the view matrix, which updates once per frame, and one for the projection matrix, which updates very sporadically).
That said, I never found splitting the view and projection matrices to be that beneficial (we are talking about an extra 64 bytes of data per frame, which is really nothing, especially considering constant buffers are aligned on a 256-byte boundary).
This is even more true once you start adding shadow maps (if you do frustum fitting, your shadow camera's projection matrix can change every frame).
Also, if you plan to add other post effects later on (like AO, depth of field, ...), you will need extra camera data as well (position, view inverse, view-projection inverse, projection inverse), so it's generally more beneficial to update all of that camera data once per frame (I normally use a reserved cbuffer slot for the camera buffer, so I don't have to rebind it all the time).
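As a rough illustration of the per-update-rate idea, here is a minimal C++/D3D11 sketch. It assumes an existing ID3D11Device/ID3D11DeviceContext; the struct layouts and helper names are illustrative, not part of any particular engine. Each buffer is small and is simply rewritten in full whenever its data changes, which sidesteps the lack of a glBufferSubData-style partial update for constant buffers.

```cpp
#include <d3d11.h>
#include <DirectXMath.h>
#include <cstring>

struct PerObjectData { DirectX::XMFLOAT4X4 world; };                                // changes per draw
struct PerFrameData  { DirectX::XMFLOAT4X4 view; DirectX::XMFLOAT4X4 projection; }; // changes per frame / on resize

// Create a dynamic constant buffer of the given size (rounded up to 16 bytes).
ID3D11Buffer* CreateConstantBuffer(ID3D11Device* device, UINT byteWidth)
{
    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth      = (byteWidth + 15u) & ~15u;        // cbuffer sizes must be multiples of 16
    desc.Usage          = D3D11_USAGE_DYNAMIC;             // written by the CPU from time to time
    desc.BindFlags      = D3D11_BIND_CONSTANT_BUFFER;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;

    ID3D11Buffer* buffer = nullptr;
    device->CreateBuffer(&desc, nullptr, &buffer);
    return buffer;
}

// Overwrite the whole (small) buffer; called only when its contents change.
template <typename T>
void UpdateConstantBuffer(ID3D11DeviceContext* context, ID3D11Buffer* buffer, const T& data)
{
    D3D11_MAPPED_SUBRESOURCE mapped = {};
    if (SUCCEEDED(context->Map(buffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
    {
        std::memcpy(mapped.pData, &data, sizeof(T));
        context->Unmap(buffer, 0);
    }
}
```

The per-frame buffer can stay bound to a reserved slot (e.g. b1 in the HLSL, as an example) while only the per-object buffer gets refreshed between draw calls.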
Why not have multiple constant buffers? That would allow you to have one that rarely if ever changes while others can be changed with each frame. I'm pretty sure I've seen that used in some of the samples (though I didn't go looking).
Related
I'm pretty new to OpenGL and am trying to implement a simple program where I can draw cubes, move them around with the mouse, and delete them.
Previously I had done my drag operations by translating on the CPU. In this way I was able to use ray-tracing to pick out the element I wanted because the vertices themselves were being updated.
However, I'm trying to move all of the transformations to the GPU, and in doing so I realized that I would be giving up updated access to the vertices on the CPU (the CPU still thinks the vertices are the un-transformed ones). How does one handle this so that I don't have to do the transformations manually on the CPU as well as in the vertex shader?
No matter where you're doing your transformations, you will typically have a model matrix that describes where each object is in the scene. Instead of transforming each object into world space just so you can check for intersection with a world-space ray, you can also transform the ray into the object space of each object by transforming the ray with the inverse model matrix.
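A minimal sketch of that inverse-transform idea, assuming GLM for the math (the Ray struct and function name are illustrative):

```cpp
#include <glm/glm.hpp>

struct Ray { glm::vec3 origin; glm::vec3 direction; };

// Bring a world-space picking ray into an object's local space so the
// intersection test can use the original, untransformed vertices.
Ray WorldRayToObjectSpace(const Ray& worldRay, const glm::mat4& modelMatrix)
{
    const glm::mat4 invModel = glm::inverse(modelMatrix);

    Ray objectRay;
    // Points receive the full transform, including translation (w = 1)...
    objectRay.origin = glm::vec3(invModel * glm::vec4(worldRay.origin, 1.0f));
    // ...directions ignore translation (w = 0) and are re-normalized.
    objectRay.direction = glm::normalize(glm::vec3(invModel * glm::vec4(worldRay.direction, 0.0f)));
    return objectRay;
}
```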
One general issue with ray-tracing is that, as your scene gets larger, brute-force testing of each object will get increasingly slow. You can use acceleration structures like an octree or a bounding volume hierarchy to speed things up.
A completely different approach to picking would be to just render an ID buffer, i.e. a buffer that has the same resolution as your currently rendered frame and, for each pixel, stores the ID of the object visible at that pixel. Then you can simply read back the value of the pixel underneath the cursor to find out what object you hit, without the need to do any ray-tracing. Rendering the ID buffer could be done as a separate pass, or it can likely just be added as an additional render target to a pass you're already doing, e.g. prefilling the depth buffer, or rendering the scene in case you only do one pass.
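The read-back for the ID-buffer approach can be as small as a single pixel. A hedged sketch, assuming the IDs live in an R32UI color attachment at GL_COLOR_ATTACHMENT1 of an FBO you already render (the attachment index and function name are illustrative):

```cpp
#include <GL/glew.h>

GLuint PickObjectIdUnderCursor(GLuint idFbo, int cursorX, int cursorY, int framebufferHeight)
{
    glBindFramebuffer(GL_READ_FRAMEBUFFER, idFbo);
    glReadBuffer(GL_COLOR_ATTACHMENT1);              // the attachment holding object IDs

    GLuint id = 0;
    // Window coordinates usually have their origin at the top-left, while
    // OpenGL's origin is at the bottom-left, so flip the y coordinate.
    glReadPixels(cursorX, framebufferHeight - 1 - cursorY, 1, 1,
                 GL_RED_INTEGER, GL_UNSIGNED_INT, &id);

    glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
    return id;
}
```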
Currently I'm creating a particle system and I would like to transfer most of the work to the GPU using OpenGL, both to gain experience and for performance reasons. At the moment, there are multiple particles scattered through space (these are currently still created on the CPU). I would more or less like to create a histogram of them. If I understand correctly, for this I would first transform all the particles from world coordinates to screen coordinates in a vertex shader. However, now I want to do the following:
So, for each pixel a hit count of how many particles are inside. Each particle will also have several properties (e.g. a colour) and I would like to sum them for every pixel (as shown in the lower-right corner). Would this be possible using OpenGL? If so, how?
The best tool I can recommend for keeping the whole data set (if it fits in GPU memory) is an SSBO (Shader Storage Buffer Object).
Nevertheless, you need the data after transforming it (e.g. by a projection). An SSBO is still your best option:
In the fragment shader you read the properties of already-processed particles (say, at the rendered pixel) and write the modified properties (number of particles at this pixel, color, etc.) to the same index in the buffer.
Due to the parallel nature of the GPU, several invocations coming from different particles may be doing the work for the same index concurrently, so you need to handle this yourself. Read up on the OpenGL memory model and atomic operations.
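For reference, a minimal sketch of the C++ side under those assumptions (GLEW for loading, one uint counter per pixel, SSBO binding index 0; the names are illustrative). The fragment shader would then increment its pixel's slot with atomicAdd:

```cpp
#include <GL/glew.h>
#include <vector>

// Creates and binds an SSBO holding one zero-initialized counter per pixel.
// The fragment shader side (not shown) would declare
//   layout(std430, binding = 0) buffer Histogram { uint counts[]; };
// and do: atomicAdd(counts[uint(gl_FragCoord.y) * width + uint(gl_FragCoord.x)], 1u);
GLuint CreateHistogramSsbo(int width, int height)
{
    std::vector<GLuint> zeros(static_cast<size_t>(width) * height, 0u);

    GLuint ssbo = 0;
    glGenBuffers(1, &ssbo);
    glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
    glBufferData(GL_SHADER_STORAGE_BUFFER,
                 zeros.size() * sizeof(GLuint), zeros.data(), GL_DYNAMIC_COPY);
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo);   // binding = 0
    return ssbo;
}
```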
Another, more limited, approach is to use blending.
The idea is that each fragment increments the color value already in the framebuffer. This can be done using GL_FUNC_ADD with glBlendEquationSeparate, GL_ONE/GL_ONE blend factors, and a fragment output color of 1/255 (as a normalized integer) for each RGBA component.
The limitations come from the [0-255] range: only up to 255 particles per pixel; anything beyond that is clamped to the range and thus "lost".
You have four components (RGBA), so four properties can be handled per target, but you can attach several renderbuffers to an FBO.
You can read the result back with glReadPixels. Call glReadBuffer with GL_COLOR_ATTACHMENTi first if you use an FBO instead of the default framebuffer.
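A hedged sketch of that blending setup and the read-back, assuming an RGBA8 color target and GLEW (the function names are illustrative):

```cpp
#include <GL/glew.h>
#include <vector>

// Additive blending: each fragment adds its output to whatever is already in
// the framebuffer, so outputting vec4(1.0/255.0) counts particles per pixel.
void EnableAdditiveCountBlending()
{
    glEnable(GL_BLEND);
    glBlendEquationSeparate(GL_FUNC_ADD, GL_FUNC_ADD);
    glBlendFuncSeparate(GL_ONE, GL_ONE, GL_ONE, GL_ONE);
}

// Read the accumulated per-pixel values back from a color attachment of an FBO.
std::vector<unsigned char> ReadCounts(GLuint fbo, int width, int height)
{
    std::vector<unsigned char> pixels(static_cast<size_t>(width) * height * 4);
    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
    glReadBuffer(GL_COLOR_ATTACHMENT0);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
    glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
    return pixels;
}
```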
So let's say I have a single vertex (to make things easy) in my program, (0, 0, 0). Right at the origin. I render a single frame with a simple translation matrix, moving the vertex two units down the x-axis. The vertex is rendered accordingly. Does the same vertex now show up in the VRAM as
(2, 0, 0)? I've read that it's important to load all the respective identity matrices in OpenGL every time a frame is rendered--and I assume that's because everything would continually move, rotate, etc. further and further, implying that applying transformations DOES modify actual data, not just the appearance onscreen.
Strictly speaking, OpenGL is just an API definition. An implementation can do whatever it wants as long as it meets the specifications.
That being said, the answer to your question is generally: NO. It's hard to picture how storing transformed vertices back into the memory that also contained the original vertices would ever make sense.
The original vertex positions are passed into the vertex shader, where they are processed, which can include transformations. Once they exit the vertex shader, the transformed positions will most likely be stored in some kind of cache or dedicated on-chip GPU memory until they are processed by the next steps of the pipeline, which includes perspective division, application of the viewport transform, and rasterization. Once those vertex processing steps are completed, the transformed vertices can be discarded. They may stay in a cache for a little longer, for possible reuse of the processed vertex in case the same original vertex is used again. But they are not stored in any persistent way.
The way I interpret it, what you heard about having to reset the matrices for each frame was probably a misunderstanding. If you want to apply the same matrices in the next frame, you don't have to do anything at all.
What they were most likely talking about is related to how the matrix stack in legacy OpenGL works. Most calls that modify the current matrix, like glTranslatef(), glRotatef(), etc., are applied incrementally to the current matrix. For example, if you call glRotatef(), the rotation is combined with the transformation that was already on the matrix stack. The result is that your newly specified rotation is applied to the vertices first, followed by the transformations that were already on the matrix stack.
Based on this, if you want to specify transformations from scratch at the start of each frame, you will call glLoadIdentity() to reset the current transformation on the matrix stack before you start specifying your new transformations. Or you can use glPushMatrix()/glPopMatrix() to save and restore the desired state of the matrix stack.
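A brief legacy-OpenGL sketch of that per-frame pattern (the function and parameter values are illustrative):

```cpp
#include <GL/gl.h>

void DrawFrame(float angleDegrees)
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();                          // start this frame's transforms from scratch

    glPushMatrix();                            // save the current matrix
    glTranslatef(2.0f, 0.0f, 0.0f);            // specified first, applied to vertices last
    glRotatef(angleDegrees, 0.0f, 0.0f, 1.0f); // specified last, applied to vertices first
    // ... draw calls for this object ...
    glPopMatrix();                             // restore the matrix for the next object
}
```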
If you use what many people call "modern OpenGL", meaning that you don't use the legacy fixed pipeline functionality, you don't have to worry about any of that. The matrix stack is gone for good, and you get to calculate your own transformation matrices, and pass them to your shader code.
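In that "modern" style, the equivalent of the old matrix stack is simply a matrix you compute yourself and upload as a uniform. A minimal sketch assuming GLM and a linked program with a mat4 uniform named "uMvp" (both are assumptions, not anything mandated by OpenGL):

```cpp
#include <GL/glew.h>
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

// Build the combined matrix on the CPU and hand it to the shader.
void UploadMvp(GLuint program, const glm::mat4& model,
               const glm::mat4& view, const glm::mat4& projection)
{
    const glm::mat4 mvp = projection * view * model;   // column-vector convention
    glUseProgram(program);
    glUniformMatrix4fv(glGetUniformLocation(program, "uMvp"), 1, GL_FALSE,
                       glm::value_ptr(mvp));
}
```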
Here is a link to the Wikipedia article on the mathematics involved with transformation matrices: https://en.wikipedia.org/wiki/Transformation_matrix. It will give you an understanding of the math behind the scenes; another way to look at this is through linear (vector) algebra.
What happens under the hood when you render a scene is that all of the vertex data is sent from the CPU to the GPU to be rasterized and drawn to the screen. This is your batch process or render call. You also have a frame function that runs x times per second, which gives you your frames per second, so if you are rendering at, say, 60 FPS, then these pixels, vertices, triangles, etc. are drawn 60 times each second. When you apply a transformation to this set of vertices, you have a transformation matrix that is multiplied with your model-view-projection matrix, MVP * T, and the result is saved back into your existing MVP matrix if that is how you have your calculations set up.
There are some differences between OpenGL versions as you go from OpenGL 1.0 (pure CPU calls) up to 4.5. As far as I know, after version 3.2 or 3.3 (I don't remember which off hand) you have to implement the MVP matrices yourself, whereas earlier versions, going back to v1.5 where shaders were first introduced, handled this for you. Here is the OpenGL documentation: https://www.opengl.org/. On the main page there is a Documentation topic; from there you can select the OpenGL Registry or whichever specific version you want to look at, and read about the OpenGL API, since that site covers everything available in the API.
So, as you begin to understand this process: yes, the actual coordinate data for these vertices does change, but it will not continuously change unless you keep incrementing some variable with a factor of time, giving you some kind of simulated movement or animation. If you apply only a single transformation, then these pixels, vertices, triangles, etc. will either rotate, translate, scale, or shear, depending on which transformation you apply. I will tell you that the order of these operations does matter, but I will not tell you which order; that is for you to read up on and figure out. The reason the order matters is that matrix multiplication is not commutative (and not every matrix even has a valid inverse). The identity matrix is used for reasons such as round-off error and floating-point precision, so that if you happen to apply, say, 1,000 transformations over about 10 seconds, you do not end up with astronomical errors. This should be enough to point you in the right direction and also serve as a guide to how the OpenGL API works.
I'm doing a 2D turn-based RTS game with 32x32 tiles (400-500 tiles per frame). I could use a VBO for this, but I may have to change almost all the VBO data each frame, as the background scrolls and the visible tiles change every time the map scrolls. Will using VBOs rather than client-side vertex arrays still yield a performance benefit here? Also, if using VBOs, which data format is most efficient (float, int16, ...)?
If you are simply scrolling, you can use the vertex shader to manipulate the position rather than update the vertices themselves. Pass in a 'scroll' value as a uniform to your background and simply add that value to the x (or y, or whatever applies to your case) value of each vertex.
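A hedged sketch of the C++ side of that idea (the uniform name "uScroll" and the helper are illustrative; the vertex shader would just add the offset to the tile position, e.g. aPos.xy + uScroll):

```cpp
#include <GL/glew.h>

// Update the scroll offset once per frame instead of rewriting the tile vertices.
void SetScroll(GLuint program, float scrollX, float scrollY)
{
    glUseProgram(program);
    glUniform2f(glGetUniformLocation(program, "uScroll"), scrollX, scrollY);
}
```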
Update:
If you intend to modify the VBO often, you can tell the driver this using the usage parameter of glBufferData. This page has a good description of how that works, under "Accessing VBOs": http://www.opengl.org/wiki/Vertex_Buffer_Object. In your case, it looks like you should pass GL_DYNAMIC_DRAW to glBufferData so that the driver puts your VBO in the best place in memory for your application.
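For illustration, a minimal sketch of that allocation/update pattern, assuming interleaved float vertex data (the helper names are illustrative):

```cpp
#include <GL/glew.h>
#include <vector>

// Allocate the tile VBO with a hint that it will be rewritten frequently.
GLuint CreateTileVbo(const std::vector<float>& vertices)
{
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(float),
                 vertices.data(), GL_DYNAMIC_DRAW);
    return vbo;
}

// Re-upload the changed vertex data without reallocating the buffer storage.
void UpdateTileVbo(GLuint vbo, const std::vector<float>& vertices)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferSubData(GL_ARRAY_BUFFER, 0, vertices.size() * sizeof(float), vertices.data());
}
```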
The regular approach is to move the camera and perform culling instead of updating the contents of the VBOs. For a 2D game, culling comes down to a simple rectangle-intersection test, which you will need anyway for unit selection in the game. As a bonus, manipulating the camera will allow you to rotate it and zoom in and out. You could also combine several tiles (4, 9 or 16) into one VBO.
I would strongly advise against writing logic to move the tiles instead of the camera. It will take you longer, have more bugs, and be less flexible.
The format will depend on what data you are storing in the VBOs. When in doubt, just use uint8 for color and float32 for everything else. For a 2D game, though, your VBOs or vertex arrays are going to be very small compared to 3D applications, so it's highly unlikely the VBO will make any difference.
I'm attempting to draw a 2D image to the screen in Direct3D, which I'm assuming must be done by mapping a texture to a rectangular billboard polygon projected to fill the screen. (I'm not interested in, or cannot use, Direct2D.) All the texture information I've found in the SDK describes loading a bitmap from a file and assigning a texture to use that bitmap, but I haven't yet found a way to manipulate a texture as a bitmap, pixel by pixel.
What I'd really like is a function such as
void TextureBitmap::SetBitmapPixel(int x, int y, DWORD color);
If I can't set the pixels directly in the texture object, do I need to keep around a DWORD array that is the bitmap and then assign the texture to that every frame?
Finally, while I'm initially assuming that I'll be doing this on the CPU, the per-pixel color calculations could probably also be done on the GPU. Is there HLSL code that sets the color of a single pixel in a texture, or are pixel shaders only useful for modifying the display pixels?
Thanks.
First, your direct question:
You can, technically, set pixels in a texture. That requires use of the LockRect and UnlockRect APIs.
In the D3D context, 'locking' usually refers to transferring a resource from GPU memory to system memory (thereby disabling its participation in rendering operations). Once locked, you can modify the buffer as you wish, and then unlock, i.e. transfer the modified data back to the GPU.
Locking has generally been considered a very expensive operation, but since PCIe 2.0 that is probably not a major concern anymore. You can also specify a small (even one-pixel) RECT as the second argument to LockRect, thereby requiring the transfer of a negligible amount of data, and hope the driver is indeed smart enough to transfer just that (I know for a fact that in older nVidia drivers this was not the case).
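A hedged sketch of that path in Direct3D 9 (assuming a lockable D3DFMT_A8R8G8B8 texture, e.g. one created in D3DPOOL_MANAGED; the function name is illustrative):

```cpp
#include <d3d9.h>

// Write a single ARGB pixel into mip level 0 of a lockable texture.
void SetTexturePixel(IDirect3DTexture9* texture, int x, int y, DWORD argb)
{
    D3DLOCKED_RECT locked = {};
    RECT pixel = { x, y, x + 1, y + 1 };           // lock just this one pixel

    if (SUCCEEDED(texture->LockRect(0, &locked, &pixel, 0)))
    {
        // pBits points at the start of the locked region; Pitch is the row stride.
        *reinterpret_cast<DWORD*>(locked.pBits) = argb;
        texture->UnlockRect(0);
    }
}
```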
The more efficient (and more code-intensive) way of achieving this is indeed to never leave the GPU. If you create your texture as a render target (that is, specify D3DUSAGE_RENDERTARGET as its usage argument), you can then set it as the destination of the pipeline before making any draw calls, and write a shader (perhaps taking parameters) to paint your pixels. Such usage of render targets is considered standard, and you should be able to find many code samples around, but unless you're already facing performance issues, I'd say that's overkill for a single 2D billboard.
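For completeness, a hedged Direct3D 9 sketch of that render-target setup (it assumes an existing device and that you save/restore the previous render target around the call; the names are illustrative):

```cpp
#include <d3d9.h>

// Create a texture that can be used as a render target.
IDirect3DTexture9* CreateRenderTargetTexture(IDirect3DDevice9* device, UINT width, UINT height)
{
    IDirect3DTexture9* texture = nullptr;
    device->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                          D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &texture, nullptr);
    return texture;
}

// Point the pipeline at the texture, then draw into it with a pixel shader.
void PaintIntoTexture(IDirect3DDevice9* device, IDirect3DTexture9* texture)
{
    IDirect3DSurface9* surface = nullptr;
    if (SUCCEEDED(texture->GetSurfaceLevel(0, &surface)))
    {
        device->SetRenderTarget(0, surface);
        // ... draw a full-screen quad here with a shader that computes the colors ...
        surface->Release();
    }
}
```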
HTH.