Direct2D Depth Buffer - c++

I need to draw a list of shapes and I am using Direct2D. I get the list of shapes from a file. The list is sorted, and the order of the elements inside the file represents the order in which these shapes will be drawn. So if, for example, the file specifies two rectangles at the same position and with the same size, only the second one will be visible (since the first will be overdrawn).
Given my list of shapes, I draw them in the following way:
std::list<Shape> shapes;
for (const auto& shape : shapes)
    shape.draw();
It is straightforward to see that with two shapes I cannot invert the order of the drawing operations; I must be sure that shape2 is always drawn after shape1, and so on. It follows that I cannot use multiple threads to draw my shapes, and this is a huge disadvantage in terms of performance.
I read that Direct3D supports the depth buffer (or z-buffer), which stores for each pixel its z-coordinate, so that only the "visible" pixels (the ones closest to the viewer) are drawn, regardless of the order in which the shapes are drawn. And I have the depth information of each shape when I read the file.
Is there a way to use the depth buffer in Direct2D, or a similar technique which allows me the use of multiple threads to draw my shapes?

Is there a way to use the depth buffer in Direct2D, or a similar technique which allows me the use of multiple threads to draw my shapes?
The answer here is no. Although the Direct2D library is built on top of Direct3D, it doesn't expose such a feature through its API, since the primitives you can draw are described only by two-dimensional coordinates. The last primitive you draw to the render target is guaranteed to be visible, so no depth testing takes place. Also, the depth buffer in Direct3D doesn't have much to do with multi-threading on the CPU side.
Note as well that even if you issue drawing commands from multiple threads, they will be serialized by the Direct3D driver and performed sequentially. Some newer graphics APIs like Direct3D 12 and Vulkan do provide multithreaded drivers, which allow you to effectively draw different content from different threads, but they come with higher complexity.
So if you stick to Direct2D, you are left with the option of drawing each shape sequentially using a single thread.
But what can be done is to eliminate the effectively occluded shapes by testing each shape for occlusion against all others. The occluded shapes can then be discarded from the list and never rendered at all. The catch here is that some shapes do not fill their bounding rectangle entirely, due to transparent regions (like text) or because the shape is a complex polygon. Such shapes cannot be tested easily, or they require more complex algorithms.
So you have to iterate through all shapes and, only if the current shape is a rectangle, perform occlusion testing against the bounding rectangles of all previous shapes.
The following code should be considered pseudo-code; it is intended just to demonstrate the idea.
#define RECTANGLE 0
#define TEXT 1
#define TRIANGLE 2
// etc.
typedef struct {
    int type;          // We have a type field
    Rect bounds_rect;  // Bounding rectangle
    Rect coordinates;  // Coordinates, whose count varies according to the shape type
    // Probably you have many other fields here
} Shape;
// We have all shapes in a vector
std::vector<Shape> shapes;
// Iterate over all shapes
for (size_t i = 1; i < shapes.size(); i++) {
    if (shapes[i].type != RECTANGLE) {
        // We only perform the test when the current shape is a rectangle
        continue;
    }
    for (size_t j = 0; j < i; j++) {
        if (isOccluded(&shapes[j], &shapes[i])) {
            // shapes[j] is totally invisible, so remove it from the 'shapes' list
            // (or better, mark it here and erase it after the loops)
        }
    }
}
Occlusion testing looks something like this:
bool isOccluded(Shape *a, Shape *b) {
    return (a->bounds_rect.left   > b->coordinates.left  &&
            a->bounds_rect.right  < b->coordinates.right &&
            a->bounds_rect.top    > b->coordinates.top   &&
            a->bounds_rect.bottom < b->coordinates.bottom);
}
And you don't have to iterate over all shapes with a single thread; you can create multiple threads to perform the tests for different parts of the shape list. Of course you will need some locking technique, like a mutex, when deleting shapes from the list, but that is another topic.
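As a rough sketch of that idea (assuming the Shape struct and isOccluded from above; removeOccluded and the thread-splitting scheme are names made up for this example, not anything from Direct2D): each worker owns a disjoint slice of the shapes being tested, so it can mark occluded shapes without any locking, and the actual removal happens afterwards on a single thread.
#include <algorithm>
#include <thread>
#include <vector>

// Mark-then-erase removal of occluded shapes, parallelized over the shapes
// being tested. Each worker owns a disjoint range of 'j', so the flag
// vector needs no mutex; the erase pass runs on one thread at the end.
void removeOccluded(std::vector<Shape>& shapes, unsigned threadCount)
{
    if (threadCount == 0)
        threadCount = 1;

    std::vector<char> occluded(shapes.size(), 0);

    auto worker = [&](size_t jBegin, size_t jEnd) {
        for (size_t j = jBegin; j < jEnd; j++) {
            for (size_t i = j + 1; i < shapes.size() && !occluded[j]; i++) {
                if (shapes[i].type == RECTANGLE &&
                    isOccluded(&shapes[j], &shapes[i]))
                    occluded[j] = 1;   // shapes[j] is hidden by a later rectangle
            }
        }
    };

    // Split the 'j' index range across the workers.
    std::vector<std::thread> pool;
    size_t chunk = (shapes.size() + threadCount - 1) / threadCount;
    for (unsigned t = 0; t < threadCount; t++) {
        size_t begin = t * chunk;
        size_t end = std::min(shapes.size(), begin + chunk);
        if (begin < end)
            pool.emplace_back(worker, begin, end);
    }
    for (auto& th : pool)
        th.join();

    // Keep only the shapes that were not marked as occluded.
    std::vector<Shape> kept;
    kept.reserve(shapes.size());
    for (size_t j = 0; j < shapes.size(); j++)
        if (!occluded[j])
            kept.push_back(shapes[j]);
    shapes.swap(kept);
}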

The depth buffer is used to discard primitives that would be occluded by something in front of them in 3D space, saving redrawing time by not bothering with stuff that won't be seen anyway. Think of a scene with a tall, thin candle in front of a ball, facing the camera: the entire ball is not drawn and then the candle drawn over it; only the visible parts of the ball are drawn. This is how the order of drawing stops mattering.
I have not heard of the use of a depth buffer in D2D, as it is somewhat meaningless; everything is drawn onto one plane in D2D, so how can something be in front of or behind something else? The API may support it, but I doubt it, as it makes no abstract sense. The depth information on each shape is essentially just the order to draw it in, which you already have.
Instead, what you could do is divide and allocate your shapes to your threads while maintaining order, i.e.
t1 { shape1, shape2, shape3 } = shape123
t2 { shape4, shape5, shape6 } = shape456
...
And draw the shapes onto a new object (but not the backbuffer); depending on your shape class, you may be able to represent the result as a shape. This leaves you with t many shapes which are still in order but have been computed in parallel. You can then gradually compose your final result by drawing the results in order, i.e.
t1 { shape123, shape456, shape789 }
t2 { shape101112, shape131415 }
t1 { shape123456789, shape101112131415 } = final shape
Now that you have the final shape, you can just draw it as normal.
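Below is a minimal Direct2D sketch of the "render a group off-screen, then compose in order" step. It assumes each ordered group of shapes can be drawn into an ID2D1BitmapRenderTarget and that Shape::draw can accept a render target (that signature is an assumption, not from the question). How far the off-screen passes can actually run in parallel depends on your factory's threading model, so treat this as an illustration rather than a drop-in implementation.
#include <d2d1.h>
#include <wrl/client.h>
#include <vector>
using Microsoft::WRL::ComPtr;

// Render one ordered group of shapes into an off-screen bitmap.
ComPtr<ID2D1Bitmap> RenderGroup(ID2D1RenderTarget* mainRT,
                                const std::vector<Shape>& group)
{
    ComPtr<ID2D1BitmapRenderTarget> offscreen;
    mainRT->CreateCompatibleRenderTarget(&offscreen);     // same size/format as mainRT

    offscreen->BeginDraw();
    offscreen->Clear(D2D1::ColorF(0, 0.0f));              // fully transparent background
    for (const auto& shape : group)
        shape.draw(offscreen.Get());                      // order preserved within the group
    offscreen->EndDraw();

    ComPtr<ID2D1Bitmap> bitmap;
    offscreen->GetBitmap(&bitmap);
    return bitmap;
}

// Compose the per-group bitmaps back-to-front on the main thread.
void Compose(ID2D1RenderTarget* mainRT,
             const std::vector<ComPtr<ID2D1Bitmap>>& groupBitmaps)
{
    mainRT->BeginDraw();
    for (const auto& bmp : groupBitmaps)
        mainRT->DrawBitmap(bmp.Get());
    mainRT->EndDraw();
}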

Related

Find out the texture portion needed for a mesh

I've got a very specific problem. I have an OpenGL application that is used to render video onto 3D meshes. As it turns out, I can make my video sources send me rectangular portions of the image, reducing memory usage. These portions are specified as a Rectangle2D(int x, int y, int width, int height) with 0 <= x <= w <= sourceVideoWidth and 0 <= y <= h <= sourceVideoHeight.
With that said, I want to find out, for each frame, and for each mesh the following:
Whether the mesh is visible
If so, what portion of image should I request
The benefit is reducing the texture upload to the GPU; this operation is often the bottleneck in my application.
In order to simplify the problem, let's assume that all meshes are 3D rectangles arbitrarily positioned. A 3D rectangle is defined by four points:
class Rectangle3D
{
public:
    Vec3 topLeft;
    Vec3 topRight;
    Vec3 botLeft;
    Vec3 botRight;
};
Possible solutions:
A) Split the mesh into a point grid of points with known texture coordinates, and run frustum culling for each point, then, from the visible points find the top left and bottom right texture coordinates that we must request. This is rather inefficient, and the number of points to test multiplies when we add another mesh to the scene. Solutions that use just the four corners of the rectangle might be preferable.
B) Using the frustum defining planes (see frustum culling). For further simplicity, using only the four planes that correspond to the screen sides. Finding out whether the mesh is visible is rather simple. Finding the visible texture coordinates would need several cases:
- One or more frustum sides intersect with the mesh
- No frustum sides intersect with the mesh
- Either the mesh is fully visible
- Or the mesh is surrounding the screen sides
In any case I need several plane-plane and plane-line-segment intersections, which are not necessarily efficient.
C) Make a 2D projection of the Rectangle3D edges, resulting in a four-sided polygon, then use line segment intersection between the screen sides and the polygon sides, also accounting for cases where there is no intersection and the mesh is still visible.
D) Using OpenGL occlusion query objects, this way a render pass could generate information about the visible mesh portion.
Is there any other solution that best solves this problem? If not which one would you use and why?
Just one more thought on your solutions:
Why don't you incorporate one rendering pass for occlusion queries? Split your mesh into imaginary sub-rectangles which tell you about the visible parts of the mesh, like this:
The left part of the image shows the mesh with the imaginary sub-rectangles; the right part shows the sub-rectangles visible within the screen area (the red rectangle in this case). Based on this pass result, you will get the coordinates of the mesh which are visible.
UPDATE:
This is a sample view that explains my point. It can be done using OpenGL query objects.
r is the result of GL_SAMPLES_PASSED.
Since you will know which sub-rectangles are visible through the result of the query objects, you will know which coordinates are visible. Google for OpenGL occlusion queries and you will find detailed info. Hope this helps.
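A rough sketch of such an occlusion-query pass is below; drawSubRect and the sub-rectangle count are hypothetical placeholders for whatever splits and draws your mesh. One GL_SAMPLES_PASSED query is issued per sub-rectangle, and a result greater than zero means that sub-rectangle is visible.
#include <GL/glew.h>
#include <vector>

void drawSubRect(int i);   // hypothetical: renders only the i-th sub-rectangle of the mesh

// Returns, for each sub-rectangle, whether any of its samples passed the depth test.
std::vector<bool> querySubRectVisibility(int subRectCount)
{
    std::vector<GLuint> queries(subRectCount);
    glGenQueries(subRectCount, queries.data());

    // Occlusion pass: draw each sub-rectangle inside its own query.
    for (int i = 0; i < subRectCount; ++i) {
        glBeginQuery(GL_SAMPLES_PASSED, queries[i]);
        drawSubRect(i);
        glEndQuery(GL_SAMPLES_PASSED);
    }

    // Read the results back; r > 0 means the sub-rectangle is visible.
    std::vector<bool> visible(subRectCount);
    for (int i = 0; i < subRectCount; ++i) {
        GLuint r = 0;
        glGetQueryObjectuiv(queries[i], GL_QUERY_RESULT, &r);
        visible[i] = (r > 0);
    }

    glDeleteQueries(subRectCount, queries.data());
    return visible;
}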

OpenGL 3.2+ Drawing Cubes around existing vertices

So I have a cool program that renders a pretty cube in the centre of the screen.
I'm trying to now create a tiny cube on each corner of the existing cube (so 8 tiny cubes), centred on each of the existing cube's corners (or vertices).
I'm assuming an efficient way to implement this would be with a loop of some kind, to minimise the amount of code.
My query is, how does this affect the VAOs/VBOs? Even in a loop, would each one need its own buffer, or could they all be sent at the same time?
Secondly, if it can be done, what would the structure of this loop be like, in terms of focusing on separate vertices, given that each vertex has different coordinates?
As Vaughn Cato said, each object (using the same VBOs) can simply be drawn at a different location in world space, so you do not need to define separate VBOs for each object.
To complete this task, you simply need a loop that modifies the model matrix before each cube is rendered to the screen, changing the origin where each cube is drawn.
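For illustration, here is a sketch of such a loop using GLM for the matrix math; GLM, the 36-vertex cube VBO, and the uniform and variable names are assumptions for the example, not taken from the question.
#include <GL/glew.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

// Draw one small cube at each of the 8 corners of the big cube, reusing the
// same VAO/VBO. 'modelLoc' is the model-matrix uniform of the bound shader.
void drawCornerCubes(GLuint cubeVAO, GLint modelLoc,
                     const glm::vec3 corners[8], float scale = 0.1f)
{
    glBindVertexArray(cubeVAO);
    for (int i = 0; i < 8; ++i) {
        glm::mat4 model(1.0f);
        model = glm::translate(model, corners[i]);     // move to the corner
        model = glm::scale(model, glm::vec3(scale));   // shrink the cube
        glUniformMatrix4fv(modelLoc, 1, GL_FALSE, glm::value_ptr(model));
        glDrawArrays(GL_TRIANGLES, 0, 36);             // 36 vertices = 12 triangles
    }
    glBindVertexArray(0);
}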

Placing multiple images on a 3D surface

If I was to place a texture on the surface of a 3D object, for example a cube, I could use the vertices of that cube to describe the placement of this texture.
But what if I want to place multiple separate images on the same flat surface? Or suppose it is just one image, but I don't want it to appear at the edges of the surface, where the vertices are, but rather somewhere small and in the middle of the surface. I want the actual images to be chosen and placed dynamically at runtime, otherwise I could condense them offline as a single texture.
I have an approach but I want to seek advice as to whether there is a better method, or if this is perfectly acceptable:
My guess is to create multiple separate 2D quads (with a depth of 0), each with a texture associated with it and placed on it (they could of course share a texture atlas with different texture coordinates).
Then, I transform these quads such that they appear to be on the surface of a 3D object, such as a cube. Of course I'd have to maintain a matrix hierarchy so these quads are transformed appropriately whenever the cube is transformed, such that they appear to be attached to the cube.
While this isn't necessarily hard, I am new to texturing and would like to know if this is a normal practice for something like this.
You could try rendering a scene, saving that as a texture, and then using that texture on the surface.
Check out glCopyTexImage2D() or glCopyTexSubImage2D().
Or perhaps try using frame buffer objects.
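Here is a rough sketch of the FBO route (error checking omitted; the texture size parameters are placeholders): render the separate images into an off-screen texture once, then bind that texture when drawing the cube face.
// Create a texture and a framebuffer object that renders into it.
GLuint CreateSurfaceTexture(int texW, int texH, GLuint* outFbo)
{
    GLuint colorTex = 0, fbo = 0;

    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, texW, texH, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex, 0);

    // Draw the separate images into the texture here, then unbind:
    glBindFramebuffer(GL_FRAMEBUFFER, 0);

    *outFbo = fbo;
    return colorTex;   // bind this when texturing the cube face
}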
But what if I want to place multiple separate images on the same flat surface?
Use multiple textures, maybe each with its own set of texture coordinates. Your OpenGL implementation will offer you a number of texture units. Each of them can supply a different texture.
glActiveTexture(GL_TEXTURE0 + i);
glBindTexture(…);
glUniform1i(texturesampler[i], i); // texturesampler[i] contains the sampler uniform location of the bound program.
Or suppose it is just one image, but I don't want it to appear at the edges of the surface, where the vertices are, but rather somewhere small and in the middle of the surface.
That's where GL_CLAMP… texture wrap modes get their use.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_{S,T,R}, GL_CLAMP[_TO_{EDGE,BORDER}]);
With those, you specify texture coordinates at the vertices outside the [0, 1] interval, but instead of repeating, the image shows only once, with only the edge pixels repeated. If you make the edge pixels transparent, it's as if there was no image there.
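As a small concrete sketch of that trick (the texture name and coordinate range are made up for the example): with GL_CLAMP_TO_BORDER and a fully transparent border color, the image appears only where the face's texture coordinates fall inside [0, 1].
// Clamp the decal texture and give it a transparent border, so only the
// [0, 1] sub-range of the face shows the image.
glBindTexture(GL_TEXTURE_2D, decalTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
const GLfloat transparent[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, transparent);

// Give the face texture coordinates well outside [0, 1], e.g. from -2 to 3:
// the image then occupies roughly the middle fifth of the face, and every
// sample outside [0, 1] returns the transparent border color.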

What exactly is a buffer in OpenGL, and how can I use multiple ones to my advantage?

Not long ago, I tried out a program from an OpenGL guidebook that was said to be double buffered; it displayed a spinning rectangle on the screen. Unfortunately, I don't have the book anymore, and I haven't found a clear, straightforward definition of what a buffer is in general. My guess is that it is a "place" to draw things, where using a lot of them could be like layering?
If that is the case, I am wondering if I can use multiple buffers to my advantage for a polygon clipping program. I have a nice little window that allows the user to draw polygons on the screen, plus a utility to drag and draw a selection box over the polygons. When the user has drawn the selection rectangle and lets go of the mouse, the polygons will be clipped based on the rectangle boundaries.
That is doable enough, but I also want the user to be able to start over: when the escape key is pressed, the clip box should disappear, and the original polygons should be restored. Since I am doing things pixel-by-pixel, it seems very difficult to figure out how to change the rectangle pixel colors back to either black like the background or the color of a particular polygon, depending on where they were drawn (unless I find a way to save the colors when each polygon pixel is drawn, but that seems overboard). I was wondering if it would help to give the rectangle its own buffer, in the hopes that it would act like a sort of transparent layer that could easily be cleared off (?) Is this the way buffers can be used, or do I need to find another solution?
OpenGL knows several kinds of buffers:
Framebuffers: Portions of memory to which drawing operations are directed, changing pixel values in the buffer. OpenGL by default has on-screen buffers, which can be split into a front and a back buffer; drawing operations happen invisibly on the back buffer and are swapped to the front when finished. In addition to that, OpenGL uses a depth buffer for depth testing (the Z-sort implementation) and a stencil buffer used to limit rendering to cut-out (= stencil) portions of the framebuffer. There used to be auxiliary and accumulation buffers as well, but those have been superseded by so-called framebuffer objects, which are user-created objects combining several textures or renderbuffers into new framebuffers that can be rendered to.
Renderbuffers: User-created render targets, to be attached to framebuffer objects.
Buffer Objects (vertex and pixel): User-defined data storage, used for geometry and image data.
Textures: Textures are a sort of buffer too, i.e. they hold data which can be sourced in drawing operations.
The usual approach with OpenGL is to rerender the whole scene whenever something changes. If you want to save those drawing operations you can copy the contents of the framebuffer to a texture and then just draw that texture to a single quad and overdraw it with your selection rubberband rectangle.
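A rough sketch of that approach in legacy OpenGL (matching the pixel-by-pixel style of the question; the texture name, window size, and an identity projection are assumptions): snapshot the framebuffer once the polygons are drawn, then redraw the snapshot as a full-screen quad each time the rubber-band rectangle changes, and again when Escape is pressed.
// 1. After drawing the polygons, copy the framebuffer into a texture.
glBindTexture(GL_TEXTURE_2D, sceneTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, winWidth, winHeight, 0);

// 2. While the user drags: redraw the snapshot as a full-screen quad ...
glClear(GL_COLOR_BUFFER_BIT);
glEnable(GL_TEXTURE_2D);
glColor3f(1.0f, 1.0f, 1.0f);
glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(-1, -1);
    glTexCoord2f(1, 0); glVertex2f( 1, -1);
    glTexCoord2f(1, 1); glVertex2f( 1,  1);
    glTexCoord2f(0, 1); glVertex2f(-1,  1);
glEnd();
glDisable(GL_TEXTURE_2D);
// ... then draw the selection rectangle on top of it.

// 3. On Escape, redraw the snapshot quad without the rectangle and the
//    original polygons reappear untouched.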

Store Vertices DirectX C++

I'm currently implementing an Octree for my bachelor thesis project.
My Octree takes a std::vector as argument:
octree::Octree::Octree(std::vector<const D3DXVECTOR3*> vec) :
    m_vertices(),
    m_size(vec.size())
{
    for (std::size_t i = 0; i < m_size; ++i) {
        m_vertices.push_back(new D3DXVECTOR3(*vec.at(i)));
    }
}
I'm asking what is typically used to store the vertices in before rendering them and performing any culling tests etc. on them.
I kept this very simple for now; all I have is a function that renders a grid. Some snippets:
#define GRIDFVF (D3DFVF_XYZ | D3DFVF_DIFFUSE)
struct GridVertex {
    D3DXVECTOR3 position;
    DWORD color;
};
g_dev->SetTransform(D3DTS_WORLD, &matIdentity);
g_dev->SetStreamSource(0, g_buffer, 0, sizeof(GridVertex));
g_dev->SetTexture(0, NULL);
g_dev->DrawPrimitive(D3DPT_LINELIST, 0, GridSize * 4 + 2);
Now when rendering this I use my custom struct GridVertex, which stores a D3DXVECTOR3 for the position and a DWORD for the color value, and then tell the GPU by setting the flexible vertex format to GRIDFVF.
But in my Octree I only want to store the positions, to perform the test whether certain vertices are inside nodes of my Octree and so on. Therefore I thought of creating another class called SceneManager, storing all values within a std::vector, and finally passing it to my Octree class, which does the test and afterwards passes the checked vertices to the GPU.
Would this be a solid solution, or what is commonly done to implement something like this?
Thanks in advance
Generally, one does not put the actual render geometry vertices themselves in the octree or whatever spatial partitioning structure one uses. That level of granularity is not useful, because if a set of vertices that make up a model spans partition nodes such that some subset of those vertices would be culled, you couldn't properly draw the model.
What you'd typically want to do is have an object representing an entity and its bounds within the world (axis-aligned bounding boxes, or bounding spheres, are simple and efficient bounding volumes, for example). Each entity is also associated with (or can be associated with by some other subsystem) rendering geometry. The entities themselves are sorted within the octree.
Then, you use your octree to determine which entities are visible, and submit all of their associated render geometry to the card.
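A structural sketch of that arrangement is below; every name here (AABB, Entity, EntityOctree, Mesh, Frustum, Renderer) is hypothetical and only meant to illustrate the shape of the design, not a specific API.
struct AABB {
    D3DXVECTOR3 min;   // world-space minimum corner
    D3DXVECTOR3 max;   // world-space maximum corner
};

struct Entity {
    AABB  bounds;      // bounding volume stored in the octree
    Mesh* geometry;    // render geometry associated with this entity
};

class EntityOctree {
public:
    void insert(Entity* e);                               // partition by e->bounds
    std::vector<Entity*> queryVisible(const Frustum& f);  // frustum culling per entity
};

// Per frame: cull at entity granularity, then submit the geometry of the
// visible entities to the card.
void renderVisible(EntityOctree& tree, const Frustum& frustum, Renderer& renderer)
{
    for (Entity* e : tree.queryVisible(frustum))
        renderer.draw(*e->geometry);
}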