Is there a way of keeping track of the relationship between vertices and pixels using either OpenGL or DirectX?

I would like to know if there is a way to generate a single static image of a 3D object (a single object represented as a triangle list), using OpenGL or DirectX, that lets you know which specific triangles defining the object were used to generate each of the pixels forming the rendered image. I cite OpenGL and DirectX because they are widely used graphics APIs, but if somebody knows other ways of achieving this at high speed, I would also be interested in their answer. I currently use my own software implementation of the rendering pipeline to keep track of this relationship, but I would like to use the power and effects (mainly antialiasing, shadows and specific skin-rendering techniques) that graphics cards offer.
Thanks very much for your help

Sure, just output a triangle identifier to a separate render target (using MRT). In GLSL terms this is gl_PrimitiveID, and in HLSL terms it's SV_PrimitiveID. If you are using multi-sampling, then the multi-sample buffer for that render target becomes a list of the primitives that contribute to each pixel.
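A minimal sketch of what the GLSL side of this could look like, assuming a GL 3.3 core context, with attachment 1 bound as an integer (GL_R32I) texture and both attachments enabled via glDrawBuffers (these format and location choices are assumptions for illustration):

    // Hypothetical sketch: a fragment shader writing gl_PrimitiveID to a
    // second render target. Attachment formats and locations are assumed.
    const char* fragmentSrc = R"(
        #version 330
        layout(location = 0) out vec4 fragColor;    // attachment 0: the normal image
        layout(location = 1) out int  primitiveId;  // attachment 1: triangle index

        void main() {
            fragColor   = vec4(1.0);       // your usual shading goes here
            primitiveId = gl_PrimitiveID;  // index of the triangle within this draw call
        }
    )";

Reading attachment 1 back (e.g. with glReadPixels using GL_RED_INTEGER) then gives you the triangle index for every pixel.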

Draw each triangle in a different colour. R8G8B8 offers about 16.7 million possible colours, so you can index that many triangles with it. You don't have to draw to an on-screen buffer: render the picture as usual to one target, and render the triangle indices to a second, off-screen target.
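A minimal sketch of the packing this relies on; the helper names are made up for illustration. Note that antialiasing and blending must be disabled on the index target, otherwise neighbouring indices get averaged into meaningless colours:

    // Hypothetical sketch: pack a 24-bit triangle index into an R8G8B8 colour
    // and recover it after reading the off-screen target back.
    #include <cstdint>

    // Encode triangle index i (0 .. 16,777,215) as three 8-bit channels.
    void indexToRGB(uint32_t i, uint8_t& r, uint8_t& g, uint8_t& b) {
        r = (i >> 16) & 0xFF;
        g = (i >> 8) & 0xFF;
        b = i & 0xFF;
    }

    // Decode a pixel read back with glReadPixels(..., GL_RGB, GL_UNSIGNED_BYTE, ...).
    uint32_t rgbToIndex(uint8_t r, uint8_t g, uint8_t b) {
        return (uint32_t(r) << 16) | (uint32_t(g) << 8) | uint32_t(b);
    }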


Creating a 3d card using DirectXTK with Dynamic texture on one side

I am trying to create a card game. I want to have a deck of cards where the back of the card is a fixed texture but the front is dynamic, i.e. it has some text fields on it as well as a picture. I have created a box sized 3x2x0.16 to represent my card. I can get the fixed texture to load but I cannot find any code examples on the web that show me how to load a fixed texture on one side of the box and a dynamic one on the other. Can anyone point me to some examples please. I'm using DirectXTK mainly, but can probably fathom it out from any DirectX code too.
DirectX 11 is the version of DirectX I am using.
Any recommendations on how to do this would also be welcome.
Thanks
The easiest method for generating your cards, depending on how many there are and how large you want them to be, is to generate the faces at startup using render-to-texture. Effectively, draw your dynamic card faces exactly as you would draw them in the world, but use an orthographic projection matrix and a blank 2D texture object as the render target. Once you have that, cache these "dynamic" textures in an std::map and bind them when drawing a specific card.
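A minimal D3D11 sketch of creating such a render-target texture (the resolution and format are assumptions, error handling is omitted, and device is an ID3D11Device* obtained during initialisation):

    // Hypothetical sketch: a texture that can be rendered into and later
    // sampled when drawing the card.
    #include <d3d11.h>

    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width            = 256;   // card face resolution (assumption)
    desc.Height           = 384;
    desc.MipLevels        = 1;
    desc.ArraySize        = 1;
    desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.Usage            = D3D11_USAGE_DEFAULT;
    desc.BindFlags        = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

    ID3D11Texture2D*          faceTex = nullptr;
    ID3D11RenderTargetView*   faceRTV = nullptr;
    ID3D11ShaderResourceView* faceSRV = nullptr;
    device->CreateTexture2D(&desc, nullptr, &faceTex);
    device->CreateRenderTargetView(faceTex, nullptr, &faceRTV);
    device->CreateShaderResourceView(faceTex, nullptr, &faceSRV);

    // Bind faceRTV, draw the text fields and picture (e.g. with DirectXTK's
    // SpriteBatch/SpriteFont), then cache faceSRV in the std::map keyed by card.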
If your faces are relatively small, or you want to save on texture memory, you can stitch multiple card faces together into a large sheet of textures, then use some shader scaling logic to reference a subsection of the sheet for rendering a specific texture. With this, you can assemble "decks" of cards that only contain the faces in use in that particular game, allowing you to evict the others from GPU RAM.
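A minimal sketch of the sub-rectangle lookup such a sheet needs, assuming a uniform grid of cols x rows equally sized faces (the struct and function are illustrative, not from any library):

    // Hypothetical sketch: map a face index to its UV rectangle in the sheet.
    struct UVRect { float u0, v0, u1, v1; };

    UVRect cellUV(int index, int cols, int rows) {
        float w = 1.0f / cols, h = 1.0f / rows;
        int cx = index % cols, cy = index / cols;
        return { cx * w, cy * h, (cx + 1) * w, (cy + 1) * h };
    }

The shader (or the vertex data) then remaps the quad's 0..1 texture coordinates into that rectangle.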

C++ OpenGL array of coordinates to draw lines/borders and filled rectangles?

I'm working on a simple GUI for my application in OpenGL, and all I need is to draw a bunch of rectangles with a 1px border around them. Instead of going with glBegin and glEnd for each widget that has to be drawn (which can reduce performance), I'd like to know if this can be done with some sort of arrays/lists (batched data) of coordinates and their colors.
Requirements:
Rectangles are simply filled, either with one color for every corner or with a different color per corner (mainly to form gradients).
Lines/borders are simple, with one color and 1px thickness, but they may not always be closed (i.e. they do not form a loop).
Use of textures/images is excluded. Only geometry data.
Must be compatible with older OpenGL versions (down to version 1.3)
Is there a way to achieve this with some sort of arrays and not glBegin and glEnd? I'm not sure how to do this for lines/borders.
I've seen this kind of implementation in Gwen GUI but it uses textures.
Example: jQuery EasyUI Metro Theme
In any case, in modern OpenGL you should refrain from using old-fashioned API calls like glBegin and the like. You should use the purer approach that was introduced with core contexts in OpenGL 3.0. The philosophy behind it is to get much closer to the way modern hardware actually functions. DirectX 10 took this approach, and so, to some extent, did OpenGL ES.
It means no more display lists, no more immediate mode, no more glVertex or glTexCoord. In any case, the drivers were already constructing VBOs behind this API, because the hardware only understands that. So the OpenGL core "initiative" is to reduce OpenGL implementation complexity in order to let the vendors focus on hardware and stop producing bad drivers with buggy support.
Considering that, you should go with VBOs: make an interleaved buffer, or multiple separate buffers, to store position and color information, then bind them to attributes and use a shader combination to render the whole thing. The attributes you declare in the vertex shader are the attributes you bind with glVertexAttribPointer.
There's a good explanation here: http://www.opengl.org/wiki/Vertex_Specification
The recommended way is then to make one vertex buffer for the whole GUI, putting every element one after another in the buffer; then you can render your whole GUI in one draw call. This is how you will get the best performance.
If your GUI has dynamic elements, this is no longer possible, except by using glBufferSubData or the like, which has complex performance implications. You are better off cutting your vertex buffer into as many buffers as are necessary to represent the independent parts; then you can modify uniforms between draw calls at will to configure the change of look needed in the dynamic parts.
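A minimal sketch of such a batched, interleaved buffer, assuming a GL 3.0+ context with a loader like glad initialised, a VAO bound, and a shader that reads attribute 0 as a vec2 position and attribute 1 as a vec4 color (all assumptions):

    // Hypothetical sketch: one interleaved VBO holding every GUI rectangle,
    // drawn with a single call.
    #include <glad/glad.h>
    #include <vector>

    struct GuiVertex { float x, y, r, g, b, a; };
    std::vector<GuiVertex> verts;   // two triangles (6 vertices) per rectangle

    void addRect(float x, float y, float w, float h,
                 float r, float g, float b, float a) {
        GuiVertex q[6] = {
            {x,     y,     r, g, b, a}, {x + w, y,     r, g, b, a},
            {x + w, y + h, r, g, b, a}, {x,     y,     r, g, b, a},
            {x + w, y + h, r, g, b, a}, {x,     y + h, r, g, b, a},
        };
        verts.insert(verts.end(), q, q + 6);
    }

    void drawGui(GLuint vbo) {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(GuiVertex),
                     verts.data(), GL_DYNAMIC_DRAW);
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(GuiVertex), (void*)0);
        glEnableVertexAttribArray(1);
        glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, sizeof(GuiVertex),
                              (void*)(2 * sizeof(float)));
        glDrawArrays(GL_TRIANGLES, 0, (GLsizei)verts.size());
    }

The 1px borders can be collected into a second array with the same vertex layout and drawn with GL_LINES, which also handles borders that don't form a closed loop.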

rendered 3D Scene to point cloud

Is there a way to extract a point cloud from a rendered 3D Scene (using OPENGL)?
In detail:
The input should be a rendered 3D scene.
The output should be, e.g., a three-dimensional array of vertices (x, y, z).
Mission possible or impossible?
1. Render your scene using an orthographic view so that all of it fits on screen at once.
2. Use a g-buffer (search for this term or "fat pixel" or "deferred rendering") to capture (X, Y, Z, R, G, B, A) at each sample point in the framebuffer.
3. Read back your framebuffer and put the (X, Y, Z, R, G, B, A) tuple at each sample point into a linear array.
You now have a point cloud sampled from your conventional geometry using OpenGL. Apart from the readback from the GPU to the host, this will be very fast.
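A minimal sketch of the readback step, assuming the g-buffer stores world-space position in a GL_RGBA32F colour attachment whose alpha is 0 where nothing was drawn and 1 where geometry covered the pixel (this layout is an assumption):

    // Hypothetical sketch: collect one point per covered pixel.
    #include <glad/glad.h>
    #include <vector>

    struct Point { float x, y, z; };

    std::vector<Point> readPointCloud(int width, int height) {
        std::vector<float> buf(width * height * 4);
        glReadBuffer(GL_COLOR_ATTACHMENT0);  // the position render target
        glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, buf.data());

        std::vector<Point> cloud;
        for (int i = 0; i < width * height; ++i) {
            const float* p = &buf[i * 4];
            if (p[3] > 0.5f)                 // alpha flags a covered pixel
                cloud.push_back({p[0], p[1], p[2]});
        }
        return cloud;
    }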
Going further with this:
- Use depth peeling (search for this term) to generate samples on surfaces that are not nearest to the camera.
- Repeat the rendering from several viewpoints (or, equivalently, for several rotations of the scene) to be sure of capturing fragments from all the nooks and crannies of the scene, and append the points generated from each pass into one big linear array.
I think you should take your input data and manually multiply it by your transformation and modelview matrices. No need to use OpenGL for that, just some vector/matrix math.
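A minimal sketch of that CPU-side transform, using GLM for the vector/matrix math (any small math library would do):

    // Hypothetical sketch: apply a modelview matrix to vertices on the CPU.
    #include <glm/glm.hpp>
    #include <vector>

    std::vector<glm::vec3> transformVerts(const std::vector<glm::vec3>& in,
                                          const glm::mat4& modelview) {
        std::vector<glm::vec3> out;
        out.reserve(in.size());
        for (const glm::vec3& v : in)
            out.push_back(glm::vec3(modelview * glm::vec4(v, 1.0f)));
        return out;
    }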
If I understand correctly, you want to deconstruct the final (2D) rendering of a 3D scene. In general, there is no capability built into OpenGL that does this.
There are however many papers describing approaches to analyzing a 2D image to generate a 3D representation. This is for example what the Microsoft Kinect does to some extent. Look at the papers presented at previous editions of SIGGRAPH for a starting point. Many implementations probably make use of the GPU (OpenGL, DirectX, CUDA, etc.) to do their magic, but that's about it. For example, edge-detection filters to identify the visible edges of objects and histogram functions can run on the GPU.
Depending on your application domain, you might be in for something near impossible or there might be a shortcut you can use to identify shapes and vertices.
Edit:
I think you might have a misunderstanding of how OpenGL rendering works. The application produces, and sends to OpenGL, the vertices of the triangles forming the polygons and 3D objects. OpenGL then rasterizes these objects (i.e. converts them to pixels) to form a 2D rendering of the 3D scene from a particular point of view with a particular field of view. When you say you want to retrieve a "point cloud" of the vertices, it's hard to understand what you want, since you are responsible for producing these vertices in the first place!

Using Vertex Buffer Objects for a tile-based game and texture atlases

I'm creating a tile-based game in C# with OpenGL and I'm trying to optimize my code as best as possible.
I've read several articles and sections in books and all come to the same conclusion (as you may know) that use of VBOs greatly increases performance.
I'm not quite sure, however, how they work exactly.
My game will have tiles on the screen, some will change and some will stay the same. To use a VBO for this, I would need to add the coordinates of each tile to an array, correct?
Also, to texture these tiles, would I have to create a separate VBO?
I'm not quite sure what the code would look like for tiling these coordinates if I've got tiles that are animated and tiles that will be static on the screen.
Could anyone give me a quick rundown of this?
I plan on using a texture atlas of all of my tiles. I'm not sure where to begin to use this atlas for the textured tiles.
Would I need to compute the coordinates of the tile in the atlas to be applied? Is there any way I could simply use the coordinates of the atlas to apply a texture?
If anyone could clear up these questions it would be greatly appreciated. I could even possibly reimburse someone for their time & help if wanted.
Thanks,
Greg
OK, so let's split this into parts. You didn't specify which version of OpenGL you want to use - I'll assume GL 3.3.
VBO
Vertex buffer objects, when considered as an alternative to client-side vertex arrays, mostly save GPU bandwidth, and a tile map is not really a lot of geometry. However, in recent GL versions vertex buffer objects are the only way of specifying vertices (which makes a lot of sense), so we cannot really talk about "increasing performance" here. If you mean "compared to deprecated vertex specification methods like immediate mode or client-side arrays", then yes, you'll get a performance boost, but you'd probably only feel it with 10k+ vertices per frame, I suppose.
Texture atlases
Texture atlases are indeed a nice trick to save on texture switching. However, on GL3 (and DX10)-enabled GPUs you can save yourself a LOT of the trouble characteristic of this technique, because a more modern and convenient approach is available. Check the GL reference docs for TEXTURE_2D_ARRAY - you'll like it. If GL3 cards are your target, forget texture atlases. If not, have a google for which older cards support texture arrays as an extension; I'm not familiar with the details.
Rendering
So how do you draw a tile map efficiently? Let's focus on the data. There are lots of tiles, and each tile has the following information:
grid position (x,y)
material (let's call it "material" rather than "texture", because, as you said, the image might be animated and change over time; "material" would then be interpreted as "one texture, or a set of textures which change over time", or anything you want).
That should be all the "per-tile" data you'd need to send to the GPU. You want to render each tile as a quad or triangle strip, so you have two alternatives:
send 4 vertices (x,y),(x+w,y),(x+w,y+h),(x,y+h) instead of (x,y) per tile,
use a geometry shader to calculate the 4 points along with texture coords for every 1 point sent.
Pick your favourite. Also note that this choice directly corresponds to what your VBO is going to contain - the latter solution would make it 4x smaller.
For the material, you can pass it as a symbolic integer, and in your fragment shader - based on the current time (passed as a uniform variable) and the material ID for the given tile - you can decide which texture ID from the texture array to use. In this way you can make a simple texture animation.
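A minimal sketch of that fragment-shader logic, assuming each material owns 4 consecutive layers of the texture array and animates at 4 frames per second (both numbers are made up for illustration):

    // Hypothetical sketch: choose a texture-array layer from material ID + time.
    const char* tileFrag = R"(
        #version 330
        uniform sampler2DArray tiles;  // all tile images, one per layer
        uniform float time;            // seconds, passed in each frame
        flat in int material;          // per-tile material ID from the vertex stage
        in vec2 uv;
        out vec4 color;

        void main() {
            int frame = int(mod(time * 4.0, 4.0));   // 4 fps animation (assumption)
            color = texture(tiles, vec3(uv, float(material * 4 + frame)));
        }
    )";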

What is the most efficient way to draw voxels (cubes) in opengl?

I would like to draw voxels using OpenGL, but it doesn't seem to be directly supported. I made a cube-drawing function that uses 24 vertices (4 vertices per face), but the frame rate drops when you draw 2,500 cubes. I was hoping there was a better way. Ideally, I would just like to send a position, edge size, and color to the graphics card. I'm not sure if I can do this by using GLSL to compile instructions as part of the fragment shader or vertex shader.
I searched Google and found out about point sprites and billboard sprites (same thing?). Could those be used as an alternative to draw cubes more quickly? If I use 6, one for each face, it seems like that would send much less information to the graphics card and hopefully gain me a better frame rate.
Another thought: maybe I can draw multiple cubes using one glDrawElements call?
Maybe there is a better method altogether that I don't know about? Any help is appreciated.
Drawing voxels with cubes is almost always the wrong way to go (the exceptional case is ray-tracing). What you usually want to do is put the data into a 3D texture and render slices depending on camera position. See this page: https://developer.nvidia.com/gpugems/GPUGems/gpugems_ch39.html and you can find other techniques by searching for "volume rendering gpu".
EDIT: When writing the above answer I didn't realize that the OP was, most likely, interested in how Minecraft does it. For techniques to speed up Minecraft-style rasterization, check out Culling techniques for rendering lots of cubes. Though with recent advances in graphics hardware, rendering Minecraft through ray tracing may become a reality.
What you're looking for is called instancing. You could take a look at glDrawElementsInstanced and glDrawArraysInstanced for a couple of possibilities. Note that these were only added as core operations relatively recently (OpenGL 3.1), but they have been available as extensions for quite a while longer.
nVidia's OpenGL SDK has an example of instanced drawing in OpenGL.
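A minimal sketch of what instanced cube drawing could look like, assuming a GL 3.3 context (for glVertexAttribDivisor), a shared 36-index cube mesh already bound in the VAO, and a shader reading per-instance attributes 3-5 (all assumptions):

    // Hypothetical sketch: per-instance position, edge size and colour come
    // from a second VBO stepped once per instance.
    #include <glad/glad.h>

    struct CubeInstance { float x, y, z, size, r, g, b; };

    void setupInstanceAttribs(GLuint instanceVbo) {
        glBindBuffer(GL_ARRAY_BUFFER, instanceVbo);
        glEnableVertexAttribArray(3);  // position
        glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, sizeof(CubeInstance), (void*)0);
        glEnableVertexAttribArray(4);  // edge size
        glVertexAttribPointer(4, 1, GL_FLOAT, GL_FALSE, sizeof(CubeInstance),
                              (void*)(3 * sizeof(float)));
        glEnableVertexAttribArray(5);  // colour
        glVertexAttribPointer(5, 3, GL_FLOAT, GL_FALSE, sizeof(CubeInstance),
                              (void*)(4 * sizeof(float)));
        glVertexAttribDivisor(3, 1);   // advance once per instance, not per vertex
        glVertexAttribDivisor(4, 1);
        glVertexAttribDivisor(5, 1);
    }

    // Then: glDrawElementsInstanced(GL_TRIANGLES, 36, GL_UNSIGNED_INT, nullptr, cubeCount);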
First, you really should be looking at OpenGL 3+ using GLSL; this has been the standard for quite some time. Second, most Minecraft-esque implementations use mesh creation on the CPU side. This technique involves looking at all of the block positions and creating a vertex buffer object that renders the triangles of all of the exposed faces. The VBO is only regenerated when the voxels change and is persisted between frames. An ideal implementation would combine coplanar faces with the same texture into larger faces.
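A minimal sketch of that CPU-side meshing step; solid() and emitFace() are placeholder helpers the real implementation would supply:

    // Hypothetical sketch: emit quads only for faces whose neighbour is empty.
    enum Face { PosX, NegX, PosY, NegY, PosZ, NegZ };

    bool solid(int x, int y, int z);             // voxel query (assumption)
    void emitFace(int x, int y, int z, Face f);  // append 4 vertices to the VBO data

    void meshChunk(int sx, int sy, int sz) {
        for (int x = 0; x < sx; ++x)
            for (int y = 0; y < sy; ++y)
                for (int z = 0; z < sz; ++z) {
                    if (!solid(x, y, z)) continue;
                    if (!solid(x + 1, y, z)) emitFace(x, y, z, PosX);
                    if (!solid(x - 1, y, z)) emitFace(x, y, z, NegX);
                    if (!solid(x, y + 1, z)) emitFace(x, y, z, PosY);
                    if (!solid(x, y - 1, z)) emitFace(x, y, z, NegY);
                    if (!solid(x, y, z + 1)) emitFace(x, y, z, PosZ);
                    if (!solid(x, y, z - 1)) emitFace(x, y, z, NegZ);
                }
    }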