How to override a named display list - glCallList() - OpenGL

About glCallList: assume I have some named display lists. I know that each of them can be re-drawn by calling glCallList(i).
My question is: is it possible to overwrite such a named display list? I mean, once list i exists, I would like to record new drawing commands under the same name, so that glCallList(i) plays back the new commands.
Edit:
For example: right now glCallList(1) draws a cube. I want to overwrite it and make glCallList(1) draw a triangle instead.

Be aware that display lists are among the oldest parts of OpenGL and their use was frowned upon even before they were officially deprecated. The primary purpose they originally served was to "record" sequences of commands that would set up state / data persistently, in lieu of modern OpenGL's state/data objects (e.g. Texture Objects, Vertex Buffer Objects, Sampler Objects, etc.).
Nevertheless, a pair of calls to glNewList (...) and glEndList (...) will actually replace a display list rather than allocate a new one if you pass in a handle that already has data. So you do not need to go through the trouble of glDeleteLists (...) and then glGenLists (...) to reuse the same handle (name).
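A minimal sketch of that re-recording pattern, assuming a list name 1 that was compiled earlier (the triangle geometry is just illustrative):

// Re-recording display list 1: glNewList on an existing name replaces its
// previous contents when glEndList is reached.
glNewList(1, GL_COMPILE);
glBegin(GL_TRIANGLES);
glVertex3f(-1.0f, -1.0f, 0.0f);
glVertex3f( 1.0f, -1.0f, 0.0f);
glVertex3f( 0.0f,  1.0f, 0.0f);
glEnd();
glEndList();

// From now on, glCallList(1) draws the triangle instead of the cube.
glCallList(1);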


Can I use vtkGlyph3D without a SourceConnection?

I want to create a vtkGlyph3D from a vtkPolyData containing points, triangles and colors
polydata->SetPoints(points);
polydata->SetPolys(triangles);
polydata->GetPointData()->SetScalars(colors);
Now in all the examples for vtkGlyph3D there is always the call to SetSourceConnection to which a vtkAlgorithmOutput object is passed.
Since I don't use a vtkCubeSource or vtkConeSource or the like, I don't know what I should pass here.
Can I just omit this call and simply do this
vtkNew<vtkGlyph3D> glyph3D;
glyph3D->SetColorModeToColorByScalar();
glyph3D->SetInputData(polydata);
glyph3D->ScalingOff();
glyph3D->Update();
to build my glyph?
Or do I somehow have to create a vtkAlgorithmOutput from my polydata?
From the doc, vtkGlyph3D:
copy oriented and scaled glyph geometry to every input point
In other words, this filter copies the Source geometry onto every (Nth) point of your Input (with some additional options for scale / orientation). So it does not make sense to use it without SetSourceConnection (with any kind of source/reader/filter providing a polydata).
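A minimal sketch of wiring in a source, here assuming a vtkSphereSource as the glyph shape (any source/reader/filter producing polydata would work; polydata is the input from the question):

// The sphere is the geometry that gets copied onto every input point.
vtkNew<vtkSphereSource> sphere;
sphere->SetRadius(0.1);

vtkNew<vtkGlyph3D> glyph3D;
glyph3D->SetInputData(polydata);                        // points + scalars from the question
glyph3D->SetSourceConnection(sphere->GetOutputPort());  // the glyph geometry
glyph3D->SetColorModeToColorByScalar();
glyph3D->ScalingOff();
glyph3D->Update();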

Unable to get textures to work in OpenGL in Common Lisp

I am building a simple Solar system model and trying to set textures on some spheres.
The geometry is properly generated, and I tried a couple different ways to generate the texture coordinates. At present I am relying on glu:quadric-texture for generating the coordinates when glu:sphere is called.
However, the textures never appear - objects are rendered in flat colors.
I went through several OpenGL guides and I do not think I am missing a step, but who knows.
Here is what is roughly happening:
call gl:enable :texture-2d to turn on textures
load images using cl-jpeg
call gl:bind-texture
copy data from image using gl:tex-image-2d
generate texture ids with gl:gen-textures. Also tried generating ids one by one instead of all at once, which had no effect.
during drawing, create a new quadric, enable texture coordinate generation, and bind the texture before generating the quadric points:
(let ((q (glu:new-quadric)))
  (if (planet-state-texture-id ps)
      (progn (gl:enable :texture-gen-s)
             (gl:enable :texture-gen-t)
             (glu:quadric-texture q :true)
             (gl:bind-texture :texture-2d planet-texture-id))
      (glu:quadric-texture q :false))
  (glu:sphere q
              planet-diameter
              *sphere-resolution*
              *sphere-resolution*))
I also tried a more manual method of texture coordinates generation, which had no effect.
Out of ideas here…
make-texture function
texture id generation
quadric drawing
When the program runs, I can see that the textures are loaded and texture ids are reserved; it prints:
loading texture from textures/2k_neptune.jpg with id 1919249769
Loaded data. Image dimensions: 1024x2048
I don't know if you've discovered a solution to your problem, but after creating a test image, and modifying some of your code, I was able to get the texture to be applied to the sphere.
The problem comes into play with the fact that you are attempting to upload textures to the GPU before you've enabled them. (gl:enable :texture-2d) has to be called before you start handling texture/image data.
I'd recommend moving the let* block with the planets-init that is in the main function so it comes after setup-gl, and also moving the format call that prints the planets data along with it so it still runs without an error.
My recommendation is something like:
(let ((camera ...
...
(setup-gl ...)
(let* ((planets...
...
(format ... planet-state)
In your draw-planet function, you'll want to add (gl:bind-texture :texture-2d 0) at the end of it so that the texture isn't used for another object, like the orbital path.
As is, the (gl:color 1.0 ...) before the (glu:quadric-texture ...) will modify the color of the rendered object, so it may not look like what you're expecting.
Edit: I should've clarified this, but as your code stands it goes
initialize-planets > make-textures > enable-textures > render
When it should be
enable-textures > init-planets > make-textures > render
You're correct about not missing a step, the steps in your code are just misordered.
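For reference, the recommended ordering in C-style OpenGL terms looks roughly like this (a sketch of the order described above, not the poster's actual code; width, height and pixels stand in for the data loaded with cl-jpeg):

// 1. Enable texturing before any texture handling.
glEnable(GL_TEXTURE_2D);

// 2. Create, bind and upload the texture.
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
             GL_RGB, GL_UNSIGNED_BYTE, pixels);

// 3. At draw time, bind before emitting the sphere, and unbind afterwards so
//    the texture does not bleed onto other objects (e.g. the orbital paths).
glBindTexture(GL_TEXTURE_2D, tex);
// ... draw the quadric ...
glBindTexture(GL_TEXTURE_2D, 0);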

DXR Descriptor Heap management for raytracing

After watching videos and reading the documentation on DXR and DX12, I'm still not sure how to manage resources for DX12 raytracing (DXR).
There is quite a difference between rasterization and raytracing in terms of resource management, the main difference being that rasterization uses many transient resources that can be bound on the fly, while raytracing needs every resource ready to go at the moment the rays are cast. The reason is obvious: a ray can hit anything in the whole scene, so we need to have every shader, every texture, every heap ready and filled with data before we cast a single ray.
So far so good.
My first test was adding all resources to a single heap, based on some DXR tutorials. The problem with this approach arises with objects that share shaders but use different textures. I defined one shader root signature for my single hit group, which I had to prepare before raytracing. But when creating a root signature, we have to state exactly which position in the heap corresponds to the SRV where the texture is located. Since there are many textures at different positions in the heap, I would need to create one root signature per object with different textures. This of course is not preferred, since based on the documentation and common sense, we should keep the number of root signatures as small as possible.
Therefore, I discarded this test.
My second approach was creating a descriptor heap per object, which contained all the local descriptors for that particular object (textures, constants, etc.). The global resources (the TLAS (Top Level Acceleration Structure), the raytracing output and the camera constant buffer) were kept in a separate heap. In this approach, I think I misunderstood the documentation by thinking I could add multiple heaps to a root signature. As I'm writing this post, I have not found a way of adding two separate heaps to a single root signature. If this is possible, I would love to know how, so any help is appreciated.
Here is the code I'm using for my root signature (using the DX12 helpers):
bool PipelineState::CreateHitSignature(Microsoft::WRL::ComPtr<ID3D12RootSignature>& signature)
{
    const auto device = RaytracingModule::GetInstance()->GetDevice();
    if (device == nullptr)
    {
        return false;
    }

    nv_helpers_dx12::RootSignatureGenerator rsc;
    rsc.AddRootParameter(D3D12_ROOT_PARAMETER_TYPE_SRV, 0); // "t0" vertices and colors

    // Add a single range pointing to the TLAS in the heap
    rsc.AddHeapRangesParameter({
        {2 /*t2*/, 1, 0, D3D12_DESCRIPTOR_RANGE_TYPE_SRV, 1}, /* 2nd slot of the first heap */
        {3 /*t3*/, 1, 0, D3D12_DESCRIPTOR_RANGE_TYPE_SRV, 3}, /* 4th slot of the first heap. Per-instance data */
    });

    signature = rsc.Generate(device, true);
    return signature.Get() != nullptr;
}
Now my last approach would be to create a heap containing all the necessary resources (TLAS, CBVs, SRVs (textures), etc.) per object = effectively one heap per object. Again, as I read the documentation, this was not advised either; the documentation states that we should group resources into global heaps. At this point, I have a feeling I'm mixing up DX12 and DXR documentation and best practices, applying DX12 recommendations in the DXR domain, which is probably wrong.
I also read part of the Nvidia Falcor source code, and they seem to have one resource heap per descriptor type, effectively limiting the number of descriptor heaps to a minimum (which makes total sense), but I have not yet found how a root signature is created with multiple separate heaps.
I feel like I'm missing one last puzzle piece before it all falls into place and creates a beautiful image. So if anyone could explain how resource management (heaps, descriptors, etc.) should be handled in DXR when we want to have many objects with different resources, it would help me a lot.
So thanks in advance!
Jakub
With DXR you need to start at shader model 6.2, where dynamic indexing gained much more official support than the old "the last descriptor is free to leak into seemingly out-of-range indices" trick that was the "secret" approach in 5.1.
Now you have full "bindless" access using the Type var[] : register(t4, space1); declaration syntax, and you can index freely: var[1] will access register (t5, space1), and so on.
You can set up register ranges in the descriptor table, so if you have 100 textures you can span all 100.
You can even declare other resources after the array variable, as long as you remember to skip over all the registers it occupies. But it's easier to use different register spaces:
float4 ambiance : register(b0, space0);
Texture2D all_albedos[] : register(t0, space1);
float4x4 world : register(b1, space0);
Now you can go to t100 with no disturbance on the following space0 declarations.
The limit on the register value is lifted in SM6. It's
up to max supported heap allocation
So all_albedos[3400].Sample(..) is a perfectly acceptable call (provided your heap has bound the views).
Unfortunately, in DX12 they give you the feeling that you can bind multiple heaps with the CommandList::SetDescriptorHeaps function, but if you try you'll get runtime errors:
D3D12 ERROR: ID3D12CommandList::SetDescriptorHeaps: pDescriptorHeaps[1] sets a descriptor heap type that appears earlier in the pDescriptorHeaps array.
Only one of any given descriptor heap type can be set at a time. [ EXECUTION ERROR #554: SET_DESCRIPTOR_HEAP_INVALID]
It's misleading, so don't trust that plural 's' in the method name.
Really, if we have multiple heaps, that would only be because of a triple-buffered circular update/usage scheme, or separate upload and shader-visible heaps I suppose. Just put everything in your one heap, and let the descriptor tables index into it as needed.
A descriptor table is a very lightweight element; it's essentially just three ints: a descriptor start, a span and a register space. Just use that; you can span 1000 textures if you have 1000 textures in your scene. You can get the material ID by embedding it in an indirection texture with unique UVs (like a lightmap), or in the vertex data, or simply from the hit group (if you set things up as 1 hit group = 1 object). Your hit-group index, which is given by a system value in the shader, then becomes your texture index.
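To make that concrete, here is a minimal sketch of a descriptor-table root parameter spanning a wide SRV range in plain D3D12 (the count of 1000 and space1 are illustrative, not taken from the question's helper code):

// One SRV range covering 1000 texture descriptors, t0..t999 in space1.
D3D12_DESCRIPTOR_RANGE texRange = {};
texRange.RangeType = D3D12_DESCRIPTOR_RANGE_TYPE_SRV;
texRange.NumDescriptors = 1000;                 // the "span"
texRange.BaseShaderRegister = 0;                // t0
texRange.RegisterSpace = 1;                     // the "virtual space"
texRange.OffsetInDescriptorsFromTableStart = 0; // where the range starts within the table

D3D12_ROOT_PARAMETER tableParam = {};
tableParam.ParameterType = D3D12_ROOT_PARAMETER_TYPE_DESCRIPTOR_TABLE;
tableParam.DescriptorTable.NumDescriptorRanges = 1;
tableParam.DescriptorTable.pDescriptorRanges = &texRange;
tableParam.ShaderVisibility = D3D12_SHADER_VISIBILITY_ALL;

The "descriptor start" is then supplied at bind time as the GPU handle passed to SetGraphicsRootDescriptorTable (or the equivalent entry in a DXR shader table).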
Dynamic indexing of HLSL 5.1 might be the solution to this issue.
https://learn.microsoft.com/en-us/windows/win32/direct3d12/dynamic-indexing-using-hlsl-5-1
With dynamic indexing, we can create one heap containing all materials and store an index per object; the shader uses that index to pick the correct material at run time.
Therefore, we do not need multiple heaps of the same type, which is not possible anyway: only one heap per heap type can be bound at a time.
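A sketch of how such a single shared heap might be filled so that a per-object index maps directly to a descriptor slot (names like materialTextures and heap are assumptions for illustration):

// Put one SRV per material contiguously into one shader-visible heap, so that
// slot i in the heap corresponds to material index i used in the shader.
const UINT increment =
    device->GetDescriptorHandleIncrementSize(D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV);
D3D12_CPU_DESCRIPTOR_HANDLE handle = heap->GetCPUDescriptorHandleForHeapStart();

for (size_t i = 0; i < materialTextures.size(); ++i)
{
    device->CreateShaderResourceView(materialTextures[i].Get(), nullptr, handle);
    handle.ptr += increment;
}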

How to have a public class member that can be accessed but not changed by other classes in C++

This probably has been asked before, but I can't find anything on it other than the general private/public/const solutions. Basically, I need to load fonts into an array when an instance of my Text class is created. The Text class is declared in text.h and defined in text.cpp. Both of these files also include a Fonts class, and I want the Fonts class to have my selection of fonts preloaded in an array, ready to be accessed by the Text class after the first instance is created. I want these fonts to be accessible to my Text class, but not changeable. I can't create a TTF_Font *Get_Font() method in the Fonts class, because each time a font is created it allocates memory that needs to be manually freed, so I couldn't close it after it goes out of the method's scope. Instead I would like, when creating a character for example, to call TTF_RenderText_Blended(Fonts::arialFonts[10], "123", black); which would select the font type of Arial in size 11, for example.
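For reference, one common way to get "readable but not modifiable from outside" is to keep the array private behind a static accessor. A minimal sketch using SDL_ttf (the array size, the font file and names such as Arial() are illustrative assumptions, not the poster's code):

// fonts.h -- other classes can fetch a font pointer to use, but cannot
// reassign the stored entries because the array itself is private.
#include <SDL_ttf.h>
#include <array>
#include <cstddef>

class Fonts {
public:
    static void Load() {                 // call once after TTF_Init()
        for (std::size_t i = 0; i < arialFonts.size(); ++i)
            arialFonts[i] = TTF_OpenFont("arial.ttf", static_cast<int>(i) + 1);
    }
    static TTF_Font* Arial(std::size_t size_index) {   // read-only lookup
        return arialFonts.at(size_index);
    }
    static void Unload() {               // the manual cleanup the question mentions
        for (TTF_Font*& f : arialFonts) { if (f) TTF_CloseFont(f); f = nullptr; }
    }
private:
    static std::array<TTF_Font*, 32> arialFonts;
};
// fonts.cpp: std::array<TTF_Font*, 32> Fonts::arialFonts{};

The question's call would then become roughly TTF_RenderText_Blended(Fonts::Arial(10), "123", black);, and no other class can swap out the stored fonts.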
I'm not sure what type of application you are using your fonts in, but I've done some 3D graphics programming using OpenGL and Vulkan. This may or may not help you, but should give you some kind of context on the structure of the framework that is commonly used in Game Engine development.
Typically we will have a Texture class or struct that has a vector of color triplets or quads that represent the color data for each pixel in the image. Other classes will contain UV coordinates that the texture will be applied to... We also usually have functions to load in textures or graphics files such as PNG, JPG, TGA, etc. You can write your own fairly trivially or you can use many of the opensource loader libraries that are out there if you're doing graphics type programming. The texture class will contain other properties such as mipmap, if it is repeated or mirrored, its quality, etc... So we will typically load a texture and assign it an ID value and will store it into a hash table. If this texture is trying to be loaded again from a file, the code will recognize that it exists and exits that function call, otherwise it will store this new texture into the hash table.
Since most rendered texts are 2D, instead of creating a 3D model and sending it to the render to be processed by the model-view-projection matrix... We create what is commonly called a Sprite. This is another class. It has vertices to make up its polygonal edges. Typically a sprite will have 4 vertices since it is a QUAD. It will also have texture coordinates that will be associated with it. We don't apply the textures directly to the sprite because we want to instance a single sprite having only a single copy in memory. What we will typically do here, is we will send a reference of it to the renderer along with a reference to the texture by id, and transformation matrix to changes its shape, size, and world position. When this is being processed by the GPU through the shaders, since it is a 2D object, we use an orthographic projection. So this saves a matrix multiplication within the vertex shader for every vertex. Basically, it will be processed by the view-projection matrix. Here our Sprites will be stored similarly to our textures, in a hash table with an associated ID.
Now we have a sprite that is basically a Graphics Image that is drawn to the screen, a simple quad that can be resized and placed anywhere easily. We only need a single quad in memory but can draw hundreds even thousands because they are instanced via a reference count.
How does this help with Text or Fonts? You want your Text class to be separate from your Font class. The text class will contain where you want to draw it, which font to use, the size of the font, the color to be applied, and the text itself... The Font class would typically inherit from the basic Sprite Class.
We will normally create a separate tool or mini console program that takes any of the known TrueType fonts or Windows fonts by name and generates two files for you. You pass flags into the program's command-line arguments along with other options such as -all for all characters or -"abc123" for just the specific characters you want, and -12 for the font size, and by doing so it generates the needed files. The first is a single textured image that we call a Font Atlas, which is basically a specific form of a sprite sheet, and the other file is a CSV text file with generated values for the texture positioning of each character's quad.
Back in the main project, the Font class will load in these two files. The first is simple, as it's just a texture image, which we've already handled before; the latter is the CSV file with the information needed to generate all of the appropriate quads. However, the Font class does have quite a few complex calculations to perform. Again, when a Font is loaded into memory, we do the same as we did before: we check whether it is already loaded into memory, by either filename or ID, and if it isn't, we store it into a hash table with a generated associated ID.
Now when we go to use the Text class to render our text, the code might look something like this:
void Engine::loadAssets() {
    // Textures
    assetManager_.loadTexture( "assets\\textures\\shells.png" );
    assetManager_.loadTexture( "assets\\textures\\blue_sky.jpg" );
    // Sprites
    assetManager_.loadSprite( "assets\\sprites\\jumping_jim.spr" );
    assetManager_.loadSprite( "assets\\sprites\\exploading_bomb.spr" );
    // Fonts
    assetManager_.loadFont( "assets\\fonts\\arial.png", 12 );
    // Same font as above, but the code structure requires a different font id
    // for each different size that is used.
    assetManager_.loadFont( "assets\\fonts\\arial.png", 16 );
    assetManager_.loadFont( "assets\\fonts\\helvetica.png" );
}
Now, these are all stored as single instances in our AssetManager class; this class contains several hash tables, one for each of the different types of assets. Its role is to manage their memory and lifetime. There is only ever a single instance of each asset, but we can reference it 1,000 times... Now somewhere else in the code, we may have a file with a bunch of standalone enumerations...
enum class FontType {
    ARIAL,
    HELVETICA,
};
Then in our render call or loadScene function....
void Engine::loadScene() {
    fontManager_.drawText( FontType::ARIAL, 18, glm::vec3(-0.5, 1.0, 0.5), glm::vec4(245, 169, 108, 128), "Hello World!" );
    fontManager_.drawText( FontType::ARIAL, 12, glm::vec3(128, 128, 0), glm::vec4(128, 255, 244, 80), "Good Bye!" );
}
The drawText function would be the one that takes the ARIAL id and gets the reference into the hashtable for that stored font. The renderer uses the positioning, and color values, the font size, and the string message... The ID and Size are used to retrieve the appropriate Font Atlas or Sprite Sheet. Then each character in the message string will be matched and the appropriate texture coordinates will be used to apply that to a quad with the appropriate size based on the font's size you specified.
All of the file handling (opening, reading from, and closing files) was already done in the loadAssets function. All of the required information is already stored in a set of hash tables that we reference into via instancing. There is no need to worry about memory management or heap access at this point; it is all handled through the cached assets. When we draw the text to the screen, we are only manipulating pixels through the shaders by matrix transformations.
There is another major component to the engine that hasn't been mentioned yet: we typically use a batch process and a BatchManager class which handles all of the processing for sending the vertices, UV coordinates, color or texture data, etc. to the video card. CPU-to-GPU transfers across the bus and/or the PCI-Express lanes are considered slow; we don't want to be issuing 10,000, 100,000, or even 1 million individual render calls every single frame! So we will typically create a set of batches with priority-queue functionality, and when all buckets are full, either the bucket with the highest priority value or the fullest bucket is sent to the GPU and then emptied. A single bucket can hold 10,000 - 100,000 primitives, where a single primitive could be a point, a line, a triangle list, a triangle fan, etc. This makes the code much more efficient. The heap is seldom used. The BatchManager, AssetManager, TextureManager, AudioManager, FontManager classes, etc. are the ones that live on the heap, but all of their stored assets are used by reference, and thanks to that we can instance a single object a million times! I hope this explanation helps.

Draw multiple meshes to different locations (DirectX 12)

I have a problem with DirectX 12. I have made a small 3D renderer. Models are transformed into 3D space in the vertex shader with basic World-View-Projection matrices that are in a constant buffer.
To change the data of the constant buffer I'm currently using memcpy(pMappedConstantBuffer + alignedSize * frame, newConstantBufferData, alignedSize); this call replaces the constant buffer's data immediately.
Here is where the problem comes in: drawing is recorded to a command list that will later be sent to the GPU for execution.
Example:
/* Now I want to change the constant buffer so the next draw call's position is (0, 1, 0) */
memcpy(/*Parameters*/);
/* Now I want to record a draw call to the command list */
DrawInstanced(/*Parameters*/);
/* But now I want to draw another mesh at another position, so I have to change the constant buffer. After this memcpy() the draw position will be (0, -1, 0) */
memcpy(/*Parameters*/);
/* Now I want to record a new draw call to the list */
DrawInstanced(/*Parameters*/);
After this I send the command list to the GPU for execution, but guess what: all the meshes end up in the same position, because all the memcpys are executed before the command list is even sent to the GPU. So basically the last memcpy overwrites the previous ones.
So the question is: how do I draw meshes at different positions, or how do I update the constant buffer's data per draw call so that it changes between draw calls on the GPU?
Thanks
No need for help anymore, I solved it myself: I created a constant buffer for each mesh.
About execution order, you are totally right: your memcpy calls will update the buffers immediately, but the commands will not be processed until you push your command list into the queue (and you will not know exactly when this will happen).
In Direct3D11, when you use Map on a buffer, this is handled for you (some extra space is allocated to avoid the overwrite if required).
So in Direct3D12 you have several choices. I'll assume that you want to draw N objects and store one matrix per object in your cbuffer.
The first is to create one buffer per object and set its data independently. If you have only a few objects, this is easy to maintain (and the extra memory footprint due to the resource allocations will be OK).
Another option is to create one large buffer (which can contain N matrices) and create N constant buffer views that point to the memory location of each object's matrix. (Note that you also have to respect the 256-byte alignment in that case too; see CreateConstantBufferView.)
You can also use a StructuredBuffer and copy all the data into it (in that case you do not need the alignment), and use an index in the vertex shader to look up the correct matrix. (It is possible to set a uint value in your shader and use SetGraphicsRoot32BitConstant to apply it directly.)
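A rough sketch of the second option (one big upload buffer plus N constant buffer views), assuming one 4x4 matrix per object padded to a 256-byte slot; cbvHeap, constantBuffer and objectCount are placeholder names:

const UINT slotSize = 256;   // CBV offsets/sizes must be 256-byte aligned
const UINT increment =
    device->GetDescriptorHandleIncrementSize(D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV);

D3D12_CPU_DESCRIPTOR_HANDLE handle = cbvHeap->GetCPUDescriptorHandleForHeapStart();
for (UINT i = 0; i < objectCount; ++i)
{
    D3D12_CONSTANT_BUFFER_VIEW_DESC desc = {};
    desc.BufferLocation = constantBuffer->GetGPUVirtualAddress() + i * slotSize;
    desc.SizeInBytes = slotSize;            // must stay a multiple of 256
    device->CreateConstantBufferView(&desc, handle);
    handle.ptr += increment;
}

// Per frame, for object i: memcpy its matrix into pMappedConstantBuffer + i * slotSize,
// bind the matching CBV descriptor, then record that object's DrawInstanced call.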