Can I use vtkGlyph3D without a SourceConnection? - c++

I want to create a vtkGlyph3D from a vtkPolyData containing points, triangles, and colors:
polydata->SetPoints(points);
polydata->SetPolys(triangles);
polydata->GetPointData()->SetScalars(colors);
Now, in all the examples for vtkGlyph3D there is always a call to SetSourceConnection, to which a vtkAlgorithmOutput object is passed.
Since I don't use a vtkCubeSource or vtkConeSource or the like, I don't know what I should pass here.
Can I just omit this call and simply do this:
vtkNew<vtkGlyph3D> glyph3D;
glyph3D->SetColorModeToColorByScalar();
glyph3D->SetInputData(polydata);
glyph3D->ScalingOff();
glyph3D->Update();
to build my glyph?
Or do I somehow have to create a vtkAlgorithmOutput from my polydata?

From the docs, vtkGlyph3D:
copy oriented and scaled glyph geometry to every input point
In other words, this filter copies the Source geometry onto every (Nth) point of your Input (with some extra options for scale / orientation). So it does not make sense to use it without SetSourceConnection (with any kind of source/reader/filter providing a polydata).
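For example, a minimal sketch (using vtkSphereSource purely as an illustrative glyph shape; any source, reader, or filter that outputs polydata would do) could look like this:

#include <vtkNew.h>
#include <vtkSphereSource.h>
#include <vtkGlyph3D.h>

// The input polydata provides the points (and scalars) at which glyphs are placed;
// the source connection provides the geometry that gets copied onto each point.
vtkNew<vtkSphereSource> glyphSource;

vtkNew<vtkGlyph3D> glyph3D;
glyph3D->SetInputData(polydata);                             // where to place glyphs
glyph3D->SetSourceConnection(glyphSource->GetOutputPort());  // what geometry to copy
glyph3D->SetColorModeToColorByScalar();
glyph3D->ScalingOff();
glyph3D->Update();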

Related

How to have a public class member that can be accessed but not changed by other classes - C++

This probably has been asked before, but I can't find anything on it other than the general private/public/const solutions. Basically, I need to load fonts into an array when an instance of my text class is created. The text class is declared in my text.h file and defined in text.cpp. Both of these files also include a Fonts class, and I want my Fonts class to have my selection of fonts preloaded in an array, ready to be accessed by my text class after the first instance is created. I want these fonts to be accessible from my text class, but not changeable. I can't create a TTF_Font *Get_Font() method in the Fonts class, since each time a font is created it allocates memory that needs to be manually freed, so I couldn't exactly close it once it goes out of the method's scope. Instead, when creating a character for example, I would want to do something like calling TTF_RenderText_Blended(Fonts::arialFonts[10], "123", black);, which would select the font type Arial in size 11, for example.
I'm not sure what type of application you are using your fonts in, but I've done some 3D graphics programming using OpenGL and Vulkan. This may or may not help you, but it should give you some context on the structure of the framework that is commonly used in game engine development.
Typically we will have a Texture class or struct that has a vector of color triplets or quads representing the color data for each pixel in the image. Other classes will contain the UV coordinates that the texture will be applied to. We also usually have functions to load in textures or graphics files such as PNG, JPG, TGA, etc. You can write your own fairly trivially, or you can use one of the many open-source loader libraries out there if you're doing graphics-type programming. The texture class will contain other properties such as mipmaps, whether it is repeated or mirrored, its quality, etc. We will typically load a texture, assign it an ID value, and store it in a hash table. If the same texture is requested again from a file, the code will recognize that it already exists and return early; otherwise it will store the new texture in the hash table.
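A minimal sketch of that load-or-reuse pattern (TextureManager, Texture, and loadTexture here are hypothetical names, not from any particular engine) might look like this:

#include <memory>
#include <string>
#include <unordered_map>

struct Texture { /* pixel data, dimensions, mipmap settings, ... */ };

class TextureManager {
public:
    // Returns the existing texture if the file was already loaded,
    // otherwise loads it once and stores it in the hash table.
    std::shared_ptr<Texture> loadTexture(const std::string& filename) {
        auto it = textures_.find(filename);
        if (it != textures_.end())
            return it->second;                      // already loaded: reuse it

        auto tex = std::make_shared<Texture>();     // a real engine would decode the file here
        textures_[filename] = tex;
        return tex;
    }

private:
    std::unordered_map<std::string, std::shared_ptr<Texture>> textures_;
};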
Since most rendered text is 2D, instead of creating a 3D model and sending it to the renderer to be processed by the model-view-projection matrix, we create what is commonly called a Sprite. This is another class. It has vertices to make up its polygonal edges; typically a sprite will have 4 vertices since it is a quad. It will also have texture coordinates associated with it. We don't apply the textures directly to the sprite because we want to instance a single sprite, keeping only a single copy in memory. What we will typically do here is send a reference to it to the renderer, along with a reference to the texture by ID and a transformation matrix to change its shape, size, and world position. When this is processed by the GPU through the shaders, since it is a 2D object, we use an orthographic projection. This saves a matrix multiplication in the vertex shader for every vertex; basically, it will be processed by the view-projection matrix alone. Our sprites will be stored similarly to our textures, in a hash table with an associated ID.
Now we have a sprite that is basically a graphics image drawn to the screen: a simple quad that can be resized and placed anywhere easily. We only need a single quad in memory but can draw hundreds, even thousands, because they are instanced via a reference count.
How does this help with text or fonts? You want your Text class to be separate from your Font class. The Text class will contain where you want to draw the text, which font to use, the size of the font, the color to apply, and the text itself. The Font class would typically inherit from the basic Sprite class.
We will normally create a separate tool or mini console program that lets you take any of the known TrueType or Windows fonts by name and generates two files for you. You pass flags into the program's command-line arguments along with other options, such as -all for all characters or -"abc123" for just the specific characters you want, and -12 for the font size, and it will generate the needed files. The first is a single texture image that we call a Font Atlas, which is basically a specific form of a sprite sheet; the other file is a CSV text file with generated values for the texture positioning of each character's quad.
Back in the main project, the Font class loads in these two files. The first is simple, as it's just a texture image, which we've already handled before; the latter is the CSV file with the information needed to generate all of the appropriate quads. However, the Font class does have quite a few complex calculations that need to be performed. Again, when a font is loaded into memory, we do the same as we did before: we check whether it is already loaded, either by filename or by ID, and if it isn't we store it in a hash table with a generated ID.
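As a rough illustration of what that CSV describes (field names here are hypothetical; the real layout depends on the tool that generated the atlas), each row might map to a record like:

// One entry per character in the font atlas: where its quad lives in the
// texture and how far to advance the pen when laying out a string.
struct GlyphInfo {
    char  character;     // e.g. 'A'
    float u0, v0;        // top-left texture coordinate in the atlas
    float u1, v1;        // bottom-right texture coordinate
    float width, height; // quad size in pixels at the generated font size
    float advance;       // horizontal advance to the next character
};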
Now when we go to use the Text class to render our text, the code might look something like this:
void Engine::loadAssets() {
// Textures
assetManager_.loadTexture( "assets\\textures\\shells.png" );
assetManager_.loadTexture( "assets\\textures\\blue_sky.jpg" );
// Sprites
assetManager_.loadSprite( "assets\\sprites\\jumping_jim.spr" );
assetManager_.loadSprite( "assets\\sprites\\exploading_bomb.spr" );
assetManager_.loadFont( "assets\\fonts\\arial.png", 12 );
// Same font as above, but the code structure requires a different font id for each different size that is used.
assetManager_.loadFont( "assets\\fonts\\arial.png", 16 );
assetManager_.loadFont( "assets\\fonts\\helvetica.png" );
}
Now, these are all stored as single instances in our AssetManager class; this class contains several hash tables, one for each of the different types of assets. Its role is to manage their memory and lifetime. There is only ever a single instance of each asset, but we can reference it 1,000 times... Now, somewhere else in the code, we may have a file with a bunch of standalone enumerations...
enum class FontType {
ARIAL,
HELVETICA,
};
Then in our render call or loadScene function....
void Engine::loadScene() {
fontManager_.drawText( FontType::ARIAL, 18, glm::vec3(-0.5, 1.0, 0.5), glm::vec4(245, 169, 108, 128), "Hello World!");
fontManager_.drawText( FontType::ARIAL, 12, glm::vec3(128, 128, 0), glm::vec4(128, 255, 244, 80), "Good Bye!");
}
The drawText function would be the one that takes the ARIAL id and looks up the reference to that stored font in the hash table. The renderer uses the position and color values, the font size, and the string message. The ID and size are used to retrieve the appropriate Font Atlas or sprite sheet. Then each character in the message string is matched, and the appropriate texture coordinates are used to apply it to a quad sized according to the font size you specified.
All of the file handling, opening, reading, and closing was already done in the loadAssets function. All of the required information is already stored in a set of hash tables that we reference via instancing. There is no need to worry about memory management or heap access at this time; this is all done by using the cache. When we draw the text to the screen, we are only manipulating the pixels through the shaders by matrix transformations.
There is another major component of the engine that hasn't been mentioned yet: we typically use a batch process and a BatchManager class which handles all of the processing for sending the vertices, UV coordinates, color or texture data, etc. to the video card. CPU-to-GPU transfers across the bus and/or the PCI-Express lanes are considered slow; we don't want to send 10,000, 100,000, or even 1 million individual render calls every single frame! So we typically create a set of batches with priority-queue functionality, and when all buckets are full, either the bucket with the highest priority value or the fullest bucket is sent to the GPU and then emptied. A single bucket can hold 10,000 - 100,000 primitives, where a single primitive could be a point, a line, a triangle list, a triangle fan, etc. This makes the code much more efficient. The heap is seldom used. The BatchManager, AssetManager, TextureManager, AudioManager, FontManager classes, etc. are the ones that live on the heap, but all of their stored assets are used by reference, and because of that we can instance a single object a million times! I hope this explanation helps.

How do images work in an OpenCL kernel?

I'm trying to find ways to copy multidimensional arrays from host to device in OpenCL, and I thought one approach would be to use an image... which can be a 1, 2, or 3 dimensional object. However, I'm confused because when reading a pixel from an image, the examples use vector datatypes. Normally I would think double pointer, but it doesn't sound like that is what is meant by vector datatypes. Anyway, here are my questions:
1) What is actually meant by vector datatype, and why wouldn't we just specify 2 or 3 indices when denoting pixel coordinates? It looks like a single value such as float2 is being used to denote coordinates, but that makes no sense to me. I'm looking at the functions read_imageui and read_image.
2) Can the input image just be a subset of the entire image, and the sampler be a subset of the input image? I don't understand how the coordinates are actually specified here either, since read_image() only seems to take a single value for the input and a single value for the sampler.
3) If doing linear algebra, should I just bite the bullet and translate 1-D array data from the buffer into multi-dimensional arrays in OpenCL?
4) I'm still interested in images, so even if what I want to do is not best suited to images, could you still explain questions 1 and 2?
Thanks!
EDIT
I wanted to refine my question and ask: in the following Khronos documentation they define...
int4 read_imagei (
image2d_t image,
sampler_t sampler,
int2 coord)
But nowhere can I find what image2d_t's definition or structure is supposed to be. The same goes for sampler_t and the int2 coord. They seem like structs to me, or pointers to structs, since OpenCL is supposed to be based on ANSI C, but what are the fields of these structs, and how do I write the coord with what looks like a scalar?! I've seen the notation (int2)(x, y), but that's not ANSI C; that looks like Scala, haha. Things seem conflicting to me. Thanks again!
In general you can read from images in three different ways:
direct pixel access, no sampling
sampling, normalized coordinates
sampling, integer coordinates
The first one is what you want, that is, you pass integer pixel coordinates like (10, 43) and it will return the contents of the image at that point, with no filtering whatsoever, as if it were a memory buffer. You can use the read_image*() family of functions which take no sampler_t param.
The second one is what most people want from images, you specify normalized image coords between 0 and 1, and the return value is the interpolated image color at the specified point (so if your coordinates specify a point in between pixels, the color is interpolated based on surrounding pixel colors). The interpolation, and the way out-of-bounds coordinates are handled, are defined by the configuration of the sampler_t parameter you pass to the function.
The third one is the same as the second one, except the texture coordinates are not normalized, and the sampler needs to be configured accordingly. In some sense the third way is closer to the first, and the only additional feature it provides is the ability to handle out-of-bounds pixel coordinates (for instance, by wrapping or clamping them) instead of you doing it manually.
Finally, the different versions of each function, e.g. read_imagef, read_imagei, read_imageui are to be used depending on the pixel format of your image. If it contains floats (in each channel), use read_imagef, if it contains signed integers (in each channel), use read_imagei, etc...
Writing to an image on the other hand is straightforward, there are write_image{f,i,ui}() functions that take an image object, integer pixel coordinates and a pixel color, all very easy.
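As a concrete sketch (not from the question; it assumes an RGBA float image), here is a small OpenCL C kernel that reads with integer, unnormalized coordinates through a sampler and writes the pixel to a second image; note the (int2)(x, y) vector literal used for the coordinate:

// Sampler: unnormalized integer coords, no filtering, clamp at the edges.
__constant sampler_t smp = CLK_NORMALIZED_COORDS_FALSE |
                           CLK_ADDRESS_CLAMP_TO_EDGE |
                           CLK_FILTER_NEAREST;

__kernel void copy_pixels(__read_only image2d_t src,
                          __write_only image2d_t dst)
{
    // One work-item per pixel; the coordinate is a 2-component vector, not two scalars.
    int2 coord = (int2)(get_global_id(0), get_global_id(1));
    float4 px = read_imagef(src, smp, coord);   // one float per channel (RGBA)
    write_imagef(dst, coord, px);
}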
Note that you cannot read and write to the same image in the same kernel! (I don't know if recent OpenCL versions have changed that). In general I would recommend using a buffer if you are not going to be using images as actual images (i.e. input textures that you sample or output textures that you write to only once at the end of your kernel).
About the image2d_t, sampler_t types, they are OpenCL "pseudo-objects" that you can pass into a kernel from C (they are reserved types). You send your image or your sampler from the C side into clSetKernelArg, and the kernel gets back a sampler_t or an image2d_t in the kernel's parameter list (just like you pass in a buffer object and it gets a pointer). The objects themselves cannot be meaningfully manipulated inside the kernel, they are just handles that you can send into the read_image/write_image functions, along with a few others.
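On the host side, a sketch of how such handles reach the kernel (assuming an existing context, kernel, width, height, and hostPixels; OpenCL 1.2 C API) might be:

#include <CL/cl.h>

// Create a 2D image with one float per channel (RGBA).
cl_image_format fmt = { CL_RGBA, CL_FLOAT };
cl_image_desc desc = {};
desc.image_type = CL_MEM_OBJECT_TYPE_IMAGE2D;
desc.image_width = width;
desc.image_height = height;

cl_int err;
cl_mem img = clCreateImage(context, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                           &fmt, &desc, hostPixels, &err);
cl_sampler smp = clCreateSampler(context, CL_FALSE,          // unnormalized coordinates
                                 CL_ADDRESS_CLAMP_TO_EDGE,
                                 CL_FILTER_NEAREST, &err);

// Inside the kernel these arrive as image2d_t and sampler_t parameters.
clSetKernelArg(kernel, 0, sizeof(cl_mem), &img);
clSetKernelArg(kernel, 1, sizeof(cl_sampler), &smp);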
As for the "actual" low-level difference between images and buffers, GPU's often have specially reserved texture memory that is highly optimized for "read often, write once" access patterns, with special texture sampling hardware and texture caches to optimize scatter reads, mipmaps, etc..
On the CPU there is probably no underlying difference between an image and a buffer, and your runtime likely implements both as memory arrays while enforcing image semantics.

Transform part of an ID3D11Buffer

I am making a racing game as part of an assignment using DirectX 11, and I have loaded in a model of a car into a ID3D11Buffer. From this, I was wondering how to rotate the wheels of the car separately to the rest of the model. I have the start and end indices of each part of the car, so I know which part it is to be rotated, just not how to translate it separately. (I'm not sure if me including code would help, but if so, just let me know)
Why not load the model as a mesh? The wheels are part of the car, and thus a sub-mesh of the car.
With a sub-mesh, you can transform and render it separately without changing the other parts.
If you really want to load it into an ID3D11Buffer, I recommend you create two buffers: one holds the static part of the car, the other holds the wheels. In this way, you can transform and render the wheels while keeping the static part unchanged.
Immediately after asking, I figured it out, and now I have it implemented so that it works fine.
Given that .obj files support groups of vertices: when you load in the file, store the start and end location of each group of vertices and the name of each part (preceded in the .obj file by a 'g' at the beginning of a line), and calculate the centre point of each group.
Then make sure that you have an XMFLOAT4X4 for each part that you want to transform separately, as well as one for the whole object. In this case, I have objectMatrix and wheel1-4Matrix. I apply any general world transformations to do with moving the car to the objectMatrix. For each wheel, do the following transformation:
XMMATRIX translate;
// Rotate the wheel about its own centre: move the centre to the origin, rotate, move back.
translate = XMMatrixTranslation(-centre.x, -centre.y, -centre.z);
translate *= XMMatrixRotationX(angleToBeMoved);
translate *= XMMatrixTranslation(centre.x, centre.y, centre.z);
// Then combine with the car's overall world transform.
translate = XMMatrixMultiply(translate, XMLoadFloat4x4(&objectMatrix));
XMStoreFloat4x4(&wheel1, translate);
Apply this to each wheel during the update. In the Draw method, make sure that you're using the DrawIndexed method, passing it the index count and start index for each group (derived from the stored start and end indices), and update your constant buffer's world matrix with the relevant wheel matrix if it's a wheel, or with the objectMatrix in all other circumstances.
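For example, a hypothetical draw loop over the stored groups might look like the following sketch (parts, cbPerObject, constantBuffer_, and wheelMatrices are made-up names, not taken from the original code):

for (const auto& part : parts)   // one entry per 'g' group from the .obj file
{
    // Use the wheel's matrix for wheel groups, the objectMatrix for everything else.
    cbPerObject.world = part.isWheel ? wheelMatrices[part.wheelIndex] : objectMatrix;
    context->UpdateSubresource(constantBuffer_, 0, nullptr, &cbPerObject, 0, 0);
    context->VSSetConstantBuffers(0, 1, &constantBuffer_);

    // DrawIndexed takes an index count and a start index, so convert the stored end index.
    UINT indexCount = part.endIndex - part.startIndex;
    context->DrawIndexed(indexCount, part.startIndex, 0);
}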

How to overwrite a named display list - glCallList()

About glCallList: assume I have some named display lists. I know that each of them can be re-drawn by calling glCallList(i).
My question is: is it possible to overwrite such a named list? I mean, once the i-th one exists, I want to record a new display list and have it stored under glCallList(i).
Edit:
For example: right now glCallList(1) draws a cube. I want to overwrite it and make glCallList(1) draw a triangle.
Be aware that display lists are among the oldest parts of OpenGL and their use was frowned upon even before they were officially deprecated. The primary purpose they originally served was to "record" sequences of commands that would set up state / data persistently, in lieu of modern OpenGL's state/data objects (e.g. Texture Objects, Vertex Buffer Objects, Sampler Objects, etc.).
Nevertheless, a pair of calls to glNewList (...) and glEndList (...) will actually replace a display list rather than allocating a new one if you pass it a handle that already had data. So you do not need to go through the trouble of glDeleteLists (...) and then glGenLists (...) to reuse the same handle (name).
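A minimal sketch: if list 1 currently draws the cube, re-recording it replaces its contents, so the next glCallList(1) draws the triangle instead.

GLuint listId = 1;                 // an existing display list name
glNewList(listId, GL_COMPILE);     // reuses the handle and discards the old commands
glBegin(GL_TRIANGLES);
glVertex3f(0.0f, 0.0f, 0.0f);
glVertex3f(1.0f, 0.0f, 0.0f);
glVertex3f(0.0f, 1.0f, 0.0f);
glEnd();
glEndList();

glCallList(listId);                // now draws the triangle instead of the cube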

changing textureRect of a CCSprite created by CCRenderTexture

I have a CCSprite which gradually needs to be exhausted linearly from one end, let's say from left to right. For this purpose, I am trying to change the textureRect property of the sprite so that the part that got exhausted from one end is 'outside' the displayed frame of the sprite.
I did this sort of thing before with a sprite that gets loaded from a spritesheet, and it worked perfectly. But I created this CCSprite using CCRenderTexture, and by changing the textureRect property, the entire sprite disappears.
The first image is the original CCSprite which I get from CCRenderTexture. The second image shows what I want to achieve. The black dotted rectangular portion of the sprite needs to be omitted; only the blue dotted portion of the sprite needs to be displayed. Essentially, this blue dotted rectangle is my textureRect.
Is there any way I could make my sprite shrink from one end?
Also, is there any difference between a sprite created normally and one created using CCRenderTexture?
I have done a similar thing before using a low-level hack.
There is a workaround solution if you use CCProgressTimer; that's very easy and I think it should be enough for your example.
But you said in a comment that you have some special requirements, like "exhaust it from both ends at once", so some low-level hacking is needed. My solution from my last project is:
1) Get the texture image's raw data. In cocos2d you can use CCRenderTexture and in cocos2d-x you can use CCImage.
2) CCRenderTexture has a method - (BOOL) saveToFile:(NSString *)name format:(tCCImageFormat)format. You can read its source code and then try to save the data into a 2D array instead, like byte raw[1024][768]. Each element in this array represents one pixel of your picture (the type may not be byte, I'm not sure, I nearly forget the details). The format MUST BE PNG since transparency will be needed.
3) Modify the raw data directly: set the transparency of the pixels you want to disappear to 0x0.
4) Re-initialize a CCRenderTexture using the picture data you modified.
I can't provide the code directly, since it is a trade secret and a core part of one of my projects, but I can share my solution with you. You will also need some knowledge about how the PNG format works. Read:
https://en.wikipedia.org/wiki/Portable_Network_Graphics#File_header
Turns out I was making a silly mistake. While supplying values to the textureRect (a CGRect), I was actually setting textureRect.origin.y to the height of the texture, which made my textureRect go beyond (above) the texture area. This explains why the sprite was disappearing.
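For reference, a corrected sketch in cocos2d-x style (the original question uses the Objective-C cocos2d API, and 'consumed' is a made-up amount) that trims the sprite from the left would be:

// Keep the rect inside the texture: origin.y stays at 0 rather than the texture height.
float consumed = 40.0f;  // how much has been "exhausted" from the left
CCSize texSize = sprite->getTexture()->getContentSize();
sprite->setTextureRect(CCRect(consumed, 0,
                              texSize.width - consumed, texSize.height));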