Draw a set of multiple lines with VTK - C++

Can somebody point me in the right direction on how to draw multiple lines that appear connected? I found vtkLine and its SetPoint1 and SetPoint2 functions. Then I found vtkPolyLine, but there doesn't seem to be any add, insert, or set function for it. The same goes for vtkPolyVertex.
Is there a basic function that lets me simply push a point onto the end of its internal data and then render it? Or, if there is no such function/object, what is the way to go here?
On a related topic: I don't like VTK very much. Is there a visualization toolkit, maybe with limited functionality, that is easier to use?
Thanks in advance

For drawing multiple lines, first create a vtkPoints object that contains all the points, and then add connectivity information for the points you would like connected into lines, through either vtkPolyData or vtkUnstructuredGrid (either of which is a vtkDataSet; a vtkDataSet contains vtkPoints plus the connectivity information for those points). Once your vtkDataSet is constructed, you can take the normal route to render it (mapper -> actor -> renderer ...).
For example:
// Create the points that the lines will connect
vtkPoints *pts = vtkPoints::New();
pts->InsertNextPoint(1,1,1);
...
pts->InsertNextPoint(5,5,5);
// Build a vtkPolyData holding the points plus the connectivity
vtkPolyData *polydata = vtkPolyData::New();
polydata->Allocate();
polydata->SetPoints(pts);
// A VTK_LINE cell is defined by the ids of its two endpoints
vtkIdType connectivity[2];
connectivity[0] = 0;
connectivity[1] = 3;
polydata->InsertNextCell(VTK_LINE,2,connectivity); // Connects the first and fourth inserted points into a line
vtkPolyDataMapper *mapper = vtkPolyDataMapper::New();
mapper->SetInput(polydata);
// And so on: you still need an actor and a renderer
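If you want all of the points joined into one continuous, connected line (which is what vtkPolyLine represents), you can also insert a single VTK_POLY_LINE cell whose connectivity lists every point id in order. A minimal sketch in the same style (the point count of 5 is just illustrative):
// Connect points 0..4 into one continuous polyline
vtkIdType polyLineIds[5];
for (vtkIdType i = 0; i < 5; ++i)
{
    polyLineIds[i] = i;
}
polydata->InsertNextCell(VTK_POLY_LINE, 5, polyLineIds);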
There are plenty of examples on the documentation site for all of these classes.
Here is vtkPoints: http://www.vtk.org/doc/release/5.4/html/a01250.html
If you click on the vtkPoints (Tests) link, you can see the tests associated with the class; they provide a lot of different sample code.
Also, the VTK mailing list is probably going to be much more useful than Stack Overflow.

OMNeT++ direct message transmission visualization in 3D

I am new to OMNeT++ and I'm trying to implement a drone network in which the drones communicate with each other using direct messages.
I want to visualize my drone network with the 3D visualization in OMNeT++ using the OsgVisualizer in the inet.visualizer.scene package.
In the dronenetwork.ned file, I have used the IntegratedVisualizer and the OsgGeographicCoordinateSystem. Then, in the omnetpp.ini file, the map file to be used is defined, so the map loading and the mobility of the drones work fine in the 3D visualization of the simulation run.
However, the message transmissions between the drones are not visualized in 3D, even though they are properly visualized in the 2D canvas mode.
I tried adding both NetworkNodeOsgVisualizer and NetworkConnectionOsgVisualizer to my drone module as visualizer submodules, and I have also defined the drones with the @networkNode and @networkConnection properties. But the message transmissions still aren't visualized.
Any help or hint regarding this would be highly appreciated.
The code used for the visualizations in the drone module is as follows:
import inet.visualizer.scene.NetworkNodeOsgVisualizer;
import inet.visualizer.scene.NetworkConnectionOsgVisualizer;

module drone
{
    parameters:
        @networkNode;
        @networkConnection;
    submodules:
        networkNodeOsgVisualizer: NetworkNodeOsgVisualizer {
            @display("p=207,50");
            displayModuleName = true;
            visualizationTargetModule = "^.^";
            visualizationSubjectModule = "wirelessInterface.^.^";
        }
        networkConnectionOsgVisualizer: NetworkConnectionOsgVisualizer {
            visualizationTargetModule = "^.^";
            visualizationSubjectModule = "wirelessInterface.^.^";
            displayNetworkConnections = true;
        }
}
Thank you
Message passing and direct message sending visualizations are special cases implemented automatically by Qtenv for the 2D (default) visualization only. You can add custom 2D message visualization (like the one in the aloha example). OMNeT++ does not provide any 3D visualization by default; all the code must be provided by the model (INET in this case). This is also true for any transient visualization. There is an example of this in the osg-earth omnet example, where communication between cows is visualized by inflating bubbles.
So, you have to implement your own visualization effect. There is something in INET that is pretty close to what you want: DataLinkOsgVisualizer and PhysicalLinkOsgVisualizer, which flash an arrow when communication has occurred on the data link or physical layer. This is not the same as message passing, but it is close. Alternatively, you can implement your own animation using these visualizers as a sample.
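For example, if your network instantiates INET's IntegratedVisualizer as a submodule named visualizer, enabling those link visualizers could look roughly like the omnetpp.ini sketch below (the submodule and parameter names are assumptions and may differ between INET versions):
# enable link-level arrows in both the canvas and the OSG visualizers
*.visualizer.*.dataLinkVisualizer.displayLinks = true
*.visualizer.*.physicalLinkVisualizer.displayLinks = true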

Modifying ATriggerSphere radius?

I've created an instance of ATriggerSphere in my game, and placed it at my character's location, like so:
// Create activation trigger radius
activateRadiusTrigger = GetWorld()->SpawnActor<ATriggerSphere>(ATriggerSphere::StaticClass(), GetActorLocation(), GetActorRotation());
activateRadiusTrigger->SetActorHiddenInGame(false);
I need to adjust its radius now. I see that in Blueprints there's a parameter under Shape that lets me change the value of "Sphere Radius", but I can't find the C++ equivalent field. Can someone tell me how that's done? Thanks!
If you look up TriggerSphere.h in the engine headers, you can find its public interface there. It looks like ATriggerSphere uses a collision component that you can get using GetCollisionComponent(). I'm going to assume this component is a USphereComponent, and you can call SetSphereRadius on that!
So try:
Cast<USphereComponent>(activateRadiusTrigger->GetCollisionComponent())->SetSphereRadius(NewRadius);
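Since the cast can fail at runtime, a slightly safer version of the same idea might look like this (a sketch; NewRadius is whatever value you need):
if (USphereComponent* Sphere = Cast<USphereComponent>(activateRadiusTrigger->GetCollisionComponent()))
{
    // Update the trigger's radius (requires #include "Components/SphereComponent.h")
    Sphere->SetSphereRadius(NewRadius);
}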

How to exchange custom data between Ops in Nuke?

This question is addressed to developers using C++ and the Nuke NDK.
Context: Assume a custom Op which implements the interfaces of DD::Image::NoIop and DD::Image::Executable. The node iterates over a range of frames, extracting information at each frame, which is stored in a custom data structure. A custom knob, which is a member variable of the above Op (but invisible in the UI), handles the loading and saving (serialization) of the data structure.
Now I want to exchange that data structure between Ops.
So far I have come up with the following ideas:
Expression linking
Knobs can share information (matrices, etc.) using expression linking.
Can this feature be exploited for custom data as well?
Serialization to image data
The custom data would be serialized and written into a (new) channel. A node further down the processing tree could grab that and de-serialize it again. Of course, the channel must not be altered between serialization and de-serialization or else ... this is a hack, I know, but, hey, any port in a storm!
GeoOp + renderer
In cases where the custom data is purely point-based (which, unfortunately, it isn't in my case), I could turn the above node into a 3D node and pass point data to other 3D nodes. At some point a render node would be required to come back to 2D.
Am I going in the right direction with this? If not, what is a sensible approach to make this data structure available to other nodes that rely on the information contained in it?
This question has been answered on the Nuke-dev mailing list:
If you know the actual class of your Op's input, it's possible to cast the input to that class type and access it directly. A simple example is the snippet below:
//! @file DownstreamOp.cpp
#include "UpstreamOp.h" // The Op that contains your custom data.
// ...
UpstreamOp * upstreamOp = dynamic_cast< UpstreamOp * >( input( 0 ) );
if ( upstreamOp )
{
    YourCustomData * data = upstreamOp->getData();
    // ...
}
// ...
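For completeness, this assumes the upstream Op exposes an accessor for the custom data. A hypothetical UpstreamOp.h could declare something along these lines (the names YourCustomData and getData are illustrative, not part of the NDK):
//! @file UpstreamOp.h
#pragma once
#include "DDImage/NoIop.h"

struct YourCustomData { /* fields filled in while executing over the frame range */ };

class UpstreamOp : public DD::Image::NoIop
{
public:
    UpstreamOp( Node * node ) : DD::Image::NoIop( node ) {}
    // Accessor used by downstream Ops after the dynamic_cast shown above.
    YourCustomData * getData() { return &_data; }
    // ... knobs, engine(), Executable overrides, etc.
private:
    YourCustomData _data;
};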
UPDATE
Update with reference to a question that I received via email:
I am trying to do this exact same thing, pass custom data from one Iop plugin to another. But these two plugins are defined in different dso/dll files. How did you get this to work?
Short answer:
Compile your Ops into a single shared object.
Long answer:
Say UpstreamOp.cpp and DownstreamOp.cpp define the two dependent Ops.
In a first attempt I compiled the first plugin from UpstreamOp.cpp only, as usual. For the second plugin I compiled both DownstreamOp.cpp and UpstreamOp.cpp into that plugin. Strangely enough, that worked (on Linux; I didn't test Windows).
However, once you override
bool Op::test_input( int input, Op * op ) const;
things will break. Creating and saving a Comp using the above plugins still works, but loading that same Comp again breaks the connection in the node graph between UpstreamOp and DownstreamOp, and it is no longer possible to connect them again.
My hypothesis is this: since both plugins contain symbols for UpstreamOp, it depends on the load order of the plugins whether a node uses instances of UpstreamOp from the first or from the second plugin. So, if UpstreamOp from the first plugin is used, then any dynamic_cast in Op::test_input() will fail and the two Ops cannot be connected anymore.
It is still surprising that Nuke would even bother to start at all with the above configuration, since it can be rather picky about symbols from plugins, e.g. if they are missing.
Anyway, to get around this problem I did the following:
compile both Ops into a single shared object, e.g. myplugins.so, and
add a TCL or Python script (init.py/menu.py) which instructs Nuke how to load the Ops correctly.
An example of a TCL script can be found in the dev guide, and the instructions in your menu.py could be something like this:
menu = nuke.menu( 'Nodes' ).addMenu( 'my-plugins' )
menu.addCommand('UpstreamOp', lambda: nuke.createNode('UpstreamOp'))
menu.addCommand('DownstreamOp', lambda: nuke.createNode('DownstreamOp'))
nuke.load('myplugins')
So far, this has worked reliably for us (on Linux & Windows; we haven't tested Mac).

XTK - rendering volume in multiple renderers without .onShowtime()?

I'm wondering if someone can explain to me why I can't render the same volume in a 4-panel setup (3D, X, Y, Z) just like in XTK Tutorial 13, without the .onShowtime function. I tried altering the code to do this rather than call the .onShowtime function:
volume = new X.volume();
volume.file = 'http://x.babymri.org/?vol.nrrd';
sliceX.add(volume);
sliceX.render();
sliceY.add(volume);
sliceY.render();
sliceZ.add(volume);
sliceZ.render();
but when I do this, I get the loading bars in the three display panels, yet after loading only the sliceX panel displays an image; the others remain black. Do I always have to have a main renderer and make the other renderers listen to it, as the tutorial suggests?
Thanks,
Dave
It is because the first call to 'render' is going to trigger the downloading of the data.
Once the data has been downloaded and rendered the first time, we add/render it in the other renderers.
When we call 'render' on the next renderers, they will not try to download the data again since it has already been downloaded once.
If we call render on all of them as you are doing right now, it tries to download the volume three times, which can mess up the internals of the object.
So, long answer short: it is to avoid some race conditions, but I agree we should improve that if possible.
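For reference, the onShowtime pattern from the tutorial looks roughly like this (a sketch; it reuses the renderer variable names from the question):
volume = new X.volume();
volume.file = 'http://x.babymri.org/?vol.nrrd';
sliceX.add(volume);
sliceX.render();

// once sliceX has downloaded and rendered the volume, attach it to the others
sliceX.onShowtime = function() {
  sliceY.add(volume);
  sliceY.render();
  sliceZ.add(volume);
  sliceZ.render();
};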
Thanks
I have solved it temporarily by replacing onShowtime with the JavaScript function setTimeout. You render with the first renderer, and after that you render the others inside that function:
volume = new X.volume();
volume.file = 'http://x.babymri.org/?vol.nrrd';
slice3D.add(volume);
slice3D.render();
setTimeout(function(){
  sliceX.add(volume);
  sliceX.render();
  sliceY.add(volume);
  sliceY.render();
  sliceZ.add(volume);
  sliceZ.render();
}, 600);
It worked for me.

How do you control a player character in Bullet Physics?

I am not sure how you are supposed to control a player character in Bullet. The approaches I have read about were to use the provided btKinematicCharacterController, or the btDynamicCharacterController from the demos. However, the manual states that the kinematic controller has several outstanding issues. Is it still the preferred path? If so, are there any tutorials or documentation for it? All I found are snippets of code from the demos and the usage of the controllers with Ogre, which I do not use.
If this is not the path that should be taken, could someone point me to the correct solution? I am new to Bullet and would like a straightforward, easy solution. What I currently have is hacked-together bits of a btKinematicCharacterController.
This is the code I used to set up the controller:
playerShape = new btCapsuleShape(0.25, 1);
ghostObject = new btPairCachingGhostObject();
ghostObject->setWorldTransform(btTransform(btQuaternion(0, 0, 0, 1), btVector3(0, 20, 0)));
physics.getWorld()->getPairCache()->setInternalGhostPairCallback(new btGhostPairCallback());
ghostObject->setCollisionShape(playerShape);
ghostObject->setCollisionFlags(btCollisionObject::CF_CHARACTER_OBJECT);
controller = new btKinematicCharacterController(ghostObject, playerShape, 0.5);
physics.getWorld()->addCollisionObject(ghostObject, btBroadphaseProxy::CharacterFilter, btBroadphaseProxy::StaticFilter | btBroadphaseProxy::DefaultFilter);
physics.getWorld()->addAction(controller);
This is the code I use to access the controller's position:
trans = controller->getGhostObject()->getWorldTransform();
camPosition.z = trans.getOrigin().z();
camPosition.y = trans.getOrigin().y()+0.5;
camPosition.x = trans.getOrigin().x();
The way I control it is through setWalkDirection() and jump() (if canJump() is true).
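For context, a typical per-frame update with btKinematicCharacterController might look roughly like this (a sketch only; the input flags, walkSpeed, and time-step values are assumptions, not from the original code):
// Build the walk direction from input, then step the world.
btVector3 walkDirection(0, 0, 0);
if (keyForward) walkDirection += btVector3(0, 0, -1);
if (keyBack)    walkDirection += btVector3(0, 0,  1);
// setWalkDirection() takes the displacement applied on each simulation step.
controller->setWalkDirection(walkDirection * walkSpeed * fixedTimeStep);
if (keyJump && controller->canJump())
    controller->jump();
physics.getWorld()->stepSimulation(frameTime, 7, fixedTimeStep);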
The issue right now is that the character spazzes out a little, then drops through the static floor. Clearly this is not intended. Is this due to the lack of a rigid body? How does one integrate that?
Actually, now it just falls as it should, but then slowly sinks through the floor.
I have moved this line to right after the dynamics world is created:
physics.getWorld()->getPairCache()->setInternalGhostPairCallback(new btGhostPairCallback());
It is now this:
broadphase->getOverlappingPairCache()->setInternalGhostPairCallback(new btGhostPairCallback());
I am also using a .bullet file imported from blender, if that is relevant.
The issue was with the .bullet file, which has since been fixed (the collision boxes weren't working). However, I still experience jitteriness, an occasional inability to step up, instant step-down from too great a height, and other issues.
My answer to this question here tells you what worked well for me and apparently also for the person who asked.
Avoid ground collision with Bullet
The character controller implementations in Bullet are unfortunately very basic.
To get a good character controller, you'll need to invest that much work yourself.