Maya exports all animations to a single FbxAnimStack and FbxAnimLayer. My scene has several objects with independent animations, grouped by top-level objects:
Object_1
this_is_animated_1
this_is_animated_2
Object_2
this_is_animated_3
this_is_animated_4
I'd like to split the single FbxAnimLayer into several layers, one for each Object. My plan is to walk through all FbxAnimCurveNodes, get the targeted property, retrieve the FbxNode it belongs to, find the top-level FbxNode, and add those curves to the relevant new FbxAnimLayer.
However, I'm stuck: how do I get the targeted property from an animation curve node? FbxAnimCurveNode::GetChannel is private...
for (int i = 0; i < scene->GetSrcObjectCount<FbxAnimCurveNode>(); ++i) {
    auto curve = scene->GetSrcObject<FbxAnimCurveNode>(i);
    // Stuck! How do I retrieve the targeted property?
}
Or maybe my entire approach is wrong?
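For what it's worth, here is an untested sketch of how that lookup might work. It assumes the curve node exposes its driven properties through the generic connection API that FbxAnimCurveNode inherits from FbxObject (GetDstPropertyCount / GetDstProperty), and that FbxCast and FbxProperty::GetFbxObject behave as I recall; verify against your SDK version:

```cpp
#include <fbxsdk.h>

// For each curve node, find the properties it animates via its
// destination-property connections, then walk up to the top-level node.
for (int i = 0; i < scene->GetSrcObjectCount<FbxAnimCurveNode>(); ++i) {
    FbxAnimCurveNode* curveNode = scene->GetSrcObject<FbxAnimCurveNode>(i);
    for (int p = 0; p < curveNode->GetDstPropertyCount(); ++p) {
        FbxProperty prop = curveNode->GetDstProperty(p);
        FbxNode* node = FbxCast<FbxNode>(prop.GetFbxObject());
        // Climb to the node directly under the scene root.
        while (node && node->GetParent() && node->GetParent() != scene->GetRootNode())
            node = node->GetParent();
        // `node` should now be Object_1 / Object_2; connect curveNode
        // to the FbxAnimLayer you created for that top-level object.
    }
}
```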
We are using Google ortools to calculate a route using an image mask in C++.
First we create a two colour mask of an image with some objects we want to avoid in white, we then run the slic segmentation algorithm over the image and discard any segment centres that fall within a white object.
Finally we run the remaining centres through ortools to plot the route.
Here is an image with the output plotted on an input mask so you can see what I am talking about.
I guess if I were to relate it to a map we would be plotting a route avoiding a number of lakes!
The TSP code I'm using is based entirely on the circuit-board example in ortools, but here's the code I'm using to calculate the route, for completeness. In the code below, the build_path method is simply a bastardised version of the print output in the sample code.
void GConstraintSolver::compute_path(const std::vector<std::vector<int>>& locations, Path& path, const Image& mask) {
    const int num_vehicles = 1;
    const operations_research::RoutingIndexManager::NodeIndex depot{0};
    operations_research::RoutingIndexManager manager((int)locations.size(), num_vehicles, depot);
    operations_research::RoutingModel routing(manager);

    const auto distance_matrix = ComputeEuclideanDistanceMatrix(locations);
    const int transit_callback_index = routing.RegisterTransitCallback(
        [&distance_matrix, &manager](int64_t from_index, int64_t to_index) -> int64_t {
            // Convert from routing variable Index to distance matrix NodeIndex.
            auto from_node = (size_t)manager.IndexToNode(from_index).value();
            auto to_node = (size_t)manager.IndexToNode(to_index).value();
            return distance_matrix[from_node][to_node];
        });
    routing.SetArcCostEvaluatorOfAllVehicles(transit_callback_index);

    operations_research::RoutingSearchParameters searchParameters = operations_research::DefaultRoutingSearchParameters();
    searchParameters.set_first_solution_strategy(
        operations_research::FirstSolutionStrategy::PATH_CHEAPEST_ARC);

    const operations_research::Assignment* solution = routing.SolveWithParameters(searchParameters);
    build_path(manager, routing, *solution, locations, path, mask);
}
So here's my question:
Are there any parameters to the algorithm that I can use to guarantee that the calculated route doesn't traverse any of the objects we are trying to avoid? We haven't seen it yet, but I can see that there could easily be a situation whereby the route traverses a white object.
Can we pass areas to avoid to the algorithm? I've tried to look through the documentation but not found anything similar.
We could sample connections along the route that is generated, and if they traverse a white area then plot a route around the edge of the area but if there is anything I can supply as input to the route calculation I'd prefer that.
The solution we came up with is similar to what I posted at the end of the question.
In the ComputeEuclideanDistanceMatrix method we check whether any node-to-node connection crosses a white area, and if it does we set the distance between the two nodes to a very high value (10000). This ensures that those two nodes are never connected directly when the TSP is solved.
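That penalty idea can be sketched as a standalone distance-matrix builder. The `crossesWhiteArea` predicate and the function signature here are my own invention (in practice it would sample the mask along the segment between the two points); the 10000 constant follows the value mentioned above:

```cpp
#include <cmath>
#include <cstdint>
#include <functional>
#include <vector>

// Arc cost large enough that the solver will never pick a blocked arc,
// but small enough to avoid int64 overflow when summed along a tour.
constexpr int64_t kBlockedPenalty = 10000;

std::vector<std::vector<int64_t>> ComputeEuclideanDistanceMatrix(
    const std::vector<std::vector<int>>& locations,
    const std::function<bool(size_t, size_t)>& crossesWhiteArea) {
    std::vector<std::vector<int64_t>> distances(
        locations.size(), std::vector<int64_t>(locations.size(), 0));
    for (size_t from = 0; from < locations.size(); ++from) {
        for (size_t to = 0; to < locations.size(); ++to) {
            if (from == to) continue;
            if (crossesWhiteArea(from, to)) {
                distances[from][to] = kBlockedPenalty;  // effectively forbid this arc
            } else {
                const double dx = locations[from][0] - locations[to][0];
                const double dy = locations[from][1] - locations[to][1];
                distances[from][to] = static_cast<int64_t>(std::hypot(dx, dy));
            }
        }
    }
    return distances;
}
```

Note that the penalty does not strictly guarantee the arc is never used (if every remaining arc is also blocked, the solver will still pick one), but for sparse obstacles it works well in practice.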
I am quite new to xBim and I am struggling to find the information I need. I have been able to iterate through all the IfcSpaces for each storey, and I would like to find each space's IfcPolyline so that I will know its boundaries. But how?
using (IfcStore model = IfcStore.Open(filename, null))
{
    List<IfcBuildingStorey> allstories = model.Instances.OfType<IfcBuildingStorey>().ToList();
    for (int i = 0; i < allstories.Count; i++)
    {
        IfcBuildingStorey storey = allstories[i];
        var spaces = storey.Spaces.ToList();
        for (int j = 0; j < spaces.Count; j++)
        {
            var space = spaces[j];
            var spaceBoundaries = space.BoundedBy.ToList();
            for (int u = 0; u < spaceBoundaries.Count; u++)
            {
                // IfcPolyline from here??
            }
        }
    }
}
This is quite an old question, but in case you are still looking for the answer: IfcSpace.BoundedBy is an inverse relation and will give you a list of IfcRelSpaceBoundary objects. Each has a RelatedBuildingElement attribute, which gives you the bounding building element, such as a wall or door. It also has ConnectionGeometry, which is essentially an interface, as the geometry of this connection might be a curve, point, surface, or volume. If you drill further down the object model, you will see that the boundary can be any kind of curve, not just a polyline.
An entirely different approach would be to access the space geometry via Space.Representation. This could have a 2D representation, which would likely be a polygon, or it might be a 3D extrusion with a profile. That would again be what you are looking for. But be aware that it can be any other kind of geometry representation, depending on the authoring software and the model author.
I'm trying to create a program, using Qt (C++), which can record audio from my microphone using QAudioInput and QIODevice.
Now I want to visualize my signal.
Any help would be appreciated. Thanks
[Edit1] - copied from your comment (by Spektre)
I have only one buffer for both channels.
I use Qt; the channel values are interleaved in the buffer.
This is how I separate the values:
for ( int i = 0, j = 0; i < countSamples ; ++j)
{
YVectorRight[j]=Samples[i++];
YVectorLeft[j] =Samples[i++];
}
Afterwards I plot YVectorRight and YVectorLeft. I don't see how to trigger on only one channel.
hehe, done this a few years back for students during class. I hope you know how oscilloscopes work, so here are just the basics:
timebase
fsmpl is input signal sampling frequency [Hz]
Try to use as big a rate as possible (44100, 48000, ???); the max frequency detected is then fsmpl/2 (the Nyquist limit), which gives you the top of your timebase axis. The low limit is given by your buffer length.
draw
Create a function that renders your sample buffer from a specified start address (inside the buffer), with:
Y-scale ... amplitude setting
Y-offset ... Vertical beam position
X-offset ... Time shift or horizontal position
This can be done by modifying the start address, or by just X-offsetting the curve.
Level
Create a function that emulates the Level functionality. Search the buffer from the start address and stop where the amplitude crosses the Level. You can have more modes, but these are the basics you should implement:
amplitude: ( < lvl ) -> ( > lvl )
amplitude: ( > lvl ) -> ( < lvl )
There are many other possibilities for the level, like glitch, relative edge, ...
Preview
You can put all this together, for example, like this: keep a start-address variable, sample data into a buffer continuously, and on a timer call Level with the start address (updating it). Then call draw with the new start address and add the timebase period to the start address (in units of your samples, of course).
multichannel
I use Line IN, so I have stereo input (A,B = left,right); therefore I can add some other stuff like:
Level source (A,B,none)
render mode (timebase,Chebyshev (Lissajous curve if closed))
Chebyshev = the x axis is A and the y axis is B; this creates the famous Chebyshev images, which are good for dependent sinusoidal signals, usually forming circles, ellipses, distorted loops, ...
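A minimal sketch of that X-Y render mode, using two synthesized test tones instead of real sampled input (the function and parameter names are my own):

```cpp
#include <cmath>
#include <utility>
#include <vector>

// Generate n (x, y) pairs for X-Y (Lissajous-style) rendering: channel A
// drives the x axis, channel B the y axis. fa, fb are tone frequencies in
// Hz, phase is B's phase offset in radians, fs is the sampling frequency.
std::vector<std::pair<double, double>> LissajousXY(
    int n, double fa, double fb, double phase, double fs) {
    const double kTwoPi = 6.283185307179586;
    std::vector<std::pair<double, double>> pts;
    pts.reserve(n);
    for (int i = 0; i < n; ++i) {
        const double t = i / fs;
        pts.push_back({ std::sin(kTwoPi * fa * t),            // A -> x
                        std::sin(kTwoPi * fb * t + phase) }); // B -> y
    }
    return pts;
}
```

With equal frequencies and a 90° phase offset this traces a circle; other frequency ratios give the distorted loops mentioned above.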
misc. stuff
You can add filters for the channels, emulating capacitance or grounding of the input, and much more.
GUI
You need many settings. I prefer analog knobs instead of buttons/scrollbars/sliders, just like on a real oscilloscope:
(semi)analog values: Amplitude, TimeBase, Level, X-offset, Y-offset
discrete values: level mode (/,\), level source (A, B, -), each channel (direct on, ground, off, capacity on)
Here are some screenshots of my oscilloscope:
Here is screenshot of my generator:
And finally, after adding some FFT, also a spectrum analyser:
PS.
I started with DirectSound, but it sucks a lot because of buggy/non-functional buffer callbacks.
I use the WinAPI WaveIn/Out for all sound in my apps now. After a few quirks with it, it is the best for my needs and has the best latency (DirectSound is more than 10 times slower), though for an oscilloscope that has no merit (I need low latency mostly for emulators).
Btw. I have these three apps as linkable C++ subwindow classes (Borland), last used with my ATMega168 emulator for debugging my sensorless BLDC driver.
Here you can try my oscilloscope, generator, and spectrum analyser. If you are confused by the download, read the comments below this post; btw the password is: "oscill"
Hope it helps. If you need help with anything, just leave me a comment.
[Edit1] trigger
You trigger all channels at once, but the trigger condition is usually checked on just one. The implementation is simple; for example, let the trigger condition be the A (left) channel rising above the level:
First, continuous display with no trigger, which you wrote like this:
for ( int i = 0, j = 0; i < countSamples ; ++j)
{
YVectorRight[j]=Samples[i++];
YVectorLeft[j] =Samples[i++];
}
// here draw or FFT,draw buffers YVectorRight,YVectorLeft
Add trigger
To add a trigger condition you just find a sample that meets it and start drawing from there, so you change the code to something like this:
// static or global variables
static int i0 = 0;              // actual start for drawing
static bool _copy_data = true;  // flag that new samples need to be copied
static int level = 35;          // trigger level; datatype should be the same as your samples...
int i, j;
for (;;)
{
    // copy new samples to buffer if needed
    if (_copy_data)
        for (_copy_data = false, i = 0, j = 0; i < countSamples; ++j)
        {
            YVectorRight[j] = Samples[i++];
            YVectorLeft[j]  = Samples[i++];
        }
    // now search for the new start
    for (i = i0 + 1; i < (countSamples >> 1); i++)
        if (YVectorLeft[i - 1] < level)  // lower than level before i
            if (YVectorLeft[i] >= level) // at or above level at i
            {
                i0 = i;
                break;
            }
    if (i0 >= (countSamples >> 1) - view_samples) { i0 = 0; _copy_data = true; continue; }
    break;
}
// here draw or FFT the buffers YVectorRight, YVectorLeft from position i0
view_samples is the viewed/processed data size (for one or more screens); it should be several times smaller than (countSamples>>1).
This code can lose one screen at the border area; to avoid that you would need to implement cyclic buffers (rings), but for starters even this is OK.
Just encode all trigger conditions with some ifs or a switch statement.
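For reference, the rising-edge search in the loop above can be factored into a small standalone helper (a sketch with names of my own choosing; the in-place version above additionally handles the buffer copy and wrap-around):

```cpp
#include <vector>

// Rising-edge trigger search: return the index of the first sample that
// crosses `level` from below, starting the search at `start`; -1 if no
// crossing is found.
int FindRisingTrigger(const std::vector<int>& samples, int start, int level) {
    for (int i = (start > 0 ? start : 1); i < (int)samples.size(); ++i)
        if (samples[i - 1] < level && samples[i] >= level)
            return i;
    return -1;
}
```

A falling-edge mode is the same with the comparisons reversed, which is exactly the kind of switch the "encode all trigger conditions" remark refers to.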
I'm having an issue with sets and how transforms are applied. I'm coming from a graphics background, so I'm familiar with scene graphs as well as the normal SVG group syntax, but Raphael is confusing me. Say I have a circle and a set, on which I want to apply a transform.
circle = paper.circle(0, 0, 0.5)
set = paper.set()
If I add the circle first, and then transform, it works.
set.push circle
set.transform("s100,100")
to make a 50-radius circle. If I reverse the order, however,
set.transform("s100,100")
set.push circle
The transform is not applied.
This seems as though it will break many, many rendering- and animation-type algorithms, where your groups/transforms hold your articulation state and you add or remove objects from them instead of recreating the entire transform every time. Is there an option somewhere in the documentation that I am not seeing that addresses this, or was this functionality discarded in favor of simplicity? It seems very odd for it to be missing, given that it is supported directly and easily in the group hierarchy of SVG itself... Do I need to manually apply the transform from the set to any children added after the set is transformed?
Sets in Raphael are just simple arrays.
When you perform some action on a set, Raphael goes through all members via a for(...){} loop.
Raphael doesn't support SVG groups (<g></g>).
UPDATE: Raphael's code:
// Set
var Set = function (items) {
    this.items = [];
    this.length = 0;
    this.type = "set";
    if (items) {
        for (var i = 0, ii = items.length; i < ii; i++) {
            if (items[i] && (items[i].constructor == elproto.constructor || items[i].constructor == Set)) {
                this[this.items.length] = this.items[this.items.length] = items[i];
                this.length++;
            }
        }
    }
},
As you can see, all items are stored in this.items, which is an array.
Raphaël's sets are merely intended to provide a convenient way of managing groups of shapes as unified sets of objects, by aggregating element-related actions and delegating them (by proxying the corresponding methods at the set level) to each shape sequentially.
It seems very odd to be missing, given that it is supported directly
and easily in the group hierarchy of SVG itself...
Well, Raphaël is not an attempt to elevate the SVG specs to a JavaScript based API, but rather to offer an abstraction for vector graphics manipulation regardless of the underlying implementation (be it SVG in modern browsers, or VML in IE<9). Thus, sets are by no means representations of SVG groups.
do I need to manually apply the transform from the set to any
children added after the set is transformed?
Absolutely not, you only need to make sure to add any shapes to the set before applying transformations.
I've had to completely revamp this question as I don't think I was explicit enough about my problem.
I'm attempting to learn the ropes of Box2D Web. I started having problems when I wanted to learn how to put multiple shapes in one rigid body (to form responsive concave bodies). One of the assumptions I made was that this kind of feature would only really be useful if I could change the positions of the shapes (so that I can be in control of what the overall rigid body looked like). An example would be creating an 'L' body with two rectangle shapes, one of which was positioned below and to-the-right of the first shape.
I've gotten that far, insofar as I've found the SetAsOrientedBox method, where you can pass the box its position in the 3rd argument (center).
All well and good. But when I tried to create two circle shapes in one rigid body, I found undesirable behaviour. My instinct was to use the SetLocalPosition method (found in the b2CircleShape class). This seems to work to an extent: the body responds physically as it should, but visually the debug draw doesn't seem to render the shapes at their positions. It simply draws both circle shapes at the centre position. I'm aware that this is probably a problem with Box2D's debug-draw logic, but it seems strange to me that there is no online patter regarding this issue. One would think that creating two circle shapes at different positions in a body's coordinate space would be a popular and well-documented technique. Clearly not.
Below is the code I'm using to create the bodies. Assume that the world has been passed to this scope effectively:
// first circle shape and def
var fix_def1 = new b2FixtureDef;
fix_def1.density = 1.0;
fix_def1.friction = 0.5;
fix_def1.restitution = .65;
fix_def1.bullet = false;
var shape1 = new b2CircleShape();
fix_def1.shape = shape1;
fix_def1.shape.SetLocalPosition(new b2Vec2(-.5, -.5));
fix_def1.shape.SetRadius(.3);
// second circle def and shape
var fix_def2 = new b2FixtureDef;
fix_def2.density = 1.0;
fix_def2.friction = 0.5;
fix_def2.restitution = .65;
fix_def2.bullet = false;
var shape2 = new b2CircleShape();
fix_def2.shape = shape2;
fix_def2.shape.SetLocalPosition(new b2Vec2(.5, .5));
fix_def2.shape.SetRadius(.3);
// creating the body
var body_def = new b2BodyDef();
body_def.type = b2Body.b2_dynamicBody;
body_def.position.Set(5, 1);
var b = world.CreateBody( body_def );
b.CreateFixture(fix_def1);
b.CreateFixture(fix_def2);
Please note that I'm using Box2D Web ( http://code.google.com/p/box2dweb/ ) with the HTML5 canvas.
It looks like you are not actually using the standard debug draw at all, but a function that you have written yourself - which explains the lack of online-patter about it (pastebin for posterity).
Take a look in the box2dweb source and look at these functions for a working reference:
b2World.prototype.DrawDebugData
b2World.prototype.DrawShape
b2DebugDraw.prototype.DrawSolidCircle
You can use the canvas context 'arc' function to avoid the need for calculating points with sin/cos and then drawing individual lines to make a circle. It also lets the browser use the most efficient way it knows of to render the curve, eg. hardware support on some browsers.
Since it seems like you want to do custom rendering, another pitfall to watch out for is the different call signatures for DrawCircle and DrawSolidCircle. The second of these takes a parameter for the axis direction, so if you mistakenly use the three parameter version Javascript will silently use the color parameter for the axis, leaving you with an undefined color parameter. Hours of fun!
DrawCircle(center, radius, color)
DrawSolidCircle(center, radius, axis, color)