Get complete current transform state of a Raphael element (as object or string)

Simple question that I can't find a simple answer to.
How do I look up the complete current human-readable (non-matrix) transform state of a Raphael element, regardless of whether or how that element's transform state was set?
For example, using element.transform() doesn't give you a complete transform state:
1: If something has been set by matrix, it doesn't give you the non-matrix state. E.g. here the element has been scaled equivalent to s2,2 but there's no s data when we parse the result:
circ = paper.circle(50,50,50);
circ.transform('m2 0 0 2 0 0');
console.log(circ.transform());
2: If something hasn't been set, it's undefined rather than giving us the default numeric value. E.g. here there's no s data, whereas I'm hoping for something that would tell us the scale state is equivalent to applying s1,1:
circ = paper.circle(50,50,50);
circ.transform('t100,100');
console.log(circ.transform());

Here's the closest I can find - recording it here because it's not obvious. Tested on paths, circles, ellipses, rectangles. Doesn't work on sets (as sets aren't transformed directly, they're just glorified arrays that apply transforms to their contents).
To get a Raphael element's complete current transform state as an object:
element.matrix.split();
The contents of that object are (with defaults shown for a non-transformed element):
dx: 0
dy: 0
isSimple: true
isSuperSimple: true
noRotation: true
rotate: 0
scalex: 1
scaley: 1
shear: 0
So, to look up a Raphael element's x scale state, you could use element.matrix.split().scalex. To look up an element's rotation state, independent of which method set it, you could use element.matrix.split().rotate, and so on. dx and dy are equivalent to translate values.
circle = paper.circle(5,5,5).attr('transform','s2,2');
alert(circle.matrix.split().scalex); // alerts 2
alert(circle.matrix.split().dx); // alerts 0
circle = paper.circle(5,5,5).attr('transform','m2 0 0 2 0 0');
alert(circle.matrix.split().scalex); // alerts 2
alert(circle.matrix.split().dx); // alerts 0
circle = paper.circle(5,5,5).attr('transform','t100,100');
alert(circle.matrix.split().scalex); // alerts 1
alert(circle.matrix.split().dx); // alerts 100
To get a Raphael element's current transform state as a transform string, the closest seems to be:
element.matrix.toTransformString();
...however, this only includes transformations that have been applied. E.g. if there has been no scaling, there is no s segment in the string, rather than a default scaling transformation string like s1,1,0,0.
Likewise if you do...
Raphael.parseTransformString( element.matrix.toTransformString() );
...you get an array with unset values missing, rather than an object with all values present.
There doesn't seem to be any convenient function to turn the output of element.matrix.split(); into a transform string (although it's probably not needed).
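If you do want one, here's a minimal sketch (untested, and only meaningful when split().isSimple is true, since a shear component can't be expressed as t/r/s). It mirrors the t, s, r ordering that toTransformString() itself uses:

function fullTransformString(el) {
    var s = el.matrix.split();
    if (!s.isSimple) return el.matrix.toTransformString(); // fall back when there's shear
    return 't' + s.dx + ',' + s.dy +
           's' + s.scalex + ',' + s.scaley + ',0,0' +
           'r' + s.rotate + ',0,0';
}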

Related

Using the mask returned by _mm_cmplt_epi16() to conditionally _mm_set_epi16 using SSE 1 .. SSE4.2

I'm adding offsets to x- and y-coordinates to then get the color values at the new (x, y), but I have to make sure the coordinates are not out of bounds. So I check whether the values are greater than -1 using _mm_cmplt_epi16(lane, minus_one), and I get back a mask.
Now I want to set the values that weren't greater than -1 to 0, so I don't run into access violations and am able to get the color values.
Once I've fetched the color values, I want to use the mask again to set those color values to a specific value so as not to mess up the process, but it seems there's no _mm_maskmove_epi16().
I can only use SSE1-4.2.
Is there anything I can do to avoid branches?
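One branch-free approach, sketched below (assuming the intended predicate is coordinate > -1, which _mm_cmpgt_epi16 expresses directly): AND the coordinates with the mask to zero out the invalid lanes, and use the classic and/andnot/or combination as a 16-bit blend. On SSE4.1, the blend step can also be done with _mm_blendv_epi8, which operates per byte but works here because each 16-bit mask lane is all-ones or all-zeros.

#include <emmintrin.h> // SSE2

// Zero every 16-bit lane that is not greater than -1 (i.e. negative).
__m128i clamp_negative_to_zero(__m128i coords)
{
    const __m128i minus_one = _mm_set1_epi16(-1);
    __m128i mask = _mm_cmpgt_epi16(coords, minus_one); // 0xFFFF where coord >= 0
    return _mm_and_si128(coords, mask);                // invalid lanes become 0
}

// Branch-free 16-bit "maskmove": result = (mask & a) | (~mask & b).
__m128i blend_epi16(__m128i mask, __m128i a, __m128i b)
{
    return _mm_or_si128(_mm_and_si128(mask, a),
                        _mm_andnot_si128(mask, b));
}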

QPainterPath percentage/t value for Element

I have a QPainterPath that has two Elements, both of which are cubic Bezier curves.
If I want to find a particular point along the path, I can use the pointAtPercent method. The documentation states that
When curves are present the percentage argument is mapped to the t parameter of the Bezier equations.
When I get the percentage, it's from 0 to 1 along the length of the entire path. That middle control point, for example, is at t = 0.46, when actually it's the end of the left Element (t = 1.0) and the start of the next (t = 0). So in my image if I get the percentage at the green circle, it'll be around 0.75. What I'd like is to get something like 0.5 for the green circle, i.e. the percentage of just the second Bezier.
So my question is, is there a way in Qt to determine the percentage value of a given Element instead of relative to the entire path length? In my example I happen to know the percentage value for the middle control point, but in general I won't know it, so I can't just scale the percentages or assume even distribution.
I'm using PyQt4 (Qt 4.8) if that matters. Thanks!
t scales along the total length(), but you can also know the length of individual segments and thus adjust t accordingly. A path's "element" is a rather specific term: there are three elements per cubicTo, assuming no intervening position changes. An arbitrary path like yours will consist of a MoveToElement, a CurveToElement, two CurveToDataElements, another CurveToElement, and another two CurveToDataElements. You have to iterate over the elements and extract the length of the first cubic to adjust the t.
A function extracting the first cubic, determining its length, and then using that to compute t2 from t would look similar to this (untested):
from PyQt4.QtCore import QPointF
from PyQt4.QtGui import QPainterPath

def t2(path, t):
    # Expect exactly MoveTo + two cubics: 1 + 3 + 3 = 7 elements.
    if path.elementCount() != 7:
        raise ValueError('invalid path element count')
    pt = lambda e: QPointF(e.x, e.y)  # Element -> QPointF
    # Rebuild the first cubic: elements 1 and 2 are its control
    # points, element 3 is its end point.
    path1 = QPainterPath()
    path1.moveTo(pt(path.elementAt(0)))
    path1.cubicTo(pt(path.elementAt(1)), pt(path.elementAt(2)),
                  pt(path.elementAt(3)))
    l = path.length()    # length of the whole path
    l1 = path1.length()  # length of the first cubic
    l2 = l - l1          # length of the second cubic
    return (t*l - l1)/l2
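For illustration (assuming, purely for the sake of round numbers, that the two cubics happen to have equal length), a whole-path percentage of 0.75 would map to a local parameter of 0.5 on the second cubic:

local_t = t2(path, 0.75)  # 0.5 when the two segments have equal length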

What does m_gridMaps[0] mean in the icp-slam application of MRPT?

I am confused with this:
CPosePDFPtr pestPose = ICP.Align(
matchWith, // Map 1
&sensedPoints, // Map 2
initialEstimatedRobotPose, // a first gross estimation of map 2 relative to map 1.
&runningTime, // Running time
&icpReturn // Returned information
);
sensedPoints is one frame of point data. I am not sure whether matchWith is the frame of point data before sensedPoints or the whole map's data. If I want to align two adjacent frames of point data, how should I do it?
Check out the reference docs of mrpt::maps::CMultiMetricMap.
As shown there, m_gridMaps[i] means the i-th occupancy grid map in the set of metric maps.
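For illustration, a minimal sketch (assuming the classic MRPT API, where m_gridMaps is a public container of COccupancyGridMap2D smart pointers on mrpt::maps::CMultiMetricMap):

#include <mrpt/maps/CMultiMetricMap.h>

void dumpFirstGrid(const mrpt::maps::CMultiMetricMap& metricMap)
{
    // m_gridMaps[0] is the first occupancy grid map held by the multi-map.
    if (!metricMap.m_gridMaps.empty())
        metricMap.m_gridMaps[0]->saveAsBitmapFile("grid0.png");
}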

Some light on COLLADA's logic

I'm working on a 3D map generator platform in C++/OpenGL and, after finishing with Perlin noise, I needed to load some 3D models onto my screen. I'd never tried it before, and after reading about it I decided to use the COLLADA model format. The first thing I did was to read the XML file through TinyXML and convert it into understandable classes inside my code. I can access everything with no problem. So far all was well, but the problem appeared when I tried to properly convert the XML's information into 3D static models. I read many tutorials about it, but I think I didn't catch the "essence" of COLLADA, so I'm here asking for help. My ".dae" file consists of a simple sphere created in Blender. It doesn't matter what I do; whenever I try to load it onto my screen, what I get is always a "thorny thing", like this image:
http://s2.postimg.org/4fdz2fpl4/test.jpg
Surely I'm not taking the correct coordinates or at least I'm not implementing them correctly.
Here is the exactly COLLADA file that I'm testing. In short, what I'm doing is the following:
1 - First I access "polylist" and get the values of "p", also the ID whose semantic is VERTEX, in this case "ID2-mesh-vertices"
2 - I access "vertices" and get the source ID whose semantic is POSITION, in this case "#ID2-mesh-positions"
3 - I access the source "#ID2-mesh-positions" and take the float values
4 - After that I loop through the "p" values in groups of three (according to "technique_common") to get, respectively, the indices of the X, Y and Z vertices located within the float values of the source. For example, what the code does =>
0 0 1 = {X -> 0.4330127;Y -> 0.4330127; Z -> 0.25}
1 2 2 = {X -> 0.25;Y -> 0; Z -> 0}
1 1 0 = {X -> 0.25;Y -> 0.25; Z -> 0.4330127}
Obviously I'm doing something very wrong, because I cannot get a simple sphere.
<input semantic="VERTEX" source="#ID2-mesh-vertices" offset="0"/>
<input semantic="NORMAL" source="#ID2-mesh-normals" offset="1"/>
This tells you that for each vertex you have 2 indices poking into the referenced sources: 0 0 is the first set, 1 1 is the second, 2 2 is the third. Since your first vcount value in the polylist is 3 (really, all of them are), those make up your first triangle.
Now, those indices are going through the source accessor for the float array...
<accessor source="#ID2-mesh-normals-array" count="266" stride="3">
<param name="X" type="float"/>
<param name="Y" type="float"/>
<param name="Z" type="float"/>
</accessor>
This tells you that to read the normal associated with an index, you have to stride the array by 3 elements, and each vector is made up of 3 floats (X, Y, Z). Note that the stride does not have to equal the number of components per vector, though it usually does.
So, to conclude that example: to read index 2 of the normal array, you read the elements at X_index = index*stride = 6, Y_index = X_index+1 = 7 and Z_index = X_index+2 = 8 to find that normal's (X, Y, Z) components.
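As a rough sketch (with hypothetical names, assuming the <p> indices and the two source float arrays have already been parsed into vectors), reading one triangle vertex with interleaved VERTEX/NORMAL indices and stride-3 accessors could look like this:

#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// The accessor maps a logical index to `stride` consecutive floats.
Vec3 readVec3(const std::vector<float>& source, int index, int stride)
{
    int base = index * stride;
    return { source[base], source[base + 1], source[base + 2] };
}

// <p> holds one index per <input> for each vertex, ordered by offset:
// the VERTEX index (offset 0) first, then the NORMAL index (offset 1).
void readTriangleVertex(const std::vector<int>& p, std::size_t vertex,
                        const std::vector<float>& positions,
                        const std::vector<float>& normals,
                        Vec3& outPosition, Vec3& outNormal)
{
    const std::size_t inputsPerVertex = 2; // VERTEX + NORMAL
    int posIndex  = p[vertex * inputsPerVertex + 0];
    int normIndex = p[vertex * inputsPerVertex + 1];
    outPosition = readVec3(positions, posIndex, 3);
    outNormal   = readVec3(normals, normIndex, 3);
}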
And yes, this means that you have multiple indices per vertex, something that OpenGL does not support natively. See these questions as reference material:
Rendering meshes with multiple indices
How to use different indices for tex coords array and vertex array using glDrawElements
3 index buffers
Use the COLLADA de-indexer to pre-process the .dae and eliminate multiple indices per vertex. While you're at it, convert to triangles in the pre-processing step to simplify your loader even further.
https://collada.org/mediawiki/index.php/COLLADA_Refinery

The camera's up, focus and position values after interaction

After interacting with the XTK camera in some way -- translation, rotation, zooming -- is there a way to retrieve from the camera the new values of the position, the focus and the up vector? It looks like the getters and setters are defined in the camera JavaScript, but the corresponding attributes are not updated during interaction. The value returned by camera.position, for instance, is not updated even following a translation.
Is there either a mechanism that can provide these values, or a way to add an additional watcher to all interactions that modify the camera?
The position, up vector and focus are used to configure the 3D space at first. Then, all interactions just modify the created view matrix.
You can query the view matrix like this:
ren = new X.renderer3D()
console.log(ren.camera.view + "") // prints the view matrix as a string
I see a few solutions, depending on who does the job.
First: adding an option to the camera to enable tracking (disabled by default) that would update up/position/focus would be easy, no? It's just a matter of multiplying the previous vectors by the transformation matrix at the same moment we multiply the view matrix by it. But it may bring an additional cost in operations. Or we can compute it as in my "Second" option.
Second: if my memory is good, the transformation matrix T in a basis B(O, i, j, k) has a well-known structure, no? Something like this (maybe I forgot a transposition):
i1 j1 k1 u1
i2 j2 k2 u2
i3 j3 k3 u3
0 0 0 1
Where:
[u1, u2, u3] = T(O), i.e. the translation in the basis B
[i1, i2, i3] = T(i)
[j1, j2, j3] = T(j)
[k1, k2, k3] = T(k)
Then if you want the 3 angles it is quite harsh (see the Euler angle computations), but if you want something else it can be easier. For example, say you want the up vector, which is the image of the j vector of the basis B under our transformation T: it is [j1, j2, j3], no? You cannot easily get the focus point, but you can easily get the focus vector: it is [k1, k2, k3] (actually, maybe it is -1*[k1, k2, k3]). If you look closely at the LookAt_ method of X.camera3D, it does not give a focus point to WebGL but a normalized focus vector: the point's position doesn't matter, you just need one point on the focus line, and you can compute one now, no? It's just a sum of the current position and current focus vector coordinates.
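As a rough sketch of this "Second" approach (assuming the view matrix is exposed as a flat, column-major 16-element array, the usual WebGL layout; the function name is hypothetical):

function decomposeView(m) {
    // The matrix columns are the images of the basis vectors and origin.
    return {
        right:       [m[0],  m[1],  m[2]],   // T(i)
        up:          [m[4],  m[5],  m[6]],   // T(j)
        focusVector: [m[8],  m[9],  m[10]],  // T(k), possibly times -1
        translation: [m[12], m[13], m[14]]   // T(O)
    };
}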
I hope my memory is right and I am not talking complete rubbish.
Just one question: there is a setter for the view matrix, so why do you want to store and set up/focus/position instead of directly storing and setting the view?
PS: caution, there could be a scale factor, in which case the matrix would be different, but I don't think there is one in XTK.
PS 2: i, j, k denote vectors; [number, number, number] is a 3D vector given by its coordinates.