How to draw half an ellipse in Raphael JS?

I've been trying to understand the following code, but I can't seem to get control over what the parameters are doing. I have to draw half an ellipse at a certain location. Could anyone explain what the parameters of the path mean so I can master this shape? Thanks.
var curve4 = paper.path("M150,150 A100,70 0 1,1 150,10")
.attr({"stroke-width": 2, stroke: "red"});

OK, I worked out the syntax:
paper.path("M x_start,y_start A rx,ry x_axis_rotation large_arc_flag,sweep_flag x_end,y_end")
The rx,ry pair are the ellipse's radii rather than its width and height. The flag between the rotation and the direction, which I wasn't sure about at first, is the large-arc-flag: it chooses the longer or the shorter of the two possible arcs, while the last flag (the sweep-flag) sets the direction. I created a fiddle to play around with the ellipse parameters: https://jsfiddle.net/ansjovis/vgw3vdc8/4/
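As a worked example, here is a minimal sketch that draws the right-hand half of an ellipse with radii 100 and 70 (the coordinates and colour are just illustrative, and it assumes Raphael is already loaded on the page):
// The endpoints (150,10) and (150,150) are exactly 2 * ry = 140 apart vertically,
// so the arc is exactly half of the ellipse; sweep-flag = 1 picks the right-hand half.
var paper = Raphael(0, 0, 320, 200);
var halfEllipse = paper.path("M150,10 A100,70 0 0,1 150,150")
    .attr({"stroke-width": 2, stroke: "blue"});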

Related

Box2d: How to get cursor position to apply a velocity to a dynamic body in that direction?

I want to apply a velocity vector to a dynamic body in the cursor direction:
void Game::mousePressEvent(QMouseEvent *e){
double angle = atan2(realBall->GetPosition().y - e->pos().y(), realBall->GetPosition().x - e->pos().x());
realBall->SetLinearVelocity(b2Vec2(-cos(angle) * 50, -sin(angle) * 50));
}
But the dynamic body moves in the wrong direction, so I think the cursor position is wrong.
Thank you for the help!
First, you must know that for your code to work, your screen coordinates and the Box2D coordinates must match. Be aware that if you use screen coordinates in pixels, one pixel will correspond to one meter in Box2D. But let's assume that you have already taken all of this into account. I would then advise against using trigonometry for the calculation, since it makes mistakes easy. Simple vector operations are enough here: subtraction, scaling and normalizing a vector. You can try this: velocity = (cursor_position - real_ball_position).normalize().scale(50f). Box2D has a b2Vec2 class for vector operations; you can read about it in detail in the documentation.
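A minimal sketch of that vector arithmetic, written here in plain JavaScript for illustration (not Box2D code; with b2Vec2 in C++ the same steps apply):
// Compute a launch velocity of fixed speed pointing from the ball toward the cursor.
// cursor and ball are plain {x, y} points expressed in the same coordinate space.
function velocityToward(ball, cursor, speed) {
    var dx = cursor.x - ball.x;            // direction vector: cursor minus ball
    var dy = cursor.y - ball.y;
    var len = Math.sqrt(dx * dx + dy * dy);
    if (len === 0) return { x: 0, y: 0 };  // cursor exactly on the ball: no direction
    return { x: dx / len * speed,          // normalize, then scale to the desired speed
             y: dy / len * speed };
}
// e.g. velocityToward({x: 1, y: 2}, {x: 4, y: 6}, 50) gives {x: 30, y: 40}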

Rotating image around a point

I've been trying to solve this one for hours and I can't figure out where I'm going wrong.
On my page there is an image and a "selection frame". This frame can be moved and resized.
I am trying to make the image turn with the center point of the turn being the center of the frame.
I created a small handle at the top for rotation.
Here's the fiddle: http://jsfiddle.net/8PhqX/7/ (give it a minute to load)
The code in the fiddle is very long because I couldn't isolate the specific area relevant to my question. As you play around with it you'll see that the first rotation usually works fine, but then, things go crazy.
Here's the line of code for the rotation:
//selfRotator.handle.angle is the angle (clockwise) at which the rotation handle was rotated
//selfSelector.rotator.ox/oy is the position of the middle of the selection frame
//selfDefaults.imageArea.y is the y position of the section containing the image (because of the red stripe at the top)
//selfImageArea.page.startX/Y is the starting position of the image, stored when the drag begins
//rotating by angle, at center point of selection
selfImageArea.page.transform(
['r', -selfRotator.handle.angle, selfSelector.rotator.ox - selfImageArea.page.startX, selfSelector.rotator.oy - (selfImageArea.page.startY - selfDefaults.imageArea.y)]
)
//tracking the image's start position and compensating
selfImageArea.page.attr({
transform: "...T" + (selfImageArea.page.startX) + "," + (selfImageArea.page.startY - selfDefaults.imageArea.y)
});
It looks like it gets messed up because the getBBox values don't follow the picture's outline.
I've added gridlines to illustrate the problem.
Also, I've come across this code (https://groups.google.com/forum/#!topic/raphaeljs/b8YG8DfI__g) for a getBBoxRotated() function that should solve my issue, but I can't seem to implement it.
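For reference, the idea behind such a helper is simply to rotate the four corners of the element's untransformed bounding box around the rotation centre and take the min/max of the results. A rough sketch in plain JavaScript (not the code from that thread; bbox is the untransformed {x, y, width, height}, angle is in degrees, (cx, cy) is the rotation centre):
function rotatedBBox(bbox, angleDeg, cx, cy) {
    var rad = angleDeg * Math.PI / 180,
        cos = Math.cos(rad), sin = Math.sin(rad),
        corners = [
            [bbox.x, bbox.y],
            [bbox.x + bbox.width, bbox.y],
            [bbox.x, bbox.y + bbox.height],
            [bbox.x + bbox.width, bbox.y + bbox.height]
        ],
        xs = [], ys = [];
    for (var i = 0; i < corners.length; i++) {
        // rotate each corner about (cx, cy)
        var dx = corners[i][0] - cx, dy = corners[i][1] - cy;
        xs.push(cx + dx * cos - dy * sin);
        ys.push(cy + dx * sin + dy * cos);
    }
    var minX = Math.min.apply(null, xs), minY = Math.min.apply(null, ys);
    return { x: minX, y: minY,
             width: Math.max.apply(null, xs) - minX,
             height: Math.max.apply(null, ys) - minY };
}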

What is wrong with this attempt to render rotated ellipses in Qt?

1. Goal
My colleague and I have been trying to render rotated ellipses in Qt. The typical solution approach, as we understand it, consists of shifting the center of the ellipse to the origin of the coordinate system, doing the rotation there, and shifting back:
http://qt-project.org/doc/qt-4.8/qml-rotation.html
2. Sample Code
Based on the solution outlined in the link above, we came up with the following sample code:
// Constructors and destructors
RIEllipse(QRect rect, RIShape* parent, bool isFilled = false)
    : RIShape(parent, isFilled), _rect(rect), _angle(30)
{}
// Main functionality
virtual Status draw(QPainter& painter)
{
    const QPen& prevPen = painter.pen();
    painter.setPen(getContColor());
    const QBrush& prevBrush = painter.brush();
    painter.setBrush(getFillBrush(Qt::SolidPattern));
    // Get rectangle center
    QPoint center = _rect.center();
    // Center the ellipse at the origin (0,0)
    painter.translate(-center.x(), -center.y());
    // Rotate the ellipse around its center
    painter.rotate(_angle);
    // Move the rotated ellipse back to its initial location
    painter.translate(center.x(), center.y());
    // Draw the ellipse rotated around its center
    painter.drawEllipse(_rect);
    painter.setBrush(prevBrush);
    painter.setPen(prevPen);
    return IL_SUCCESS;
}
As you can see, we have hard coded the rotation angle to 30 degrees in this test sample.
3. Observations
The ellipses come out at wrong positions, oftentimes outside the canvas area.
4. Question
What is wrong about the sample code above?
Best regards,
Baldur
P.S. Thanks in advance for any constructive response.
P.P.S. Prior to posting this message, we searched around quite a bit on stackoverflow.com.
Qt image move/rotation seemed to reflect a solution approach similar to the link above.
In painter.translate(center.x(), center.y()); you shift your object by the amount of the current coordinates, which results in (2*center.x(), 2*center.y()). You may need:
painter.translate(- center.x(), - center.y());
The theory of moving an object back to the origin, rotating it, and then restoring the object's position is correct. However, the code you've presented is not translating and rotating the object at all, but translating and rotating the painter. In the example question you've referred to, they want to rotate the whole image about an object, which is why they move the painter to the object's centre before rotating.
The easiest way to do rotations about a QGraphicsItem is to initially define the item with its centre at the centre of the object, rather than at its top left corner. That way, any rotation will automatically be about the object's centre, without any need to translate the object.
To do this, you'd define the item's bounding rect (x, y, width, height) as (-width/2, -height/2, width, height).
Alternatively, assuming your item is inherited from QGraphicsItem or QGraphicsObject, you can use the function setTransformOriginPoint before any rotation.
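As an aside, the translate, rotate, translate-back ordering the question is aiming for looks like this with the HTML canvas 2D API (a JavaScript sketch rather than Qt code; QPainter's transform calls compose the same way, so the same ordering applies there):
// Rotate a rectangle about its own centre by composing painter-style transforms.
var canvas = document.createElement("canvas");
canvas.width = canvas.height = 300;
document.body.appendChild(canvas);
var ctx = canvas.getContext("2d");

var rect = { x: 100, y: 120, width: 120, height: 60 };
var cx = rect.x + rect.width / 2,
    cy = rect.y + rect.height / 2;

ctx.save();
ctx.translate(cx, cy);            // move the origin to the shape's centre...
ctx.rotate(30 * Math.PI / 180);   // ...rotate there...
ctx.translate(-cx, -cy);          // ...then move the origin back
ctx.strokeRect(rect.x, rect.y, rect.width, rect.height);  // drawn rotated about (cx, cy)
ctx.restore();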

Kinect 3D to 2D bias

I am struggling with the interpretation of Kinect depth data.
In order to obtain real-world distance from the Kinect, I used the following formula:
double depthToMeterTable[2048];              // lookup table indexed by the 11-bit raw depth value
for (int i = 0; i < 2048; i++) {
    if (i < 2047) {
        depthToMeterTable[i] = i * -0.0030711016 + 3.3309495161;
    } else {
        depthToMeterTable[i] = 0;            // raw value 2047 means "no reading"
    }
}
This formula gives something pretty good as a distance estimator.
However, I get strange output when visualising a 90° wall corner.
The following image shows two things: the violet lines represent the wall as I SHOULD see it (a 90° corner), while the red dots represent the wall as seen from the Kinect. As you can see, the angle between the two planes is now larger.
http://img843.imageshack.us/img843/4061/kinectbias.jpg
Do you have any idea where this bias comes from and how I could correct it?
Thank you for reading,
Al_th
I'm not familiar with that conversion formula (and I'm not sure how your depthToMeterTable gets filled, i.e. what formula is used there).
There's a built-in function in libfreenect for that, though: freenect_camera_to_world.
Before that utility function was added, I used Matt Fischer's conversion functions (RawDepthToMeters and DepthToWorld).
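For context, the per-pixel back-projection that such helpers perform is the standard pinhole camera model. A rough JavaScript sketch of the idea (the function name and the intrinsics below are placeholders, not values from those libraries; use your device's calibration):
// Back-project a depth pixel (px, py, depth in metres) to a 3D point.
// fx, fy are focal lengths in pixels, (cx, cy) is the principal point.
var intrinsics = { fx: 594.2, fy: 591.0, cx: 339.5, cy: 242.7 };  // placeholder values

function depthPixelToWorld(px, py, depthMeters, k) {
    return {
        x: (px - k.cx) * depthMeters / k.fx,
        y: (py - k.cy) * depthMeters / k.fy,
        z: depthMeters
    };
}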
HTH

Cocos2D 2.0 - trying to understand this puzzle with layers and sprites

Consider this: you create a new project on Cocos2D 2.0. You have the traditional Helloworld layer. You add a layer to it with the following structure:
Helloworld (cclayer)
│
┕━ baseLayer (cclayer)
│
┕━ myReducedNode [CCSprite node]
│
┕━ myFullSprite (ccsprite)
│
┕━ smallSprite (ccsprite)
myReducedNode is a node inside baseLayer, created using [CCSprite node], and has a scale applied to it, so when I apply that scale I reduce myFullSprite and all the smallSprites at the same time.
myFullSprite is a 1024x768 points sprite inside myReducedNode.
smallSprites are 230x348 points sprite inside myFullSprite.
Consider this craziness:
First I apply a scale of 1 to myReducedNode. When I drag smallSprite and check its coordinates, everything is fine. If I position smallSprite on the top left corner of myFullSprite, I read the center coordinate of smallSprite as (115,594), which is the correct value.
Then I apply a 0.8 scale to myReducedNode. Dragging smallSprite to the same top left corner of myFullSprite, Cocos now reports the center of smallSprite to be (17,641)?!
I am talking about local coordinates, I mean the position of smallSprite inside myFullSprite.
What is causing this? There's no apparent logic to this number... it has no relation to the scale applied to the top node.
What am I missing here? I've been banging my head against the wall for days trying to figure out this puzzle. Thanks.
More information. I hope this helps figure out why the coordinates have those values...
baseLayer position is (612, 389) on Helloworld.
myReducedNode position is (0,0) on baseLayer.
myFullSprite position is (0,0) on myReducedNode
I think you should take a look at convertToWorldSpace:. Since you are scaling and nesting things, transformations most likely apply to those coordinates.
Here is a question that might be useful, and this post on cocos2d too.
Try this:
CGPoint smallSpriteLocalPosition;
smallSpriteLocalPosition =
[smallSprite.parent convertToNodeSpace:smallSprite.position];
Then print out those coordinates and see if they register properly. That should give you the node (local) coordinates of the smallSprite relative to its parent, the fullSprite. You should also be able to convertToWorldSpace for coordinates within the window bounds.
This is what has worked for me in the past when working with child sprites; it can be a bit tricky. Make sure you use the proper variables in the convert call, otherwise you won't get the right data. Let me know if that works as I haven't tried it with layers that are three deep.
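For intuition, here is the bare arithmetic behind those space conversions for nested, uniformly scaled nodes, ignoring anchor points and rotation (plain JavaScript, not the Cocos2D API):
// position/scale describe each node relative to its parent; parentChain lists the nodes
// from the innermost space the point lives in outward toward the root.
function localToWorld(point, parentChain) {
    return parentChain.reduce(function (p, node) {
        // a local point is scaled by the node, then offset by the node's position
        return { x: p.x * node.scale + node.position.x,
                 y: p.y * node.scale + node.position.y };
    }, point);
}

function worldToLocal(point, parentChain) {
    // walk the chain in the opposite direction, undoing each offset and scale
    return parentChain.slice().reverse().reduce(function (p, node) {
        return { x: (p.x - node.position.x) / node.scale,
                 y: (p.y - node.position.y) / node.scale };
    }, point);
}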
After a few changes in code and several days of research and attempts, I conclude this is either a bug in Cocos2D or a lack of consistency between how Layers, Sprites and Nodes work (as suggested by LearnCocos2d), as there's no way to explain the obtained values. I will try to file a bug report on that.