Rotating an image around a point - Raphael

I've been trying to solve this for hours and I can't figure out where I'm going wrong.
On my page there is an image and a "selection frame". This frame can be moved and resized.
I am trying to make the image rotate, with the center point of the rotation being the center of the frame.
I created a small handle at the top for rotation.
Here's the fiddle: http://jsfiddle.net/8PhqX/7/ (give it a minute to load)
The code in the fiddle is very long because I couldn't isolate the specific area relevant to my question. As you play around with it you'll see that the first rotation usually works fine, but after that things go wrong.
Here's the code for the rotation:
//selfRotator.handle.angle is the angle (clockwise) by which the rotation handle was rotated
//selfSelector.rotator.ox/oy is the position of the middle of the selection frame
//selfDefaults.imageArea.y is the y position of the section containing the image (because of the red stripe at the top)
//selfImageArea.page.startX/Y is the starting position of the image, storing its position when the drag begins
//rotating by angle, at center point of selection
selfImageArea.page.transform(
    ['r', -selfRotator.handle.angle,
     selfSelector.rotator.ox - selfImageArea.page.startX,
     selfSelector.rotator.oy - (selfImageArea.page.startY - selfDefaults.imageArea.y)]
);
//tracking the image's start position and compensating
selfImageArea.page.attr({
    transform: "...T" + selfImageArea.page.startX + "," + (selfImageArea.page.startY - selfDefaults.imageArea.y)
});
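For reference, here is a stripped-down sketch of the pattern I'm trying to follow: a single rotation transform whose pivot is the center of the frame. The element names and numbers below are placeholders, not the actual fiddle code.

// Minimal sketch (placeholder names): rotate an element around the center
// of a separate "frame" rectangle using Raphael's "r,angle,cx,cy" transform.
var paper = Raphael("canvas", 600, 400);                 // assumes a <div id="canvas">
var img   = paper.image("photo.png", 50, 50, 300, 200);  // placeholder image
var frame = paper.rect(120, 90, 160, 120);

var box = frame.getBBox();
var cx  = box.x + box.width / 2;   // frame center = pivot point
var cy  = box.y + box.height / 2;

// Rotate the image by "angle" degrees around the frame's center.
function rotateAroundFrame(angle) {
    img.transform("r" + angle + "," + cx + "," + cy);
}

rotateAroundFrame(-30); // counter-clockwise, like -selfRotator.handle.angle above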
It looks like things get messed up because the getBBox values don't follow the picture's outline once it is rotated.
I've added gridlines to illustrate the problem.
Also, I've come across this code (https://groups.google.com/forum/#!topic/raphaeljs/b8YG8DfI__g) for a getBBoxRotated() function that should solve my issue, but I can't seem to implement it.
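For what it's worth, my rough understanding of the getBBoxRotated() approach is sketched below: take the untransformed bounding box, push its four corners through the element's current transformation matrix, and build a new axis-aligned box around the results. This is my own adaptation of the idea, not the code from that thread, so it may not match it exactly.

//rough sketch of the getBBoxRotated() idea (my adaptation, not the original code)
function getBBoxRotated(el) {
    var b = el.getBBox(true);   // bounding box ignoring transforms
    var m = el.matrix;          // the element's current transformation matrix
    var corners = [
        [b.x, b.y],
        [b.x + b.width, b.y],
        [b.x + b.width, b.y + b.height],
        [b.x, b.y + b.height]
    ];
    var xs = [], ys = [];
    for (var i = 0; i < corners.length; i++) {
        xs.push(m.x(corners[i][0], corners[i][1]));
        ys.push(m.y(corners[i][0], corners[i][1]));
    }
    var minX = Math.min.apply(Math, xs), maxX = Math.max.apply(Math, xs);
    var minY = Math.min.apply(Math, ys), maxY = Math.max.apply(Math, ys);
    return { x: minX, y: minY, width: maxX - minX, height: maxY - minY };
}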

Related

Mapping mouse position from full screen to smaller frame

I have been stuck on remapping my mouse position to a new frame for a few days and I am unsure what to do. I will provide images to describe my issue. The main problem is that I want to click on an object in my program and have the program highlight the object I select (in 3D space). I have this working perfectly when my application is in full-screen mode. I recently started rendering my scene into a smaller frame so that I can have editor tools on the sides (like Unity). Here is the transition (graphically) I made from working to not working:
So essentially the mouse coordinates go from (0,0) to (screenWidth, screenHeight). I want to map these coordinates to the range (frameStartX, frameStartY) to (frameStartX + frameWidth, frameStartY + frameHeight). I did some research on linearly transforming a number to scale it to a new range, so I thought I could do this:
float frameMousePosX = (mousePosX - 0) / (screenWidth - 0) * ((frameWidth + frameStartX) - frameStartX) + frameStartX;
float frameMousePosY = (mousePosY - 0) / (screenHeight - 0) * ((frameHeight + frameStartY) - frameStartY) + frameStartY;
I assumed this would work but it doesn't. It's not even close. I am really unsure what to do to get this transformation.
Once the transformation works, I would want it to read (0,0) at the bottom left of the frame, which is (x,y) in the attached image, and reach its maximum at the top right of the framed scene.
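For reference, here is the generic linear remap those two lines are based on, sketched as a small standalone function (written in plain JavaScript just to show the arithmetic; the names are illustrative):

// Linearly remap a value from the range [inMin, inMax] to [outMin, outMax].
function remap(value, inMin, inMax, outMin, outMax) {
    return (value - inMin) / (inMax - inMin) * (outMax - outMin) + outMin;
}

// The two lines above are then equivalent to:
//   frameMousePosX = remap(mousePosX, 0, screenWidth,  frameStartX, frameStartX + frameWidth);
//   frameMousePosY = remap(mousePosY, 0, screenHeight, frameStartY, frameStartY + frameHeight);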
Any help would be extremely appreciated.

What is wrong with this attempt to render rotated ellipses in Qt?

1. Goal
My colleague and I have been trying to render rotated ellipses in Qt. The typical solution approach, as we understand it, consists of shifting the center of the ellipse to the origin of the coordinate system, doing the rotation there, and shifting back:
http://qt-project.org/doc/qt-4.8/qml-rotation.html
2. Sample Code
Based on the solution outlined in the link above, we came up with the following sample code:
// Constructors and destructors
RIEllipse(QRect rect, RIShape* parent, bool isFilled = false)
: RIShape(parent, isFilled), _rect(rect), _angle(30)
{}
// Main functionality
virtual Status draw(QPainter& painter)
{
const QPen& prevPen = painter.pen();
painter.setPen(getContColor());
const QBrush& prevBrush = painter.brush();
painter.setBrush(getFillBrush(Qt::SolidPattern));
// Get rectangle center
QPoint center = _rect.center();
// Center the ellipse at the origin (0,0)
painter.translate(-center.x(), -center.y());
// Rotate the ellipse around its center
painter.rotate(_angle);
// Move the rotated ellipse back to its initial location
painter.translate(center.x(), center.y());
// Draw the ellipse rotated around its center
painter.drawEllipse(_rect);
painter.setBrush(prevBrush);
painter.setPen(prevPen);
return IL_SUCCESS;
}
As you can see, we have hard-coded the rotation angle to 30 degrees in this test sample.
3. Observations
The ellipses come out at wrong positions, oftentimes outside the canvas area.
4. Question
What is wrong about the sample code above?
Best regards,
Baldur
P.S. Thanks in advance for any constructive response.
P.P.S. Prior to posting this message, we searched around quite a bit on stackoverflow.com.
Qt image move/rotation seemed to reflect a solution approach similar to the link above.
In painter.translate(center.x(), center.y()); you shift your object by the amount of its current coordinate, which results in (2*center.x(), 2*center.y()). You may need:
painter.translate(-center.x(), -center.y());
The theory of moving an object back to its origin, rotating, and then restoring the object's position is correct. However, the code you've presented is not translating and rotating the object at all, but translating and rotating the painter. In the example question that you've referred to, they want to rotate the whole image about an object, which is why they move the painter to the object's centre before rotating.
The easiest way to do rotations about a QGraphicsItem is to initially define the item with its centre at the centre of the object, rather than at its top left corner. That way, any rotation will automatically be about the object's centre, without any need to translate the object.
To do this, you'd define the item with a bounding rect of (-width/2, -height/2, width, height) for x, y, width, height.
Alternatively, assuming your item is inherited from QGraphicsItem or QGraphicsObject, you can use the function setTransformOriginPoint before any rotation.

Cocos2D 2.0 - trying to understand this puzzle with layers and sprites

Consider this: you create a new project in Cocos2D 2.0. You have the traditional Helloworld layer. You add a layer to it with the following structure:
Helloworld (cclayer)
│
┕━ baseLayer (cclayer)
│
┕━ myReducedNode [CCSprite node]
│
┕━ myFullSprite (ccsprite)
│
┕━ smallSprite (ccsprite)
myReducedNode is a node inside baseLayer, created using [CCSprite node], with a scale applied to it, so when I apply that scale I reduce myFullSprite and all smallSprites at the same time.
myFullSprite is a 1024x768-point sprite inside myReducedNode.
smallSprites are 230x348-point sprites inside myFullSprite.
Consider this craziness:
First I apply a scale of 1 to myReducedNode. When I drag smallSprite and check its coordinates, everything is fine. If I position smallSprite on the top left corner of myFullSprite, I read the center coordinate of smallSprite as (115,594), which is the correct value.
Then I apply a 0.8 scale to myReducedNode. Dragging smallSprite to the same top left corner of myFullSprite, Cocos is now reporting the center of smallSprite to be (17,641)?!
I am talking about local coordinates, i.e. the position of smallSprite inside myFullSprite.
What is causing this? There's no apparent logic to this number... It has no relation to the scale applied to the top node.
What am I missing here? I have been banging my head against the wall for days trying to figure out this puzzle! Thanks.
More information. I hope this helps figure out why the coordinates have those values...
baseLayer position is (612, 389) on Helloworld.
myReducedNode position is (0,0) on baseLayer.
myFullSprite position is (0,0) on myReducedNode.
I think you should take a look at convertToWorldSpace:. Since you are scaling and nesting things, the transformations most likely apply to those coordinates.
Here you have a question that might be useful, and this post on cocos2d too.
Try this:
CGPoint smallSpriteLocalPosition =
    [smallSprite.parent convertToNodeSpace:smallSprite.position];
Then print out those coordinates and see if they register properly. That should give you the node (local) coordinates of the smallSprite relative to its parent, the fullSprite. You should also be able to convertToWorldSpace for coordinates within the window bounds.
This is what has worked for me in the past when working with child sprites; it can be a bit tricky. Make sure you use the proper variables in the convert call, otherwise you won't get the right data. Let me know if that works as I haven't tried it with layers that are three deep.
After a few changes in the code and several days of research and experimentation, I conclude this is either a bug in Cocos2D or a lack of consistency in how Layers, Sprites and Nodes work (as suggested by LearnCocos2d), as there's no way to explain the obtained values. I will try to file a bug report about it.

Animate CCSprite on Each Update

I have a CCSprite object whose on-screen (x,y) position I need to update as quickly as possible. It is an augmented reality app, so the on-screen position needs to appear fixed to a real-world location.
Currently, during each update I check the heading and attitude of the device, then move the sprite accordingly by determining the new x and y positions:
[spriteObject setPosition:ccp(newX, newY)];
Each degree of change in heading corresponds to 10 pixels of on-screen position, so by setting the position this way the sprite jumps around in 10-pixel steps, which looks stupid. I'd like to animate it smoothly, maybe by using:
[spriteObject runAction:[CCMoveTo actionWithDuration:0.2f position:ccp(newX, newY)]];
but the problem here is that a new position update comes in while the sprite is still animating, and it sort of screws the whole thing up. Does anyone know of a nice solution to this problem? Any help is much appreciated, as I've tried numerous failed solutions up to this point.
You can try simply animating your sprite's movement to the point: several times per second, run an animated position correction with a duration of 1/numberOfUpdates seconds, where numberOfUpdates is the number of updates per second. Something like:
- (void) onEnter
{
    [super onEnter];
    [self schedule:@selector(updatePositionAnimated) interval:0.2f];
}

- (void) updatePositionAnimated
{
    [spriteObject runAction:[CCMoveTo actionWithDuration:0.2f position:ccp(newX, newY)]];
}
I suppose you will have smooth enough animation in this case.

CCSprite children coordinates transform fails when using CCLayerPanZoom and CCRenderTexture?

Thanks for reading.
I'm working on a setup in Cocos2D 1.x where I have a huge CCLayerPanZoom in a scene with free panning and zooming.
Every frame, I have to additionally draw a CCRenderTexture on top to create "darkness" (I'm cutting out the light). That works well.
Now I've added single sprites to the surface, and they are managed by Box2D. That works as well. I can translate to the RenderTexture where the light sources ought to be, and they render fine.
And then I wanted to add a HUD layer on top, by adding a CCLayer to the scene. That layer needs to contain several sprites stacked on top of each other, as user interface elements.
Only, all of these elements fail to draw where I need them to be: exactly in the center of the screen. The sprites added to the HUD layer are all off, and I have iterated through pretty much every variation of convertToWorldSpace, convertToNodeSpace, etc.
It is as if the constant scaling by the CCLayerPanZoom in the background throws off the anchor points in the layer above each frame, and resetting them doesn't help. They all seem to default into one of the corners of the node bounding box they are attached to, as if their transform is blocked or set to zero when it comes to the drawing.
Has anyone run into this problem? Is this a known issue when using CCLayerPanZoom and drawing a custom CCRenderTexture on top each frame?
Ha! I found the culprit! There's a bug in Cocos2D's way of using Zwoptex data. (I'm using Cocos2D v1.0.1.)
It seems that when loading Zwoptex v3 data, the sprite frames' trim offset data is ignored when the sprite frame's anchor point is computed. The effect is that no sprite with a trim offset in its definition (e.g. in the plist) has its anchor point correctly set. Really strange... I wonder whether this has happened to anybody else? It's a glaring issue.
Here's how to reproduce:
Create any data for a sprite frame in Zwoptex v3 format (the one that uses the trim data). Make sure you actually have a trimmed sprite, i.e. the offset must be larger than zero and the image size must be larger than the source size.
Load in the sprite and try to position it at the center of the screen. You'll see it's off. Here's how to compute your anchor point correctly:
CCSprite *floor = [CCSprite spriteWithSpriteFrameName:@"Menu_OmeFloor.png"]; //create a sprite
CCSpriteFrame *frame = [[CCSpriteFrameCache sharedSpriteFrameCache] spriteFrameByName:@"Menu_OmeFloor.png"]; //get its frame to access the frame data
[floor setTextureRectInPixels:frame.rect rotated:frame.rotated untrimmedSize:frame.originalSizeInPixels]; //re-set its texture rect
//Ensure that the coordinates are right: the texture frame offset is not counted in when determining the normal anchor point:
CGFloat xa = 0.5 + (frame.offsetInPixels.x / frame.originalSizeInPixels.width);
CGFloat ya = 0.5 + (frame.offsetInPixels.y / frame.originalSizeInPixels.height);
[floor setAnchorPoint:ccp(xa, ya)];
floor.position = (where you need it);
Replace the 0.5 in the xa/ya formula with your required anchor point values.