Cocos2D 2.0 - trying to understand this puzzle with layers and sprites - cocos2d-iphone

Consider this: you create a new project on Cocos2D 2.0. You have the traditional Helloworld layer. You add a layer to it with the following structure:
Helloworld (CCLayer)
│
┕━ baseLayer (CCLayer)
   │
   ┕━ myReducedNode ([CCSprite node])
      │
      ┕━ myFullSprite (CCSprite)
         │
         ┕━ smallSprite (CCSprite)
myReducedNode is a node inside baseLayer, created with [CCSprite node], and it has a scale applied to it, so when I apply that scale I reduce myFullSprite and all the smallSprites at the same time.
myFullSprite is a 1024x768-point sprite inside myReducedNode.
The smallSprites are 230x348-point sprites inside myFullSprite.
Consider this craziness:
First I apply a scale of 1 to myReducedNode. When I drag smallSprite and check its coordinates, everything is fine: if I position smallSprite in the top-left corner of myFullSprite, I read the center coordinate of smallSprite as (115,594), which is the correct value.
Then I apply a scale of 0.8 to myReducedNode. Dragging smallSprite to the same top-left corner of myFullSprite, cocos2d now reports the center of smallSprite as (17,641)?!
I am talking about local coordinates, I mean the position of smallSprite inside myFullSprite.
What is causing this? There's no apparent logic to this number; it has no obvious relation to the scale applied to the top node.
What am I missing here? I have been banging my head against the wall for days trying to figure out this puzzle. Thanks.
More information. I hope this helps figure out why the coordinates have those values...
baseLayer position is (612, 389) on Helloworld.
myReducedNode position is (0,0) on baseLayer.
myFullSprite position is (0,0) on myReducedNode.

I think you should take a look at convertToWorldSpace:. Since you are scaling and nesting things, the transformations most likely apply to those coordinates.
Here is a question that might be useful, and this post on cocos2d too.

Try this:
CGPoint smallSpriteLocalPosition =
    [smallSprite.parent convertToNodeSpace:smallSprite.position];
Then print out those coordinates and see if they register properly. That should give you the node (local) coordinates of the smallSprite relative to its parent, the fullSprite. You should also be able to use convertToWorldSpace: for coordinates within the window bounds.
This is what has worked for me in the past when working with child sprites; it can be a bit tricky. Make sure you use the proper variables in the convert call, otherwise you won't get the right data. Let me know whether that works, as I haven't tried it with layers nested three deep.
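For illustration, here is a rough sketch in plain Python (not the cocos2d API) of the math a convertToNodeSpace-style call performs when the ancestors only translate and scale; anchor points and rotation are ignored here. It shows why a 0.8 scale on myReducedNode changes the local numbers you read back: the world offset gets divided by the accumulated scale.
# Rough sketch, not the cocos2d API: walk the ancestor chain from the root down,
# undoing each node's translation and scale (anchor points and rotation ignored).
def world_to_node_space(world_point, ancestors):
    """ancestors: list of (position, scale) pairs, root-most node first."""
    x, y = world_point
    for (px, py), scale in ancestors:
        x = (x - px) / scale
        y = (y - py) / scale
    return (x, y)

# Hypothetical touch point, using the positions given in the question:
chain = [((612, 389), 1.0),   # baseLayer on Helloworld
         ((0, 0), 0.8),       # myReducedNode, scaled to 0.8
         ((0, 0), 1.0)]       # myFullSprite
print(world_to_node_space((700.0, 500.0), chain))   # -> (110.0, 138.75)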

After a few changes in the code and several days of research and attempts, I conclude this is either a bug in Cocos2D or a lack of consistency in how layers, sprites and nodes work (as suggested by LearnCocos2d), as there's no way to explain the obtained values. I will try to file a bug report on that.

Related

How to draw half an ellipse in Raphael JS?

I've been trying to understand the following code, but I can't seem to get control over what the parameters are doing. I have to draw half an ellipse at a certain location. Could anyone explain what the parameters of the path mean, so I can master this shape? Thanks.
var curve4 = paper.path("M150,150 A100,70 0 1,1 150,10")
.attr({"stroke-width": 2, stroke: "red"});
OK, I worked out the syntax:
paper.path("M x_start,y_start A rx,ry rotation large_arc_flag,sweep_flag x_end,y_end")
Here rx and ry are the two radii of the ellipse, the large-arc flag picks the longer or shorter of the two possible arcs, and the sweep flag sets the drawing direction. I created a fiddle to play around with the ellipse parameters: https://jsfiddle.net/ansjovis/vgw3vdc8/4/
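As a small illustration (plain Python string building, not part of Raphael; the helper name is made up), this builds the path for half an ellipse. Because the two endpoints are diametrically opposite, the large-arc flag makes no difference here and the sweep flag picks which half is drawn.
def half_ellipse_path(cx, cy, rx, ry, sweep=1):
    # start at the bottom of the ellipse and arc to the top
    x_start, y_start = cx, cy + ry
    x_end, y_end = cx, cy - ry
    return f"M{x_start},{y_start} A{rx},{ry} 0 0,{sweep} {x_end},{y_end}"

print(half_ellipse_path(150, 80, 100, 70))   # "M150,150 A100,70 0 0,1 150,10"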

Python 3 Graphics Programming: how can I get a mouse click within a polygon shape?

So I'm working on a project for a class and I'm still trying to figure out how to go about doing something.
I am making a game with a board of squares or hexagons. They are either black or white, each colour being a state of being "flipped", and when you click one square/hexagon, it flips all the adjacent shapes too.
Here is an image of what I am aiming to create.
Assignment images
I have gotten it running with squares, but now I need to do it with hexagons. With the squares, I registered a mouse click as being within a square by checking the x and y location of the click against the square's bounds, and the state changes are kept in a list of values set up the same way the shapes themselves are stored in a list.
I will include a quick recording of the square program running in a folder I'm going to link.
Now, I believe I can't apply this kind of system to hexagons, since they don't really line up the way the squares did.
So how would I go about making a click register within a single hexagon on a grid? I have already drawn the grid, but I am stuck on how to register a click so that a hexagon can change its state from un-flipped to flipped. I'm pretty sure I know what to do for the state change itself, but I don't know how to go about the click detection. Would it involve making a separate class or something? I would appreciate any help with this.
I'll put a dropbox link here for the progress I made so far, and a pdf manual for graphics.py.
Dropbox: Python files
You can view the Python code in your web browser with Dropbox too; I don't really want to fill this page full of an entire wall of code.
Any help and feedback would be wonderful, thank you c:
So, TL;DR: how do you register a click within a polygon shape in Python so that it can change a value (within a list?) and change its visual appearance?
Just for the general side of your question, you can use a test to check if a point (x, y) is inside a polygon (formed by a list of x, y pairs).
Here's one such solution: http://www.ariel.com.au/a/python-point-int-poly.html
# determine if a point is inside a given polygon or not
# Polygon is a list of (x,y) pairs.
def point_inside_polygon(x, y, poly):
    n = len(poly)
    inside = False
    p1x, p1y = poly[0]
    for i in range(n + 1):
        p2x, p2y = poly[i % n]
        if y > min(p1y, p2y):
            if y <= max(p1y, p2y):
                if x <= max(p1x, p2x):
                    if p1y != p2y:
                        xinters = (y - p1y) * (p2x - p1x) / (p2y - p1y) + p1x
                    if p1x == p2x or x <= xinters:
                        inside = not inside
        p1x, p1y = p2x, p2y
    return inside
This can be used in a way that is quite symmetrical to your drawing code, since you build the same polygon vertex lists for drawing that you can reuse to test whether the cursor is inside a hex.
You can also modify the implementation above to work with the Point type you are using to draw the polygons.
The rest you should be able to figure out, especially considering that you managed the input handling and drawing for the square grid.
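As a rough sketch of how this could be wired up (the names, grid spacing and board layout are made up, not taken from your Dropbox code), you could store each hexagon's corner list next to its flipped state and test a click against every cell with the function above:
import math

def hexagon_corners(cx, cy, radius):
    """The six (x, y) corners of a hexagon centred on (cx, cy)."""
    return [(cx + radius * math.cos(math.radians(60 * i)),
             cy + radius * math.sin(math.radians(60 * i)))
            for i in range(6)]

# hypothetical 3x3 board; the spacing values are placeholders
board = [{"corners": hexagon_corners(100 + 90 * col, 100 + 80 * row, 40), "flipped": False}
         for row in range(3) for col in range(3)]

def handle_click(x, y):
    for cell in board:
        if point_inside_polygon(x, y, cell["corners"]):
            cell["flipped"] = not cell["flipped"]
            # redraw this hexagon (and its neighbours) in the colour for the new state
            break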

Rotating image around a point

I've been trying to solve this one for hours and I can't figure out where I am going wrong.
On my page there is an image and a "selection frame". This frame can be moved and resized.
I am trying to make the image turn with the center point of the turn being the center of the frame.
I created a small handle at the top for rotation.
Here's the fiddle: http://jsfiddle.net/8PhqX/7/ (give it a minute to load)
The code in the fiddle is very long because I couldn't isolate the specific area relevant to my question. As you play around with it you'll see that the first rotation usually works fine, but then, things go crazy.
Here's the codeline for the rotation:
//selfRotator.handle.angle is the angle(clockwise) at which the rotation handle was rotated
//selfSelector.rotator.ox/oy is the position of the middle of the selection frame
//selfDefaults.imageArea.y is the y position of the section with the image (because of the red stripe in the top)
//selfImageArea.page.startX/Y is starting position of the image storing its position when the drag begins
//rotating by angle, at center point of selection
selfImageArea.page.transform(
    ['r', -selfRotator.handle.angle,
     selfSelector.rotator.ox - selfImageArea.page.startX,
     selfSelector.rotator.oy - (selfImageArea.page.startY - selfDefaults.imageArea.y)]
);
//tracking the image's start position and compensating
selfImageArea.page.attr({
    transform: "...T" + selfImageArea.page.startX + "," +
               (selfImageArea.page.startY - selfDefaults.imageArea.y)
});
It looks like it gets messed up because of the getBBox values that don't follow the picture outlines.
I've added gridlines to illustrate the problem
Also, I've come across this code (https://groups.google.com/forum/#!topic/raphaeljs/b8YG8DfI__g) for a getBBoxRotated() function that should solve my issue, but I can't seem to implement it.
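For reference, here is a standalone sketch in plain Python (not Raphael's getBBox, and not the linked getBBoxRotated code) of the underlying idea: rotate the four corners of the element's untransformed box about the rotation centre, then take the min/max of the rotated corners to get a box that actually follows the picture outline.
import math

def rotated_bbox(x, y, width, height, angle_deg, cx, cy):
    a = math.radians(angle_deg)
    corners = [(x, y), (x + width, y), (x + width, y + height), (x, y + height)]
    rotated = []
    for px, py in corners:
        dx, dy = px - cx, py - cy
        rotated.append((cx + dx * math.cos(a) - dy * math.sin(a),
                        cy + dx * math.sin(a) + dy * math.cos(a)))
    xs = [p[0] for p in rotated]
    ys = [p[1] for p in rotated]
    return min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys)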

What is wrong with this attempt to render rotated ellipses in Qt?

1. Goal
My colleague and I have been trying to render rotated ellipses in Qt. The typical solution approach, as we understand it, consists of shifting the center of the ellipse to the origin of the coordinate system, doing the rotation there, and shifting back:
http://qt-project.org/doc/qt-4.8/qml-rotation.html
2. Sample Code
Based on the solution outlined in the link above, we came up with the following sample code:
// Constructors and destructors
RIEllipse(QRect rect, RIShape* parent, bool isFilled = false)
    : RIShape(parent, isFilled), _rect(rect), _angle(30)
{}

// Main functionality
virtual Status draw(QPainter& painter)
{
    const QPen& prevPen = painter.pen();
    painter.setPen(getContColor());
    const QBrush& prevBrush = painter.brush();
    painter.setBrush(getFillBrush(Qt::SolidPattern));

    // Get rectangle center
    QPoint center = _rect.center();

    // Center the ellipse at the origin (0,0)
    painter.translate(-center.x(), -center.y());

    // Rotate the ellipse around its center
    painter.rotate(_angle);

    // Move the rotated ellipse back to its initial location
    painter.translate(center.x(), center.y());

    // Draw the ellipse rotated around its center
    painter.drawEllipse(_rect);

    painter.setBrush(prevBrush);
    painter.setPen(prevPen);

    return IL_SUCCESS;
}
As you can see, we have hard coded the rotation angle to 30 degrees in this test sample.
3. Observations
The ellipses come out at wrong positions, oftentimes outside the canvas area.
4. Question
What is wrong about the sample code above?
Best regards,
Baldur
P.S. Thanks in advance for any constructive response.
P.P.S. Prior to posting this message, we searched around quite a bit on stackoverflow.com.
Qt image move/rotation seemed to reflect a solution approach similar to the link above.
In painter.translate(center.x(), center.y()); you shift your object by the amount of the current coordinate, which results in (2*center.x(), 2*center.y()). You may need:
painter.translate(- center.x(), - center.y());
The theory of moving an object back to its origin, rotating and then replacing the object's position is correct. However, the code you've presented is not translating and rotating the object at all, but translating and rotating the painter. In the example question that you've referred to, they're wanting to rotate the whole image about an object, which is why they move the painter to the object's centre before rotating.
The easiest way to do rotations about a GraphicsItem is to initially define the item with its centre in the centre of the object, rather than in its top-left corner. That way, any rotation will automatically be about the object's centre, without any need to translate the object.
To do this, you'd define the item with a bounding rect for x,y,width,height with (-width/2, -height/2, width, height).
Alternatively, assuming your item is inherited from QGraphicsItem or QGraphicsObject, you can use the function setTransformOriginPoint before any rotation.
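To make the "shift to the origin, rotate, shift back" reasoning concrete, here is a small sketch with plain 3x3 matrices (no Qt involved): composed in that order, the transform leaves the rectangle's centre where it was, which is exactly the behaviour the rotated ellipse needs.
import math

def translate(dx, dy):
    return [[1, 0, dx], [0, 1, dy], [0, 0, 1]]

def rotate(angle_deg):
    a = math.radians(angle_deg)
    return [[math.cos(a), -math.sin(a), 0],
            [math.sin(a),  math.cos(a), 0],
            [0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def apply(m, x, y):
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

cx, cy = 120.0, 80.0
# shift the centre to the origin, rotate by 30 degrees, shift back
m = matmul(matmul(translate(cx, cy), rotate(30)), translate(-cx, -cy))
print(apply(m, cx, cy))   # approximately (120.0, 80.0): the centre stays fixed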

CCSprite children coordinates transform fails when using CCLayerPanZoom and CCRenderTexture?

Thanks for reading.
I'm working on a setup in Cocos2D 1.x where I have a huge CCLayerPanZoom in a scene with free panning and zooming.
Every frame, I have to additionally draw a CCRenderTexture on top to create "darkness" (I'm cutting out the light). That works well.
Now I've added single sprites to the surface, and they are managed by Box2D. That works as well. I can translate to the RenderTexture where the light sources ought to be, and they render fine.
And then I wanted to add a HUD layer on top, by adding a CCLayer to the scene. That layer needs to contain several sprites stacked on top of each other, as user interface elements.
Only, all of these elements fail to draw where I need them to be: exactly in the center of the screen. The sprites added to the HUD layer are all off, and I have iterated through pretty much every variation of convertToWorldSpace, convertToNodeSpace, etc.
It is as if the constant scaling by the CCPanZoomLayer in the background throws off anchor points in the layer above each frame, and resetting them doesn't help. They all seem to default into one of the corners of the node bounding box they are attached to, as if their transform is blocked or set to zero when it comes to the drawing.
Has anyone run into this problem? Is this a known issue when using CCLayerPanZoom and drawing a custom CCRenderTexture on top each frame?
Ha! I found the culprit! There's a bug in Cocos2D's way of using Zwoptex data (I'm using Cocos2D v1.0.1).
It seems that when loading Zwoptex v3 data, the sprite frames' trim offset data is ignored when the sprite frame's anchor point is computed. The effect is that no sprite with a trim offset in its definition (e.g. in the plist) has its anchor point correctly set. Really strange... I wonder whether this has happened to anybody else? It's a glaring issue.
Here's how to reproduce:
Create any data for a sprite frame in Zwoptex v3 format (the one that uses the trim data). Make sure you actually have a trimmed sprite, i.e. the offset must be larger than zero and the image size must be larger than the source size.
Load in the sprite and try to position it at the center of the screen. You'll see it's off. Here's how to compute your anchor point correctly:
CCSprite *floor = [CCSprite spriteWithSpriteFrameName:@"Menu_OmeFloor.png"]; // create a sprite
CCSpriteFrame *frame = [[CCSpriteFrameCache sharedSpriteFrameCache] spriteFrameByName:@"Menu_OmeFloor.png"]; // get its frame to access the frame data
[floor setTextureRectInPixels:frame.rect rotated:frame.rotated untrimmedSize:frame.originalSizeInPixels]; // re-set its texture rect
// Ensure that the coordinates are right: the texture frame offset is not counted in when determining the normal anchor point:
float xa = 0.5 + (frame.offsetInPixels.x / frame.originalSizeInPixels.width);
float ya = 0.5 + (frame.offsetInPixels.y / frame.originalSizeInPixels.height);
[floor setAnchorPoint:ccp(xa, ya)];
floor.position = (where you need it);
Replace the 0.5 in the xa/ya formula with your required anchor point values.
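As a quick numeric check of that formula (the frame numbers here are made up, not taken from any real plist): for a frame with an untrimmed size of 200x100 px and a trim offset of (10, -8) px, the anchor moves away from the default (0.5, 0.5) to compensate for the trim.
offset_in_pixels = (10.0, -8.0)
original_size_in_pixels = (200.0, 100.0)

xa = 0.5 + offset_in_pixels[0] / original_size_in_pixels[0]
ya = 0.5 + offset_in_pixels[1] / original_size_in_pixels[1]
print(xa, ya)   # 0.55 0.42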