EDIT: If the question is badly written, have a look at the video (3); it's the same link as at the bottom of this page.
I am trying to draw a very simple Bézier curve using ccBezierConfig and Cocos2D. Reading Wikipedia to understand control points a bit better, I found this image:
http://upload.wikimedia.org/wikipedia/commons/thumb/b/bf/Bezier_2_big.png/240px-Bezier_2_big.png
If you look at the Wikipedia page the image comes from, there is a cool animation. Have a look here.
This is the code I used:
CCSprite *r = [CCSprite spriteWithFile:@"hi.png"];
r.anchorPoint = CGPointMake(0.5f, 0.5f);
r.position = CGPointMake(0.0f, 200.0f);
ccBezierConfig bezier;
bezier.controlPoint_1 = CGPointMake(0.0f, 200.0f);
bezier.controlPoint_2 = CGPointMake(180.0f, 330.0f);
bezier.endPosition = CGPointMake(320.0f,200.0f);
id bezierForward = [CCBezierBy actionWithDuration:1 bezier:bezier];
[r runAction:bezierForward];
[self addChild:r z:0 tag:77];
The app runs in portrait mode, and my speculation, matching the control points in the image above with the ones in my code, was that:
sprite.position should correspond to P0
bezier.controlPoint_1 should correspond to P0
bezier.controlPoint_2 should correspond to P1
bezier.endPosition should correspond to P2
I tried two approaches: setting the position of the sprite, and not setting it.
I assumed that position should be the same as controlPoint_1, since in the Wikipedia image there are only three points.
I get an output I don't quite understand. I made a little video of it; it is a private YouTube video:
to see the video click here
OK, the answer is rather simple...
The quadratic Bézier curve is not the one drawn by cocos2d. Instead, check the same wiki page for the cubic Bézier curve. That's what you should be looking at.
Initial position is the position of the sprite (P0)
Control points 1 & 2 are P1 & P2, respectively.
End position is the end point (P3).
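In case a formula helps: the cubic curve has four points instead of three. A minimal C++ sketch of the standard cubic Bézier polynomial (not cocos2d's actual code), with P0 = sprite position, P1/P2 = controlPoint_1/controlPoint_2, and P3 = endPosition:

struct Vec2 { float x, y; };

// B(t) = (1-t)^3 P0 + 3 (1-t)^2 t P1 + 3 (1-t) t^2 P2 + t^3 P3, t in [0, 1]
Vec2 cubicBezier(Vec2 p0, Vec2 p1, Vec2 p2, Vec2 p3, float t) {
    float u = 1.0f - t;
    Vec2 r;
    r.x = u*u*u * p0.x + 3*u*u*t * p1.x + 3*u*t*t * p2.x + t*t*t * p3.x;
    r.y = u*u*u * p0.y + 3*u*u*t * p1.y + 3*u*t*t * p2.y + t*t*t * p3.y;
    return r;
}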
Nice video, btw, I enjoyed it xD.
Evidence from the CCActionInterval.h file in the cocos2d library:
/** An action that moves the target with a cubic Bezier curve by a certain distance.
*/
@interface CCBezierBy : CCActionInterval <NSCopying>
{
    ccBezierConfig config;
    CGPoint startPosition;
}
This link should help you construct your cubic Bézier curves visually, instead of tweaking the values and running the app.
Related
I'm absolutely new to the ROS/Gazebo world; this is probably a simple question, but I cannot find a good answer.
I have a simulated depth camera (Kinect) in a Gazebo scene. After some processing, I get a point of interest in the RGB image in pixel coordinates, and I want to retrieve its 3D coordinates in the world frame.
I can't understand how to do that.
I have tried compensating for the distortion given by the CameraInfo msg. I have also tried using a PointCloud with the pcl library, retrieving the point as cloud.at(x,y).
In both cases, the coordinates are not correct (I have put a small sphere at the coords given out by the program, to check whether they are correct or not).
Any help would be much appreciated. Thank you very much.
EDIT:
Starting from the PointCloud, I try to find the coords of the point by doing something like:
point = cloud.at(xInPixel, yInPixel);
point.x = point.x + cameraPos.x;
point.y = point.y + cameraPos.y;
point.z = point.z - cameraPos.z;
but the x, y, z coords I get do not seem to be correct.
The camera has a pitch angle of pi/2, so it points at the ground.
I am clearly missing something.
I assume you've seen the gazebo examples for the kinect (brief, full). You can get, as topics, the raw image, raw depth, and calculated pointcloud (by setting them in the config):
<imageTopicName>/camera/color/image_raw</imageTopicName>
<cameraInfoTopicName>/camera/color/camera_info</cameraInfoTopicName>
<depthImageTopicName>/camera/depth/image_raw</depthImageTopicName>
<depthImageCameraInfoTopicName>/camera/depth/camera_info</depthImageCameraInfoTopicName>
<pointCloudTopicName>/camera/depth/points</pointCloudTopicName>
Unless you need to do your own things with the image_raw for the rgb and depth frames (e.g. run ML over the rgb frame & find the corresponding depth point via the camera_infos), the pointcloud topic should be sufficient: it's the same as the pcl pointcloud in C++, if you include the right headers, as in the sketch below.
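A minimal sketch of a node that consumes that topic directly as a PCL cloud (assuming ROS1 and the topic name from the config above; pcl_ros converts the message for you):

#include <ros/ros.h>
#include <pcl_ros/point_cloud.h>
#include <pcl/point_types.h>

typedef pcl::PointCloud<pcl::PointXYZ> Cloud;

void onCloud(const Cloud::ConstPtr &cloud) {
    // The Gazebo depth cloud is organized, so at(col, row) returns the
    // 3D point (in the camera frame) behind that pixel.
    const pcl::PointXYZ &p = cloud->at(320, 240);
    ROS_INFO("camera-frame point: %f %f %f", p.x, p.y, p.z);
}

int main(int argc, char **argv) {
    ros::init(argc, argv, "cloud_listener");
    ros::NodeHandle nh;
    ros::Subscriber sub = nh.subscribe<Cloud>("/camera/depth/points", 1, onCloud);
    ros::spin();
    return 0;
}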
Edit (in response):
There's a magical thing in ros called tf/tf2. Your pointcloud, if you look at the msg.header.frame_id, says something like "camera", indicating it's in the camera frame. tf, like the messaging system in ros, happens in the background, but it looks/listens for transformations from one frame of reference to another frame. You can then transform/query the data in another frame. For example, if the camera is mounted at a rotation to your robot, you can specify a static transformation in your launch file. It seems like you're trying to do the transformation yourself, but you can make tf do it for you; this allows you to easily figure out where points are in the world/map frame, vs in the robot/base_link frame, or in the actuator/camera/etc frame.
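A sketch of that query with the classic tf listener (assuming ROS1; "map" and the one-second timeout are placeholders, use your actual world frame):

#include <ros/ros.h>
#include <tf/transform_listener.h>
#include <geometry_msgs/PointStamped.h>

// Transform a point stamped in the cloud's frame (msg.header.frame_id)
// into the world/map frame; tf resolves the chain of transforms for you.
geometry_msgs::PointStamped toMapFrame(tf::TransformListener &listener,
                                       const geometry_msgs::PointStamped &cameraPoint) {
    geometry_msgs::PointStamped mapPoint;
    listener.waitForTransform("map", cameraPoint.header.frame_id,
                              cameraPoint.header.stamp, ros::Duration(1.0));
    listener.transformPoint("map", cameraPoint, mapPoint);
    return mapPoint;
}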
I would also look at these ros wiki questions which demo a few different ways to do this, depending on what you want: ex1, ex2, ex3
I'm currently writing a C++ raytracer and I'm running into a classic raytracing problem: with a high vertical FOV, shapes get more distorted the closer they are to the edges of the image.
I know why this distortion happens, but I don't know how to resolve it (of course, reducing the FOV is an option, but I think there is something to change in my code). I've been browsing different computing forums but didn't find any way to resolve it.
Here's a screenshot to illustrate my problem.
I think the problem is that the view plane I'm projecting my rays onto isn't actually flat, but I don't know how to resolve this. If you have any tips to resolve it, I'm open to suggestions.
I'm using a right-handed coordinate system.
The camera system vectors, direction vector and light vector are normalized.
If you need some code to check something, I'll put it in an answer with the part you ask.
Code of the ray generation:
// PixelScreenX = (pixelX + 0.5) / imageWidth
// PixelCameraX = (2 * PixelScreenX - 1) * ImageAspectRatio * tan(fov / 2)
float x = (2 * (i + 0.5f) / (float)options.width - 1) *
    options.imageAspectRatio * options.scale;
// PixelScreenY = (pixelY + 0.5) / imageHeight
// PixelCameraY = (1 - 2 * PixelScreenY) * tan(fov / 2)
float y = (1 - 2 * (j + 0.5f) / (float)options.height) * options.scale;
Vec3f dir;
options.cameraToWorld.multDirMatrix(Vec3f(x, y, -1), dir);
dir.normalize();
newColor = _renderer->castRay(options.orig, dir, objects, options);
There is nothing wrong with your projection. It produces exactly what it should produce.
Let's consider the following figure to see how all the quantities interact:
We have the camera position, the field of view (as an angle) and the image plane. The image plane is the plane that you are projecting your 3D scene onto. Essentially, this represents your screen. When you are viewing your rendering on the screen, your eye serves as the camera. It sees the projected image and if it is positioned at the right point, it will see exactly what it would see if the actual 3D scene was there (neglecting effects like depth of field etc.)
Obviously, you cannot modify your screen (you could change the window size but let's stick with a constant-size image plane). Then, there is a direct relationship between the camera's position and the field of view. As the field of view increases, the camera moves closer and closer to the image plane. Like this:
Thus, if you are increasing your field of view in code, you need to move your eye closer to the screen to get the correct perception. You can actually try that with your image. Move your eye very close to the screen (I'm talking about 3cm). If you look at the outer spheres now, they actually look like real balls again.
In summary, the field of view should approximately match the geometry of the viewing setup. For a given screen size and average viewing distance, this can be calculated easily. If your viewing setup does not match your assumptions in code, 3D perception will break down.
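For example, a minimal sketch of that calculation (screen height and viewing distance in the same units; result in radians):

#include <cmath>

// Vertical FOV that matches a viewer at `distance` from a screen of
// height `screenHeight`: the half-angle is atan((h / 2) / d).
double matchingVerticalFov(double screenHeight, double distance) {
    return 2.0 * std::atan(screenHeight / (2.0 * distance));
}
// e.g. a 0.3 m tall window viewed from 0.6 m gives about 28 degrees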
I've looked around quite a bit and haven't been able to find a concise answer to this question. For my Cocos2D game I have integrated the Chipmunk physics engine. On initialization, I setup the boundaries of the 'playing field' by establishing a series of bodies and static shapes as follows:
- (void)initPlayingField {
// Physics parameters
CGSize fieldSize = _model.fieldSize;
CGFloat radius = 2.0;
CGFloat elasticity = 0.3;
CGFloat friction = 1.0;
// Bottom
CGPoint lowerLeft = ccp(0, 0);
CGPoint lowerRight = ccp(fieldSize.width, 0);
[self addStaticBodyToSpace:lowerLeft finish:lowerRight radius:radius elasticity:elasticity friction:friction];
// Left
CGPoint topLeft = ccp(0, fieldSize.height);
[self addStaticBodyToSpace:lowerLeft finish:topLeft radius:radius elasticity:elasticity friction:friction];
// Right
CGPoint topRight = ccp(fieldSize.width, fieldSize.height);
[self addStaticBodyToSpace:lowerRight finish:topRight radius:radius elasticity:elasticity friction:friction];
// Top
[self addStaticBodyToSpace:topLeft finish:topRight radius:radius elasticity:elasticity friction:friction];
}
This setup works, except for one thing: the positions of the boundaries appear to be off to the right by a significant number of pixels. After initializing a player and sending him towards the boundary walls, he does not bounce off the edges of the screen (as I would expect, having set up the bounding box at 0,0), but rather some number of pixels (maybe 30-50) inwards from the edges of the screen. What gives?
I initially thought the issue came from where I was rendering the player sprite, but after running it through the debugger, everything looks good. Are there special protocols to follow when using chipmunk's positions to render sprites? Do I need to multiply everything by some sort of PIXELS_PER_METER? Would it matter if they are in a sprite sheet? A sprite batch node? Here is how I set the sprite position in my view:
Player *player = [[RootModel sharedModel] player];
self.playerSprite.position = player.position;
self.playerSprite.rotation = player.rotation;
And here is how I set the position in my Model:
- (void)update:(CGFloat)dt {
self.position = self.body->p;
self.rotation = self.body->a;
}
(Yes, I know it's redundant to store the position because chipmunk does that inherently.)
Also, a bonus question for those who finished reading the post: how might you move a body at a constant velocity in whatever direction it is facing, only allowing it to turn left or right at any given time, but never slow down?
What's the shape of the physics body? Perhaps you simply forgot to make the shape the same size as the sprite. If you only consider the position, then the sprite will be able to move halfway off the screen at any border, because its position is at the center of the sprite.
To move a body at constant velocity, set its velocity while disabling friction (set to 0?) and allowing it to rebound off of collisions with its impact speed (in Box2D that would be restitution = 1, don't know about Chipmunk).
Have you tried debug drawing the collision shapes? Cocos2D 2.1 has my CCPhysicsDebugNode class in it that makes it pretty easy to add. That will let you know if your geometry is lined up with your graphics. How thick are your boundary shapes? Are you possibly adding the sprites to a parent node that might be offsetting them?
Chipmunk doesn't require you to tune it for any real units. You can just use pixels or whatever.
Lastly, @LearnCocos2D has the right idea about the constant velocity. Set friction to 0 and elasticity to 1.0. You will also probably want to check its velocity every frame and accelerate it back towards its normal velocity each frame; colliding with non-static geometry will cause it to lose speed.
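A sketch of that per-frame correction against the Chipmunk C API (accessor names are from Chipmunk 6.x; adjust for your version):

#include "chipmunk.h"

// Clamp the body's speed back to targetSpeed while keeping its heading.
// Call this once per frame, after stepping the space.
void maintainConstantSpeed(cpBody *body, cpFloat targetSpeed) {
    cpVect v = cpBodyGetVel(body);
    if (cpvlengthsq(v) > 0.0f) {
        cpBodySetVel(body, cpvmult(cpvnormalize(v), targetSpeed));
    }
}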
I am struggling with the interpretation of kinect depth data.
In order to obtain real-world distance from the Kinect, I used the following formula:
if (i < 2047) {
    depthToMeterTable[i] = i * -0.0030711016 + 3.3309495161;
} else {
    depthToMeterTable[i] = 0;
}
This formula gives something pretty good as a distance estimator.
However, I obtain strange output when visualizing a 90° wall corner.
The following image shows two different pieces of information. First, the violet lines represent the wall as I SHOULD see it: a 90° corner. The red dots represent the wall as seen from the Kinect. As you can see, the angle between the two planes is now bigger.
http://img843.imageshack.us/img843/4061/kinectbias.jpg
Do you have any idea where I could correct this bias, and how to do it?
Thank you for reading,
Al_th
I'm not familiar with that conversion formula (I'm also not sure how the rest of your depthToMeterTable gets filled, i.e. what formula is used there).
There's a built-in function in libfreenect for that though: freenect_camera_to_world
Before that utility function was added, I used Matt Fischer's conversion functions (RawDepthToMeters and DepthToWorld).
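For reference, the gist of that conversion, sketched from the standard pinhole model (the intrinsics fx/fy/cx/cy below are placeholders; substitute your own Kinect calibration values):

struct Vec3 { float x, y, z; };

// Fisher's fit maps the 11-bit raw disparity to meters.
// Note it is a reciprocal: raw maps to 1/distance.
float rawDepthToMeters(int raw) {
    if (raw < 2047)
        return 1.0f / (raw * -0.0030711016f + 3.3309495161f);
    return 0.0f;
}

// Standard pinhole back-projection:
// X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
Vec3 depthToWorld(int u, int v, int raw) {
    const float fx = 594.21f, fy = 591.04f;   // focal lengths (pixels), placeholder
    const float cx = 339.31f, cy = 242.74f;   // principal point, placeholder
    float z = rawDepthToMeters(raw);
    return Vec3{ (u - cx) * z / fx, (v - cy) * z / fy, z };
}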
HTH
I am using enableRetinaDisplay in my project and it is working very well except when I use this code.
//+++VRope
//create batchnode to render rope segments
CCSpriteBatchNode *ropeSegmentSprite = [CCSpriteBatchNode batchNodeWithFile:@"rope.png"];
[game addChild:ropeSegmentSprite];
//Create two cgpoints for start and end point of rope
CGPoint pointA = ccp(73, 330); //Top
CGPoint pointB = ccp(self.position.x +5, self.position.y +30); //Bottom
//create vrope using initWithPoints method
verletRope = [[VRope alloc] initWithPoints:pointA pointB:pointB spriteSheet:ropeSegmentSprite];
Instead of drawing one high-res image of the rope, this code draws two rope images. I know that the retina display is causing this, because I tested it on an iPhone 3GS and the simulator and it works great... until I test it on my iPhone 4, where it draws two ropes instead of one. Am I doing something wrong?
I know it's too late, but I found this question on the first page when searching Google, so I'll post this answer for others to find in the future.
In VRope.mm search for
[[[spriteSheet textureAtlas] texture] pixelsHigh]
and replace with
[[[spriteSheet textureAtlas] texture] pixelsHigh]/CC_CONTENT_SCALE_FACTOR()
That's it.