I was looking at a basic Box2D program, more specifically this one.
Everything is fairly simple and makes sense, except for this line:
Shape.SetAsBox((32.f/2)/SCALE, (32.f/2)/SCALE); // SCALE = 30
Now I know we divide by SCALE to convert between metres and pixels (1 m -> 30 px), but why is 32.f divided by 2? I don't understand the division by 2 if my box texture is 32x32 pixels.
From the manual:
groundBox.SetAsBox(50.0f, 10.0f);
The SetAsBox function takes the half-width and half-height (extents)
It is because the box is created around the center (0,0).
So,
x = (32.f/2)/SCALE;
y = (32.f/2)/SCALE;
SetAsBox(x, y);
will create a box with corners at (-x, -y), (-x, y), (x, -y) and (x, y), so it will be the expected size.
If you read the manual section 2.2 : http://www.box2d.org/manual.html#_Toc258082968
The SetAsBox function takes the half-width and half-height (extents)
They consider the extent ("50 m in each direction") and not the width ("100 m wide"). Hence the factor of 2.
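To make the conversion concrete, here is a minimal hedged C++ sketch; the SCALE constant and sprite size match the question, but the variable names around them are illustrative assumptions:

#include <Box2D/Box2D.h>

const float SCALE = 30.f;    // pixels per metre, as in the question
const float spritePx = 32.f; // full sprite width/height in pixels

// SetAsBox() takes HALF extents, so halve the pixel size first,
// then convert from pixels to metres.
float halfExtent = (spritePx / 2.f) / SCALE;

b2PolygonShape shape;
shape.SetAsBox(halfExtent, halfExtent); // the resulting box is 32x32 px on screen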
I've been looking around to find the best way to wrap bodies around the edges of a world.
I managed to do it using this topic:
How do I make a Box2D wrap around world?
Using SetTransform() I could make a body reappear on the other X/Z side.
Now let's say I have an object, for example a simple 10x10 box.
If half the box goes beyond the upper Y edge, I want the portion that goes beyond to appear at the bottom, while the part of the box still visible at the top stays there.
To summarize, I want a "real" wrap-around edge, like it used to be done in old games.
I hope I was clear enough...
Edit:
I've added a picture to explain what I mean:
Thanks
I have not used Box2D, but I have solved this problem before in Processing; hopefully the logic translates easily.
In Processing a rectangle is drawn with rect(x, y, rectangleWidth, rectangleHeight) at position (x, y), which is the top-left corner of the rectangle. The rectangle's width and height extend right and down from x and y respectively.
The idea is to draw the rectangle normally unless its bottom would fall past the bottom edge of the viewport (because the rectangle's height extends down from y). If it would, you instead draw two partial rectangles.
In the code below, height is the height of your viewport:
if (y < height - 10) {
  rect(x, y, 10, 10); // Normal case: the whole rectangle fits on screen
} else {
  rect(x, 0, 10, 10 - (height - y)); // Top partial rectangle (the wrapped part)
  rect(x, y, 10, height - y); // Bottom partial rectangle (the part still on screen)
}
Finally, I googled a bit and found this article on Unity:
http://gamedevelopment.tutsplus.com/articles/create-an-asteroids-like-screen-wrapping-effect-with-unity--gamedev-15055
In Box2D I created 8 ghost bodies, positioned as described in the article.
During the Box2D steps I added some logic to check where my original body is positioned.
When it reaches an edge, the corresponding ghost body appears at the opposite edge. It also works when the original body reaches a corner: ghost bodies appear at the other corners and edges.
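As a rough illustration of that setup, here is a hedged C++ sketch of the eight ghost offsets; the world size constants, the ghosts array and the syncGhosts() helper are hypothetical names, not from my actual code:

#include <Box2D/Box2D.h>

// Hypothetical world dimensions in metres.
const float WORLD_W = 100.f;
const float WORLD_H = 60.f;

// The eight offsets of the ghost copies: left, right, up, down and the four diagonals.
const b2Vec2 kGhostOffsets[8] = {
    b2Vec2( WORLD_W, 0.f),      b2Vec2(-WORLD_W, 0.f),
    b2Vec2( 0.f,  WORLD_H),     b2Vec2( 0.f, -WORLD_H),
    b2Vec2( WORLD_W,  WORLD_H), b2Vec2(-WORLD_W,  WORLD_H),
    b2Vec2( WORLD_W, -WORLD_H), b2Vec2(-WORLD_W, -WORLD_H)
};

// Called after each world step: keep every ghost at the original body's
// position plus its fixed offset, preserving the original's angle.
void syncGhosts(b2Body* original, b2Body* ghosts[8])
{
    for (int i = 0; i < 8; ++i)
    {
        b2Vec2 p = original->GetPosition() + kGhostOffsets[i];
        ghosts[i]->SetTransform(p, original->GetAngle());
    }
}

When the original body fully crosses an edge, you SetTransform() it to the opposite side, and the ghost that was already visible there takes its place.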
Assume that I took two panoramic images with a vertical offset of H, and each image is presented in equirectangular projection with size Xm by Ym. To do this, I placed my panoramic camera at a position A and took an image, then moved the camera H metres up and took another image.
I know that a point in image 1 with coordinates (X1, Y1) is the same point in image 2 at coordinates (X2, Y2) (assuming that X1 = X2, as we have only a vertical offset).
My question is: how can I calculate the range of the selected point (the point whose coordinates are (X1, Y1) in image 1 and (X2, Y2) in image 2) from point A (where the camera was when image 1 was taken)?
Yes, you can do it - hold on!!!
Key thing: y = the focal length of your lens - now I can do it!!!
So, I think your question can be re-stated more simply by saying that if you move your camera (on the right in the diagram) up H metres, a point moves down p pixels in the image taken from the new location.
Like this, if you imagine looking at the setup from the side as you take the picture.
If you know the micron spacing of the camera's CCD from its specification, you can convert p from pixels to metres to match the units of H.
Your range from the camera to the plane of the scene is given by x + y (both in red at the bottom of the diagram), and
x = H/tan(alpha)
y = p/tan(alpha)
so your range is
R = x + y = H/tan(alpha) + p/tan(alpha)
and
alpha = arctan(p/y)
where y is the focal length of your lens. As y is likely to be something like 50 mm, it is negligible compared with x, so, to a pretty reasonable approximation, your range is
R = H/tan(alpha)
with
alpha = arctan(p in metres / focal length)
Or, by similar triangles:
Range = (H x focal length of lens) / ((Y2 - Y1) x CCD photosite spacing)
being very careful to put everything in metres.
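As a quick numeric check of that formula, here is a hedged C++ sketch; all the sample values (1 m baseline, 50 mm lens, 6 micron photosites, a 50-pixel disparity) are assumptions for illustration only:

#include <cstdio>

int main()
{
    // Everything in metres, as stressed above.
    double H              = 1.0;    // vertical camera offset (assumed)
    double focalLength    = 0.050;  // 50 mm lens (assumed)
    double photositePitch = 6e-6;   // 6 micron CCD photosite spacing (assumed)
    double Y1 = 1200.0, Y2 = 1250.0; // matched point's row in each image (assumed)

    double p = (Y2 - Y1) * photositePitch; // displacement on the sensor, in metres
    double range = H * focalLength / p;    // similar-triangles estimate

    printf("Estimated range: %.1f m\n", range); // 1.0 * 0.050 / 3.0e-4 = 166.7 m
    return 0;
}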
Here is a shot in the dark. Given my understanding of the problem, you want to do something similar to computer stereo vision; I point you to http://en.wikipedia.org/wiki/Computer_stereo_vision to start. I am not sure whether this is still possible in the manner you are suggesting (it sounds like you may need some more physical constraints), but I do remember being able to correlate two 2D points in images after a strict translation. Think:
lambda [x, y, 1]^T = W [r1, tx; r2, ty; r3, tz] [x, y, z, 1]^T
where lambda is a scale factor, W is a 3x3 matrix holding the intrinsic parameters of your camera, r1, r2 and r3 are the row vectors that make up the 3x3 rotation matrix (in your case you can assume the identity, since you have only applied a translation), and tx, ty and tz are your translation components.
Since your two 2D points are projections of the same 3D point [x, y, z], that point is shared by both equations. I cannot say whether you can recover the actual x, y and z values, particularly for your depth calculation, but this is where I would start.
I'm using the Kinect for Windows SDK (v1.8)
I'm successfully reading motion from the Kinect, but I'm now wondering how to get the absolute position of the joint I'm tracking (the right hand).
I'm using the following function, NuiTransformSkeletonToDepthImage, but the returned values range from 100 to 200 in both the x and y coordinates.
Any suggestions for how to transform the returned coordinates to screen coordinates?
Got a feeling I'm missing something really obvious though...
Thank you in advance
OK, after quite a bit of looking around and trial and error, I found the solution.
When you call NuiTransformSkeletonToDepthImage(), you can specify a Kinect resolution using the NUI_IMAGE_RESOLUTION enum. I believe the values are 80x60, 320x240, 640x480 and 1280x960.
Depending on what resolution you decide to use, you'll need to divide the returned x and y coordinates by that resolution, then multiply them by your window size; that gives you the coordinates in your window.
I believe you can also specify the resolution when creating an instance of the Kinect Sensor interface.
An example, as my explanation isn't too concise:
float x, y;
const Vector4 fromPos = skeleton.SkeletonPositions[NUI_SKELETON_POSITION_HAND_RIGHT];
// Project the skeleton-space position into 320x240 depth-image coordinates.
NuiTransformSkeletonToDepthImage(fromPos, &x, &y, NUI_IMAGE_RESOLUTION_320x240);
// Scale from the depth-image resolution up to the window size.
x = x * ((float)WINDOW_WIDTH / 320.0f);
y = y * ((float)WINDOW_HEIGHT / 240.0f);
Any questions; feel free to ask.
I am creating a menu system for my game engine and want to know how to detect when the mouse is over a button. This is simple enough when the button is a square, rectangle or circle, but I was wondering how to handle irregularly shaped buttons.
Is this possible, and if it is, does the complexity mean that it is better to simply use a bounding area (square or circle)?
Make a bitmask out of the texture or surface data. Decide on a rule: for example, where the image is 100% transparent (or a certain color) the bitmask pixel is set to 0, otherwise it is set to 1. Do the same for your cursor. When you check for collision, simply check whether any bitmask bits set to 1 overlap.
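A minimal C++ sketch of that idea, assuming you already have the button's 8-bit RGBA pixel data; the BitMask struct and the makeMask()/hitTest() helpers are hypothetical names, not from any particular engine:

#include <cstdint>
#include <vector>

struct BitMask {
    int w = 0, h = 0;
    std::vector<bool> bits; // 1 = solid, 0 = transparent
};

// Build a mask from RGBA pixels: a pixel counts as solid when its
// alpha channel is above a threshold (the rule chosen here).
BitMask makeMask(const uint8_t* rgba, int w, int h, uint8_t alphaThreshold = 0)
{
    BitMask m{w, h, std::vector<bool>(static_cast<size_t>(w) * h)};
    for (int i = 0; i < w * h; ++i)
        m.bits[i] = rgba[i * 4 + 3] > alphaThreshold;
    return m;
}

// For a plain mouse-over test the cursor can be treated as a single
// point: the mouse is over the button when that mask bit is set.
bool hitTest(const BitMask& m, int localX, int localY)
{
    if (localX < 0 || localY < 0 || localX >= m.w || localY >= m.h)
        return false;
    return m.bits[static_cast<size_t>(localY) * m.w + localX];
}

For cursor-shape-vs-button overlap, rather than a single point, you would AND the two masks over their overlapping region, exactly as described above.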
The first thing that comes to my mind is to use mathematical functions. If you know the equation of the curve, you can tell whether a point is above or below it by substituting the point's x into the right-hand side and checking whether the result is greater or less than the point's y.
So if you have the simple curve y = x*x and want to check the point (1, 2), you substitute and compare:
point's y = 2
curve value = 1*1 = 1
2 > 1, so the point is above the curve. For the opposite situation, taking the point (2, 1), we get:
point's y = 1
curve value = 2*2 = 4
1 < 4, so the point is below the curve.
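A tiny hedged C++ version of that check (the isAboveCurve() name and the parabola are illustrative assumptions):

#include <cstdio>
#include <functional>

// Returns true when the point (px, py) lies above the curve y = f(x).
bool isAboveCurve(const std::function<double(double)>& f, double px, double py)
{
    return py > f(px);
}

int main()
{
    auto parabola = [](double x) { return x * x; };
    printf("%d\n", isAboveCurve(parabola, 1, 2)); // 1: (1, 2) is above y = x*x
    printf("%d\n", isAboveCurve(parabola, 2, 1)); // 0: (2, 1) is below y = x*x
    return 0;
}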
I have a scene which contains objects located anywhere in space and I'm making a trackball-like interface.
I'd like to make it so that I can move two separate sliders to rotate it around the x and y axes respectively:
glRotatef(drawRotateY,0.0,1.0f,0);
glRotatef(drawRotateX,1.0f,0.0,0.0);
//draw stuff in space
However, the above code won't work, because the X rotation then depends on the Y rotation.
How can I achieve this without using gluLookAt()?
Edit:
I'd like to point out that my setup is even simpler than a trackball interface. Basically, if the x slider's value is 80 and the y slider's is 60, rotate vertically by 80 degrees and horizontally by 60 degrees. I just need to make the two rotations independent of each other!
This code should get you started: http://www.cse.ohio-state.edu/~crawfis/Graphics/VirtualTrackball.html
It explains how to implement a virtual trackball using the current mouse position in the GL window.
You could probably use something like this:
// Treat the slider values as a rotation vector: its direction is the
// rotation axis and its magnitude is the angle (sqrtf is from <cmath>).
float len = sqrtf(drawRotateX * drawRotateX + drawRotateY * drawRotateY);
if (len > 0.0f)
    glRotatef(len, drawRotateX / len, drawRotateY / len, 0.0f);
When you say rotate vertically and horizontally, do you mean like an anti-aircraft gun - rotate around the vertical Z axis, to face in a particular compass heading (yaw) and then rotate to a particular elevation (pitch)?
If this is the case, then you just need to do your two rotations in the right order, and all will be well: in this example, the 'pitch' rotation must be applied to the object first, and then the 'yaw' rotation.
If you mean something more complicated (e.g. like the 'Cobra' spaceship in Elite) then you will need a more fiddly solution.
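For the anti-aircraft case, here is a hedged sketch of that ordering in the fixed-function pipeline (yawDegrees and pitchDegrees are assumed slider values; Z is taken as the vertical axis, as above). Note that OpenGL applies the rotation issued last in code to the object first, so issuing yaw first in code makes the pitch act in the gun's local frame:

glRotatef(yawDegrees,   0.0f, 0.0f, 1.0f); // compass heading around vertical Z
glRotatef(pitchDegrees, 1.0f, 0.0f, 0.0f); // elevation around the gun's local X
// ...draw the gun here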