I am trying to implement a 3D sound system for my space shooter game. I have everything ready (playing a sound with a different volume on each side, etc.), but I can't find the correct formula to calculate the volume for each side.
The general idea is that every time the player (ship) kills an enemy (the camera is always on top of the ship), an explosion is heard with the correct left and right volume. So if the enemy is to the right of the ship, the right channel is heard more, and likewise for the left case.
So I have
vector ship
vector enemy
and
playSound(left?, right?)
How do game engines calculate the left and right channels?
Finally, I solved it. I used something similar to what Ameoo said.
Here it is:
void Play3D(int id, vector3d ship, vector3d pos, float arenaWidth, float power)
{
    // Distance from a virtual "right ear" and "left ear", offset slightly
    // from the ship's centre, to the sound source.
    float disRight = calculate_distance(ship.x + 0.2f, ship.y, pos.x, pos.y);
    float disLeft  = calculate_distance(ship.x - 0.2f, ship.y, pos.x, pos.y);

    // The farther an ear is from the source, the quieter that channel plays.
    sf.Play(2, 1 - disLeft / arenaWidth * power, 1 - disRight / arenaWidth * power);
}
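For comparison, a more common approach in engines is constant-power panning: project the direction to the sound source onto the ship's right axis to get a pan value, then split the volume with a cos/sin law. A minimal sketch of that idea (vector3d is from the post above, but playSound, shipRight and maxDist are assumed names):

#include <cmath>

// Sketch: constant-power stereo panning plus linear distance attenuation.
void play3DPanned(vector3d ship, vector3d shipRight, vector3d enemy, float maxDist)
{
    float dx = enemy.x - ship.x;
    float dy = enemy.y - ship.y;
    float dist = std::sqrt(dx * dx + dy * dy);
    if (dist < 0.0001f) { playSound(1.0f, 1.0f); return; }   // source on top of the ship

    // pan in [-1, 1]: projection of the unit direction onto the ship's right axis
    float pan = (dx * shipRight.x + dy * shipRight.y) / dist;

    // simple linear distance attenuation in [0, 1]
    float atten = 1.0f - std::fmin(dist / maxDist, 1.0f);

    // constant-power pan law keeps perceived loudness stable across the stereo field
    float angle = (pan + 1.0f) * 0.25f * 3.14159265f;        // maps pan to 0..pi/2
    playSound(std::cos(angle) * atten,                       // left channel
              std::sin(angle) * atten);                      // right channel
}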
I'm absolutely new to the ROS/Gazebo world; this is probably a simple question, but I cannot find a good answer.
I have a simulated depth camera (Kinect) in a Gazebo scene. After some processing, I get a point of interest in the RGB image in pixel coordinates, and I want to retrieve its 3D coordinates in the world frame.
I can't understand how to do that.
I have tried compensating for the distortions given by the CameraInfo msg. I have tried using a PointCloud with the pcl library, retrieving the point as cloud.at(x,y).
In both cases the coordinates are not correct (I have placed a small sphere at the coordinates returned by the program, to check whether they are correct or not).
Any help would be much appreciated. Thank you very much.
EDIT:
Starting from the PointCloud, I try to find the coordinates of the point by doing something like:
point = cloud.at(xInPixel, yInPixel);
point.x = point.x + cameraPos.x;
point.y = point.y + cameraPos.y;
point.z = point.z - cameraPos.z;
but the x, y, z coordinates I get in point do not seem to be correct.
The camera has a pitch angle of pi/2, so that it points at the ground.
I am clearly missing something.
I assume you've seen the Gazebo examples for the Kinect (brief, full). You can get the raw image, raw depth, and computed point cloud as topics (by setting them in the config):
<imageTopicName>/camera/color/image_raw</imageTopicName>
<cameraInfoTopicName>/camera/color/camera_info</cameraInfoTopicName>
<depthImageTopicName>/camera/depth/image_raw</depthImageTopicName>
<depthImageCameraInfoTopicName>/camera/depth/camera_info</depthImageCameraInfoTopicName>
<pointCloudTopicName>/camera/depth/points</pointCloudTopicName>
Unless you need to do your own processing on the image_raw topics for the RGB and depth frames (e.g. running ML over the RGB frame and finding the corresponding depth point via the camera_infos), the point cloud topic should be sufficient - it's the same as the pcl point cloud in C++, if you include the right headers.
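As a minimal sketch of that (assuming ROS1 with pcl_ros available; the pixel coordinates u, v and the node name are placeholders):

#include <ros/ros.h>
#include <pcl_ros/point_cloud.h>   // lets a node subscribe directly to pcl clouds
#include <pcl/point_types.h>

typedef pcl::PointCloud<pcl::PointXYZ> Cloud;

void cloudCallback(const Cloud::ConstPtr& cloud)
{
    int u = 320, v = 240;                      // placeholder pixel of interest
    const pcl::PointXYZ& p = cloud->at(u, v);  // organized cloud: indexed by (column, row)
    ROS_INFO("camera-frame point: %f %f %f", p.x, p.y, p.z);
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "poi_lookup");
    ros::NodeHandle nh;
    ros::Subscriber sub = nh.subscribe("/camera/depth/points", 1, cloudCallback);
    ros::spin();
}

Note that the point you get back is expressed in the camera frame, not the world frame - that's where tf comes in (see the edit below).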
Edit (in response):
There's a magical thing in ROS called tf/tf2. Your point cloud, if you look at msg.header.frame_id, says something like "camera", indicating it's in the camera frame. tf, like the messaging system in ROS, happens in the background: it listens for transformations from one frame of reference to another, and lets you transform/query your data in a different frame. For example, if the camera is mounted at a rotation relative to your robot, you can specify a static transform in your launch file. It seems like you're trying to do the transformation yourself, but you can make tf do it for you; this lets you easily figure out where points are in the world/map frame, versus the robot/base_link frame, or the actuator/camera/etc. frame.
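A rough sketch of letting tf2 do the transform (the frame names "camera" and "world" are assumptions - use whatever your cloud header and Gazebo world actually publish; p is the pcl point read from the cloud earlier):

#include <ros/ros.h>
#include <tf2_ros/transform_listener.h>
#include <tf2_geometry_msgs/tf2_geometry_msgs.h>
#include <geometry_msgs/PointStamped.h>

// Inside a node, after ros::init; keep the buffer and listener alive
// for the lifetime of the node (e.g. as class members).
tf2_ros::Buffer tfBuffer;
tf2_ros::TransformListener tfListener(tfBuffer);

geometry_msgs::PointStamped camPt, worldPt;
camPt.header.frame_id = "camera";        // assumed: take it from cloud->header.frame_id
camPt.header.stamp = ros::Time(0);       // "latest available transform"
camPt.point.x = p.x;                     // p = the point read from the cloud
camPt.point.y = p.y;
camPt.point.z = p.z;

// tf2 looks up camera -> world and applies the full rotation + translation for you,
// which also accounts for the pi/2 pitch you mentioned.
worldPt = tfBuffer.transform(camPt, "world", ros::Duration(0.5));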
I would also look at these ros wiki questions which demo a few different ways to do this, depending on what you want: ex1, ex2, ex3
I am using OpenGL to create the 3D space.
I have a spaceship which can fire lasers.
Up until now I have had it so that the lasers simply go deeper into the Z-axis once fired.
But I am attempting to make a proper aiming system with crosshairs so that you can aim and shoot in any direction, and I have not been successful in trying to update the laser's path.
I have a direction vector based on the laser's end tip and start tip, which is obtained from the aiming.
How should I update the laser's X, Y, Z values (or vectors) properly so that it looks natural?
I think I see.
Let's say you start with the aiming direction as a 3D vector, call it "aimDir". Then in your update loop, add all three components (x, y and z) to the projectile's position. (OK, at the speed of light you wouldn't actually see any movement, but I think I see what you're going for here.)
void OnUpdate( float deltaT )
{
    // "move" the laser in the aiming direction, scaled by the amount of time elapsed
    // since our last update (you probably want another scale factor here to control
    // how "fast" the laser appears to move)
    Vector3 deltaLaser = deltaT * aimDir;  // calc 3d offset for this frame
    laserEndPoint += deltaLaser;           // add it to the end of the laser
}
Then in the render routine, draw the laser from the firing point to the new endpoint:
void OnRender()
{
    glBegin(GL_LINES);
    glVertex3f( gunPos.x, gunPos.y, gunPos.z );
    glVertex3f( laserEndPoint.x, laserEndPoint.y, laserEndPoint.z );
    glEnd();
}
I'm taking some liberties because I don't know if you're using glut, sdl or what. But I'm sure you have at least an update function and a render function.
Warning: just drawing a line from the gun to the end of the laser might be disappointing visually, but it will be a critical reference for adding better effects (particle systems, bloom filter, etc.). A quick improvement might be to make the front of the laser (line) a bright color and the back black, and/or draw multiple lines like a machine gun. Feel free to experiment ;-)
Also, if the source of the laser is directly in front of the viewer, you will just see a dot! So you may want to cheat a bit and fire from just below or to the right of the viewer, and then have it fire slightly up or in. Especially if you have one on each side (wings?) that appear to converge, as with conventional machine guns.
Hope that's helpful.
I am struggling with the interpretation of Kinect depth data.
In order to obtain real-world distance from the Kinect, I used the following formula:
if (i < 2047) {
    depthToMeterTable[i] = i * -0.0030711016 + 3.3309495161;
}
else {
    depthToMeterTable[i] = 0;
}
This formula gives something pretty good as a distance estimator.
However, I obtain strange output when visualising a 90° wall corner.
The following image shows two different pieces of information. First, the violet lines represent the wall as I SHOULD see it: a 90° corner. The red dots represent the wall as seen from the Kinect. As you can see, the angle between the two planes is now bigger.
http://img843.imageshack.us/img843/4061/kinectbias.jpg
Do you have any idea where this bias comes from, and how to correct it?
Thank you for reading,
Al_th
I'm not familiar with that conversion formula (and I'm not sure how your depthToMeterTable gets filled - what formula is used there).
There's a built-in function in libfreenect for this, though: freenect_camera_to_world
Before that utility function was added, I used Matt Fischer's conversion functions (RawDepthToMeters and DepthToWorld).
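For context, those helpers essentially do a raw-disparity-to-meters conversion followed by a pinhole back-projection with the Kinect intrinsics. A rough sketch of the idea (not the library code; fx, fy, cx, cy are placeholder intrinsics - use your own calibration):

#include <cmath>

struct Point3 { float x, y, z; };

// Convert a raw 11-bit Kinect depth value to meters (note the reciprocal).
float rawDepthToMeters(int raw)
{
    if (raw < 2047)
        return 1.0f / (raw * -0.0030711016f + 3.3309495161f);
    return 0.0f;
}

// Back-project pixel (u, v) with raw depth to a 3D point in the camera frame.
Point3 depthToCamera(int u, int v, int raw,
                     float fx, float fy, float cx, float cy)   // placeholder intrinsics
{
    float z = rawDepthToMeters(raw);
    Point3 p;
    p.x = (u - cx) * z / fx;
    p.y = (v - cy) * z / fy;
    p.z = z;
    return p;
}

One thing worth checking: the commonly used approximation takes the reciprocal of that linear expression (the raw value behaves like a disparity), so using the linear term directly as a distance could account for at least part of the corner distortion you are seeing.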
HTH
I have got a ball which bounces off walls. This bounce is simple; I just do this (code snippet):
if ( x - moveSpeed < 0 ) // Ball hit left wall
xVel *= -1;
However, I also have a rectangle which the player moves. The bounce off it currently works just like the bounce off walls.
But I figured out that when the ball moves as in the picture, it's impossible for me to make it go straight up again. I therefore need some kind of calculation involving the rectangle's movement to influence the outgoing angle of the ball. The rectangle always has a constant movement speed when it moves. The picture shows a rectangle moving to the left and the ball hitting it during that movement, which results in a 90 degree angle.
(Which shouldn't always be 90.)
Sorry about my crappy pictures; I hope they make sense. My math is rusty, which is why I could really use a push in the right direction.
Here is a tutorial on some physics (which is what you need to know), and you need to learn about vectors. The tutorial doesn't go over exactly what you are looking for (the reflection of the bounce and angles), but it is a GREAT place to start, because you'll need to know all of this to finish your project: Game Physics 101
If you want to do it the easy way, here is code in C++ that describes exactly how to do what you're looking for.
Edit
You should actually check out the second link first; it's a tutorial on exactly what you need to know. But if you are looking to do more than just make the ball bounce around, say include other moving objects or something like that, check out the first link.
No need for any fancy math here. My understanding of these types of games is that the angle the ball comes off the paddle is determined by where on the paddle it bounces. If it bounces in the middle, the current angle is preserved. As it bounces closer to the edge of the paddle, the angle is adjusted toward that side of the paddle. Think of the paddle as a rounded surface.
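A minimal sketch of that idea in C++ (the struct layouts, paddle.width and the 60° maximum are assumptions - adapt them to your own types): map the hit position to an offset in [-1, 1] and use it to choose the outgoing angle.

#include <cmath>

struct Ball   { float x, y, xVel, yVel; };   // assumed layout
struct Paddle { float x, width; };           // assumed layout, x = left edge

// Deflect the ball according to where it hit the paddle (paddle at the bottom).
void bounceOffPaddle(Ball& ball, const Paddle& paddle)
{
    // -1 at the paddle's left edge, 0 at the centre, +1 at the right edge
    float hitOffset = (ball.x - (paddle.x + paddle.width * 0.5f)) / (paddle.width * 0.5f);

    const float maxBounceAngle = 60.0f * 3.14159265f / 180.0f;  // steepest deflection
    float bounceAngle = hitOffset * maxBounceAngle;

    // keep the ball's speed, but redistribute it between the axes
    float speed = std::sqrt(ball.xVel * ball.xVel + ball.yVel * ball.yVel);
    ball.xVel = speed * std::sin(bounceAngle);   // sideways component from the hit position
    ball.yVel = -speed * std::cos(bounceAngle);  // always send the ball back up off the paddle
}

A hit in the centre gives a straight vertical bounce, while hits near the edges give progressively flatter angles, which is exactly the "rounded surface" feel described above.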
Going the route of simulating actual physics (as opposed to @Treebranche's answer, which is how I think those sorts of games really work) can get very complicated. You can consider friction, spin, duration of contact, etc. Here are a couple of links that discuss this, and a rough sketch of the simplest shortcut follows after them.
https://physics.stackexchange.com/questions/11686/finding-angular-velocity-and-regular-velocity-when-bouncing-off-a-surface-with-f
https://physics.stackexchange.com/questions/1142/is-there-a-2d-generalization-of-the-coefficient-of-restitution/
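If you only want the dominant effect of the paddle's own motion, rather than full rigid-body physics, a common shortcut is to transfer a fraction of the paddle's velocity into the ball's tangential velocity on contact. A rough sketch (the 0.5 transfer factor and the field names are assumptions, meant to be tuned):

struct Ball   { float xVel, yVel; };   // assumed layout
struct Paddle { float xVel; };         // paddle only moves horizontally here

// Simple "friction" model for a horizontally moving paddle hit from above.
void bounceOffMovingPaddle(Ball& ball, const Paddle& paddle)
{
    ball.yVel = -ball.yVel;            // ordinary reflection off the paddle surface
    ball.xVel += 0.5f * paddle.xVel;   // drag the ball along with some of the paddle's motion
}

This is what makes a ball that arrives straight down leave at an angle when the paddle is moving, which is the behaviour asked about in the question.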
This code demonstrates how to bounce the ball back or in another direction by reversing the ball's X or Y heading with ball.headingX = -ball.headingX and ball.headingY = -ball.headingY.
Putting theory into practice:
/* move the ball by its x and y headings */
ball.x += ball.headingX;
ball.y += ball.headingY;

/* if the ball is at the far right of the screen, reverse its x heading */
if (ball.x > PONG_SCREEN_RIGHT)
{
    ball.headingX = -ball.headingX;
}

/* if the ball is at the top or bottom of the screen, reverse its y heading */
if ((ball.y < PONG_SCREEN_TOP) || (ball.y > PONG_SCREEN_BOTTOM - 2))
{
    ball.headingY = -ball.headingY;
}

/* if the ball lands on the pad, bounce it back */
if ((ball.y >= PlayersPad.LEFT) && (ball.y <= PlayersPad.RIGHT) && (ball.x == PlayersPad.x))
{
    ball.headingX = -ball.headingX;
    playersScore += 10;
}

/* let the computer track the ball's movement */
if (ball.x > PONG_SCREEN_RIGHT - 18) computersPad.y = ball.y;

/* if the ball misses the pad, display "you missed" and reset it */
if (ball.x < PONG_SCREEN_LEFT)
{
    displayYouMissed();
    ball.x = ball_Default_X;
    ball.y = ball_Default_Y;
}
I have a maze game that uses cocos2d.
I have one main sprite that can save a "friend" sprite.
Once the "friend" sprite collides with the main sprite, the "friend" sprite will follow the main sprite everywhere.
Now I don't know how to make the "friend" sprite follow the main sprite at a constant distance and with smooth movement.
I mean, if the main sprite goes up, the "friend" should be behind the main sprite.
If the main sprite goes left, the "friend" sprite should be to the right of the main sprite.
Please help me and share some code...
You can implement this following behaviour by using the position of your main sprite as the target for the friend sprite. It involves implementing separation (maintaining a minimum distance), cohesion (maintaining a maximum distance) and easing (to make the movement smooth).
The exact algorithms (and some more) are detailed in a wonderful behavior animation paper by Craig Reynolds. There are also videos of the individual features and example source code (in C++).
The algorithm you need (it is a combination of multiple simpler ones) is Leader following
EDIT: I have found two straightforward implementations of the algorithms mentioned in the paper with viewable source code here and here. You will need to slightly recombine them from flocking (which is mostly following a centroid) to following a single leader. The language is Processing, which reads like Java-like pseudocode, so comprehension should be no problem. The C++ source code I mentioned earlier is also downloadable but does not explicitly feature leader following.
I am not aware of any cocos2d implementations out there.
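As a rough sketch of the idea in C++ (the Vec2 type, followDistance and easing values are assumptions; leaderDir is the leader's normalized facing direction):

#include <cmath>

struct Vec2 { float x, y; };   // assumed minimal 2D point type

// Move the follower towards a point a fixed distance behind the leader,
// easing in so the motion stays smooth and the spacing roughly constant.
Vec2 followLeader(Vec2 friendPos, Vec2 leaderPos, Vec2 leaderDir, float dt)
{
    const float followDistance = 40.0f;   // how far behind the leader to sit (tunable)
    const float easing = 4.0f;            // higher = snappier catch-up (tunable)

    // the target point sits behind the leader, opposite to its facing direction
    Vec2 target = { leaderPos.x - leaderDir.x * followDistance,
                    leaderPos.y - leaderDir.y * followDistance };

    // exponential easing: cover a fraction of the remaining gap each frame
    float t = easing * dt;
    if (t > 1.0f) t = 1.0f;               // clamp for very long frames
    friendPos.x += (target.x - friendPos.x) * t;
    friendPos.y += (target.y - friendPos.y) * t;
    return friendPos;
}

Because the target is always behind the leader, the friend ends up below the main sprite when it moves up and to its right when it moves left, which matches what the question asks for.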
I have a simple solution that kind of works fine. Follow the cocos2d documentation's getting started lesson 2, "Your first game". After implementing the touch event, use the following code to make seeker1 follow cocosGuy:
- (void) nextFrame:(ccTime)dt {
    // vector from the follower to the leader
    float dx = cocosGuy.position.x - seeker1.position.x;
    float dy = cocosGuy.position.y - seeker1.position.y;
    float d = sqrt(dx*dx + dy*dy);

    float v = 100; // follower speed, in points per second

    if (d > 1) {
        // move along the normalized direction, scaled by speed and frame time
        seeker1.position = ccp( seeker1.position.x + dx/d * v * dt,
                                seeker1.position.y + dy/d * v * dt );
    } else {
        // close enough: snap onto the leader
        seeker1.position = ccp( cocosGuy.position.x, cocosGuy.position.y );
    }
}
The idea is that at every step, the follower just needs to move towards the leader at a certain speed. The direction towards the leader can be calculated as shown in the code.