Swinging/Bobbing Camera - c++

I have been searching the internet for a while now for a solution, with no luck. What I want to know is how to implement a swinging/bobbing motion for a 3D camera in OpenGL (or DirectX), like you find in Minecraft, Call of Duty, etc. I tried cycloids; while they work, I can't get the direction to work correctly.

What do you think about the following?
compute cam_pos, cam_dest, cam_up as usual.
compute cam_right as cross(cam_dest - cam_pos, cam_up), i.e. the view direction crossed with the up vector.
create a float camera_time (if walking, camera_time += delta_time; )
compute offset_factor = sin(camera_time);
Then you can call gluLookAt or a similar function as follows.
gluLookAt(cam_pos + cam_right * offset_factor, cam_dest + cam_right * offset_factor, cam_up)
This will make the camera swing from right to left. You can add the same for the cam_up vector with some tweaking.
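For completeness, here is a minimal sketch of that idea (assuming GLM for the vector math and the fixed-function gluLookAt; bob_amount and bob_speed are made-up tuning parameters you would adjust to taste):

#include <cmath>
#include <GL/glu.h>
#include <glm/glm.hpp>

void applyBobbingCamera(const glm::vec3& cam_pos, const glm::vec3& cam_dest,
                        const glm::vec3& cam_up, float camera_time)
{
    const float bob_amount = 0.05f;   // how far the camera sways
    const float bob_speed  = 6.0f;    // how fast it sways

    // right vector = view direction crossed with the up vector
    glm::vec3 cam_dir   = glm::normalize(cam_dest - cam_pos);
    glm::vec3 cam_right = glm::normalize(glm::cross(cam_dir, cam_up));

    // side-to-side sway; add a similar term along cam_up for a vertical bob
    float offset_factor = std::sin(camera_time * bob_speed) * bob_amount;
    glm::vec3 eye    = cam_pos  + cam_right * offset_factor;
    glm::vec3 center = cam_dest + cam_right * offset_factor;

    gluLookAt(eye.x, eye.y, eye.z,
              center.x, center.y, center.z,
              cam_up.x, cam_up.y, cam_up.z);
}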

Getting 3D world coordinate from (x,y) pixel coordinates

I'm absolutely new to the ROS/Gazebo world; this is probably a simple question, but I cannot find a good answer.
I have a simulated depth camera (Kinect) in a Gazebo scene. After some processing, I get a point of interest in the RGB image in pixel coordinates, and I want to retrieve its 3D coordinates in the world frame.
I can't understand how to do that.
I have tried compensating for the distortions given in the CameraInfo msg. I have also tried using a PointCloud with the pcl library, retrieving the point as cloud.at(x,y).
In both cases, the coordinates are not correct (I put a small sphere at the coords output by the program to check whether they are correct).
Any help would be much appreciated. Thank you very much.
EDIT:
Starting from the PointCloud, I try to find the coords of the point by doing something like:
point = cloud.at(xInPixel, yInPixel);
point.x = point.x + cameraPos.x;
point.y = point.y + cameraPos.y;
point.z = point.z - cameraPos.z;
but the x, y, z coords I get do not seem to be correct.
The camera has a pitch angle of pi/2, so it points at the ground.
I am clearly missing something.
I assume you've seen the gazebo examples for the kinect (brief, full). You can get, as topics, the raw image, raw depth, and calculated pointcloud (by setting them in the config):
<imageTopicName>/camera/color/image_raw</imageTopicName>
<cameraInfoTopicName>/camera/color/camera_info</cameraInfoTopicName>
<depthImageTopicName>/camera/depth/image_raw</depthImageTopicName>
<depthImageCameraInfoTopicName>/camera/depth/camera_info</depthImageCameraInfoTopicName>
<pointCloudTopicName>/camera/depth/points</pointCloudTopicName>
Unless you need to do your own things with the image_raw for rgb and depth frames (ex ML over rgb frame & find corresponding depth point via camera_infos), the pointcloud topic should be sufficient - it's the same as the pcl pointcloud in c++, if you include the right headers.
Edit (in response):
There's a magical thing in ros called tf/tf2. Your pointcloud, if you look at the msg.header.frame_id, says something like "camera", indicating it's in the camera frame. tf, like the messaging system in ros, happens in the background, but it looks/listens for transformations from one frame of reference to another frame. You can then transform/query the data in another frame. For example, if the camera is mounted at a rotation to your robot, you can specify a static transformation in your launch file. It seems like you're trying to do the transformation yourself, but you can make tf do it for you; this allows you to easily figure out where points are in the world/map frame, vs in the robot/base_link frame, or in the actuator/camera/etc frame.
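As a rough sketch of what that looks like in code (assuming ROS 1 with tf2; the "world" target frame name and the surrounding node setup are placeholders you would adapt to your own setup):

#include <ros/ros.h>
#include <geometry_msgs/PointStamped.h>
#include <tf2_ros/transform_listener.h>
#include <tf2_geometry_msgs/tf2_geometry_msgs.h>

// Somewhere in your node (after ros::init): keep the buffer and listener alive
// for the node's lifetime so they can accumulate data from /tf and /tf_static.
tf2_ros::Buffer tfBuffer;
tf2_ros::TransformListener tfListener(tfBuffer);

// Given a pcl point `point` taken from cloud.at(xInPixel, yInPixel):
geometry_msgs::PointStamped ptCamera, ptWorld;
ptCamera.header.frame_id = cloud.header.frame_id;  // the camera frame of the cloud
ptCamera.header.stamp    = ros::Time(0);           // use the latest available transform
ptCamera.point.x = point.x;
ptCamera.point.y = point.y;
ptCamera.point.z = point.z;

// Let tf walk the transform tree from the camera frame to the world frame.
ptWorld = tfBuffer.transform(ptCamera, "world", ros::Duration(1.0));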
I would also look at these ros wiki questions which demo a few different ways to do this, depending on what you want: ex1, ex2, ex3

Project point from point cloud to Image in OpenCV

I'm trying to project a point from 3D to 2D in OpenCV with C++. At the moment, I'm using cv::projectPoints(), but it's just not working out.
But first things first. I'm trying to write a program that finds an intersection between a point cloud and a line in space. So I calibrated two cameras, did rectification and matching using SGBM. Finally, I projected the disparity map to 3D using reprojectImageTo3D(). That all works very well, and in MeshLab I can visualize my point cloud.
After that I wrote an algorithm to find the intersection between the point cloud and a line which I coded manually. That works fine, too. I found a point in the point cloud about 1.5 mm away from the line, which is good enough for the beginning. So I took this point and tried to project it back to the image, so I could mark it. But here is the problem.
Now the point is not inside the image anymore. Since I took an intersection in the middle of the image, this should not be possible. I think the problem could be in the coordinate systems, as I don't know in which coordinate system the point cloud is written (left camera, right camera, or something else).
My projectPoints function looks like:
projectPoints(intersectionPoint3D, R, T, cameraMatrixLeft, distortionCoeffsLeft, intersectionPoint2D, noArray(), 0);
R and T are the rotation and translation from one camera to the other (I got them from stereoCalibrate). Here might be my mistake, but how can I fix it? I also tried setting them to (0,0,0), but that doesn't work either. I also tried converting the R matrix to a vector using Rodrigues. Still the same problem.
I'm sorry if this has been asked before, but I'm not sure how to search for this problem. I hope my text is clear enough to help me... if you need more information, I will gladly provide it.
Many thanks in advance.
You have a 3D point and you want to get the corresponding 2D location of it, right? If you have the camera calibration matrix (a 3x3 matrix), you can project the point into the image:
#include <opencv2/core.hpp>

// Project a 3D point (given in the camera's coordinate frame) to pixel
// coordinates using the pinhole model: u = fx * X / Z + cx, v = fy * Y / Z + cy.
cv::Point2d get2DFrom3D(cv::Point3d p, cv::Mat1d CameraMat)
{
    cv::Point2d pix;
    pix.x = (p.x * CameraMat(0, 0)) / p.z + CameraMat(0, 2);
    pix.y = (p.y * CameraMat(1, 1)) / p.z + CameraMat(1, 2);
    return pix;
}
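For example, called with a hypothetical camera matrix built from fx, fy, cx, cy (the numbers below are placeholders, not your calibration values):

#include <iostream>

int main()
{
    // hypothetical intrinsics of the (rectified) left camera
    cv::Mat1d K = (cv::Mat1d(3, 3) << 700.0,   0.0, 320.0,
                                        0.0, 700.0, 240.0,
                                        0.0,   0.0,   1.0);

    cv::Point3d p(0.1, -0.05, 1.5);       // 3D point in the camera's coordinate frame
    cv::Point2d pix = get2DFrom3D(p, K);  // pixel location in the image
    std::cout << pix << std::endl;
    return 0;
}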

How to rotate object using the 3D graphics pipeline ( Direct3D/GL )?

I have some problems with trying to animate the rotation of mesh objects.
If the rotation is applied only once, everything is fine. The meshes are rotated normally and the final image from the WebGL buffer looks correct.
http://s22.postimg.org/nmzvt9zzl/311.png
But if the rotation is applied in a loop (on each new frame) then the meshes start to look very weird; see the next screenshot:
http://s22.postimg.org/j2dpecga9/312.png
I won't provide the full program code here, because the issue comes down to incorrect 3D graphics handling.
I think some OpenGL/Direct3D developers may be able to give advice on how to fix it, because this question relates to the 3D programming subject and to some specific GL or D3D function/method. Also, I think the way rotation works is the same in both OpenGL and Direct3D, because of linear algebra and affine transformations.
If you are really interested in what I'm using, the answer is WebGL.
Let me describe how I rotate an object.
The rotation itself is done using quaternions. Every mesh object I define has its own quaternion property.
To rotate an object, the method rotate() does the following:
// Some kind of pseudo-code
function rotateMesh( vector, angle ) {
    var tempQuaternion = math.convertRotationToQuaternion( vector, angle );
    this.quaternion = math.multiplyQuaternions( this.quaternion, tempQuaternion );
}
I use the following piece of code in the Renderer class to handle the mesh translation and rotation:
// Some kind of pseudo-code
// for each mesh, which is added to scene
modelViewMatrix = new IdentityMatrix()
translateMatrixByVector( modelViewMatrix, mesh.position )
modelViewMatrix.multiplyByMatrix( mesh.quaternion.toMatrix() )
So... I want to ask you whether the logic above is correct. If it is, I will provide the source of the math functions used for quaternions, rotations, etc.
If the logic above is incorrect, then I think it makes no sense to provide anything else, because the main logic needs to be fixed first.
Quaternion multiplication is not commutative, i.e., if A and B are quaternions then A * B != B * A. If you want to rotate quaternion A by quaternion B, you need to do A = B * A, so this:
this.quaternion = math.multiplyQuaternions( this.quaternion, tempQuaternion );
should have its arguments reversed.
In addition, as mentioned by @ratchet-freak in the comments, you should make sure your quaternions are always normalized; otherwise, transformations other than rotation may happen.
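For illustration, here is a small C++ sketch of the corrected rotate step using GLM (an assumption; the poster uses their own math library), with the new rotation applied on the left and the result re-normalized:

#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

// Rotate `orientation` by `angle` radians about `axis`.
void rotateMesh(glm::quat& orientation, const glm::vec3& axis, float angle)
{
    glm::quat delta = glm::angleAxis(angle, glm::normalize(axis));
    // The new rotation goes on the LEFT; re-normalize to avoid drift
    // accumulating over many frames.
    orientation = glm::normalize(delta * orientation);
}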

C++ Projectile Trajectory

I am using OpenGL to create the 3D space.
I have a spaceship which can fire lasers.
Up until now I have had it so that the lasers simply go deeper into the Z-axis once fired.
Now I am attempting to make a proper aiming system with crosshairs so that you can aim and shoot in any direction, but I have not been successful in updating the laser's path.
I have a directional vector based on the laser's end tip and start tip, which is obtained from the aiming.
How should I update the laser's X,Y,Z values (or vectors) properly so that it looks natural?
I think I see.
Let's say you start with the aiming direction as a 3D vector, call it "aimDir". Then in your update loop, add all 3 components (x, y and z) to the projectile's "position". (OK, at the speed of light you wouldn't actually see any movement, but I think I see what you're going for here.)
void OnUpdate( float deltaT )
{
    // "move" the laser in the aiming direction, scaled by the amount of time elapsed
    // since our last update (you probably want another scale factor here to control
    // how "fast" the laser appears to move)
    Vector3 deltaLaser = deltaT * aimDir;  // calc 3d offset for this frame
    laserEndPoint += deltaLaser;           // add it to the end of the laser
}
then in the render routine draw the laser from the firing point to the new endpoint:
void OnRender()
{
    glBegin(GL_LINES);
    glVertex3f( gunPos.x, gunPos.y, gunPos.z );
    glVertex3f( laserEndPoint.x, laserEndPoint.y, laserEndPoint.z );
    glEnd();
}
I'm taking some liberties because I don't know if you're using glut, sdl or what. But I'm sure you have at least an update function and a render function.
Warning, just drawing a line from the gun to the end of the laser might be disappointing visually, but it will be a critical reference for adding better effects (particle systems, bloom filter, etc.). A quick improvement might be to make the front of the laser (line) a bright color and the back black. And/or make multiple lines like a machine gun. Feel free to experiment ;-)
Also, if the source of the laser is directly in front of the viewer you will just see a dot! So you may want to cheat a bit and fire from just below or to the right of the viewer, and then have it fire slightly up or in. Especially if you have one on each side (wing?) that appear to converge, as in conventional machine guns.
Hope that's helpful.
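Coming back to the direction vector from the aiming: a minimal sketch (the Vector3 type, the normalize() helper, and the endTip/startTip/laserSpeed names are assumptions, not your actual code) of how aimDir and the extra speed factor mentioned above could be set up:

// Derive the firing direction once, when the laser is fired.
Vector3 aimDir = normalize(endTip - startTip);  // unit direction from the aiming
float laserSpeed = 50.0f;                       // world units per second, tune to taste

// Then in OnUpdate:
laserEndPoint += aimDir * (laserSpeed * deltaT);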

Inverse Interpolation with angles greater than Pi?

I'm making an API for skeletal animation. Right now it works fine, except for one case: let's say you want to go from 2.0f to 1.0f. It will end up doing almost a full circle when it should only do about 1/6th of one.
I think I've got a way to find out whether it should go counter-clockwise, but I'm not sure how to use it with this:
bool CCW = fmod( (endKeyFrame->getAngle() -
                  startKeyFrame->getAngle() + TWO_PI), TWO_PI) > 3.141592;

remainingInterpolationFrames = endKeyFrame->getFrame() - startKeyFrame->getFrame();

//Linear interpolation
curIncreaseAngle = (endKeyFrame->getAngle() -
                    startKeyFrame->getAngle()) / remainingInterpolationFrames;
Thanks
I think this may help. Especially sections 8,9 and 30.
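For reference, one common way to handle this (a sketch, not taken from the linked material) is to wrap the angular difference into (-PI, PI] so the interpolation always takes the short way around:

#include <cmath>

const float PI     = 3.14159265f;
const float TWO_PI = 2.0f * PI;

// Smallest signed angle that takes you from startAngle to endAngle.
float shortestDelta(float startAngle, float endAngle)
{
    float delta = std::fmod(endAngle - startAngle + PI, TWO_PI);
    if (delta < 0.0f)
        delta += TWO_PI;
    return delta - PI;
}

// Linear interpolation using the wrapped delta:
// curIncreaseAngle = shortestDelta(startKeyFrame->getAngle(), endKeyFrame->getAngle())
//                    / remainingInterpolationFrames;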