About Cocos2d Camera? - cocos2d-iphone

I am trying to figure out how to do a zoom in/out effect. I am using a tile-map background.
[self.Camera setEyeX:eyeY:eyeZ:]
[self.Camera setCenterX:centerY:centerZ:]
[self.Camera setUpX:upY:upZ:]
I don't know the differences among these methods or what their coordinate arguments represent.
Can someone help me, please?
Thanks, and sorry for my English.

It'll be a lot easier to just set tilemap.scale rather than using CCCamera.
The eye coordinates are the position of the camera in space. The center coordinates are nothing more than the point the camera looks at; the vector from eye to center is the direction the camera faces. You can ignore the up vector unless you want to rotate (roll) the camera's view. If I'm not mistaken, setting the up vector to (0, 1, 0) will ensure that the camera is not rotated and the coordinate system is what it should be (X extends to the right, Y extends upwards).
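If you do go the camera route, here is a minimal sketch of those semantics, assuming CCCamera's nine values feed a gluLookAt-style view transform (the coordinates and the zoom parameter are made-up example values, not cocos2d defaults):
#include <GL/glu.h>
// Dolly "zoom": pull the eye toward the center along the view axis.
void placeCamera(double zoom) // zoom > 1 brings the scene closer
{
    double eyeX = 240.0, eyeY = 160.0, eyeZ = 200.0 / zoom; // camera position
    double ctrX = 240.0, ctrY = 160.0, ctrZ = 0.0;          // point looked at
    double upX  = 0.0,   upY  = 1.0,   upZ  = 0.0;          // keeps the view unrolled
    gluLookAt(eyeX, eyeY, eyeZ, ctrX, ctrY, ctrZ, upX, upY, upZ);
}
With a perspective projection, halving the eye-to-center distance roughly doubles the apparent size of the tilemap, which is why a dolly like this can stand in for a zoom.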

Get screen position of the center of a mesh

My goal is to create an intuitive 3D manipulator to handle rotations of meshes displayed in my 3D editor, made with Qt / QML.
To do that, when the user clicks on an entity, 3 tori are spawned around the mesh, representing the Euler angles the user can act on. If the user then clicks on one torus, I want him to be able to rotate the mesh by dragging the mouse. The natural way users seem to do that is by dragging the mouse around the torus in the direction they want the mesh to rotate.
I therefore need a way to know how the user is rotating his mouse. I thought of a way: when the user clicks on the torus, I retrieve the position of the center of the torus. Then, I translate this world position to its screen position. Then, I monitor the angle between the cursor of the mouse and the center of the torus. The evolution of this angle should tell me everything I need: if the angle increases clockwise, the mesh should rotate clockwise and vice versa. This solution should yield a result good enough for my application, since it won't depend on the angle of the camera, or only very minimally.
However, I can't find a way to translate a world position to its screen position with Qt. I found the function QVector3D::project(const QMatrix4x4 &modelView, const QMatrix4x4 &projection, const QRect &viewport), but its documentation is very scarce and I couldn't find anyone using it... I might have found what to feed in for the projection argument (the projectionMatrix property from QCamera, here https://doc.qt.io/qt-5/qml-qt3d-render-camera.html), but that's it. What is the modelView? And the viewport? Is it simply QRect(0, 0, 1920, 1080)?
If anyone has any kind of lead, it would be amazing; I can't find anything anywhere and I'm kind of losing hope now. Or maybe another, simpler solution to my problem? Please note that the user can also freely move the camera around the mesh, which adds complexity.
Thanks a lot for your time, and have a nice day!
Yes, you should be able to translate from world position to screen position using the mentioned function. You are correct about the projection argument. As for the modelView argument, you should use the viewMatrix property from QCamera, which is missing from the official documentation but works for me. The viewport parameter represents the dimensions of the part of the screen you are projecting onto. You could use QRect(0, 0, 1920, 1080) if you use a full-screen Full HD projection; otherwise use something like QRect(QPoint(0, 0), view->size()), where view is the widget or window containing your 3D image. Be careful: the resulting screen position will have y = 0 at the bottom with y increasing upwards, which is the opposite of the usual screen coordinates.
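Putting that together, a minimal sketch (assuming Qt 5, a Qt3DRender::QCamera named camera, and a full-window viewport):
#include <Qt3DRender/QCamera>
#include <QVector3D>
#include <QRect>
#include <QPoint>
#include <QSize>
#include <cmath>
// World position -> screen position, with y flipped to the top-left
// origin that Qt mouse events use.
QPointF worldToScreen(const QVector3D &world,
                      Qt3DRender::QCamera *camera,
                      const QSize &viewSize)
{
    QVector3D s = world.project(camera->viewMatrix(),       // modelView
                                camera->projectionMatrix(), // projection
                                QRect(QPoint(0, 0), viewSize));
    return QPointF(s.x(), viewSize.height() - s.y());
}
// Angle of the mouse cursor around the projected torus center; compare
// successive values during a drag to get the rotation direction.
double angleAround(const QPointF &center, const QPointF &mouse)
{
    return std::atan2(mouse.y() - center.y(), mouse.x() - center.x());
}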

What are we trying to achieve by creating right and down vectors in this ray tracing depiction?

Often, we see the following picture when talking about ray tracing.
Here, I see the Z axis as the direction the camera would point if aimed straight ahead, and the XY grid as the grid the camera is seeing. From the camera's point of view, we see the usual Cartesian grid that my classmates and I are used to.
Recently I was examining code that simulates this. One thing that is not obvious to me from this picture is the requirement for the "right" and "down" vectors. Obviously we have look_at, which shows where the camera is looking. And campos is where the camera is located. But why do we need camright and camdown? What are we trying to construct?
Vect X (1, 0, 0);
Vect Y (0, 1, 0); // world "up"
Vect Z (0, 0, 1);
Vect campos (3, 1.5, -4); // camera position
Vect look_at (0, 0, 0);   // point the camera aims at
Vect diff_btw (           // campos - look_at
    campos.getVectX() - look_at.getVectX(),
    campos.getVectY() - look_at.getVectY(),
    campos.getVectZ() - look_at.getVectZ()
);
// Unit vector from the camera toward look_at: the view direction.
Vect camdir = diff_btw.negative().normalize();
// Perpendicular to world-up and the view direction: the camera's "right".
Vect camright = Y.crossProduct(camdir);
// Perpendicular to right and the view direction: the camera's "down".
Vect camdown = camright.crossProduct(camdir);
Camera scene_cam (campos, camdir, camright, camdown);
I was researching this question recently and found this post as well: Setting the up vector in the camera setup in a ray tracer
Here, the answerer says this: "My answer assumes that the XY plane is the "ground" in world space and that Z is the altitude. Imagine standing on the floor holding a camera. Its position has a positive Z component, and its view vector is nearly parallel to the ground. (In my diagram, it's pointing slightly upwards.) The plane of the camera's film (the uv grid) is perpendicular to the view vector. A common approach is to transform everything until the film plane coincides with the XY plane and the camera points along the Z axis. That's equivalent, but not what I'm describing here."
I'm not entirely sure why "transformations" are necessary. How is this point of view different from the picture at the top? Here they also say that they need an "up" vector and "right" vector to "construct an image plane". I'm not sure what an image plane is.
Could someone explain better the relationship between the physical representation and code representation?
How do you know that you always want the camera's "up" to be aligned with the vertical lines in the grid in your image?
Trying to explain it another way: the grid is not really there. That imaginary grid is the result of calculations involving the camera's direction vectors and the resolution you are rendering at. The grid is not what decides the camera angle.
When you are holding a physical camera in your hand, like the camera in your cell phone, don't you ever rotate the camera a little bit for effect? Or when filming, don't you sometimes slowly rotate the camera? Haven't you seen movies where the camera is rotated?
In the same way, you may want to rotate the "camera" in your ray-traced world. And rotating only the camera is much easier than rotating all the objects in your scene (maybe millions of them!).
Check out the example of rotating the camera from the movie Ice Age here:
https://youtu.be/22qniGtZhZ8?t=61
The (up or down) and right vectors construct the plane you project the scene onto. Since the scene is in 3D, you need to project it onto a 2D plane in order to render a picture to display on your screen.
If you have only the camera position and direction, you still don't know whether you're holding the camera right-side up, upside down, or tilted to the left or right.
Using the camera position, look-at, up (or down) and right vectors, we can uniquely define how the 3D scene is projected onto a 2D picture.
Concretely, look at the code and the picture. The 3D scene is the set of objects displayed. The image/projection plane is the grid in front of the camera. Its orientation is defined by the camright and camdir vectors (and because we assume the camera's line of sight is perpendicular to the image plane, camdown is uniquely defined by the other two).
The placement of the grid is based on the camera's position and intrinsic properties (it's not being displayed here, but the camera will have a specific field of view).
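To make the image plane concrete, here is a hedged sketch (using a stand-in struct rather than the Vect class above, and ignoring aspect ratio) of how exactly these vectors turn a pixel into a primary ray:
#include <cmath>
// Minimal stand-in vector type for illustration.
struct V3 { double x, y, z; };
static V3 add(V3 a, V3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static V3 mul(V3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
static V3 norm(V3 a) {
    double l = std::sqrt(a.x*a.x + a.y*a.y + a.z*a.z);
    return {a.x / l, a.y / l, a.z / l};
}
// Direction of the primary ray through pixel (px, py) of a
// width x height image, given the camera basis computed above.
V3 primaryRay(int px, int py, int width, int height,
              V3 camdir, V3 camright, V3 camdown)
{
    // Map the pixel to offsets in [-0.5, 0.5] across the image plane.
    double u = (px + 0.5) / width  - 0.5;
    double v = (py + 0.5) / height - 0.5;
    // One step forward along camdir, shifted u along camright and
    // v along camdown: that shifted point lies on the image plane,
    // and the ray from campos through it is the pixel's ray.
    return norm(add(camdir, add(mul(camright, u), mul(camdown, v))));
}
Without camright and camdown there would be no way to say where "one pixel to the right" points in world space; that is what they are constructed for.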

How to move an object depending on the camera in OpenGL?

As shown in the image below.
The user moves the ball by changing the x, y, z coordinates, which correspond to right/left, up/down and near/far movements respectively. But when we change the camera from position A to position B, things look weird: right doesn't look right any more, because the ball still moves in the previous coordinate frame, shown by the previous z in the image. How can I make the ball move in such a way that changing the camera doesn't affect the way its displacement looks?
A simple example: if we place the camera so that it is looking from the positive X axis, changes in the z coordinate now look like right and left movements. However, in reality changing z should always be near and far.
I thought I would answer it here:
I solved it by simply multiplying the camera's modelview matrix with the ball's coordinates.
Here is the code:
double cam[16];
glGetDoublev( GL_MODELVIEW_MATRIX, cam ); // column-major 4x4 view matrix
// Because cam[] is column-major, indexing it row-wise like this actually
// uses the columns of the rotation part, i.e. the transpose (= inverse)
// of the view rotation: it maps a camera-relative displacement back into
// world space. cam[3], cam[7] and cam[11] are bottom-row entries that are
// normally 0 in a pure view matrix.
matsumX=cam[0]*tx+cam[1]*ty+cam[2]*tz+cam[3];
matsumY=cam[4]*tx+cam[5]*ty+cam[6]*tz+cam[7];
matsumZ=cam[8]*tx+cam[9]*ty+cam[10]*tz+cam[11];
where tx, ty, tz are the ball's original coordinates and matsumX, matsumY, matsumZ are the new coordinates, which change according to the camera.
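An equivalent way to write the same idea, as a hedged sketch: striding through the column-major array gives the camera's axes in world space, and the displacement is a combination of those (ballPos is a hypothetical position array; this assumes the modelview currently holds only the camera transform):
#include <GL/gl.h>
double cam[16];
glGetDoublev(GL_MODELVIEW_MATRIX, cam);
// Rows of the view rotation are the camera's axes in world coordinates;
// column-major storage puts the elements of a row 4 apart.
double right[3] = { cam[0], cam[4], cam[8]  }; // camera +x in world space
double up[3]    = { cam[1], cam[5], cam[9]  }; // camera +y
double back[3]  = { cam[2], cam[6], cam[10] }; // camera +z (toward the viewer)
// Camera-relative input (tx = right, ty = up, tz = toward the viewer)
// becomes a world-space displacement of the ball.
for (int i = 0; i < 3; ++i)
    ballPos[i] += tx * right[i] + ty * up[i] + tz * back[i];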

How do I implement basic camera operations in OpenGL?

I'm trying to implement an application using OpenGL and I need to implement the basic camera movements: orbit, pan and zoom.
To make it a little clearer, I need Maya-like camera control. Due to the nature of the application, I can't use the good ol' "transform the scene to make it look like the camera moves". So I'm stuck using transform matrices, gluLookAt, and such.
Zoom I know is dead easy: I just have to hook into the depth component of the eye vector (gluLookAt). But I'm not quite sure how to implement the other two, pan and orbit. Has anyone ever done this?
I can't use the good ol' "transform the scene to make it look like the camera moves"
OpenGL has no camera. So you'll end up doing exactly this.
Zoom I know is dead easy: I just have to hook into the depth component of the eye vector (gluLookAt)
This is not a zoom, this is a dolly. Zooming means varying the limits of the projection volume, i.e. the extents of an ortho projection, or the field of view of a perspective one.
gluLookAt, which you've already run into, is your solution. First three arguments are the camera's position (x,y,z), next three are the camera's center (the point it's looking at), and the final three are the up vector (usually (0,1,0)), which defines the camera's y-z plane.*
It's pretty simple: you just call glLoadIdentity();, then gluLookAt(...), and then draw your scene as normal. Personally, I always do all the calculations on the CPU myself. I find that orbiting a point is an extremely common task. My template C/C++ code uses spherical coordinates and looks like:
// radians() is a degrees-to-radians helper (deg * pi / 180).
double camera_center[3] = {0.0,0.0,0.0}; // point the camera orbits
double camera_radius = 4.0;              // distance from that point
double camera_rot[2] = {0.0,0.0};        // azimuth, elevation (degrees)
// Spherical-to-Cartesian: the camera sits on a sphere of radius
// camera_radius centered on camera_center.
double camera_pos[3] = {
    camera_center[0] + camera_radius*cos(radians(camera_rot[0]))*cos(radians(camera_rot[1])),
    camera_center[1] + camera_radius*sin(radians(camera_rot[1])),
    camera_center[2] + camera_radius*sin(radians(camera_rot[0]))*cos(radians(camera_rot[1]))
};
gluLookAt(
    camera_pos[0],    camera_pos[1],    camera_pos[2],
    camera_center[0], camera_center[1], camera_center[2],
    0,1,0
);
Clearly you can adjust camera_radius, which will change the "zoom" of the camera, camera_rot, which will change the rotation of the camera about its axes, or camera_center, which will change the point about which the camera orbits.
*The only other tricky bit is learning exactly what all that means. To clarify, because the internet is lacking:
The position is the (x,y,z) position of the camera. Pretty straightforward.
The center is the (x,y,z) point the camera is focusing on. You're basically looking along an imaginary ray from the position to the center.
Now, your camera could still be rolled to any angle around this ray (e.g., it could be upside down, but still looking along the same direction). The up vector is a vector, not a position. It, along with that imaginary vector from the position to the center, forms a plane. This is the camera's y-z plane.
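The remaining Maya-style operation, pan, slides camera_center sideways along the camera's right and up axes; orbit and dolly are just changes to camera_rot and camera_radius. A hedged sketch building on the variables above (drag_dx/drag_dy are hypothetical per-frame mouse deltas; recompute camera_pos from the spherical formula afterwards):
#include <math.h>
static void cross3(const double a[3], const double b[3], double out[3]) {
    out[0] = a[1]*b[2] - a[2]*b[1];
    out[1] = a[2]*b[0] - a[0]*b[2];
    out[2] = a[0]*b[1] - a[1]*b[0];
}
static void normalize3(double v[3]) {
    double l = sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
    v[0] /= l; v[1] /= l; v[2] /= l;
}
void pan(double drag_dx, double drag_dy)
{
    double world_up[3] = {0.0, 1.0, 0.0};
    double fwd[3], right[3], up[3];
    for (int i = 0; i < 3; ++i) fwd[i] = camera_center[i] - camera_pos[i];
    normalize3(fwd);
    cross3(fwd, world_up, right); normalize3(right); // camera's right axis
    cross3(right, fwd, up);                          // camera's up axis
    double s = 0.002 * camera_radius; // scale pan speed with orbit distance
    for (int i = 0; i < 3; ++i)
        camera_center[i] += (-drag_dx * right[i] + drag_dy * up[i]) * s;
}
(This breaks down if the camera looks straight up or down, so clamp camera_rot[1] away from the poles.)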

Better modifying the render or moving the camera?

Since I googled for it without finding anything interesting, I would like to ask for some suggestions: is it better to scale/translate the rendered scene while keeping the camera position fixed, or to move the camera closer/further and rotate it while keeping the scene fixed?
I need zooming in/out, rotation about all 3 axes, and also this kind of rotation:
http://www.reknow.de/downloads/opengl/video.mp4
that is, if I first translate my scene and then apply a rotation, the rotation should always be about the window's center, not the translated one.
I need zooming in/out, rotation about all 3 axes, and also this kind of rotation
What you mean is probably not "zooming" but "panning". And in OpenGL you place the "camera" by moving the scene around, because there is no camera.
Zooming is a change in the focal length, and would be implemented by changing the FOV of the perspective.
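A hedged sketch of that distinction, assuming a GLU-style setup where fov_deg is your own state variable (names and sensitivities are made up):
#include <GL/glu.h>
double fov_deg = 60.0; // vertical field of view, in degrees
void applyProjection(double aspect)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    // Shrinking fov_deg narrows the view volume, so the scene appears
    // larger without the viewpoint moving: a true zoom.
    gluPerspective(fov_deg, aspect, 0.1, 100.0);
    glMatrixMode(GL_MODELVIEW);
}
void zoomBy(double wheel_steps)
{
    fov_deg -= 2.0 * wheel_steps;         // made-up sensitivity
    if (fov_deg < 5.0)   fov_deg = 5.0;   // clamp to sane limits
    if (fov_deg > 120.0) fov_deg = 120.0;
}
As for the rotation about the window's center: apply the rotation before the translation in the modelview (glRotate* before glTranslate*), so the already-translated scene is rotated about the view axis, which projects to the center of the window.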