In Coin3D, I have a scene set up, and the user can view the scene from various directions using X, Y and Z buttons that move the camera.
When one of these buttons is selected, I would like the viewer to basically crop the output: the camera needs to zoom in as close as possible so that the rendering is fully visible at the current viewing angle.
I have tried various things, including setting Camera->heightAngle and calling Camera->viewAll(), but viewAll() will make sure the whole scene is visible in all three dimensions.
I need a better solution.
Please assist.
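For reference, a minimal sketch of the viewAll() attempt, written against the Pivy Python bindings for Coin3D (the same calls exist in C++; `root` and the viewport size are assumptions):

from pivy.coin import SoGetBoundingBoxAction, SbViewportRegion

def fit_camera_to_view(camera, root, vp_width, vp_height):
    vp = SbViewportRegion(vp_width, vp_height)

    # viewAll() keeps the camera's current orientation and only moves it
    # back far enough that the scene's bounding *sphere* fits the view --
    # which is why it is looser than a per-view-direction crop.
    camera.viewAll(root, vp)

    # The scene bounding box, in case you want to tighten heightAngle
    # yourself for the current view direction.
    action = SoGetBoundingBoxAction(vp)
    action.apply(root)
    return action.getBoundingBox()

A tighter fit would project the eight bounding-box corners into the camera's view and shrink heightAngle until the projected extent just fills the viewport.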
Is it possible to "look at" an OpenGL scene which was rendered at e.g. 60 degrees vertical FoV through a frustum/corridor/hole that has a smaller FoV - and have that fill the resulting projection plane?
I think that's not the same thing as "zooming in".
Background:
I'm struggling with the OpenGL transform pipeline for an optical see-through AR display here.
It could be that I have my understanding of what the OpenGL transform pipeline of my setup really needs mixed up...
I'm creating graphics that are meant to appear properly located in the real world when being overlaid through AR glasses. The glasses are properly tracked in 3D space.
For rendering the graphics, I'm using OpenGL's legacy fixed function pipeline. Results are good but I keep struggling with registration errors that seem to have their root in my combination of glFrustum() plus glLookAt() not recreating the "perspective impression" correctly.
These AR displays usually don't fill the entire field of view of the wearer but the display area appears like a smaller "window" floating in space, usually ~3-6 feet in front of the user, pinned to head movement.
In OpenGL, I use a layout very similar to Bourke's where (I hope I summarize it correctly) the display's aspect ratio (e.g. 4:3) together with window width and window height defines the vertical field of view. So the FoV forms a fixed link with the window dimensions and the "transform frustum" used by OpenGL - while I need to combine two frustums (?):
My understanding is that the OpenGL scene must be rendered with parameters equivalent to the "parameters" of the human eye in order to match up, since the AR glasses allow the user to look through them.
Let's assume the focal length of the human eye is 22mm (Clark, R.N. Notes on the Resolution and Other Details of the Human Eye. 2007.) and the eyes' "sensor size" is 16mm w x 13mm h (my estimate). The calculated vertical FoV is ~33 degrees then - which we feed into the OpenGL pipeline.
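For what it's worth, the arithmetic behind that figure as a one-liner (sensor height and focal length as assumed above):

import math
# vertical FoV = 2 * atan(sensor_height / (2 * focal_length))
vfov = 2 * math.degrees(math.atan(13.0 / (2 * 22.0)))  # ~32.9 degrees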
The output of such a pipeline would be that I either get the application window filled with this "view" or get a scaled-down version of it, depending on my glViewport settings.
But as the AR glasses need input for only a sub-section, a "smaller window", of the wearer's whole field of view, I think I need a way to "look at" a smaller sub-area of the whole rendered scene - as if I were looking through a tiny hole onto the scene.
These glasses, with their "display window", provide a vertical field of view of just under 20 degrees - but feeding that into the OpenGL pipeline would be wrong. So, how can I combine these conflicting FoVs? ...or am I on the wrong track here?
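To make the "tiny hole" idea concrete, here is a hedged sketch (PyOpenGL, legacy pipeline; all numbers are illustrative assumptions). Looking through a centered sub-FoV "window" is just a tighter frustum - the eye's full FoV never enters the projection at all; only the display's FoV does. An off-center display window would need asymmetric left/right/bottom/top values instead:

import math
from OpenGL.GL import GL_PROJECTION, glFrustum, glLoadIdentity, glMatrixMode

def sub_fov_frustum(sub_vfov_deg, aspect, near, far):
    # Half-extent of the near plane for the *display's* field of view.
    top = near * math.tan(math.radians(sub_vfov_deg) / 2.0)
    right = top * aspect
    glMatrixMode(GL_PROJECTION)
    glLoadIdentity()
    glFrustum(-right, right, -top, top, near, far)

# e.g. the eye sees ~33 degrees vertically, but the AR display only ~20:
# sub_fov_frustum(20.0, 4.0 / 3.0, 0.1, 100.0)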
I have a problem that I also saw in the game Candy Crush Saga, where they dealt with it successfully. I would like a sprite to show only when it is on the board (see image link below). The board can be of different shapes, like the levels in the mentioned game.
Does anyone have ideas on how this can be achieved with Cocos2d?
I will be very glad if someone has some tips.
Thank you in advance.
image link: http://www.android-games.fr/public/Candy-Crush-Saga/candy-crush-saga-bonus.jpg
In Cocos2d you can render sprites at different z-levels. Images at a lower z-level are drawn first by the graphics card, and images (sprites) with a higher z-value are drawn later. Hence if an image (say A) is at the same position as another but has a higher z-value, you will see only the pixels of image A where the two images intersect.
Cocos2d also uses layers, so you can decide to add sprites to a layer and set the layer to a specific z-value. I expect that they used a layer for the board (say at z=1) with a PNG image containing transparent bits in the area where you can see the sprites, and a second layer at z=0 for the sprites. This way you can only see the sprites when they are in the transparent area.
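A hedged sketch of that layering, using the Python cocos2d port purely to illustrate the z-order idea (the thread itself is about the iPhone version; file names are assumptions):

import cocos

sprite_layer = cocos.layer.Layer()  # the candy sprites
sprite_layer.add(cocos.sprite.Sprite('candy.png', position=(100, 100)))

board_layer = cocos.layer.Layer()   # board PNG, transparent over the cells
board_layer.add(cocos.sprite.Sprite('board_mask.png'))

scene = cocos.scene.Scene()
scene.add(sprite_layer, z=0)  # drawn first
scene.add(board_layer, z=1)   # drawn on top; its opaque parts hide sprites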
Does this help?
I found out Cocos2d has a class CCClippingNode which does exactly what I wanted. At first I thought it could clip only rectangular areas, but after some research I found it can also clip paths.
I am working on a simple mesh viewer implementation in C++ with basic functionality such as translation, rotation, scaling.
I'm stuck with implementing the rotation of the object along the z-axis using the mouse. What I want to implement is the following:
Click and drag the mouse vertically (almost vertical will do, as I use a simple threshold to filter slight deviations along the horizontal axis) to rotate the object along the y-axis (this part is done).
Click and drag the mouse horizontally, just as described above, to rotate the object along the x-axis (this part is done too).
For z-axis rotation, I want to detect a circular (or arc-like) mouse movement. I'm stuck with this part and don't know how to implement it.
For the above two, I just use atan2() to determine the angle of movement. But how do I detect circular movements?
The only way to deal with this is to have a delay between the user starting to make the motion and the object rotating:
When the user clicks and begins to move the mouse, you need to determine whether it's going to become a straight-line movement or a circular one. This requires a certain amount of data to be collected before that judgement can be made.
The most extreme case would be requiring the user to make one complete circle first, and only then does the rotation begin (in reality you could do much better than this). Just how far you can cut this period down will depend on a) how precise you dictate your users' actions must be, and b) how good you are with pattern-recognition algorithms.
To get you started, here's an outline of an extremely poor algorithm (a sketch of it in code follows below):
On user click, store the x and y coordinates.
Every 1/10 of a second, store the new coordinates and process_for_pattern.
In process_for_pattern you're looking for:
A period where the x and y coordinates regularly both increase, both decrease, or one increases while the other decreases. Over time, if this pattern changes such that either x or y begins to reverse while the other continues as it was, then at that moment you can be fairly sure you've got a circle.
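A rough, hedged rendering of that detector in Python (the sampling cadence and names are assumptions; real code would also need thresholds for jitter):

def process_for_pattern(samples):
    # samples: list of (x, y) positions captured every ~1/10 s after the click
    def sign(v):
        return (v > 0) - (v < 0)
    if len(samples) < 3:
        return "undecided"
    dxs = [sign(b[0] - a[0]) for a, b in zip(samples, samples[1:])]
    dys = [sign(b[1] - a[1]) for a, b in zip(samples, samples[1:])]
    for i in range(1, len(dxs)):
        # Circle cue: one axis reverses direction while the other keeps going.
        x_reversed = dxs[i] != 0 and dxs[i] == -dxs[i - 1]
        y_reversed = dys[i] != 0 and dys[i] == -dys[i - 1]
        if x_reversed != y_reversed:  # exactly one axis reversed
            return "circle"
    return "line"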
This algorithm would require the user to draw a quarter circle before it was detected, and it does not account for size, direction, or largely irregular movements.
If you really want to continue with this method you can get a much better algorithm, but you might want to reconsider your control method.
Perhaps you should define a screen region (e.g. at the window boundaries) which, when clicked, initiates arc movement - or use some other modifier, a button or whatever.
Then at mouse click you capture the coordinates and the center of rotation (mesh axis) in 2D screen space. This gets you a vector (mesh center, button-down pos).
On every mouse move you calculate a new vector (mesh center, mouse pos) and the angle between the two vectors is the angle of rotation.
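A minimal sketch of that angle computation (names assumed; atan2 keeps the sign of the rotation):

import math

def rotation_angle(center, down_pos, cur_pos):
    # Angle of each vector (center -> point) relative to the screen x-axis...
    a0 = math.atan2(down_pos[1] - center[1], down_pos[0] - center[0])
    a1 = math.atan2(cur_pos[1] - center[1], cur_pos[0] - center[0])
    # ...and their signed difference, wrapped into [-pi, pi).
    return (a1 - a0 + math.pi) % (2 * math.pi) - math.pi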
I don't think it works like that...
You could convert mouse-wheel rotation to z-axis rotation, or use a quaternion camera orientation, which is able to rotate along every axis almost intuitively...
The opposite is true for the quaternion camera: if one tries to rotate the mesh along a straight line, the mesh appears to rotate slightly around some other weird axis - and to compensate for that, one intuitively tries to follow a slightly curved trajectory.
It's not exactly what you want, but should come close enough.
Choose a circular region within which your movements numbered 1 and 2 work as described (in the picture this would be some region smaller than the red circle). However, when the user clicks outside that circular region, you save the initial click position (shown in green). This defines a point with a certain angle relative to the x-axis of your screen (you can find this easily with some trig), and it also defines the radius of the circle on which the user is working (in red). The release of the mouse adds a second point (blue). You then find the angle this point has relative to the center of the screen and the x-axis (just like before), and project that angle onto your circle with the radius determined by the first click. The dark red arc defines the amount of rotation of the model.
This should be enough to get you started.
That will not be a good input method, I think, because you will always need some travel distance to discriminate between a line and a curve, which means some input delay. Here is an alternative:
Only vertical mouse movements whose line crosses the center of the screen are considered vertical; the same goes for horizontal ones. In all other cases it's considered a rotation, and to calculate its amplitude, compute the angle between the last mouse location and the current location relative to the center of the screen.
Alternatively you could use the center of the selected mesh if your application works like that.
You can't detect the "circular, along an arc" mouse movement with anywhere near the precision needed for 3d model viewing. What you want is something like this: http://thetechartist.com/?p=80
You nominate an axis (x, y, or z) using either keyboard shortcuts or on-screen axis indicators that you can grab with the mouse.
This will be much more precise than trying to detect an "arc" gesture. Any "arc" recognition would necessarily involve a delay while you accumulate enough mouse samples to decide whether an arc gesture has begun or not. Gesture recognition like this is non-trivial (I've done some gesture work with the Wii-mote). Similarly, even your simple "vertical" and "horizontal" mouse movement detection will require a delay for the same reason. Any "simple threshold to filter slight deviations" will make it feel dampened and weird.
For 3d viewing you want 1:1 mouse responsiveness, and that means just explicitly nominating an axis with a shortcut key or UI etc. For x-axis rotation, just restrict it to mouse x, y-axis to mouse y if you like. For z you could similarly restrict to x or y mouse input, or just take the total 2d mouse distance travelled. It depends what feels nicest to you.
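A hedged sketch of that explicit nomination (toolkit event plumbing omitted; every name here is an assumption):

SENSITIVITY = 0.01  # radians per pixel of mouse travel, tune to taste

def on_key_press(key, state):
    if key in ("x", "y", "z"):
        state["axis"] = key  # nominate the rotation axis

def on_mouse_drag(dx, dy, state, rotate_model):
    axis = state.get("axis", "y")
    # 1:1 mapping, no gesture recognition: x-rotation follows vertical
    # mouse motion, y-rotation horizontal; z is restricted to horizontal.
    delta = dy if axis == "x" else dx
    rotate_model(axis, delta * SENSITIVITY)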
As an alternative, you could try coding up support for a 3D mouse like the 3dConnexion SpaceExplorer.
I'm creating a simulator coded in Python and based on ODE (Open Dynamics Engine). For visualization I chose VTK.
For every object in the simulation, I create a corresponding source (e.g. vtkCubeSource), mapper and actor. I am able to show objects correctly and update them as the simulation runs.
I want to add axes to have a point of reference and to show the direction of each axis. Doing that, I realized that by default X and Z are in the plane of the screen and Y points outwards. In my program I use a different convention.
I've been able to display axes in 2 ways:
1) Image
axes = vtk.vtkAxes()
axesMapper = vtk.vtkPolyDataMapper()
axesMapper.SetInputConnection(axes.GetOutputPort())
axesActor = vtk.vtkActor()
axesActor.SetMapper(axesMapper)
axesActor.GetProperty().SetLineWidth(4)
2) Image (colors do not match with the first case)
axesActor = vtk.vtkAxesActor()
axesActor.AxisLabelsOn()
axesActor.SetShaftTypeToCylinder()
axesActor.SetCylinderRadius(0.05)
In the second one, the user is allowed to set many parameters related to how the axes are displayed. In the first one, I only managed to set the line width but nothing else.
So, my questions are:
What is the correct way to define and display axes in a 3D scene? I just want them in a fixed position and orientation.
How can I set a different convention for the axes orientation, both for their display and the general visualization?
Well, if you do not mess with the objects' transformation matrices for display purposes, it is probably sufficient to just put your camera into a different position while using axes approach 2. The easy methods to adjust your camera position are Pitch(), Azimuth() and Roll(). If you mess with object transforms, then apply the same transform to the axes.
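A minimal sketch of that camera adjustment (assumes `ren` is your existing vtkRenderer; the numbers are placeholders):

camera = ren.GetActiveCamera()
camera.SetPosition(0, -10, 0)   # look along +Y...
camera.SetFocalPoint(0, 0, 0)
camera.SetViewUp(0, 0, 1)       # ...with +Z up on screen
# or nudge the default camera instead:
camera.Azimuth(30)  # rotate about the view-up vector
camera.Pitch(15)    # rotate about the camera's horizontal axis
camera.Roll(5)      # rotate about the viewing direction
ren.ResetCamera()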
Dženan Zukić kindly answered this question on the vtkusers@vtk.org mailing list:
http://www.vtk.org/pipermail/vtkusers/2011-November/119990.html
situation
I'm implementing a height field editor, with two views. The main view displays the height field in 3D enabling trackball navigation. The edit view shows the height field as a 2D image.
On top of this height field, new images can be applied that alter its appearance (cut holes, lower or raise specific areas). These are called patches.
Both the height field and the patches are one-channel grayscale PNG images.
For visualization I'm using the Visualization Library framework (C++) and OpenGL 4.
task
Implement a drawing tool, available in the 2D edit view (orthographic projection), that creates these patches (as separate images) at runtime.
important notes / constraints
the image of the height field may be scaled, rotated and transposed.
the patches need to have the same scale as the height field, so one pixel in the patch covers exactly one pixel in the height field.
as a result of the scaling, the size of a framebuffer pixel may be bigger or smaller than the size of a height field/patch image pixel.
the scene contains objects (example: a pointing arrow) that should not appear in the patch.
question
What is the right approach to this task? So far I had the following ideas:
Use some kind of Qt canvas to create the patch, then map it to the height field image proportions and save it as a new patch. This would be done every time the user starts drawing; that way, implementing undo will be easy (just remove the last patch created).
Use a neutral-colored image in combination with texture buffer objects to implement some kind of canvas myself. This way, every time the user stops drawing, the contents of the canvas are mapped to the height field and saved as a patch, and the canvas is reset for the next drawing.
There are some examples using a framebuffer object. However, I'm not sure if this approach fits my needs. When I use OpenGL to draw a sub-image into the framebuffer, won't the resulting image contain all the data?
Here is what I ended up with:
I use the PickIntersector of the Visualization Library to pick against the height field image in the edit view.
This yields local coords on the image.
These are transformed to UV coords, which in turn get transformed into pixel coords.
This is done when the user presses a mouse button and continues to happen as the mouse moves, as long as it's over the image. A sketch of this mapping follows below.
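A hedged sketch of the local -> UV -> pixel mapping (the quad extents and names are assumptions; the real transforms depend on how the image node is set up):

def local_to_pixel(local_x, local_y, img_w, img_h, half_w, half_h):
    # Local picking coords on the image quad, assumed to lie in
    # [-half_w, half_w] x [-half_h, half_h], mapped to UV in [0, 1]...
    u = (local_x + half_w) / (2.0 * half_w)
    v = (local_y + half_h) / (2.0 * half_h)
    # ...and then to integer pixel coordinates in the patch image.
    px = min(int(u * img_w), img_w - 1)
    py = min(int(v * img_h), img_h - 1)
    return px, py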
I have a PatchCanvas class that collects all these points. On command, it uses the Anti-Grain Geometry library to actually rasterize the lines that can be constructed from the points.
After that is done, the rasterized image is divided up into a grid of fixed size. Every tile is scanned for a color different from the neutral one. Tiles that contain only the neutral color are dropped; the others are saved following the appropriate naming scheme and can be loaded in the next frame. (A sketch of this tiling step follows below.)
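A hedged sketch of the tile-dropping step (tile size, neutral value and array layout are assumptions):

import numpy as np

def nonempty_tiles(img, tile=64, neutral=127):
    # img: 2D uint8 array holding the rasterized patch canvas.
    tiles = {}
    h, w = img.shape
    for ty in range(0, h, tile):
        for tx in range(0, w, tile):
            block = img[ty:ty + tile, tx:tx + tile]
            if np.any(block != neutral):  # something was drawn here
                tiles[(tx, ty)] = block.copy()
    return tiles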
AGG supports lines of different widths. This isn't implemented yet, but the idea is to pick two adjacent points in screen space, get their UV coords, convert them to pixels, and use that as the line thickness. This should result in broader strokes for zoomed-out views.