In OpenGL I have to rotate a cube (and translate it) so that it looks like it does in these two images.
Without any transformations, only the front-facing red face is visible. I just don't understand how I can rotate it (so that the top and right sides are visible, as in the images) while keeping the red face perfectly square.
I've thought about translating it to the bottom left, but that only moves the red square around; it doesn't make the other faces visible.
I'm using glFrustum(-20, 20, -20, 20, -1, -10);
If you are using a perspective projection (which you are) and the front face of your cube is parallel to the x-y plane, then you will only see the other two faces if the cube is entirely in one quadrant of the eye space; that is, if there were horizontal and vertical lines cutting the window in half, the cube would have to lie entirely within one of the four resulting rectangles.
Other options for making the other two faces show are
use an isometric projection
rotate the cube to bring the other faces into view.
To aid in visualising this, try playing Minecraft (say) and moving around in different ways to see how different sides of different blocks come into view.
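As a rough sketch of the second option (the angles here are arbitrary placeholders, and drawCube() stands in for whatever draws your cube):

    /* Assumes a valid perspective projection (positive near/far planes) is
       already set up. */
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(0.0f, 0.0f, -5.0f);      /* move the cube into the view volume  */
    glRotatef( 30.0f, 1.0f, 0.0f, 0.0f);  /* tip the top face towards the camera */
    glRotatef(-30.0f, 0.0f, 1.0f, 0.0f);  /* swing the right face into view      */
    drawCube();

Note that once the cube is rotated, perspective foreshortening means the red front face will no longer project as a perfect square.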
That is not a rotation.
The second picture looks like an orthographic projection (glOrtho), but that may be a coincidence.
In either case, you can only get an image like that if the cube is translated away from the origin toward the bottom left, as you suggest.
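Roughly, something like this (the values are placeholders, not taken from the question, and drawCube() is assumed to draw a unit cube centred at the origin):

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0, 1.0, 1.0, 100.0);   /* positive near/far planes */

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    /* Keep the front face parallel to the near plane (so it still projects
       as a square) and push the cube into the lower-left of eye space; the
       camera then looks down on the top face and across at the right face. */
    glTranslatef(-3.0f, -3.0f, -10.0f);
    drawCube();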
I have a program that displays planes of cubes, like the levels of a house. The planes are displayed so that the viewing angle is consistent with the viewport's projection plane. I would like to allow the user to select them.
First I draw the cubes relative to each other, with the first square drawn at {0,0,0};
then I translate and rotate them; each plane has its own rotation and translation.
Thanks to this page I have code that can cast a ray from the user's last touch. If you look at the picture above, there is a green square and a blue square; this is a debug graphic showing the ray intersecting the near and far planes of the projection after clicking in the centre (drawn with a z of zero so they are visible), so it appears to be working.
I can get a bounding box of the cube, but its coordinates behave as if the cube were still up in the left corner.
My question is: how do I use my ray to check for intersections with the objects after they have been rotated and translated? I'm very confused, as I once had this working when I was translating and rotating the whole grid as one; now that each plane is moved separately, I can't work out how to do it.
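For reference, the ray itself is built with the usual unproject-the-touch-at-the-near-and-far-planes idea, roughly like this (a simplified sketch rather than my exact code, shown with desktop GLU calls; touchX/touchY are the touch location in window coordinates):

    GLdouble model[16], proj[16];
    GLint    view[4];
    glGetDoublev(GL_MODELVIEW_MATRIX,  model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, view);

    GLdouble winX = touchX;
    GLdouble winY = view[3] - touchY;          /* window Y is flipped */

    GLdouble nx, ny, nz, fx, fy, fz;
    gluUnProject(winX, winY, 0.0, model, proj, view, &nx, &ny, &nz);  /* near plane */
    gluUnProject(winX, winY, 1.0, model, proj, view, &fx, &fy, &fz);  /* far plane  */
    /* Ray origin = near point; ray direction = far point - near point. */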
So I need a method to do smooth lines without using:
Full Screen Antialiasing (slow)
Shaders (not supported on all cards)
GL_LINE_SMOOTH (causes a crash on some cards)
The only way I could think of doing this was to use a textured rectangle that always faces the camera, but the problems are:
1. How do I always face the rectangle toward the camera (efficiently)?
2. How do I keep its size the same no matter how far away the camera is?
Any other ideas?
Billboarding is a simple concept, but it can be difficult to implement. A billboard is a flat object, usually a quad (square), which faces the camera. This direction usually changes constantly during runtime as the object and camera move, and the object needs to be rotated each frame to point in that direction. There are two types of billboarding: point and axis. Point sprites, or point billboards, are quads that are centered at a point; the billboard rotates about that central point to face the user. Axis billboards come in two types: axis-aligned and arbitrary. Axis-aligned (AA) billboards always have one local axis aligned with a global axis, and they are rotated about that axis to face the user. Arbitrary-axis billboards can be rotated about any axis to face the user.
http://nehe.gamedev.net/data/articles/article.asp?article=19
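As a minimal sketch of the point-billboard case (not taken from the article above; it assumes the modelview matrix currently holds only the camera/view transform), you can pull the camera's right and up vectors out of the modelview matrix and build the quad from them:

    /* Draw a quad of fixed world size, centred at (cx, cy, cz), that always
       faces the camera. */
    void drawBillboard(float cx, float cy, float cz, float size)
    {
        float m[16];
        glGetFloatv(GL_MODELVIEW_MATRIX, m);     /* column-major */

        /* The rows of the rotation part are the camera's right and up axes
           expressed in world space. */
        float rx = m[0], ry = m[4], rz = m[8];   /* right */
        float ux = m[1], uy = m[5], uz = m[9];   /* up    */
        float s  = 0.5f * size;

        glBegin(GL_QUADS);
        glVertex3f(cx - (rx + ux) * s, cy - (ry + uy) * s, cz - (rz + uz) * s);
        glVertex3f(cx + (rx - ux) * s, cy + (ry - uy) * s, cz + (rz - uz) * s);
        glVertex3f(cx + (rx + ux) * s, cy + (ry + uy) * s, cz + (rz + uz) * s);
        glVertex3f(cx - (rx - ux) * s, cy - (ry - uy) * s, cz - (rz - uz) * s);
        glEnd();
    }

This covers question 1; for question 2 (constant on-screen size), you would additionally scale 'size' by the billboard's distance from the camera.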
You can use point sprites; they are always the same size and always face the camera.
http://www.opengl.org/registry/specs/ARB/point_sprite.txt
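The fixed-function setup is just a few state calls (a minimal sketch; the two glVertex3f calls are placeholder points along your line, and a round, smooth dot texture is assumed to be bound already):

    glEnable(GL_POINT_SPRITE_ARB);
    glTexEnvi(GL_POINT_SPRITE_ARB, GL_COORD_REPLACE_ARB, GL_TRUE);
    glEnable(GL_TEXTURE_2D);
    glPointSize(8.0f);                 /* size in pixels, independent of distance */

    glBegin(GL_POINTS);
    glVertex3f(0.0f, 0.0f, 0.0f);
    glVertex3f(1.0f, 0.0f, 0.0f);
    glEnd();

    glDisable(GL_POINT_SPRITE_ARB);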
I'm developing an OpenGL application which right now only draws a big tube consisting of several small cylinders (kind of like a slinky). I'm getting an annoying effect when I turn the lighting and normals on, as from certain angles I get these annoying black dots on the cylinders' borders:
I believe this is a byproduct of the fact that the cylinders are very thin. Basically I set the normal to (0, 0, +/-1) for the top/base, and the side normals to (cos(toRadian(beta)), sin(toRadian(beta)), 0).
Is it possible to remove this effect without getting 'fatter' cylinders? Or is there something wrong with the way I define the normals?
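Roughly, the side of each cylinder is generated like this (a simplified sketch, not my exact code; SLICES, r, zTop and zBottom stand in for the real values):

    glBegin(GL_QUAD_STRIP);
    for (int i = 0; i <= SLICES; ++i) {
        float beta = (float)i * 2.0f * (float)M_PI / (float)SLICES;
        float nx = cosf(beta), ny = sinf(beta);
        glNormal3f(nx, ny, 0.0f);             /* radial side normal */
        glVertex3f(r * nx, r * ny, zTop);
        glVertex3f(r * nx, r * ny, zBottom);
    }
    glEnd();
    /* The top and base then use glNormal3f(0, 0, 1) and glNormal3f(0, 0, -1). */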
Thanks
The black dots appear to be the rendering of the sides of the cylinders. The yellow in the image corresponds to the tops of the cylinders. The sides of the cylinders are at 90 degrees to the tops, so they are not lit (I'm guessing the light points in the same direction as the camera) and appear black. With the cylinders being so thin, the sides don't fill many pixels, so they don't show up much.
How to fix it? I've got a couple of ideas:
1) Just draw the tops and bottoms, not the sides - this will definitely fix the problem when viewed from this angle, but will lead to further problems if your camera moves.
2) Disable lighting, then all faces will be drawn the same colour (assuming the sides are the same colour as the top).
3) Use multi-sampling - this means most of the edges won't appear (as each pixel is more likely to be top than side due to the angle).
4) Add more lights around the scene, perpendicular to the current light.
1 & 2 are your best bet depending on what you're trying to achieve.
I am creating a scene where I use a box to represent a room and different models within that box. When I enable lighting, my models light up fine but the room itself (the inside of the box) does not light up, or rather it is darker than it should be. Is it because I am trying to light up the inside of a cube? I am sure the normals are correct. Please let me know what you think!
I suppose the normals aren't correct, but how do I go about finding the correct normals for the inside of the cube? Currently, I am only passing the center point of each face into the normalf function.
If you pass in the center points your normals will be facing the wrong way.
For example, if your cube is 2 units in size and centered on the origin, the center point of the face on the positive X axis will be (1, 0, 0), and that would also happen to be the correct normal for the outward facing side of that face.
However the face pointing inwards will have a normal that's the inverse of that, i.e. (-1, 0, 0).
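For example, the interior of that +X face could be drawn like this (a sketch in immediate mode, using glNormal3f rather than whatever your normalf helper does):

    /* Inside of the +X wall of a 2-unit cube centred on the origin: the
       geometry sits at x = +1, but the normal points back into the room. */
    glNormal3f(-1.0f, 0.0f, 0.0f);
    glBegin(GL_QUADS);
    glVertex3f(1.0f, -1.0f, -1.0f);
    glVertex3f(1.0f, -1.0f,  1.0f);
    glVertex3f(1.0f,  1.0f,  1.0f);
    glVertex3f(1.0f,  1.0f, -1.0f);
    glEnd();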
What exactly is back face culling in OpenGL? Can you give me a specific example with e.g. one triangle?
If you look carefully you can see examples of this in a lot of video games. Any time the camera accidentally moves through an object - typically a moving object like a character - notice how the world continues to render correctly. That's because the back sides of the triangles that form the skin of the character are not rendered; they are effectively transparent. If this were not the case then every time the camera accidentally moved inside an object either the screen would go black (because the interior of the object is not lit) or you'd get a bizarre perspective on what the skin of the object looks like from the inside.
Back face culling is where the triangles pointing away from the camera/viewpoint are not considered for drawing.
Wikipedia defines this as:
It is a step in the graphical pipeline that tests whether the points in the polygon appear in clockwise or counter-clockwise order when projected onto the screen. If the user has specified that front-facing polygons have a clockwise winding, but the polygon projected on the screen has a counter-clockwise winding, then it has been rotated to face away from the camera and will not be drawn.
Other systems use the face normal and do the dot product with the view direction.
It is a relatively quick way of deciding whether to draw a triangle or not. Consider a cube. At any one time 3 of the sides of the cube are going to be facing away from the user and hence not visible. Even if these were drawn they would be obscured by the three "forward" facing sides. By performing back face culling you are reducing the number of triangles drawn from 12 to 6 (2 per side).
Back face culling works best with closed "solid" objects such as cubes, spheres, walls.
Some systems don't have this as they consider faces to be double sided and hence visible from either direction.
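In OpenGL the whole mechanism is a couple of state calls; for example, with a single counter-clockwise triangle:

    glEnable(GL_CULL_FACE);      /* turn culling on                          */
    glCullFace(GL_BACK);         /* discard back-facing triangles            */
    glFrontFace(GL_CCW);         /* counter-clockwise = front (the default)  */

    /* As specified, this triangle winds counter-clockwise on screen, so it
       is drawn; spin it 180 degrees about the Y axis and it gets culled. */
    glBegin(GL_TRIANGLES);
    glVertex3f(-1.0f, -1.0f, 0.0f);
    glVertex3f( 1.0f, -1.0f, 0.0f);
    glVertex3f( 0.0f,  1.0f, 0.0f);
    glEnd();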
It's only an optimization technique.
When you look at a closed object, say a cube, you only see about half of its faces: the faces that are towards you (or, at least, any face that is not towards you is occluded by another face that points towards you).
If you skip drawing all these backwards-facing faces, it has two consequences:
- the rendering time will be roughly halved (on average)
- the final render won't change (since another, front-facing face will be drawn on top of any "culled" one)
So you basically get a 2x performance gain for free.
In order to know whether a triangle is front- or back-facing, you take v0-v1 and v0-v2 and compute their cross product. This gives you the face normal. If this vector points towards you (dot(normal, viewVector) < 0), draw the triangle.
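In code, the test looks roughly like this (vec3, sub, cross and dot are assumed small math helpers, not OpenGL calls; viewVector points from the camera towards the triangle):

    vec3 e1 = sub(v0, v1);
    vec3 e2 = sub(v0, v2);
    vec3 n  = cross(e1, e2);            /* face normal                 */
    if (dot(n, viewVector) < 0.0f)      /* the face points towards you */
        draw_triangle(v0, v1, v2);      /* assumed drawing helper      */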
Triangles have their vertices specified in a particular winding order (counter-clockwise for front faces by default in OpenGL).
When the graphics engine looks at a triangle from a particular direction and its vertices appear in the opposite (clockwise) order, it knows it is looking at the back side of the triangle through the object. As the front side of the object covers that triangle, it doesn't have to be drawn.