sketch cylinder [duplicate] - c++

I want to draw the figure below on the screen:
(ASCII diagram: a sphere at the top, below it a cylinder slanted at a 45° angle, then a vertical cylinder marked (i), standing on a wide, flat cylinder at the base.)
To draw the cylinder marked with (i), I have used the code below. Can you help me find my mistake? I could not manage to draw (i).
glTranslatef(0.0f, 10.0f, 400.0f);
glColor3f(0.0f, 1.0f, 1.0f);
glRotatef(90.0f, 1.0f, 1.0f, 0.0f);
gluCylinder(quadric, 0.0f, 200.0f, 100.0f, 32, 32);
glTranslatef(0.0f, 10.0f, -400.0f);

I don't want to be the bad guy here, so let me explain why that bit of code is worth nothing without its context, and why you need to understand what it does.
Let's go through this snippet line by line. It all starts with
glTranslatef(0.0f, 10.0f, 400.0f);
The first question is: what matrix is this operating on? Probably the modelview matrix, but we don't know. And what is the matrix before that call to glTranslatef? OpenGL matrix operations are somewhat like x86 assembly, in that they replace the matrix on the stack with the result of the operation.
glColor3f(0.0f, 1.0f, 1.0f);
This sets the color state, of course. One normally groups this call with the geometry to be drawn, though, rather than putting it somewhere in the middle of the code.
glRotatef(90.0f, 1.0f, 1.0f, 0.0f);
This rotates about the axis (1, 1, 0), i.e. it's like sticking an axle through the object, passing through the local origin and going toward the point (1, 1, 0), and then rotating the object by 90° about that axis.
gluCylinder(quadric, 0.0f, 200.0f, 100.0f, 32, 32);
Now a cylinder is drawn, which is first rotated, then translated, and then… only you know, because you omitted the part where the modelview matrix is reset at the beginning of the frame (a sketch of that missing setup follows this walkthrough).
glTranslatef(0.0f, 10.0f, -400.0f);
And the final glTranslatef does not have any effect on drawing the cylinder whatsoever.
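For reference, a minimal sketch of what that missing setup usually looks like (assuming the fixed-function pipeline, as in your snippet): select the modelview matrix and reset it at the start of the frame, so the translate/rotate calls that follow operate on a known identity matrix.
void drawFrame()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glMatrixMode(GL_MODELVIEW);   // make sure we are editing the modelview matrix
    glLoadIdentity();             // replace whatever was left over with the identity matrix

    // ... camera transform, then the per-object translate/rotate/draw calls go here ...
}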
You see the problem now? You're asking a very specific question that's clearly homework, you put in some piece of random code, and you just ask "how to draw it" without any idea of what you're actually doing.
There's no way we can help you if you don't grasp the basics first. We'll gladly help to get you there. Start with drawing something simple, like a triangle centered in the window.
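For example, a minimal sketch of that exercise, assuming a double-buffered window and GL context already exist (e.g. created with GLUT) and the default projection is untouched, so clip space runs from -1 to 1 in x and y:
void display()
{
    glClear(GL_COLOR_BUFFER_BIT);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glColor3f(1.0f, 0.0f, 0.0f);
    glBegin(GL_TRIANGLES);            // immediate mode, as in your snippet
        glVertex2f(-0.5f, -0.5f);
        glVertex2f( 0.5f, -0.5f);
        glVertex2f( 0.0f,  0.5f);
    glEnd();

    glutSwapBuffers();                // assumes a double-buffered GLUT window
}
Once a plain triangle shows up where you expect it, adding one transformation at a time makes it much easier to see what each call does.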

Related

OpenGL - Rotations on different axes independent of each other

First off, I'm sorry if I confuse anyone, because I don't know how to phrase this exactly.
Anyway, what I want to do is rotate on 3 axes, but independently of each other. If I have
glRotatef(getPitch(),1f,0,0);
glRotatef(getYaw(),0,1f,0);
glRotatef(getRoll(),0,0,1f);
Then it rotates my object on the x axis just fine, but the other two axes rotate with the offset of the x rotation. How do I rotate all of these independently of each other (on the same object)?
Again, sorry if I confused anyone.
You could push and pop the matrix onto and off the stack, so you could do:
glPushMatrix();
glRotatef( getPitch(), 1.0f, 0.0f ,0.0f );
glPopMatrix();
glPushMatrix();
glRotatef( getYaw(), 0.0f, 1.0f, 0.0f);
glPopMatrix();
glPushMatrix();
glRotatef( getRoll(), 0.0f, 0.0f, 1.0f);
glPopMatrix();
So basically, pushing the matrix saves the transformation matrix in its current state. You apply the transformations that you want on the object (in your case, rotations around an axis), which updates the matrix. Popping it restores it to the original state before the rotation was applied. You can then apply each rotation independently of the other ones.
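Note that as written above, each rotation is popped before anything is drawn, so it has no visible effect; the pattern only does something if you render while the rotation is on the stack. A sketch of the usual usage, where drawPitchGizmo() and friends are hypothetical stand-ins for whatever you actually draw:
glPushMatrix();
glRotatef(getPitch(), 1.0f, 0.0f, 0.0f);
drawPitchGizmo();                 // affected only by the pitch rotation
glPopMatrix();

glPushMatrix();
glRotatef(getYaw(), 0.0f, 1.0f, 0.0f);
drawYawGizmo();                   // affected only by the yaw rotation
glPopMatrix();

glPushMatrix();
glRotatef(getRoll(), 0.0f, 0.0f, 1.0f);
drawRollGizmo();                  // affected only by the roll rotation
glPopMatrix();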

DirectX, scaling orthographic vertices to world space vertices

Introduction
Let's say I have the following vertices:
const VERTEX World::vertices[ 4 ] = {
{ -960.0f, -600.0f, 0.0f, 0.0f, 0.0f, -1.0f, 0.0f, 0.0f }, // side 1 screen coordinates centred
{ 960.0f, -600.0f, 0.0f, 0.0f, 0.0f, -1.0f, 1.0f, 0.0f },
{ -960.0f, 600.0f, 0.0f, 0.0f, 0.0f, -1.0f, 0.0f, 1.0f },
{ 960.0f, 960.0f, 0.0f, 0.0f, 0.0f, -1.0f, 1.0f, 1.0f }
};
You may have guessed that 960 * 2 is 1920, which is the width of my screen, and the same goes for 600 * 2 being 1200.
These vertices represent a rectangle that will fill up my ENTIRE screen, with the origin in the centre of my screen.
Issue
So up until now, I have been using an Orthographic view without a projection:
matView = XMMatrixOrthographicLH( Window::width, Window::height, -1.0, 1.0 );
Any matrix that was being sent to the screen was multiplied by matView, and it seemed to work great. More specifically, my image, using the above vertices array, fit snugly on my screen and was 1:1 in pixels to its original form.
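As a standalone illustration of why that worked (not part of the question's code): XMMatrixOrthographicLH(w, h, ...) maps x in [-w/2, w/2] and y in [-h/2, h/2] straight onto the [-1, 1] clip-space range, so with a 1920x1200 window the corner vertex at (-960, -600) lands exactly on the corner of the viewport:
#include <DirectXMath.h>
#include <cstdio>
using namespace DirectX;

int main()
{
    // Same orthographic matrix as in the question, sized to a 1920x1200 window.
    XMMATRIX ortho = XMMatrixOrthographicLH(1920.0f, 1200.0f, -1.0f, 1.0f);

    // One corner of the full-screen rectangle from the vertices array.
    XMVECTOR corner = XMVectorSet(-960.0f, -600.0f, 0.0f, 1.0f);
    XMVECTOR clip = XMVector4Transform(corner, ortho);

    XMFLOAT4 out;
    XMStoreFloat4(&out, clip);
    std::printf("clip-space corner: %f %f\n", out.x, out.y); // prints -1 -1: the viewport corner
}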
Unfortunately, I need 3D now, and I just realised I'm going to need some projection, so I prepared this little puppy:
XMVECTOR vecCamPosition = XMVectorSet( 0.0f, 0.0f, -1.0f, 0 ); // I did try setting z to -100.0f but that didn't work well as I can't scale it back any more... and it's not accurate to the 1:1 pixel I want
XMVECTOR vecCamLookAt = XMVectorSet( 0.0f, 0.0f, 0.0f, 0.0f );
XMVECTOR vecCamUp = XMVectorSet( 0.0f, 1.0f, 0.0f, 0.0f );
matView = XMMatrixLookAtLH( vecCamPosition, vecCamLookAt, vecCamUp );
matProjection = XMMatrixPerspectiveFovLH(
XMConvertToRadians( 45 ), // the field of view
Window::width / Window::height, // aspect ratio
1.0f, // the near view-plane
100.0f ); // the far view-plane
You may already know what the problem is, but if not: I have just set my field of view to 45 degrees. This will make a nice perspective and do 3D stuff great, but my vertices array is no longer going to cut the mustard, because the fov and screen aspect have been greatly reduced (or increased), so the vertices are going to be far too huge for the current view I am looking at (see image).
I was thinking that I need to do some scaling to the output matrix to scale the huge coordinates back down to the respective size my fov and screen aspect is now asking for.
What must I do to use the vertices array as it is (1:1 pixel ratio to the original image size) while still allowing 3D stuff to happen, like having a fly fly around the screen, rotate, and go closer to and further into the frustum, etc.?
Goal
I'm trying to avoid changing every single object's vertices array to a rough scaled prediction of what the original image would look like in world space.
Some extra info
I just wanted to clarify what kind of matrix operations I am currently doing to the world and then how I render using those changes... so this is me doing some translations on my big old background image:
// flipY just turns the textures back around
// worldTranslation is actually the position for the background, so always 0,0,0... as you can see 0.5 was there to make sure I ordered things that were drawn correctly when using orthographic
XMMATRIX worldTranslation = XMMatrixTranslation( 0.0f, 0.0f, 0.5f );
world->constantBuffer.Final = flipY * worldTranslation * matView;
// My thoughts are on this that somehow I need to add a scaling to this matrix...
// but if I add scaling here... it's going to mean every other game object
// (players, enemies, pickups) are going to need to be scaled... and really I just want to
// have to scale the whole lot the once.. right?
And finally, this is where it is drawn to the screen:
// Background
d3dDeviceContext->PSSetShaderResources( 0, 1, world->textures[ 0 ].GetAddressOf( ) ); // Set up texture of background
d3dDeviceContext->IASetVertexBuffers( 0, 1, world->vertexbuffer.GetAddressOf( ), &stride, &offset ); // Set up vertex buffer (do I need the scaling here?)
d3dDeviceContext->IASetPrimitiveTopology( D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST ); // How the vertices be drawn
d3dDeviceContext->IASetIndexBuffer( world->indexbuffer.Get( ), DXGI_FORMAT_R16_UINT, 0 ); // Set up index buffer
d3dDeviceContext->UpdateSubresource( constantbuffer.Get( ), 0, 0, &world->constantBuffer, 0, 0 ); // set the new values for the constant buffer
d3dDeviceContext->OMSetBlendState( blendstate.Get( ), 0, 0xffffffff ); // DONT FORGET IF YOU DISABLE THIS AND YOU WANT COLOUR, * BY Color.a!!!
d3dDeviceContext->DrawIndexed( ARRAYSIZE( world->indices ), 0, 0 ); // draw
And then this is what I have done to apply my matProjection, which has supersized all my vertices:
world->constantBuffer.Final = flipY * worldTranslation * matView * matProjection; // My new lovely projection and view that make everything hugeeeee... I literally see like 1 pixel of the background brah!
Please feel free to take a copy of my game and run it as is (Windows 8 application Visual studio 2013 express project) in the hopes that you can help me out with putting this all into 3D: https://github.com/jimmyt1988/FlyGame/tree/LeanerFramework/Game
It's me again. Let me try to clear a few things up.
1
Here is a little screenshot from an editor of mine:
I have edited in little black boxes to illustrate something. The axis circles you see around the objects are rendered at exactly the same size. However, they are rendered through a perspective projection. As you can see, the one on the far left is something like twice as large as the one in the center. This is due purely to the nature of a projection like that. If this is unacceptable, you must use a non-perspective projection.
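To make that concrete, here is a rough standalone sketch (assumed numbers, not from the editor shown above): under a perspective projection with vertical field of view fovY, an object of world-space height h at depth z covers roughly h / (2 * z * tan(fovY / 2)) of the viewport height, so halving the distance doubles the on-screen size.
#include <cmath>
#include <cstdio>

// Fraction of the viewport height covered by an object of height h at depth z,
// for a perspective projection with the given vertical field of view.
float screenHeightFraction(float h, float z, float fovYRadians)
{
    return h / (2.0f * z * std::tan(fovYRadians / 2.0f));
}

int main()
{
    const float fovY = 45.0f * 3.14159265f / 180.0f; // the 45-degree FOV from the question
    std::printf("%f\n", screenHeightFraction(1.0f, 10.0f, fovY)); // farther away: smaller on screen
    std::printf("%f\n", screenHeightFraction(1.0f,  5.0f, fovY)); // half the distance: twice as big
}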
2
The only way it is possible to maintain a 1:1 ratio of screen space to UV space is to have the object rendered at 1 pixel on screen per 1 pixel on the texture. There is nothing more to it than that. However, what you can do is change your texture filter options. Filter options are designed specifically for rendering non-1:1 ratios. For example, the code:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
tells OpenGL: "If you are told to sample a pixel, don't interpolate anything. Just take the nearest value on the texture and paste it on the screen."
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
This code, however, does something much better: it interpolates between pixels. I think this may be what you want, if you aren't already doing it.
These pictures (taken from http://www.opengl-tutorial.org/beginners-tutorials/tutorial-5-a-textured-cube/) show this:
Nearest:
Linear:
Here is what it comes down to. You request that:
What must I do to use the vertices array as it is (1:1 pixel ratio to the original image size) while still allowing 3D stuff to happen, like having a fly fly around the screen, rotate, and go closer to and further into the frustum, etc.?
What you are requesting is by definition not possible, so you have to look for alternative solutions. I hope that helps somewhat.

How to use gluOrtho2D with moving positions?

I'm trying to move the camera based on a player's (a simple square's) position (x, y). The scale is relatively small and the character is 0.5f by 0.5f.
How can I focus the camera on the player's x and y coordinates using gluOrtho2D?
I am really confused by how you use left, right, bottom, and top. It makes absolutely no sense to me; it apparently defines the screen ratio as well as the position at which it draws?
Any help is EAGERLY appreciated.
I switched from the 3D version (gluLookAt), which was the following:
gluLookAt(jake.px, 0.0f, jake.pz + 20, jake.px, 7.0f, jake.pz, 0.0f, 1.0f, 0.0f );
and on Resize
gluPerspective(45.0f, ratio, 0.1f, 100.0f);
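One way to read gluOrtho2D's four parameters: left, right, bottom, and top are simply the world-space x and y values that should map to the edges of the window, so centring the view on the player just means offsetting all four by the player's position. A rough sketch (viewW and viewH are made-up names for however many world units the window should span):
void applyCamera(float playerX, float playerY)
{
    const float viewW = 10.0f;   // world units spanned horizontally (pick to match your aspect ratio)
    const float viewH = 10.0f;   // world units spanned vertically

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(playerX - viewW / 2.0f,   // left
               playerX + viewW / 2.0f,   // right
               playerY - viewH / 2.0f,   // bottom
               playerY + viewH / 2.0f);  // top
    glMatrixMode(GL_MODELVIEW);
}
Calling something like this each frame before drawing keeps the player in the middle of the view.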

How to Set Camera View using gluLookAt() function?

I have used a glOrtho(500, 600, 600, 700, -100, 100) projection. With this, I want to use camera view settings with the gluLookAt() method. What should the parameters for the gluLookAt function be with this projection?
glOrtho builds a matrix that forms the "lens" of your virtual camera. gluLookAt moves that virtual camera.
http://msdn.microsoft.com/en-us/library/windows/desktop/dd368663%28v=vs.85%29.aspx
eyeX/Y/Z are where the camera is.
centerX/Y/Z are the spot at which the camera is looking.
upX/Y/Z is which way up the camera is.
An example use might be:
gluLookAt
(
0.0f, 2.0f, -16.0f,
0.0f, 0.5f, 0.0f,
0.0f, 1.0f, 0.0f
);
This will put the camera 16 units backwards, raise it slightly, point slightly above 0, 0, 0, with the top of the screen pointing along Y+.
You could change the first value to move the camera.
Change the second to change which part of the scene it's pointed at.
Change the third to roll/bank the camera.
The important question, however, is what do you want to do with it?

C++/OpenGL - Rotating a rectangle

For my project I needed to rotate a rectangle. I thought that would be easy, but I'm getting unpredictable behavior when running it.
Here is the code:
glPushMatrix();
glRotatef(30.0f, 0.0f, 0.0f, 1.0f);
glTranslatef(vec_vehicle_position_.x, vec_vehicle_position_.y, 0);
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f);
glVertex2f(0, 0);
glTexCoord2f(1.0f, 0.0f);
glVertex2f(width_sprite_, 0);
glTexCoord2f(1.0f, 1.0f);
glVertex2f(width_sprite_, height_sprite_);
glTexCoord2f(0.0f, 1.0f);
glVertex2f(0, height_sprite_);
glEnd();
glDisable(GL_BLEND);
glDisable(GL_TEXTURE_2D);
glPopMatrix();
The problem with that is that my rectangle is translated somewhere else in the window while rotating. In other words, the rectangle doesn't keep the position vec_vehicle_position_.x and vec_vehicle_position_.y.
What's the problem ?
Thanks
You need to flip the order of your transformations:
glRotatef(30.0f, 0.0f, 0.0f, 1.0f);
glTranslatef(vec_vehicle_position_.x, vec_vehicle_position_.y, 0);
becomes
glTranslatef(vec_vehicle_position_.x, vec_vehicle_position_.y, 0);
glRotatef(30.0f, 0.0f, 0.0f, 1.0f);
To elaborate on the previous answers.
Transformations in OpenGL are performed via matrix multiplication. In your example you have:
M_r - the rotation transform
M_t - the translation transform
v - a vertex
and you had applied them in the order:
M_r * M_t * v
Using parentheses to clarify:
( M_r * ( M_t * v ) )
We see that the vertex is transformed by the closer matrix first, which in this case is the translation. It can be a bit counterintuitive, because it requires you to specify the transformations in the opposite order from the one you want them applied in. But if you think of how the transforms are placed on the matrix stack, it should hopefully make a bit more sense (the stack is pre-multiplied together).
That is why, in order to get your desired result, you needed to specify the transforms in the opposite order.
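To see the difference with actual numbers, here is a small standalone sketch (the values are made up, not from the question) that applies both orders to a single 2D point. Rotating first and then translating, which is what the fixed code order glTranslatef-then-glRotatef produces, spins the corner about the object's own origin and then moves it into place; the other order moves it first and then spins it about the world origin, which is exactly the "translated somewhere else in the window" behaviour the question describes.
#include <cmath>
#include <cstdio>

struct Vec2 { float x, y; };

// Rotate a point about the origin by the given angle in degrees.
Vec2 rotate(Vec2 v, float degrees)
{
    const float r = degrees * 3.14159265f / 180.0f;
    return { v.x * std::cos(r) - v.y * std::sin(r),
             v.x * std::sin(r) + v.y * std::cos(r) };
}

Vec2 translate(Vec2 v, float tx, float ty) { return { v.x + tx, v.y + ty }; }

int main()
{
    const Vec2 corner = { 10.0f, 0.0f };      // a rectangle corner in object space
    const float tx = 100.0f, ty = 50.0f;      // the vehicle position

    Vec2 a = translate(rotate(corner, 30.0f), tx, ty);  // rotate, then place: stays near (100, 50)
    Vec2 b = rotate(translate(corner, tx, ty), 30.0f);  // place, then rotate: swings around the world origin

    std::printf("rotate then translate: %.1f %.1f\n", a.x, a.y);   // about 108.7 55.0
    std::printf("translate then rotate: %.1f %.1f\n", b.x, b.y);   // about 70.3 98.3
}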
Inertiatic provided a very good response. From a code perspective, your transformations will happen in the reverse order they appear. In other words, transforms closer to the actual drawing code will be applied first.
For example:
glRotate();
glTranslate();
glScale();
drawMyThing();
Will first scale your thing, then translate it, then rotate it. You effectively need to "read your code backwards" to figure out which transforms are being applied in which order. Also keep in mind what the state of these transforms is when you push and pop the model-view stack.
Make sure the rotation is applied before the translation.