Flip image in OpenGL with scale

I'm trying to flip an image with OpenGL.
I can draw it normally and it looks like this:
There is a known pattern for this: translate, scale by -1, translate back.
I've tried about 50 combinations, but nothing I do ever shows up:
By translating to the center of the screen, the negative scale should make the image appear above, but it doesn't. Considering the normally scaled image appears below, it should work.
GL11.glTranslatef( 0, height/2, 0.0f );   // move the pivot to the vertical center of the screen
GL11.glScalef( 1f, -1f, 0.0f );           // negate y to flip vertically (note: the z scale should normally stay 1, not 0)
GL11.glTranslatef( 0, -height/2, 0.0f );  // move the pivot back
Images.image.draw(200, height/2, 300, 20);
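For comparison, here is a minimal sketch of the usual flip-about-a-pivot pattern, written with plain C-style GL calls (the LWJGL GL11.* calls map one-to-one). It mirrors the image about its own horizontal center line rather than the screen's; imgX/imgY/imgW/imgH and drawImage() are assumed names standing in for Images.image.draw(...):
// Sketch: flip an image vertically about its own horizontal center line.
// imgX/imgY/imgW/imgH describe the quad that drawImage() renders (assumed names).
void drawFlipped(float imgX, float imgY, float imgW, float imgH) {
    float pivotY = imgY + imgH * 0.5f;           // the image's vertical center
    glPushMatrix();
    glTranslatef(0.0f, pivotY, 0.0f);            // move the pivot back into place
    glScalef(1.0f, -1.0f, 1.0f);                 // negate y to flip vertically; keep z at 1
    glTranslatef(0.0f, -pivotY, 0.0f);           // shift so the pivot sits at y = 0
    drawImage(imgX, imgY, imgW, imgH);           // hypothetical stand-in for Images.image.draw(...)
    glPopMatrix();
}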

Related

Understanding the window coordinates' interpretation in OpenGL

I was trying to understand OpenGL a bit more deeply and I got stuck on the issue below.
The snippet below matches my understanding, and the outputs are as I assumed.
glViewport(0, 0 ,800, 480);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(-400.0, 400.0, -240.0, 240.0, 1.0, 100.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0, 0, -1);
glRotatef(0, 0, 0, 1);
glBegin(GL_QUADS);
glVertex3f(-128, -128, 0.0f);
glVertex3f(128, -128, 0.0f);
glVertex3f(128, 128, 0.0f);
glVertex3f(-128, 128, 0.0f);
glEnd();
The window coordinates (Wx, Wy, Wz) for the above snippet are
(272.00000286102295, 111.99999332427979, 5.9604644775390625e-008)
(527.99999713897705, 111.99999332427979, 5.9604644775390625e-008)
(527.99999713897705, 368.00000667572021, 5.9604644775390625e-008)
(272.00000286102295, 368.00000667572021, 5.9604644775390625e-008)
I did a glReadPixels() and dumped the output to a bmp file. In the image I get a quad as expected at the (Wx, Wy) mentioned above (since in images the origin is at the top left, I took care of subtracting from the window height, i.e. 480, while verifying the bmp). This output matched my understanding: (Wx, Wy) is used as a 2D coordinate and Wz is used for depth.
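As a sanity check, the first window coordinate above can be reproduced by hand from the glFrustum parameters and the viewport transform. A minimal sketch (the formulas assume the symmetric frustum set up above):
#include <cstdio>
int main() {
    // Vertex (-128, -128, 0) after glTranslatef(0, 0, -1) sits at eye-space z = -1.
    double xe = -128.0, ye = -128.0, ze = -1.0;
    // glFrustum(-400, 400, -240, 240, 1, 100): for this symmetric frustum
    // x_clip = (2n/(r-l)) * xe, y_clip = (2n/(t-b)) * ye, w_clip = -ze.
    double xc = (2.0 * 1.0 / 800.0) * xe;       // -0.32
    double yc = (2.0 * 1.0 / 480.0) * ye;       // -0.5333...
    double wc = -ze;                            //  1.0
    // Perspective divide, then the viewport transform for glViewport(0, 0, 800, 480).
    double wx = (xc / wc * 0.5 + 0.5) * 800.0;  // 272
    double wy = (yc / wc * 0.5 + 0.5) * 480.0;  // 112
    printf("(%f, %f)\n", wx, wy);               // matches the first (Wx, Wy) listed above
    return 0;
}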
Now comes the issue. I tried the below code snippet.
glViewport(0, 0 ,800, 480);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(-400.0, 400.0, -240.0, 240.0, 1.0, 100.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(100, 0, -1);
glRotatef(30, 0, 1, 0);
glBegin(GL_QUADS);
glVertex3f(-128, -128, 0.0f);
glVertex3f(128, -128, 0.0f);
glVertex3f(128, 128, 0.0f);
glVertex3f(-128, 128, 0.0f);
glEnd();
The window coordinates for the above snippet are
(400.17224205479812, 242.03174613770986, 1.0261343689191909)
(403.24386530741430, 238.03076912806583, 0.99456100555566640)
(403.24386530741430, 241.96923087193414, 0.99456100555566640)
(400.17224205479812, 237.96825386229017, 1.0261343689191909)
When I dumped the output to a bmp file, I expected to get a very small parallelogram (approximately a 4 x 4 square transformed into a parallelogram) based on the above (Wx, Wy). But this was not the case. The image had a different set of coordinates, as below:
(403, 238)
(499, 113)
(499, 366)
(403, 241)
I have listed the coordinates in CW order as seen in the image.
I got lost here. Can anyone help me understand what is happening in the 2nd case, and why?
How did I get a point at (499, 113) on the screen when it is nowhere among the calculated window coordinates?
I used gluProject() to compute the window coordinates.
Note: I'm using OpenGL 2.0. I'm just trying to understand the concepts here, so please don't suggest using versions > OpenGL 3.0.
Edit
This is an update for the answer posted by derhass.
The homogeneous (clip-space) coordinates after the projection matrix for the 2nd case are as follows:
(-0.027128123630699719, -0.53333336114883423, -66.292930483818054, -63.000000000000000)
(0.52712811245482882, -0.53333336114883423, 64.292930722236633, 65.00000000000000)
(0.52712811245482882, 0.53333336114883423, 64.292930722236633, 65.000000000000000)
(-0.027128123630699719, 0.53333336114883423, -66.292930483818054, 63.000000000000000)
So here, the vertices where z > -1 will get clipped at the near plane. When this is the case, shouldn't GL use the point projected onto the z = -1 plane?
The thing you are missing here is clipping.
After this
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(-400.0, 400.0, -240.0, 240.0, 1.0, 100.0);
you basically have a camera at the origin, looking along the -z direction, with the near plane at z=-1 and the far plane at z=-100. Now you draw a square spanning -128 to 128 in x and y, rotated by 30 degrees around the y (up) axis, and shifted by -1 along z (and 100 along x, but that is not the crucial point here). Since you rotated the square around its center point, the z values of two of the points end up well in front of the near plane (in fact behind the camera), while the other two fall inside the frustum. (You can also see this in that those two points match your expectations.)
Now directly projecting all 4 points to window space is not what GL does. It transforms the points to clip space, intersects the primitives with all 6 sides of the viewing frustum and finally projects the clipped primitives into window space for rasterization.
The projection you did is actually only meaningful for points which lie inside the frustum. Two of your points lie behind the camera, and projecting points behind the camera creates a mirrored image of those points in front of the camera.
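To see that mirroring in numbers, here is a minimal sketch that applies the perspective divide and the glViewport(0, 0, 800, 480) transform to the first clip-space vertex listed in the question's edit (w = -63, i.e. behind the camera):
#include <cstdio>
int main() {
    // First clip-space vertex from the edit above (x, y, w); note that w is negative.
    double x = -0.027128123630699719, y = -0.53333336114883423, w = -63.0;
    double ndcX = x / w, ndcY = y / w;          // dividing by a negative w flips the sign
    double winX = (ndcX * 0.5 + 0.5) * 800.0;   // viewport transform, width 800
    double winY = (ndcY * 0.5 + 0.5) * 480.0;   // viewport transform, height 480
    printf("(%f, %f)\n", winX, winY);           // ~ (400.17, 242.03): what gluProject() reports,
    return 0;                                   // not where the clipped primitive rasterizes
}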

DirectX, scaling orthgraphic vertices to world space vertices

Introduction
Let's say I have the following vertices:
const VERTEX World::vertices[ 4 ] = {
{ -960.0f, -600.0f, 0.0f, 0.0f, 0.0f, -1.0f, 0.0f, 0.0f }, // side 1 screen coordinates centred
{ 960.0f, -600.0f, 0.0f, 0.0f, 0.0f, -1.0f, 1.0f, 0.0f },
{ -960.0f, 600.0f, 0.0f, 0.0f, 0.0f, -1.0f, 0.0f, 1.0f },
{ 960.0f, 600.0f, 0.0f, 0.0f, 0.0f, -1.0f, 1.0f, 1.0f }   // 600.0f here (960.0f looks like a typo, given the 1920 x 1200 screen)
};
You may have guessed that 960 * 2 is 1920, which is the width of my screen, and the same goes for 600 * 2 being 1200, the height.
These vertices represent a rectangle that will fill up my ENTIRE screen where the origin is in the centre of my screen.
Issue
So up until now, I have been using an Orthographic view without a projection:
matView = XMMatrixOrthographicLH( Window::width, Window::height, -1.0, 1.0 );
Any matrix that was being sent to the screen was multiplied by matView and it seemed to work great. More specifically, my image, using the above vertices array, fit snugly on my screen and was 1:1 in pixels to its original form.
Unfortunately, I need 3D now... and I just realised I'm going to need some projection... so I prepared this little puppy:
XMVECTOR vecCamPosition = XMVectorSet( 0.0f, 0.0f, -1.0f, 0 ); // I did try setting z to -100.0f but that didn't work well as I can't scale it back any more... and it's not accurate to the 1:1 pixel I want
XMVECTOR vecCamLookAt = XMVectorSet( 0.0f, 0.0f, 0.0f, 0.0f );
XMVECTOR vecCamUp = XMVectorSet( 0.0f, 1.0f, 0.0f, 0.0f );
matView = XMMatrixLookAtLH( vecCamPosition, vecCamLookAt, vecCamUp );
matProjection = XMMatrixPerspectiveFovLH(
XMConvertToRadians( 45 ), // the field of view
Window::width / Window::height, // aspect ratio
1.0f, // the near view-plane
100.0f ); // the far view-plane
You may already know what the problem is... but if not: I have just set my field of view to 45 degrees. This will make a nice perspective and do 3D stuff great, but my vertices array is no longer going to cut the mustard, because the fov and screen aspect have been greatly reduced (or increased), so the vertices are going to be far too huge for the current view I am looking at (see image).
I was thinking that I need to do some scaling to the output matrix to scale the huge coordinates back down to the respective size my fov and screen aspect is now asking for.
What must I do to use the vertices array as it is (1:1 pixel ratio to the original image size) while still allowing 3D stuff to happen, like having a fly fly around the screen, rotate, and go closer and further into the frustum, etc.?
Goal
I'm trying to avoid changing every single object's vertices array to a rough scaled prediction of what the original image would look like in world space.
Some extra info
I just wanted to clarify what kind of matrix operations I am currently doing to the world and then how I render using those changes... so this is me doing some translations on my big old background image:
// flipY just turns the textures back around
// worldTranslation is actually the position for the background, so always 0,0,0... as you can see 0.5 was there to make sure I ordered things that were drawn correctly when using orthographic
XMMATRIX worldTranslation = XMMatrixTranslation( 0.0f, 0.0f, 0.5f );
world->constantBuffer.Final = flipY * worldTranslation * matView;
// My thoughts are on this that somehow I need to add a scaling to this matrix...
// but if I add scaling here... it's going to mean every other game object
// (players, enemies, pickups) are going to need to be scaled... and really I just want to
// have to scale the whole lot the once.. right?
And finally this Is where it is drawn to the screen:
// Background
d3dDeviceContext->PSSetShaderResources( 0, 1, world->textures[ 0 ].GetAddressOf( ) ); // Set up texture of background
d3dDeviceContext->IASetVertexBuffers( 0, 1, world->vertexbuffer.GetAddressOf( ), &stride, &offset ); // Set up vertex buffer (do I need the scaling here?)
d3dDeviceContext->IASetPrimitiveTopology( D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST ); // How the vertices be drawn
d3dDeviceContext->IASetIndexBuffer( world->indexbuffer.Get( ), DXGI_FORMAT_R16_UINT, 0 ); // Set up index buffer
d3dDeviceContext->UpdateSubresource( constantbuffer.Get( ), 0, 0, &world->constantBuffer, 0, 0 ); // set the new values for the constant buffer
d3dDeviceContext->OMSetBlendState( blendstate.Get( ), 0, 0xffffffff ); // DONT FORGET IF YOU DISABLE THIS AND YOU WANT COLOUR, * BY Color.a!!!
d3dDeviceContext->DrawIndexed( ARRAYSIZE( world->indices ), 0, 0 ); // draw
And then this is what I have done to apply my matProjection, which has supersized all my vertices:
world->constantBuffer.Final = flipY * worldTranslation * matView * matProjection; // My new lovely projection and view that make everything hugeeeee... I literally see like 1 pixel of the background brah!
Please feel free to take a copy of my game and run it as is (Windows 8 application Visual studio 2013 express project) in the hopes that you can help me out with putting this all into 3D: https://github.com/jimmyt1988/FlyGame/tree/LeanerFramework/Game
It's me again. Let me try to clear a few things up.
1
Here is a little screenshot from an editor of mine:
I have edited in little black boxes to illustrate something. The axis circles you see around the objects are rendered at exactly the same size. However, they are rendered through a perspective projection. As you can see, the one on the far left is something like twice as large as the one in the center. This is due purely to the nature of a projection like that. If this is unacceptable, you must use a non-perspective projection.
2
The only way to maintain a 1:1 ratio of screen space to UV space is to have the object rendered at 1 pixel on screen per 1 pixel of texture. There is nothing more to it than that. However, what you can do is change your texture filter options. Filter options are designed specifically for rendering non-1:1 ratios. For example, the code:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
Tells OpenGL: "If you are told to sample a pixel, don't interpolate anything. Just take the nearest value on the texture and paste it on the screen."
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
This code, however, does something much better: it interpolates between pixels. I think this may be what you want, if you aren't already doing it.
These pictures (taken from http://www.opengl-tutorial.org/beginners-tutorials/tutorial-5-a-textured-cube/) show the difference: the first image uses nearest filtering, the second linear.
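Since the question itself is Direct3D 11 rather than OpenGL, the rough equivalent there is a sampler state. A minimal sketch, assuming d3dDevice and d3dDeviceContext are the device and context from the question's renderer:
// Sketch: D3D11 counterpart of the GL filter settings above.
D3D11_SAMPLER_DESC sd = {};
sd.Filter = D3D11_FILTER_MIN_MAG_MIP_POINT;           // "nearest"; use D3D11_FILTER_MIN_MAG_MIP_LINEAR for linear
sd.AddressU = D3D11_TEXTURE_ADDRESS_CLAMP;
sd.AddressV = D3D11_TEXTURE_ADDRESS_CLAMP;
sd.AddressW = D3D11_TEXTURE_ADDRESS_CLAMP;
sd.ComparisonFunc = D3D11_COMPARISON_NEVER;
sd.MaxLOD = D3D11_FLOAT32_MAX;
Microsoft::WRL::ComPtr<ID3D11SamplerState> sampler;
d3dDevice->CreateSamplerState( &sd, sampler.GetAddressOf() );
d3dDeviceContext->PSSetSamplers( 0, 1, sampler.GetAddressOf() ); // bind to the pixel shader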
Here is what it comes down to. You request that:
What must I do to use the vertices array as it is (1:1 pixel ratio to the original image size) while still allowing 3D stuff to happen, like having a fly fly around the screen, rotate, and go closer and further into the frustum, etc.?
What you are requesting is by definition not possible, so you have to look for alternative solutions. I hope that helps somewhat.

How to scale an image in only one direction in opengl?

I understand that glScalef(x, 0, 0) scales on the X axis in both directions (+ve and -ve). But how do I scale in only one direction (either +ve or -ve)? And what values should y and z have (0 or 1)? Throw some light on this topic.
glTranslatef(x*0.5f, 0.0f, 0.0f);
glScalef(x, 1.0f, 1.0f);
drawObject();
Read this code from bottom to top: first scale my object by (x, 1, 1), then translate it by (x*0.5, 0, 0).
Do not scale by 0! glScalef(x, 0, 0) will make your object disappear!
Look here and here.
Note that you are using old OpenGL; try to look for "modern" OpenGL tutorials.
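Put together, a minimal sketch of the same idea that stretches an object toward +x only, keeping one edge fixed; leftX (the x coordinate of the edge that should stay put) is an assumption:
// Sketch: scale along +x only by using the object's left edge as the pivot.
void drawStretched(float leftX, float scaleX) {
    glPushMatrix();
    glTranslatef(leftX, 0.0f, 0.0f);    // applied last: move the pivot back into place
    glScalef(scaleX, 1.0f, 1.0f);       // stretch along x only; y and z stay at 1
    glTranslatef(-leftX, 0.0f, 0.0f);   // applied first: move the pivot to the origin
    drawObject();                       // drawObject() as in the answer above
    glPopMatrix();
}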

3D object in front of 2D sprite (background), how?

I'm new to Direct3D and I'm working on a project that takes pictures from a webcam and draws some 3D objects in front of them.
I was able to render the webcam images as a background using an orthographic projection.
//init matrix
D3DXMatrixOrthoLH(&Ortho, frameWidth, frameHeight, 0.0f, 100.0f);
//some code
D3DXVECTOR3 position = D3DXVECTOR3(0.0f, 0.0f, 100.0f);
g_pSprite->Begin(D3DXSPRITE_OBJECTSPACE);
g_pSprite->Draw(g_pTexture,NULL,&center,&position,0xFFFFFFFF);
g_pSprite->End();
Then I tried to insert a simple triangle in front of it. The matrices are set up as follows:
D3DXMATRIXA16 matWorld;
D3DXMatrixTranslation( &matWorld, 0.0f,0.0f,5.0f );
g_pd3dDevice->SetTransform( D3DTS_WORLD, &matWorld );
D3DXMATRIXA16 matProj;
D3DXMatrixPerspectiveFovLH( &matProj, D3DX_PI / 4, 1.0f, 1.0f, 100.0f );
g_pd3dDevice->SetTransform( D3DTS_PROJECTION, &matProj );
5.0 should be < 100.0 and the triangle is supposed to appear in front of the images. However, it does not appear unless I set the z position to 0. At position 0, I can see the triangle but the background is blank.
Do you guys have any suggestions?
I would not draw the webcam image in object space (D3DXSPRITE_OBJECTSPACE) if you intend to use your image solely as a background; something like
D3DXVECTOR3 backPos (0.f, 0.f, 0.f);
pBackgroundSprite->Begin(D3DXSPRITE_ALPHABLEND);
pBackgroundSprite->Draw (pBackgroundTexture,
0,
0,
&backPos,
0xFFFFFFFF);
pBackgroundSprite->End();
should hopefully do what you're looking for.
As a quick fix you could disable depth testing as follows:
g_pd3dDevice->SetRenderState(D3DRS_ZENABLE, D3DZB_FALSE);
This way, which primitives appear on top simply reflects the order in which they are drawn.
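A rough sketch of the per-frame order this implies, reusing the names from the question and the snippet above (g_pd3dDevice, pBackgroundSprite, pBackgroundTexture, backPos); drawTriangle() is a hypothetical stand-in for whatever draws the 3D geometry:
// Sketch: draw the webcam background with depth testing off, then the 3D pass.
g_pd3dDevice->SetRenderState( D3DRS_ZENABLE, D3DZB_FALSE );        // background ignores the depth buffer
pBackgroundSprite->Begin( D3DXSPRITE_ALPHABLEND );                 // screen space, not D3DXSPRITE_OBJECTSPACE
pBackgroundSprite->Draw( pBackgroundTexture, 0, 0, &backPos, 0xFFFFFFFF );
pBackgroundSprite->End();
g_pd3dDevice->SetRenderState( D3DRS_ZENABLE, D3DZB_TRUE );         // re-enable depth for the 3D geometry
drawTriangle();                                                    // hypothetical: the perspective-projected triangle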
Also, try using the PIX debugging tool (this is bundled with the DirectX SDK). This is always my first port of call for drawing discrepancies as it allows you to debug each Draw call separately with access to the depth buffer and transformed vertices.

Trying to zoom in on an arbitrary rect within a screen-aligned quad

I've got a screen-aligned quad, and I'd like to zoom into an arbitrary rectangle within that quad, but I'm not getting my math right.
I think I've got the translate worked out, just not the scaling. Basically, my code is the following:
//
// render once zoomed in
glPushMatrix();
glTranslatef(offX, offY, 0);
glScalef(?wtf?, ?wtf?, 1.0f);
RenderQuad();
glPopMatrix();
//
// render PIP display
glPushMatrix();
glTranslatef(0.7f, 0.7f, 0);
glScalef(0.175f, 0.175f, 1.0f);
RenderQuad();
glPopMatrix();
Anyone have any tips? The user selects a rect area, and then those values are passed to my rendering object as [x, y, w, h], where those values are percentages of the viewport's width and height.
Given that your values are passed as [x, y, w, h], I think you want to first translate in the negative direction to get the upper-left corner at (0, 0), then scale by 1/w and 1/h to fill the screen. Like this:
//
// render once zoomed in
glPushMatrix();
glTranslatef(-x, -y, 0);
glScalef(1.0/w, 1.0/h, 1.0f);
RenderQuad();
glPopMatrix();
Does this work?
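If it doesn't, a variation worth trying is the minimal sketch below. It assumes the rect [x, y, w, h] is given as fractions of the quad's extent, and it issues the scale before the translate, since in the fixed-function pipeline the transform issued last is applied to the vertices first:
// Sketch: map the selected rect [x, y, w, h] to the full quad: p -> (p - (x, y)) / (w, h).
glPushMatrix();
glScalef(1.0f / w, 1.0f / h, 1.0f);   // applied second: stretch the rect up to full size
glTranslatef(-x, -y, 0.0f);           // applied first: move the rect's corner to the origin
RenderQuad();
glPopMatrix();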
When I've needed to do this, I've always just changed the parameters I passed to glOrtho, glFrustum, gluPerspective, or whatever (whichever I was using).
From your comment it looks like you want to draw the same quad (RenderQuad()) both as the full image and in PIP mode.
Assuming you have widthPIP, heightPIP, and the startXY position of the PIP window, use widthPIP/totalWidth and heightPIP/totalHeight to scale the original quad and re-render it at the given startXY.
Is this what you are looking for?
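For illustration, a minimal sketch of that suggestion, assuming the quad spans [0, 1] x [0, 1] so the numbers line up with the question's PIP block (all concrete values here are assumptions):
// Sketch: re-render the same quad shrunk into a picture-in-picture window.
float totalWidth = 1.0f, totalHeight = 1.0f;   // full quad size (assumed [0, 1] x [0, 1])
float widthPIP = 0.175f, heightPIP = 0.175f;   // PIP size, matching the 0.175 scale in the question
float startX = 0.7f, startY = 0.7f;            // PIP position, matching the question's translate
glPushMatrix();
glTranslatef(startX, startY, 0.0f);
glScalef(widthPIP / totalWidth, heightPIP / totalHeight, 1.0f);
RenderQuad();
glPopMatrix();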