Introduction
Let's say I have the following vertices:
const VERTEX World::vertices[ 4 ] = {
{ -960.0f, -600.0f, 0.0f, 0.0f, 0.0f, -1.0f, 0.0f, 0.0f }, // side 1 screen coordinates centred
{ 960.0f, -600.0f, 0.0f, 0.0f, 0.0f, -1.0f, 1.0f, 0.0f },
{ -960.0f, 600.0f, 0.0f, 0.0f, 0.0f, -1.0f, 0.0f, 1.0f },
{ 960.0f, 600.0f, 0.0f, 0.0f, 0.0f, -1.0f, 1.0f, 1.0f }
};
You may have guessed that 960 * 2 is 1920, which is the width of my screen, and the same goes for 600 * 2 being 1200.
These vertices represent a rectangle that will fill up my ENTIRE screen where the origin is in the centre of my screen.
Issue
So up until now, I have been using an orthographic matrix as my view, with no separate projection:
matView = XMMatrixOrthographicLH( Window::width, Window::height, -1.0, 1.0 );
Any matrix that was being sent to the screen was multiplied by matView, and it seemed to work great. More specifically, my image (using the above vertices array) fit snugly in my screen and was 1:1 with its original pixels.
Unfortunately, I need 3D now... and I just realised I'm going to need some projection... so I prepared this little puppy:
XMVECTOR vecCamPosition = XMVectorSet( 0.0f, 0.0f, -1.0f, 0 ); // I did try setting z to -100.0f but that didn't work well as I can't scale it back any more... and it's not accurate to the 1:1 pixel I want
XMVECTOR vecCamLookAt = XMVectorSet( 0.0f, 0.0f, 0.0f, 0.0f );
XMVECTOR vecCamUp = XMVectorSet( 0.0f, 1.0f, 0.0f, 0.0f );
matView = XMMatrixLookAtLH( vecCamPosition, vecCamLookAt, vecCamUp );
matProjection = XMMatrixPerspectiveFovLH(
XMConvertToRadians( 45 ), // the field of view
(float) Window::width / (float) Window::height, // aspect ratio (cast so this isn't integer division)
1.0f, // the near view-plane
100.0f ); // the far view-plane
You may already know what the problem is... but if not: I have just set my field of view to 45 degrees. That will make a nice perspective and do 3D stuff great, but my vertices array is no longer going to cut the mustard, because the FOV and screen aspect now define a greatly reduced (or increased) visible area, so the vertices are far too huge for the view I am looking at (see image).
I was thinking that I need to do some scaling to the output matrix to scale the huge coordinates back down to the respective size my fov and screen aspect is now asking for.
What must I do to use the vertices array as it is (1:1 pixel ratio to the original image size) while still allowing 3D stuff to happen, like have a fly fly around the screen and rotate and go closer and further into the frustum etc...
Goal
I'm trying to avoid changing every single object's vertices array to a roughly scaled prediction of what the original image would look like in world space.
Some extra info
I just wanted to clarify what kind of matrix operations I am currently doing to the world, and how I then render using those changes... so this is me doing some translations on my big old background image:
// flipY just turns the textures back around
// worldTranslation is actually the position for the background, so always 0,0,0... as you can see 0.5 was there to make sure I ordered things that were drawn correctly when using orthographic
XMMATRIX worldTranslation = XMMatrixTranslation( 0.0f, 0.0f, 0.5f );
world->constantBuffer.Final = flipY * worldTranslation * matView;
// My thoughts are on this that somehow I need to add a scaling to this matrix...
// but if I add scaling here... it's going to mean every other game object
// (players, enemies, pickups) are going to need to be scaled... and really I just want to
// have to scale the whole lot the once.. right?
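In other words, something like this is what I had in mind, although I'm only guessing and the scale factor is made up:
// Guess: fold a single uniform scale into matView so every object picks it up for free.
float scaleToWorld = 0.001f; // made-up value, I'd still have to work out the real factor
matView = XMMatrixScaling( scaleToWorld, scaleToWorld, 1.0f ) * XMMatrixLookAtLH( vecCamPosition, vecCamLookAt, vecCamUp );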
And finally, this is where it is drawn to the screen:
// Background
d3dDeviceContext->PSSetShaderResources( 0, 1, world->textures[ 0 ].GetAddressOf( ) ); // Set up texture of background
d3dDeviceContext->IASetVertexBuffers( 0, 1, world->vertexbuffer.GetAddressOf( ), &stride, &offset ); // Set up vertex buffer (do I need the scaling here?)
d3dDeviceContext->IASetPrimitiveTopology( D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST ); // How the vertices will be drawn
d3dDeviceContext->IASetIndexBuffer( world->indexbuffer.Get( ), DXGI_FORMAT_R16_UINT, 0 ); // Set up index buffer
d3dDeviceContext->UpdateSubresource( constantbuffer.Get( ), 0, 0, &world->constantBuffer, 0, 0 ); // set the new values for the constant buffer
d3dDeviceContext->OMSetBlendState( blendstate.Get( ), 0, 0xffffffff ); // DONT FORGET IF YOU DISABLE THIS AND YOU WANT COLOUR, * BY Color.a!!!
d3dDeviceContext->DrawIndexed( ARRAYSIZE( world->indices ), 0, 0 ); // draw
And then this is what I have done to apply my matProjection, which has supersized all my vertices:
world->constantBuffer.Final = flipY * worldTranslation * matView * matProjection; // My new lovely projection and view that make everything hugeeeee... I literally see like 1 pixel of the background brah!
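One thing I tried working out on paper (no idea if the maths is right, so treat it as a rough sketch): if I pull the camera back to the distance d = (height / 2) / tan(fov / 2), the plane at z = 0 should map 1:1 to screen pixels, while things nearer or further still get perspective:
// Rough sketch (untested): pull the camera back so the z = 0 plane is 1:1 with screen pixels.
// For fov = 45 degrees and a 1200 pixel high window: d = 600 / tan(22.5 deg) ~= 1448.5
float fovY = XMConvertToRadians( 45.0f );
float d = ( Window::height / 2.0f ) / tanf( fovY / 2.0f );
XMVECTOR vecCamPosition = XMVectorSet( 0.0f, 0.0f, -d, 0.0f );
matView = XMMatrixLookAtLH( vecCamPosition, vecCamLookAt, vecCamUp );
// the far plane has to be pushed out past d, otherwise the background gets clipped
matProjection = XMMatrixPerspectiveFovLH( fovY, (float) Window::width / (float) Window::height, 1.0f, 10000.0f );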
Please feel free to take a copy of my game and run it as is (Windows 8 application Visual studio 2013 express project) in the hopes that you can help me out with putting this all into 3D: https://github.com/jimmyt1988/FlyGame/tree/LeanerFramework/Game
It's me again. Let me try to clear a few things up.
1
Here is a little screenshot from an editor of mine:
I have edited in little black boxes to illustrate something. The axis circles you see around the objects are rendered at exactly the same size. However, they are rendered through a perspective projection. As you can see, the one on the far left is something like twice as large as the one in the center. This is due purely to the nature of a projection like that. If this is unacceptable, you must use a non-perspective projection.
2
The only way it is possible to maintain a 1:1 ratio of screen space to UV space is to have the object rendered at 1 pixel on screen per 1 pixel on texture. There is nothing more to it than that. However, what you can do is change your texture filter options. Filter options are designed specifically for rendering at non-1:1 ratios. For example, the code:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
Tells OpenGL: "If you are told to sample a pixel, don't interpolate anything. Just take the nearest value on the texture and paste it on the screen."
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
This code, however, does something much better: it interpolates between pixels. I think this may be what you want, if you aren't already doing it.
These pictures (taken from http://www.opengl-tutorial.org/beginners-tutorials/tutorial-5-a-textured-cube/) show this:
Nearest:
Linear:
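Since your question is Direct3D 11 rather than OpenGL, the equivalent switch there is the sampler state's filter. A rough sketch, assuming a d3dDevice alongside the d3dDeviceContext from your code:
// D3D11 equivalent of the GL filter settings above.
D3D11_SAMPLER_DESC samplerDesc = {};
samplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR; // or D3D11_FILTER_MIN_MAG_MIP_POINT for "nearest"
samplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_CLAMP;
samplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_CLAMP;
samplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_CLAMP;
samplerDesc.ComparisonFunc = D3D11_COMPARISON_NEVER;
samplerDesc.MaxLOD = D3D11_FLOAT32_MAX;
ID3D11SamplerState* samplerState = nullptr;
d3dDevice->CreateSamplerState( &samplerDesc, &samplerState );
d3dDeviceContext->PSSetSamplers( 0, 1, &samplerState );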
Here is what it comes down to. You request that:
What must I do to use the vertices array as it is (1:1 pixel ratio to the original image size) while still allowing 3D stuff to happen, like have a fly fly around the screen and rotate and go closer and further into the frustum etc...
What you are requesting is by definition not possible, so you have to look for alternative solutions. I hope that helps somewhat.
I want to code a little Minecraft clone. Now I have tried to add some simple lighting, but my results are very bad. I have read a lot about it and tried different solutions, without any result.
That's what I've got.
Initializing:
GL11.glViewport(0, 0, Config.GAME_WIDTH, Config.GAME_HEIGHT);
GL11.glMatrixMode(GL11.GL_PROJECTION); // Select The Projection Matrix
GL11.glLoadIdentity(); // Reset The Projection Matrix
GL11.glMatrixMode(GL11.GL_MODELVIEW); // Select The Modelview Matrix
GL11.glLoadIdentity(); // Reset The Modelview Matrix
GL11.glEnable(GL11.GL_DEPTH_TEST);
GL11.glEnable(GL11.GL_CULL_FACE);
GL11.glFrontFace(GL11.GL_CCW);
GL11.glLightModeli(GL11.GL_LIGHT_MODEL_LOCAL_VIEWER, GL11.GL_TRUE);
GL11.glEnable(GL11.GL_LIGHTING);
GL11.glEnable(GL11.GL_LIGHT0);
FloatBuffer qaAmbientLight = floatBuffer(0.0f, 0.0f, 0.0f, 1.0f);
FloatBuffer qaDiffuseLight = floatBuffer(1.0f, 1.0f, 1.0f, 1.0f);
FloatBuffer qaSpecularLight = floatBuffer(1.0f, 1.0f, 1.0f, 1.0f);
GL11.glLight(GL11.GL_LIGHT0, GL11.GL_AMBIENT, qaAmbientLight);
GL11.glLight(GL11.GL_LIGHT0, GL11.GL_DIFFUSE, qaDiffuseLight);
GL11.glLight(GL11.GL_LIGHT0, GL11.GL_SPECULAR, qaSpecularLight);
FloatBuffer qaLightPosition = floatBuffer(lightPosition.x, lightPosition.y, lightPosition.z, 1.0f);
GL11.glLight(GL11.GL_LIGHT0, GL11.GL_POSITION, qaLightPosition);
So now, before each render, I tried this:
GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT | GL11.GL_STENCIL_BUFFER_BIT);
GL11.glEnable(GL11.GL_DEPTH_TEST);
GL11.glClearColor(0.0f, 1.0f, 1.0f, 1.0f); // clear-color components are clamped to [0, 1]
GL11.glShadeModel(GL11.GL_FLAT);
GL11.glLoadIdentity();
FloatBuffer qaLightPosition = floatBuffer(lightPosition.x, lightPosition.y, lightPosition.z, 1.0f);
GL11.glLight(GL11.GL_LIGHT0, GL11.GL_POSITION, qaLightPosition);
FloatBuffer ambientMaterial = floatBuffer(0.2f, 0.2f, 0.2f, 1.0f);
FloatBuffer diffuseMaterial = floatBuffer(0.8f, 0.8f, 0.8f, 1.0f);
FloatBuffer specularMaterial = floatBuffer(0.0f, 0.0f, 0.0f, 1.0f);
GL11.glMaterial(GL11.GL_FRONT, GL11.GL_AMBIENT, ambientMaterial);
GL11.glMaterial(GL11.GL_FRONT, GL11.GL_DIFFUSE, diffuseMaterial);
GL11.glMaterial(GL11.GL_FRONT, GL11.GL_SPECULAR, specularMaterial);
GL11.glMaterialf(GL11.GL_FRONT, GL11.GL_SHININESS, 50.0f);
Of course this is not much code, but this is everything that concerns the lighting. Did I make a mistake? I read that OpenGL is not as good as DirectX for lighting and shadowing.
That's what it looks like:
http://img199.imageshack.us/img199/7014/testrender.png
Can someone give me tips to get it a better look?
I found one post with an awesome block landscape.
http://i.imgur.com/zIocp.jpg
That's how it should look :)
Can someone give me tips to get it a better look?
Neither OpenGL nor DirectX has anything to do with lighting and shadowing if you use the programmable pipeline. The normals become just another vertex attribute, which can be used for the lighting computation. The fixed-function pipeline is old and deprecated, and thus not recommended unless you are forced to use it.
Changing to shaders isn't really that hard, and you won't be limited by the fixed pipeline anymore; you then have complete control over how the lighting is computed, and you can easily output more debug information (such as coloring surfaces based on their normals).
That's how it should look :)
The screen you posted has also visible ambient occlusion. Achieving this effect without shaders would be extremely hard and simply not worth the effort.
I happen to be doing a similar project myself; I wouldn't mention it if it weren't open source and publicly available. Here's a sample result:
You can find the lighting shader code here.
I'll post an excerpt to prevent links from rotting:
float CalcDirectionalLightFactor(vec3 lightDirection, vec3 normal) {
    float DiffuseFactor = dot(normalize(normal), -lightDirection);
    if (DiffuseFactor > 0) {
        return DiffuseFactor;
    }
    else {
        return 0.0;
    }
}
vec3 DiffuseColor = Light0.Color * Light0.DiffuseIntensity * CalcDirectionalLightFactor(Light0.Direction, normal);
Bartek's answer is a good one. You will want to go down the path of writing your own shaders, understanding what OpenGL provides for shadowing and lighting, and not relying on the older, deprecated lighting model. It is a lot more complex than glEnable(LIGHTING_AND_SHADOWING).
But if you just want to play with your code to see the colors change from binary black/white, one potential idea is turning off the qaSpecularLight (which creates "glossy" all-white highlights that don't help you get to a "matte" look) and adjusting the glShadeModel setting to GL_SMOOTH shading.
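As a sketch using the plain C API names (the LWJGL GL11.* calls map one-to-one), those two tweaks would look roughly like this:
// Sketch only: kill the specular term and switch to smooth shading.
GLfloat noSpecular[] = { 0.0f, 0.0f, 0.0f, 1.0f };
glLightfv( GL_LIGHT0, GL_SPECULAR, noSpecular );   // no glossy white highlights from the light
glMaterialfv( GL_FRONT, GL_SPECULAR, noSpecular ); // ...or from the material
glShadeModel( GL_SMOOTH );                         // interpolate lighting across each face instead of GL_FLAT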
That should help somewhat, but will not get you all the way to your goal. Follow Bartek's suggested path (or google for similar ideas).
I'm trying to put a texture on one surface of a cube (if facing the XY plane the texture would be facing you).
No texture is getting drawn, only the wireframe, and I'm wondering what I'm doing wrong. I think it's the vertex coordinates?
Here's some code:
struct paperVertex {
D3DXVECTOR3 pos;
DWORD color; // The vertex color
D3DXVECTOR2 texCoor;
paperVertex(D3DXVECTOR3 p, DWORD c, D3DXVECTOR2 t) {pos = p; color = c; texCoor = t;}
paperVertex() {pos = D3DXVECTOR3(0,0,0); color = 0; texCoor = D3DXVECTOR2(0,0);}
};
D3DCOLOR color1 = D3DCOLOR_XRGB(255, 255, 255);
D3DCOLOR color2 = D3DCOLOR_XRGB(200, 200, 200);
vertices[0] = paperVertex(D3DXVECTOR3(-1.0f, -1.0f, -1.0f), color1, D3DXVECTOR2(1,0)); // bottom left corner of tex
vertices[1] = paperVertex(D3DXVECTOR3(-1.0f, 1.0f, -1.0f), color1, D3DXVECTOR2(0,0)); // top left corner of tex
vertices[2] = paperVertex(D3DXVECTOR3( 1.0f, 1.0f, -1.0f), color1, D3DXVECTOR2(0,1)); // top right corner of tex
vertices[3] = paperVertex(D3DXVECTOR3(1.0f, -1.0f, -1.0f), color1, D3DXVECTOR2(1,1)); // bottom right corner of tex
vertices[4] = paperVertex(D3DXVECTOR3(-1.0f, -1.0f, 1.0f), color1, D3DXVECTOR2(0,0));
vertices[5] = paperVertex(D3DXVECTOR3(-1.0f, 1.0f, 1.0f), color2, D3DXVECTOR2(0,0));
vertices[6] = paperVertex(D3DXVECTOR3(1.0f, 1.0f, 1.0f), color2, D3DXVECTOR2(0,0));
vertices[7] = paperVertex(D3DXVECTOR3(1.0f, -1.0f, 1.0f), color1, D3DXVECTOR2(0,0));
D3DXCreateTextureFromFile( md3dDev, "texture.bmp", &gTexture);
md3dDev->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_LINEAR);
md3dDev->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR);
md3dDev->SetTexture(0, gTexture);
md3dDev->SetStreamSource(0, mVtxBuf, 0, sizeof(paperVertex));
md3dDev->SetVertexDeclaration(paperDecl);
md3dDev->SetRenderState(D3DRS_FILLMODE, D3DFILL_WIREFRAME);
md3dDev->SetIndices(mIndBuf);
md3dDev->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0, VTX_NUM, 0, NUM_TRIANGLES);
Disclaimer: I have no Direct3D experience, but solid OpenGL and general computer graphics experience. Since the underlying concepts don't really differ, I'll attempt an answer whose correctness I'm 99% sure of.
You call md3dDev->SetRenderState(D3DRS_FILLMODE, D3DFILL_WIREFRAME) immediately before rendering and wonder why only the wireframe is drawn?
Keep in mind that using a texture doesn't magically turn a wireframe model into a solid model. It is still a wireframe model, with the texture applied only to the wireframe. You can only draw the whole primitive as wireframe or not.
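If solid, textured faces are what you want, the immediate fix is simply to switch the fill mode back before drawing (a sketch based on the code shown):
// draw filled, textured triangles instead of the wireframe
md3dDev->SetRenderState( D3DRS_FILLMODE, D3DFILL_SOLID );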
Likewise, using texture coordinates of (0,0) does not magically disable texturing for individual faces. You can only draw the whole primitive textured or not, though you might play with the texture coordinates and the texture's wrapping mode (and maybe the texture border) to make the "non-textured" faces use a uniform color from the texture and thus look untextured.
But in general, to achieve such differing render styles (like textured/non-textured, and especially wireframe/solid) within a single primitive, you won't get around splitting the primitive into multiple ones and drawing each with its dedicated render style.
EDIT: Regarding your comment: if you don't need wireframe, why enable it in the first place? Besides disabling wireframe, with your current texture coordinates the other faces won't just show a single color from the texture but a strangely distorted version of it. This is because your vertices (and their texture coordinates) are shared between different faces, yet the texture coordinates are currently chosen only so that the front face looks reasonable.
In such a situation you won't get around duplicating vertices, so that each face uses a set of 4 unique vertices. In the case of a cube you then won't actually need an index array anymore, because each face needs its own vertices. This is because a vertex conceptually represents all of its attributes (position, color, texCoord, ...), and you cannot have one vertex that shares a position between faces but uses different texture coordinates per face (you can get that effect, but only with two distinct vertices). Once you have duplicated the vertices accordingly, you can give the corner vertices of each face their respective texture coordinates: the usual [0,1] quad if you want the face textured normally, or all 0s if you want it to show a single color, in this case the color of the bottom-left (or top-left in D3D?) corner of the texture.
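As a sketch using your paperVertex struct (untested; note that in D3D the texture origin (0,0) is the top-left corner and v grows downwards), the front face would get its own four vertices with a full [0,1] UV quad:
// Dedicated vertices for the front face only, with a full [0,1] texture quad.
paperVertex frontFace[4] = {
    paperVertex( D3DXVECTOR3( -1.0f, -1.0f, -1.0f ), color1, D3DXVECTOR2( 0.0f, 1.0f ) ), // bottom left
    paperVertex( D3DXVECTOR3( -1.0f,  1.0f, -1.0f ), color1, D3DXVECTOR2( 0.0f, 0.0f ) ), // top left
    paperVertex( D3DXVECTOR3(  1.0f,  1.0f, -1.0f ), color1, D3DXVECTOR2( 1.0f, 0.0f ) ), // top right
    paperVertex( D3DXVECTOR3(  1.0f, -1.0f, -1.0f ), color1, D3DXVECTOR2( 1.0f, 1.0f ) )  // bottom right
};
// The other five faces each get their own four vertices too (24 in total), so every face
// can carry its own texture coordinates (and, later, its own normal).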
The same problem arises if you want to light the cube and need per-face normals instead of interpolated per-vertex normals. In that case you also have to introduce duplicate vertices that differ only in their normal attribute. Always keep in mind that a vertex conceptually consists of all of its attributes, and if two vertices have the same position but a different color/normal/texCoord/..., they are conceptually (and practically) different vertices.
Just wondering if someone can help me track down my issue with the following code, where the text color is not being set correctly (it's just rendering whatever color is in the background):
void RenderText(int x, int y, const char *string)
{
int i, len;
glUseProgram(0);
glLoadIdentity();
glColor3f(1.0f, 1.0f, 1.0f);
glTranslatef(0.0f, 0.0f, -5.0f);
glRasterPos2i(x, y);
glDisable(GL_TEXTURE_2D);
for (i = 0, len = strlen(string); i < len; i++)
{
glutBitmapCharacter(GLUT_BITMAP_8_BY_13, (int)string[i]);
}
glEnable(GL_TEXTURE_2D);
}
I've checked all the usual things (I think): disabling texturing, setting the color before calling glRasterPos, etc. I've also disabled shaders, but I'm still having issues.
Looks like you've forgotten to glDisable(GL_LIGHTING) before drawing your string.
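That is, something along these lines inside RenderText (a sketch; only re-enable afterwards if the rest of your scene needs lighting):
glDisable( GL_LIGHTING );      // otherwise the raster color comes out of the lighting calculation
glColor3f( 1.0f, 1.0f, 1.0f );
glRasterPos2i( x, y );         // the raster color is latched here
// ... glutBitmapCharacter loop ...
glEnable( GL_LIGHTING );       // restore it for the rest of the scene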
No color is stored with an OpenGL bitmap (which is what glutBitmapCharacter creates). The bitmap is monochrome and stores only shape.
When the bitmap is drawn (with glBitmap, which is what glutBitmapCharacter uses internally), the current raster color is used. The raster color is not always the same as the active color; see http://www.opengl.org/wiki/Coloring_a_bitmap.
Color is usually controlled with the glColor3f function, so if the text is white and shouldn't be, the following change should help:
glLoadIdentity();
glColor3f(0.5f, 0.5f, 0.5f); //<-- this line controls the color (now text is gray)
glTranslatef(0.0f, 0.0f, -5.0f);
glRasterPos2i(x, y);
Also, calling glDisable(GL_TEXTURE_2D) and glEnable(GL_TEXTURE_2D) here is unnecessary. Instead you can just call glBindTexture(GL_TEXTURE_2D, 0) to effectively disable texturing, and later use the same function to set the active texture again. Just make sure to call glEnable(GL_TEXTURE_2D) once in your initialization function.
For my project I needed to rotate a rectangle. I thought that would be easy, but I'm getting unpredictable behavior when running it.
Here is the code:
glPushMatrix();
glRotatef(30.0f, 0.0f, 0.0f, 1.0f);
glTranslatef(vec_vehicle_position_.x, vec_vehicle_position_.y, 0);
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f);
glVertex2f(0, 0);
glTexCoord2f(1.0f, 0.0f);
glVertex2f(width_sprite_, 0);
glTexCoord2f(1.0f, 1.0f);
glVertex2f(width_sprite_, height_sprite_);
glTexCoord2f(0.0f, 1.0f);
glVertex2f(0, height_sprite_);
glEnd();
glDisable(GL_BLEND);
glDisable(GL_TEXTURE_2D);
glPopMatrix();
The problem with that is that my rectangle is making a translation somewhere in the window while rotating. In other words, the rectangle doesn't keep the position vec_vehicle_position_.x and vec_vehicle_position_.y.
What's the problem ?
Thanks
You need to flip the order of your transformations:
glRotatef(30.0f, 0.0f, 0.0f, 1.0f);
glTranslatef(vec_vehicle_position_.x, vec_vehicle_position_.y, 0);
becomes
glTranslatef(vec_vehicle_position_.x, vec_vehicle_position_.y, 0);
glRotatef(30.0f, 0.0f, 0.0f, 1.0f);
To elaborate on the previous answers.
Transformations in OpenGL are performed via matrix multiplication. In your example you have:
M_r: the rotation transform
M_t: the translation transform
v: a vertex
and you had applied them in the order:
M_r * M_t * v
Using parentheses to clarify:
( M_r * ( M_t * v ) )
We see that the vertex is transformed by the closer matrix first, which in this case is the translation. It can be a bit counter-intuitive, because it requires you to specify the transformations in the order opposite to the one in which you want them applied. But if you think of how the transforms are placed on the matrix stack, it should hopefully make a bit more sense (the stack is pre-multiplied together).
Hence, in order to get your desired result, you need to specify the transforms in the opposite order.
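If it helps, here is a tiny standalone illustration (2D only, made-up numbers) of what the two orders do to one corner of the sprite:
#include <cmath>
#include <cstdio>

// Rotate a 2D point about the origin by 'angle' radians.
static void rotate2D( float angle, float &x, float &y )
{
    float rx = x * cosf( angle ) - y * sinf( angle );
    float ry = x * sinf( angle ) + y * cosf( angle );
    x = rx; y = ry;
}

int main( )
{
    const float angle = 30.0f * 3.14159265f / 180.0f;
    const float px = 100.0f, py = 50.0f;   // stand-in for vec_vehicle_position_
    float x, y;

    // glTranslate then glRotate == ( M_t * ( M_r * v ) ): rotate locally, then place at (px, py).
    x = 10.0f; y = 0.0f;                   // a corner of the sprite in local space
    rotate2D( angle, x, y );
    x += px; y += py;
    printf( "translate, then rotate: %.2f %.2f\n", x, y );  // stays near (100, 50)

    // glRotate then glTranslate == ( M_r * ( M_t * v ) ): place at (px, py), then swing it around the origin.
    x = 10.0f; y = 0.0f;
    x += px; y += py;
    rotate2D( angle, x, y );
    printf( "rotate, then translate: %.2f %.2f\n", x, y );  // ends up somewhere else entirely
    return 0;
}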
Inertiatic provided a very good response. From a code perspective, your transformations happen in the reverse of the order in which they appear; in other words, transforms closer to the actual drawing code are applied first.
For example:
glRotate();
glTranslate();
glScale();
drawMyThing();
This will first scale your thing, then translate it, and then rotate it. You effectively need to "read your code backwards" to figure out which transforms are being applied in which order. Also keep in mind what the state of these transforms is when you push and pop the model-view stack.
Make sure the rotation is applied before the translation.