I'm still new to DirectX 9.0. How do I move my object or triangle at runtime?
According to this tutorial:
http://www.directxtutorial.com/Lesson.aspx?lessonid=9-4-5
I understand that what it does is move the camera coordinates and set the world and projection coordinates. What if I want to move a triangle's position at runtime? Say, move it along the x-axis by 1px per frame.
// A structure for our custom vertex type
struct CUSTOMVERTEX
{
    FLOAT x, y, z, rhw; // The transformed position for the vertex
    DWORD color;        // The vertex color
    FLOAT tu, tv;       // Texture position
};
I have a feeling I need to shift the x, y, z position of each vertex. But it seems wasteful to release the vertex buffer and reallocate the memory just because of x, y, z; that would take too much computation, let alone rendering.
How can I access individual vertices at runtime and just modify their content (x, y, z) without the need to destroy and copy?
1) However, this leads to another question. The coordinates themselves are model coordinates. So the question is how do I change the world coordinates, or define them per object and change them?
LPDIRECT3DVERTEXBUFFER9 g_pVB;
You actually don't need to change your model vertices to achieve the model-space to world-space transformation.
Here is how it is usually done (see the sketch after this list):
You load your model (vertices) once.
You decide how your model must look in the current frame: translation (x, y, z), rotation (yaw, pitch, roll), and scale (x, y, z) of your object.
You calculate matrices according to this info: mtxTranslation, mtxRotation, mtxScale.
You calculate the world matrix of this object: mtxWorld = mtxScale * mtxRotation * mtxTranslation. Note that matrix multiplication is not commutative: the result depends on operand order.
Then you apply this matrix (using the fixed-function pipeline or inside a vertex shader).
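A minimal sketch of those steps with the D3DX helpers (the variable names and the yaw/pitch/roll and x, y, z values are placeholders, not from the tutorial):
D3DXMATRIX mtxScale, mtxRotation, mtxTranslation, mtxWorld;
D3DXMatrixScaling(&mtxScale, 1.0f, 1.0f, 1.0f);                 // scale factors per axis
D3DXMatrixRotationYawPitchRoll(&mtxRotation, yaw, pitch, roll); // angles in radians
D3DXMatrixTranslation(&mtxTranslation, x, y, z);                // world position
// Order matters: scale first, then rotate, then translate.
mtxWorld = mtxScale * mtxRotation * mtxTranslation;
d3ddev->SetTransform(D3DTS_WORLD, &mtxWorld);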
In your tutorial:
D3DXMATRIX matTranslate; // a matrix to store the translation information
// build a matrix to move the model 12 units along the x-axis and 4 units along the y-axis
// store it to matTranslate
D3DXMatrixTranslation(&matTranslate, 12.0f, 4.0f, 0.0f);
// tell Direct3D about our matrix
d3ddev->SetTransform(D3DTS_WORLD, &matTranslate);
So, if you want to move your object at runtime, you must change the world matrix and then push the new matrix to DirectX (via SetTransform() or by updating a shader variable). Usually something like this:
// deltaTime is the difference in time between the current frame and the previous one
void OnUpdate(float deltaTime)
{
    x += deltaTime * objectSpeed; // Advance the x coordinate
    D3DXMatrixTranslation(&matTranslate, x, y, z);
    device->SetTransform(D3DTS_WORLD, &matTranslate);
}

float deltaTime = g_Timer->GetDelta(); // Get the difference in time between the current frame and the previous one
OnUpdate(deltaTime);
Or, if you don't have a timer (yet), you can simply increment the coordinate each frame.
Next, if you have multiple objects (whether the same model or different ones), every frame you do something like:
for( all objects )
{
    // Tell DirectX what vertex buffer (model) you want to render
    SetStreamSource();
    // Tell DirectX what translation must be applied to that object
    SetTransform();
    // Render it
    Draw();
}
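With the fixed-function D3D9 API, that loop could look roughly like this (a sketch; the per-object struct and the CUSTOMFVF flag are assumptions, not from the tutorial):
for (size_t i = 0; i < objects.size(); ++i)
{
    // Bind the object's vertex buffer (the stride must match the vertex struct)
    d3ddev->SetStreamSource(0, objects[i].pVB, 0, sizeof(CUSTOMVERTEX));
    d3ddev->SetFVF(CUSTOMFVF);
    // Upload this object's world matrix
    d3ddev->SetTransform(D3DTS_WORLD, &objects[i].mtxWorld);
    // Draw the object's triangles
    d3ddev->DrawPrimitive(D3DPT_TRIANGLELIST, 0, objects[i].numTriangles);
}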
Related
I am trying to implement logic where, on mouse click, a shot is fired at an object. To do so, I did the following:
I first considered the .obj file of my model and found the region (list of coordinates) that the shot works on (a particular weak point of the body).
I then considered the smallest and largest x, y, and z values present in the file for that particular region (xmin, ymin, zmin and xmax, ymax, zmax).
To figure out whether the shot has landed on the weak point, I assumed that a shot lands on the weak point if the coordinates of the shot lie between (xmin, ymin, zmin) and (xmax, ymax, zmax).
I assumed the coordinates from the .obj file to be the actual coordinates of the model, since my Assimp code loads in the model's coordinates directly. Considering (xmin, ymin, zmin) and (xmax, ymax, zmax), I converted the coordinates to window coordinates via gluProject().
I then considered the current cursor position and checked whether it lies between the projected (xmin, ymin) and (xmax, ymax).
The problems I now face are:
The object coordinates provided in the .obj file range between -4 and 4, which then lie around 1.0 after gluProject(), whereas the cursor position lies between (0,0) and (1280,720).
After gluProject(), (xmin,ymin) and (xmax,ymax) are either (0,1) or (1,0), whereas the zmin and zmax values seem fine.
How can I get my logic working?
Here is the code:
// Call shader to draw and acquire the information necessary for gluProject()
modelShader.use();
modelShader.setMat4("projection", projection);
modelShader.setMat4("view", view);
glm::mat4 model_dragon;
double time = glfwGetTime();
model_dragon = glm::translate(model_dragon, glm::vec3(cos((360.0 - time) / 2.0) * 60.0,
                                                      cos((360.0 - time) / 2.0) * (-2.5),
                                                      sin((360.0 - time) / 1.0) * 60.0));
model_dragon = glm::rotate(model_dragon, (float)(glm::radians(30.0)), glm::vec3(0.0, 0.0, 1.0));
model_dragon = glm::scale(model_dragon, glm::vec3(1.4, 1.4, 1.4));
modelShader.setMat4("model", model_dragon);
// Keep copies so that the model, view and projection are available for gluProject()
collision_model = model_dragon;
collision_view = view;
collision_proj = projection;
ourModel.Draw(modelShader);
Mouse button callback:
// Note: the dragon_min and dragon_max variables hold the constant positions of the min and max coordinates.
void mouse_button_callback(GLFWwindow* window, int button, int action, int mods){
    if(button == GLFW_MOUSE_BUTTON_LEFT && action == GLFW_PRESS){
        Mix_PlayChannel( -1, shot, 0 ); // Play sound
        GLdouble x, y, xmin, ymin, zmin, xmax, ymax, zmax, dmodel[16], dproj[16];
        GLint dview[16];
        float *model = (float*)glm::value_ptr(collision_model);
        float *proj = (float*)glm::value_ptr(collision_proj);
        float *view = (float*)glm::value_ptr(collision_view);
        for (int i = 0; i < 16; ++i){dmodel[i] = model[i]; dproj[i] = proj[i]; dview[i] = (int)view[i];} // Convert mat4 to double array
        glfwGetCursorPos(window, &x, &y);
        gluProject(dragon_min_x, dragon_min_y, dragon_min_z, dmodel, dproj, dview, &xmin, &ymin, &zmin);
        gluProject(dragon_max_x, dragon_max_y, dragon_max_z, dmodel, dproj, dview, &xmax, &ymax, &zmax);
        if((x >= xmin && x <= xmax) && (y >= ymin && y <= ymax)){printf("Hit\n"); defense--;}
The .obj coordinates have values such as:
0.032046 1.533727 4.398055
You are confusing the parameters of gluProject, especially the view parameter. This parameter should contain 4 integers describing the viewport (x, y, width, height), not the view matrix.
gluProject (and a lot of other glu functions) is tailored towards the fixed-function pipeline and its matrix stacks. Because of this, you have to pass the following information:
model: The modelview matrix, as returned by glGetDoublev(GL_MODELVIEW_MATRIX, ...).
proj: The projection matrix, as returned by glGetDoublev(GL_PROJECTION_MATRIX, ...).
view: The current viewport, as returned by glGetIntegerv(GL_VIEWPORT, ...).
As you see, the view matrix is packed together with the model matrix, and view contains the viewport.
I'd strongly advise not to use glu functions at all when working with modern OpenGL. Especially when the matrices are already stored in glm, it is better to use glm::project.
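For instance, a minimal sketch of the glm::project route, assuming the question's collision_* matrices and a 1280x720 window (the helper name is made up):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp> // glm::project

glm::vec3 projectToWindow(const glm::vec3& p)
{
    glm::vec4 viewport(0.0f, 0.0f, 1280.0f, 720.0f); // x, y, width, height
    // glu's "model" parameter is the combined modelview matrix
    return glm::project(p, collision_view * collision_model, collision_proj, viewport);
}
Also note that glfwGetCursorPos() reports y growing downward from the top-left corner, while the projected y grows upward from the bottom-left, so flip one of them (e.g. compare against 720 - y) before testing the bounds.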
Note1: Converting a floating point matrix to an integer matrix by casting each element almost never results in anything meaningful.
Note2: When projecting a bounding rectangle to screenspace, the result will in general not be a rectangle anymore. During projection, angles are not preserved, thus the result is a general four cornered polygon and not a rectangle anymore. Same goes for bounding boxes: You can't even guarantee that the projected box is contained in the screen-space rectangle defined by projecting [x_min, y_min, z_min] and [x_max, y_max, z_max].
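And since, as Note2 says, the two projected corners do not bound the projected box, a more robust test is to project all eight corners and take the min/max of the results; a sketch reusing the hypothetical projectToWindow() helper from above:
// Screen-space bounds of the whole box: project every corner, not just two.
glm::vec2 lo(1e9f), hi(-1e9f);
for (int i = 0; i < 8; ++i) {
    glm::vec3 corner(i & 1 ? xmax : xmin,
                     i & 2 ? ymax : ymin,
                     i & 4 ? zmax : zmin);
    glm::vec3 s = projectToWindow(corner);
    lo = glm::min(lo, glm::vec2(s));
    hi = glm::max(hi, glm::vec2(s));
}
// A hit requires lo.x <= cursorX <= hi.x and lo.y <= flippedCursorY <= hi.y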
I'm having difficulty getting the right orientation from my objects within a scene. The objects are defined in standard Cartesian coordinates in the same units as I define the scene.
I then define my scene's matrix with the following code:
void SVIS_SetLookAt (double eyePos[3], double center[3], double up[3])
{
    // Determine the new n
    double vN[3] = {eyePos[0] - center[0], eyePos[1] - center[1], eyePos[2] - center[2]};
    // Don't I need to normalize the above?
    // Determine the new up by crossing with the up vector.
    double vU[3];
    MATH_crossproduct(up, vN, vU);
    MATH_NormalizeVector(vN);
    MATH_NormalizeVector(vU);
    // Determine v by crossing n and u...
    double vV[3];
    MATH_crossproduct(vN, vU, vV);
    MATH_NormalizeVector(vV);
    // Create the model view matrix.
    double modelView[16] = {
        vU[0], vV[0], vN[0], 0,
        vU[1], vV[1], vN[1], 0,
        vU[2], vV[2], vN[2], 0,
        // -MATH_Dotd(eyePos, vU), -MATH_Dotd(eyePos, vV), -MATH_Dotd(eyePos, vN), 1
        0, 0, 0, 1
    };
    // Load the modelview matrix. The modelview matrix should already be active.
    glLoadMatrixd(modelView);
}
I am attempting to display n-1 objects such that each object is facing the object in front of it, excluding the first object which is not displayed. So for each object, I define the up, right, and forward vectors as such:
lal_to_ecef(curcen, pHits->u); // up vector is our position normalized
MATH_subtractVec3D((SVN_VEC3D*) prevcenter, (SVN_VEC3D*) curcen, (SVN_VEC3D*) pHits->f);
MATH_NormalizeVector(pHits->u);
MATH_NormalizeVector(pHits->f);
MATH_crossproduct(pHits->u, pHits->f, pHits->r);
MATH_NormalizeVector(pHits->r);
MATH_crossproduct(pHits->f, pHits->r, pHits->u);
MATH_NormalizeVector(pHits->u);
I then go on to display each object with the following code:
double p[3] = {pHits->cen[0] - position[0],
pHits->cen[1] - position[1],
pHits->cen[2] - position[2]};
glPushMatrix();
SVIS_LookAt(pHits->u, pHits->f, pHits->r, p);
glCallList(G_svisHitsListId);
glPopMatrix();
void SVIS_LookAt (double u[3], double f[3], double l[3], double pos[3])
{
    double model[16] = {
        l[0], u[0], f[0], 0,
        l[1], u[1], f[1], 0,
        l[2], u[2], f[2], 0,
        pos[0], pos[1], pos[2], 1
    };
    glMultMatrixd(model);
}
I would expect this to work for any object, such that whatever was defined in the Cartesian coordinate system would appear at the given point, oriented so that it points at the object in front of it, with (0,1,0) and (0,-1,0) of the defined object aligned vertically on the screen. What I am seeing instead (using a simple rectangle as the displayed object) is that the objects are consistently rotated about the forward axis.
Can anyone point out what I am doing wrong here?
[Edit]
I've displayed an axis grid, without translating, by taking the three vectors, multiplying them by a scalar, and adding/subtracting the result to the centre point. Doing this, the axes line up as I would expect. Overlaying the object described above shows the object is not aligned the same way. Is there a relationship between the object-space forward, right, and up vectors and the desired world-space vectors that I am missing? Am I simply completely off the mark with regard to my rotation/translation matrix?
You are conflicted here; part of that matrix is transposed and part of it is correct... you have the 4th column correct but your top-left 3x3 matrix is transposed. Each column of the 3x3 matrix (row in that array of 16 doubles) is supposed to be one of your axes. It should be: l[0], l[1], l[2], 0, u[0], u[1], u[2], 0, f[0], f[1], f[2], 0, pos[0], pos[1], pos[2], 1. – Andon M. Coleman
This was dead on. Thanks Andon.
Building an entirely new modelview matrix from scratch using a 'lookat' implementation for each object is, frankly, crazy (or will at least drive you crazy). What you're doing is tantamount to trying to build a scene by having a set of objects which are always in a fixed location, and constantly repositioning a camera to catch them from different angles.
A lookat style function should be called once to set up the camera (the view portion of the modelview matrix) position, and subsequently you should be using the matrix stack to position objects within the scene (the model portion of the modelview matrix). That's why it's called the modelview matrix, and not just the view matrix.
In code terms, it would look something like this:
// Set up camera position
SVIS_LookAt(....);
for (int i = 0; i < n; ++i) {
    glPushMatrix();
    // move the object to its location relative to the world coordinate system
    glTranslate(...);
    // rotate the object to have the correct orientation
    glRotate(...);
    // render the geometry
    glCallList(...);
    glPopMatrix();
}
Of course, this assumes that everything has its position defined in world coordinates. If you have a hierarchy of objects, then you would need to descend into an object's children between the glCallList and glPopMatrix calls in order to apply their locations relative to their parent object.
I have the following issue when trying to map UV coordinates to a sphere.
Here is the code I'm using to get my UV coordinates:
glm::vec2 calcUV(glm::vec3 p)
{
    p = glm::normalize(p);
    const float PI = 3.1415926f;
    float u = ((glm::atan(p.x, p.z) / PI) + 1.0f) * 0.5f;
    float v = (asin(p.y) / PI) + 0.5f;
    return glm::vec2(u, v);
}
The issue is very well explained in this Stack Overflow question, although I still don't get how I can fix it. From what I've been reading, I have to create duplicate pairs of vertices. Does anyone know a good and efficient way of doing it?
The problem you have is that at the seam your texture coordinates "roll" back to 0, so you get the whole texture mapped, mirrored, onto the seam. To avoid this, use the GL_REPEAT wrap mode and, at the seam, finish with vertices whose texture coordinates are >= 1 (don't roll back to 0). Remember that a vertex consists of the whole tuple of all its attributes, and vertices with different attribute values are different as a whole, so there's no point in trying to "share" the vertices.
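For reference, the wrap mode is set per texture with the standard calls (texture being your texture handle):
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT); // u direction
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT); // v direction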
Another way to do it is simply to pass the object coordinates of the sphere into the pixel shader, and calculate the UV in "perfect" spherical space.
Be aware that you will need to pass the local derivatives so that you don't merely reduce your seam from several pixels to one.
Otherwise, yes. You need to duplicate the vertices along the u=0 edge, and likewise repeat the vertices at the poles. In this way, your object's topology becomes a rectangle: just like your texture.
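A minimal sketch of the duplication idea, assuming an indexed triangle list with the hypothetical Vertex layout below (requires the GL_REPEAT wrap mode mentioned above):
#include <algorithm>
#include <vector>
#include <glm/glm.hpp>

struct Vertex { glm::vec3 pos; glm::vec2 uv; };

// If a triangle's u coordinates straddle the seam (span > 0.5), re-emit the
// low-u vertices with u + 1 so interpolation no longer wraps across the texture.
void fixSeam(std::vector<Vertex>& verts, std::vector<unsigned>& idx)
{
    for (size_t t = 0; t + 2 < idx.size(); t += 3) {
        float uMax = std::max({ verts[idx[t]].uv.x,
                                verts[idx[t + 1]].uv.x,
                                verts[idx[t + 2]].uv.x });
        for (size_t k = t; k < t + 3; ++k) {
            if (uMax - verts[idx[k]].uv.x > 0.5f) {
                Vertex v = verts[idx[k]];
                v.uv.x += 1.0f; // continue past 1 instead of rolling back to 0
                idx[k] = static_cast<unsigned>(verts.size());
                verts.push_back(v); // duplicated seam vertex
            }
        }
    }
}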
I'm having an issue with drawing a model and rotating it using the mouse. I'm pretty sure there's a problem with the mathematics, but I'm not sure what it is. The object just rotates in a weird way.
I want the object to start rotating from its current orientation on each click, not to reset, because the vectors are changed and the calculation starts all over again.
void DrawHandler::drawModel(Model * model){
    unsigned int l_index;
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW); // Modeling transformation
    glLoadIdentity();
    Point tempCross;
    crossProduct(tempCross, model->getBeginRotate(), model->getCurrRotate());
    float tempInner = innerProduct(model->getBeginRotate(), model->getCurrRotate());
    float tempNormA = normProduct(model->getBeginRotate());
    float tempNormB = normProduct(model->getCurrRotate());
    glTranslatef(0.0, 0.0, -250.0);
    glRotatef(acos(tempInner / (tempNormA * tempNormB)) * 180.0 / M_PI, tempCross.getX(), tempCross.getY(), tempCross.getZ());
    glColor3d(1, 1, 1);
    glBegin(GL_TRIANGLES);
    for (l_index = 0; l_index < model->getTrianglesDequeSize(); l_index++)
    {
        Triangle t = model->getTriangleByPosition(l_index);
        Vertex a1 = model->getVertexByPosition(t.getA());
        Vertex a2 = model->getVertexByPosition(t.getB());
        Vertex a3 = model->getVertexByPosition(t.getC());
        glVertex3f(a1.getX(), a1.getY(), a1.getZ());
        glVertex3f(a2.getX(), a2.getY(), a2.getZ());
        glVertex3f(a3.getX(), a3.getY(), a3.getZ());
    }
    glEnd();
}
This is the mouse function, which saves the beginning vector for the rotation formula:
void Controller::mouse(int btn, int state, int x, int y)
{
    x = x - WINSIZEX / 2;
    y = y - WINSIZEY / 2;
    if (btn == GLUT_LEFT_BUTTON){
        switch(state){
        case(GLUT_DOWN):
            if(!_rotating){
                _model->setBeginRotate(Point(float(x), float(y),
                    (-float(x)*x - y*y + SPHERERADIUS*SPHERERADIUS < 0) ? 0 : float(sqrt(-float(x)*x - y*y + SPHERERADIUS*SPHERERADIUS))));
                _rotating = true;
            }
            break;
        case(GLUT_UP):
            _rotating = false;
            break;
        }
    }
}
And finally, the following function holds the current vector. (The beginning vector is where the mouse was clicked, and the curr vector is where the mouse is at the moment.)
void Controller::getMousePosition(int x, int y){
    x = x - WINSIZEX / 2;
    y = y - WINSIZEY / 2;
    if(_rotating){
        _model->setCurrRotate(Point(float(x), float(y),
            (-float(x)*x - y*y + SPHERERADIUS*SPHERERADIUS < 0) ? 0 : float(sqrt(-float(x)*x - y*y + SPHERERADIUS*SPHERERADIUS))));
    }
}
where SPHERERADIUS is the sphere radius of 70.
Is any calculation wrong? I can't seem to find the problem...
Thanks
Why so complicated? Either you change the view matrix or you change the model matrix of your focused object. If you choose to change the model matrix, and your object is centered at (0,0,0) of the world coordinate system, computing the rotation-around-a-sphere illusion is trivial: you just rotate in the opposite direction. If you want to change the view matrix (which is what actually happens when you change the position of the camera), you have to approximate the surface points on the chosen sphere. For that, you could introduce two parameters specifying two angles. Every time you move your mouse, you update the parameters and compute the new location on the sphere. There are some useful equations at http://en.wikipedia.org/wiki/Sphere.
Without knowing what library (or libraries) you're using, your code is rather difficult to read. It seems you're setting up your camera at (0, 0, -250), looking towards the origin, then rotating around the origin by the angle between two vectors, model->getCurrRotate() and model->getBeginRotate().
The problem seems to be that in "mouse down" events you explicitly set BeginRotate to the point on the sphere under the mouse, then in "mouse move" events you set CurrRotate to the point under the mouse, so every time you click somewhere else, you lose the previous state of rotation because BeginRotate and CurrRotate are simply overwritten.
Combining multiple rotations around arbitrary different axes is not a trivially simple task. The proper way to do it is to use quaternions. You may find this primer on quaternions and other 3D math concepts useful.
You might also want a more robust algorithm for converting screen coordinates to model coordinates on the sphere. The one you are using assumes the sphere appears 70 pixels in radius on the screen and that the projection matrix is orthographic.
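As an illustration of the quaternion approach, here is a sketch using glm, assuming begin and curr are the arcball vectors your Controller already computes (the accumulator name is made up):
#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp> // glm::quat, angleAxis, mat4_cast
#include <glm/gtc/type_ptr.hpp>   // glm::value_ptr

glm::quat g_orientation(1.0f, 0.0f, 0.0f, 0.0f); // accumulated rotation (identity)

// On each mouse-move event: fold the begin->curr rotation into the accumulator,
// so a new click continues from the current orientation instead of resetting.
// Afterwards the caller should set begin = curr so the next event adds only the increment.
void accumulateDrag(glm::vec3 begin, glm::vec3 curr)
{
    begin = glm::normalize(begin);
    curr  = glm::normalize(curr);
    glm::vec3 axis = glm::cross(begin, curr);
    if (glm::length(axis) < 1e-6f) return; // no (or degenerate) rotation
    float angle = std::acos(glm::clamp(glm::dot(begin, curr), -1.0f, 1.0f));
    g_orientation = glm::angleAxis(angle, glm::normalize(axis)) * g_orientation;
}

// In drawModel(), replace the glRotatef() call with the accumulated matrix:
// glm::mat4 m = glm::mat4_cast(g_orientation);
// glMultMatrixf(glm::value_ptr(m));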
Hey, so I'm integrating Box2D and SFML, and Box2D has the same odd, mirrored Y-axis coordinate system as SFML, meaning everything is rendered upside down. Is there some kind of function, or a short amount of code I can add, that simply mirrors the window's render contents?
I'm thinking I can put something in sf::View to help with this...
How can I easily flip the Y-axis, for rendering purposes, without affecting the bodies' dimensions/locations?
I don't know what Box2D is, but when I wanted to flip the Y axis using OpenGL, I just applied a negative scaling factor to the projection matrix, like:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glScalef(1.0f, -1.0f, 1.0f);
If you want to do it independently of OpenGL, simply apply an sf::View with a negative size on the Y axis.
It sounds like your model uses a conventional coordinate system (positive y points up), and you need to translate that to the screen coordinate system (positive y points down).
When copying model/Box2D position data to any sf::Drawable, manually transform between the model and screen coordinate systems:
b2Vec2 position = body->GetPosition();
sprite.SetPosition( position.x, window.GetHeight() - position.y );
You can hide this in a wrapper class or function, but it needs to sit between the model and renderer as a pre-render transform. I don't see a place to set that in SFML.
I think Box2D has the coordinate system you want; just set the gravity vector based on your model (0, -10) instead of the screen.
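In Box2D terms that just means constructing the world with a y-up gravity vector (standard Box2D 2.2+ API; the header path varies between versions):
#include <box2d/box2d.h>

b2Vec2 gravity(0.0f, -10.0f); // y points up in the model; gravity pulls down
b2World world(gravity);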
How can I easily flip the Y-axis, for rendering purposes, without affecting the bodies' dimensions/locations?
By properly applying transforms. First, you can apply a transform that sets the window's bottom-left corner as the origin. Then, scale the Y axis by a factor of -1 to flip it as the second transform.
For this, you can use sf::Transformable to specify each transformation individually (i.e., the setting of the origin and the scaling) and then – by calling sf::Transformable::getTransform() – obtain an sf::Transform object that corresponds to the composed transform.
Finally, when rendering the corresponding object, pass this transform object to the sf::RenderTarget::draw() member function as its second argument. An sf::Transform object implicitly converts to a sf::RenderStates which is the second parameter type of the corresponding sf::RenderTarget::draw() overload.
As an example:
#include <SFML/Graphics.hpp>

auto main() -> int {
    auto const width = 300, height = 300;
    sf::RenderWindow win(sf::VideoMode(width, height), "Transformation");
    win.setFramerateLimit(60);

    // create the composed transform object
    const sf::Transform transform = [height]{
        sf::Transformable transformation;
        transformation.setOrigin(0, height); // 1st transform
        transformation.setScale(1.f, -1.f);  // 2nd transform
        return transformation.getTransform();
    }();

    sf::RectangleShape rect({30, 30});

    while (win.isOpen()) {
        sf::Event event;
        while (win.pollEvent(event))
            if (event.type == sf::Event::Closed)
                win.close();

        // update rectangle's position
        rect.move(0, 1);

        win.clear();
        rect.setFillColor(sf::Color::Blue);
        win.draw(rect); // no transformation applied
        rect.setFillColor(sf::Color::Red);
        win.draw(rect, transform); // transformation applied
        win.display();
    }
}
There is a single sf::RectangleShape object that is rendered twice with different colors:
Blue: no transform was applied.
Red: the composed transform was applied.
They move in opposite directions as a result of flipping the Y axis.
Note that the object space position coordinates remain the same. Both rendered rectangles correspond to the same object, i.e., there is just a single sf::RectangleShape object, rect – only the color is changed. The object space position is rect.getPosition().
What is different for these two rendered rectangles is the coordinate reference system. Therefore, the absolute space position coordinates of these two rendered rectangles also differ.
You can use this approach in a scene tree. In such a tree, the transforms are applied in a top-down manner from the parents to their children, starting from the root. The net effect is that children's coordinates are relative to their parent's absolute position.
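A minimal sketch of such a tree (the Node type here is hypothetical, modeled on the scene-graph pattern from the SFML documentation):
#include <SFML/Graphics.hpp>
#include <vector>

struct Node : sf::Transformable {
    std::vector<Node*> children;

    // Compose this node's transform onto the parent's, then recurse:
    // each child's coordinates end up relative to its parent.
    void draw(sf::RenderTarget& target, sf::RenderStates states) const {
        states.transform *= getTransform();
        drawSelf(target, states);
        for (const Node* child : children)
            child->draw(target, states);
    }

    virtual void drawSelf(sf::RenderTarget&, sf::RenderStates) const {}
    virtual ~Node() = default;
};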