Trying to draw a cylinder in DirectX through D3DXCreateCylinder - C++

I am very new to DirectX and want to learn more. I was trying the code from directxtutorial.com. Is there any example/sample for D3DXCreateCylinder? Thanks

Alright then,
D3DXCreateCylinder can be used as such:
LPD3DXMESH cylinder; // Define a pointer to the mesh.
D3DXCreateCylinder(d3ddev, 2.0f, 0.0f, 10.0f, 10, 10, &cylinder, NULL);
So what is going on? The arguments, in order:
d3ddev should be your Direct3D device, which I will assume you have created.
The radius at the negative Z end (2.0f here).
The radius at the positive Z end (0.0f here, so this particular call actually makes a cone).
The length of the shape along the Z axis.
The number of subdivisions (slices) around the Z axis.
The number of subdivisions (stacks) along the Z axis.
The address of the pointer which receives the created mesh.
NULL for the adjacency buffer, which we don't need here.
Tinker around with the values; experimenting can't hurt.

These resources will help supplement the answer provided:
https://directxtutorial.com/Tutorial11/B-A/BA2.aspx
http://msdn.microsoft.com/en-us/library/windows/desktop/ff476880(v=vs.85).aspx
http://msdn.microsoft.com/en-us/library/windows/desktop/hh780339(v=vs.85).aspx

By default, the D3DXCreateCylinder API does not generate texture coordinates for mapping a texture onto the created cylindrical mesh.
Alternatively, you can build your own cylindrical geometry for texture mapping, like below:
for( DWORD i = 0; i < Sides; i++ )
{
    FLOAT theta = ( 2 * D3DX_PI * i ) / ( Sides - 1 );
    // Bottom-ring vertex: v = 1, u runs 0..1 around the circumference.
    pVertices[2 * i + 0].position = D3DXVECTOR3( radius * sinf( theta ), -height, radius * cosf( theta ) );
    pVertices[2 * i + 0].color    = 0xffffffff;
    pVertices[2 * i + 0].tu       = ( ( FLOAT )i ) / ( Sides - 1 );
    pVertices[2 * i + 0].tv       = 1.0f;
    // Top-ring vertex: v = 0, same u so the texture maps straight up the side.
    pVertices[2 * i + 1].position = D3DXVECTOR3( radius * sinf( theta ), height, radius * cosf( theta ) );
    pVertices[2 * i + 1].color    = 0xff808080;
    pVertices[2 * i + 1].tu       = ( ( FLOAT )i ) / ( Sides - 1 );
    pVertices[2 * i + 1].tv      = 0.0f;
}
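To make the idea concrete, here is a minimal, self-contained sketch of the same UV math in plain C++ (no Direct3D types; the function name, `CylVertex`, and the `sides`/`radius`/`height` parameters are my own stand-ins for the values above):

```cpp
#include <cmath>
#include <vector>

struct CylVertex {
    float x, y, z;  // position
    float tu, tv;   // texture coordinates
};

// Builds the two rings of side vertices for a cylinder, with u running
// 0..1 around the circumference and v = 1 at the bottom, v = 0 at the top.
std::vector<CylVertex> BuildCylinderSide(int sides, float radius, float height)
{
    const float kPi = 3.14159265358979f;
    std::vector<CylVertex> verts(2 * sides);
    for (int i = 0; i < sides; ++i) {
        float theta = (2.0f * kPi * i) / (sides - 1);
        float u = static_cast<float>(i) / (sides - 1);
        verts[2 * i + 0] = { radius * std::sin(theta), -height, radius * std::cos(theta), u, 1.0f };
        verts[2 * i + 1] = { radius * std::sin(theta),  height, radius * std::cos(theta), u, 0.0f };
    }
    return verts;
}
```

Note that the last column of vertices coincides with the first (theta wraps back to 2*pi), which is what lets the texture seam close cleanly.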

Related

OpenGL Sphere deforms when setting center coordinate to high values

So I am drawing a sphere, not using the "subdividing icosahedron" approach, but using triangle strips and the parametric equation of the sphere.
Here is my code
glBegin(GL_TRIANGLE_STRIP);
for(float i = -PI/2; i < PI/2; i += 0.01f)
{
    temp = i + 0.01f;
    for(float j = 0; j < 2*PI; j += 0.01f)
    {
        temp -= 0.01f;
        glVertex3f( cx + rad * cos(j) * cos(temp), cy + rad * cos(temp) * sin(j), cz + rad * sin(temp));
        temp += 0.01f;
        glVertex3f( cx + rad * cos(j) * cos(temp), cy + rad * cos(temp) * sin(j), cz + rad * sin(temp));
    }
}
glEnd();
The approach is as follows. Imagine a circle in the XY plane. This is drawn using the inner loop. Now imagine the XY plane moved up or down along the Z axis, with the radius changed because it's a sphere. This is done using the outer loop.
The first triangle coordinate is given for the circle when the XY plane is at its initial position. After temp += 0.01f the plane has moved up by 0.01 and the second triangle vertex coordinate is given. This is how the strip is calculated.
The problem is: if cx = cy = cz = 0, or any low value like 2 or 3, the sphere looks fine. However, if I set e.g. cx = 15, cy = 15, cz = -6, the sphere gets deformed. Here is the picture.
If I use GL_POINTS, this is what I'm getting.
Sorry, a very stupid mistake: I wasn't converting the values I put in glFrustum correctly, hence a weird FOV was being generated. Solved the issue now. Thanks
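For reference, the usual conversion from a field of view to glFrustum's bounds looks like this. This is a sketch under my own naming (`fovYDegrees`, `aspect`, `zNear` are assumptions, not the asker's variables); the computed values would be passed as glFrustum(left, right, bottom, top, zNear, zFar):

```cpp
#include <cmath>

// Converts a vertical field of view (in degrees) and an aspect ratio into the
// left/right/bottom/top bounds that glFrustum expects at the near plane.
void FrustumFromFov(float fovYDegrees, float aspect, float zNear,
                    float& left, float& right, float& bottom, float& top)
{
    const float kPi = 3.14159265358979f;
    top    = zNear * std::tan(fovYDegrees * kPi / 360.0f); // half-angle, degrees -> radians
    bottom = -top;
    right  = top * aspect;
    left   = -right;
}
```

Getting this degrees-to-radians step (or the half-angle) wrong produces exactly the kind of distorted FOV described above.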

How to convert mouse position to world coordinates?

I am trying to see which object is clicked for a fixed Z position. I am using perspective projection.
For the rendering of my objects, I use:
glm::mat4 mvp = m_ProjectionView * model; // Inversed
For the calculation of the world position of the mouse, I have tried the following, but it is not converting correctly.
glm::mat4 projViewInverse = glm::inverse( projection * view );
glm::vec4 cursorWorldPosition = projViewInverse * glm::vec4(
    ( mousePosX / scrWidth )  * 2.0f - 1.0f,
    ( mousePosY / scrHeight ) * 2.0f - 1.0f,
    -1.0f, 1.0f );
cursorWorldPosition.w = 1.0f / cursorWorldPosition.w;
cursorWorldPosition.x *= cursorWorldPosition.w;
cursorWorldPosition.y *= cursorWorldPosition.w;
cursorWorldPosition.z *= cursorWorldPosition.w;
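One thing worth checking in attempts like the one above is the window-to-NDC conversion: window coordinates grow downward while NDC grows upward, so the Y axis needs a flip before unprojecting. A minimal sketch of just that step (function name is mine):

```cpp
// Converts a mouse position in window coordinates (origin top-left, Y down)
// to normalized device coordinates (origin center, Y up, range [-1, 1]).
void MouseToNdc(float mouseX, float mouseY, float scrWidth, float scrHeight,
                float& ndcX, float& ndcY)
{
    ndcX = (mouseX / scrWidth) * 2.0f - 1.0f;
    ndcY = 1.0f - (mouseY / scrHeight) * 2.0f; // note the flip: window Y grows downward
}
```

With these NDC values, the usual recipe is to unproject two points (at NDC z = -1 and z = +1), divide each by its w, and intersect the resulting ray with the fixed-Z plane of interest.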

Duplicate OpenGL orthographic projection behaviour without OpenGL

I'm encountering a problem trying to replicate the OpenGL behaviour in an ambient without OpenGL.
Basically I need to create an SVG file from a list of lines my program creates. These lines are created using an orthographic projection.
I'm sure that these lines are calculated correctly because if I try to use them with a OpenGL context with orthographic projection and save the result into an image, the image is correct.
The problem arises when I use exactly the same lines without OpenGL.
I've replicated the OpenGL projection and view matrices and I process every line point like this:
3D_output_point = projection_matrix * view_matrix * 3D_input_point
and then I calculate its screen (SVG file) position like this:
2D_point_x = (windowWidth / 2) * 3D_point_x + (windowWidth / 2)
2D_point_y = (windowHeight / 2) * 3D_point_y + (windowHeight / 2)
I calculate the orthographic projection matrix like this:
float range = 700.0f;
float l, t, r, b, n, f;
l = -range;
r = range;
b = -range;
t = range;
n = -6000;
f = 8000;
matProj.SetValore(0, 0, 2.0f / (r - l));
matProj.SetValore(0, 1, 0.0f);
matProj.SetValore(0, 2, 0.0f);
matProj.SetValore(0, 3, 0.0f);
matProj.SetValore(1, 0, 0.0f);
matProj.SetValore(1, 1, 2.0f / (t - b));
matProj.SetValore(1, 2, 0.0f);
matProj.SetValore(1, 3, 0.0f);
matProj.SetValore(2, 0, 0.0f);
matProj.SetValore(2, 1, 0.0f);
matProj.SetValore(2, 2, (-1.0f) / (f - n));
matProj.SetValore(2, 3, 0.0f);
matProj.SetValore(3, 0, -(r + l) / (r - l));
matProj.SetValore(3, 1, -(t + b) / (t - b));
matProj.SetValore(3, 2, -n / (f - n));
matProj.SetValore(3, 3, 1.0f);
and the view matrix this way:
CVettore position, lookAt, up;
position.AssegnaCoordinate(rtRay->m_pCam->Vp.x, rtRay->m_pCam->Vp.y, rtRay->m_pCam->Vp.z);
lookAt.AssegnaCoordinate(rtRay->m_pCam->Lp.x, rtRay->m_pCam->Lp.y, rtRay->m_pCam->Lp.z);
up.AssegnaCoordinate(rtRay->m_pCam->Up.x, rtRay->m_pCam->Up.y, rtRay->m_pCam->Up.z);
up[0] = -up[0];
up[1] = -up[1];
up[2] = -up[2];
CVettore zAxis, xAxis, yAxis;
float length, result1, result2, result3;
// zAxis = normal(lookAt - position)
zAxis[0] = lookAt[0] - position[0];
zAxis[1] = lookAt[1] - position[1];
zAxis[2] = lookAt[2] - position[2];
length = sqrt((zAxis[0] * zAxis[0]) + (zAxis[1] * zAxis[1]) + (zAxis[2] * zAxis[2]));
zAxis[0] = zAxis[0] / length;
zAxis[1] = zAxis[1] / length;
zAxis[2] = zAxis[2] / length;
// xAxis = normal(cross(up, zAxis))
xAxis[0] = (up[1] * zAxis[2]) - (up[2] * zAxis[1]);
xAxis[1] = (up[2] * zAxis[0]) - (up[0] * zAxis[2]);
xAxis[2] = (up[0] * zAxis[1]) - (up[1] * zAxis[0]);
length = sqrt((xAxis[0] * xAxis[0]) + (xAxis[1] * xAxis[1]) + (xAxis[2] * xAxis[2]));
xAxis[0] = xAxis[0] / length;
xAxis[1] = xAxis[1] / length;
xAxis[2] = xAxis[2] / length;
// yAxis = cross(zAxis, xAxis)
yAxis[0] = (zAxis[1] * xAxis[2]) - (zAxis[2] * xAxis[1]);
yAxis[1] = (zAxis[2] * xAxis[0]) - (zAxis[0] * xAxis[2]);
yAxis[2] = (zAxis[0] * xAxis[1]) - (zAxis[1] * xAxis[0]);
// -dot(xAxis, position)
result1 = ((xAxis[0] * position[0]) + (xAxis[1] * position[1]) + (xAxis[2] * position[2])) * -1.0f;
// -dot(yaxis, eye)
result2 = ((yAxis[0] * position[0]) + (yAxis[1] * position[1]) + (yAxis[2] * position[2])) * -1.0f;
// -dot(zaxis, eye)
result3 = ((zAxis[0] * position[0]) + (zAxis[1] * position[1]) + (zAxis[2] * position[2])) * -1.0f;
// Set the computed values in the view matrix.
matView.SetValore(0, 0, xAxis[0]);
matView.SetValore(0, 1, yAxis[0]);
matView.SetValore(0, 2, zAxis[0]);
matView.SetValore(0, 3, 0.0f);
matView.SetValore(1, 0, xAxis[1]);
matView.SetValore(1, 1, yAxis[1]);
matView.SetValore(1, 2, zAxis[1]);
matView.SetValore(1, 3, 0.0f);
matView.SetValore(2, 0, xAxis[2]);
matView.SetValore(2, 1, yAxis[2]);
matView.SetValore(2, 2, zAxis[2]);
matView.SetValore(2, 3, 0.0f);
matView.SetValore(3, 0, result1);
matView.SetValore(3, 1, result2);
matView.SetValore(3, 2, result3);
matView.SetValore(3, 3, 1.0f);
The results I get from OpenGL and from the SVG output are quite different, but in two days I couldn't come up with a solution.
This is the OpenGL output
And this is my SVG output
As you can see, its rotation isn't correct.
Any idea why? The line points are the same and the matrices too, hopefully.
Passing the matrices I was creating didn't work. I mean, the matrices were wrong, I think, because OpenGL didn't show anything.
So I tried doing the opposite: I created the matrices in OpenGL and used them with my code. The result is better, but not perfect yet.
Now I think I'm doing something wrong when mapping the 3D points to 2D screen points, because the points I get are inverted in Y and I still have some lines not perfectly matching.
This is what I get using the OpenGL matrices and my previous approach to map 3D points to 2D screen space (this is the SVG, not OpenGL render):
Ok this is the content of the view matrix I get from OpenGL:
This is the projection matrix I get from OpenGL:
And this is the result I get with those matrices and by changing my 2D point Y coordinate calculation like bofjas said:
It looks like some rotations are missing. My camera has a rotation of 30° on both the X and Y axis, and it looks like they're not computed correctly.
Now I'm using the same matrices OpenGL does. So I think that I'm doing some wrong calculations when I map the 3D point into 2D screen coordinates.
Rather than debugging your own code, you can use transform feedback to compute the projections of your lines using the OpenGL pipeline. Rather than rasterizing them on the screen you can capture them in a memory buffer and save directly to the SVG afterwards. Setting this up is a bit involved and depends on the exact setup of your OpenGL codepath, but it might be a simpler solution.
As per your own code, it looks like you either mixed x and y coordinates somewhere, or row-major and column-major matrices.
I've solved this problem in a really simple way. Since when I draw using OpenGL it's working, I've just created the matrices in OpenGL and then retrieved them with glGet(). Using those matrices everything is ok.
You're looking for a specialized version of orthographic (oblique) projections called isometric projections. The math is really simple if you want to know what's inside the matrix. Have a look at Wikipedia.
OpenGL loads matrices in column-major order (the opposite of C++). For example, this matrix:
[1 ,2 ,3 ,4 ,
5 ,6 ,7 ,8 ,
9 ,10,11,12,
13,14,15,16]
loads this way in memory:
|_1 _|
|_5 _|
|_9 _|
|_13_|
|_2 _|
.
.
.
So I suppose you should transpose those matrices from OpenGL (if you're storing yours row-major).
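The row-major/column-major mismatch described above boils down to a transpose. A minimal sketch, assuming the matrix is held as a flat array of 16 floats (the function name is mine):

```cpp
// Transposes a 4x4 matrix stored as a flat array of 16 floats.
// Converting between row-major layout (typical C/C++ indexing m[row*4+col])
// and OpenGL's column-major layout is exactly this operation.
void Transpose4x4(const float in[16], float out[16])
{
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            out[col * 4 + row] = in[row * 4 + col];
}
```

Applied to the 1..16 matrix shown above, the first four floats in memory become 1, 5, 9, 13, matching the column layout in the diagram.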

Draw 2D thick arc using polygon in OpenGL

I want to draw a thick arc (something like a colored segment of an analog dial) using a polygon. For that I have added vertices to the polygon, and it works fine for the outer circumference, BUT it joins the ends along the inner circumference (the concave side).
The same logic works fine if I add those vertices to a line, but that creates an empty/non-filled arc.
My logic for adding vertices is:
for( float i = m_segmentVertex.size() - 1; i < vCount; i++ )
{
    float x1 = ( m_segmentVertex[ i ].x ) * cosA - m_segmentVertex[ i ].y * sinA;
    float y1 = ( m_segmentVertex[ i ].x ) * sinA + m_segmentVertex[ i ].y * cosA;
    addVertex( vec3( x1, y1, 0.0f ) );
}
Be aware that GL_POLYGON only works with convex polygons.
You'll have to triangulate concave polygons.
Try using a triangle fan and making the center of your dial the first point.
Possibly addVertex( vec3( 0.0f, 0.0f, 0.0f ) ); before your loop.
I'd also recommend making i an int or unsigned int, a float here doesn't make sense.
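For a thick arc specifically, a triangle strip that alternates between outer- and inner-radius points sidesteps the concave-polygon problem entirely. A minimal sketch of the vertex generation (pure C++, all names mine; the vertices could be fed to GL_TRIANGLE_STRIP):

```cpp
#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

// Generates triangle-strip vertices for an annulus segment (thick arc),
// alternating outer-radius and inner-radius points from startAngle to
// endAngle (radians). Returns 2 * (segments + 1) vertices.
std::vector<Vec2> BuildThickArc(float startAngle, float endAngle,
                                float outerRadius, float innerRadius,
                                int segments)
{
    std::vector<Vec2> verts;
    verts.reserve(2 * (segments + 1));
    for (int k = 0; k <= segments; ++k) {
        float a = startAngle + (endAngle - startAngle) * k / segments;
        float c = std::cos(a), s = std::sin(a);
        verts.push_back({ outerRadius * c, outerRadius * s }); // outer rim
        verts.push_back({ innerRadius * c, innerRadius * s }); // inner rim
    }
    return verts;
}
```

Every consecutive triple of vertices forms a triangle, so the strip fills the band between the two radii without ever closing across the concave side.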
This is how I created the polygon dynamically by triangulating it :
//create thick colored segments
void CreateArcMesh( float sAngle, float eAngle, vec4 color, int thickness, int radius )
{
    ObjectMeshDynamic meshObj = new ObjectMeshDynamic();
    vec3 vertex[0];
    float dAngle = ( eAngle - sAngle ) / ( VERTEX_COUNT / 2.0f );
    float cosA = cos( DEG2RAD * dAngle );
    float sinA = sin( DEG2RAD * dAngle );
    meshObj.setMaterial( "material_base", "*" );
    meshObj.setProperty( "surface_base", "*" );
    meshObj.setMaterialParameter( "diffuse_color", color, 0 );
    // Add the material on both sides, as the indices for the triangle strip start from the last vertex added
    Material material = meshObj.getMaterialInherit( 0 );
    material.setTwoSided( 1 );
    meshObj.addTriangleStrip( VERTEX_COUNT + 2 );
    vec3 startPos = vec3( radius * cos( DEG2RAD * sAngle ), radius * sin( DEG2RAD * sAngle ), 0.0f );
    vertex.append( startPos );
    vec3 secondPos = vec3( ( radius - thickness ) * cos( DEG2RAD * sAngle ), ( radius - thickness ) * sin( DEG2RAD * sAngle ), 0.0f );
    vertex.append( secondPos );
    float x1 = startPos.x * cosA - startPos.y * sinA;
    float y1 = startPos.x * sinA + startPos.y * cosA;
    vertex.append( vec3( x1, y1, 0.0f ) );
    x1 = secondPos.x * cosA - secondPos.y * sinA;
    y1 = secondPos.x * sinA + secondPos.y * cosA;
    vertex.append( vec3( x1, y1, 0.0f ) );
    forloop( int k = 0 ; VERTEX_COUNT + 2 )
    {
        x1 = ( vertex[ vertex.size() - 2 ].x ) * cosA - vertex[ vertex.size() - 2 ].y * sinA;
        y1 = ( vertex[ vertex.size() - 2 ].x ) * sinA + vertex[ vertex.size() - 2 ].y * cosA;
        vertex.append( vec3( x1, y1, 0.0f ) );
        meshObj.addVertex( vertex[k] );
    }
    vertex.clear();
    meshObj.updateBounds();
    meshObj.flush();
}

Creating spherical meshes with DirectX?

How do you go about creating a sphere with meshes in DirectX? I'm using C++ and the program will be run on Windows only.
Everything is currently rendered through an IDirect3DDevice9 object.
You could use the D3DXCreateSphere function.
There are lots of ways to create a sphere.
One is to use polar coordinates to generate slices of the sphere.
struct Vertex
{
    float x, y, z;
    float nx, ny, nz;
};
Given that struct you'd generate the sphere as follows (I haven't tested this so I may have got it slightly wrong).
std::vector< Vertex > verts;
int count = 0;
while( count < numSlices )
{
    // phi sweeps from -PI/2 to PI/2, pole to pole.
    const float phi = -0.5f * M_PI + ( M_PI * count ) / ( numSlices - 1 );
    int count2 = 0;
    while( count2 < numSegments )
    {
        // theta sweeps a full circle around the vertical axis.
        const float theta = ( 2.0f * M_PI * count2 ) / numSegments;
        const float xzRadius = fabsf( sphereRadius * cosf( phi ) );
        Vertex v;
        v.x = xzRadius * cosf( theta );
        v.y = sphereRadius * sinf( phi );
        v.z = xzRadius * sinf( theta );
        // For a sphere centered at the origin, the normal is the normalised position.
        const float fRcpLen = 1.0f / sqrtf( (v.x * v.x) + (v.y * v.y) + (v.z * v.z) );
        v.nx = v.x * fRcpLen;
        v.ny = v.y * fRcpLen;
        v.nz = v.z * fRcpLen;
        verts.push_back( v );
        count2++;
    }
    count++;
}
This is how D3DXCreateSphere does it, I believe. Of course, the code above does not build the faces, but that's not a particularly complex bit of code if you set your mind to it :)
The other, and more interesting in my opinion, way is through surface subdivision.
If you start with a cube whose normals are defined the same way as in the code above, you can recursively subdivide each side. Basically, you find the center of the face and generate a vector from the sphere's center to that new point. Normalise it. Then push the vertex out to the radius of the sphere as follows (assuming v.n* is the normalised normal):
v.x = v.nx * sphereRadius;
v.y = v.ny * sphereRadius;
v.z = v.nz * sphereRadius;
You then repeat this process for the midpoint of each edge of the face you are subdividing.
Now you can split each face into 4 new quadrilateral faces. You can then subdivide each of those quads into 4 new quads and so on until you get to the refinement level you require.
Personally I find this process provides a nicer vertex distribution on the sphere than the first method.
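A compact sketch of the midpoint-projection step described above, for one edge of a face (the `Vec3` struct and function names are mine, not from any DirectX API):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Projects a point radially onto a sphere of the given radius centered at
// the origin: normalise the direction, then scale out to the radius.
Vec3 ProjectToSphere(Vec3 p, float radius)
{
    float len = std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
    return { p.x / len * radius, p.y / len * radius, p.z / len * radius };
}

// Midpoint of two vertices, pushed out to the sphere surface. Applying this
// to each edge midpoint and the face center of a quad yields the four new
// quads of one subdivision step.
Vec3 SubdivideEdge(Vec3 a, Vec3 b, float radius)
{
    Vec3 mid = { (a.x + b.x) * 0.5f, (a.y + b.y) * 0.5f, (a.z + b.z) * 0.5f };
    return ProjectToSphere(mid, radius);
}
```

Repeating this per face gives the even vertex distribution mentioned above, since every generated vertex lies exactly on the sphere.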