Oculus Tiny Room, DirectX, loading a 3D model into the scene - C++

I am currently working with the Oculus Rift PC SDK and wanted to start off with something simpler, like the Tiny Room Demo (DX11). I saw a tutorial online for loading a 3D model into the scene from an external file (RasterTek Tutorial 7: 3D Model Rendering).
The way the Tiny Room Demo creates a model is to hardcode the coordinates and render them:
TriangleSet walls;
walls.AddSolidColorBox(10.1f, 0.0f, 20.0f, 10.0f, 4.0f, -20.0f, 0xff808080);   // Left Wall
walls.AddSolidColorBox(10.0f, -0.1f, 20.1f, -10.0f, 4.0f, 20.0f, 0xff808080);  // Back Wall
walls.AddSolidColorBox(-10.0f, -0.1f, 20.0f, -10.1f, 4.0f, -20.0f, 0xff808080); // Right Wall
Add(
    new Model(&walls, XMFLOAT3(0, 0, 0), XMFLOAT4(0, 0, 0, 1),
        new Material(
            new Texture(false, 256, 256, Texture::AUTO_WALL)
        )
    )
);
void AddSolidColorBox(float x1, float y1, float z1, float x2, float y2, float z2, uint32_t c)
{
    AddQuad(Vertex(XMFLOAT3(x1, y2, z1), ModifyColor(c, XMFLOAT3(x1, y2, z1)), z1, x1),
            Vertex(XMFLOAT3(x2, y2, z1), ModifyColor(c, XMFLOAT3(x2, y2, z1)), z1, x2),
            Vertex(XMFLOAT3(x1, y2, z2), ModifyColor(c, XMFLOAT3(x1, y2, z2)), z2, x1),
            Vertex(XMFLOAT3(x2, y2, z2), ModifyColor(c, XMFLOAT3(x2, y2, z2)), z2, x2));
    ...
}

void AddQuad(Vertex v0, Vertex v1, Vertex v2, Vertex v3)
{
    AddTriangle(v0, v1, v2);
    AddTriangle(v3, v2, v1);
}

void AddTriangle(Vertex v0, Vertex v1, Vertex v2)
{
    VALIDATE(numVertices <= (maxBuffer - 3), "Insufficient triangle set");
    for (int i = 0; i < 3; i++) Indices[numIndices++] = short(numVertices + i);
    Vertices[numVertices++] = v0;
    Vertices[numVertices++] = v1;
    Vertices[numVertices++] = v2;
}
I tried to load the model into the scene using a function adapted from the tutorial:
TriangleSet models;
models.LoadModel("F:\\cube.txt");
Add(
    new OBJModel(&models, XMFLOAT3(0, 0, 0), XMFLOAT4(0, 0, 0, 1),
        new OBJMaterial(
            new Texture(false, 256, 256, Texture::AUTO_WHITE)
            //new Texture(DirectX, L"wallpaper.jpg")
        )
    )
); // 3D Model
void LoadModel(char* filename)
{
    ifstream fin;
    char input;

    // Open the model file.
    fin.open(filename);

    // Read up to the value of the vertex count.
    fin.get(input);
    while (input != ':')
    {
        fin.get(input);
    }

    // Read in the vertex count.
    m_vertexCount = 0;
    fin >> m_vertexCount;

    // Read up to the beginning of the data.
    fin.get(input);
    while (input != ':')
    {
        fin.get(input);
    }
    fin.get(input);
    fin.get(input);

    // Read in the vertex data.
    for (int i = 0; i < m_vertexCount; i++)
    {
        Indices[numIndices++] = short(numVertices + i);
        // numVertices++;  <- deleted
        fin >> Vertices[numVertices].Pos.x >> Vertices[numVertices].Pos.y >> Vertices[numVertices].Pos.z;
        fin >> Vertices[numVertices].U >> Vertices[numVertices].V;
        fin >> Normals[numVertices].Norm.x >> Normals[numVertices].Norm.y >> Normals[numVertices].Norm.z;
        Vertices[numVertices].C = ModifyColor(0xffffffff, Vertices[numVertices].Pos);
        numVertices += 1; // new statement
    }

    // Close the model file.
    fin.close();
}
I did not use the normals, since in the tutorial they were meant for texturing the object; instead I defined the color to be solid yellow. I tried to keep the structure of the model loading as similar to the Tiny Room Demo as possible.
I used the same model, material, and texture (vertex shader and pixel shader) as the Tiny Room Demo does. However, what was rendered onto the scene did not appear as it was supposed to.
I did step-by-step debugging to see if the coordinates were correctly loaded into Vertices[numVertices], and there seems to be no issue there. The file I tried to load was cube.txt:
Vertex Count: 36
Data:
-1.0 1.0 -1.0 0.0 0.0 0.0 0.0 -1.0
1.0 1.0 -1.0 1.0 0.0 0.0 0.0 -1.0
-1.0 -1.0 -1.0 0.0 1.0 0.0 0.0 -1.0
-1.0 -1.0 -1.0 0.0 1.0 0.0 0.0 -1.0
1.0 1.0 -1.0 1.0 0.0 0.0 0.0 -1.0
1.0 -1.0 -1.0 1.0 1.0 0.0 0.0 -1.0
1.0 1.0 -1.0 0.0 0.0 1.0 0.0 0.0
1.0 1.0 1.0 1.0 0.0 1.0 0.0 0.0
1.0 -1.0 -1.0 0.0 1.0 1.0 0.0 0.0
1.0 -1.0 -1.0 0.0 1.0 1.0 0.0 0.0
1.0 1.0 1.0 1.0 0.0 1.0 0.0 0.0
1.0 -1.0 1.0 1.0 1.0 1.0 0.0 0.0
1.0 1.0 1.0 0.0 0.0 0.0 0.0 1.0
-1.0 1.0 1.0 1.0 0.0 0.0 0.0 1.0
1.0 -1.0 1.0 0.0 1.0 0.0 0.0 1.0
1.0 -1.0 1.0 0.0 1.0 0.0 0.0 1.0
-1.0 1.0 1.0 1.0 0.0 0.0 0.0 1.0
-1.0 -1.0 1.0 1.0 1.0 0.0 0.0 1.0
...
What was supposed to show up (except without the texture):
3D cube
What actually showed up was just fragments of triangles:
TinyRoomDemo + 3D cube
I am unsure what went wrong. Please do advise! Thank you very much :)
Vertex and Index buffer
struct OBJModel
{
    XMFLOAT3      Pos;
    XMFLOAT4      Rot;
    OBJMaterial * Fill;
    DataBuffer  * VertexBuffer;
    DataBuffer  * IndexBuffer;
    int           NumIndices;

    OBJModel() : Fill(nullptr), VertexBuffer(nullptr), IndexBuffer(nullptr) {};

    void Init(TriangleSet * t)
    {
        NumIndices = t->numIndices;
        VertexBuffer = new DataBuffer(DIRECTX.Device, D3D11_BIND_VERTEX_BUFFER, &t->Vertices[0], t->numVertices * sizeof(Vertex));
        IndexBuffer = new DataBuffer(DIRECTX.Device, D3D11_BIND_INDEX_BUFFER, &t->Indices[0], t->numIndices * sizeof(short));
    }
    ...

DIRECTX.Context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
---------------------------------------------------------------------------------------------------------------------------------
06/06/2017 edited:
3D Model data:
Vertex Count: 798
Data:
28.3005 0.415886 -45.8282 0.7216 0.720211 0 0 -1
28.3005 -0.809079 -45.8282 0.732222 0.720211 0 0 -1
-27.7441 -0.809079 -45.8282 0.732222 0.847836 0 0 -1
28.3005 0.415886 68.1056 0.459891 0.720286 0 1 -0
28.3005 0.415886 -45.8282 0.719341 0.720286 0 1 -0
-27.7441 0.415886 -45.8282 0.719341 0.847911 0 1 -0
28.3005 -0.809079 68.1056 0.721603 0.720211 0 0 1
28.3005 0.415886 68.1056 0.732225 0.720211 0 0 1
-27.7441 0.415886 68.1056 0.732225 0.847836 0 0 1
28.3005 -0.809079 -45.8282 0.459891 0.720298 0 -1 -0
28.3005 -0.809079 68.1056 0.719341 0.720298 0 -1 -0
-27.7441 -0.809079 68.1056 0.719341 0.847923 0 -1 -0
28.3005 0.415886 68.1056 0.719341 0.70683 1 0 -0
...

From the data you provided for the house, it seems like one triangle is facing one way and the second is facing the opposite way.
Use a rasterizer state without back-face culling to draw this object.
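If it helps, here is a minimal sketch of creating and binding such a rasterizer state in D3D11 (using the DIRECTX.Device / DIRECTX.Context globals from the demo; error handling omitted):
// Create a rasterizer state with culling disabled and bind it before drawing.
D3D11_RASTERIZER_DESC rasterizerDesc = {};
rasterizerDesc.FillMode = D3D11_FILL_SOLID;
rasterizerDesc.CullMode = D3D11_CULL_NONE;   // draw triangles regardless of winding
rasterizerDesc.DepthClipEnable = TRUE;

ID3D11RasterizerState* noCullState = nullptr;
DIRECTX.Device->CreateRasterizerState(&rasterizerDesc, &noCullState);
DIRECTX.Context->RSSetState(noCullState);    // bind before drawing the model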

Related

Why are the transformation matrices behaving differently?

I get different results when scaling objects.
The objects have four different glm::vec3 values:
1) Position, Rotation, Scaling, Center Point
This is the transformation matrix of the object:
TransformationMatrix = PositionMatrix() * RotationMatrix() * ScalingMatrix();
The scaling matrix is built around a pivot (center) point, like this:
glm::vec3 pivotVector(pivotx, pivoty, pivotz);
glm::mat4 TransPivot = glm::translate(glm::mat4x4(1.0f), pivotVector);
glm::mat4 TransPivotInverse = glm::translate(glm::mat4x4(1.0f), -pivotVector);
glm::mat4 TransformationScale = glm::scale(glm::mat4(1.0), glm::vec3(scax, scay, scaz));
return TransPivot * TransformationScale * TransPivotInverse;
In the first case:
I move the rectangle object 200 units in x.
Then I scale the group, which is at position x = 0.0,
so the final matrix for the rectangle object is
finalMatrix = rectangleTransformationMatrix * groupTransformationMatrix
The result is what I expected: the rectangle scales and moves towards the center of the screen.
Now I do the same thing with three containers.
Here I move the group container to 200 and scale the Top container, which is at position 0.0:
finalMatrix = rectangleTransformationMatrix * groupTransformationMatrix * TopTransformationMatrix
The rectangle scales at its own position, as if the center point of the screen had also moved 200 units.
If I add -200 units to the pivot point x of the Top container, then I get the result I expected:
the rectangle moves towards the center of the screen and scales.
Can someone please explain why I need to add -200 units to the center point of the Top container, whereas in the first case I did not need to add any value to the pivot point of the scaling container, when both operations are identical in nature?
//////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
First case
Rectangle -> position( x = 200, y = 0, z = 0 ), scaling( 1.0, 1.0, 1.0 ), Rotation( 0.0, 0.0, 0.0 )
glm::mat4 PositionMatrix = glm::translate( /* fill in the values */ );
glm::mat4 ScalingMatrix = glm::scale( /* fill in the values */ );
glm::mat4 RotationMatrix = glm::rotate( /* fill in the values */ );
RectangleMatrix = PositionMatrix * RotationMatrix * ScalingMatrix;
The matrix for the group:
group -> position( x = 0.0, y = 0, z = 0 ), scaling( 0.5, 1.0, 1.0 ), Rotation( 0.0, 0.0, 0.0 )
groupMatrix = PositionMatrix * RotationMatrix * ScalingMatrix;
Final result:
finalMatrix = RectangleMatrix * groupMatrix
//////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
Second case
Rectangle -> position( x = 0, y = 0, z = 0 ), scaling( 1.0, 1.0, 1.0 ), Rotation( 0.0, 0.0, 0.0 )
glm::mat4 PositionMatrix = glm::translate( /* fill in the values */ );
glm::mat4 ScalingMatrix = glm::scale( /* fill in the values */ );
glm::mat4 RotationMatrix = glm::rotate( /* fill in the values */ );
RectangleMatrix = PositionMatrix * RotationMatrix * ScalingMatrix;
The matrix for the group:
group -> position( x = 200.0, y = 0, z = 0 ), scaling( 1.0, 1.0, 1.0 ), Rotation( 0.0, 0.0, 0.0 )
groupMatrix = PositionMatrix * RotationMatrix * ScalingMatrix;
The matrix for Top:
Top -> position( x = 0.0, y = 0, z = 0 ), scaling( 0.5, 1.0, 1.0 ), Rotation( 0.0, 0.0, 0.0 )
TopMatrix = PositionMatrix * RotationMatrix * ScalingMatrix;
Final result:
finalMatrix = RectangleMatrix * groupMatrix * TopMatrix
Matrix operations are not commutative: scale * translate is not the same as translate * scale.
If you have a translation of 200 and a scale of 0.2, then
translate(200) * scale(0.2)
gives an object scaled by 0.2 and translated by 200. But
scale(0.2) * translate(200)
gives an object scaled by 0.2 and translated by 40 (0.2 * 200).
If you have 2 matrices:
groupMatrix = PositionMatrix() * RotationMatrix() * ScalingMatrix();
TopMatrix = PositionMatrix() * RotationMatrix() * ScalingMatrix();
then groupMatrix * TopMatrix is the same as
groupPositionMatrix * groupRotationMatrix * groupScalingMatrix * topPositionMatrix * topRotationMatrix * topScalingMatrix
The result differs depending on whether the scale is encoded in groupScalingMatrix or in topScalingMatrix, and likewise whether the translation is encoded in groupPositionMatrix or in topPositionMatrix.
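To see this numerically, here is a small sketch with GLM (the numbers match the example above; the translation axis is chosen arbitrarily):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <cstdio>

int main()
{
    glm::mat4 T = glm::translate(glm::mat4(1.0f), glm::vec3(200.0f, 0.0f, 0.0f));
    glm::mat4 S = glm::scale(glm::mat4(1.0f), glm::vec3(0.2f));
    glm::vec4 origin(0.0f, 0.0f, 0.0f, 1.0f);

    glm::vec4 a = (T * S) * origin; // scale first, then translate -> x = 200
    glm::vec4 b = (S * T) * origin; // translate first, then scale -> x = 40
    std::printf("T*S: x = %.1f, S*T: x = %.1f\n", a.x, b.x);
}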

Eigen Sparse matrix. Sorting non-zero elements with respect to rows

I have a sparse matrix, and I need to store the non-zero elements with their respective row and column in ascending order with respect to rows.
Following this https://eigen.tuxfamily.org/dox/group__TutorialSparse.html
I tried:
// mat =
// 1.0, 0.0, 1.0, 0.0, 0.0
// 1.0, 0.0, 0.0, 0.0, 1.0
// 0.0, 1.0, 0.0, 0.0, 0.0
// 0.0, 0.0, 1.0, 0.0, 1.0
// 1.0, 0.0, 0.0, 1.0, 0.0

// create and fill the sparse matrix
SparseMatrix<double> mat(5,5);
mat.insert(0,0) = 1.0;
mat.insert(0,2) = 1.0;
mat.insert(1,0) = 1.0;
mat.insert(1,4) = 1.0;
mat.insert(2,1) = 1.0;
mat.insert(3,2) = 1.0;
mat.insert(3,4) = 1.0;
mat.insert(4,0) = 1.0;
mat.insert(4,3) = 1.0;

// matrix where to store the row, col, value
Eigen::MatrixXd mat_map(mat.nonZeros(), 3);
int index_mat = -1;
for (int k = 0; k < mat.outerSize(); ++k)
    for (SparseMatrix<double>::InnerIterator it(mat,k); it; ++it)
    {
        index_mat++;
        mat_map(index_mat, 0) = it.row();   // row index
        mat_map(index_mat, 1) = it.col();   // col index
        mat_map(index_mat, 2) = it.value();
    }
cout << mat_map << endl;
what I get is the following
0 0 1.0
1 0 1.0
4 0 1.0
2 1 1.0
0 2 1.0
3 2 1.0
4 3 1.0
1 4 1.0
3 4 1.0
while what I want is
0 0 1.0
0 2 1.0
1 0 1.0
1 4 1.0
2 1 1.0
3 2 1.0
3 4 1.0
4 0 1.0
4 3 1.0
Any help will be appreciated.
Thanks!
SparseMatrix has an optional template parameter that defines the storage order (cf. the Eigen documentation). By default, SparseMatrix uses column-major storage. So instead of Eigen::SparseMatrix<double>, use Eigen::SparseMatrix<double, Eigen::RowMajor>.
It would be beneficial to use an alias for this:
using MyMatrix = Eigen::SparseMatrix<double, Eigen::RowMajor>;
MyMatrix mat(5,5);
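Put together, a complete sketch of the question's loop with row-major storage (same insertions as above; k now indexes rows, so the output comes out sorted by row):
#include <Eigen/Sparse>
#include <iostream>

int main()
{
    // With row-major storage the outer dimension is the row, so the
    // iteration below visits the non-zeros row by row, as requested.
    using MyMatrix = Eigen::SparseMatrix<double, Eigen::RowMajor>;

    MyMatrix mat(5, 5);
    mat.insert(0, 0) = 1.0;
    mat.insert(0, 2) = 1.0;
    mat.insert(1, 0) = 1.0;
    mat.insert(1, 4) = 1.0;
    mat.insert(2, 1) = 1.0;
    mat.insert(3, 2) = 1.0;
    mat.insert(3, 4) = 1.0;
    mat.insert(4, 0) = 1.0;
    mat.insert(4, 3) = 1.0;

    for (int k = 0; k < mat.outerSize(); ++k)              // k is now a row index
        for (MyMatrix::InnerIterator it(mat, k); it; ++it)
            std::cout << it.row() << " " << it.col() << " " << it.value() << "\n";
}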

Trouble with pre-multiplied alpha blending

I created a composite renderer that simply alpha-blends two textures, one on top of the other, but the blending is just not working correctly.
Here is the code for the renderer, rendering just the background:
RenderFunction layer_render = [&]() {
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glUseProgram(layer_shader_.get_program_id());

    // ----- Fetch shader uniform locations
    layer_shader_.SetUniform("tex", 0);
    layer_shader_.SetUniform("create_alpha_mask", false);

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glBlendEquation(GL_FUNC_ADD);

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, background_texture_id_);
    layer_shader_.SetUniform("model", background_model_);

    background_mesh_.Bind();
    glDrawElements(GL_TRIANGLES, background_mesh_.GetIndicesCount(), GL_UNSIGNED_INT, (void*)0);
    background_mesh_.Unbind();
};
layer_framebuffer_.RenderToTexture(layer_render);
layer_framebuffer_.RenderToTexture(layer_render);
The background texture I'm passing is a full RGBA {1.0, 0.0, 0.0, 0.5}. What comes out is {0.5, 0.0, 0.0, 0.5}. The alpha blending is not properly accounting for the source alpha when computing the blend, for some reason I'm failing to see.
Some pseudocode of what I was expecting:
source_alpha = 0.5
dest_alpha = 0.0 * (1.0 - source_alpha) = 0.0
output_alpha = source_alpha + dest_alpha = 0.5
out_r = (source_r * source_alpha + dest_r * dest_alpha) / output_alpha = (1.0 * 0.5 + 0.0 * 0.0) / 0.5 = 1.0
out_g = (source_g * source_alpha + dest_g * dest_alpha) / output_alpha = (0.0 * 0.5 + 0.0 * 0.0) / 0.5 = 0.0
out_b = (source_b * source_alpha + dest_b * dest_alpha) / output_alpha = (0.0 * 0.5 + 0.0 * 0.0) / 0.5 = 0.0
out_a = output_alpha = 0.5
I don't know where you got that pseudocode from, but that's not how it works. Why are you dividing by (source_alpha + dest_alpha)?
If you want to use premultiplied alpha, what you do is set the blend function to:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
so it doesn't multiply the source with alpha, and then in your fragment shader, at the end, you premultiply rgb with the alpha, which means:
color.rgb *= color.a;
That's it.
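For concreteness, a minimal sketch of both halves of that change (the fragment-shader line is shown as a comment; "color" stands in for whatever your shader's final output variable is):
// Premultiplied alpha: the shader output already carries its alpha,
// so the source blend factor becomes GL_ONE instead of GL_SRC_ALPHA.
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glBlendEquation(GL_FUNC_ADD);

// At the end of the fragment shader (GLSL):
//     color.rgb *= color.a;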

How to translate a matrix (4x4) in Eigen?

How can I translate a matrix (4x4) in Eigen?
// identity matrix 4x4
Eigen::Matrix<float, 4, 4> result = Eigen::Matrix<float, 4, 4>::Identity();

// translation vector
// 3.0f
// 4.0f
// 5.0f
Translation<float, 3> trans(3.0f, 4.0f, 5.0f);
i.e., I have this matrix:
1.0 0.0 0.0 0.0
0.0 1.0 0.0 0.0
0.0 0.0 1.0 0.0
0.0 0.0 0.0 1.0
And I want get this:
1.0 0.0 0.0 3.0
0.0 1.0 0.0 4.0
0.0 0.0 1.0 5.0
0.0 0.0 0.0 1.0
Right? How can I do this?
I can do this:
result(0, 3) = 3.0f;
result(1, 3) = 4.0f;
result(2, 3) = 5.0f;
But it's not elegant. =) What do you advise?
Like this:
Affine3f transform(Translation3f(1,2,3));
Matrix4f matrix = transform.matrix();
Here is the doc with more details.
Some alternatives to catscradle's answer:
Matrix4f mat = Matrix4f::Identity();
mat.col(3).head<3>() << 1, 2, 3;
or
mat.col(3).head<3>() = translation_vector;
or
Matrix4f mat;
mat << Matrix3f::Identity(), Vector3f(1, 2, 3),
       0, 0, 0, 1;
or
Affine3f a = Affine3f::Identity();
a.translation() = translation_vector;
Matrix4f mat = a.matrix();
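As a quick sanity check (hypothetical usage), applying any of these matrices to the origin should give back the translation in homogeneous coordinates:
Eigen::Vector4f p(0.0f, 0.0f, 0.0f, 1.0f); // the origin
Eigen::Vector4f q = mat * p;               // q == (1, 2, 3, 1) for the examples above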

GL_CULL_FACE makes all objects disappear

I am trying to create some simple polygons in OpenGL 3.3. I have two types of objects with the following properties:
Object 1 - 10 vertices (listed below, in order) stored in a GL_ARRAY_BUFFER and drawn with GL_TRIANGLE_FAN
v x y z w
v 0.0 0.0 1.0 1.0
v 0.0 1.0 0.1 1.0
v 0.71 0.71 0.1 1.0
v 1.0 0.0 0.1 1.0
v 0.71 -0.71 0.1 1.0
v 0.0 -1.0 0.1 1.0
v -0.71 -0.71 0.1 1.0
v -1.0 0.0 0.1 1.0
v -0.71 0.71 0.1 1.0
v 0.0 1.0 0.1 1.0
Object 2 - 4 vertices (listed below, in order) stored in a GL_ARRAY_BUFFER and drawn with GL_TRIANGLE_STRIP
v x y z w
v 0.0 0.0 0.0 1.0
v 0.0 1.0 0.0 1.0
v 1.0 0.0 0.0 1.0
v 1.0 1.0 0.0 1.0
I load 64 Object1's and 4 Object2's. The Object2's are scaled using glm::scale(20, 20, 0).
When I render these with GL_CULL_FACE disabled but GL_DEPTH_TEST enabled with glDepthFunc(GL_LESS), everything works fine. As soon as I enable GL_CULL_FACE, all I get is a blank window.
Some other useful information :
- Rendering order = 4 Object2's followed by 64 Object1's
- Camera - glm::lookAt(glm::vec3(0.0f, 0.0f, 50.0f), glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
- Perspective - glm::perspective(45.0f, 16.0f / 9.0f, 0.01f, 1000.0f);
I have been trying to figure out why GL_CULL_FACE doesn't work for the past couple of days but have no idea. Any help is much appreciated.
This is almost always due to the polygon winding/normal direction (the right-hand rule applied to the three vertices of each triangle).
Try switching the culled face using:
glCullFace(GL_BACK or GL_FRONT)
Some more reading here
Which culling direction have you specified? When you create a polygon, you 'wind' the vertices either clockwise or counter-clockwise relative to the camera's viewpoint, and you have to tell OpenGL which type it should cull.
As you're probably aware, back-face culling is for hiding polygons facing away from the camera. If you have clockwise-wound faces, the vertices are in clockwise order when facing the camera and counter-clockwise when facing away, so you tell OpenGL to cull counter-clockwise polygons.
I suspect that you're culling the polygons you want to see: GL_BACK is the default, so switching the mode to GL_FRONT should work. See the documentation for more info: http://www.opengl.org/sdk/docs/man/xhtml/glCullFace.xml
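In code, the relevant state looks like this (these are the GL defaults, shown explicitly; flip either value to match your winding):
glEnable(GL_CULL_FACE);
glFrontFace(GL_CCW); // counter-clockwise winding is front-facing (GL default)
glCullFace(GL_BACK); // cull back faces (GL default); try GL_FRONT if everything disappears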