When importing a model in .obj format, many of the polygons share vertices, so storing them all wastes memory. What I want to do is remove the duplicates by keeping only the unique vertices.
Hashing of Vertex
/// Template specialization for hashing of a Vertex
namespace std {
    template<typename T>
    struct hash<Vertex<T>> {
        void hash_combine(size_t &seed, const size_t &hash) const {
            seed ^= hash + 0x9e3779b9 + (seed << 6) + (seed >> 2);
        }

        size_t operator() (const Vertex<T> &vertex) const {
            auto hasher = hash<float>{};
            auto hashed_x = hasher(vertex.position.x);
            auto hashed_y = hasher(vertex.position.y);
            auto hashed_z = hasher(vertex.position.z);
            auto hashed_color_r = hasher(vertex.color.r);
            auto hashed_color_g = hasher(vertex.color.g);
            auto hashed_color_b = hasher(vertex.color.b);
            auto hashed_color_a = hasher(vertex.color.a);
            auto hashed_texcoord_x = hasher(vertex.texCoord.x);
            auto hashed_texcoord_y = hasher(vertex.texCoord.y);
            auto hashed_normal_x = hasher(vertex.normal.x);
            auto hashed_normal_y = hasher(vertex.normal.y);
            auto hashed_normal_z = hasher(vertex.normal.z);

            size_t seed = 0;
            hash_combine(seed, hashed_x);
            hash_combine(seed, hashed_y);
            hash_combine(seed, hashed_z);
            hash_combine(seed, hashed_color_r);
            hash_combine(seed, hashed_color_g);
            hash_combine(seed, hashed_color_b);
            hash_combine(seed, hashed_color_a);
            hash_combine(seed, hashed_texcoord_x);
            hash_combine(seed, hashed_texcoord_y);
            hash_combine(seed, hashed_normal_x);
            hash_combine(seed, hashed_normal_y);
            hash_combine(seed, hashed_normal_z);
            return seed;
        }
    };
}
Importing the mesh with tinyobjloader
Mesh Renderer::load_mesh_from_file(std::string filepath) {
    tinyobj::attrib_t attrib;
    std::vector<tinyobj::shape_t> shapes;
    std::vector<tinyobj::material_t> materials;
    std::string err;
    auto success = tinyobj::LoadObj(&attrib, &shapes, &materials, &err, filepath.c_str());
    if (!success) { SDL_Log("Failed loading mesh %s: %s", filepath.c_str(), err.c_str()); return Mesh(); }

    std::unordered_map<Vertex<float>, size_t> unique_vertices{};
    Mesh mesh{};
    for (const auto &shape : shapes) { // Shapes
        size_t index_offset = 0;
        for (auto face : shape.mesh.num_face_vertices) { // Faces (polygons)
            for (auto v = 0; v < face; v++) {
                tinyobj::index_t idx = shape.mesh.indices[index_offset + v];
                Vertex<float> vertex{};
                float vx = attrib.vertices[3 * idx.vertex_index + 0];
                float vy = attrib.vertices[3 * idx.vertex_index + 1];
                float vz = attrib.vertices[3 * idx.vertex_index + 2];
                vertex.position = {vx, vy, vz};
                float tx = attrib.texcoords[2 * idx.texcoord_index + 0];
                float ty = attrib.texcoords[2 * idx.texcoord_index + 1];
                vertex.texCoord = {tx, ty};
                float nx = attrib.normals[3 * idx.normal_index + 0];
                float ny = attrib.normals[3 * idx.normal_index + 1];
                float nz = attrib.normals[3 * idx.normal_index + 2];
                vertex.normal = {nx, ny, nz};
                // These two lines work just fine (includes all vertices)
                // mesh.vertices.push_back(vertex);
                // mesh.indices.push_back(mesh.indices.size());
                // Check for unique vertices, models will contain duplicates
                if (unique_vertices.count(vertex) == 0) {
                    unique_vertices[vertex] = mesh.indices.size();
                    mesh.vertices.push_back(vertex);
                    mesh.indices.push_back(mesh.indices.size());
                } else {
                    mesh.indices.push_back(unique_vertices.at(vertex));
                }
            }
            index_offset += face;
        }
    }
    SDL_Log("Number of vertices: %lu for model %s", mesh.vertices.size(), filepath.c_str());
    return mesh;
}
The first image is when all the vertices are included.
This one is when I am only using unique vertices.
Any ideas?
if (unique_vertices.count(vertex) == 0) {
    unique_vertices[vertex] = mesh.vertices.size();
    mesh.indices.push_back(mesh.vertices.size());
    mesh.vertices.push_back(vertex);
}
Explanation: an index is a "pointer" to a vertex's location in the vertex array. So it has to be the position at which the vertex data is written (mesh.vertices.size()), not the current size of the index array.
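Putting both branches together, the corrected lookup looks like this (a sketch combining the fix above with the question's else branch):

if (unique_vertices.count(vertex) == 0) {
    // First time we see this vertex: it will live at index mesh.vertices.size()
    unique_vertices[vertex] = mesh.vertices.size();
    mesh.indices.push_back(mesh.vertices.size());
    mesh.vertices.push_back(vertex);
} else {
    // Duplicate: reuse the index of the vertex we already stored
    mesh.indices.push_back(unique_vertices.at(vertex));
}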
From the shown images, it seems to be a triangle-vertex reference problem.
Normally, the obj format already stores a list of unique vertices, and each triangle is just a set of three indices referring to its vertices. Suppose that, for some reason, vertex B is a duplicate of vertex A and you decide to eliminate B: you then have to update the references of every triangle that uses B so that they point to A instead.
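A rough sketch of that remapping as a post-process (assuming the Mesh and hashable Vertex types from the question; compact, remap and seen are my own names):

// Collapse duplicate vertices and rewrite the index buffer so every
// triangle references the surviving copy.
std::vector<Vertex<float>> compact;
std::vector<size_t> remap(mesh.vertices.size());
std::unordered_map<Vertex<float>, size_t> seen;

for (size_t i = 0; i < mesh.vertices.size(); ++i) {
    auto it = seen.find(mesh.vertices[i]);
    if (it == seen.end()) {            // first occurrence: keep it
        size_t new_index = compact.size();
        seen[mesh.vertices[i]] = new_index;
        compact.push_back(mesh.vertices[i]);
        remap[i] = new_index;
    } else {                           // duplicate: point at the survivor
        remap[i] = it->second;
    }
}
for (auto &index : mesh.indices)       // fix up the triangle references
    index = remap[index];
mesh.vertices = std::move(compact);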
It's not always a good idea to eliminate repeated coordinates. Coordinates will repeat, for example, in the stitching areas between meshes that together form a closed 3D surface. In games, low-polygon meshes are used to allow fast rendering, but otherwise dealing with these large vertex sets is no longer a big issue, since powerful GPUs and multi-core CPUs make lifelike animations possible nowadays.
Related
I have two different methods for adding elements to a vector.
GUI_Vertices.emplace_back();
GUI_Vertices.back().pos.x = ((float)x / 400) - 1.f;
GUI_Vertices.back().pos.y = ((float)y / 300) - 1.f;
GUI_Vertices.back().texCoord.x = u;
GUI_Vertices.back().texCoord.y = v;
GUI_Vertices.back().color.r = m_Color.r / 128;
GUI_Vertices.back().color.g = m_Color.g / 128;
GUI_Vertices.back().color.b = m_Color.b / 128;
GUI_Vertices.back().color.a = m_Color.a / 128;
The above code works; however, it forces me to add a new element to the GUI_Vertices vector up front.
Vertex NewVertex;
NewVertex.pos.x = ((float)x / 400) - 1.f;
NewVertex.pos.y = ((float)y / 300) - 1.f;
NewVertex.texCoord.x = u;
NewVertex.texCoord.y = v;
NewVertex.color.r = m_Color.r / 128;
NewVertex.color.g = m_Color.g / 128;
NewVertex.color.b = m_Color.b / 128;
NewVertex.color.a = m_Color.a / 128;
GUI_Vertices.emplace_back(NewVertex);
The above code only works sometimes, but it lets me conditionally add the NewVertex into the GUI_Vertices vector when needed.
Here is the definition of Vertex:
struct Vertex {
    glm::vec3 pos;
    glm::vec4 color;
    glm::vec2 texCoord;

    static VkVertexInputBindingDescription getBindingDescription() {
        VkVertexInputBindingDescription bindingDescription = {};
        bindingDescription.binding = 0;
        bindingDescription.stride = sizeof(Vertex);
        bindingDescription.inputRate = VK_VERTEX_INPUT_RATE_VERTEX;
        return bindingDescription;
    }

    static std::array<VkVertexInputAttributeDescription, 3> getAttributeDescriptions() {
        std::array<VkVertexInputAttributeDescription, 3> attributeDescriptions = {};
        attributeDescriptions[0].binding = 0;
        attributeDescriptions[0].location = 0;
        attributeDescriptions[0].format = VK_FORMAT_R32G32B32_SFLOAT;
        attributeDescriptions[0].offset = offsetof(Vertex, pos);

        attributeDescriptions[1].binding = 0;
        attributeDescriptions[1].location = 1;
        attributeDescriptions[1].format = VK_FORMAT_R32G32B32A32_SFLOAT;
        attributeDescriptions[1].offset = offsetof(Vertex, color);

        attributeDescriptions[2].binding = 0;
        attributeDescriptions[2].location = 2;
        attributeDescriptions[2].format = VK_FORMAT_R32G32_SFLOAT;
        attributeDescriptions[2].offset = offsetof(Vertex, texCoord);

        return attributeDescriptions;
    }

    bool operator==(const Vertex& other) const {
        return pos == other.pos && color == other.color && texCoord == other.texCoord;
    }
};
namespace std {
    // Requires #include <glm/gtx/hash.hpp> for the std::hash specializations of the glm vector types.
    template<> struct hash<Vertex> {
        size_t operator()(Vertex const& vertex) const {
            return ((hash<glm::vec3>()(vertex.pos) ^
                    (hash<glm::vec4>()(vertex.color) << 1)) >> 1) ^
                    (hash<glm::vec2>()(vertex.texCoord) << 1);
        }
    };
}
Later on in program execution, after adding all our Vertex elements to the GUI_Vertices vector, I perform the following operation on GUI_Vertices:
memcpy(GUI_VertexAllocation->GetMappedData(), GUI_Vertices.data(), sizeof(Vertex) * GUI_Vertices.size());
I'm copying the memory from GUI_Vertices into a preallocated buffer which will be used by Vulkan to render our vertices.
Now I'm trying to figure out why the first method of adding Vertex objects into GUI_Vertices always works and the second method only sometimes works.
Here is a link to the entire project https://github.com/kklouzal/WorldEngine/blob/GUI_Indirect_Draw/Vulkan/VulkanGWEN.hpp
After recompiling the project, the second method will occasionally work, so I'm getting some undefined behavior here. I have checked the validity of GUI_Vertices up until the point where we do our memcpy, and the data appears to be valid, so I'm not sure what's going on.
I would like to get the second method working so I can conditionally add new vertices into the buffer.
NewVertex.pos.x = ((float)x / 400) - 1.f;
NewVertex.pos.y = ((float)y / 300) - 1.f;
...
glm::vec3 pos;
emplace_back will always perform value initialization on the object it creates, which initializes all of the data members. By contrast, Vertex NewVertex; will default-initialize the object, which leaves its members uninitialized (since the GLM types have trivial default constructors).
So pos.z is uninitialized, and your code never sets it itself. You're sending uninitialized garbage to the GPU.
If you create the object with Vertex NewVertex{};, then it will be value-initialized, just like emplace_back does.
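A minimal illustration of the difference (hypothetical snippet, not taken from the project):

Vertex a;    // default-initialized: pos, color and texCoord hold indeterminate values
Vertex b{};  // value-initialized: all members are zero-initialized

GUI_Vertices.emplace_back();             // the new element is value-initialized, like b

// Fixed version of the second method:
Vertex NewVertex{};                      // value-initialize so pos.z, color, etc. start at zero
NewVertex.pos.x = ((float)x / 400) - 1.f;
NewVertex.pos.y = ((float)y / 300) - 1.f;
// ... fill in the remaining members as before ...
GUI_Vertices.emplace_back(NewVertex);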
The repository (GitHub)
I'm having issues with the diffuse shading of my models (the issue does not arise when rendering primitives).
What's interesting to note is that, when you look at the left reflective sphere, the shading appears normal (I might be wrong on this; it's based purely on observation).
Low poly bunny and a triangle
Low poly bunny and 2 reflective spheres
Cube and 2 reflective spheres
I'm not sure what I'm doing wrong, as the normals are calculated each time I create a triangle in the constructor. I am using tinyobjloader to load my models; here is the TriangleMesh intersection algorithm.
FPType TriangleMesh::GetIntersection(const Ray &ray)
{
    for(auto &shape : shapes)
    {
        size_t index_offset = 0;
        for(size_t f = 0; f < shape.mesh.num_face_vertices.size(); ++f) // faces (triangles)
        {
            int fv = shape.mesh.num_face_vertices[f];
            tinyobj::index_t &idx0 = shape.mesh.indices[index_offset + 0]; // v0
            tinyobj::index_t &idx1 = shape.mesh.indices[index_offset + 1]; // v1
            tinyobj::index_t &idx2 = shape.mesh.indices[index_offset + 2]; // v2
            Vec3d v0 = Vec3d(attrib.vertices[3 * idx0.vertex_index + 0], attrib.vertices[3 * idx0.vertex_index + 1], attrib.vertices[3 * idx0.vertex_index + 2]);
            Vec3d v1 = Vec3d(attrib.vertices[3 * idx1.vertex_index + 0], attrib.vertices[3 * idx1.vertex_index + 1], attrib.vertices[3 * idx1.vertex_index + 2]);
            Vec3d v2 = Vec3d(attrib.vertices[3 * idx2.vertex_index + 0], attrib.vertices[3 * idx2.vertex_index + 1], attrib.vertices[3 * idx2.vertex_index + 2]);
            Triangle tri(v0, v1, v2);
            if(tri.GetIntersection(ray))
                return tri.GetIntersection(ray);
            index_offset += fv;
        }
    }
}
The Triangle Intersection algorithm.
FPType Triangle::GetIntersection(const Ray &ray)
{
    Vector3d v0v1 = v1 - v0;
    Vector3d v0v2 = v2 - v0;
    Vector3d pvec = ray.GetDirection().Cross(v0v2);
    FPType det = v0v1.Dot(pvec);

    // ray and triangle are parallel if det is close to 0
    if(abs(det) < BIAS)
        return false;

    FPType invDet = 1 / det;
    FPType u, v;
    Vector3d tvec = ray.GetOrigin() - v0;
    u = tvec.Dot(pvec) * invDet;
    if(u < 0 || u > 1)
        return false;

    Vector3d qvec = tvec.Cross(v0v1);
    v = ray.GetDirection().Dot(qvec) * invDet;
    if(v < 0 || u + v > 1)
        return false;

    FPType t = v0v2.Dot(qvec) * invDet;
    if(t < BIAS)
        return false;

    return t;
}
I think this is because, when I'm handling all my object intersections, the triangle mesh is treated as a single object, so it only returns one normal when I'm trying to get the object normals:
Color Trace(const Vector3d &origin, const Vector3d &direction, const std::vector<std::shared_ptr<Object>> &sceneObjects, const int indexOfClosestObject,
            const std::vector<std::shared_ptr<Light>> &lightSources, const int &depth = 0)
{
    if(indexOfClosestObject != -1 && depth <= DEPTH) // not checking depth for infinite mirror effect (not a lot of overhead)
    {
        std::shared_ptr<Object> sceneObject = sceneObjects[indexOfClosestObject];
        Vector3d normal = sceneObject->GetNormalAt(origin);
(screenshot of the debugger omitted)
EDIT: I have solved the issue and now shading works properly: https://github.com/MrCappuccino/Tracey/blob/testing/src/TriangleMesh.cpp#L35-L48
If you iterate over all your faces and return on the first hit, you may return an intersection with a face that lies behind another face, i.e. not the face you actually want to hit. You have to keep track of the length of the ray, i.e. the smallest intersection distance, and return the intersection for the shortest ray.
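A rough sketch of that, reusing the loop from the question (closest is my own variable; everything else mirrors the posted code and assumes Triangle::GetIntersection returns 0 on a miss):

// Track the nearest hit instead of returning the first one.
FPType TriangleMesh::GetIntersection(const Ray &ray)
{
    FPType closest = 0; // 0 means "no hit yet"
    for(auto &shape : shapes)
    {
        size_t index_offset = 0;
        for(size_t f = 0; f < shape.mesh.num_face_vertices.size(); ++f)
        {
            int fv = shape.mesh.num_face_vertices[f];
            tinyobj::index_t idx0 = shape.mesh.indices[index_offset + 0];
            tinyobj::index_t idx1 = shape.mesh.indices[index_offset + 1];
            tinyobj::index_t idx2 = shape.mesh.indices[index_offset + 2];
            Vec3d v0(attrib.vertices[3 * idx0.vertex_index + 0], attrib.vertices[3 * idx0.vertex_index + 1], attrib.vertices[3 * idx0.vertex_index + 2]);
            Vec3d v1(attrib.vertices[3 * idx1.vertex_index + 0], attrib.vertices[3 * idx1.vertex_index + 1], attrib.vertices[3 * idx1.vertex_index + 2]);
            Vec3d v2(attrib.vertices[3 * idx2.vertex_index + 0], attrib.vertices[3 * idx2.vertex_index + 1], attrib.vertices[3 * idx2.vertex_index + 2]);
            Triangle tri(v0, v1, v2);

            FPType t = tri.GetIntersection(ray);
            if(t > 0 && (closest == 0 || t < closest))
                closest = t; // keep only the nearest intersection
            index_offset += fv;
        }
    }
    return closest; // 0 if the ray missed every triangle
}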
I've implemented the Marching Cubes algorithm in a DirectX environment (to test and have fun). Upon completion, I noticed that the resulting model looks heavily distorted, as if the indices were off.
I've attempted to extract the indices, but I think the vertices are already ordered correctly by the lookup tables (examples at http://paulbourke.net/geometry/polygonise/). The current build uses a 15^3 volume.
Marching cubes iterates over the array as normal:
for (float iX = 0; iX < CellFieldSize.x; iX++){
    for (float iY = 0; iY < CellFieldSize.y; iY++){
        for (float iZ = 0; iZ < CellFieldSize.z; iZ++){
            MarchCubes(XMFLOAT3(iX*StepSize, iY*StepSize, iZ*StepSize), StepSize);
        }
    }
}
The MarchCubes function:
void MC::MarchCubes(XMFLOAT3 in_Position, float Scale){
    ...
    int Corner, Vertex, VertexTest, Edge, Triangle, FlagIndex, EdgeFlags;
    float Offset;
    XMFLOAT3 Color;
    float CubeValue[8];
    XMFLOAT3 EdgeVertex[12];
    XMFLOAT3 EdgeNorm[12];

    //Local copy of the field values at the cube's eight corners
    for (Vertex = 0; Vertex < 8; Vertex++) {
        CubeValue[Vertex] = (this->*fSample)(
            in_Position.x + VertexOffset[Vertex][0] * Scale,
            in_Position.y + VertexOffset[Vertex][1] * Scale,
            in_Position.z + VertexOffset[Vertex][2] * Scale
        );
    }
    FlagIndex = 0;
Intersection calculations:
    ...
    //Test vertices for intersection.
    for (VertexTest = 0; VertexTest < 8; VertexTest++){
        if (CubeValue[VertexTest] <= TargetValue)
            FlagIndex |= 1 << VertexTest;
    }

    //Find which edges are intersected by the surface.
    EdgeFlags = CubeEdgeFlags[FlagIndex];
    if (EdgeFlags == 0){
        return;
    }

    for (Edge = 0; Edge < 12; Edge++){
        if (EdgeFlags & (1 << Edge)) {
            Offset = GetOffset(CubeValue[EdgeConnection[Edge][0]], CubeValue[EdgeConnection[Edge][1]], TargetValue); // Get offset function definition. Needed!
            EdgeVertex[Edge].x = in_Position.x + VertexOffset[EdgeConnection[Edge][0]][0] + Offset * EdgeDirection[Edge][0] * Scale;
            EdgeVertex[Edge].y = in_Position.y + VertexOffset[EdgeConnection[Edge][0]][1] + Offset * EdgeDirection[Edge][1] * Scale;
            EdgeVertex[Edge].z = in_Position.z + VertexOffset[EdgeConnection[Edge][0]][2] + Offset * EdgeDirection[Edge][2] * Scale;
            GetNormal(EdgeNorm[Edge], EdgeVertex[Edge].x, EdgeVertex[Edge].y, EdgeVertex[Edge].z); //Need normal values
        }
    }
And the resulting triangles get pushed into a holding struct for DirectX.
for (Triangle = 0; Triangle < 5; Triangle++) {
    if (TriangleConnectionTable[FlagIndex][3 * Triangle] < 0) break;
    for (Corner = 0; Corner < 3; Corner++) {
        Vertex = TriangleConnectionTable[FlagIndex][3 * Triangle + Corner];
        GetColor(Color, EdgeVertex[Vertex], EdgeNorm[Vertex]);
        Data.VertexData.push_back(XMFLOAT3(EdgeVertex[Vertex].x, EdgeVertex[Vertex].y, EdgeVertex[Vertex].z));
        Data.NormalData.push_back(XMFLOAT3(EdgeNorm[Vertex].x, EdgeNorm[Vertex].y, EdgeNorm[Vertex].z));
        Data.ColorData.push_back(XMFLOAT4(Color.x, Color.y, Color.z, 1.0f));
    }
}
(This is the same ordering as the original GL implementation)
Turns out, I missed a parenthesis showing operator precedence.
EdgeVertex[Edge].x = in_Position.x + (VertexOffset[EdgeConnection[Edge][0]][0] + Offset * EdgeDirection[Edge][0]) * Scale;
EdgeVertex[Edge].y = in_Position.y + (VertexOffset[EdgeConnection[Edge][0]][1] + Offset * EdgeDirection[Edge][1]) * Scale;
EdgeVertex[Edge].z = in_Position.z + (VertexOffset[EdgeConnection[Edge][0]][2] + Offset * EdgeDirection[Edge][2]) * Scale;
Corrected, obtained Visine; resumed fun.
I am working on my ray tracer and I think I've made some significant progress. I am currently trying to map texture images onto objects, but they don't map quite right: they appear flipped on the sphere. Here is the resulting image from my current code:
Here is the relevant code:
-Image Class for opening image
class Image
{
public:
    Image() {}

    void read_bmp_file(char* filename)
    {
        int i;
        FILE* f = fopen(filename, "rb");
        unsigned char info[54];
        fread(info, sizeof(unsigned char), 54, f); // read the 54-byte header

        // extract image height and width from header
        width = *(int*)&info[18];
        height = *(int*)&info[22];

        int size = 3 * width * height;
        data = new unsigned char[size]; // allocate 3 bytes per pixel
        fread(data, sizeof(unsigned char), size, f); // read the rest of the data at once
        fclose(f);

        for(i = 0; i < size; i += 3)
        {
            unsigned char tmp = data[i];
            data[i] = data[i+2];
            data[i+2] = tmp;
        }
        /* Now data should contain the (R, G, B) values of the pixels. The color of pixel (i, j) is stored at
           data[j * 3 * width + 3 * i], data[j * 3 * width + 3 * i + 1] and data[j * 3 * width + 3 * i + 2].
           In the last part, the swap between every first and third pixel is done because Windows stores the
           color values as (B, G, R) triples, not (R, G, B). */
    }

public:
    int width;
    int height;
    unsigned char* data;
};
-Texture class
class Texture: public Material
{
public:
    Texture(char* filename): Material() {
        image_ptr = new Image;
        image_ptr->read_bmp_file(filename);
    }

    virtual ~Texture() {}

    virtual void set_mapping(Mapping* mapping)
    { mapping_ptr = mapping; }

    virtual Vec get_color(const ShadeRec& sr) {
        int row, col;
        if(mapping_ptr)
            mapping_ptr->get_texel_coordinates(sr.local_hit_point, image_ptr->width, image_ptr->height, row, col);
        return Vec(image_ptr->data[row * 3 * image_ptr->width + 3*col    ]/255.0,
                   image_ptr->data[row * 3 * image_ptr->width + 3*col + 1]/255.0,
                   image_ptr->data[row * 3 * image_ptr->width + 3*col + 2]/255.0);
    }

public:
    Image* image_ptr;
    Mapping* mapping_ptr;
};
-Mapping class
class SphericalMap: public Mapping
{
public:
    SphericalMap(): Mapping() {}
    virtual ~SphericalMap() {}

    virtual void get_texel_coordinates(const Vec& local_hit_point,
                                       const int hres,
                                       const int vres,
                                       int& row,
                                       int& column) const
    {
        float theta = acos(local_hit_point.y);
        float phi = atan2(local_hit_point.z, local_hit_point.x);
        if(phi < 0.0)
            phi += 2*PI;

        float u = phi/(2*PI);
        float v = (PI - theta)/PI;

        column = (int)((hres - 1) * u);
        row = (int)((vres - 1) * v);
    }
};
-Local hit points:
void Sphere::set_local_hit_point(ShadeRec& sr)
{
    sr.local_hit_point.x = sr.hit_point.x - c.x;
    sr.local_hit_point.y = (sr.hit_point.y - c.y)/R;
    sr.local_hit_point.z = sr.hit_point.z - c.z;
}
-This is how I constructed the sphere in main:
Texture* t1 = new Texture("Texture\\earthmap2.bmp");
SphericalMap* sm = new SphericalMap();
t1->set_mapping(sm);
t1->set_ka(0.55);
t1->set_ks(0.0);
Sphere *s1 = new Sphere(Vec(-60,0,50), 149);
s1->set_material(t1);
w.add_object(s1);
Sorry for the long code, but if I had any idea where the problem might be, I'd have posted only that part. Finally, this is how I call the get_color() function from main:
xShaded += sr.material_ptr->get_color(sr).x * in.x * max(0.0, sr.normal.dot(l)) +
sr.material_ptr->ks * in.x * pow((max(0.0,sr.normal.dot(h))),1);
yShaded += sr.material_ptr->get_color(sr).y * in.y * max(0.0, sr.normal.dot(l)) +
sr.material_ptr->ks * in.y * pow((max(0.0,sr.normal.dot(h))),1);
zShaded += sr.material_ptr->get_color(sr).z * in.z * max(0.0, sr.normal.dot(l)) +
sr.material_ptr->ks * in.z * pow((max(0.0,sr.normal.dot(h))),1);
Shot in the dark: if memory serves, BMPs are stored from the bottom up, while many other image formats are top-down. Could that possibly be the problem? Perhaps your file reader just needs to reverse the rows?
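If that is the cause, a minimal sketch of reversing the rows after reading could look like this (flip_rows is a hypothetical helper; it assumes 3 bytes per pixel and no BMP row padding, matching the Image class above):

#include <utility> // std::swap

// Flip the pixel rows in place so that row 0 becomes the top of the image.
void flip_rows(unsigned char* data, int width, int height)
{
    const int row_bytes = 3 * width;
    for (int top = 0, bottom = height - 1; top < bottom; ++top, --bottom) {
        unsigned char* a = data + top * row_bytes;
        unsigned char* b = data + bottom * row_bytes;
        for (int i = 0; i < row_bytes; ++i)
            std::swap(a[i], b[i]);
    }
}
// e.g. call flip_rows(data, width, height); at the end of read_bmp_file().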
Changing float phi = atan2(local_hit_point.z, local_hit_point.x); to float phi = atan2(local_hit_point.x, local_hit_point.z); solved the problem.
Suppose I have an explicit equation that represents an object's shape in OpenGL; how should I "plot" out the shape from that equation?
For example, I have this explicit equation (as evaluated in the code below):
x(u, v) = cos(u) * (4 + 3.8 cos(v))
y(u, v) = sin(u) * (4 + 3.8 cos(v))
z(u, v) = (cos(v) + sin(v) - 1) * (1 + sin(v)) * ln(1 - πv/10) + 7.5 sin(v)
Both u and v are real numbers.
I then tried to do this in OpenGL C++:
float maxParts = 20;
vector<float> point(3);

glBegin(GL_QUAD_STRIP);
for(int i=0; i<=maxParts; i++) {
    float u = ((float)i/maxParts)*2*M_PI;
    for(int j=-maxParts; j<=maxParts; j++) {
        float v = (j/maxParts) * M_PI;
        point[0] = cos(u) * (4.0+3.8*cos(v));
        point[1] = sin(u) * (4.0+3.8*cos(v));
        point[2] = (cos(v) + sin(v) - 1.0) * (1.0 + sin(v)) * log(1.0-M_PI*v/10.0) + 7.5 * sin(v);
        glVertex3f(point[0], point[1], point[2]);
    }
}
glEnd();
But it turns out really crappy: the above code gives a rough impression of the shape, but the polygons are not rendered properly. How should I iterate through the explicit equation for the x, y and z coordinates to construct the shape from it?
You're generally going in the right direction. However, you missed the crucial step: you have to split the patch into smaller quads (tessellate it). So you don't just iterate over the sampling points; you iterate over the patches and generate four sampling points for each patch.
Also, you need to supply the vertex normals. The vertex normals are given by the cross product of the partial derivative vectors ∂(x,y,z)/∂u and ∂(x,y,z)/∂v.
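The partial derivatives of this particular surface can be worked out analytically, but as a quick illustration, here is a sketch that estimates them numerically with central differences and takes their cross product (surface_point and surface_normal are hypothetical helper names; the formula is the one from the question):

#include <cmath>

// Evaluate the parametric surface at (u, v).
static void surface_point(float u, float v, float out[3]) {
    out[0] = cosf(u) * (4.0f + 3.8f * cosf(v));
    out[1] = sinf(u) * (4.0f + 3.8f * cosf(v));
    out[2] = (cosf(v) + sinf(v) - 1.0f) * (1.0f + sinf(v)) * logf(1.0f - (float)M_PI * v / 10.0f)
             + 7.5f * sinf(v);
}

// normal ≈ (∂P/∂u) x (∂P/∂v), estimated with central differences and normalized.
static void surface_normal(float u, float v, float n[3]) {
    const float h = 1e-3f;
    float pu0[3], pu1[3], pv0[3], pv1[3], du[3], dv[3];
    surface_point(u - h, v, pu0); surface_point(u + h, v, pu1);
    surface_point(u, v - h, pv0); surface_point(u, v + h, pv1);
    for (int i = 0; i < 3; ++i) {
        du[i] = (pu1[i] - pu0[i]) / (2 * h);
        dv[i] = (pv1[i] - pv0[i]) / (2 * h);
    }
    n[0] = du[1] * dv[2] - du[2] * dv[1]; // cross product
    n[1] = du[2] * dv[0] - du[0] * dv[2];
    n[2] = du[0] * dv[1] - du[1] * dv[0];
    float len = sqrtf(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]);
    if (len > 0) { n[0] /= len; n[1] /= len; n[2] /= len; }
}

Calling glNormal3f(n[0], n[1], n[2]) right before the matching glVertex3f then supplies the normal needed for fixed-function lighting.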
EDIT due to comment
Code example for emitting independent quads:
const int patches_x, patches_y;
const float patch_size_x, patch_size_y;

int px, py;
for(px = 0; px < patches_x; px++) for(py = 0; py < patches_y; py++) {
    const float U = px * patch_size_x;
    const float V = py * patch_size_y;

    {
        float u, v;
        u = U - patch_size_x/2.0;
        v = V - patch_size_y/2.0;
        emit_quad_vertex(u, v);
    }
    {
        float u, v;
        u = U + patch_size_x/2.0;
        v = V - patch_size_y/2.0;
        emit_quad_vertex(u, v);
    }
    {
        float u, v;
        u = U + patch_size_x/2.0;
        v = V + patch_size_y/2.0;
        emit_quad_vertex(u, v);
    }
    {
        float u, v;
        u = U - patch_size_x/2.0;
        v = V + patch_size_y/2.0;
        emit_quad_vertex(u, v);
    }
}