I'm using OpenCV v4.4.0 with gcc v10.1 on Ubuntu 18.04.5. This code snippet:
using namespace std;
using namespace cv;
using namespace cv::viz;
......
vector<Vec3f> points{{0, 0, 0}, {1, 0, 0}, {1, 1, 0}, {0, 1, 0}, {2, 0, 0}, {2, 1, 0}};
vector<int> faces{4, 0, 1, 2, 0, 4, 0, 2, 3, 0, 5, 1, 4, 5, 2, 1};
Viz3d window("Mesh");
WMesh mesh(points, faces);
window.setBackgroundColor(Color::gray());
mesh.setColor(Color::indigo());
mesh.setRenderingProperty(OPACITY, 0.4);
mesh.setRenderingProperty(SHADING, SHADING_FLAT);
mesh.setRenderingProperty(REPRESENTATION, REPRESENTATION_SURFACE);
window.showWidget("mesh", mesh);
window.spin();
creates and displays this planar mesh:
The square on the left is defined as 2 triangles and is shaded uniformly, but the square on the right, which is defined as a single quadrilateral face, is not shaded uniformly. Again, the mesh is planar. Why the non-uniformity?
It becomes even more non-uniform when I change shading from SHADING_FLAT to SHADING_GOURAUD:
Can someone explain what is going on here? I know that quadrilateral faces are converted to triangles, but why is the shading non-uniform?
EDIT
As noted by Матвей Вислоух in his answer below, I intended to use:
vector<int> faces{3, 0, 1, 2, 3, 0, 2, 3, 4, 1, 4, 5, 2};
which properly defines two triangular faces and one quadrilateral face. This solves the problem of artifacts in the left half, but they still remain in the right half:
vector<int> faces{3, 0, 1, 2, 3, 0, 2, 3, 4, 1, 4, 2, 5};
There is a specific rule for how OpenCV unpacks indices from a polygon entry:
if a polygon is stored as (5, 1, 4, 5, 2, 0), it consists of 3 triangles: (1, 4, 5), (4, 5, 2), (5, 2, 0).
OLD:
vector<int> faces{4, 0, 1, 2, 0, 4, 0, 2, 3, 0, 5, 1, 4, 5, 2, 1};
This means you draw 3 polygons: 2 quads and 1 pentagon.
But from the indices I guess that you want to draw 2 triangles and one quad, so try this:
vector<int> faces{3, 0, 1, 2, 3, 0, 2, 3, 4, 1, 4, 5, 2};
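A minimal sketch of that unpacking rule, as a hypothetical helper (printFaces is not part of the viz API), which walks the size-prefixed face array and prints each polygon:

#include <cstdio>
#include <vector>

// Each polygon is stored as (n, i0, i1, ..., i_{n-1}).
void printFaces(const std::vector<int>& faces)
{
    for (std::size_t p = 0; p < faces.size(); )
    {
        int n = faces[p++];                  // vertex-count prefix
        std::printf("polygon with %d vertices:", n);
        for (int k = 0; k < n; ++k)
            std::printf(" %d", faces[p++]);  // the n vertex indices
        std::printf("\n");
    }
}

Running it on {3, 0, 1, 2, 3, 0, 2, 3, 4, 1, 4, 5, 2} prints two triangles and one quad; running it on the OLD array prints the two quads and the pentagon described above.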
Ok so I want to do this operation in Eigen:
float StartMatrix[7][7] = { { 1, 4, 6, 9, 3, 5, 8 }, { 2, 5, 3, 7, 4, 8, 2 }, { 3, 6, 6, 7, 0, 2, 4 },
{ 2, 4, 3, 7, 4, 8, 2 }, { 2, 3, 3, 11, 4, 8, 1 }, { 2, 12, 3, 7, 0, 8, 2 },
{ 2, 2, 3, 4, 4, 11, 2 } };
float TotalMatrix[7] = { 22, 15, 13, 26, 27, 33, 19 };
float CoMatrix[7][7] = { { 0, 0, 0, 0, 0, 0, 0 }, { 0, 0, 0, 0, 0, 0, 0 }, { 0, 0, 0, 0, 0, 0, 0 },
{ 0, 0, 0, 0, 0, 0, 0 }, { 0, 0, 0, 0, 0, 0, 0 }, { 0, 0, 0, 0, 0, 0, 0 },
{ 0, 0, 0, 0, 0, 0, 0 } };
for (int row = 0; row < 7; row++) {
for (int col = 0; col < 7; col++) {
CoMatrix[row][col] = StartMatrix[row][col] / TotalMatrix[col];
}
}
That is, divide each column of the matrix by the corresponding entry of TotalMatrix. And then I want to subtract the identity matrix from the CoMatrix and get the inverse of that in Eigen (just to give an idea of why I want to do this).
Problem is, how do I either perform this operation with Eigen, or somehow get the CoMatrix array into an Eigen matrix so I can do stuff with it (like getting the inverse, etc.)?
Thanks!
Your code in Eigen would look something like this (after importing the Eigen namespace with using namespace Eigen;):
MatrixXd StartMatrix(7, 7);
StartMatrix <<
    1,  4, 6,  9, 3,  5, 8,
    2,  5, 3,  7, 4,  8, 2,
    3,  6, 6,  7, 0,  2, 4,
    2,  4, 3,  7, 4,  8, 2,
    2,  3, 3, 11, 4,  8, 1,
    2, 12, 3,  7, 0,  8, 2,
    2,  2, 3,  4, 4, 11, 2;
VectorXd TotalMatrix(7);
TotalMatrix << 22, 15, 13, 26, 27, 33, 19;
MatrixXd CoMatrix = MatrixXd::Zero(StartMatrix.rows(), StartMatrix.cols());
CoMatrix = StartMatrix.array() / TotalMatrix.transpose().replicate(StartMatrix.rows(), 1).array(); // divides entry (i,j) by TotalMatrix(j), matching the loop above
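Equivalently, Eigen's broadcasting can express the per-column division a bit more directly (just an alternative way to write the same thing):

CoMatrix = StartMatrix.array().rowwise() / TotalMatrix.transpose().array();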
You can continue by subtracting the identity matrix with
CoMatrix -= MatrixXd::Identity(CoMatrix.rows(), CoMatrix.cols());
or combine it with the previous expression as:
CoMatrix = (StartMatrix.array() / TotalMatrix.transpose().replicate(StartMatrix.rows(), 1).array()).matrix()
- MatrixXd::Identity(CoMatrix.rows(), CoMatrix.cols());
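To finish the computation sketched in the question, the inverse can then be taken with Eigen's inverse() (the variable name Result is just illustrative):

MatrixXd Result = CoMatrix.inverse(); // inverse of the (CoMatrix - Identity) computed above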
I need to check whether a letter (in the English and Russian languages) is alphabetic. The file is assumed to be UTF-8 encoded by default.
I found that the best solution is to work with UCS codes.
The way to calculate the UCS code of a 2-byte encoded letter is:
#include <stdio.h>
#include <stdlib.h>
char utf8len[256] = {
// len = utf8len[c] & 0x7 cont = utf8len[c] & 0x8
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, // 00 - 0f
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, // 10 - 1f
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, // 20 - 2f
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, // 30 - 3f
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, // 40 - 4f
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, // 50 - 5f
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, // 60 - 6f
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, // 70 - 7f
8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, // 80 - 8f
8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, // 90 - 9f
8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, // a0 - af
8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, // b0 - bf
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, // c0 - cf
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, // d0 - df
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, // e0 - ef
4, 4, 4, 4, 4, 4, 4, 4, // f0 - f7
5, 5, 5, 5, // f8, f9, fa, fb
6, 6, // fc, fd
0, 0 // fe, ff
};
#define UTF8LEN(c) (utf8len[(unsigned char)(c)] & 0x7)
#define UTF8CONT(c) (utf8len[(unsigned char)(c)] & 0x8)
int main (int argc, char *argv[])
{
const char *s = "Б№1АГД"; // string containing Cyrillic symbols
while (*s) {
int ucode;
printf ("[%s] %d\n", s, UTF8LEN(*s));
if ((UTF8LEN(*s) == 2) && UTF8CONT(s[1])) {
ucode = ((*s & 0x1f) << 6) | (s[1] & 0x3f); //! HERE I GET UCS CODE
printf ("ucode = 0x%x\n", ucode);
s++;
}
s++;
}
}
It's half of the solution I'm looking for. This code allows me to work with Cyrillic symbols only (as they're encoded with 2 bytes in UTF-8). The problem is, I need to work with the Latin alphabet as well.
So what should I do to get the UCS code for a 1-byte symbol (in my case, with UTF8LEN(c) == 1)?
Upd: Probably the solution is:
ucode = *s
Will this work?
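For what it's worth, a minimal sketch of that 1-byte branch, slotted into the loop above: code points U+0000 through U+007F are encoded in UTF-8 as the identical single byte, so the byte value is the code point.

if (UTF8LEN(*s) == 1) {
    ucode = (unsigned char)*s; /* ASCII range: the byte value is the code point */
    printf ("ucode = 0x%x\n", ucode);
}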
I'm looking for a way to draw a cube with a unique color per side. Currently, I'm using the following for the vertex and index data with w = width, h = height, and d = depth:
GLfloat vdata[8][3] = {
{-w, -h, -d}, {-w, h, -d},
{w, h, -d}, {w, -h, -d},
{-w, -h, d}, {w, -h, d},
{-w, h, d}, {w, h, d}
};
GLint indices[6][4] = {
{3, 2, 1, 0},
{3, 5, 4, 0},
{3, 5, 7, 2},
{0, 4, 6, 1},
{1, 2, 7, 6},
{5, 4, 6, 7}
};
I'm somewhat certain that I could just draw four vertices per face to achieve what I'm after, but I'd rather not take the performance hit of drawing all those extra vertices. Would a mapped texture make more sense?
Using 4 vertices per face is the correct way to do this. Why do you think that there will be a relevant "performance hit"? Texturing would very likely introduce a much larger performance loss.
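To make that concrete, here is a minimal sketch of the 4-vertices-per-face layout, built from the question's vdata and indices arrays (faceColors and all other new names are illustrative only):

GLfloat faceColors[6][3] = {
    {1, 0, 0}, {0, 1, 0}, {0, 0, 1},
    {1, 1, 0}, {1, 0, 1}, {0, 1, 1}
};
GLfloat vdata24[24][3]; // positions, 4 per face (corners duplicated per face)
GLfloat cdata24[24][3]; // one flat color per face, repeated for its 4 corners
for (int f = 0; f < 6; ++f) {
    for (int v = 0; v < 4; ++v) {
        for (int k = 0; k < 3; ++k) {
            vdata24[4 * f + v][k] = vdata[indices[f][v]][k];
            cdata24[4 * f + v][k] = faceColors[f][k];
        }
    }
}
// With legacy client-side arrays (after enabling GL_VERTEX_ARRAY and
// GL_COLOR_ARRAY via glEnableClientState), drawing could look like:
// glVertexPointer(3, GL_FLOAT, 0, vdata24);
// glColorPointer(3, GL_FLOAT, 0, cdata24);
// glDrawArrays(GL_QUADS, 0, 24);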
I want to render a 3D box by reading its geometry from a VRML file. The indices in the VRML file are given as:
coordIndex
[
0, 1, 2, -1,
2, 3, 0, -1,
3, 2, 4, -1,
4, 5, 3, -1,
5, 4, 7, -1,
7, 6, 5, -1,
6, 7, 1, -1,
1, 0, 6, -1,
6, 0, 3, -1,
3, 5, 6, -1,
1, 7, 2, -1,
7, 4, 2, -1,
]
I want to call the glDrawElements function to render the box, but I am not sure about the "count" and "indices" parameters. Should count be 12, indicating the number of faces, or 36, the total number of vertex indices? Also, please tell me about the indices array. Should it be like this:
GLint indices[] = {0,1,2,2,3,0,3,2,4,.....,7,4,2};
OR
GLint indices[] = {0,1,2,-1,2,3,0,-1,....,7,4,2};
According to the man page of glDrawElements:
When glDrawElements is called, it uses count sequential elements from an enabled array
So it would be 36, the total number of indices in your index buffer.
For the indices array you would have to choose the first version. The indices have to be >= 0, and (when drawing with GL_TRIANGLES) a triangle will be drawn for every 3 consecutive indices.
The indices array should be your first example (always non-negative numbers referencing vertex indices).
The glDrawElements count should be the length of your index array; for a plain C array that is something like sizeof(indices) / sizeof(indices[0]).
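Putting both answers together, a minimal sketch of the call, assuming client-side index arrays (no element array buffer bound) and GLuint indices to match GL_UNSIGNED_INT:

GLuint indices[] = {
    0, 1, 2,  2, 3, 0,  3, 2, 4,  4, 5, 3,
    5, 4, 7,  7, 6, 5,  6, 7, 1,  1, 0, 6,
    6, 0, 3,  3, 5, 6,  1, 7, 2,  7, 4, 2
};
// count = 36: 12 triangles * 3 indices each
glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_INT, indices);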
I have been staring at this code for a few hours, tried walkthroughs and debugging with autos and breakpoints, and there is no solution so far. Maybe someone's fresh look would help me ;).
#include <iostream>
using namespace std;
int matrix[9][9] = {{0, 0, 6, 0, 0, 0, 1, 0, 5},
{0, 4, 0, 7, 0, 6, 0, 3, 9},
{2, 0, 0, 9, 3, 0, 6, 0, 0},
{7, 0, 0, 1, 8, 0, 5, 0, 4},
{0, 0, 4, 0, 6, 0, 9, 0, 0},
{1, 0, 9, 0, 5, 2, 0, 0, 3},
{0, 0, 1, 0, 9, 3, 0, 0, 7},
{6, 7, 0, 5, 0, 8, 0, 9, 0},
{9, 0, 8, 0, 0, 0, 4, 0, 0}};
bool check(int column ,int row,int checkedValue)
{
//column check
for(int i=0; i<9; i++)
{
if(i==row)continue;
if(checkedValue==matrix[column][i]) return false;
}
//row check
for(int i=0; i<9; i++)
{
if(i==column) continue;
if(checkedValue==matrix[i][row]) return false;
}
return true;
}
int main()
{
cout<<check(4,0,4); //Why does it output 0? There is no "4" in the 5th column and the 1st row.
system("pause");
return 0;
}
The function check(column, row, value) was designed to return 0 when the number occurs at least once in the given column or row of the "matrix" two-dimensional table. This program is a chunk of a sudoku solver.
You mixed the indices up in the if statements. They should be:
if(checkedValue==matrix[i][column]) return false; // not matrix[column][i]
and
if(checkedValue==matrix[row][i]) return false; // not matrix[i][row]
The reason is that the first dimension is the row. You can check this by printing matrix[2][0].
For your matrix, you will get 2 (and not 6).
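For completeness, a sketch of check with both fixes applied, keeping the original structure:

bool check(int column, int row, int checkedValue)
{
    // column check: scan every row of this column
    for (int i = 0; i < 9; i++)
    {
        if (i == row) continue;
        if (checkedValue == matrix[i][column]) return false;
    }
    // row check: scan every column of this row
    for (int i = 0; i < 9; i++)
    {
        if (i == column) continue;
        if (checkedValue == matrix[row][i]) return false;
    }
    return true;
}

With this version, check(4, 0, 4) returns true (printed as 1), since there is no 4 in the 5th column or the 1st row.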