Calculating Normals for a Surface in OpenGL - C++

I am trying to add shading/lighting to my terrain generator, but for some reason my output still looks blocky even after I calculate surface normals.
set<pair<int,int> >::const_iterator it;
for ( it = mRandomPoints.begin(); it != mRandomPoints.end(); ++it )
{
    for ( int i = 0; i < GetXSize(); ++i )
    {
        for ( int j = 0; j < GetZSize(); ++j )
        {
            float pd = sqrt(pow((*it).first - i,2) + pow((*it).second - j,2))*2 / mCircleSize;
            if(fabs(pd) <= 1.0)
            {
                mMap[i][j][2] += mCircleHeight/2 + cos(pd*3.14)*mCircleHeight/2;
            }
        }
    }
}
/*
The three points being considered to compute normals are
(i,j)
(i+1,j)
(i, j+1)
*/
for ( int i = 0; i < GetXSize() - 1; ++i )
{
    for ( int j = 0; j < GetZSize() - 1; ++j )
    {
        float b[] = {mMap[i+1][j][0]-mMap[i][j][0], mMap[i+1][j][1]-mMap[i][j][1], mMap[i+1][j][2]-mMap[i][j][2] };
        float c[] = {mMap[i][j+1][0]-mMap[i][j][0], mMap[i][j+1][1]-mMap[i][j][1], mMap[i][j+1][2]-mMap[i][j][2] };
        float a[] = {b[1]*c[2] - b[2]*c[1], b[2]*c[0]-b[0]*c[2], b[0]*c[1]-b[1]*c[0]};
        float Vnorm = sqrt(pow(a[0],2) + pow(a[1],2) + pow(a[2],2));
        mNormalMap[i][j][0] = a[0]/Vnorm;
        mNormalMap[i][j][1] = a[1]/Vnorm;
        mNormalMap[i][j][2] = a[2]/Vnorm;
    }
}
Then when drawing this I use the following:
float*** normal = map->GetNormalMap();
for (int i = 0; i < map->GetXSize() - 1; ++i)
{
    glBegin(GL_TRIANGLE_STRIP);
    for (int j = 0; j < map->GetZSize() - 1; ++j)
    {
        glNormal3fv(normal[i][j]);
        float color = 1 - (terrain[i][j][2]/height);
        glColor3f(color, color, color);
        glVertex3f(terrain[i][j][0],     terrain[i][j][2],     terrain[i][j][1]);
        glVertex3f(terrain[i+1][j][0],   terrain[i+1][j][2],   terrain[i+1][j][1]);
        glVertex3f(terrain[i][j+1][0],   terrain[i][j+1][2],   terrain[i][j+1][1]);
        glVertex3f(terrain[i+1][j+1][0], terrain[i+1][j+1][2], terrain[i+1][j+1][1]);
    }
    glEnd();
}
EDIT: Initialization Code
glFrontFace(GL_CCW);
glCullFace(GL_FRONT); // glCullFace(GL_BACK);
glEnable(GL_CULL_FACE);
glEnable(GL_DEPTH_TEST);
glShadeModel(GL_SMOOTH);
glEnable(GL_POLYGON_SMOOTH);
glMatrixMode(GL_PROJECTION);
Am I calculating the normals properly?

In addition to what Bovinedragon suggested, namely glShadeModel(GL_SMOOTH);, you should probably use per-vertex normals. This means that each glVertex3f would be preceded by a glNormal3fv call, which would define the average normal of all adjacent faces. To obtain it, you can simply add up these neighbouring normal vectors and normalize the result.
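To make that concrete, here is a minimal sketch of the averaging pass, assuming mNormalMap holds one face normal per quad exactly as computed in the question; the mVertexNormals array is hypothetical and would need to be allocated with the same layout as mNormalMap:
// Hypothetical sketch: average the face normals of the (up to) four quads
// touching each vertex, then renormalize the sum.
for ( int i = 0; i < GetXSize(); ++i )
{
    for ( int j = 0; j < GetZSize(); ++j )
    {
        float sum[3] = { 0.0f, 0.0f, 0.0f };
        // Neighbouring faces of vertex (i,j): (i-1,j-1), (i-1,j), (i,j-1), (i,j)
        for ( int di = -1; di <= 0; ++di )
        {
            for ( int dj = -1; dj <= 0; ++dj )
            {
                int fi = i + di, fj = j + dj;
                if ( fi < 0 || fj < 0 || fi >= GetXSize() - 1 || fj >= GetZSize() - 1 )
                    continue; // no face on this side (map border)
                sum[0] += mNormalMap[fi][fj][0];
                sum[1] += mNormalMap[fi][fj][1];
                sum[2] += mNormalMap[fi][fj][2];
            }
        }
        float len = sqrt(sum[0]*sum[0] + sum[1]*sum[1] + sum[2]*sum[2]);
        if ( len > 0.0f )
        {
            mVertexNormals[i][j][0] = sum[0] / len; // hypothetical per-vertex
            mVertexNormals[i][j][1] = sum[1] / len; // normal storage, same
            mVertexNormals[i][j][2] = sum[2] / len; // shape as mNormalMap
        }
    }
}
In the drawing loop you would then call glNormal3fv with the vertex normal of the matching (i, j), (i+1, j), (i, j+1) or (i+1, j+1) index right before each glVertex3f call, rather than once per quad.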
Reference this question: Techniques to smooth face edges in OpenGL

Have you set glShadeModel to GL_SMOOTH?
See: http://www.khronos.org/opengles/documentation/opengles1_0/html/glShadeModel.html
This setting also affects vertex colors in addition to lighting. You seem to say it was blocky even before lighting, which makes me think this is the issue.

Related

OpenCV: lab color quantization to predefined colors

I am trying to reduce my image's colors to a set of predefined colors using the following function:
void quantize_img(cv::Mat &lab_img, std::vector<cv::Scalar> &lab_colors) {
    float min_dist, dist;
    int min_idx;
    for (int i = 0; i < lab_img.rows*lab_img.cols * 3; i += lab_img.cols * 3) {
        for (int j = 0; j < lab_img.cols * 3; j += 3) {
            min_dist = FLT_MAX;
            uchar &l = *(lab_img.data + i + j + 0);
            uchar &a = *(lab_img.data + i + j + 1);
            uchar &b = *(lab_img.data + i + j + 2);
            for (int k = 0; k < lab_colors.size(); k++) {
                double &lc = lab_colors[k](0);
                double &ac = lab_colors[k](1);
                double &bc = lab_colors[k](2);
                dist = (l - lc)*(l - lc) + (a - ac)*(a - ac) + (b - bc)*(b - bc);
                if (min_dist > dist) {
                    min_dist = dist;
                    min_idx = k;
                }
            }
            l = lab_colors[min_idx](0);
            a = lab_colors[min_idx](1);
            b = lab_colors[min_idx](2);
        }
    }
}
However, it does not seem to work properly! For example, look at the output for the following input:
if (!(src = imread("im0.png")).data)
    return -1;
cvtColor(src, lab, COLOR_BGR2Lab);
std::vector<cv::Scalar> lab_color_plate_({
    Scalar(100,   0,    0), //white
    Scalar(50,    0,    0), //gray
    Scalar(0,     0,    0), //black
    Scalar(50,  127,  127), //red
    Scalar(50, -128,  127), //green
    Scalar(50,  127, -128), //violet
    Scalar(50, -128, -128), //blue
    Scalar(68,   46,   75), //orange
    Scalar(100, -16,   93)  //yellow
});
//convert from conventional Lab to OpenCV Lab
for (int k = 0; k < lab_color_plate_.size(); k++) {
    lab_color_plate_[k](0) *= 255.0 / 100.0;
    lab_color_plate_[k](1) += 128;
    lab_color_plate_[k](2) += 128;
}
quantize_img(lab, lab_color_plate_);
cvtColor(lab, lab, CV_Lab2BGR);
imwrite("im0_lab.png", lab);
Input image:
Output image:
Can anyone explain where the problem is?
After checking your algorithm, I noticed that it is 100% correct and the problem is your color space. Let's take one of the colors that is changed "wrongly", like the green from the trees.
Using a color picker tool in GIMP, you can see that at least one of the greens used is RGB (111, 139, 80). When this is converted to Lab, you get (54.4, -20.7, 28.3). The distance to green is (by your formula) 21274.34, while the distance to gray is 1248.74, so it will choose gray over green, even though it is a green color.
A lot of Lab values can represent a green. You can test the color ranges on this webpage. I would suggest you use HSV or HSL and compare only the H value, which is the hue. The other channels change only the tone of the green, while a small range of hue determines that it is green. This will probably give you more accurate results, as sketched below.
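To illustrate the idea (not your exact pipeline), here is a minimal sketch that quantizes by hue only in HSV space; the names quantize_by_hue and hue_palette are made up, and the palette is assumed to hold reference hues in OpenCV's 0..179 range:
// Hypothetical sketch: snap each pixel's hue to the nearest reference hue,
// leaving saturation and value untouched.
void quantize_by_hue(cv::Mat &bgr_img, const std::vector<uchar> &hue_palette) {
    cv::Mat hsv;
    cv::cvtColor(bgr_img, hsv, cv::COLOR_BGR2HSV);
    for (int i = 0; i < hsv.rows; ++i) {
        for (int j = 0; j < hsv.cols; ++j) {
            cv::Vec3b &px = hsv.at<cv::Vec3b>(i, j);
            int best = 0, best_dist = 256;
            for (size_t k = 0; k < hue_palette.size(); ++k) {
                // hue is circular in OpenCV: 0 and 179 are neighbours
                int d = px[0] - hue_palette[k];
                if (d < 0) d = -d;
                if (180 - d < d) d = 180 - d;
                if (d < best_dist) { best_dist = d; best = (int)k; }
            }
            px[0] = hue_palette[best];
        }
    }
    cv::cvtColor(hsv, bgr_img, cv::COLOR_HSV2BGR);
}
You could also keep your Lab distance but weight the a and b channels more heavily than L, so that lightness differences stop dominating the match.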
As a suggestion to improve your code, use Vec3b and the cv::Mat accessors like this:
for (int i = 0; i < lab_img.rows; ++i) {
    for (int j = 0; j < lab_img.cols; ++j) {
        Vec3b pixel = lab_img.at<Vec3b>(i, j);
    }
}
This way the code is more readable, and some checks are done in debug mode.
The other way would be to do a single loop, since you don't care about the indices:
auto currentData = reinterpret_cast<Vec3b*>(lab_img.data);
for (size_t i = 0; i < lab_img.rows * lab_img.cols; i++)
{
    auto& pixel = currentData[i];
}
This way is also better. This last part is just a suggestion; there is nothing wrong with your current code, it is just harder to read and understand for an outside viewer.

GLSL texture access synchronization, OpenCL vs GLSL image processing

This might be a trivial question.
I am curious how GLSL synchronizes access to texture data in a fragment shader.
Say I have code like the following in a fragment shader:
void main() {
    vec3 texCoord = in_texCoord;
    vec4 out_voxel_intensity = texture(image, vec3(texCoord.x, texCoord.y, texCoord.z));
    out_voxel = float(out_voxel_intensity);
    if(out_voxel <= threshold)
    {
        out_voxel = 0.0;
        return;
    }
    for(int i = -int(kernalSize); i <= int(kernalSize); ++i)
        for(int j = -int(kernalSize); j <= int(kernalSize); ++j)
            for(int k = -int(kernalSize); k <= int(kernalSize); ++k)
            {
                float x_o = texCoord.x + i / (imageSize.x);
                float y_o = texCoord.y + j / (imageSize.y);
                float z_o = texCoord.z + k / (imageSize.z);
                if(x_o < 0.0 || x_o > 1.0
                    || y_o < 0. || y_o > 1.0
                    || z_o < 0. || z_o > 1.0)
                    continue;
                if(float(texture(image, vec3(x_o, y_o, z_o))) <= threshold)
                {
                    out_voxel = 0.0;
                    return;
                }
            }
}
Since the code above accesses not only the current texture coordinate but also the values around it within the specified kernel size, how does GLSL ensure that no other parallel invocation accesses the same texture coordinates?
With respect to that question, does the code above perform efficiently in a fragment shader given that it accesses neighboring texture data, or would using OpenCL be better?
Thanks

Building a UV Sphere in C++

I'm trying to make a UV sphere in C++ using Qt Creator; I want to build the sphere without using OpenGL commands. I'm trying to add the vertices to lObject and then add the normals and triangles. The sphere will have a radius of 1. The first problem is that it doesn't render a sphere when drawn, so maybe I'm not adding the right vertices, or maybe I'm not adding the triangles correctly. Any help on what I'm doing wrong would be great.
Here's what I've tried:
NodeObject* ObjectFactory::buildSphere(int slices, int stacks)
{
    // Allocate a new node object
    NodeObject* lObject = new NodeObject();
    for(int i=0; i<stacks; i++)
    {
        double lnum1 = 360.0/stacks;
        double lTheta = ((double)i)*(lnum1*(M_PI/180.0));
        double lNextTheta = ((double)(i+1))*lnum1*(M_PI/180.0);
        for(int j=0; j<slices; j++)
        {
            double lnum2 = 180.0/slices;
            double lPhi = ((double)i)*(lnum2*(M_PI/180.0));
            double lNextPhi = ((double)(i+1))*lnum1*(M_PI/180.0);
            lObject->addVertex(0.0, 1.0, 0.0); //Top
            lObject->addVertex(sin(lTheta)*cos(lPhi), sin(lTheta)*sin(lPhi), cos(lTheta));
            lObject->addVertex(sin(lNextTheta)*cos(lNextPhi), sin(lNextTheta)*sin(lNextPhi), cos(lNextTheta));
            lObject->addVertex(sin(lTheta)*cos(lPhi), -(sin(lTheta)*sin(lPhi)), cos(lTheta));
            lObject->addVertex(sin(lNextTheta)*cos(lNextPhi), -(sin(lNextTheta)*sin(lNextPhi)), cos(lNextTheta));
            lObject->addVertex(0.0, -1.0, 0.0); //Bottom
            lObject->addNormal(0.0, 1.0, 0.0);
            lObject->addNormal(0.0, -1.0, 0.0);
            lObject->addNormal(sin(lTheta)*cos(lPhi), sin(lTheta)*sin(lPhi), cos(lTheta));
            for(int k=0; k<pSlices*6; k++)
            {
                if(i==0) { lObject->addTriangle(0,1,2,0,0,0); }
                else if(i+1 == stacks) { lObject->addTriangle(2,0,1,0,0,0); }
                else
                {
                    lObject->addTriangle(k, k+1, k+2, k, k+1, k+2);
                }
            }
        }
    }
    return lObject;
}
In your third for loop, what is the value of pSlices? Also, why are you adding the top and bottom vertices for every stack?
As better practice, generate the vertices first, then do the rest.
You can use a simple data structure to hold the data, such as the one below, and generate one layer of vertices in the inner loop:
QVector3D sphereVertices[stacks][slices];
For the first layer, fill it with (0.0, 1.0, 0.0);
For the last layer, fill it with (0.0, -1.0, 0.0);
Then iterate over the vertices to calculate the normals and create the triangles in CCW order:
for ( int i = 0; i < stacks - 1; i++){  //-1 because we use the next stack to create the face
    for ( int j = 0; j < slices - 1; j++){  //we also use the next slice
        //Add these vertex indices
        //First triangle indices
        //i*slices + j, (i+1)*slices + j, (i+1)*slices + j + 1
        //Second triangle indices
        //i*slices + j, (i+1)*slices + j + 1, i*slices + j + 1
        //Furthermore you can calculate the triangle normal from these vertices
        //https://www.opengl.org/wiki/Calculating_a_Surface_Normal
    }
}
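If it helps, here is a minimal sketch of the vertex-generation pass under those assumptions (one vertex per stack/slice pair, with the poles handled by the first and last stack, and stacks >= 2); addVertex and addNormal are taken from your NodeObject interface, everything else is illustrative:
// Hypothetical sketch: stacks*slices vertices on a unit sphere.
// theta runs from 0 (north pole) to pi (south pole) across the stacks,
// phi runs from 0 to 2*pi around the axis across the slices.
for (int i = 0; i < stacks; ++i)
{
    double theta = M_PI * (double)i / (double)(stacks - 1);
    for (int j = 0; j < slices; ++j)
    {
        double phi = 2.0 * M_PI * (double)j / (double)slices;
        double x = sin(theta) * cos(phi);
        double y = cos(theta);              // poles sit on the y axis
        double z = sin(theta) * sin(phi);
        lObject->addVertex(x, y, z);
        lObject->addNormal(x, y, z);        // unit sphere: normal == position
    }
}
With the vertices laid out this way, the index pattern in the comments above (i*slices + j, etc.) can be used directly to build the two triangles per quad.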

Why are my primitives see-through?

I am rendering a 3D surface in OpenGL by drawing a bunch of triangles. Some of my primitives are see-through. I don't simply mean that the color behind them is blended in; I mean that I can see completely through them. I have no idea why I am able to see through these primitives, and I would like that not to be the case (unless I specify alpha blending, which I have not).
Unfortunately I cannot link any code (there are ~1800 lines right now and I don't know where the error would be!), but any help would be great.
I hope I have given enough information; if not, please feel free to ask me to clarify!
EDIT: more info ...
I call plotPrim(ix, iy, iz), which uses marching cubes to plot a triangle (or a few) through the current cube of a rectangular grid.
myInit() is ...
void myInit()
{
    // initialize vectors
    update_vectors();
    // set the clear color to black
    glClearColor(0.0, 0.0, 0.0, 0.0);
    glEnable(GL_BLEND | GL_DEPTH_TEST);
}
plotMesh() is where I do the work of going through each cube and plotting the primitives:
void plotMesh()
{
    if(plot_prop)
    {
        // do some stuff
    }
    else
    {
        glBegin(GL_TRIANGLES);
        for(int ix = 0; ix < snx-1; ix++)
        {
            //x = surf_x[ix];
            for(int iy = 0; iy < sny-1; iy++)
            {
                //y = surf_y[iy];
                for(int iz = 0; iz < snz-1; iz++)
                {
                    //z = surf_z[iz];
                    // front face
                    a = sv(ix+0, iy+0, iz+0);
                    b = sv(ix+0, iy+1, iz+0);
                    g = sv(ix+0, iy+0, iz+1);
                    d = sv(ix+0, iy+1, iz+1);
                    // back face
                    al = sv(ix+1, iy+0, iz+0);
                    be = sv(ix+1, iy+1, iz+0);
                    ga = sv(ix+1, iy+0, iz+1);
                    de = sv(ix+1, iy+1, iz+1);
                    // test to see if a primitive needs to be plotted
                    plotPrim(ix, iy, iz);
                }
            }
        }
        glEnd();
    }
}
One example of a primitive being plotted in plotPrim() is:
if(val>a && val<g && val<b && val<al || val<a && val>g && val>b && val>al) // "a" corner
{
    tx = (val-a)/(al-a);
    ty = (val-a)/(b-a);
    tz = (val-a)/(g-a);
    x1 = surf_x[ix] + tx*surf.dx;
    y1 = surf_y[iy];
    z1 = surf_z[iz];
    x2 = surf_x[ix];
    y2 = surf_y[iy] + ty*surf.dy;
    z2 = surf_z[iz];
    x3 = surf_x[ix];
    y3 = surf_y[iy];
    z3 = surf_z[iz] + tz*surf.dz;
    getColor( (1.0-tx)*sv(ix,iy,iz) + tx*sv(ix+1,iy,iz) );
    glVertex3f(x1,y1,z1);
    getColor( (1.0-ty)*sv(ix,iy,iz) + ty*sv(ix,iy+1,iz) );
    glVertex3f(x2,y2,z2);
    getColor( (1.0-tz)*sv(ix,iy,iz) + tz*sv(ix,iy,iz+1) );
    glVertex3f(x3,y3,z3);
}
glEnable(GL_BLEND | GL_DEPTH_TEST);
is wrong, as glEnable only takes a single capability to enable, not a bitmask. You might have more errors, but you want to change the above to:
glEnable(GL_BLEND);
glEnable(GL_DEPTH_TEST);
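If you do want blending as well, note that glEnable(GL_BLEND) on its own is not enough, and the window must actually have a depth buffer for the depth test to do anything. Here is a minimal sketch, assuming GLUT is used to create the window (adjust for whatever windowing toolkit you actually use):
// Hypothetical sketch: request a depth buffer and set up blending properly.
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH);
// ... create the window, then in myInit():
glEnable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// and clear the depth buffer along with the color buffer every frame:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);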

My shadow volumes don't move with my light

I'm currently trying to implement shadow volumes in my OpenGL world. Right now I'm just focusing on getting the volumes calculated correctly.
Right now I have a teapot that's rendered, and I can get it to generate some shadow volumes; however, they always point directly to the left of the teapot. No matter where I move my light (and I can tell that I'm actually moving the light because the teapot is lit with diffuse lighting), the shadow volumes always go straight left.
The method I'm using to create the volumes is:
1. Find silhouette edges by looking at every triangle in the object. If the triangle isn't lit (tested with the dot product), then skip it. If it is lit, then check all of its edges. If the edge is currently in the list of silhouette edges, remove it; otherwise add it.
2. Once I have all the silhouette edges, I go through each edge creating a quad with one vertex at each vertex of the edge, and the other two just extended away from the light.
Here is my code that does it all:
void getSilhoueteEdges(Model model, vector<Edge> &edges, Vector3f lightPos) {
    //for every triangle
    //  if triangle is not facing the light then skip
    //  for every edge
    //    if edge is already in the list
    //      remove
    //    else
    //      add
    vector<Face> faces = model.faces;
    //for every triangle
    for ( unsigned int i = 0; i < faces.size(); i++ ) {
        Face currentFace = faces.at(i);
        //if triangle is not facing the light
        //for this i'll just use the normal of any vertex, it should be the same for all of them
        Vector3f v1 = model.vertices[currentFace.vertices[0] - 1];
        Vector3f n1 = model.normals[currentFace.normals[0] - 1];
        Vector3f dirToLight = lightPos - v1;
        dirToLight.normalize();
        float dot = n1.dot(dirToLight);
        if ( dot <= 0.0f )
            continue; //then skip
        //lets get the edges
        //v1,v2; v2,v3; v3,v1
        Vector3f v2 = model.vertices[currentFace.vertices[1] - 1];
        Vector3f v3 = model.vertices[currentFace.vertices[2] - 1];
        Edge e[3];
        e[0] = Edge(v1, v2);
        e[1] = Edge(v2, v3);
        e[2] = Edge(v3, v1);
        //for every edge
        //triangles only have 3 edges so loop 3 times
        for ( int j = 0; j < 3; j++ ) {
            if ( edges.size() == 0 ) {
                edges.push_back(e[j]);
                continue;
            }
            bool wasRemoved = false;
            //if edge is in the list
            for ( unsigned int k = 0; k < edges.size(); k++ ) {
                Edge tempEdge = edges.at(k);
                if ( tempEdge == e[j] ) {
                    edges.erase(edges.begin() + k);
                    wasRemoved = true;
                    break;
                }
            }
            if ( ! wasRemoved )
                edges.push_back(e[j]);
        }
    }
}

void extendEdges(vector<Edge> edges, Vector3f lightPos, GLBatch &batch) {
    float extrudeSize = 100.0f;
    batch.Begin(GL_QUADS, edges.size() * 4);
    for ( unsigned int i = 0; i < edges.size(); i++ ) {
        Edge edge = edges.at(i);
        batch.Vertex3f(edge.v1.x, edge.v1.y, edge.v1.z);
        batch.Vertex3f(edge.v2.x, edge.v2.y, edge.v2.z);
        Vector3f temp = edge.v2 + (( edge.v2 - lightPos ) * extrudeSize);
        batch.Vertex3f(temp.x, temp.y, temp.z);
        temp = edge.v1 + ((edge.v1 - lightPos) * extrudeSize);
        batch.Vertex3f(temp.x, temp.y, temp.z);
    }
    batch.End();
}

void createShadowVolumesLM(Vector3f lightPos, Model model) {
    getSilhoueteEdges(model, silhoueteEdges, lightPos);
    extendEdges(silhoueteEdges, lightPos, boxShadow);
}
I have my light defined as follows, and the main shadow volume generation method is called with:
Vector3f vLightPos = Vector3f(-5.0f,0.0f,2.0f);
createShadowVolumesLM(vLightPos, boxModel);
All of my code should be self-documenting in the places where I don't have comments, but if there are any confusing parts, let me know.
I have a feeling it's just a simple mistake I overlooked. Here is what it looks like with and without the shadow volumes being rendered.
It would seem you aren't transforming the shadow volumes. You need to either set the modelview matrix on them so they get transformed the same as the rest of the geometry, or transform all the vertices (by hand) into view space and then do the silhouetting and extrusion in view space.
Obviously the first method uses less CPU time and would be, IMO, preferable.
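As a minimal sketch of the first option, assuming fixed-function matrices are being used; drawTeapot() stands in for however the model is actually drawn, and boxShadow.Draw() assumes your GLBatch exposes a Draw() call as in GLTools (if you are using GLTools' GLMatrixStack instead, push the same transform onto that stack before rendering the batch):
// Hypothetical sketch: draw the shadow volume under the same modelview
// transform as the teapot, so both end up in the same space as the light.
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
    // ... the same translate/rotate calls used for the teapot ...
    drawTeapot();       // illustrative stand-in for the model draw call
    boxShadow.Draw();   // shadow volume batch, now in the same space
glPopMatrix();
Alternatively, transform lightPos by the inverse of the model's transform before calling createShadowVolumesLM, so the silhouette test and the extrusion both happen in the model's object space.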