Add sine wave to triangle mesh - OpenGL

Can someone help me add a sine wave to my triangle mesh to get a wave effect?
for (int i = 0; i < 150; i++) {
    for (int j = 0; j < 150; j++) {
        grid[i][j] = 0;
        // first triangle of this grid cell
        glBegin(GL_LINE_LOOP);
        glVertex3f(i*3,   grid[i][j], j*3);
        glVertex3f(i*3,   grid[i][j], j*3+3);
        glVertex3f(i*3+3, grid[i][j], j*3);
        glEnd();
        // second triangle of this grid cell
        glBegin(GL_LINE_LOOP);
        glVertex3f(i*3,   grid[i][j], j*3+3);
        glVertex3f(i*3+3, grid[i][j], j*3+3);
        glVertex3f(i*3+3, grid[i][j], j*3);
        glEnd();
    }
}
If I've got it right, all I should need to do is add a sine value to grid[i][j], am I right?
Should all the y values be set to the same grid[i][j]?

It really depends on what you are trying to accomplish.
Are you trying to set up a surface that when looked on edge it looks like a sine wave?
If that is the case, then assuming you are modulating along the y-axis and the z-axis plays no role, you need to determine the frequency you want to use.
i.e. y = A * sin(w * x + p), where A is amplitude, w is angular frequency, and p is phase.
You will also have to take into account the number of sample points on the x-axis so that it doesn't look too aliased. Sine is a continuous function, but you are taking only 150 samples.
Also you may want to reconsider how to calculate and draw your final triangle mesh. Your current code is not the most efficient because you are recalculating your mesh every frame.
You may want to consider initializing grid and then drawing triangle strips, etc. There is a lot online that discusses that.
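As an illustration of the two suggestions above, here is a minimal sketch built on the question's 150x150 grid and immediate-mode drawing (the amplitude, frequency, and the time variable used for animation are arbitrary assumptions, not values from the original post). The key change from the question's code is that each corner of a triangle gets its own height instead of all three reusing grid[i][j]:

// fill the height field with a sine wave along x; "time" is assumed to be
// an animation parameter advanced elsewhere (requires <cmath> for sinf)
const float A = 5.0f;   // amplitude (arbitrary)
const float w = 0.1f;   // angular frequency (arbitrary)
for (int i = 0; i < 150; i++)
    for (int j = 0; j < 150; j++)
        grid[i][j] = A * sinf(w * (i * 3) + time);

// draw the mesh, giving every corner its own height
for (int i = 0; i < 149; i++) {
    for (int j = 0; j < 149; j++) {
        glBegin(GL_LINE_LOOP);
        glVertex3f(i*3,   grid[i][j],     j*3);
        glVertex3f(i*3,   grid[i][j+1],   j*3+3);
        glVertex3f(i*3+3, grid[i+1][j],   j*3);
        glEnd();
        glBegin(GL_LINE_LOOP);
        glVertex3f(i*3,   grid[i][j+1],   j*3+3);
        glVertex3f(i*3+3, grid[i+1][j+1], j*3+3);
        glVertex3f(i*3+3, grid[i+1][j],   j*3);
        glEnd();
    }
}

Note that the loops stop at 149 so the i+1 and j+1 lookups stay inside the 150x150 array, and precomputing the heights first keeps the per-frame sin calls to one per grid point.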

Related

Implementing soft shadows in a ray tracer

What I am trying to do is implement soft shadows in my simple ray tracer, developed in C++. The idea behind this, if I understood it correctly, is to shoot multiple rays towards the light, instead of a single ray towards the center of the light, and average the results. The rays are therefore shot towards different positions on the light. So far I am using random points, but I don't know if that is correct or if I should use points regularly distributed on the light's surface. Assuming that I am doing this right, I choose a random point on the light, which in my framework is implemented as a sphere. This is given by:
Vec3<T> randomPoint() const
{
    T x;
    T y;
    T z;
    // random vector in unit sphere
    std::random_device rd; // used for the new <random> library
    std::mt19937 gen(rd());
    std::uniform_real_distribution<> dis(-1, 1);
    do
    {
        x = dis(gen);
        y = dis(gen);
        z = dis(gen);
    } while (pow(x, 2) + pow(y, 2) + pow(z, 2) > 1); // simple rejection sampling
    return center + Vec3<T>(x, y, z) * radius;
}
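Worth noting: the rejection loop yields a point inside the unit ball, so center + Vec3<T>(x, y, z) * radius lands inside the light sphere rather than on its surface. If the goal is a sample on the surface of the spherical light, one option (a sketch, not what the original code does) is to normalize the accepted vector before scaling it:

// hypothetical variant: project the accepted vector onto the unit sphere
// so the returned point lies on the light's surface (len is essentially
// never zero, since drawing exactly (0,0,0) is all but impossible)
T len = std::sqrt(x * x + y * y + z * z);
return center + Vec3<T>(x / len, y / len, z / len) * radius;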
After this, I don't know exactly how I should proceed, since my rendering equation (in my simple ray tracer) is defined as follows:
Vec3<float> surfaceColor = 0;
for (int i = 0; i < lightsInTheScene.size(); i++) {
    surfaceColor += obj->surfaceColor * transmission *
        std::max(float(0), nHit.dot(lightDirection)) * g_lights[i]->emissionColor;
}
return surfaceColor + obj->emissionColor;
where transmission is a simple float which is set to 0 when the ray that goes from my hitPoint to the lightCenter finds an object in its way.
So, what I tried to do was:
creating multiple rays towards random points on the light
counting how many of them hit an object on their path and recording this number
For simplicity, let's imagine that I shoot 3 shadow rays from my point towards random points on the light. Only 2 of the 3 rays reach the light. Therefore the final color of my pixel will be color * shadowFactor, where shadowFactor = 2/3. In my equation I then drop the transmission factor (which is now wrong) and use the shadowFactor instead. The problem is that in my equation I have:
std::max(float(0), nHit.dot(lightDirection))
which I don't know how to change, since I no longer have a single lightDirection pointing towards the center of the light. Can you please help me understand what I should do and what is wrong so far? Thanks in advance!
You should evaluate the entire BRDF for the picked light samples. Then you will also have the light direction (the vector from the object position to the picked light sample), and you can average these results. Note that most area lights have a non-isotropic light emission characteristic (i.e. the amount of light emitted from a point varies with the outgoing direction).
Averaging the visibility does not produce correct results (although they are usually visually plausible).
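As a minimal sketch of that advice, reusing the question's names (nHit, hitPoint, obj, g_lights, emissionColor) and assuming the light exposes the randomPoint() method shown above; normalize() and the traceShadowRay() visibility test are hypothetical stand-ins for whatever your framework provides. The point is that the cosine term is evaluated with the per-sample direction and the whole contribution is averaged, rather than averaging visibility on its own:

const int numSamples = 16;                       // arbitrary sample count
Vec3<float> lightContribution = 0;
for (int s = 0; s < numSamples; s++) {
    // pick a point on the light and build the per-sample direction
    Vec3<float> samplePos = g_lights[i]->randomPoint();
    Vec3<float> lightDir  = (samplePos - hitPoint).normalize();
    // visibility of this particular sample (0 or 1)
    float visible = traceShadowRay(hitPoint, samplePos) ? 1.0f : 0.0f;
    // evaluate the shading term with the per-sample light direction
    lightContribution += obj->surfaceColor * visible *
        std::max(float(0), nHit.dot(lightDir)) * g_lights[i]->emissionColor;
}
surfaceColor += lightContribution * (1.0f / numSamples);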

Raytracing - Rays shot from camera through screen don't deviate on the y axis - C++

So I am trying to write a raytracer as a personal project, and I have got the basic recursion, mesh geometry, and ray-triangle intersection down.
I am trying to get a plausible image out of it but encounter the problem that all pixel rows are the same, giving me straight vertical lines.
I found that all pixel positions generated from the camera function are the same on the y-axis, but I cannot find the problem with my vector math here (I use my Vertex structure as vectors too; it's lazy, I know):
void Renderer::CameraShader()
{
    // compute the width and height of the screen based on the angle and distance of the near clip plane
    double widthRad = tan(0.5*m_Cam.angle)*m_Cam.nearClipPlane;
    double heightRad = ((double)m_Cam.pixelRows / (double)m_Cam.pixelCols)*widthRad;
    // get the horizontal vector of the camera by crossing the direction with an up vector
    Vertex cross = ((m_Cam.direction - m_Cam.origin).CrossProduct(Vertex(0, 1, 0)).Normalized(0.0001))*widthRad;
    // get the up/down vector of the camera by crossing the horizontal vector with the direction vector
    Vertex crossDown = m_Cam.direction.CrossProduct(cross).Normalized(0.0001)*heightRad;
    // generate rays per pixel row and column
    for (int i = 0; i < m_Cam.pixelCols; i++)
    {
        for (int j = 0; j < m_Cam.pixelRows; j++)
        {
            Vertex pixelPos = m_Cam.origin + (m_Cam.direction - m_Cam.origin).Normalized(0.0001)*m_Cam.nearClipPlane // vector to the screen center
                - cross + (cross*((i / (double)m_Cam.pixelCols)*widthRad*2))            // horizontal offset based on i
                + crossDown - (crossDown*((j / (double)m_Cam.pixelRows)*heightRad*2));  // vertical offset based on j
            // cast a ray through the corresponding screen pixel to get its color
            m_Image[i][j] = raycast(m_Cam.origin, pixelPos - m_Cam.origin, p_MaxBounces);
        }
    }
}
I hope the comments in the code make clear what is happening.
If anyone sees the problem, help would be appreciated.
The problem was that I had to subtract the camera origin from the direction point. It now actually renders silhouettes, so I guess I can say it's fixed :)
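Reading that fix back into the code above, the crossDown line is presumably what changes; a sketch of the corrected version (the post does not show it explicitly) would be:

// treat the camera direction as a point and turn it into a direction vector
// by subtracting the origin before taking the cross product, as is already
// done for the horizontal vector
Vertex crossDown = (m_Cam.direction - m_Cam.origin)
                       .CrossProduct(cross)
                       .Normalized(0.0001) * heightRad;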

GL_POINTS drawing only one point instead of many

I have this code which is supposed to draw a number of points on screen:
glBegin(GL_POINTS);
for (int i = 0; i < x; i++)
{
    for (int j = 0; j < y; j++)
    {
        glColor3f(0, 0, 0);
        glVertex3f(array1[i][j], array2[i][j], array3[i][j]);
        cout << array1[i][j] << " " << array2[i][j] << " " << array3[i][j] << endl;
    }
}
glEnd();
I only get one point on the screen. I can't imagine how this is happening. I am printing array values, they are all different, but I am getting only one point instead of a few hundred points. Can you tell what is wrong with this code?
It could be that either only one point out of your dataset falls in the viewport, or that all the points end up being projected to only one visible pixel. Either way you should check your projection range. You could extract the bounding box of your dataset and set the viewing volume to be slightly larger than that.
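As an illustration of that suggestion (a sketch only; the orthographic projection and the padding factor are assumptions, not something from the original answer), you could scan the data for its extent and then set a viewing volume slightly larger than the bounding box:

// find the bounding box of the point data (x/y extents shown; do the same
// for array3 if the z values can fall outside the default depth range)
// requires <algorithm> and <cfloat>
float minX = FLT_MAX, maxX = -FLT_MAX;
float minY = FLT_MAX, maxY = -FLT_MAX;
for (int i = 0; i < x; i++) {
    for (int j = 0; j < y; j++) {
        minX = std::min(minX, array1[i][j]); maxX = std::max(maxX, array1[i][j]);
        minY = std::min(minY, array2[i][j]); maxY = std::max(maxY, array2[i][j]);
    }
}

// set a projection slightly larger than the data (5% padding is arbitrary)
float pad = 0.05f * std::max(maxX - minX, maxY - minY);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(minX - pad, maxX + pad, minY - pad, maxY + pad, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);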

Missing triangles when drawing a sphere

I'm trying to draw a sphere with OpenGL, but I can't find my mistake...
Only half of the triangles are drawn, like here in the picture.
Here is my algorithm so far:
// The angle step used in iteration
float a = (2.0f*M_PI)/8.0;
float c = 0.0f;
for (float theta = 0.0f; theta < 2.0f*M_PI; theta += a, c += a/2.0f)
    for (float phi = 0.0f; phi < 2.0f*M_PI; phi += a) {
        // Here something is missing...
        glBegin(GL_TRIANGLES);
        float p_1[3] = {sin(theta)*cos(phi+c),
                        sin(theta)*sin(phi+c),
                        cos(theta)};
        glVertex3f(p_1[0], p_1[1], p_1[2]);
        float p_3[3] = {sin(theta+a)*cos(phi+c+a/2.0f),
                        sin(theta+a)*sin(phi+c+a/2.0f),
                        cos(theta+a)};
        glVertex3f(p_3[0], p_3[1], p_3[2]);
        float p_2[3] = {sin(theta)*cos(phi+c+a),
                        sin(theta)*sin(phi+c+a),
                        cos(theta)};
        glVertex3f(p_2[0], p_2[1], p_2[2]);
        glEnd();
    }
Could it be that half your triangles have the wrong orientation?
http://math.hws.edu/graphicsnotes/c3/s2.html
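A quick way to test that hypothesis (a debugging aid, not part of the linked notes) is to temporarily disable back-face culling around the draw call; if the missing triangles reappear, the winding order is the culprit:

glDisable(GL_CULL_FACE);   // if every triangle now shows up, half of them are wound the wrong way
// ... draw the sphere ...
glEnable(GL_CULL_FACE);    // restore culling afterwards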
Several problems with this code:
Using floating-point tests as the loop condition. Keep in mind that float calculations are not exact, so you have about a 50% chance of going through the loop one more time than you intended.
With standard spherical coordinates, the range of theta should be from 0..pi, not 0..2*pi.
You need to generate a quad inside the loop, not a triangle. Picture a map: if you look at the area between two lines of longitude and two lines of latitude, it has 4 corners (see the sketch after this list).
I don't understand what your value c is doing. Not sure if this is a problem, or if I'm just not getting it.
Not a correctness problem, but you can place your glBegin and glEnd calls outside the loop.
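Putting those points together, a conventional sketch of the latitude/longitude approach (integer loop counters, theta limited to 0..pi, and two triangles per quad; the stack and slice counts are arbitrary) would look roughly like this:

const int stacks = 16, slices = 32;            // arbitrary resolution
glBegin(GL_TRIANGLES);
for (int i = 0; i < stacks; i++) {
    float t0 = M_PI * i / stacks;              // theta runs 0..pi
    float t1 = M_PI * (i + 1) / stacks;
    for (int j = 0; j < slices; j++) {
        float p0 = 2.0f * M_PI * j / slices;   // phi runs 0..2*pi
        float p1 = 2.0f * M_PI * (j + 1) / slices;
        // the four corners of this latitude/longitude cell
        float A[3] = { sinf(t0)*cosf(p0), sinf(t0)*sinf(p0), cosf(t0) };
        float B[3] = { sinf(t1)*cosf(p0), sinf(t1)*sinf(p0), cosf(t1) };
        float C[3] = { sinf(t1)*cosf(p1), sinf(t1)*sinf(p1), cosf(t1) };
        float D[3] = { sinf(t0)*cosf(p1), sinf(t0)*sinf(p1), cosf(t0) };
        // two triangles per quad, wound consistently
        glVertex3fv(A); glVertex3fv(B); glVertex3fv(C);
        glVertex3fv(A); glVertex3fv(C); glVertex3fv(D);
    }
}
glEnd();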

Flag effect in opengl

I'm trying to follow this online tutorial to create some waves
http://nehe.gamedev.net/tutorial/flag_effect_(waving_texture)/16002/.
I want to make the wave much bigger, but I'm not sure if I'm going about it the right way. The mesh of quads is sized 45 in the tutorial, so I have increased it to 450; however, the size doesn't seem to increase that much.
Can someone point me in the right direction as to what needs to be modified to make the quads bigger?
If you just want to make the quads bigger, then you need to modify the vertex position code. In the NeHe tutorial you posted, change this part:
// Loop Through The X Plane
for (int x = 0; x < 45; x++)
{
    // Loop Through The Y Plane
    for (int y = 0; y < 45; y++)
    {
        // Apply The Wave To Our Mesh
        points[x][y][0] = float((x/5.0f)-4.5f);
        points[x][y][1] = float((y/5.0f)-4.5f);
        points[x][y][2] = float(sin((((x/5.0f)*40.0f)/360.0f)*3.141592654*2.0f));
    }
}
To this:
// Loop Through The X Plane
float spacing = 0.5f;
float spacingInv = 1.0f/spacing;
float offset = (45 / spacingInv) / 2.0f; // The 45 comes from the number of points (if you change this, change the for loop and the variable creation)
for (int x = 0; x < 45; x++)
{
    // Loop Through The Y Plane
    for (int y = 0; y < 45; y++)
    {
        // Apply The Wave To Our Mesh
        // We change the x/5.0f-4.5f to change the size of the quads
        // See text after for more details
        points[x][y][0] = float((x/spacingInv)-offset);
        points[x][y][1] = float((y/spacingInv)-offset);
        points[x][y][2] = float(sin((((x/spacingInv)*40.0f)/360.0f)*3.141592654*2.0f));
    }
}
Explanation:
x/5.0f gives you the values 0, 0.2, 0.4, 0.6, 0.8, 1.0, ..., 8.8 (for x from 0 to 44).
If you were to take just those values, you would have an off-center grid of quads. Taking x/5.0f - 4.5f instead gives you the values -4.5, -4.3, -4.1, ..., 4.1, 4.3.
If you want to make the quads bigger, you need to increase the spacing between the points (i.e. change the x/5.0f to something like x/2.0f, which is what happens in the example above), and then recenter the grid (i.e. change the -4.5f accordingly). With spacing = 0.5, for example, x/spacingInv runs from 0 to 22 and offset is 11.25, so the grid spans about -11.25 to 10.75 instead of -4.5 to 4.3, making each quad, and hence the whole flag, bigger without adding more points.