I am trying to create a 3D box that will serve as the playing board/surface/room for a 3D game in C++, using OpenGL. As a starting point I found some code that does this for a 2D surface. My question is how to modify the following code to serve my purpose:
for (float i = -width; i + 0.1 <= width; i += 0.1) {
    for (float j = -height; j + 0.1 <= height; j += 0.1) {
        // one 0.1 x 0.1 tile of the floor, lying in the XZ plane
        glBegin(GL_QUADS);
        glNormal3f(0, 1, 0);
        glVertex3f(i, 0, j);
        glVertex3f(i, 0, j + 0.1);
        glVertex3f(i + 0.1, 0, j + 0.1);
        glVertex3f(i + 0.1, 0, j);
        glEnd();
    }
}
Thanks a lot.
You can either use the above code six times, applying a different rotation/translation matrix each time, or you can do it the right way and generate the correct geometry for the missing five walls of your box. There are lots of samples for drawing cubes, e.g.:
http://www.opengl.org/resources/code/samples/glut_examples/examples/examples.html
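For the transform approach, here is a minimal sketch. drawPanel(a, b) is an assumed helper (not from the original code) that wraps the quad loop above, drawing a flat panel in the XZ plane at y = 0 spanning -a..a in x and -b..b in z; the box is centered on the origin.

void drawBox(float w, float h, float d) {
    glPushMatrix();              // floor at y = -h
    glTranslatef(0, -h, 0);
    drawPanel(w, d);
    glPopMatrix();

    glPushMatrix();              // ceiling at y = +h
    glTranslatef(0, h, 0);
    drawPanel(w, d);
    glPopMatrix();

    glPushMatrix();              // back wall at z = -d
    glTranslatef(0, 0, -d);
    glRotatef(90, 1, 0, 0);      // stand the panel upright
    drawPanel(w, h);
    glPopMatrix();

    // The front, left, and right walls follow the same pattern: rotate
    // about the X or Z axis, then translate to z = +d, x = -w, or x = +w.
}

If lighting is enabled, note that each wall's normal should point into the room; an extra 180-degree rotation flips any panel whose normal ends up facing outward.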
I am drawing a grid in OpenGL using this:
void draw_grid()
{
    glColor3f(0.32, 0.32, 0.32);
    int iterations = int(WIDTH / grid_width) + 1;
    glBegin(GL_LINES);
    // vertical grid lines
    for (int i = 0; i < iterations; i++)
    {
        int x = i * grid_width;
        glVertex2f(x, 0);
        glVertex2f(x, HEIGHT);
    }
    // horizontal grid lines
    for (int i = 0; i < iterations; i++)
    {
        int y = i * grid_width;
        glVertex2f(0, y);
        glVertex2f(WIDTH, y);
    }
    glEnd();
}
And, then I plot points at the intersections of this grid.
I am implementing the simple DDA method for drawing lines.
For endpoints (2, 3) and (15, 8) the output looks fine, but for endpoints (2, 3) and (35, 8) some of the plotted points fall outside the window and thus are not visible.
This happens because I have hardcoded the grid_width.
I understand that the larger the difference between the endpoints, the smaller the grid_width needs to be. But I can't figure out how exactly to calculate the grid_width so that, no matter what endpoints are given, the points are drawn within the bounds of the window.
You have to determine the grid_width from the maximum coordinate. The largest usable grid_width is WIDTH / maxC, where maxC is the maximum coordinate, e.g.:
grid_width = int(WIDTH / maxC);
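A minimal sketch of that calculation; the endpoint names x1, y1, x2, y2 are assumptions, not from the original code:

#include <algorithm>

// Derive the cell size from the largest endpoint coordinate so every
// plotted intersection stays inside the window.
int maxC = std::max({ x1, y1, x2, y2 });
grid_width = int(WIDTH / maxC);   // as above; WIDTH / (maxC + 1) would
                                  // additionally keep the last point off the edge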
I am using OpenGL in WinAPI to create a 2D line graph. My points are plotted in point size 8, and I want to adjust the height of the plotted points (and the line connecting them) so that the bottom of the point is at the proper y-position (i.e., so that a point at 0 isn't split by the x-axis).
I had an adjustment hard-coded, but I would rather have it scale with the plotted point size, so that when it's plotted in a different size window, it works the same.
Here is my method for plotting the points and the line connecting them:
void plotScores() {
    if (samples > 1) { // if this is at least the second score, connect the scores with a line
        glLineWidth(12.0);
        GLdouble lineXPos = 0, lineYPos = 0;
        glColor3d(0.3, 0.3, 0.3);
        glBegin(GL_LINE_STRIP);
        for (int i = 0; i < scores.size(); i++) {
            lineXPos = (i * 0.05) - 0.88;
            lineYPos = ((scores[i] - 0.5) * 1.6); // need to adjust this for line y-position...
            glVertex2d(lineXPos, lineYPos);
        }
        glEnd();
    }
    for (int i = 0; i < scores.size(); i++) {
        GLdouble pointXPos = (i * 0.05) - 0.88;
        GLdouble pointYPos = ((scores[i] - 0.5) * 1.6); // ...and this for point y-position
        if (scores[i] >= threshold) {
            glColor3d(0.0, 1.0, 0.2);
        }
        else {
            glColor3d(1.0, 0.2, 0.0);
        }
        glBegin(GL_POINTS);
        glVertex2d(pointXPos, pointYPos);
        glEnd();
    }
}
You set the point size with glPointSize, so you already know that value. If you want to query it afterwards for some reason, you can do so with glGet and the GL_POINT_SIZE enum.
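A minimal sketch of turning that pixel size into an offset in the normalized coordinates the plot uses, so a point can be shifted up by half its own height; the viewport query and the mapping assume the default -1..1 projection implied by the question:

GLfloat pointSize = 0.0f;
glGetFloatv(GL_POINT_SIZE, &pointSize);  // or reuse the value passed to glPointSize

GLint viewport[4];                       // x, y, width, height in pixels
glGetIntegerv(GL_VIEWPORT, viewport);

// The visible y range spans 2.0 units over viewport[3] pixels, so one pixel
// is 2.0 / viewport[3] units. Half the point size lifts the point's bottom
// edge onto the plotted value.
GLdouble yOffset = (pointSize / 2.0) * (2.0 / viewport[3]);
pointYPos = ((scores[i] - 0.5) * 1.6) + yOffset;  // same offset for lineYPos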
I am using vertex arrays to store circle vertices and colors.
Here is the setup function:
void setup1(void)
{
    glClearColor(1.0, 1.0, 1.0, 0.0);
    // Enable two vertex arrays: co-ordinates and color.
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    // Specify locations for the co-ordinates and color arrays.
    glVertexPointer(3, GL_FLOAT, 0, Vertices1);
    glColorPointer(3, GL_FLOAT, 0, Colors1);
}
The global declaration of the arrays is here:
static float Vertices1[500] = { 0 };
static float Colors1[500] = { 0 };
The arrays are set up here (R is the radius, (X, Y) is the center, and t is the angle parameter of the circle):
void doGlobals1()
{
    // vertex positions around the circle
    for (int i = 0; i < numVertices1 * 3; i += 3)
    {
        Vertices1[i] = X + R * cos(t);
        Vertices1[i + 1] = Y + R * sin(t);
        Vertices1[i + 2] = 0.0;
        t += 2 * PI / numVertices1;
    }
    // a random RGB color per vertex
    for (int j = 0; j < numVertices1 * 3; j += 3)
    {
        Colors1[j] = (float)rand() / (float)RAND_MAX;
        Colors1[j + 1] = (float)rand() / (float)RAND_MAX;
        Colors1[j + 2] = (float)rand() / (float)RAND_MAX;
    }
}
Finally, this is where the shape is drawn.
// Window 1 drawing routine.
void drawScene1(void)
{
    glutSetWindow(win1);
    glLoadIdentity();
    doGlobals1();
    glClear(GL_COLOR_BUFFER_BIT);
    glRotatef(15, 1, 0, 0);
    glDrawArrays(GL_TRIANGLE_FAN, 0, numVertices1);
    glFlush();
}
Without the rotation, the circle draws just fine. It also draws fine with any scale/translate call. I suspect there is some special protocol necessary to rotate an object drawn with vertex arrays.
Can anyone tell me where I have gone wrong, what I will need to do in order to rotate the object, or offer any advice?
glRotatef(15, 1, 0, 0);
^ why the X axis?
The default ortho projection matrix has pretty tight near/far clipping planes: -1 to 1.
Rotating your circle of X/Y coordinates outside of the X/Y plane will tend to make those points get clipped.
Rotate around the Z axis instead:
glRotatef(15, 0, 0, 1);
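If rotating out of the X/Y plane is actually what you want, an alternative (a sketch, assuming you control the projection setup) is to widen the near/far planes so the rotated circle is no longer clipped:

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-1, 1, -1, 1, -10, 10);  // same x/y range, extra room along z
glMatrixMode(GL_MODELVIEW);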
So I'm working with a 2D array, and I'm trying to display it on a widget using OpenGL. It seems to work fine, but it does not fill the widget properly; rather than filling it evenly, the drawing is shifted to the top right. How can I get it centered?
int x = -0.1;
int y = -0.1;
float lengthX = 0.9 / ROW;
float lengthY = 0.9 / COLM;
for (int i = 0; i < ROW; i++) {
    for (int j = 0; j < COLM; j++) {
        if (arr[i][j] == 1) {
            glColor3f(1.0f, 1.0f, 1.0f);
        } else {
            glColor3f(0.0f, 0.0f, 0.0f);
        }
        glBegin(GL_QUADS);
        // Q3 Q4 Q1 Q2
        glVertex2f((-x) + 2 * i * lengthX, (-y) + 2 * j * lengthY);
        glVertex2f(x + (2 * i + 1) * lengthX, (-y) + (2 * j + 1) * lengthX);
        glVertex2f(x + (2 * i + 1) * lengthX, y + (2 * j + 1) * lengthY);
        glVertex2f((-x) + 2 * i * lengthX, y + 2 * j * lengthY);
        glEnd();
    }
}
First off, your code has a problem with its variables, such as int x = -0.1 (an int cannot hold -0.1; it silently becomes 0).
Now, to fix the problem just add this to the beginning of your code:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, 1, 1, 0, -1, 1);
glMatrixMode(GL_MODELVIEW);
The problem is that you're using the default matrix setup, which ranges from (-1, -1) to (1, 1) instead of (0, 0) to (1, 1). You can edit the glOrtho function's first four parameters (the last two shouldn't affect anything here) to change the visible range of what you're drawing.
Just to explain a bit more: the first parameter sets the left side, the second the right, the third the bottom, and the fourth the top. So glOrtho(0, 800, 600, 0) means a vertex at (0, 0) shows at the top left, while a vertex at (800, 600) shows at the bottom right.
Your setup was only showing part of what it was supposed to show because the center of the default range was at the corner of your drawing.
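With that projection in place, a minimal sketch of the cell loop; the simplified size arithmetic here is an assumption, not the original code:

float cellW = 1.0f / ROW;    // each cell's width in the 0..1 range
float cellH = 1.0f / COLM;   // each cell's height

for (int i = 0; i < ROW; i++) {
    for (int j = 0; j < COLM; j++) {
        float c = (arr[i][j] == 1) ? 1.0f : 0.0f;  // white or black cell
        glColor3f(c, c, c);
        glBegin(GL_QUADS);
        glVertex2f(i * cellW,       j * cellH);
        glVertex2f((i + 1) * cellW, j * cellH);
        glVertex2f((i + 1) * cellW, (j + 1) * cellH);
        glVertex2f(i * cellW,       (j + 1) * cellH);
        glEnd();
    }
}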
I have an array that contains 0s and 1s, and based on the values of the array I want to draw squares and fill them with color. I have the code below, but it only makes one square in the middle of the screen. I feel like there is something I need to do with glVertex2f(), but I'm kind of stumped.
The end result should look like a grid of colored squares, but my code draws just a single colored square.
for (int i = 0; i < Width; i++) {
    for (int j = 0; j < Height; j++) {
        if (myArray[i][j] == 0) {
            glColor3f(1.1, 1.1, 1.1);  // note: color components clamp to [0.0, 1.0]
        } else {
            glColor3f(2.2, 2.2, 2.2);  // so both of these end up white
        }
        glBegin(GL_QUADS);
        glVertex2f(-0.2, -0.2);
        glVertex2f(0.2, -0.2);
        glVertex2f(0.2, 0.2);
        glVertex2f(-0.2, 0.2);
        glEnd();
    }
}
You are drawing all of your squares in the same spot - you need to space them out.
Try adding a translate in front of each draw. Something like:
glPushMatrix();
glTranslatef(2. * i, 2. * j, 0.0);
glBegin(GL_QUADS);
...
glPopMatrix();
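Putting it together, a minimal sketch; the 0.5 spacing and the starting offset are assumptions chosen so a small grid fits the default -1..1 view, and a real version would scale them from Width and Height:

for (int i = 0; i < Width; i++) {
    for (int j = 0; j < Height; j++) {
        float c = (myArray[i][j] == 0) ? 1.0f : 0.5f;  // two distinguishable grays
        glColor3f(c, c, c);

        glPushMatrix();
        glTranslatef(-0.9f + 0.5f * i, -0.9f + 0.5f * j, 0.0f);  // each cell in its own spot
        glBegin(GL_QUADS);
        glVertex2f(-0.2f, -0.2f);
        glVertex2f( 0.2f, -0.2f);
        glVertex2f( 0.2f,  0.2f);
        glVertex2f(-0.2f,  0.2f);
        glEnd();
        glPopMatrix();
    }
}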