How do I make a 2D square move on the screen? I try to move it but it just stays there.
int x = 100;
int y = 100;
int width = 50;
int height = 50;
x += 1;
glBegin(GL_QUADS);
glColor3f(r, g, b);
glVertex2f(x, y);
glVertex2f(x + width, y);
glVertex2f(x + width, y + height);
glVertex2f(x, y + height);
glEnd();
It all loads fine and draws the square, but the square just doesn't move. I'm using SDL to create the window, in case you want to know.
With the default (identity) projection, OpenGL expects coordinates in normalized device space, so pixel positions like 100 have to be mapped down into that range (the code below maps them into 0 to 1). More importantly, you create new variables every frame, so their values can't accumulate from one frame to the next.
// box parameters in pixels
int boxleft = 100,
boxbottom = 100;
int boxwidth = 50,
boxheight = 50;
// window dimensions
int screenwidth = 1920,
screenheight = 1080;
for(;;)
{
// clear last frame
glClear(GL_COLOR_BUFFER_BIT);
// calculate screen space coordinates
float left = (float)boxleft / screenwidth,
right = left + (float)boxwidth / screenwidth,
bottom = (float)boxbottom / screenheight,
top = bottom + (float)boxheight / screenheight;
// draw the box
glBegin(GL_QUADS);
glColor3f(r, g, b);
glVertex2f(left, top);
glVertex2f(right, top);
glVertex2f(right, bottom);
glVertex2f(left, bottom);
glEnd();
// shift box for next frame
boxleft++;
}
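One more note, since you mentioned SDL: make sure each iteration actually presents the frame, otherwise the new position is never shown. A rough sketch of the loop, assuming an SDL2 window with an OpenGL context (the window variable and the drawBox() wrapper around the glBegin/glEnd block above are placeholders):
SDL_Event ev;
bool running = true;
while (running)
{
    while (SDL_PollEvent(&ev))        // keep the window responsive
        if (ev.type == SDL_QUIT)
            running = false;
    glClear(GL_COLOR_BUFFER_BIT);     // clear the previous frame
    drawBox();                        // the quad-drawing code above
    boxleft++;                        // shift the box for the next frame
    SDL_GL_SwapWindow(window);        // present the frame (SDL_GL_SwapBuffers() on SDL 1.2)
}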
Update: Okay, you say the square draws fine with your coordinates, so you may not need to change that part. But defining the variables outside your draw loop is essential. Tell me if this works for you.
Assuming that's all one function, the issue is that the beginning of the function is constantly resetting your value of x to 100. Move your variable definitions out of the function. For example:
int x = 100;
int y = 100;
int width = 50;
int height = 50;
void drawSquare()
{
x += 1;
glBegin(GL_QUADS);
glColor3f(r, g, b);
glVertex2f(x, y);
glVertex2f(x + width, y);
glVertex2f(x + width, y + height);
glVertex2f(x, y + height);
glEnd();
}
Each time you call that function, x is incremented by one, so the square will move progressively across the screen.
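If you'd rather not use globals, a static local variable also keeps its value between calls. A minimal sketch of that variation (the color is hard-coded here only for illustration):
void drawSquare()
{
    static int x = 100;              // initialized once, keeps its value across calls
    int y = 100, width = 50, height = 50;
    x += 1;                          // moves one unit per call
    glBegin(GL_QUADS);
    glColor3f(1.0f, 0.0f, 0.0f);     // stand-in for your r, g, b
    glVertex2f(x, y);
    glVertex2f(x + width, y);
    glVertex2f(x + width, y + height);
    glVertex2f(x, y + height);
    glEnd();
}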
I am looking for a function that draws a filled circle using SDL2 without using a renderer at all. I currently have this:
void Circle(int center_x, int center_y, int radius, SDL_Color color) {
eraseOldCircle();
uint32_t *pixels = (uint32_t *) windowSurface->pixels;
SDL_PixelFormat *windowFormat = windowSurface->format;
SDL_LockSurface(windowSurface); // Lock surface for direct pixel access capability
int radiussqrd = radius * radius;
for(int x=center_x-radius; x<=center_x+radius; x++) {
int dx = center_x - x;
for(int y=center_y-radius; y<=center_y+radius; y++) {
int dy = center_y - y;
if((dy * dy + dx * dx) <= radiussqrd) {
pixels[(y * WIDTH + x)] = SDL_MapRGB(windowFormat, color.r, color.g, color.b);
}
}
}
SDL_UnlockSurface(windowSurface);
SDL_UpdateWindowSurface(window);
}
It has been adapted from another function I found here. It draws the pixels directly to windowSurface after calling eraseOldCircle (which blits the game's background image back over the circle's previous position, effectively erasing it), but it is still too slow for what I need (probably the maths?). What would be the fastest way to draw a circle using direct pixel access? It needs to be fast enough to use in a 2D game. I haven't been able to find anything so far; everything I see uses SDL_Renderer, which I am strictly not allowed to use.
Here is eraseOldCircle() in case it helps:
void eraseOldCircle() {
//Calculate previous position of ball
SDL_Rect pos = {circlePosition.x-(radius+steps), circlePosition.y-(radius+steps), radius*radius, radius*2+steps};
SDL_BlitSurface(backgroundImage, &pos, windowSurface, &pos);
}
I'm not too sure how to do it with surfaces and memory management and all that, but if this helps, here is a version using an SDL_Renderer that runs pretty quickly:
void draw_circle(SDL_Renderer *renderer, int x, int y, int radius, SDL_Color color)
{
SDL_SetRenderDrawColor(renderer, color.r, color.g, color.b, color.a);
for (int w = 0; w < radius * 2; w++)
{
for (int h = 0; h < radius * 2; h++)
{
int dx = radius - w; // horizontal offset
int dy = radius - h; // vertical offset
if ((dx*dx + dy*dy) <= (radius * radius))
{
SDL_RenderDrawPoint(renderer, x + dx, y + dy);
}
}
}
}
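If it helps, a typical call site might look something like this (assuming renderer is your SDL_Renderer and you present once per frame):
SDL_SetRenderDrawColor(renderer, 0, 0, 0, 255);   // clear to black
SDL_RenderClear(renderer);
SDL_Color c = { 255, 0, 0, 255 };
draw_circle(renderer, 320, 240, 50, c);           // red circle at (320, 240)
SDL_RenderPresent(renderer);                      // show the frame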
If you draw many circles, I would guess SDL_UpdateWindowSurface is where you spend the most time. Try this instead:
SDL_LockSurface
// erase and draw all circles (possibly >1000)
SDL_UnlockSurface
SDL_UpdateWindowSurface
You can optimize your circle-drawing code a bit, but it is probably fast enough as it is. I also suspect SDL_Renderer would be fast enough.
The documentation for SDL_UpdateWindowSurface says it will copy the surface to the screen. You only need to do this once per frame.
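A rough sketch of that structure, assuming you keep a list of circles to redraw each frame (circles, numCircles and drawCirclePixels are placeholders for your own bookkeeping):
// blit the background patches first (blitting to a locked surface is not allowed)
for (int i = 0; i < numCircles; ++i)
    eraseOldCircle();                    // or a per-circle variant of it
SDL_LockSurface(windowSurface);          // lock once for the whole batch
for (int i = 0; i < numCircles; ++i)
    drawCirclePixels(circles[i]);        // the per-pixel loop, without locking or updating inside
SDL_UnlockSurface(windowSurface);
SDL_UpdateWindowSurface(window);         // copy the surface to the screen once per frame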
I created lines, and when I rotate a line it gets stretched. How can I stop the stretching during rotation? When I change the height in glOrtho it is not displayed properly. As the line rotates left or right it starts stretching, but when it comes back to the main (central) position it returns to its real shape.
#include<fstream>
#include<iostream>
#include<stdlib.h>
#include<glut.h>
using namespace std;
float yr = 0;
void introscreen();
void screen();
void screen1();
void PitchLadder();
int width = 1268;
int height = 720;
float translate = 0.0f;
GLfloat angle = 0.0f;
void display(void) {
glClearColor(0, 0, 0, 0);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-300, 300, -10, 25, 0, 1);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
static int center_x = 0;
static int center_y = 0;
}
void specialKey(int key, int x, int y) {
switch (key) {
case GLUT_KEY_UP:
translate += 1.0f;
break;
case GLUT_KEY_DOWN:
translate -= 1.0f;
break;
case GLUT_KEY_LEFT:
angle += 1.0f;
break;
case GLUT_KEY_RIGHT:
angle -= 1.0f;
break;
}
glutPostRedisplay();
}
void Rolling(void) {
glClear(GL_COLOR_BUFFER_BIT);
glColor3f(0, 1, 0);
glPushMatrix();
glRotatef(-angle, 0, 0, 1);
glTranslatef(-10, translate,0);
PitchLadder();
glPopMatrix();
glFlush();
}
void PitchLadder() {
GLfloat y;
GLfloat y2;
GLfloat fSize[5];
GLfloat fCurrSize;
fCurrSize = fSize[2];
for (y2 = -90.0f ; y2 <= 90.0f ; y2 += 10.0f) {
glLineWidth(fCurrSize);
glBegin(GL_LINES);
glVertex3f(-50.0f , y2 , 0);
glVertex3f(50.0f , y2 , 0);
glEnd();
fCurrSize += 1.0f;
screen();
screen1();
}
}
void renderbitmap1(float x3, float y3, void *font1, char *string1) {
char *c1;
glRasterPos2f(x3, y3);
for (c1=string1; *c1 != '\0'; c1++) {
glutBitmapCharacter(font1, *c1);
}
}
void screen(void) {
glColor3f(0, 1, 0);
char buf1[20] = { '\0' };
for (int row1 = -90.0f; row1 <= 90 + yr; row1 +=10.0f) {
sprintf_s(buf1,"%i", row1);
renderbitmap1(70 , (yr+row1), GLUT_BITMAP_TIMES_ROMAN_24, buf1);
}
}
void renderbitmap2(float x4, float y4, void *font2, char *string2) {
char *c1;
glRasterPos2f(x4, y4);
for (c1=string2; *c1 != '\0'; c1++) {
glutBitmapCharacter(font2, *c1);
}
}
void screen1(void) {
glColor3f(0, 1, 0);
char buf1[20] = { '\0' };
for (int row1 = -90.0f; row1 <= 90 + yr; row1 +=10.0f) {
sprintf_s(buf1,"%i", row1);
renderbitmap2(-70 , (yr+row1), GLUT_BITMAP_TIMES_ROMAN_24, buf1);
}
}
int main(int arg, char** argv) {
glutInit(&arg, argv);
glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
glutInitWindowSize(width, height);
glutInitWindowPosition(50, 100);
glutCreateWindow("HUD Lines");
display();
glutDisplayFunc(Rolling);
glutSpecialFunc(specialKey);
glutMainLoop();
return 0;
}
With an orthographic projection, the view space coordinates are linearly mapped to clip space coordinates, which in this case are the same as the normalized device coordinates. Normalized device space is a cube with a minimum of (-1, -1, -1) and a maximum of (1, 1, 1).
Finally the coordinates in normalized device space are mapped to the rectangular viewport.
If the viewport is not square, then its aspect ratio has to be considered when the view space coordinates are transformed to clip space.
The mapping of the normalized device coordinates to the viewport distorts the geometry by the reciprocal of the viewport's aspect ratio. This distortion has to be compensated for by the orthographic projection.
When the orthographic projection is set by glOrtho(left, right, bottom, top, near, far), then the cuboid volume is defined, which maps (left, bottom, near) to (-1, -1, -1) and (right, top, far) to (1, 1, 1).
The x and y range of the orthographic projection does not have to be equal to the viewport rectangle, but the ratio (right - left) / (top - bottom) has to be equal to the aspect ratio of the viewport rectangle, otherwise the geometry will be distorted. For example:
double size = 200.0f;
double aspect = (double)width / (double)height;
glOrtho(-aspect*size/2.0, aspect*size/2.0, -size/2.0, size/2.0, -1.0, 1.0);
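If the window can be resized, the same computation would typically go into the reshape callback so that the projection follows the window size. A sketch, assuming GLUT:
void reshape(int width, int height)
{
    glViewport(0, 0, width, height);      // keep the viewport in sync with the window
    double size = 200.0;
    double aspect = (double)width / (double)height;
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(-aspect*size/2.0, aspect*size/2.0, -size/2.0, size/2.0, -1.0, 1.0);
    glMatrixMode(GL_MODELVIEW);
}
// registered once in main(): glutReshapeFunc(reshape);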
Your window size and orthographic "view" do not have the same aspect ratio:
// This creates a window that's 1268 x 720 (a wide rectangle)
int width = 1268;
int height = 720;
glutInitWindowSize(width, height);
// This creates a "view" that's 300 x 300 (a square)
glOrtho(-300, 300, -10, 25, 0, 1);
The "view" will be stretched to fill the viewport (window). You are seeing a 300 x 300 image being stretched to 1268x720, which definitely makes horizontal lines appear longer than vertical lines even though they're the same length in the code.
You should call glOrtho using the width and height variables of your window:
glOrtho(0, width, 0, height, 0, 1);
Notice that I have changed the arguments to (left = 0, right = width, bottom = 0, top = height, ...). This lets you work in a screen-like coordinate space for 2D rendering, where the bottom-left corner is (0, 0) and the top-right is (width, height).
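In the context of your display() function that would look roughly like this (a sketch; note that you would then be drawing in pixel units, so the existing coordinates in the +/-50 range will look small unless you scale them up):
void display(void) {
    glClearColor(0, 0, 0, 0);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, width, 0, height, 0, 1);   // same aspect ratio as the 1268 x 720 window
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}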
I am making a 3D project in OpenGL which contains a ground (drawn as line loops). The issue I have is that when the project starts, only a single line is drawn, as shown in the next image:
When I resize or maximize the window, the actual ground gets displayed, like this:
Any idea how to resolve this issue? I'm a beginner in OpenGL programming.
Here is the code:
void drawHook(void);
void timer(int);
void drawFlorr();
float L = 100;
const int screenWidth = 1000; // width of screen window in pixels
const int screenHeight = 1000; // height of screen window in pixels
float ww = 800;
float wh = 800;
float f = 520, n = 10.0;
static GLdouble ort1[] = { -200, 200, -33, 140 };
static GLdouble viewer[] = { 525, 25, -180 };
static GLdouble objec[] = { 525.0, 25, -350 };
float x, y = 0.0, z, z1;
float xmax = screenWidth - 200.0;
float zmax = screenWidth - 200.0;
float xmin, zmin;
float step = 5.0;
float fov = 80;
void myInit(void)
{
glClearColor(0.0,0.0,0.0,0.0); // background color is black
glPointSize(2.0); // a 'dot' is 2 by 2 pixels
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0.0, screenWidth, 0.0, screenHeight);//dino window
glViewport(0, 0, screenWidth, screenHeight);
}
void myDisplay(void)
{
glClear(GL_COLOR_BUFFER_BIT);
glLoadIdentity();
gluLookAt(viewer[0], viewer[1], viewer[2], objec[0], objec[1], objec[2], 0, 1, 0);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(fov, 1.333, n, f);
glPointSize(2.0);
glMatrixMode(GL_MODELVIEW);
drawFlorr();
glutSwapBuffers();
}
int main(int argc, char** argv)
{
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB); // set display mode
glutInitWindowSize(screenWidth, screenHeight); // set window size
glutInitWindowPosition(10, 10); // set window position on screen
glutCreateWindow("Dino Line Drawing"); // open the screen window
glutDisplayFunc(myDisplay); // register redraw function
myInit();
//glutTimerFunc(1,timer,1);
glutMainLoop(); // go into a perpetual loop
return 1;
}
void drawFlorr()
{
xmin = -100;
zmin = -100;
for (x = xmin; x < xmax; x += step)
{
for (z = zmin; z < zmax; z += step)
{
z1 = -z;
glBegin(GL_LINE_LOOP);
glVertex3f(x, y, z1);
glVertex3f(x, y, z1-step+1.0);
glVertex3f(x + step - 1.0, y, z1 - step + 1.0);
glVertex3f(x+step-1.0, y, z1);
glEnd();
}
}
}
Your code is broken in many ways:
Your myDisplay function sets the view matrix (the gluLookAt call) on whatever matrix mode happens to be current.
Initially, you leave the matrix mode as GL_PROJECTION in myInit()
These two together mean that for the first frame, you just use the identity as the MODELVIEW matrix and overwrite the projection matrix twice. After a resize, the frame is drawn again, and your code does what you probably intended to do.
However, there is more:
You do not have any resize handler, so your viewport will not change when you resize the window.
You are setting an ortho matrix initially for the projection, although you are not planning to use it at all.
And the most important point:
All of your code depends on deprecated functionality which is not even available in modern OpenGL at all. You should really not use this in 2016, but learn modern OpenGL instead (with "modern" meaning "only a decade old" here).
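To make those points concrete, a minimal restructuring might look like this (a sketch only; it keeps the deprecated fixed-function style since that is what the rest of the code uses):
void myInit(void)
{
    glClearColor(0.0, 0.0, 0.0, 0.0);
    glPointSize(2.0);
    glMatrixMode(GL_MODELVIEW);           // leave MODELVIEW as the current mode
}
void reshape(int w, int h)                // register with glutReshapeFunc(reshape)
{
    glViewport(0, 0, w, h);               // keep the viewport in sync with the window
}
void myDisplay(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glMatrixMode(GL_PROJECTION);          // set the projection explicitly
    glLoadIdentity();
    gluPerspective(fov, 1.333, n, f);
    glMatrixMode(GL_MODELVIEW);           // then build the view matrix on MODELVIEW
    glLoadIdentity();
    gluLookAt(viewer[0], viewer[1], viewer[2], objec[0], objec[1], objec[2], 0, 1, 0);
    drawFlorr();
    glutSwapBuffers();
}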
So I have this piece of code, which pretty much draws various 2D textures on the screen, though there are multiple sprites that have to be 'dissected' from the texture (spritesheet). The problem is that rotation is not working properly; while it rotates, it does not rotate on the center of the texture, which is what I am trying to do. I have narrowed it down to the translation being incorrect:
glTranslatef(x + sr->x/2 - sr->w/2,
y + sr->y/2 - sr->h/2,0);
glRotatef(ang,0,0,1.f);
glTranslatef(-x + -sr->x/2 - -sr->w/2,
-y + -sr->y/2 - -sr->h/2,0);
x and y are the position it is being drawn to. The sheet rect struct contains the x and y position of the sprite within the texture, along with w and h, which are the width and height of the 'sprite' in the texture. I've tried various other formulas, such as:
glTranslatef(x, y, 0);
and the three below, also tried with the negative signs switched to positive (x - y changed to x + y):
glTranslatef(sr->x/2 - sr->w/2, sr->y/2 - sr->h/2, 0 );
glTranslatef(sr->x - sr->w/2, sr->y - sr->h/2, 0 );
glTranslatef(sr->x - sr->w, sr->y - sr->w, 0 );
glTranslatef(.5,.5,0);
It might also be helpful to say that:
glOrtho(0,screen_width,screen_height,0,-2,10);
is in use.
I've tried reading various tutorials, going through various forums, and asking various people, but there doesn't seem to be a solution that works, nor can I find any useful resources that explain how to find the center of the image in order to translate it to (0,0). I'm pretty new to OpenGL, so a lot of this stuff takes a while for me to digest.
Here's the entire function:
void Apply_Surface( float x, float y, Sheet_Container* source, Sheet_Rect* sr , float ang = 0, bool flipx = 0, bool flipy = 0, int e_x = -1, int e_y = -1 ) {
float imgwi,imghi;
glLoadIdentity();
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D,source->rt());
// rotation
imghi = source->rh();
imgwi = source->rw();
Sheet_Rect t_shtrct(0,0,imgwi,imghi);
if ( sr == NULL ) // in case a sheet rect is not provided, assume the full
                  // width and height of the texture, with x/y at 0/0
sr = &t_shtrct;
glPushMatrix();
//
int wid, hei;
glGetTexLevelParameteriv(GL_TEXTURE_2D,0,GL_TEXTURE_WIDTH,&wid);
glGetTexLevelParameteriv(GL_TEXTURE_2D,0,GL_TEXTURE_HEIGHT,&hei);
glTranslatef(-sr->x + -sr->w,
-sr->y + -sr->h,0);
glRotatef(ang,0,0,1.f);
glTranslatef(sr->x + sr->w,
sr->y + sr->h,0);
// Yeah, out-dated way of drawing to the screen but it works for now.
GLfloat tex[] = {
(sr->x+sr->w * flipx) /imgwi, 1 - (sr->y+sr->h *!flipy )/imghi,
(sr->x+sr->w * flipx) /imgwi, 1 - (sr->y+sr->h * flipy)/imghi,
(sr->x+sr->w * !flipx) /imgwi, 1 - (sr->y+sr->h * flipy)/imghi,
(sr->x+sr->w * !flipx) /imgwi, 1 - (sr->y+sr->h *!flipy)/imghi
};
GLfloat vertices[] = { // vertices to put on screen
x, (y + sr->h),
x, y,
(x +sr->w), y,
(x +sr->w),(y +sr->h)
};
// index array
GLubyte index[6] = { 0,1,2, 2,3,0 };
float fx = (x/(float)screen_width)-(float)sr->w/2/(float)imgwi;
float fy = (y/(float)screen_height)-(float)sr->h/2/(float)imghi;
// activate arrays
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
// pass vertex and texture coordinate arrays
glVertexPointer(2, GL_FLOAT, 0, vertices);
glTexCoordPointer(2, GL_FLOAT, 0, tex);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, index);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glPopMatrix();
glDisable(GL_TEXTURE_2D);
}
Sheet container class:
class Sheet_Container {
GLuint texture;
int width, height;
public:
Sheet_Container();
Sheet_Container(GLuint, int = -1,int = -1);
void Load(GLuint,int = -1,int = -1);
float rw();
float rh();
GLuint rt();
};
Sheet rect class:
struct Sheet_Rect {
float x, y, w, h;
Sheet_Rect();
Sheet_Rect(int xx,int yy,int ww,int hh);
};
Image loading function:
Sheet_Container Game_Info::Load_Image(const char* fil) {
ILuint t_id;
ilGenImages(1, &t_id);
ilBindImage(t_id);
ilLoadImage(const_cast<char*>(fil));
int width = ilGetInteger(IL_IMAGE_WIDTH), height = ilGetInteger(IL_IMAGE_HEIGHT);
return Sheet_Container(ilutGLLoadImage(const_cast<char*>(fil)),width,height);
}
Your quad (two triangles) is centered at:
( x + sr->w / 2, y + sr->h / 2 )
You need to move that point to the origin, rotate, and then move it back:
glTranslatef ( (x + sr->w / 2.0f), (y + sr->h / 2.0f), 0.0f); // 3rd
glRotatef (0,0,0,1.f); // 2nd
glTranslatef (-(x + sr->w / 2.0f), -(y + sr->h / 2.0f), 0.0f); // 1st
Here is where I think you are getting tripped up. People naturally assume that OpenGL applies transformations in the order they appear (top-to-bottom); that is not the case. OpenGL effectively swaps the operands every time it multiplies two matrices:
M1 x M2 x M3
~~~~~~~          <- (1) first multiplication
~~~~~~~~~~~~     <- (2) second multiplication

(1) M2 * M1
(2) M3 * (M2 * M1)  -->  M3 * M2 * M1   (row-major / textbook math notation)
The technical term for this is post-multiplication, it all has to do with the way matrices are implemented in OpenGL (column-major). Suffice it to say, you should generally read glTranslatef, glRotatef, glScalef, etc. calls from bottom-to-top.
With that out of the way, your current rotation does not make any sense.
You are telling GL to rotate 0 degrees around an axis: <0,0,1> (the z-axis in other words). The axis is correct, but a 0 degree rotation is not going to do anything ;)
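Putting the two fixes together and using the function's ang parameter instead of a literal 0, the transform block would look something like this:
glTranslatef( (x + sr->w / 2.0f),  (y + sr->h / 2.0f), 0.0f);   // 3rd: move the pivot back
glRotatef   ( ang, 0.0f, 0.0f, 1.0f);                           // 2nd: rotate about the z-axis
glTranslatef(-(x + sr->w / 2.0f), -(y + sr->h / 2.0f), 0.0f);   // 1st: move the quad's center to the origin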
I'm trying to create an effect of zooming in on a rotating hexagon. I'm accomplishing this by changing the window. Once it "zooms in" it is supposed to "zoom out", and then repeat continuously. I've managed to zoom in just fine, and by the looks of my code it should zoom out as well, but once it zooms in, nothing else is drawn. I've debugged my code, and I can tell that the variables are indeed being incremented on this line:
gluOrtho2D(cx - w, cx + w, cy -h, cy +h);
But I still fail to see my hexagon "zoom out". Any help would be appreciated. I'm pretty sure it's something simple I'm forgetting, but it keeps eluding me. My code follows:
#include <cstdlib>
#include <GL/glut.h>
#include <cmath>
#define PI 3.14159265
#define ZOOM_IN 1
#define ZOOM_OUT -1
using namespace std;
const int screenWidth = 500;
const int screenHeight = 500;
float cx = 0.0, cy = 0.0; //center of viewport (cx, cy)
float h=1.2, w = 1.2; //window size
int NumFrames = 10; //frames
int frame = 0;
int direction = ZOOM_IN;
//<<<<<<<<<<<<<<<<<<<<<<< myInit >>>>>>>>>>>>>>>>>>>>
void myinit() {
glClearColor (1.0, 1.0, 1.0, 1.0); //set the background color to white
glColor3f (0.0, 0.0, 0.0); //set the foreground color to black
glPointSize (3.0); //set the point size to 3 X 3 pixels
glViewport (0.0, 0.0, 500.0, 500.0); //set the viewport to be the entire window
//set up a world window to screen transformation
glMatrixMode (GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(-5.0, 5.0, -5.0, 5.0);
// glMatrixMode (GL_MODELVIEW);
}
//<<<<<<<<<<<<<<<<<<<<<<< hexswirl >>>>>>>>>>>>>>>>>>>>
void hexswirl() {
double angle; //the angle of rotation
double angleInc = 2*PI/6.0; //the angle increment
double inc = 5.0/50; //the radius increment
double radius = 5.0/50.0; //the radius to be used
//clear the background
glClear (GL_COLOR_BUFFER_BIT);
//draw the hexagon swirl
for (int j = 0; j <= 50; j++) {
//the angle of rotation depends on which hexagon is
//being drawn.
angle = j* (PI/180.0);
//draw one hexagon
glBegin (GL_LINE_STRIP);
for (int k=0; k <= 6; k++) {
angle += angleInc;
glVertex2d(radius * cos(angle), radius *sin(angle));
}
glEnd();
//determine the radius of the next hexagon
radius += inc;
}
//swap buffers for a smooth change from one
//frame to another
glutSwapBuffers();
glutPostRedisplay();
glFlush();
}
//<<<<<<<<<<<<<<<<<<<<<<< viewZoom >>>>>>>>>>>>>>>>>>>>
void viewZoom(int i) {
if(direction == ZOOM_IN) {
//change the width and height of the window each time
w *= 0.9;
h *= 0.9;
}
if(direction == ZOOM_OUT) {
w /= 0.9;
h /= 0.9;
}
if(i%10 == 0) {
direction = -direction;
}
//change the window and draw the hexagon swirl
gluOrtho2D (cx - w, cx + w, cy - h, cy + h);
hexswirl();
glutPostRedisplay();
glutTimerFunc(200, viewZoom,i+1);
}
//<<<<<<<<<<<<<<<<<<<<<<<< main >>>>>>>>>>>>>>>>>>>>>>
int main(int argc, char** argv) {
glutInit(&argc, argv);
glutInitDisplayMode( GLUT_DOUBLE | GLUT_RGB);
glutInitWindowSize(screenWidth, screenHeight);
glutInitWindowPosition(100,100);
glutCreateWindow("hexanim");
glutDisplayFunc(hexswirl);
viewZoom(1);
myinit();
glutMainLoop();
return 1;
}
I figured out a way around my problem. I still don't know why my window wasn't redrawing after "zooming in", but I decided to implement it by changing my viewport instead. I ended up switching out:
gluOrtho2D (cx - w, cx + w, cy - h, cy + h);
for
cx = screenWidth / w;
cy = screenHeight / h;
glViewport((screenWidth-cx)/2, (screenHeight-cy)/2, cx, cy);
(and made all the corresponding changes associated with it).
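For reference, a sketch of what the reworked viewZoom looks like with that change (the exact bookkeeping of the "corresponding changes" is simplified here; cx and cy are reused as the viewport size in pixels):
void viewZoom(int i) {
    if (direction == ZOOM_IN) {
        w *= 0.9;
        h *= 0.9;
    }
    if (direction == ZOOM_OUT) {
        w /= 0.9;
        h /= 0.9;
    }
    if (i % 10 == 0) {
        direction = -direction;
    }
    //resize the viewport instead of changing the projection window
    cx = screenWidth / w;
    cy = screenHeight / h;
    glViewport((screenWidth - cx) / 2, (screenHeight - cy) / 2, cx, cy);
    hexswirl();
    glutPostRedisplay();
    glutTimerFunc(200, viewZoom, i + 1);
}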