Drawing a thick line with legacy OpenGL (immediate mode) in C++

I wanted to draw a thick line using the OpenGL library in C++, but it is not working.
I tried this code:
glBegin(GL_LINES);
glLineWidth(3);
glVertex2f(-0.7f, -1.0f);
glVertex2f(-0.7f, 1.0f);
glEnd();
Is there something wrong here?

Note that rendering with glBegin/glEnd sequences and glLineWidth is deprecated. See OpenGL Line Width for a solution using "modern" OpenGL.
It is not allowed to call glLineWidth within a glBegin/glEnd sequence. Set the line width before:
glLineWidth(3);
glBegin(GL_LINES);
glVertex2f(-0.7f, -1.0f);
glVertex2f(-0.7f, 1.0f);
glEnd();
Once drawing of primitives has been started by glBegin, it is only allowed to specify vertex coordinates (glVertex) and change attributes (e.g. glColor, glTexCoord, etc.) until the drawing is ended by glEnd.
All other instructions are ignored and cause a GL_INVALID_OPERATION error, which can be queried with glGetError.
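To see the failure directly, a sketch like the following (assuming a valid GL context is current) queries the error flag after the sequence ends. This reproduces the original, incorrect ordering on purpose:

```cpp
#include <GL/gl.h>

// Deliberately misplaced call: glLineWidth inside glBegin/glEnd
// is ignored and records GL_INVALID_OPERATION.
void drawWithMisplacedLineWidth() {
    glBegin(GL_LINES);
    glLineWidth(3.0f);               // invalid here, silently ignored
    glVertex2f(-0.7f, -1.0f);
    glVertex2f(-0.7f, 1.0f);
    glEnd();

    // glGetError must itself be called outside the glBegin/glEnd
    // sequence; here, after glEnd, it reports the recorded error.
    GLenum err = glGetError();
    if (err == GL_INVALID_OPERATION) {
        // the glLineWidth call was rejected; the line stays 1px wide
    }
}
```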

Related

How to blank my OpenGL texture

I created an OpenGL (GL_TEXTURE_2D) texture, made an OpenCL image2d_t buffer out of it using clCreateFromGLTexture(), I run my OpenCL kernel to draw to the texture, using clEnqueueAcquireGLObjects and clEnqueueReleaseGLObjects before and after, and then I display the result in OpenGL by doing this (I'm trying to simply draw my framebuffer texture to the window with no scaling, is this even the right way to do it?):
glBegin(GL_QUADS);
glTexCoord2f(0.0, 0.0); glVertex2f(0.0f, 0.0f);
glTexCoord2f(1.0, 0.0); glVertex2f(1920.0f, 0.0f);
glTexCoord2f(1.0, 1.0); glVertex2f(1920.0f, 1080.0f);
glTexCoord2f(0.0, 1.0); glVertex2f(0.0f, 1080.0f);
glEnd();
SDL_GL_SwapWindow(window);
It works great except there's one problem, I can't figure out how to fill the texture with zeroes in order to blank it, so I see old pixels where no new ones were written.
Apparently there's no way to blank an OpenCL image object from the host, so I'd have to do it in OpenGL (before calling clEnqueueAcquireGLObjects of course), so I tried this:
glClearColor(0,0,0,0);
glClear(GL_COLOR_BUFFER_BIT);
However, this doesn't do anything. It does do something on screen when I comment out the GL_QUADS block above; it's as if glClear doesn't affect the texture itself but rather the screen directly. It's confusing. I'm not very familiar with OpenGL, so how do I blank my texture?
OpenGL glClear clears the currently bound framebuffer. To make this operate on a texture, create a framebuffer object, attach the texture to it, bind the framebuffer object for drawing operations, and clear it (which will then clear the texture).
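If framebuffer objects are available (core OpenGL 3.0+ or the ARB_framebuffer_object extension), the clear described above could be sketched like this; the function name and the assumption of an existing texture id are illustrative:

```cpp
#include <GL/gl.h>

// Sketch: clear an existing 2D texture `tex` to transparent black by
// attaching it to a temporary FBO and clearing that FBO.
// Assumes an FBO-capable context is current.
void clearTexture(GLuint tex) {
    GLuint fbo = 0;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);

    // Clearing the currently bound framebuffer now clears the texture.
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    // Restore the default framebuffer so normal rendering hits the window.
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glDeleteFramebuffers(1, &fbo);
}
```

In the OpenCL interop case this would be done before calling clEnqueueAcquireGLObjects, as noted in the question.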

Drawing a line along z axis in opengl

I am trying to draw a line from the point (0.5, -0.5, 0.0) to (0.5, -0.5, -0.5) using GL_LINES, going in the z direction.
Initialization of the window:
glutInitDisplayMode(GLUT_DOUBLE|GLUT_DEPTH|GLUT_RGB);
Setup in the display function:
glClearColor(1.0, 0.0, 0.0, 0.0);
glEnable(GL_DEPTH_TEST);
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
glColor3f(0.0, 0.0, 0.0);
However, the line is not displayed on the screen. How do I display a line going along the z direction?
You should probably share the piece of code where you actually attempt to draw the line using GL_LINES. Without it I must assume that you don't know how to do it properly. The correct way to draw the line after the setup is:
glBegin(GL_LINES);
glVertex3f(0.5f, -0.5f, 0.0f);
glVertex3f(0.5f, -0.5f, -0.5f);
glEnd();
Have you tried it that way? Also, if you use double buffering, don't forget to swap buffers after rendering, using glutSwapBuffers() when using glut or SwapBuffers(hdc) when not using it.
Edit:
Additionally, you need to set up your camera correctly and move it slightly to actually see the line that you draw (it is possible that it's outside of the view area):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(45,1,0.1,100); //Example values but should work in your case
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
This piece of code should set up your projection correctly. Now, OpenGL by default looks in the negative direction of the Z axis, so if you want to see your line you need to move the camera towards the positive end of the Z axis (in fact the code moves the whole world, not just your camera, but it doesn't matter):
glTranslatef(0.0f, 0.0f, -1.0f);
Use this before glBegin and you should be good to go.
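Putting the answer's pieces together, a minimal GLUT display callback could look like the sketch below. It assumes the window was created with the GLUT_DOUBLE|GLUT_DEPTH|GLUT_RGB mode shown in the question:

```cpp
#include <GL/glut.h>

// Sketch of a complete display callback for the question's setup:
// red background, black line from (0.5,-0.5,0.0) to (0.5,-0.5,-0.5).
void display() {
    glClearColor(1.0f, 0.0f, 0.0f, 0.0f);
    glEnable(GL_DEPTH_TEST);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Projection: perspective so depth along Z is visible at all.
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(45.0, 1.0, 0.1, 100.0);

    // Modelview: pull the scene into the view volume.
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(0.0f, 0.0f, -1.0f);

    glColor3f(0.0f, 0.0f, 0.0f);
    glBegin(GL_LINES);
    glVertex3f(0.5f, -0.5f, 0.0f);
    glVertex3f(0.5f, -0.5f, -0.5f);
    glEnd();

    glutSwapBuffers();  // required with GLUT_DOUBLE
}
```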

Fast way to make vertices darker?

To make a lighting system for a voxel game, I need to specify a darkness value per vertex. I'm using GL_COLOR_MATERIAL and specifying a color per vertex, like this:
glEnable(GL_COLOR_MATERIAL);
glBegin(GL_QUADS);
glColor3f(0.6f, 0.6f, 0.6f);
glTexCoord2f(...);
glVertex3f(...);
glColor3f(0.3f, 0.3f, 0.3f);
glTexCoord2f(...);
glVertex3f(...);
glColor3f(0.7f, 0.7f, 0.7f);
glTexCoord2f(...);
glVertex3f(...);
glColor3f(0.9f, 0.9f, 0.9f);
glTexCoord2f(...);
glVertex3f(...);
glEnd();
This is working, but with many quads it is very slow. I'm using display lists too. Any good ideas on how to make vertices darker?
You're using immediate mode (glBegin, glEnd and everything in between). If performance is what you need, then I recommend you stop doing that.
What you're probably after is a generic vertex attribute. And lo and behold: modern OpenGL actually has exactly this: generic attributes. They even went so far as to do the right thing and do away with the predefined attributes (position, color, normal, texcoords, etc.), leaving only generic attributes in OpenGL 3 core and later.
The functions glVertexAttrib (most of the time a uniform does the job better) and glVertexAttribPointer are your friends. Specify how vertex attributes are processed using an appropriate vertex shader.

OpenGL - PBuffer render to Texture

After my last post, when someone recommended pbuffers to me, I dug around a bit on Google and found some good examples of offscreen rendering using pbuffers. One example, available on NVIDIA's website, does simple offscreen rendering: it renders in the pbuffer context, reads the pixels into an array, and then calls the OpenGL DrawPixels functions to display them.
I changed this example in order to create a texture from the pixels read: render offscreen, read the pixels into the array, and then initialize a texture with this color-bit array. But this looks very redundant to me. We render the image, copy it from graphics card memory into our memory (the array), and later copy it back to the graphics card in order to display it on screen, just in a different rendering context. The copies I am making just to display the rendered texture seem wasteful, so I tried a different approach using glCopyTexImage2D(), which unfortunately doesn't work. I'll show the code and explanations:
mypbuffer.Initialize(256, 256, false, false);
- The false values are for sharing context and sharing objects. They are false because this fantastic graphics card doesn't support them.
Then I perform the usual initializations to enable blending and GL_TEXTURE_2D.
CreateTexture();
mypbuffer.Activate();
int viewport[4];
glGetIntegerv(GL_VIEWPORT,(int*)viewport);
glViewport(0,0,xSize,ySize);
DrawScene(hDC);
//save data to texture using glCopyTexImage2D
glBindTexture(GL_TEXTURE_2D,texture);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
0,0, xSize, ySize, 0);
glClearColor(.0f, 0.5f, 0.5f, 1.0f); // Set The Clear Color To Medium Blue
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glViewport(viewport[0],viewport[1],viewport[2],viewport[3]);
// glBindTexture(GL_TEXTURE_2D,texture);
first = false;
mypbuffer.Deactivate();
- The DrawScene function is very simple: it just renders a triangle and a rectangle, which is supposed to be rendered offscreen (I HOPE). CreateTexture() creates an empty texture. The function should work, as it was tested in the previous approach I described, and it works.
After this, in the main loop, I just do the following:
glClear(GL_COLOR_BUFFER_BIT);
glBindTexture(GL_TEXTURE_2D,texture);
glRotatef(theta, 0.0f, 0.0f, 0.01f);
glBegin(GL_QUADS);
//Front Face
glTexCoord2f(0.0f, 0.0f);
glVertex3f(-0.5, -0.5f, 0.5f);
glTexCoord2f(1.0f, 0.0f);
glVertex3f( 0.5f, -0.5f, 0.5f);
glTexCoord2f(1.0f, 1.0f);
glVertex3f( 0.5f, 0.5f, 0.5f);
glTexCoord2f(0.0f, 1.0f);
glVertex3f(-0.5f, 0.5f, 0.5f);
glEnd();
SwapBuffers(hDC);
theta = 0.10f;
Sleep (1);
The final result is just a window with a blue background; nothing actually got rendered. Any idea why this is happening? My graphics card doesn't support the wgl_ARB_render_texture extension, but that shouldn't be a problem when calling glCopyTexImage2D(), right?
My card doesn't support FBOs either.
What you must do is sort of "connect" your two OpenGL contexts so that the textures of your PBuffer context also show up in the main render context. The term you need to look for is "display list sharing". On Windows you connect the contexts retroactively using wglShareLists; on X11 and Mac OS X you must supply the handle of the context to be shared at context creation.
An entirely different possibility and working just as well is reusing the same context on the PBuffer. It's a little known fact, that you can use OpenGL render contexts not only on the drawable it has been created with first, but on any drawable with compatible settings. So if your PBuffer matches your main window's pixel format, you can detach the render context from the main window and attach it to the PBuffer. Of course you then need low level access to the main window's device context/drawable, which is normally hidden behind a framework.
You should check whether your OpenGL implementation supports framebuffer objects: these objects can be render targets, and they can have textures attached as color buffers, indeed rendering directly into a texture.
This should be the way to go; otherwise, your method is the alternative.

Texture mapping with openGL

I was texture mapping a primitive, a quad to be exact. I had a problem with the texture being somehow rotated 90 degrees in the anticlockwise direction. I thought the problem was in the loading code of the texture, but it turned out to actually be a problem with the draw function.
So this was the code which drew the picture erroneously:
glVertex2f(0.0f, 0.0f); glTexCoord2f(0.0f, 1.0f);
glVertex2f(0.5f, 0.0f); glTexCoord2f(1.0f, 1.0f);
glVertex2f(0.5f, 0.5f); glTexCoord2f(1.0f, 0.0f);
glVertex2f(0.0f, 0.5f); glTexCoord2f(0.0f, 0.0f);
and this one drew it just as I intended it to be drawn:
glTexCoord2f(0.0f, 1.0f); glVertex2f(0.0f, 0.0f);
glTexCoord2f(1.0f, 1.0f); glVertex2f(0.5f, 0.0f);
glTexCoord2f(1.0f, 0.0f); glVertex2f(0.5f, 0.5f);
glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f, 0.5f);
What causes this kind of behaviour? I really didn't think this would have such effects to the drawing.
I really didn't think this would have such effects to the drawing.
Think about it. What does glTexCoord do? It specifies the texture coordinate, correct? But the texture coordinate of what?
Yes, you know it specifies the texture coordinate of the next vertex, but OpenGL doesn't know that. All glTexCoord does is set the values you pass it into a piece of memory.
glVertex does something more. It sets the vertex position, but it also tells OpenGL, "Take all of the vertex values I've set so far and render a vertex with it." That's why you can't call glVertex outside of glBegin/glEnd, even though you can do that with glTexCoord, glColor, etc.
So when you do glTexCoord(...); glVertex(...), you're saying "set the current texture coordinate to X, then set the position to Y and render with these values." When you do glVertex(...); glTexCoord(...);, you're saying, "set the position to Y and render with the previously set values, then set the current texture coordinate to X."
It's a little late to be setting the texture coordinate after you've already told OpenGL to render a vertex.
OpenGL functions in a state-wise fashion. Many GL function calls serve to change the current state so that when you call some other functions, they can use the current state to do the proper operation.
In your situation, the glVertex2f() call uses the current texture-coordinate state to define which part of the texture gets mapped to which vertex. In your first series of calls, the first call to glVertex2f() had no texture state set by you, so it used the initial current texture coordinate, which is (0, 0, 0, 1), effectively (0.0f, 0.0f). The second call to glVertex2f() then used the state set by your first call to glTexCoord2f(), the third call to glVertex2f() used the state set by the second call to glTexCoord2f(), and so on.
In the future, make sure to set the proper GL state before you call the functions which use those states, and you should be good to go.
The order in which you call glVertex and glTexCoord definitely matters! Whenever you specify vertex attributes like glTexCoord, glColor, etc., they apply to all future vertices that you draw, until you change one of those attributes again. So in the previous example, your first vertex was being drawn with some unspecified previous tex coord, the second vertex with tex coord (0.0, 1.0), etc.
Probably the best explanation there is online: Texture mapping - Tutorial.
And also, just to make sure, the texture coordinates are as follows (and the order in which they are called matters!):
(0,0) bottom left corner
(0,1) upper left corner
(1,0) bottom right corner
(1,1) upper right corner