How do I draw shapes in JOGL with OpenGL 4.0?

I'm trying to write code to draw shapes on my JOGL canvas. I have the canvas on screen, but I can't figure out how to draw shapes. In GL2 examples, I see code like:
gl.glBegin( GL2.GL_LINES );
gl.glVertex3f( 0.0f,0.75f,0 );
gl.glVertex3f( -0.75f,0f,0 );
gl.glEnd();
However, this doesn't work for me when gl is an instance of GL4 (in the example above, gl is an instance of GL2).

The OpenGL 3.x core profile deprecated (and later removed) the immediate-mode calls glBegin/glEnd/glVertex*.
Use vertex arrays or vertex buffer objects instead.
I cannot offer an example in JOGL, but in other languages one of the relevant calls is glDrawArrays(). You'll need to set up and enable the data source arrays before calling glDrawArrays(); if memory serves well, you'll be interested in glEnableVertexAttribArray(), glVertexAttribPointer() et al. (the legacy glEnableClientState()/glVertexPointer() client-state calls are also gone from the core profile, so they won't help with a GL4 instance).
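Since the answer above can't show JOGL, here is a minimal sketch of the buffer-object approach in C-style OpenGL for a core profile context; the JOGL GL4 interface exposes the same entry points as methods on the gl object (taking Java NIO buffers for the pointer parameters). The attribute location 0 and the two-vertex line are just illustrative, and a shader program must be bound for the draw to produce output in a core profile.
/* One-time setup: put the two line endpoints into a vertex buffer object
   and record the attribute layout in a vertex array object. */
GLfloat verts[] = {
     0.0f,  0.75f, 0.0f,
    -0.75f, 0.0f,  0.0f,
};

GLuint vao, vbo;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);

/* Attribute 0 = position, 3 floats per vertex, tightly packed. */
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void *)0);

/* Per frame: bind your shader program and the VAO, then draw. */
glBindVertexArray(vao);
glDrawArrays(GL_LINES, 0, 2);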

Related

How can I use glut primitives with glfw3?

I'm learning about OpenGL graphics and the demonstrator has given us glut; however, it doesn't satisfy the needs of the project.
So, I want to move on to glfw3 but am having an issue with primitives like GL_TRIANGLES, GL_TRIANGLE_FAN, GL_LINE_LOOP, etc.
How can I use the primitives to draw polygons and lines using glfw3?

OpenGL anti-aliasing of shapes on a texture after it is created

I have a transparent OpenGL texture which has some simple shapes drawn on it by OpenGL:
circles, polygons, lines. They are drawn without anti-aliasing, multi-sampling, etc. Therefore, they have jaggy borders.
I don't have access to the process of texture creation, so I cannot enable multi-sampling.
Is there a way to make those smooth AFTER drawing is done?
There are image-based anti-aliasing filters such as FXAA and MLAA that will work in this situation. I hesitate to call them anti-aliasing, because they do not really avoid aliasing; they just hide it after the fact. They are more akin to intelligent blur filters.
I know from your other question that you do not want to use FBOs, so that leads me to believe you are using an OpenGL 2.1 or older codebase. FXAA can be implemented in GLSL 1.20, but it works better in 1.30 (GL 3.0). The one thing I do not know about is using FXAA on an image that includes transparency; it expects luminance to be encoded in the alpha channel (or sRGB, which is not a GL 2.1 feature).
You will probably not want to apply FXAA to your texture directly, rather you would need to draw into a PBuffer and apply FXAA after you blend your input texture.
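As a very rough sketch of the pass structure described above (everything named here is an assumption: makePBufferCurrent/makeWindowCurrent stand in for the platform-specific WGL/GLX pbuffer plumbing, drawSceneWithInputTexture and drawFullscreenQuad are your own drawing code, fxaaProgram is a compiled GLSL 1.20 FXAA shader, and "rcpFrame" is just an illustrative uniform name):
/* 1. Render the blended scene (including the jaggy input texture) into a pbuffer
      instead of the window. */
makePBufferCurrent(pbuffer);
drawSceneWithInputTexture();

/* 2. Copy the pbuffer contents into a texture the post-process pass can sample.
      sceneTex is assumed to be an already-created texture of the pbuffer's size. */
glBindTexture(GL_TEXTURE_2D, sceneTex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);

/* 3. Back on the window: run the FXAA (or MLAA) shader over a fullscreen quad. */
makeWindowCurrent();
glUseProgram(fxaaProgram);
glUniform2f(glGetUniformLocation(fxaaProgram, "rcpFrame"), 1.0f / width, 1.0f / height);
drawFullscreenQuad(sceneTex);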

Is there any way to bind a 3D surface created in CUDA to an OpenGL texture?

Here is the scenario:
I pass a 3D OpenGL texture to CUDA via cudaBindTextureToArray, transform it with a non-rigid transformation, and write the result to a 3D surface. Then I want to pass it as a texture to a GLSL shader for volume rendering, but GLSL only knows texture IDs. How can I use this 3D surface as an ordinary OpenGL texture?
Pseudo code:
create a texture with OpenGL like this
glTexImage3D(GL_TEXTURE_3D, 0,............);
pass it to CUDA
create and fill a surface with
cutilSafeCall(cudaBindSurfaceToArray(volumeTexOut, outTexture->content));
......
..
cutilSafeCall( cudaMalloc3DArray(&vol->content, &vol->channelDesc, dataSize, cudaArraySurfaceLoadStore ) );
after the transformation, ..
surf3Dwrite(short(voxel), volumeTexOut, sizeof(short)*x1, y1, z1);
and now I want to use this surface as an OpenGL texture and pass it to GLSL.
Update: the APIs suggested below are quite old and have since been deprecated. Please see the current Graphics Interop APIs for CUDA.
CUDA OpenGL interop is (unfortunately) a one-way API: to interoperate between CUDA and OpenGL you must allocate all memory needed in your GL code using OpenGL, and then bind it to CUDA arrays or device pointers in order to access it in CUDA. You cannot do the opposite (allocate memory with CUDA and access it from OpenGL). This goes for data that is either read or written by CUDA.
So for your output, you want to allocate the 3D texture in OpenGL, not with cudaMalloc3DArray(). Then you want to call cudaGraphicsGLRegisterImage with the cudaGraphicsRegisterFlagsSurfaceLoadStore flag, and then bind a surface to the resulting array using cudaBindSurfaceToArray. This is discussed in section 3.2.11.1 of the CUDA 4.2 version of the CUDA C Programming Guide. The CUDA reference guide provides full documentation of the functions I mentioned.
Note that surface writes require a compute capability 2.0 or higher GPU.
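A minimal sketch of that flow, using the legacy surface-reference API the question already uses (the update above notes it is deprecated in current CUDA). dimX/dimY/dimZ, the transformKernel launch, and the GL_R16 internal format are assumptions; volumeTexOut is the surface reference from the question, and error checking is omitted:
// Requires <cuda_gl_interop.h>.
// 1. Allocate the 3D texture on the OpenGL side, not with cudaMalloc3DArray().
GLuint volTex;
glGenTextures(1, &volTex);
glBindTexture(GL_TEXTURE_3D, volTex);
glTexImage3D(GL_TEXTURE_3D, 0, GL_R16, dimX, dimY, dimZ, 0,
             GL_RED, GL_SHORT, NULL);

// 2. Register it with CUDA for surface load/store access.
cudaGraphicsResource *volRes = NULL;
cudaGraphicsGLRegisterImage(&volRes, volTex, GL_TEXTURE_3D,
                            cudaGraphicsRegisterFlagsSurfaceLoadStore);

// 3. Whenever CUDA writes the volume: map the resource, bind the underlying
//    array to the surface reference, launch the kernel, then unmap.
cudaGraphicsMapResources(1, &volRes, 0);
cudaArray_t volArray = NULL;
cudaGraphicsSubResourceGetMappedArray(&volArray, volRes, 0, 0);
cudaBindSurfaceToArray(volumeTexOut, volArray);
transformKernel<<<grid, block>>>(/* ... */);   // kernel doing the surf3Dwrite() calls
cudaGraphicsUnmapResources(1, &volRes, 0);

// 4. After unmapping, volTex is an ordinary GL texture: bind it to a texture unit
//    and sample it with a sampler3D in the GLSL volume-rendering shader.
glBindTexture(GL_TEXTURE_3D, volTex);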

How to make textured fullscreen quad in OpenGL 2.0 using SDL?

Simple task: draw a fullscreen quad with a texture, nothing more, so we can be sure the texture will fill the whole screen space. (We will do some more shader magic later.)
Drawing a fullscreen quad with a simple fragment shader was easy, but now we have been stuck for a whole day trying to make it textured. We have read plenty of tutorials, but none of them helped. Those about SDL mostly use OpenGL 1.x; those about OpenGL 2.0 are not about texturing, or not about SDL. :(
The code is here. Everything is in colorLUT.c, and the fragment shader is in colorLUT.fs. The result is a window of the same size as the image, and if you comment out the last line in the shader you get a nice red/green gradient, so the shader is fine.
Texture initialization hasn't changed compared to OpenGL 1.4. Tutorials will work fine.
If the fragment shader works but you don't see the texture (and get a black screen), texture loading is broken or the texture hasn't been set up correctly. Disable the shader and try displaying a textured polygon with the fixed-function pipeline.
You may want to call glPixelStorei(GL_UNPACK_ALIGNMENT, 1) before initializing the texture. The default value is 4.
An easier way to align the texture to the screen is to add a vertex shader and pass texture coordinates through it, instead of trying to calculate them from gl_FragCoord.
You're passing the surface size into the "resolution" uniform. This is an error; you should be passing the viewport size instead.
You may want to generate mipmaps. Either generate them yourself, or use GL_GENERATE_MIPMAP, which is available in OpenGL 2 (but was deprecated in later versions).
OpenGL.org has specifications for OpenGL 2.0 and GLSL 1.5. Download them and use them as reference, when in doubt.
The NVIDIA OpenGL SDK has examples you may want to check; they cover shaders.
And there's the "OpenGL Orange Book" (OpenGL Shading Language), which deals specifically with shaders.
Next time, include the code in the question.
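A minimal sketch pulling the suggestions above together. It assumes an SDL-created OpenGL 2.0 context, an already-loaded SDL_Surface named image with tightly packed RGB pixels, and a hypothetical compileProgram() helper wrapping the usual glCreateShader/glCompileShader/glAttachShader/glLinkProgram boilerplate:
/* Texture setup: unchanged since GL 1.x. Unpack alignment of 1 because the
   surface rows are assumed tightly packed. */
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); /* no mipmaps needed */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, image->w, image->h, 0,
             GL_RGB, GL_UNSIGNED_BYTE, image->pixels);

/* The vertex shader forwards texture coordinates instead of reconstructing
   them from gl_FragCoord and "resolution". */
const char *vs =
    "varying vec2 uv;\n"
    "void main() {\n"
    "    uv = gl_MultiTexCoord0.xy;\n"
    "    gl_Position = gl_Vertex; /* quad is already in clip space */\n"
    "}\n";
const char *fs =
    "uniform sampler2D tex;\n"
    "varying vec2 uv;\n"
    "void main() { gl_FragColor = texture2D(tex, uv); }\n";
GLuint prog = compileProgram(vs, fs);   /* hypothetical helper */

/* Draw the fullscreen quad; immediate mode is still legal in GL 2.0 and keeps
   the sketch short, but a small VBO works the same way. */
glUseProgram(prog);
glUniform1i(glGetUniformLocation(prog, "tex"), 0);
glBindTexture(GL_TEXTURE_2D, tex);
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex2f(-1, -1);
glTexCoord2f(1, 0); glVertex2f( 1, -1);
glTexCoord2f(1, 1); glVertex2f( 1,  1);
glTexCoord2f(0, 1); glVertex2f(-1,  1);
glEnd();
SDL_GL_SwapBuffers();   /* SDL 1.2; use SDL_GL_SwapWindow(window) with SDL 2 */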

OpenGL concept question

I'm just starting OpenGL programming in Win32 C++ so don't be too hard on me :) I've been wandering along the NeHe tutorials and 'the red book' a bit now, but I'm confused. So far I've been able to set up an OpenGL window, draw some triangles etc, no problem. But now I want to build a model and view it from different angles. So do we:
Load a model into memory (saving triangles/quads coordinates in structs on the heap) and in each scene render we draw all stuff we have to the screen using glVertex3f and so on.
Load/draw the model once using glVertex3f etc and we can just change the viewing position in each scene.
Other...?
It seems to me option 1 is the most plausible from all I've read so far; however, it seems a bit, ehh... dumb! Do we have to decide which objects are visible and only draw those? Isn't that very slow? Option 2 seems more attractive :)
EDIT: Thanks for all the help, I've decided to do: read my model from file, then load it into the GPU memory using glBufferData and then feed that data to the render function using glVertexPointer and glDrawArrays.
First you need to understand that OpenGL doesn't actually know the term "model"; all OpenGL sees is a stream of vertices coming in, and depending on the current mode it uses that stream of vertices to draw triangles to the screen.
Every frame drawing iteration follows some outline like this:
clear all buffers
for each window element (main scene, HUD, minimap, etc.):
    set scissor and viewport
    conditionally clear depth and/or stencil
    set projection matrix
    set modelview matrix for initial view
    for each model:
        apply model transformation onto matrix stack
        bind model data (textures, vertices, etc.)
        issue model drawing commands
swap buffers
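As a rough C-style illustration of that outline for a single window element, using fixed-function matrix calls for brevity (the Model struct, the models container, drawModel() and hdc are placeholders for your own model data, drawing code and Win32 device context):
void drawFrame(int winW, int winH)
{
    /* clear all buffers */
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

    /* set scissor and viewport (GL_SCISSOR_TEST must be enabled for the scissor to apply) */
    glViewport(0, 0, winW, winH);
    glScissor(0, 0, winW, winH);

    /* set projection matrix */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0, (double)winW / winH, 0.1, 100.0);

    /* set modelview matrix for the initial view */
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.0, 0.0, 5.0,  0.0, 0.0, 0.0,  0.0, 1.0, 0.0);

    /* for each model: transform, bind data, draw */
    for (const Model &m : models) {
        glPushMatrix();
        glTranslatef(m.x, m.y, m.z);   /* apply model transformation onto the matrix stack */
        drawModel(m);                  /* bind model data and issue drawing commands */
        glPopMatrix();
    }

    SwapBuffers(hdc);                  /* swap buffers (Win32, matching the asker's setup) */
}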
OpenGL does not remember what's happening up there. There was (and in the compatibility profile still is) a facility called display lists, but they cannot store every kind of command, and they were deprecated and removed from recent core OpenGL versions. The immediate-mode commands glBegin, glEnd, glVertex, glNormal and glTexCoord have been removed as well.
So the idea is to upload some data (textures, vertex arrays, etc.) into OpenGL buffer objects. However, only textures are directly understood by OpenGL for what they are (images). All other kinds of buffers require you to tell OpenGL how to deal with them. This is done by calls to gl{Vertex,Color,TexCoord,Normal,Attrib}Pointer to set the data access parameters, and to glDraw{Arrays,Elements} to trigger OpenGL into fetching a stream of vertices to feed the rasterizer.
You should upload the data to the GPU memory once, and then draw each frame using as few commands as possible.
Previously, this was done using display lists. Nowadays, it's all about vertex buffer objects (a.k.a. VBOs), so look into those.
Here's a tutorial about VBOs, written back when they were still only an extension and not yet a core part of OpenGL.
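A minimal sketch of that upload-once / draw-every-frame pattern, in the glVertexPointer/glDrawArrays style the question's EDIT settled on (vertices and vertexCount are placeholders for the model data read from file):
/* Once, at load time: copy the model's vertex data into a buffer object on the GPU. */
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, vertexCount * 3 * sizeof(GLfloat),
             vertices, GL_STATIC_DRAW);   /* vertices: x,y,z floats per vertex */

/* Every frame: point the vertex array at the buffer and issue one draw call,
   instead of one glVertex3f call per vertex. */
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, (void *)0);
glDrawArrays(GL_TRIANGLES, 0, vertexCount);
glDisableClientState(GL_VERTEX_ARRAY);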