Render different OpenGL point sprite at each vertex

Basic question: with OpenGL, can a shader program be made to use a single glDrawArrays call (with GL_POINTS) to draw a different point sprite at each vertex?
More info:
I have an OpenGL desktop program (using SharpGL in a WPF application) that can display thousands of 2D tracks. In part, a track is a series of time-stamped points, and a portion of the points are displayed depending on a time span around a changeable CurrentTime. Other properties of each point determine the point's color. Each track binds its vertex array, color array, and a time-stamp array and calls glDrawArrays to render its points. A shader program does the rest.
I've recently started using point sprites to give different types of tracks different symbols for their points. I'd like to give different points on a track different symbols depending on other attributes of each point. I'd like to do this with a single glDrawArrays call for each track. So the thought is that an array of sprites would do the trick (applying a different sprite to each vertex). Is this possible? Am I missing a better solution?

Sure. OpenGL automatically generates texture coordinates from 0.0 to 1.0 across each point sprite (gl_PointCoord in the shader), but there's no reason your fragment shader can't remap them. I'd put all the sprite images into one large texture atlas, and pass the attribute(s) that determine which image to use into the fragment shader.
In OpenGL 3.2 or better, there is a built-in gl_PrimitiveID variable you can use within the fragment shader; it's a counter incremented for each point, line, or triangle drawn. (It's a bit more complicated with geometry or tessellation shaders.) This might also be useful.
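As a rough sketch of the atlas idea (the attribute name spriteIndex, the uniform names, and the 4x4 atlas layout are all assumptions, not anything from the question):

```glsl
#version 330 core
// Fragment shader sketch: pick a tile out of a 4x4 sprite atlas.
// The vertex shader is assumed to forward a per-vertex attribute as
// "flat out float spriteIndex".
flat in float spriteIndex;   // which sprite this point should use
uniform sampler2D atlas;     // all sprite images packed into one texture
out vec4 fragColor;

const float ATLAS_COLS = 4.0;
const float ATLAS_ROWS = 4.0;

void main() {
    // gl_PointCoord runs from 0.0 to 1.0 across the point sprite;
    // remap it into the tile selected by spriteIndex.
    float col = mod(spriteIndex, ATLAS_COLS);
    float row = floor(spriteIndex / ATLAS_COLS);
    vec2 uv = (vec2(col, row) + gl_PointCoord) / vec2(ATLAS_COLS, ATLAS_ROWS);
    fragColor = texture(atlas, uv);
}
```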
Hope this helps.

Related

OpenGL: coloring a world map?

Here is a task that every GIS application can do: given some polygons, fill each polygon with a chosen color.
What is the best way of doing this repeatedly in OpenGL? That is, the polygons do not change, and I want to vary the coloring data to produce different renderings.
Redrawing polygons for each rendering is the most straightforward solution, but it seems to be a waste, since the geometries do not change at all.
Or is it better to create a stencil for each polygon, and stencil print the entire map? If there are too many polygons, will doing hundreds or thousands of rendering passes create a problem?
For each vertex of a polygon, map a certain color. That means when you send the data to the shaders, each vertex carries two attributes: a position, which is needed in the vertex shader, and a color, which will be used as the fragment color. That is the simplest way.
For example, think of a triangle drawn in OpenGL. If you send its vertices to the vertex shader and set a color in the fragment shader, every time a vertex enters the shader pipeline it is positioned accordingly and shaded on screen with the given color.
This is the technique used in the classic colored-triangle example, in which the colors interpolate: red mapped to one corner, green to another, and blue to the last. If you instead map the same color to every corner, you get a solid-colored triangle. That is the basic principle. You also draw the minimum number of triangles, and you need only one pair of shaders.
Note: a polygon is made out of N triangles, and you need to map the same color to every vertex of each triangle drawn in that polygon.
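A minimal sketch of that setup (attribute locations and names are just placeholders):

```glsl
// --- vertex shader ---
#version 330 core
layout(location = 0) in vec2 position;  // polygon vertex
layout(location = 1) in vec3 color;     // per-vertex fill color
out vec3 vColor;
void main() {
    vColor = color;
    gl_Position = vec4(position, 0.0, 1.0);
}

// --- fragment shader (a separate source) ---
#version 330 core
in vec3 vColor;
out vec4 fragColor;
void main() {
    // Same color on every vertex of a polygon => a flat fill.
    fragColor = vec4(vColor, 1.0);
}
```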
I think a bigger issue will be that OpenGL doesn't support arbitrary polygons or vector drawing in general, though there are libraries for this. You'll have to use an existing solution for vector drawing, or failing that, you'll have to convert your GIS data (usually a list of points per polygon) into triangles. This is likely the biggest obstacle.
The fact that the geometry doesn't change isn't really an issue: you would generally store the geometry in one or more buffers, then add logic to draw only what is visible inside your viewport, perhaps even going as far as generating geometry only for the visible area.
See also this question and its answers.
Rendering Vector Graphics in OpenGL?

Comparing textures in OpenGL ES 2.0

I am working on a painting app using the LibGDX framework, though this should be primarily OpenGL related.
Basically, I am looking for a way to prevent the sprites I use to draw from overlapping each other when they aren't fully opaque, as this creates a lot of unpleasant effects. Drawing the sprites at 1.0 alpha onto a texture and then drawing that texture back at the desired alpha gives the effect I want, but that method would involve constantly recreating the texture as the user is drawing, which is far too intensive to be viable.
From what I can see, the best option for me, in basic terms, is to sort of subtract one of these sprites from the other in the fragment shader. I am quite certain this route would work, but I cannot figure out how to get to the point where I can actually compare them in the fragment shader. Both will always use the same single texture, but they will be positioned in different spots. Is it at all possible to actually compare them like that, or is there a suitable alternative?
It's not actually possible to compare two textures applied to different geometry (sprites) that way in the fragment or vertex shader, because they will be rendered on different iterations of the shaders, at different points in time.
You could use two or more texture units to sample and subtract multiple textures, but they would have to be applied to the same vertices (sprites), which I think is not what you want.
A better approach would be to compute the proximity of the sprites before they are rendered. You could then either change their positions, or pass the proximity as a uniform value into the shaders, which could then be used to change the alpha of the fragment pixels for the sprites.
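A sketch of what the shader side of that could look like (the uniform names and the fade heuristic are assumptions):

```glsl
#version 330 core
// Fragment shader sketch: fade a sprite's alpha based on a CPU-computed
// overlap measure, so overlapping strokes don't accumulate opacity.
in vec2 vTexCoord;
uniform sampler2D spriteTex;
uniform float baseAlpha;   // the user's chosen brush opacity
uniform float proximity;   // 0 = isolated sprite, 1 = fully overlapped
out vec4 fragColor;

void main() {
    vec4 c = texture(spriteTex, vTexCoord);
    // One possible heuristic: reduce alpha where sprites crowd together.
    fragColor = vec4(c.rgb, c.a * baseAlpha * (1.0 - proximity));
}
```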

OpenGL: create complex and smoothed polygons

In my OpenGL project, I want to dynamically create smoothed polygons, similar to this one:
The problem lies mainly in the smoothing process. My procedure up to this point is first to create a VBO with randomly placed vertices.
Then, in my fragment shader (I'm using the programmable pipeline), the smoothing should happen; in other words, the curves should be created out of the previously defined "lines" between the vertices.
And exactly here is the problem: I am not very familiar with those complex mathematical algorithms that would determine whether a point is inside the "smoothed polygon" or not.
First up, you can't really do it in the fragment shader. The fragment shader is limited to setting the final(ish) color of a "pixel" (which is basically, but not exactly, an actual pixel) before it gets written to the screen. It can't create new points on a curve.
This page gives a nice overview of the different algorithms for creating smooth curves.
The general approach is to break a couple of points into multiple points using a geometry shader, and then render them just like a normal polygon. But I don't know the details. Try a Google search for "bezier geometry shader", for example.
Wait, I lie. I found a program here that does it in the fragment shader.
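For the geometry shader route, a minimal sketch (assuming the outline is drawn with GL_LINES_ADJACENCY so that every four vertices act as the control points of one cubic Bezier segment):

```glsl
#version 330 core
// Geometry shader sketch: expand 4 control points into a 16-segment
// line strip along a cubic Bezier curve.
layout(lines_adjacency) in;                 // 4 input vertices
layout(line_strip, max_vertices = 17) out;

vec4 bezier3(vec4 p0, vec4 p1, vec4 p2, vec4 p3, float t) {
    // de Casteljau evaluation
    vec4 a = mix(p0, p1, t);
    vec4 b = mix(p1, p2, t);
    vec4 c = mix(p2, p3, t);
    return mix(mix(a, b, t), mix(b, c, t), t);
}

void main() {
    const int STEPS = 16;
    for (int i = 0; i <= STEPS; ++i) {
        float t = float(i) / float(STEPS);
        gl_Position = bezier3(gl_in[0].gl_Position, gl_in[1].gl_Position,
                              gl_in[2].gl_Position, gl_in[3].gl_Position, t);
        EmitVertex();
    }
    EndPrimitive();
}
```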

OpenGL VBO shader

I have a 2D VBO that represents points in 2D space. What is the best way to draw an arbitrary shape at each point? Let's say I wanted to draw a red 'X' at each one.
Can I use a shader to do this?
You don't necessarily need a special shader for that; you might just use point sprites. This basically means drawing the VBO as a point set (using glDrawArrays(GL_POINTS, ...)) and enabling point sprites to draw a textured square (with a texture of the 'X') at the position of each point, assuming a point size of more than 1 pixel.
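As a variation, you don't even need the texture: since gl_PointCoord runs from 0.0 to 1.0 across the sprite, a fragment shader can draw the red 'X' procedurally. A sketch (the stroke width is arbitrary):

```glsl
#version 330 core
// Fragment shader sketch: draw a red 'X' inside the point sprite by
// keeping only fragments near the two diagonals of the unit square.
out vec4 fragColor;

void main() {
    vec2 p = gl_PointCoord;
    float halfWidth = 0.08;                  // stroke thickness, arbitrary
    float d = min(abs(p.x - p.y),            // near the diagonal y = x
                  abs(p.x + p.y - 1.0));     // near the diagonal y = 1 - x
    if (d > halfWidth) discard;              // outside both strokes
    fragColor = vec4(1.0, 0.0, 0.0, 1.0);    // red 'X'
}
```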
For actually generating geometry at the location of each point you could use the geometry shader. This way you also render the VBO as point set and generate two lines (the 'X') or whatever geometry for each point inside the geometry shader.
An alternative to the geometry shader is instanced arrays (requiring the same GL3/DX10 hardware as the geometry shader). This way you draw multiple instances of the 'X' shape and source the instances' individual positions from the point VBO, using an attribute that is advanced once per instance (see the sketch below).
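A sketch of the vertex shader for the instanced variant (attribute locations and the pointScale uniform are assumptions; the per-point position attribute would get glVertexAttribDivisor(1, 1) on the CPU side):

```glsl
#version 330 core
// Vertex shader sketch for instanced arrays: the 'X' is a small static
// line set, and each instance fetches its own position from the point VBO.
layout(location = 0) in vec2 shapeVertex;   // one vertex of the 'X' lines
layout(location = 1) in vec2 instancePos;   // divisor = 1: advances per instance
uniform vec2 pointScale;                    // size of the 'X' in clip space

void main() {
    gl_Position = vec4(instancePos + shapeVertex * pointScale, 0.0, 1.0);
}
```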
The last alternative would be to generate the shapes' geometry manually on the CPU, so that you end up with a line set or a quad set containing all the 'X's as lines or sprites or whatever.
But the easiest (and maybe fastest, though I'm not sure about that) way should be the point sprite approach mentioned first, as the usual point-sprite clipping problems shouldn't matter much in your case, and you don't seem to need 3D shapes anyway. This way you neither need to generate the geometry yourself on the CPU, nor do you need special shaders or GL3/DX10 hardware (although this is quite common nowadays). All you need is a texture of the shape and to enable point sprites (which have been core since OpenGL 2.0).
If all these general ideas don't tell you anything, you might want to delve a little deeper into OpenGL and real-time computer graphics in general.

OpenGL: Using shaders to create vertex lighting by using pre-calculated colormap?

First of all, I have very little knowledge of what shaders can do, and I am very interested in making vertex lighting. I am attempting to use a 3D colormap which would be used to calculate the vertex color at that position of the world, and also to interpolate the color by using the nearby colors from the colormap.
I can't use typical OpenGL lighting because it's probably too slow, and there are a lot of lights I need to render. I am going to "render" the lights into the colormap first, and then I could manually map every vertex drawn with the corresponding color from the colormap.
...Or I could somehow automate this process, so I wouldn't have to change the color values of the vertices myself; perhaps a shader could do this for me?
Question is... is this possible, and if it is, what do I need to know to make it possible?
Edit: Note that I also need to update the lightmap efficiently, without caring about the size of the lightmap, so an update should touch only the specific part of the lightmap I want to change.
It almost sounds like what you want to do is render the lights to your color map, then use your color map as a texture, but instead of decal mode set it to modulate mode, so it's multiplied with the existing color instead of just replacing it.
That is different in one way, though: instead of affecting just the vertices, it maps to the individual fragments (pixels, in essence).
Edit: What I had in mind wasn't a 3D texture -- it was a cube map. Basically, create a virtual cube surrounding everything in your "world". Create a 2D texture for each face of that cube. Render your coloring to the cube map. Then, to color a vertex you (virtually) extend a ray outward from the center, through the vertex, to the cube. The pixel you hit on the cube map gives you the color of lighting for that vertex.
Updating should be relatively efficient -- you have normal 2D textures for the top, bottom, front, etc., and you update them as needed.
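A sketch of how a vertex shader could sample such a cube map (all names here are assumptions; note the explicit LOD, since vertex shaders have no automatic mip selection):

```glsl
#version 330 core
// Vertex shader sketch: light each vertex by casting a ray from the cube's
// centre through the vertex and sampling the pre-rendered lighting cube map.
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 baseColor;
uniform mat4 mvp;
uniform vec3 cubeCenter;        // centre of the virtual cube around the world
uniform samplerCube lightMap;   // lighting rendered into the six faces
out vec3 vColor;

void main() {
    vec3 dir = normalize(position - cubeCenter);
    vec3 light = textureLod(lightMap, dir, 0.0).rgb;
    vColor = baseColor * light;  // modulate, as described above
    gl_Position = mvp * vec4(position, 1.0);
}
```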
If you can't use the fixed-function pipeline, the best way to do per-vertex lighting is to do all the lighting calculations per vertex in the vertex shader; when you then pass the result on to the fragment shader, it will be correctly interpolated across the face.
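For illustration, a minimal per-vertex (Gouraud) diffuse sketch, assuming a single directional light passed in as uniforms:

```glsl
#version 330 core
// Vertex shader sketch: compute diffuse lighting per vertex; the result is
// interpolated across the face on its way to the fragment shader.
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 normal;
uniform mat4 mvp;
uniform mat3 normalMatrix;
uniform vec3 lightDir;      // normalized direction towards the surface
uniform vec3 lightColor;
out vec3 vLighting;

void main() {
    vec3 n = normalize(normalMatrix * normal);
    vLighting = lightColor * max(dot(n, -lightDir), 0.0);
    gl_Position = mvp * vec4(position, 1.0);
}
```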
Another way to deal with performance issues when using a lot of light sources is to use deferred rendering, as it only does lighting calculations on the geometry that is actually visible.
That is possible, but it will not be efficient on current hardware.
You want to render light volumes into a 3D texture. The rasterizer works on a 2D surface, so your volumes have to be split along one of the axes. The split can be done in one of the following ways:
Different draw calls for each split
Instanced draw, with layer selection based on gl_InstanceID (requires a geometry shader; see the sketch below)
Branch in geometry shader directly from a single draw call
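A sketch of the instanced option (assuming the 3D texture is attached as a layered framebuffer with glFramebufferTexture, and the vertex shader forwards gl_InstanceID as "flat out int vInstance"):

```glsl
#version 330 core
// Geometry shader sketch: route each instance of the light-volume geometry
// to its own slice of the 3D texture via gl_Layer.
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;
flat in int vInstance[];   // gl_InstanceID, forwarded by the vertex shader

void main() {
    for (int i = 0; i < 3; ++i) {
        gl_Layer = vInstance[0];   // the slice this instance renders into
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}
```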
In order to implement it, I would suggest reading the GL 3 specification and examples. It's not going to be easy, nor will the result be fast enough for complex scenes.