How to obtain visible vertex indices and face indices using OpenGL?

Is it possible to obtain visible vertex and/or face indices in OpenGL?
I have heard of unprojecting screen points from mouse clicks and then using some space-search algorithm to do the rest of the job. This seems to be somewhat inefficient.
I was wondering, though I could not find anything about it online, whether it is possible to obtain those vertices directly from a GPU buffer.
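One common GPU-assisted technique (a sketch of a well-known approach, not something you can read straight out of the vertex buffer) is an ID pass: render the scene once into an offscreen framebuffer with an unsigned-integer color attachment, write gl_PrimitiveID for every fragment, and read the result back. Whatever IDs survive the depth test are exactly the visible faces. A minimal sketch, assuming a framebuffer idFbo with a GL_R32UI color attachment already exists:

    /* Fragment shader for the ID pass (GLSL 330): write the index of the
     * triangle within the current draw call into the integer attachment. */
    const char* idFragSrc =
        "#version 330 core\n"
        "out uint faceId;\n"
        "void main() { faceId = uint(gl_PrimitiveID); }\n";

    /* After rendering the ID pass into idFbo, read back the face index
     * under the mouse cursor. OpenGL's window origin is the lower-left
     * corner, hence the vertical flip. */
    GLuint faceId;
    glBindFramebuffer(GL_READ_FRAMEBUFFER, idFbo);
    glReadPixels(mouseX, viewportHeight - 1 - mouseY, 1, 1,
                 GL_RED_INTEGER, GL_UNSIGNED_INT, &faceId);

Reading back the whole attachment instead of a single pixel gives you the complete set of visible face indices.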

Related

Draw part of sphere limited by set of vertices

What's the best way to draw part of a sphere in, for example, OpenGL, given that I have the vertices of the boundary of the region that should be rendered?
I'm drawing the sphere using an octahedron transformation (described here: https://stackoverflow.com/a/7687312/1840136) and I can draw the arcs that represent the boundaries in the same way, by creating intermediate vertices and then "normalizing" them.
To create triangles from the planar region I can use something from this answer: https://math.stackexchange.com/a/1814637, but the thing is, the result will still be flat. To get a part of the sphere, I definitely need another set of intermediate vertices for additional triangles. What is the algorithm for such a task? And since I may already have the triangles forming the original sphere, can I use that data somehow?
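For reference, the subdivide-and-normalize idea the question describes looks roughly like this (a minimal sketch; emit is a hypothetical callback that collects the output triangles):

    #include <math.h>

    /* Push a point back onto the unit sphere: the "normalizing" step
     * described above. */
    static void to_sphere(float v[3]) {
        float len = sqrtf(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
        v[0] /= len; v[1] /= len; v[2] /= len;
    }

    /* Recursively split triangle (a,b,c) at its edge midpoints, lifting
     * every new vertex onto the sphere, until the desired depth. */
    static void subdivide(const float a[3], const float b[3], const float c[3],
                          int depth,
                          void (*emit)(const float*, const float*, const float*)) {
        if (depth == 0) { emit(a, b, c); return; }
        float ab[3], bc[3], ca[3];
        for (int i = 0; i < 3; ++i) {
            ab[i] = 0.5f * (a[i] + b[i]);
            bc[i] = 0.5f * (b[i] + c[i]);
            ca[i] = 0.5f * (c[i] + a[i]);
        }
        to_sphere(ab); to_sphere(bc); to_sphere(ca);
        subdivide(a, ab, ca, depth - 1, emit);
        subdivide(ab, b, bc, depth - 1, emit);
        subdivide(ca, bc, c, depth - 1, emit);
        subdivide(ab, bc, ca, depth - 1, emit);
    }

Applied to the flat triangles produced by the planar triangulation, this generates the extra intermediate vertices needed to make the patch follow the sphere.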

Opengl: coloring a world map?

Here is a task that every GIS application can do: given some polygons, fill each polygon with a chosen color. Like this: [image]
What is the best way of doing this repeatedly in OpenGL? That is, the polygons do not change, and I want to vary the data for coloring to produce different renderings.
Redrawing polygons for each rendering is the most straightforward solution, but it seems to be a waste, since the geometries do not change at all.
Or is it better to create a stencil for each polygon, and stencil print the entire map? If there are too many polygons, will doing hundreds or thousands of rendering passes create a problem?
For each vertex of a polygon, map a certain color. That means when you send the data to the shaders, with each call the vertex array object sends two attributes: a vector which is needed in the vertex shader and a vector which will be used as the fragment color. That is the simplest way.
For example, think of a triangle drawn in OpenGL. If you send its vertices to the vertex shader and set a color in the fragment shader, every time a vertex enters the shader pipeline it will be positioned accordingly and shown on screen with the given color from the fragment shader.
The technique I (poorly) explained is the one used in the classic colored-triangle example, in which the colors interpolate: red mapped to one corner, green mapped to another, and blue to the last. If you instead map the same color to every corner, you get a uniformly colored triangle. That is the basic principle. Also, you draw the minimum count of triangles and you need only one pair of shaders.
Note: a polygon is made out of N triangles, and you need to map the same color to every vertex of each triangle drawn in that polygon.
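A minimal shader pair for this per-vertex-color setup might look like the following (GLSL 330; the attribute locations are assumptions):

    /* Vertex shader: position at location 0, color at location 1. The color
     * is passed through and interpolated across each triangle; using the
     * same color at all three corners yields a flat fill. */
    const char* vertSrc =
        "#version 330 core\n"
        "layout(location = 0) in vec3 position;\n"
        "layout(location = 1) in vec3 color;\n"
        "out vec3 vColor;\n"
        "void main() {\n"
        "    vColor = color;\n"
        "    gl_Position = vec4(position, 1.0);\n"
        "}\n";

    /* Fragment shader: output the interpolated vertex color. */
    const char* fragSrc =
        "#version 330 core\n"
        "in vec3 vColor;\n"
        "out vec4 fragColor;\n"
        "void main() { fragColor = vec4(vColor, 1.0); }\n";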
I think a bigger issue will be that OpenGL doesn't support polygons or vector drawing in general, but there are libraries for this. You'll have to use an existing solution for vector drawing, or failing that, you'll have to convert from your GIS data (usually a list of points for a polygon) to triangles. This is likely the biggest obstacle.
The fact that the geometry doesn't change isn't really an issue: you would generally store the geometry in one or more buffers, then create logic to only draw what is visible inside your viewing area, perhaps even going as far as only generating geometry for the visible area (see the sketch after this answer).
See also this question and its answers.
Rendering Vector Graphics in OpenGL?
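Since the geometry is static and only the coloring varies, one way to lay out the buffers (a sketch; positions, colors, and the byte sizes are assumed to exist) is to split them so that a new coloring only re-uploads the color buffer:

    GLuint vbo[2];
    glGenBuffers(2, vbo);

    /* Geometry: uploaded once, never touched again. */
    glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
    glBufferData(GL_ARRAY_BUFFER, posBytes, positions, GL_STATIC_DRAW);

    /* Colors: flagged as frequently updated. */
    glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
    glBufferData(GL_ARRAY_BUFFER, colBytes, colors, GL_DYNAMIC_DRAW);

    /* Later, for each new rendering, replace only the color data: */
    glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
    glBufferSubData(GL_ARRAY_BUFFER, 0, colBytes, newColors);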

In OpenGL, how to clear drawn primitives in a 3D region

Let us say I draw 3 points with glVertex3f at (0,0,0), (9,0,0) and (10,0,0).
I would like to clear all Vertex3f points in the bounding box region (2,-1,-1) to (15, 1, 1) which should include the last two points.
How does one do this in OpenGL?
Manage your point drawing outside of OpenGL, i.e. don't use OpenGL to accomplish this. OpenGL is used for drawing data, not for keeping track of it. If you want to get rid of certain objects, don't tell OpenGL to draw them. There are various space-partitioning data structures at your disposal for efficiently finding intersections.
The naive way is to check to see that all the points in your scene are outside that region before you draw them.
A better way is to use a kd-tree or an octree to narrow down the number of comparisons you must do, from linear to roughly logarithmic.
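The naive check might look like this (a sketch; the Point3 type and names are illustrative), after which you re-upload and draw only the surviving points:

    #include <stddef.h>

    typedef struct { float x, y, z; } Point3;

    static int outside_box(Point3 p, Point3 lo, Point3 hi) {
        return p.x < lo.x || p.x > hi.x ||
               p.y < lo.y || p.y > hi.y ||
               p.z < lo.z || p.z > hi.z;
    }

    /* Compact the array in place, keeping only points outside the box;
     * returns the new count. Draw only pts[0..kept) afterwards. */
    static size_t remove_points_in_box(Point3* pts, size_t n,
                                       Point3 lo, Point3 hi) {
        size_t kept = 0;
        for (size_t i = 0; i < n; ++i)
            if (outside_box(pts[i], lo, hi))
                pts[kept++] = pts[i];
        return kept;
    }

An octree or kd-tree would replace the linear scan with a traversal that skips whole subtrees lying entirely inside or outside the box.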

How to draw an array of pixels directly to the screen with OpenGL?

I want to write pixels directly to the screen (not using vertices and polygons). I have investigated a variety of answers to similar questions, the most notable ones here and here.
I see a couple of ways drawing pixels to the screen might be possible, but they both seem to be indirect and use unnecessary floating-point operations:
Draw a GL_POINT for each pixel on the screen. I've tried this and it works, but it seems like an inefficient way to draw pixels onto the screen. Why write my data in floating point when it's going to be transformed into an array of pixel data?
Create a 2D quad that spans the entire screen and write a texture to it. Like the first option, this seems to be a roundabout way of putting pixels on the screen. The texture would still have to go through rasterization before getting put on the screen. Also textures must be square, and most screens are not square, so I'd have to handle that problem.
How do I get a matrix of colors, where pixels[0][0] corresponds to the upper left corner and pixels[1920][1080] corresponds to the bottom right, onto the screen in the most direct and efficient way possible using OpenGL?
Writing directly to the framebuffer seems like the most promising choice, but I have only seen people using the framebuffer for shading.
First off: OpenGL is a drawing API designed to make use of a rasterizer system that ingests homogeneous coordinates to define geometric primitives, which get transformed and, well, rasterized. Merely drawing pixels is not what the OpenGL API is concerned with. Also, most GPUs are floating-point processors by nature and can in fact process floating-point data more efficiently than integers.
Why write my data in floating point when it's going to be transformed into an array of pixel data?
Because OpenGL is a rasterizer API, i.e. it takes primitive geometrical data and turns it into pixels. It doesn't deal with pixels as input data, except in the form of image objects (textures).
Also textures must be square, and most screens are not square, so I'd have to handle that problem.
Whoever told you that, or wherever you got that from: they are wrong. OpenGL 1.x had the constraint that textures had to be power-of-two sized in each direction, though width and height could differ. Ever since OpenGL 2, texture sizes are completely arbitrary.
However, a texture might not be the most efficient way to directly update single pixels on the screen either. It is, however, a great idea to first draw the pixels into a pixel buffer, which for display is loaded into a texture that then gets drawn onto a full-viewport quad.
However, if your goal is direct manipulation of on-screen pixels, without a rasterizer in between, then OpenGL is not the right API for the job. There are other, 2D graphics APIs that allow you to directly push pixels to the screen.
That said, pushing individual pixels is very inefficient. I strongly recommend operating on a pixel buffer, which is then blitted or drawn as a whole for display. Doing it with OpenGL, by drawing a full-viewport textured quad, is as good for this, and as efficient, as any other graphics API.
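A sketch of that pixel-buffer-to-texture path (W, H, and the CPU-side pixels array of W*H*4 bytes are assumptions; quad and shader setup omitted):

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    /* Allocate storage once; no initial data needed. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, W, H, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    /* Per frame: overwrite the texture from the CPU-side pixel buffer,
     * then draw a full-viewport textured quad sampling it. */
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, W, H,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels);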

Pairwise vertex attributes in OpenGL

I'm trying to visualise a graph with OpenGL. I have a vertex buffer with points in 3D space, and an index buffer that specifies lines between vertices. I use glDrawElements to draw the graph. Everything works. The problem is that I need to visualise the edge weights. Edge weights are pairwise attributes, and I have no idea how to get this information into my shader. The only solution I can think of is drawing each edge individually with glDrawRangeElements and setting the edge weight between every call. Is there a better way of doing this?
There's no need to employ a geometry shader. Just render them as GL_LINES, duplicating the positions as needed as well as providing the same "weight" attribute for each pair of verts on a line. This is ultimately no different from rendering a cube, where each face needs its own normals.
If (and only if) you absolutely must have that memory back, and you can't simply compress your vertex data (using normalized shorts, unnormalized shorts, or whatever), there are techniques you can use. Be warned: this is a memory-vs-performance tradeoff. So unless you have real memory pressure, just duplicate your vertex data and get it over with.
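A sketch of the duplicated-attribute layout suggested above (the interleaved struct and names are illustrative): for each edge (i, j) with weight w, emit both endpoints carrying w, then draw everything in one call.

    typedef struct { float x, y, z, weight; } LineVertex;

    /* Expansion of the indexed graph into flat GL_LINES data:
     * out[2*e + 0] = { pos[i].x, pos[i].y, pos[i].z, w };
     * out[2*e + 1] = { pos[j].x, pos[j].y, pos[j].z, w }; */

    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, 2 * edgeCount * sizeof(LineVertex),
                 out, GL_STATIC_DRAW);
    /* Position at location 0, weight at location 1, interleaved. */
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(LineVertex), (void*)0);
    glVertexAttribPointer(1, 1, GL_FLOAT, GL_FALSE, sizeof(LineVertex),
                          (void*)(3 * sizeof(float)));
    glEnableVertexAttribArray(0);
    glEnableVertexAttribArray(1);
    glDrawArrays(GL_LINES, 0, 2 * edgeCount);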