opengl instanced drawing - 3D Arrows [closed] - c++

I have to draw millions of arrows. The information I have is as below.
location of each arrow
direction of each arrow (vector direction)
length of each arrow
With this information, can I use OpenGL instanced drawing to draw the arrows?
I have gone through the instancing examples. In all those examples, they explain matrix transformations for each instance, and so on.
But I am not clear whether, with the above data, it is possible or not.

Given that each arrow is a vector, you can insert all your vector data into a uniform array** and use gl_InstanceID to look it up in your vertex shader, then use it to compute gl_Position.
If you need to apply a transformation to the arrows (looking at your data: a translation for the location, a rotation for the direction, and a scaling for the length), you would issue the instanced draw call on a single set of vertices (your base arrow), use a uniform array of matrices for the transformations, and look those matrices up the same way in your vertex shader.
**Depending on how many instances you have, though, the data may not fit into a uniform array. In that case you can look into using a uniform block (which allows you to store more data than a simple uniform variable) and, if that is still not enough, a buffer texture (GL_TEXTURE_BUFFER) will do the trick.
Don't let the name fool you: a buffer texture can hold arbitrary data, not just texture data. A sketch of a vertex shader reading per-instance data from a buffer texture follows below.
Uniform blocks are backed by uniform buffer objects: https://www.khronos.org/opengl/wiki/Uniform_Buffer_Object
For buffer textures, see https://www.khronos.org/opengl/wiki/Buffer_Texture
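A minimal sketch of such a vertex shader, assuming the per-instance data is packed two texels per arrow into a buffer texture (the names arrowData and viewProjection, and the texel layout, are illustrative assumptions, not from the question):
#version 330 core
// Base arrow geometry, shared by every instance (modeled pointing along +Z).
layout(location = 0) in vec3 basePosition;
// Per-instance data in a buffer texture:
//   texel 2*i     = arrow location (xyz) and length (w)
//   texel 2*i + 1 = arrow direction (xyz, assumed normalized)
uniform samplerBuffer arrowData;
uniform mat4 viewProjection;
void main() {
    vec4 locLen = texelFetch(arrowData, gl_InstanceID * 2);
    vec3 dir    = texelFetch(arrowData, gl_InstanceID * 2 + 1).xyz;
    // Build an orthonormal basis whose z-axis is the arrow direction.
    vec3 z  = normalize(dir);
    vec3 up = abs(z.y) < 0.99 ? vec3(0.0, 1.0, 0.0) : vec3(1.0, 0.0, 0.0);
    vec3 x  = normalize(cross(up, z));
    vec3 y  = cross(z, x);
    mat3 rotate = mat3(x, y, z);
    // Scale the base arrow by its length (scale only basePosition.z instead
    // if you want constant thickness), rotate it, then translate it.
    vec3 world = locLen.xyz + rotate * (basePosition * locLen.w);
    gl_Position = viewProjection * vec4(world, 1.0);
}
On the CPU side you would fill the buffer texture with two RGBA32F texels per arrow and issue a single glDrawArraysInstanced call with the instance count set to the number of arrows.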

Related

Loading many images into OpenGL and rendering them to the screen [closed]

I have an image database on my computer, and I would like to load each of the images up and render them in 3D space, in OpenGL.
I'm thinking of instantiating a VBO for each image, as well as a VAO for each of the VBOs.
What would be the most efficient way to do this?
Here's the best way:
Create just one quad out of 4 vertices.
Use a transformation matrix (not 3D transform; just transforming 2D position and size) to move the quad around the screen and resize it if you want.
This way you can use one vertex array (for the quad) plus one texture coordinate array and a single VAO, and keep the same vertex bindings for every draw call; only the bound texture changes between draw calls.
Note: the texture coordinates will also have to be transformed with the vertices.
The conversion between the vertex coordinate system (2D) and the texture coordinate system is vertex vPos = texturePos * 2 - 1, and therefore texturePos = (vPos + 1) / 2.
OpenGL's texture coordinate system goes from 0 to 1 (with the axes starting at the bottom left of the screen), while the vertex (screen) coordinate system goes from -1 to 1 (with the axes starting in the middle of the screen).
This way you can correctly derive textureCoords for your already transformed vertices.
OR
if you do not understand this method, your proposed method is alright, but be careful not to use too many textures, or else you will be creating and binding lots of VAOs!
This might be hard to understand, so feel free to ask questions below in the comments!
EDIT:
Also, following @Botje's helpful comment below, I realised the texture coordinate array is not needed: since the texture coordinates are calculated relative to the vertex positions through the method above, they can be computed directly in the vertex shader, as in the sketch below. Make sure to have the vertices transformed first, though.
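A sketch of this idea as a vertex shader (the uniform name transform and the attribute layout are illustrative assumptions):
#version 330 core
// One shared unit quad with corners at (-1,-1)..(1,1).
layout(location = 0) in vec2 position;
// 2D transform for this image: rotation/scale in the upper 2x2, translation in the last column.
uniform mat3 transform;
out vec2 texCoord;
void main() {
    vec3 p = transform * vec3(position, 1.0);
    // Texture coordinates computed from the transformed vertices, per the
    // method above: maps the [-1, 1] range to [0, 1].
    texCoord = (p.xy + 1.0) / 2.0;
    // If you instead want the whole image to cover the quad wherever it moves,
    // derive them from the untransformed corners: texCoord = position * 0.5 + 0.5;
    gl_Position = vec4(p.xy, 0.0, 1.0);
}
The fragment shader then simply samples its sampler2D at texCoord, and the draw loop binds a different texture before each draw call.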

Simple Shadertoy to regular glsl [closed]

I want to convert code from glslsandbox to regular GLSL. Suppose I have a quad mesh placed in a 3D scene. I want to create this "shadertoy-like" texture and apply it to the mesh. I'm aware of the transformations it requires (screen → NDC → clip space → world space), but I'm still struggling. http://glslsandbox.com/e#61091.0 is an extremely simple shader; can you please demonstrate the normal vertex and fragment shaders it would take to apply it to a 3D mesh in a 3D scene?
The shader you linked is a fragment shader drawing on a flat screen plane.
Typically, the corresponding vertex setup would be two triangles (potentially as a strip, meaning 4 vertices in total) covering the entire screen. In fact, you don't really have to concern yourself with any transformations, especially given that the fragment shader uses gl_FragCoord and the resolution is passed in as a uniform.
VS:
#version 450
in vec4 position;
void main() {
    gl_Position = position;
}
Vertices (example, use with GL_TRIANGLE_STRIP):
-1, -1
-1, 1
1, -1
1, 1
After you cover the entire screen, you can now just switch this setup to render to texture; create a framebuffer, attach a texture to it, and render in the same way. Then you'll be able to use that texture on your 3D model. This will work well if the generated image rarely changes.
If you actually wanted to draw that in one pass, you'll need to pass the texture coordinate as a varying variable and use it instead of gl_FragCoord; no other changes should be necessary. For example:
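A sketch of that one-pass variant (the body of the linked shader is not reproduced here; the placeholder effect and the uniform names mvp and time are illustrative assumptions):
VS:
#version 450
in vec4 position;
in vec2 uv;            // per-vertex texture coordinates of your 3D mesh
uniform mat4 mvp;      // your usual model-view-projection matrix
out vec2 texCoord;
void main() {
    texCoord = uv;
    gl_Position = mvp * position;
}
FS:
#version 450
in vec2 texCoord;
uniform float time;    // sandbox-style uniform, passed from the application
out vec4 fragColor;
void main() {
    // Wherever the original shader computed
    //     vec2 uv = gl_FragCoord.xy / resolution.xy;
    // use the interpolated texture coordinate instead:
    vec2 uv = texCoord;
    fragColor = vec4(uv, 0.5 + 0.5 * sin(time), 1.0);  // placeholder pattern
}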

How to draw a 3d rendered image (perspective proj.) back to another viewport with orthogonal proj., simultaneously using multiple viewports and OpenGL [closed]

My problem is that I want to take a kind of snapshot of a 3D scene, manipulate that snapshot, and draw it back to another viewport of the scene.
I just read the image using glReadPixels.
Now I want to draw that image back to a specified viewport, but using modern OpenGL.
I read about framebuffer objects (FBOs) and pixel buffer objects (PBOs), and about the solution of writing the framebuffer contents into a 2D texture and passing it to the fragment shader as a simple texture.
Is this way correct, or can anyone provide a simple example of how to render the image back to the scene using modern OpenGL and not the deprecated glDrawPixels?
The overall process you want to do will look something like this:
1. Create an FBO with a color and depth attachment. Bind it.
2. Render your scene.
3. Copy the contents out of its color attachment to client memory to do the operations you want on it.*
4. Copy the image back into an OpenGL texture (may as well keep the same one).
5. Bind the default framebuffer (0).
6. Render a full screen quad using your image as a texture map (possibly using a different shader or switching shader functionality; see the sketch below).
Possible questions you may have:
Do I have to render a full screen quad? Yup. You can't bypass the vertex shader. So somewhere just go make four vertices with texture coordinates in a VBO, yada yada.
My vertex shader deals with projecting things, how do I deal with that quad? You can create a subroutine that toggles how you deal with vertices in your vertex shader: one for regular 3D rendering (i.e. transforming from model space into world/view/screen space) and one that is just a pass-through sending your vertices along unmodified. You'll want your vertices at the four corners of the square from (-1,-1) to (1,1). Send those along to your fragment shader and it'll do what you want. Alternatively, just set all your matrices to identity if you don't feel like using subroutines.
*If you can find a way do your texture operations in a shader, I'd highly recommend it. GPUs are quite literally built for this.
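A sketch of the shaders for step 6 (the uniform name processedImage and the attribute layout are illustrative assumptions):
VS (pass-through):
#version 330 core
layout(location = 0) in vec2 position;   // the four corners, (-1,-1) to (1,1)
out vec2 texCoord;
void main() {
    // Map the corner positions from [-1, 1] to [0, 1] texture coordinates.
    texCoord = position * 0.5 + 0.5;
    gl_Position = vec4(position, 0.0, 1.0);
}
FS:
#version 330 core
in vec2 texCoord;
uniform sampler2D processedImage;   // the texture you copied your edited image into
out vec4 fragColor;
void main() {
    fragColor = texture(processedImage, texCoord);
}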

opengl texturing a height map [closed]

I can't seem to find any tutorials or information on how to properly texture a terrain generated from a height map.
Texture mapping is not the problem; what I want to know is how to texture the entire terrain so that different elevations have different textures.
What I came up with was a single texture containing a different sub-texture per elevation band (image omitted). Depending on the elevation of the triangle, a different texture is assigned to it (result screenshot omitted).
What I'm trying to do is create an environment like Skyrim, where textures don't need to repeat constantly: a convincing landscape!
The question is how to create something like that (example screenshot omitted), where the textures blend together seamlessly at different elevations. How is this done? What is the technique used?
Example Video: http://www.youtube.com/watch?v=qzkBnCBpQAM
One way would be to use a 3D texture for your terrain. Each layer of the texture is a different material (sand, rock, and grass, for example). Every vertex gets a third UV component that specifies the blending between two adjacent layers; you could also use the height of the vertex here. Note that blending between grass and sand in this example is not possible with this approach, because rock lies in between, but it is certainly the easiest and fastest method. A sketch of it follows below.
Another method would be to use individual 2D textures instead of a single 3D one. You would bind, say, sand and grass, draw all vertices that need a blend between those two, then bind two other textures and repeat. That is certainly more complicated and slower, but it allows blending between any two textures.
There might be more methods, but these two are the ones I can think of right now.
Professional game engines usually use more advanced methods; I've seen designers paint multiple materials onto a terrain as if in Photoshop, but that's a different story.
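A sketch of the first (3D texture) approach as a fragment shader; the uniform names, the layer count, and passing the height down from the vertex shader are illustrative assumptions:
#version 330 core
in vec2 texCoord;     // ordinary 2D tiling coordinates
in float height;      // world-space height, passed down from the vertex shader
uniform sampler3D terrainLayers;  // layer 0 = sand, 1 = grass, 2 = rock
uniform float maxHeight;
out vec4 fragColor;
void main() {
    const float layerCount = 3.0;
    float h = clamp(height / maxHeight, 0.0, 1.0);
    // Remap so h = 0 samples the center of the first layer and h = 1 the center
    // of the last; with GL_LINEAR filtering on the depth axis, in-between
    // values blend the two adjacent layers automatically.
    float r = (h * (layerCount - 1.0) + 0.5) / layerCount;
    fragColor = texture(terrainLayers, vec3(texCoord, r));
}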

Million mesh programmatically? [closed]

I have a flat surface drawn with a single fullscreen GL_QUAD.
I want to deform this surface at each point specified by my GL_TEXTURE_2D texture, preferably through some kind of shader.
In my mind, black could correspond to flat and white could correspond to a hill.
I want to have about 4 million points on my terrain and update them at each step in my program.
How would I use a geometry shader to do this? Is a shader able to generate new vertices?
The simplest way would be to generate a large triangle strip grid, upload it to a VBO and draw it, using the vertex shader to alter just the up coordinate. The vertex shader can also generate normals from the heightmap (or supply a normal map), which then get passed to the fragment shader for lighting.
To avoid storing a huge amount of data for the vertices, use gl_VertexID to generate the vertex positions from scratch in the vertex shader (see the sketch below). Don't bind any buffers; simply call glDrawArrays(GL_TRIANGLE_STRIP, 0, lots).
As GuyRT mentioned, a tessellation shader would be good too and allow you to vary the tessellation detail based on the camera's distance to the mesh. This would be more work though.
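A minimal sketch of the attribute-less idea, assuming a square grid and the uniform names heightMap, mvp, and gridWidth; it uses GL_TRIANGLES rather than a strip because the index math is simpler (draw with glDrawArrays(GL_TRIANGLES, 0, 6 * (gridWidth - 1) * (gridWidth - 1))):
#version 330 core
// No vertex attributes at all: everything is derived from gl_VertexID.
uniform sampler2D heightMap;  // black = flat, white = hill
uniform mat4 mvp;
uniform int gridWidth;        // vertices per side of the square grid
void main() {
    // Six vertices per grid cell (two triangles).
    int cell   = gl_VertexID / 6;
    int corner = gl_VertexID % 6;
    int cx = cell % (gridWidth - 1);
    int cz = cell / (gridWidth - 1);
    // Corner offsets of the two triangles within a cell.
    const ivec2 offsets[6] = ivec2[6](
        ivec2(0, 0), ivec2(1, 0), ivec2(0, 1),
        ivec2(1, 0), ivec2(1, 1), ivec2(0, 1));
    ivec2 p = ivec2(cx, cz) + offsets[corner];
    vec2 uv  = vec2(p) / float(gridWidth - 1);
    float h  = textureLod(heightMap, uv, 0.0).r;  // displace by the red channel
    gl_Position = mvp * vec4(uv.x, h, uv.y, 1.0);
}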