Loading many images into OpenGL and rendering them to the screen [closed] - c++

I have an image database on my computer, and I would like to load each of the images up and render them in 3D space, in OpenGL.
I'm thinking of instantiating a VBO for each image, as well as a VAO for each of the VBOs.
What would be the most efficient way to do this?

Here's the best way:
Create just one quad out of 4 vertices.
Use a transformation matrix (not a full 3D transform; just a 2D translation and scale) to move the quad around the screen and resize it if you want.
This way you can use one vertex array (for the quad), one texture coordinate array, and one VAO, and keep the same vertex bindings for every draw call; only the bound texture changes between draw calls.
Note: the texture coordinates will also have to be transformed with the vertices.
The conversion between the 2D vertex coordinate system and the texture coordinate system is texturePos = vPos / 2 + 0.5, and therefore vPos = (texturePos - 0.5) * 2.
OpenGL's texture coordinate system goes from 0 to 1 (with the origin at the bottom left), while the vertex (clip-space) coordinate system goes from -1 to 1 (with the origin at the center of the screen).
This way you can correctly match the textureCoords to your already transformed vertices.
OR
if you do not understand this method, your proposed method is alright, but be careful not to use too many textures, or else you will end up creating and binding lots of VAOs!
This might be hard to understand, so feel free to ask questions below in the comments!
EDIT:
Also, following @Botje's helpful comment below, I realised the textureCoords array is not needed: because the texture coordinates are calculated from the vertex positions with the method above, the calculation can be performed directly in the vertex shader. Make sure to have the vertices transformed first, though.
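For illustration, here is a minimal sketch of this approach, assuming a created GL context and a compiled program; quadVAO, transformLoc, and the Image fields are placeholder names. In this variant the texture coordinates are derived from the quad's untransformed corners, so each image maps fully onto its quad:

#version 330 core
layout(location = 0) in vec2 pos;   // quad corners in [-1, 1]
uniform mat3 transform;             // 2D translation + scale
out vec2 texCoord;
void main() {
    texCoord = pos / 2.0 + 0.5;     // [-1, 1] -> [0, 1], no texcoord array needed
    vec3 p = transform * vec3(pos, 1.0);
    gl_Position = vec4(p.xy, 0.0, 1.0);
}

The draw loop then binds the single VAO once and issues one draw call per image, changing only the texture and the transform:

glUseProgram(program);
glBindVertexArray(quadVAO);
for (const Image& img : images) {
    glUniformMatrix3fv(transformLoc, 1, GL_FALSE, img.transform); // img.transform: float[9], column-major
    glBindTexture(GL_TEXTURE_2D, img.textureId);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}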

Related

Simple Shadertoy to regular glsl [closed]

I want to convert code from glslsandbox into regular GLSL. Suppose I have a quad mesh placed in a 3D scene. I want to create this "shadertoy-like" texture and apply it to the mesh. I'm aware of the transformations it requires (screen - NDC - clip space - world space), but I'm still struggling. http://glslsandbox.com/e#61091.0 is an extremely simple shader; can you please demonstrate the normal vertex and fragment shaders it would take to apply it to a 3D mesh in a 3D scene?
The shader you linked is a fragment shader drawing on a flat screen plane.
Typically, the corresponding vertex setup would be two triangles (potentially as a strip, meaning 4 vertices in total) covering the entire screen. In fact, you don't really have to concern yourself with any transformations, especially given that the fragment shader uses gl_FragCoord and the resolution is passed in as a uniform.
VS:
#version 450
in vec4 position;
void main() {
gl_Position = position;
}
Vertices (example, use with GL_TRIANGLE_STRIP):
-1, -1
-1, 1
1, -1
1, 1
After you cover the entire screen, you can now just switch this setup to render to texture; create a framebuffer, attach a texture to it, and render in the same way. Then you'll be able to use that texture on your 3D model. This will work well if the generated image rarely changes.
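As a rough sketch (error checking omitted; fbo, tex, width, and height are placeholder names), the render-to-texture setup could look like this:

GLuint fbo, tex;
glGenFramebuffers(1, &fbo);
glGenTextures(1, &tex);

glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);

// Render the fullscreen strip into the texture exactly as before.
glViewport(0, 0, width, height);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

// Back to the default framebuffer; tex can now be sampled
// when rendering the 3D mesh.
glBindFramebuffer(GL_FRAMEBUFFER, 0);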
If you actually wanted to draw that in one pass, then you'll need to pass the texture coordinate as a varying variable, and use it instead of gl_FragCoord; no other changes should be necessary.
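A sketch of that one-pass variant, assuming the mesh provides a uv attribute and an mvp uniform (both placeholder names); the fragment shader body here just stands in for whatever the linked effect computes:

VS:
#version 450
in vec4 position;
in vec2 uv;
uniform mat4 mvp;
out vec2 vUv;
void main() {
    vUv = uv;
    gl_Position = mvp * position;
}

FS:
#version 450
in vec2 vUv;
uniform float time;
out vec4 color;
void main() {
    // Wherever the original used gl_FragCoord.xy / resolution, use vUv.
    color = vec4(vUv, 0.5 + 0.5 * sin(time), 1.0);
}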

opengl instanced drawing - 3D Arrows [closed]

I have to draw millions of arrows. The information I have is as below.
location of each arrow
direction of each arrow (vector direction)
length of each arrow
With this information, can I use OpenGL instanced drawing to draw the arrows?
I have gone through the instanced examples. In all those examples, they explain matrix transformations for each instance, etc.
But I am not clear whether, with the above data, this is possible or not.
Given that the arrow is a vector, you can just insert all your vector data into a uniform array** and use gl_InstanceID to look it up in your vertex shader, and simply pass the result over to gl_Position.
If you need to apply a transformation to the arrows (looking at your data: a translation for the location, a rotation for the direction, and a scale for the length), you would issue the instanced draw call on a single set of vertices (your base arrow), use a uniform array of matrices for the transformations, and look those matrices up in a similar way in your vertex shader.
**Depending on how many instances you have, though, the data may not fit into a uniform array. In that case you can look into using a uniform block (which allows you to store more data than a simple uniform variable), and if that is also not enough, a buffer texture (GL_TEXTURE_BUFFER) will do the trick.
Don't let the name fool you, GL_TEXTURE_BUFFER can hold arbitrary data, not just texture data.
Uniform blocks are backed by Uniform Buffer Objects: https://www.khronos.org/opengl/wiki/Uniform_Buffer_Object
For buffer textures, see https://www.khronos.org/opengl/wiki/Buffer_Texture
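A rough sketch of the matrix variant described above, assuming at most 1024 instances per draw call (64 KB is a common uniform block size limit; for millions of arrows, draw in batches or switch to a buffer texture). ArrowTransforms, arrowVAO, and the other names are placeholders:

#version 330 core
layout(location = 0) in vec3 pos;       // base arrow geometry
layout(std140) uniform ArrowTransforms {
    mat4 model[1024];                   // translation + rotation + scale per arrow
};
uniform mat4 viewProj;
void main() {
    gl_Position = viewProj * model[gl_InstanceID] * vec4(pos, 1.0);
}

One call then draws every instance of the base arrow:

glBindVertexArray(arrowVAO);
glDrawArraysInstanced(GL_TRIANGLES, 0, arrowVertexCount, instanceCount);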

OpenGL - How to show the occluded region of a sprite as a silhouette [closed]

I'm making a 2D game with OpenGL, using textured quads to display 2D sprites. I'd like to create an effect whereby any character sprite that's partially hidden by a terrain sprite will have the occluded region visible as a solid-colored silhouette, as demonstrated by the pastoral scene in this image.
I'm really not sure how to achieve this. I'm guessing the solution will involve some trick with the fragment shader, but as for specifics I'm stumped. Can anyone point me in the right direction?
Here's what I've done in the past (a code sketch follows the explanation below):
Draw the world/terrain (everything you want the silhouette to show through)
Disable the depth test
Disable writes to the depth buffer
Draw the sprites in silhouette mode (a different shader or texture)
Enable the depth test
Enable writes to the depth buffer
Draw the sprites in normal mode
Draw anything else that should go on top (like the HUD)
Explanation:
When you draw the sprites the first time (in silhouette mode), they are drawn over everything but do not affect the depth buffer, so the second pass won't z-fight with them. When you draw them the second time, some of each sprite will be hidden behind the terrain, but exactly there the silhouette has already been drawn.
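In GL calls the sequence looks roughly like this; drawTerrain, drawSprites, and drawHUD are hypothetical helpers standing in for your own rendering code:

drawTerrain();                    // fills the depth buffer

glDisable(GL_DEPTH_TEST);
glDepthMask(GL_FALSE);            // don't write depth
drawSprites(silhouetteShader);    // draws over everything

glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
drawSprites(normalShader);        // occluded parts fail the depth test,
                                  // leaving the silhouette visible there
drawHUD();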
You can do things like this using stenciling or depth buffering.
When rendering the wall, make sure that it writes a different value to the stencil buffer than the background. Then render the cow twice: once passing the stencil test only where the wall was not drawn, and once only where it was. Use a different shader each time.
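A possible stencil sequence (the values and the drawWall/drawCow helpers are illustrative; clear the stencil buffer to 0 each frame):

glEnable(GL_STENCIL_TEST);

// Tag wall pixels with stencil value 1 while drawing the terrain.
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
drawWall();

glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);

// Normal cow wherever the wall was not drawn...
glStencilFunc(GL_NOTEQUAL, 1, 0xFF);
drawCow(normalShader);

// ...and the silhouette where it was.
glStencilFunc(GL_EQUAL, 1, 0xFF);
drawCow(silhouetteShader);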

How to draw a 3d rendered Image (perspective proj) back to another viewport with orthogonal proj. simultaneously using multiple Viewports and OpenGL [closed]

My problem is that I want to take a kind of snapshot of a 3D scene, manipulate that snapshot, and draw it back into another viewport of the scene.
I just read the image using glReadPixels.
Now I want to draw that image back to a specified viewport, but using modern OpenGL.
I read about framebuffer objects (FBOs) and pixel buffer objects (PBOs), and the solution of writing the framebuffer contents into a GL_TEXTURE_2D texture and passing it to the fragment shader as a simple texture.
Is this way correct, or can anyone provide a simple example of how to render the image back to the scene using modern OpenGL and not the deprecated glDrawPixels?
The overall process you want will look something like this (a code sketch of the readback steps follows the list):
Create an FBO with a color and depth attachment. Bind it.
Render your scene
Copy the contents out of its color attachment to client memory to do the operations you want on it.*
Copy the image back into an OpenGL texture (may as well keep the same one).
Bind the default framebuffer (0)
Render a full screen quad using your image as a texture map. (Possibly using a different shader or switching shader functionality).
Possible questions you may have:
Do I have to render a full screen quad? Yup. You can't bypass the vertex shader. So somewhere just go make four vertices with texture coordinates in a VBO, yada yada.
My vertex shader deals with projecting things, how do I deal with that quad? You can create a subroutine that toggles how you deal with vertices in your vertex shader. One can be for regular 3D rendering (i.e. transforming from model space into world/view/screen space) and one can just be a pass-through that sends along your vertices unmodified. You'll just want your vertices at the four corners of the square from (-1,-1) to (1,1). Send those along to your fragment shader and it'll do what you want. You can optionally just set all your matrices to identity if you don't feel like using subroutines.
*If you can find a way do your texture operations in a shader, I'd highly recommend it. GPUs are quite literally built for this.
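The readback and upload steps (3 and 4 above) might look like this; fbo, colorTex, width, and height are placeholders, and error handling is omitted:

std::vector<GLubyte> pixels(width * height * 4);

// Step 3: copy the FBO's color attachment to client memory.
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

// ... manipulate pixels on the CPU here ...

// Step 4: upload the modified image back into the same texture.
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

// Step 5: back to the default framebuffer, ready for the fullscreen quad.
glBindFramebuffer(GL_FRAMEBUFFER, 0);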

Million mesh programmatically? [closed]

I have a flat surface drawn with a single fullscreen quad (GL_QUADS).
I want to deform this surface at each point specified by my GL_TEXTURE_2D texture, preferably through some kind of shader.
In my mind, black could correspond to flat and white could correspond to a hill.
I want to have about 4 million points on my terrain and update them at each step in my program.
How would I use a geometry shader to do this? Is a shader able to generate new vertices?
The simplest way would be to generate a large triangle strip grid, upload it to a VBO and draw it, using the vertex shader to alter just the up coordinate. The vertex shader can also generate normals from the heightmap (or supply a normal map), which then get passed to the fragment shader for lighting.
To avoid storing a huge amount of data for the vertices, use gl_VertexID to generate the vertex positions from scratch in the vertex shader. Don't bind any buffers; simply call glDrawArrays(GL_TRIANGLE_STRIP, 0, lots).
As GuyRT mentioned, a tessellation shader would be good too and allow you to vary the tessellation detail based on the camera's distance to the mesh. This would be more work though.
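A sketch of the gl_VertexID approach, assuming a gridSize x gridSize grid (e.g. 2048 x 2048 for roughly 4 million points); it is drawn as points for simplicity, since reconstructing triangle-strip order from gl_VertexID takes a little extra index arithmetic:

#version 330 core
uniform sampler2D heightMap;   // your texture: black = flat, white = hill
uniform mat4 mvp;
uniform int gridSize;          // vertices per side
void main() {
    // Derive a 2D grid position from the vertex index.
    int x = gl_VertexID % gridSize;
    int z = gl_VertexID / gridSize;
    vec2 uv = vec2(x, z) / float(gridSize - 1);
    float h = textureLod(heightMap, uv, 0.0).r;   // sample the heightmap
    vec3 pos = vec3(uv.x * 2.0 - 1.0, h, uv.y * 2.0 - 1.0);
    gl_Position = mvp * vec4(pos, 1.0);
}

// C++ side: a core profile still needs an (empty) VAO bound.
glBindVertexArray(emptyVAO);
glDrawArrays(GL_POINTS, 0, gridSize * gridSize);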