Is it possible to create a texture only in a fragment shader? (C++)

I have a two-pass rendering pipeline (deferred shading) for point cloud rendering (GLSL 4.30 & C++17).
Shader pipeline:
Pointcloud.vertex --> Pointcloud.fragment --> FullscreenQuad.vertex --> Deferred.fragment
What I want to achieve is to gather some data in the Pointcloud.vertex --> Pointcloud.fragment stage and send it as a texture into the Deferred.fragment shader.
Data like:
vertex ID, gl_FragCoord.z and texture coordinates (available in the Pointcloud.vertex stage).
Basically I want to create two textures in the Pointcloud.fragment shader: at the given texture coordinate, store the depth information in one texture and the vertex ID in the other at the same coordinates.
Is it possible to create and write into textures locally in shaders? It is important to solve this without touching the C++ side.

Shaders cannot allocate resources like textures and buffers. They can use resources, but they cannot create them ex nihilo. You have to create any such resources in the application. If you don't have the ability to modify the application's code, then there's nothing to be done.
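If you (or whoever maintains the application) can change the C++ side after all, the setup is roughly the following. This is a minimal sketch, assuming a hypothetical existing G-buffer FBO called gBufferFbo and placeholder width/height; the variable names and attachment slots are illustrative only.

// Sketch only: create the two extra render targets on the application side.
GLuint depthTex, idTex;
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, width, height, 0, GL_RED, GL_FLOAT, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenTextures(1, &idTex);
glBindTexture(GL_TEXTURE_2D, idTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32UI, width, height, 0, GL_RED_INTEGER, GL_UNSIGNED_INT, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// Attach them as extra colour targets of the point cloud pass.
glBindFramebuffer(GL_FRAMEBUFFER, gBufferFbo);            // hypothetical existing FBO
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, depthTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT2, GL_TEXTURE_2D, idTex, 0);
const GLenum bufs[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2 };
glDrawBuffers(3, bufs);

// Pointcloud.fragment then declares
//   layout(location = 1) out float outDepth;
//   layout(location = 2) out uint  outVertexId;
// and Deferred.fragment samples depthTex and idTex as ordinary texture uniforms.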

Related

OpenGL Multiple Render Targets with multiple gl_Position output

I'm looking for MRT (multiple render targets) where I can write to my buffers at different positions.
Example
Buffer 0 :
gl_Position[0] = vec4(uv,0.,1.);
gl_FragData[0] = vec4(1.);
Buffer 1 :
gl_Position[1] = MVP * pos;
gl_FragData[1] = vec4(0.);
Is it possible to have multiple outputs in a vertex shader?
I can't find any resources about that.
Is it possible to have multiple outputs in a vertex shader?
No, but that doesn't mean you can't get the effect of what you want. Well, you didn't describe in any real detail what you wanted, but this is as close as OpenGL can provide.
What you want seems rather like layered rendering. It is the ability of a Geometry Shader to generate primitives that go to different layers. So you can generate one triangle that renders to one layer, then generate a second triangle that goes to a different layer.
Of course, that raises a question: what's a layer? Well, that has to do with layered framebuffers. See, if you attach a layered image to a framebuffer (an array texture or cubemap texture), each array layer/cubemap face represents a different 2D layer that can be rendered to. The Geometry Shader can send each output primitive to a specific layer in the layered framebuffer. So if you have 3 array layers in an image, your GS can output a primitive to layer 0, 1, or 2, and that primitive will only be rendered to that particular image in the array texture.
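As a rough illustration of the setup (names and sizes are placeholders, not code from your question), attaching a 2D array texture with glFramebufferTexture makes the whole array one layered colour attachment:

GLuint arrayTex, layeredFbo;
glGenTextures(1, &arrayTex);
glBindTexture(GL_TEXTURE_2D_ARRAY, arrayTex);
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8, width, height, 3 /* layers */, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

glGenFramebuffers(1, &layeredFbo);
glBindFramebuffer(GL_FRAMEBUFFER, layeredFbo);
// glFramebufferTexture (not ...Texture2D) attaches all layers at once.
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, arrayTex, 0);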
Depth buffers can be layered as well, and you must use a layered depth buffer if you want depth testing to work at all with layered rendering. All aspects of the primitive's rendering are governed by the layer it gets sent to. So when the fragment shader is run, the outputs will go only to the layer it was rendered to. When the depth test is done, the read for that test will only read from that layer of the depth buffer. And so on, including blending.
Of course, using layered framebuffers means that all of the layers in a particular image attachment have to be from the same texture. And therefore they must have the same Image Format. So there are limitations. But overall, it more or less does what you asked.
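A minimal geometry shader sketch of the idea, assuming two made-up uniforms uMvp0 and uMvp1; it is an illustration of gl_Layer, not a drop-in replacement for your two gl_Position outputs:

#version 330 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 6) out;

uniform mat4 uMvp0;   // hypothetical transform for layer 0
uniform mat4 uMvp1;   // hypothetical transform for layer 1

void main()
{
    // Emit the incoming triangle once into layer 0...
    for (int i = 0; i < 3; ++i) {
        gl_Layer = 0;
        gl_Position = uMvp0 * gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();

    // ...and once more, with a different transform, into layer 1.
    for (int i = 0; i < 3; ++i) {
        gl_Layer = 1;
        gl_Position = uMvp1 * gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}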

Get data back from OpenGL shader?

My computer doesn't support OpenCL on the GPU or OpenGL compute shaders, so I was wondering if it would be a straightforward process to get data back from a vertex or fragment shader?
My goal is to pass 2 textures to the shader and have the shader compute the locations where one texture exists in the other, i.e. where there is a pixel match. I need to retrieve the locations of possible matches from the shader.
Is this plausible? If so, how would I go about it? I have basic OpenGL knowledge; I have set up a program that draws polygons with colors. I really just need a way to get position values back from the shader.
You can render to memory instead of to screen, and then fetch data from it.
Create and bind a Framebuffer Object
Create a Renderbuffer Object and attach it to the Framebuffer Object
Render your scene. The result will end up in the bound Framebuffer Object instead of on the screen.
Use glReadPixels to pull data from the Framebuffer Object.
Be aware that glReadPixels, like most methods of fetching data from GPU memory back to main memory, is slow and likely unsuitable for real-time applications. But it's the best you can do if you don't have features intended for that, like compute shaders, and aren't able to do the read-back asynchronously with Pixel Buffer Objects.
You can read more about Framebuffers here.
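A minimal sketch of those steps, assuming placeholder width/height and a colour-only render target (error checking omitted):

#include <vector>

GLuint fbo, colorRb;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

// Renderbuffer as the colour attachment.
glGenRenderbuffers(1, &colorRb);
glBindRenderbuffer(GL_RENDERBUFFER, colorRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRb);

// ... render the scene as usual; output now lands in the FBO ...

// Pull the pixels back into CPU memory.
std::vector<unsigned char> pixels(width * height * 4);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

glBindFramebuffer(GL_FRAMEBUFFER, 0);   // back to the default framebuffer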

OpenGL / GLSL Terrain Blending Textures Solution

I'm trying to get a map editor to work. My idea was to create a texture array for blending multiple terrain textures. One single texture channel (r, for example) is bound to a terrain texture's alpha.
The question is: is it possible to create some kind of buffer that can be read like a texture sampler and stores as many channels as I need?
For example:
texture2D(buffer, uv)[0].rgb
Is this too far-fetched?
This would be faster than creating 7 textures and sending them to the GLSL shader.
You can use a texture array (a sampler2DArray in GLSL) and access the individual textures by sampling with a third texture coordinate that specifies the layer.
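A small sketch of the idea, with made-up uniform and varying names (uTerrainLayers, vUv) and an arbitrary blend purely for illustration:

#version 330 core
uniform sampler2DArray uTerrainLayers;   // all terrain textures live in one array
in vec2 vUv;
out vec4 fragColor;

void main()
{
    // The third coordinate selects the layer of the array.
    vec3 base   = texture(uTerrainLayers, vec3(vUv, 0.0)).rgb;
    vec3 detail = texture(uTerrainLayers, vec3(vUv, 1.0)).rgb;
    fragColor = vec4(mix(base, detail, 0.5), 1.0);
}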

Creating more than 1 different objects in OpenGL

Well, I have been learning OpenGL with this tutorial: opengl-tutorial.org.
That tutorial does not explain how the shaders work. I mean, is the vertex shader run before the fragment shader?
OK, on to the question: I want to create two objects for practice, one box (a 3D square with a texture) and a pyramid (a 3D triangle with a texture). I don't know how to do it. I know how to draw them with C++/OpenGL, but what about the GLSL side? Do I need to create another program? How can I do that?
(OpenGL 3.3)
OpenGL does not maintain "objects" in the way you seem to assume (the term "object" is used to refer to something internal that OpenGL uses and that you can refer to via an identifier; a vertex buffer, a texture, or a shader are all examples of "objects"). OpenGL is not a scene graph.
You need to create the vertex data for each of your objects in your application (or load that data from a file) and provide it to OpenGL by feeding it into a buffer object.
Then you tell OpenGL to draw a number of vertices from that buffer. OpenGL does not care what that data is or how to draw it. It will only do exactly what you tell it to do. If you tell it "take this block of data that contains vertex coordinates, and now draw 5 triangles", then it will just do that.
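A minimal sketch, assuming made-up vertex arrays cubeVerts and pyramidVerts, a single compiled shader program called program, only a position attribute, and placeholder vertex counts:

GLuint vao[2], vbo[2];
glGenVertexArrays(2, vao);
glGenBuffers(2, vbo);

// Box: one VAO/VBO pair.
glBindVertexArray(vao[0]);
glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
glBufferData(GL_ARRAY_BUFFER, sizeof(cubeVerts), cubeVerts, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);

// Pyramid: a second, independent VAO/VBO pair.
glBindVertexArray(vao[1]);
glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
glBufferData(GL_ARRAY_BUFFER, sizeof(pyramidVerts), pyramidVerts, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);

// Draw loop: one shader program, two draw calls.
glUseProgram(program);
glBindVertexArray(vao[0]);
glDrawArrays(GL_TRIANGLES, 0, 36);   // box
glBindVertexArray(vao[1]);
glDrawArrays(GL_TRIANGLES, 0, 18);   // pyramid

The same vertex and fragment shaders can draw both objects; you only need a second program if the two objects require different shading.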

Applying a shader to a framebuffer object to get a fisheye effect

Let's say I have an application (the details of the application should be irrelevant for solving the problem). Instead of rendering to the screen, I am somehow able to force the application to render to a framebuffer object instead (by messing with GLEW or intercepting a call in a DLL).
Once the application has rendered its content to the FBO, is it possible to apply a shader to the contents of the FBO? My knowledge is limited here, so from what I understand, at this stage all information about vertices is no longer available and all the necessary tests have been applied, so what's left in the buffer is just pixel data. Is this correct?
If it is possible to apply a shader to the FBO, is it possible to get a fisheye effect? (Like this, for example: http://idea.hosting.lv/a/gfx/quakeshots.html)
The technique used in the link above is to create 6 different viewports, render each viewport to a cubemap face and then apply the texture to a mesh.
Thanks
A framebuffer object encapsulates several other buffers, specifically those that are implicitly indexed by fragment location. So a single framebuffer object may bundle together a colour buffer, a depth buffer, a stencil buffer and a bunch of others. The individual buffers are known as renderbuffers.
You're right: there's no geometry in there. For the purposes of reading back the scene you get only final fragment values, which if you're hijacking an existing app will probably be a 2D pixel image of the frame and some other things that you don't care about.
If your GPU has render-to-texture support (originally an extension circa OpenGL 1.3, but you'd be hard-pressed to find a GPU without it nowadays, even in mobile phones) then you can attach a texture as a colour attachment within a framebuffer object. So the rendering code is exactly as it would be normally, but ends up writing the results to a texture that you can then use as a source for drawing.
Fragment shaders can programmatically decide which location of a texture map to sample in order to create their output. So you can write a fragment shader that applies a fisheye lens, though you're restricted to the field of view rendered in the original texture, obviously. Which would probably be what you'd get in your Quake example if you had just one of the sides of the cube available rather than six.
In summary: the answer is 'yes' to all of your questions. There's a brief introduction to framebuffer objects here.
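As a rough illustration of that last step, here is a fragment shader sketch that warps a fullscreen quad's texture lookups radially. uScene, uStrength and vUv are made-up names, and this is a crude single-view approximation, not the six-view cubemap technique from the Quake link:

#version 330 core
uniform sampler2D uScene;      // the texture the application rendered into
uniform float uStrength;       // e.g. 0.5 for a mild fisheye
in vec2 vUv;
out vec4 fragColor;

void main()
{
    vec2 p = vUv * 2.0 - 1.0;                    // centre the coordinates at (0,0)
    float r = length(p);
    vec2 warped = p * (1.0 - uStrength * r * r); // pull edge samples toward the centre
    vec2 uv = warped * 0.5 + 0.5;
    fragColor = texture(uScene, uv);
}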
Look here for some relevant info:
http://www.opengl.org/wiki/Framebuffer_Object
The short, simple explanation is that an FBO is the 3D equivalent of a software frame buffer. You have direct access to the resulting pixels, instead of having to modify a texture and upload it. You can point your shaders' output at an FBO. The link above gives an overview of the procedure.
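For reference, a minimal render-to-texture setup along the lines described above (placeholder names and sizes; the depth renderbuffer is there so the hijacked application's depth testing still works offscreen):

GLuint sceneTex, fbo, depthRb;
glGenTextures(1, &sceneTex);
glBindTexture(GL_TEXTURE_2D, sceneTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, sceneTex, 0);

glGenRenderbuffers(1, &depthRb);
glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRb);

// 1) Render the application's frame with this FBO bound.
// 2) Bind the default framebuffer, bind sceneTex, and draw a fullscreen quad
//    with the fisheye fragment shader sketched in the previous answer.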