A cross-platform way to render a particle system using a 1D texture? - opengl

I'm trying to use a 1D texture in OpenGL and fill it with the positions of the particles to be rendered by the shader. I have no idea how to do it. Can someone tell me (or give me a code example in C/C++ and GLSL / GLSL ES) how to define a 1D texture with the required properties, how to fill it with a float array, and how to use it from the shader?
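A minimal sketch of one common approach, assuming a desktop GL 3.x context; `count` and `positions` (a tightly packed array of `count * 3` floats) are placeholders, not code from the question:

```cpp
// Create a 1D float texture and fill it with particle positions.
// Assumes a valid GL 3.x context; "positions" points to count * 3 floats.
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_1D, tex);
// GL_RGB32F keeps full float precision; width = number of particles.
glTexImage1D(GL_TEXTURE_1D, 0, GL_RGB32F, count, 0,
             GL_RGB, GL_FLOAT, positions);
// Disable filtering so each texel is fetched exactly, not interpolated.
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
```

In the vertex shader you can then read one position per particle with something like `vec3 p = texelFetch(u_positions, gl_VertexID, 0).xyz;` (`u_positions` being a `sampler1D` uniform). Note that GLSL ES 2.0 has no `sampler1D` at all, so for genuinely cross-platform code a `count × 1` 2D texture is the safer choice.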

Related

How to loop over every pixel in a 3D texture/buffer without using compute shaders

I understand how you would do this with a 2D buffer. Just draw two triangles that make a quad fully encompassing the 2D buffer space; that way, when the fragment shader runs, it runs for every pixel in the buffer.
Question: How would this work for a 3D buffer?
You could just draw a lot of triangles, one cross-section of the 3D buffer at a time. However, if you had a texture that was 1x1x256, that would mean drawing two triangles for each of the 256 slices to iterate over all of the pixels. I know this is an extreme case and there are ways of optimizing this solution. However, I feel like there is a more elegant solution that I am missing.
What I am trying to do: I am trying to make a 3D fluid solver that iterates through each of the pixels of the 3D texture and computes its velocity, density, etc. I am trying to do this via the fragment shader, because I am using OpenGL 3.0, which does not support compute shaders.
#version 330 core
out vec4 FragColor;
uniform sampler3D volume;
void main()
{
    // compute the fluid density, velocity, and center of mass, then
    // output the values to different color channels of the 3D buffer:
    FragColor = vec4(density, velocity.xy, centerOfMass);
}
At some point in the fragment shader, you're going to write some statement of the form:
vec4 value = texture(my_texture, TexCoords);
Where TexCoords is the location in my_texture that maps to some particular value in the source texture. But... that mapping is entirely up to you. Nobody's making you use gl_FragCoord.xy / textureSize(my_texture). You could just as easily use vec3(gl_FragCoord.x, Y_value, gl_FragCoord.y) / textureSize(my_texture), which puts the Y component of the fragment location in the Z dimension of the texture. Y_value in this case is a value passed from the outside that tells which vertical slice of the 3D texture to use.
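As a sketch, that remapping could look like this in the fragment shader, with `Y_value` being the uniform supplied from the outside:

```glsl
uniform sampler3D my_texture;
uniform float Y_value;  // which vertical slice to read, set per draw call

void main()
{
    // put the fragment's Y in the Z dimension of the texture
    vec3 coords = vec3(gl_FragCoord.x, Y_value, gl_FragCoord.y)
                / vec3(textureSize(my_texture, 0));
    vec4 value = texture(my_texture, coords);
    // ...
}
```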
Of course, whatever mapping you use to fetch the data must also be used when you write the data. If you're writing via fragment shader outputs, that poses a problem. A 3D texture can only be attached to an FBO as either a single 2D slice or as a layered set of 2D slices, with these slices always being along the Z dimension of the image. So even if you try to read in slices along the Y dimension, it has to be written in Z slices. So you'd be moving around the location of the data, which makes this non-viable.
If you're using image load/store, then you have no problem. You can just write to the appropriate texel (indeed, you can read from it as an image using integer coordinates, so there's no need to divide by the texture's size).
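For completeness, a hedged sketch of the image load/store variant; note this needs GL 4.2+ or ARB_shader_image_load_store, which plain GL 3.0 does not have, and `slice_y` is a placeholder uniform:

```glsl
layout(rgba32f) uniform image3D volume;
uniform int slice_y;  // placeholder: which Y slice this pass handles

void main()
{
    // integer texel coordinates: no division by the texture size needed
    ivec3 texel = ivec3(int(gl_FragCoord.x), slice_y, int(gl_FragCoord.y));
    vec4 value = imageLoad(volume, texel);
    imageStore(volume, texel, value);  // write back to the same texel
}
```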

C++/OpenGL - 2D texture in front of 3D object

I'm fairly new to OpenGL. I have a 3D object and a 2D image drawn as a HUD. At this moment, it looks like this. What I want to do now is to put the 2D texture from the HUD on the visible part of the 3D object (in this case, the front of a skull). As far as I know, what I need to do is:
check which vertices are visible (again, as far as I know and after searching StackOverflow, I think this question can answer my question about how to check whether a vertex is visible)
if a vertex is visible, transform this 3D point into a 2D point (just use gluProject to get 2D coordinates)
since I then know the 2D coordinates of each vertex, I can compare them to pixels on the texture, which brings me directly to texturing.
And here's the problem - I have no idea how to do step 3. I have the visible 3D vertices projected into 2D, I have the 2D texture, and no idea how to use this. I was thinking of using it in a similar way to 2D drawing, but my points are far more constrained than in simple 2D quad texturing.

Strange result after Texture Mapping with OpenGL

Step: Find 68 landmarks on a 2D image (with dlib).
So I know the coordinates of all 68 landmarks!
Create a 3D mask of a generic face (with OpenGL) -> Result
I know all the 3D coordinates of the face model as well!
Now I want to use this tutorial to texture-map all triangles from the 2D image onto the generic 3D face model.
Does anyone know an answer to my problem? If you need more information, just send me a message and I will send you what you need. Thanks, everybody!
EDIT: After finding this tutorial, I changed the size of my picture to get a width and a height that are powers of two.
Then I divide all my picture coordinates (landmarks) by the size:
landmark(x) / height and landmark(y) / width
Picture:
Result:
The larger the width and height, the better the image definition!
What you're seeing looks like you passed all your vertices directly to glDrawArrays without any reuse. So each vertex is used for a single triangle in your result, rather than being used in 6 or more triangles in the original picture.
You need to use an element buffer to describe how all your triangles are made up of the vertices you have, and use glDrawElements to draw them.
Also note that some of your polygons on the original image are in fact not triangles. You'll probably want to insert additional triangles for those polygons (the inside of the eyes).
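A hedged sketch of that element-buffer setup, assuming a GL 3.x context and that `indices` is a `std::vector<GLuint>` listing the three vertex indices of each triangle (the names are placeholders):

```cpp
GLuint ebo;
glGenBuffers(1, &ebo);
// The element buffer binding is recorded in the currently bound VAO.
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER,
             indices.size() * sizeof(GLuint),
             indices.data(), GL_STATIC_DRAW);

// Each landmark vertex (position + UV) is stored once and reused by
// every triangle that touches it:
glDrawElements(GL_TRIANGLES, (GLsizei)indices.size(),
               GL_UNSIGNED_INT, nullptr);
```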

OpenGL / GLSL Terrain Blending Textures Solution

I'm trying to get a map editor to work. My idea was to create a texture array for blending multiple terrain textures. A single texture channel (r, for example) is bound to a terrain texture's alpha.
The question is: is it possible to create a kind of buffer that can be read like a texture sampler and stores as many channels as I need?
For example:
texture2D(buffer, uv)[0].rgb
Is this too far-fetched?
This would be faster than creating 7 textures and sending them to the GLSL shader.
You can use a texture array and access the individual layers by sampling with a third texture coordinate that specifies the layer.
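As a sketch, sampling a texture array in GLSL 3.30 looks like this; in modern GLSL the call is `texture` on a `sampler2DArray` rather than `texture2D`, and the names here are illustrative:

```glsl
#version 330 core
// created on the C++ side with glTexImage3D(GL_TEXTURE_2D_ARRAY, ...)
uniform sampler2DArray terrain;
in vec2 uv;
in float layer;  // which texture in the array
out vec4 color;

void main()
{
    // the third coordinate selects the layer
    color = texture(terrain, vec3(uv, layer));
}
```

All layers of a texture array must share the same size and format, but a single sampler then serves every terrain texture at once.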

In OpenGL 2.1, is it safe to use a 3D tex coord, even in 2d space (with a 2D texture)?

Can I use the 3D tex coord function for 2D textures by setting the Z value to 0 in OpenGL 2.1? Are there any 3D functions that I can not use for 2D? I can't use 2D functions because this is for a 2D/3D wrapper and it's too inefficient to try and guess if the user is inputting 2D or 3D coords when both functions could be used together.
This is going to sound funny, but all texture coordinates in OpenGL are always 4D. As are vertex positions.
To help explain this better, consider a deprecated function like glTexCoord2f (...) - it automatically assigns r = 0.0 and q = 1.0. This behavior extends to vertex arrays: if you have a texture coordinate pointer that only supplies 2 components, GL automatically assigns 0.0 and 1.0 to the remaining two, as illustrated above.
I would suggest you use the nomenclature strq when referring to texture coordinates, by the way. You can access them as xyzw in a shader, but the (fixed-function) pipeline is always documented as referring to them by strq.
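In fixed-function terms, these calls all end up setting the same coordinate (a fragment sketch, assuming `s` and `t` are floats in scope, not a complete program):

```c
/* r and q default to 0.0 and 1.0 when not supplied */
glTexCoord2f(s, t);
glTexCoord3f(s, t, 0.0f);        /* explicit r = 0.0, q still 1.0 */
glTexCoord4f(s, t, 0.0f, 1.0f);  /* fully explicit s, t, r, q */
```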