C++/OpenGL - 2D texture in front of 3D object

I'm fairly new to OpenGL. I have a 3D object and a 2D image drawn as a HUD. At this moment, it looks like this. What I want to do now is to put the 2D texture from the HUD onto the visible part of the 3D object (in this case, the front of a skull). As far as I know, what I need to do is:

1. Check which vertices are visible (again, as far as I know, and after searching StackOverflow, I think this question can answer my question about how to check whether a vertex is visible).
2. If a vertex is visible, transform this 3D point into a 2D point (just use gluProject to get 2D coordinates; a sketch of this follows below).
3. Knowing the 2D coordinates of a vertex, I can compare it to pixels on the texture, which brings me directly to texturing.

And here's the problem: I don't have any idea how to do point 3. I have the positions of the visible 3D vertices in 2D, I have a 2D texture, and no idea how to use them together. I was thinking of using it in a similar way to the 2D draw, but I have far more points to constrain than in 2D quad texturing.
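For step 2, here is a minimal sketch of projecting an object-space vertex to window coordinates with gluProject under the fixed-function pipeline; the helper name projectVertex is made up for illustration:

    #include <GL/glu.h>

    // Hypothetical helper: project an object-space point to window pixels.
    // Assumes the modelview/projection matrices and viewport that rendered
    // the object are still current when this is called.
    bool projectVertex(GLdouble ox, GLdouble oy, GLdouble oz,
                       GLdouble& winX, GLdouble& winY, GLdouble& winZ)
    {
        GLdouble model[16], proj[16];
        GLint view[4];
        glGetDoublev(GL_MODELVIEW_MATRIX, model);   // current modelview
        glGetDoublev(GL_PROJECTION_MATRIX, proj);   // current projection
        glGetIntegerv(GL_VIEWPORT, view);           // x, y, width, height
        // gluProject returns GL_TRUE on success; winX/winY are window pixels.
        return gluProject(ox, oy, oz, model, proj, view,
                          &winX, &winY, &winZ) == GL_TRUE;
    }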

Related

Texturing 3d polygons in OpenGL with C++

I'm new to OpenGL and programming in general, and I'm trying to paste a picture onto all surfaces of a cube and a pyramid so the picture moves and rotates with them.
I tried a lot of tutorials, but most of them focus on 2d or use other programming languages like C#.
How can I apply textures to my polygons?
[...] but most of them focus on 2d [...]
You have to wrap a 2d texture around the 3d mesh by putting different parts of the 2d texture on the faces of the mesh. Each face of the 3d object is two-dimensional. You have to define a texture coordinate attribute for each vertex of a face (respectively primitive) to select the area of the 2d texture that is displayed on that primitive.
See also How do opengl texture coordinates work?.
Note, 3d textures contain voxels; that's something completely different.
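As a concrete illustration, here is a minimal legacy-OpenGL sketch that assigns a texture coordinate to each of the four vertices of one cube face; it assumes a texture object has already been created and bound:

    // Texture the front face of a unit cube (repeat per face with the
    // appropriate vertices). Legacy immediate-mode OpenGL for brevity.
    glEnable(GL_TEXTURE_2D);
    glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, 1.0f); // texture bottom-left
        glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, 1.0f); // texture bottom-right
        glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f,  1.0f, 1.0f); // texture top-right
        glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f,  1.0f, 1.0f); // texture top-left
    glEnd();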

How do I map a texture correctly onto a convex polygon in SFML or OpenGL?

I want to represent my objects as textured convex polygons. For the most part those will just be rotated rectangles, but I want to support convex shapes too, and that's where the problems arise.
I worked with Blender a while ago, and there you could unwrap the 3D objects and explicitly tell Blender which vertex of the shape has which position on the texture.
Would it maybe be better to just require the texture to have the size of the bounding rectangle of the shape, so I can simply apply the texture with SFML?
PS: I'm sorry I can't post pictures to clarify my question.
or OpenGL
In OpenGL, typically you'll have two (or more!) vertex attributes: position and texture coordinate. That's basically saying which vertex of the Shape has which Position on the Texture.
That's what SFML has to be doing internally, and since it's open source, you might just peek inside and see if your "bounding rectangle" idea has a chance of working (my guess is that it indeed does).
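For reference, a sketch of that per-vertex mapping in SFML 2.x; each sf::Vertex pairs a screen position with a texture coordinate (in pixels, not 0..1), and the pentagon coordinates below are made up for illustration:

    #include <SFML/Graphics.hpp>

    // Build a textured convex pentagon; coordinates are illustrative only.
    sf::VertexArray makeTexturedPentagon()
    {
        sf::VertexArray poly(sf::TriangleFan, 5);
        const sf::Vector2f pos[5] = {{200,100},{300,170},{260,290},{140,290},{100,170}};
        const sf::Vector2f tex[5] = {{128,  0},{255, 90},{205,255},{ 50,255},{  0, 90}};
        for (int i = 0; i < 5; ++i) {
            poly[i].position  = pos[i];   // where the vertex lands on screen
            poly[i].texCoords = tex[i];   // where it samples the texture
        }
        return poly;
    }
    // Usage: window.draw(makeTexturedPentagon(), sf::RenderStates(&texture));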

OpenGL - Convert 2D Texture Coordinates into 3D Coordinates

I've got a 2D texture on a 3D sphere and I want to know how to transfer a 2D coordinate on the texture into a 3D coordinate. I know it has to do with the clipping of the texture: I'm using the auto clipping function of OpenGL to put the texture on the sphere.
Edit:
To clarify the problem:
I have a 2D plane, which is an image containing borders drawn in red. Now I put objects on this plane that have a collision radius and are wildly moving around. Whenever the objects collide with the red border, they bounce back.
Now I take this 2D plane and wrap it around a 3D sphere. At the positions of the circles I want to put 3D models that move on the sphere. The problem now is to get from the "simple" 2D coordinates on the plane to the more complicated 3D coordinates on the sphere, to position the 3D models correctly.
My first approach would be to map the 2D coordinates to spherical coordinates, which can easily be transferred into 3D coordinates, but how would I do this?
You don't "convert" the 2D coordinate to a 3D coordinate. The 2D coordinates you have are UV coordinates (from 0 to 1) and they represent a position in the texture space. What you do is to map these UV coordinates to the vertices.
You can read more about UV mapping here.
In OpenGL, it depends on which version you are using. Either you use glTexCoord calls before the glVertex calls (in old versions of OpenGL), or, in newer versions, you store the texture coordinates in a VBO as a vertex attribute so they reach the fragment shader interpolated across each primitive.
If you are planning to use the gluSphere() function, you don't need to worry about calculating UV texture coordinates, since GLU can generate them for you (enable this with gluQuadricTexture).
Here you can check the gluSphere() documentation.
Here is some example code.
If you are planning to render your own sphere, check this question
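If you do compute your own sphere vertices, here is a minimal sketch of the spherical mapping the asker proposes, assuming equirectangular UVs in [0,1] and a y-up sphere of radius r centered at the origin (adjust the axis convention to your renderer):

    #include <cmath>

    // Map a UV pair to a point on the sphere surface (illustrative helper).
    void uvToSphere(double u, double v, double r,
                    double& x, double& y, double& z)
    {
        const double kPi   = 3.14159265358979323846;
        const double theta = u * 2.0 * kPi;  // longitude around the y axis
        const double phi   = v * kPi;        // latitude measured from the pole
        x = r * std::sin(phi) * std::cos(theta);
        y = r * std::cos(phi);
        z = r * std::sin(phi) * std::sin(theta);
    }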

Texture mapping with cylinder intermediate surface manually

I'm working on a scanline renderer for a class project. The renderer works so far: it reads in a model (mostly using the Utah teapot), computes vertex/surface normals, and can do flat and Phong shading. I'm now working on adding texture mapping, which is where I'm running into problems (I cannot use any OpenGL methods other than actually drawing the points on the screen).
So, I read a texture into my app and have a 2D array of RGB values. I know that the concept is to map the texture from 2D texture space to a simple 3D object (in my case, a cylinder). I know that you then map the intermediate surface onto the object surface.
However, I don't actually know how to do those things :). I've found some formulas for mapping a texture to a cylinder, but they always seem to leave out details such as which values to use. I also don't know how to take a vertex coordinate of my object and get the cylinder value for that point. There are some other StackOverflow posts about mapping to a cylinder, but they 1) deal with newer OpenGL, with shaders and such, and 2) don't deal with intermediate surfaces, so I'm not sure how to translate the knowledge from them.
So, any help on pseudo code for mapping a texture onto a 3D object using a cylinder as an intermediate surface would be greatly appreciated.
You keep using the phrase "intermediate surface", which does not describe the process correctly, yet hints at what you have in your head.
Basically, you're asking for a way to map every point on the teapot's surface onto a cylinder (assuming that the texture will be "wrapped" on the cylinder).
Just convert your surface point into cylindrical coordinates (r, theta, height), then use theta as u and height as v (texcoords).
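A minimal sketch of that conversion, assuming the cylinder's axis is the y axis and the model spans [yMin, yMax]; cylindricalUV is a made-up helper name:

    #include <cmath>

    // Convert a surface point to cylinder-based texture coordinates.
    void cylindricalUV(double x, double y, double z,
                       double yMin, double yMax,
                       double& u, double& v)
    {
        const double kPi   = 3.14159265358979323846;
        const double theta = std::atan2(z, x);   // angle around the axis, -pi..pi
        u = (theta + kPi) / (2.0 * kPi);         // remap angle to 0..1
        v = (y - yMin) / (yMax - yMin);          // remap height to 0..1
    }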

Convert stack of 2d images into 3d image, volume rendering

I want to do a texture-based volume render of CT data. I have a stack of 2D CT images that I'd like to use as a 3D texture in OpenGL (JOGL, really). I have to do it the way that uses polygon proxy geometry which shifts when the viewing parameters change. How can I convert the 2D images to one 3D texture? I have not been able to find anything about how OpenGL expects 3D images to be formatted. I saw this: https://stackoverflow.com/questions/13509191/how-to-convert-2d-image-into-3d-image , but I don't think it's the same.
Also, I am still confused about this volume rendering technique. Is it possible to take a 3D location in the 3D texture and map it to a 2D corner of a quad? I found this example: http://www.felixgers.de/teaching/jogl/texture3D.html but I don't know if it means you have to use 3D vertices. Does anyone know more sources with explicit examples?
See
http://www.opengl.org/registry/doc/glspec40.core.20100311.pdf
section 3.8.3, on defining 3D texture images.
This results in a 3d cube of texels, and yes, you can map a 3d location in this cube to a corner of a quad.
OpenGL does have a 3D texture format, where each texel is a small subvolume of a [0;1]^3 cube. When you texture a triangle or a quad with this texture, it is as if you cut a thin slice out of this volume. If you want a true volumetric rendering, you must write a volume raycaster. If you Google "GPU direct volume rendering" you should find plenty of tutorials.
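To answer the formatting part concretely: the slices are simply concatenated into one tightly packed block and handed to glTexImage3D. A sketch in the C API follows (JOGL exposes the same entry point); it assumes every slice is width*height RGB8 texels and a GL 1.2+ context, with the function loaded via an extension loader where the platform headers lack it:

    #include <GL/gl.h>
    #include <vector>

    // Upload a stack of 2D slices as one 3D texture (illustrative helper).
    GLuint upload3DTexture(const std::vector<std::vector<unsigned char>>& slices,
                           GLsizei width, GLsizei height)
    {
        // OpenGL wants the volume as one contiguous block, slice after slice.
        std::vector<unsigned char> volume;
        for (const std::vector<unsigned char>& slice : slices)
            volume.insert(volume.end(), slice.begin(), slice.end());

        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_3D, tex);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage3D(GL_TEXTURE_3D, 0, GL_RGB8, width, height,
                     (GLsizei)slices.size(), 0, GL_RGB, GL_UNSIGNED_BYTE,
                     volume.data());
        return tex;
    }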