OpenGL cube map using a non-static texture

What I am trying to understand is this: let's say we have an interactive application (game, simulation, etc.). Instead of using an image loaded from memory to texture the sides of a cube, how do I use the rendered content taken from each direction as the texture instead?
So imagine a camera placed inside the cube pointing in the +z direction. The side immediately in front of the camera would be textured with an image of what exists in front of the camera, the top side of the cube would be textured with an image of what exists above the camera, and so on.
Hope this is clear.
Thanks

That's called render to texture, and in OpenGL it is achieved using Framebuffer Objects (FBOs).
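A minimal sketch of the FBO approach, assuming an OpenGL 3.x context and a 512×512 cube map; renderScene() and viewMatrixForFace() are hypothetical helpers that draw your world with a 90° field of view looking down each axis from the cube's center:

```cpp
// Create the cube map texture that will receive the rendered views.
GLuint cubeTex;
glGenTextures(1, &cubeTex);
glBindTexture(GL_TEXTURE_CUBE_MAP, cubeTex);
for (int face = 0; face < 6; ++face)
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, GL_RGBA8,
                 512, 512, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// FBO with a depth renderbuffer so the scene depth-tests correctly.
GLuint fbo, depthRb;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glGenRenderbuffers(1, &depthRb);
glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, 512, 512);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, depthRb);
glViewport(0, 0, 512, 512);

// Render the scene once per cube face.
for (int face = 0; face < 6; ++face) {
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, cubeTex, 0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    renderScene(viewMatrixForFace(face));   // hypothetical scene/camera helpers
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);       // back to the window's framebuffer
```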

On OpenGL implementations that don't support EXT_framebuffer_object you can render into a subset of the primary framebuffer using glViewport() and then copy that image into a texture via glCopyTexSubImage2D().
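A sketch of that fallback path, reusing the hypothetical helpers and the cubeTex texture from above; each copy must happen before the back buffer is overwritten:

```cpp
// Render each face's view into the lower-left 512x512 corner of the
// window, then copy those pixels into the matching cube map face.
glViewport(0, 0, 512, 512);
glBindTexture(GL_TEXTURE_CUBE_MAP, cubeTex);
for (int face = 0; face < 6; ++face) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    renderScene(viewMatrixForFace(face));   // hypothetical helpers, as above
    glCopyTexSubImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0,
                        0, 0,       // destination offset inside the texture
                        0, 0,       // lower-left corner of the framebuffer region
                        512, 512);  // size of the region to copy
}
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);  // wipe before the real frame
```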

Related

Select source rectangle from video texturing

I am doing video texturing onto a rectangle surface I created. I need to create two more rectangles of different sizes and then copy a part of the video texture running on the first surface (e.g. the middle part of the video) and play it on the new surfaces. Is this possible using OpenGL ES? Through my native video surface renderer I can do this and map it to the OGLES application. I was just wondering whether it is possible to do it directly from the OGL app itself, by copying a selected rectangle from one of the video texturing surfaces?
If your texture is full motion video, you should not copy the texture data, because that will be too slow to keep up with video frame rates. You should avoid using glTexImage2D() and instead use the EGL Image Extensions as detailed in my third article here:
http://montgomery1.com/opengl/
But either way, once you have the image in a texture and the texture is bound with glBindTexture(), any number of rectangles you draw will be textured with that same currently bound texture, without further copying. These rectangles are actually geometry constructed of triangles, not "surfaces"; the framebuffer is the surface. The texture coordinates can differ for each rectangle, which lets you crop and/or scale the texture mapping independently for each one.
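As a concrete illustration, here is a desktop-GL immediate-mode sketch (OpenGL ES would use vertex arrays, but the texture-coordinate idea is identical); videoTex is a hypothetical name for the texture your video decodes into:

```cpp
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, videoTex);   // hypothetical: the video texture

// Rectangle 1: the full video (texcoords span the whole [0,1] range).
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
glTexCoord2f(1.0f, 0.0f); glVertex2f( 0.0f, -1.0f);
glTexCoord2f(1.0f, 1.0f); glVertex2f( 0.0f,  0.0f);
glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  0.0f);
glEnd();

// Rectangle 2: only the middle of the video, with no extra copy --
// the texcoords select the sub-rectangle [0.25,0.75] x [0.25,0.75].
glBegin(GL_QUADS);
glTexCoord2f(0.25f, 0.25f); glVertex2f(0.2f, 0.2f);
glTexCoord2f(0.75f, 0.25f); glVertex2f(0.8f, 0.2f);
glTexCoord2f(0.75f, 0.75f); glVertex2f(0.8f, 0.8f);
glTexCoord2f(0.25f, 0.75f); glVertex2f(0.2f, 0.8f);
glEnd();
```

The second rectangle shows the middle part of the video simply because its texture coordinates span [0.25, 0.75] instead of [0, 1]; no pixel data is copied anywhere.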

OpenGL: selecting an area on a model

I need some help with surface area selection on a 3D model rendered in OpenGL, by picking points with the mouse. I know how to get a point in world coordinates but can't find a way to select an area. Later I need to remesh the selected area and map an image over it, which I know how to do.
Well, OpenGL by itself can't help you there. OpenGL is a drawing API: you draw things, but once the drawing commands have been executed, all that's left are pixels in a framebuffer, and OpenGL has no recollection of the geometry whatsoever.
You can use OpenGL to implement image-based area selection algorithms, for example by drawing each face with a unique index color into an off-screen framebuffer. By looking at which values appear there, you know which faces are present in a given area.
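A minimal sketch of that color-index picking, assuming an 8-bit RGBA framebuffer, at most 2^24 faces, and hypothetical drawFace()/faceCount/mouseX/mouseY/viewportHeight for your mesh and input state:

```cpp
// Encode each face's index in a unique color, draw, then read back.
glDisable(GL_LIGHTING);
glDisable(GL_TEXTURE_2D);
glClearColor(1.0f, 1.0f, 1.0f, 1.0f);   // white = "no face"
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

for (unsigned i = 0; i < faceCount; ++i) {   // faceCount, drawFace: hypothetical
    glColor3ub( i        & 0xFF,             // face index packed into RGB
               (i >> 8)  & 0xFF,
               (i >> 16) & 0xFF);
    drawFace(i);
}

// Read the pixel under the mouse and decode the face index.
unsigned char px[4];
glReadPixels(mouseX, viewportHeight - mouseY, 1, 1,
             GL_RGBA, GL_UNSIGNED_BYTE, px);
unsigned faceIndex = px[0] | (px[1] << 8) | (px[2] << 16);
```

For an area rather than a single point, read back the whole selection rectangle with glReadPixels() and collect the distinct indices found in it.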
Later I need to remesh
This is called topology modification and is completely outside the scope of OpenGL.
that selected area and map an image over it which I know
You can use an image-based approach for this as well; however, you must first decide how you want to map images to faces. If you want to unwrap the mesh, OpenGL is of no help. However, if you want the user to be able to "directly draw" onto the mesh, you can render the mesh's texture coordinates into another off-screen framebuffer and thereby reverse-map screen coordinates to texture coordinates.
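A sketch of that reverse-mapping idea, assuming shaders are available; the fragment shader simply emits the interpolated texture coordinate as a color, which you read back under the cursor (an 8-bit framebuffer limits precision, so a float render target is preferable). The mouseX/mouseY/viewportHeight names are hypothetical input state, as above:

```cpp
// Fragment shader: writes the mesh's interpolated texture coordinate
// into the red/green channels of an off-screen framebuffer.
const char* fragSrc = R"(
    varying vec2 uv;                 // passed through from the vertex shader
    void main() {
        gl_FragColor = vec4(uv, 0.0, 1.0);
    }
)";

// After rendering the mesh with this shader into the off-screen buffer:
float texel[4];
glReadPixels(mouseX, viewportHeight - mouseY, 1, 1, GL_RGBA, GL_FLOAT, texel);
float u = texel[0], v = texel[1];    // texture coords under the cursor
// You can now splat the brush/image at (u, v) in the mesh's texture.
```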

How to texture OpenGL GLUT objects (C++)

I have already tried and succeeded at loading a texture from a BMP file and drawing quads and triangles with that texture. However, I need to apply the loaded texture to objects drawn with glutSolidDodecahedron and glutSolidSphere. How can I do this? Please include some code if possible.
Note: I HAVE to use those functions, I'm not allowed to draw them from scratch.
Neither glutSolidDodecahedron nor glutSolidSphere specifies texture coordinates, at least not according to any documentation a quick web search turns up. I had a quick look at the FreeGLUT implementations, and they indeed do not specify texture coordinates.
If you can use shaders, you can derive the 2D texture coordinates from the 3D location of the vertices. Spheres and dodecahedrons are pretty regular shapes, so you can simply do a spherical projection (convert the vertex position to spherical coordinates and drop the radius component).
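A sketch of such a spherical projection as a legacy-GLSL vertex shader, assuming the shape is centered at the origin (which holds for the GLUT solids):

```cpp
// Vertex shader: derive 2D texture coordinates from the object-space
// vertex position by spherical projection (radius component dropped).
const char* vertSrc = R"(
    varying vec2 uv;
    void main() {
        vec3 p = normalize(gl_Vertex.xyz);                // direction from center
        uv.x = atan(p.y, p.x) / 6.2831853 + 0.5;          // longitude -> [0,1]
        uv.y = acos(clamp(p.z, -1.0, 1.0)) / 3.14159265;  // latitude  -> [0,1]
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    }
)";
```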

Background image in OpenGL

I'm making a 3D asteroids game on Windows (using OpenGL and GLUT) where you move through a bunch of obstacles in space and survive. I'm looking for a way to set an image background instead of the boring background color options. I'm new to OpenGL and all I can think of is to texture-map a sphere and give it a ridiculously large radius. What is the standard way of setting an image background in a 3D game?
The standard method is to draw two texture-mapped triangles whose vertex coordinates are x, y = ±1, z = 0, w = 1, with both the modelview and projection matrices set to the identity matrix.
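A fixed-function sketch of that, with backgroundTex standing in for whatever texture holds your image:

```cpp
// Draw the background as two triangles covering the whole viewport.
// Both matrices are identity, so the vertices land directly in clip space.
glMatrixMode(GL_PROJECTION); glPushMatrix(); glLoadIdentity();
glMatrixMode(GL_MODELVIEW);  glPushMatrix(); glLoadIdentity();

glDepthMask(GL_FALSE);                        // background must not write depth
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, backgroundTex);  // hypothetical: your loaded image

glBegin(GL_TRIANGLES);
glTexCoord2f(0, 0); glVertex2f(-1, -1);
glTexCoord2f(1, 0); glVertex2f( 1, -1);
glTexCoord2f(1, 1); glVertex2f( 1,  1);

glTexCoord2f(0, 0); glVertex2f(-1, -1);
glTexCoord2f(1, 1); glVertex2f( 1,  1);
glTexCoord2f(0, 1); glVertex2f(-1,  1);
glEnd();

glDepthMask(GL_TRUE);
glMatrixMode(GL_PROJECTION); glPopMatrix();
glMatrixMode(GL_MODELVIEW);  glPopMatrix();
```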
Of course, in the context of a 'space' game, where one might want the background to rotate, the natural choice is to render a cube with a cube map (perhaps showing galaxies). Since depth buffering is turned off during the background rendering, the cube doesn't even have to be "infinitely" large; a unit cube will do, as there is no way to tell how close the camera is to it.
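A sketch of the cube map variant; rotationOnlyViewMatrix and drawUnitCube() are hypothetical (the view matrix with its translation zeroed, and a unit cube whose texture coordinates are simply its vertex positions):

```cpp
// Unit-cube skybox: keep only the camera's rotation so the background
// turns with the view but never gets closer or farther away.
glDepthMask(GL_FALSE);                  // background must not write depth
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadMatrixf(rotationOnlyViewMatrix);  // hypothetical: rotation-only view matrix
glEnable(GL_TEXTURE_CUBE_MAP);
glBindTexture(GL_TEXTURE_CUBE_MAP, skyTex);  // hypothetical: the cube map texture
drawUnitCube();                         // hypothetical: 12 triangles, texcoords = positions
glDisable(GL_TEXTURE_CUBE_MAP);
glPopMatrix();
glDepthMask(GL_TRUE);
```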

Can you render two quads with transparency at the same point?

I'm learning about how to use JOGL and OpenGL to render texture-mapped quads. I have a test program and a test quad, and I figured out how to enable GL_BLEND so that I can specify the alpha value of a vertex to make a quad with a sort of gradient... but now I want this to show through to another textured quad at the same position.
Drawing two quads with the same vertex locations didn't work; it only renders the first quad. Is this possible at all, or will I need to construct a custom texture on the fly based on what I want and then draw one quad with that texture? I was really hoping to take advantage of blending here...
Have a look at which glDepthFunc you're using. Perhaps you're using GL_LESS/GL_GREATER; it might work if you use GL_LEQUAL/GL_GEQUAL instead.
It's difficult to tell from the question what exactly you're trying to achieve, but here's a try.
For transparency to work correctly in OpenGL, you need to draw the polygons from the furthest to the nearest relative to the camera. If your scene is static, this is definitely something you can do. But if it's rotating and moving, it is usually not feasible, since you would have to sort the polygons for each and every frame.
More on this can be found in this FAQ page:
http://www.opengl.org/resources/faq/technical/transparency.htm
For alpha blending, the renderer blends all colors behind the current transparent object (from the camera's point of view) at the time the transparent object is rendered. If the transparent object is rendered first, there is nothing behind it to blend with. If it's rendered second, it will have something to blend it with.
Try rendering your opaque quad first and your transparent quad second. Also, make sure your opaque quad is slightly behind your transparent quad (relative to the camera) so you don't get z-fighting artifacts.
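The call sequence looks like this (the function names are the same in JOGL, just invoked on the GL object); drawOpaqueQuad() and drawTransparentQuad() are hypothetical stand-ins for your two quads:

```cpp
// Opaque geometry first, then blended geometry, back to front.
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);

drawOpaqueQuad();                  // hypothetical: the quad slightly behind

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthMask(GL_FALSE);             // don't write depth for transparent geometry
drawTransparentQuad();             // hypothetical: the quad in front, with vertex alpha
glDepthMask(GL_TRUE);
glDisable(GL_BLEND);
```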