I am using SDL2 to create a context for OpenGL. I use SDL_image to load the images, and I bind them to OpenGL textures. But because the coordinate systems aren't the same, the textures come out flipped.
I found two ways to correct this:
Modify the texture after loading it
Advantage: Only done once per texture
Disadvantage: Done on the CPU, which slows down the loading of each texture
Apply a rotation of 180° around the Y and Z axes when rendering
Advantage: Using super fast functions
Disadvantage: Needs to be done multiple times per frame
Is there another way to flip the textures back after they have been loaded with SDL_image? And if not, which method is usually used?
There are a bunch of options. Some that come to mind:
Edit original assets
You can flip the image files upside down with an image processing tool, and use the flipped images as your assets. They will look upside down when viewed in an image viewer, but will then turn out correct when used as textures.
This is the ideal solution if you're in full control of the images. It obviously won't work if you get images from external sources at runtime.
Flip during image load
Some image loading libraries allow you to flip the image during loading. In the documentation for SDL_image that I could find, I did not see such an option. But you might be able to find an alternative library that supports it. And of course you can do this if you write your own image loading code.
This is a good solution. The overhead is minimal since you do the flipping while you're touching the data anyway. One common approach is to read the data row by row and store it in the texture in the opposite order, using glTexSubImage2D().
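For illustration, a rough sketch of that row-by-row approach; pixels, width and height are placeholders for whatever your loader returns, and the data is assumed to be tightly packed RGBA:

// Allocate the texture storage without supplying data...
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
// ...then upload each source row into the mirrored destination row.
for (int row = 0; row < height; ++row) {
    const unsigned char* src = pixels + (size_t)row * width * 4;
    glTexSubImage2D(GL_TEXTURE_2D, 0,
                    0, height - 1 - row,          // flipped destination row
                    width, 1, GL_RGBA, GL_UNSIGNED_BYTE, src);
}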
Flip between loading and first use
You can create a flipped copy of the texture after you have loaded it. The typical way to do this would be to draw a screen-sized quad while sampling the original texture, rendering to an FBO that has the resulting flipped texture as a render target. Or, more elegantly, use glBlitFramebuffer().
This is not very appealing because it involves copying the memory. While it should be quite efficient if you let the GPU create the copy, extra copying is always undesirable. Even if it happens only once for each texture, it can increase your startup/loading time.
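A sketch of the blit variant; fboSrc and fboDst are assumed to be FBOs with the original and the flipped texture attached as color attachments:

glBindFramebuffer(GL_READ_FRAMEBUFFER, fboSrc);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fboDst);
glBlitFramebuffer(0, 0, width, height,   // source rectangle
                  0, height, width, 0,   // destination rectangle, flipped in Y
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);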
Apply transformation to texture coordinates
You can apply a transformation to the texture coordinates in either the vertex or fragment shader. You're talking about rotations in your question, but the transformation you need is in fact trivial. You basically just map the y of the texture coordinate to 1.0 - y, and leave the x unchanged.
This adds a small price to shader execution. But the operation is very simple and fast compared to the texture sampling operation it goes along with. In reality, the added overhead is probably insignificant. While I don't think it's very pretty, it's a perfectly fine solution.
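As a sketch, the whole transformation in a GLSL fragment shader is a single line (tex and texCoord being whatever sampler and coordinate names you already use):

vec4 color = texture2D(tex, vec2(texCoord.x, 1.0 - texCoord.y));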
Invert the texture coordinates
This is similar to the previous option, but instead of inverting the texture coordinates in the shader, you already specify them inverted in the vertex attribute data.
This is often trivial to do. For example, it is very common to texture quads by using texture coordinates of (0, 0), (1, 0), (0, 1), (1, 1) for the 4 corners. Instead, you simply replace 0 with 1 and 1 with 0 in the second components of the texture coordinates.
Or say you load a model containing texture coordinates from a file. You simply replace each y in the texture coordinates by 1.0f - y during reading, and before storing away the texture coordinates for later rendering.
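A small sketch of both variants:

// The common quad texcoords with the second (t) components swapped 0 <-> 1:
GLfloat quadTexCoords[] = {
    0.0f, 1.0f,   // was (0, 0)
    1.0f, 1.0f,   // was (1, 0)
    0.0f, 0.0f,   // was (0, 1)
    1.0f, 0.0f,   // was (1, 1)
};

// And the per-vertex fix-up while reading a model's texture coordinates:
texCoord.y = 1.0f - texCoord.y;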
IMHO, this is often the best solution. It's very simple to do, and has basically no performance penalty.
I would disagree with most of the previous answer's points, except for flipping the image either on load or before first use.
The reason is that if you are following data-driven software development practices, you should never allow code to dictate the nature of data. The software should be designed to support the data accurately. Anything else is not fit for purpose.
Modifying texture coordinates is hackery, despite its ease of use. What happens if you decide at some later stage to use a different image library which doesn't flip the image? Your images will be inverted again during rendering.
Instead, deal with the problem at the source and flip the image during load or before first use (I advocate on load, as it can be integrated into the code that loads the image via SDL_image, and is therefore more easily maintained).
To flip an image, here is a simple C function which illustrates how to do it:
#include <stdlib.h>
#include <string.h>

void flip_image(char* bytes, int width, int height, int bytes_per_pixel)
{
    size_t row_size = (size_t)bytes_per_pixel * width;
    char* buffer = (char*)malloc(row_size);                 /* temporary row buffer */
    for (int i = 0; i < height / 2; i++)
    {
        char* top    = bytes + row_size * i;                /* row i from the top */
        char* bottom = bytes + row_size * (height - 1 - i); /* mirrored row from the bottom */
        memcpy(buffer, top, row_size);    /* save the top row */
        memcpy(top, bottom, row_size);    /* copy the bottom row into the top slot */
        memcpy(bottom, buffer, row_size); /* copy the saved top row into the bottom slot */
    }
    free(buffer);
}
Here is a visual illustration of one iteration of this code loop:
Copy current row N to buffer
Copy row (rows - N) to row N
Copy buffer to row (rows - N)
Increment N and repeat until N == rows/2
Note that thanks to the integer division in height / 2, this also works on images with an odd number of rows: the middle row simply stays in place. (In practice your textures will often have power-of-two dimensions anyway, since older OpenGL versions don't like non-power-of-two textures.)
It should also be noted that if the loaded image does not have a power-of-two width, SDL_image pads it. Therefore, the "width" passed to the function should be derived from the pitch of the surface (its bytes per row), not from its width.
Related
I am trying to create a spectrogram efficiently. Currently I do everything on the CPU, using a texture buffer as a queue: I loop through the whole buffer and push the new data onto it. However, this costs me a lot of CPU time. I want to add a new column of pixel data to the texture and move the old data to the right, so that the new data appears on the left side while the old data moves right. Doing this every frame creates a waterfall/side-scrolling effect.
I am using glTexSubImage2D() to add the new data, but this does not move the old data to the right. How can I achieve this using OpenGL?
I don't see a need to move any data around. Simply treat the texture as circular in the horizontal direction, basically a circular buffer of columns. Then take care of the scrolling during rendering by choosing the texture coordinates accordingly.
Say you want to display n columns at a time. Create a texture of width n, and in each step k store the data in column k % n of the texture:
glTexSubImage2D(GL_TEXTURE_2D, 0, k % n, 0, 1, height, ...);
Then use texture coordinates in the range 1 + (k % n) / n to (k % n) / n in the horizontal direction, with the texture wrap mode set to GL_REPEAT. Or pass an offset to your shader, and add it to the texture coordinates in the GLSL code.
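Roughly, per step (newColumn and offsetLoc are assumptions, and the format/type must match however you created the texture and shader):

int c = k % n;                                     // circular write position
glTexSubImage2D(GL_TEXTURE_2D, 0, c, 0, 1, height,
                GL_RED, GL_FLOAT, newColumn);
// Shader-offset variant: add a uniform to the horizontal coordinate,
// e.g. in GLSL: vec2 uv = vec2(texCoord.x + u_offset, texCoord.y);
glUniform1f(offsetLoc, (c + 1) / (float)n);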
And of course, if you have all data ahead of time, and it's not very large, you can simply store all of it in a texture right from the start, and scroll through it by shifting the texture coordinates.
If you do this the way you say you want to, it will be expensive. You will need to copy the data from the texture back to the CPU (or keep a copy on the CPU), add your new data to it there, and then use glTexSubImage2D to copy the whole new image back again.
An alternative, if you already know all the data, is to place all of it in the texture and then slowly move the texture to the right. If needed, you could draw a black square to cover the parts of the texture you don't want visible.
You could also take a middle road and create multiple textures, a new one each time you have accumulated enough data, and move them across in succession.
This can be done with a fragment shader (also called a pixel shader), which is executed by the GPU of your system. Different shading languages are available: Cg from NVIDIA, GLSL for OpenGL, and HLSL from Microsoft. This is possible with shaders (they are made for exactly this kind of purpose).
And your CPU time will definitely be reduced, because the work is executed by the GPU.
As we all know, OpenGL uses a pixel-data orientation that has 0/0 at left/bottom, whereas the rest of the world (including virtually all image formats) uses left/top.
This has been a source of endless worries (at least for me) for years, and I still have not been able to come up with a good solution.
In my application I want to support the following image data as textures:
image data from various image sources (including still-images, video-files and live-video)
image data acquired via copying the framebuffer to main memory (glReadPixels)
image data acquired via grabbing the framebuffer to texture (glCopyTexImage)
(Case #1 delivers images with top-down orientation in about 98% of the cases; for the sake of simplicity, let's assume that all "external images" have top-down orientation. Cases #2 and #3 have bottom-up orientation.)
I want to be able to apply all of these textures onto various arbitrarily complex objects (e.g. 3D models read from disk that have texture coordinate information stored).
Thus I want a single representation of the texture coordinates of an object. When rendering the object, I do not want to be bothered with the orientation of the image source.
(Until now, I have always carried a top-down flag alongside the texture ID that gets used when the texture coordinates are actually set. I want to get rid of this clumsy hack!)
Basically I see three ways to solve the problem:
Make sure all image data is in the "correct" (in OpenGL terms this is upside down) orientation, converting all the "incorrect" data before passing it to OpenGL.
Provide different texture coordinates depending on the image orientation (0..1 for bottom-up images, 1..0 for top-down images).
Flip the images on the graphics card.
In the olden days I did #1, but it turned out to be too slow; we want to avoid copying the pixel buffer at all costs.
So I switched to #2 a couple of years ago, but it is way too complicated to maintain. I don't really understand why I should carry metadata of the original image around once I have transferred the image to the graphics card and have a nice little abstract "texture" object.
I'm in the process of finally converting my code to VBOs, and would like to avoid having to update my texcoord arrays just because I'm using an image of the same size but with a different orientation!
Which leaves #3, which I have never managed to get working (but I believe it must be quite simple).
Intuitively I thought about using something like glPixelZoom().
This works great with glDrawPixels() (but who is using that in real life?), and as far as I know it should work with glReadPixels().
The latter is great, as it allows me to at least force a reasonably fast homogeneous pixel orientation (top-down) for all images in main memory.
However, it seems that glPixelZoom() has no effect on data transferred via glTexImage2D(), let alone glCopyTexImage2D(), so the textures generated from main-memory pixels will all be upside down (which I could live with, as this only means that I have to convert all incoming texcoords to top-down when loading them).
Now the remaining problem is that I haven't found a way yet to copy a framebuffer to a texture (using glCopyTex(Sub)Image) that can be used with those top-down texcoords (that is: how do I flip the image when using glCopyTexImage()?).
Is there a solution to this simple problem? Something that is fast, easy to maintain, and runs on OpenGL 1.1 through 4.x?
Ah, and ideally it would work with both power-of-two and non-power-of-two (or rectangle) textures, as far as that is possible...
Is there a solution to this simple problem? Something that is fast, easy to maintain, and runs on OpenGL 1.1 through 4.x?
No.
There is no method to change the orientation of pixel data at upload time. There is no method to change the orientation of a texture in situ. The only way to change the orientation of a texture (besides downloading, flipping, and re-uploading it) is to use an upside-down framebuffer blit from a framebuffer containing the source texture to a framebuffer containing the destination texture. And glBlitFramebuffer() is not available on hardware so old that it doesn't support GL 2.x.
So you're going to have to do what everyone else does: flip your textures before uploading them. Or better yet, flip the textures on disk, then load them without flipping them.
However, if you really, really want to not flip data, you could simply have all of your shaders take a uniform that tells them whether or not to invert the Y of their texture coordinate data. Inversion shouldn't be anything more than a multiply/add operation. This could be done in the vertex shader to minimize processing time.
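A minimal GLSL sketch of that idea for a compatibility-profile vertex shader; the uniform name u_flipY is made up for illustration:

uniform float u_flipY;   // 0.0 = leave alone, 1.0 = invert Y
varying vec2 v_texCoord;
void main() {
    vec2 tc = gl_MultiTexCoord0.xy;
    // the multiply/add: mix(y, 1.0 - y, f) == y + f * (1.0 - 2.0 * y)
    v_texCoord  = vec2(tc.x, mix(tc.y, 1.0 - tc.y, u_flipY));
    gl_Position = ftransform();
}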
Or, if you're coding in the dark ages of fixed-function, you can apply a texture matrix that inverts the Y.
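For the fixed-function case, a sketch of such a texture matrix (y' = 1 - y):

glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glTranslatef(0.0f, 1.0f, 0.0f);   // ...then shift back into [0, 1]
glScalef(1.0f, -1.0f, 1.0f);      // negate Y first...
glMatrixMode(GL_MODELVIEW);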
Why don't you change the way you map the texture to the polygon?
I use these mapping coordinates { 0, 1, 1, 1, 0, 0, 1, 0 } for origin top left,
and these mapping coordinates { 0, 0, 1, 0, 0, 1, 1, 1 } for origin bottom left.
Then you don't need to manually flip your pictures.
More details about mapping textures to a polygon can be found here:
http://iphonedevelopment.blogspot.de/2009/05/opengl-es-from-ground-up-part-6_25.html
In OpenGL fixed-function programming, can I map different textures onto different objects when those textures are all generated from a single image? For example, I have a 1024 x 1024 image and four rectangles in my scene. I would like to slice the image into four 256 x 256 pieces and map those slices onto the rectangles as textures.
How can I do this? One option is of course to pre-slice the image. But can this be done using glTexSubImage2D or some similar/different API?
Yes. You can use texture coordinates to indicate which part of the texture you want mapped onto your object, rather than mapping the whole thing.
Read more: http://www.glprogramming.com/red/chapter09.html#name6
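For example, a sketch of the coordinates that select one 256 x 256 tile out of the 1024 x 1024 image (256 / 1024 = 0.25; whether "top" means t = 1.0 or t = 0.0 depends on how the image was uploaded):

GLfloat tileTexCoords[] = {
    0.00f, 0.75f,   // bottom-left corner of the tile
    0.25f, 0.75f,   // bottom-right
    0.25f, 1.00f,   // top-right
    0.00f, 1.00f,   // top-left
};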
EDIT: Shaders are required in order to use array textures: http://www.opengl.org/registry/specs/EXT/texture_array.txt. I'll leave the answer up, as it might still be useful info.
I guess you can use an Array Texture for this: http://www.opengl.org/wiki/Array_Texture
You make a 256 x 1024 texture. When loading the texture, you specify a layer size (256 x 256) and a layer count (4). Your texture will then be split up into four layers.
You can access the texture using UVs: [x, y, layerId]
Note that if you want to generate mipmaps, you need to define the number of levels when you allocate the storage with glTexStorage3D.
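A rough sketch of that setup, assuming four tightly packed RGBA slices in slices[0..3] (glTexStorage3D needs GL 4.2 or the ARB_texture_storage extension):

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGBA8, 256, 256, 4);  // 1 mip level
for (int layer = 0; layer < 4; ++layer) {
    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, layer,
                    256, 256, 1, GL_RGBA, GL_UNSIGNED_BYTE, slices[layer]);
}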
I think you have three options.
Use different texture coordinates for each different rectangle.
Transform the texture coordinates using glMatrixMode(GL_TEXTURE) and a different matrix between drawing each rectangle.
Create four different OpenGL textures from your original big texture. I don't think OpenGL offers you any help here. You have to either use a paint package to do it (easiest option if you only have to do this a few times), or copy parts of the image into a new buffer before calling glTexImage2D.
I think the first option is the easiest, with the advantage that you don't have to change any state between drawing the rectangles.
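For the second option, a sketch of the per-rectangle texture matrix; here each rectangle keeps texture coordinates 0..1, and tileX, tileY (each 0 or 1) pick one quadrant of the image:

glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glTranslatef(tileX * 0.5f, tileY * 0.5f, 0.0f);  // move to the chosen quadrant
glScalef(0.5f, 0.5f, 1.0f);                      // shrink 0..1 to half the image
glMatrixMode(GL_MODELVIEW);
// ... draw the rectangle as usual ...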
Let's say I have this image, and in it is an object (a cube). That object is being tracked (with labels), and I manage to render a virtual cube onto it (augmented reality). Now that I can render a virtual cube onto it, I want to be able to make the object 'disappear' with a really basic diminished-reality technique called "inpainting". The inpainting in question is pretty simple (it has to be, or the FPS will suffer), and it requires me to do some operations on pixels and their neighbors (as with Gaussian blur or other basic image processing).
To do that I first need:
A mask: black background with a white cube in it.
Access each pixel of the initial image (at coordinates x and y), as well as its neighborhood, and do things based on the pixel value of the mask at the same x and y coordinates. So basically the mask serves as a way to say "ignore this pixel" or "use this pixel".
How do I do this using OpenGL? I want to be able to access pixel values 1 by 1 preferably in 2D because of the neighbors.
Do I use FBOs or PBOs? I've read many things about buffers and methods like glDrawPixels(), but I'm having trouble putting them all together. The paper I saw this method in used the GL_BACK buffer, but mine is already in use. Some sample code (C++) would be really appreciated, with all the formalities (OpenGL calls), since I'm still a beginner in OpenGL.
I'm even thinking of using OpenCV, if pixel manipulation is too hard in OpenGL, since my AR library (Aruco) works on top of OpenCV. In that case I would still need to get the mask (white cube on black background), convert it to a cv::Mat, and then do my processing.
I know this approach is inefficient (going back and forth from the GPU/CPU) but my goal (for now) is to at least make the basics work.
Set up a framebuffer object to render your original image + virtual cube. Here's a tutorial.
Next, you can attach that framebuffer texture as an input (sampler) texture of your next stage and render a quad (two triangles) that covers your mask.
In the fragment shader you can read your screen coordinate from the built-in variable gl_FragCoord. With the texture filter set to GL_NEAREST, you can access exact texels, and the neighboring pixels are available with a displacement of one texel (deltaX = 1/width, deltaY = 1/height in texture coordinates).
Using a previous framebuffer texture as the source is mandatory, as the currently active framebuffer is write-only.
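Putting it together, a hedged GLSL sketch of such a fragment shader (the names u_scene, u_mask and u_texelSize are made up; the "inpainting" here is just a 4-neighbor average to show the mechanics):

uniform sampler2D u_scene;     // color texture rendered in the first pass
uniform sampler2D u_mask;      // white cube on black background
uniform vec2 u_texelSize;      // vec2(1.0 / width, 1.0 / height)
void main() {
    vec2 uv = gl_FragCoord.xy * u_texelSize;
    if (texture2D(u_mask, uv).r > 0.5) {
        // masked pixel: replace it with the average of its 4 neighbors
        vec4 sum = texture2D(u_scene, uv + vec2( u_texelSize.x, 0.0))
                 + texture2D(u_scene, uv + vec2(-u_texelSize.x, 0.0))
                 + texture2D(u_scene, uv + vec2(0.0,  u_texelSize.y))
                 + texture2D(u_scene, uv + vec2(0.0, -u_texelSize.y));
        gl_FragColor = sum * 0.25;
    } else {
        gl_FragColor = texture2D(u_scene, uv);   // unmasked: pass through
    }
}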
I've searched for a while and I've heard of different ways to do this, so I thought I'd come here and see what I should do.
From what I've gathered, I should use glBitmap with 0 and 0xFF values in the array to make the terrain. Any input on this?
I tried switching it to quads, but I'm not sure that is efficient or the way it's meant to be done.
I want the terrain to be able to have tunnels, like in Worms. Two-dimensional.
Here is what I've tried so far.
I've tried to make a glBitmap like this:

int pixels = (2 * radius) * (2 * radius);   // square bitmap, 2 * radius pixels per side
GLubyte* ras = new GLubyte[pixels];

I then set them all to 0xFF and drew the bitmap using glBitmap(2 * radius, 2 * radius, 0, 0, 0, 0, ras);

This could then be checked for explosions and whatnot, and the affected pixels could be set to zero. Is this a plausible approach? I'm not too good with OpenGL. Can I put a texture on a glBitmap? From what I've seen, I don't think you can.
I would suggest using the stencil buffer: mark the destroyed parts of the terrain in the stencil buffer, then draw your terrain as a simple quad with stencil testing enabled, without manually testing each pixel.
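A sketch of the two passes (drawExplosionShape and drawTerrainQuad are hypothetical helpers; the framebuffer must have been created with a stencil buffer):

glEnable(GL_STENCIL_TEST);

// Pass 1: mark destroyed pixels in the stencil buffer, no color output.
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
drawExplosionShape();   // geometry covering the destroyed area

// Pass 2: draw the terrain quad only where the stencil is still 0.
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glStencilFunc(GL_EQUAL, 0, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
drawTerrainQuad();      // one textured quad for the whole terrain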
OK, this is a high-level overview, and I'm assuming you're familiar with OpenGL basics like buffer objects already. Let me know if something doesn't make sense or if you'd like more details.
The most common way to represent terrain in computer graphics is a heightfield: a grid of points that are spaced regularly on the X and Y axes, but whose Z (height) can vary. A heightfield can only have one Z value per (X,Y) grid point, so you can't have "overhangs" in the terrain, but it's usually sufficient anyway.
A simple way to draw a heightfield terrain is with a triangle strip (or quads, but they're deprecated). For simplicity, start in one corner and issue vertices in a zig-zag order down the column, then go back to the top and do the next column, and so on. There are optimizations that can be done for better performance, and more sophisticated ways of constructing the geometry for better appearance, but that'll get you started.
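A sketch of that zig-zag order for a grid with W vertices per row and H rows, drawn as one strip per pair of adjacent columns (client-side indices for brevity; in practice you would put these in a buffer object too):

// One triangle strip per pair of adjacent columns of a W x H vertex grid.
for (int x = 0; x < W - 1; ++x) {
    std::vector<GLuint> strip;
    for (int y = 0; y < H; ++y) {
        strip.push_back(y * W + x);       // vertex in the current column
        strip.push_back(y * W + x + 1);   // its neighbor in the next column
    }
    glDrawElements(GL_TRIANGLE_STRIP, (GLsizei)strip.size(),
                   GL_UNSIGNED_INT, strip.data());
}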
(I'm assuming a rectangular terrain here since that's how it's commonly done; if you really want a circle, you can substitute r and θ for X and Y so that you have a polar grid.)
The coordinates for each vertex will need to be stored in a buffer object, as usual. When you call glBufferData() to load the vertex data into the GPU, specify a usage parameter of either GL_STREAM_DRAW if the terrain will usually change from one frame to the next, or GL_DYNAMIC_DRAW if it will change often but not (close to) every frame. To change the terrain, call glBufferData() again to copy a different set of vertex data to the GPU.
For the vertex data itself, you can specify all three coordinates (X, Y, and Z) for each vertex; that's the simplest thing to do. Or, if you're using a recent enough GL version and you want to be sophisticated, you should be able to calculate the X and Y coordinates in the vertex shader using gl_VertexID and the dimensions of the grid (passed to the shader as a uniform value). That way, you only have to store the Z values in the buffer, which means less GPU memory and bandwidth consumed.
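A hedged GLSL sketch of that last idea (the uniform names are made up): only the height is stored per vertex, and X/Y are reconstructed from gl_VertexID and the grid dimensions.

#version 330 core
layout(location = 0) in float a_height;   // the Z value from the buffer
uniform int   u_gridW;     // number of vertices per grid row
uniform float u_spacing;   // grid spacing on the X and Y axes
uniform mat4  u_mvp;
void main() {
    float x = float(gl_VertexID % u_gridW) * u_spacing;
    float y = float(gl_VertexID / u_gridW) * u_spacing;
    gl_Position = u_mvp * vec4(x, y, a_height, 1.0);
}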