Texture Sampling in OpenGL - C++

I need to get the color at a particular coordinate from a texture. There are two ways I can do this: by reading the raw PNG data directly, or by sampling my generated OpenGL texture. Is it possible to sample an OpenGL texture to get the color (RGBA) at a given UV or XY coordinate? If so, how?

Off the top of my head, your options are:
1. Fetch the entire texture using glGetTexImage() and check the texel you're interested in.
2. Draw the texel you're interested in (e.g. by rendering a GL_POINTS primitive), then grab the pixel where you rendered it from the framebuffer using glReadPixels().
3. Keep a copy of the texture image handy and leave OpenGL out of it.
Options 1 and 2 are horribly inefficient (although you could speed 2 up somewhat by using pixel buffer objects and doing the copy asynchronously). So my favourite by FAR is option 3.
Edit: If you have the GL_APPLE_client_storage extension (i.e. you're on a Mac or iPhone) then that's option 4, which is the winner by a long way.
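For reference, option 1 might look something like this. A minimal sketch, assuming a GL_RGBA8 2D texture whose dimensions you already track (note that glGetTexImage exists only in desktop GL, not GL ES):

#include <cstdint>
#include <vector>
// ...plus your GL header/loader of choice, e.g. <GL/glew.h>.

struct RGBA { uint8_t r, g, b, a; };

RGBA readTexel(GLuint tex, int width, int height, int x, int y) {
    std::vector<RGBA> pixels(static_cast<size_t>(width) * height);
    glBindTexture(GL_TEXTURE_2D, tex);
    // This copies the entire mip level back from VRAM -- the slow part.
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
    // GL stores rows bottom-to-top, so y = 0 is the bottom row of the image.
    return pixels[static_cast<size_t>(y) * width + x];
}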

The most efficient way I've found to do it is to access the texture data directly (you should have your PNG decoded to make it into a texture anyway) and interpolate between the texels yourself. Assuming your texcoords are in [0, 1], multiply u by the texture width and v by the texture height, then use that to find the position on the texture. If they're whole numbers, just use that pixel directly; otherwise use the integer parts to find the bordering pixels and interpolate between them based on the fractional parts.
Here's some HLSL-like pseudocode for it. Should be fairly clear:
float3 sample(float2 coord, texture tex) {
    float x = tex.w * coord.x;  // X position in texel space
    int ix = (int) x;           // integer texel column
    float y = tex.h * coord.y;  // Y position in texel space
    int iy = (int) y;           // integer texel row
    float3 tl = getTexel(ix, iy);         // top-left texel
    float3 tr = getTexel(ix + 1, iy);     // top-right texel
    float3 bl = getTexel(ix, iy + 1);     // bottom-left texel
    float3 br = getTexel(ix + 1, iy + 1); // bottom-right texel
    // Blend horizontally along the top and bottom rows using the fractional
    // part of x, then blend those two results vertically using frac(y).
    float3 top = interpolate(tl, tr, frac(x));
    float3 bottom = interpolate(bl, br, frac(x));
    return interpolate(top, bottom, frac(y));
}
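On the CPU side (option 3 from the first answer), the same bilinear lookup might look like this in C++. A sketch assuming the decoded PNG lives in a row-major RGBA8 buffer; the Image struct and helper names are illustrative:

#include <algorithm>
#include <cstdint>
#include <vector>

struct Color { float r, g, b; };
struct Image { int w, h; std::vector<uint8_t> rgba; }; // row-major RGBA8

Color getTexel(const Image& img, int x, int y) {
    // Clamp so the +1 neighbours at the right/bottom edges stay in bounds.
    x = std::min(std::max(x, 0), img.w - 1);
    y = std::min(std::max(y, 0), img.h - 1);
    const uint8_t* p = &img.rgba[(static_cast<size_t>(y) * img.w + x) * 4];
    return { p[0] / 255.0f, p[1] / 255.0f, p[2] / 255.0f };
}

Color lerp(Color a, Color b, float t) {
    return { a.r + (b.r - a.r) * t,
             a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t };
}

Color sample(const Image& img, float u, float v) {
    float x = img.w * u, y = img.h * v;
    int ix = static_cast<int>(x), iy = static_cast<int>(y);
    float fx = x - ix, fy = y - iy;  // fractional parts drive the blending
    Color top    = lerp(getTexel(img, ix, iy),     getTexel(img, ix + 1, iy),     fx);
    Color bottom = lerp(getTexel(img, ix, iy + 1), getTexel(img, ix + 1, iy + 1), fx);
    return lerp(top, bottom, fy);
}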

As others have suggested, reading back a texture from VRAM is horribly inefficient and should be avoided like the plague if you're even remotely interested in performance.
Two workable solutions as far as I know:
Keep a copy of the pixel data handy (wastes memory though)
Do it using a shader (see the sketch below)
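A minimal sketch of the shader route, assuming you render a single point into a 1x1 FBO and read the result back; the uniform names are illustrative:

// Fragment shader: output the texel at the queried UV.
uniform sampler2D tex;
uniform vec2 queryUV;  // the coordinate you want to sample

void main() {
    gl_FragColor = texture2D(tex, queryUV);
}

On the application side you would bind a 1x1 RGBA framebuffer, draw one GL_POINTS vertex with this shader, and fetch the result with glReadPixels(0, 0, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, &out). The readback still stalls the pipeline, so this only pays off if you batch many lookups per readback.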

Related

Limiting texture coordinate at edge of polygon

I've written a GLSL shader to emulate a vintage arcade game's indexed-color, tile-based graphics. I made a couple of shaders: one that does this with point sprites, and another using polygons. The point sprite shader converts gl_PointCoord to a pixel coordinate within each tile like so:
vec2 pixelFloat = gl_PointCoord * tileSizeInPixels;
ivec2 pixel = ivec2(int(pixelFloat.x), int(pixelFloat.y));
// pixel is now used in conjunction with a tile 'ID' uniform
// to locate indexed colors with a texture lookup from a
// large texture representing the game's ROM, with GL_NEAREST filtering.
// very clever 😋
The polygon shader instead uses an attribute buffer to pass pixel coordinates (which range {0.0 … 32.0} for a 32-pixel square tile, for example). After conversion to int, each fragment within the tile sees pixel coordinate values ranging x {0 … 31}, y {0 … 31}.
This worked fine in both shaders, apart from artefacts sometimes showing, at certain resolutions, along the tile edge with the higher-numbered pixel coordinate. I guessed this was due to a fragment landing at just the right location to hit the maximum value of either gl_PointCoord or the 32.0 vertex attribute, causing that fragment to sample the wrong tile.
These artefacts went away when I clamped the pixel ivec like this:
vec2 pixelFloat = gl_PointCoord * tileSizeInPixels;
// Clamp to the last texel row/column of the tile (tileSizeInPixels is
// assumed to be a float uniform here, hence the explicit int() cast).
ivec2 pixel = ivec2(
    min(int(pixelFloat.x), int(tileSizeInPixels) - 1),
    min(int(pixelFloat.y), int(tileSizeInPixels) - 1));
which solved the problem and didn't introduce any new artefacts.
My question is: is there some way of controlling the interpolation of gl_PointCoord or my pixel coordinate attribute such that the interpolated value is guaranteed to satisfy
minimum value <= interpolated value < maximum value
as opposed to
minimum value <= interpolated value <= maximum value
Is there some way I can avoid using min() here?
NB: GL_CLAMP_* is not an option here, as the pixel coordinate is used to look up the pixel's index color from a much larger texture, which is essentially the game's sprite ROM loaded into a single large texture buffer.

Cylindrical texture mapping in OpenGL

I am trying to do texture mapping in OpenGL, using a cylinder as an intermediate surface, that is:
theta = atan2(z1, x1) + M_PI;  // angle around the vertical axis, in [0, 2*pi]
h = y1;                        // height along the axis
Here, x1, y1, z1 are the x, y, z of a vertex, and
u = theta, v = h
Here is the texture I am using
This is how the cup got textured:
Why is there a discontinuous patch in the texture map?
Because you're wrapping your texture coordinates from something close to 1 back to 0. The "gap" is there because you didn't add a gap into your geometry. You'll have to split up the geometry and add a seam where your angular texture coordinate wraps from 1 back to 0, as sketched below.
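A sketch of that fix, assuming you generate the cylinder's vertex rings yourself: emit slices + 1 columns, so the first and last columns share a position but carry u = 0 and u = 1 respectively (the Vertex struct and makeRing name are illustrative):

#include <cmath>   // for M_PI; define it yourself if your toolchain hides it
#include <vector>

struct Vertex { float x, y, z, u, v; };

std::vector<Vertex> makeRing(int slices, float radius, float y) {
    std::vector<Vertex> ring;
    for (int i = 0; i <= slices; ++i) {  // note <=: the seam column is duplicated
        float theta = 2.0f * float(M_PI) * float(i) / float(slices);
        Vertex vert;
        vert.x = radius * std::cos(theta);
        vert.y = y;
        vert.z = radius * std::sin(theta);
        vert.u = float(i) / float(slices);  // runs 0..1 with no wrap mid-triangle
        vert.v = y;
        ring.push_back(vert);
    }
    return ring;
}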

OpenGL ES coordinates to screen pixels

I am trying to make an advertising application in OpenGL ES 2.0.
To simplify the problem, as an example: I created an animated cube with some advertising images on its faces. The model and animation were created in 3DS Max and converted to .pod, and it shows up on the TV screen perfectly.
Now I want to know how much of the screen it covers, in pixels, if my projection is 1280x720, because scaling and translation are in the hands of the advertiser, and he doesn't know coordinates; the advertiser only speaks the language of pixels. So if he increases the X-axis scale in pixels, I need to convert that to OpenGL coordinates and also adjust the translation myself, so that the cube doesn't go off screen.
In short, how can I get the number of pixels the cube covers on screen? Is there an easy way?
It's the MVP (model-view-projection) matrix that the rendering pipeline applies to your vertices to finally produce screen coordinates, so it's possible to use its inverse to recover vertex positions.
The problem is that the mapping from vertex position to screen coordinates is not unique: many combinations of vertex, view, and projection values produce the same screen coordinates.
So we have to reduce the unknowns in the equation to just x and y by fixing all the other variables (in the case of translation), and probably to just z (in the case of scaling).
For translation, for example, the code could be:
Point3D get3dPoint(Point2D point2D, int width, int height,
                   Matrix4 viewMatrix, Matrix4 projectionMatrix) {
    // Map the pixel position into normalized device coordinates [-1, 1].
    double x =  2.0 * point2D.x / width  - 1.0;
    double y = -2.0 * point2D.y / height + 1.0;  // window Y grows downward
    Matrix4 viewProjectionInverse = inverse(projectionMatrix * viewMatrix);
    double fixedZ = 1.0;  // fix Z so the unprojection has a unique answer
    Point3D point3D = new Point3D(x, y, fixedZ);
    return viewProjectionInverse.multiply(point3D);
}
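Going the other way is often what you actually want here: to measure how many pixels the cube covers, project each corner of its bounding box through the MVP matrix and take the 2D extents. A self-contained C++ sketch, with placeholder math types standing in for your library's (column-major, as in OpenGL):

struct Vec2 { float x, y; };
struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };  // m[column][row], column-major

Vec4 mul(const Mat4& mat, const Vec4& v) {
    return { mat.m[0][0]*v.x + mat.m[1][0]*v.y + mat.m[2][0]*v.z + mat.m[3][0]*v.w,
             mat.m[0][1]*v.x + mat.m[1][1]*v.y + mat.m[2][1]*v.z + mat.m[3][1]*v.w,
             mat.m[0][2]*v.x + mat.m[1][2]*v.y + mat.m[2][2]*v.z + mat.m[3][2]*v.w,
             mat.m[0][3]*v.x + mat.m[1][3]*v.y + mat.m[2][3]*v.z + mat.m[3][3]*v.w };
}

Vec2 projectToPixels(const Vec4& position, const Mat4& mvp,
                     float viewportW, float viewportH) {
    Vec4 clip = mul(mvp, position);  // model space -> clip space
    float ndcX = clip.x / clip.w;    // perspective divide -> NDC in [-1, 1]
    float ndcY = clip.y / clip.w;
    // Viewport transform: NDC -> window pixels (origin bottom-left in GL).
    return { (ndcX * 0.5f + 0.5f) * viewportW,
             (ndcY * 0.5f + 0.5f) * viewportH };
}

Project all eight corners of the cube's bounding box this way; max.x - min.x and max.y - min.y of the results give the covered width and height in pixels (for a 1280x720 projection, pass 1280 and 720).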

Wrapping texture co-ordinates on a variable-size quad?

Here's my situation: I need to draw a rectangle on the screen for my game's GUI. I don't really care how big this rectangle is or might be; I want to be able to handle any situation. How I'm doing it right now is I store a single VAO containing only a very basic quad, then re-draw it using uniforms to modify its size and position on the screen each time.
The VAO contains 4 vec4 vertices:
0, 0, 0, 0;
1, 0, 1, 0;
0, 1, 0, 1;
1, 1, 1, 1;
And then I draw it as a GL_TRIANGLE_STRIP. The XY of each vertex is its position, and the ZW is its texture co-ordinates*. I pass in the rect for the GUI element I'm currently drawing as a uniform vec4, which offsets the vertex positions in the vertex shader like so:
vertex.xy *= guiRect.zw;
vertex.xy += guiRect.xy;
And then I convert the vertex from screen pixel co-ordinates into OpenGL NDC co-ordinates:
gl_Position = vec4(((vertex.xy / screenSize) * 2) -1, 0, 1);
This changes the range from [0, screenWidth | screenHeight] to [-1, 1].
My problem comes in when I want to do texture wrapping. Simply passing vTexCoord = vertex.zw; is fine when I want to stretch a texture, but not for wrapping. Ideally, I want to modify the texture co-ordinates such that 1 pixel on the screen is equal to 1 texel in the GUI texture. Texture co-ordinates going beyond [0, 1] are fine at this stage, and are in fact exactly what I'm looking for.
I plan to implement texture atlases for my GUI textures, but managing the offsets and bounds of the appropriate sub-texture will be handled in the fragment shader; as far as the vertex shader is concerned, our quad is using one solid texture with [0, 1] co-ordinates, and wrapping accordingly.
*Note: I'm aware that this particular vertex format isn't necessarily useful for this particular case; I could be using vec2 vertices instead. For the sake of convenience I'm using the same vertex format for all of my 2D rendering, and other objects (e.g. text) actually do need those ZW components. I might change this in the future.
TL;DR: Given the size of the screen, the size of a texture, and the location/size of a quad, how do you calculate texture co-ordinates in a vertex shader such that pixels and texels have a 1:1 correspondence, with wrapping?
That is really very easy math: you just need to relate the two spaces in some way. And you already formulated a rule which allows you to do so: one window-space pixel maps to one texel.
Let's assume we have both vec2 screenSize and vec2 texSize which are the unnormalized dimensions in pixels/texels.
I'm not 100% sure exactly what you want to achieve, and there is one thing missing: you did not specify where the origin of the texture should lie. Should it always be pinned to the bottom-left corner of the quad? Or should it be globally at the bottom-left corner of the viewport? I'll assume the latter here, but it should be easy to adjust for the first case.
What we now need is a mapping from the [-1,1]^2 NDC range in x and y to s and t. Let's first map it to [0,1]^2. Once we have that, we can simply multiply the coords by screenSize/texSize to get the desired effect. So in the end, you get
vec2 texcoords = ((gl_Position.xy * 0.5) + 0.5) * screenSize/texSize;
Of course you have already calculated ((gl_Position.xy * 0.5) + 0.5) * screenSize implicitly, so this can be simplified to:
vec2 texcoords = vertex.xy / texSize;
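Put together, the whole vertex shader might look like this; a sketch following the question's own uniform and attribute names, with vTexCoord as the varying handed to the fragment shader:

attribute vec4 vertex;    // xy = unit-quad position, zw = unused here
uniform vec4 guiRect;     // xy = position in pixels, zw = size in pixels
uniform vec2 screenSize;  // viewport size in pixels
uniform vec2 texSize;     // texture size in texels
varying vec2 vTexCoord;

void main() {
    vec2 pos = vertex.xy * guiRect.zw + guiRect.xy;  // pixel-space position
    gl_Position = vec4((pos / screenSize) * 2.0 - 1.0, 0.0, 1.0);
    vTexCoord = pos / texSize;  // one screen pixel per texel; wraps past [0, 1]
}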

fwidth(uv) giving strange results in GLSL

I checked the result of the filter-width GLSL function fwidth by coloring it in red on a plane around the camera.
The result is a bizarre pattern. I thought it would be a circular gradient on the plane, extending around the camera as a function of distance, since pixels further away should uniformly cover larger UV steps between neighbouring pixels.
Why isn't fwidth(UV) a simple gradient as a function of distance from the camera? I don't understand how it can work properly if it isn't, because I want to anti-alias as a function of the size of the UV step between neighbouring pixels.
float width = fwidth(i.uv).x * .2;  // fwidth of a float2 is a float2; the
                                    // original implicitly truncated it to .x
return float4(width, 0, 0, 1) * (2 * i.color);
UVs that are close = black, and far = red.
Result:
The above pattern from fwidth is axis-aligned and has one axis of symmetry. It couldn't anti-alias a two-axis checkerboard, an n-axis texture of Perlin noise, or a radial checkerboard like this one:
float2 xy0 = float2(i.uv.x, i.uv.z) - float2(0.5, 0.5);  // recentre the UVs
float c0 = length(xy0);                      // polar radius: sqrt(x*x + y*y)
float r0 = atan2(i.uv.x - .5, i.uv.z - .5);  // polar angle
float ww = round(sin(c0 * freq) * sin(r0 * 50) * .5 + .5);  // radial checkerboard
Axis-independent aliasing pattern:
The mipmapping and filtering parameters are determined by the partial derivatives of the texture coordinates in screen space, not by distance (in fact, as soon as the fragment stage kicks in, there's no such thing as distance anymore).
I suggest you replace the fwidth visualization with a procedurally generated checkerboard, i.e. (mod(uv.s * k, 1.0) > 0.5) * (mod(uv.t * k, 1.0) < 0.5), where k is a scaling parameter. You'll see that the "density" of the checkerboard (and the aliasing artifacts) is highest where you've got the most red in your picture.
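For the anti-aliasing goal itself, the usual idiom (not from this answer, but the standard use of these screen-space derivatives) is to fade any hard threshold over roughly one pixel's worth of change in the thresholded value. The function below is valid as both GLSL and HLSL:

float aaStep(float edge, float value) {
    float w = fwidth(value);  // how much 'value' changes across one pixel
    return smoothstep(edge - w, edge + w, value);  // ramp about one pixel wide
}

Applied to the radial checkerboard above, that would mean replacing the hard round() with aaStep(0.5, sin(c0 * freq) * sin(r0 * 50) * .5 + .5).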