Are 1D Textures Supported in WebGL yet? - glsl

I've been trying to find a clear answer, but it seems no one has clearly asked the question.
Can I use a 1D sampler and a 1D texture in WebGL in Chrome, Firefox, Safari, IE, etc.?
EDIT
Understandably, 1 is indeed a power of 2 (2^0 = 1), meaning you could effectively replicate a 1D texture using a 2D sampler and a 2D texture with a height of 1 and a width of 256 or 512, etc.
1D textures are not moot: they exist because they not only have a purpose, but are also intended to translate into optimizations on the GPU itself (as opposed to a 2D texture). Remember that each parameter takes time to load onto the call stack, and almost all GPU programming is an art of optimizing every possible operation.
Compute shaders frequently need a single list of floats without the extra dimension; using a 1D texture and sampler provides the same clarity that strong typing provides, i.e. representing 1D data in a 1D structure and 2D data in a 2D structure. It also removes the extra operations required for index-to-row/column translations.
The question wasn't whether there is a good reason for them; it was whether they are supported yet.
In WebGL 1.0, based on OpenGL ES 2.0, as of 09/MAY/2014:
There is currently no 1D texture or sampler support.

Why do you need 1D textures? Just make a 2D texture N pixels wide and 1 pixel tall.
var tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
// 3x1 pixel 1D texture
var oneDTextureTexels = new Uint8Array([
    255,   0,   0, 255,
      0, 255,   0, 255,
      0,   0, 255, 255,
]);
var width = 3;
var height = 1;
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0, gl.RGBA, gl.UNSIGNED_BYTE,
              oneDTextureTexels);
Either generate mips or set filtering so that no mips are needed:
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
Sample it with 0.5 for y:
uniform sampler2D u_texture;
varying float v_texcoord;

void main() {
    vec4 color = texture2D(u_texture, vec2(v_texcoord, 0.5));
    ...
Here's a sample using 1D textures. It uses the dot product of a typical lighting calculation to look up a value from a 1D ramp texture to shade the objects.
In direct answer to your question: there will be no 1D textures in WebGL, because WebGL is based on OpenGL ES 2.0, and OpenGL ES 2.0 does not support 1D textures. Neither does OpenGL ES 3.0 nor 3.1. I'd be surprised if they didn't remove 1D textures completely when they merge OpenGL and OpenGL ES.

WebGL 1.0 is based on OpenGL ES 2.0 which does not support 1D textures. The Texture Objects section in the WebGL specification reflects this by only having texImage2D and compressedTexImage2D methods.
You can use a texture with a height of one instead.

As Jens Nolte said, it's not supported in WebGL, since WebGL is based on OpenGL ES. You can use 2D textures with a unit width or height instead.
For example (256 wide, 1 high):
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 1, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, ColorMap.optimalIB);
Then in the shader you can sample the texture using any value for the height coordinate (since it doesn't matter).

Related

How to make a 1D LUT in C++ for GLSL

I'm beginning to understand how to implement a fragment shader to do a 1D LUT, but I am struggling to find any good resources that tell you how to make the 1D LUT in C++ and then texture it.
So, for a simple example, given the 1D LUT below:
Would I make an array with the following data?
int colorLUT[256] = {255,
254,
253,
...,
...,
...,
3,
2,
1,
0};
or unsigned char I guess since I'm going to be texturing it.
If this is how to create the LUT, then how would I convert it to a texture? Should I use glTexImage1D? Or is there a better method to do this? I'm really at a loss here; any advice would be helpful.
I'm sorry to be so brief, but I haven't seen any tutorials about how to actually make and link the LUT; every tutorial on GLSL only tells you about the shaders and always neglects the linking part.
My end goal is I would like to know how to take different 1D LUTs as seen below and apply them all to images.
Yes, you can use 1D textures as lookup tables.
You can load the data into a 1D texture with glTexImage1D(). Using GL_R8 as the internal texture format, and specifying the data as GL_UNSIGNED_BYTE when passing it to glTexImage1D(), is your best choice if 8 bits of precision are enough for the value. Your call will look like this, with lutData being a pointer/array to GLubyte data, and lutSize the size of your LUT:
glTexImage1D(GL_TEXTURE_1D, 0, GL_R8, lutSize, 0, GL_RED, GL_UNSIGNED_BYTE, lutData);
If you need higher precision than 8 bits, you can use formats like GL_R16 or GL_R32F.
Make sure that you also set the texture parameters correctly, e.g. for linear sampling between values in the lookup table:
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
You then bind the texture to a sampler1D uniform in your shader, and use the regular texture sampling functions to retrieve the new value. Remember that texture coordinates are in the range 0.0 to 1.0, so you need to map the range of your original values to [0.0, 1.0] before you pass it into the texture sampling function. The new value you receive from the texture sampling function will also be in the range [0.0, 1.0].
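As an illustration, here is a minimal GLSL fragment-shader sketch of applying the LUT per channel (the names u_image, u_lut, and v_uv are placeholders, not from the question):

uniform sampler2D u_image; // the source image
uniform sampler1D u_lut;   // the lookup table

varying vec2 v_uv;

void main() {
    vec4 color = texture2D(u_image, v_uv);
    // the color channels are already in [0.0, 1.0], so they can be used
    // directly as texture coordinates into the LUT
    gl_FragColor = vec4(texture1D(u_lut, color.r).r,
                        texture1D(u_lut, color.g).r,
                        texture1D(u_lut, color.b).r,
                        color.a);
}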
Note that as long as your lookup is a relatively simple function, it might be more efficient to calculate the function in the shader. But if the LUT can contain completely arbitrary mappings, using a 1D texture is a good way to go.
In OpenGL variations that do not have 1D textures, like OpenGL ES, you can use a 2D texture with height set to 1 instead.
If you need lookup tables that are larger than the maximum supported texture size, you can also look into buffer textures, as suggested by Andon in his comment.
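For reference, a sketch of the buffer-texture variant (this assumes GLSL 1.40 / OpenGL 3.1 or the corresponding extension; u_lut is again a placeholder name):

#version 140

uniform samplerBuffer u_lut; // buffer texture bound as a lookup table

// texelFetch on a samplerBuffer takes a raw integer index, so no mapping
// to [0.0, 1.0] is needed, and the size limit is much larger than for
// ordinary 1D textures
float lookup(int index) {
    return texelFetch(u_lut, index).r;
}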

Directx 11, send multiple textures to shader

Using this code I can send one texture to the shader:
devcon->PSSetShaderResources(0, 1, &pTexture);
Of course I made pTexture with D3DX11CreateShaderResourceViewFromFile.
Shader:
Texture2D Texture;
return color * Texture.Sample(ss, texcoord);
I'm currently only sending one texture to the shader, but I would like to send multiple textures, how is this possible?
Thank You.
You can use multiple textures as long as their count does not exceed your shader profile specs. Here is an example:
HLSL Code:
Texture2D diffuseTexture : register(t0);
Texture2D anotherTexture : register(t1);
C++ Code:
devcon->[V|P|D|G|C|H]SSetShaderResources(texture_index, 1, &texture);
So, for example, for the above HLSL code it will be:
devcon->PSSetShaderResources(0, 1, &diffuseTextureSRV);
devcon->PSSetShaderResources(1, 1, &anotherTextureSRV); (SRV stands for Shader Resource View)
OR:
ID3D11ShaderResourceView * textures[] = { diffuseTextureSRV, anotherTextureSRV };
devcon->PSSetShaderResources(0, 2, textures);
HLSL names can be arbitrary and don't have to correspond to any specific name; only the indexes matter. While the "register(tXX);" statements are not required, I'd recommend using them to avoid confusion as to which texture corresponds to which slot.
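To make that concrete, here is a hedged HLSL sketch of a pixel shader that samples both slots (the sampler state and the blend factor are illustrative, not from the question):

Texture2D diffuseTexture : register(t0);
Texture2D anotherTexture : register(t1);
SamplerState linearSampler : register(s0);

float4 PSMain(float4 pos : SV_POSITION, float2 uv : TEXCOORD0) : SV_TARGET
{
    // sample both bound textures and combine them;
    // any per-pixel combination works here
    float4 a = diffuseTexture.Sample(linearSampler, uv);
    float4 b = anotherTexture.Sample(linearSampler, uv);
    return lerp(a, b, 0.5);
}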
By using Texture Arrays. When you fill out your D3D11_TEXTURE2D_DESC look at the ArraySize member. This desc struct is the one that gets passed to ID3D11Device::CreateTexture2D. Then in your shader you use a 3rd texcoord sampling index which indicates which 2D texture in the array you are referring to.
Update: I just realised you might be talking about doing it over multiple calls (i.e. for different geo), in which case you update the shader's texture resource view. If you are using the effects framework you can use ID3DX11EffectShaderResourceVariable::SetResource, or alternatively rebind a new texture using PSSetShaderResources. However, if you are trying to blend between multiple textures, then you should use texture arrays.
You may also want to look into 3D textures, which provide a natural way to interpolate between adjacent textures in the array (whereas 2D arrays are automatically clamped to the nearest integer) via the 3rd element in the texcoord. See the HLSL sample remarks.

OpenGL still tries to blur even with GL_NEAREST (GL_TEXTURE_2D)

An image says a thousand words, so what about two? I have this map art:
In order to actually use this as a map, I scale this texture by a factor of 6. This, however, didn't go as expected:
All the OpenGL code is in my homebrew 2D OpenGL rendering library, and since OpenGL is a state machine it is hard to document the whole rendering process. But here is more or less what I do (the code is Python):
width, height = window.get_size()
glViewport(0, 0, width, height)
glMatrixMode(GL_PROJECTION)
glPushMatrix()
glLoadIdentity()
glOrtho(0.0, width, height, 0.0, 0.0, 1.0)
glMatrixMode(GL_MODELVIEW)
glPushMatrix()
glLoadIdentity()
# displacement trick for exact pixelization
glTranslatef(0.375, 0.375, 0.0)
glDisable(GL_DEPTH_TEST)
glEnable(GL_BLEND)
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)
glEnable(self.texture.target) # GL_TEXTURE_2D
glBindTexture(self.texture.target, self.texture.id)
glTexParameteri(self.texture.target, GL_TEXTURE_MIN_FILTER, GL_NEAREST)
glTexParameteri(self.texture.target, GL_TEXTURE_MAG_FILTER, GL_NEAREST)
glPushMatrix()
glTranslatef(self.x, self.y, 0.0) # self.x and self.y are negative int offsets
glScalef(self.scale_x, self.scale_y, 1.0) # scale_x and scale_y are both 6
glBegin(GL_QUADS)
glTexCoord2i(0, 0)
glVertex2i(0, 0)
glTexCoord2i(0, 1)
glVertex2i(0, self.texture.height)
glTexCoord2i(1, 1)
glVertex2i(self.texture.width, self.texture.height)
glTexCoord2i(1, 0)
glVertex2i(self.texture.width, 0)
glEnd()
glPopMatrix()
glDisable(self.texture.target)
However, this "blurring" bug doesn't occur when I use GL_TEXTURE_RECTANGLE_ARB. I'd like to also be able to use GL_TEXTURE_2D, so can someone please point out how to stop this from happening?
When in doubt, replace your texture with a same-size black-and-white checkerboard (black/white, 1 px). It will give you a good sense of what is going on: is it uniformly gray (the displacement is wrong), or does it have waves (the scaling is wrong)?
Make sure you don't have mip-maps automatically generated and used.
Generally you don't need any special displacement; texels match pixels with a properly set up glOrtho.
Another important issue: use power-of-two textures, as older GPUs could use various schemes to support NPOT textures (scaling or padding transparently to the user), which could result in just this sort of blurring.
To manually work with NPOT textures you will need to pad them with clear pixels up to the next POT size and scale your UV values in glTexCoord2f(u, v) by the factor npotSize / potSize. Note that this is not compatible with tiling, but judging from your art you don't need it.
# displacement trick for exact pixelization
glTranslatef(0.375, 0.375, 0.0)
Unfortunately this is not enough, since you also need to scale the texture coordinates. For an unscaled, untranslated texture matrix, to address a certain pixel i of a texture with dimension N you need to apply the formula
(2i + 1)/(2N)
You can derive your scaling and translation from that, or determine the texture coordinates directly.
EDIT due to comment.
Okay, let's say your texture is 300 pixels wide, and you want to address exactly pixels 20 to 250. Then the texture coordinates to choose, for an identity texture matrix, would be
(2*20 + 1)/(2*300) = 41/600 ≈ 0.0683
and
(2*250 + 1)/(2*300) = 501/600 = 0.8350
But you could apply a transformation through the texture matrix as well. You want to map the pixels 0…299 (for 300 pixels width the index goes from 0 to 300-1 = 299) to 0…1. So let's put in those figures:
(2*0 + 1)/(2*300) = 1/600 ≈ 0.0017 = a
(2*299 + 1)/(2*300) = 599/600 ≈ 0.9983 = b
b - a ≈ 0.9967
So you have to scale down the range 0…1 by 0.9967 and offset it by 0.0017 in the width. The same calculation goes for the height←→t coordinates. Remember that order of transformations matters. You must first scale then translate when performing the transformation, so in the matrix multiplications the translation is multiplied first:
// for a texture 300 pixels wide
glTranslatef(0.0017, …, …);
glScalef(0.9967, …, …);
If you want to use pixels instead of the range 0…1, further divide the scale by the texture width.
A BIG HOWEVER:
OpenGL 3 discarded the whole set of matrix manipulation functions and expects you to supply ready-to-use matrices and shaders. And in OpenGL fragment shaders there is a nice function, texelFetch, which you can use to fetch texture pixels directly with absolute (integer) coordinates. Using that would make things a lot easier!
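A minimal GLSL sketch of that approach (GLSL 1.30+; it assumes the map is drawn at 6x scale anchored at the window origin, which is an assumption, not taken from the question):

#version 130

uniform sampler2D u_map;

void main() {
    // integer texel coordinates: no filtering, no 0..1 normalization
    ivec2 texel = ivec2(gl_FragCoord.xy) / 6; // undo the 6x scale
    gl_FragColor = texelFetch(u_map, texel, 0); // fetch from mip level 0
}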

OpenGL RGBA to grayscale from 1 component

Let's say I have a 32bpp pixel array, but I am using only the blue channel/component of the pixels. I need to upload this pixel array to a texture in a grayscale/luminance format. For example, if I have a color (a:0, r:0, g:0, b:x), it needs to become (0, x, x, x) in the texture.
I am using OpenGL 1.5.
OpenGL up to version 2 had the texture internal format GL_LUMINANCE, which does exactly what you want.
In OpenGL 3 this was replaced with the internal format GL_RED (e.g. the sized GL_R8), which is a single-component texture. In a shader you can use a swizzle like
gl_FragColor.rgb = texture(u_texture, uv).rrr;
But there's also the option to set what you might call a "static" swizzle in the texture parameters:
glTexParameteri(GL_TEXTURE_…, GL_TEXTURE_SWIZZLE_R, GL_RED);
glTexParameteri(GL_TEXTURE_…, GL_TEXTURE_SWIZZLE_G, GL_RED);
glTexParameteri(GL_TEXTURE_…, GL_TEXTURE_SWIZZLE_B, GL_RED);

How to use GL_REPEAT to repeat only a selection of a texture atlas? (OpenGL)

How can I repeat a selection of a texture atlas?
For example, my sprite (selection) is within the texture coordinates:
GLfloat textureCoords[] =
{
    .1f, .1f,
    .3f, .1f,
    .1f, .3f,
    .3f, .3f
};
Then I want to repeat that sprite N times across a triangle strip (or quad) defined by:
GLfloat vertices[] =
{
    -100.f, -100.f,
     100.f, -100.f,
    -100.f,  100.f,
     100.f,  100.f
};
I know it has something to do with GL_REPEAT and textureCoords going past the range [0,1]. This, however, doesn't work (trying to repeat N = 10):
GLfloat textureCoords[] =
{
    10.1f, 10.1f,
    10.3f, 10.1f,
    10.1f, 10.3f,
    10.3f, 10.3f
};
We're seeing our full texture atlas repeated...
How would I do this the right way?
I'm not sure you can do that. I think OpenGL's texture coordinate modes only apply for the entire texture. When using an atlas, you're using "sub-textures", so that your texture coordinates never come close to 0 and 1, the normal limits where wrapping and clamping occurs.
There might be extensions to deal with this, I haven't checked.
EDIT: Normally, to repeat a texture, you'd draw a polygon that is "larger" than your texture implies. For instance, if you had a square texture that you wanted to repeat a number of times (say six) over a bigger area, you'd draw a rectangle that's six times as wide as it is tall. Then you'd set the texture coordinates to (0,0)-(6,1), and the texture mode to "repeat". When interpolating across the polygon, the texture coordinate that goes beyond 1 will, due to repeat being enabled, "wrap around" in the texture, causing the texture to be mapped six times across the rectangle.
This is a bit crude to explain without images.
Anyway, when you're texturing using just a part of the texture, there's no way to specify that larger texture coordinate in a way that makes OpenGL repeat it inside only the sub-rectangle.
None of the texture wrap modes support the kind of operation you are looking for, i.e. they all map to the full [0,1] range, not some arbitrary subset. You basically have two choices: Either create a new texture that only has the sprite you need from the existing texture or write a GLSL pixel program to map the texture coordinates appropriately.
While this may be an old topic, here's how I ended up doing it:
A workaround is to create multiple meshes glued together, each containing the subset of the texture UVs.
E.g.:
I have a laser texture contained within a larger texture atlas, at U[0.05-0.1] & V[0.05-0.1].
I would then construct N meshes, each having U[0.05-0.1] & V[0.05-0.1] coordinates.
(N = length / texture.height, height being the dimension of the texture I would like to repeat; or, more simply, the number of times I want to repeat the texture.)
This solution is more cost-effective than having to reload texture after texture, especially if you batch all render calls (as you should).
(OpenGL ES 1.0/1.1/2.0, mobile hardware, 2011)
It can be done with a modulo of your tex-coords in the shader. The mod will repeat your sub-range coords.
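A minimal GLSL sketch of that idea (the sub-rectangle bounds and repeat count below are illustrative):

uniform sampler2D u_atlas;

varying vec2 v_uv; // runs 0..1 over the quad

void main() {
    const vec2 subMin = vec2(0.1, 0.1); // sprite's lower-left corner in the atlas
    const vec2 subMax = vec2(0.3, 0.3); // sprite's upper-right corner in the atlas
    const float repeats = 10.0;

    // fract() is the modulo that wraps the coordinate into [0, 1),
    // which is then remapped into the sprite's sub-rectangle
    vec2 wrapped = fract(v_uv * repeats);
    gl_FragColor = texture2D(u_atlas, subMin + wrapped * (subMax - subMin));
}

Note that mip selection is based on screen-space derivatives, which jump at the wrap seam; the next answer deals with exactly that.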
I ran into your question while working on the same issue, although in HLSL and DirectX. I also needed mip mapping, and had to solve the related texture bleeding too.
I solved it this way:
min16float4 sample_atlas(Texture2D<min16float4> atlasTexture, SamplerState samplerState, float2 uv, AtlasComponent atlasComponent)
{
    // Get LOD
    // Never wrap these, as that would cause the LOD value to jump on wrap
    // Extent.xy is the left-top, Extent.zw the width-height of the atlas component
    float2 lodCoords = atlasComponent.Extent.xy + uv * atlasComponent.Extent.zw;
    uint lod = ceil(atlasTexture.CalculateLevelOfDetail(samplerState, lodCoords));

    // Get texture size
    float2 textureSize;
    uint levels;
    atlasTexture.GetDimensions(lod, textureSize.x, textureSize.y, levels);

    // Calculate the component size and the edge thickness; this is to avoid bleeding
    // Note: my atlas components are well behaved, that is, they are all power of 2
    // and mostly similar size, tightly packed, no gaps
    float2 componentSize = textureSize * atlasComponent.Extent.zw;
    float2 edgeThickness = 0.5 / componentSize;

    // Calculate the texture coordinates (we only support wrap for now;
    // wrap() is a helper that wraps uv into [0, 1), e.g. frac(uv))
    float2 wrapCoords = clamp(wrap(uv), edgeThickness, 1 - edgeThickness);
    float2 texCoords = atlasComponent.Extent.xy + wrapCoords * atlasComponent.Extent.zw;

    return atlasTexture.SampleLevel(samplerState, texCoords, lod);
}
Note the limitation: the mip levels are not blended this way, but in our use case that is completely fine.
Can't be done...