gl_MultiTexCoord0 not allowed in vert shader in cocos2d v2 alpha? - cocos2d-iphone

I'm trying to use gl_MultiTexCoord0 in the main of my .vert shader, but the shader won't link and gives no descriptive error. Removing the reference to gl_MultiTexCoord0 makes the problem go away. Is this not supported? Is there a known workaround?
This is on cocos2d v2 alpha for the iPhone.

There is no gl_MultiTexCoordN in OpenGL ES (like lots of other built-in variables, it simply doesn't exist there), so you have to pass the texture coordinates yourself as a vertex attribute:
glVertexAttribPointer(texcoord_location, 2, GL_FLOAT, GL_FALSE, 0, texture_coords_ptr); // texcoord_location is the attribute location, e.g. from glGetAttribLocation
glEnableVertexAttribArray(texcoord_location);
To get readable diagnostics out of your shaders, query the compile log with glGetShaderInfoLog (and the link log with glGetProgramInfoLog).
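For completeness, here is a minimal sketch of that attribute-based approach for an ES 2.0-style pipeline. The shader source, the attribute names (a_position, a_texCoord) and the surrounding variables (program, vertShader, texCoords) are illustrative assumptions, not cocos2d's own symbols:
// Vertex shader: texture coordinates arrive as an attribute and are
// forwarded to the fragment shader through a varying.
static const char *vertSrc =
    "attribute vec4 a_position;              \n"
    "attribute vec2 a_texCoord;              \n"
    "varying vec2 v_texCoord;                \n"
    "void main() {                           \n"
    "    v_texCoord = a_texCoord;            \n"
    "    gl_Position = a_position;           \n"
    "}                                       \n";

// After compiling and linking the program:
GLint texCoordLoc = glGetAttribLocation(program, "a_texCoord");
glEnableVertexAttribArray(texCoordLoc);
glVertexAttribPointer(texCoordLoc, 2, GL_FLOAT, GL_FALSE, 0, texCoords);

// Always check the logs; a "silent" link failure usually hides a message here.
GLint status = 0;
glGetShaderiv(vertShader, GL_COMPILE_STATUS, &status);
if (status == GL_FALSE) {
    char log[1024];
    glGetShaderInfoLog(vertShader, sizeof(log), NULL, log);
    // log now contains the compiler's error message
}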

Related

When using glVertexAttribPointer, what index should I use for the gl_Normal attribute?

I buffer normal data to a VBO, then point to it using glVertexAttribPointer:
glVertexAttribPointer(<INDEX?>, 3, GL_FLOAT, GL_FALSE, 0, NULL);
However, what value should I use for the first parameter, the index, if I wish the data to be bound to the gl_Normal attribute in the shaders?
I am using an NVIDIA card, and I read here https://www.opengl.org/sdk/docs/tutorials/ClockworkCoders/attributes.php that gl_Normal is always at index 2 on these cards. But how do I know that gl_Normal is at this index on other cards?
Additionally, using an index of 2 doesn't seem to be working, and the gl_Normal data in the shader is all (0,0,0).
I am aware of glGetAttribLocation and glBindAttribLocation; however, the documentation specifically says these functions generate an error if used with one of the built-in vertex attributes that begin with 'gl_'.
EDIT:
Using OpenGL 3.0 with GLSL 130.
You don't. When using the core profile and VAOs, none of the fixed-function vertex attributes exist.
Define your own vertex attribute for normals in your shader:
in vec3 myVertexNormal;
Get the attribute location (or bind it to a location of your choice):
normalsLocation = glGetAttribLocation(program, "myVertexNormal");
Then use glVertexAttribPointer with the location:
glVertexAttribPointer(normalsLocation, 3, GL_FLOAT, GL_FALSE, 0, NULL);
In the core profile you must do this for positions, texture coordinates, etc. as well. OpenGL doesn't actually care what the data is, as long as your vertex shader assigns something to gl_Position and your fragment shader assigns something to its output(s).
If you insist on using the deprecated fixed-function attributes and gl_Normal, use glNormalPointer instead.
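Putting those steps together, a minimal sketch might look like the following (it assumes program is an already linked shader program containing the myVertexNormal attribute, and normalData is a tightly packed array of per-vertex normals):
// GLSL (vertex shader): declare a generic attribute instead of gl_Normal.
//   in vec3 myVertexNormal;

// C++ side: upload the normals into a VBO and wire it to the attribute.
GLuint normalVBO;
glGenBuffers(1, &normalVBO);
glBindBuffer(GL_ARRAY_BUFFER, normalVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(normalData), normalData, GL_STATIC_DRAW);

GLint normalsLocation = glGetAttribLocation(program, "myVertexNormal");
glEnableVertexAttribArray(normalsLocation);
glVertexAttribPointer(normalsLocation, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);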

OpenGL 4.5 Buffer Texture : extensions support

I use OpenGL version 4.5.0, but somehow I cannot get the texture_buffer_object extensions to work ("GL_EXT_texture_buffer_object" or "GL_ARB_texture_buffer_object"). I am quite new to OpenGL, but if I understand correctly, these extensions are quite old and are even already included in the core functionality...
I looked for the extensions with "OpenGL Extensions Viewer 4.1"; it says they are supported on my computer, and glewGetExtension("GL_EXT_texture_buffer_object") and glewGetExtension("GL_ARB_texture_buffer_object") both return true.
But the data from the buffer does not show up in the texture sampler (in the fragment shader the texture contains only zeros).
So I thought maybe the extensions are somehow disabled by default, and I tried enabling them in my fragment shader:
#version 440 core
#extension GL_ARB_texture_buffer_object : enable
#extension GL_EXT_texture_buffer_object : enable
Now I get these warnings at run time:
***GLSL Linker Log:
Fragment info
-------------
0(3) : warning C7508: extension ARB_texture_buffer_object not supported
0(4) : warning C7508: extension EXT_texture_buffer_object not supported
Please see the code example below:
//#define GL_TEXTURE_BIND_TARGET GL_TEXTURE_2D
#define GL_TEXTURE_BIND_TARGET GL_TEXTURE_BUFFER_EXT
.....
glGenTextures(1, &texObject);
glBindTexture(GL_TEXTURE_BIND_TARGET, texObject);
GLuint bufferObject;
glGenBuffers(1, &bufferObject);
// Bind the buffer to the texture buffer target (OpenGL is state-based)
glBindBuffer(GL_TEXTURE_BIND_TARGET, bufferObject);
glBufferData(GL_TEXTURE_BIND_TARGET, nWidth*nHeight*4*sizeof(float), NULL, GL_DYNAMIC_DRAW);
float *test = (float *)glMapBuffer(GL_TEXTURE_BIND_TARGET, GL_READ_WRITE);
for(int i=0; i<nWidth*nHeight*4; i++)
test[i] = i/(nWidth*nHeight*4.0);
glUnmapBuffer(GL_TEXTURE_BIND_TARGET);
glTexBufferEXT(GL_TEXTURE_BIND_TARGET, GL_RGBA32F_ARB, bufferObject);
//glTexImage2D(GL_TEXTURE_BIND_TARGET, 0, components, nWidth, nHeight,
// 0, format, GL_UNSIGNED_BYTE, data);
............
So if I use the GL_TEXTURE_2D target and load a data array directly into the texture, everything works fine. If I use the GL_TEXTURE_BUFFER_EXT target and try to load the texture from the buffer, I get an empty texture in the shader.
Note: I have to load texture data from the buffer because in my real project I generate the data on the CUDA side, and the only way (that I know of) to visualize data from CUDA is using such texture buffers.
So, the questions are:
1) Why do I get no data in the texture, although the OpenGL version is fine and the Extensions Viewer shows the extensions as supported?
2) Why does trying to enable the extensions in the shader fail?
Edit: I updated the post because I found the reason for the "Invalid Enum" error I mentioned at first; it was caused by glTexParameteri, which is not allowed for buffer textures.
I solved this. I was in a hurry and stupidly missed a very important thing on a wiki page:
https://www.opengl.org/wiki/Buffer_Texture
Access in shaders
In GLSL, buffer textures can only be accessed with the texelFetch function. This function takes pixel offsets into the texture rather than normalized texture coordinates. The sampler type for buffer textures is samplerBuffer.
So in GLSL a buffer texture should be used like this:
uniform samplerBuffer myTexture;
void main (void)
{
    vec4 color = texelFetch(myTexture, index); // index is an integer texel offset
}
not like a usual texture:
uniform sampler1D myTexture;
void main (void)
{
    vec4 color = texture(myTexture, gl_FragCoord.x);
}
As for the warnings about unsupported extensions: I think I get them because this functionality has been part of the core since OpenGL 3.1, so the extensions no longer need to be enabled explicitly.
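Since buffer textures are core since OpenGL 3.1, the whole setup can also be written without the EXT/ARB suffixes. A minimal sketch, where texelCount, data and program are placeholders for your own buffer size, source data and shader program:
// Create and fill the buffer that will back the texture.
GLuint tbo;
glGenBuffers(1, &tbo);
glBindBuffer(GL_TEXTURE_BUFFER, tbo);
glBufferData(GL_TEXTURE_BUFFER, texelCount * 4 * sizeof(float), data, GL_DYNAMIC_DRAW);

// Create the buffer texture and attach the buffer to it.
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_BUFFER, tex);
glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, tbo); // core since 3.1, no EXT suffix needed

// Bind it to a texture unit and point the samplerBuffer uniform at that unit.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_BUFFER, tex);
glUniform1i(glGetUniformLocation(program, "myTexture"), 0);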

Directx 11, send multiple textures to shader

Using this code I can send one texture to the shader:
devcon->PSSetShaderResources(0, 1, &pTexture);
Of course, I created pTexture with D3DX11CreateShaderResourceViewFromFile.
Shader:
Texture2D Texture;
return color * Texture.Sample(ss, texcoord);
I'm currently only sending one texture to the shader, but I would like to send multiple textures; how is this possible?
Thank you.
You can use multiple textures as long as their count does not exceed your shader profile specs. Here is an example:
HLSL Code:
Texture2D diffuseTexture : register(t0);
Texture2D anotherTexture : register(t1);
C++ Code:
devcon->[V|P|D|G|C|H]SSetShaderResources(texture_index, 1, &texture); // the first letter selects the stage: Vertex, Pixel, Domain, Geometry, Compute or Hull
So for the above HLSL code, for example, it will be:
devcon->PSSetShaderResources(0, 1, &diffuseTextureSRV);
devcon->PSSetShaderResources(1, 1, &anotherTextureSRV); (SRV stands for Shader Resource View)
OR:
ID3D11ShaderResourceView * textures[] = { diffuseTextureSRV, anotherTextureSRV};
devcon->PSSetShaderResources(0, 2, textures);
HLSL names can be arbitrary and don't have to correspond to any specific name; only the indexes matter. While "register(tXX)" statements are not required, I'd recommend using them to avoid confusion about which texture corresponds to which slot.
By using Texture Arrays. When you fill out your D3D11_TEXTURE2D_DESC, look at the ArraySize member. This desc struct is the one that gets passed to ID3D11Device::CreateTexture2D. Then in your shader you use a third texcoord component when sampling, which indicates which 2D texture in the array you are referring to.
Update: I just realised you might be talking about doing it over multiple calls (i.e. for different geometry), in which case you update the shader's texture resource view. If you are using the effects framework you can use ID3DX11EffectShaderResourceVariable::SetResource, or alternatively rebind a new texture using PSSetShaderResources. However, if you are trying to blend between multiple textures, then you should use texture arrays.
You may also want to look into 3D textures, which provide a natural way to interpolate between adjacent textures in the array (whereas 2D arrays are automatically clamped to the nearest integer) via the 3rd element in the texcoord. See the HLSL sample remarks.
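As a rough sketch of the texture-array route (the dimensions, format and slice count below are placeholders, the device/devcon pointers are assumed to exist, and error handling plus initial data are omitted):
// Describe a texture holding 4 slices of the same size and format.
D3D11_TEXTURE2D_DESC desc = {};
desc.Width            = 256;
desc.Height           = 256;
desc.MipLevels        = 1;
desc.ArraySize        = 4;                          // number of 2D textures in the array
desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage            = D3D11_USAGE_DEFAULT;
desc.BindFlags        = D3D11_BIND_SHADER_RESOURCE;

ID3D11Texture2D* texArray = nullptr;
device->CreateTexture2D(&desc, nullptr, &texArray);

ID3D11ShaderResourceView* srv = nullptr;
device->CreateShaderResourceView(texArray, nullptr, &srv); // view over the whole array
devcon->PSSetShaderResources(0, 1, &srv);

// HLSL side, for reference:
//   Texture2DArray textures : register(t0);
//   float4 c = textures.Sample(ss, float3(texcoord, sliceIndex));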

How to make textured fullscreen quad in OpenGL 2.0 using SDL?

Simple task: draw a fullscreen quad with a texture, nothing more, so we can be sure the texture fills the whole screen. (We will do some more shader magic later.)
Drawing a fullscreen quad with a simple fragment shader was easy, but now we have been stuck for a whole day trying to make it textured. We read plenty of tutorials, but none of them helped: those about SDL mostly use OpenGL 1.x, and those about OpenGL 2.0 are not about texturing, or not about SDL. :(
The code is here. Everything is in colorLUT.c, and the fragment shader is in colorLUT.fs. The result is a window of the same size as the image, and if you comment out the last line in the shader, you get a nice red/green gradient, so the shader itself is fine.
Texture initialization hasn't changed compared to OpenGL 1.4. Tutorials will work fine.
If the fragment shader works but you don't see the texture (and get a black screen), texture loading is broken or the texture hasn't been bound correctly. Disable the shader and try displaying a textured polygon with the fixed-function pipeline.
You may want to call glPixelStorei(GL_UNPACK_ALIGNMENT, 1) before uploading the texture data; the default value is 4.
An easier way to align the texture to the screen is to add a vertex shader and pass texture coordinates through it, instead of trying to calculate them from gl_FragCoord (see the sketch at the end of this answer).
You're passing the surface size into the "resolution" uniform. This is an error; you should be passing the viewport size instead.
You may want to generate mipmaps: either generate them yourself, or use the GL_GENERATE_MIPMAP texture parameter, which is available in OpenGL 2 (but has been deprecated in later versions).
OpenGL.org has specifications for OpenGL 2.0 and GLSL 1.10/1.20. Download them and use them as a reference when in doubt.
The NVIDIA OpenGL SDK has examples you may want to check; they cover shaders.
And there's the OpenGL "Orange Book" (OpenGL Shading Language), which specifically deals with shaders.
Next time, include the code in the question itself.
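To make the vertex-shader approach concrete, here is a minimal sketch of a textured fullscreen quad for OpenGL 2.0. The shader source, attribute names, and the assumption that program and texture already exist are illustrative; none of this is taken from colorLUT.c:
// Vertex shader: forward per-vertex texture coordinates to the fragment shader.
static const char *vsSrc =
    "attribute vec2 a_pos;                       \n"
    "attribute vec2 a_uv;                        \n"
    "varying vec2 v_uv;                          \n"
    "void main() {                               \n"
    "    v_uv = a_uv;                            \n"
    "    gl_Position = vec4(a_pos, 0.0, 1.0);    \n"
    "}                                           \n";

// Fragment shader: just look up the texture with the interpolated coordinates.
static const char *fsSrc =
    "uniform sampler2D u_tex;                    \n"
    "varying vec2 v_uv;                          \n"
    "void main() {                               \n"
    "    gl_FragColor = texture2D(u_tex, v_uv);  \n"
    "}                                           \n";

// Per-frame draw, assuming 'program' was compiled/linked from the sources
// above and 'texture' already holds the image.
void drawFullscreenQuad(GLuint program, GLuint texture)
{
    // Quad covering the whole screen in normalized device coordinates.
    static const GLfloat pos[] = { -1,-1,  1,-1,  1,1,  -1,1 };
    static const GLfloat uv[]  = {  0, 0,  1, 0,  1,1,   0,1 };

    glUseProgram(program);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, texture);
    glUniform1i(glGetUniformLocation(program, "u_tex"), 0);

    GLint posLoc = glGetAttribLocation(program, "a_pos");
    GLint uvLoc  = glGetAttribLocation(program, "a_uv");
    glEnableVertexAttribArray(posLoc);
    glEnableVertexAttribArray(uvLoc);
    glVertexAttribPointer(posLoc, 2, GL_FLOAT, GL_FALSE, 0, pos);
    glVertexAttribPointer(uvLoc,  2, GL_FLOAT, GL_FALSE, 0, uv);

    glDrawArrays(GL_TRIANGLE_FAN, 0, 4);

    glDisableVertexAttribArray(posLoc);
    glDisableVertexAttribArray(uvLoc);
}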

Cg and OpenGL 3

I'm currently learning the differences between OpenGL 2 and 3, and I noticed that many functions like glVertex, glVertexPointer, glColor, glColorPointer, etc. have disappeared.
I'm used to using Cg to handle shaders. For example, I'd write this simple vertex shader:
void main(in float4 inPos : POSITION, out float4 outPos : POSITION) {
    outPos = inPos;
}
And then I'd use either glVertex or glVertexPointer to set the values of inPos.
But since these functions are no longer available in OpenGL 3, how are you supposed to do the bindings?
First, I recommend taking a look at the answer to this question: What's so different about OpenGL 3.x?
Secondly, Norbert Nopper has lots of examples of using OpenGL 3 and GLSL here.
Finally, here's a simple GLSL example which shows you how to bind both a vertex and a fragment shader.
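In short, the fixed-function entry points are replaced by generic vertex attributes. Here is a minimal sketch of the OpenGL 3 core approach, written against GLSL rather than Cg; vertices, vertexCount and program are assumed to already exist, and the attribute name inPos is simply carried over from the question:
// GLSL 1.30+ vertex shader replacing the Cg one:
//   in vec4 inPos;
//   void main() { gl_Position = inPos; }

// C++ side: upload the vertices into a VBO and describe them with a VAO,
// instead of calling glVertex/glVertexPointer.
GLuint vao, vbo;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

GLint posLoc = glGetAttribLocation(program, "inPos");
glEnableVertexAttribArray(posLoc);
glVertexAttribPointer(posLoc, 4, GL_FLOAT, GL_FALSE, 0, (void*)0);

// Drawing then becomes:
glUseProgram(program);
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, vertexCount);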