I'm trying to get custom shaders working with Forge Viewer.
I've read these posts:
Forge Viewer Custom Shaders
Custom shader materials in Forge Viewer
This non-shader material works perfectly fine and renders everything in magenta:
let customMaterial = new THREE.MeshPhongMaterial({
  color: new THREE.Color("#FF00FF"),
  name: `not-built-white`,
  side: THREE.DoubleSide,
});
But when I try to make a material with shaders, it gives me this error:
[.WebGL-0x11817735400] GL_INVALID_OPERATION: Active draw buffers with missing fragment shader outputs.
And all of my meshes are invisible (but can still be clicked/selected).
It doesn't seem to matter whether it's a ShaderMaterial or a RawShaderMaterial, or what I put in the shaders. I've tried hundreds of variants and Googled my poor little heart out. I've tried including all of the #define boilerplate for MRT detection, setting layout(location = x) to various values, using gl_FragData, and so on.
Here's my current material setup:
let VertexShader = `
  attribute vec3 position;
  uniform mat4 modelViewMatrix;
  uniform mat4 projectionMatrix;

  void main() {
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
  }
`;

let FragmentShader = `
  void main() {
    gl_FragColor = vec4(1.0, 1.0, 0.0, 1.0); // simply output a solid yellow color
  }
`;

let customShaderMaterial = new THREE.RawShaderMaterial({
  vertexShader: VertexShader,
  fragmentShader: FragmentShader,
  name: 'custom-shader-material',
  side: THREE.DoubleSide,
});
Here's my forge version:
"#types/forge-viewer#^7.5.7":
version "7.5.7"
resolved "https://registry.npmjs.org/#types/forge-viewer/-/forge-viewer-7.5.7.tgz"
dependencies:
"#types/three" "^0.93.30"
What does this error mean, and how do I fix it or even debug it?
I figured it out. The viewer draws into multiple render targets (the color buffer plus optional normal and ID buffers), so a fragment shader that only writes gl_FragColor leaves the other active draw buffers without outputs, which is exactly what the error is complaining about. All I had to do was add this to the material properties:
supportsMrtNormals: true
Then put all the #defines from the second blog post back at the top of the fragment shader:
#ifdef _LMVWEBGL2_
  #if defined(MRT_NORMALS)
    layout(location = 1) out vec4 outNormal;
    #if defined(MRT_ID_BUFFER)
      layout(location = 2) out vec4 outId;
      #if defined(MODEL_COLOR)
        layout(location = 3) out vec4 outModelId;
      #endif
    #endif
  #elif defined(MRT_ID_BUFFER)
    layout(location = 1) out vec4 outId;
    #if defined(MODEL_COLOR)
      layout(location = 2) out vec4 outModelId;
    #endif
  #endif
#else
  #define gl_FragColor gl_FragData[0]
  #if defined(MRT_NORMALS)
    #define outNormal gl_FragData[1]
    #if defined(MRT_ID_BUFFER)
      #define outId gl_FragData[2]
      #if defined(MODEL_COLOR)
        #define outModelId gl_FragData[3]
      #endif
    #endif
  #elif defined(MRT_ID_BUFFER)
    #define outId gl_FragData[1]
    #if defined(MODEL_COLOR)
      #define outModelId gl_FragData[2]
    #endif
  #endif
#endif
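For completeness, here is roughly how it looks in my code now (a minimal sketch: mrtPrologue is just the block of #defines above kept in a template string, and VertexShader is the same one from the question):

// mrtPrologue holds the #ifdef/#define block shown above.
let FragmentShader = mrtPrologue + `
  void main() {
    gl_FragColor = vec4(1.0, 1.0, 0.0, 1.0); // still just solid yellow
  }
`;

let customShaderMaterial = new THREE.RawShaderMaterial({
  vertexShader: VertexShader,
  fragmentShader: FragmentShader,
  name: 'custom-shader-material',
  side: THREE.DoubleSide,
});

// The crucial part: tell the viewer this material knows about the extra
// MRT outputs (normals/ID buffers) declared in the prologue above.
customShaderMaterial.supportsMrtNormals = true;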
Related
I'm using cocos2d-x 3.17 on Xcode 11 on a Mac.
These are my fragment and vertex shaders.
myShader.frag
#ifdef GL_ES
precision lowp float;
#define LOWP lowp
#else
#define LOWP
#endif
uniform sampler2D u_texture;
varying LOWP vec4 v_fragmentColor;
uniform mat4 u_rotation;
void main()
{
mat4 t1= mat4(1);
mat4 t2= mat4(1);
t1[3] = vec4(-0.5,-0.5,1,1);
t2[3] = vec4(+0.5,+0.5,1,1);
vec2 pos = (t2 * u_rotation * t1 * vec4(gl_PointCoord, 0, 1)).xy;
gl_FragColor = v_fragmentColor * texture2D(u_texture, pos);
}
myShader.vert
#ifdef GL_ES
#define LOWP lowp
#else
#define LOWP
#endif
attribute vec4 a_position;
uniform float u_pointSize;
uniform LOWP vec4 u_fragmentColor;
varying LOWP vec4 v_fragmentColor;
void main()
{
gl_Position = CC_MVPMatrix * a_position;
gl_PointSize = u_pointSize;
v_fragmentColor = u_fragmentColor;
}
When I run it as a Mac app, it gives me this error:
cocos2d:
ERROR: 0:36: Use of undeclared identifier 'gl_PointCoord'
ERROR: 0:37: Use of undeclared identifier 'pos'
Can someone help me figure out why?
After some trouble, I found the solution.
A GLSL shader that does not have a #version directive at the top is assumed to be version 1.10, but I really need version 1.20, so I need to pass '#version 120\n' as the compileTimeHeader string in the initialization:
GLProgram *program = new GLProgram();
program->initWithByteArrays(myShader_vert, myShader_frag, "#version 120\n", "");
There is a mistake in the OpenGL docs, which state that gl_PointCoord is a GLSL 1.10 feature, whereas in fact it was introduced in 1.20.
I am trying to assign texture unit 0 to a sampler2D uniform but the uniform's value does not change.
My program is coloring points based on their elevation (Y coordinates). Their color is looked up in a texture.
Here is my vertex shader code:
#version 330 core
#define ELEVATION_MODE
layout (location = 0) in vec3 position;
layout (location = 1) in float intensity;
uniform mat4 vpMat;
flat out vec4 f_color;
#ifdef ELEVATION_MODE
uniform sampler2D elevationTex;
#endif
#ifdef INTENSITY_MODE
uniform sampler2D intensityTex;
#endif
// texCoords is the result of calculations done on vertex coords, I removed the calculation for clarity
vec4 elevationColor() {
return vec4(textureLod(elevationTex, elevationTexCoords, 0), 1.0);
}
vec4 intensityColor() {
return vec4(textureLod(elevationTex, intensityTexCoords, 0), 1.0);
}
void main() {
gl_Position = vpMat * vec4(position.xyz, 1.0);
#ifdef ELEVATION_MODE
f_color = elevationColor();
#endif
#ifdef COLOR_LODDEPTH
f_color = getNodeDepthColor();
#endif
}
Here is my fragment shader:
#version 330 core
out vec4 color;
flat in vec4 f_color;
void main() {
color = f_color;
}
When this shader is executed, I have 2 textures bound:
elevation texture in texture unit 0
intensity texture in texture unit 1
I am using glUniform1i to set the uniform's value:
glUniform1i(elevationTexLocation, (GLuint)0);
But when I run my program, the value of the uniform elevationTex is 1 instead of 0.
If I remove the glUniform1i call, the uniform value does not change (still 1) so I think the call is doing nothing (but generates no error).
If I change the uniform's type to float and the call from glUniform1i to:
glUniform1f(elevationTexLocation, 15.0f);
then the value in the uniform is 15.0f. So there is no problem with the location I pass to glUniform1i; the call just has no effect on the uniform's value.
Any idea what I could be doing wrong?
I could give you more code, but it is not easily accessible, so if you know the answer without it, that's great. If you need the C++ part of the code, ask and I'll try to retrieve the important parts.
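In the meantime, the sequence I'm using is roughly this (a reconstructed sketch rather than my exact code; program and the two texture handles stand in for my real variables):

// Uniform calls only affect the currently bound program.
glUseProgram(program);
GLint elevationTexLocation = glGetUniformLocation(program, "elevationTex");
glUniform1i(elevationTexLocation, 0);   // sampler should read from texture unit 0

// Bind the textures to their units.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, elevationTexture);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, intensityTexture);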
I'm using the latest cocos2d-x v3.9 (JSBinding).
And here is the code (it adds a sprite and attaches a simple grayscale shader):
this.winSize = cc.director.getWinSize();
var sprite = new cc.Sprite(res.png_building_3);
sprite.setPosition(this.winSize.width / 2, this.winSize.height / 2);
var shaderProgram = new cc.GLProgram();
shaderProgram.init("GrayScaleShader.vsh", "GrayScaleShader.fsh");
shaderProgram.addAttribute(cc.ATTRIBUTE_NAME_POSITION, cc.VERTEX_ATTRIB_POSITION);
shaderProgram.addAttribute(cc.ATTRIBUTE_NAME_TEX_COORD, cc.VERTEX_ATTRIB_TEX_COORDS);
shaderProgram.link();
shaderProgram.updateUniforms();
sprite.setShaderProgram(shaderProgram);
this.addChild(sprite);
And here is the shader vsh:
attribute vec4 a_position;
attribute vec2 a_texCoord;
#ifdef GL_ES
varying mediump vec2 v_texCoord;
#else
varying vec2 v_texCoord;
#endif
void main()
{
gl_Position = (CC_PMatrix * CC_MVMatrix) * a_position;
v_texCoord = a_texCoord;
}
and fsh:
#ifdef GL_ES
precision lowp float;
#endif
varying vec2 v_texCoord;
void main(void) {
vec4 normalColor = texture2D(CC_Texture0, v_texCoord).rgba;
float grayColor = dot(normalColor.rgb, vec3(0.299, 0.587, 0.114));
gl_FragColor = vec4(grayColor, grayColor, grayColor, normalColor.a);
}
This code works fine in Browser, I can see a gray sprite in the center of the screen.
But on Mac and iOS (JSBinding), the sprite is gray but it is positioned in the top right of the screen (not in the center where it should be).
Not sure what is going wrong here, any advice will be appreciated, thanks :)
For some reason, in the browser it works with:
gl_Position = (CC_PMatrix * CC_MVMatrix) * a_position;
but in native iOS you have to modify the vertex shader to:
gl_Position = CC_PMatrix * a_position;
Basically I want to create one file which contains my vertex shader as well as my fragment shader, like this:
#ifdef VERTEX
attribute vec4 a_pos;
attribute vec2 a_texCoords;
uniform mat4 combined;
varying vec2 v_texCoords;
void main(){
v_texCoords = a_texCoords;
gl_Position = combined * a_pos;
}
#endif
#ifdef FRAGMENT
varying vec2 v_texCoords;
uniform sampler2D u_texture;
void main() {
gl_FragColor = texture2D(u_texture, v_texCoords);
//gl_FragColor = vec4(1,0,0,1);
}
#endif
But how do I pass directives while compiling the shader, like a makefile does in C/C++?
I just prepended #define VERTEX\n to the start of the code string when using it as vertex shader source code, and #define FRAGMENT\n when using it as fragment shader source code, like this:
void compileShader(int TYPE, String source) {
    switch (TYPE) {
        case GL20.GL_VERTEX_SHADER:
            source = "#define VERTEX\n" + source;
            break;
        case GL20.GL_FRAGMENT_SHADER:
            source = "#define FRAGMENT\n" + source;
            break;
    }
    // then compile source
}
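For completeness, here is a fuller sketch of the same helper with the compile step filled in (assuming libGDX-style bindings, since the snippet references GL20; Gdx.gl20 is the usual way to reach them):

int compileShader(int type, String source) {
    // Prepend the stage define so only the matching #ifdef block survives.
    switch (type) {
        case GL20.GL_VERTEX_SHADER:
            source = "#define VERTEX\n" + source;
            break;
        case GL20.GL_FRAGMENT_SHADER:
            source = "#define FRAGMENT\n" + source;
            break;
    }
    int shader = Gdx.gl20.glCreateShader(type);
    Gdx.gl20.glShaderSource(shader, source);
    Gdx.gl20.glCompileShader(shader);
    return shader;
}

Both stages then come from the same file, e.g. compileShader(GL20.GL_VERTEX_SHADER, combinedSource) and compileShader(GL20.GL_FRAGMENT_SHADER, combinedSource).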
I created a basic quad drawing shader using a single point and a geometry shader.
I've read many posts and articles suggesting that I would not need to use glProgramParameteriEXT and could use the layout keyword so long as I was using a shader #version 150 or higher. Some suggested #version 400 or #version 420. My computer will not support #version 420 or higher.
If I use only layout and #version 150 or higher, nothing draws. If I remove layout (or even keep it; it does not seem to care because it will compile) and use glProgramParameteriEXT, it renders.
In code, this does nothing:
layout (points) in;
layout (triangle_strip, max_vertices=4) out;
This is the only code that works:
glProgramParameteriEXT( id, GL_GEOMETRY_INPUT_TYPE_EXT, GL_POINTS );
glProgramParameteriEXT( id, GL_GEOMETRY_OUTPUT_TYPE_EXT, GL_TRIANGLE_STRIP );
glProgramParameteriEXT( id, GL_GEOMETRY_VERTICES_OUT_EXT, 4 );
The alternative is to write a parser that reads these parameters out of the shader source and sets them via glProgramParameteriEXT.
Source for quad rendering via geometry shader:
#version 330
#ifdef VERTEX_SHADER
in vec4 aTexture0;
in vec4 aColor;
in mat4 aMatrix;
out vec4 gvTex0;
out vec4 gvColor;
out mat4 gvMatrix;
void main()
{
// Texture color
gvTex0 = aTexture0;
// Vertex color
gvColor = aColor;
// Matrix
gvMatrix = aMatrix;
}
#endif
#ifdef GEOMETRY_SHADER
layout (points) in;
layout (triangle_strip, max_vertices=4) out;
in vec4 gvTex0[1];
in vec4 gvColor[1];
in mat4 gvMatrix[1];
out vec2 vTex0;
out vec4 vColor;
void main()
{
vColor = gvColor[0];
// Top right.
//
gl_Position = gvMatrix[0] * vec4(1, 1, 0, 1);
vTex0 = vec2(gvTex0[0].z, gvTex0[0].y);
EmitVertex();
// Top left.
//
gl_Position = gvMatrix[0] * vec4(-1, 1, 0, 1);
vTex0 = vec2(gvTex0[0].x, gvTex0[0].y);
EmitVertex();
// Bottom right.
//
gl_Position = gvMatrix[0] * vec4(1, -1, 0, 1);
vTex0 = vec2(gvTex0[0].z, gvTex0[0].w);
EmitVertex();
// Bottom left.
//
gl_Position = gvMatrix[0] * vec4(-1, -1, 0, 1);
vTex0 = vec2(gvTex0[0].x, gvTex0[0].w);
EmitVertex();
EndPrimitive();
}
#endif
#ifdef FRAGMENT_SHADER
uniform sampler2D tex0;
in vec2 vTex0;
in vec4 vColor;
out vec4 vFragColor;
void main()
{
vFragColor = clamp(texture2D(tex0, vTex0) * vColor, 0.0, 1.0);
}
#endif
I am looking for suggestions as to why something like this might happen.