I'm looking for a GLSL editor. I found some by googling, but I want to know if there are any preferred ones based on user experience.
Possible features:
Syntax Highlighting
Intellisense
Automatic compile and link
P.S.
I'm not even sure if it's meaningful/possible for GLSL to be compiled automatically (any comments?).
EDIT:
Here's what I found:
Shader Maker
Try out KickJS's Shader Editor. It currently supports syntax highlighting and compiles the code as you write.
http://www.kickjs.org/example/shader_editor/shader_editor.html
If you are running OS X, you should try OpenGL Shader Builder, even though the tool feels a little outdated:
/Developer/Applications/Graphics Tools/OpenGL Shader Builder.app
There is also GLman, which is perhaps more of a GLSL sandbox environment than an editor. A good introduction to the program can be found in the excellent book 'Graphics Shaders: Theory and Practice, Second Edition'.
http://web.engr.oregonstate.edu/~mjb/glman/
I found Shadertoy helpful. It contains some predefined shaders you can tweak and see instant results. It's all online, and since it targets WebGL, the GLSL largely carries over to OpenGL ES 2.0 and various desktop OpenGL versions too.
https://www.shadertoy.com/
It passes in some predefined uniforms, as well as up to 4 input textures you can link to.
These are the inputs (a minimal example follows the list):
uniform vec4 mouse: xy contains the current pixel coordinates (while the left mouse button is down); zw contains the pixel where the click occurred.
uniform vec2 resolution: the rendering viewport resolution.
uniform float time: current time in seconds.
uniform sampler2D tex0: sampler for input texture 0.
uniform sampler2D tex1: sampler for input texture 1.
uniform sampler2D tex2: sampler for input texture 2.
uniform sampler2D tex3: sampler for input texture 3.
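To make those inputs concrete, here is a minimal fragment shader sketch using them. The uniform names follow the list above (these are the older Shadertoy names); the body itself is just an illustrative effect, not taken from the site:

// Minimal sketch using the inputs listed above.
uniform vec4 mouse;
uniform vec2 resolution;
uniform float time;
uniform sampler2D tex0;

void main() {
    vec2 uv = gl_FragCoord.xy / resolution;  // normalize to [0, 1]
    vec3 base = texture2D(tex0, uv).rgb;     // sample input texture 0
    float pulse = 0.5 + 0.5 * sin(time);     // animate with the time uniform
    gl_FragColor = vec4(base * pulse, 1.0);
}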
I've found http://glsl.heroku.com interesting. You can only edit the fragment shader, but it's quite useful for testing effects.
And it's open source! You can get the source on github: https://github.com/mrdoob/glsl-sandbox
Example of a shader using this editor: http://glsl.heroku.com/e#7310.0 (it's not mine, btw)
I want to convert code from glslsandbox into regular GLSL. Suppose I have a quad mesh displaced in a 3D scene. I want to create this "shadertoy-like" texture and apply it to the mesh. I'm aware of the transformations this requires (screen - NDC - clip space - world space), but I'm still struggling. http://glslsandbox.com/e#61091.0 is an extremely simple shader; can you please demonstrate the vertex and fragment shaders it would take to apply it to a 3D mesh in a 3D scene?
The shader you linked is a fragment shader drawing on a flat screen plane.
Typically, the corresponding vertex setup would be two triangles (potentially as a strip, meaning 4 vertices in total) covering the entire screen. In fact, you don't really have to concern yourself with any transformations, especially given that the fragment shader uses gl_FragCoord and the resolution is passed in as a uniform.
VS:
#version 450
in vec4 position;
void main() {
    gl_Position = position;
}
Vertices (example, use with GL_TRIANGLE_STRIP):
-1, -1
-1, 1
1, -1
1, 1
After you cover the entire screen, you can switch this setup to render to a texture: create a framebuffer, attach a texture to it, and render in the same way. Then you'll be able to use that texture on your 3D model. This works well if the generated image rarely changes.
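A minimal render-to-texture sketch in C, assuming a current desktop GL context; the 512x512 size and the variable names are arbitrary, not from the original answer:

/* Create a texture and a framebuffer that renders into it. */
GLuint fbo, tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);   /* no initial data */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);
/* check glCheckFramebufferStatus(GL_FRAMEBUFFER) here */
/* ... draw the fullscreen strip exactly as before ... */
glBindFramebuffer(GL_FRAMEBUFFER, 0);  /* back to the default framebuffer */
/* tex now holds the generated image; bind it when drawing the 3D mesh */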
If you actually want to draw it in one pass, you'll need to pass the texture coordinate in as a varying variable and use it instead of gl_FragCoord; no other changes should be necessary.
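A sketch of that one-pass variant; the attribute names, the mvp uniform, and the fragment body are illustrative stand-ins, not the linked shader's actual math:

// vertex shader: forward the mesh's texture coordinate
#version 450
in vec4 position;
in vec2 texcoord;
out vec2 v_texcoord;
uniform mat4 mvp;  // model-view-projection supplied by the application
void main() {
    v_texcoord = texcoord;
    gl_Position = mvp * position;
}

// fragment shader: use the varying where the sandbox shader
// used gl_FragCoord.xy / resolution
#version 450
in vec2 v_texcoord;
out vec4 fragColor;
uniform float time;
void main() {
    vec2 uv = v_texcoord;
    fragColor = vec4(uv, 0.5 + 0.5 * sin(time), 1.0);
}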
This question already has an answer here: Reflection and refraction impossible without recursive ray tracing? (1 answer)
I want to render a 2D image from a 3D model, with model, view, and perspective projection transforms. However, for each pixel/fragment in the output image, I want to store a representation of the index of the vertex in the original mesh that is physically closest to the point where the ray from the camera centre intersects the mesh.
I have a good understanding of the math involved in doing this and can build appropriate raytracing code 'longhand' to get this result but wanted to see if it was possible to achieve in OpenGL, via e.g. a fragment shader.
I'm not an OpenGL expert, but my initial reading suggests a possible approach: set a render target for the fragment shader that supports integer values (to store indices), pass the entire mesh's coordinates to the fragment shader as a uniform, then search for the nearest coordinate after back-transforming gl_FragCoord to model space.
My concern is that this would perform hideously - my mesh has about 10,000 vertices.
My question is: does this seem like a poor use case for OpenGL? If not, is my approach reasonable? If not, what would you suggest instead?
Edit: While the indicated answer does contain the kernel of a solution to this question, it is not in any way a duplicate question; it's a different question with a different answer, and the two merely share some common elements (ray tracing). Someone searching for an answer to this question is highly unlikely to find the proposed duplicate.
First, passing 10,000 vertices as a uniform array is not good practice in GLSL. You can use a UBO, or create a data texture and upload it as a texture uniform; see the following post:
GLSL: Replace large uniform int array with buffer or texture
I am not familiar with ray tracing. However, I think that in most cases, uploading a large amount of data to the GPU calls for a texture uniform.
https://www.opengl.org/discussion_boards/showthread.php/200487-Ray-intersection-with-GLSL?p=1292112&viewfull=1#post1292112
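As a sketch of the data-texture route mentioned above, here is one way to pack 10,000 vertex positions into a float texture that the fragment shader can read with texelFetch; the 100x100 layout, the posTex/vertexPositions names, and the array itself are assumptions for illustration:

/* Pack 10,000 xyz positions into a 100x100 RGB32F texture.
   Assumes desktop GL 3.0+, a current context, and a
   float vertexPositions[30000] array already filled in. */
GLuint posTex;
glGenTextures(1, &posTex);
glBindTexture(GL_TEXTURE_2D, posTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F, 100, 100, 0,
             GL_RGB, GL_FLOAT, vertexPositions);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
/* In the fragment shader, vertex i is then fetched as:
   vec3 p = texelFetch(positions, ivec2(i % 100, i / 100), 0).rgb; */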
I want to write shaders to use in WebGL, and specifically with three.js. Is there a specific version of GLSL that WebGL and three.js use?
WebGL shaders follow the GLSL ES 1.0.17 spec:
https://www.khronos.org/registry/gles/specs/2.0/GLSL_ES_Specification_1.0.17.pdf
That's different from desktop OpenGL in several ways. For one, it's version 1.0 of GLSL ES, whereas desktop GL is at version 4.2 of GLSL (not ES).
One big difference between WebGL GLSL and many shader articles found on the internet is that there is no fixed-function pipeline in OpenGL ES 2.0, and therefore no fixed-function pipeline in WebGL.
The fixed function pipeline is left over from OpenGL 1.0 where you'd use commands like glLight and glVertex and glNormal. And then your shaders needed a way to reference that data. In OpenGL ES and WebGL all that is gone because everything a shader does is 100% up to the user. All WebGL does is let you define your own inputs (attributes, uniforms) and name them whatever you want.
WebGL2 shaders follow the GLSL ES 3.00 spec
https://www.khronos.org/registry/OpenGL/specs/es/3.0/es_spec_3.0.pdf
As for three.js, three.js is a 3d engine and provides its own set of standard inputs, names, and other features when it generates a shader. See the docs for some of the details. The uniforms and attributes provided by default are documented here. You can also look at the source or check out an example.
Three.js also provides something called RawShaderMaterial, which apparently does not add any predefined things; in that case you just write standard WebGL GLSL.
You can find three.js's standard attributes and uniforms here.
As for a place to learn GLSL I don't really have a suggestion. It really depends on your level of experience with programming in general and how you like to learn. I learn by looking at examples better than reading manuals. Maybe someone else can add some links.
Shaders as a concept are pretty simple. You create a pair of shaders, set them up with inputs, then call gl.drawArrays or gl.drawElements and pass in a count. Your vertex shader will be called count times and needs to set gl_Position. Every 1 to 3 times it's called, WebGL will draw a point, line, or triangle. To do this it will call your fragment shader for each pixel it's about to draw, asking what color to make that pixel. The fragment shader needs to set gl_FragColor.
The shaders get data from attributes, uniforms, textures, and varyings. Attributes are per-vertex data: they pull their data from buffers, one piece of data per attribute per iteration of your vertex shader. Uniforms are like global variables set before the shader runs. You can pass data from a vertex shader to a fragment shader with a varying. That data will be interpolated, or varied ;), between the values set for each vertex of a primitive (triangle) as the fragment shader is called to provide a color for each pixel.
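To make that concrete, here is a minimal WebGL1 (GLSL ES 1.00) shader pair; the attribute, uniform, and varying names are arbitrary choices for illustration:

// vertex shader: called once per vertex; attributes pull from buffers
attribute vec4 position;
attribute vec3 color;
uniform mat4 matrix;   // set once per draw call, like a global
varying vec3 v_color;  // handed off to the fragment shader
void main() {
    v_color = color;
    gl_Position = matrix * position;
}

// fragment shader: called once per covered pixel;
// v_color arrives interpolated across the triangle
precision mediump float;
varying vec3 v_color;
void main() {
    gl_FragColor = vec4(v_color, 1.0);
}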
It's up to you to creatively supply data to the shader and use that data creatively to set gl_Position and gl_FragColor. I get most ideas from looking at examples.
GLSL itself is pretty straightforward. There are a few types (int, float, vec2, vec3, vec4, mat2, mat3, mat4), they respond to the operators +, -, *, /, etc., and there are some built-in functions.
You can find a terse version of all GLSL info on the last 2 pages of the WebGL Reference Card.
That was enough for me. That and looking at working programs.
The one interesting thing for me versus most languages was the synonyms for vec fields, and swizzling. Take a vec4, for example:
vec4 v = vec4(1.0, 2.0, 3.0, 4.0);
You can reference the various components of v using x,y,z,w or s,t,p,q or r,g,b,a or array style. So, for example:
float red = v.r; // OR
float red = v.x; // same thing OR
float red = v.s; // same thing OR
float red = v[0]; // same thing
The other thing you can do is swizzle
vec4 color = v.bgra; // swap red and blue
vec4 bw = v.ggga; // make a monotone color just green & keep alpha
And you can also get subcomponents
vec2 just_xy = v.xy;
I have a flat surface drawn with a single fullscreen quad (GL_QUADS).
I want to deform this surface at each point specified by a GL_TEXTURE_2D texture, preferably through some kind of shader.
In my mind, black could correspond to flat and white could correspond to a hill.
I want to have about 4 million points on my terrain and update them at each step in my program.
How would I use a geometry shader to do this? Is a shader able to generate new vertices?
The simplest way would be to generate a large triangle strip grid, upload it to a VBO and draw it, using the vertex shader to alter just the up coordinate. The vertex shader can also generate normals from the heightmap (or supply a normal map), which then get passed to the fragment shader for lighting.
To avoid storing a huge amount of vertex data, use gl_VertexID to generate the vertex positions from scratch in the vertex shader. Don't bind any buffers; simply call glDrawArrays(GL_TRIANGLE_STRIP, 0, lots).
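A vertex-shader sketch combining both ideas, generating one strip row per draw from gl_VertexID and displacing it by the heightmap; GRID, the uniform names, and the one-row-per-strip layout are assumptions for illustration:

#version 330
const int GRID = 256;              // vertices per grid row
uniform int rowIndex;              // which row this strip covers
uniform sampler2D heightmap;       // black = flat, white = hill
uniform mat4 mvp;
out vec2 v_uv;
void main() {
    int col = gl_VertexID / 2;               // 0 .. GRID-1
    int row = rowIndex + (gl_VertexID & 1);  // zigzag between two rows
    vec2 uv = vec2(col, row) / float(GRID - 1);
    float h = textureLod(heightmap, uv, 0.0).r;  // height from red channel
    v_uv = uv;
    // only the "up" (y) coordinate is altered by the heightmap
    gl_Position = mvp * vec4(uv.x, h, uv.y, 1.0);
}

Each row is then drawn with glDrawArrays(GL_TRIANGLE_STRIP, 0, 2 * GRID); no vertex buffers are needed, though a core profile still requires an empty VAO to be bound.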
As GuyRT mentioned, a tessellation shader would be good too and allow you to vary the tessellation detail based on the camera's distance to the mesh. This would be more work though.
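For a taste of that route, here is a tessellation control shader sketch that scales the tessellation level by distance to the camera; the cameraPos uniform, the 64.0 / (1.0 + d) falloff, and the assumption that gl_Position still holds world-space positions at this stage are all illustrative, and a matching evaluation shader is still needed:

#version 400
layout(vertices = 3) out;
uniform vec3 cameraPos;
void main() {
    // pass the patch vertices through unchanged
    gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;
    if (gl_InvocationID == 0) {
        // more triangles near the camera, fewer far away
        float d = distance(cameraPos, gl_in[0].gl_Position.xyz);
        float level = clamp(64.0 / (1.0 + d), 1.0, 64.0);
        gl_TessLevelInner[0] = level;
        gl_TessLevelOuter[0] = level;
        gl_TessLevelOuter[1] = level;
        gl_TessLevelOuter[2] = level;
    }
}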
I have a simple GLSL 1.2 shader which accumulates values from five textures.
The shader compiles just fine.
My problem is that when rendering a simple triangle with this shader active, nothing gets drawn and glGetError() returns GL_INVALID_OPERATION.
However, if I only use three of the textures, everything runs just fine. If I activate another shader before rendering the triangle, everything runs fine as well.
It seems there is something about textures and shaders I don't know. Any ideas on why running a shader would produce an invalid operation?
Update:
The problem occurs on Nvidia, ATI, and software Mesa rendering alike.
There are no GL errors while uploading textures, activating textures, activating the shader, or setting the shader's uniforms; the error occurs only after rendering one simple triangle.
The textures are accessed only in the fragment shader.
There is a limited number of textures you can use in a shader.
There are separate limits for the vertex stage and the fragment stage, as well as an overall combined limit.
In all probability, you are exceeding one of these limits. The limits are implementation-dependent, so you have to query them using glGet with the following parameters (a query sketch follows below):
GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS for the vertex stage and GL_MAX_TEXTURE_IMAGE_UNITS for the fragment stage
GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS for an entire shader program
If both the vertex and the fragment stage access the same texture unit, each access counts separately against GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS.
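The query sketch in C, assuming a current GL context with headers and function pointers already set up; the variable names are arbitrary:

#include <stdio.h>

GLint maxVertexUnits = 0, maxFragmentUnits = 0, maxCombinedUnits = 0;
glGetIntegerv(GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS,   &maxVertexUnits);
glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS,          &maxFragmentUnits);  /* fragment stage */
glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &maxCombinedUnits);
printf("vertex: %d  fragment: %d  combined: %d\n",
       maxVertexUnits, maxFragmentUnits, maxCombinedUnits);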