tl;dr:
I'm a shader newbie and I'm trying to port an HLSL shader to GLSL.
What is the GLSL equivalent of RWTexture2D<float4> whatever;?
I need to programmatically create a texture inside a shader.
Longer version:
I'm trying to port the "slime shader" from this video to GLSL, and load it in a web page (at the moment I'm using Three.js).
I managed to code the pseudo-random hash function and display the noise on screen, but now I'm stuck.
(Here's the HLSL shader code)
In the original shader there is this: RWTexture2D<float4> TrailMap; and I can't find a way to create something similar in my shader. All the info I've found online is about loading external textures, but what I need is a texture that is created and modified inside the shader (and it seems to me that the way GLSL handles textures is not very beginner friendly).
I also tried using this converter. What I get is uniform image2D TrailMap; but it gives me this error:
'image2D' : Illegal use of reserved word
What am I missing?
WebGL doesn't have image load/store, the ability to read and write image data arbitrarily from within shaders. The converter is doing the right thing; WebGL simply doesn't expose this hardware functionality.
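The usual substitute in WebGL is render-to-texture: run a full-screen pass whose ordinary fragment output is the new texture contents, ping-ponging between two render targets (in Three.js, two WebGLRenderTarget objects). A rough sketch of the trail-update pass in GLSL ES, with invented uniform names:

    // Fragment shader for the trail-update pass (GLSL ES 1.00).
    // Reads last frame's trail map and writes the updated value as the
    // fragment color into the other render target; swap the two each frame.
    precision highp float;

    uniform sampler2D uPrevTrail;  // previous frame's trail map (invented name)
    uniform vec2 uResolution;      // render-target size in pixels (invented name)

    void main() {
        vec2 uv = gl_FragCoord.xy / uResolution;
        vec4 prev = texture2D(uPrevTrail, uv);
        gl_FragColor = prev * 0.97; // e.g. fade the trail a little each frame
    }

Note that each fragment can only write to its own pixel this way, so agent-style scattered writes (each agent depositing at its own position) typically become a separate pass that draws the agents as points into the same render target.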
Related
I'm trying to write a shader for Minecraft Bedrock Edition, and since I don't have access to the game's code, just the GLSL ES shaders, I was wondering whether there is any way to access a texture from the fragment shader without having to pass it through the C++ code.
The answer is as stated in the comments above: there's no way. GLSL doesn't work this way.
Is there any way to automatically compile OpenGL shaders for Vulkan? The problem is with the uniforms.
'non-opaque uniforms outside a block' : not allowed when using GLSL for Vulkan
I have tried compiling for OpenGL and then converting with spirv-cross --vulkan-semantics, but the output still has non-opaque uniforms outside a block.
spirv-cross seems to only provide options for going the other way, converting Vulkan shaders for OpenGL consumption:
[--glsl-emit-push-constant-as-ubo]
[--glsl-emit-ubo-as-plain-uniforms]
A shader meant for OpenGL consumption will not work on Vulkan. Even ignoring the difference in how they handle uniforms, they have very different resource models. Vulkan uses descriptor sets and binding points, where all resources, whatever their kind, live in a single namespace of (set, binding) indices. By contrast, OpenGL gives each kind of resource its own set of indices. So a GLSL shader meant for OpenGL might assign a texture uniform and a uniform block to the same binding index; you can't do that in a GLSL shader meant for Vulkan unless the two resources are in different descriptor sets.
If you want to share shaders, you're going to need to employ pre-processor trickery to make sure that the shader assigns resources (including how it apportions uniforms) for the specific target that the shader is being compiled for.
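For example, glslangValidator predefines VULKAN when compiling for Vulkan with -V (and GL_SPIRV when targeting OpenGL SPIR-V), so one way to structure a shared shader looks roughly like this, sketched with made-up resource names:

    #version 450

    #ifdef VULKAN
    // Vulkan: non-opaque uniforms must live inside a block, and all
    // resources share one (set, binding) namespace.
    layout(set = 0, binding = 0) uniform Params { vec4 tint; };
    layout(set = 0, binding = 1) uniform sampler2D tex;
    #else
    // OpenGL: plain uniforms are allowed, and samplers and uniform
    // blocks each have their own binding namespace.
    uniform vec4 tint;
    layout(binding = 0) uniform sampler2D tex;
    #endif

    layout(location = 0) in vec2 uv;
    layout(location = 0) out vec4 color;

    void main() {
        color = texture(tex, uv) * tint;
    }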
I have some working OpenGL code that I was asked to port to Direct3D 11.
In my code I am using Shader Storage Buffer Objects (SSBOs) to read and write data in a geometry shader.
I am pretty new to Direct3D programming. Thanks to Google, I've been able to identify what I think is the D3D equivalent of SSBOs: RWStructuredBuffer.
The problem is that I am not at all sure I can use them in a geometry shader in D3D11, which, from what I understand, can generally only use up to 4 "stream out" targets (are these some sort of transform feedback buffers?).
The question is: is there any way with D3D11/11.1 to do what I'm doing in OpenGL (that is writing to SSBOs from the geometry shader)?
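For reference, a minimal sketch of the kind of thing I mean on the GLSL side (heavily simplified, names invented):

    #version 430

    layout(triangles) in;
    layout(triangle_strip, max_vertices = 3) out;

    // SSBO written from the geometry shader (GL 4.3).
    layout(std430, binding = 0) buffer TriangleCentroids {
        vec4 centroids[];
    };

    void main() {
        // Side-channel write, independent of what gets rasterized.
        vec4 c = (gl_in[0].gl_Position + gl_in[1].gl_Position
                + gl_in[2].gl_Position) / 3.0;
        centroids[gl_PrimitiveIDIn] = c;

        // Pass the triangle through unchanged.
        for (int i = 0; i < 3; ++i) {
            gl_Position = gl_in[i].gl_Position;
            EmitVertex();
        }
        EndPrimitive();
    }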
UPDATE:
Just found this page: http://msdn.microsoft.com/en-us/library/windows/desktop/hh404562%28v=vs.85%29.aspx
If I understand the section "Use UAVs at every pipeline stage" correctly, it seems that accessing such buffers is allowed in all shader stages.
Then I discovered that D3D 11.1 is only available on Windows 8, though some of its features were also ported to Windows 7.
Is this part of Direct3D included in those features available on Windows 7?
RWBuffers are not related to the geometry shader outputting geometry; they are used mostly in compute shaders, to a lesser extent in pixel shaders, and, as you spotted, the other stages need D3D 11.1 and Windows 8.
What you are looking for is stream output. The API to bind buffers to the output of the geometry shader stage is ID3D11DeviceContext::SOSetTargets, and the buffers need to be created with the D3D11_BIND_STREAM_OUTPUT flag.
Also, outputting geometry from a geometry shader dates back to D3D10; in D3D11 it is often possible to do something at least as efficient, and simpler, with a compute shader. That's not absolute advice, of course.
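Since your working code is OpenGL, here is the GLSL shape of that compute alternative, purely as an illustration of the pattern (buffer layout invented); a compute dispatch writes straight to a buffer with no geometry pipeline involved:

    #version 430
    layout(local_size_x = 64) in;

    layout(std430, binding = 0) buffer Results {
        vec4 data[];
    };

    void main() {
        uint i = gl_GlobalInvocationID.x;
        if (i < uint(data.length()))   // the dispatch may overshoot the array
            data[i] = vec4(float(i));  // arbitrary per-element write
    }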
The geometry shader runs once per assembled primitive and can generate zero or more primitives as a result.
The output of the geometry shader can be redirected towards an output buffer instead of passed on further for rasterization.
See this overview diagram of the pipeline and this description of the pipeline stages.
A geometry shader has access to other resources, bound via the GSSetShaderResources method on the device context. However, these are generally resources that are "fixed" at shader execution time such as constants and textures. The data that varies for each execution of the geometry shader is the input primitive to the shader.
I've just been pointed to this page:
http://nvidia.custhelp.com/app/answers/detail/a_id/3196/~/fermi-and-kepler-directx-api-support
In short, NVIDIA does not support the feature (UAVs at every pipeline stage) on cards older than Maxwell.
This pretty much answers my question. :/
I would like to access different levels of detail in my GLSL fragment program. I'm currently stuck with using legacy OpenGL, including GLSL 1.2. Unfortunately, I don't have control over that.
I see that the texture2DLod() function exists, but it appears it can only be used in a vertex program.
I have read this question, but they appear to be working with GLSL 1.4 or later. Unfortunately, I do not have that option.
Is there any way in a GLSL 1.2 fragment program to sample a specific mipmap level?
If there's no function for doing it directly, is it possible to send the mipmaps in as separate textures without doing 8 copies?
It is not possible for a fragment shader (in GLSL 1.20) to sample a specific texture mipmap. However, you can always change a texture's base/max mipmap levels before you use it, with glTexParameteri and GL_TEXTURE_BASE_LEVEL / GL_TEXTURE_MAX_LEVEL. By setting them both to the same level, you force every access to that texture to use that specific mipmap level.
Now, you can't expose the mipmap levels as separate textures (unless you're using NVIDIA hardware and have access to ARB_texture_view). So you'll have to live with changing the texture's base/max level every time you want to pick a new mipmap.
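The shader side stays completely ordinary; the level pinning happens entirely in the application. A minimal GLSL 1.20 sketch (names invented):

    #version 120
    // With GL_TEXTURE_BASE_LEVEL and GL_TEXTURE_MAX_LEVEL both set to
    // level N (via glTexParameteri) before drawing, this lookup can
    // only ever read mipmap level N.
    uniform sampler2D tex;
    varying vec2 uv;

    void main() {
        gl_FragColor = texture2D(tex, uv);
    }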
I'm looking for a way to develop shaders for my game engine on the Mac. I've come across the OpenGL Shader Builder app, which seemed really useful apart from one thing: I couldn't figure out a way to use generic vertex attributes with it.
OpenGL Shader Builder works fine with all the pre-defined OpenGL vertex attributes (gl_Normal, gl_Color, gl_TexCoord, gl_MultiTexCoordX, etc.), but I want to be able to define my own vertex attributes in the shader and then pass in a set of generic vertex data that makes up a model. That way I could develop shaders independently in the tool, as long as I have a set of vertex attributes to pass to the shader.
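In other words, I'd like to be able to write something like this (attribute names invented) and still feed it per-vertex data from inside the tool:

    #version 120
    attribute vec3 position;   // generic attributes instead of gl_Vertex etc.
    attribute vec3 tangent;
    attribute vec2 texCoord0;

    uniform mat4 modelViewProjectionMatrix;

    varying vec3 vTangent;
    varying vec2 vTexCoord;

    void main() {
        vTangent = tangent;
        vTexCoord = texCoord0;
        gl_Position = modelViewProjectionMatrix * vec4(position, 1.0);
    }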