According to Apple’s documentation: “To update a value once, use key-value coding: Call the setValue:forKey: method, providing the uniform name from shader source code as the key and an appropriate type of data as the value.” (taken from the SCNProgram Class Reference).
However, I can’t get this to work.
I have an SCNMaterial subclass, set a new SCNProgram instance on it, and load the vertex and fragment shaders. I have been using handleBindingOfSymbol to set custom variables, such as:
self.stringMaterial.handleBindingOfSymbol("waveAmplitude") { programID, location, renderedNode, renderer in
    glUniform1f(GLint(location), Float(self.amplitude))
}
This is working correctly.
For efficiency I want to move to key-value coding instead. But if I replace the above code with
self.stringMaterial.setValue(Float(self.amplitude), forKey: "waveAmplitude")
the uniform’s value in the shader is 0.0.
Does anyone have any experience with this? (I'm doing this on macOS, but I expect it’s the same on iOS.)
OK, from what I understand, materials can be created for a .dae (or any other 3D model) used as an SCNNode in Xcode, in the model's scene editor.
The topmost material gets applied automatically and all is well. My problem is that I want to programmatically switch between these materials throughout my game.
I tried to get an array of these materials by doing:
node.geometry?.materials
however, this only returns that first material. I've tried everything but can't find a way to get the other materials and switch to them. Right now I am trying:
// childNode is the node
childNode.geometry?.materials = [(childNode.geometry?.material(named: "test"))!]
where "test" is that second material, but material(named:) finds nil. How can I programmatically switch between multiple materials?
If the material is not actually assigned to one of the material slots (such as diffuse), it's not part of the geometry either.
You could assign the second material to another slot, read the material into a property to be used later, and then reset that slot's color in code.
Another option I have used myself is assigning multiple materials to different faces of the same model (in third-party 3D software). I exported it as a .dae and added it to Xcode, which then automatically divided the geometry into separate elements, each with its own material. I can then adjust these in Xcode and iterate over them in the same way you are trying to do.
I have an HLSL shader that defines some resources, say a constant buffer:
cbuffer MyCB : register(b0) { /* ... */ };
If I compile my shader, I will then be able to query the register through the reflection API. But is it possible to change the register (for instance, to b3) in a compiled shader blob, in a similar manner to how you can assign bind points to resources in a compiled OpenGL program?
There is no API to change the shader bindings at runtime in a compiled shader.
If you jumped through many hoops, you might be able to achieve this with dynamic shader linking in Shader Model 5.0, although it would be a lot of work and not really worth it when there is a very easy alternative: simply create a new compiled shader with the bindings you want.
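A common way to do that (a hedged sketch; the MYCB_REGISTER macro name is hypothetical) is to parameterize the register in the HLSL source and compile one blob per binding:

#include <d3dcompiler.h>
// In the HLSL source: cbuffer MyCB : register(MYCB_REGISTER) { /* ... */ };
ID3DBlob *blob = nullptr, *errors = nullptr;
const D3D_SHADER_MACRO defines[] = { { "MYCB_REGISTER", "b3" }, { nullptr, nullptr } };
D3DCompileFromFile(L"shader.hlsl", defines, nullptr, "main", "vs_5_0", 0, 0, &blob, &errors);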
You can accomplish this in Direct3D 12 by specifying a BaseShaderRegister other than zero, or by using different RegisterSpace values, in the D3D12_DESCRIPTOR_RANGE struct. If code changes are not feasible, you can isolate each set of registers implicitly by setting the root parameter's ShaderVisibility property (see the sketch below). This will isolate, for example, VS b0 from PS b0. For more details, you can check out the developer video on the topic.
The only time you will run into trouble is if you've actually bound two resources to the same slot and register space (by explicitly specifying them using Shader Model 5.1 syntax). In this case, you are expected to understand that in D3D12 registers are shared across stages, and it's up to you to make sure you have no collisions.
In D3D11, this problem does not occur as each stage has its own register space (VS b0 is not the same as PS b0) and you can't share even if you wanted to. Still, if you for some reason have a component hard-coded to attach data to VS b0 but your vertex shader has already been compiled to expect it at b1, there's not much you can do.
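Here is a minimal Direct3D 12 sketch of the ShaderVisibility approach mentioned above (root signature creation and error handling omitted), keeping a VS b0 and a PS b0 apart without recompiling either shader:

#include <d3d12.h>
// One CBV range covering the b0 that both shaders were compiled against.
D3D12_DESCRIPTOR_RANGE range = {};
range.RangeType          = D3D12_DESCRIPTOR_RANGE_TYPE_CBV;
range.NumDescriptors     = 1;
range.BaseShaderRegister = 0;  // a different RegisterSpace here would also avoid collisions
range.RegisterSpace      = 0;

// Two root parameters over the same register, isolated per stage.
D3D12_ROOT_PARAMETER params[2] = {};
params[0].ParameterType                       = D3D12_ROOT_PARAMETER_TYPE_DESCRIPTOR_TABLE;
params[0].DescriptorTable.NumDescriptorRanges = 1;
params[0].DescriptorTable.pDescriptorRanges   = &range;
params[0].ShaderVisibility                    = D3D12_SHADER_VISIBILITY_VERTEX; // VS b0
params[1] = params[0];
params[1].ShaderVisibility                    = D3D12_SHADER_VISIBILITY_PIXEL;  // PS b0, a separate binding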
I have a software renderer that is designed similarly to the OpenGL 2.0+ rendering pipeline; however, my software renderer is quite static in its functionality. I would like to design it so that I can plug in custom vertex and fragment "shaders" (written as C++ "functions", not in the OpenGL shading language), but I'm not sure how to implement a good, reusable, extensible solution.
Basically, I want to choose a custom "function" that is then called in my renderer to process every vertex (or fragment). Maybe I could pass a function object to the renderer, work out an inheritance-based solution, or perhaps this is a case for a template-based solution.
I imagine it like this:
for every vertex
    // call the vertex-shading function given by the user, with the standard
    // arguments plus the custom ones given in user code. May produce some custom
    // output that has to be fed into the fragment shader below
end

// do some generic rendering stuff like clipping etc.

for every triangle
    for every pixel in the triangle
        // call the fragment-shading function given by the user, with the standard
        // arguments, plus the custom ones from the vertex shader and the ones
        // given in user code
    end
end
I can program C++ quite well; however, I don't have much practical experience with templates and the more advanced features, though I have read a lot and watched a lot of videos.
There are a few requirements. One of these "shader functions" can have multiple (different) input and output variables. Two or three parameters are not optional and always the same (for example, the input to a vertex shader is obviously the vertex, and the output is the transformed position), but one shader could, for instance, also require an additional weight parameter or barycentric coordinates as input. Also, it should be possible to feed such a custom output of the vertex shader into a corresponding fragment shader (like in OpenGL, where an output variable of the vertex shader is fed into the fragment shader).
At the same time, I would prefer a simple solution; it shouldn't be too advanced (I don't want to mimic the GLSL compiler or build my own DSL). It should just be something like: write VertexShaderA and VertexShaderB and be able to plug either into my renderer, along with some parameters depending on the shader.
I would like the solution to use "modern" C++, meaning basically everything that compiles with VS2013 and gcc 4.8.
So to rephrase my main question:
How can I accomplish this "passing of custom functions to my renderer", with the additional functionality mentioned?
If possible, I would welcome a bit of example code to help get me started.
TinyRenderer is a very simple but rather elegant implementation of around 500 lines, and it has a wiki with a tutorial. See https://github.com/ssloy/tinyrenderer and the actual shaders in https://github.com/ssloy/tinyrenderer/blob/master/main.cpp
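TinyRenderer does this with a small virtual IShader interface; for illustration, here is a minimal, self-contained sketch (all names hypothetical, rasterization omitted) of the template-based variant the question muses about, where a user-defined Varyings struct carries custom outputs from the vertex stage to the fragment stage:

#include <array>

struct Vec3 { float x, y, z; };

// The renderer is templated on the shader type; Shader::Varyings holds the
// custom per-vertex outputs that get interpolated for the fragment stage.
template <typename Shader>
float shadeCenter(const std::array<Vec3, 3> &tri, Shader &shader) {
    typename Shader::Varyings v[3];
    for (int i = 0; i < 3; ++i)
        v[i] = shader.vertex(tri[i]);   // vertex stage, once per corner
    // A real rasterizer would do this per covered pixel; to keep the sketch
    // short we only shade the triangle's center (barycentric 1/3, 1/3, 1/3).
    const float w = 1.0f / 3.0f;
    typename Shader::Varyings center = Shader::interpolate(v[0], v[1], v[2], w, w, w);
    return shader.fragment(center);     // fragment stage
}

// An example shader: its Varyings carry a single custom intensity value.
struct GouraudShader {
    struct Varyings { float intensity; };
    static Varyings interpolate(const Varyings &a, const Varyings &b, const Varyings &c,
                                float wa, float wb, float wc) {
        return Varyings{ a.intensity * wa + b.intensity * wb + c.intensity * wc };
    }
    Vec3 lightDir{0.0f, 0.0f, 1.0f};
    Varyings vertex(const Vec3 &p) {
        // Stand-in lighting term; a real shader would use the vertex normal.
        return Varyings{ p.z * lightDir.z };
    }
    float fragment(const Varyings &v) { return v.intensity; }
};

A shader with different inputs (weights, barycentric coordinates, ...) just defines its own Varyings and interpolate. An std::function-based design would trade this compile-time plumbing for runtime flexibility, at the cost of an indirect call per vertex and per fragment.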
I was wondering what would be the quickest / most performant way to clear a 2D texture using DirectX 11?
Context: I am using an RWTexture object as a head pointer to implement linked lists on the GPU (essentially, to implement Order-Independent Transparency as known from the AMD tech demo), and I need to reset this buffer to a fixed value every frame.
The following ideas come to my mind:
Declare it as a render target and use ClearRenderTargetView to set it. This seems unnatural to me since I don't actually render to it directly; also, I am not sure whether it actually works with a uint datatype.
Actually bind it as a render target and render a fullscreen quad, using the pixel shader to set the value
Use a compute shader to set the fixed value
Is there some obvious way I am missing or an API for this that I am not aware of?
As pointed out by user galop1n, ClearUnorderedAccessViewUint and ClearUnorderedAccessViewFloat are the way to go here.
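For the head-pointer use case that looks roughly like this (a minimal Direct3D 11 sketch; 'context' and 'headPointerUAV' are assumed to exist already, and 0xFFFFFFFF is just a common "end of list" marker):

// Reset every texel of the UAV to the fixed value once per frame.
const UINT clearValue[4] = { 0xFFFFFFFF, 0xFFFFFFFF, 0xFFFFFFFF, 0xFFFFFFFF };
context->ClearUnorderedAccessViewUint(headPointerUAV, clearValue);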
I'm trying to use the transform feedback functionality of OpenGL. I've written a minimalistic vertex shader and created a program with it (there's no fragment shader). I've also made a call to glTransformFeedbackVaryings with a single output varying name, and I've set the buffer mode to GL_INTERLEAVED_ATTRIBS. The shader program compiles and links fine (I also make sure I link after the glTransformFeedbackVaryings call).
I've enabled a single vertex attrib array using glEnableVertexAttribArray, allocated a VBO for the generic vertex attributes and made a call to glVertexAttribPointer for the attribute.
I've bound GL_TRANSFORM_FEEDBACK_BUFFER to another buffer which I've generated, and created a data store which should be plenty big enough to be written to during transform feedback.
I then enable transform feedback and call glDrawArrays(GL_POINTS, 0, 1000). I don't get any crashes throughout the running of the program.
The problem is that I'm getting no indication that transform feedback is writing anything to the transform feedback buffer during the glDrawArrays call. I set up a query that monitors GL_TRANSFORM_FEEDBACK_PRIMITIVES_WRITTEN, and this always returns 0. No matter what I try, I can't seem to get transform feedback to write ANYTHING (never mind anything meaningful!).
If anyone has any suggestions as to how I could get the transform feedback to write anything, or things that I should check for please let me know!
Note: I can't use transform feedback objects and I'm not using vertex array objects.
I think the problem ended up being how I was calling glBindBufferBase. Given that I can't see this function call in the original question, it may be that I omitted it altogether.
Certainly I didn't realise that the buffer object also has to be bound to GL_TRANSFORM_FEEDBACK_BUFFER with a call to glBindBuffer before calling glBindBufferBase.
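For reference, the full binding sequence looks roughly like this (a minimal sketch; 'tfBuffer' and 'bufferSize' are assumed names for the feedback buffer object and its allocation size):

// Allocate the feedback buffer's data store, then attach the buffer to
// indexed binding point 0, matching the single varying captured in
// GL_INTERLEAVED_ATTRIBS mode by glTransformFeedbackVaryings.
glBindBuffer(GL_TRANSFORM_FEEDBACK_BUFFER, tfBuffer);
glBufferData(GL_TRANSFORM_FEEDBACK_BUFFER, bufferSize, nullptr, GL_STATIC_READ);
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, tfBuffer);

glEnable(GL_RASTERIZER_DISCARD);    // no fragment shader is attached anyway
glBeginTransformFeedback(GL_POINTS);
glDrawArrays(GL_POINTS, 0, 1000);
glEndTransformFeedback();
glDisable(GL_RASTERIZER_DISCARD);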