The popular way of using GLSL shaders in WebGL seems to be to embed them in the main html file.
The vertex and fragments shaders are embedded in tags like:
<script id="shader-fs" type="x-shader/x-fragment">
This is the same convention I see in the WebGL samples in the Mozilla Developer Network page.
This works fine for simple apps, but when you have a complex app with a number of shaders, the html file gets cluttered. (I keep editing the wrong shader!) Also if you want to reuse your shaders, this scheme is inconvenient.
So I was thinking about putting these shaders in separate XML files and loading them using XMLHttpRequest(). Then I saw that someone else had the same idea:
http://webreflection.blogspot.com/2010/09/fragment-and-vertex-shaders-my-way-to.html
I like the suggestion to use .c files, since that gives you syntax highlighting and other editor conveniences for GLSL.
But the issue with the above approach is that (as far as I understand) XMLHttpRequest() cannot load a local .c file, i.e. on the client side, while you are developing and testing the WebGL app. And it is cumbersome to keep uploading it to the server during this process.
So if I want to keep the shaders out of the html file, is the only option to embed them as strings in the code? But that would make it hard to write as well as debug...
I'd appreciate any suggestions on managing multiple GLSL shaders in WebGL apps.
Regards
Edit (May 05 2011)
Since I use a Mac for development, I decided to enable the Apache server and put my WebGL code under http://localhost/~username/. This sidesteps the issue of the file: protocol being disabled during development. Now the JavaScript file-loading code works locally, since http: is used rather than file:. Just thought I'd put this up here in case anyone finds it useful.
Yup, a local server's really the only way to go if you want to use XHR. I've written a bunch of WebGL lessons, and have often considered moving away from embedding the shaders in the HTML, but have been scared off by the amount of explanation about web security I'd need to write...
Fortunately it's super easy to run a server. Just open a shell then
cd path-to-files
python -m SimpleHTTPServer
Then point your browser to
http://localhost:8000
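If you're on Python 3, the module was renamed, so the equivalent command is
python3 -m http.server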
That works for simple cases like textures and GLSL. For video and audio streaming see
What is a faster alternative to Python's http.server (or SimpleHTTPServer)?
On the other hand, every browser that supports WebGL supports ES6 multi-line template literals, so if you don't care about old browsers you can just put your shaders in JavaScript using backticks, like this:
var vertexShaderSource = `
attribute vec4 position;
uniform mat4 u_matrix;
void main() {
gl_Position = u_matrix * position;
}
`;
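To turn that string into a usable program you still compile and link it as usual; a minimal sketch, assuming you have a WebGL context gl and a fragmentShaderSource string defined the same way:
var vertexShader = gl.createShader(gl.VERTEX_SHADER);
gl.shaderSource(vertexShader, vertexShaderSource);
gl.compileShader(vertexShader);
var fragmentShader = gl.createShader(gl.FRAGMENT_SHADER);
gl.shaderSource(fragmentShader, fragmentShaderSource);
gl.compileShader(fragmentShader);
var program = gl.createProgram();
gl.attachShader(program, vertexShader);
gl.attachShader(program, fragmentShader);
gl.linkProgram(program);
if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
  console.error(gl.getProgramInfoLog(program));
}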
EDIT 2021 this answer is antiquated. You should probably look for something else.
I've been using require.js's text plugin.
Here's a snippet:
define(
/* Dependencies (I also loaded the gl-matrix library) */
["glmatrix", "text!shaders/fragment.shader", "text!shaders/vertex.shader"],
/* Callback when all has been loaded */
function(glmatrix, fragmentShaderCode, vertexShaderCode) {
....
var vertexShader = gl.createShader(gl.VERTEX_SHADER);
gl.shaderSource(vertexShader, vertexShaderCode);
gl.compileShader(vertexShader);
....
}
);
The directory structure is as follows:
~require-gl-shaders/
|~js/
| |+lib/
| |~shaders/
| | |-fragment.shader
| | `-vertex.shader
| |-glmatrix.js - gl-matrix library
| |-shader.js
| |-text.js - require.js's text plugin
|-index.html
|-main.js
`-require.js - the require.js library
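For completeness, main.js only has to point Require.js at the js/ folder and kick things off; a minimal bootstrap sketch (the baseUrl and module name are assumptions based on the tree above):
// main.js (sketch)
require.config({ baseUrl: "js" });
require(["shader"], function () {
  // shader.js pulls in glmatrix and the two shader files as text dependencies
});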
Personally, I had a bit of a learning curve with Require.js, but it really helped me keep my code cleaner.
Following @droidballoon's hint I ended up using stack.gl, which "is an open software ecosystem for WebGL, built on top of browserify and npm".
Its glslify provides a browserify transform which can be used in conjunction with gl-shader in order to load shaders. The JavaScript would look something like this:
var glslify = require('glslify');
var loadShader = require('gl-shader');
var createContext = require('gl-context');
var canvas = document.createElement('canvas');
var gl = createContext(canvas);
var shader = loadShader(
gl,
glslify('./shader.vert'),
glslify('./shader.frag')
);
My buddy created a nice utils object with some handy functions for this type of scenario. You would store your shaders in plain text files in a folder called "shaders":
filename : vertex.shader
attribute vec3 blah;
uniform mat4 uMVMatrix;
uniform mat4 uPMatrix;
uniform mat3 uNMatrix;
void main(void) {
magic goes here
}
filename : fragment.shader
#ifdef GL_ES
precision highp float;
#endif
varying vec4 vYadaYada;
uniform sampler2D uSampler;
void main(void) {
fragic magic goes here
}
And you simply call this to create a new program with these shader files:
var shaderProgram = utils.addShaderProg(gl, 'vertex.shader', 'fragment.shader');
And here is the sweet util object to handle biz:
utils = {};
utils.allShaders = {};
utils.SHADER_TYPE_FRAGMENT = "x-shader/x-fragment";
utils.SHADER_TYPE_VERTEX = "x-shader/x-vertex";
utils.addShaderProg = function (gl, vertex, fragment) {
utils.loadShader(vertex, utils.SHADER_TYPE_VERTEX);
utils.loadShader(fragment, utils.SHADER_TYPE_FRAGMENT);
var vertexShader = utils.getShader(gl, vertex);
var fragmentShader = utils.getShader(gl, fragment);
var prog = gl.createProgram();
gl.attachShader(prog, vertexShader);
gl.attachShader(prog, fragmentShader);
gl.linkProgram(prog);
if (!gl.getProgramParameter(prog, gl.LINK_STATUS)) {alert("Could not initialise main shaders");}
return prog;
};
utils.loadShader = function(file, type) {
var cache, shader;
$.ajax({
async: false, // need to wait... todo: deferred?
url: "shaders/" + file, //todo: use global config for shaders folder?
success: function(result) {
cache = {script: result, type: type};
}
});
// store in global cache
utils.allShaders[file] = cache;
};
utils.getShader = function (gl, id) {
//get the shader object from our main.shaders repository
var shaderObj = utils.allShaders[id];
var shaderScript = shaderObj.script;
var shaderType = shaderObj.type;
//create the right shader
var shader;
if (shaderType == "x-shader/x-fragment") {
shader = gl.createShader(gl.FRAGMENT_SHADER);
} else if (shaderType == "x-shader/x-vertex") {
shader = gl.createShader(gl.VERTEX_SHADER);
} else {
return null;
}
//wire up the shader and compile
gl.shaderSource(shader, shaderScript);
gl.compileShader(shader);
//if things didn't go so well alert
if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
alert(gl.getShaderInfoLog(shader));
return null;
}
//return the shader reference
return shader;
};//end:getShader
Thanks, buddy, for the sweet codeezy. Enjoy his contribution to the WebGL community; it makes program / shader management way easier.
I am using this:
https://www.npmjs.com/package/webpack-glsl-loader
It fits my priority of keeping syntax highlighting by using proper .glsl files instead of text fragments.
I'll report later how it works.
[edit Aug-17, 2015] This approach is working fine for me. It assumes webpack is in your build flow, but that's not such a bad thing.
[edit 11-June-2016] https://github.com/kulicuu/Spacewar_WebGL_React has a working example for importing glsl files through a Webpack build. The game itself should be developed over the coming week.
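For reference, the loader is wired in through the webpack config and the import then resolves to the GLSL source as a plain string; the rule below is a sketch from memory, so check the package README for the exact loader name and options:
// webpack.config.js (sketch)
module.exports = {
  module: {
    loaders: [
      { test: /\.glsl$/, loader: "webpack-glsl-loader" }
    ]
  }
};
// in application code (path is hypothetical):
var fragmentSource = require("./shaders/fragment.glsl");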
A nice way of doing it is through the browserify-shader extension to Browserify.
If you can use server-side scripting, you could write a small script that reads in the shader files and returns a JavaScript file with the scripts in a global object. That way you can include it using plain-old <script src="shader?prefix=foo"> and edit the scripts as .c files.
Something like this Ruby CGI script
require 'cgi'
require 'json'
cgi = CGI.new
prefix = File.expand_path(cgi["prefix"])
cwd = Dir.getwd + "/"
exit!(1) unless prefix.start_with?(cwd)
shader = prefix + ".c"
source = File.read(shader)
cgi.out("text/javascript") {
<<-EOF
if (typeof Shaders == 'undefined') Shaders = {};
Shaders[#{cgi["prefix"].to_json}] = #{source.to_json};
EOF
}
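On the client side the generated script just fills in a global Shaders object, so usage is a lookup followed by the normal compile step (the foo prefix is the same hypothetical one as above):
// after <script src="shader?prefix=foo"></script> has executed:
var fragmentSource = Shaders["foo"];   // raw GLSL text from foo.c
var shader = gl.createShader(gl.FRAGMENT_SHADER);
gl.shaderSource(shader, fragmentSource);
gl.compileShader(shader);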
You can place your shaders in different files, just like you put your JavaScript code in different files. This library https://github.com/codecruzer/webgl-shader-loader-js accomplishes that with a familiar syntax:
Example Usage (taken verbatim from above page):
[index.html]:
<script data-src="shaders/particles/vertex.js" data-name="particles"
type="x-shader/x-vertex"></script>
<script data-src="shaders/particles/fragment.js" data-name="particles"
type="x-shader/x-fragment"></script>
[example.js]:
SHADER_LOADER.load (
function (data)
{
var particlesVertexShader = data.particles.vertex;
var particlesFragmentShader = data.particles.fragment;
}
);
It's not an exact solution, but it works well for me.
I use Pug (formerly Jade) to compile the HTML, and I use includes inside the shader script tags:
script#vertexShader(type="x-shader/x-vertex")
include shader.vert
script#fragmentShader(type="x-shader/x-fragment")
include shader.frag
The result is the same, an HTML file with the code inline, but you can work on the shaders separately.
I've also been using Require.js to organize my files, but rather than use the text plugin, like @Vlr suggests, I have a script which takes the shaders and converts them into a Require.js module which I can then use elsewhere. So a shader file, simple.frag, like this:
uniform vec3 uColor;
void main() {
gl_FragColor = vec4(uColor, 1.0);
}
Will be converted into a file shader.js:
define( [], function() {
return {
fragment: {
simple: [
"uniform vec3 uColor;",
"void main() {",
" gl_FragColor = vec4(uColor, 1.0);",
"}",
].join("\n"),
},
}
} );
Which looks messy, but the idea isn't that it is human readable. Then if I want to use this shader someplace, I just pull in the shader module and access it using shader.fragment.simple, like so:
var simple = new THREE.ShaderMaterial( {
vertexShader: shader.vertex.simple,
fragmentShader: shader.fragment.simple
} );
I've written up a blog post with more details and links to demo code here: http://www.pheelicks.com/2013/12/webgl-working-with-glsl-source-files/
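The conversion script itself isn't shown there; a minimal Node sketch of the idea (file names and paths are placeholders) could look like this:
// build-shaders.js (sketch): wrap a GLSL file in a Require.js module
var fs = require("fs");
var source = fs.readFileSync("shaders/simple.frag", "utf8");
var body = source.split("\n").map(function (line) {
  return "      " + JSON.stringify(line) + ",";
}).join("\n");
var moduleText =
  "define( [], function() {\n" +
  "  return {\n" +
  "    fragment: {\n" +
  "      simple: [\n" + body + "\n      ].join(\"\\n\"),\n" +
  "    },\n" +
  "  };\n" +
  "} );\n";
fs.writeFileSync("js/shader.js", moduleText);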
Might not be the best way, but I'm using PHP. I put the shaders in a separate file and then you just use:
<?php include('shaders.html'); ?>
works great for me.
Use the C preprocessor's #include and gcc -E (the -E flag runs the preprocessor without the compiler).
Add this to your js file:
const shader = `
#include "shader.fg"
`
and use shell after:
mv main.js main.c
gcc -E --no-warnings main.c | sed '/^#.*/d' > main.js
sed here just deletes the extra line markers generated by the preprocessor
It works! ;)
You can use JSONP as an alternative to XMLHttpRequest, which is fine for local file system based loading.
Your shader files can be named anything you like (for automatic syntax highlighting)
vertex-shader.glsl:
JSONP('vertex-shader',`
attribute vec4 a_position;
void main() {
gl_Position = a_position;
}
`)
Then we have our index.html file which includes the 2 script files
index.html
<canvas id="c"></canvas>
<script src="vertex-shader.glsl"></script>
<script src="main.js"></script>
Inside main.js you include the JSONP function, which handles the shader files. (Note that JSONP must already be defined when the shader scripts execute, so either load main.js first or define the function in a small inline script above them.)
var shaders = {}
function JSONP(name,contents) {
shaders[name] = contents
}
// ... main webGL handler code
You could dynamically insert new script tags if you wanted to load new shaders in.
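A rough sketch of that (the helper name is made up), relying on the fact that the shader's JSONP call has already run by the time the script's load event fires:
function loadShaderScript(url, onLoad) {
  var script = document.createElement("script");
  script.src = url;            // e.g. "fragment-shader.glsl"
  script.onload = onLoad;      // shaders[name] has been populated by now
  document.head.appendChild(script);
}
loadShaderScript("fragment-shader.glsl", function () {
  console.log(shaders["fragment-shader"]);
});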
I'm trying to figure out how to use debugPrintfEXT but with no luck.
First I've enabled the extension in my vertex shader
#version 450
#extension GL_EXT_debug_printf : enable
void main()
{
debugPrintfEXT("Test");
// ... some more stuff here ...
}
Then I specify the necessary extensions for the Vulkan instance
VkValidationFeatureEnableEXT enables[] = {VK_VALIDATION_FEATURE_ENABLE_DEBUG_PRINTF_EXT};
VkValidationFeaturesEXT features = {};
features.sType = VK_STRUCTURE_TYPE_VALIDATION_FEATURES_EXT;
features.enabledValidationFeatureCount = 1;
features.pEnabledValidationFeatures = enables;
VkInstanceCreateInfo info = {};
info.pNext = &features;
In the info.ppEnabledExtensionNames field I specified VK_EXT_validation_features and VK_EXT_debug_utils among other things.
When I run my app, I get the following logs
VUID_Undefined(ERROR / SPEC): msgNum: 2044605652 - Validation Error: [ VUID_Undefined ] Object 0: VK_NULL_HANDLE, type = VK_OBJECT_TYPE_DEVICE; | MessageID = 0x79de34d4 | vkCreateDebugUtilsMessengerEXT: value of pCreateInfo->pNext must be NULL. This error is based on the Valid Usage documentation for version 182 of the Vulkan header. It is possible that you are using a struct from a private extension or an extension that was added to a later version of the Vulkan header, in which case the use of pCreateInfo->pNext is undefined and may not work correctly with validation enabled
Objects: 1
[0] 0, type: 3, name: NULL
[Debug][Error][Validation]"Validation Error: [ VUID-VkShaderModuleCreateInfo-pCode-04147 ] Object 0: handle = 0x5651b647e828, type = VK_OBJECT_TYPE_DEVICE; | MessageID = 0x3d492883 | vkCreateShaderModule(): The SPIR-V Extension (SPV_KHR_non_semantic_info) was declared, but none of the requirements were met to use it. The Vulkan spec states: If pCode declares any of the SPIR-V extensions listed in the SPIR-V Environment appendix, one of the corresponding requirements must be satisfied (https://vulkan.lunarg.com/doc/view/1.2.182.0/linux/1.2-extensions/vkspec.html#VUID-VkShaderModuleCreateInfo-pCode-04147)"
What more should I do? And what does
one of the corresponding requirements must be satisfied
mean? Is there something that I'm missing?
Edit:
As suggested by Karl Schultz, it's necessary to add VK_KHR_shader_non_semantic_info to ppEnabledExtensionNames when creating the device (it is a device extension, not an instance extension).
Also, make sure to set log level to INFO with VK_DEBUG_UTILS_MESSAGE_SEVERITY_INFO_BIT_EXT in VkDebugUtilsMessengerCreateInfoEXT::messageSeverity. By default all the output produced by debugPrintfEXT has INFO level.
You may also see
Printf message was truncated, likely due to a buffer size that was too small for the message
if you spawn too many threads, each printing its own long logs.
You also need to enable the VK_KHR_shader_non_semantic_info device extension in your application code when you create the device.
LunarG has also recently published a white paper about debugPrintfEXT.
My Express app uses one path ("/") to display a huge page which was built using several includes. To avoid confusion, I would like to test each and every EJS file on its own. But as these EJS includes are not reachable from the Express app, I have no clue how to perform that task.
- index.ejs
- mainMenu.ejs
- systemsTable.ejs
- systemRow.ejs
- systemStatusIcon.ejs
- systemName.ejs
- ....
The more complex the index.ejs file gets the more I would like to test its parts. But how can I test the result of systemStatusIcon.ejs?
Thanks
Without any knowledge of what your code looks like, I suggest setting up a configuration JSON file of test data objects, each containing:
the EJS file path to be tested
render input data
an array of regular expressions to test against the rendered output
Then your test code can iterate over these objects and render and test each in turn (a sketch of that loop follows the renderFile example below).
Example Template Rendering
A simple piece of code to render a template looks like this:
const ejs = require('ejs');
let template = `
value is: <%- value %>
`;
const renderData = {
value: 123
};
const output = ejs.render(template, renderData);
console.log(output);
With output:
> node index.js
value is: 123
What renderData will be required is dependent on your existing templates. Template errors will be thrown as standard JS errors.
You could also use the renderFile function to do the loading for you.
ejs.renderFile(filename, data, options, function(err, str){
// str => Rendered HTML string
});
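Putting it together, the config-driven loop described above could look roughly like this (file names, render data, and patterns are placeholders):
const ejs = require("ejs");
// hypothetical test configuration
const cases = [
  {
    file: "views/systemStatusIcon.ejs",
    data: { status: "ok" },
    expect: [/status-icon/, /ok/]
  }
];
cases.forEach((c) => {
  ejs.renderFile(c.file, c.data, {}, (err, html) => {
    if (err) throw err;
    c.expect.forEach((re) => {
      if (!re.test(html)) {
        console.error(`${c.file} did not match ${re}`);
      }
    });
  });
});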
EJS docs can be found at: https://ejs.co/
In summary
File.writeFile() creates a PNG file of 0 bytes when trying to write a Blob made from base64 data.
In my application, I am trying to create a file that consists of base64 data stored in the db. The rendered equivalent of the data is a small anti-aliased graph curve in black on a transparent background (never more than 300 x 320 pixels) that has previously been created and stored from a canvas element. I have independently verified that the stored base64 data is indeed correct by rendering it with one of the various base64 encoders/decoders available online.
Output from "Ionic Info"
--------------------------------
Your system information:
Cordova CLI: 6.3.1
Gulp version: CLI version 3.9.1
Gulp local:
Ionic Framework Version: 2.0.0-rc.2
Ionic CLI Version: 2.1.1
Ionic App Lib Version: 2.1.1
Ionic App Scripts Version: 0.0.39
OS:
Node Version: v6.7.0
--------------------------------
The development platform is Windows 10, and I've been testing directly on a Samsung Galaxy S7 and S4 so far.
I know that the base64 data has to be converted into binary data (as a Blob) first, as File does not yet support writing base64 directly into an image file. I found various techniques with which to do this, and the code which seems to suit my needs the most (and reflects a similar way to how I would have done it in Java) is illustrated below:
Main code from constructor:
this.platform.ready().then(() => {
this.graphDataService.getDataItem(this.job.id).then((data) =>{
console.log("getpic:");
let imgWithMeta = data.split(",")
// base64 data
let imgData = imgWithMeta[1].trim();
// content type
let imgType = imgWithMeta[0].trim().split(";")[0].split(":")[1];
console.log("imgData:",imgData);
console.log("imgMeta:",imgType);
console.log("aftergetpic:");
// this.fs is correctly set to cordova.file.externalDataDirectory
let folderpath = this.fs;
let filename = "dotd_test.png";
File.resolveLocalFilesystemUrl(this.fs).then( (dirEntry) => {
console.log("resolved dir with:", dirEntry);
this.savebase64AsImageFile(dirEntry.nativeURL,filename,imgData,imgType);
});
});
});
Helper to convert base64 to Blob:
// convert base64 to Blob
b64toBlob(b64Data, contentType, sliceSize) {
//console.log("data packet:",b64Data);
//console.log("content type:",contentType);
//console.log("slice size:",sliceSize);
let byteCharacters = atob(b64Data);
let byteArrays = [];
for (let offset = 0; offset < byteCharacters.length; offset += sliceSize) {
let slice = byteCharacters.slice(offset, offset + sliceSize);
let byteNumbers = new Array(slice.length);
for (let i = 0; i < slice.length; i++) {
byteNumbers[i] = slice.charCodeAt(i);
}
let byteArray = new Uint8Array(byteNumbers);
byteArrays.push(byteArray);
}
console.log("size of bytearray before blobbing:", byteArrays.length);
console.log("blob content type:", contentType);
let blob = new Blob(byteArrays, {type: contentType});
// alternative way WITHOUT chunking the base64 data
// let blob = new Blob([atob(b64Data)], {type: contentType});
return blob;
}
save the image with File.writeFile()
// save the image with File.writeFile()
savebase64AsImageFile(folderpath,filename,content,contentType){
// Convert the base64 string in a Blob
let data:Blob = this.b64toBlob(content,contentType,512);
console.log("file location attempt is:",folderpath + filename);
File.writeFile(
folderpath,
filename,
data,
{replace: true}
).then(
_ => console.log("write complete")
).catch(
err => console.log("file create failed:", err)
);
}
I have tried dozens of different decoding techniques, but the effect is the same. However, if I hardcode simple text data into the writeFile() section, like so:
File.writeFile(
folderpath,
"test.txt",
"the quick brown fox jumps over the lazy dog",
{replace: true}
)
A text file IS created correctly in the expected location with the text string above in it.
However, I've noticed that whether the file is the 0 bytes PNG, or the working text file above, in both cases the ".then()" consequence clause of the File Promise never fires.
Additionally, I swapped the above method and used the Ionic 2 native Base64-To-Gallery library to create the images, which worked without a problem. However, having the images in the user's picture gallery or camera roll is not an option for me as I do not wish to risk a user's own pictures while marshalling / packing / transmitting / deleting the data-rendered images. The images should be created and managed as part of the app.
User marcus-robinson seems to have experienced a similar issue outlined here, but it was across all file types, and not just binary types as seems to be the case here. Also, the issue seems to have been closed:
https://github.com/driftyco/ionic/issues/5638
Anybody experiencing something similar, or possibly spot some error I might have caused? I've tried dozens of alternatives but none seem to work.
I had similar behaviour saving media files, which worked perfectly on iOS. Nonetheless, I had the issue of 0-byte file creation on some Android devices in release builds (dev builds worked perfectly). After a very long search, I followed the solution below.
I moved the polyfills.js script tag to the top of index.html in the Ionic project, before the cordova.js tag. With this re-ordering, the issue is somehow resolved.
So the order should look like:
<script src="build/polyfills.js"></script>
<script type="text/javascript" src="cordova.js"></script>
Works on ionic 3 and ionic 4.
The credits go to 1
I got that working with most of your code:
this.file.writeFile(this.file.cacheDirectory, "currentCached.jpeg", this.b64toBlob(src, "image/jpg", 512) ,{replace: true})
The only difference I had was:
let byteCharacters = atob(b64Data.replace(/^data:image\/(png|jpeg|jpg);base64,/, ''));
instead of your
let byteCharacters = atob(b64Data);
Note: I did not do any other trimming, etc., like those techniques you used in your constructor.
I have a Sitecore 7.1 solution using MVC and am rendering fields using @Html.Sitecore().Field("FieldName", ContentItem). Because I want multiline text fields to render out <br/> tags, I have removed the GetTextFieldValue processor from the renderField processor section in web.config using an App_Config\Include patch file, as described here: http://laubplusco.net/sitecore-update-bummer/. What I'm finding is that if I use Sitecore.Web.UI.WebControls.FieldRenderer.Render() this produces the output with the line breaks as expected, but if I use the Html.Sitecore().Field() extension method no line break is rendered.
I found that you can write
@Html.Sitecore().Field("FieldName", item, new { linebreaks = "<br/>" })
which seems to do the job.
Is there some other config which needs to be set to make the Field() extension method behave in the same way as FieldRenderer.Render, or do I just have to use the method above?
The implementation of field rendering seems to be different for MVC and WebForms. You can check the code of Sitecore.Web.UI.WebControls.FieldRenderer.RenderField() in .NET Reflector. It adds line breaks for multi-line fields to the RenderFieldArgs, before calling the render field pipeline:
if (item.Fields[this.FieldName].TypeKey == "multi-line text")
{
args.RenderParameters["linebreaks"] = "<br/>";
}
In @Html.Sitecore().Field(...) the rendering pipeline is called in a similar way, but the rendering arguments are set up differently, and "linebreaks" is not added. To make the behavior the same for every rendering you could add your own processor with the same logic as the code above from Sitecore.Web.UI.WebControls.FieldRenderer.RenderField(). Something like this:
public void Process(RenderFieldArgs args)
{
Assert.ArgumentNotNull((object)args, "args");
if (args.FieldTypeKey == "multi-line text")
args.RenderParameters["linebreaks"] = "<br/>";
}
and add this to your render field pipeline with an include file.
My Blade layout template doesn't render when the URL is a subfolder.
I made a test example to check:
URL/tests is okay
but
URL/tests/1/edit loses the outer layout template and only renders the content.
TestController:
class TestController extends AdminController {
protected $layout = 'layouts.admin';
public function index()
{
// load the view
$this->layout->content=View::make('tests.index');
}
public function edit($id)
{
//
$course=Course::find($id);
return View::make('tests.edit')->with(array('course'=>$course));
}
}
layout admin.blade.php
<html><body>
{{ $content }}
</body>
</html>
tests/index.blade.php
hello
/tests renders the full layout HTML source and works fine with proper site examples
tests/edit.blade.php
edit
/tests/1/edit renders with NO layout HTML
There are various ways of using Blade, but I thought the easiest was the protected $layout property; however, there seem to be issues?
Any help appreciated.
In the edit method instead of
return View::make('tests.edit')->with(array('course'=>$course));
use:
$this->layout->content= View::make('tests.edit')->with(array('course'=>$course));
In your AdminController, which is the base controller of your TestController, add the layout settings. Put this code in your AdminController:
protected $layout = 'layouts.master';
protected function setupLayout()
{
if ( ! is_null($this->layout))
{
$this->layout = View::make($this->layout);
}
}
Now you can use any view with the layout, using something like this:
$this->layout->content = View::make('tests.edit')->with(array('course'=>$course));
Here, tests.edit means that the edit.blade.php file (or edit.php if not using Blade) is in the app/views/tests/ directory.
In your index method you have used:
$this->layout->content=View::make('tests.index');
So the layout showed up because you set data on the layout, but in the other method you didn't, so the layout is not rendered; it returns only the view, as given below:
return View::make('tests.edit')->with(array('course'=>$course));
So, set up the layout in the base controller class so you don't have to set it up in every controller, and always set data on the layout's content variable using this:
$this->layout->content = 'your data';