Can someone provide an example of how to use X.shaders in XTK?
I need to use custom shaders to apply a texture and a color with an alpha component to the vertices.
After reading the code: at the moment, you cannot. Initially the shader class was not meant to be public, but I submitted an issue to Haehn asking to export it as public so that you can instantiate a shader yourself. In addition, it needs two setters for the fragment and vertex sources, and the check that every attribute/uniform is used in the shader sources needs to be removed.
Note that with the current code you cannot add parameters to your shaders (there should be enough built-in ones for any use; you can see them here, under "attributes" and "uniforms").
To use it after that, I'd say something like:
var r = new X.renderer3D(); // create a renderer
r.init(); // initialize it
var sh = new X.shaders(); // create a new pair of shaders
/* here, use the future setters to set the sources from a string or a file */
r.addShaders(sh); // this sets the shaders for the renderer and tries to compile them
// DO NOT call init again, or it will erase the current shaders and replace them with the default ones
/*
Any code to fill the scene, etc.
*/
r.render();
But this has to wait for the three changes I mentioned at the beginning of this post. I'm waiting for news from Haehn.
@Ricola3D is right.
The discussion regarding this is here:
https://github.com/xtk/X/issues/69
But if you want an alpha channel for vertices, you can use X.object.opacity.
I have an OpenGL project that has previously used OpenGL 3.0-based methods for drawing arrays and I'm trying to convert it to use newer methods (at least available as of OpenGL 4.3). However, so far I have not been able to make it work.
The piece of code I'll use for explanation creates groupings of points and draws lines between them. It can also fill the resulting polygon (using triangles). I'll give an example using the point-drawing routines. Here's the pseudo-code for how it used to work:
[Original] When points are first generated (happens once):
// ORIGINAL, WORKING CODE RUN ONCE DURING SETUP:
// NOTE: This code is in c# and uses a library called SharpGL
// to invoke OpenGL calls via a context herein called "gl"
float[] xyzArray = [CODE NOT SHOWN -- GENERATES ARRAY OF POINTS]
// Create a buffer for the vertex data
// METHOD CODE NOT SHOWN, but uses glGenBuffers to fill class-member buffer IDs
GenerateBuffers(gl); // Note: we now have a VerticesBufferId
// Set the vertex buffer as the current buffer and fill it
gl.BindBuffer(OpenGL.GL_ARRAY_BUFFER, this.VerticesBufferId);
gl.BufferData(OpenGL.GL_ARRAY_BUFFER, xyzArray, OpenGL.GL_STATIC_DRAW);
[Original] Within the loop that does the drawing:
// ORIGINAL, WORKING CODE EXECUTED DURING EACH DRAW LOOP:
// NOTE: This code is in c# and uses a library called SharpGL
// to invoke OpenGL calls via a context herein called "gl"
// Note: positionAttributeId (below) was derived from the active
// shader program via glGetAttribLocation
gl.BindBuffer(OpenGL.GL_ARRAY_BUFFER, this.VerticesBufferId);
gl.EnableVertexAttribArray(positionAttributeId);
gl.VertexAttribPointer(positionAttributeId, 3, OpenGL.GL_FLOAT, false, 0, IntPtr.Zero);
// Missing code sets some uniforms in the shader program and determines
// the start (iStart) and length (pointCount) of points to draw
gl.DrawArrays(OpenGL.GL_LINE_STRIP, iStart, pointCount);
That code has worked for quite a while now, but I'm trying to move to more modern techniques. Most of my code didn't change at all, and this is the new code that replaced the above:
[After Mods] When points are first generated (happens once):
// MODIFIED CODE RUN ONCE DURING SETUP:
// NOTE: This code is in c# and uses a library called SharpGL
// to invoke OpenGL calls via a context herein called "gl"
float[] xyzArray = [CODE NOT SHOWN -- GENERATES ARRAY OF POINTS]
// Create a buffer for the vertex data
// METHOD CODE NOT SHOWN, but uses glGenBuffers to fill class-member buffer IDs
GenerateBuffers(gl); // Note: we now have a VerticesBufferId.
// Set the vertex buffer as the current buffer
gl.BindBuffer(OpenGL.GL_ARRAY_BUFFER, this.VerticesBufferId);
gl.BufferData(OpenGL.GL_ARRAY_BUFFER, xyzArray, OpenGL.GL_STATIC_DRAW);
// ^^^ ALL CODE ABOVE THIS LINE IS IDENTICAL TO THE ORIGINAL ^^^
// Generate Vertex Arrays
// METHOD CODE NOT SHOWN, but uses glGenVertexArrays to fill class-member array IDs
GenerateVertexArrays(gl); // Note: we now have a PointsArrayId
// My understanding: I'm telling OpenGL to associate following calls
// with the vertex array generated with the ID PointsArrayId...
gl.BindVertexArray(PointsArrayId);
// Here I associate the positionAttributeId (found from the shader program)
// with the currently bound vertex array (right?)
gl.EnableVertexAttribArray(positionAttributeId);
// Here I tell the bound vertex array about the format of the position
// attribute as it relates to that array -- I think.
gl.VertexAttribFormat(positionAttributeId, 3, OpenGL.GL_FLOAT, false, 0);
// As I understand it, I can define my own "local" buffer index
// in the following calls (?). Below I use 0, which I then bind
// to the buffer with id = this.VerticesBufferId (for the purposes
// of the currently bound vertex array)
gl.VertexAttribBinding(positionAttributeId, 0);
gl.BindVertexBuffer(0, this.VerticesBufferId, IntPtr.Zero, 0);
gl.BindVertexArray(0); // we no longer want to be bound to PointsArrayId
[After Mods] Within the loop that does the drawing:
// MODIFIED CODE EXECUTED DURING EACH DRAW LOOP:
// NOTE: This code is in c# and uses a library called SharpGL
// to invoke OpenGL calls via a context herein called "gl"
// Here I tell OpenGL to bind the VertexArray I established previously
// (which should understand how to fill the position attribute in the
// shader program using the "zeroth" buffer index tied to the
// VerticesBufferId data buffer -- because I went through all the trouble
// of telling it that above, right?)
gl.BindVertexArray(this.PointsArrayId);
// \/ \/ \/ NOTE: NO CODE CHANGES IN THE CODE BELOW THIS LINE \/ \/ \/
// Missing code sets some uniforms in the shader program and determines
// the start (iStart) and length (pointCount) of points to draw
gl.DrawArrays(OpenGL.GL_LINE_STRIP, iStart, pointCount);
After the modifications, the routines draw nothing to the screen. There are no exceptions thrown or indications (that I can tell) of a problem executing the commands. It just leaves a blank screen.
General questions:
Does my conversion to the newer vertex array methods look correct? Do you see any errors in the logic?
Am I supposed to make specific glEnable calls for this method to work, as opposed to the old method?
Can I mix and match between the two methods to fill attributes in the same shader program? (E.g., in addition to the above, I fill out triangle data and use it with the same shader program. If I haven't switched that process to the new method, will that cause a problem?)
If there is anything else I'm missing here, I'd really appreciate it if you'd let me know.
A little more sleuthing and I figured out my error:
When using glVertexAttribPointer, you can set the stride parameter to 0 (zero) and OpenGL will compute the stride itself from the size and type arguments. With the newer separated-format API, however, the stride is supplied to glBindVertexBuffer, and a 0 there literally means a stride of zero bytes, so you must set it yourself (glVertexAttribFormat itself takes only a relative offset, not a stride).
Once I manually set the stride value (3 * sizeof(float) for my xyz points) in the glBindVertexBuffer call, everything worked as expected.
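For reference, here is a minimal sketch of the corrected setup in raw OpenGL C++ rather than SharpGL (vao, vbo, and positionAttrib are placeholder names; in the real code they come from glGenVertexArrays, glGenBuffers, and glGetAttribLocation as shown in the question):
GLuint vao = 0, vbo = 0;
GLuint positionAttrib = 0; // placeholder; query with glGetAttribLocation
float xyzArray[] = { 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 0.0f };
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(xyzArray), xyzArray, GL_STATIC_DRAW);
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glEnableVertexAttribArray(positionAttrib);
// per-vertex layout: 3 floats at relative offset 0 within the binding
glVertexAttribFormat(positionAttrib, 3, GL_FLOAT, GL_FALSE, 0);
// route the attribute through buffer binding point 0
glVertexAttribBinding(positionAttrib, 0);
// the fix: pass the real stride (12 bytes per xyz vertex), not 0
glBindVertexBuffer(0, vbo, 0, 3 * sizeof(float));
glBindVertexArray(0);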
I'm finishing up my second semester of C++ programming and have wanted to spice up my output. I'm fairly familiar with C++ but very new to oF. I've been following along with the tutorials in the oF book from the site and am on the Shaders chapter working with textures: http://openframeworks.cc/ofBook/chapters/shaders.html#addingtextures
In this section, I'm getting an error (I'm using Visual Studio): class "ofTexture" has no member "getTextureReference".
#include "ofApp.h"
void ofApp::setup() {
// setup
plane.mapTexCoordsFromTexture(img.getTextureReference());
}
void ofApp::draw() {
// bind our texture. in our shader this will now be tex0 by default
// so we can just go ahead and access it there.
img.getTextureReference().bind();
// start our shader, in our OpenGL3 shader this will automagically set
// up a lot of matrices that we want for figuring out the texture matrix
// and the modelView matrix
shader.begin();
// get mouse position relative to center of screen
float mousePosition = ofMap(mouseX, 0, ofGetWidth(), plane.getWidth(), -plane.getWidth(), true);
shader.setUniform1f("mouseX", mousePosition);
ofPushMatrix();
ofTranslate(ofGetWidth()/2, ofGetHeight()/2);
plane.draw();
ofPopMatrix();
shader.end();
img.getTextureReference().unbind();
}
I opened up the ofTexture.h and .cpp files and, sure enough, there's no member called getTextureReference. I've browsed the oF site and forum, looked through Stack Exchange, and did a Google search, but I'm not getting a clear picture of what this call is supposed to do, so I can't tell whether there's a workaround or another function I should be calling.
Has ofTexture::getTextureReference been replaced with something else? Thanks!
If I interpret the openFrameworks source correctly, you can call bind() directly on your ofTexture.
If it's an instance of ofVideoGrabber (or, as in your code, an ofImage), you need to call getTexture() first.
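For example, assuming img in the question is an ofImage (which provides getTexture() in oF 0.9 and later, where getTextureReference() was removed), the binding code would become something like this sketch:
void ofApp::setup() {
    // getTexture() replaces the removed getTextureReference()
    plane.mapTexCoordsFromTexture(img.getTexture());
}
void ofApp::draw() {
    img.getTexture().bind();
    shader.begin();
    // ... same uniform and drawing code as in the question ...
    shader.end();
    img.getTexture().unbind();
}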
I'm learning how to render objects with libGDX. I have one square model, from which I create a few model instances. If I have only one instance, it renders fine.
But with more instances it doesn't render properly. It looks like the front objects are drawn first and the background ones last, so the background objects are always visible and you can see through the front objects.
To render I use the following
Gdx.gl.glViewport(0,0,Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
Gdx.gl20.glClearColor(1f, 1f, 1f, 1f);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
mb.begin(cam);
worldManager.render(mb, environment);
mb.end();
The mb variable is the ModelBatch instance, and inside worldManager.render each model instance is drawn as follows:
mb.render(model, environment);
I'm not sure what is happening, but I think there is some GL attribute I need to enable.
This is not 100% related to the following post because, yes, it uses OpenGL like libGDX does, but the solution provided in that post is not working, and I think the problem comes from libGDX's ModelBatch.
Reproduction of the problem
You didn't set up your camera correctly. First of all, your camera's near plane is 0f, which means it is infinitely small. Set it to a value of at least 1f. Secondly, you set the camera to look at its own position, which is impossible (you can't look into your own eyes, can you? ;)).
So it would look something like:
camera = new PerspectiveCamera(90, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
camera.position.set(0, 10, 0);
camera.lookAt(0,0,0);
camera.near = 1f;
camera.far = 100f;
camera.update();
You probably want to start here: https://xoppa.github.io/blog/basic-3d-using-libgdx/
For more information on how the camera works have a look at: http://www.badlogicgames.com/wordpress/?p=1550
Btw, calling Gdx.gl20.glEnable(GL20.GL_DEPTH_TEST); will not help at that location and should definitely not be done when mixed with ModelBatch, because ModelBatch manages its own render context. See the documentation for more information: https://github.com/libgdx/libgdx/wiki/ModelBatch
There are a lot of possible answers, but I would say that
glEnable(GL_DEPTH_TEST);
could help if you haven't done it yet. Also, enabling the depth test only works if you actually have a depth buffer, which means you must make sure you have one, and the method for this depends on your windowing context.
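As an illustration only (the question uses libGDX, which exposes this through its application configuration, but the idea is the same with any windowing toolkit): with plain SDL2 you would request a depth buffer before creating the GL context, something like:
// ask for a 24-bit depth buffer before the context exists;
// without one, glEnable(GL_DEPTH_TEST) has no effect
SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 24);
SDL_Window *win = SDL_CreateWindow("demo",
    SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
    800, 600, SDL_WINDOW_OPENGL);
SDL_GLContext ctx = SDL_GL_CreateContext(win);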
To show the difference in depth between objects, you can also use fog.
When my program starts, it must display a circle on a background. I also need to control all displayed circles. I use the classes VertexController and Vertex for that purpose. In Vertex I have this constructor:
Vertex::Vertex(const ci::Vec2f & CurrentLoc){
vColor = Color(Rand::randFloat(123.0f),Rand::randFloat(123.0f),Rand::randFloat(123.0f));
vRadius = Rand::randFloat(23.0f);
vLoc = CurrentLoc;
}
and in VertexController I have:
VertexController::VertexController()
{
Vertex CenterVertex = Vertex(getWindowCenter());
CenterVertex.draw(); // member function that draws a solid circle with a random color
}
and in the setup() method I wrote:
void TutorialApp::setup(){
gl::clear(Color(255,204,0));
mVertexController=VertexController();
}
Unfortunately, this didn't work; I see only the background.
So the main question: in a CINDER_APP_BASIC app, is drawing only possible directly in draw(), update(), and setup()? If yes, please advise a solution; otherwise, tell me where my mistake is.
This line of code does not make sense to me:
mVertexController=VertexController();
Anyway, you should use the draw() function only for drawing circles to the window. This is why, by default, there is gl::clear(Color(0,0,0)); to clear the background and start drawing the new frame from scratch (this is how drawing works in OpenGL, which Cinder uses by default).
I suggest using a std::vector container to store all circles (this way you can add and remove circles on the fly with little effort), adding the first one in the VertexController constructor, and writing a separate function VertexController::draw() that draws all circles in a for loop, as sketched below.
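A minimal sketch of that idea, reusing the names from the question (Vertex is as you wrote it; VertexController::draw() is the new part and must be called from TutorialApp::draw() every frame):
class VertexController {
public:
    VertexController() {
        // add the first circle at the window center
        mVertices.push_back(Vertex(getWindowCenter()));
    }
    void addVertex(const ci::Vec2f &loc) {
        mVertices.push_back(Vertex(loc));
    }
    void draw() {
        // redraw every stored circle, once per frame
        for (size_t i = 0; i < mVertices.size(); ++i)
            mVertices[i].draw();
    }
private:
    std::vector<Vertex> mVertices;
};
void TutorialApp::draw() {
    gl::clear(Color(255, 204, 0)); // clear, then redraw everything
    mVertexController.draw();
}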
Sorry, I'm a bit new to SDL and C++ development. Right now I've created a tile mapper that reads from my map.txt file. So far it works, but now I want to add editing the map.
SDL_Texture *texture;
texture = IMG_LoadTexture(G_Renderer, "assets/tile_1.png");
SDL_RenderCopy(G_Renderer, texture, NULL, &destination);
SDL_RenderPresent(G_Renderer);
The above is the basic way I'm showing my tiles, but when I go in and change the texture in real time, it's kind of buggy and doesn't work well. Is there a method that is best for editing a texture? Thanks for the help; I appreciate everything.
The most basic way is to set up a storage container with some textures which you will use repeatedly; for example a vector or dictionary/map. Using the map approach for example you could do something like:
// remember to #include <map> and <string>
std::map<std::string, SDL_Texture*> myTextures; // IMG_LoadTexture returns a pointer
// assign using array-like notation:
myTextures["texture1"] = IMG_LoadTexture(G_Renderer,"assets/tile_1.png");
myTextures["texture2"] = IMG_LoadTexture(G_Renderer,"assets/tile_2.png");
myTextures["texture3"] = IMG_LoadTexture(G_Renderer,"assets/tile_3.png");
myTextures["texture4"] = IMG_LoadTexture(G_Renderer,"assets/tile_4.png");
then to utilise a different texture, all you have to do is use something along the lines of:
SDL_RenderCopy(G_Renderer, myTextures["texture1"], NULL, &destination);
SDL_RenderPresent(G_Renderer);
which can be further controlled by changing the first line to
SDL_RenderCopy(G_Renderer, myTextures[textureName], NULL, &destination);
where textureName is a string variable which you can alter in code in realtime.
This approach means you can load all the textures you will need beforehand and simply use them as needed later, meaning there's no loading from the file system whilst rendering. :)
There is a nice explanation of map here.
Hopefully this gives you a nudge in the right direction. Let me know if you need more info:)
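One small addition from me, since the map now stores raw SDL_Texture pointers: remember to free them when the program shuts down, for example:
// free every preloaded texture before destroying the renderer
for (std::map<std::string, SDL_Texture*>::iterator it = myTextures.begin();
     it != myTextures.end(); ++it) {
    SDL_DestroyTexture(it->second);
}
myTextures.clear();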