Error: class "ofTexture" has no member "getTextureReference" - opengl

I'm finishing up my second semester of C++ programming and have wanted to spice up my output. I'm fairly familiar with C++ but very new to oF. I've been following along with the tutorials in the oF book from the site and am on the Shaders chapter working with textures: http://openframeworks.cc/ofBook/chapters/shaders.html#addingtextures
In this section, I'm getting an error (I'm using Visual Studio): class "ofTexture" has no member "getTextureReference".
#include "ofApp.h"
void ofApp::setup() {
    // setup
    plane.mapTexCoordsFromTexture(img.getTextureReference());
}
void ofApp::draw() {
    // bind our texture. in our shader this will now be tex0 by default
    // so we can just go ahead and access it there.
    img.getTextureReference().bind();
    // start our shader, in our OpenGL3 shader this will automagically set
    // up a lot of matrices that we want for figuring out the texture matrix
    // and the modelView matrix
    shader.begin();
    // get mouse position relative to center of screen
    float mousePosition = ofMap(mouseX, 0, ofGetWidth(), plane.getWidth(), -plane.getWidth(), true);
    shader.setUniform1f("mouseX", mousePosition);
    ofPushMatrix();
    ofTranslate(ofGetWidth()/2, ofGetHeight()/2);
    plane.draw();
    ofPopMatrix();
    shader.end();
    img.getTextureReference().unbind();
}
I opened up the ofTexture.h and .cpp files, and sure enough there's no member called getTextureReference. I've browsed through the oF site and forum, looked through Stack Exchange, and did a Google search, but I'm not getting a clear picture of what this call is supposed to do, so I can't tell whether there's a workaround or another function I should be calling.
Has ofTexture::getTextureReference been replaced with something else? Thanks!

If I interpret the openFrameworks source correctly, you can call bind() directly on your ofTexture.
If it's an instance of ofVideoGrabber, you need to call getTexture().
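For what it's worth, getTextureReference() was deprecated around openFrameworks 0.9 and later removed in favour of getTexture(), which is why the tutorial code no longer compiles. A minimal sketch of how the setup/draw code from the question might look on a current oF version (the image file name and plane dimensions are just placeholders):

#include "ofApp.h"

void ofApp::setup() {
    img.load("texture.jpg");      // placeholder image file
    plane.set(640, 480, 10, 10);  // placeholder plane size/resolution
    // shader.load(...) as in the tutorial
    // getTexture() replaces the old getTextureReference()
    plane.mapTexCoordsFromTexture(img.getTexture());
}

void ofApp::draw() {
    img.getTexture().bind();      // or simply img.bind(), which binds the same texture
    shader.begin();
    float mousePosition = ofMap(mouseX, 0, ofGetWidth(), plane.getWidth(), -plane.getWidth(), true);
    shader.setUniform1f("mouseX", mousePosition);
    ofPushMatrix();
    ofTranslate(ofGetWidth()/2, ofGetHeight()/2);
    plane.draw();
    ofPopMatrix();
    shader.end();
    img.getTexture().unbind();
}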

Related

How do you convert an OpenGL project from older glVertexAttribPointer methods to newer glVertexAttribBinding methods?

I have an OpenGL project that has previously used OpenGL 3.0-based methods for drawing arrays and I'm trying to convert it to use newer methods (at least available as of OpenGL 4.3). However, so far I have not been able to make it work.
The piece of code I'll use for explanation creates groupings of points and draws lines between them. It can also fill the resulting polygon (using triangles). I'll give an example using the point-drawing routines. Here's the pseudo-code for how it used to work:
[Original] When points are first generated (happens once):
// ORIGINAL, WORKING CODE RUN ONCE DURING SETUP:
// NOTE: This code is in c# and uses a library called SharpGL
// to invoke OpenGL calls via a context herein called "gl"
float[] xyzArray = [CODE NOT SHOWN -- GENERATES ARRAY OF POINTS]
// Create a buffer for the vertex data
// METHOD CODE NOT SHOWN, but uses glGenBuffers to fill class-member buffer IDs
GenerateBuffers(gl); // Note: we now have a VerticesBufferId
// Set the vertex buffer as the current buffer and fill it
gl.BindBuffer(OpenGL.GL_ARRAY_BUFFER, this.VerticesBufferId);
gl.BufferData(OpenGL.GL_ARRAY_BUFFER, xyzArray, OpenGL.GL_STATIC_DRAW);
[Original] Within the loop that does the drawing:
// ORIGINAL, WORKING CODE EXECUTED DURING EACH DRAW LOOP:
// NOTE: This code is in c# and uses a library called SharpGL
// to invoke OpenGL calls via a context herein called "gl"
// Note: positionAttributeId (below) was derived from the active
// shader program via glGetAttribLocation
gl.BindBuffer(OpenGL.GL_ARRAY_BUFFER, this.VerticesBufferId);
gl.EnableVertexAttribArray(positionAttributeId);
gl.VertexAttribPointer(positionAttributeId, 3, OpenGL.GL_FLOAT, false, 0, IntPtr.Zero);
// Missing code sets some uniforms in the shader program and determines
// the start (iStart) and length (pointCount) of points to draw
gl.DrawArrays(OpenGL.GL_LINE_STRIP, iStart, pointCount);
That code has worked for quite a while now, but I'm trying to move to more modern techniques. Most of my code didn't change at all, and this is the new code that replaced the above:
[After Mods] When points are first generated (happens once):
// MODIFIED CODE RUN ONCE DURING SETUP:
// NOTE: This code is in c# and uses a library called SharpGL
// to invoke OpenGL calls via a context herein called "gl"
float[] xyzArray = [CODE NOT SHOWN -- GENERATES ARRAY OF POINTS]
// Create a buffer for the vertex data
// METHOD CODE NOT SHOWN, but uses glGenBuffers to fill class-member buffer IDs
GenerateBuffers(gl); // Note: we now have a VerticesBufferId.
// Set the vertex buffer as the current buffer
gl.BindBuffer(OpenGL.GL_ARRAY_BUFFER, this.VerticesBufferId);
gl.BufferData(OpenGL.GL_ARRAY_BUFFER, xyzArray, OpenGL.GL_STATIC_DRAW);
// ^^^ ALL CODE ABOVE THIS LINE IS IDENTICAL TO THE ORIGINAL ^^^
// Generate Vertex Arrays
// METHOD CODE NOT SHOWN, but uses glGenVertexArrays to fill class-member array IDs
GenerateVertexArrays(gl); // Note: we now have a PointsArrayId
// My understanding: I'm telling OpenGL to associate following calls
// with the vertex array generated with the ID PointsArrayId...
gl.BindVertexArray(PointsArrayId);
// Here I associate the positionAttributeId (found from the shader program)
// with the currently bound vertex array (right?)
gl.EnableVertexAttribArray(positionAttributeId);
// Here I tell the bound vertex array about the format of the position
// attribute as it relates to that array -- I think.
gl.VertexAttribFormat(positionAttributeId, 3, OpenGL.GL_FLOAT, false, 0);
// As I understand it, I can define my own "local" buffer index
// in the following calls (?). Below I use 0, which I then bind
// to the buffer with id = this.VerticesBufferId (for the purposes
// of the currently bound vertex array)
gl.VertexAttribBinding(positionAttributeId, 0);
gl.BindVertexBuffer(0, this.VerticesBufferId, IntPtr.Zero, 0);
gl.BindVertexArray(0); // we no longer want to be bound to PointsArrayId
[After Mods] Within the loop that does the drawing:
// MODIFIED CODE EXECUTED DURING EACH DRAW LOOP:
// NOTE: This code is in c# and uses a library called SharpGL
// to invoke OpenGL calls via a context herein called "gl"
// Here I tell OpenGL to bind the VertexArray I established previously
// (which should understand how to fill the position attribute in the
// shader program using the "zeroth" buffer index tied to the
// VerticesBufferId data buffer -- because I went through all the trouble
// of telling it that above, right?)
gl.BindVertexArray(this.PointsArrayId);
// \/ \/ \/ NOTE: NO CODE CHANGES IN THE CODE BELOW THIS LINE \/ \/ \/
// Missing code sets some uniforms in the shader program and determines
// the start (iStart) and length (pointCount) of points to draw
gl.DrawArrays(OpenGL.GL_LINE_STRIP, iStart, pointCount);
After the modifications, the routines draw nothing to the screen. There are no exceptions thrown or indications (that I can tell) of a problem executing the commands. It just leaves a blank screen.
General questions:
Does my conversion to the newer vertex array methods look correct? Do you see any errors in the logic?
Am I supposed to make specific glEnable calls to get this method to work, as opposed to the old method?
Can I mix and match between the two methods to fill attributes in the same shader program? (e.g., in addition to the above, I fill out triangle data and use it with the same shader program. If I haven't switched that process to the new method, will that cause a problem?)
If there is anything else I'm missing here, I'd really appreciate it if you'd let me know.
A little more sleuthing and I figured out my error:
When using glVertexAttribPointer, you can set the stride parameter to 0 (zero) and OpenGL will treat the data as tightly packed; however, with the newer glVertexAttribFormat/glBindVertexBuffer path, you must supply the stride yourself (it is the last parameter of glBindVertexBuffer).
Once I manually set the stride value, everything worked as expected.
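In plain OpenGL terms (a sketch rather than the exact SharpGL calls, assuming tightly packed 3-float positions), the setup with an explicit stride would look something like this:

// Sketch in raw OpenGL (not SharpGL). Assumes "vbo" already holds the xyz data
// (uploaded with glBufferData) and "positionAttrib" came from glGetAttribLocation.
GLuint vao = 0;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

glEnableVertexAttribArray(positionAttrib);
// Attribute format: 3 floats per vertex, not normalized, relative offset 0
glVertexAttribFormat(positionAttrib, 3, GL_FLOAT, GL_FALSE, 0);
// Route the attribute through binding index 0
glVertexAttribBinding(positionAttrib, 0);
// Attach the buffer to binding index 0. The stride must be given explicitly here;
// unlike glVertexAttribPointer, a stride of 0 does not mean "tightly packed".
glBindVertexBuffer(0, vbo, 0, 3 * sizeof(float));

glBindVertexArray(0);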

(UE4) How to properly render after transforming the bones of UPoseableMeshComponent

I am trying to transform bones within UE4 (4.25) using UPoseableMeshComponent. (image of initial state)
However, after I transform the bones using SetBoneTransformByName, the rendering gets into a weird state. What is shown below is not motion blur; it is just the pose after SetBoneTransformByName has been applied (image after transform: blurred rendering), although Unlit rendering looks just fine.
After I call AActor::SetActorHiddenInGame(true) to set invisible, and then AActor::SetActorHiddenInGame(false) to show the actor again, the rendering will be fixed. (Image after hide/show)
The code is purely in C++ (no Blueprints). I first create a custom Character with a SkeletalMesh and add a UPoseableMeshComponent in code, something like the following:
void AMyCharacter::CreatePoseableMesh() {
    USkeletalMeshComponent* skeletalMesh = GetMesh();
    UPoseableMeshComponent* poseMesh =
        NewObject<UPoseableMeshComponent>(this, UPoseableMeshComponent::StaticClass());
    if (poseMesh) {
        poseMesh->RegisterComponent();
        poseMesh->SetWorldLocation(location);
        poseMesh->SetWorldRotation(rotation);
        poseMesh->AttachToComponent(GetRootComponent(), FAttachmentTransformRules::KeepRelativeTransform);
        poseMesh->SetSkeletalMesh(skeletalMesh->SkeletalMesh);
        poseMesh->SetVisibility(true);
        skeletalMesh->SetVisibility(false);
    }
}
Is there something missing that I need to set in UPoseableMeshComponent?
I might be wrong, but I think this is because setting bone transform manually doesn't write to the velocity buffer, and temporal AA doesn't know that something moved, causing ugly blur.
If you switch to FXAA and the problem disappears - here's your hint.
There is a material node called Previous Frame Switch - you can control the velocity buffer through it using a custom parameter.
Self-solved (sort of). I tried with Blueprints first, and even there I needed to call SetVisibility(false) and then SetVisibility(true) on the PoseableMeshComponent to get it to render properly. Maybe a minor bug within UE4.
TMap<FString, FTransform> const& transforms; // given: map of bone name to its transform
poseMesh->SetVisibility(false); // PoseableMeshComponent: hide once
for (auto& x : transforms) {
    poseMesh->SetBoneTransformByName(FName(*x.Key), x.Value, EBoneSpaces::WorldSpace);
}
poseMesh->SetVisibility(true); // show it again
This seems to be the workaround for now.

Lib Cinder method setup{} in CINDER_APP_BASIC

When my program starts, it must display a circle on a background. I also must control all displayed circles. I use the classes VertexController and Vertex for that purpose. In Vertex I have the constructor:
Vertex::Vertex(const ci::Vec2f & CurrentLoc){
    vColor = Color(Rand::randFloat(123.0f), Rand::randFloat(123.0f), Rand::randFloat(123.0f));
    vRadius = Rand::randFloat(23.0f);
    vLoc = CurrentLoc;
}
and in VertexController I have
VertexController::VertexController()
{
    Vertex CenterVertex = Vertex(getWindowCenter());
    CenterVertex.draw(); // the draw() member function draws a solid circle with a random color
}
and in the setup{} method I wrote
void TutorialApp::setup(){
    gl::clear(Color(255,204,0));
    mVertexController = VertexController();
}
Unfortunately, my way didn't work. I see only the background.
So the main question: in CINDER_APP_BASIC, is drawing possible only directly inside draw{}, update{}, and setup{}? If yes, please advise a solution; otherwise, tell me where my mistake is.
This line of code does not make any sense to me:
mVertexController=VertexController();
Anyway, you should use the draw() function just for drawing circles to the window. That is why, by default, there is gl::clear(Color(0,0,0)); to clear the background and start drawing a new frame from scratch (this is how drawing in OpenGL, which Cinder uses by default, works).
I suggest using a vector container for storing all circles (this way you can add and remove circles on the fly with some effort), adding the first one in the VertexController constructor, and making a separate function VertexController::draw() that draws all the circles in a for loop.
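A minimal sketch of that structure (assuming the Vertex class from the question, with a draw() member that draws one solid circle; the addVertex helper is just for illustration):

#include <vector>

class VertexController {
public:
    VertexController() {
        // add the first circle at the window center
        mVertices.push_back(Vertex(ci::app::getWindowCenter()));
    }
    void addVertex(const ci::Vec2f &loc) { mVertices.push_back(Vertex(loc)); }
    void draw() {
        for (auto &v : mVertices)
            v.draw(); // each Vertex draws its own solid circle
    }
private:
    std::vector<Vertex> mVertices;
};

// Drawing then happens every frame from the app's draw() override, not from setup():
void TutorialApp::draw() {
    gl::clear(Color(255, 204, 0)); // clear the background each frame
    mVertexController.draw();      // then draw all circles on top
}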

Need XTK example on how to use X.shaders

Can someone provide an example of how to use X.shaders in XTK?
I need to use custom shaders to apply a texture and a color with an alpha component to the vertices.
After reading the code: for the moment you cannot. Initially the shader class was not meant to be public, but I submitted an issue to Haehn to export it as public so you can instantiate a shader. In addition, it needs two setters for the fragment and vertex sources, and the check that all attributes/uniforms are used in the shader sources needs to be removed.
Notice that with the current code you cannot add parameters to your shaders (by the way, there should be enough for any use; you can see them here in the "attributes" and "uniforms").
To use it, after that, I'd say:
var r = new X.renderer3D(); //create a renderer
r.init(); // initialize it
var sh = new X.shaders(); // create a new pair of shaders
/* here use the future setters to set the sources from a string or a file */
r.addShaders(sh); // this sets the shaders for the renderer and tries to compile them
// DO NOT call init anymore, or it would erase the current shaders and replace them with the default ones
/*
Any code to fill the scene, etc...
*/
r.render();
But it needs to wait for the three changes I mentioned at the beginning of this post. I'm waiting for news from Haehn.
@Ricola3D is right.
The discussion regarding this is here:
https://github.com/xtk/X/issues/69
But if you want an alpha channel for vertices, you can use X.object.opacity.

What is the best way to detect mouse-location/clicks on object in OpenGL?

I am creating a simple 2D OpenGL game, and I need to know when the player clicks or mouses over an OpenGL primitive. (For example, on a GL_QUADS that serves as one of the tiles...) There doesn't seem to be a simple way to do this beyond brute force or opengl.org's suggestion of using a unique color for every one of my primitives, which seems a little hacky. Am I missing something? Thanks...
My advice: don't use OpenGL's selection mode or OpenGL rendering (the brute-force method you are talking about); use a CPU-based ray-picking algorithm if you're in 3D. For 2D, as in your case, it should be straightforward: it's just a test of whether a 2D point is inside a 2D rectangle.
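For the 2D case, that test is only a few lines. A sketch (the names are illustrative, and the mouse position must first be converted into the same coordinate space as your tiles):

// Axis-aligned 2D hit test; Rect2D and the tile list are illustrative.
struct Rect2D {
    float x, y, width, height; // lower-left corner plus size
};

bool containsPoint(const Rect2D &r, float px, float py) {
    return px >= r.x && px <= r.x + r.width &&
           py >= r.y && py <= r.y + r.height;
}

// Usage: test each tile (or only nearby tiles if you keep a spatial index)
// for (std::size_t i = 0; i < tiles.size(); ++i)
//     if (containsPoint(tiles[i], worldMouseX, worldMouseY)) { /* tile i is under the mouse */ }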
I would suggest using the hacky method if you want a quick implementation (in terms of coding time), especially if you don't want to implement a quadtree with moving objects. If you are using OpenGL immediate mode, it should be straightforward:
// Rendering part
glClearColor(0,0,0,0);
glClear(GL_COLOR_BUFFER_BIT);
for(unsigned i=0; i<tileCount; ++i){
    unsigned tileId = i+1; // we increment the tile ID so we never pick up the black background
    glColor3ub(tileId & 0xFF, (tileId >> 8) & 0xFF, (tileId >> 16) & 0xFF);
    renderTileWithoutColorNorTextures(i);
}
// Let's retrieve the tile ID
// (note: mouseY may need to be flipped to windowHeight - mouseY, since OpenGL's origin is bottom-left)
unsigned tileId = 0;
glReadPixels(mouseX, mouseY, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE,
             (unsigned char *)&tileId);
if(tileId != 0){ // if we didn't pick the black background
    tileId--;
    // we picked the tile number tileId
}
// We don't want to show that to the user, so we clear the screen
glClearColor(...); // the color you want
glClear(GL_COLOR_BUFFER_BIT);
// Now, render your real scene
// ...
// And we swap
whateverSwapBuffers(); // might be glutSwapBuffers, glx, ...
You can use OpenGL's glRenderMode(GL_SELECT) mode. Here is some code that uses it, and it should be easy to follow (look for the _pick method)
(and here's the same code using GL_SELECT in C)
(There have been cases - in the past - of GL_SELECT being deliberately slowed down on 'non-workstation' cards in order to discourage CAD and modeling users from buying consumer 3D cards; that ought to be a bad habit of the past that ATI and NVidia have grown out of ;) )
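For completeness, the classic GL_SELECT flow looks roughly like this (a sketch of the legacy fixed-function approach; selection mode is not available in core profiles, and drawTile/tileCount are illustrative names):

GLuint selectBuf[64];
glSelectBuffer(64, selectBuf);        // buffer that will receive the hit records
glRenderMode(GL_SELECT);              // switch to selection mode

glInitNames();                        // reset the name stack
glPushName(0);                        // push a placeholder name

// Typically you also restrict the projection to a few pixels around the cursor
// with gluPickMatrix before drawing. Then draw each pickable object under a name:
for (unsigned id = 0; id < tileCount; ++id) {
    glLoadName(id);
    drawTile(id);                     // your existing draw call for tile "id"
}

GLint hits = glRenderMode(GL_RENDER); // back to normal rendering; returns the hit count
// Each hit record in selectBuf holds the name-stack depth, the min/max depth,
// and then the names of the objects under the cursor.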