I've been following a tutorial on creating textures with cocos2d v2, but I am now using v3 (I have just started learning). I ran into a few issues where v2 code doesn't work in v3, but I managed to get around them. I'm now stuck on one last issue:
CCTexParams tp = {GL_LINEAR, GL_LINEAR, GL_REPEAT, GL_REPEAT};
Error: Use of undeclared identifier 'CCTexParams'
Another comment suggested:
Texture2D::TexParams tp = {GL_LINEAR, GL_LINEAR, GL_REPEAT, GL_REPEAT};
but this doesn't work either (or I'm doing it wrong). Can anyone show me what I need to do?
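For reference: assuming you're on cocos2d-x v3 (the C++ branch), TexParams is nested inside Texture2D and is applied through setTexParameters(). A minimal sketch (the sprite variable and a power-of-two texture are assumptions):

```cpp
#include "cocos2d.h"
USING_NS_CC;

// Sketch, assuming cocos2d-x v3 (C++ branch). The field order is
// minFilter, magFilter, wrapS, wrapT. Note that GL_REPEAT only works
// on power-of-two textures.
void makeRepeating(Sprite *sprite) {
    Texture2D::TexParams tp = {GL_LINEAR, GL_LINEAR, GL_REPEAT, GL_REPEAT};
    sprite->getTexture()->setTexParameters(tp);
}
```

If you're on the Objective-C cocos2d 3.x branch instead, the C++ Texture2D API won't exist; look at its CCTexture class instead.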
I am working on a project with OpenCV, C++, and Kinect v1, and I am using the third dimension in my work. The problem is that the Kinect needs to be calibrated, and I haven't found a simple way to do it (especially since I don't have much time).
Is there any simple way to calibrate the Kinect? I really need it desperately.
PS: I already tried this code: https://github.com/devkicks/OpenCVKinect — it works just fine, but when I use the OpenCV function findContours() I get errors with vectors.
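For what it's worth, vector errors with findContours are commonly a container type mismatch: the OpenCV C++ API expects the contours output as std::vector<std::vector<cv::Point>> and the hierarchy as std::vector<cv::Vec4i>, and the input must be a single-channel 8-bit image. A hedged sketch of the usual call shape, assuming OpenCV's C++ interface:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Sketch only: findContours modifies its input, hence the clone().
void contoursOf(const cv::Mat &mask8u) {
    std::vector<std::vector<cv::Point>> contours; // exact type matters
    std::vector<cv::Vec4i> hierarchy;
    cv::findContours(mask8u.clone(), contours, hierarchy,
                     cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
}
```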
I'm new to Kinect. I'm working on an Augmented Reality project in Visual Studio using C++.
I'm looking for a way to get all the point coordinates (X, Y, Z) through the Kinect, but I'm having a hard time figuring it out. Can anyone help me with it?
How do I get the points from the depth stream?
I'm using a Kinect v1 for Xbox 360.
Assuming Kinect v2: the simplest way of doing this is to use ICoordinateMapper::MapDepthFrameToCameraSpace (see the documentation).
IKinectSensor *sensor;
UINT16 *depthPoints;
...
// initialize the sensor and grab a depth frame into depthPoints
...
const int nPixels = 512 * 424; // Kinect v2 depth resolution
std::vector<CameraSpacePoint> cameraPoints(nPixels);
ICoordinateMapper *cm = nullptr;
if (SUCCEEDED(sensor->get_CoordinateMapper(&cm)) && cm != nullptr) {
    cm->MapDepthFrameToCameraSpace(nPixels, depthPoints, nPixels, &cameraPoints[0]);
    cm->Release();
}
// use the XYZ values in cameraPoints
Note that this only works when you have a live connection to the device. If you have previously saved raw depth frames to disk and read them back, e.g. on another computer, then look at the ICoordinateMapper functions GetDepthCameraIntrinsics or GetDepthFrameToCameraSpaceTable, and save that data along with the frames to allow offline conversion to XYZ.
Edit:
For easily getting started:
Kinect v1: try using Kinect Common Bridge v1 and the CoordinateMapper::MapDepthFrameToSkeletonFrame method.
Kinect v2: try using Kinect Common Bridge v2 and the KCBMapDepthFrameToCameraSpace function.
I'm trying to tessellate a simple triangle using the Golang OpenGL bindings.
The library doesn't claim support for tessellation shaders, but I looked through the source code, and adding the correct bindings didn't seem terribly tricky. So I branched it and tried adding the correct constants in gl_defs.go.
The bindings still compile just fine, and so does my program; it's when I actually try to use the new bindings that things go strange. The program goes from displaying a nicely circling triangle to a black screen whenever I include the tessellation shaders.
I'm following along with the OpenGL SuperBible (6th edition) and using its shaders for this project, so I don't imagine I'm using broken shaders (they don't produce an error log, anyway). But in case the shaders themselves are at fault, they can be found in the setupProgram() function here.
I'm pretty sure my graphics card supports tessellation, because printing the OpenGL version returns 4.4.0 NVIDIA 331.38.
So my questions:
Is there any reason adding Go bindings for tessellation wouldn't work? The bindings seem quite straightforward.
Am I adding the new bindings incorrectly?
If it should work, why is it not working for me?
What am I doing wrong here?
Steps that might be worth taking:
Your driver and video card may support tessellation shaders, but the GL context that your binding returns might be for an earlier version of OpenGL. Try glGetString(GL_VERSION) and see what you get.
Are you calling glGetError basically everywhere and actually checking its values? Does this binding provide error return values? If so, are you checking those?
Hey, I'm using GLM (Nate Robins' OBJ loader) with SFML and OpenGL on MinGW32, with Code::Blocks as the IDE (Windows).
I'm loading my texture with the GLM texture loader from: http://www.3dcodingtutorial.com/Working-with-3D-models/Getting-GLM.html
I managed to get rid of the color problem by changing my code to better load the textures, but now the texture doesn't display at all...
Here's the NEW link to my main: http://pastebin.com/gasu1Hux
I have been looking up GLM tutorials, but I can't find any answers about why my texture isn't displaying.
Maybe I'm missing something?
/////////////////////OLD/////////////////////////////
I also tried the loader from devernay.free.fr, but I always get a texture error
(not going to post the error, because every time I do, my question gets voted down...).
I had a small glitch where my whole model was blue instead of the default gray.
I found out that the GLM library I have doesn't load textures by itself,
so I managed to find a texture loader from 3dcodingtutorial.com.
When I load the texture, it's not applied to the model; it just changes the model's color.
Right now I'm wondering why my model is one single color instead of the texture I set up.
Here's some of the code that I used to create the texture and draw the model:
OK, here's the main.cpp:
(Sorry, wrong paste ._. The paste has been updated!)
http://pastebin.com/tcwwasb9
The default GL_TEXTURE_ENV_MODE is GL_MODULATE. Make sure you aren't inadvertently setting your color state somewhere, or force the issue with glColor3ub(255,255,255) before you render something with a texture.
EDIT: GL_DECAL is also an option.
As I briefly explained in the title, my problem concerns texturing a Collada export in Papervision.
Basically, I was exporting Collada models from Cinema 4D with their UV maps. I was able to see everything, but the texture was not displaying properly (hidden polygons).
So I decided to try 3ds Max. I used the same code to display the texture:
var materials:MaterialsList = new MaterialsList();
var torusMaterial:BitmapFileMaterial = new BitmapFileMaterial("model/tex.png");
torusMaterial.precise = true;
materials.addMaterial(torusMaterial, "ID1");
Again, I can see every element, but this time my model uses only one pixel of my texture. So if I use a red texture and color only the bottom-left pixel green, my whole model will be green.
Any advice about how to properly wrap the texture around a 3ds export model?
Thank you.
The Autodesk Collada exporter that ships with 3ds Max is problematic and produces .dae output that Papervision doesn't expect. This will be an even worse problem when you get to exporting animation. Try the OpenCollada exporter: http://www.opencollada.org/download.html
Many people have had a lot more luck using it with Papervision3D. Unfortunately, it's not yet available for 3ds Max 2012, so you might be stuck if you can't find an older version. Or maybe you can get the source and compile it against 2012? Let the project maintainers know if you do.