I have created a plane3D like this:
core::vector3df vec(0.0f, 0.0f, 0.0f);
core::vector3df vecnorm(1.0f, 1.0f, 1.0f);
core::plane3d<f32> plan = core::plane3df(vec, vecnorm);
but I can't find out how to display it as a node.
Could you help, please?
In Irrlicht, plane3d is not renderable. To draw a plane, use createHillPlaneMesh from the geometry creator:
SceneManager->getGeometryCreator()->createHillPlaneMesh(....);
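For example, a minimal sketch of creating and showing a flat plane (the parameters follow the IGeometryCreator::createHillPlaneMesh signature as I remember it; double-check the header of your Irrlicht version):

scene::IMesh* planeMesh = SceneManager->getGeometryCreator()->createHillPlaneMesh(
    core::dimension2d<f32>(10.0f, 10.0f), // size of each tile
    core::dimension2d<u32>(8, 8),         // number of tiles
    0,                                    // default material
    0.0f,                                 // hill height: 0 gives a flat plane
    core::dimension2d<f32>(0.0f, 0.0f),   // no hills
    core::dimension2d<f32>(8.0f, 8.0f));  // texture repeat
scene::IMeshSceneNode* planeNode = SceneManager->addMeshSceneNode(planeMesh);
planeMesh->drop(); // the scene node keeps its own reference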
I was going through the Surface example here.
When the user clicks anywhere, it draws a point:
(screenshot: the surface with the drawn point)
What I'd like to know is how to do this programmatically: if a user gives three coordinates x, y, and z, how do I go about plotting such a point?
You can add a custom item like so:
QImage color = QImage(2, 2, QImage::Format_RGB32);
color.fill(Qt::cyan);
QVector3D positionOne = QVector3D(2.0f, 2.0f, 0.0f);
// QCustom3DItem(meshFile, position, scaling, rotation, texture);
// the zero scaling passed here is overridden by setScaling() below
QCustom3DItem* item = new QCustom3DItem(":/items/monkey.obj", positionOne,
                                        QVector3D(0.0f, 0.0f, 0.0f),
                                        QQuaternion::fromAxisAndAngle(0.0f, 0.0f, 0.0f, 45.0f),
                                        color);
item->setScaling(QVector3D(0.1f, 0.1f, 0.1f));
m_graph->addCustomItem(item);
Note that the .obj file must be a mesh file; you can generate one with Blender, for example. Just don't forget to triangulate the mesh before you export the .obj file.
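If you want to plot a user-supplied point directly, you can wrap the same calls in a small helper. This is only a sketch: addMarker is a name I made up, m_graph is assumed to be your Q3DSurface, and the position is interpreted in data coordinates because a QCustom3DItem's positionAbsolute property is false by default:

void addMarker(float x, float y, float z)
{
    QImage color(2, 2, QImage::Format_RGB32);
    color.fill(Qt::cyan);
    QCustom3DItem* item = new QCustom3DItem(":/items/monkey.obj",
                                            QVector3D(x, y, z), // the user's coordinates
                                            QVector3D(0.1f, 0.1f, 0.1f),
                                            QQuaternion(),      // no rotation
                                            color);
    m_graph->addCustomItem(item);
}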
I'm writing a small 2D game engine (for educational purposes) in C++ and OpenGL 3.3. While writing the code I noticed that almost all sprites (if not all) use the same vertexBuffer values:
const float vertexBuffer[] =
{
    -1.0f, -1.0f, 0.0f, 1.0f,
    -1.0f,  1.0f, 0.0f, 1.0f,
     1.0f,  1.0f, 0.0f, 1.0f,
     1.0f, -1.0f, 0.0f, 1.0f
};
That is two triangles (when using indexed drawing) in model space that form a square; the indexBuffer goes like:
const unsigned short indexBuffer[] = { 0, 1, 2, 2, 0, 3 };
Why am I using the same model-space values for all my sprites? Well, I use a different MVP matrix for each of them:
P (projection): the orthographic camera transform, usually with the same width and height as the GL context.
V (view): a lookAt transformation; it just sits on the z axis looking perpendicularly at the xy plane. This is also used to move the camera (follow the player, etc.).
M (model): this matrix is created using transformations belonging to each sprite:
glm::mat4 model = <translate> * <rotate> * <scale>
Where:
<translate> is the position of the sprite in screen-space
<rotate> is the rotation of the sprite
<scale> is the size of the sprite in pixels divided by 2. Why? Each corner of the model-space square corresponds to a vertex, and the square they form is centered at the origin; so if our sprite is 250x250 pixels, we scale by 125 px along each axis, transforming our model-space square into a screen-space square.
So, if I have 5 sprites I'll call glDrawElements 5 times, with a different MVP and texture each time, but the same vertexBuffer, indexBuffer and uvCoordinates (roughly the loop sketched below).
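A minimal GLM sketch of what I mean; the Sprite struct, mvpLocation, screenW/screenH and the camera variables are illustrative names, not actual engine code:

glm::mat4 P = glm::ortho(0.0f, screenW, 0.0f, screenH, -1.0f, 1.0f);
glm::mat4 V = glm::lookAt(glm::vec3(camX, camY, 1.0f),  // sitting on the z axis
                          glm::vec3(camX, camY, 0.0f),  // looking at the xy plane
                          glm::vec3(0.0f, 1.0f, 0.0f));
for (const Sprite& s : sprites)
{
    glm::mat4 M = glm::translate(glm::mat4(1.0f), glm::vec3(s.pos, 0.0f))              // <translate>
                * glm::rotate(glm::mat4(1.0f), s.angle, glm::vec3(0.0f, 0.0f, 1.0f))   // <rotate>
                * glm::scale(glm::mat4(1.0f), glm::vec3(s.w * 0.5f, s.h * 0.5f, 1.0f)); // <scale>
    glm::mat4 MVP = P * V * M;
    glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, &MVP[0][0]);
    glBindTexture(GL_TEXTURE_2D, s.texture);
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, nullptr);
}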
Do you think this is an error-prone approach to keep using in the future? Or should I instead apply the <translate> and <scale> transformations directly to the vertices when creating them, and leave the model matrix with only the rotation?
CGRect targetRect = CGRect.make(target.getPosition().x - (target.getContentSize().width),
                                target.getPosition().y - (target.getContentSize().height),
                                target.getContentSize().width,
                                target.getContentSize().height);
target is a sprite ... and I want to draw a rectangle on the border of the sprite.
I tried to do this in draw(GL10 gl), but I can't figure out how to call it.
If someone has an idea how to do this, please help me out. Thanks in advance.
public void draw(GL10 gl) {
    // draw in blue with a thick line
    gl.glColor4f(0.0f, 0.0f, 1.0f, 1.0f);
    gl.glLineWidth(4);
    CCDrawingPrimitives.ccDrawCircle(gl, centerAnchor, 20 * scaleX,
            ccMacros.CC_DEGREES_TO_RADIANS(90), 50, true);
    CCDrawingPrimitives.ccDrawCircle(gl,
            CGPoint.make((handposition.x - 40f) * scaleX, (handposition.y + 10f) * scaleY),
            45 * scaleX, ccMacros.CC_DEGREES_TO_RADIANS(90), 50, true);
    CCDrawingPrimitives.ccDrawPoint(gl, centerAnchor);
    // restore default state
    gl.glLineWidth(1);
    gl.glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
    gl.glPointSize(1);
}
Similarly, you can draw a rectangle using the ccDrawRect() method:
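For example, a sketch of the rectangle around your sprite (this assumes ccDrawRect takes the two opposite corners and that the sprite's anchor point is its center; check CCDrawingPrimitives in your cocos2d-android version for the exact signature):

CGPoint pos = target.getPosition();
CGSize size = target.getContentSize();
CGPoint bottomLeft = CGPoint.make(pos.x - size.width / 2, pos.y - size.height / 2);
CGPoint topRight = CGPoint.make(pos.x + size.width / 2, pos.y + size.height / 2);
CCDrawingPrimitives.ccDrawRect(gl, bottomLeft, topRight);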
I need to add a wall to my scene graph, and have it work so that I cannot move past the wall with my camera. I am creating a laboratory scene, but I am new to 3D programming in general. I have been working with the book OpenSceneGraph 3.0 Beginner's Guide, and so far, OK.
I have a few pieces of furniture in my scene, but what I would like to do is add a wall beyond which my camera should not go. My code below, from the book (page 83), does not seem to do anything. I add it, I don't see a wall, and I am still able to move everywhere in the scene with my camera. How do I create a wall in my application?
osg::ref_ptr<osg::Group> root = new osg::Group();
//adding walls to the lab to make it more room like -- 7/6/12
osg::ref_ptr<osg::Vec3Array> vertices = new osg::Vec3Array;
vertices->push_back(osg::Vec3(0.0f, 0.0f, 0.0f));
vertices->push_back(osg::Vec3(0.0f, 0.0f, 1.0f));
vertices->push_back(osg::Vec3(1.0f, 0.0f, 0.0f));
vertices->push_back(osg::Vec3(1.0f, 0.0f, 1.5f));
vertices->push_back(osg::Vec3(2.0f, 0.0f, 0.0f));
vertices->push_back(osg::Vec3(2.0f, 0.0f, 1.0f));
vertices->push_back(osg::Vec3(3.0f, 0.0f, 0.0f));
vertices->push_back(osg::Vec3(3.0f, 0.0f, 1.5f));
vertices->push_back(osg::Vec3(4.0f, 0.0f, 0.0f));
vertices->push_back(osg::Vec3(4.0f, 0.0f, 1.0f));
osg::ref_ptr<osg::Vec3Array> normals = new osg::Vec3Array;
normals->push_back(osg::Vec3(0.0f, -1.0f, 0.0f));
osg::ref_ptr<osg::Geometry> geom = new osg::Geometry;
geom->setVertexArray(vertices.get());
geom->setNormalArray(normals.get());
geom->setNormalBinding(osg::Geometry::BIND_OVERALL);
geom->addPrimitiveSet(new osg::DrawArrays(GL_QUAD_STRIP, 0, 10));
osg::ref_ptr<osg::Geode> wall = new osg::Geode;
wall->addDrawable(geom.get());
root->addChild(wall);
osgViewer::Viewer viewer;
viewer.setSceneData(root.get());
viewer.run();
You are already drawing the "wall" as coded above. It looks a little more like a fence than a wall, but you can easily fix that by moving the 1.0 values up to 1.5 to match the others. You might not be seeing it with the rest of your scene because of differences of scale, for example if the dimensions of your furniture are in the hundreds. Replace your root->addChild(wall) with this code:
// assumes your furniture is already added to root
float scale=root->getBound().radius();
osg::ref_ptr<osg::PositionAttitudeTransform> pax = new osg::PositionAttitudeTransform;
pax->addChild(wall);
pax->setScale(osg::Vec3d(scale,scale,scale));
root->addChild(pax);
Then you will see your fence. Move the position/rotation of pax to place your wall. As I mentioned in the comments, you'll then have to use some intersection code to tell the camera where to stop; a sketch follows.
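A rough sketch of that test using osgUtil::LineSegmentIntersector (eye and newEye stand in for the current and proposed camera positions from your camera manipulator; this is an outline, not drop-in code):

#include <osgUtil/LineSegmentIntersector>
#include <osgUtil/IntersectionVisitor>

osg::ref_ptr<osgUtil::LineSegmentIntersector> segment =
    new osgUtil::LineSegmentIntersector(eye, newEye);
osgUtil::IntersectionVisitor iv(segment.get());
root->accept(iv);
if (segment->containsIntersections())
{
    // the path from the old to the proposed eye position crosses the wall
    // (or other geometry), so reject or clamp the camera move
}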
So I have begun learning OpenGL, reading from the book "OpenGL SuperBible, 5th edition". It explains things really well, and I have been able to create my first GL program myself! Just something simple: a rotating 3D pyramid.
Now, for some reason, one of the faces is not rendering. I checked the vertices (plotted them on paper first) and they seemed to be right. I found that if I changed the batch to draw a line loop, it would render; however, it would not render as a triangle. Can anyone explain why?
void setupRC()
{
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    shaderManager.InitializeStockShaders();
    M3DVector3f vVerts1[] = { -0.5f, 0.0f, -0.5f,  0.0f, 0.5f, 0.0f,   0.5f, 0.0f, -0.5f };
    M3DVector3f vVerts2[] = { -0.5f, 0.0f, -0.5f,  0.0f, 0.5f, 0.0f,  -0.5f, 0.0f,  0.5f };
    M3DVector3f vVerts3[] = { -0.5f, 0.0f,  0.5f,  0.0f, 0.5f, 0.0f,   0.5f, 0.0f,  0.5f };
    M3DVector3f vVerts4[] = {  0.5f, 0.0f,  0.5f,  0.0f, 0.5f, 0.0f,   0.5f, 0.0f, -0.5f };
    // drawn as a line loop while debugging; as GL_TRIANGLES this face did not render
    triangleBatch1.Begin(GL_LINE_LOOP, 3);
    triangleBatch1.CopyVertexData3f(vVerts1);
    triangleBatch1.End();
    triangleBatch2.Begin(GL_TRIANGLES, 3);
    triangleBatch2.CopyVertexData3f(vVerts2);
    triangleBatch2.End();
    triangleBatch3.Begin(GL_TRIANGLES, 3);
    triangleBatch3.CopyVertexData3f(vVerts3);
    triangleBatch3.End();
    triangleBatch4.Begin(GL_TRIANGLES, 3);
    triangleBatch4.CopyVertexData3f(vVerts4);
    triangleBatch4.End();
    glEnable(GL_CULL_FACE);
}
float rot = 1;

void renderScene()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
    GLfloat vRed[]   = {1.0f, 0.0f, 0.0f, 0.5f};
    GLfloat vGreen[] = {0.0f, 1.0f, 0.0f, 0.5f};
    GLfloat vBlue[]  = {0.0f, 0.0f, 1.0f, 0.5f};
    GLfloat vWhite[] = {1.0f, 1.0f, 1.0f, 0.5f};
    M3DMatrix44f transformMatrix;
    // advance the rotation one degree per frame, wrapping at 360
    if (rot >= 360)
        rot = 0;
    else
        rot = rot + 1;
    m3dRotationMatrix44(transformMatrix, m3dDegToRad(rot), 0.0f, 1.0f, 0.0f);
    shaderManager.UseStockShader(GLT_SHADER_FLAT, transformMatrix, vRed);
    triangleBatch1.Draw();
    shaderManager.UseStockShader(GLT_SHADER_FLAT, transformMatrix, vGreen);
    triangleBatch2.Draw();
    shaderManager.UseStockShader(GLT_SHADER_FLAT, transformMatrix, vBlue);
    triangleBatch3.Draw();
    shaderManager.UseStockShader(GLT_SHADER_FLAT, transformMatrix, vWhite);
    triangleBatch4.Draw();
    glutSwapBuffers();
    glutPostRedisplay();
    Sleep(10);
}
You've most likely defined the vertices in clockwise order for the triangle that isn't showing, and in counterclockwise order (normally the default) for those that are. Clockwise winding essentially creates an inward facing normal and thus OpenGL won't bother to render it when culling is enabled.
The easiest way to check this is to call glCullFace(GL_FRONT); that should toggle things so you see the missing triangle and no longer see the other three.
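For example (standard OpenGL calls; the reversed array at the end is illustrative, apply it to whichever face turns out to be wrong):

// diagnostic: cull front faces instead of back faces; the missing
// triangle should appear and the other three should disappear
glCullFace(GL_FRONT);

// fix: reverse the vertex order of the offending face so it winds
// counterclockwise, e.g. if it were vVerts2, swap the first and last vertices:
M3DVector3f vVerts2[] = { -0.5f, 0.0f,  0.5f,
                           0.0f, 0.5f,  0.0f,
                          -0.5f, 0.0f, -0.5f };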
The only thing I see here that affects polygons is glEnable(GL_CULL_FACE);.
You shouldn't have that, because if you wind your vertices backwards, the polygon won't render.
Remove it, or explicitly call glDisable(GL_CULL_FACE); to be sure.
In your case, it's unlikely that you want to draw a polygon that can only be seen from one side.