OpenSceneGraph - How to add a Wall or 3 - c++

I need to add a wall to my scene graph and have it work so that I cannot move past the wall with my camera. I am creating a laboratory scene, but I am new to 3D programming in general. I have been working through the book OpenSceneGraph 3.0 Beginner's Guide, and so far it has gone OK.
I have a few pieces of furniture in my scene, and what I would like to do is add a wall beyond which my camera cannot go. My code below, taken from the book (page 83), does not seem to do anything: I add it, but I don't see a wall, and I am still able to move everywhere in the scene with my camera. How do I create a wall in my application?
osg::ref_ptr<osg::Group> root = new osg::Group();
//adding walls to the lab to make it more room like -- 7/6/12
osg::ref_ptr<osg::Vec3Array> vertices = new osg::Vec3Array;
vertices->push_back(osg::Vec3(0.0f, 0.0f, 0.0f));
vertices->push_back(osg::Vec3(0.0f, 0.0f, 1.0f));
vertices->push_back(osg::Vec3(1.0f, 0.0f, 0.0f));
vertices->push_back(osg::Vec3(1.0f, 0.0f, 1.5f));
vertices->push_back(osg::Vec3(2.0f, 0.0f, 0.0f));
vertices->push_back(osg::Vec3(2.0f, 0.0f, 1.0f));
vertices->push_back(osg::Vec3(3.0f, 0.0f, 0.0f));
vertices->push_back(osg::Vec3(3.0f, 0.0f, 1.5f));
vertices->push_back(osg::Vec3(4.0f, 0.0f, 0.0f));
vertices->push_back(osg::Vec3(4.0f, 0.0f, 1.0f));
osg::ref_ptr<osg::Vec3Array> normals = new osg::Vec3Array;
normals->push_back(osg::Vec3(0.0f, -1.0f, 0.0f));
osg::ref_ptr<osg::Geometry> geom = new osg::Geometry;
geom->setVertexArray(vertices.get());
geom->setNormalArray(normals.get());
geom->setNormalBinding(osg::Geometry::BIND_OVERALL);
geom->addPrimitiveSet(new osg::DrawArrays(GL_QUAD_STRIP, 0, 10));
osg::ref_ptr<osg::Geode> wall = new osg::Geode;
wall->addDrawable(geom.get());
root->addChild(wall);
osgViewer::Viewer viewer;
viewer.setSceneData(root.get());
viewer.run();

You are already drawing the "wall" as coded above - it looks a little more like a fence than a wall, but you can easily fix that by raising the 1.0 z-values to 1.5 to match the others. You might not be seeing it alongside the rest of your scene because of a difference in scale - if the dimensions of your furniture are in the 100s, for example. Replace your root->addChild(wall) with this code:
// assumes your furniture is already added to root
float scale=root->getBound().radius();
osg::ref_ptr<osg::PositionAttitudeTransform> pax = new osg::PositionAttitudeTransform;
pax->addChild(wall);
pax->setScale(osg::Vec3d(scale,scale,scale));
root->addChild(pax);
Then you will see your fence. Adjust the position/attitude of pax to place your wall. As I mentioned in the comments, you'll then have to use some intersection code to tell the camera where to stop.
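A minimal sketch of that idea: each frame, cast a short line segment from the camera's current eye position toward the position it wants to move to, and reject the move if the segment hits anything. The helper name blockedByWall and the eye-position arguments are placeholders for whatever your camera manipulator provides:
#include <osgUtil/LineSegmentIntersector>
#include <osgUtil/IntersectionVisitor>
// Returns true if moving the eye from currentEye to newEye would cross scene geometry.
bool blockedByWall(osg::Node* sceneRoot, const osg::Vec3d& currentEye, const osg::Vec3d& newEye)
{
    osg::ref_ptr<osgUtil::LineSegmentIntersector> picker =
        new osgUtil::LineSegmentIntersector(currentEye, newEye);
    osgUtil::IntersectionVisitor iv(picker.get());
    sceneRoot->accept(iv);
    return picker->containsIntersections();
}
If it returns true, keep the old camera position instead of applying the move.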

Related

C++ GDI: I am seeking explanation for color matrix such that I can create any color manipulation mask

I am trying to implement an application that can manipulate the screen's color attributes through a transparent window, basically trying to recreate Color Oracle. I am working from C++ GDI+ resources. GDI+ has a color matrix concept, and I am able to create filters for greyscale (as shown in the example in the last hyperlink), brightness tweaking, and saturation tweaking. However, for more advanced use cases such as a color blindness filter, a blue light filter, or contrast tweaks, I am using a hit-and-trial approach. It would be much more efficient if someone could point me in the right direction to learn the fundamentals of this color matrix.
An example matrix is shown below, which boosts saturation by a small factor while restricting the brightness component.
MAGCOLOREFFECT magEffectSaturationBoost =
{ { // MagEffectBright
    {  1.02f,  0.0f,   0.0f,   0.0f, 1.0f },
    {  0.0f,   1.02f,  0.0f,   0.0f, 1.0f },
    {  0.0f,   0.0f,   1.02f,  0.0f, 1.0f },
    {  0.0f,   0.0f,   0.0f,   1.0f, 0.0f },
    { -0.1f,  -0.1f,  -0.1f,   0.0f, 0.0f }
} };
// Using the Matrix
ret = MagSetColorEffect(hwndMag, &magEffectSaturationBoost);
Each color vector is multiplied by the 5x5 matrix. To make that possible, the color vector is 5 elements long - the fifth element is a dummy one, which allows additional operations on colors (translation, rotation, scaling, ...).
In your example each color component is multiplied by 1.02f, making the color brighter; after the multiplication, 0.1 is subtracted from each color component (that is the last row of the matrix).
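As an illustrative sketch (applyColorMatrix is not an API function, just a demonstration of the math): the color is treated as a row vector [r g b a 1] and multiplied by the 5x5 matrix, so the last row acts as an additive offset:
// Multiply a color row-vector [r, g, b, a, 1] by a 5x5 color matrix.
void applyColorMatrix(const float matrix[5][5], const float in[4], float out[4])
{
    const float v[5] = { in[0], in[1], in[2], in[3], 1.0f }; // dummy fifth element
    for (int col = 0; col < 4; ++col)
    {
        out[col] = 0.0f;
        for (int row = 0; row < 5; ++row)
            out[col] += v[row] * matrix[row][col];
    }
}
// For the matrix above: out.r = 1.02f * in.r - 0.1f, and likewise for g and b.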
You can find a full explanation here:
https://learn.microsoft.com/en-us/windows/win32/gdiplus/-gdiplus-using-a-color-matrix-to-transform-a-single-color-use
and:
https://learn.microsoft.com/en-us/windows/win32/gdiplus/-gdiplus-coordinate-systems-and-transformations-about
A few more links:
https://docs.rainmeter.net/tips/colormatrix-guide/
https://www.integral-domain.org/lwilliams/math150/labs/matrixcolorfilter.php

How to make a spring constraint with Bullet Physics?

I want to test the spring constraint of Bullet Physics, so I created a static box hovering above the ground and a second dynamic box hanging down from it. But activating the spring behavior does nothing! The box is indeed hanging freely - I know because it rotates freely - but it does not oscillate at all.
btCollisionShape *boxShape = createBoxShape(0.2f, 0.2f, 0.2f);
btRigidBody *box1 = createStatic(boxShape);
btRigidBody *box2 = createDynamic(1.0f /*mass*/, boxShape);
box1->setWorldTransform(btTransform(btQuaternion::getIdentity(), { 0.0f, 2.0f, 1.0f }));
box2->setWorldTransform(btTransform(btQuaternion::getIdentity(), { 0.0f, 1.0f, 1.0f }));
btGeneric6DofSpring2Constraint *spring = new btGeneric6DofSpring2Constraint(
*box1, *box2,
btTransform(btQuaternion::getIdentity(), { 0.0f, -1.0f, 0.0f }),
btTransform(btQuaternion::getIdentity(), { 0.0f, 0.0f, 0.0f })
);
// I thought maybe the linear movement is locked, but even these lines did not help.
// spring->setLinearUpperLimit(btVector3(0.0f, 0.1, 0.0f));
// spring->setLinearLowerLimit(btVector3(0.0f, -0.1, 0.0f));
// Enabling the spring behavior for the y-coordinate (index = 1)
spring->enableSpring(1, true);
spring->setStiffness(1, 0.01f);
spring->setDamping (1, 0.00f);
spring->setEquilibriumPoint();
What is wrong? I played a lot with the stiffness and damping parameters, but it changed nothing. Setting linear lower and upper limits makes the box movable in the y-direction, but it still does not oscillate. And yes, gravity is activated.
Ok, I found a solution by checking out Bullet's provided example projects (I could have come up with that idea earlier). Three things I have learned:
The spring constraint will not violate the linear limits. The problem with my former approach was that the linear movement was either locked or limited to too small a range for the assigned spring stiffness. Now there are no limits at all (achieved by setting the lower limit above the upper one).
The stiffness was far too small, so the joined objects acted as if they were freely movable inside the linear limits. You can check out the values in my code below; I got them from the example project.
There is a small difference in behavior between btGeneric6DofSpringConstraint and btGeneric6DofSpring2Constraint. The former seems to violate the non-spring axes less (the x- and z-axes in my case). The latter seems to apply stronger damping. But these are just first observations.
btGeneric6DofSpringConstraint *spring = new btGeneric6DofSpringConstraint(
*box1, *box2,
btTransform(btQuaternion::getIdentity(), { 0.0f, -1.0f, 0.0f }),
btTransform(btQuaternion::getIdentity(), { 0.0f, 0.0f, 0.0f }),
true
);
// Removing any restrictions on the y-coordinate of the hanging box
// by setting the lower limit above the upper one.
spring->setLinearLowerLimit(btVector3(0.0f, 1.0f, 0.0f));
spring->setLinearUpperLimit(btVector3(0.0f, 0.0f, 0.0f));
// Enabling the spring behavior for the y-coordinate (index = 1)
spring->enableSpring(1, true);
spring->setStiffness(1, 35.0f);
spring->setDamping (1, 0.5f);
spring->setEquilibriumPoint();
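One more note (assuming a standard btDiscreteDynamicsWorld setup, which is not shown in the snippet above): the constraint still has to be registered with the world before it has any effect:
// 'dynamicsWorld' is assumed to be your btDiscreteDynamicsWorld instance.
// The second argument disables collisions between the two linked bodies.
dynamicsWorld->addConstraint(spring, true);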

How to draw any plane 3D in irrlicht?

I have created a plane3D like this:
core::vector3df vec(0.0f, 0.0f, 0.0f);
core::vector3df vecnorm(1.0f, 1.0f, 1.0f);
core::plane3d<f32> plan= core::plane3df(vec,vecnorm);
but I can't find out how to display it as a node.
Could you help, please?
In Irrlicht, a plane3d is not renderable. For drawing planes, use createHillPlaneMesh from the geometry creator:
SceneManager->getGeometryCreator()->createHillPlaneMesh(....);
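A minimal sketch of how that might look for a flat plane. The parameter values are illustrative, and the exact createHillPlaneMesh overload should be checked against the IGeometryCreator documentation for your Irrlicht version:
// Create a flat "hill plane" mesh (hill height 0) and attach it to a scene node.
scene::IMesh* planeMesh = SceneManager->getGeometryCreator()->createHillPlaneMesh(
    core::dimension2d<f32>(10.0f, 10.0f),   // size of one tile
    core::dimension2d<u32>(10, 10),         // number of tiles
    0,                                      // material (0 = default)
    0.0f,                                   // hill height -> flat plane
    core::dimension2d<f32>(0.0f, 0.0f),     // hill count
    core::dimension2d<f32>(1.0f, 1.0f));    // texture repeat
scene::IMeshSceneNode* node = SceneManager->addMeshSceneNode(planeMesh);
planeMesh->drop();                          // the scene node keeps its own reference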

Same VBO for all sprites in a 2D game?

I'm writing a small 2D game engine (for educational purposes) in C++ and OpenGL 3.3. While writing the code I noticed that almost all sprites (if not all) use the same vertexBuffer values:
const float vertexBuffer[] =
{
-1.0f, -1.0f, 0.0f, 1.0f,
-1.0f, 1.0f, 0.0f, 1.0f,
1.0f, 1.0f, 0.0f, 1.0f,
1.0f, -1.0f, 0.0f, 1.0f
};
That is 2 triangles (if using VBO indexing) in model space that form a square; the indexBuffer goes like:
const unsigned short indexBuffer[] = { 0, 1, 2, 2, 0, 3 };
Why am I using the same model-space values for all my sprites? Well, I use a different MVP matrix for each of them:
P (projection): The orthographic camera transform, usually with the same width and height as the glContext.
V (view): A lookAt transformation; it just sits on the z axis looking perpendicularly at the xy plane. This is also used to move the camera (follow the player, etc.).
M (model): this matrix is created using transformations belonging to each sprite:
glm::mat4 model = <translate> * <rotate> * <scale>
Where:
<translate> is the position of the sprite in screen space
<rotate> is the rotation of the sprite
<scale> is the size of the sprite in pixels divided by 2. Why? Each corner of the model-space square corresponds to a vertex, and the square formed by these has its center at the origin, so if our sprite is 250x250 pixels, we scale by 125 px on each side in each axis, thus transforming our model-space square into a screen-space square.
So, if I have 5 sprites I'll call glDrawElements 5 times, with different MVPs and textures each time, but the same vertexBuffer, indexBuffer and uvCoordinates.
Do you think this is an error-prone approach to use in the future? Or should I instead apply the <translate> and <scale> transformations directly to the vertices when creating them, and leave the model matrix with only the rotation?
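For reference, a rough sketch of how the per-sprite matrix is built with GLM (position, angle and sizePx stand in for the sprite's own fields):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
// Build the MVP for one sprite; position is in screen space, sizePx in pixels.
glm::mat4 model =
      glm::translate(glm::mat4(1.0f), glm::vec3(position, 0.0f))
    * glm::rotate(glm::mat4(1.0f), angle, glm::vec3(0.0f, 0.0f, 1.0f))
    * glm::scale(glm::mat4(1.0f), glm::vec3(sizePx * 0.5f, 1.0f));
glm::mat4 mvp = projection * view * model;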

Begun learning OpenGL, need help figuring out this problem

So I have begun learning OpenGL, reading from the book "OpenGL SuperBible, 5th ed.". It explains things really well, and I have been able to create my first GL program myself! Just something simple: a rotating 3D pyramid.
Now, for some reason one of the faces is not rendering. I checked the vertices (plotted them on paper first) and they seemed to be right. I found out that if I changed it to draw a line loop, it would render; however, it would not render as a triangle. Can anyone explain why?
void setupRC()
{
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
shaderManager.InitializeStockShaders();
M3DVector3f vVerts1[] = {-0.5f,0.0f,-0.5f,0.0f,0.5f,0.0f,0.5f,0.0f,-0.5f};
M3DVector3f vVerts2[] = {-0.5f,0.0f,-0.5f,0.0f,0.5f,0.0f,-0.5f,0.0f,0.5f};
M3DVector3f vVerts3[] = {-0.5f,0.0f,0.5f,0.0f,0.5f,0.0f,0.5f,0.0f,0.5f};
M3DVector3f vVerts4[] = {0.5f,0.0f,0.5f,0.0f,0.5f,0.0f,0.5f,0.0f,-0.5f};
triangleBatch1.Begin(GL_LINE_LOOP, 3);
triangleBatch1.CopyVertexData3f(vVerts1);
triangleBatch1.End();
triangleBatch2.Begin(GL_TRIANGLES, 3);
triangleBatch2.CopyVertexData3f(vVerts2);
triangleBatch2.End();
triangleBatch3.Begin(GL_TRIANGLES, 3);
triangleBatch3.CopyVertexData3f(vVerts3);
triangleBatch3.End();
triangleBatch4.Begin(GL_TRIANGLES, 3);
triangleBatch4.CopyVertexData3f(vVerts4);
triangleBatch4.End();
glEnable(GL_CULL_FACE);
}
float rot = 1;
void renderScene()
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
GLfloat vRed[] = {1.0f, 0.0f, 0.0f, 0.5f};
GLfloat vBlue[] = {0.0f, 1.0f, 0.0f, 0.5f};
GLfloat vGreen[] = {0.0f, 0.0f, 1.0f, 0.5f};
GLfloat vWhite[] = {1.0f, 1.0f, 1.0f, 0.5f};
M3DMatrix44f transformMatrix;
if (rot >= 360)
rot = 0;
else
rot = rot + 1;
m3dRotationMatrix44(transformMatrix,m3dDegToRad(rot),0.0f,1.0f,0.0f);
shaderManager.UseStockShader(GLT_SHADER_FLAT, transformMatrix, vRed);
triangleBatch1.Draw();
shaderManager.UseStockShader(GLT_SHADER_FLAT, transformMatrix, vBlue);
triangleBatch2.Draw();
shaderManager.UseStockShader(GLT_SHADER_FLAT, transformMatrix, vGreen);
triangleBatch3.Draw();
shaderManager.UseStockShader(GLT_SHADER_FLAT, transformMatrix, vWhite);
triangleBatch4.Draw();
glutSwapBuffers();
glutPostRedisplay();
Sleep(10);
}
You've most likely defined the vertices in clockwise order for the triangle that isn't showing, and in counterclockwise order (normally the default) for those that are. Clockwise winding essentially creates an inward-facing normal, and thus OpenGL won't bother to render it when culling is enabled.
The easiest way to check this is to set glCullFace(GL_FRONT) - that should flip things so you see the missing triangle and no longer see the other three.
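A minimal sketch of that diagnostic, plus the two usual fixes (reorder the vertices to counterclockwise, or tell OpenGL that clockwise is front-facing):
// Diagnostic: cull front faces instead of back faces. If the missing triangle
// appears (and the other three disappear), its winding order is reversed.
glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT);        // default is GL_BACK
// Fix option 1: reorder that triangle's vertices to counterclockwise order.
// Fix option 2: declare clockwise as the front-facing winding:
glFrontFace(GL_CW);          // default is GL_CCW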
The only thing I see here that affects how polygons are rendered is glEnable(GL_CULL_FACE);.
You shouldn't have that, because if you specify your vertices in the wrong winding order, the polygon won't render.
Remove it, or explicitly call glDisable(GL_CULL_FACE); to be sure.
In your case, it's unlikely that you want to draw polygons that are visible from one side only.