How to make a physical wall using cocos2d / chipmunk? - cocos2d-iphone

How do I make a physical wall using cocos2d / chipmunk, so that a sprite simply cannot pass through another sprite (the wall)? Sorry if this is already answered somewhere; I can't find any information for beginners.

This is from the cocos2d/chipmunk template. Your sprite should be attached to a Chipmunk body; to achieve that, set the sprite's position to the body's position in your update method:
CGSize s = [[CCDirector sharedDirector] winSize];
_space = cpSpaceNew();
cpSpaceSetGravity( _space, cpv(0, -100) );
//
// rogue shapes
// We have to free them manually
//
// bottom
_walls[0] = cpSegmentShapeNew( _space->staticBody, cpv(0,0), cpv(s.width,0), 0.0f);
// top
_walls[1] = cpSegmentShapeNew( _space->staticBody, cpv(0,s.height), cpv(s.width,s.height), 0.0f);
// left
_walls[2] = cpSegmentShapeNew( _space->staticBody, cpv(0,0), cpv(0,s.height), 0.0f);
// right
_walls[3] = cpSegmentShapeNew( _space->staticBody, cpv(s.width,0), cpv(s.width,s.height), 0.0f);
for( int i=0;i<4;i++) {
cpShapeSetElasticity( _walls[i], 1.0f );
cpShapeSetFriction( _walls[i], 1.0f );
cpSpaceAddStaticShape(_space, _walls[i] );
}
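For the moving sprite itself, here is a minimal sketch of the missing pieces, using the same pre-7.x Chipmunk C API as the template above; the ballBody, ballShape, and sprite names and values are placeholders, not part of the template:
cpFloat mass = 1.0f, radius = 16.0f;
// dynamic body + circle shape that will drive the sprite
cpBody *ballBody = cpSpaceAddBody(_space,
    cpBodyNew(mass, cpMomentForCircle(mass, 0.0f, radius, cpvzero)));
cpBodySetPos(ballBody, cpv(s.width * 0.5f, s.height * 0.5f));
cpShape *ballShape = cpSpaceAddShape(_space, cpCircleShapeNew(ballBody, radius, cpvzero));
cpShapeSetElasticity(ballShape, 0.5f);
cpShapeSetFriction(ballShape, 0.5f);
// then, every frame (e.g. in the layer's scheduled update: method):
//   cpSpaceStep(_space, dt);
//   cpVect p = cpBodyGetPos(ballBody);
//   sprite.position = ccp(p.x, p.y);   // keep the CCSprite glued to the body
Because the walls were added as static shapes in the same space, the dynamic body collides with them, so the sprite it drives cannot pass through them.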

Related

C++ Raytracer - Only one object appearing in scene

I am using a raytracer to render a Sphereflake, but I am having trouble getting more than one object to appear in the scene. In the scene below I am just trying to test having two spheres in a scene, but for some reason only one sphere ever appears, and usually it's the sphere with the largest radius.
Another peculiar thing: even though there is a camera set in the scene, the output window always shows the predominant object at the centre ((0,0), with screen coordinates running from [-1,-1] to [1,1]) rather than relative to the camera's coordinate space.
I am unsure whether it is a parent-hierarchy problem or how I'm rendering the objects, but any insight into why the problem persists would be greatly appreciated.
main.cpp (creates the scene and renders the objects)
#include <stdio.h>
#include <iostream>
#include <vector>
#include <math.h>
#include <glm.hpp>
#include <gtc/matrix_transform.hpp>
#include <Raytracer/Raytracer.h>
using namespace glm;
using namespace Raytracer;
using namespace Raytracer::Scenes;
using namespace Raytracer::Objects;
/**
* Places a few spheres in the scene and adds some lights.
*
* @param scene The scene
*/
Scene *BuildScene(int depth, float aspect)
{
const int materialCount = 6;
vec3 colors[materialCount] =
{
vec3(1.0f, 0.0f, 0.0f),
vec3(1.0f, 1.0f, 0.0f),
vec3(0.0f, 1.0f, 0.0f),
vec3(0.0f, 1.0f, 1.0f),
vec3(0.0f, 0.0f, 1.0f),
vec3(1.0f, 0.0f, 1.0f)
};
Material *materials[materialCount];
for (int i = 0; i < materialCount; i++)
{
materials[i] = new Material();
if (materials[i] == NULL)
return NULL;
vec3 ambient = colors[i] * 0.01f;
materials[i]->SetAmbient(ambient);
materials[i]->SetDiffuse(colors[i]);
materials[i]->SetShininess(25.0f);
}
if (depth <= 0)
return NULL;
// Create the scene.
Scene *scene = new Scene();
if (scene == NULL)
return NULL;
Sphere * s1 = new Sphere(0.33f, materials[5]);
s1->SetPosition(vec3(5.0f, 0.0f, -2.0f));
Sphere * s2 = new Sphere(0.33f, materials[1]);
s2->SetPosition(vec3(5.0f, 0.33f, -2.0f));
s1->AddChild(s2);
// Create a light.
Light *light = new PointLight(vec3(10.0f));
if (light == NULL)
{
delete scene;
return NULL;
}
light->SetPosition(vec3(-5.0f, 3.0f, 2.0f));
scene->AddChild(light);
// Create a camera.
Camera *camera = new Camera(vec3(-2.0f, 2.0f, 4.0f), vec3(0.0f, 0.0f, 0.0f),
vec3(0.0f, 1.0f, 0.0f), Camera::DefaultFov, aspect);
scene->AddChild(s1);
if (camera == NULL)
{
delete scene;
return NULL;
}
scene->AddChild(camera);
scene->SetActiveCamera(camera);
return scene;
}
/**
* Renders the scene and saves the result to a BMP file.
*
* @param fileName The name of the file
* @param width The image width
* @param height The image height
*/
void Render(const char *fileName, int width, int height)
{
if (fileName == NULL || width <= 0 || height <= 0)
return;
SimpleRenderer renderer;
renderer.SetAccelerator(new SimpleAccelerator());
renderer.SetIntegrator(new PhongIntegrator());
puts("Generiere Szene...");
Scene *scene = BuildScene(3, (float)width / height);
if (scene == NULL)
return;
puts("Rendere Bild...");
Image *image = renderer.Render(*scene, width, height);
if (image != NULL)
{
puts("Speichere Ergebnis...");
image->SaveBMP(fileName, 2.2f);
delete image;
}
delete scene;
}
/**
* The main program
*/
int main()
{
Render("image.bmp", 512, 512);
return 0;
}
Example of a scene with two spheres as stated above, with s1.radius = 0.33f and s2.radius = 0.33f (scene 1 screenshot).
Another example of a scene with two spheres, with s1.radius = 0.33f and s2.radius = 1.0f (scene 2 screenshot).
As you can see, the camera seems to have no effect as a point of perspective: no matter what the sphere's position is, the only thing that changes is its lighting, and it always sits at the centre of the display window.
Since s2 is attached as a child of s1, it's being drawn 5 units further down the X axis than s1:
s2->SetPosition(vec3(5.0f, 0.33f, -2.0f));
...
s1->AddChild(s2);
And since your camera is looking down the positive x axis:
Camera *camera = new Camera(vec3(-2.0f, 2.0f, 4.0f),
vec3(0.0f, 0.0f, 0.0f), vec3(0.0f, 1.0f, 0.0f), Camera::DefaultFov, aspect);
s2 is simply being drawn behind s1.
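If the hierarchy is intentional, the child's offset just has to be something other than another 5 units along the view axis. A minimal sketch, assuming the same Scene/Sphere API as the question and that a child's SetPosition is relative to its parent (the offset values here are only illustrative):
Sphere *s1 = new Sphere(0.33f, materials[5]);
s1->SetPosition(vec3(5.0f, 0.0f, -2.0f));   // world position, via the scene
Sphere *s2 = new Sphere(0.33f, materials[1]);
s2->SetPosition(vec3(0.0f, 0.66f, 0.0f));   // small local offset from s1, not another +5 on x
s1->AddChild(s2);
scene->AddChild(s1);                        // one AddChild to the scene covers both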
It turns out it was not a problem with my scene builder, but with how the child/parent inheritance works in another class, SceneObject. Anyway, I fixed the AddChild function and the code now works with the camera perspective and with multiple items in the scene.
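For illustration, the usual pattern behind such an AddChild fix looks roughly like the following; this is a hypothetical sketch, not the poster's actual SceneObject code:
#include <vector>
#include <glm.hpp>
struct SceneObject {
    SceneObject *parent = nullptr;
    std::vector<SceneObject *> children;
    glm::vec3 localPosition = glm::vec3(0.0f);
    void AddChild(SceneObject *child) {
        child->parent = this;        // child transforms become relative to this node
        children.push_back(child);
    }
    glm::vec3 WorldPosition() const {
        // walk up the hierarchy, accumulating the parents' offsets
        return parent ? parent->WorldPosition() + localPosition : localPosition;
    }
};
With a layout like this, s2's world position is s1's world position plus s2's local offset, which is exactly the behaviour the explanation above relies on.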

C++ DirectX10 Mesh Disappearing After One Frame, Why?

First-time poster on this site, but I have hit a serious block and am lost. If this is too much to read, the question is at the bottom, but I thought the background would help.
A little background on the project:
I currently have a rendering engine set up using DirectX10. It is built around component (data) pointers held inside a component manager (which holds pointers to the components of the entities in the world); that data is handed to an interface manager, which owns all the methods for loading, creation, updating, rendering, etc.
Here is an image of a class diagram to make it easier to visualize:
Edited: (I do not have enough rep to post the image, so here is a link instead: http://imgur.com/L3nOyoY)
Edit:
My rendering engine has a weird issue: it renders a cube (parsed from a Wavefront .obj file and put into an array of custom vertices) for only one frame. After the initial presentation of the back buffer, my cube disappears.
I have gone through a lot of debugging on this issue, but it has yielded no answers. I have dump files of the changes in the vectors for position, scale, rotation, etc., as well as the world matrix. All data regarding the position of the object in world space and the camera position in world space maintains its integrity. I have also checked whether other pointers were being corrupted, modified where they ought not to be, or deleted by accident. Stepping through the locals over several updates and renders, I find no change in the Movement components, no change in the Texture components, and no change to my shader variables and pointers.
Here is the code that runs initially, and then the loop:
void GameWorld::Load()
{
//initialize all members here
mInterface = InterfaceManager(pDXManager);
//create entities here
mEHouse = CEntity(
CT::MESH | CT::MOVEMENT | CT::SHADER | CT::TEXTURE,
"House", &mCManager);
mECamera = CEntity(
CT::CAMERA | CT::MOVEMENT | CT::LIGHT,
"Camera", &mCManager);
//HACKS FOR TESTING ONLY
//Ideally create script to parse to create entities;
//GameWorld will have dynamic entity list;
//////////////////////////////////////////////////
tm = CMesh("../Models/Box.obj");
mCManager.mMesh[0] = &tm;
//hmmm.... how to make non-RDMS style entities...
tc = CCamera(
XMFLOAT3(0.0f, 0.0f, 1.0f),
XMFLOAT3(0.0f, 0.0f, 0.0f),
XMFLOAT3(0.0f, 1.0f, 0.0f), 1);
mCManager.mCamera[0] = &tc;
tmc = CMovement(
XMFLOAT3(0.0f, 0.0f, -10.0f),
XMFLOAT3(0.0f, 0.0f, 0.0f),
XMFLOAT3(0.0f, 0.0f, 0.0f));
mCManager.mMovement[1] = &tmc;
////////////////////////////////////////////////////
//only after all entities are created
mInterface.onLoad(&mCManager);
}
//core game logic goes here
void GameWorld::Update(float dt)
{
mInterface.Update(dt, &mCManager);
}
//core rendering logic goes here
void GameWorld::Render()
{
pDXManager->BeginScene();
//render calls go here
mInterface.Render(&mCManager);
//disappears after end scene
pDXManager->EndScene();
}
And here is the interface render and update methods:
void InterfaceManager::onLoad(CComponentManager* pCManager)
{
//create all
for(int i = 0; i < pCManager->mTexture.size(); ++i)
{
mTexture2D.loadTextureFromFile(pDXManager->mD3DDevice, pCManager->mTexture[i]);
}
for(int i = 0; i < pCManager->mMesh.size(); ++i)
{
mMesh.loadMeshFromOBJ(pCManager->mMesh[i]);
mMesh.createMesh(pDXManager->mD3DDevice, pCManager->mMesh[i]);
}
for(int i = 0; i < pCManager->mShader.size(); ++i)
{
mShader.Init(pDXManager->mD3DDevice, pDXManager->mhWnd, pCManager->mShader[i], pCManager->mTexture[i]);
}
//TODO: put this somewhere else to maintain structure
XMMATRIX pFOVLH = XMMatrixPerspectiveFovLH((float)D3DX_PI / 4.0f, (float)pDXManager->mWindowWidth/pDXManager->mWindowHeight, 0.1f, 1000.0f);
XMStoreFloat4x4(&pCManager->mCamera[0]->mProjectionMat, pFOVLH);
}
void InterfaceManager::Update(float dt, CComponentManager* pCManager)
{
//update input
//update ai
//update collision detection
//update physics
//update movement
for(int i = 0; i < pCManager->mMovement.size(); ++i)
{
mMovement.transformToWorld(pCManager->mMovement[i]);
}
//update animations
//update camera
//There is only ever one active camera
//TODO: somehow set up for an activecamera variable
mCamera.Update(pCManager->mCamera[0], pCManager->mMovement[pCManager->mCamera[0]->mOwnerID]);
}
void InterfaceManager::Render(CComponentManager* pCManager)
{
for(int i = 0; i < pCManager->mMesh.size(); ++i)
{
//render meshes
mMesh.RenderMeshes(pDXManager->mD3DDevice, pCManager->mMesh[i]);
//set shader variables
mShader.setShaderMatrices(pCManager->mCamera[0], pCManager->mShader[i], pCManager->mMovement[i]);
mShader.setShaderLight(pCManager->mLight[i], pCManager->mShader[i]);
mShader.setShaderTexture(pCManager->mTexture[i]);
//render shader
mShader.RenderShader(pDXManager->mD3DDevice, pCManager->mShader[i], pCManager->mMesh[i]);
}
}
In short, my question could be this: Why is my cube only rendering for one frame, then disappearing?
UPDATE: I found the method causing the issue by isolating it. It lies within my Update() method, before Render(). It is when my camera is updated that the problem occurs. Here is the code for that method; perhaps someone can see what I am missing?
void ICamera::Update(CCamera* pCamera, CMovement* pMovement)
{
XMMATRIX rotMat = XMMatrixRotationRollPitchYaw(pMovement->mRotation.x,
pMovement->mRotation.y,
pMovement->mRotation.z);
XMMATRIX view = XMLoadFloat4x4(&pCamera->mViewMat);
XMVECTOR up = XMLoadFloat3(&pCamera->mUp);
XMVECTOR lookAt = XMLoadFloat3(&pCamera->mEye);
XMVECTOR pos = XMLoadFloat3(&pMovement->mPosition);
lookAt = XMVector3TransformCoord(lookAt, rotMat);
up = XMVector3TransformCoord(up, rotMat);
lookAt = pos + lookAt;
view = XMMatrixLookAtLH(pos,
lookAt,
up);
XMStoreFloat3(&pCamera->mEye, lookAt);
XMStoreFloat3(&pCamera->mUp, up);
XMStoreFloat4x4(&pCamera->mViewMat, view);
}
From the camera update code, one obvious issue is the lookAt variable. Usually lookAt is a "point", not a "vector" (direction), but from your code it seems you store it as a point and then use it as a vector. Your camera definition is also incomplete: a standard camera should at least contain a position, an up direction, and a view direction (or a lookAt point).
I assume you want to rotate and translate your camera in ICamera::Update, so you rotate the up and view directions and translate the position.
I guess your CMovement gives you a new camera position and applies a rotation to the camera. Then you can try to modify it as below (mEye and mLookAt are positions, mUp is a direction):
XMVECTOR up = XMLoadFloat3(&pCamera->mUp);
XMVECTOR lookAt = XMLoadFloat3(&pCamera->mLookAt);
XMVECTOR oldPos = XMLoadFloat3(&pCamera->mEye);
XMVECTOR viewDir = lookAt - oldPos;
XMVECTOR pos = XMLoadFloat3(&pMovement->mPosition);
viewDir = XMVector3TransformCoord(viewDir, rotMat);
up = XMVector3TransformCoord(up, rotMat);
lookAt = pos + viewDir;
view = XMMatrixLookAtLH(pos,
lookAt,
up);
XMStoreFloat3(&pCamera->mEye, pos);
XMStoreFloat3(&pCamera->mLookAt, lookAt);
XMStoreFloat3(&pCamera->mUp, up);
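For reference, here is what the whole method could look like with those changes folded back in. This is only a minimal sketch under the assumptions above, i.e. that an XMFLOAT3 mLookAt member is added to CCamera; everything else follows the original function:
void ICamera::Update(CCamera* pCamera, CMovement* pMovement)
{
    // rotation requested by the movement component
    XMMATRIX rotMat = XMMatrixRotationRollPitchYaw(pMovement->mRotation.x,
                                                   pMovement->mRotation.y,
                                                   pMovement->mRotation.z);
    // load the stored camera state
    XMVECTOR up     = XMLoadFloat3(&pCamera->mUp);
    XMVECTOR lookAt = XMLoadFloat3(&pCamera->mLookAt);
    XMVECTOR oldPos = XMLoadFloat3(&pCamera->mEye);
    XMVECTOR pos    = XMLoadFloat3(&pMovement->mPosition);
    // rotate the view *direction*, not the lookAt point itself
    XMVECTOR viewDir = lookAt - oldPos;
    viewDir = XMVector3TransformCoord(viewDir, rotMat);
    up      = XMVector3TransformCoord(up, rotMat);
    lookAt  = pos + viewDir;
    XMMATRIX view = XMMatrixLookAtLH(pos, lookAt, up);
    // store eye and lookAt separately so neither gets clobbered next frame
    XMStoreFloat3(&pCamera->mEye, pos);
    XMStoreFloat3(&pCamera->mLookAt, lookAt);
    XMStoreFloat3(&pCamera->mUp, up);
    XMStoreFloat4x4(&pCamera->mViewMat, view);
}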

Does iCarousel enable a 3D wheel like this?

I have to present a menu like the one in the picture, where the buttons can move around the circle in the center with a 3D effect, meaning you can see their dimensions transform as they move.
I remember that the iCarousel project can do such things. Could anyone guide me to the right control that provides this animation?
Thanks.
EDIT1 :
OK, I can see that iCarousel is almost what I need, but how do I change the carousel's vertical tilt angle to get something like the first picture? This is how iCarousel looks by default.
What you need to do is the following (step by step, as indicated below):
Download iCarousel ...
Open the sample project under Tests/ARC iOS (open iCarouselExample.xcodeproj)
Set default to "Cylinder" instead of "Coverflow" (in the viewDidLoad function of iCarouselExampleViewController.m)
- (void)viewDidLoad
{
[super viewDidLoad];
carousel.type = iCarouselTypeCylinder;
navItem.title = #"Cylinder";
}
Since you need just six items in your custom carousel, define the "number of panes" in iCarouselExampleViewController.h as follows:
//NOTE!
#define NUMBER_OF_ITEMS 6
Use this NUMBER_OF_ITEMS to set up the carousel, i.e. in iCarouselExampleViewController.m change the setUp function as follows (use 'wrap' as shown):
- (void)setUp
{
//set up data
wrap = YES;
self.items = [NSMutableArray array];
//NOTE! use preset number of vars in Carousel
//for (int i = 0; i < 10000; i++)
for (int i=0; i< NUMBER_OF_ITEMS; i++)
{
[items addObject:[NSNumber numberWithInt:i]];
}
}
Now provide the perspective in iCarousel by updating the _perspective parameter in the iCarousel class. Do this in the setUp function of iCarousel.m:
- (void)setUp
{
_type = iCarouselTypeLinear;
//NOTE! Tweak perspective parameters
_perspective = -1.0f/750.0f;
... etc ...
}
Finally, give the entire view a "tilt" about the X axis by rotating the carousel by 15 degrees about that axis. The way to do this is to tweak the transform matrix (set transform = CATransform3DRotate(transform, -15.0f*M_PI/180.0f, 1.0f, 0.0f, 0.0f)). In code, update iCarousel.m's transformForItemView function as follows:
- (CATransform3D)transformForItemView:(UIView *)view withOffset:(CGFloat)offset
{
//set up base transform
CATransform3D transform = CATransform3DIdentity;
transform.m34 = _perspective;
transform = CATransform3DTranslate(transform, -_viewpointOffset.width, _viewpointOffset.height, 0.0f);
//perform transform
switch (_type)
{
// ... the initial sections of this switch are skipped; we make our change in the cylinder section ...
case iCarouselTypeCylinder:
case iCarouselTypeInvertedCylinder:
{
CGFloat count = [self circularCarouselItemCount];
CGFloat spacing = [self valueForOption:iCarouselOptionSpacing withDefault:1.0f];
CGFloat arc = [self valueForOption:iCarouselOptionArc withDefault:M_PI * 2.0f];
CGFloat radius = [self valueForOption:iCarouselOptionRadius withDefault:fmaxf(0.01f, _itemWidth * spacing / 2.0f / tanf(arc/2.0f/count))];
CGFloat angle = [self valueForOption:iCarouselOptionAngle withDefault:offset / count * arc];
if (_type == iCarouselTypeInvertedCylinder)
{
radius = -radius;
angle = -angle;
}
if (_vertical)
{
transform = CATransform3DTranslate(transform, 0.0f, 0.0f, -radius);
transform = CATransform3DRotate(transform, angle, -1.0f, 0.0f, 0.0f);
return CATransform3DTranslate(transform, 0.0f, 0.0f, radius + 0.01f);
}
else
{
transform = CATransform3DTranslate(transform, 0.0f, 0.0f, -radius);
//NOTE! Give it a tilt about the "X" axis
transform = CATransform3DRotate(transform, -15.0f*M_PI/180.0f, 1.0f, 0.0f, 0.0f);
transform = CATransform3DRotate(transform, angle, 0.0f, 1.0f, 0.0f);
return CATransform3DTranslate(transform, 0.0f, 0.0f, radius + 0.01f);
}

OpenScenegraph sample code issue

The code below is from a book. When I try to run it, it fails on the line
osg::ref_ptr<osg::Geometry> geom = new osg::Geometry();
and, the output window does not seem to contain much information on why it crashes, other than telling me that it did. Any idea what I may be doing wrong in the code below? Thanks in advance.
Here is the Windows error popup when I try to run this in Visual Studio 2010 (Windows 7, 64-bit):
Windows has triggered a breakpoint in OSGPracticeLab.exe.
This may be due to a corruption of the heap, which indicates a bug in OSGPracticeLab.exe or any of the DLLs it has loaded.
This may also be due to the user pressing F12 while OSGPracticeLab.exe has focus.
The output window may have more diagnostic information.
On attempting to debug the code, I was able to trace the problem to the operator new call. In the code below, it seems the while loop is skipped over and a null value is returned for p (no memory allocated), and so my Geometry object in the code below this is not instantiated.
void *__CRTDECL operator new(size_t size) _THROW1(_STD bad_alloc)
{ // try to allocate size bytes
void *p;
while ((p = malloc(size)) == 0)
if (_callnewh(size) == 0)
{ // report no memory
static const std::bad_alloc nomem;
_RAISE(nomem);
}
return (p);
}
Below is my program to draw some shapes and display them:
#include <osg/ShapeDrawable>
#include <osg/Geode>
#include <osgViewer/Viewer>
int main()
{
//An octahedron is a polyhedron having eight triangle faces.
//It is really a nice example to show why primitive indexing is important
// we will sketch the octahedron structure now
osg::ref_ptr<osg::Vec3Array> vertices = new osg::Vec3Array(6);
//octahedron has six vertices, each shared by four triangles.
//with the help of an index array and the osg::DrawElementsUInt class, we can allocate
//a vertex array with only six elements
(*vertices)[0].set( 0.0f, 0.0f, 1.0f);
(*vertices)[1].set(-0.5f,-0.5f, 0.0f);
(*vertices)[2].set( 0.5f,-0.5f, 0.0f);
(*vertices)[3].set( 0.5f, 0.5f, 0.0f);
(*vertices)[4].set(-0.5f, 0.5f, 0.0f);
(*vertices)[5].set( 0.0f, 0.0f,-1.0f);
//The osg::DrawElementsUInt accepts a size parameter besides the drawing mode parameter, too.
//After that, we will specify the indices of vertices to describe all eight triangle faces.
osg::ref_ptr<osg::DrawElementsUInt> indices = new osg::DrawElementsUInt(GL_TRIANGLES, 24);
(*indices)[0] = 0; (*indices)[1] = 1; (*indices)[2] = 2;
(*indices)[3] = 0; (*indices)[4] = 2; (*indices)[5] = 3;
(*indices)[6] = 0; (*indices)[7] = 3; (*indices)[8] = 4;
(*indices)[9] = 0; (*indices)[10]= 4; (*indices)[11]= 1;
(*indices)[12]= 5; (*indices)[13]= 2; (*indices)[14]= 1;
(*indices)[15]= 5; (*indices)[16]= 3; (*indices)[17]= 2;
(*indices)[18]= 5; (*indices)[19]= 4; (*indices)[20]= 3;
(*indices)[21]= 5; (*indices)[22]= 1; (*indices)[23]= 4;
//To create a geometry with a default white color, we only set the vertex array
//and the osg::DrawElementsUInt primitive set. The normal array is also required but is not easy
//to compute manually. We will use a smoothed normal calculator to automatically obtain it. This calculator
//will be described in the next section, Using polygonal techniques.
osg::ref_ptr<osg::Geometry> geom = new osg::Geometry();
geom->setVertexArray( vertices.get() );
geom->addPrimitiveSet( indices.get() );
//osgUtil::SmoothingVisitor::smooth( *geom );
//Add the geometry to an osg::Geode object and make it the scene root
osg::ref_ptr<osg::Geode> root = new osg::Geode;
root->addDrawable( geom.get() );
osgViewer::Viewer viewer;
viewer.setSceneData( root.get() );
return viewer.run();
}
int drawShapeUsingVertices()
{
//Create the vertex array and push the four corner points to the back of the array by using vector like operations:
osg::ref_ptr<osg::Vec3Array> vertices = new osg::Vec3Array;
vertices->push_back( osg::Vec3(0.0f, 0.0f, 0.0f) );
vertices->push_back( osg::Vec3(1.0f, 0.0f, 0.0f) );
vertices->push_back( osg::Vec3(1.0f, 0.0f, 1.0f) );
vertices->push_back( osg::Vec3(0.0f, 0.0f, 1.0f) );
//We have to indicate the normal of each vertex; otherwise OpenGL will use a default (0, 0, 1) normal vector
//and the lighting equation calculation may be incorrect. The four vertices actually face the same direction,
//so a single normal vector is enough. We will also set the setNormalBinding() method to BIND_OVERALL later.
osg::ref_ptr<osg::Vec3Array> normals = new osg::Vec3Array;
normals->push_back( osg::Vec3(0.0f,-1.0f, 0.0f) );
osg::ref_ptr<osg::Vec4Array> colors = new osg::Vec4Array;
//here We will indicate a unique color value to each vertex and make them colored. By default,
//OpenGL will use smooth coloring and blend colors at each vertex together:
colors->push_back( osg::Vec4(1.0f, 0.0f, 0.0f, 1.0f) );
colors->push_back( osg::Vec4(0.0f, 1.0f, 0.0f, 1.0f) );
colors->push_back( osg::Vec4(0.0f, 0.0f, 1.0f, 1.0f) );
colors->push_back( osg::Vec4(1.0f, 1.0f, 1.0f, 1.0f) );
//Next, we create the osg::Geometry object and set the prepared vertex, normal, and color arrays to it.
//We also indicate that the single normal should be bound to the entire geometry and that the colors
//should be bound per vertex:
osg::ref_ptr<osg::Geometry> quad = new osg::Geometry;
quad->setVertexArray( vertices.get() );
quad->setNormalArray( normals.get() );
quad->setNormalBinding( osg::Geometry::BIND_OVERALL );
quad->setColorArray( colors.get() );
quad->setColorBinding( osg::Geometry::BIND_PER_VERTEX );
//The last step required to finish a geometry and add it to the scene graph is to specify the primitive set.
//A newly allocated osg::DrawArrays instance with the drawing mode set to GL_QUADS is used here, in order to
//render the four vertices as quad corners in a counter-clockwise order:
quad->addPrimitiveSet( new osg::DrawArrays(GL_QUADS, 0, 4) );
//Add the geometry to an osg::Geode object and render it in the scene viewer:
osg::ref_ptr<osg::Geode> root = new osg::Geode;
root->addDrawable( quad.get() );
osgViewer::Viewer viewer;
viewer.setSceneData( root.get() );
return viewer.run();
}
I didn't have any problems with the code. I took it from the beginner's guide and it works fine:
#include <osg/Geometry>
#include <osg/Geode>
#include <osgViewer/Viewer>
int main()
{
osg::ref_ptr<osg::Vec3Array> vertices = new osg::Vec3Array;
vertices->push_back( osg::Vec3(0.0f, 0.0f, 0.0f) );
vertices->push_back( osg::Vec3(1.0f, 0.0f, 0.0f) );
vertices->push_back( osg::Vec3(1.0f, 0.0f, 1.0f) );
vertices->push_back( osg::Vec3(0.0f, 0.0f, 1.0f) );
osg::ref_ptr<osg::Vec3Array> normals = new osg::Vec3Array;
normals->push_back( osg::Vec3(0.0f,-1.0f, 0.0f) );
osg::ref_ptr<osg::Vec4Array> colors = new osg::Vec4Array;
colors->push_back( osg::Vec4(1.0f, 0.0f, 0.0f, 1.0f) );
colors->push_back( osg::Vec4(0.0f, 1.0f, 0.0f, 1.0f) );
colors->push_back( osg::Vec4(0.0f, 0.0f, 1.0f, 1.0f) );
colors->push_back( osg::Vec4(1.0f, 1.0f, 1.0f, 1.0f) );
osg::ref_ptr<osg::Geometry> quad = new osg::Geometry;
quad->setVertexArray( vertices.get() );
quad->setNormalArray( normals.get() );
quad->setNormalBinding( osg::Geometry::BIND_OVERALL );
quad->setColorArray( colors.get() );
quad->setColorBinding( osg::Geometry::BIND_PER_VERTEX );
quad->addPrimitiveSet( new osg::DrawArrays(GL_QUADS, 0, 4) );
osg::ref_ptr<osg::Geode> root = new osg::Geode;
root->addDrawable( quad.get() );
osgViewer::Viewer viewer;
viewer.setSceneData( root.get() );
return viewer.run();
}
I recommend you check your project properties.
Have you included additional include directories: $(OSG_ROOT)\include;$(OSG_SOURCE)\include;$(OSG_ROOT)\include\osg;
If you're in Debug mode, do you have this in your preprocessor definitions? _DEBUG;WIN32;
Did you specify your linker additional directory: $(OSG_ROOT)\lib
Did you specify the linker additional dependencies?: osgWidgetd.lib;osgVolumed.lib;osgViewerd.lib;osgUtild.lib;osgTextd.lib;osgTerraind.lib;osgSimd.lib;osgShadowd.lib;osgPresentationd.lib;osgParticled.lib;osgManipulatord.lib;osgGAd.lib;osgFXd.lib;osgDBd.lib;osgd.lib;osgAnimationd.lib;OpenThreadsd.lib;kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;%(AdditionalDependencies)
Have you specified Configuration properties > debugging > Working directory as: $(OSG_ROOT)\bin
In an extreme case, it may be because your Visual Studio installation is corrupted. Try reinstalling Visual Studio, and if the OSG installation was corrupted, reinstall OSG (build it from source). I mention this because a friend of mine had problems running OSG because his Visual Studio installation was corrupted; reinstalling fixed it.
Does osg build? Did you run the "Install" project from within OSG? Even if you did, the permissions can be borked in Win7 - you might have to manually install to Program Files.
Your sample posted above compiled perfectly for me on Win7 / VS 2008 / Win32-Release build config, built against version 3.1.0 of OSG. I just replaced the main from one of the Example Projects in the OSG solution with the code you pasted above; it builds and runs without the error you listed.
I am using OSG from the trunk - probably at least a minor version ahead of any of the prebuilds, but it should work from the prebuilds if you have your paths, etc., set right. You could, of course, also try starting from the authors' download of the examples: http://www.skew-matrix.com/OSGQSG/ - they already have the project files, etc., set up correctly.
You don't define the osg::Geometry class in your code, so the most likely problem is that you aren't properly linking to the object or library where it is defined.

how to rotate Texture2D in cocos2d-iphone

I'm making a game using cocos2d-iphone and I want to rotate a Texture2D by maybe 20 degrees. What can I do?
I tried glRotatef(20.0f, 0.0f, 0.0f, 1.0f), but it doesn't work.
So, any ideas?
Use the CCSprite rotation property:
CCSprite* sprite = [CCSprite spriteWithTexture:texture];
sprite.position = ccp(100,100);
sprite.rotation = 20;
[scene addChild:sprite];