How to test functions whose return values take manual effort to calculate - unit-testing

Assume we have the following pseudo-code:
class Position {
    int x, y
}
class Body {
    Body parent;
    Position start; // always initialized relative to the parent's
}
class OrbitingBody extends Body {
    int angularVelocity; // angles travelled per day
    int radius; // radius of the orbit, e.g. Earth's orbit radius around the Sun

    // returns this body's position after 'days' days, relative to its parent
    Position getRelativePosition(int days) {
        totalAngles = self.angularVelocity * days
        roundedAngle = totalAngles % 360
        return Math.polar2Cartesian(roundedAngle, self.radius)
    }

    // returns the position relative to the absolute parent, i.e. the Sun
    Position getAbsolutePosition(int days) {
        position = self.getRelativePosition(days)
        if (self.parent) {
            parentPosition = parent.getAbsolutePosition(days)
            // Math.relativePosition converts the position with respect to the parent's
            position = Math.relativePosition(position, parentPosition)
        }
        return position
    }
}
class Sun extends Body {
    // position will be (0,0)
}
class Planet extends OrbitingBody {
    // parent will be Sun
}
class Moon extends OrbitingBody {
    // parent will be a Planet
}
The above classes describe a basic model of our solar system.
The method getAbsolutePosition finds a body's position with respect to the Sun. For a planet this is straightforward.
For a moon there is a complication, because getAbsolutePosition must return the position relative to the Sun, while getRelativePosition always returns the position relative to the body's parent, i.e. for a moon it returns the position with respect to Earth.
My objective here is to unit test the getAbsolutePosition.
If I have to test this for a moon, I have to work out the expected return values for a bunch of test cases, which requires quite some manual effort. If the math logic ever changes, I would have to work out the updated expected values manually all over again.
What are good software engineering practices to test functions like these?

You are correct in your assumption that tests which require "heavy computational work" to determine the "expected" values to check your actual results against would really suffer if base assumptions change and the expected results change as a consequence.
I see one option here: if possible, compute the expected test results too, but don't code that yourself. If you write both the test code and the production code, chances are you will make the same mistakes twice. So even if you try to implement the required equations "in a different way", having the same person write the test and the production code isn't exactly ideal.
Thus: if possible, find somebody else to write the computational logic used within your test cases. That code can be simplified somewhat - those "generators" don't need to be perfect, but they should give correct results for the corner cases you will actually use them for.

I often resolve these situations by writing the implementation first, then using it to calculate the values for the test inputs, and then only verifying them by hand, which is usually a tiny bit easier than calculating them from scratch (although there is a risk of a bug in the code leaking into the expected values if my verification fails to catch it).
Sometimes, to get at least some separation from the implementation code, I do the math in a LibreOffice Calc sheet. I'm using the same formulas, but written a second time and spread across cells, so I usually end up with a somewhat different structure and get to check one more time that the formulas are what I intended.
Also try to decompose the complex calculation into several simpler steps, so you can mostly unit test the simple operations and treat the whole calculation more like an integration test, covering only a few cases. If the internal logic is adjusted later, it may break the complex tests, but the tests for the simple operations that didn't change will keep working. A sketch of such a decoupled test is shown below.
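For example, the decoupled tests can feed getRelativePosition inputs whose results are obvious by hand (a quarter or half orbit), so no heavy manual calculation is needed. A minimal C++-style sketch, assuming a plain assert-based test and an OrbitingBody with the fields from the pseudo-code above:

#include <cassert>
#include <cstdlib>

// Hypothetical test for the simple operation only (getRelativePosition),
// using inputs whose expected values are trivial to verify by hand.
void testRelativePositionSimpleCases() {
    OrbitingBody body;
    body.angularVelocity = 1;   // 1 degree per day
    body.radius = 100;

    // After 90 days the body has travelled 90 degrees: expect roughly (0, 100).
    Position p = body.getRelativePosition(90);
    assert(std::abs(p.x - 0) <= 1);
    assert(std::abs(p.y - 100) <= 1);

    // After 180 days, half an orbit: expect roughly (-100, 0).
    p = body.getRelativePosition(180);
    assert(std::abs(p.x + 100) <= 1);
    assert(std::abs(p.y - 0) <= 1);
}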
If this were real-world astronomy (it looks like a simplified 2D space model, probably for some interactive animation), you might be able to find example data already computed by others, in a book or elsewhere, and use those values; that helps a lot (especially for finding bugs in other people's calculations :)).
And if you don't need super-accurate results, but just need to know that the Moon was on Earth's orbit and roughly towards the Sun, then you can test the results against expected values guessed by hand, with a big margin for error (like 1/4 of the orbit distance, etc.).

Related

Multiple instances of btDefaultMotionState, all ignored, but one

To summarize the problem(s):
I have two bodies in my world so far, one being the ground, the other one being a falling box called "fallingStar".
1) I do not understand why my bullet world is not aligned with my drawn world unless I set an offset of btVector3(2,2,2) to the (btDefault)MotionState.
There is no fancy magic going on anywhere in the code that would explain the offset. Or at least I could not find any reason, not in the shaders, not anywhere.
2) I expected to be able to use multiple instances of btDefaultMotionState, to be precise, I wanted to use one instance for the falling entity and place it somewhere above the ground and then create another instance for the ground that should simply be aligned with my graphics-ground, ever unmoving.
What I am experiencing in regard to 2) is that, for whatever reason, the btDefaultMotionState instance for the falling entity always also influences the one for the ground, without any reference between them.
Now to the code:
Creation of the fallingBox:
btCollisionShape *fallingBoxShape = new btBoxShape(btVector3(1,1,1));
btScalar fallingBoxMass = 1;
btVector3 fallingBoxInertia(0,0,0);
fallingBoxShape->calculateLocalInertia(fallingBoxMass, fallingBoxInertia);
// TODO this state somehow defines where exactly _ALL_ of the physicsWorld is...
btDefaultMotionState *fallMotionState = new btDefaultMotionState(btTransform(btQuaternion(0,0,0,1), btVector3(2,2,2)));
//btDefaultMotionState *fallMotionState = new btDefaultMotionState();
btRigidBody::btRigidBodyConstructionInfo fallingBoxBodyCI(fallingBoxMass, fallMotionState, fallingBoxShape, fallingBoxInertia);
/*btTransform initialTransform;
initialTransform.setOrigin(btVector3(0,5,0));*/
this->fallingBoxBody = new btRigidBody(fallingBoxBodyCI);
/*fallMotionState->setWorldTransform(initialTransform);
this->fallingBoxBody->setWorldTransform(initialTransform);*/
this->physicsWorld->addBody(*fallingBoxBody);
Now the interesting parts to me are the necessary offset of btVector3(2,2,2) to align it with my drawn world and this:
btTransform initialTransform;
initialTransform.setOrigin(btVector3(0,5,0));
this->fallingStarBody = new btRigidBody(fallingStarBodyCI);
fallMotionState->setWorldTransform(initialTransform);
If I re-enable this part of the code, ALL the bodies again show an offset, but NOT just 5 up, which I could somehow comprehend if for whatever reason the worldTransform affected every entity, but about 2,2,2 off... which I cannot grasp at all.
I guess that this line is useless:
fallMotionState->setWorldTransform(initialTransform); as it does not change anything whether it's there or not.
Now to the code of the ground creation:
btCompoundShape *shape = new btCompoundShape();
... just some logic, nothing to do with bullet
btTransform transform;
transform.setIdentity();
transform.setOrigin(btVector3(x + (this->x * Ground::width),
y + (this->y * Ground::height),
z + (this->z * Ground::depth)));
btBoxShape *boxShape = new btBoxShape(btVector3(1,0,1)); // flat surface, no box
shape->addChildShape(transform, boxShape);
(this portion just creates a compoundshape for each surface tile :)
btRigidBody::btRigidBodyConstructionInfo info(0, nullptr, shape);
return new btRigidBody(info);
Here I purposely set the motionstate to nullptr, but this doesn't change anything.
Now I really am curious... I thought maybe the implementation of btDefaultMotionState is a singleton, but it doesn't look so, so... why the hell is setting the motionState of one body affecting the whole world?
Bullet is a good library, but few people dedicate time to writing good documentation for it.
To set the position of a btRigidBody, try this:
btTransform transform = body->getCenterOfMassTransform();
transform.setOrigin(aNewPosition); // <- set the orientation / position that you like
body->setCenterOfMassTransform(transform);
If your code is wrong only in the part that sets the transformation (which is my guess from skimming your code), this should solve it.
Note that this snippet works only for a dynamic body, not a static body.
About compound bodies:
Suppose a compound body where shape B contains shape C.
Setting the transformation of B works (you set it on B's body), but it does not work for C
(because C is just a shape, and transformations are only supported on bodies).
If I want to change C's transformation relative to B, I create a whole new compound shape and a new rigid body (see the sketch below). Don't forget to remove the old body and shape.
That is a library limitation.
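A rough sketch of that rebuild, assuming a btDiscreteDynamicsWorld called world; the other variable names here are illustrative, not from the original code:

// Rebuild the compound with a new relative transform for the child shape,
// then swap the rigid bodies in the world.
world->removeRigidBody(oldBody);

btCompoundShape *newCompound = new btCompoundShape();
newCompound->addChildShape(newChildTransform, childBoxShape); // new transform of C relative to B

btRigidBody::btRigidBodyConstructionInfo info(0, nullptr, newCompound); // static ground, mass 0
btRigidBody *newBody = new btRigidBody(info);
world->addRigidBody(newBody);

delete oldBody;       // remove the old body...
delete oldCompound;   // ...and the old compound shape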
P.S.
I can't answer some of your doubts/questions; this information is what I gathered after lurking on the Bullet forum for a while and testing things myself.
(I am also coding a game and a game library from scratch, using Bullet and other open-source projects.)
Edit: (about the new problem)
it just slowly falls down (along with the ground itself, which should not move as I gave it a mass of 0)
I would try to solve it in this order.
Idea A
Set mass = 0 on the compound body instead, because setting a child shape's mass has no meaning.
Idea B
First, check getCenterOfMassTransform() every time step: is it really falling?
If it is actually falling, to be sure, try dynamicsWorld->setGravity(btVector3(0,0,0));.
If that still doesn't work, try a very simple world (one simple object, no compound) and see.
Idea C (now I'm starting to get desperate)
Ensure your camera position is constant.
If the problem is still there, I think you can now create a simple test case and post it on the Bullet forum without too much effort.
Fewer lines of code = better feedback.
What you are describing is not normal bullet behavior. Your understanding of the library is correct.
What you are most likely dealing with is either a buffer overrun or a dangling pointer. The code you have posted has no obvious instance of either, so it would be coming from somewhere else in your codebase. You might be able to track that down using a well-placed memory breakpoint.
You "might" be dealing with a header/binary version inconsistency issue, but that's less likely as you would probably be seeing other major issues.
I just had the exact same type of behavior with the DebugDrawer floating on top of the world. I solved it by passing Bullet Physics the projection-view matrix alone, without the model matrix that it already has and multiplies in:
glUseProgram(shaderID);
m_MVP = m_camera->getProjectionViewMatrix();
glUniformMatrix4fv(shaderIDMVP, 1, GL_FALSE, &m_MVP[0][0]);
if (m_dynamicWorld) m_dynamicWorld->debugDrawWorld();

Cocos2D BezierBy with increasing speed over time

I'm pretty new to C++/Cocos2d, but I've been making pretty good progress. :)
What I want to do is animate a coin 'falling off' the screen after a player gets it. I've managed to successfully implement it 2 different ways, but each way has major downsides.
The goal: After a player gets a coin, the coin should 'jump', then 'fall' off of the screen. Ideally, the coin acts as if acted upon by gravity, so it jumps up with a fast speed, slows down to a stop, then proceeds to go downward at an increasing rate.
Attempts so far:
void Coin::tick(float dt) {
velocityY += gravity * dt;
float newX = coin->getPositionX() + velocityX;
float newY = coin->getPositionY() + velocityY;
coin->setPosition(newX, newY);
// using MoveBy(dt, Vec2(newX, newY)) has same result
}
// This is run on every 'update' of the main game loop.
This method does exactly what I would like it to do as far as movement, however, the frame rate gets extremely choppy and it starts to 'jump' between frames, sometimes quite significant distances.
ccBezierConfig bz;
bz.controlPoint_1 = Vec2(0, 0);
bz.controlPoint_2 = Vec2(20, 50); // These are just test values. Will normally be randomized to a degree.
bz.endPosition = Vec2(100, -2000);
auto coinDrop = BezierBy::create(2, bz);
coin->runAction(coinDrop);
This one has the benefit of 'perfect' framerate, where there is no choppiness whatsoever, however, it moves at a constant rate which ruins the experience of it falling and just makes it look like it's arbitrarily moving along some set path. (Which, well, it is.)
Has anybody run into a similar situation or know of a fix? Either to better handle the frame rate of the first approach (MoveBy/To don't work - still has the choppy effect), or to programmatically set the speeds of the second one (change speeds going to/from certain points on the curve).
Another idea I've had is to use a number of different MoveBy actions with different speeds, but that would have awkward 'pointy' curves and awkward changes in speed, so not really a solution.
Any ideas/help are/is greatly appreciated. :)
Yes, I have run into a similar situation. This is where 'easing' comes in handy. There are many built-in easing functions that you can use, such as EaseIn or EaseOut. So your new code would look something like:
coin->runAction(cocos2d::EaseBounceOut::create(coinDrop));
This page shows the graphs for several common easing methods:
http://cocos2d-x.org/docs/programmers-guide/4/index.html
For your purposes (increasing speed over time) I would recommend trying the 'EaseIn' method.
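For the falling part specifically, wrapping the bezier action in EaseIn makes it start slowly and accelerate along the path; a minimal sketch, where the rate value of 2.0f is just an illustrative choice:

// Same bezier path as before, but eased so the coin speeds up over time.
ccBezierConfig bz;
bz.controlPoint_1 = Vec2(0, 0);
bz.controlPoint_2 = Vec2(20, 50);
bz.endPosition = Vec2(100, -2000);

auto coinDrop = BezierBy::create(2, bz);
auto easedDrop = cocos2d::EaseIn::create(coinDrop, 2.0f); // rate > 1 => slow start, fast finish
coin->runAction(easedDrop);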

Getting the velocity vector from position vectors

I looked at a bunch of similar questions, and I cannot seem to find one that particularly answers my question. I am coding a simple 3d game, and I am trying to allow the player to pick up and move entities around my map. I essentially want to get a velocity vector that will "push" the physics object a distance from the player's eyes, wherever they are looking. Here's an example of this being done in another game (the player is holding a chair entity in front of his eyes).
To do this, I find out the player's eye angles, then get the forward vector from the angles, then calculate the velocity of the object. Here is my working code:
void Player::PickupOtherEntity( Entity& HoldingEntity )
{
QAngle eyeAngles = this->GetPlayerEyeAngles();
Vector3 vecPos = this->GetEyePosition();
Vector3 vecDir = eyeAngles.Forward();
Vector3 holdingEntPos = HoldingEntity.GetLocation();
// update object by holding it a distance away
vecPos.x += vecDir.x * DISTANCE_TO_HOLD;
vecPos.y += vecDir.y * DISTANCE_TO_HOLD;
vecPos.z += vecDir.z * DISTANCE_TO_HOLD;
Vector3 vecVel = vecPos - holdingEntPos;
vecVel = vecVel.Scale(OBJECT_SPEED_TO_MOVE);
// set the entity's velocity as to "push" it to be in front of the player's eyes
// at a distance of DISTANCE_TO_HOLD away
HoldingEntity.SetVelocity(vecVel);
}
All that is great, but I want to convert my math so that I can apply an impulse. Instead of setting a completely new velocity to the object, I want to "add" some velocity to its existing velocity. So supposing I have its current velocity, what kind of math do I need to "add" velocity? This is essentially a game physics question. Thank you!
A very simple implementation could be like this:
velocity(t+delta) = velocity(t) + delta * acceleration(t)
acceleration(t) = force(t) / mass of the object
velocity, acceleration and force are vectors; t, delta and mass are scalars.
This only works reasonably well for small and equally spaced deltas. What you are essentially trying to achieve with this is a simulation of bodies using classical mechanics.
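Applied to the function from the question, "adding" velocity is just one explicit Euler step of the formula above: scale the push by the frame time and add it to the entity's current velocity. A hedged sketch - it assumes a GetVelocity() getter and a component-wise operator+ on Vector3, neither of which appears in the original code:

// One explicit Euler step that adds to the existing velocity instead of overwriting it.
void Player::PushOtherEntity(Entity& HoldingEntity, float dt)
{
    QAngle eyeAngles = this->GetPlayerEyeAngles();
    Vector3 vecPos = this->GetEyePosition();
    Vector3 vecDir = eyeAngles.Forward();

    // Target point a fixed distance in front of the player's eyes.
    vecPos.x += vecDir.x * DISTANCE_TO_HOLD;
    vecPos.y += vecDir.y * DISTANCE_TO_HOLD;
    vecPos.z += vecDir.z * DISTANCE_TO_HOLD;

    // Treat the offset to that target as an acceleration towards it.
    Vector3 accel = (vecPos - HoldingEntity.GetLocation()).Scale(OBJECT_SPEED_TO_MOVE);

    // velocity(t + dt) = velocity(t) + dt * acceleration(t)
    Vector3 newVel = HoldingEntity.GetVelocity() + accel.Scale(dt);
    HoldingEntity.SetVelocity(newVel);
}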
An Impulse is technically F∆t for a constant F. Here we might want to assume a∆t instead because mass is irrelevant. If you want to animate an impulse you have to decide what the change in velocity should be and how long it needs to take. It gets complicated real fast.
To be honest, an impulse isn't really the right tool here. It would be preferable to set a constant pick_up_velocity (people don't tend to pick things up using an impulse), and update the position by velocity.y each step until the object reaches the correct level:
while(entPos.y < holdingEntPos.y)
{
entPos.y += pickupVel.y;
//some sort of short delay
}
And as for floating in front of the player's eyes, set an EyeMovementEvent of some sort that also sends the correct change in position to any entity the player is holding.
And if I missed something and that's what you are already doing, remember that when humans apply an impulse, it is generally really high acceleration for a really short time, much less than a frame. You wouldn't see it in-game anyways.
Basic Newtonian/D'Alembert physics dictates:
derivative(position) = velocity
derivative(velocity) = acceleration
and also backwards:
integrate(acceleration) = velocity
integrate(velocity) = position
So for your engine you can use:
Rectangle summation instead of integration (a numerical solution of the integral). Define a time constant dt [seconds], which is the interval between updates (a timer or 1/fps). The update code (which must be called periodically, every dt) is:
vx+=ax*dt;
vy+=ay*dt;
vz+=az*dt;
x+=vx*dt;
y+=vy*dt;
z+=vz*dt;
where:
a{x,y,z} [m/s^2] is the current acceleration (in your case the direction vector scaled so that a = Force/mass)
v{x,y,z} [m/s] is the current velocity
x,y,z [m] is the current position
These values have to be initialized: a and v to zero, and x,y,z to the initial position.
All objects/players... have their own set of these variables.
A full stop is done by v = 0; a = 0;
Objects are driven only by changing a.
In case of collision, mirror the v vector about the collision normal,
and maybe multiply it by some k < 1.0 (0.95 for example) to account for energy loss on impact (see the sketch below).
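A small sketch of that mirroring, assuming n{x,y,z} is the unit-length collision normal and k is the damping factor mentioned above:

// Reflect the velocity about the collision normal n (must be unit length)
// and damp it by k to model energy loss: v' = k * (v - 2*(v.n)*n)
float dot = vx*nx + vy*ny + vz*nz;
vx = k * (vx - 2.0f*dot*nx);
vy = k * (vy - 2.0f*dot*ny);
vz = k * (vz - 2.0f*dot*nz);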
You can add gravity or any other force field by adding a g vector:
vx+=ax*dt+gx*dt;
vy+=ay*dt+gy*dt;
vz+=az*dt+gz*dt;
You can also add friction and anything else you need.
P.S. The same goes for angles: just use angle/omega/epsilon/I instead of x/v/a/m.
To be clear, by angles I mean rotation (pitch, yaw, roll) around the mass center.

Distinguish between collision surface orientations in box2d

I've been working on an iOS project, using Cocos2D 1.0 and Box2D, and I've run into a bit of a problem.
What I need to be able to do is determine the orientation of a surface my player has hit. For example, if we have a rectangular platform, and the player collides with it, I need to know whether the player has hit the left, right, top, or bottom face of it. ALL the objects in the game are square, and the ONLY one moving is the player.
I'm currently using a b2ContactListener in Box2D (well, my own subclass of one, anyway), and have been playing around with the local normal of the manifold from the contact in BeginContact. The main problem I have is that that normal seems to be affected by the rotation of the player body (e.g. the player has rotated 90 degrees, OR the player is spinning wildly on impact - both situations are giving me trouble), and I seem to end up with ambiguity (i.e. collisions with different faces that give the same normal...) if I try to allow for that - although of course I could just be doing something horribly wrong. Now, I don't understand manifolds very well, so it's possible that my problem stems from that, or maybe I'm missing something obvious.
Any suggestions?
I would prefer to do this in the cleanest and least ugly manner possible. Bear in mind that the main categorisation I care about is "player is landing on something from above" vs "everything else", but I may end up needing the exact face as well.
If you need more information or clarification about anything, just ask.
EDIT: Just to clarify, I am aware that the normal points from A to B (in a collision between A and B) by convention in Box2D, and my code does check to see which one is the player and takes this into account before doing any calculations to determine which face has been hit.
So, I feel a little awkward about answering my own question, but apparently it's officially encouraged.
Anyway, the problem with the way I was approaching things was twofold. Firstly, I was using the contact manifold's local normal instead of the world normal. Secondly, my code for reversing the object transformations was buggy (I would never have needed to do this if I had been using the world manifold).
The world manifold takes into account object transformations and sizes and as such contains data more easily applicable to the world co-ordinate system.
By convention in Box2d, the collision normal (for both the world manifold and the contact manifold) points from A to B - this has to be taken into account for some uses, since the normal from A to B is the inverse of the normal from B to A, so you can't just assume that one body will always be A.
So, the solution is to get the world manifold for each collision, examine its normal, and then make whatever decisions you need to make.
For example, in the BeginContact method of a b2ContactListener subclass (if you have no idea what I'm talking about then check out part 2 of this tutorial):
void ContactListener::BeginContact(b2Contact* contact)
{
b2WorldManifold worldManifold;
contact->GetWorldManifold(&worldManifold); // this method calls b2WorldManifold::Initialize with the appropriate transforms and radii so you don't have to worry about that
b2Vec2 worldNormal = worldManifold.normal;
// inspect it or do whatever you want based on that...
}
Since you'll likely need to check what bodies are colliding, and which one is A and which one is B, you may want to keep a vector of structs containing the fixtures that collided (as in that tutorial) and the normal, and iterate over the vector in your tick() method or similar. (You can get these out of the contact with contact->GetFixtureA() and contact->GetFixtureB().)
Now, you could get the point data from the world manifold and make your decisions based on that, but why would you, when the normal is already available and, in this particular case, the normal (combined with which shapes it points from and to) is all that is needed?
Edit (for @iBradApps):
First, I'm assuming here that you have followed the tutorial I linked to and have a contact listener set up. If you haven't, follow it because Ray explains it in depth quite well.
Second, I want to point out that there is no absolute guarantee which object is A and which is B (it depends on what kinds of Box2D objects they are; suffice to say that if they can both move, you can't guarantee the ordering, at least as far as I know). In my case I wanted to see if the player object had hit something, so I created a class variable (b2Fixture *playerF) in my contact listener that stores a reference to the player fixture, so I could determine whether fixture A or fixture B was the player.
You asked about detecting a collision where something else collided with the top of B. Something like the following should work, although I haven't had a chance to test it for you:
In your ContactListener.h:
public:
b2Fixture *playerF;
// along with the vector etc mentioned in Ray's tutorial
// and anything else you want
When you make the ContactListener in your init() (assuming you called it _contactListener):
_contactListener->playerF = playerFixture; // or whatever you called the player body fixture
BeginContact method:
void ContactListener::BeginContact(b2Contact* contact)
{
b2WorldManifold worldManifold;
contact->GetWorldManifold(&worldManifold); // this method calls b2WorldManifold::Initialize with the appropriate transforms and radii so you don't have to worry about that
b2Vec2 worldNormal = worldManifold.normal; // this points from A to B
if (playerF == contact->GetFixtureA()) {
// note that +ve y-axis is "up" in Box2D but down in OpenGL and Cocos2D
if (worldNormal.y < -0.707) { // use a constant for performance reasons
// if the y component is less than -1/sqrt(2) (approximately -0.707),
// then the normal points more downwards than across, so A must be hitting B
// from roughly above. You could tune this more towards the top by increasing
// towards -1 if you want but it worked fine for me like this last time and
// you might run into issues with missing hits
NSLog(#"Player (A) hit B roughly on the top side!");
// here you can set any class variables you want to check in
// your update()/tick(), such as flags for whether the player has died from
// falling or whatever
}
} else if (playerF == contact->GetFixtureB()) {
if (worldNormal.y > 0.707) {
NSLog(#"Player (B) hit A roughly on the top side!");
}
} else {
// it's something else hitting something else and we don't care about it
}
}
As for doing it in your tick() method instead, yes, you can. I actually did all my stuff in PostSolve in the contact listener because I needed to know how hard the player hit, but all I cared about beyond that was whether the player had hit hard enough to kill them, so I didn't need or want to iterate over all the contacts in my tick() - I just set a flag in the contact listener that said the player had suffered a fatal impact.
If you want to do this all in the update method, then, starting from what Ray has, add a b2Vec2 to the MyContact struct; in BeginContact, store the two fixtures (like Ray does), and also get the collision normal (as I do) and store it too.
The modified MyContact struct:
struct MyContact {
b2Fixture *fixtureA;
b2Fixture *fixtureB;
b2Vec2 normal;
bool operator==(const MyContact& other) const
{
return (fixtureA == other.fixtureA) && (fixtureB == other.fixtureB);
}
};
The new BeginContact method:
void MyContactListener::BeginContact(b2Contact* contact) {
b2WorldManifold worldManifold;
contact->GetWorldManifold(&worldManifold);
MyContact myContact = { contact->GetFixtureA(), contact->GetFixtureB(), worldManifold.normal };
_contacts.push_back(myContact);
}
This will give you all the information you need to do the checking I initially described in your tick().
Edit again:
Your tick() method might contain something like this if you want to do the processing there. This assumes you have called the player fixture (or the ball fixture, like in the tutorial, or whatever it is you're interested in) _playerFixture, that you've got a contact listener with the same name as in the tutorial, that you added the b2Vec2 normal to the MyContact struct, that you are adding contacts to the vector (as above) in BeginContact, and that you are deleting contacts from the vector in EndContact (as shown in the tutorial - it's probably fine as is):
std::vector<MyContact>::iterator pos;
for(pos = _contactListener->_contacts.begin(); pos != _contactListener->_contacts.end(); ++pos) {
MyContact contact = *pos;
if (_playerFixture == contact.fixtureA && contact.normal.y < -0.707) {
NSLog(#"Player (A) hit B roughly on the top side!");
} else if (_playerFixture == contact.fixtureB && contact.normal.y > 0.707) {
NSLog(#"Player (B) hit A roughly on the top side!");
} else {
// it's something else hitting something else and we don't care about it
}
}

2D Movement Theory

I have recently been getting into OpenGL/SDL and playing around with objects in 2D/3D space and, as I'm sure most newbies to this area do, I have a few queries about the 'best' way to do something. I quote 'best' because I'm assuming there isn't a best way; it's personal preference.
So, I have an entity, a simple GL_QUAD, which I want to move around. I have keyboard events set up; I can get the key press/release events, no problem.
I have an entity class, which is my GL_QUAD, pseudo implementation....
class Entity
{
void SetVelocity(float x, float y);
}
I then have this event handler code...
if theEvent.Key = UPARROW AND theEvent.State = PRESSED
Entity.SetVelocity(0.0f, -1.0f);
else if theEvent.Key = UPARROW AND theEvent.State = RELEASED
Entity.SetVelocity(0.0f, 0.0f);
My question is: is this the best way to move my entity? This has led me to thinking that we could make it a little more complex by having methods for adjusting the X/Y velocity separately, since SetVelocity would forget my X velocity if I started moving left, so I could never travel diagonally.
For Example...
class Entity
{
void SetXVelocity(float x);
void SetYVelocity(float y);
}
if theEvent.Key = UPARROW AND theEvent.State = PRESSED
Entity.SetYVelocity(-1.0f);
else if theEvent.Key = UPARROW AND theEvent.State = RELEASED
Entity.SetYVelocity(0.0f);
if theEvent.Key = LEFTARROW AND theEvent.State = PRESSED
Entity.SetXVelocity(-1.0f);
else if theEvent.Key = LEFTARROW AND theEvent.State = RELEASED
Entity.SetXVelocity(0.0f);
This way, if I have an XVelocity and I then press the UPARROW, I will still have my XVelocity, as well as a new YVelocity, thus moving diagonally.
Is there a better way? Am I missing something very simple here?
I am using SDL 1.2, OpenGL, C++. Not sure if there is something in SDL/OpenGL which would help?
Thanks in advance for your answers.
The question is really general since it depends on how you want to model the movement of your objects in your world.
Usually every object has its own velocity, which is calculated based on an acceleration and capped at a maximum. This means that a key press alters the acceleration of the object for that frame, which is then integrated and applied to the object's current velocity.
This is done in an update phase that goes through all the objects and calculates each velocity change from the object's acceleration. That way you don't modify the velocity yourself; you let your engine do the calculations based on the state of every object. A sketch of such an update phase is shown below.
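A minimal sketch of that update phase, assuming an illustrative Entity2D struct of my own (not the Entity class from the question) and a per-frame dt:

#include <algorithm>
#include <vector>

struct Entity2D {
    float x = 0, y = 0;    // position
    float vx = 0, vy = 0;  // velocity
    float ax = 0, ay = 0;  // acceleration, set by the input handling code
};

const float MAX_SPEED = 5.0f;

// Called once per frame for all entities; input handling only touches ax/ay.
void updateEntities(std::vector<Entity2D>& entities, float dt) {
    for (Entity2D& e : entities) {
        e.vx = std::max(-MAX_SPEED, std::min(MAX_SPEED, e.vx + e.ax * dt));
        e.vy = std::max(-MAX_SPEED, std::min(MAX_SPEED, e.vy + e.ay * dt));
        e.x += e.vx * dt;
        e.y += e.vy * dt;
    }
}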
Acceleration is applied over a period of time, so in the example by @jack you would apply an acceleration of 10 m/s^2 over a time period of one second.
You should also modify your application to make it time-based, not frame-based.
Have a look at this basic game physics introduction, and I would also really recommend the GameDev.net Physics Tutorials.
I assume the way you want movement to work is that you want the player to move only when a key is held.
In short: your solution is fine.
Some potential gotchas to take into consideration: what happens if both left and right are pressed?
Well, what you describe here is a simple finite state machine. You have the different directions in which you can move (plus no movement at all) as the states, and the key-events as transitions. This can usually be implemented quite well using the state pattern, but this is often quite painful in C++ (lots of boilerplate code), and might be over the top for your scenario.
There are of course other ways to represent speed and direction of your entity, e.g. as a 2D-vector (where the length gives the speed). This would enable you to easily represent arbitrary directions and velocities.
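For instance, here is a minimal sketch of the 2D-vector approach (illustrative names; the key states are assumed to be booleans updated by the SDL event loop), which also handles diagonal movement and the both-keys-pressed case naturally:

#include <cmath>

// Key states updated from the SDL event handler (true while the key is held down).
bool upHeld = false, downHeld = false, leftHeld = false, rightHeld = false;

const float SPEED = 1.0f;

// Called once per frame: build a direction vector from the key states,
// normalize it so diagonal movement isn't faster, then scale by SPEED.
void updateVelocity(Entity& entity) {
    float dx = (rightHeld ? 1.0f : 0.0f) - (leftHeld ? 1.0f : 0.0f);
    float dy = (downHeld ? 1.0f : 0.0f) - (upHeld ? 1.0f : 0.0f);

    float len = std::sqrt(dx * dx + dy * dy);
    if (len > 0.0f) {
        dx = dx / len * SPEED;
        dy = dy / len * SPEED;
    }
    entity.SetVelocity(dx, dy); // opposing keys cancel each other out to zero
}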