Creating and storing a 2D/3D map for an autonomous robot - C++

I have built an autonomous RC car that is capable of tracking its location and scanning its environment for obstacles. I am stuck, however, on how to store/create a map of its environment. The environment is not fixed in size; as the robot explores, it may uncover new areas. My background is in navigation, so that part, and determining where obstacles are, is the easy part (for me). The hard part is how to store/access the data.
Originally I was going to use a 2D array/matrix with a resolution of 2-5 cm, i.e. each cell is 2-5 cm wide when projected onto the ground. This, however, could eat up a lot of memory (it is an embedded system, so I have limited resources: 512 MB of RAM right now, possibly less when the prototype is finished). As well, I will eventually be moving the system to a quadcopter, so I will be working in 3D.
The next solution was to create an obstacle object, which would have a position. However, I was unsure how to: 1. keep track of the size and orientation of the object, i.e. how to tell if the object is a wall or just a table leg (can't avoid a wall, can go around a leg), and 2. efficiently search/tell whether an obstacle is in the robot's way.
I am using C++, with a camera and an ultrasonic sensor to detect the environment. As mentioned, I am stuck on how to store the environment efficiently. I would like to keep the environment information, whether saved to a file or sent to a server, so I can later use/access the map. Would anyone have any suggestions on what to try, or where to even begin looking? This area is all new to me.

You can use an R-tree where your initial location is the root of the tree. Each time the robot reaches a new place, you add that place to the tree.
You could also merge neighbouring parts together where possible.
From a memory point of view, you are not bounded by array/matrix limits and you store only valuable data. It is up to you to choose what to store: obstacles or free space.
Searching in that kind of structure is also efficient.
As for implementations, you could take a look at the R-tree in the Boost library.
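For instance, with Boost.Geometry's rtree (a minimal sketch; the point type, the 5 cm obstacle box and the intersection query are just illustrative choices, not a recommendation for your exact resolution):

#include <boost/geometry.hpp>
#include <boost/geometry/index/rtree.hpp>
#include <iostream>
#include <iterator>
#include <vector>

namespace bg = boost::geometry;
namespace bgi = boost::geometry::index;

int main()
{
    // 2D point in metres; a 3-dimensional point type works the same way for the quadcopter case.
    typedef bg::model::point<float, 2, bg::cs::cartesian> Point;
    typedef bg::model::box<Point> Box;

    // One box per detected obstacle region; only occupied space is stored.
    bgi::rtree<Box, bgi::quadratic<16> > obstacles;

    // Insert an obstacle roughly 5 cm wide with its corner at (1.00 m, 2.00 m).
    obstacles.insert(Box(Point(1.00f, 2.00f), Point(1.05f, 2.05f)));

    // Query: does anything intersect the robot's planned footprint?
    Box footprint(Point(0.90f, 1.90f), Point(1.10f, 2.10f));
    std::vector<Box> hits;
    obstacles.query(bgi::intersects(footprint), std::back_inserter(hits));

    std::cout << "obstacles in the way: " << hits.size() << "\n";
    return 0;
}

Saving the map for later use then amounts to writing the stored boxes out to a file or a server and re-inserting them on load.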
Later on, you could even create a graph based on your tree to be used by path-finding algorithms if you expect your robot to go from one point to another.

Related

SDL 2 collision detection

I am creating a physics simulator of sorts and I have thousands of point-like objects (single pixels) moving at the same time. The way I have this set up currently, each point moves only one pixel per frame, which makes it easy to keep track of them in a two-dimensional array and check whether they're going to collide. However, this setup doesn't permit frame-independent movement, which is necessary, because the collision detection is very slow. What is the most efficient way of doing collision detection in this case?
Okay, first things first:
On any modern OS, your app will be either
Sharing processor time with another app, or the OS itself, which is about the same thing
Doing different amounts of work at different times - like, loading assets in the background, rebuilding collision trees, or playing Pac-Man
Fighting the flying toasters
Also, you never know what kind of hardware your app, if distributed, will be running on. This entails lots of headaches, but the first and foremost is that you never know, at compile time, how much real time has elapsed between frames.
(A funny situation recently arose when a customer wanted an orbital calculation to be correct after he had closed his laptop, got on a plane, and reopened it. Easy enough to fix, but you might want to anticipate a 12 hour per frame situation.)
So, how do you deal with this?
Any framework will provide a timer of some sort. I'm not sure how SDL handles this, but typically, on Windows, you'd use GetTickCount() to get the elapsed milliseconds between frames. Each particle has a velocity, expressed in units per second. (Please use meters. Save the world the pain of Units Of user1868866).
When moving the particle,
pos += velocity * elapsed_time;
Or, as a concrete example, if I am in a car moving at 50 mph,
position += 50 miles/hr * 2 hr = 100 miles.
Doing this will solve the problem where particles are moving in frame time instead of simulation/game/real time.
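A minimal sketch of that update with SDL's own timer (SDL_GetTicks() returns milliseconds since init; the Particle struct and loop skeleton are just illustrative):

#include <SDL.h>

struct Particle {
    float x, y;    // position in world units
    float vx, vy;  // velocity in world units per second
};

void update(Particle& p, float elapsedSeconds) {
    // pos += velocity * elapsed_time, exactly as above
    p.x += p.vx * elapsedSeconds;
    p.y += p.vy * elapsedSeconds;
}

// Inside the main loop:
//
//   Uint32 last = SDL_GetTicks();
//   while (running) {
//       Uint32 now = SDL_GetTicks();
//       float elapsed = (now - last) / 1000.0f;   // milliseconds -> seconds
//       last = now;
//       for (Particle& p : particles) update(p, elapsed);
//       // ... collision checks, rendering ...
//   }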
Now, the collision-detection problem. Since we're working in 2D here...
With more than a handful of objects, you can't compare every object to every other object in a reasonable amount of time to see if they collide.
So, we have fancy things like Quadtrees. The idea is to partition your space recursively into quadrants, each of which is really a data structure that somehow "contains" all of the items that fall within its bounds. Then, you only have to check for collision between items within the same quadtree node.
Implementation of a quadtree for your specific application is way, way too long to be appropriate for an SO answer, but I encourage you to research it, try to implement it, and come back here with any issues you have. Another great resource is gamedev.stackexchange.com, which is more game/graphics-focused than SO.
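That said, a bare-bones sketch can show the shape of the structure (fixed capacity per node, points only, no removal or rebalancing; all of these are simplifying assumptions):

#include <cmath>
#include <cstddef>
#include <memory>
#include <vector>

struct Point { float x, y; };

struct AABB {                        // axis-aligned bounding box: centre plus half-width
    float cx, cy, half;
    bool contains(const Point& p) const {
        return p.x >= cx - half && p.x <= cx + half &&
               p.y >= cy - half && p.y <= cy + half;
    }
    bool intersects(const AABB& o) const {
        return std::abs(cx - o.cx) <= half + o.half &&
               std::abs(cy - o.cy) <= half + o.half;
    }
};

class Quadtree {
    static const std::size_t kCapacity = 8;    // max points stored before splitting
    AABB bounds_;
    std::vector<Point> points_;
    std::unique_ptr<Quadtree> child_[4];        // NW, NE, SW, SE

    void split() {
        float h = bounds_.half / 2;
        child_[0].reset(new Quadtree({bounds_.cx - h, bounds_.cy - h, h}));
        child_[1].reset(new Quadtree({bounds_.cx + h, bounds_.cy - h, h}));
        child_[2].reset(new Quadtree({bounds_.cx - h, bounds_.cy + h, h}));
        child_[3].reset(new Quadtree({bounds_.cx + h, bounds_.cy + h, h}));
    }

public:
    explicit Quadtree(const AABB& b) : bounds_(b) {}

    bool insert(const Point& p) {
        if (!bounds_.contains(p)) return false;
        if (!child_[0] && points_.size() < kCapacity) { points_.push_back(p); return true; }
        if (!child_[0]) split();
        for (auto& c : child_) if (c->insert(p)) return true;
        return false;
    }

    // Collect every stored point inside `range`; only quadrants whose bounds
    // overlap the range are visited, which is what keeps the search cheap.
    void query(const AABB& range, std::vector<Point>& out) const {
        if (!bounds_.intersects(range)) return;
        for (const Point& p : points_) if (range.contains(p)) out.push_back(p);
        if (child_[0]) for (const auto& c : child_) c->query(range, out);
    }
};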
Good luck.

Game engines: What are scene graphs?

I've started reading into the material on Wikipedia, but I still feel like I don't really understand how a scene graph works and how it can provide benefits for a game.
What is a scene graph in the game engine development context?
Why would I want to implement one for my 2D game engine?
Does the usage of a scene graph stand as an alternative to a classic entity system with a linear entity manager?
What is a scene graph in the game engine development context?
Well, it's some code that actively sorts your game objects in the game space in a way that makes it easy to quickly find which objects are around a given point.
That way, it's easy to:
quickly find which objects are in the camera view (and send only them to the graphics card, making rendering very fast)
quickly find objects near the player (and apply collision checks only to those)
And other things. It's about allowing quick searches in space. It's called "space partitioning", and it's about divide and conquer.
Why would I want to implement one for my 2D game engine?
That depends on the type of game, more precisely on the structure of your game space.
For example, a game like Zelda might not need such techniques if it's fast enough to test collisions between all the objects on screen. However, that can easily become really, really slow, so most of the time you at least set up a scene graph (or a space partition of some kind) to know what is around each moving object and test collisions only on those objects.
So, it depends. Most of the time it's required for performance reasons, but the implementation of your space partitioning depends entirely on the way your game space is structured.
Does the usage of a scene graph stand as an alternative to a classic entity system with a linear entity manager?
No.
Whatever way you manage your game entities' lifetimes, the space partition/scene graph is there only to allow you to quickly search for objects in space, no more, no less. Most of the time it will be an object with slots corresponding to different parts of the game space, and in those slots are the objects located in those parts.
It can be flat (like a 2D screen divided into 2 or 4), or it can be a tree (like a binary tree or a quadtree, or any other kind of tree), or any other sorting structure that limits the number of operations you have to execute to get some space-related information.
Note one thing:
In some cases, you even need separate space-partitioning systems for different purposes. Often a "scene graph" is about rendering, so it's optimized in a way that depends on the player's point of view, and its purpose is to allow quick gathering of a list of objects to render and send to the graphics card. It's not really suited to searching for objects around another object, which makes it hard to use for precise collision detection, such as when you use a physics engine. So, to help, you might have a separate space-partitioning system just for physics purposes.
To give an example, say I want to make a "bullet hell" game, where there are a lot of bullets that the player's spaceship has to dodge in a very precise way. To achieve enough rendering and collision-detection performance, I need to know:
when bullets appear in the screen space
when bullets leave the screen space
when the player enters in collision with bullets
when the player enters in collision with monsters
So I recursively cut the 2D screen into 4 parts, which gives me a quadtree. The quadtree is updated each game tick, because everything moves constantly, so I have to keep track of each object's (spaceship, bullet, monster) position in the quadtree to know which one is in which part of the screen.
Achieving 1. is easy: just enter the bullet into the system.
To achieve 2., I keep a list of the leaves of the quadtree (square sections of the screen) that lie on the border of the screen. Those leaves contain the ids/pointers of the bullets that are near the border, so I just have to check whether they are moving out to know when I can stop rendering them and managing their collisions. (It might be a bit more complex, but you get the idea.)
To achieve 3. and 4., I need to retrieve the objects that are near the player's spaceship. So first I get the leaf where the player's spaceship is, and I get all of the objects in it. That way I only test collisions between the player's spaceship and objects that are around it, not all objects. (It IS a bit more complex, but you get the idea.)
That way I can make sure that my game will run smoothly even with thousands of bullets constantly moving.
In other types of game space, other types of space partitioning are required. Typically, kart/racing games will have a "tunnel" scene graph, because visually the player will only see things along the road, so you just have to check where he is on the road to retrieve all the visible objects around him in the "tunnel".
What is a scene graph? A Scene graph contains all of the geometry of a particular scene. They are useful for representing translations, rotations and scales (along with other affine transformations) of objects relative to each other.
For instance, consider a tank (the type with tracks and a gun). Your scene may have multiple tanks, but each one is oriented and positioned differently, with its turret rotated to a different azimuth and its gun at a different elevation. Rather than figuring out exactly how the gun should be positioned for each tank, you can accumulate affine transformations as you traverse your scene graph to position it properly. This makes computing such things much easier.
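A tiny sketch of that accumulation in 2D (the struct names, and restricting the transform to rotation plus translation, are simplifications for illustration):

#include <cmath>
#include <vector>

struct Transform2D {              // rotation plus translation, enough to show the idea
    float angle = 0, x = 0, y = 0;
};

// Compose a child's local transform on top of its parent's world transform.
Transform2D compose(const Transform2D& parent, const Transform2D& local) {
    float c = std::cos(parent.angle), s = std::sin(parent.angle);
    return { parent.angle + local.angle,
             parent.x + c * local.x - s * local.y,
             parent.y + s * local.x + c * local.y };
}

struct Node {
    Transform2D local;            // relative to the parent
    std::vector<Node> children;   // e.g. turret under hull, gun under turret
};

// Walk the graph, accumulating transforms; `world` is where each node really is.
void draw(const Node& node, const Transform2D& parentWorld) {
    Transform2D world = compose(parentWorld, node.local);
    // renderModel(node, world);  // placeholder for the actual draw call
    for (const Node& child : node.children)
        draw(child, world);
}

Each hull, turret and gun only stores its transform relative to its parent; the traversal produces the final world transforms.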
2D Scene Graphs: Use of a scene graph for 2D may be useful if your content is sufficiently complex and if your objects have a number of sub components not rigidly fixed to the larger body. Otherwise, as others have mentioned, it's probably overkill. The complexity of affine transformations in 2D is quite a bit less than in the 3D case.
Linear Entity Manager: I'm not clear on exactly what you mean by a linear entity manager, but if you are referring to just keeping track of where things are positioned in your scene, then scene graphs can make things easier when there is a high degree of spatial dependence between the various objects or sub-objects in your scene.
A scene graph is a way of organizing all objects in the environment. Usually care is taken to organize the data for efficient rendering. The graph, or tree if you like, can show ownership of sub objects. For example, at the highest level there may be a city object, under it would be many building objects, under those may be walls, furniture...
For the most part though, these are only used for 3D scenes. I would suggest not going with something that complicated for a 2D scene.
There appear to be quite a few different philosophies on the web as to what the responsibilities of a scenegraph are. People tend to put in a lot of different things, like geometry, cameras, light sources, game triggers, etc.
In general, I would describe a scenegraph as a description of a scene, composed of one or more datastructures containing the entities present in the scene. These datastructures can be of any kind (array, tree, Composite pattern, etc.) and can describe any property of the entities or any relationship between the entities in the scene.
These entities can be anything, ranging from solid drawable objects to collision meshes, cameras and light sources.
The only real restriction I have seen so far is that people recommend keeping game-specific components (like game triggers) out, to prevent dependency problems later on. Such things would have to be abstracted away to, say, "LogicEntity", "InvisibleEntity" or just "Entity".
Here are some common uses of and datastructures in a scenegraph.
Parent/Child relationships
The way you could use a scenegraph in a game or engine is to describe parent/child relationships between anything that has a position, be it a solid object, a camera or anything else. Such a relationship means that the position, scale and orientation of a child are relative to those of its parent. This allows you to make the camera follow the player, or to have a light source follow a flashlight object. It also allows you to make things like a solar system, in which you describe the positions of planets relative to the sun and the positions of moons relative to their planets, if that is what you're making.
Also, things specific to some system in your game/engine can be stored in the scenegraph. For example, as part of a physics engine you may have defined simple collision meshes for solid objects whose geometry is too complex to test collisions against. You could put these collision meshes (I'm sure they have another name, but I forgot it :P) in your scenegraph and have them follow the objects they model.
Space-partitioning
Another possible datastructure in a scenegraph is some form of space partitioning, as stated in other answers. This allows you to perform fast queries on the scene, like clipping any object that isn't in the viewing frustum, or efficiently filtering out objects that need collision checking. You can also allow client code (in case you're writing an engine) to perform custom queries for whatever purpose, so that client code doesn't have to maintain its own space-partitioning structures.
I hope I have given you, and other readers, some ideas of how you can use a scenegraph and what you could put in it. I'm sure there are a lot of other ways to use a scenegraph, but these are the things I came up with.
In practice, scene objects in videogames are rarely organized into a graph that is "walked" as a tree when the scene is rendered. A graphics system typically expects one big array of stuff to render, and this big array is walked linearly.
Games that require geometric parenting relationships, such as those with people holding guns or tanks with turrets, define and enforce those relationships on an as-needed basis outside of the graphics system. These relationships tend to be only one-deep, and so there is almost never a need for an arbitrarily deep tree structure.

Improving the CamShift algorithm in OpenCV

I am using the CamShift algorithm in OpenCV for object tracking. The input is taken from a webcam and the object is tracked between successive frames. How can I make the tracking more robust? If I move the object rapidly, tracking fails. Also, when the object is not in the frame there are false detections. How do I improve this?
Object tracking is an active research area in computer vision. There are numerous algorithms to do it, and none of them work 100% of the time.
If you need to track in real time, then you need something simple and fast. I am assuming that you have a way of segmenting a moving object from the background. Then you can compute a representation of the object, such as a color histogram, and compare it to that of the object you find in the next frame. You should also check that the object has not moved too far between frames. If you want to try more advanced motion tracking, then you should look up the Kalman filter.
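A rough sketch of that per-frame histogram check in OpenCV (the hue-only histogram, the ROI arguments and the 0.7 threshold are arbitrary illustrative choices, not a complete tracker):

#include <opencv2/opencv.hpp>

// Compare the hue histogram of the current candidate region with the
// reference histogram computed when tracking started.
bool looksLikeTarget(const cv::Mat& frameHsv, const cv::Rect& candidate,
                     const cv::Mat& referenceHist)
{
    const int channels[] = {0};          // hue channel only
    const int histSize[] = {30};
    const float hueRange[] = {0, 180};
    const float* ranges[] = {hueRange};

    cv::Mat roi = frameHsv(candidate), hist;
    cv::calcHist(&roi, 1, channels, cv::Mat(), hist, 1, histSize, ranges);
    cv::normalize(hist, hist, 0, 1, cv::NORM_MINMAX);

    // Correlation close to 1 means the candidate still looks like the target.
    return cv::compareHist(referenceHist, hist, cv::HISTCMP_CORREL) > 0.7;
}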
Determining that an object is not in the frame is also a big problem. First, what kinds of objects are you trying to track? People? Cars? Dogs? You can build an object classifier, which would tell you whether or not the moving object in the frame is your object of interest, as opposed to noise or some other kind of object. A classifier can be something very simple, such as a constraint on size, or it can be very complicated. In the latter case you need to learn about features that can be computed, classification algorithms, such as support vector machines, and you would need to collect training images to train it.
In short, a reliable tracker is not an easy thing to build.
Suppose you find the object in the first two frames. From that information, you can extrapolate where you'd expect the object in the third frame. Instead of using a generic find-the-object algorithm, you can use a slower, more sophisticated (and thus hopefully more reliable) algorithm by limiting it to check in the vicinity that the extrapolation predicts. It may not be exactly where you expect (perhaps the velocity vector is changing), but you should certainly be able to reduce the area that's checked.
This should help reduce the number of times some other part of the frame is misidentified as the object (because you're looking at a smaller portion of the frame and because you're using a better feature detector).
Update the extrapolations based on what you find and iterate for the next frame.
If the object goes out of frame, you fall back to your generic feature detector, as you do with the first two frames, and try again to get a "lock" when the object returns to the view.
Also, if you can, throw as much light into the physical scene as possible. If the scene is dim, the webcam will use a longer exposure time, leading to more motion blur on moving objects. Motion blur can make it very hard for the feature detectors (though it can give you information about direction and speed).
I've found that if you expand the border of the search window in CamShift, it makes the algorithm a bit more adaptive to fast-moving objects, although it can introduce some irregularities. Try just making the border of your window 10% bigger and see what happens.
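Something along these lines, assuming a standard cv::CamShift loop; the 10% padding and the clamping to the frame are the only additions:

#include <opencv2/opencv.hpp>

// After each CamShift call, grow the track window on every side before feeding
// it back in, so a fast-moving object is less likely to escape it.
cv::Rect expandWindow(cv::Rect w, const cv::Size& frameSize, float factor = 0.1f)
{
    int dx = static_cast<int>(w.width  * factor);
    int dy = static_cast<int>(w.height * factor);
    w.x -= dx;  w.y -= dy;
    w.width  += 2 * dx;
    w.height += 2 * dy;
    // Clamp to the frame so the next CamShift call gets a valid window.
    return w & cv::Rect(0, 0, frameSize.width, frameSize.height);
}

// Usage in the tracking loop (backProj is the back-projection image):
//   cv::RotatedRect box = cv::CamShift(backProj, trackWindow,
//       cv::TermCriteria(cv::TermCriteria::EPS | cv::TermCriteria::COUNT, 10, 1));
//   trackWindow = expandWindow(trackWindow, backProj.size());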

How do aim bots in FPS games work?

I was curious if anyone had any experience/knowledge about aim bots in online FPS games such as Counter-Strike. I am curious and would like to learn more about how the cursor knows how to lock on to an opposing player. Obviously, if I wanted to cheat I could go download some cheats, so this is more of a learning thing. What all is involved in it? Do they hook the user's mouse/keyboard in order to move the cursor to the correct location? How does the cheat application know where exactly to point the cursor? The cheat app must be able to access data within the game application; how is that accomplished?
EDIT: To sid's answer, how do people obtain those known memory locations to grab the data from? EDIT 2: Let's say I find some values that I want at location 0xbbbbbbbb using a debug program or some other means. How do I now access and use the data stored at that location within the application, since I don't own that memory, the game does? Or do I now have access to it since I have injected into the process, and can I just copy the memory at that address using memcpy or something?
Anyone else have anything to add? Trying to learn as much about this as possible!
Somewhere in the game's memory are the X, Y and Z locations of each player. The game needs to know this information so it knows where to render each player's model and so forth (although you can limit how much the game client can know by only sending it information for the players in view).
An aimbot can scan known memory locations for this information and read it out, giving it access to two positions: the player's and the enemy's. Subtracting the two positions (as vectors) gives the vector between them, and from there it is simple to calculate the angle from the player's current look vector to the desired aim vector.
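The math itself is just a couple of atan2 calls; a minimal sketch (the struct and the axis convention, X/Y on the ground plane with Z up, are assumptions for illustration):

#include <cmath>

struct Vec3 { float x, y, z; };

// Yaw and pitch (in radians) needed to look from `eye` toward `target`.
void aimAngles(const Vec3& eye, const Vec3& target, float& yaw, float& pitch)
{
    float dx = target.x - eye.x;
    float dy = target.y - eye.y;
    float dz = target.z - eye.z;
    yaw   = std::atan2(dy, dx);                              // heading on the ground plane
    pitch = std::atan2(dz, std::sqrt(dx * dx + dy * dy));    // elevation above/below
}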
By sending input directly to the game (this is trivial) and fine-tuning with some constants, you can get it to aim automatically pretty quickly. The hardest part of the process is nailing down where the positions are stored in memory and adjusting for any dynamic data structures that move players around on you (such as frustum culling).
Note that these are harder to write when address randomization is used, although not impossible.
Edit: If you're wondering how a program can access another program's memory, the typical way to do it is through DLL injection.
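Reading another process's memory from the outside, without injecting anything, is also possible on Windows; a bare-bones sketch, where the process id and the address are placeholders you would have to find yourself:

#include <windows.h>
#include <cstdint>
#include <cstdio>

int main()
{
    DWORD pid = 1234;                  // placeholder: the game's process id
    uintptr_t address = 0xBBBBBBBB;    // placeholder: address found with a debugger

    HANDLE process = OpenProcess(PROCESS_VM_READ, FALSE, pid);
    if (!process) return 1;

    float position[3] = {0};           // e.g. an X, Y, Z triple stored contiguously
    SIZE_T bytesRead = 0;
    if (ReadProcessMemory(process, reinterpret_cast<LPCVOID>(address),
                          position, sizeof(position), &bytesRead)) {
        std::printf("x=%f y=%f z=%f\n", position[0], position[1], position[2]);
    }
    CloseHandle(process);
    return 0;
}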
Edit: Since this is still getting some hits, there are other approaches that are more popular now, namely overwriting (or patching in place) the Direct3D or OpenGL DLL, examining the function calls that draw geometry, and inserting your own geometry (for things like wall-hacks) or grabbing the positions of the models for an aimbot.
Interesting question - not exactly your answer but I remember in the early days of Counter-Strike people used to replace their opengl32.dll with a botched one that would render polygons as transparent so they could see through the walls.
The hacks improved and got more annoying, and people got more creative. Now Valve/Steam seem to do a good job of removing them. Just a bit of warning if you're planning on playing with this stuff: Steam does scan for 'hacks', and if any are found, they'll ban you permanently.
A lot of "aim bots" aren't aim bots at all but trigger bots. They're background processes that wait until your reticle is actually over a target and then fire automatically. This can be accomplished in a number of different ways, but a lot of games make it easy by displaying the target's name whenever your reticle passes over them, or by exposing some other piece of data in memory that a trigger bot can pinpoint.
This way, you play by waving the mouse at your target, and as soon as you mouse over them it triggers a shot without you having to actually fire yourself.
They still have to be able to pinpoint that sort of stuff in memory and have the same sort of issues that truer "Aim bots" do.
Another method that has been used in the past is to reverse engineer the network packet formatting. A man-in-the-middle attack on the packet stream (which can be done on the same system the game runs on) can provide player positions and other useful related information. Forged packets can be sent to the server to move the player, shoot, or do all kinds of things depending on the game.
Check out the tutorial series by Fleep here. His fully commented C# source code can be downloaded here.
In a nutshell:
Find your player's x, y, z coordinates, the cursor's x, y coordinates, as well as all enemies' x, y, z coordinates. Calculate the distance between you and the nearest enemy. You can then calculate the x, y cursor coordinates needed to get auto-aim.
Alternatively, you can exclude enemies who are dead (health is 0); in that case you also need to find the enemies' health addresses. Player-related data is usually close together in memory.
Again, check out the source code to see in detail how all of this works.
Edit: I know this is off-topic, sorry, but I thought it would help out the asker.
One thing the hacking scene hasn't really tried out, but which I've been experimenting with, is socket hijacking. It may sound like a lot more than it actually is, but basically it uses the WinPcap drivers to hook into the process's Internet connections via TCP (sockets), without even going near the process's offsets.
Then you simply have to learn how the TCP packets are laid out and store them in a hash table or a Multiplayer (Packet) class. After retrieving the information, you overlay it on the window (nothing is hooked), just transparent labels and 2D boxes over the screen of the windowed game.
I've been testing it on Call of Duty 4 and I have gotten the locations of 4 players via TCP, but, as Ron Warholic has said, none of these hacking methods will work if the game developer wrote the game server to only send a player's data to the client when the current user should actually see that player.
If the server cuts the transmission of a player's location, that player's X, Y, Z will no longer be stored or rendered, which stops the wallhack; aimbots will still work in a way, but not efficiently. So anyway, if you are looking into making a wallhack, don't hook into the process; try to learn WinPcap and hook into the network traffic instead, since the game isn't scanning for processes listening to its network transmissions. If you need an example that utilizes this, look up Rust Radar, which outputs the player's location on a map, along with the other players around you, using the data sent over TCP and without hooking into the game.

SolidWorks API - Electromagnetic Dynamics

Is it possible to simulate custom forces (in my case, electromagnetic) using the SolidWorks API for Animator/Motion Study/COSMOS/EMS?
I'm looking for any combination of API's that would expose the required data to be able to simulate the dynamics of either electrical positive/negative or magnetic north/south forces.
The very basics of what I need to be able to do is:
Model two cubes
Mark a point on one as having positive charge and the point on the other as negative charge (or north/south magnetism)
Press "Go"
Watch them come together and stick
Once I can figure out how to do this, I can go through with the more complicated code that I'm trying to write (that's not the problem); I'm simply stuck on where to begin. I have searched and searched but cannot find a definitive answer; the documentation is sparse and hard to grasp.
If this is definitely not possible or not worth it to attempt in SolidWorks, then that's an acceptable answer. I never would have chosen SolidWorks if I was left free to pick the platform, but it was chosen for me.
EDIT
It seems COSMOSMotion API's IDDMActionReactionForce class is what I was looking for. Can anyone point me to an example of using it to define a custom force between two objects?
I can't speak about SolidWorks, so my answer may be irrelevant — BUT I have used ray-tracing software to model dynamic systems.
In my case, I was simulating the circumstances of lunar and solar eclipses. The ray-tracing software (POVRay) took care of generating an image of the scene, including the Sun, Earth and Moon, but I had to calculate the positions of the various bodies for each frame of the animation.
I suspect this may be the case with modelling electromagnetic dynamics: you will have to calculate the positions of the bodies involved at intervals so that SolidWorks can render the frames of an animation.
I may be all wrong about the capabilities of SolidWorks, so I wish you luck.
I was tempted to say that "it's impossible" because you said it would be "an acceptable answer", but that would be too easy.
After much trying, my conclusion is that SolidWorks is not the appropriate platform for this. It doesn't let you hook into its internal physics calculations, and the Force object I spoke of is way too inefficient for the problem I needed to model. Theoretically, it will work to bring two cubes together alongside SolidWorks' built-in gravity/collision-detection simulation elements, but when confronted with an n-body problem, it was apparent that SolidWorks wasn't made for that.