Navigation of maze with a group (cluster) of robots - computer-vision

I am thinking about starting a new project with the purpose of mapping and navigating a maze with a cluster of robots. The number of robots I was thinking about is 2 or 3.
The following assumptions are made:
The robots are fitted with a camera each to help detect the walls of the maze
The size and shape of the maze is unknown and can be changed according to will
The robots should communicate and efficiently divide the task of mapping and navigation among themselves.
I am studying Electrical Engineering and have no previous experience with maze planning/solving in robotics. I would like to know how to begin with this and, more importantly, which resources I should be looking at. Any suggestions of books, websites, or forums are welcome.
The microcontroller I am planning to use is the Arduino Uno. I am familiar with it and it has very good support online, so it seems to be a good choice. Also, I will have around 2 months to finish the project. Is that amount of time enough to accomplish the aforementioned things?

A single robot in a maze is called a Braitenberg vehicle. A group of such robots is a multi-robot formation, which implies that the agents must coordinate their behavior. In the literature such games are called “signaling games”, because a sender has private access to an event and must share this information with the group. For example, robot1 has detected a wall and sends the status update to another robot.
In its simplest form, signaling games are modeled with a lexicon, that is, a list of possible messages between the robots. For example: 0=detectwall, 1=walkahead, 2=stop. As a reaction to a received signal, the robot can adapt its behavior and update its map of the maze. Sometimes this idea is called a distributed map-building algorithm, because the information is only partially available to each robot.
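As a rough illustration of the lexicon idea (not tied to any particular radio or library; the message values are the ones from the example above, and the reactions are hypothetical), the receiving robot's handler might look like this:

```cpp
#include <cstdint>
#include <iostream>

// The shared lexicon from the example above: each message is a single byte.
enum class Msg : std::uint8_t { DetectWall = 0, WalkAhead = 1, Stop = 2 };

// Hypothetical reaction of a robot to a message received from a teammate.
// On an Arduino the byte would arrive over Serial or a radio module instead.
void handleMessage(Msg m) {
    switch (m) {
        case Msg::DetectWall:
            std::cout << "teammate found a wall -> mark it in my map\n";
            break;
        case Msg::WalkAhead:
            std::cout << "path ahead is clear -> keep driving\n";
            break;
        case Msg::Stop:
            std::cout << "stop requested -> halt motors\n";
            break;
    }
}

int main() {
    handleMessage(Msg::DetectWall);   // robot1 reports a wall, as in the example
    handleMessage(Msg::Stop);
}
```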
A basic example is the case in which two robots are driving toward each other (the Chicken Game). They must communicate about their evasive strategy to prevent a collision. If both robots decide to evade in the same direction, they will collide anyway. The problem is that neither robot knows what the other is planning.

Related

Creating and storing a 2D/3D map for autonomous robot

I have built an autonomous RC car that is capable of tracking its location and scanning its environment for obstacles. I am stuck, however, on how to store/create a map of its environment. The environment is not fixed in size; as the robot explores, it may uncover new areas. My background is in navigation, so that part and determining where obstacles are is the easy part (for me). The hard part is how to store/access the data.
Originally I was going to use a 2D array/matrix with a resolution of 2-5 cm, i.e. each cell is 2-5 cm wide when projected onto the ground. This, however, could eat up a lot of memory (it is an embedded system, so I have limited resources: 512 MB of RAM right now, and possibly less when the prototype is finished). Also, I will eventually be moving the system to a quadcopter, so I will be working in 3D.
The next solution was to create an obstacle object, which would have a position. However, I was unsure how to: 1. keep track of the size and orientation of the object, i.e. how to tell whether the object is a wall or just a table leg (you can't avoid a wall, but you can go around a leg), and 2. efficiently search/tell whether an obstacle is in the robot's way.
I am using C++, with a camera and an ultrasonic sensor to detect the environment. As mentioned, I am stuck on how to store the environment efficiently. I would like to keep the environment information, whether saved to a file or sent to a server, so I can later use/access the map. Would anyone have any suggestions on what to try, or where to even begin looking? This area is all new to me.
You can use an R-tree where your initial location is the root of the tree. Each time you go somewhere new, you add that place to your tree.
You could also merge neighbouring parts together where possible.
From a memory point of view, you are not bound by array/matrix limits and you store only useful data. It is up to you to choose what to store: obstacles or free space.
Searching in that kind of structure is also efficient.
As for implementations, you could take a look at the R-tree in the Boost library.
Later on, you could even create a graph based on your tree to be used by path-finding algorithms if you expect your robot to go from one point to another.
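A minimal sketch of that idea using Boost.Geometry's R-tree (obstacle coordinates and tree parameters are made up for illustration; storing free space instead would work the same way):

```cpp
#include <boost/geometry.hpp>
#include <boost/geometry/index/rtree.hpp>
#include <iostream>
#include <iterator>
#include <vector>

namespace bg  = boost::geometry;
namespace bgi = boost::geometry::index;

int main() {
    using Point = bg::model::point<float, 2, bg::cs::cartesian>;
    using Box   = bg::model::box<Point>;

    // R-tree of obstacle bounding boxes; quadratic<16> is just one node-split policy.
    bgi::rtree<Box, bgi::quadratic<16>> obstacles;

    // Insert an obstacle the sensors discovered (coordinates in metres, hypothetical).
    obstacles.insert(Box(Point(1.0f, 2.0f), Point(1.2f, 2.4f)));

    // Query: does anything intersect the corridor the robot is about to drive through?
    Box corridor(Point(0.9f, 0.0f), Point(1.1f, 5.0f));
    std::vector<Box> hits;
    obstacles.query(bgi::intersects(corridor), std::back_inserter(hits));

    std::cout << "obstacles in the way: " << hits.size() << "\n";   // prints 1
}
```

The same structure also makes it straightforward to dump the stored boxes to a file or send them to a server later, since each entry is just a pair of corner points.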

Design of virtual trial room

As part of my master's project I proposed to build a virtual trial room application intended for retail clothing stores. Currently it's meant to be used directly in the store, though it may be extended to online stores as well.
This application will show customers how a selected garment would look on them by showing it on their 3D replica on screen.
It involves 3 steps:
Sizing up the customer
Building a 3D humanoid replica of the customer
Applying simulated cloth to the model
My question is about the feasibility of the project and the choice of framework.
Can this be achieved in real time using a normal desktop computer? If yes, what would be an appropriate framework (hardware, software, programming language, etc.) for this purpose?
Based on the work I have done so far, I was planning to achieve the above steps in the following ways:
for step 1: option a) two cameras for front and side views, or
option b) one or two Kinects for complete 3D data
for step 2: either use MakeHuman (http://www.makehuman.org/) code to build a customised 3D model from the above data, or build everything from scratch; I am unsure about the framework.
for step 3: I just need a few cloth samples, so I thought of building simulated clothes in Blender.
Currently I have just a vague idea about the different pieces, but I am not sure how to develop the complete application.
Theoretically, this can be achieved in real time. Many useful algorithms for video tracking, stereo vision and 3D reconstruction are available in the OpenCV library. But it's very difficult to build a robust solution. For example, you'll probably need to track the human body, which moves from frame to frame, and perform pose estimation (OpenCV contains the POSIT algorithm); however, it's not trivial to eliminate noise in the resulting object coordinates. For inspiration, see a nice work on video tracking.
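To make the pose-estimation step concrete: POSIT lives in OpenCV's legacy C API, and the usual modern entry point is cv::solvePnP. A minimal sketch, where the 3D landmark points, the pixel detections and the camera intrinsics are all made-up placeholder values:

```cpp
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <iostream>
#include <vector>

int main() {
    // Hypothetical 3D landmarks on the body model (e.g. shoulders and hips), in metres.
    std::vector<cv::Point3f> modelPoints = {
        {-0.20f, 0.50f, 0.f}, {0.20f, 0.50f, 0.f}, {-0.15f, 0.00f, 0.f}, {0.15f, 0.00f, 0.f}
    };
    // Where those landmarks were detected in the camera image, in pixels (made-up values).
    std::vector<cv::Point2f> imagePoints = {
        {280.f, 180.f}, {360.f, 182.f}, {295.f, 330.f}, {345.f, 331.f}
    };
    // Assumed pinhole intrinsics: focal length 600 px, principal point at the image centre.
    cv::Mat K    = (cv::Mat_<double>(3, 3) << 600, 0, 320,  0, 600, 240,  0, 0, 1);
    cv::Mat dist = cv::Mat::zeros(4, 1, CV_64F);      // assume no lens distortion

    cv::Mat rvec, tvec;                               // rotation (Rodrigues) and translation
    if (cv::solvePnP(modelPoints, imagePoints, K, dist, rvec, tvec)) {
        std::cout << "estimated body translation: " << tvec.t() << std::endl;
    }
}
```

The noise problem mentioned above shows up here as jitter in rvec/tvec from frame to frame, which you would typically smooth with some kind of filtering.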
You might want to choose another way: simplify some things, avoid the complicated stuff, do things less dynamically, and estimate only the clothing size and the approximate location of the person. In this case you will most likely create something useful and interesting.
I've lost the link to one online fitting room where hand and body detection is implemented. Using a Kinect solves many problems. But if for some reason you won't use it, then AR (augmented reality) can help you (yet another fitting room).

Lag compensation with networked 2D games

I want to make a 2D game that is basically a physics-driven sandbox/activity game. There is something I really do not understand, though. From research, it seems like updates from the server should only come about every 100 ms. I can see how this works for a player, since they can just concurrently simulate physics and do lag compensation through interpolation.
What I do not understand is how this works for updates from other players. If clients only get notified of player positions every 100 ms, I do not see how that works, because a lot can happen in 100 ms. The player could have changed direction twice or so in that time. I was wondering if anyone would have some insight on this issue.
Basically, how does this work for shooting and stuff like that?
Thanks
Typically, what will happen is that the player will view the game 100 ms behind how it actually is. That, and you have to accept that some game designs just require faster server updates. Games that need faster server updates typically use a client/server model where the server computes all the physics, not the client. You've seen the traditional RTS model: peer-to-peer, slow updates. But the traditional model for FPS games is client/server with fast updates, for this exact reason.
There is a forum entry and a whole multiplayer/network FAQ on GameDev on this topic that might be a good first read.
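A rough sketch of the "render other players slightly in the past" idea: keep the last two snapshots received for each remote player and interpolate between them at a render time held about 100 ms behind server time. The structures and numbers below are made up for illustration:

```cpp
#include <iostream>

// One authoritative snapshot of a remote player's state, stamped with server time.
struct Snapshot {
    double t;       // server timestamp in seconds
    float  x, y;    // 2D position
};

// Linear interpolation between the two snapshots that bracket renderTime.
// renderTime is deliberately kept ~100 ms in the past so a newer snapshot
// has usually already arrived by the time it is needed.
Snapshot interpolate(const Snapshot& a, const Snapshot& b, double renderTime) {
    double span  = b.t - a.t;
    double alpha = span > 0.0 ? (renderTime - a.t) / span : 1.0;
    if (alpha < 0.0) alpha = 0.0;
    if (alpha > 1.0) alpha = 1.0;   // clamp; past this point you would extrapolate instead
    return { renderTime,
             static_cast<float>(a.x + (b.x - a.x) * alpha),
             static_cast<float>(a.y + (b.y - a.y) * alpha) };
}

int main() {
    Snapshot prev{10.0, 0.0f, 0.0f};                   // received at server time 10.0 s
    Snapshot next{10.1, 1.0f, 0.5f};                   // received 100 ms later
    Snapshot shown = interpolate(prev, next, 10.05);   // render 50 ms into the gap
    std::cout << shown.x << ", " << shown.y << "\n";   // 0.5, 0.25
}
```

Direction changes between snapshots are exactly what this smooths over; the cost is that everyone else is shown a little behind where they really are.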
You have to design your game to take this into account. For example, if you're shooting a target that could move out of the way suddenly, just have shells with proximity fuses and make lots of noise and pretty colours for a while, hiding the fact that neither you nor your player actually knows yet whether they hit the target.
In fact this is pretty close to real physics: it takes time for the consequences of actions to be observed.

How do aim bots in FPS games work?

I was curious if anyone had any experience/knowledge about aim bots in online FPS games such as Counter-Strike. I am curious and would like to learn more about how the cursor knows how to lock on to an opposing player. Obviously if I wanted to cheat I could go download some cheats, so this is more of a learning thing. What all is involved in it? Do they hook the user's mouse/keyboard in order to move the cursor to the correct location? How does the cheat application know where exactly to point the cursor? The cheat app must be able to access data within the game application; how is that accomplished?
EDIT: regarding sid's answer, how do people obtain those known memory locations to grab the data from? EDIT 2: Let's say I find some values that I want at location 0xbbbbbbbb using a debug program or some other means. How do I now access and use the data stored at that location within the application, since I don't own that memory, the game does? Or do I now have access to it since I have injected into the process, and can I just copy the memory at that address using memcpy or something?
Anyone else have anything to add? Trying to learn as much about this as possible!
Somewhere in the game memory is the X, Y, and Z location of each player. The game needs to know this information so it knows where to render the player's model and so forth (although you can limit how much the game client can know by only sending it player information for players in view).
An aimbot can scan known memory locations for this information and read it out, giving it access to two positions: the player's and the enemy's. Subtracting the two positions (as vectors) gives the vector between them, and from there it is simple to calculate the angle from the player's current look vector to the desired aim vector.
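A minimal sketch of that angle calculation, with hard-coded positions standing in for values that would be read from game memory (the axis conventions are an assumption; engines differ):

```cpp
#include <cmath>
#include <iostream>

struct Vec3 { float x, y, z; };

// Compute the yaw/pitch needed to look from `self` toward `enemy`.
// Here z is treated as "up"; a real engine may use a different convention.
void aimAngles(const Vec3& self, const Vec3& enemy, float& yawDeg, float& pitchDeg) {
    const float kRadToDeg = 180.0f / 3.14159265f;
    Vec3 d{enemy.x - self.x, enemy.y - self.y, enemy.z - self.z};   // vector to the target
    float flat = std::sqrt(d.x * d.x + d.y * d.y);                  // horizontal distance
    yawDeg   = std::atan2(d.y, d.x) * kRadToDeg;                    // left/right angle
    pitchDeg = std::atan2(d.z, flat) * kRadToDeg;                   // up/down angle
}

int main() {
    float yaw = 0.0f, pitch = 0.0f;
    aimAngles({0, 0, 0}, {10, 10, 2}, yaw, pitch);
    std::cout << "yaw " << yaw << " pitch " << pitch << "\n";       // 45, about 8.05
}
```

The "fine-tuning with some constants" mentioned below is mostly about converting these angles into whatever mouse or view-angle units the game expects.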
By sending input directly to the game (this is trivial) and fine-tuning with some constants you can get it to aim automatically pretty quickly. The hardest part of the process is nailing down where the positions are stored in memory and adjusting for any dynamic data structure moving players around on you (such as frustum culling).
Note that these are harder to write when address randomization is used, although not impossible.
Edit: If you're wondering how a program can access another program's memory, the typical way to do it is through DLL injection.
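To address EDIT 2 above: an injected DLL shares the game's address space, so it can simply dereference the address (or memcpy from it). An external tool instead asks the OS for read access. A rough sketch of the external route using the Win32 API, with the process id as a hypothetical placeholder, the address taken from the question, and the value type assumed to be a float:

```cpp
#include <windows.h>
#include <cstdint>
#include <iostream>

int main() {
    DWORD     pid     = 1234;          // hypothetical game process id
    uintptr_t address = 0xBBBBBBBB;    // the address found with a debugger (from the question)

    // From outside the process you need a handle with read rights; an injected DLL
    // could skip all of this and read through a pointer directly.
    HANDLE game = OpenProcess(PROCESS_VM_READ, FALSE, pid);
    if (!game) { std::cerr << "OpenProcess failed\n"; return 1; }

    float  value     = 0.0f;           // assuming the target value is a 4-byte float
    SIZE_T bytesRead = 0;
    if (ReadProcessMemory(game, reinterpret_cast<LPCVOID>(address),
                          &value, sizeof(value), &bytesRead)) {
        std::cout << "value at 0xBBBBBBBB: " << value << "\n";
    }
    CloseHandle(game);
}
```

Writing works the same way through WriteProcessMemory, which is how external bots that steer the view angles directly tend to do it.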
Edit: Since this is still getting some hits, there are other ways aimbots work that are more popular now, namely overwriting (or patching in place) the Direct3D or OpenGL DLL and examining the function calls used to draw geometry, then inserting your own geometry (for things like wall-hacks) or extracting the positions of the models for an aimbot.
Interesting question - not exactly your answer, but I remember in the early days of Counter-Strike people used to replace their opengl32.dll with a doctored one that would render polygons as transparent so they could see through the walls.
The hacks improved and got more annoying, and people got more creative. Now Valve/Steam seems to do a good job of removing them. Just a bit of warning if you're planning on playing with this stuff: Steam does scan for 'hacks', and if any are found, they'll ban you permanently.
A lot of "aim bots" aren't aim bots at all but trigger bots. They're background processes that wait until your reticle is actually over a target and then fire automatically. This can be accomplished in a number of different ways, but a lot of games make it easy by displaying the name of someone whenever your reticle goes over them, or by exposing some other piece of data in memory that a trigger bot can pinpoint.
This way, you play by waving the mouse at your target, and as soon as you mouse over them it will trigger a shot without your having to actually fire yourself.
They still have to be able to pinpoint that sort of data in memory, and they have the same sort of issues that truer "aim bots" do.
Another method that has been used in the past is to reverse engineer the network packet formatting. A man-in-the-middle attack on the packet stream (which can be done on the same system the game runs on) can provide player positions and other useful related information. Forged packets can be sent to the server to move the player, shoot, or do all kinds of things depending on the game.
Check out the tutorial series by Fleep here. His fully commented C# source code can be downloaded here.
In a nutshell:
Find your player's X, Y, Z coordinates, the cursor's X, Y coordinates, as well as all enemies' X, Y, Z coordinates. Calculate the distance between you and the nearest enemy. You are now able to calculate the X, Y cursor coordinates needed in order to get auto-aim.
Alternatively, you can exclude enemies who are dead (health is 0); in this case you also need to find the enemies' health addresses. Player-related data is usually stored close together in memory.
Again, check out the source code to see in detail how all of this works.
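A small sketch of that target-selection step (the distance calculation plus the dead-enemy filter described above), with made-up player structures standing in for values read out of memory; the real layout and offsets depend on the game:

```cpp
#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

struct Player { float x, y, z; int health; };

// Pick the closest enemy that is still alive; returns -1 if nobody qualifies.
// In a real bot these fields would be filled from the game's memory.
int nearestLivingEnemy(const Player& self, const std::vector<Player>& enemies) {
    int   best     = -1;
    float bestDist = 1e30f;
    for (std::size_t i = 0; i < enemies.size(); ++i) {
        if (enemies[i].health <= 0) continue;            // skip dead enemies
        float dx = enemies[i].x - self.x;
        float dy = enemies[i].y - self.y;
        float dz = enemies[i].z - self.z;
        float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
        if (dist < bestDist) { bestDist = dist; best = static_cast<int>(i); }
    }
    return best;
}

int main() {
    Player me{0, 0, 0, 100};
    std::vector<Player> foes{{50, 0, 0, 0}, {10, 5, 0, 80}, {30, 30, 0, 25}};
    std::cout << "target index: " << nearestLivingEnemy(me, foes) << "\n";   // prints 1
}
```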
Edit: I know this is off-topic, sorry, but I thought this would help out the asker.
One thing the hacking community hasn't really tried out, but which I've been experimenting with, is socket hijacking. It may sound like a lot more than it actually is, but basically it uses the WinPcap drivers to hook into the process's Internet connections via TCP (sockets), without even going near the process's offsets.
Then you simply have to learn how the TCP messages are structured and store them in a hash table or a Multiplayer (Packet) class. After retrieving the information, you overlay it on the window (no hooking required): just transparent labels and 2D boxes drawn over the screen of the windowed game.
I've been testing it on Call of Duty 4 and have gotten the locations of 4 players via TCP. But, as Ron Warholic has said, none of these hacking methods will work if a game developer writes the game server to only send player data when the current user should actually see that player.
Once the server cuts the transmission of a player's X, Y, Z location, that player is no longer stored or rendered, which stops the wallhack. Aimbots will still work in a way, but not efficiently. So anyway, if you are looking into making a wallhack, don't hook into the process; try to learn WinPcap and hook into the network traffic instead. As for games, don't search the process; listen for its Internet transmissions. If you need an example that uses this, search for Rust Radar, which outputs the player's location on a map, along with the other players around you, from the TCP traffic that is being sent, without hooking into the game.
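For reference, the capture side of that approach uses the standard pcap API that WinPcap/Npcap expose. A rough sketch, where the device name and the server address in the filter are placeholders; decoding the game's payload into player positions is the part you would still have to reverse-engineer:

```cpp
#include <pcap.h>
#include <cstdio>

// Called once per captured packet; parsing the payload (finding player
// coordinates inside the game's TCP stream) is game-specific and omitted here.
static void onPacket(u_char*, const pcap_pkthdr* header, const u_char*) {
    std::printf("captured %u bytes\n", header->caplen);
}

int main() {
    char errbuf[PCAP_ERRBUF_SIZE];
    // Placeholder device name; on Windows use the NPF device reported by pcap_findalldevs.
    pcap_t* handle = pcap_open_live("eth0", 65535, 1, 1000, errbuf);
    if (!handle) { std::fprintf(stderr, "pcap_open_live: %s\n", errbuf); return 1; }

    // Only capture the game server's traffic (the address here is a made-up example).
    bpf_program filter;
    if (pcap_compile(handle, &filter, "tcp and host 203.0.113.10", 1, 0) == 0)
        pcap_setfilter(handle, &filter);

    pcap_loop(handle, 100, onPacket, nullptr);   // process 100 packets, then stop
    pcap_close(handle);
}
```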

SolidWorks API - Electromagnetic Dynamics

Is it possible to simulate custom forces (in my case, electromagnetic) using the SolidWorks API for Animator/Motion Study/COSMOS/EMS?
I'm looking for any combination of APIs that would expose the required data to be able to simulate the dynamics of either electrical positive/negative or magnetic north/south forces.
The very basics of what I need to be able to do is:
Model two cubes
Mark a point on one as having positive charge and a point on the other as having negative charge (or north/south magnetism)
Press "Go"
Watch them come together and stick
Once I can figure out how to do this, I can go through with the more complicated code that I'm trying to write (that's not the problem). I'm simply stuck on where to begin. I have searched and searched but cannot find a definitive answer, the documentation is sparse and hard to grasp.
If this is definitely not possible or not worth it to attempt in SolidWorks, then that's an acceptable answer. I never would have chosen SolidWorks if I was left free to pick the platform, but it was chosen for me.
EDIT
It seems the COSMOSMotion API's IDDMActionReactionForce class is what I was looking for. Can anyone point me to an example of using it to define a custom force between two objects?
I can't speak about SolidWorks, so my answer may be irrelevant, BUT I have used ray-tracing software to model dynamic systems.
In my case, I was simulating the circumstances of lunar and solar eclipses. The ray-tracing software (POV-Ray) took care of generating an image of the scene including the Sun, Earth and Moon, but I had to calculate the positions of the various bodies for each frame of the animation.
I suspect this may also be the case when modelling electromagnetic dynamics: you will have to calculate the positions of the bodies involved at each interval yourself, so that SolidWorks can render the scenes of the animation.
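To make that concrete, here is a minimal sketch of the "compute the positions yourself, frame by frame" idea for the two-cube case: two opposite point charges attracting via Coulomb's law, stepped with explicit Euler integration. The charges, masses, time step and "touching" threshold are arbitrary illustrative numbers, and each iteration would correspond to one animation frame whose positions you feed back to the CAD/animation tool:

```cpp
#include <cmath>
#include <cstdio>

// One charged cube reduced to a point: 1D position, charge, mass, velocity.
struct Body { double x, q, m, v; };

int main() {
    const double k  = 8.9875e9;          // Coulomb constant, N*m^2/C^2
    const double dt = 1e-3;              // time step per animation frame, s
    Body a{0.0, +1e-6, 0.01, 0.0};       // "positive cube" (hypothetical values)
    Body b{1.0, -1e-6, 0.01, 0.0};       // "negative cube"

    for (int frame = 0; frame < 2000; ++frame) {
        double r = a.x - b.x;
        double d = std::fabs(r);
        if (d < 0.05) {                              // faces close enough to "come together and stick"
            std::printf("stuck at frame %d\n", frame);
            break;
        }
        double fa = k * a.q * b.q * r / (d * d * d); // force on a; attractive for opposite charges
        a.v += (fa / a.m) * dt;                      // explicit Euler velocity update
        b.v += (-fa / b.m) * dt;                     // Newton's third law
        a.x += a.v * dt;
        b.x += b.v * dt;
        if (frame % 100 == 0)
            std::printf("frame %d: a=%.4f b=%.4f\n", frame, a.x, b.x);
    }
}
```

Whether the per-frame positions are then applied through the Animator/Motion Study API or exported some other way is a separate question this sketch does not answer.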
I may be all wrong about the capabilities of SolidWorks, so I wish you luck.
I was tempted to say that "it's impossible" because you said it would be "an acceptable answer", but that would be too easy.
After much trying, my conclusion is that SolidWorks is not the appropriate platform for this. It doesn't let you hook into its internal physics calculations, and the Force object I spoke of is far too inefficient for the problem I needed to model. Theoretically, it will work to bring two cubes together alongside SolidWorks' built-in gravity/collision-detection simulation elements, but when confronted with an n-body problem, it was apparent that it wasn't made for that.