Detour/Recast tileLayer - what is it about? (C++)

I would like to ask how tileLayers in Detour work.
Can I layer any two tiles so that their "cuts" are merged into bigger holes, and in what manner? Are there any limitations? How many of these layers can I have, and how do they interact when navigating? Can I have, for example, a basic, never-changing layer with the whole map and then multiple layers with user-placed geometry, etc.?
I'm talking about dtNavMeshCreateParams::tileLayer from the original lib. There is some implementation using it in the TempObstacles example, but it is not well explained (it has many custom things inside, and I'm only interested in the function of those layers).
Thank you in advance; I can't find much info about this online, so any help would really be appreciated.

So I finally (after many hours) found my answer to what these layers are used for:
http://digestingduck.blogspot.com/2011/02/heightfield-layer-portals.html
(By the way, this seems to be the blog of the navmesh library's author; enjoy.)

Related

FFTW 3.3.3 basic usage with real data

I'm a newbie in FFT and I was asked to find a way to analyse/process a particular set of data collected by oil drilling rigs.
There is a lot of noise in the collected data due to rig movements (up & down with tides and waves for example).
I was asked to clean the collected data up with FFT=>filtering=>IFFT.
I use C++ and the FFTW 3.3.3 library.
An example is better than anything else, so:
I have a DB with, for example, the mudflow (liters per minute). The mudflow is collected every 5 seconds, and there is a timestamp in the DB for every measure (e.g. 1387411235).
So my IN_data for my FFT is a series of timestamp/mudflow pairs (e.g. 1387456630/3955.94, 1387456635/3954.92, etc.).
Displaying these data really looks like a noisy sound signal, and relevant events may be masked by the noise.
Using examples found on the Internet I can manage to perform an FFT, but my lack of knowledge and understanding is a big problem, as I've never worked on signal processing or Fourier transforms.
I don't really know how to proceed to start this job: which FFTW routine to use (c2c, r2c, etc.), and whether there is any pre- and/or post-processing of the data to do.
There are a lot of examples and tutorials that I've read on the Internet, but I'm French (sorry for my mistakes here) and they don't always make sense to me, especially regarding OUT_data units, OUT_data type, in and out array sizes, and windowing (what is that, by the way?). To put it in a nutshell, I'm lost...
I suppose that my problem would be pretty straightforward for someone used to FFTW but for me it's very complicated right now.
So my questions:
Which FFTW routine to use in both directions (FFT & IFFT), and what kind, type, and size of array to use for IN_data and OUT_data?
How to interpret the resulting array (what units will FFTW return)?
For now a short sample of what I've done is :
fftw_plan p;
// create a 1-D complex-to-complex forward plan over `size` samples
p = fftw_plan_dft_1d(size, in, out, FFTW_FORWARD, FFTW_ESTIMATE);  // the (fftw_plan) cast is unnecessary
fftw_execute(p);       // run the transform: `out` now holds the complex spectrum
fftw_destroy_plan(p);
with "in" and "out" as fftw_complex arrays (the complex part of every element of my IN_data array is set to 1; I don't really know why, but the tutorial said to do that).
This code is based on an example found on the Internet, but my lack of knowledge/understanding is a big drag, and I was wondering if there is someone here who could give me explanations/workflow/insights/links on how to pull this off.
I'm in a trial period at my new job and I really want to implement this feature for my boss, even if it means asking around for help; I've seen a lot of FFTW-skilled posts here...
This is quite an ambitious project for someone who is completely new to DSP, but you can start by reading about the overlap-add method, which is essentially the method you need for your FFT-filter-IFFT approach to cleaning up this data. You should also check out the DSP StackExchange site dsp.stackexchange.com, where the theoretical background and application of frequency domain filtering is covered in several similar questions/answers.
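Since the mudflow samples are real-valued, FFTW's real-to-complex routines are the natural fit. Below is a minimal sketch of an FFT -> filter -> IFFT round trip, not a finished filter: the function name, the `cutoff_bin` parameter, and the brute-force zeroing of high-frequency bins are illustrative assumptions, and because FFTW's transforms are unnormalized the result has to be divided by N.

#include <fftw3.h>
#include <cstddef>
#include <vector>

// Crude FFT -> zero high bins -> IFFT sketch for real-valued samples.
// Names such as `samples` and `cutoff_bin` are illustrative, not FFTW API.
std::vector<double> lowpass(const std::vector<double>& samples, std::size_t cutoff_bin)
{
    const std::size_t n = samples.size();
    std::vector<double> in(samples), out(n);
    // r2c output holds n/2 + 1 complex bins (the redundant half is omitted)
    fftw_complex* spectrum = fftw_alloc_complex(n / 2 + 1);

    fftw_plan fwd = fftw_plan_dft_r2c_1d(static_cast<int>(n), in.data(), spectrum, FFTW_ESTIMATE);
    fftw_plan bwd = fftw_plan_dft_c2r_1d(static_cast<int>(n), spectrum, out.data(), FFTW_ESTIMATE);

    fftw_execute(fwd);                          // time domain -> frequency domain
    for (std::size_t k = cutoff_bin; k < n / 2 + 1; ++k) {
        spectrum[k][0] = 0.0;                   // zero the real part of a high-frequency bin
        spectrum[k][1] = 0.0;                   // zero the imaginary part
    }
    fftw_execute(bwd);                          // frequency domain -> back to time domain
    for (double& v : out)
        v /= static_cast<double>(n);            // FFTW transforms are unnormalized: divide by N

    fftw_destroy_plan(fwd);
    fftw_destroy_plan(bwd);
    fftw_free(spectrum);
    return out;
}

With samples taken every 5 seconds, bin k of an n-point transform corresponds to a frequency of k / (5n) Hz, which is how you would pick a cutoff; abruptly zeroing bins like this causes ringing, which is exactly why the filter-design and overlap-add reading suggested above is worth the effort.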

How to search for images (PNG/JPEG/other formats) in the memory of a program and display them?

Well, lately I found a very interesting article about map hacks in online games.
It described how they used a memory scanner to look for images in the memory.
How would they accomplish such a program? Is there a freely available solution for this?
If not, how would I code it in C++? How can I know that a piece of memory is an "image"?
I can load my own DLL into the process, so that shouldn't be a big issue.
To answer your questions:
A memory scanner uses OS APIs to query memory from another process and search it for patterns or differences. A great tool for this is Cheat Engine; a minimal sketch of how such a scan works is shown below.
The tool mentioned in the article visualizes the memory by coloring pixels according to the value of the bytes in memory. The alignment still needs to be done manually and can be very time-consuming. I don't think the mentioned program was ever released.
The main problem is that you can't know that a particular piece of memory is supposed to be a map. Any big, regular structure can look like one when colorized and aligned. Finding the actual piece of memory you are looking for is very hard.
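For illustration only, here is a minimal Win32 sketch of the querying/scanning part (no error handling; the function name is mine, and `pid` and `pattern` are whatever process and byte sequence you are looking for): it walks the target's address space with VirtualQueryEx and searches each readable, committed region via ReadProcessMemory.

#include <windows.h>
#include <cstring>
#include <vector>

// Scan another process's committed, readable memory for a byte pattern.
std::vector<void*> scan_process(DWORD pid, const std::vector<unsigned char>& pattern)
{
    std::vector<void*> hits;
    HANDLE proc = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ, FALSE, pid);
    if (!proc) return hits;

    MEMORY_BASIC_INFORMATION mbi;
    unsigned char* addr = nullptr;
    while (VirtualQueryEx(proc, addr, &mbi, sizeof(mbi)) == sizeof(mbi)) {
        bool readable = (mbi.State == MEM_COMMIT) &&
                        !(mbi.Protect & (PAGE_NOACCESS | PAGE_GUARD));
        if (readable) {
            std::vector<unsigned char> buf(mbi.RegionSize);
            SIZE_T got = 0;
            if (ReadProcessMemory(proc, mbi.BaseAddress, buf.data(), buf.size(), &got)) {
                for (SIZE_T i = 0; i + pattern.size() <= got; ++i)
                    if (std::memcmp(buf.data() + i, pattern.data(), pattern.size()) == 0)
                        hits.push_back(static_cast<unsigned char*>(mbi.BaseAddress) + i);
            }
        }
        addr = static_cast<unsigned char*>(mbi.BaseAddress) + mbi.RegionSize;  // next region
    }
    CloseHandle(proc);
    return hits;
}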
Additional Info:
A map in a game like this is very dynamic: if units move, the visibility has to update. So the actual format of such a map is most likely a raw binary bitmap with no specific image format (PNG, JPG, ...).
I personally find the approach of looking for a map structure in memory very inefficient and time-consuming. It's beautiful to show to people who have no idea about reverse engineering, but to me it seems very impractical. Which approach is best depends entirely on the game and your creativity.
I hope I can help you with the following example of how I made a map hack for StarCraft 2.
My idea was to load up a replay of a game, where I had full view of the map, and find the difference from loading up a normal game, where my vision is restricted. I switched a couple of times between replay and normal game and could indeed find a state variable that was 0 in a normal game and 1 in a replay (a common tool for finding memory like this is Cheat Engine).
Next I loaded the game up in a debugger and put a memory-access breakpoint on this state variable. Then, when loading a normal game, I would change the value when it was accessed while the map was loading. Through some trial and error I was able to find the code location responsible for revealing the minimap and the real map. The only task left was to create a DLL that detours that code location and makes sure the map is always revealed on every map load.
Reply to typ1232: You mention that it's hard and impractical to find the map structure in memory. Here's a method I have had great success with: load up a map in any game with fog of war, like StarCraft 2. Take a dump of the memory and save it. Send out troops/units to reveal as much of the previously undiscovered map as possible and take another memory dump. Compare the two dumps and look more closely at the areas in memory with a high frequency of changes. This is likely to be where the map is stored.
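A minimal sketch of that dump-comparison idea (the file names dump1.bin/dump2.bin, the 4 KiB chunk size, and the reporting threshold are arbitrary assumptions, not from the original post): split two equal-sized dumps into fixed-size chunks and print the offsets of the chunks that changed the most.

#include <cstdio>
#include <fstream>
#include <vector>

// Compare "fogged" and "revealed" memory dumps chunk by chunk and report
// the regions with the highest number of differing bytes.
int main()
{
    std::ifstream a("dump1.bin", std::ios::binary), b("dump2.bin", std::ios::binary);
    std::vector<char> bufA(4096), bufB(4096);
    std::size_t offset = 0;
    while (a.read(bufA.data(), bufA.size()) && b.read(bufB.data(), bufB.size())) {
        std::size_t diffs = 0;
        for (std::size_t i = 0; i < bufA.size(); ++i)
            if (bufA[i] != bufB[i]) ++diffs;
        if (diffs > 512)  // report chunks where more than 1/8 of the bytes changed
            std::printf("offset 0x%zx: %zu bytes differ\n", offset, diffs);
        offset += bufA.size();
    }
    return 0;
}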
Sorry if I'm doing it wrong; I'm new to Stack Overflow :)
This might be a slightly broader answer to the subject of "finding data", but there are binary analysis tools out there. For example, ..cantor.dust.. is a binary visualization tool (though it's only in beta, the idea remains the same). You can search a memory dump for different patterns, whether "images" or other structures. Search YouTube for "cantor dust": the creator gave a presentation at DerbyCon on how he used it to find EFI structures and recreate an exploit of a PNG parser at the EFI level.
I also think that saving two memory states (full-visibility map vs. limited-visibility map) and searching for the changes is viable, if not the best option; I am just trying to point out an alternative.

Graph-Drawing / TSP-Route-Drawing in C++ with "known" coordinates: How? Which Library/Tool?

I'm developing some kind of heuristic for a variation of the vehicle routing problem in C++.
After generating a solution, I want to plot it. The solution is a composite of various tours, all starting and ending at a common depot.
Therefore I have a vertex set with all the coordinates, and edges defined by two vertex IDs each. Furthermore, I have all the distances between vertex pairs, of course.
It would be helpful to plot this in an extra window opened by my program, but writing the plot to a graphics file would be okay too.
What is an easy way to plot this? How would you tackle it?
First I looked at common graph-visualization packages (Graphviz, Tulip, NetworkX (Python)), but I realized that all of them specialize in graph layouting (for when there are no coordinates). Correct me if I'm wrong.
I don't know whether it is possible to tell these packages that I already have the coordinates, to help the layout algorithms.
The next thing I tried was the CGAL library with Geomview output -> no luck so far -> Geomview crashes on Ubuntu.
One more question: is it a better idea to use some non-layouting 2D plotting library, risking a plot that isn't really good to look at (is there more to do than scaling?), or to use layout-algorithm-based libraries (e.g. Graphviz, Tulip, NetworkX), feed them the distances between the vertices, and hope the layout algorithms keep the distances while producing a good-looking plot?
If non-layouting plotting is the way to do it: which library do you recommend?
If layout-based plotting is the way to do it: how can I make use of the distances/coordinates in these libraries? And which library do you recommend?
Thanks for all your input!
Sascha
EDIT: I completed a prototype implementation using the PLplot library (http://plplot.sourceforge.net/). The results are nice and should be enough for the moment. I discovered and chose this library because a related project (the VRPH software package by Groer) used it for plotting and its source code was distributed, so the implementation was done in a short amount of time. The API is, in my opinion, a bit awkward and low-level. Maybe there are some more modern (perhaps not C-based) libraries out there? MathGL? Dislin? Maybe I will try them too.
The nice thing about drawing multiple tours in a vehicle routing problem is that "not so bad" algorithms tend to discover nice non-overlapping and divergent tours which is really good for the eye ;-)
It is not quite clear what you are trying to achieve, but if I understand your question correctly, you could do it using OpenGL. Since you have the vertex coordinates, it should be fairly easy.
You can use Gnuplot with an input text file that contains your solution.
It is convenient to draw the points (vertices) first, then the lines (vehicle paths) that link them.
To keep the plot script simple, you can have a separate file for each vehicle if the number of vehicles is known.
check out:
http://www.cleveralgorithms.com/nature-inspired/advanced/visualizing_algorithms.html
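As a rough illustration of that Gnuplot route (the file name tours.dat and the depot-first ordering of each tour are my assumptions, not part of Gnuplot), you could write each tour as a blank-line-separated block of "x y" lines:

#include <fstream>
#include <vector>

struct Point { double x, y; };

// Write every tour (depot -> customers -> depot) as a blank-line-separated
// block so Gnuplot draws each tour as its own connected polyline.
void write_tours(const std::vector<std::vector<Point>>& tours, const char* path)
{
    std::ofstream out(path);
    for (const auto& tour : tours) {
        for (const auto& p : tour)
            out << p.x << ' ' << p.y << '\n';
        out << '\n';               // a blank line starts a new data block in Gnuplot
    }
}

A single blank line between blocks makes Gnuplot break the connecting line, so a command like plot 'tours.dat' using 1:2 with linespoints draws each tour as a separate polyline over the same axes.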

Looking for Ideas: How would you start to write a geo-coder?

Because the open source geocoders cannot begin to compare to Google's or even Yahoo's, I would like to start a project to create a good open source geocoder. Just to clarify, a geocoder takes some text (usually with some constraints) and returns one or more lat/lon pairs.
I realize that this is a difficult and gargantuan task, so I am wondering how you might get started. What would you read? What algorithms would you familiarize yourself with? What code would you review?
And also, assuming you were going to develop this very agilely, what would you want the first prototype to be able to do?
EDIT: Let's set aside the data question for now. I am going to use OpenStreetMap data, along with a database of waypoints that I have. I would later plan to include other data sets as well, and I realize the geo-coder would be inherently limited by the quality of the original data.
The first (and probably blocking) problem would be: where do you get your data from (unless you are willing to pay thousands of dollars for proprietary sets)?
You could build a geocoding API on top of OpenStreetMap (they publish their data in dumps on a regular basis), I guess, but that data was still very incomplete last time I checked.
Algorithms are easy. Good mapping data, however, is expensive. Very expensive.
Google drove their cars all over the world, collecting this data among other things.
From a .NET point of view these articles might be interesting for you:
Writing Your Own GPS Applications: Part I
Writing Your Own GPS Applications: Part 2
Writing GIS and Mapping Software for .NET
I've only glanced at the articles but they've been on CodeProject's 'Most Popular' list for a long time.
And maybe this CodePlex project which the author of the articles above made available.
I would start at the absolute beginning by figuring out how you're going to get the data that matches a street address with a geocode. Either Google had people going around with GPS units, or they got the information from some existing source. That existing source may have been... (all guesses)
The Postal Service
Some existing maps (printed)
A bunch of enthusiastic users, early adopters of GPS technology, who were more than willing to enter street addresses and GPS coordinates
Some government entity (or entities)
Their own satellites
etc
I guess what I'm getting at is the information was either imported from somewhere or was input by someone via some interface. As my starting point I would look at how to get that information. In an open source situation, you may be able to get a bunch of enthusiastic people to enter information.
So for my first prototype, boring as it would be, I would create a form for entering information.
Then you need to know the math for figuring out the closest distance (as the crow flies). From there, try to figure out how to include roads. (My guess is you would need a data point for each and every curve, where you hold the geocoded location of the curve and the angle of the road on a north/south and east/west vector. You'd probably need to take incline into account too, to get accurate road measurements.)
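For the "as the crow flies" part, the usual starting point is the haversine great-circle distance. A minimal sketch (the function name is mine, and 6371 km is just the mean Earth radius, an approximation):

#include <cmath>

// Great-circle ("as the crow flies") distance between two lat/lon points,
// in kilometres, using the haversine formula and a mean Earth radius.
double haversine_km(double lat1, double lon1, double lat2, double lon2)
{
    const double kEarthRadiusKm = 6371.0;
    const double kDeg2Rad = 3.14159265358979323846 / 180.0;
    double dlat = (lat2 - lat1) * kDeg2Rad;
    double dlon = (lon2 - lon1) * kDeg2Rad;
    double a = std::sin(dlat / 2) * std::sin(dlat / 2) +
               std::cos(lat1 * kDeg2Rad) * std::cos(lat2 * kDeg2Rad) *
               std::sin(dlon / 2) * std::sin(dlon / 2);
    return kEarthRadiusKm * 2.0 * std::atan2(std::sqrt(a), std::sqrt(1.0 - a));
}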
That's just where I'd start.
But in all honesty, I wouldn't even start on this. Other programmers have done it already, I'm more interested in what hasn't already been done.
Get my free raw data from somewhere like http://ipinfodb.com/ip_database.php
Load it into a database, denormalizing for fast lookups
Design my API
Build it out as a RESTful web service
Return results in varying formats: JSON, XML, CSV, raw text
The first prototype should accept a ZIP code and return lat/lon in raw text.
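A minimal sketch of that first prototype, assuming a flat file of "zip,lat,lon" lines (the file name zipcodes.csv and its format are illustrative assumptions, not a real data set): load it into an in-memory table and answer lookups in raw text.

#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <unordered_map>

struct LatLon { double lat, lon; };

int main(int argc, char** argv)
{
    if (argc < 2) { std::cerr << "usage: geocode ZIP\n"; return 1; }

    // Load "zip,lat,lon" lines; zipcodes.csv is a placeholder name.
    std::unordered_map<std::string, LatLon> table;
    std::ifstream in("zipcodes.csv");
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream row(line);
        std::string zip, lat, lon;
        if (std::getline(row, zip, ',') && std::getline(row, lat, ',') && std::getline(row, lon, ','))
            table[zip] = { std::stod(lat), std::stod(lon) };
    }

    auto it = table.find(argv[1]);
    if (it == table.end()) { std::cerr << "not found\n"; return 1; }
    std::cout << it->second.lat << "," << it->second.lon << "\n";  // raw-text result
    return 0;
}

A real service would wrap the same lookup behind the RESTful endpoint and the varying output formats listed above.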

How do I write a Perl script to filter out digital pictures that have been doctored?

Last night before going to bed, I browsed through the Scalar Data section of Learning Perl again and came across the following sentence:
the ability to have any character in a string means you can create, scan, and manipulate raw binary data as strings.
An idea immediately hit me: I could actually let Perl scan the pictures that I have stored on my hard disk to check whether they contain the string Adobe. It seems that by doing so, I can tell which of them have been photoshopped. So I tried to implement the idea and came up with the following code:
#!perl
use autodie;
use strict;
use warnings;
{
    local $/ = "\n\n";               # read in paragraph-sized chunks
    my $dir = 'f:/TestPix/';
    my @pix = glob "$dir/*";
    foreach my $file (@pix) {
        open my $pic, '<', $file;
        binmode $pic;                # the pictures are binary data
        while (<$pic>) {
            if (/Adobe/) {           # crude test: the editor's name embedded in the file
                print "$file\n";
                last;                # one match is enough for this file
            }
        }
        close $pic;
    }
}
Excitingly, the code really seems to work, and it does the job of filtering out the pictures that have been photoshopped. The problem is that many pictures are edited by other utilities. I think I'm kind of stuck there. Do we have some simple but universal method to tell whether a digital picture has been edited, something like
if (!= /the original format/) {...}
Or do we simply have to add more conditions, like
if (/Adobe/|/ACDSee/|/some other picture editors/)
Any ideas on this? Or am I oversimplifying due to my miserably limited programming knowledge?
Thanks, as always, for any guidance.
Your best bet in Perl is probably ExifTool. This gives you access to whatever non-image information is embedded in the image. However, as other people have said, it's possible to strip this information out, of course.
I'm not going to say there is absolutely no way to detect alterations in an image, but the problem is extremely difficult.
The only person I know of who claims to have an answer is Dr. Neal Krawetz, who claims that digitally altered parts of an image will have different compression error rates from the original portions. He claims that re-saving a JPEG at different quality levels will highlight these differences.
I have not found this to be the case in my investigations, but perhaps you might have better results.
No. There is no functional distinction between a perfectly edited image and one which was the way it is from the start; it's all just a bag of pixels in the end, after all, and any other metadata can be removed or forged at will.
The name of the graphics program used to edit the image is not part of the image data itself but of something called metadata, which may be stored in the image file but, as others have noted, is neither required (some programs may not store it, some may give you the option of not storing it) nor reliable: if you forged an image, you might have forged the metadata as well.
So the answer to your question is: no, there is no way to universally tell whether the picture was edited, although some image-editing software may write its signature into the image file, where it will be left by the carelessness of the person editing it.
If you're inclined to learn more about image processing in Perl, you could take a look at some of the excellent modules CPAN has to offer:
Image::Magick - read, manipulate, and write a large number of image file formats
GD - create colour drawings using a large number of graphics primitives, and emit the drawings in various formats.
GD::Graph - create charts
GD::Graph3d - create 3D Graphs with GD and GD::Graph
However, there are other utilities available for identifying various image formats. It's more of a question for Super User, but on various Unix distros you can use file to identify many different types of files, and on Mac OS X, Graphic Converter has never let me down. (It was even able to open the bizarre multi-file X-ray of my cat's shattered pelvis that I got on a disc from the vet.)
How would you know what the original format was? I'm pretty sure there's no guaranteed way to tell if an image has been modified.
I can just open the file (with my favourite programming language and filesystem API) and write whatever I want into it willy-nilly. As long as I don't screw something up with the file format, you'd never know it happened.
Heck, I could print the image out and then scan it back in; how would you tell it from an original?
As others have stated, there is no way to know whether the image was doctored. I'm guessing what you basically want to know is the difference between a realistic photograph and one that has been enhanced or modified.
There's always the option of running some extremely complex image-recognition algorithm that analyzes every pixel in your image and does some very complicated work to determine whether the image was doctored. Such a solution would probably involve AI that examines millions of photos, both doctored and not, and learns from them. However, this is more of a theoretical solution and isn't very practical; you would probably only see it in movies. It would be extremely complex to develop and would probably take years, and even if you did get something like this to work, it probably still wouldn't be correct 100% of the time. I'm guessing AI technology isn't at that level yet and may take a while to get there.
A not-commonly-known feature of exiftool allows you to recognize the originating software through an analysis of the JPEG quantization tables (without relying on image metadata). It recognizes tables written by many applications. Note that some cameras may use the same quantization tables as some applications, so this isn't a 100% solution, but it is worth looking into. Here is an example of exiftool run on two images; the first was edited by Photoshop.
> exiftool -jpegdigest a.jpg b.jpg
======== a.jpg
JPEG Digest : Adobe Photoshop, Quality 10
======== b.jpg
JPEG Digest : Canon EOS 30D/40D/50D/300D, Normal
2 image files read
This will work even if the metadata has been removed.
There is existing software out there which uses various techniques (compression artifacting, comparison to signature profiles in a database of cameras, etc.) to analyze the actual image data for evidence of alteration. If you have access to such software and the software available to you provides an API for external access to these analysis functions, then there's a decent chance that a Perl module exists which will interface with that API and, if no such module exists, it could probably be created rather quickly.
In theory, it would also be possible to implement the image analysis code directly in native Perl, but I'm not aware of anyone having done so and I expect that you'd be better off writing something that low-level and processor-intensive in a fully-compiled language (e.g., C/C++) rather than in Perl.
http://www.impulseadventure.com/photo/jpeg-snoop.html is a tool that does the job reasonably well.
If there has been any cloning, there is a variation in pixel density or concentration which sometimes shows up upon manual inspection.
A Photoshop-cloned area will have an even pixel density (by this I mean the variation of pixels, compared with a scanned image).