Creating nodes at polyline vertices of a GIS shapefile - mesa-abm

I am new to mesa but have previously created an ABM (agent-based model) using NetLogo. I am aware that there is a GIS extension for the mesa ABM platform called geo-mesa. What I am interested in is whether, given a road network (polyline) shapefile, it is possible to have agents move along these roads.
(In NetLogo you do this by creating a node (a type of agent) at each vertex of the polyline, allowing agents to traverse the road network by jumping from node to node.) Is something similar possible in mesa/geo-mesa? Also, are there any other models created using mesa, apart from the tutorials on the mesa and geo-mesa websites and the GeoSchelling model?
Thanks in advance for any helpful comments
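For reference, the NetLogo-style "node at every vertex" idea can be reproduced by turning the polyline shapefile into a graph before the model starts. A rough sketch, assuming geopandas and networkx are available; "roads.shp" is a placeholder file name and simple LineString geometries are assumed:

    import geopandas as gpd
    import networkx as nx

    # Load the road polylines ("roads.shp" is a placeholder path).
    roads = gpd.read_file("roads.shp")

    # One graph node per polyline vertex, one edge per consecutive segment,
    # so agents can hop vertex-to-vertex as in NetLogo.
    G = nx.Graph()
    for line in roads.geometry:          # assumes simple LineString geometries
        coords = list(line.coords)
        for a, b in zip(coords[:-1], coords[1:]):
            length = ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
            G.add_edge(a, b, length=length)

    # A route between two vertices is then just a shortest path:
    # route = nx.shortest_path(G, source_vertex, target_vertex, weight="length")

Each graph node can then be associated with an agent, or simply used as a waypoint list for moving agents.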

You can use the ContinuousSpace and set your agents (turtles, in NetLogo terms) in motion with the get_heading() function. Some (brief) reference documentation is available at https://mesa.readthedocs.io/en/master/apis/space.html#space.ContinuousSpace. Besides the project page, the project GitHub page (https://github.com/projectmesa/mesa) is the best source for how to use mesa. It is still a young module, though, so much depends on people using it and sharing their experience. Good luck with your modelling!
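To make that suggestion concrete, here is a rough sketch using the classic mesa API (Agent(unique_id, model), RandomActivation); the Vehicle class and its route list are made up for illustration. The agent steps along a pre-computed list of road vertices in a ContinuousSpace, using get_heading() to point at the next vertex:

    import numpy as np
    from mesa import Agent, Model
    from mesa.space import ContinuousSpace
    from mesa.time import RandomActivation

    class Vehicle(Agent):
        """Hypothetical agent that walks along a list of road vertices."""
        def __init__(self, unique_id, model, route, speed=1.0):
            super().__init__(unique_id, model)
            self.route = route          # list of (x, y) polyline vertices
            self.next_idx = 0
            self.speed = speed

        def step(self):
            if self.next_idx >= len(self.route):
                return                  # end of the road reached
            target = self.route[self.next_idx]
            heading = np.asarray(self.model.space.get_heading(self.pos, target))
            dist = self.model.space.get_distance(self.pos, target)
            if dist <= self.speed:
                new_pos = target
                self.next_idx += 1      # "jump" to the next vertex, NetLogo-style
            else:
                new_pos = np.asarray(self.pos) + self.speed * heading / dist
            self.model.space.move_agent(self, tuple(new_pos))

    class RoadModel(Model):
        def __init__(self, route):
            super().__init__()
            # Route vertices are assumed to lie inside the space bounds.
            self.space = ContinuousSpace(x_max=100, y_max=100, torus=False)
            self.schedule = RandomActivation(self)
            car = Vehicle(1, self, route)
            self.space.place_agent(car, route[0])
            self.schedule.add(car)

        def step(self):
            self.schedule.step()

Whether the vertices come from a shapefile-derived graph or are hand-coded, the movement logic stays the same.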

Related

How to distinguish OpenCV cameras?

I am writing a C++ class for managing multiple cameras and reading frames from them. Let's say it is a wrapper for OpenCV. Currently I find cameras by trying to create devices in the 0-10 index range, and if there is output I know that I've found a working camera. I can always save the internal IDs of those cameras to distinguish them, but what if another camera is plugged in? It may break the order of the IDs. So is there any way to distinguish OpenCV cameras, for example by getting their hardware IDs?
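For context, the probing approach described above looks roughly like this (shown with OpenCV's Python bindings for brevity; the 0-10 index range is the arbitrary one from the question):

    import cv2

    def find_working_cameras(max_index=10):
        """Probe device indices and keep the ones that actually deliver a frame.

        Note: the index -> physical device mapping is decided by the OS/driver,
        so it can change as cameras are plugged in or out; this is exactly the
        fragility the question is about.
        """
        working = []
        for index in range(max_index):
            cap = cv2.VideoCapture(index)
            if cap.isOpened():
                ok, _frame = cap.read()
                if ok:
                    working.append(index)
            cap.release()
        return working

    print(find_working_cameras())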
I know this doesn't help you much, but the short answer is "No, OpenCV doesn't currently provide that capability."
According to the documentation, hardware IDs are not among the properties you can retrieve using the get method (or any other).
Having said that, if you're very intent on using OpenCV, I would still test the behavior of OpenCV 2.4.10 on various platforms with various middleware and see how it behaves. If you get consistent behavior, then you can run with it, but be somewhat prepared for it to break in the future. What works in your favor is that OpenCV uses various middleware in the backend, such as V4L, Qt, etc., and these are well maintained and more or less consistent.
In retrospect, I would stay away from OpenCV's video interface altogether right now for commercial software, unless you're okay with the situation I described. Beware that the OpenCV 3.0 videoio library is unstable at this point and has open bug reports.

Is there any existing (public) source code for large-scale kinect fusion?

Kinect Fusion requires the client code to specify what is effectively a bounding-box and voxel resolution within the box before initialisation. The details of the reconstruction within the box are held in GPU memory, and therefore one runs into limits rather quickly. Certainly for a space the size of, say, a standard residential house at high resolution the amount of GPU memory required is (way too) high.
The Fusion SDK allows one to copy the data to/from GPU memory, and to reset the bounding volume, resolution, etc. at any time, so in theory one could synthesise a large volume by stitching together a number of small volumes, each of which a normal GPU can handle. It seems to me to be a technique with quite a few subtle and difficult problems associated with it, though.
Nevertheless this seems to have been done with Kintinuous. But Kintinuous does not seem to have any source code (or object code, for that matter) publicly available. This is also mentioned in this SO post.
I was wondering if this has been implemented in any form with public source code. I've not been able to find anything on this except the above mentioned kintinuous.
Kintinuous is open source now, since its first commit on Oct 22, 2015.
Here is another blog post under the Kintinuous tag: https://hackaday.com/tag/kintinuous/
There is experimental open-source large-scale Kinect Fusion code in PCL. It stitches volumes whenever the camera pose crosses a threshold; check this.
There is also a newer version of scalable Kinect Fusion from MSR, but they haven't put it into the SDK yet, so you can't use it right away.
They use a hierarchical data structure to store the unbounded reconstruction volume. You can check this, then download their paper and implement it yourself.
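To illustrate what that volume shifting amounts to, here is a purely conceptual sketch in Python/NumPy (not PCL's or MSR's actual code; sizes and thresholds are made up):

    import numpy as np

    VOXELS = 128            # voxels per side of the local TSDF block
    VOXEL_SIZE = 0.02       # 2 cm voxels -> a 2.56 m cube around the camera
    SHIFT_THRESHOLD = 0.5   # metres of camera travel before the block is shifted

    class MovingTSDFVolume:
        """Keep a fixed-size TSDF block near the camera; stream out and
        recentre whenever the camera drifts too far from the block centre."""

        def __init__(self):
            self.tsdf = np.ones((VOXELS, VOXELS, VOXELS), dtype=np.float32)
            self.centre = np.zeros(3)      # world position of the block centre
            self.archived = []             # surface data streamed out so far

        def maybe_shift(self, camera_position):
            offset = camera_position - self.centre
            if np.linalg.norm(offset) < SHIFT_THRESHOLD:
                return
            shift = np.round(offset / VOXEL_SIZE).astype(int)
            # A real system would extract the surface from the slices about to
            # leave the block and append it to the global model here.
            self.archived.append(("surface near", self.centre.copy()))
            for axis, n in enumerate(shift):
                if n == 0:
                    continue
                self.tsdf = np.roll(self.tsdf, -n, axis=axis)
                cleared = [slice(None)] * 3
                cleared[axis] = slice(-n, None) if n > 0 else slice(None, -n)
                self.tsdf[tuple(cleared)] = 1.0   # unknown space re-enters the block
            self.centre = self.centre + shift * VOXEL_SIZE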
============EDIT=============
You can get another codebase from the Technical University of Munich, named 'fastfusion'. It uses a multi-resolution octree to store the voxels and extracts a mesh every second in another thread. It uses OpenNI.
It doesn't contain camera tracking, but you can use their dvo-slam for visual odometry.
There is also a recently released project named InfiniTAM that uses a hash table to store the voxels. It can run on Kinect v2.
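As a toy illustration of the hash-table idea (not InfiniTAM's actual code; block and voxel sizes are arbitrary), the world is kept sparse by lazily allocating small fixed-size voxel blocks in a dictionary keyed by integer block coordinates:

    import numpy as np

    BLOCK = 8          # 8x8x8 voxels per block
    VOXEL_SIZE = 0.01  # 1 cm voxels

    blocks = {}        # (bx, by, bz) -> np.ndarray of TSDF values, allocated lazily

    def voxel_ref(point):
        """Return (block array, local index) for a world-space point in metres."""
        key = tuple(np.floor(point / (BLOCK * VOXEL_SIZE)).astype(int))
        if key not in blocks:
            blocks[key] = np.ones((BLOCK, BLOCK, BLOCK), dtype=np.float32)
        local = np.floor(point / VOXEL_SIZE).astype(int) % BLOCK
        return blocks[key], tuple(local)

    vol, idx = voxel_ref(np.array([1.234, -0.5, 2.0]))
    vol[idx] = 0.0     # write a TSDF value at that voxel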
I found this code that supports moving volume:
http://www.ccs.neu.edu/research/gpc/rxkinfu/index.html
It is based on KinFu from PCL.

Design of virtual trial room

As part of my master's project I proposed to build a virtual trial room application intended for retail clothing stores. Currently it is meant to be used directly in store, though it may be extended to online stores as well.
This application will show customers how a selected garment would look on them by displaying it on a 3D replica of them on screen.
It involves 3 steps:
1. Sizing up the customer
2. Building a 3D humanoid replica of the customer
3. Applying simulated cloth to the model
My question is about the feasibility of the project and choice of framework.
Can this be achieved in real time using a normal desktop computer? If yes, what would be an appropriate framework (hardware, software, programming language, etc.) for this purpose?
Based on the work I have done till now, I was planning to achieve the above steps in the following ways:
For step 1: option a) two cameras for front and side views, or
option b) one Kinect or two Kinects for complete 3D data
For step 2: either use makehuman (http://www.makehuman.org/) code to build a customised 3D model from the above data, or build everything from scratch; I am unsure about the framework.
For step 3: I just need a few cloth samples, so I thought of building simulated clothes in Blender.
Currently I have only a vague idea about the different pieces, and I am not sure how to develop the complete application.
Theoretically this can be achieved in real time. Many useful algorithms for video tracking, stereo vision and 3D reconstruction are available in the OpenCV library. But it's very difficult to build a robust solution. For example, you'll probably need to track the human body as it moves from frame to frame and perform pose estimation (OpenCV contains the POSIT algorithm), yet it's not trivial to eliminate noise in the resulting object coordinates. For inspiration, see this nice work on video tracking.
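As a rough, hypothetical sketch of what single-view pose estimation looks like in OpenCV's Python bindings (modern OpenCV exposes cv2.solvePnP rather than the older POSIT C API; the model points, image points and camera intrinsics below are placeholders, not real measurements):

    import numpy as np
    import cv2

    # Hypothetical planar 3D landmarks on a torso model (metres) and their
    # detected 2D pixel locations in the current frame (placeholder values).
    model_points = np.array([
        [ 0.0,  0.00, 0.0],   # sternum
        [ 0.0,  0.40, 0.0],   # neck
        [-0.2,  0.35, 0.0],   # left shoulder
        [ 0.2,  0.35, 0.0],   # right shoulder
        [ 0.0, -0.40, 0.0],   # pelvis
    ], dtype=np.float64)
    image_points = np.array([
        [320, 300], [318, 220], [260, 230], [378, 228], [322, 400],
    ], dtype=np.float64)

    # Rough pinhole camera: focal length ~ image width, principal point at centre.
    w, h = 640, 480
    camera_matrix = np.array([[w, 0, w / 2],
                              [0, w, h / 2],
                              [0, 0, 1]], dtype=np.float64)
    dist_coeffs = np.zeros(5)          # assume no lens distortion

    ok, rvec, tvec = cv2.solvePnP(model_points, image_points,
                                  camera_matrix, dist_coeffs,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    if ok:
        R, _ = cv2.Rodrigues(rvec)     # body rotation w.r.t. the camera
        print("rotation:\n", R, "\ntranslation:", tvec.ravel())

The hard part, as noted above, is getting stable 2D landmarks frame after frame, not the pose solve itself.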
You might want to choose another way: simplify some things, avoid the complicated stuff, do things less dynamically, and estimate only the clothing size and approximate human location. In this case you will most likely create something useful and interesting.
I've lost the link to one online fitting room where hand and body detection are implemented. Using Kinect solves many problems. But if for some reason you won't use it, then AR (augmented reality) can help you (yet another fitting room).

3D scene file format & viewer

I am looking for a cross-platform solution for saving and viewing 3D scenes (visualizations of engineering simulation models and results) but there (still) doesn't seem to be much out there.
I looked into this almost 10 years ago and settled on VRML then (and started the project that eventually turned into OpenVRML). Unfortunately, VRML/X3D has not become anywhere near ubiquitous in the past decade.
Ideally a solution would offer a C++ library that could be plugged into a 3D rendering pipeline at some level to capture the 3D scene to a file, and a freely redistributable viewer that allowed view manipulation, part hiding, annotation, dimensioning, etc. At least Linux, Mac, and Windows should be supported.
3D PDFs would seem to meet most of the viewer requirements, but the Adobe SDK is apparently only available on Windows.
Any suggestions?
The closest thing that I'm aware of is Collada.
Many 3D engines can read it, and most 3D design tools can read and write it.
I believe the Ogre engine has pretty good support.
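If Collada is the chosen interchange format, reading a scene back programmatically is straightforward. A small sketch using the third-party pycollada library (an assumption on my part, not something mentioned above; "scene.dae" is a placeholder file name):

    from collada import Collada   # pip install pycollada

    mesh = Collada("scene.dae")   # placeholder .dae file

    # Walk the geometry: each primitive exposes its vertex positions.
    for geom in mesh.geometries:
        for prim in geom.primitives:
            print(geom.name, type(prim).__name__, prim.vertex.shape)

    # Scene-graph nodes (with their transforms) are available as well.
    if mesh.scene is not None:
        for node in mesh.scene.nodes:
            print("node:", node.id)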
If you are using OpenGL, GLIntercept will save all OpenGL calls (with the data they were called with) to an XML file. It's only half the solution, though; it shouldn't be hard to parse the file and recreate the scene yourself.
Take a look at Ogre3d.org. It's just an engine, so you must program with it, but OGRE is probably the best (free/open) platform for developing 3D right now.

CoreImage for Win32

For those not familiar with Core Image, here's a good description of it:
http://developer.apple.com/macosx/coreimage.html
Is there something equivalent to Apple's CoreImage/CoreVideo for Windows? I looked around and found the DirectX/Direct3D stuff, which has all the underlying pieces, but there doesn't appear to be any high level API to work with, unless you're willing to use .NET AND use WPF, neither of which really interest me.
The basic idea would be to create/load an image, attach any number of filters that can be chained together, forming a graph, and then render the image to an HDC, using the GPU to do most of the hard work. DirectX/Direct3D has these pieces, but you have to jump through a lot of hoops (or so it appears) to use them.
There are a variety of tools for working with shaders (such as RenderMonkey and FX-Composer), but no direct equivalent to CoreImage.
But stacking up fragment shaders on top of each other is not very hard, so if you don't mind learning OpenGL it would be quite doable to build a framework that applies shaders to an input image and draws the result to an HDC.
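As a rough illustration of that approach (using the third-party moderngl Python bindings purely as an example, which is my assumption and not something the answer prescribes), one "filter" is a fragment shader applied to a full-screen quad rendered into an offscreen framebuffer; chaining filters means feeding each pass's output texture into the next pass:

    import numpy as np
    import moderngl

    ctx = moderngl.create_standalone_context()

    # One "filter" = one fragment shader applied to a full-screen quad.
    prog = ctx.program(
        vertex_shader="""
            #version 330
            in vec2 in_vert;
            out vec2 uv;
            void main() {
                uv = in_vert * 0.5 + 0.5;
                gl_Position = vec4(in_vert, 0.0, 1.0);
            }
        """,
        fragment_shader="""
            #version 330
            uniform sampler2D source;
            in vec2 uv;
            out vec4 color;
            void main() {
                vec4 c = texture(source, uv);
                float g = dot(c.rgb, vec3(0.299, 0.587, 0.114));  // grayscale "filter"
                color = vec4(vec3(g), c.a);
            }
        """,
    )

    quad = ctx.buffer(np.array([-1, -1, 1, -1, -1, 1, 1, 1], dtype="f4").tobytes())
    vao = ctx.simple_vertex_array(prog, quad, "in_vert")

    w, h = 256, 256
    pixels = np.random.randint(0, 256, (h, w, 4), dtype=np.uint8)  # placeholder input image
    src = ctx.texture((w, h), 4, pixels.tobytes())
    dst = ctx.texture((w, h), 4)
    fbo = ctx.framebuffer(color_attachments=[dst])

    src.use(location=0)
    prog["source"].value = 0
    fbo.use()
    vao.render(moderngl.TRIANGLE_STRIP)

    # Read the filtered image back; chaining filters = use dst as the next pass's source.
    result = np.frombuffer(fbo.read(components=4), dtype=np.uint8).reshape(h, w, 4)

The HDC presentation step is Windows-specific and left out here; on the C++ side the same structure applies with raw OpenGL.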
Adobe's new Pixel Bender is the closest technology out there. It is cross-platform: it's part of the Flash 10 runtime, as well as the key pixel-oriented CS4 apps, namely After Effects and (soon) Photoshop. It's unclear, however, how much of it is currently exposed for embedding in other applications. In the most extreme case it should be possible to embed it by embedding a Flash view, but that is more overhead than would obviously be ideal.
There is also at least one smaller-scale 3rd party offering: Conduit Pixel Engine. It is commercial, with no licensing price clearly listed, however.
I've now got a solution to this. I've implemented an ImageContext class, a special Image class, and a Filter class that provide functionality similar to Apple's CoreImage. All three use OpenGL (I gave up trying to get this to work on DirectX due to image quality issues; if someone knows DirectX well, contact me, because I'd love to have a Dx version) to render image(s) to a context and use the filters to apply their effects (as GLSL frag shaders). There's a brief write-up here:
ImageKit
with a screenshot of an example filter and some sample source code.