This should be simple, but whenever I search for it I only find web packages. I need something better than the approach shown in This Blog, perhaps using an .osm file or shapefiles: some way to give a bounding box and get the OpenStreetMap background on a Basemap map.
I found some questions like this on Stack Overflow, but the answers either point to downloading the .png file from the OpenStreetMap website or to using some web package.
I would suggest not trying to make things work together that are not (yet) made to do so.
There is a simple way to achieve what you want with Mplleaflet.
https://github.com/jwass/mplleaflet
The library allows you to visualize geographic data on a beautiful interactive OpenStreetMap map; data given in lon/lat format is projected automatically.
Installation on Windows and Ubuntu is easy:
pip install mplleaflet
You can start with the provided examples and go from there.
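For illustration, here is a minimal sketch of the usual workflow (the coordinates are arbitrary made-up points): plot in lon/lat with matplotlib, then hand the current figure to mplleaflet.

import matplotlib.pyplot as plt
import mplleaflet

# Coordinates are plain longitude/latitude (WGS84); mplleaflet handles the projection.
lons = [4.90, 5.12, 5.39]
lats = [52.37, 52.09, 52.16]
plt.plot(lons, lats, 'r', linewidth=3)
plt.plot(lons, lats, 'bo')

mplleaflet.show()  # writes an HTML file and opens it on an interactive OSM background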
There are many libraries today that can do this for you - smopy, folium and tilemapbase are three examples from my recent use.
Each of these tools fetches map tiles from one of the several servers that host OSM or other (Stamen, Carto, etc.) map tiles and then lets you display and plot on them using matplotlib. Tilemapbase also caches the tiles locally so that they are not fetched again the next time.
But there does not seem to be a readily available tool yet, based on my recent experience, to use offline tilesets (such as a compressed .mbtiles file) as background for matplotlib plotting.
This link contains a survey of the above tools and more - https://github.com/ispmarin/maps
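As a quick illustration, here is a minimal smopy sketch (the bounding box, zoom level and point are arbitrary): the library fetches the OSM tiles for the box and gives you a matplotlib axis to draw on.

import smopy

# Bounding box as (lat_min, lon_min, lat_max, lon_max), plus a zoom level.
osm_map = smopy.Map((42.0, -1.0, 55.0, 3.0), z=5)

ax = osm_map.show_mpl(figsize=(8, 8))
x, y = osm_map.to_pixels(48.86, 2.33)  # project lat/lon onto the tile image
ax.plot(x, y, 'or', markersize=8)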
EDIT
I had mentioned in my previous answer that Tilemapbase did not work for some geographical locations in the world, and hence explicitly recommended not to use it. But it turns out I was wrong, and I apologize for that. It actually works great! The problem in my case was embarrassingly simple - I had reversed the order of lat and lon while fetching tiles, and hence it always fetched blank tiles for certain geographical locations, leading me to assume that it did not work for those locations.
I raised the issue on GitHub and it was immediately resolved by the developers. See it here - https://github.com/MatthewDaws/TileMapBase/issues/7
Note the responses:
Coordinates are to be provided in order (1) longitude, (2) latitude. If you copied them from Google Maps, they will be in lat/lon order and you have to flip them. So your map image is not empty, it's just a location in the ocean north of Norway.
And from the developer himself:
Yes, when I wrote the code, it seemed that there wasn't a universal standard for ordering. So I chose the one which is different to Google Maps. The method name from_lonlat should give a hint as to the correct ordering...
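To make the ordering explicit, here is a minimal tilemapbase sketch; note that from_lonlat takes both longitudes first, then both latitudes. The extent and point are arbitrary, and the OSM tile-source constructor has been renamed between versions, so check your installed version.

import matplotlib.pyplot as plt
import tilemapbase

tilemapbase.init(create=True)  # set up the local tile cache

# from_lonlat: longitudes first, then latitudes (the opposite of Google Maps).
extent = tilemapbase.Extent.from_lonlat(5.0, 11.0, 50.0, 54.0)

tiles = tilemapbase.tiles.build_OSM()  # may be tiles.OSM in older versions
fig, ax = plt.subplots(figsize=(8, 8))
plotter = tilemapbase.Plotter(extent, tiles, width=600)
plotter.plot(ax, tiles)

x, y = tilemapbase.project(9.0, 52.0)  # also (lon, lat)
ax.plot(x, y, 'ro')
plt.show()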
For those who are using Cartopy, this is relatively simple:
import matplotlib.pyplot as pl
import numpy as np
import cartopy.crs as ccrs
import cartopy.io.img_tiles as cimgt
request = cimgt.OSM()
# Bounds: (lon_min, lon_max, lat_min, lat_max):
extent = [1, 13, 45, 53]
ax = pl.axes(projection=request.crs)
ax.set_extent(extent)
ax.add_image(request, 5) # 5 = zoom level
# Just some random points/lines:
pl.scatter(4.92, 51.97, transform=ccrs.PlateCarree())
pl.plot([4.92, 9], [51.97, 47], transform=ccrs.PlateCarree())
This produces:
You can download the necessary tiles yourself from one of the tile servers. The OSM wiki explains the technical details behind slippy map tile names and also includes examples for various programming and scripting languages.
Please also read about the tile usage policy and keep in mind that different tile servers may have different policies.
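For reference, the lon/lat-to-tile-number conversion described on that wiki page boils down to a few lines; the example coordinates here are arbitrary.

import math

def deg2num(lat_deg, lon_deg, zoom):
    # Standard slippy-map tile numbering, as described on the OSM wiki.
    lat_rad = math.radians(lat_deg)
    n = 2 ** zoom
    xtile = int((lon_deg + 180.0) / 360.0 * n)
    ytile = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return xtile, ytile

# Tile covering roughly the centre of Berlin at zoom 12; it could then be fetched as
# https://tile.openstreetmap.org/12/{x}/{y}.png (subject to the usage policy).
print(deg2num(52.52, 13.40, 12))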
This is very easy with geopandas and contextily.
Have a look at https://geopandas.org/gallery/plotting_basemap_background.html.
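A minimal sketch along the lines of that gallery example, using a couple of made-up points in lon/lat:

import geopandas as gpd
import contextily as ctx

# Hypothetical point data in lon/lat; to_crs reprojects to the Web Mercator used by web tiles.
gdf = gpd.GeoDataFrame(
    geometry=gpd.points_from_xy([4.92, 9.18], [51.97, 48.78]), crs="EPSG:4326"
)
ax = gdf.to_crs(epsg=3857).plot(color="red", markersize=40)
ctx.add_basemap(ax)  # fetches map tiles to draw behind the plot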
Related
I have a database full of families with addresses. I want to print out mailing labels for each family. I have various Avery labels to use; is there an easy way to do this task? Is there a library or some tutorials you know of that others have used to accomplish this?
I used a project that was ported to Python 2.6 and used pyPDF to make a PDF with labels of specific dimensions, but I think it may be outdated: the printed labels don't line up. Do I just need to adjust these, or is there an easier way to save the data and do a mail merge in Word?
If there is not another way, I guess I'll just create a spreadsheet with the fields to import into Word.
Not sure if anybody will find this helpful. I had an extremely tough time finding a way to print out mailing labels from my Django app.
Instead I decided to export a spreadsheet using the xlwt library. Then you can use MS Word's Mail Merge functions to get Avery labels for each of your contacts.
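A minimal xlwt sketch of that export (the records and column names here are made up; in practice they would come from your database):

import xlwt

families = [
    ("The Smith Family", "123 Main St", "Springfield", "IL", "62704"),
    ("The Lee Family", "456 Oak Ave", "Portland", "OR", "97205"),
]

wb = xlwt.Workbook()
ws = wb.add_sheet("Labels")
for col, header in enumerate(["Name", "Street", "City", "State", "Zip"]):
    ws.write(0, col, header)
for row, record in enumerate(families, start=1):
    for col, value in enumerate(record):
        ws.write(row, col, value)
wb.save("mailing_labels.xls")  # use this file as the Mail Merge data source in Word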
I'm helping a professor working on a satellite image analysis project. We need 800 images from Google Maps stitched together to cover a square area, each image at 8000x8000 resolution. It is possible to download them one by one, but I believe there must be a way to write a script for batch processing.
I would like to ask how I can implement this with a shell or Python script, and how I can download the images from a Google Maps URL.
Here is an example of the url:
https://maps.google.com.au/maps/myplaces?ll=-33.071009,149.554911&spn=0.027691,0.066047&ctz=-660&t=k&z=15
However, I'm not able to work out a direct image download link from this.
Update:
Actually, I solved this problem; however, out of deference to Google's intentions, I will not post the method here.
Have you tried the Google static maps API?
You get 25 000 free requests, but you're limited to 640x640, so you'll need to do ~160 requests at a higher zoom level.
I suggest downloading the images like so: Downloading a picture via urllib and python
URL to start with: http://maps.googleapis.com/maps/api/staticmap?center=-33.071009,149.554911&zoom=15&size=640x640&sensor=false&maptype=satellite
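A minimal sketch of fetching one such tile (note that the Static Maps API may also require an API key nowadays):

import urllib.request

# One 640x640 satellite tile; the centre and zoom are taken from the URL above.
# For a grid, step the centre coordinates and repeat the request.
url = ("http://maps.googleapis.com/maps/api/staticmap"
       "?center=-33.071009,149.554911&zoom=15&size=640x640"
       "&sensor=false&maptype=satellite")
urllib.request.urlretrieve(url, "tile_0.png")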
It's been a long time since I solved the problem; sorry for the delay.
I posted my code to GitHub here; please star or fork as you like :)
The idea is to use a virtual web browser at a very high resolution to load the Google Maps page and then capture the page. The drawback is that Google watermarks appear all over each image; the workaround is to oversample each image at a higher resolution and then use a stitching technique to stick them all together.
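The page-capture idea might look roughly like this with Selenium (the URL, window size, and wait time are placeholders; cropping out the watermarks and stitching the captures together are not shown):

import time
from selenium import webdriver

driver = webdriver.Firefox()
driver.set_window_size(4000, 4000)       # oversized window so one capture covers more area
driver.get("https://maps.google.com.au/maps?ll=-33.071009,149.554911&z=15&t=k")
time.sleep(10)                           # give the tiles time to load
driver.save_screenshot("capture.png")
driver.quit()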
I would like to use R to generate some graphs/plots/charts and then use GTK to display them. One feature is that the plot must be able to auto-update and have some interactive features such as setting maxima/minima labels, re-scaling, allowing for normalisation, etc. The data set is potentially of the order of several thousand data points, possibly up to tens of thousands.
Are there any libraries/modules that already do that? My Google-fu was weak. I do not mind either a c++ or a python one.
If there is no such library, how would I be able to achieve this?
Note: The system is kind of embedded -- it certainly has no Internet connection but does have an internal network. Using the web would increase the cost of the system drastically and thus it is not a good solution to my problem.
As you've put python in your tags too, maybe matplotlib would be of some interest? Just in case.
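For what it's worth, here is a minimal sketch of an auto-updating matplotlib plot with about 10,000 points (matplotlib can also be embedded in a GTK window through its GTK backends):

import numpy as np
import matplotlib.pyplot as plt

plt.ion()                                  # interactive mode: draw without blocking
fig, ax = plt.subplots()
n = 10_000
line, = ax.plot(np.random.randn(n), np.random.randn(n), '.', markersize=2)

for _ in range(20):
    line.set_data(np.random.randn(n), np.random.randn(n))  # simulate new data arriving
    ax.relim()
    ax.autoscale_view()
    plt.pause(0.2)                         # redraw and process GUI events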
I wondered whether 10,000 points would be an issue with these graphics devices, and with this gWidgets script running under RGtk2 and Qt it was just about on the borderline of being fast enough to be acceptable (on my aging machine 100,000 points was certainly way too many):
library(gWidgets)
options(guiToolkit="RGtk2")

w <- gwindow("test")
pg <- gpanedgroup(cont=w)    # split window: controls on one side, plot on the other
fl <- glayout(cont=pg)       # grid layout for the controls
gg <- ggraphics(cont=pg)     # embedded graphics device
size(gg) <- c(600, 600)

fl[1,1] <- "No. points"
fl[1,2] <- no_pts <- gedit("10", cont=fl, coerce.with=as.numeric)
fl[2,2] <- gbutton("click me", cont=fl, label="", handler=function(h,...) {
  n <- svalue(no_pts)
  plot(rnorm(n), rnorm(n))   # redraws into the embedded ggraphics device
})
If this speed is acceptable, one can make a GUI along the lines of playwith for your specific needs relatively easily. It might be that the cranvas package can make this faster for Qt.
Otherwise, I don't know if the rgl package of Duncan Murdoch would be useful, but it might be. Simon Urbanek gave a very nice presentation at the last useR meeting where the OpenGL graphics engine in some browsers allowed for very fast plots with over 1,000,000 points, and this was done over a websocket.
First of all, R at its core does not feature interactive plots -- this goes against the idea of controlling almost everything with the programming language itself.
There are some libraries that allow you to create more or less interactive plots, starting from the simplistic locator function that you would need to wrap into your R programs, and including the manipulate package from RStudio as well as the iplot package. There is even a GTK+ based R package called playwith.
Depending on what you actually want to achieve, maybe using gnuplot would be a better idea.
For a web-based solution (the web is the future :)) that allows this kind of functionality from a server, I would take a look at the shiny package just released by the people at RStudio. It looks like what you need, without you having to do any programming. And you get the bonus that anyone with a browser can open it from anywhere. See this link:
http://blog.rstudio.org/2012/11/08/introducing-shiny/
Because the open source geo-coders cannot begin to compare to Google's or even Yahoo's, I would like to start a project to create a good open source geo-coder. Just to clarify, a geo-coder takes some text (usually with some constraints) and returns one or more lat/lon pairs.
I realize that this is a difficult and gargantuan task, so I am wondering how you might get started. What would you read? What algorithms would you familiarize yourself with? What code would you review?
And also, assuming you were going to develop this very agilely, what would you want the first prototype to be able to do?
EDIT: Let's set aside the data question for now. I am going to use OpenStreetMap data, along with a database of waypoints that I have. I would later plan to include other data sets as well, and I realize the geo-coder would be inherently limited by the quality of the original data.
The first (and probably blocking) problem would be: where do you get your data from? (unless you are willing to pay thousands of dollars for proprietary sets).
I guess you could build a geocoding API on top of OpenStreetMap (they publish their data as dumps on a regular basis), but that data was still very incomplete last time I checked.
Algorithms are easy. Good mapping data, however, is expensive. Very expensive.
Google drove their cars all over the world, collecting this data among other things.
From a .NET point of view these articles might be interesting for you:
Writing Your Own GPS Applications: Part I
Writing Your Own GPS Applications: Part 2
Writing GIS and Mapping Software for .NET
I've only glanced at the articles but they've been on CodeProject's 'Most Popular' list for a long time.
And maybe this CodePlex project which the author of the articles above made available.
I would start at the absolute beginning by figuring out how you're going to get the data that matches a street address with a geocode. Either Google had people going around with GPS units, OR they got the information from some existing source. That existing source may have been... (all guesses)
The Postal Service
Some existing maps(printed)
A bunch of enthusiastic users who were early adopters of GPS technology and were more than willing to enter street addresses and GPS coordinates
Some government entity (or entities)
Their own satellites
etc
I guess what I'm getting at is the information was either imported from somewhere or was input by someone via some interface. As my starting point I would look at how to get that information. In an open source situation, you may be able to get a bunch of enthusiastic people to enter information.
So for my first prototype, boring as it would be, I would create a form for entering information.
Then you need to know the math for figuring out the closest distance (as the crow flies). From there, try to figure out how to include roads. (My guess is you would have to have a data point for each and every curve, where you hold the geocoded location of the curve, and the angle of the road on a north/south and east/west vector. You'd probably need to take incline into account, too, to get accurate road measurements.)
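For reference, the as-the-crow-flies part is the standard haversine formula; a minimal sketch:

import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two lat/lon points, in kilometres.
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Example: New York to Los Angeles, roughly 3,900 km.
print(haversine_km(40.71, -74.01, 34.05, -118.24))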
That's just where I'd start.
But in all honesty, I wouldn't even start on this. Other programmers have done it already, I'm more interested in what hasn't already been done.
I would:
get my free raw data from somewhere like http://ipinfodb.com/ip_database.php
load it into a database, denormalizing for fast lookups
design my API
build it out as a RESTful web service
return results in varying formats: JSON, XML, CSV, raw text
The first prototype should accept a ZIP code and return lat/lon in raw text.
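A minimal sketch of that first prototype, using Flask and a hypothetical in-memory ZIP table (the real data would come from the database built in the earlier steps):

from flask import Flask

app = Flask(__name__)

# Hypothetical lookup table; in practice this would query the denormalized database.
ZIP_TO_LATLON = {"10001": (40.7506, -73.9972), "94103": (37.7726, -122.4099)}

@app.route("/geocode/<zip_code>")
def geocode(zip_code):
    lat, lon = ZIP_TO_LATLON.get(zip_code, (None, None))
    if lat is None:
        return "not found", 404
    return f"{lat},{lon}"

if __name__ == "__main__":
    app.run()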
Last night before going to bed, I browsed through the Scalar Data section of Learning Perl again and came across the following sentence:
the ability to have any character in a string means you can create, scan, and manipulate raw binary data as strings.
An idea immediately hit me that I could actually let Perl scan the pictures that I have stored on my hard disk to check if they contain the string Adobe. It seems by doing so, I can tell which of them have been photoshopped. So I tried to implement the idea and came up with the following code:
#!perl
use autodie;
use strict;
use warnings;

{
    local $/ = "\n\n";                  # read each file in paragraph-sized chunks
    my $dir = 'f:/TestPix/';
    my @pix = glob "$dir/*";
    foreach my $file (@pix) {
        open my $pic, '<:raw', $file;   # the images are binary data, so read them raw
        while (<$pic>) {
            if (/Adobe/) {
                print "$file\n";
                last;                   # one match per file is enough
            }
        }
    }
}
Excitingly, the code seems to be really working and it does the job of filtering out the pictures that have been photoshopped. But the problem is that many pictures are edited by other utilities. I think I'm kind of stuck there. Do we have some simple but universal method to tell whether a digital picture has been edited or not, something like
if (!= /the original format/) {...}
Or do we simply have to add more conditions? like
if (/Adobe/ || /ACDSee/ || /some other picture editor/)
Any ideas on this? Or am I oversimplifying due to my miserably limited programming knowledge?
Thanks, as always, for any guidance.
Your best bet in Perl is probably ExifTool. This gives you access to whatever non-image information is embedded into the image. However, as other people said, it's possible to strip this information out, of course.
I'm not going to say there is absolutely no way to detect alterations in an image, but the problem is extremely difficult.
The only person I know of who claims to have an answer is Dr. Neal Krawetz, who claims that digitally altered parts of an image will have different compression error rates from the original portions. He claims that re-saving a JPEG at different quality levels will highlight these differences.
I have not found this to be the case, in my investigations, but perhaps you might have better results.
No. There is no functional distinction between a perfectly edited image, and one which was the way it is from the start - it's all just a bag of pixels in the end, after all, and any other metadata you can remove or forge all you want.
The name of the graphics program used to edit the image is not part of the image data itself but of something called metadata, which may be stored in the image file but, as others have noted, is neither required (some programs may not store it, and some let you choose not to store it) nor reliable: if you forged an image, you might have forged the metadata as well.
So the answer to your question is "no, there's no way to universally tell whether the picture was edited or not", although some image editing software may write its signature into the image file, and it may be left there through carelessness on the part of the person doing the editing.
If you're inclined to learn more about image processing in Perl, you could take a look at some of the excellent modules CPAN has to offer:
Image::Magick - read, manipulate and write of a large number of image file formats
GD - create colour drawings using a large number of graphics primitives, and emit the drawings in various formats.
GD::Graph - create charts
GD::Graph3d - create 3D Graphs with GD and GD::Graph
However, there are other utilities available for identifying various image formats. It's more of a question for Super User, but for various unix distros you can use file to identify many different types of files, and for MacOSX, Graphic Converter has never let me down. (It was even able to open the bizarre multi-file X-ray of my cat's shattered pelvis that I got on a disc from the vet.)
How would you know what the original format was? I'm pretty sure there's no guaranteed way to tell if an image has been modified.
I can just open the file (with my favourite programming language and filesystem API) and just write whatever I want into that file willy-nilly. As long as I don't screw something up with the file format, you'd never know it happened.
Heck, I could print the image out and then scan it back in; how would you tell it from an original?
As others have stated, there is no way to know if the image was doctored. I'm guessing what you basically want to know is the difference between a realistic photograph and one that has been enhanced or modified.
There's always the option of running some extremely complex image recognition algorithm that would analyze every pixel in your image and do some very complicated work to determine whether the image was doctored or not. This solution would probably involve AI that examines millions of photos, both doctored and untouched, and learns from them. However, this is more of a theoretical solution and isn't very practical... you would probably only see it in movies. It would be extremely complex to develop and would probably take years. And even if you did get something like this to work, it probably still wouldn't be 100% correct all the time. I'm guessing AI technology still isn't at that level and could take a while until it is.
A not-commonly-known feature of exiftool allows you to recognize the originating software through an analysis of the JPEG quantization tables (not relying on image metadata). It recognizes tables written by many applications. Note that some cameras may use the same quantization tables as some applications, so this isn't a 100% solution, but it is worth looking into. Here is an example of exiftool run on two images, the first of which was edited by Photoshop.
> exiftool -jpegdigest a.jpg b.jpg
======== a.jpg
JPEG Digest : Adobe Photoshop, Quality 10
======== b.jpg
JPEG Digest : Canon EOS 30D/40D/50D/300D, Normal
2 image files read
This will work even if the metadata has been removed.
There is existing software out there which uses various techniques (compression artifacting, comparison to signature profiles in a database of cameras, etc.) to analyze the actual image data for evidence of alteration. If you have access to such software and the software available to you provides an API for external access to these analysis functions, then there's a decent chance that a Perl module exists which will interface with that API and, if no such module exists, it could probably be created rather quickly.
In theory, it would also be possible to implement the image analysis code directly in native Perl, but I'm not aware of anyone having done so and I expect that you'd be better off writing something that low-level and processor-intensive in a fully-compiled language (e.g., C/C++) rather than in Perl.
http://www.impulseadventure.com/photo/jpeg-snoop.html
is a tool that does the job reasonably well.
If there has been any cloning, there is a variation in the pixel density or concentration that sometimes shows up on manual inspection:
a Photoshop-cloned area will have an unnaturally even pixel density, compared with the natural pixel variation of a scanned image.