Leaflet - Tiled map shifted - python-2.7

I have two maps: A tiled satellite map from OpenMapTiles, which is stored locally and displayed in the background. I'd like to display another map above that. At the moment it consists of a simple world map, which I created in Python with mpl_toolkits.basemap and then split into tiles with gdal2tiles.py (later I would like to limit the overlay map to certain regions like USA). But if I display both maps on top of each other, they do not cover the same area (see below).
Shifted map after displaying it with Leaflet
Unfortunately I don't know anything about Leaflet except the tutorials. I have been looking for a solution for over a week and don't even have a clue what it could be. I really would appreciate your help.
The Python script:
# -*- coding: utf-8 -*-
import os
import math
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
import subprocess # to execute shell commands
# Import settings
from settings import *
# 1. CREATE A BASEMAP
width = 4800
height = 4800
dpi = 120
plt.figure( figsize=(width/dpi, height/dpi) )
map = Basemap( projection='merc', resolution='c',
               lat_0=51., lon_0=10.,
               llcrnrlat=-85.051, urcrnrlat=85.051,
               llcrnrlon=-180.000, urcrnrlon=180.000
               )
map.fillcontinents(color='coral')
# Save the image
plt.axis('off')
filename = os.path.join(fileSettings.oPath, fileSettings.oFile)
plt.savefig( filename, dpi=float(dpi),
             transparent=True,
             bbox_inches='tight', pad_inches=0
             )
# Call map tile generator
dirname = os.path.join(fileSettings.oPath, "tiles")
subprocess.check_output( "rm -rf " + dirname,
                         stderr=subprocess.STDOUT,
                         shell=True,
                         )
tilesize = 256
minZoom = 0
#maxZoom = 4
maxZoom = int(math.ceil( np.log2(max(width, height)/tilesize) ))  # math.ceil: round up
try:
    subprocess.check_output( "helpers/mapTileGenerator/gdal2tiles.py --leaflet --profile=raster --zoom=" + str(minZoom) + "-" + str(maxZoom) + " " + filename + " " + dirname,
                             stderr=subprocess.STDOUT,
                             shell=True,
                             )
    print("Ready.")
except subprocess.CalledProcessError as e:
    print("Error {}: {}".format(e.returncode, e.output))
The .js file:
var map;
function initMap() {
    // Define zoom settings
    var minZoom = 0,   // The smallest zoom level.
        maxZoom = 4,   // The biggest zoom level.
        zoomDelta = 1, // How many zoom levels to zoom in/out when using the zoom buttons or the +/- keys on the keyboard.
        zoomSnap = 0;  // Fractional zoom, e.g. if you set a value of 0.25, the valid zoom levels of the map will be 0, 0.25, 0.5, 0.75, 1., and so on.
    // Example for locally stored map tiles:
    var openMapTilesLocation = 'openMapTiles/tiles/{z}/{x}/{y}.png'; // Location of the map tiles.
    var openMapTilesAttribution = 'Map data © OpenMapTiles Satellite contributors'; // Appropriate reference to the source of the map tiles.
    // Example for including our self-created map tiles:
    var myMapLocation = 'tiles/map/{z}/{x}/{y}.png'; // Location of the map tiles.
    var myMapAttribution = 'Map data © ... contributors'; // Appropriate reference to the source of the tiles.
    // Create two base layers
    var satellite = L.tileLayer(openMapTilesLocation, { attribution: openMapTilesAttribution }),
        myMap = L.tileLayer(myMapLocation, { attribution: myMapAttribution });
    // Add the default layers to the map
    map = L.map('map', {
        minZoom: minZoom,
        maxZoom: maxZoom,
        zoomDelta: zoomDelta,
        zoomSnap: zoomSnap,
        crs: L.CRS.EPSG3857,
        layers: [satellite, myMap], // Layers that are displayed at startup.
    });
    // Set the start position and zoom level
    var startPosition = new L.LatLng(51., 10.); // The geographical centre of the map (latitude and longitude of the point).
    map.setView(startPosition, maxZoom);
    // Next, we’ll create two objects. One will contain our base layers and one will contain our overlays. These are just simple objects with key/value pairs. The key sets the text for the layer in the control (e.g. “Satellite”), while the corresponding value is a reference to the layer (e.g. satellite).
    var baseMaps = {
        "Satellite": satellite,
    };
    var overlayMaps = {
        "MyMap": myMap,
    };
    // Now, all that’s left to do is to create a Layers Control and add it to the map. The first argument passed when creating the layers control is the base layers object. The second argument is the overlays object.
    L.control.layers(baseMaps, overlayMaps).addTo(map);
}

Related

How to draw sub-structures of a polycyclic aromatic which shows bond angles correctly?

Thank you for reading my question.
Assume that I have a polycyclic aromatic (let's call it "parent molecule") as shown below:
smile = "c1ccc2ocnc2c1"
mol = Chem.MolFromSmiles(smile)
When I draw sub-structures of the parent molecule, I notice that the bond angles in sub-structures are different from the bond angles in the parent molecule. Following is the code that I use:
from rdkit import Chem
from rdkit.Chem.Draw import rdMolDraw2D
from IPython.display import SVG
smile_1 = 'c(cc)cc'
smile_2 = 'n(co)c(c)c'
m1 = Chem.MolFromSmiles(smile_1,sanitize=False)
Chem.SanitizeMol(m1, sanitizeOps=(Chem.SanitizeFlags.SANITIZE_ALL^Chem.SanitizeFlags.SANITIZE_KEKULIZE^Chem.SanitizeFlags.SANITIZE_SETAROMATICITY))
m2 = Chem.MolFromSmiles(smile_2,sanitize=False)
Chem.SanitizeMol(m2, sanitizeOps=(Chem.SanitizeFlags.SANITIZE_ALL^Chem.SanitizeFlags.SANITIZE_KEKULIZE^Chem.SanitizeFlags.SANITIZE_SETAROMATICITY))
mols = [m1, m2]
smiles = ["smile_1", "smile_2"]
molsPerRow=2
subImgSize=(200, 200)
nRows = len(mols) // molsPerRow
if len(mols) % molsPerRow:
    nRows += 1
fullSize = (molsPerRow * subImgSize[0], nRows * subImgSize[1])
d2d = rdMolDraw2D.MolDraw2DSVG(fullSize[0], fullSize[1], subImgSize[0], subImgSize[1])
d2d.drawOptions().prepareMolsBeforeDrawing=False
d2d.DrawMolecules(mols, legends=smiles)
d2d.FinishDrawing()
SVG(d2d.GetDrawingText())
Which results in the following drawing:
As can be seen, the angles between several bonds in the sub-structures are different from those in the parent molecule.
Is there any way to draw sub-structures with the same bond angles as the parent molecule?
Any help is greatly appreciated.
You can set the original positions of your parent to the substructure.
from rdkit import Chem
from rdkit.Chem.Draw import IPythonConsole
from rdkit.Chem import rdDepictor
rdDepictor.SetPreferCoordGen(True)
def getNiceSub(parent, sub):
    # Get the coordinates of parent (also need to build a conformer)
    mol = Chem.MolFromSmiles(parent)
    rdDepictor.Compute2DCoords(mol)
    # Get the coordinates of substructure to build a conformer
    substruct = Chem.MolFromSmiles(sub, sanitize=False)
    rdDepictor.Compute2DCoords(substruct)
    # Get the index of the matched atoms
    ms = mol.GetSubstructMatch(substruct)
    # Get the positions of the matched atoms
    conf1 = mol.GetConformer()
    p = [list(conf1.GetAtomPosition(x)) for x in ms]
    # Set the original positions of parent to substructure
    conf2 = substruct.GetConformer()
    for n in range(len(ms)):
        conf2.SetAtomPosition(n, p[n])
    return substruct
parent = 'c1ccc2ocnc2c1'
substructer = 'n(co)c(c)c'
nicesub = getNiceSub(parent, substructer)
(Rendered images: the parent molecule and the substructure, which now shares the parent's bond angles.)
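A possible way to render the returned substructure with the copied coordinates is to reuse the drawing settings from the question (a sketch, assuming the getNiceSub() helper above; the partial sanitization mirrors the question's code and prepareMolsBeforeDrawing=False keeps the conformer we just set):
from rdkit.Chem.Draw import rdMolDraw2D
from IPython.display import SVG
# Same partial sanitization as in the question, so the aromatic fragment stays drawable
Chem.SanitizeMol(nicesub, sanitizeOps=(Chem.SanitizeFlags.SANITIZE_ALL
                                       ^ Chem.SanitizeFlags.SANITIZE_KEKULIZE
                                       ^ Chem.SanitizeFlags.SANITIZE_SETAROMATICITY))
d2d = rdMolDraw2D.MolDraw2DSVG(200, 200)
d2d.drawOptions().prepareMolsBeforeDrawing = False  # keep the copied coordinates
d2d.DrawMolecule(nicesub)
d2d.FinishDrawing()
SVG(d2d.GetDrawingText())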

Registering screen change in python?

Using tkinter I have made a simple program that changes the color of all shapes on the canvas to a random color every second. What I am looking for is a way to register every screen change and write the time between screen changes to a separate file. I also need to do it without the help of too many external libraries, if possible.
My code so far:
#!/usr/bin/python -W ignore::DeprecationWarning
import Tkinter
import tkMessageBox
import time
import random
colors = ['DarkOrchid1','chocolate3','gold2','khaki2','chartreuse2','deep pink','white','grey','orange']
top = Tkinter.Tk()
global b
b=0
C = Tkinter.Canvas(top, bg="blue", height=600, width=800)
bcg1=C.create_rectangle(0,0,800,100,fill=random.choice(colors))
bcg2=C.create_rectangle(0,100,800,200,fill=random.choice(colors))
bcg3=C.create_rectangle(0,200,800,300,fill=random.choice(colors))
bcg4=C.create_rectangle(0,300,800,400,fill=random.choice(colors))
bcg5=C.create_rectangle(0,400,800,500,fill=random.choice(colors))
bcg6=C.create_rectangle(0,500,800,600,fill=random.choice(colors))
bcgs=[bcg1,bcg2,bcg3,bcg4,bcg5,bcg6]
coord = 10, 50, 240, 210
rect1=C.create_rectangle(0,0,100,100,fill="green")
rect2=C.create_rectangle(700,500,800,600,fill="green")
rect3=C.create_rectangle(0,500,100,600,fill="green")
rect4=C.create_rectangle(700,0,800,100,fill="green")
def color():
    global b
    global bcgs
    global color
    if b==0:
        C.itemconfig(rect1,fill='green')
        C.itemconfig(rect2,fill='green')
        C.itemconfig(rect3,fill='green')
        C.itemconfig(rect4,fill='green')
        b=1
        count=0
        for i in range(len(bcgs)):
            C.itemconfig(bcgs[i],fill=random.choice(colors))
    elif b==1:
        C.itemconfig(rect1,fill='red')
        C.itemconfig(rect2,fill='red')
        C.itemconfig(rect3,fill='red')
        C.itemconfig(rect4,fill='red')
        b=0
        for i in range(len(bcgs)):
            C.itemconfig(bcgs[i],fill=random.choice(colors))
    top.after(2000, color)
C.pack()
color()
top.mainloop()
Unless you use a real time OS, you will never get perfect timing. You can bank on being a few milliseconds off the mark. To see how much, you can calculate the difference in time.time(). For the best accuracy, move the after call to the first thing in the function.
That plus some other improvements:
#!/usr/bin/python -W ignore::DeprecationWarning
import Tkinter
import time
import random
from itertools import cycle
colors = ['DarkOrchid1','chocolate3','gold2','khaki2','chartreuse2','deep pink','white','grey','orange']
rect_colors = cycle(['green', 'red'])
top = Tkinter.Tk()
C = Tkinter.Canvas(top, bg="blue", height=600, width=800)
bcg1=C.create_rectangle(0,0,800,100,fill=random.choice(colors))
bcg2=C.create_rectangle(0,100,800,200,fill=random.choice(colors))
bcg3=C.create_rectangle(0,200,800,300,fill=random.choice(colors))
bcg4=C.create_rectangle(0,300,800,400,fill=random.choice(colors))
bcg5=C.create_rectangle(0,400,800,500,fill=random.choice(colors))
bcg6=C.create_rectangle(0,500,800,600,fill=random.choice(colors))
bcgs=[bcg1,bcg2,bcg3,bcg4,bcg5,bcg6]
coord = 10, 50, 240, 210
rect1=C.create_rectangle(0,0,100,100,fill="green")
rect2=C.create_rectangle(700,500,800,600,fill="green")
rect3=C.create_rectangle(0,500,100,600,fill="green")
rect4=C.create_rectangle(700,0,800,100,fill="green")
rects = [rect1,rect2,rect3,rect4]
last_time = time.time()
def color():
    top.after(2000, color)
    global last_time
    rect_color = next(rect_colors)
    for item in rects:
        C.itemconfig(item, fill=rect_color)
    for item in bcgs:
        C.itemconfig(item, fill=random.choice(colors))
    print("{} seconds since the last run".format(time.time() - last_time))
    last_time = time.time()
C.pack()
color()
top.mainloop()
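The question also asks to write the time between changes to a separate file. A minimal sketch of that (the timings.txt file name is just an example) would replace the print call above with an append to a log file:
def log_interval(seconds, filename="timings.txt"):
    # Append the elapsed time since the previous change, one value per line.
    with open(filename, "a") as f:
        f.write("{:.3f}\n".format(seconds))
# Inside color(), instead of the print:
#     log_interval(time.time() - last_time)
#     last_time = time.time()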

How to obtain the contour plot data for each scatter points?

I have plotted a contour plot as a background, which represents the altitude of the area.
100 scatter points were added to represent the real pollutant emission sources. Is there a method to obtain the altitude at each point?
This is my code:
%matplotlib inline
import csv
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
fig=plt.figure(figsize=(16,16))
ax=plt.subplot()
xi,yi = np.linspace(195.2260,391.2260,50), np.linspace(4108.9341,4304.9341,50)
height=np.array(list(csv.reader(open("/Users/HYF/Documents/SJZ_vis/Concentration/work/terr_grd.csv","rb"),delimiter=','))).astype('float')
cmap = cm.get_cmap(name='terrain', lut=None)
terrf = plt.contourf(xi, yi, height, 100, cmap=cmap)
terr = plt.contour(xi, yi, height, 100,
                   colors='k', alpha=0.5
                   )
plt.clabel(terr, fontsize=7, inline=20)
ax.autoscale(False)
point= plt.scatter(dat_so2["xp"], dat_so2["yp"], marker='o',c="grey",s=40)
ax.autoscale(False)
for i in range(0,len(dat_so2["xp"]),1):
    plt.text(dat_so2["xp"][i], dat_so2["yp"][i],
             str(i),color="White",fontsize=16)
ax.set_xlim(225,275)
ax.set_ylim(4200,4260)
plt.show()
You can do this with scipy.interpolate.interp2d
For example, you could add to your code:
from scipy import interpolate
hfunc = interpolate.interp2d(xi,yi,height)
pointheights = np.zeros(dat_so2["xp"].shape)
for i,(x,y) in enumerate(zip(dat_so2["xp"],dat_so2["yp"])):
    pointheights[i]=hfunc(x,y)
Putting this together with the rest of your script, and some sample data, gives this (I've simplified a couple of things here, but you get the idea):
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import numpy as np
from scipy import interpolate
fig=plt.figure(figsize=(8,8))
ax=plt.subplot()
#xi,yi = np.linspace(195.2260,391.2260,50),np.linspace(4108.9341,4304.9341,50)
xi,yi = np.linspace(225,275,50),np.linspace(4200,4260,50)
# A made up function of height (in place of your data)
XI,YI = np.meshgrid(xi,yi)
height = (XI-230.)**2 + (YI-4220.)**2
#height=np.array(list(csv.reader(open("/Users/HYF/Documents/SJZ_vis/Concentration/work/terr_grd.csv","rb"),delimiter=','))).astype('float')
cmap = cm.get_cmap(name='terrain', lut=None)
terrf = plt.contourf(xi, yi, height,10, cmap=cmap)
terr = plt.contour(xi, yi, height, 10,
colors='k',alpha=0.5
)
plt.clabel(terr, fontsize=7, inline=20)
ax.autoscale(False)
# Some made up sample points
dat_so2 = np.array([(230,4210),(240,4220),(250,4230),(260,4240),(270,4250)],dtype=[("xp","f4"),("yp","f4")])
point= plt.scatter(dat_so2["xp"], dat_so2["yp"], marker='o',c="grey",s=40)
# The interpolation function
hfunc = interpolate.interp2d(xi,yi,height)
# Now, for each point, lets interpolate the height
pointheights = np.zeros(dat_so2["xp"].shape)
for i,(x,y) in enumerate(zip(dat_so2["xp"],dat_so2["yp"])):
    pointheights[i]=hfunc(x,y)
print pointheights
ax.autoscale(False)
for i in range(0,len(dat_so2["xp"]),1):
    plt.text(dat_so2["xp"][i], dat_so2["yp"][i],
             str(i),color="White",fontsize=16)
    # We can also add a height label to the plot
    plt.text(dat_so2["xp"][i], dat_so2["yp"][i],
             "{:4.1f}".format(pointheights[i]),color="black",fontsize=16,ha='right',va='top')
ax.set_xlim(225,275)
ax.set_ylim(4200,4260)
plt.show()
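As a side note, interp2d has been deprecated in recent SciPy releases. If your version no longer provides it, a sketch of the same lookup with RegularGridInterpolator (assuming the same xi, yi and height arrays as above, with rows of height running along yi) could look like:
from scipy.interpolate import RegularGridInterpolator
hfunc = RegularGridInterpolator((yi, xi), height)
# Points are passed in (y, x) order to match the grid definition above
pointheights = hfunc(np.column_stack([dat_so2["yp"], dat_so2["xp"]]))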

Opencv: detect mouse position clicking over a picture

I have this code in which I simply display an image using OpenCV:
import numpy as np
import cv2
class LoadImage:
    def loadImage(self):
        self.img=cv2.imread('photo.png')
        cv2.imshow('Test',self.img)
        self.pressedkey=cv2.waitKey(0)
        # Wait for ESC key to exit
        if self.pressedkey==27:
            cv2.destroyAllWindows()
# Start of the main program here
if __name__=="__main__":
    LI=LoadImage()
    LI.loadImage()
Once the window with the photo is displayed, I want to print to the console (terminal) the position of the mouse when I click on the picture. I have no idea how to do this. Any help please?
Here is an example mouse callback function that captures a left-button double-click:
def draw_circle(event,x,y,flags,param):
    global mouseX,mouseY
    if event == cv2.EVENT_LBUTTONDBLCLK:
        cv2.circle(img,(x,y),100,(255,0,0),-1)
        mouseX,mouseY = x,y
You then need to bind that function to a window that will capture the mouse click
img = np.zeros((512,512,3), np.uint8)
cv2.namedWindow('image')
cv2.setMouseCallback('image',draw_circle)
Then, in an infinite processing loop (or whatever you want):
while(1):
    cv2.imshow('image',img)
    k = cv2.waitKey(20) & 0xFF
    if k == 27:
        break
    elif k == ord('a'):
        print mouseX,mouseY
What Does This Code Do?
It stores the mouse position in the global variables mouseX and mouseY every time you double-click inside the black window that will be created.
elif k == ord('a'):
    print mouseX,mouseY
will print the currently stored mouse click location every time you press the a key.
Code "Borrowed" from here.
Below is my implementation:
No need to store the click position, ONLY display it:
def onMouse(event, x, y, flags, param):
    if event == cv2.EVENT_LBUTTONDOWN:
        # draw circle here (etc...)
        print('x = %d, y = %d'%(x, y))
cv2.setMouseCallback('WindowName', onMouse)
If you want to use the positions elsewhere in your code, you can obtain the coordinates in the following way:
posList = []
def onMouse(event, x, y, flags, param):
    global posList
    if event == cv2.EVENT_LBUTTONDOWN:
        posList.append((x, y))
cv2.setMouseCallback('WindowName', onMouse)
posNp = np.array(posList) # convert to NumPy for later use
import cv2
def on_click(event, x, y, p1, p2):
    if event == cv2.EVENT_LBUTTONDOWN:
        cv2.circle(img, (x, y), 3, (255, 0, 0), -1)
img = cv2.imread('photo.png')  # the image from the question
cv2.namedWindow('image')
cv2.setMouseCallback('image', on_click)
cv2.imshow('image', img)
cv2.waitKey(0)
You can detect the mouse position over a picture by handling the various mouse click events.
One thing to remember when handling mouse events: use the same window name everywhere you call cv2.imshow or cv2.namedWindow.
I have given working code (Python 3.x and OpenCV) in my answer to the following Stack Overflow post:
https://stackoverflow.com/a/60445099/11493115
You can refer to the link above for a better explanation.
Code:
import cv2
import numpy as np
#This will display all the available mouse click events
events = [i for i in dir(cv2) if 'EVENT' in i]
print(events)
#This variable we use to store the pixel location
refPt = []
#click event function
def click_event(event, x, y, flags, param):
    if event == cv2.EVENT_LBUTTONDOWN:
        print(x,",",y)
        refPt.append([x,y])
        font = cv2.FONT_HERSHEY_SIMPLEX
        strXY = str(x)+", "+str(y)
        cv2.putText(img, strXY, (x,y), font, 0.5, (255,255,0), 2)
        cv2.imshow("image", img)
    if event == cv2.EVENT_RBUTTONDOWN:
        blue = img[y, x, 0]
        green = img[y, x, 1]
        red = img[y, x, 2]
        font = cv2.FONT_HERSHEY_SIMPLEX
        strBGR = str(blue)+", "+str(green)+","+str(red)
        cv2.putText(img, strBGR, (x,y), font, 0.5, (0,255,255), 2)
        cv2.imshow("image", img)
#Here, you need to change the image name and its path according to your directory
img = cv2.imread("D:/pictures/abc.jpg")
cv2.imshow("image", img)
#calling the mouse click event
cv2.setMouseCallback("image", click_event)
cv2.waitKey(0)
cv2.destroyAllWindows()
Here is a class-based implementation of an OpenCV mouse callback for getting points on an image:
import cv2
import numpy as np
#events = [i for i in dir(cv2) if 'EVENT' in i]
#print (events)
class MousePts:
    def __init__(self,windowname,img):
        self.windowname = windowname
        self.img1 = img.copy()
        self.img = self.img1.copy()
        cv2.namedWindow(windowname,cv2.WINDOW_NORMAL)
        cv2.imshow(windowname,img)
        self.curr_pt = []
        self.point = []

    def select_point(self,event,x,y,flags,param):
        if event == cv2.EVENT_LBUTTONDOWN:
            self.point.append([x,y])
            #print(self.point)
            cv2.circle(self.img,(x,y),5,(0,255,0),-1)
        elif event == cv2.EVENT_MOUSEMOVE:
            self.curr_pt = [x,y]
            #print(self.point)

    def getpt(self,count=1,img=None):
        if img is not None:
            self.img = img
        else:
            self.img = self.img1.copy()
        cv2.namedWindow(self.windowname,cv2.WINDOW_NORMAL)
        cv2.imshow(self.windowname,self.img)
        cv2.setMouseCallback(self.windowname,self.select_point)
        self.point = []
        while(1):
            cv2.imshow(self.windowname,self.img)
            k = cv2.waitKey(20) & 0xFF
            if k == 27 or len(self.point)>=count:
                break
            #print(self.point)
        cv2.setMouseCallback(self.windowname, lambda *args : None)
        #cv2.destroyAllWindows()
        return self.point, self.img

if __name__=='__main__':
    img = np.zeros((512,512,3), np.uint8)
    windowname = 'image'
    coordinateStore = MousePts(windowname,img)
    pts,img = coordinateStore.getpt(3)
    print(pts)
    pts,img = coordinateStore.getpt(3,img)
    print(pts)
    cv2.imshow(windowname,img)
    cv2.waitKey(0)
I have ported the PyIgnition library from Pygame to OpenCV (cv2). Find the code at https://github.com/bunkahle/particlescv2
There are also several examples of how to use the particle engine in Python.
In case you want to get the coordinates by hovering over the image in Python 3, you could try this:
import numpy as np
import cv2 as cv
import os
import sys
# Reduce the size of image by this number to show completely in screen
descalingFactor = 2
# mouse callback function, which will print the coordinates in console
def print_coord(event,x,y,flags,param):
    if event == cv.EVENT_MOUSEMOVE:
        print(f'{x*descalingFactor, y*descalingFactor}\r', end="")
img = cv.imread(cv.samples.findFile('TestImage.png'))
imgheight, imgwidth = img.shape[:2]
resizedImg = cv.resize(img,(int(imgwidth/descalingFactor), int(imgheight/descalingFactor)), interpolation = cv.INTER_AREA)
cv.namedWindow('Get Coordinates')
cv.setMouseCallback('Get Coordinates',print_coord)
cv.imshow('Get Coordinates',resizedImg)
cv.waitKey(0)
If anyone wants a multi-process-based GUI for drawing points and dragging to move them, here is a single-file script for that.

Basemap Change color of neighbour countries

import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
fig = plt.figure(figsize=(20,10)) # predefined figure size, change to your liking.
# It doesn't matter if you save to a vector graphics format though (e.g. pdf)
ax = fig.add_axes([0.05,0.05,0.9,0.85])
# These coordinates form the bounding box of Germany
bot, top, left, right = 5.87, 15.04, 47.26, 55.06 # just to zoom in to only Germany
map = Basemap(projection='merc', resolution='l',
              llcrnrlat=left,
              llcrnrlon=bot,
              urcrnrlat=right,
              urcrnrlon=top)
map.readshapefile('./DEU_adm/DEU_adm1', 'adm_1', drawbounds=True) # plots the state boundaries, read explanation below code
map.drawcoastlines()
map.fillcontinents(color='lightgray')
long1 = np.array([ 13.404954, 11.581981, 9.993682, 8.682127, 6.960279,
6.773456, 9.182932, 12.373075, 13.737262, 11.07675 ,
7.465298, 7.011555, 12.099147, 9.73201 , 7.628279,
8.801694, 10.52677 , 8.466039, 8.239761, 10.89779 ,
8.403653, 8.532471, 7.098207, 7.216236, 9.987608,
7.626135, 11.627624, 6.852038, 10.686559, 8.047179,
8.247253, 6.083887, 7.588996, 9.953355, 10.122765])
lat1 = np.array([ 52.520007, 48.135125, 53.551085, 50.110922, 50.937531,
51.227741, 48.775846, 51.339695, 51.050409, 49.45203 ,
51.513587, 51.455643, 54.092441, 52.375892, 51.36591 ,
53.079296, 52.268874, 49.487459, 50.078218, 48.370545,
49.00689 , 52.030228, 50.73743 , 51.481845, 48.401082,
51.960665, 52.120533, 51.47512 , 53.865467, 52.279911,
49.992862, 50.775346, 50.356943, 49.791304, 54.323293])
x, y = map(long1, lat1)
map.plot(x,y,'.') # Use the dot-marker or use a different marker, but specify the `markersize`.
How can I change the color of the neighbouring countries of Germany?
The state boundaries are obtained from a shapefile. These can be downloaded from e.g. Global Administrative Areas (the shapefiles from that website can be used for non-commercial purposes only).
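A minimal sketch of one way to make Germany stand out from its neighbours, assuming the DEU_adm1 shapefile read in above: fillcontinents paints every land mass the same colour, so the state polygons that readshapefile stores on the map object can be filled in a different colour on top.
from matplotlib.patches import Polygon
from matplotlib.collections import PatchCollection
# readshapefile(..., 'adm_1') stores the polygon vertices as map.adm_1;
# fill them so Germany differs from the lightgray neighbours.
patches = [Polygon(np.array(shape), closed=True) for shape in map.adm_1]
ax.add_collection(PatchCollection(patches, facecolor='white',
                                  edgecolor='black', linewidths=0.3))
# Add this after fillcontinents() and before map.plot(...), so the markers
# are still drawn on top of the white German states.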