With this code I am trying to detect, in real time, an object placed in front of the video camera and then classify it. My reasoning is the following: I wrote two for loops, the first one for detection; once an object is detected, I apply the classification in the second loop. I don't know whether this reasoning is correct. When I tested the code, I received this error:
ValueError Traceback (most recent call last)
<ipython-input-1-88a18bf89e71> in <module>()
85 for obj_coordinates in objs:
---> 87 x1, x2, y1, y2 = apply_offsets(obj_coordinates, class_offsets)
88 gray_obj = gray_obj[y1:y2, x1:x2]
89 try:
/home/nada/Desktop/testforimage/src/utils/inference.pyc in apply_offsets(obj_coordinates, offsets)
25
26 def apply_offsets(obj_coordinates, offsets):
---> 27 x, y, width, height = obj_coordinates
28 x_off, y_off = offsets
29 return (x - x_off, x + width + x_off, y - y_off, y + height + y_off)
ValueError: too many values to unpack
Could you please correct the following code and tell me whether my reasoning is correct? Thank you in advance.
video_capture = cv2.VideoCapture(0)
if video_capture.isOpened():
    frame = video_capture.read()
else:
    rval = False

while True:
    rval, frame = video_capture.read()
    gray_image = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray_image = cv2.cvtColor(np.float32(imgUMat), cv2.COLOR_RGB2GRAY)
    blur = cv2.GaussianBlur(gray_image, (5,5), 0)
    ctrs = cv2.findContours(blur.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    rects = [cv2.boundingRect(ctr) for ctr in ctrs]

    for coordinates in rects:
        a1, a2, b1, b2 = app_offsets(coordinates, obj_offsets)
        gray_image = gray_image[b1:b2, a1:a2]
        try:
            gray_image = cv2.resize(gray_image, (obj_target_size))
        except:
            continue
        gray_image = preprocess_input(gray_image, True)
        gray_image = np.expand_dims(gray_image, 0)
        gray_image = np.expand_dims(gray_image, -1)
        objs = obj_detection.predict(gray_image)
        key = cv2.waitKey(1)
        b, g, r = cv2.split(frame)      # get b,g,r
        rgb_img = cv2.merge([r, g, b])  # switch it to rgb

        for obj_coordinates in objs:
            x1, x2, y1, y2 = apply_offsets(obj_coordinates, class_offsets)
            gray_obj = gray_obj[y1:y2, x1:x2]
            try:
                gray_obj = cv2.resize(gray_obj, (class_target_size))
            except:
                continue
            gray_obj = preprocess_input(gray_obj, True)
            gray_obj = np.expand_dims(gray_obj, 0)
            gray_obj = np.expand_dims(gray_obj, -1)
            class_prediction = class_classifier.predict(gray_obj)
            class_probability = np.max(class_prediction)
            class_label_arg = np.argmax(class_prediction)
            class_text = emotion_labels[class_label_arg]
            class_window.append(class_text)
In the line ctrs = cv2.findContours(blur.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE), the function returns the contours and their hierarchy.
To draw the bounding box for each contour, you need to pass only the first output (the contours).
Change the line to the following:
ctrs = cv2.findContours(blur.copy(),cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)[0]
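If you want this to keep working regardless of the OpenCV version (the return signature of findContours changed between major releases), here is a small version-safe sketch, where blur is the blurred grayscale image from your code:
res = cv2.findContours(blur.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# OpenCV 2.x/4.x return (contours, hierarchy); OpenCV 3.x returns (image, contours, hierarchy)
ctrs = res[0] if len(res) == 2 else res[1]
rects = [cv2.boundingRect(ctr) for ctr in ctrs]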
It is your obj_coordinates that does not seem to be a 4-tuple. It is an element of objs, produced by obj_detection.predict(gray_image). The code you shared with us is not sufficient to tell what is going wrong in that function.
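A quick way to see why the unpacking fails is to inspect what predict actually returns before iterating over it (the names are taken from your code; the printed shapes are whatever your model produces):
objs = obj_detection.predict(gray_image)
print(np.shape(objs))  # overall shape of the prediction array
for obj_coordinates in objs:
    # apply_offsets unpacks exactly four values (x, y, width, height),
    # so each element must have length 4
    print(len(obj_coordinates), obj_coordinates)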
I am trying to generate a single UV texture map in the format of the SURREAL dataset. There is a notebook in the original DensePose repository that discusses how to apply texture transfer using an image from SMPL: github.com/facebookresearch/DensePose/blob/master/notebooks/DensePose-RCNN-Texture-Transfer.ipynb
However, in this case I am trying to use the outputs we get from DensePose directly:
In dump mode, I get the uv coordinates in data[0]['pred_densepose'][0].uv with dimensions: torch.Size([2, 1098, 529])
I overlaid the output from running inference on an image with the dp_u,dp_v visualization on a black background. Here is the link to the image: https://densepose.s3.amazonaws.com/test1uv.0001.png
This is the command I used to get this inference: python3 apply_net.py show configs/densepose_rcnn_R_101_FPN_DL_WC2M_s1x.yaml model_final_de6e7a.pkl input.jpg dp_u,dp_v -v --output output.png
This is the link to the original image: https://densepose.s3.amazonaws.com/02_1_front.jpg
Using these components, I am trying to generate the 24-part UV texture map in the same format as SMPL:
https://densepose.s3.amazonaws.com/extracted_smpl_texture_apprearance.png
https://densepose.s3.amazonaws.com/texture_from_SURREAL.png
It would be extremely helpful if someone can share how to solve this problem. Please let me know if additional information is needed.
I don't know whether the problem still persists or whether you were able to find a solution. In case anyone else faces the same issue, here is my solution. I put together several different pieces of code and ideas from the official GitHub issue page for DensePose (https://github.com/facebookresearch/DensePose/issues/68).
I assume that you already have the output of the apply_net.py utility from the GitHub DensePose repository. From your post this is the data output (the one you obtained data[0]['pred_densepose'][0].uv from).
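In case it helps, a minimal sketch of loading that dump (this assumes the dump was written to a pickle file, e.g. with apply_net.py dump ... --output results.pkl, and that the densepose package is importable so the pickled objects can be reconstructed; the filename and printed keys are illustrative):
import pickle

with open('results.pkl', 'rb') as f:
    data = pickle.load(f)

print(len(data))       # one entry per input image
print(data[0].keys())  # should include 'pred_boxes_XYXY' and 'pred_densepose'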
Let's do some coding:
import copy
import cv2
import matplotlib
import numpy as np
from matplotlib import pyplot as plt

matplotlib.use('TkAgg')

# I assume the data are stored in pickle, and you are able to read them
results = data[0]
IMAGE_FILE = 'path/to/image.png'


def parse_iuv(result):
    i = result['pred_densepose'][0].labels.cpu().numpy().astype(float)
    uv = (result['pred_densepose'][0].uv.cpu().numpy() * 255.0).astype(float)
    iuv = np.stack((uv[1, :, :], uv[0, :, :], i))
    iuv = np.transpose(iuv, (1, 2, 0))
    return iuv


def parse_bbox(result):
    return result["pred_boxes_XYXY"][0].cpu().numpy()


def concat_textures(array):
    texture = []
    for i in range(4):
        tmp = array[6 * i]
        for j in range(6 * i + 1, 6 * i + 6):
            tmp = np.concatenate((tmp, array[j]), axis=1)
        texture = tmp if len(texture) == 0 else np.concatenate((texture, tmp), axis=0)
    return texture


def interpolate_tex(tex):
    # code is adopted from https://github.com/facebookresearch/DensePose/issues/68
    valid_mask = np.array((tex.sum(0) != 0) * 1, dtype='uint8')
    radius_increase = 10
    kernel = np.ones((radius_increase, radius_increase), np.uint8)
    dilated_mask = cv2.dilate(valid_mask, kernel, iterations=1)
    region_to_fill = dilated_mask - valid_mask
    invalid_region = 1 - valid_mask
    actual_part_max = tex.max()
    actual_part_min = tex.min()
    actual_part_uint = np.array((tex - actual_part_min) / (actual_part_max - actual_part_min) * 255, dtype='uint8')
    actual_part_uint = cv2.inpaint(actual_part_uint.transpose((1, 2, 0)), invalid_region, 1,
                                   cv2.INPAINT_TELEA).transpose((2, 0, 1))
    actual_part = (actual_part_uint / 255.0) * (actual_part_max - actual_part_min) + actual_part_min
    # only use dilated part
    actual_part = actual_part * dilated_mask
    return actual_part


def get_texture(im, iuv, bbox, tex_part_size=200):
    # this part of the code creates an iuv image which corresponds
    # to the size of the original image (the iuv from densepose is placed
    # within the pose bounding box).
    im = im.transpose(2, 1, 0) / 255
    image_w, image_h = im.shape[1], im.shape[2]
    bbox[2] = bbox[2] - bbox[0]
    bbox[3] = bbox[3] - bbox[1]
    x, y, w, h = [int(v) for v in bbox]
    bg = np.zeros((image_h, image_w, 3))
    bg[y:y + h, x:x + w, :] = iuv
    iuv = bg
    iuv = iuv.transpose((2, 1, 0))
    i, u, v = iuv[2], iuv[1], iuv[0]

    # the following part of the code iterates over parts and creates textures
    # of size `tex_part_size x tex_part_size`
    n_parts = 24
    texture = np.zeros((n_parts, 3, tex_part_size, tex_part_size))

    for part_id in range(1, n_parts + 1):
        generated = np.zeros((3, tex_part_size, tex_part_size))
        x, y = u[i == part_id], v[i == part_id]
        # transform uv coordinates to current UV texture coordinates:
        tex_u_coo = (x * (tex_part_size - 1) / 255).astype(int)
        tex_v_coo = (y * (tex_part_size - 1) / 255).astype(int)
        # clipping due to issues encountered in densepose output;
        # for an unknown reason, some `uv` coordinates are out of the [0, 1] bound
        tex_u_coo = np.clip(tex_u_coo, 0, tex_part_size - 1)
        tex_v_coo = np.clip(tex_v_coo, 0, tex_part_size - 1)
        # write corresponding pixels from the original image to the UV texture
        # iterate in range(3) due to 3 channels
        for channel in range(3):
            generated[channel][tex_v_coo, tex_u_coo] = im[channel][i == part_id]
        # this part is not crucial, but gives you better results
        # (the texture comes out smoother)
        if np.sum(generated) > 0:
            generated = interpolate_tex(generated)
        # assign the part to the final texture carrier
        texture[part_id - 1] = generated[:, ::-1, :]

    # concatenate textures and create a 2D plane (UV)
    tex_concat = np.zeros((24, tex_part_size, tex_part_size, 3))
    for i in range(texture.shape[0]):
        tex_concat[i] = texture[i].transpose(2, 1, 0)
    tex = concat_textures(tex_concat)
    return tex


iuv = parse_iuv(results)
bbox = parse_bbox(results)
image = cv2.imread(IMAGE_FILE)[:, :, ::-1]
uv_texture = get_texture(image, iuv, bbox)

# plot texture or do whatever you like
plt.imshow(uv_texture)
plt.show()
Enjoy
I create an empty list and append the name of each detected object to it.
At the moment the list is printed on every loop iteration, so each newly added object causes the list to be output immediately instead of waiting until everything has been added.
I just need the complete list of all detected objects printed once, and the other outputs to disappear.
This is my code:
import rospy
import numpy
import tf
import math
from cv_bridge import CvBridge, CvBridgeError
from sensor_msgs import point_cloud2 as pc2
from sensor_msgs.msg import Image, PointCloud2
from dodo_detector.detection import SingleShotDetector
from dodo_detector_ros.msg import DetectedObject, DetectedObjectArray


class Detector:

    def __init__(self):
        self._detector = SingleShotDetector('frozen_inference_graph.pb', 'mscoco_label_map.pbtxt', confidence=0.5)
        self._global_frame = rospy.get_param('~global_frame', None)
        self._tf_listener = tf.TransformListener()
        self._bridge = CvBridge()
        rospy.Subscriber("/camera/rgb/image_color", Image, self.image_callback)
        rospy.Subscriber('/camera/depth/points', PointCloud2, self.pc_callback)
        self._current_image = None
        self._current_pc = None
        self._imagepub = rospy.Publisher('~labeled_image', Image, queue_size=10)
        self._publishers = {None: (None, rospy.Publisher('~detected', DetectedObjectArray, queue_size=10))}
        self._tfpub = tf.TransformBroadcaster()
        rospy.loginfo('Ready to detect!')

    def image_callback(self, image):
        """Image callback"""
        self._current_image = image

    def pc_callback(self, pc):
        """Point cloud callback"""
        self._current_pc = pc

    def run(self):
        while not rospy.is_shutdown():
            if self._current_image is not None:
                try:
                    if self._global_frame is not None:
                        (trans, _) = self._tf_listener.lookupTransform('/' + self._global_frame, '/camera_link', rospy.Time(0))
                    scene = self._bridge.imgmsg_to_cv2(self._current_image, 'rgb8')
                    marked_image, objects = self._detector.from_image(scene)  # detect objects
                    self._imagepub.publish(self._bridge.cv2_to_imgmsg(marked_image, 'rgb8'))  # publish detection results
                    msgs = {}
                    for key in self._publishers:
                        msgs[key] = DetectedObjectArray()
                    my_tf_id = []
                    my_dis = []
                    for obj_class in objects:
                        rospy.logdebug('Found ' + str(len(objects[obj_class])) + ' object(s) of type ' + obj_class)
                        for obj_type_index, coordinates in enumerate(objects[obj_class]):
                            rospy.logdebug('...' + obj_class + ' ' + str(obj_type_index) + ' at ' + str(coordinates))
                            ymin, xmin, ymax, xmax = coordinates
                            y_center = ymax - ((ymax - ymin) / 2)
                            x_center = xmax - ((xmax - xmin) / 2)
                            detected_object = DetectedObject()
                            detected_object.type.data = obj_class
                            detected_object.image_x.data = xmin
                            detected_object.image_y.data = ymin
                            detected_object.image_width.data = xmax - xmin
                            detected_object.image_height.data = ymax - ymin
                            publish_tf = False
                            if self._current_pc is None:
                                rospy.loginfo('No point cloud information available to track current object in scene')
                            else:
                                pc_list = list(pc2.read_points(self._current_pc, skip_nans=True, field_names=('x', 'y', 'z'), uvs=[(x_center, y_center)]))
                                if len(pc_list) > 0:
                                    publish_tf = True
                                    tf_id = obj_class + '_' + str(obj_type_index)  # object_number
                                    my_tf_id.append(tf_id)
                                    print my_tf_id
                                    detected_object.tf_id.data = tf_id
                                    point_x, point_y, point_z = pc_list[0]  # point_z = x, point_x = y
                            if publish_tf:
                                object_tf = [point_z, -point_x, -point_y]
                                frame = 'cam_asus_link'
                                if self._global_frame is not None:
                                    object_tf = numpy.array(trans) + object_tf
                                    frame = self._global_frame
                                self._tfpub.sendTransform((object_tf), tf.transformations.quaternion_from_euler(0, 0, 0), rospy.Time.now(), tf_id, frame)
                except CvBridgeError as e:
                    print(e)
                except (tf.LookupException, tf.ConnectivityException, tf.ExtrapolationException) as e:
                    print(e)


if __name__ == '__main__':
    rospy.init_node('dodo_detector_ros', log_level=rospy.INFO)
    try:
        Detector().run()
    except KeyboardInterrupt:
        rospy.loginfo('Shutting down')
I used this line (inside the inner loop above):
print my_tf_id
and this is the output:
[u'person_0']
[u'person_0', u'chair_0']
[u'person_0', u'chair_0', u'chair_1']
[u'person_0', u'chair_0', u'chair_1', u'book_0']
[u'person_0', u'chair_0', u'chair_1', u'book_0', u'book_1']
I just need this output:
[u'person_0', u'chair_0', u'chair_1', u'book_0', u'book_1']
and I want these intermediate outputs to disappear:
[u'person_0']
[u'person_0', u'chair_0']
[u'person_0', u'chair_0', u'chair_1']
[u'person_0', u'chair_0', u'chair_1', u'book_0']
Please help me, and thank you in advance for any suggestions.
Just to reiterate your question: you are building a list on the fly and only want to display its last state, after you have finished adding to it. In general, when asking a question like this, please create a simple example relevant to your question; there is no point adding complications from ROS, subscribers, callbacks, etc.
To your question, here are a couple of ways to handle this (see the sketch below):
Print your list only after you finish your loop; this will print everything just once.
Print just the last element you added, tf_id in your case. If you want it on the same line you can use the print statement as: print(tf_id, end='', flush=True)
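A minimal sketch of the first option, stripped of the ROS details (the detections dictionary here is just a stand-in for your detector output; my_tf_id and tf_id are the names from your code):
detections = {'person': [0], 'chair': [0, 1], 'book': [0, 1]}  # stand-in for the detector output

my_tf_id = []
for obj_class in detections:
    for obj_type_index in detections[obj_class]:
        tf_id = obj_class + '_' + str(obj_type_index)
        my_tf_id.append(tf_id)  # collect silently inside the loops

print(my_tf_id)                 # print once, after the loops have finished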
I am using wxPython along with VPython to make an orbit simulator; however, I'm having trouble getting the sliders in the GUI to affect the simulation. I assume this is because I am trying to read the value associated with the slider inside a while loop while it is running.
So my question is: how do I get the function SetRate to update inside the while loop located at the bottom of the code? (I have checked that the slider is returning values.)
Here is my code for reference:
Value = 1.0
dt = 100.0

def InputValue(Value):
    dt = Value

def SetRate(evt):
    global Value
    Value = SpeedOfSimulation.GetValue()
    return Value

w = window(menus=True, title="Planetary Orbits", x=0, y=0, width=1000, height=1000)
Screen = display(window=w, x=30, y=30, width=700, height=500)
gdisplay(window=w, x=80, y=80, width=40, height=20)
p = w.panel  # Refers to the full region of the window in which to place widgets

SpeedOfSimulation = wx.Slider(p, pos=(800,10), size=(200,100), minValue=0, maxValue=1000)
SpeedOfSimulation.Bind(wx.EVT_SCROLL, SetRate)

TestData = [2, 0, 0, 0, 6371e3, 5.98e24, 0, 0, 0, 384400e3, 0, 0, 1737e3, 7.35e22, 0, 1e3, 0]

Nstars = TestData[0]  # change this to have more or fewer stars
G = 6.7e-11           # Universal gravitational constant

# Typical values
Msun = 2E30
Rsun = 2E9
vsun = 0.8*sqrt(G*Msun/Rsun)

Stars = []
colors = [color.red, color.green, color.blue,
          color.yellow, color.cyan, color.magenta]

PositionList = []
MomentumList = []
MassList = []
RadiusList = []

for i in range(0, Nstars):
    s = i*8
    x = TestData[s+1]
    y = TestData[s+2]
    z = TestData[s+3]
    Radius = TestData[s+4]
    Stars = Stars+[sphere(pos=(x,y,z), radius=Radius, color=colors[i % 6],
                          make_trail=True, interval=10)]
    Mass = TestData[s+5]
    SpeedX = TestData[s+6]
    SpeedY = TestData[s+7]
    SpeedZ = TestData[s+8]
    px = Mass*(SpeedX)
    py = Mass*(SpeedY)
    pz = Mass*(SpeedZ)
    PositionList.append((x,y,z))
    MomentumList.append((px,py,pz))
    MassList.append(Mass)
    RadiusList.append(Radius)

pos = array(PositionList)
Momentum = array(MomentumList)
Mass = array(MassList)
Mass.shape = (Nstars,1)  # Numeric Python: (1 by Nstars) vs. (Nstars by 1)
Radii = array(RadiusList)

vcm = sum(Momentum)/sum(Mass)  # velocity of center of mass
Momentum = Momentum-Mass*vcm   # make total initial momentum equal zero

Nsteps = 0
time = clock()
Nhits = 0

while True:
    InputValue(Value)  # Represents the change in time
    rate(100000)       # No more than 100 loops per second on fast computers

    # Compute all forces on all stars
    r = pos-pos[:,newaxis]  # all pairs of star-to-star vectors (where r is the relative position vector)
    for n in range(Nstars):
        r[n,n] = 1e6  # otherwise the self-forces are infinite
    rmag = sqrt(sum(square(r),-1))  # star-to-star scalar distances
    hit = less_equal(rmag,Radii+Radii[:,newaxis])-identity(Nstars)
    hitlist = sort(nonzero(hit.flat)[0]).tolist()  # 1,2 encoded as 1*Nstars+2
    F = G*Mass*Mass[:,newaxis]*r/rmag[:,:,newaxis]**3  # all force pairs
    for n in range(Nstars):
        F[n,n] = 0  # no self-forces
    Momentum = Momentum+sum(F,1)*dt

    # Having updated all momenta, now update all positions
    pos = pos+(Momentum/Mass)*dt

    # Update positions of display objects; add trail
    for i in range(Nstars):
        Stars[i].pos = pos[i]
I know nothing about VPython, but in a normal wxPython app you would use a wx.Timer instead of a while loop.
Here is an example of wx.Timer usage, modified from https://www.blog.pythonlibrary.org/2009/08/25/wxpython-using-wx-timers/
You will want to separate the while-loop body from your SetRate handler and put it in update.
import wx

class MyForm(wx.Frame):

    def __init__(self):
        wx.Frame.__init__(self, None, wx.ID_ANY, "Timer Tutorial 1",
                          size=(500,500))

        # Add a panel so it looks correct on all platforms
        panel = wx.Panel(self, wx.ID_ANY)

        self.timer = wx.Timer(self)
        self.Bind(wx.EVT_TIMER, self.update, self.timer)

        # SetRate is the slider handler from your original code
        SpeedOfSimulation = wx.Slider(panel, pos=(800,10), size=(200,100), minValue=0, maxValue=1000)
        SpeedOfSimulation.Bind(wx.EVT_SCROLL, SetRate)
        self.SpeedOfSimulation = SpeedOfSimulation

    def update(self, event):
        # Compute all forces on all stars
        SpeedOfSimulation = self.SpeedOfSimulation.GetValue()
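Note that the snippet above never actually starts the timer or the wx main loop; a minimal sketch of the missing glue (the 100 ms interval is an arbitrary choice of mine, adjust as needed):
if __name__ == "__main__":
    app = wx.App(False)
    form = MyForm()
    form.timer.Start(100)  # fire EVT_TIMER roughly every 100 ms
    form.Show()
    app.MainLoop()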
I can't figure out why I'm getting this index out of bounds error. I've done test prints of the values at those indices and they print out correctly. Can someone explain where exactly my array is being restructured?
class Particle:

    def __init__(self, fourvector=[1.0, 1.0, 1.0, -1.0], origin=(0, 0)):
        self.mass = 2.5  # Mass in kg
        self.fourvector = np.asarray(fourvector, dtype='float')
        self.Vx_init = self.fourvector[2]
        self.x_init = self.fourvector[0]
        self.y_init = self.fourvector[1]
        self.Vy_init = self.fourvector[3]
        self.time_passed = 0
        self.origin = origin
        print fourvector[0]
        print fourvector[2]

    def position(self):
        x0 = self.origin[0]
        x1 = self.fourvector[0]
        Vx = self.fourvector[2]
        y0 = self.origin[1]
        y1 = self.fourvector[1]
        Vy = self.fourvector[3]
        x = x0 + x1 * Vx
        y = x0 + y1 * Vy
        return (x, y)

    def derivs(self, fourvector):
        '''derivative computation setup'''
        x_pos = fourvector[0]
        y_pos = fourvector[1]
        dydx = np.zeros_like(fourvector)
        dydx[0] = fourvector[2]  # x-comp of velocity
        dydx[1] = (-x_pos)/((x_pos)**2 + (y_pos)**2)**1.5
        dydx[2] = fourvector[3]  # y-comp of velocity
        dydx[3] = (-y_pos)/((x_pos)**2 + (y_pos)**2)**1.5
        return dydx

    def time_step(self, dt):
        '''Time progression and state fourvector update'''
        self.fourvector = integrate.odeint(self.derivs, 0, dt)
        self.time_passed += dt


body = Particle([1.0, 1.0, 1.0, 2.0])  # Object of Particle created.
dt = 1./30

fig = plt.figure()
ax = fig.add_subplot(111, aspect='equal', autoscale_on=False, xlim=(-3, 3), ylim=(-3, 3))
ax.grid()

line, = ax.plot([], [], 'o-', lw=2)
time_text = ax.text(0.02, 0.95, '', transform=ax.transAxes)


def init():
    line.set_data([], [])
    time_text.set_text('')
    return line, time_text


def animate(i):
    global body, dt
    body.time_step(dt)
    line.set_data(*body.position())
    time_text.set_text('time = %.1f' % body.time_passed)
    return line, time_text


from time import time
t0 = time()
animate(0)
t1 = time()
interval = 1000*dt - (t1 - t0)

ani = animation.FuncAnimation(fig, animate, frames=300,
                              interval=interval, blit=True, init_func=init)
plt.show()
The error traceback:
bash-3.2$ python MoreCrap.py
1.0
1.0
Traceback (most recent call last):
File "MoreCrap.py", line 80, in <module>
animate(0)
File "MoreCrap.py", line 74, in animate
line.set_data(*body.position())
File "MoreCrap.py", line 26, in position
Vx = self.fourvector[2]
IndexError: index out of bounds
Your call to integrate.odeint is wrong. Check the manual to see whether it really does what you think it does.
In any case, fourvector has the value [1. 1. 1. 2.] before the call and the value [[0.]] after it, so it no longer contains anything at index [2].
The man page for odeint is here.
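For reference, a minimal sketch of what a correct call could look like: scipy's odeint expects a derivative function of the form f(y, t), an initial state vector, and an array of time points, and returns one row of the state per time point. The two-argument derivs and the [0, dt] time grid below are my assumptions, not your original code:
import numpy as np
from scipy import integrate

def derivs(fourvector, t):
    # derivative of [x, y, vx, vy] under an inverse-square central force
    x_pos, y_pos, vx, vy = fourvector
    r3 = (x_pos**2 + y_pos**2)**1.5
    return [vx, vy, -x_pos/r3, -y_pos/r3]

state = np.array([1.0, 1.0, 1.0, 2.0])
dt = 1./30
# integrate from t=0 to t=dt and keep the last row as the new state
state = integrate.odeint(derivs, state, [0.0, dt])[-1]
print(state)  # still a 4-element vector, so state[2] is valid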
I'm new to OpenStreetMap and Mapnik.
I'm trying to export a map image which will be geo-referenced (so it can be used in other applications).
I've installed OSM and Mapnik inside an Ubuntu virtual machine.
I've tried using the generate_image.py script, but the generated image is not equal to the bounding box, and my Python knowledge is not good enough to fix the script.
I've also tried using the nik2img.py script in verbose mode, for example:
nik2img.py osm.xml sarajevo.png --srs 900913 --bbox 18.227 43.93 18.511 43.765 --dimensions 10000 10000
and tried using the logged bounding box
Step: 11 // --> Map long/lat bbox: Envelope(18.2164733537,43.765,18.5215266463,43.93)
Unfortunately the generated image still does not match the bounding box.
How can I change the scripts so that I can georeference the generated image? Or do you know an easier way to accomplish this task?
The image I'm getting via the http://www.openstreetmap.org/ export is nicely geo-referenced, but it's not big enough.
I've managed to change generate_tiles.py to generate 1024x1024 images together with a correct bounding box.
The changed script is available below.
#!/usr/bin/python
from math import pi,cos,sin,log,exp,atan
from subprocess import call
import sys, os
from Queue import Queue
import mapnik
import threading

DEG_TO_RAD = pi/180
RAD_TO_DEG = 180/pi

# Default number of rendering threads to spawn, should be roughly equal to number of CPU cores available
NUM_THREADS = 4


def minmax(a, b, c):
    a = max(a, b)
    a = min(a, c)
    return a


class GoogleProjection:
    def __init__(self, levels=18):
        self.Bc = []
        self.Cc = []
        self.zc = []
        self.Ac = []
        c = 1024
        for d in range(0, levels):
            e = c/2
            self.Bc.append(c/360.0)
            self.Cc.append(c/(2 * pi))
            self.zc.append((e, e))
            self.Ac.append(c)
            c *= 2

    def fromLLtoPixel(self, ll, zoom):
        d = self.zc[zoom]
        e = round(d[0] + ll[0] * self.Bc[zoom])
        f = minmax(sin(DEG_TO_RAD * ll[1]), -0.9999, 0.9999)
        g = round(d[1] + 0.5*log((1+f)/(1-f))*-self.Cc[zoom])
        return (e, g)

    def fromPixelToLL(self, px, zoom):
        e = self.zc[zoom]
        f = (px[0] - e[0])/self.Bc[zoom]
        g = (px[1] - e[1])/-self.Cc[zoom]
        h = RAD_TO_DEG * (2 * atan(exp(g)) - 0.5 * pi)
        return (f, h)


class RenderThread:
    def __init__(self, tile_dir, mapfile, q, printLock, maxZoom):
        self.tile_dir = tile_dir
        self.q = q
        self.m = mapnik.Map(1024, 1024)
        self.printLock = printLock
        # Load style XML
        mapnik.load_map(self.m, mapfile, True)
        # Obtain <Map> projection
        self.prj = mapnik.Projection(self.m.srs)
        # Projects between tile pixel co-ordinates and LatLong (EPSG:4326)
        self.tileproj = GoogleProjection(maxZoom+1)

    def render_tile(self, tile_uri, x, y, z):
        # Calculate pixel positions of bottom-left & top-right
        p0 = (x * 1024, (y + 1) * 1024)
        p1 = ((x + 1) * 1024, y * 1024)

        # Convert to LatLong (EPSG:4326)
        l0 = self.tileproj.fromPixelToLL(p0, z)
        l1 = self.tileproj.fromPixelToLL(p1, z)

        # Convert to map projection (e.g. mercator co-ords EPSG:900913)
        c0 = self.prj.forward(mapnik.Coord(l0[0], l0[1]))
        c1 = self.prj.forward(mapnik.Coord(l1[0], l1[1]))

        # Bounding box for the tile
        if hasattr(mapnik, 'mapnik_version') and mapnik.mapnik_version() >= 800:
            bbox = mapnik.Box2d(c0.x, c0.y, c1.x, c1.y)
        else:
            bbox = mapnik.Envelope(c0.x, c0.y, c1.x, c1.y)
        render_size = 1024
        self.m.resize(render_size, render_size)
        self.m.zoom_to_box(bbox)
        self.m.buffer_size = 128

        # Render image with default Agg renderer
        im = mapnik.Image(render_size, render_size)
        mapnik.render(self.m, im)
        im.save(tile_uri, 'png256')
        print "Rendered: ", tile_uri, "; ", l0, "; ", l1

        # Write geo coding information
        file = open(tile_uri[:-4] + ".tab", 'w')
        file.write("!table\n")
        file.write("!version 300\n")
        file.write("!charset WindowsLatin2\n")
        file.write("Definition Table\n")
        file.write("  File \"" + tile_uri[:-4] + ".jpg\"\n")
        file.write("  Type \"RASTER\"\n")
        file.write("  (" + str(l0[0]) + "," + str(l1[1]) + ") (0,0) Label \"Pt 1\",\n")
        file.write("  (" + str(l1[0]) + "," + str(l1[1]) + ") (1023,0) Label \"Pt 2\",\n")
        file.write("  (" + str(l1[0]) + "," + str(l0[1]) + ") (1023,1023) Label \"Pt 3\",\n")
        file.write("  (" + str(l0[0]) + "," + str(l0[1]) + ") (0,1023) Label \"Pt 4\"\n")
        file.write("  CoordSys Earth Projection 1, 104\n")
        file.write("  Units \"degree\"\n")
        file.close()

    def loop(self):
        while True:
            # Fetch a tile from the queue and render it
            r = self.q.get()
            if (r == None):
                self.q.task_done()
                break
            else:
                (name, tile_uri, x, y, z) = r

            exists = ""
            if os.path.isfile(tile_uri):
                exists = "exists"
            else:
                self.render_tile(tile_uri, x, y, z)
            bytes = os.stat(tile_uri)[6]
            empty = ''
            if bytes == 103:
                empty = " Empty Tile "
            self.printLock.acquire()
            print name, ":", z, x, y, exists, empty
            self.printLock.release()
            self.q.task_done()


def render_tiles(bbox, mapfile, tile_dir, minZoom=1, maxZoom=18, name="unknown", num_threads=NUM_THREADS):
    print "render_tiles(", bbox, mapfile, tile_dir, minZoom, maxZoom, name, ")"

    # Launch rendering threads
    queue = Queue(32)
    printLock = threading.Lock()
    renderers = {}
    for i in range(num_threads):
        renderer = RenderThread(tile_dir, mapfile, queue, printLock, maxZoom)
        render_thread = threading.Thread(target=renderer.loop)
        render_thread.start()
        #print "Started render thread %s" % render_thread.getName()
        renderers[i] = render_thread

    if not os.path.isdir(tile_dir):
        os.mkdir(tile_dir)

    gprj = GoogleProjection(maxZoom+1)

    ll0 = (bbox[0], bbox[3])
    ll1 = (bbox[2], bbox[1])

    for z in range(minZoom, maxZoom + 1):
        px0 = gprj.fromLLtoPixel(ll0, z)
        px1 = gprj.fromLLtoPixel(ll1, z)

        # check if we have directories in place
        zoom = "%s" % z
        if not os.path.isdir(tile_dir + zoom):
            os.mkdir(tile_dir + zoom)
        for x in range(int(px0[0]/1024.0), int(px1[0]/1024.0)+1):
            # Validate x co-ordinate
            if (x < 0) or (x >= 2**z):
                continue
            # check if we have directories in place
            str_x = "%s" % x
            if not os.path.isdir(tile_dir + zoom + '/' + str_x):
                os.mkdir(tile_dir + zoom + '/' + str_x)
            for y in range(int(px0[1]/1024.0), int(px1[1]/1024.0)+1):
                # Validate y co-ordinate
                if (y < 0) or (y >= 2**z):
                    continue
                str_y = "%s" % y
                tile_uri = tile_dir + zoom + '_' + str_x + '_' + str_y + '.png'
                # Submit tile to be rendered into the queue
                t = (name, tile_uri, x, y, z)
                queue.put(t)

    # Signal render threads to exit by sending empty request to queue
    for i in range(num_threads):
        queue.put(None)
    # wait for pending rendering jobs to complete
    queue.join()
    for i in range(num_threads):
        renderers[i].join()


if __name__ == "__main__":
    home = os.environ['HOME']
    try:
        mapfile = "/home/emir/bin/mapnik/osm.xml"  # os.environ['MAPNIK_MAP_FILE']
    except KeyError:
        mapfile = "/home/emir/bin/mapnik/osm.xml"
    try:
        tile_dir = os.environ['MAPNIK_TILE_DIR']
    except KeyError:
        tile_dir = home + "/osm/tiles/"

    if not tile_dir.endswith('/'):
        tile_dir = tile_dir + '/'

    #-------------------------------------------------------------------------
    #
    # Change the following for different bounding boxes and zoom levels
    #
    # render Sarajevo at zoom level 16
    bbox = (18.256, 43.785, 18.485, 43.907)
    render_tiles(bbox, mapfile, tile_dir, 16, 16, "World")
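As a side note, the .tab sidecar written above is MapInfo-specific. If you also want a plain ESRI world file (a .pgw next to each PNG), here is a minimal sketch of a helper that could be called from render_tile with the same l0/l1 corner coordinates; this is my addition, not part of the original script:
def write_world_file(tile_uri, l0, l1, size=1024):
    # l0 is the bottom-left corner (lon, lat), l1 the top-right corner,
    # matching the values computed in render_tile above
    pixel_w = (l1[0] - l0[0]) / size            # degrees per pixel in x
    pixel_h = (l0[1] - l1[1]) / size            # negative: y grows downwards
    f = open(tile_uri[:-4] + ".pgw", 'w')
    f.write("%.10f\n" % pixel_w)                # x pixel size
    f.write("0.0\n")                            # rotation terms
    f.write("0.0\n")
    f.write("%.10f\n" % pixel_h)                # y pixel size (negative)
    f.write("%.10f\n" % (l0[0] + pixel_w / 2))  # x of centre of the upper-left pixel
    f.write("%.10f\n" % (l1[1] + pixel_h / 2))  # y of centre of the upper-left pixel
    f.close()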
Try Maperitive's export-bitmap command; it generates various georeferencing sidecar files (world file, KML, OziExplorer .MAP file).