Indexing Faces with AWS Rekognition

I am new to AWS and am trying to use Rekognition to identify certain people in a crowd. I am currently trying to index the images of the separate individuals but have hit a snag when trying to create a collection. There seems to be a data type compatibility issue when I try using Amazon.Rekognition.Model.S3Object(). I have provided the code below. Does anyone have a solution or a better method? Thank you for your time!
private static void TryIndexFaces()
{
    S3Client = new AmazonS3Client();
    RekognitionClient = new AmazonRekognitionClient();
    IndexFacesRequest indexRequest = new IndexFacesRequest();
    Amazon.Rekognition.Model.Image img = new Amazon.Rekognition.Model.Image();
    ListObjectsV2Request req = new ListObjectsV2Request();
    req.BucketName = "wem0020";
    ListObjectsV2Response listObjectsResponse = S3Client.ListObjectsV2(req);
    CreateCollectionRequest ccr = new CreateCollectionRequest();
    ccr.CollectionId = "TestFaces";
    //RekognitionClient.CreateCollection(ccr);
    ListVersionsResponse lvr = S3Client.ListVersions(req.BucketName);
    string version = lvr.Versions[0].VersionId;
    foreach (Amazon.S3.Model.S3Object s3o in listObjectsResponse.S3Objects)
    {
        Console.WriteLine(s3o.Key);
        try
        {
            if (s3o.Key.EndsWith(".jpg"))
            {
                Amazon.Rekognition.Model.S3Object reks3o = new Amazon.Rekognition.Model.S3Object();
                reks3o.Bucket = req.BucketName;
                reks3o.Name = s3o.Key;
                Console.WriteLine(version);
                reks3o.Version = version;
                img.S3Object = reks3o;
                indexRequest.Image = img;
                indexRequest.CollectionId = ccr.CollectionId;
                RekognitionClient.IndexFaces(indexRequest);
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
        }
    }
}

To index faces, you can use the bounding box values returned by AWS Rekognition. I have done this with Python:
import boto3
import cv2

widtho = 717   # width of the given image
heighto = 562  # height of the given image
facecount = 1

if __name__ == "__main__":
    # choose the file in the S3 bucket
    photo = 'sl.jpg'
    bucket = 'rek'
    # local copy of the same image, used for drawing the boxes (assumed to be 717x562)
    imagere = cv2.imread(photo)
    # initialise Rekognition and call detect_faces
    client = boto3.client('rekognition', region_name='eu-west-1')
    response = client.detect_faces(
        Image={'S3Object': {'Bucket': bucket, 'Name': photo}}, Attributes=['ALL'])
    print('Detected faces for ' + photo)
    print('The faces are detected and labelled from left to right')
    for faceDetail in response['FaceDetails']:
        print('Face detected =', facecount)
        # mark a bounding box on the image using the returned coordinates
        print('Bounding Box')
        bboxlen = len(faceDetail['BoundingBox'])
        print(bboxlen)
        width = faceDetail['BoundingBox'].get('Width')
        height = faceDetail['BoundingBox'].get('Height')
        left = faceDetail['BoundingBox'].get('Left')
        top = faceDetail['BoundingBox'].get('Top')
        # BoundingBox values are ratios of the frame size, so scale them to pixels
        w = int(width * widtho)
        h = int(height * heighto)
        x = int(left * widtho)
        y = int(top * heighto)
        cv2.rectangle(imagere, (x, y), (x + w, y + h), (255, 0, 0), 2)
        facecount += 1
This loop walks through the detected faces one by one in the single frame.
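If the goal is to put the faces into a Rekognition collection (as the C# snippet above attempts), the same flow can also be written with boto3. This is a minimal sketch, reusing the bucket 'wem0020', the collection 'TestFaces' and the 'eu-west-1' region from the question and answer above; the ExternalImageId handling and the error handling are my own additions, not part of the original code.
import boto3

s3 = boto3.resource('s3')
rekognition = boto3.client('rekognition', region_name='eu-west-1')

collection_id = 'TestFaces'   # collection name from the question
bucket_name = 'wem0020'       # bucket name from the question

# create the collection once; re-running raises ResourceAlreadyExistsException
try:
    rekognition.create_collection(CollectionId=collection_id)
except rekognition.exceptions.ResourceAlreadyExistsException:
    pass

# index every .jpg in the bucket into the collection
for obj in s3.Bucket(bucket_name).objects.all():
    if not obj.key.endswith('.jpg'):
        continue
    response = rekognition.index_faces(
        CollectionId=collection_id,
        Image={'S3Object': {'Bucket': bucket_name, 'Name': obj.key}},
        ExternalImageId=obj.key.replace('/', '_'),  # optional tag to identify the person later
        DetectionAttributes=['DEFAULT'])
    print(obj.key, '->', len(response['FaceRecords']), 'face(s) indexed')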

Related

How to Add Image/Shape with chart data label in ASPOSE Slide

Can we add any shape/image to chart data labels in Aspose? I need to show an arrow with different colors, according to certain values, with each data label in the chart. I am using Aspose to generate my PPT. Alternatively, is there any way to find the data label positions in Aspose?
I have reviewed your requirements and would like to share that MS PowerPoint supports a LineWithMarkers chart that can display different predefined or custom marker symbols (in the form of an image) for different series data points. Please try the following sample code for possible options using Aspose.Slides and MSO charts.
public static void TestScatter()
{
    var location = System.Reflection.Assembly.GetExecutingAssembly().Location;
    //Open a presentation
    Presentation pres = new Presentation();
    IChart chart = pres.Slides[0].Shapes.AddChart(ChartType.StackedLineWithMarkers, 10, 10, 400, 400);
    //populating cycle
    var serie = chart.ChartData.Series[0];
    var wbk = chart.ChartData.ChartDataWorkbook;
    chart.ChartData.Series.RemoveAt(1);
    chart.ChartData.Series.RemoveAt(1);
    serie.Marker.Format.Fill.FillType = FillType.Picture;
    serie.Marker.Size = 20;
    // Set the picture
    System.Drawing.Image img = (System.Drawing.Image)new Bitmap(@"C:\Users\Public\Pictures\Sample Pictures\Tulips.jpg");
    IPPImage imgx = pres.Images.AddImage(img);
    serie.Marker.Format.Fill.PictureFillFormat.Picture.Image = imgx;
    //For individual data point
    serie.DataPoints[0].Marker.Format.Fill.FillType = FillType.Solid;
    serie.DataPoints[0].Marker.Format.Fill.SolidFillColor.Color = Color.Red;
    serie.DataPoints[0].Marker.Size = 20;
    serie.DataPoints[0].Marker.Symbol = MarkerStyleType.Triangle;
    serie.DataPoints[0].Label.DataLabelFormat.ShowValue = true;
    serie.DataPoints[1].Label.DataLabelFormat.ShowValue = true;
    serie.DataPoints[2].Label.DataLabelFormat.ShowValue = true;
    serie.DataPoints[3].Label.DataLabelFormat.ShowValue = true;
    pres.Save(Path.Combine(Path.GetDirectoryName(location), "Result2.pptx"), SaveFormat.Pptx);
}
I am working as Support developer/ Evangelist at Aspose.
Many Thanks,
I have looked into your requirements further and noticed that you have also posted similar requirements in the official Aspose.Slides support forum. Please try the following sample code on your end to serve the purpose.
public static void TestLineChart()
{
    var location = System.Reflection.Assembly.GetExecutingAssembly().Location;
    //Open a presentation
    Presentation pres = new Presentation();
    IChart chart = pres.Slides[0].Shapes.AddChart(ChartType.StackedLineWithMarkers, 10, 10, 400, 400);
    //populating cycle
    var serie = chart.ChartData.Series[0];
    var wbk = chart.ChartData.ChartDataWorkbook;
    chart.ChartData.Series.RemoveAt(1);
    chart.ChartData.Series.RemoveAt(1);
    serie.Marker.Format.Fill.FillType = FillType.Picture;
    serie.Marker.Size = 20;
    serie.Marker.Symbol = MarkerStyleType.Diamond;
    serie.Marker.Format.Fill.FillType = FillType.Solid;
    serie.Marker.Format.Fill.SolidFillColor.Color = Color.Orange;
    serie.Marker.Format.Line.FillFormat.FillType = FillType.Solid;
    serie.Marker.Format.Line.FillFormat.SolidFillColor.Color = Color.Red;
    serie.Marker.Format.Line.Width = 1.0F;
    serie.Format.Line.Width = 3.0f;
    serie.Format.Line.FillFormat.FillType = FillType.Solid;
    serie.Format.Line.FillFormat.SolidFillColor.Color = Color.FromArgb(209, 225, 91);
    for (int i = 0; i < serie.DataPoints.Count; i++)
    {
        serie.DataPoints[i].Label.DataLabelFormat.ShowValue = true;
        IDataLabel label = serie.Labels[i];
        chart.ValidateChartLayout();
        IAutoShape ashp = chart.UserShapes.Shapes.AddAutoShape(ShapeType.Triangle, chart.X + label.ActualX + 5, chart.Y + label.ActualY + 5, 20, 20);
        ashp.FillFormat.FillType = FillType.Solid;
        ashp.LineFormat.FillFormat.FillType = FillType.NoFill;
        if (i % 2 == 0) //even data points
        {
            ashp.FillFormat.SolidFillColor.Color = Color.Green;
        }
        else
        {
            ashp.Rotation = 180;
            ashp.FillFormat.SolidFillColor.Color = Color.Red;
        }
    }
    pres.Save(Path.Combine(Path.GetDirectoryName(location), "Result2.pptx"), Aspose.Slides.Export.SaveFormat.Pptx);
}
I am working as Support developer/ Evangelist at Aspose.
Many Thanks,

Fitting an Image to a ROI

I have an ROI and an image. I have to fill the ROI with the image that I have. The image should scale according to the ROI shape and size and should fill the entire ROI without repeating the image. How can I achieve this using OpenCV? Is there any method in OpenCV to achieve this?
Suppose this white section is my ROI, and this is my input image.
Is there any solution using ImageMagick?
Finding the optimal fit of one shape inside another is not trivial, but if you can settle for a suboptimal result you can do the following:
import cv2
import numpy as np
from matplotlib import pyplot as plt
(Heuristic) Fit an ellipse to the ROI contour and to the fruit's convex hull, and use the two ellipses for an initial guess of the scale and shift (bg_img is the binary ROI mask and fruit_alpha is the fruit's alpha channel; both are prepared in the omitted parts of the script, see the sketch at the end of this answer):
bg_contours, bg_hierarchy = cv2.findContours(bg_img, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
bg_contour = bg_contours[0]
bg_ellipse = cv2.fitEllipse(bg_contour)
p_contours, p_hierarchy = cv2.findContours(fruit_alpha, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
pear_hull = cv2.convexHull(p_contours[0])
pear_ellipse = cv2.fitEllipse(pear_hull)
min_ratio = min(bg_ellipse[1][0] / pear_ellipse[1][0], bg_ellipse[1][1] / pear_ellipse[1][1])
x_shift = bg_ellipse[0][0] - pear_ellipse[0][0] * min_ratio
y_shift = bg_ellipse[0][1] - pear_ellipse[0][1] * min_ratio
(Heuristic) Resize the fruit contour: start with an initial guess based on the ellipses and refine using the contour (this can be improved, but it is a non-trivial optimization problem; you can look more here):
# max_c_ix is the index of the largest fruit contour (its selection is part of the omitted code)
r_contour = np.array([[[int(j) for j in i[0]]] for i in min_ratio * p_contours[max_c_ix]])
min_dist, bad_pt = GetMinDist(outer_contour=bg_contour, inner_contour=r_contour, offset=(int(x_shift), int(y_shift)))
mask_size = max(bg_ellipse[1][0], bg_ellipse[1][1])
scale = min_ratio * (mask_size + min_dist) / mask_size
r_contour = np.array([[[int(j) for j in i[0]]] for i in scale * p_contours[max_c_ix]])
Combine the images using the alpha channel:
combined = CombineImages(bg, fruit_rgb, fruit_alpha, scale, (int(x_shift), int(y_shift)))
Utility functions:
def GetMinDist(outer_contour, inner_contour, offset):
    min_dist = 10000
    bad_pt = (0, 0)
    for i_pt in inner_contour:
        #pt = (float(i_pt[0][0]), float(i_pt[0][1]))
        pt = (i_pt[0][0] + int(offset[0]), i_pt[0][1] + int(offset[1]))
        dst = cv2.pointPolygonTest(outer_contour, pt, True)
        if dst < min_dist:
            min_dist = dst
            bad_pt = pt
    return min_dist, bad_pt

def CombineImages(mask_img, fruit_img, fruit_alpha, scale, offset):
    mask_height, mask_width, mask_dim = mask_img.shape
    combined_img = np.copy(mask_img)
    resized_fruit = np.copy(mask_img)
    resized_fruit[:] = 0
    resized_alpha = np.zeros((mask_height, mask_width), fruit_alpha.dtype)
    f_height, f_width, f_dim = fruit_img.shape
    r_fruit = cv2.resize(fruit_img, (int(f_width * scale), int(f_height * scale)))
    r_alpha = cv2.resize(fruit_alpha, (int(f_width * scale), int(f_height * scale)))
    height, width, channels = r_fruit.shape
    roi_x_from = offset[0]
    roi_x_to = offset[0] + width
    roi_y_from = offset[1]
    roi_y_to = offset[1] + height
    resized_fruit[roi_y_from:roi_y_to, roi_x_from:roi_x_to, :] = r_fruit
    resized_alpha[roi_y_from:roi_y_to, roi_x_from:roi_x_to] = r_alpha
    for y in range(0, mask_height):
        for x in range(0, mask_width):
            if resized_alpha[y, x] > 0:
                combined_img[y, x, :] = resized_fruit[y, x, :]
    return combined_img
I hope that helps.
(I omitted parts of the code that do not contribute to the understanding of the flow)
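For reference, here is one way the inputs that were omitted above could be prepared. This is only a sketch under the assumption that the ROI is a white-on-black mask image and the fruit is a PNG with an alpha channel; the file names 'roi_mask.png' and 'pear.png' are placeholders, not from the original answer.
import cv2

# ROI mask: white region on a black background (placeholder file name)
bg = cv2.imread('roi_mask.png')                       # colour copy, used by CombineImages
bg_gray = cv2.cvtColor(bg, cv2.COLOR_BGR2GRAY)
_, bg_img = cv2.threshold(bg_gray, 127, 255, cv2.THRESH_BINARY)

# fruit image with transparency (placeholder file name)
fruit = cv2.imread('pear.png', cv2.IMREAD_UNCHANGED)  # 4-channel BGRA
fruit_rgb = fruit[:, :, :3]
fruit_alpha = fruit[:, :, 3]

# max_c_ix used above is the index of the largest contour in p_contours, e.g.:
# max_c_ix = max(range(len(p_contours)), key=lambda i: cv2.contourArea(p_contours[i]))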

I'm trying to do real-time object detection and tracking in MATLAB, but it's giving me an error

function multiObjectTracking()
% create system objects used for reading video, detecting moving objects,
% and displaying the results
obj = setupSystemObjects();
tracks = initializeTracks(); % create an empty array of tracks
nextId = 1; % ID of the next track
% detect moving objects, and track them across video frames
while ~isDone(obj.reader)
    frame = readFrame();
    [centroids, bboxes, mask] = detectObjects(frame);
    predictNewLocationsOfTracks();
    [assignments, unassignedTracks, unassignedDetections] = ...
        detectionToTrackAssignment();
    updateAssignedTracks();
    updateUnassignedTracks();
    deleteLostTracks();
    createNewTracks();
    displayTrackingResults();
end

%% Create System Objects
% Create System objects used for reading the video frames, detecting
% foreground objects, and displaying results.
    function obj = setupSystemObjects()
        % Initialize Video I/O
        % Create objects for reading a video from a file, drawing the tracked
        % objects in each frame, and playing the video.
        vid = videoinput('winvideo', 1, 'YUY2_320x240');
        src = getselectedsource(vid);
        vid.FramesPerTrigger = 1;
        % TriggerRepeat is zero based and is always one
        % less than the number of triggers.
        vid.TriggerRepeat = 899;
        preview(vid);
        start(vid);
        stoppreview(vid);
        savedvideo = getdata(vid);
        % create a video file reader
        obj.reader = vision.VideoFileReader(savedvideo);
        % create two video players, one to display the video,
        % and one to display the foreground mask
        obj.videoPlayer = vision.VideoPlayer('Position', [20, 400, 700, 400]);
        obj.maskPlayer = vision.VideoPlayer('Position', [740, 400, 700, 400]);
        obj.detector = vision.ForegroundDetector('NumGaussians', 3, ...
            'NumTrainingFrames', 40, 'MinimumBackgroundRatio', 0.7);
        obj.blobAnalyser = vision.BlobAnalysis('BoundingBoxOutputPort', true, ...
            'AreaOutputPort', true, 'CentroidOutputPort', true, ...
            'MinimumBlobArea', 400);
    end

    function tracks = initializeTracks()
        % create an empty array of tracks
        tracks = struct(...
            'id', {}, ...
            'bbox', {}, ...
            'kalmanFilter', {}, ...
            'age', {}, ...
            'totalVisibleCount', {}, ...
            'consecutiveInvisibleCount', {});
    end

%% Read a Video Frame
% Read the next video frame from the video file.
    function frame = readFrame()
        frame = obj.reader.step();
    end

    function [centroids, bboxes, mask] = detectObjects(frame)
        % detect foreground
        mask = obj.detector.step(frame);
        % apply morphological operations to remove noise and fill in holes
        mask = imopen(mask, strel('rectangle', [3,3]));
        mask = imclose(mask, strel('rectangle', [15, 15]));
        mask = imfill(mask, 'holes');
        % perform blob analysis to find connected components
        [~, centroids, bboxes] = obj.blobAnalyser.step(mask);
    end

%% Predict New Locations of Existing Tracks
% Use the Kalman filter to predict the centroid of each track in the
% current frame, and update its bounding box accordingly.
    function predictNewLocationsOfTracks()
        for i = 1:length(tracks)
            bbox = tracks(i).bbox;
            % predict the current location of the track
            predictedCentroid = predict(tracks(i).kalmanFilter);
            % shift the bounding box so that its center is at
            % the predicted location
            predictedCentroid = int32(predictedCentroid) - bbox(3:4) / 2;
            tracks(i).bbox = [predictedCentroid, bbox(3:4)];
        end
    end

    function [assignments, unassignedTracks, unassignedDetections] = ...
            detectionToTrackAssignment()
        nTracks = length(tracks);
        nDetections = size(centroids, 1);
        % compute the cost of assigning each detection to each track
        cost = zeros(nTracks, nDetections);
        for i = 1:nTracks
            cost(i, :) = distance(tracks(i).kalmanFilter, centroids);
        end
        % solve the assignment problem
        costOfNonAssignment = 20;
        [assignments, unassignedTracks, unassignedDetections] = ...
            assignDetectionsToTracks(cost, costOfNonAssignment);
    end

    function updateAssignedTracks()
        numAssignedTracks = size(assignments, 1);
        for i = 1:numAssignedTracks
            trackIdx = assignments(i, 1);
            detectionIdx = assignments(i, 2);
            centroid = centroids(detectionIdx, :);
            bbox = bboxes(detectionIdx, :);
            % correct the estimate of the object's location
            % using the new detection
            correct(tracks(trackIdx).kalmanFilter, centroid);
            % replace predicted bounding box with detected
            % bounding box
            tracks(trackIdx).bbox = bbox;
            % update track's age
            tracks(trackIdx).age = tracks(trackIdx).age + 1;
            % update visibility
            tracks(trackIdx).totalVisibleCount = ...
                tracks(trackIdx).totalVisibleCount + 1;
            tracks(trackIdx).consecutiveInvisibleCount = 0;
        end
    end

%% Update Unassigned Tracks
% Mark each unassigned track as invisible, and increase its age by 1.
    function updateUnassignedTracks()
        for i = 1:length(unassignedTracks)
            ind = unassignedTracks(i);
            tracks(ind).age = tracks(ind).age + 1;
            tracks(ind).consecutiveInvisibleCount = ...
                tracks(ind).consecutiveInvisibleCount + 1;
        end
    end

    function deleteLostTracks()
        if isempty(tracks)
            return;
        end
        invisibleForTooLong = 10;
        ageThreshold = 8;
        % compute the fraction of the track's age for which it was visible
        ages = [tracks(:).age];
        totalVisibleCounts = [tracks(:).totalVisibleCount];
        visibility = totalVisibleCounts ./ ages;
        % find the indices of 'lost' tracks
        lostInds = (ages < ageThreshold & visibility < 0.6) | ...
            [tracks(:).consecutiveInvisibleCount] >= invisibleForTooLong;
        % delete lost tracks
        tracks = tracks(~lostInds);
    end

    function createNewTracks()
        centroids = centroids(unassignedDetections, :);
        bboxes = bboxes(unassignedDetections, :);
        for i = 1:size(centroids, 1)
            centroid = centroids(i,:);
            bbox = bboxes(i, :);
            % create a Kalman filter object
            kalmanFilter = configureKalmanFilter('ConstantVelocity', ...
                centroid, [200, 50], [100, 25], 100);
            % create a new track
            newTrack = struct(...
                'id', nextId, ...
                'bbox', bbox, ...
                'kalmanFilter', kalmanFilter, ...
                'age', 1, ...
                'totalVisibleCount', 1, ...
                'consecutiveInvisibleCount', 0);
            % add it to the array of tracks
            tracks(end + 1) = newTrack;
            % increment the next id
            nextId = nextId + 1;
        end
    end

    function displayTrackingResults()
        % convert the frame and the mask to uint8 RGB
        frame = im2uint8(frame);
        mask = uint8(repmat(mask, [1, 1, 3])) .* 255;
        minVisibleCount = 8;
        if ~isempty(tracks)
            % noisy detections tend to result in short-lived tracks
            % only display tracks that have been visible for more than
            % a minimum number of frames.
            reliableTrackInds = ...
                [tracks(:).totalVisibleCount] > minVisibleCount;
            reliableTracks = tracks(reliableTrackInds);
            % display the objects. If an object has not been detected
            % in this frame, display its predicted bounding box.
            if ~isempty(reliableTracks)
                % get bounding boxes
                bboxes = cat(1, reliableTracks.bbox);
                % get ids
                ids = int32([reliableTracks(:).id]);
                % create labels for objects indicating the ones for
                % which we display the predicted rather than the actual
                % location
                labels = cellstr(int2str(ids'));
                predictedTrackInds = ...
                    [reliableTracks(:).consecutiveInvisibleCount] > 0;
                isPredicted = cell(size(labels));
                isPredicted(predictedTrackInds) = {' predicted'};
                labels = strcat(labels, isPredicted);
                % draw on the frame
                frame = insertObjectAnnotation(frame, 'rectangle', ...
                    bboxes, labels);
                % draw on the mask
                mask = insertObjectAnnotation(mask, 'rectangle', ...
                    bboxes, labels);
            end
        end
        % display the mask and the frame
        obj.maskPlayer.step(mask);
        obj.videoPlayer.step(frame);
    end

displayEndOfDemoMessage(mfilename)
end
Your problem is that you are trying to use vision.VideoFileReader, while trying to read frames from a camera. vision.VideoFileReader is only for reading video files. If you are getting frames from the camera you do not need it at all. You should add your videoinput object to the obj struct, and you should try using getsnapshot inside readFrame().

Photoshop CS4 Script or Action to: Write 0 to 100 on separate text layers

My goal is to create 101 separate text layers containing 0-100 (i.e. 1, 2, 3...100). I know I can mass-change the attributes but can't write or alter the text they contain.
What you want can easily be done with a script (easier than renaming 100 layers in the right order and recording an action for it). This script will create 100 text layers; each layer will be named 1, 2, 3... etc. and the text will be the same. I think that's what you are after; your description was rather short.
// call the source document
var srcDoc = app.activeDocument;
var numOfLayers = 100;
//var numOfLayers = srcDoc.layers.length;
var numPadding = "0";
var layerNum = 1; // change this to 0 to start layers at 0
var w = Math.floor(srcDoc.width.value/2);
var h = Math.floor(srcDoc.height.value/2);

// main loop starts here
for (var i = numOfLayers - 1; i >= 0; i--)
{
    if (layerNum < 10) numPadding = "0";
    else numPadding = "";
    createText("Arial-BoldMT", 20.0, 0, 0, 0, layerNum, w, h);
    var currentLayer = srcDoc.activeLayer;
    currentLayer.name = numPadding + layerNum;
    layerNum += 1;
}

// function CREATE TEXT(typeface, size, R, G, B, content, Xpos, Ypos)
// --------------------------------------------------------
function createText(fface, size, colR, colG, colB, content, tX, tY)
{
    var artLayerRef = srcDoc.artLayers.add();
    artLayerRef.kind = LayerKind.TEXT;
    textColor = new SolidColor();
    textColor.rgb.red = colR;
    textColor.rgb.green = colG;
    textColor.rgb.blue = colB;
    textItemRef = artLayerRef.textItem;
    textItemRef.font = fface;
    textItemRef.contents = content;
    textItemRef.color = textColor;
    textItemRef.size = size;
    textItemRef.position = new Array(tX, tY); // pixels from the left, pixels from the top
    activeDocument.activeLayer.textItem.justification = Justification.CENTER;
}
Save this out as numberLayers1-100.jsx and then run it from Photoshop via the File -> Scripts menu.

Plotting points on Mapnik

I followed this tutorial in the mapnik github wiki to make a world map: https://github.com/mapnik/mapnik/wiki/GettingStartedInPython
I modified this example and have now embedded the code into a PySide Qt widget. My question now is, how does one plot points on this map using x and y coordinates, or latitude and longitude points?
Here is the code I'm using to generate the map and to embed it in the widget:
import mapnik
m = mapnik.Map(1200,600)
m.background = mapnik.Color('steelblue')
s = mapnik.Style()
r = mapnik.Rule()
polygon_symbolizer = mapnik.PolygonSymbolizer(mapnik.Color('#f2eff9'))
r.symbols.append(polygon_symbolizer)
line_symbolizer = mapnik.LineSymbolizer(mapnik.Color('rgb(50%,50%,50%)'),0.1)
r.symbols.append(line_symbolizer)
s.rules.append(r)
m.append_style('My Style',s)
ds = mapnik.Shapefile(file='/home/lee/shapefiles/ne_110m_admin_0_countries.shp')
layer = mapnik.Layer('world')
layer.datasource = ds
layer.styles.append('My Style')
m.layers.append(layer)
m.zoom_all()
im = mapnik.Image(1200,600)
mapnik.render(m, im)
qim = QImage()
qim.loadFromData(QByteArray(im.tostring('png')))
label = QLabel(self)
label.setPixmap(QPixmap.fromImage(qim))
self.layout.addWidget(label)
Usually, you would connect your map to a datasource such as a PostGIS or SQLite database and let Mapnik populate the points from said database, similar to something like this, either in a Python script or generated from XML.
However, in answer to your question, you could plot Lat/Lon points by creating a new Feature from a WKT string and adding that feature to a mapnik.MemoryDatasource().
Below is a simple snippet from a script using the mapfile found here
First we create our style and add it to our map:
s = mapnik.Style() # style object to hold rules
r = mapnik.Rule() # rule object to hold symbolizers
point_sym = mapnik.PointSymbolizer()
point_sym.filename = './symbols/airport.p.16.png'
r.symbols.append(point_sym) # add the symbolizer to the rule object
s.rules.append(r)
m.append_style('airport point', s)
Now we create our data source and add a Point geometry in WKT format:
ds = mapnik.MemoryDatasource()
f = mapnik.Feature(mapnik.Context(), 1)
f.add_geometries_from_wkt("POINT(-92.289595 34.746481)")
ds.add_feature(f)
Now we must create a new layer, add our style that we created, and add the layer to our map:
player = mapnik.Layer('airport_layer')
#since our map is mercator but you wanted to add lat lon points
#we must make sure our layer projection is set to lat lon
#(longlat here is a plain WGS84 lat/lon projection object)
longlat = mapnik.Projection('+proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs')
player.srs = longlat.params()
player.datasource = ds
player.styles.append('airport point')
m.layers.append(player)
m.zoom_all()
You can look at the entire script here.
If you need to get a geographic coordinate (i.e. lat/lon) from a pixel coordinate, you probably need to add your own converter functions.
The following Google Maps JS code could perhaps help:
https://developers.google.com/maps/documentation/javascript/examples/map-coordinates
var TILE_SIZE = 256;

function bound(value, opt_min, opt_max) {
    if (opt_min != null) value = Math.max(value, opt_min);
    if (opt_max != null) value = Math.min(value, opt_max);
    return value;
}

function degreesToRadians(deg) {
    return deg * (Math.PI / 180);
}

function radiansToDegrees(rad) {
    return rad / (Math.PI / 180);
}

/** @constructor */
function MercatorProjection() {
    this.pixelOrigin_ = new google.maps.Point(TILE_SIZE / 2,
        TILE_SIZE / 2);
    this.pixelsPerLonDegree_ = TILE_SIZE / 360;
    this.pixelsPerLonRadian_ = TILE_SIZE / (2 * Math.PI);
}

MercatorProjection.prototype.fromLatLngToPoint = function(latLng,
        opt_point) {
    var me = this;
    var point = opt_point || new google.maps.Point(0, 0);
    var origin = me.pixelOrigin_;
    point.x = origin.x + latLng.lng() * me.pixelsPerLonDegree_;
    // Truncating to 0.9999 effectively limits latitude to 89.189. This is
    // about a third of a tile past the edge of the world tile.
    var siny = bound(Math.sin(degreesToRadians(latLng.lat())), -0.9999,
        0.9999);
    point.y = origin.y + 0.5 * Math.log((1 + siny) / (1 - siny)) *
        -me.pixelsPerLonRadian_;
    return point;
};

MercatorProjection.prototype.fromPointToLatLng = function(point) {
    var me = this;
    var origin = me.pixelOrigin_;
    var lng = (point.x - origin.x) / me.pixelsPerLonDegree_;
    var latRadians = (point.y - origin.y) / -me.pixelsPerLonRadian_;
    var lat = radiansToDegrees(2 * Math.atan(Math.exp(latRadians)) -
        Math.PI / 2);
    return new google.maps.LatLng(lat, lng);
};
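If the conversion is needed on the Python side (for example inside the PySide widget) rather than in JavaScript, the same Web Mercator math can be ported directly. This is a small sketch of the two conversions above; the function names are mine, and the results are "world" coordinates on the 256-pixel base tile (multiply by 2**zoom to get pixel coordinates at a given zoom level).
import math

TILE_SIZE = 256  # base Web Mercator tile size, as in the JS code above

def lat_lng_to_point(lat, lng):
    # forward projection: lat/lon degrees -> world coordinates on the base tile
    siny = math.sin(math.radians(lat))
    # clamp, as in the JS bound() helper, to keep latitude inside the square tile
    siny = min(max(siny, -0.9999), 0.9999)
    x = TILE_SIZE * (0.5 + lng / 360.0)
    y = TILE_SIZE * (0.5 - math.log((1 + siny) / (1 - siny)) / (4 * math.pi))
    return x, y

def point_to_lat_lng(x, y):
    # inverse projection: world coordinates on the base tile -> lat/lon degrees
    lng = (x - TILE_SIZE / 2.0) * 360.0 / TILE_SIZE
    lat_radians = (TILE_SIZE / 2.0 - y) * (2 * math.pi) / TILE_SIZE
    lat = math.degrees(2 * math.atan(math.exp(lat_radians)) - math.pi / 2)
    return lat, lng

# example: the airport point from the answer above
print(lat_lng_to_point(34.746481, -92.289595))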