Converting a shapefile's coordinate system from GCS_Assumed_Geographic_1 to WGS 84

I have my shapefile in ArcMap 10.8.1 with this coordinate system: Geographic Coordinate System: GCS_Assumed_Geographic_1; Datum: D_North_American_1927; Prime Meridian: Greenwich; Angular Unit: Degree.
I want to convert it to WGS 84, but I failed. Any help?
I tried opening a blank map, set the view to WGS 84, loaded a basemap, and then used Define Projection to set the coordinate system to WGS 84, but it did not load correctly.
Any idea?
PS: maybe it is ESRI:104000?

Related

GeoDjango: How to create a circle anywhere on earth based on point and radius?

I have a similar question to this one. Using geodjango, I want to draw a circle on a map with a certain radius in km. However, the suggested solution
a) does not use km but instead degrees, and
b) becomes an oval further north or south.
Here is what I do:
from django.contrib.gis import geos
import folium

lat = 49.17
lng = -123.96
center = geos.Point(lng, lat)
radius = 0.01  # this is in degrees, not km
circle = center.buffer(radius)

# And I then use folium to show a map on-screen:
map = folium.Map(
    location=[lat, lng],
    zoom_start=14,
    attr="Mapbox",
)
folium.GeoJson(
    circle.geojson,
    name="geojson",
).add_to(map)
The result is this:
How can I
a) draw a circle that is always 3 km in radius, independent from the position on the globe, and
b) ensure this is a circle and not an oval at all latitudes?
Here is the code:

from django.contrib.gis import geos
import folium

lat = 49.17
lng = -123.96
center = geos.Point(x=lng, y=lat, srid=4326)
center.transform(3857)  # transform projection to Web Mercator
radius = 3000  # now you can use meters
circle = center.buffer(radius)
circle.transform(4326)  # transform back to WGS84 to create GeoJSON

# And I then use folium to show a map on-screen:
map = folium.Map(
    location=[lat, lng],
    zoom_start=14,
    attr="Mapbox",
)
geojson = folium.GeoJson(
    circle.geojson,
    name="geojson",
)
geojson.add_to(map)
Explanation
This problem occurs because of map projections.
Lat/long coordinates are expressed in the WGS84 coordinate system; the values are in degrees.
The map you see in folium uses another projection (Web Mercator). It represents the world as a plane, which produces distortions toward the north and south. Its coordinate values are in meters.
On a globe your circle would look perfectly round, but because folium uses a different projection it gets distorted.
It is also important to know that every projection is identified by a number (an EPSG code). With these EPSG codes you can transform coordinates from one projection into another:
Web Mercator -> EPSG 3857
WGS84 -> EPSG 4326
With this code you now get a round circle in folium (Web Mercator), but be aware that it would look oval and distorted when viewed on a globe.
This is just a very simplified explanation. Have a look at map projections to better understand the problem.
This guide gives a good overview:
Map Projections
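One caveat worth flagging (a well-known property of EPSG 3857, not stated in the answer above): Web Mercator "meters" are stretched by a factor of 1/cos(latitude), so a 3000 m buffer drawn in EPSG 3857 at latitude 49.17 actually covers roughly 3000 * cos(49.17) on the ground, i.e. closer to 2 km than 3 km. A minimal sketch of the scale factor:

```python
import math

def mercator_scale(lat_deg):
    """Local scale factor of spherical Web Mercator at a given latitude:
    one 'Mercator meter' corresponds to cos(lat) real meters."""
    return math.cos(math.radians(lat_deg))

def ground_radius(planar_radius_m, lat_deg):
    """Approximate true ground radius of a circle buffered by
    planar_radius_m in EPSG 3857, centred at lat_deg."""
    return planar_radius_m * mercator_scale(lat_deg)

print(ground_radius(3000, 49.17))  # roughly 1960 m, not 3000 m
print(ground_radius(3000, 0))      # 3000 m at the equator
```

If a true 3 km circle at any latitude matters, buffering in a local azimuthal equidistant projection centred on the point, or simply using folium.Circle (which takes its radius in real meters), avoids this distortion.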
Alternatively, try this; folium.Circle takes its radius in meters directly, so you can skip the transforms entirely:

folium.Circle(
    radius=3000,
    location=[lat, lng],
    popup="Whatever name",
    color="#3186cc",
    fill=True,
    fill_color="#3186cc",
).add_to(map)

MediaPipe TensorflowLite Iris Model

I am trying to understand the output of the tflite Iris landmarks model available from mediapipe.
The model card describes the output as 71 2D landmarks and 5 2D landmarks. When inspecting the model as follows:
interpreter = tf.lite.Interpreter(model_path='iris_landmark.tflite')
interpreter.allocate_tensors()
output_details = interpreter.get_output_details()
print(output_details)
[{'dtype': numpy.float32,
  'index': 384,
  'name': 'output_eyes_contours_and_brows',
  'quantization': (0.0, 0),
  'quantization_parameters': {'quantized_dimension': 0,
                              'scales': array([], dtype=float32),
                              'zero_points': array([], dtype=int32)},
  'shape': array([  1, 213], dtype=int32),
  'shape_signature': array([  1, 213], dtype=int32),
  'sparsity_parameters': {}},
 {'dtype': numpy.float32,
  'index': 385,
  'name': 'output_iris',
  'quantization': (0.0, 0),
  'quantization_parameters': {'quantized_dimension': 0,
                              'scales': array([], dtype=float32),
                              'zero_points': array([], dtype=int32)},
  'shape': array([ 1, 15], dtype=int32),
  'shape_signature': array([ 1, 15], dtype=int32),
  'sparsity_parameters': {}}]
I see 213 values and 15 values in the model outputs (213 = 71 x 3 and 15 = 5 x 3), so I assume I am getting an x/y/z coordinate for each point. After running the model on an image I get values in the -7000 to +7000 range. My input was a 64x64 image; any idea how these points correspond to the original image?
I would like to have pixel coordinates of the eye keypoints, which are rendered in the mediapipe examples.
The model card appears to be wrong: the model actually outputs 3D coordinates. There is also some normalization on the model input and output that isn't clearly documented but is needed for drawing the 2D landmarks.
I opened a github issue with my findings here. I haven't seen any changes related to the model card.
I created a colab demonstrating proper usage, here. You can ignore the z coordinate and plot the x/y coordinates onto your image to see the landmarks.
I probably should update the colab with an iris picture example.
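As a sketch of what the colab does with these tensors: the two flat outputs can be reshaped into landmark arrays of (x, y, z) triples. The assumption below (mine, not from the model card) is that x/y are in model-input pixel units, so they are rescaled to the original image size; the function name and parameters are illustrative:

```python
import numpy as np

def split_landmarks(contours_flat, iris_flat, input_size=64, image_size=64):
    """Reshape the flat (1, 213) and (1, 15) outputs into (71, 3) and
    (5, 3) arrays of (x, y, z) landmarks, then scale x/y from
    model-input pixels to the original image size.
    The pixel-unit assumption is an interpretation, not documented."""
    contours = np.asarray(contours_flat, dtype=float).reshape(71, 3)
    iris = np.asarray(iris_flat, dtype=float).reshape(5, 3)
    scale = image_size / input_size
    contours[:, :2] *= scale
    iris[:, :2] *= scale
    return contours, iris
```

You can then ignore the z column and plot contours[:, 0] against contours[:, 1] over the input image to check the landmark positions.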

GeoDjango query coordinate format: converting lat/lon for OpenStreetMap

I have a polygon in my model citys, but on my map, for example,
Bogotá has the coordinate -8243997.66798, 517864.86656 (OpenStreetMap), while I need to make a query with coordinates like (4.697857, -74.144554) (Google Maps).
pnt = 'POINT(%d %d)' % (lon, lat)
zonas = ciudad.zona_set.filter(polygon__contains=pnt)
zonas is empty :/
How can I convert lat and lon to the standard coordinates used by OpenStreetMap, or how can I find the SRID code for an input (lat, lon)?
pnt = GEOSGeometry('POINT(-96.876369 29.905320)', srid=srid_from_lat_lon)
Thanks
When making spatial queries, it's good practice to pass a geometry that has a specified spatial reference system (SRID). That way, GeoDjango will automatically convert the input query geometry to the coordinate system of your table (the coordinate system of your city model, in your case).
In the first code example you gave, you do not specify an SRID on the geometry, so pnt = 'POINT(%d %d)' % (lon, lat) does not have one (note also that %d truncates the decimals of your coordinates; use %f, or better, a GEOSGeometry). In this case, GeoDjango will assume the SRID of the input is the same as that of the model data table. That is not the case in your example, and that is why you don't get any matches.
So you will need to create your point with the correct SRID. If you get the coordinates from OSM, they are most likely in the Web Mercator projection, which has SRID 3857 and is widely used in web mapping.
For this, you can use the EWKT format (which is essentially SRID + WKT) like so:
pnt = 'SRID=4326;POINT(-96.876369 29.90532)'
Or if you have the coordinates in Web Mercator Projection, the following should work:
pnt = 'SRID=3857;POINT(-8243997.66798 517864.86656)'
zonas = ciudad.zona_set.filter(polygon__contains=pnt)
Just for reference, here are a few examples of how to go back and forth between EWKT and GEOSGeometry:
So this (normal WKT, with srid specified on creation of the geometry)
GEOSGeometry('POINT(-8243997.66798 517864.86656)', srid=3857)
is equivalent to this (srid contained in EWKT string):
GEOSGeometry('SRID=3857;POINT(-8243997.66798 517864.86656)')
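If you want to sanity-check the conversion outside Django, the spherical Web Mercator inverse is simple enough to compute by hand. This sketch (pure Python, no GeoDjango) converts the OSM coordinates from the question back to lat/lon degrees:

```python
import math

R = 6378137.0  # sphere radius used by EPSG 3857 (WGS84 semi-major axis)

def mercator_to_latlon(x, y):
    """Inverse spherical Web Mercator: EPSG 3857 meters -> EPSG 4326 degrees."""
    lon = math.degrees(x / R)
    lat = math.degrees(2.0 * math.atan(math.exp(y / R)) - math.pi / 2.0)
    return lat, lon

lat, lon = mercator_to_latlon(-8243997.66798, 517864.86656)
print(lat, lon)  # a point near Bogota, roughly (4.65, -74.06)
```

This confirms the coordinates in the question really are Web Mercator (SRID 3857); for production code, GEOSGeometry.transform() or pyproj is the right tool rather than hand-rolled formulas.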

How to get pixel coordinates as lat/lng from a JPG image

I have a map image in JPG format and I am trying to calculate the latitude and longitude of each pixel. I have the coordinates of 3 points, along with their pixel row and column.
At first I used the Python utm library and calculated the distance difference between two points using lat and lng:
diff_meters_x / diff_pixels_x
diff_meters_y / diff_pixels_y
After finding the per-pixel difference on the x-axis and y-axis, I tested with the third point, but the computed position is off by nearly 100 km, even though the pixels and lat/lngs are correct.
Then I tried the GDAL library, but it works on .tif images. I am using Python 2.7, but I could also use another platform such as MATLAB. Could you help me figure this problem out?
Map Image
Here are three points:
BLK:
LatLng: 39.760751, 27.840443 || UTM_Meters: 571990.1817772812, 4401541.17886532 || Zone/Band: 35, 'S' || Pixel: [210, 247]
KUT:
LatLng: 39.495730, 29.997087 || UTM_Meters: 757725.0341605131, 4376079.988600584 || UTM_Zone/Band: 35, 'S' || Pixel: [288, 260]
USAK:
LatLng: 38.754252, 29.337908 || UTM_Meters: 703154.2913673887, 4292101.594408637 || UTM_Zone/Band: 35, 'S' || Pixel: [265, 296]
Take a look at the basemap package from matplotlib: http://matplotlib.org/basemap/users/mapcoords.html
Lat/long calculation depends heavily on the projection of the picture, and basemap handles this for you.
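Another route, since the question already provides three control points: fit a six-parameter affine transform (pixel -> lat/lon) directly from them, which is essentially what GDAL ground control points do. The sketch below is pure Python (Cramer's rule on a 3x3 system) using the BLK/KUT/USAK data above; it assumes the [a, b] pixel values are in (column, row) order (the exact fit holds either way), and that an affine model is a reasonable local approximation for this map:

```python
def solve3(A, b):
    # Cramer's rule for a 3x3 linear system A x = b
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    x = []
    for i in range(3):
        Ai = [row[:] for row in A]
        for r in range(3):
            Ai[r][i] = b[r]
        x.append(det(Ai) / d)
    return x

# Control points from the question: pixel -> (lat, lon)
pixels = [(210, 247), (288, 260), (265, 296)]   # BLK, KUT, USAK
lats = [39.760751, 39.495730, 38.754252]
lons = [27.840443, 29.997087, 29.337908]

A = [[c, r, 1.0] for (c, r) in pixels]
lat_coef = solve3(A, lats)  # lat = a*col + b*row + c
lon_coef = solve3(A, lons)  # lon = d*col + e*row + f

def pixel_to_latlon(col, row):
    lat = lat_coef[0] * col + lat_coef[1] * row + lat_coef[2]
    lon = lon_coef[0] * col + lon_coef[1] * row + lon_coef[2]
    return lat, lon
```

With more than three points you would switch to a least-squares fit, and for real accuracy you would fit in the map's projected coordinates (e.g. UTM meters) rather than raw degrees, since degrees of longitude shrink with latitude; that mismatch is one plausible source of the ~100 km error in the per-axis scaling attempt.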

How to detect water level in a transparent container?

I am using the opencv-python library to do liquid level detection. So far I have converted the image to grayscale and, by applying Canny edge detection, identified the container.
import numpy as np
import cv2

img = cv2.imread('botone.jpg')
# convert the image to grayscale
imgray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# detect edges
edges = cv2.Canny(imgray, 120, 230)
I need to know how to find the water level from this stage.
Should I try machine learning, or is there another option or algorithm available?
I took the approach of finding the horizontal line in the edge-detected image: if a horizontal line crosses a certain threshold, I consider it the level. But the result is not consistent.
Are there any other approaches I could take, or white papers for reference?
I don't know exactly how you would do that with numpy and opencv, because I use ImageMagick (which is installed on most Linux distros and is available for OSX and Windows), but the concept should be applicable.
First, I would probably go for a Sobel filter that is rotated to find horizontal edges - i.e. a directional filter.
convert chemistry.jpg -morphology Convolve Sobel:90 sobel.jpg
Then I would probably look at adding in a Hough Transform to find the lines within the horizontal edge-detected image. So, my one-liner looks like this in the Terminal/shell:
convert chemistry.jpg -morphology Convolve Sobel:90 -hough-lines 5x5+30 level.jpg
If I add in some debug, you can see the coefficients of the Sobel filter:
convert chemistry.jpg -define showkernel=1 -morphology Convolve Sobel:90 -hough-lines 5x5+30 sobel.jpg
Kernel "Sobel#90" of size 3x3+1+1 with values from -2 to 2
Forming a output range from -4 to 4 (Zero-Summing)
0: 1 2 1
1: 0 0 0
2: -1 -2 -1
If I add in some more debug, you can see the coordinates of the lines detected:
convert chemistry.jpg -morphology Convolve Sobel:90 -hough-lines 5x5+30 -write lines.mvg level.jpg
lines.mvg
# Hough line transform: 5x5+30
viewbox 0 0 86 196
line 0,1.52265 86,18.2394 # 30 <-- this is the topmost, somewhat diagonal line
line 0,84.2484 86,82.7472 # 40 <-- this is your actual level
line 0,84.5 86,84.5 # 40 <-- this is also your actual level
line 0,94.5 86,94.5 # 30 <-- this is the line just below the surface
line 0,93.7489 86,95.25 # 30 <-- so is this
line 0,132.379 86,124.854 # 32 <-- this is the red&white valve(?)
line 0,131.021 86,128.018 # 34
line 0,130.255 86,128.754 # 34
line 0,130.5 86,130.5 # 34
line 0,129.754 86,131.256 # 34
line 0,192.265 86,190.764 # 86
line 0,191.5 86,191.5 # 86
line 0,190.764 86,192.265 # 86
line 0,192.5 86,192.5 # 86
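The Sobel#90 idea translates directly back to Python. Here is a minimal pure-Python sketch (no OpenCV, synthetic image) showing how the horizontal-edge kernel from the showkernel debug output responds most strongly at the row of a horizontal boundary, which is the core of the level-detection trick:

```python
def sobel90_response(img):
    """Convolve with the Sobel#90 kernel shown in the debug output
    ([1,2,1],[0,0,0],[-1,-2,-1]) and return, for each interior row,
    (summed absolute horizontal-edge response, row index)."""
    k = [[1, 2, 1], [0, 0, 0], [-1, -2, -1]]
    h, w = len(img), len(img[0])
    rows = []
    for y in range(1, h - 1):
        total = 0
        for x in range(1, w - 1):
            v = sum(k[j][i] * img[y - 1 + j][x - 1 + i]
                    for j in range(3) for i in range(3))
            total += abs(v)
        rows.append((total, y))
    return rows

# Synthetic "container": bright air above row 5, darker liquid below.
img = [[200] * 10 for _ in range(5)] + [[50] * 10 for _ in range(5)]
level_row = max(sobel90_response(img))[1]  # strongest horizontal edge
```

In practice you would threshold the per-row response (the +30 in -hough-lines 5x5+30 plays a similar role) and pick the topmost strong horizontal line as the liquid surface.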
As I said in my comments, please also think about maybe lighting your experiment better - either with different coloured lights, more diffuse lights, different direction lights. Also, if your experiment happens over time, you could consider looking at differences between images to see which line is moving...
Here are the lines on top of your original image: