Google Earth: is a dynamic parametrized KML possible? - web-services

I made a large KML file, about 23 MB. Google Earth takes a very long time to render it, lags, and uses 1 GB of RAM or more. On slower computers it may also fail to render some areas.
So the idea is to use a parametrized GET request to the server that returns KML data only for a region with the specified boundaries.
Can Google Earth initiate and use such requests?

What you're asking can be done with a NetworkLink. If you dynamically generate the KML from a servlet, web-service, script, etc. then you can instruct Google Earth to send the bounding box for its view from which you can generate the KML to return. This approach requires hosting a custom "service" on an application server/web server that can generate KML in response to requests sent by Google Earth.
In your root-level NetworkLink you need to define refreshMode=onChange, along with the URL of the servlet, so the link refreshes when the view changes. It is recommended to set viewRefreshMode=onStop together with a viewRefreshTime element so the data is fetched only one second after the user stops zooming or panning; otherwise the data is refreshed continually. A viewFormat element is also needed to instruct Google Earth to send the bounding box of the current view. In this example, the BBOX parameter is added to the HTTP parameters sent to the servlet in an HTTP GET request.
<Link>
  <href>servlet-url</href>
  <refreshMode>onChange</refreshMode>
  <viewRefreshMode>onStop</viewRefreshMode>
  <viewRefreshTime>1</viewRefreshTime>
  <viewFormat>BBOX=[bboxWest],[bboxSouth],[bboxEast],[bboxNorth]</viewFormat>
</Link>
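On the server side, the handler only needs to parse the BBOX parameter and return KML restricted to that view. A minimal Python sketch of the idea, not tied to any particular framework (the placemark data structure and function name are illustrative assumptions):

```python
def bbox_kml(bbox_param: str, placemarks) -> str:
    # BBOX arrives as "west,south,east,north" per the <viewFormat> above
    west, south, east, north = (float(v) for v in bbox_param.split(","))

    def in_view(lon, lat):
        return west <= lon <= east and south <= lat <= north

    # placemarks is assumed to be an iterable of (name, lon, lat) tuples
    body = "".join(
        f"<Placemark><name>{name}</name>"
        f"<Point><coordinates>{lon},{lat}</coordinates></Point></Placemark>"
        for name, lon, lat in placemarks if in_view(lon, lat)
    )
    return ('<?xml version="1.0" encoding="UTF-8"?>'
            '<kml xmlns="http://www.opengis.net/kml/2.2">'
            f"<Document>{body}</Document></kml>")
```

In a real servlet or WSGI handler, the bbox_param string would come straight from the BBOX query parameter of the GET request.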
If your data spans a large area, you could break the data up into separate KML files and specify Region-based NetworkLinks in a parent KML file. This approach lets you generate the data once as static KML files and serve only the data that is "active" based on the user's view.
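For example, a parent KML file could reference one child file per tile, each gated by a Region so it only loads when the region becomes active (file names, coordinates, and the minLodPixels threshold below are illustrative):

```xml
<NetworkLink>
  <name>Tile 0/0</name>
  <Region>
    <LatLonAltBox>
      <north>45</north><south>40</south>
      <east>-70</east><west>-75</west>
    </LatLonAltBox>
    <Lod><minLodPixels>128</minLodPixels></Lod>
  </Region>
  <Link>
    <href>tile_0_0.kml</href>
    <viewRefreshMode>onRegion</viewRefreshMode>
  </Link>
</NetworkLink>
```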
Related Tutorial:
https://developers.google.com/kml/documentation/regions#regionbasednl
Reference:
https://developers.google.com/kml/documentation/kmlreference#networklink
https://developers.google.com/kml/documentation/kmlreference#region
https://developers.google.com/kml/documentation/kmlreference#viewformat

Yes, this isn't a problem at all. You add the source URL of your KML in Google Earth as a URL with parameters and then load it as several separate sources. With that approach, though, you are only 'dynamically' providing the criteria at the moment you add the KML to GE; from then on it behaves like any other static KML file you might have loaded.
EDIT: I see now (after logging into GE) that it actually calls these network links as described by @JasonM1 (under Add → Network Link).

Related

Correct way to fetch data from an aws server into a flutter app?

I have a general understanding question. I am building a flutter app that relies on a content library containing text files, latex equations, images, pdfs, videos etc.
The content lies on an aws amplify backend. Depending on the navigation of the user in the app, the corresponding data is fetched and displayed.
I am not sure about the correct way of fetching the data. The current method (which works) is to store the data in an S3 bucket; when data is requested, it is downloaded to a temporary directory and then opened and processed in the app. This is actually not slow, but I feel it is not how it should be done.
When data is downloaded, a file-transfer notification pops up, which bothers me because it is shown all the time. I would also like to read the data directly with something like a GET request, without downloading the file first (especially for text files, which I would like to read directly into a String). But I don't know how that works, because I don't see a way to store data in a file system with the other Amplify services such as DataStore or the REST API. The S3 bucket is an intuitive way of storing data that is easy for the content creators at my company to use, so S3 seems to be the way to go. However, with S3 I have only figured out the download method for fetching data.
Could someone give me a hint on what is the correct approach for this use case? Thank you very much!
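If the goal is just to read an object straight into a String without going through a temporary file, one option is to issue a plain HTTP GET against a presigned URL for the S3 object. A minimal Python sketch of the idea (the bucket and key names are hypothetical; in a Flutter app the same GET would be done in Dart):

```python
import urllib.request

def fetch_text(url: str) -> str:
    # Read the response body straight into memory -- no temp file on disk.
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8")

# Against S3 you would point this at a presigned URL, e.g. one produced
# with boto3 (hypothetical bucket/key):
#   url = s3_client.generate_presigned_url(
#       "get_object",
#       Params={"Bucket": "content-library", "Key": "chapter1.txt"})
```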

Google Vision - Batch Image Annotation, map result to request images

I'm using Google Vision to detect text in images (on my backend written in kotlin).
I want to do a batch request with multiple images from a web url but the problem I'm facing is how to know what results maps to what image in the request?
Can I rely on Google to return the result in the same order as I put them into the batch request?
Currently I do not get any information in the response that I can use to figure out which image the annotated text belongs to, and it's important that the text can be mapped to the correct image.
If you need more information please let me know and I'll provide it to you.
The responses are in the same order as they are in the request.
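Given that ordering guarantee, the mapping can be recovered by position. A minimal sketch of the pattern (the URI strings and response values here are stand-ins for the real request URIs and AnnotateImageResponse objects):

```python
def map_results(image_uris, responses):
    # responses[i] corresponds to image_uris[i], so zip restores the mapping
    return dict(zip(image_uris, responses))
```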

How to reference other collections in postman

Is there a way to reference other collections from a specific collection? For example, if I have a file-upload collection (something that uploads a file), I want to be able to use it from other collections. How would I reference the file upload?
Here's an example of what I'm talking about.
I have a collection where a file is uploaded and a calculation needs to be performed. The test or collection would go something like this, where each step is a POST, GET, etc.
Upload and run calculation:
1. Generate a token
   - make call
   - copy/save token value
2. Upload a specific file (these would be 3 individual requests)
   - upload file
   - monitor upload status
   - return ID of file uploaded
3. Run calculation
   - use ID to pass as parameter
   - pass other values to set up the calculation
4. Monitor run
5. Validate results
In another collection I need to validate that an uploaded file's metadata is correct. It is not directly related to the one above, but has some similarities:
1. Generate a token
   - make call
   - copy/save token value
2. Upload a specific file (these would be 3 individual requests)
   - upload file
   - monitor upload status
   - get final result and return ID of file uploaded
3. Get me
4. Validate metadata is correct
Steps 1 and 2 are common functionality; there would be no difference there. How could I extract those two steps as modular components so I can reference them from any collection?
For additional clarity: we use ReadyAPI, which has 'Run Test Case' and can obviously run another test case. We've separated the token and file-upload functionality into its own test case and use it as a modular component. I'd like to achieve something similar with Postman.
Unfortunately, Postman collections work a little differently.
You can, however, merge your two collections into a single one and execute it as one collection.

Displaying image using ColdFusion from BLOB

I want to display an image from a database. I'm using data type BLOB for that image.
I've already tried #CharsetEncode(viewPoint.ppp_icons, "ASCII")#, but it didn't work.
<cfimage action="writeToBrowser" source="#imageBlob#">
http://livedocs.adobe.com/coldfusion/8/htmldocs/Tags_i_02.html
OR... use a data URI scheme (with limited browser support):
<img src="data:image/png;base64,#toBase64(imageBlob)#" />
<cfcontent reset="Yes" type="image/gif" variable="#QueryName.BlobColumn#" />
EDIT: To clarify, you would place this code in a separate template and reference that template in the src attribute of your img tag. You would pass the primary key of the database row, and the new template would look up the BLOB column and output it.
I believe it is better to keep all BLOB data in separate requests: if you are displaying many files on a page, it is best to let the page load first rather than make the page load wait until all the BLOB data has been transferred from the SQL server to the ColdFusion server. Reducing initial page load time is of crucial importance to usability.
As an addendum, if these files will be accessed frequently, it is also best to cache the file on the web server and have the separate template redirect to the cached file.

How do I track images embedded in HTML?

I'd like to track the views/impressions of images on web pages, while still allowing the images to be embedded in HTML, as in <img src="http://mysite.com/upload/myimage.jpg"/>.
I know that on Windows I can write a handler for ".jpg" so the URL triggers a handling function instead of loading the image from disk. Is it possible to do that in Python/Django on an Ubuntu server? Can the web browser still cache the .jpg files if the URL is not a straight file path?
It looks to me like this is how Google Picasa Web Albums handles image file names. I'd like to get some ideas on how to implement that.
Thanks!
-Yi
If you want images to not be cached, just append a timestamp to them. This example is in PHP, but you get the idea:
<img src="../images/myimage.gif?<?php echo time(); /* append timestamp so the image is not cached */ ?>">
Then you can analyze your logs to track views, etc.