How to batch download a large number of high-resolution satellite images from Google Maps directly? - c++

I'm helping a professor on a satellite image analysis project. We need 800 images from Google Maps, each at 8000x8000 resolution, stitched together to cover a square area. It is possible to download them one by one, but I believe there must be a way to script the batch processing.
I would like to ask how I can implement this with a shell or Python script, and how I can download images from a Google Maps URL.
Here is an example of the url:
https://maps.google.com.au/maps/myplaces?ll=-33.071009,149.554911&spn=0.027691,0.066047&ctz=-660&t=k&z=15
However, I'm not able to derive a direct image download link from this.
Update:
Actually, I solved this problem, but out of respect for Google's terms I will not post the method here.

Have you tried the Google Static Maps API?
You get 25,000 free requests, but you're limited to 640x640 per request, so each 8000x8000 image will need roughly (8000/640)^2 ≈ 160 requests at a higher zoom level.
I suggest downloading the images as described here: Downloading a picture via urllib and python
URL to start with: http://maps.googleapis.com/maps/api/staticmap?center=-33.071009,149.554911&zoom=15&size=640x640&sensor=false&maptype=satellite
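To script the grid download in Python, here is a rough sketch using urllib against that URL. The tile spacing uses the standard Web Mercator scale of 360 / (256 * 2^zoom) degrees of longitude per pixel; the latitude step is a small-area approximation, and current versions of the API may also require a key parameter, so treat this as a starting point rather than a calibrated solution.

import math
import urllib.request

CENTER_LAT, CENTER_LNG = -33.071009, 149.554911
ZOOM, SIZE, GRID = 15, 640, 3            # GRID x GRID tiles around the center

deg_per_px = 360.0 / (256 * 2 ** ZOOM)   # longitude degrees per pixel at this zoom
lng_step = deg_per_px * SIZE             # degrees of longitude covered by one tile
lat_step = lng_step * math.cos(math.radians(CENTER_LAT))  # small-area approximation

for row in range(GRID):
    for col in range(GRID):
        lat = CENTER_LAT + (GRID // 2 - row) * lat_step
        lng = CENTER_LNG + (col - GRID // 2) * lng_step
        url = ("http://maps.googleapis.com/maps/api/staticmap"
               f"?center={lat:.6f},{lng:.6f}"
               f"&zoom={ZOOM}&size={SIZE}x{SIZE}"
               "&sensor=false&maptype=satellite")
        # The response body is the PNG tile itself; save it to disk.
        urllib.request.urlretrieve(url, f"tile_{row}_{col}.png")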

It's been a long time since I solved the problem, sorry for the delay.
I posted my code to GitHub here; star or fork as you like :)
The idea is to use a virtual web browser at a very high resolution to load the Google Maps page, then capture the page. The defect is that Google symbols appear scattered across each image; the solution is to oversample the resolution of each image, then use a stitching technique to stitch them all together.
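For what it's worth, here is a hedged sketch of that virtual-browser idea using Selenium with headless Chrome; the window size, wait time, and URL are assumptions, and the captures would still need the cropping and stitching step described above.

import time
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless")
options.add_argument("--window-size=4000,4000")   # oversample the rendered view

driver = webdriver.Chrome(options=options)
driver.get("https://maps.google.com.au/maps/myplaces"
           "?ll=-33.071009,149.554911&t=k&z=15")
time.sleep(10)                     # crude wait for the map tiles to finish loading
driver.save_screenshot("capture.png")
driver.quit()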

Related

How do I find the ID to download an ImageNet subset?

I am new to ImageNet and would like to download full-sized images from one of its subsets/synsets. However, I have found it incredibly difficult to discover which subsets are available and where to find the ID code needed to download one.
All previous answers (from only 7 months ago) contain links which are now invalid. Some seem to imply there is an algorithm for constructing an ID, since it is linked to WordNet?
Essentially, I would like a dataset of plastic, plastic waste, or ideally marine debris. Any help on how to get the relevant ImageNet ID, or suggestions for other datasets, would be much appreciated!
I used this repo to achieve what you're looking for. Follow these steps:
Create an account on the ImageNet website
Once you get permission, download the list of WordNet IDs for your task
Once you have the .txt file containing the WordNet IDs, you are all set to run main.py
You can adjust the number of images per class to your needs
By default, ImageNet images are automatically resized to 224x224. To remove that resizing, or to implement other kinds of preprocessing, simply modify the code at line #40
Source: refer to this Medium article for more details.
You can find all the 1000 classes of ImageNet here.
EDIT:
The above method doesn't work post-March 2021. As per this update:
The new website is simpler; we removed tangential or outdated functions to focus on the core use case—enabling users to download the data, including the full ImageNet dataset and the ImageNet Large Scale Visual Recognition Challenge (ILSVRC).
So with this change, you may now have to use nltk to parse and search ImageNet.
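For example, here is a small sketch of looking up WordNet IDs with nltk. ImageNet names its synsets as 'n' followed by the 8-digit WordNet noun offset, so a query like 'plastic' (the dataset the question is after) can be mapped to candidate wnids this way:

import nltk
nltk.download("wordnet")          # fetch the WordNet corpus on first run
from nltk.corpus import wordnet as wn

# Each noun synset's offset, zero-padded to 8 digits, is its ImageNet wnid.
for syn in wn.synsets("plastic", pos=wn.NOUN):
    wnid = "n{:08d}".format(syn.offset())
    print(wnid, syn.name(), "-", syn.definition())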
More recently, the organizers hosted a Kaggle challenge based on the original dataset with additional labels for object detection. To download the dataset you need to register a Kaggle account and join this challenge. Please note that by doing so, you agree to abide by the competition rules.
Please be aware that this file is very large (168 GB) and the download will take anywhere from minutes to days depending on your network connection.
Install the Kaggle CLI and set up credentials as per this guideline.
pip install kaggle
Then run these:
kaggle competitions download -c imagenet-object-localization-challenge
unzip imagenet-object-localization-challenge.zip -d <YOUR_FOLDER>
Additionally, to understand the ImageNet hierarchy, refer to this.

How to convert food-101 dataset into usable format for AWS SageMaker

I'm still very new to the world of machine learning and am looking for some guidance on how to continue a project I've been working on. I'm trying to feed the Food-101 dataset into the Image Classification algorithm in SageMaker, and later deploy the trained model onto an AWS DeepLens to give it food detection capabilities. Unfortunately, the dataset comes with only the raw image files organized in subfolders, plus a .h5 file (I'm not sure whether I can feed that file type directly into SageMaker?). From what I've gathered, neither of these is a suitable way to feed the dataset into SageMaker, and I was wondering if anyone could point me in the right direction for preparing the dataset properly, i.e. converting it to a .rec or something else. Apologies if the scope of this question is very broad; I'm still a beginner at all of this, I'm simply stuck and don't know how to proceed, so any help you can provide would be fantastic. Thanks!
If you want to use the built-in algorithm for image classification, you can use either Image format or RecordIO format; see https://docs.aws.amazon.com/sagemaker/latest/dg/image-classification.html#IC-inputoutput
Image format is straightforward: just build a manifest file with the list of images. This could be an easy solution for you, since you already have the images organized in folders.
RecordIO requires that you build files with the 'im2rec' tool; see https://mxnet.incubator.apache.org/versions/master/faq/recordio.html
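As a rough sketch (assuming Food-101's images/<class>/<file>.jpg layout, and the standard im2rec .lst columns of index, label, and relative path), you could generate the listing in Python like this before running im2rec; verify the exact manifest layout against the SageMaker docs linked above:

import os

root = "food-101/images"                  # assumed dataset location
classes = sorted(os.listdir(root))        # one subfolder per food class

with open("food101_train.lst", "w") as f:
    index = 0
    for label, cls in enumerate(classes):
        for name in sorted(os.listdir(os.path.join(root, cls))):
            # im2rec .lst format: index <tab> label <tab> relative path
            f.write(f"{index}\t{label}\t{cls}/{name}\n")
            index += 1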
Once your data set is ready, you should be able to adapt the sample notebooks available at https://github.com/awslabs/amazon-sagemaker-examples/tree/master/introduction_to_amazon_algorithms

Can I receive a boundingPoly for LABEL_DETECTION results?

How can this be accomplished with the Google Vision API, please?
send the image to the Vision API
request: 'features': [{'type': 'LABEL_DETECTION', 'maxResults': 10}] (a fuller sketch of this request follows the list)
receive the labels; the one I'm particularly interested in is a "clock"
receive the boundingPoly so that I know the exact location of the clock within the image
having received the boundingPoly, I would use it to create a dynamic AR marker to be tracked by the AR library
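For reference, a minimal sketch of that request against the v1 REST endpoint (the image URI and API key are placeholders):

import requests

API_KEY = "YOUR_API_KEY"                  # placeholder
body = {
    "requests": [{
        "image": {"source": {"imageUri": "https://example.com/clock.jpg"}},  # placeholder image
        "features": [{"type": "LABEL_DETECTION", "maxResults": 10}],
    }]
}
resp = requests.post(
    "https://vision.googleapis.com/v1/images:annotate",
    params={"key": API_KEY},
    json=body,
)
# Label results come back as descriptions and scores only, with no boundingPoly.
for ann in resp.json()["responses"][0].get("labelAnnotations", []):
    print(ann["description"], ann["score"])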
Currently it doesn't look like the Google Vision API supports a boundingPoly for LABELS, hence the question whether there is a way to solve this with the Vision API.
Currently, Label Detection does not provide this functionality. We are always looking at ways to enhance the API.
After two years, it's the same. I am facing similar challenges and am thinking of opting for other solutions. I think custom solutions like the TensorFlow Object Detection API or the Darknet YOLO object detection API will do this job very easily.

og:image is not being fetched correctly by Facebook on liking or sharing

I'll try to be direct in explaining my problem:
When liking/sharing an article from my website (http://www.radiopico.com), og:image does not work well (you can try with this URL, for example: http://www.radiopico.com/index.php?n=noticias&menu=noticias&id_noticia=14350)
In the debugger I always get the correct image URL
It never shows the image
If I click the image I get a not-found error; if I copy the URL and paste it into the address bar, I get the image
Sometimes it says the image is too small; that is wrong, because it is much larger than the minimum sizes Facebook encourages. There is also no ratio problem
When I try to post the address on Facebook, it never shows the picture
I have tried adding a time var (?T=...) after the address to make sure it is not caching
I have put multiple og:images; it still does not work.
I have read and tried all the "tricks" I could find here (Stack Overflow) or on Google
Thanks for your support and best wishes on resolving this mystery
I notice that your image is technically not defined at 72 dpi, but rather at 300 dpi.
I cannot say definitively that this causes your problem. Browsers are generally smart enough to display an image at 72 dpi (using the pixel dimensions) regardless of the "Resolution" defined within the image. However, it might be worth re-saving the image with the resolution correctly set for the web at 72 dpi. Perhaps Facebook is getting confused about the image size because of the resolution setting.
Also, just to eliminate potential problems with the image: if you have Photoshop, use the Save for Web command to re-save the image, which will automatically define the re-saved image at 72 dpi AND save it without a built-in preview (which you should always do for the web anyway, to reduce file size). Then try again. Again, I'm not sure these suggestions will work, but it seems like a good idea to eliminate the resolution, the embedded preview, and potential issues with the original file by re-saving.
Debugging your link, I see the linter response is 206.
I don't know if it can help you, but I've found this.
Check it out ;)
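To see roughly what the crawler gets, here is a small sketch (the image URL is a placeholder) that requests the og:image with a Range header and prints the status and headers; a 206 Partial Content instead of a plain 200, as the linter reported, can trip up scrapers:

import requests

url = "http://www.radiopico.com/path/to/og-image.jpg"   # placeholder URL
resp = requests.get(url, headers={"Range": "bytes=0-"})
print(resp.status_code)                    # a 206 here mirrors the linter output
print(resp.headers.get("Content-Type"))    # should be an image/* type
print(resp.headers.get("Content-Length"))  # sanity-check the reported size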

Are there any Autopano Server alternatives, or open source tools for automated panorama stitching / video panoramas?

I haven't found any server-side tool for making panoramas by stitching images or a video. I would like an open source alternative, but haven't found any. I just don't want to go through the hassle of developing all of this on my own, but paid software is usually closed source and not very flexible.
I've seen some nifty panorama-from-video software on the iPhone and thought it would be easy to find on *nix systems, but no luck.
Any help will be appreciated. Thanks in advance.
The only option I know of is to use panostart (which is part of Hugin) from whatever server language you are using.
See here for more information and other command line tools that do parts of the process more specifically.
panostart just works with images, so obviously if you wanted it to work with videos, you would have to process the video with something like ffmpeg -i z.mov -f image2 export2%d.png to generate images to pass to panostart.
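For instance, here is a rough Python sketch of that pipeline. The ffmpeg invocation is the one above; panostart's argument list is an assumption to check against the Hugin documentation:

import glob
import subprocess

# Step 1: dump the video's frames as numbered PNGs (command from above).
subprocess.run(["ffmpeg", "-i", "z.mov", "-f", "image2", "export2%d.png"],
               check=True)

# Step 2: hand the extracted frames to panostart (argument list assumed;
# consult the Hugin docs for the exact options).
frames = sorted(glob.glob("export2*.png"))
subprocess.run(["panostart"] + frames, check=True)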
The other alternative which requires more effort is to write something that uses libpano13 and libffmpeg directly.
You can have a look at VideoStitch; a command-line tool is provided to automate the stitching.