Google Cloud Vision with Google Cloud Storage - google-cloud-platform

I am building a text detection application using the Google Vision API.
I want to figure out how an OCR detection function can load the JPG file.
This is a code reference I got from a Google codelab, but when I try to open the gs:// URL the way the diagram demonstrates, I get an error message saying "invalid arguments". Have I missed anything?
I then found out that, when it is deployed on Cloud Functions, Google Vision will load the image from Storage. But how? I cannot find any relevant documentation giving a detailed process for this, and I am new to the code and bad at finding these instructions. Does anyone know how I can successfully read/connect to the JPG file, or could you provide a reference link regarding this? Thank you!

The code on my end runs fine and appears to be correct; I just copied it and ran it through Google Cloud Shell. Be sure to install the Vision API Python client library in your Cloud Shell first: pip install --upgrade google-cloud-vision
If your intention is to open the image, the sample used in the reference you mentioned is accessible here: https://storage.cloud.google.com/cloud-samples-data/vision/text/screen.jpg
The uri provided in the code is the resource location (the gs:// path) of that image in Google Cloud Storage, and the link above is its URL equivalent.
I would suggest reading through the official documentation on using the API with client libraries here, and, although it is a different implementation, you can also look at this OCR usage example here.
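For reference, here is a minimal sketch of text detection against a gs:// URI with the Python client library (the path below is the public sample image mentioned above; swap in your own object):

from google.cloud import vision

# Create the Vision API client (uses your application default credentials).
client = vision.ImageAnnotatorClient()

# Point the request at an object in Cloud Storage via its gs:// URI
# instead of uploading local image bytes.
image = vision.Image()
image.source.image_uri = "gs://cloud-samples-data/vision/text/screen.jpg"

# Run OCR; the first annotation contains the full detected text.
response = client.text_detection(image=image)
if response.error.message:
    raise RuntimeError(response.error.message)
print(response.text_annotations[0].description)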

Related

How to access a SCORM package from an S3 bucket?

I was able to upload the SCORM package zip and unzip it into an S3 bucket using Drupal 8.
While trying to read the SCORM files in the extracted data folder, I got an error message like
"ERROR – unable to acquire LMS API, content may not play properly and results may not be recorded. Please contact technical support"
I checked the access settings and everything is public.
Can anyone tell me what I missed?
That content sounds like it is set up to look for API or API_1484_11 (the SCORM runtime APIs for 1.2 and 2004) and pop up an alert when it cannot find one.
With a runtime API present, that alert would go away. Your next question, "How do I expose a runtime API?", is normally answered by hand-rolling one or looking for an existing runtime API, paid or otherwise.
Something like https://github.com/cybercussion/SCOBot/blob/master/QUnit-Tests/js/scorm/SCOBot_API_1484_11.js might get you started if you're looking for something free.
If you plan to build an LMS, you may want to look into paid options.

Google Built CentOS Image - Anyone have a download for this?

I've looked for this across the web a few times, and I feel like this hasn't been asked exactly, or I may just be getting bogged down with the wrong syntax. Hoping to get an easy answer here (yes, you can't get this, is an acceptable answer).
The variations from the base CentOS image are listed here: Link to GCP
However, they don't actually provide a download for this image. I'm trying to get a local VM running in VMWare with this image.
I feel as though they'd provide this to their clients to make it easier to prepare for use of their product, but I'm not finding it anywhere.
If anyone could toss me a link to a pre-configured CentOS ISO with the minor changes, I'd definitely take that as an alternative. I'm just not confident in my skills with Linux enough to configure the firewall properly :)
GCP doesn't support exporting Google-provided images. However, it does support exporting custom images.
I don't have hands-on experience with image exporting, but I think this approach works.
Create a custom image
You can create a custom image based on your GCE VM instance.
Go to Navigation menu -> Compute Engine -> Images.
On this page you can create a custom image from a disk or a snapshot.
Select one and create the custom image.
Export your image
After the custom image has been created, go to its page and click "Export" at the top.
Select the export format and a GCS destination, then click Export.
You now have an image file in Google Cloud Storage.
Download the image file and import it into your local VM software.

gcloud SDK cannot find the file by its gs:// link

It is weird to see this as the result; please tell me what I should do now.
I cannot find anything helpful through Google, so please help me, thanks.
When I execute the command:
gcloud ml speech recognize 'gs://cloud-smaples-test-01/speech/test.wav' --language-code='en-US'
on my computer, the only response that I can see is this:
ERROR: (gcloud.ml.speech.recognize) Invalid audio source ['gs://cloud-smaples-test-01/speech/test.wav']. The source must either be a local path or a Google Cloud Storage URL (such as gs://bucket/object).
The spelling "smaples" is intentional; I changed the letter order to avoid using the same name as an existing bucket.
However, when I execute the same command on Google Cloud Shell, I can see the speech-to-text result. I do not know what exactly happened.
I use the same Google account whether I run the command on my computer or in Google Cloud Shell. I have also set the file, and even the whole bucket, to be readable by anyone.
What could cause this problem?
Screenshot: result on my computer
Screenshot: result on Google Cloud Shell
You seem to be running Windows on your computer. The problem is that the Windows command prompt treats single quotation marks as part of the string, so the quotes end up in the path that gcloud receives.
Removing the quotes from both the bucket path and the language-code flag will resolve the issue.
Example:
gcloud ml speech recognize gs://cloud-smaples-test-01/speech/test.wav --language-code=en-US

How to get file content and move a file to a different Google Cloud Storage bucket using Google Cloud Functions

I'm trying to get the file that was uploaded to Google Cloud Storage, do some work with its content, and move it to a different bucket using Google Cloud Functions with Python 3.7. Following their documentation I was only able to get the file name. I tried using import cloudstorage, but it raises the error module 'cloudstorage' has no attribute 'NotFoundError', and googling did not get me anywhere.
Does anyone have sample code that does what I need?
The cloudstorage library is specific to the App Engine Standard environment.
The library to use with Cloud Storage is google-cloud-storage. You must declare it in your function's requirements.txt file.
This example of how to copy from one bucket to another should suffice. After copying the object, you can simply call source_blob.delete() to get rid of the original.
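As an illustration, here is a minimal sketch of a background function triggered by an upload to a bucket; the destination bucket name is a placeholder you would replace with your own:

from google.cloud import storage

DESTINATION_BUCKET = "my-destination-bucket"  # placeholder, not a real bucket

def process_and_move(event, context):
    # Triggered by a finalized object; 'event' carries the bucket and object name.
    client = storage.Client()
    source_bucket = client.bucket(event["bucket"])
    destination_bucket = client.bucket(DESTINATION_BUCKET)

    # Read the uploaded object's content to do some work with it.
    blob = source_bucket.blob(event["name"])
    content = blob.download_as_string()
    print("Read {} bytes from {}".format(len(content), event["name"]))

    # Copy the object into the other bucket, then delete the original.
    source_bucket.copy_blob(blob, destination_bucket, event["name"])
    blob.delete()

Remember to list google-cloud-storage in the function's requirements.txt, as mentioned above.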

scikit-learn on Google Cloud Platform through Datalab or Compute Engine?

I am running a Django app inside GCP. My idea was to call a Python script from "view.py" for some machine learning algorithm and then display the result on the page.
But now I understand that running a machine learning library like scikit-learn on GAE will not be possible (see Tim's answer here and this thread).
But suppose I still need to do this; I believe there are two possible ways, though I am not sure whether my guesses are right:
1) Since Google Datalab provides an entire Anaconda-like distribution, if there is a Datalab API that can be called from a Python file in the Django app, could I achieve my goal that way?
2) Could I install the scikit-learn library on a Compute Engine instance on GCP, somehow send it a request to run my code, and then return the output back to the Python file in the Django app?
I am very new to client-server and cloud computing as a whole, so please provide examples (if possible) for any suggestion or pointer.
I believe what you want is to use the App Engine Flexible environment rather than the standard App Engine environment.
App Engine Flex runs your code on a Compute Engine VM, so it does not have the library limitations that standard App Engine has.
Specifically, you'll need to add a requirements.txt file to specify the version of scikit-learn that you want installed, and then add a vm: true clause to your app.yaml file.
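As a rough illustration, once scikit-learn can be installed through requirements.txt (for example on the Flex environment), a Django view can call it directly; the model file and view below are hypothetical:

# view.py (hypothetical example; assumes scikit-learn and joblib are installed)
import joblib
from django.http import JsonResponse

# Load a previously trained scikit-learn model once, at module import time.
MODEL = joblib.load("model.joblib")  # hypothetical path to your trained model

def predict(request):
    # Read two hypothetical numeric features from the query string.
    features = [[float(request.GET.get("x1", 0)), float(request.GET.get("x2", 0))]]
    prediction = MODEL.predict(features)[0]
    return JsonResponse({"prediction": float(prediction)})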
sklearn is now supported on ML Engine.
So, another alternative now is to use online prediction on Cloud ML Engine, and deploy your scikit-learn model as a web service.
Here is a fully worked example of fully-managed scikit-learn training, online prediction, and hyperparameter tuning:
https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/blogs/sklearn/babyweight_skl.ipynb
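To give a sense of the workflow, here is a minimal sketch of training and exporting a scikit-learn model so it can be uploaded to Cloud Storage and deployed for online prediction; the bucket name in the comments is a placeholder, and the notebook linked above covers the exact deployment steps:

import joblib
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Train a small scikit-learn model locally (or via ML Engine training).
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Export the model; ML Engine's scikit-learn runtime looks for model.joblib.
joblib.dump(model, "model.joblib")

# Next, outside Python: copy model.joblib to a Cloud Storage bucket, e.g.
#   gsutil cp model.joblib gs://your-bucket/model/   (placeholder bucket)
# then create a model and version on ML Engine pointing at that directory,
# and call the online prediction API from the Django app.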