Google Speech API compliance in Canada - google-cloud-platform

A customer I am working with wants to use Google Speech API for transcribing audio but there are compliance concerns.
I know that you can either upload files directly or have the API access files in Google Cloud Storage. Is anyone familiar with how either of these methods interacts with data compliance laws in Canada?
For instance, if the audio files are uploaded to a Cloud Storage bucket in the Montreal datacenter and we make an API call on them, does the file ever leave that datacenter?
Thanks in advance for any insights!

Stack Overflow is not a great place to get a legal opinion, but is there a particular compliance standard that your customer requires? Google Cloud has a number of international data compliance certifications, one of which might be the one your customer needs. Talk to your customer to find out what they need, and take a look at Google Cloud's list of compliance standards to see whether it covers those requirements: https://cloud.google.com/security/compliance
For example, the Cloud Speech API is compliant with ISO 27018, an international standard for cloud service privacy. Is that sufficient for your customer? You'll need to ask them.

Related

How to use Google Cloud Video Intelligence Celebrity Recognition?

I have been using the Google Cloud Video Intelligence API happily and successfully until this point. However, if I am not mistaken, I noticed that the Celebrity Recognition API is only open to approved, selected media companies, whereas Amazon Rekognition provides this capability to the public. This is quite unbelievable. How can this kind of service be a private service on a public cloud platform such as Google's?
Does anyone know how to use the Celebrity Recognition API from Google Cloud?
As to why celebrity recognition is not made publicly available: there are likely legal reasons that Google has to deal with. This type of technology is powerful and, in the wrong hands, could cause serious issues for all parties involved.
See the “Restricted access feature” note in Google’s documentation [1].
[1] https://cloud.google.com/vision/docs/celebrity-recognition

Google Cloud Vision - Which region does Google upload the images to?

I am building an OCR based solution to extract information from certain financial documents.
As per the regulation in my country (India), this data cannot leave India.
Is it possible to find the region where Google Cloud Vision servers are located?
Alternatively, is it possible to restrict the serving region from the GCP console?
This is what I have tried:
I went through GCP Data Usage FAQ: https://cloud.google.com/vision/docs/data-usage
GCP Terms of Service:
https://cloud.google.com/terms/
(Look at point 1.4 Data Location on this page)
Talking to the GCP Sales rep. He did not know the answer.
I know that I can talk to Google support, but that requires $100 to activate, which is a lot for me.
Any help would be appreciated. I went through the documentation for Rekognition as well, but it seems to send some data outside for training, so I am not considering it at the moment.
PS - Edited to make things I have tried clearer.
For anyone looking at this topic recently: Google Vision introduced multi-region support in December 2019, as can be seen in their release notes.
Currently, Google Vision supports two processing regions, eu and us, and the documentation states that using a specific endpoint guarantees that processing will only take place in the chosen territory.
The documentation for regionalization mentions that you can simply replace the default API endpoint vision.googleapis.com with either of the regional ones:
eu-vision.googleapis.com
us-vision.googleapis.com
The vision client libraries offer options for selecting the endpoint as well, and the documentation gives code samples.
For example, here is how you would do it in Python:
from google.cloud import vision

# Point the client at the EU regional endpoint so processing stays in the EU
client_options = {'api_endpoint': 'eu-vision.googleapis.com'}
client = vision.ImageAnnotatorClient(client_options=client_options)
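If the region needs to be configurable rather than hard-coded, a small helper can build the endpoint string and reject anything outside the documented regions. This is only a sketch; the vision_endpoint function is introduced here and is not part of the client library.

```python
from typing import Optional

def vision_endpoint(region: Optional[str] = None) -> str:
    """Build the Vision API endpoint for an optional processing region.

    Per the regionalization docs, 'eu' and 'us' are the supported
    regions; passing no region yields the default global endpoint.
    """
    base = "vision.googleapis.com"
    if region is None:
        return base
    if region not in ("eu", "us"):
        raise ValueError(f"unsupported Vision region: {region!r}")
    return f"{region}-{base}"
```

The result is passed through client_options exactly as in the snippet above, e.g. vision.ImageAnnotatorClient(client_options={'api_endpoint': vision_endpoint('eu')}).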
As pointed out by @Tedinoz in a comment above, the answer can be found here: https://groups.google.com/forum/#!topic/cloud-vision-discuss/at43gnChLNY
To summarise:
1. Google stores images uploaded to Cloud Vision only in memory
2. Data is not restricted to a particular region (as of Dec 6, 2018)
3. They might add data residency features in Q1, 2019.

Is Aspose Cloud FDA Part 11 compliant?

The US FDA has regulations for electronic record keeping:
TITLE 21--FOOD AND DRUGS
CHAPTER I--FOOD AND DRUG ADMINISTRATION
DEPARTMENT OF HEALTH AND HUMAN SERVICES
SUBCHAPTER A--GENERAL
PART 11 ELECTRONIC RECORDS; ELECTRONIC SIGNATURES
https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfcfr/CFRSearch.cfm?CFRPart=11
Q1. Is the Aspose Cloud service Part 11 compliant?
(The forums have no existing questions)
Q2. If you are currently using Aspose Cloud in a Part 11 compliant app, what are the pros/cons?
Thanks,
If compliance requires certification by an external authority, then we are not compliant. That said, while reading the CFR document, we found no reason why our service could not be used in a compliant way.
Aspose for Cloud provides Cloud Storage to its customers, but we also let customers use their own cloud storage or another third-party storage provider they are comfortable with. So you can use a Part 11 compliant provider such as Microsoft Azure or Dropbox as the storage backend for our APIs.
In short, you should not rely on our system alone for compliance. But if you call our APIs from a system that is Part 11 compliant, we see no reason why that compliance would be broken.

Google Vision privacy: image deletion

I'm planning to use Google Vision for document recognition.
For example, I will upload a driver's license, and I should get all the text data and verify that it is a driver's license and not the cover of a magazine.
The question is: does Google Vision have an API for deleting uploaded images?
Does Google Vision fit my case if I have some security requirements?
If you use Google's mobile vision API, text and face detection is done on device rather than being uploaded:
https://developers.google.com/vision/
For those wondering about the same problem, you can check their data usage policy here:
https://cloud.google.com/vision/docs/data-usage
My reading of Google APIs Terms of Service indicates that you will not be able to delete the images.
5b. Submission of Content
Some of our APIs allow the submission of content. Google does not acquire any ownership of any intellectual property rights in the content that you submit to our APIs through your API Client, except as expressly provided in the Terms. For the sole purpose of enabling Google to provide, secure, and improve the APIs (and the related service(s)) and only in accordance with the applicable Google privacy policies, you give Google a perpetual, irrevocable, worldwide, sublicensable, royalty-free, and non-exclusive license to Use content submitted, posted, or displayed to or from the APIs through your API Client. "Use" means use, host, store, modify, communicate, and publish. Before you submit content to our APIs through your API Client, you will ensure that you have the necessary rights (including the necessary rights from your end users) to grant us the license.
Being able to "publish" your driver's licenses is probably not something you want.
The above terms are also completely at odds with the GDPR where the user has the right to delete and modify their data.
7a. Google Privacy Policies
By using our APIs, Google may use submitted information in accordance with our privacy policies.
Note that those privacy policies are the ones that govern ordinary consumer users, not Google Cloud specifically. In plain terms (and IANAL), it means that for whatever content you give Google, Google assumes the end user has agreed to anything Google would do for a user who directly uses, say, Google Docs.
That's another indication that it's impossible to use their APIs and be GDPR compliant.
This should solve your issue
tl;dr "The stored image is typically deleted in a few hours."
Will the image I send to the Cloud Vision API, the results, or other information about the request itself, be stored on Google servers? If so, how long and where is the information kept, and do I have access to it?
When you send an image to Cloud Vision API, we must store that image for a short period of time in order to perform the analysis and return the results to you. The stored image is typically deleted in a few hours. Google also temporarily logs some metadata about your Vision API requests (such as the time the request was received and the size of the request) to improve our service and combat abuse.
Some of the other answers are a bit outdated, so I'm adding my own. The data usage FAQ states:
When you send an image to Vision API, we must store that image for a short period of time in order to perform the analysis and return the results to you. For asynchronous offline batch operations, the stored image is typically deleted right after the processing is done, with a failsafe Time to live (TTL) of a few hours. For online (immediate response) operations, the image data is processed in memory and not persisted to disk.
If you use the synchronous Vision API methods, the image is never persisted in Vision API, so there is nothing to delete. If you use the asynchronous Vision API methods, the image is only persisted during the operation and is deleted immediately after the operation completes, with a fail-safe of a few hours. Again, there is nothing for the user to delete; Vision API takes care of deleting the data for you.
A related question that sometimes comes up is about enforcing processing to take place in a particular region. You can see the answer here: Google Vision: How to enforce processing in EU
It depends on your security requirements and the exact privacy law you need to abide by. In my case it was HIPAA; one needs to jump through a lot of hoops, but according to https://cloud.google.com/security/compliance/hipaa, the Google Cloud Vision API is a HIPAA-covered product.

Is it possible to get pricing for AWS S3 programmatically through their SDK?

I'd like to be able to estimate the cost of a download operation from S3 programmatically, but I hesitate to hard-code the per-GB prices listed on their pricing page in case they change. Does the SDK provide any kind of access to current pricing data? If it does, I can't seem to find it.
CLARIFICATION: I'm asking if the official Amazon SDK has hooks for pricing data, not if it's possible to get pricing data at all. Obviously, it is possible to get pricing data through non-documented means.
Since you're asking for SDK support and SDKs are language-specific, I have to stress I can only speak for the Ruby SDK.
The Ruby SDK in the latest major version (2.x) is mostly auto-generated from a JSON API description for each documented API. There is no official pricing API – only a static file that states:
This file is intended for use only on aws.amazon.com. We do not guarantee its availability or accuracy.
This means there is no way for the Ruby SDK to give you pricing information. Your mileage may vary in other languages (but I doubt it).
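Given the lack of SDK support, a pragmatic fallback is to hard-code the published per-GB rate but isolate it in one place, so that a price change is a one-line edit. The rate below is illustrative only, not a current AWS price, and the sketch ignores tiered pricing and free-usage allowances.

```python
# Illustrative rate, NOT a current AWS price -- check the S3 pricing page.
S3_EGRESS_USD_PER_GB = 0.09

def estimate_download_cost(gigabytes: float,
                           usd_per_gb: float = S3_EGRESS_USD_PER_GB) -> float:
    """Rough S3 download (egress) cost estimate, rounded to cents.

    Ignores tiered rates, free allowances, and request charges.
    """
    if gigabytes < 0:
        raise ValueError("gigabytes must be non-negative")
    return round(gigabytes * usd_per_gb, 2)
```

For example, estimate_download_cost(250) with the illustrative rate above returns 22.5.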