I am evaluating push notification services and cannot use cloud services, as the laws I must comply with prohibit customer identification data from being stored off-premise.
Question
Is there any chance data will be stored off-premise if I use the AWS SNS API (not the console) to send push notifications to end-user devices from code hosted on-premise (using the AWS SDK)? In other words, will SNS retain my data, or will it forget it right after it sends the notification?
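For context, the call in question would be something like the sketch below (a minimal boto3 example; the region, platform endpoint ARN, and message payload are hypothetical placeholders):

    # Minimal sketch: on-premise code publishing a push notification through SNS via the AWS SDK.
    # The endpoint ARN below is a made-up placeholder for a registered device endpoint.
    import json
    import boto3

    sns = boto3.client("sns", region_name="eu-west-1")

    response = sns.publish(
        TargetArn="arn:aws:sns:eu-west-1:123456789012:endpoint/GCM/my-app/example-endpoint-id",
        MessageStructure="json",
        Message=json.dumps({
            "default": "You have a new message",
            "GCM": json.dumps({"notification": {"title": "Hello", "body": "You have a new message"}}),
        }),
    )
    print(response["MessageId"])  # SNS returns a message ID acknowledging the publish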
What have I tried so far?
I combed through the documentation as much as I could, but couldn't find anything that makes me 100% sure.
Would appreciate any pointers on this. TIA.
I would pose this question directly to AWS, as it pertains to a legal requirement. I would clarify whether the laws you need to comply with relate to data at rest, data in transit, or both, and additionally whether there are any circumstances where one or both of those would be acceptable provided certain security requirements have been met.
Knowing no real detail about your use case, I will say that AWS has a Region specifically for use by the US Government. If your solution is for the US Government, then you should be making use of this Region, as it ticks off a lot of compliance forms for you well in advance.
You can open a support ticket in the AWS console.
Again if there is a legal requirement for your data I thoroughly recommend that you ask AWS directly so that you may reference their answer in writing in the future.
Even if they didn't store it, how can you prove that to auditors?
Besides, what is the difference between storing something in memory (which they obviously have to do) and storing something on disk? One is volatile and the other isn't, I guess. But from a compliance point of view, an admin on the box can get at both, so who cares whether the hardware with your data on it is a stick of RAM or a disk plugged into a SATA port?
Related
I wanted to know whether cloud-based platforms such as Azure and Amazon zeroize the content of the hard disk whenever an 'instance' is 'deleted', prior to making it available to other users.
I've tried using the 'dd' command on an Amazon Lightsail instance, and it appears that the raw data is indeed zeroized. However, I was not sure whether that was by chance (I just tried a few random lengths) or whether they actually take care to do that.
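For what it's worth, that spot-check can also be scripted. This is only a naive sketch (the device path and sample count are assumptions), and like the dd test it samples a handful of offsets rather than proving the whole disk was wiped:

    # Naive spot-check that a raw block device reads back as all zeros.
    # /dev/xvdf is a hypothetical device path; run as root, read-only.
    import os
    import random

    DEVICE = "/dev/xvdf"
    CHUNK = 1024 * 1024      # sample 1 MiB at a time
    SAMPLES = 16

    fd = os.open(DEVICE, os.O_RDONLY)
    size = os.lseek(fd, 0, os.SEEK_END)
    for _ in range(SAMPLES):
        offset = random.randrange(0, max(size - CHUNK, 1))
        os.lseek(fd, offset, os.SEEK_SET)
        if any(os.read(fd, CHUNK)):
            print(f"non-zero data found near offset {offset}")
            break
    else:
        print("all sampled regions were zero")
    os.close(fd)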
The concern is, if I leave passwords in configuration files, then someone who comes along would be able to read them (theoretically). Same goes for data in a database.
Generically, the solution Azure typically applies to this concern is storage encryption.
Your data is encrypted by default at the platform level with a key specific to your subscription; when the data or resource is removed, whether or not the storage is zeroed, it is effectively inaccessible to a resource deployed on the same storage in another subscription.
To secure an AWS account, it is good to have a virtual MFA device, such as Google Authenticator.
Usually, you can just take a picture of the QR code and use it on as many devices as you want (as suggested here: https://webapps.stackexchange.com/a/66666/188445; sorry, I couldn't comment on that answer, as I don't have the reputation).
However, AWS asks for two codes to confirm, which makes me think it is device-specific. Is there any way to set up AWS MFA on two devices, or to use a backup if I lose my phone?
First, I'll be that guy and say - don't back up your MFA key. If you lose your device, just jump through the steps of resetting it by contacting support.
While it doesn't necessarily defeat the purpose of increasing the security, and while it's also probably not likely that someone will attempt to steal your key, I don't think you're doing yourself any favors, security-wise.
But that's not what you're asking about.
When you say "AWS asks for two codes to confirm, which makes me think it is device-specific," I'm not sure I follow. Yes, it is device-specific, in that you need the specific device that either scanned the QR code or had the key entered into it in order to authenticate via MFA.
But just because there are two fields, it doesn't mean that there are two different QR codes or MFA keys you need - you just need the one they show you.
After you set up your authenticator, you enter the first code you see into the first field, then wait for that to cycle out and enter the next one into the second field. Asking for two codes just ensures that your authenticator is working correctly. It's not any different than other services that use an authenticator as MFA - some only ask for the first code that appears, some ask for two. (Personally, I think two is better.)
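To make the "it's just one shared secret" point concrete, here is a small sketch using the third-party pyotp library; the base32 secret is a made-up example standing in for the value behind the QR code. Any device holding the same secret produces the same codes:

    # TOTP codes are derived purely from the shared secret and the clock,
    # so a second device with the same secret stays in step automatically.
    import time
    import pyotp

    secret = "JBSWY3DPEHPK3PXP"   # example base32 secret; yours comes from the AWS QR code / key
    totp = pyotp.TOTP(secret)

    first_code = totp.now()                       # the first code AWS asks for
    second_code = totp.at(int(time.time()) + 30)  # the code for the next 30-second window
    print(first_code, second_code)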
I'm planning to use Google Vision for document recognition.
For example, I will upload a driver license, and I should get all the text data and verify that it is a driver license and not the cover of a magazine.
The question is: does Google Vision have an API for deleting uploaded images?
Does Google Vision fit my case if I have some security requirements?
If you use Google's mobile vision API, text and face detection is done on device rather than being uploaded:
https://developers.google.com/vision/
For those wondering about the same problem, you can check their data usage policy here.
https://cloud.google.com/vision/docs/data-usage
My reading of Google APIs Terms of Service indicates that you will not be able to delete the images.
5b. Submission of Content
Some of our APIs allow the submission of content. Google does not acquire any ownership of any intellectual property rights in the content that you submit to our APIs through your API Client, except as expressly provided in the Terms. For the sole purpose of enabling Google to provide, secure, and improve the APIs (and the related service(s)) and only in accordance with the applicable Google privacy policies, you give Google a perpetual, irrevocable, worldwide, sublicensable, royalty-free, and non-exclusive license to Use content submitted, posted, or displayed to or from the APIs through your API Client. "Use" means use, host, store, modify, communicate, and publish. Before you submit content to our APIs through your API Client, you will ensure that you have the necessary rights (including the necessary rights from your end users) to grant us the license.
Being able to "publish" your driver's licenses is probably not something you want.
The above terms are also completely at odds with the GDPR, under which the user has the right to delete and modify their data.
7a. Google Privacy Policies
By using our APIs, Google may use submitted information in accordance with our privacy policies.
Note that those privacy policies are the ones that govern normal users, not cloud customers specifically. In plain terms (and IANAL), it means that Google assumes that, for whatever content you give them, the user has agreed to anything Google does for a user who directly uses, say, Google Docs.
That's another indication that it's impossible to use their APIs and be GDPR compliant.
This should solve your issue
tl;dr "The stored image is typically deleted in a few hours."
Will the image I send to the Cloud Vision API, the results or other information about the request itself, be stored on Google servers? If so, how long and where is the information kept, and do I have access to it?
When you send an image to Cloud Vision API, we must store that image for a short period of time in order to perform the analysis and return the results to you. The stored image is typically deleted in a few hours. Google also temporarily logs some metadata about your Vision API requests (such as the time the request was received and the size of the request) to improve our service and combat abuse.
Some of the other answers are a bit outdated, so I'm adding my own. The data usage FAQ states:
When you send an image to Vision API, we must store that image for a short period of time in order to perform the analysis and return the results to you. For asynchronous offline batch operations, the stored image is typically deleted right after the processing is done, with a failsafe Time to live (TTL) of a few hours. For online (immediate response) operations, the image data is processed in memory and not persisted to disk.
If you use the synchronous Vision API methods, the image is never persisted in Vision API and so there is nothing to delete. If you use the asynchronous Vision API methods, the image is only persisted during the operation and is deleted immediately after the operation completes with a fail-safe of a few hours. Again there is nothing for the user to delete, Vision API takes care of deleting the data for you.
A related question that sometimes comes up is about enforcing that processing takes place in a particular region. You can see the answer here: Google Vision: How to enforce processing in EU
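For completeness, a minimal synchronous call, where (per the FAQ above) the image is processed in memory, could look like the sketch below; the file name is a placeholder, and the optional client_options line is one way to pin processing to the EU endpoint discussed in that linked answer:

    # Minimal sketch of a synchronous Cloud Vision text-detection request
    # using the google-cloud-vision client library.
    from google.cloud import vision

    # Optional: route requests to the EU multi-region endpoint instead of the global one.
    client = vision.ImageAnnotatorClient(
        client_options={"api_endpoint": "eu-vision.googleapis.com"}
    )

    with open("drivers_license.jpg", "rb") as f:   # placeholder file name
        image = vision.Image(content=f.read())

    response = client.text_detection(image=image)
    if response.text_annotations:
        print(response.text_annotations[0].description)  # the full extracted text block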
It depends on your security requirements and the exact privacy law you need to abide by. In my case it was HIPAA; you need to jump through a lot of hoops, but according to https://cloud.google.com/security/compliance/hipaa, the Google Cloud Vision API is a HIPAA-covered product.
I have no choice but to adopt iCloud right now. In the near future I would like to build my own cloud service. Is there any problem if the app transfers all the data from iCloud to my own cloud?
Only the data related to my app of course.
After user's permission.
Is Apple positive about this?
If you mean, would Apple approve an app for the store that was going to transfer the user's iCloud data to some other online service, then as usual all we can do is try to gauge the odds.
None of Apple's guidelines even hint that apps may not use non-iCloud services.
Neither do they hint that there's any issue with moving data from one service to another, even if one of them is iCloud.
Apple does not look kindly on apps that transfer user data to online storage without the user's knowledge. Assuming you make it clear to users what you're doing, this is probably not an issue, but users should have the chance to opt out of your service.
Based on information available right now, what you suggest is probably OK so long as your app makes clear what's happening. It's unwise to try and predict Apple's app-approval actions too closely. They might change their policies tomorrow, or they might decide to reject your app for reasons that had not previously been stated. At the moment though, switching services like that seems likely to be accepted.
We had a debate in the office with respect to audit logging of messages received and sent via Web Services.
I am of the opinion that the entire SOAP message should not be logged in the application audit logs unless a requirement explicitly states that it is needed. Only salient elements of the request need to be part of the audit log, as this provides the evidence required in the audit trail (a rough sketch of what I mean follows my reasons below).
My reasons are:
(1) Audit logs, by definition, are always turned on and should not be turned off. So if we decide to log the entire message for the audit trail, that logging will always be on and can cause a huge performance impact during production runs (particularly during peak loads).
(2) If the business/technical requirements do not explicitly state this as a requirement, it is unnecessary overhead. If the information is needed, the run-time engine's tracing capability can be turned on/off to capture the SOAP messages.
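A rough sketch of what "only salient elements" could look like in practice (the element names and namespaces are made up for illustration):

    # Pull just the fields the audit trail needs out of a SOAP envelope,
    # instead of persisting the whole message.
    import xml.etree.ElementTree as ET

    NS = {
        "soap": "http://schemas.xmlsoap.org/soap/envelope/",
        "ord": "http://example.com/orders",   # hypothetical service namespace
    }

    def extract_audit_fields(raw_envelope: str) -> dict:
        body = ET.fromstring(raw_envelope).find("soap:Body", NS)
        return {
            "operation": body[0].tag,   # qualified name of the first child, i.e. the operation element
            "order_id": body.findtext(".//ord:OrderId", default="", namespaces=NS),
            "customer_id": body.findtext(".//ord:CustomerId", default="", namespaces=NS),
        }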
What are the general thoughts of experts in this space?
Thanks,
Manglu
Don't confuse auditing with logging. If there is a requirement for auditing then you need to perform auditing.
Since auditing is typically required for legal or policy reasons you need to understand what actions and activities need to be logged as well as what data needs to be logged. This is not a technical decision but needs to be determined by the business. Once you have your requirements then you can project your audit volumes and design your application to take these into account (e.g. performance, storage, etc.).
If you think you have an auditing requirement but it is not explicitly stated then ask for clarification. You don't want to find this out only after you have been sued.
If you truly have an auditing requirement then you should probably audit the entire SOAP request message as well as the response. This is to support non-repudiation.
As an example, let's say that you have a health care application and only audit the key information: personal identifiers (e.g. SSN) and whether the patient is allergic to penicillin. But what happens when a patient dies because "is allergic to penicillin" was false when it shouldn't have been? The audit logs are checked, and you say that you were sent a value of false for that patient, but the other system says that they actually sent you a value of true and that you must have a problem with your system. In this scenario, what you need to do is show the exact message that was sent to the web service; because it was signed by the service consumer, you can prove that it came from them and also prove exactly what data was in the message. Then you would follow that information through your system via the audit logs.
Of course, it all goes back to the requirements; if the business finds that only auditing x and y satisfies whatever legislation or policies then go with that.
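If you do go down the "audit the whole message" route for non-repudiation, the shape of it is roughly this (a rough sketch; the record layout and storage are illustrative, not from any particular framework): keep the raw, signed envelope verbatim plus a digest, so the exact bytes can be produced later.

    # Store each signed SOAP request/response verbatim, with a digest and timestamp,
    # in an append-only audit store so the exact message can be reproduced later.
    import hashlib
    import json
    import time

    def audit_soap_message(raw_envelope: bytes, direction: str, audit_store) -> None:
        record = {
            "timestamp": time.time(),
            "direction": direction,                            # "request" or "response"
            "sha256": hashlib.sha256(raw_envelope).hexdigest(),
            "envelope": raw_envelope.decode("utf-8"),          # the signed XML, untouched
        }
        audit_store.append(json.dumps(record))                 # e.g. an append-only log or table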
I know from experience that logging everything can lead to pretty huge files, or a lot of data if kept in a database. It's very helpful during development, but in production it becomes a problem. I would suggest logging as you said. But be aware of a situation I came across: we were providing a web service for third-party companies to use. When there was a dispute about whose fault an error was, we needed the exact SOAP message to prove that it wasn't our fault. I don't know if this scenario applies to you.