Google Cloud SQL column-level encryption - google-cloud-platform

Does Google Cloud SQL support column-level encryption?
I know that it is possible for BigQuery tables but not sure about Cloud SQL!

It's not an out-of-the-box feature in Cloud SQL; you need to encrypt and decrypt the data manually in your application when you read and write it. You can use Cloud KMS for that.
With BigQuery, keep in mind that you also need to keep the keyset in BigQuery, and only IAM permissions control who can access it.
In any case, all the data is encrypted at rest, but I'm sure your use case is about a specific column, not the whole database.
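A minimal sketch of that manual approach, assuming a hypothetical KMS key ring and key (app-keys/customers-key); the application encrypts the value with Cloud KMS before the INSERT and decrypts it after the SELECT:

    from google.cloud import kms

    # Placeholder project, location, key ring and key names.
    client = kms.KeyManagementServiceClient()
    key_name = client.crypto_key_path("my-project", "us-central1", "app-keys", "customers-key")

    def encrypt_column_value(plaintext: str) -> bytes:
        # Encrypt a single column value before writing it to Cloud SQL.
        response = client.encrypt(request={"name": key_name, "plaintext": plaintext.encode("utf-8")})
        return response.ciphertext  # store in a BYTEA/VARBINARY column

    def decrypt_column_value(ciphertext: bytes) -> str:
        # Decrypt a value read back from Cloud SQL.
        response = client.decrypt(request={"name": key_name, "ciphertext": ciphertext})
        return response.plaintext.decode("utf-8")

Note that calling KMS once per value adds latency and quota usage; envelope encryption (encrypting a data key once with KMS and doing the per-value crypto locally) is the usual way to reduce that.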

Related

Can I use temporary AWS IAM credentials with the BigQuery Data Transfer Service?

Currently, we use AWS IAM User permanent credentials to transfer customers' data from our company's internal AWS S3 buckets to customers' Google BigQuery tables following BigQuery Data Transfer Service documentation.
Using permanent credentials poses security risks to the data stored in AWS S3.
We would like to use AWS IAM Role temporary credentials, which requires support for a session token on the BigQuery side to get authorized on the AWS side.
Is there a way that the BigQuery Data Transfer Service can use AWS IAM roles or temporary credentials to authorise against AWS and transfer data?
We considered the Omni framework (https://cloud.google.com/bigquery/docs/omni-aws-cross-cloud-transfer) to transfer data from S3 to BQ; however, we faced several concerns/limitations:
The Omni framework targets data-analysis use cases rather than data transfer from external services, which makes us worry that its design may have drawbacks for data transfer at high scale.
The Omni framework currently supports only the AWS us-east-1 region (we require support at least in us-west-2 and eu-central-1 and the corresponding Google regions). This is not backward compatible with our current customers' setup for transferring data from our internal S3 to the customers' BQ.
Our current customers would need to sign up for the Omni service to properly migrate from the current transfer solution we use.
We considered a workaround of exporting data from S3 through staging in GCS (i.e. S3 -> GCS -> BQ), but this would also require a lot of effort on both the customers' side and our company's side to migrate to the new solution.
Is there a way that the BigQuery Data Transfer Service can use AWS IAM roles or temporary credentials to authorise against AWS and transfer data?
No, unfortunately not.
The official Google BigQuery Data Transfer Service documentation only mentions AWS access keys throughout:
The access key ID and secret access key are used to access the Amazon S3 data on your behalf. As a best practice, create a unique access key ID and secret access key specifically for Amazon S3 transfers to give minimal access to the BigQuery Data Transfer Service. For information on managing your access keys, see the AWS general reference documentation.
The irony of the Google documentation is that while it refers to best practices and links to the official AWS docs, it doesn't actually endorse best practices and ignores what AWS says:
We recommend that you use temporary access keys over long term access keys, as mentioned in the previous section.
Important
Unless there is no other option, we strongly recommend that you don't create long-term access keys for your (root) user. If a malicious user gains access to your (root) user access keys, they can completely take over your account.
You have a few options:
hook into both sides manually (i.e. link up various SDKs and/or APIs; a sketch follows this answer)
find an alternative BigQuery-compatible service that supports temporary credentials
accept the risk of long-term access keys.
In conclusion, Google is at fault here for not following security best practices, and you, as a consumer, have to bear the risk.
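A rough sketch of the first option, gluing the two SDKs together yourself; the role ARN, bucket, object key, and table ID are placeholders, and this pulls the data through whatever machine runs the script instead of using the Data Transfer Service:

    import io

    import boto3
    from google.cloud import bigquery

    # Assume an AWS role to obtain temporary credentials (placeholder ARN).
    sts = boto3.client("sts")
    creds = sts.assume_role(
        RoleArn="arn:aws:iam::123456789012:role/bq-transfer-role",
        RoleSessionName="bq-transfer",
    )["Credentials"]

    # Read the S3 object with the temporary credentials, including the session token.
    s3 = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    buf = io.BytesIO()
    s3.download_fileobj("my-internal-bucket", "exports/data.csv", buf)
    buf.seek(0)

    # Load the file into BigQuery (placeholder table ID).
    bq = bigquery.Client()
    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,
    )
    bq.load_table_from_file(buf, "my-project.my_dataset.customer_table", job_config=job_config).result()

The trade-off is that you now own the compute, scheduling, and retries that the managed transfer service would otherwise handle for you.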

AWS Glue View to restrict access to S3

From reading the AWS manuals,
it is not clear to me whether one can enforce access to S3 data for an IAM user via an AWS Glue view only.
I am going to try it next week, but I am doing some up-front research first to save time.
No, you cannot do this via the AWS Glue Data Catalog alone. You need Athena or Redshift Spectrum for such an approach.

What is the Google BigQuery equivalent AWS service?

I heard Athena is a data analytics service from AWS that provides the same features as BigQuery. Can we use Athena as an alternative product to BigQuery?
Athena is often used as a SQL layer over structured data in S3, such as formatted logs, rather than as a production database like BigQuery, which brings built-in multi-region support, etc. The AWS equivalent would still be a dedicated database, either Postgres for SQL or DynamoDB for NoSQL.

Does Cloud Datastore support Customer Supplied Encryption Keys?

The documentation just lists server-side encryption, where Google handles the keys. Is it possible to use customer-supplied keys as with Cloud Storage?
It is not possible to use customer-supplied encryption keys with Cloud Datastore at this time.

Google Cloud KMS Best Practice with BigQuery

I need to encrypt the sensitive fields in a BigQuery table, but my loading is done through Dataflow. I have thought of 3 different ways to do it:
1. Encrypt the whole table using a customer-managed key, create 3 views for the different classifications, give users a service account to access the views, grant that service account the Decrypter role in KMS, and grant the Dataflow service account the Encrypter role to load the table. Problem: there is no view-level access, so the views have to be maintained in different datasets, which makes our job more difficult.
2. Encrypt the fields using KMS API calls in Dataflow while loading, and define a UDF to decrypt that column data at runtime in BigQuery using a service account. For example, ID fields are encrypted with an API call in Dataflow and a UDF defined in BigQuery decrypts them, but only those who have access in KMS can decrypt the data; otherwise it throws an exception. This way we keep a single table open to all users, but only authorized users can see the data. Problem: continuous API calls at runtime exhaust our quota, and the cost is another matter.
3. Maintain different tables in different datasets: (a) encrypted tables with the sensitive fields, (b) non-encrypted tables with the non-sensitive fields. Problem: maintenance, keeping the data in sync, and joining at runtime in BQ.
The above are my approaches and use case. Can anyone help me decide what to use and why it is better than the others?
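If it is useful for the comparison: the per-row KMS calls that make approach 2 expensive can be avoided with BigQuery's built-in AEAD encryption functions, where the keyset is stored in a BigQuery table and access to that table (via IAM) controls who can decrypt. This is a rough sketch, not a drop-in for the Dataflow UDF design above, and all dataset/table/column names are hypothetical:

    from google.cloud import bigquery

    bq = bigquery.Client()

    # 1. Create an AEAD keyset and store it in its own restricted dataset.
    bq.query("""
        CREATE TABLE IF NOT EXISTS keys_dataset.keyset AS
        SELECT KEYS.NEW_KEYSET('AEAD_AES_GCM_256') AS keyset
    """).result()

    # 2. Encrypt the sensitive column when materializing the reporting table.
    bq.query("""
        CREATE OR REPLACE TABLE reporting_dataset.customers_enc AS
        SELECT
          c.customer_id,
          AEAD.ENCRYPT(
            (SELECT keyset FROM keys_dataset.keyset),
            c.email,
            CAST(c.customer_id AS STRING)  -- additional authenticated data
          ) AS email_enc
        FROM staging_dataset.customers AS c
    """).result()

    # 3. Only users who can read keys_dataset.keyset can run the decrypting query.
    rows = bq.query("""
        SELECT
          customer_id,
          AEAD.DECRYPT_STRING(
            (SELECT keyset FROM keys_dataset.keyset),
            email_enc,
            CAST(customer_id AS STRING)
          ) AS email
        FROM reporting_dataset.customers_enc
    """).result()

With this pattern the decryption happens inside BigQuery, so there are no per-row KMS requests; the access boundary becomes read permission on the keyset table.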