I'm working on a Slack app that will have to store an access token for each customer using the app (e.g. 1,000 teams using it = 1,000 tokens). The token enables the app to access the Slack API for the customer's workspace and will be used frequently, every day.
The app will be running on AWS, using Lambda and DynamoDB.
What would be the best practice for storing those access tokens securely?
I cannot find any strict recommendation for this scenario. My initial thought was to put them in a dedicated DynamoDB table, but now I'm wondering whether other AWS services would fit this use case better. I've checked Secrets Manager, but it looks like a rather expensive option and I'm not sure it applies to my scenario.
Appreciate any suggestions.
I would probably use a dedicated DynamoDB table for this purpose. At a minimum, I would configure it to use a KMS CMK to encrypt the data at rest, and also restrict access to the table through fairly granular IAM permissions in your AWS account. If you also wanted to encrypt each value separately, you could look into client-side encryption.
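For illustration, a minimal boto3 sketch of that setup; the table name and KMS key alias are placeholders, and the second half shows the optional client-side layer:

```python
import boto3

dynamodb = boto3.client("dynamodb")
kms = boto3.client("kms")

# Create the token table with server-side encryption backed by a
# customer-managed KMS key (table name and key alias are hypothetical).
dynamodb.create_table(
    TableName="slack-tokens",
    AttributeDefinitions=[{"AttributeName": "team_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "team_id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
    SSESpecification={
        "Enabled": True,
        "SSEType": "KMS",
        "KMSMasterKeyId": "alias/slack-token-key",
    },
)
dynamodb.get_waiter("table_exists").wait(TableName="slack-tokens")

# Optional client-side layer: encrypt each token with KMS before storing,
# so even a principal with table-level read access only sees ciphertext.
ciphertext = kms.encrypt(
    KeyId="alias/slack-token-key",
    Plaintext=b"xoxb-example-token",
)["CiphertextBlob"]

dynamodb.put_item(
    TableName="slack-tokens",
    Item={"team_id": {"S": "T0123456"}, "token": {"B": ciphertext}},
)
# Reading it back is the reverse: get_item, then kms.decrypt on the blob.
```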
Your findings on the Secrets Manager costs are a good point. You could also look at Systems Manager Parameter Store as an alternative that is generally cheaper than Secrets Manager. Secrets Manager does have the added security benefit of letting you set an IAM resource policy on the secret itself.
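If Parameter Store fits, a SecureString parameter gives you KMS-backed encryption cheaply; a minimal sketch, with a hypothetical parameter name and key alias:

```python
import boto3

ssm = boto3.client("ssm")

# Store a token as a SecureString, encrypted with a KMS key you control.
ssm.put_parameter(
    Name="/slack/tokens/T0123456",
    Value="xoxb-example-token",
    Type="SecureString",
    KeyId="alias/slack-token-key",
    Overwrite=True,
)

# Read it back; KMS decryption happens transparently.
token = ssm.get_parameter(
    Name="/slack/tokens/T0123456",
    WithDecryption=True,
)["Parameter"]["Value"]
```

Given how frequently the tokens will be read, it's worth checking Parameter Store's per-account throughput quotas before committing to it.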
Ultimately it's up to you to determine how secure your solution needs to be, and how much you are willing to pay for that. You could even spin up AWS CloudHSM to encrypt the values, but that would increase the cost by quite a bit.
Currently, we use permanent AWS IAM user credentials to transfer customers' data from our company's internal AWS S3 buckets to customers' Google BigQuery tables, following the BigQuery Data Transfer Service documentation.
Using permanent credentials poses security risks for the data stored in AWS S3.
We would like to use AWS IAM role temporary credentials, which require support for a session token on the BigQuery side to get authorized on the AWS side.
Is there a way that the BigQuery Data Transfer Service can use AWS IAM roles or temporary credentials to authorise against AWS and transfer data?
We considered the Omni framework (https://cloud.google.com/bigquery/docs/omni-aws-cross-cloud-transfer) to transfer data from S3 to BQ; however, we ran into several concerns/limitations:
The Omni framework targets data-analysis use cases rather than data transfer from external services. This raises the concern that its design may have drawbacks for data transfer at high scale.
The Omni framework currently supports only the aws-us-east-1 region (we require support at least in aws-us-west-2 and aws-eu-central-1 and the corresponding Google regions). This is not backward compatible with our current customers' setups for transferring data from internal S3 to customers' BQ.
Our current customers would need to sign up for the Omni service to properly migrate from the current transfer solution we use.
We considered a workaround of exporting data from S3 through staging in GCS (i.e. S3 -> GCS -> BQ), but this would also require a lot of effort from both our customers and our company to migrate to the new solution.
Is there a way that the BigQuery Data Transfer Service can use AWS IAM roles or temporary credentials to authorise against AWS and transfer data?
No, unfortunately not.
The official Google BigQuery Data Transfer Service documentation mentions only AWS access keys throughout:
The access key ID and secret access key are used to access the Amazon S3 data on your behalf. As a best practice, create a unique access key ID and secret access key specifically for Amazon S3 transfers to give minimal access to the BigQuery Data Transfer Service. For information on managing your access keys, see the AWS general reference documentation.
The irony of the Google documentation is that while it refers to best practices and links to the official AWS docs, it doesn't actually endorse those best practices, and it ignores what AWS says:
We recommend that you use temporary access keys over long term access keys, as mentioned in the previous section.
Important
Unless there is no other option, we strongly recommend that you don't create long-term access keys for your (root) user. If a malicious user gains access to your (root) user access keys, they can completely take over your account.
You have a few options:
hook into both sides manually (i.e. link up various SDKs and/or APIs; see the sketch after this list)
find an alternative BigQuery-compatible service that supports temporary credentials
accept the risk of long-term access keys.
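For the first option, here is a rough sketch of what hooking into both sides manually could look like, using STS temporary credentials on the AWS side and the BigQuery client library on the Google side. The role ARN, bucket, object, and table names are all placeholders:

```python
import io
import boto3
from google.cloud import bigquery

# Assume a narrowly scoped IAM role for short-lived, read-only S3 access.
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/s3-transfer-read",
    RoleSessionName="bq-transfer",
    DurationSeconds=3600,
)["Credentials"]

s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)

# Pull the export file and stream it into BigQuery.
buf = io.BytesIO()
s3.download_fileobj("company-exports", "customers/data.csv", buf)
buf.seek(0)

bq = bigquery.Client()
job = bq.load_table_from_file(
    buf,
    "my-project.customer_dataset.transfers",
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        autodetect=True,
    ),
)
job.result()  # wait for the load job to finish
```

The point of this shape is that no long-term access key ever leaves AWS; the S3 credentials expire after an hour.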
In conclusion, Google is at fault here for not following security best practices, and you, as a consumer, will have to bear the risk.
I have an AWS account with full access to DynamoDB.
I am writing an application that uses DynamoDB. I would like to test this application backed by the real DynamoDB (and not any local compatible/mock solution). However, the test application is not as secure as a real production-ready application, and there is a real risk that during my tests an attacker may break into the test machine. If my real AWS credentials (needed to write to DynamoDB) are on that machine, they may be stolen, and the attacker can then basically do anything I can do in my account - e.g., create expensive VMs next week and mine for bitcoin.
So I'm looking for an alternative to saving my real AWS credentials (access key id and secret access key) on the test machine.
I read about Amazon's signature algorithm v4, and it turns out that its signature process is actually two-staged: first, a "signing key" is calculated from the full credentials, and this signing key works only for a single day on a single service; then this "signing key" is used to sign the individual messages. This suggests that I could calculate the signing key on a secure machine and send it to the test machine - the test machine would only do the second stage of the signature algorithm, and would only be able to use DynamoDB, and only for a single day.
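To make the two stages concrete, this is the signing-key derivation as documented for Signature Version 4 (the secret key and the string to sign below are placeholders):

```python
import hmac
import hashlib

def sign(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def derive_signing_key(secret_key: str, date_stamp: str,
                       region: str, service: str) -> bytes:
    # Stage 1: derive a key that is only valid for this date, region,
    # and service - e.g. ("20240101", "us-east-1", "dynamodb").
    k_date = sign(("AWS4" + secret_key).encode("utf-8"), date_stamp)
    k_region = sign(k_date, region)
    k_service = sign(k_region, service)
    return sign(k_service, "aws4_request")

# Stage 2 uses only the derived key to sign each request's "string to
# sign"; the long-term secret is not needed for this stage.
signing_key = derive_signing_key("wJalr...EXAMPLEKEY", "20240101",
                                 "us-east-1", "dynamodb")
signature = hmac.new(signing_key, b"<string to sign>",
                     hashlib.sha256).hexdigest()
```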
If I could do this, it would solve my problem, but I couldn't figure out how to tell boto3 to do only the second stage of the signing. It seems to always take the full credentials, aws_access_key_id and aws_secret_access_key, and do both stages of the signature. Is there a way to configure it to do only the second stage?
Alternatively, is there a different way in AWS or IAM or something, where someone like me who has credentials can use them to create temporary keys that can be used only for a short amount of time and/or only for one specific service?
create temporary keys that can be used only for a short amount of time and/or only one specific service
Yes, and that's why the AWS STS service exists. Specifically, you can use GetSessionToken, which:
Returns a set of temporary credentials for an AWS account or IAM user. The credentials consist of an access key ID, a secret access key, and a security token.
You can also create IAM roles and use STS's AssumeRole for the same thing; a sketch follows below. In fact, using IAM instance roles is the preferred way to give temporary permissions to applications on EC2 instances. That way you don't have to use your own credentials at all.
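As a sketch of the AssumeRole route with boto3, scoping the temporary credentials down to DynamoDB via an inline session policy (the role ARN, region, and table prefix are placeholders):

```python
import json
import boto3

sts = boto3.client("sts")

# The effective permissions are the intersection of the role's policy
# and this inline session policy, so the keys can only touch DynamoDB.
resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/dynamodb-test-role",
    RoleSessionName="dynamodb-test",
    DurationSeconds=3600,  # at most one hour of validity
    Policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "dynamodb:*",
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/test-*",
        }],
    }),
)

creds = resp["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken, Expiration

# Only these temporary, DynamoDB-scoped credentials go on the test machine.
dynamodb = boto3.client(
    "dynamodb",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```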
I am proposing to use AWS KMS to encrypt my database. However, my boss challenged me: what if someone on Amazon's staff has access to steal my KMS key and decrypt my database?
The information inside the database is very important, and we cannot take any risk of other people being able to decrypt it.
Is there another solution to this issue, to make sure no one can steal the key?
Should we use an on-prem HSM to store the key instead?
As the FAQ points out, AWS KMS is designed such that
no one, including AWS employees, can retrieve your plaintext KMS keys from the service.
If you read further down, it also provides links to various articles detailing the specification and design of KMS. As you can see from the volume of these articles, the full scope of the design considerations and how they comply with FIPS certification is beyond the scope of this answer.
However, as an example, refer to the cryptographic details tech paper for some ideas of how it works. There are two areas mentioned where keys are present:
In the KMS Keys Repository
In the HSM modules
KMS Keys Repository
The repository serves as durable storage for the keys. Keys are, of course, stored encrypted. The article further explains that the key repository leverages IAM roles.
Only under AWS IAM roles and accounts administered by each customer can customer KMS keys be created, deleted, or used to encrypt, decrypt, sign, or verify data.
This is the same way authentication and authorization to any other AWS services are managed. Hence, this is one way to prevent AWS employees from gaining access to the keys. How IAM works and how it is secured is once again beyond the scope of this answer.
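As an illustration of that customer-administered control, this is roughly what creating a key with an explicit key policy looks like in boto3; the account ID and role name are placeholders:

```python
import json
import boto3

kms = boto3.client("kms")

# The key policy names the only principals that may administer or use
# the key through the KMS API (hypothetical account and role below).
kms.create_key(
    Description="database-encryption-key",
    Policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowAccountAdmins",
                "Effect": "Allow",
                "Principal": {"AWS": "arn:aws:iam::123456789012:root"},
                "Action": "kms:*",
                "Resource": "*",
            },
            {
                "Sid": "AllowAppUsage",
                "Effect": "Allow",
                "Principal": {"AWS": "arn:aws:iam::123456789012:role/db-app"},
                "Action": ["kms:Encrypt", "kms:Decrypt",
                           "kms:GenerateDataKey"],
                "Resource": "*",
            },
        ],
    }),
)
```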
HSM Modules
Unlike the KMS keys repository, the HSM Modules will have access to the plain text keys. However, the plain text keys are only loaded in-memory for the duration that they are used. They are not durably stored in the HSM modules.
These keys are made available only on the HSMs and only in memory for the necessary time needed to process your cryptographic request.
Hence, employees with access to these modules could theoretically gain access to these keys. To mitigate this risk, as the design goals section of the article explains, the modules use quorum-based access controls.
Multiple Amazon employees with role-specific access to quorum-based access controls are required to perform administrative actions on the HSMs.
That is, no single employee has administrative access to these modules; multiple employees are always required. Once again, how AWS assigns which roles to which employees at which management level is beyond the scope of this answer.
As the question requested, these are just some of the considerations behind how the service is secured against AWS employees. For an organization deciding whether to use AWS, the decision should usually be based on a comprehensive set of security policies and an audit of whether AWS complies with those requirements.
EDIT
Since you mentioned also how to convince stakeholders, this is usually a business question rather than a technical one.
I would refer them to AWS compliance for evidence that AWS goes through rigorous third-party audits. I would then point out that the security of a system is only as strong as its weakest link. That is, using AWS does not mean we automatically have AWS-level security; we have to ensure our software, our people, and our processes are secure against exploits. So unless we are sure we have a better security profile than AWS (with all their compliance and audits), our focus and worry should be more on securing our own resources.
I am following Coursera's Architecting with Google Kubernetes Engine for switching to a service account.
It says to create and download a key file and authenticate using the key. Is this the common way in GCP? There will be many keys created by developers and downloaded to many laptops or servers, scattering the keys in many places, which does not seem like a secure approach.
Answering your question: yes, service accounts are the common way to authenticate in GCP.
There are two different service account types, and the recommendation is to use the second one:
User Managed Service Accounts: to authenticate you will then need a "password" that comes in the form of a service account key (a JSON file), and if you leak the service account key, the service account can be considered compromised.
Using keys implies that you are in charge of their lifecycle and security, which is a lot to ask, because:
You need a robust system for secrets distribution.
You need to implement a key rotation policy.
You need to implement safeguards to prevent key leaks.
Google Managed Service Accounts: SAs for which you don't need to generate keys and whose identity your applications can simply assume. No keys are involved: the VM continuously requests short-lived authorization tokens from the metadata service.
Documentation
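As a minimal sketch of the Google-managed approach: an application running on a VM with an attached service account needs no key file at all, because Application Default Credentials picks up tokens from the metadata service (the Storage client here is just an example consumer):

```python
# No key file on disk: on a VM / GKE pod / Cloud Run service with an
# attached service account, ADC resolves credentials automatically
# via the metadata service.
from google.auth import default
from google.cloud import storage

credentials, project_id = default()

# Client libraries pick these credentials up implicitly as well;
# passing them explicitly just makes the flow visible.
client = storage.Client(credentials=credentials, project=project_id)
```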
NO, no and no, don't use service account key files. As you sensed, they are a terrible thing for security.
Today, there are several ways to avoid service account key usage, even if, in some corner cases, you still need keys.
I have written a bunch of articles on those topics:
the limits
the service account credential API
and a fight with a Google dev advocate over one of his articles
Because YES, even Google tutorials, courses, and documentation (...) have promoted that bad practice for years and continue to do so. It was my nightmare at my previous company, and I built up my knowledge and skills to prevent key usage and find workarounds (see the impersonation sketch below). Let me know your use case and I will try my best to help.
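As one example of such a workaround, service account impersonation lets a developer's own identity mint short-lived tokens for the service account instead of holding a key file; a sketch, with a placeholder service account email:

```python
# The developer authenticates as themselves (e.g. via
# `gcloud auth application-default login`), then impersonates the
# service account; only expiring tokens are ever issued.
from google.auth import default, impersonated_credentials

source_credentials, _ = default()

target_credentials = impersonated_credentials.Credentials(
    source_credentials=source_credentials,
    target_principal="my-app@my-project.iam.gserviceaccount.com",
    target_scopes=["https://www.googleapis.com/auth/cloud-platform"],
    lifetime=3600,  # seconds; tokens expire instead of living in a file
)
```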
I need to develop a solution to store both symmetric and asymmetric keys securely in AWS. These keys will be used by applications running on EC2 instances and Lambdas. The applications will need to be set up with policies that allow the application or Lambda to pull the keys out of the key store. The key store should also manage key expiry, notifying various people when keys are about to expire. The initial key exchange is between my company and its partners, meaning that we may hold either the public or the private key of a key pair, depending on the data transfer direction.
We have looked at KMS, but from what I have seen KMS does not support asymmetric keys. I have also seen online that some people use either S3 (protected by KMS) or Parameter Store to store the keys, but this does not address the issue of key management.
Do you have any thoughts on this? Or even SaaS/PaaS suggestions?
My recommendation would be to use AWS Secrets Manager for this. Secrets Manager allows you to store any type of credential or key, you can set up fine-grained cross-account permissions on secrets, encryption at rest is used (via KMS), and secrets can be rotated automatically (by providing a rotation schedule and an AWS Lambda function owned by you to perform the rotation); see the sketch after the links below.
More details on the official docs:
Basic tutorial on how to use AWS Secrets Manager
Encryption at rest on Secrets Manager
Secrets rotation
Managing secrets policies
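Putting those pieces together, a minimal boto3 sketch of storing, reading, and scheduling rotation for a partner key; the secret name, key alias, key material, and Lambda ARN are all placeholders:

```python
import boto3

sm = boto3.client("secretsmanager")

# Store a partner key under a named secret, encrypted at rest with KMS.
sm.create_secret(
    Name="partners/acme/sftp-private-key",
    SecretString="-----BEGIN PRIVATE KEY-----\n...",
    KmsKeyId="alias/partner-keys",
)

# Applications and Lambdas retrieve it at runtime, gated by IAM and
# optionally by a resource policy on the secret itself.
key_material = sm.get_secret_value(
    SecretId="partners/acme/sftp-private-key",
)["SecretString"]

# Rotation is configured by attaching your own Lambda and a schedule,
# which also covers the expiry-management requirement.
sm.rotate_secret(
    SecretId="partners/acme/sftp-private-key",
    RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:rotate-keys",
    RotationRules={"AutomaticallyAfterDays": 90},
)
```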