Currently, we rely heavily on AWS SSM to store and read secrets. All of our services and CodePipelines use AWS SSM to fetch secrets. Secret rotation with AWS SSM requires Lambda functions, which would get quite tiresome since we have a large number of secrets.
We researched HashiCorp Vault, and the two features we would like to use are secret rotation and dynamic secrets. Secret rotation with HashiCorp Vault seems painless compared to SSM secret rotation.
Is it possible to use both of them together? Basically, the secret rotation would be done by HashiCorp Vault, and the new values would be written back to AWS SSM. All the services (such as ECS and Beanstalk) would then need to be restarted to fetch the new secrets.
In short, does Vault provide some sort of integration for the above, or do I have to handle the write-back in the same secret rotation cron job script for Vault?
HashiCorp Vault provides dynamic secrets functionality: a Vault Agent fetches secrets from Vault and either writes them to a file or sets them as environment variables, and the application reads from the file or uses the environment variables. So to fully leverage HashiCorp Vault, you should not use SSM at all.
But if you want to use both HashiCorp Vault and SSM, what you can do is have the Vault Agent fetch secrets from HashiCorp Vault and run a cron job that writes the rotated values into SSM, as sketched below.
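A minimal sketch of such a sync job, assuming the hvac and boto3 Python libraries, a KV v2 secrets engine mounted at secret/, and made-up secret path and parameter names; your Vault address, auth method, and naming will differ:

```python
# Hypothetical cron job: read a secret from Vault and mirror it into SSM Parameter Store.
# VAULT_ADDR/VAULT_TOKEN, the secret path, and the parameter name are all assumptions.
import os

import boto3
import hvac

vault = hvac.Client(url=os.environ["VAULT_ADDR"], token=os.environ["VAULT_TOKEN"])
ssm = boto3.client("ssm")

# Read the current value from a KV v2 engine mounted at "secret/"
resp = vault.secrets.kv.v2.read_secret_version(path="myapp/db", mount_point="secret")
password = resp["data"]["data"]["password"]

# Mirror it into SSM as an encrypted SecureString so existing services keep working
ssm.put_parameter(
    Name="/myapp/db/password",
    Value=password,
    Type="SecureString",
    Overwrite=True,
)
```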
When deploying to AWS from a gitlab-ci.yml file, you usually use aws-cli commands as scripts. At my current workplace, before I can use the aws-cli normally, I have to log in via aws-azure-cli and authenticate via 2FA, and then my workstation is given a secret key that expires after 8 hours.
GitLab has CI/CD variables where I would usually put the AWS_ACCESS_KEY and AWS_SECRET_KEY, but I can't create an IAM role to get these. So I can't use aws-cli commands in the script, which means I can't deploy.
Is there any way to authenticate GitLab other than this? I can reach out to our cloud services team, but that will take a week.
You can configure OpenID Connect to retrieve temporary credentials from AWS without needing to store secrets.
In my view it's actually a best practice, too, to use OpenID Connect roles instead of storing actual credentials.
Add the identity provider for GitLab in AWS.
Configure the role and its trust policy.
Retrieve temporary credentials.
Follow this guide https://docs.gitlab.com/ee/ci/cloud_services/aws/ or a more detailed version https://oblcc.com/blog/configure-openid-connect-for-gitlab-and-aws/. A minimal sketch of the token exchange follows below.
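To make the token exchange concrete, here is a rough Python sketch of what the OIDC flow does under the hood, assuming the CI job defines an ID token variable named GITLAB_OIDC_TOKEN and that ROLE_ARN points to the role whose trust policy references the GitLab identity provider (both names are assumptions):

```python
# Exchange a GitLab CI ID token for temporary AWS credentials via STS.
# GITLAB_OIDC_TOKEN and ROLE_ARN are assumed to be set by the CI job configuration.
import os

import boto3

sts = boto3.client("sts")
resp = sts.assume_role_with_web_identity(
    RoleArn=os.environ["ROLE_ARN"],
    RoleSessionName="gitlab-ci-deploy",
    WebIdentityToken=os.environ["GITLAB_OIDC_TOKEN"],
    DurationSeconds=3600,
)
creds = resp["Credentials"]

# Use the short-lived credentials for subsequent AWS calls; no stored keys needed.
session = boto3.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(session.client("sts").get_caller_identity()["Arn"])
```

In practice the aws-cli can perform this exchange for you once the identity provider and role are in place.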
I am trying to figure out whether there is a way to access secrets, like a DB password, from an AWS Batch job using Vault or K8s secrets. The option to use the Secrets Manager provided by AWS is there, but per my company policy we use Vault across projects. The AWS Batch documentation does not cover options for storing secrets other than Secrets Manager.
Can anyone tell me whether using Vault or K8s secrets is an option with AWS Batch?
Does anyone know in which cases to choose Kubernetes secrets instead of Google Secret Manager, and vice versa? What are the differences between the two?
With Kubernetes secrets (K8s Secrets), you use a built-in feature of K8s. You load your secrets into Secret objects (handled much like ConfigMaps) and mount them on the pods that require them.
PRO
If one day you want to deploy on AWS, Azure, or on-premises, still on K8s, the behavior will be the same; there is no update to perform in your code.
CONS
The secrets are only accessible from the K8s cluster; it's impossible to reuse them with other GCP services.
Note: with GKE there is no problem, because the etcd component is automatically encrypted with a key from the KMS service to keep the secrets encrypted at rest. But it's not always the same for every K8s installation, especially on-premises, where the secrets may be kept in plain text. Be aware of this part of the security. A minimal read example follows below.
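As an illustration, here is a minimal Python sketch of a pod reading such a secret, assuming the Secret was mounted at /etc/secrets with a key named db-password (the path and key name depend on your pod spec and are assumptions):

```python
# Read a Kubernetes Secret that was mounted into the pod as files.
# The mount path and key name come from a hypothetical pod spec.
import os
from pathlib import Path

def read_mounted_secret(key: str, mount_dir: str = "/etc/secrets") -> str:
    # Each key of the Secret shows up as a file named after the key.
    return Path(mount_dir, key).read_text().strip()

db_password = read_mounted_secret("db-password")

# If the Secret key is instead injected via env (valueFrom.secretKeyRef):
api_key = os.environ.get("API_KEY")
```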
Secret Manager is a vault managed by Google. You have an API to read and write secrets, and the IAM service checks the authorization (see the read sketch after the list below).
PRO
It's a Google Cloud service, and you can access it from any GCP service (Compute Engine, Cloud Run, App Engine, Cloud Functions, GKE, ...) as long as you are authorized to do so.
CONS
It's a Google Cloud-specific product, so you are locked in.
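For comparison, reading a secret from Secret Manager looks roughly like this Python sketch with the google-cloud-secret-manager client; the project and secret IDs are placeholders:

```python
# Read the latest version of a secret from Google Secret Manager.
# "my-project" and "db-password" are placeholder IDs.
from google.cloud import secretmanager

client = secretmanager.SecretManagerServiceClient()
name = "projects/my-project/secrets/db-password/versions/latest"
response = client.access_secret_version(name=name)
db_password = response.payload.data.decode("utf-8")
```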
You can use them together via this sync service: https://external-secrets.io/
We have an API app that uses Firebase Admin to send messages to devices.
Earlier, we used to specify the service account key using an environment variable like GOOGLE_APPLICATION_CREDENTIALS="path_to_file.json".
But now, since we are shifting to AWS Elastic Container Service on Fargate, I am unable to figure out how to put this file in the container for AWS ECS.
Any advice highly appreciated.
Thanks
Solved it by storing the service account key as a JSON-stringified environment variable and using admin.credential.cert() instead of the default application credentials.
Refer: https://firebase.google.com/docs/reference/admin/node/admin.credential#cert
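The answer above uses the Node SDK's admin.credential.cert(); a rough Python equivalent with the firebase-admin package would look like the sketch below, assuming the stringified key is stored in an env var named FIREBASE_SERVICE_ACCOUNT (the variable name is an assumption):

```python
# Initialize firebase-admin from a JSON-stringified service account key held in an env var.
# FIREBASE_SERVICE_ACCOUNT is a hypothetical variable name.
import json
import os

import firebase_admin
from firebase_admin import credentials

service_account_info = json.loads(os.environ["FIREBASE_SERVICE_ACCOUNT"])
cred = credentials.Certificate(service_account_info)  # accepts a dict or a file path
firebase_admin.initialize_app(cred)
```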
I would suggest instead AWS Secrets Manager, which is purpose-built for storing secrets. Take a look at this blog post:
https://aws.amazon.com/blogs/compute/securing-credentials-using-aws-secrets-manager-with-aws-fargate/
Even better than using environment variables, which have their own downsides, you can leverage AWS Parameter Store, which is a secure way to manage secrets in the AWS environment (secrets are encrypted both in transit and at rest).
You'd need to create an IAM role for Amazon ECS for your code to have access to the Parameter Store.
You may want to check this article: https://aws.amazon.com/blogs/compute/managing-secrets-for-amazon-ecs-applications-using-parameter-store-and-iam-roles-for-tasks/
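For illustration, fetching such a parameter from application code is a short call with boto3; the parameter name below is a placeholder, and the ECS task role must allow ssm:GetParameter on it:

```python
# Fetch and decrypt a SecureString parameter from SSM Parameter Store.
# "/myapp/prod/db-password" is a placeholder name.
import boto3

ssm = boto3.client("ssm")
db_password = ssm.get_parameter(
    Name="/myapp/prod/db-password",
    WithDecryption=True,
)["Parameter"]["Value"]
```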
Use the specific method from_service_account_info as described here. You then pass the content of the credentials JSON file as a dictionary.
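A minimal sketch of that call, assuming the JSON key is kept in an env var named GOOGLE_CREDENTIALS_JSON (the variable name is an assumption):

```python
# Build Google credentials from a JSON string instead of a file on disk.
import json
import os

from google.oauth2 import service_account

info = json.loads(os.environ["GOOGLE_CREDENTIALS_JSON"])
credentials = service_account.Credentials.from_service_account_info(info)
```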
I have properties files specific to dev, test, and other environments. I have to store these files in some secure place in AWS. I am using AWS native tools for build and deployment. Please let me know how to store these files in AWS.
There are many ways to deal with secrets in AWS, but one thing is clear: it depends on the service where you will use and consume these secrets.
But you can explore these:
The simplest way is to use environment variables.
AWS Secrets Manager
S3 (for keeping files)
One common approach is to pass your secrets as environment variables, but in the case of AWS I would recommend going with AWS Secrets Manager.
AWS Secrets Manager is an AWS service that makes it easier for you to manage secrets. Secrets can be database credentials, passwords, third-party API keys, and even arbitrary text. You can store and control access to these secrets centrally by using the Secrets Manager console, the Secrets Manager command line interface (CLI), or the Secrets Manager API and SDKs.
Basic Secrets Manager Scenario
The most basic scenario is storing credentials for a database in Secrets Manager, and then using those credentials in an application that needs to access the database.
Compliance with Standards
AWS Secrets Manager has undergone auditing for a number of compliance standards and can be part of your solution when you need to obtain compliance certification.
You can explore this article to learn how to read and write secrets.
If you need to maintain files, not only key-value objects, then you can store them in S3 and pull the files during deployment, but it is better to enable server-side encryption.
But I would prefer Secrets Manager over S3 and environment variables; a short retrieval sketch follows below.
For S3, you can check here and here.
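As a small illustration, reading a secret back at deploy time with boto3 could look like this; the secret name is a placeholder:

```python
# Retrieve a secret from AWS Secrets Manager; "myapp/dev/config" is a placeholder name.
import json

import boto3

client = boto3.client("secretsmanager")
secret_string = client.get_secret_value(SecretId="myapp/dev/config")["SecretString"]
config = json.loads(secret_string)  # if the secret was stored as key-value pairs
```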
Bajjuri,
As Adil said in his answer,
AWS Secrets Manager -- if you want to store data as key-value pairs.
AWS S3 -- if you want to store files securely.
Adding to his answer, you can use AWS CodeDeploy environment variables to fetch the files according to your environment.
Let's say you have a CodeDeploy deployment group for the dev environment named "DEV" and a deployment group for the prod environment named "PROD". You can use the deployment group name variable in a bash script and call it in the lifecycle hooks of the appspec file to fetch the files or secrets accordingly (see the sketch after this answer).
I've been using this technique in production for a long time and it works like a charm.
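The original technique uses a bash script in the appspec lifecycle hooks; here is a rough Python equivalent, relying on the DEPLOYMENT_GROUP_NAME variable that CodeDeploy exposes to hook scripts, with a placeholder bucket, prefix, and target directory:

```python
# Hypothetical lifecycle hook: download environment-specific files based on the
# CodeDeploy deployment group name (e.g. "DEV" or "PROD").
# Bucket, prefix, and target directory are placeholders.
import os

import boto3

group = os.environ["DEPLOYMENT_GROUP_NAME"]
s3 = boto3.client("s3")

bucket = "my-config-bucket"
prefix = f"properties/{group.lower()}/"
target_dir = "/opt/myapp/config"

# Copy every properties file for this environment next to the application.
for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket, Prefix=prefix):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        if key.endswith("/"):
            continue  # skip folder placeholder objects
        s3.download_file(bucket, key, os.path.join(target_dir, os.path.basename(key)))
```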