How can I ensure that my retrieval of secrets is secure?

Currently I am using Terraform and AWS Secrets Manager to store and retrieve secrets, and I would like some insight into whether my implementation is secure and, if not, how I can make it more secure. Let me illustrate with what I have tried.
In secrets.tf I create a secret like this (it needs to be applied with targeting):
resource "aws_secretsmanager_secret" "secrets_of_life" {
  name = "top-secret"
}
I then go to the console and manually set the secret value in AWS Secrets Manager.
I then retrieve the secrets in secrets.tf like:
data "aws_secretsmanager_secret_version" "secrets_of_life_version" {
  secret_id = aws_secretsmanager_secret.secrets_of_life.id
}
locals {
  creds = jsondecode(data.aws_secretsmanager_secret_version.secrets_of_life_version.secret_string)
}
And then I proceed to use the secret (for example, exporting it as a Kubernetes secret):
resource "kubernetes_secret" "secret_credentials" {
  metadata {
    name      = "secret-credentials"
    namespace = kubernetes_namespace.some_namespace.id
  }
  data = {
    top_secret = local.creds["SECRET_OF_LIFE"]
  }
  type = "Opaque"
}
It's worth mentioning that I store tf state remotely. Is my implementation secure? If not, how can I make it more secure?

Yes, I can confirm it is secure, since you have accomplished the following:
- Plain-text secrets are kept out of your code.
- Your secrets are stored in a dedicated secret store that enforces encryption and strict access control.
- Everything is defined in the code itself; there are no extra manual steps or wrapper scripts required.
- Secrets Manager supports rotating secrets, which is useful in case a secret gets compromised.
The only thing I would add is to use a Terraform backend that supports encryption, like S3, and to avoid committing the state file to your source control.
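For example, a minimal encrypted S3 backend might look like this (bucket and key names are placeholders):

```hcl
terraform {
  backend "s3" {
    bucket  = "my-tf-state-bucket"   # placeholder name
    key     = "prod/terraform.tfstate"
    region  = "us-east-1"
    encrypt = true                   # server-side encryption of the state object
    # dynamodb_table = "tf-locks"    # optional: state locking
  }
}
```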

Looks good; as @asri suggests, it is a good, secure implementation.
The risk of exposure is in the remote state: the secret may be stored there in plain text. Assuming you are using S3, make sure that the bucket is encrypted. If you share Terraform state access with other developers, they may have access to those values in the remote state file.
From https://blog.gruntwork.io/a-comprehensive-guide-to-managing-secrets-in-your-terraform-code-1d586955ace1
These secrets will still end up in terraform.tfstate in plain text! This has been an open issue for more than 6 years now, with no clear plans for a first-class solution. There are some workarounds out there that can scrub secrets from your state files, but these are brittle and likely to break with each new Terraform release, so I don’t recommend them.

Hi, I'm working on similar things; here are some thoughts:
When running Terraform for the second time, the secret will be in plain text in the state files stored in S3. Is S3 safe enough to store those sensitive strings?
My work uses a similar approach: run Terraform to create an empty secret / dummy string as a placeholder -> manually update it to the real credentials -> run Terraform again to tell the resource to use the updated credentials. The thing is that when we deploy to production, we want the process to be as automated as possible; this approach is not ideal, but I haven't figured out a better way.
If anyone has better ideas please feel free to leave a comment below.

Related

How can I reference mounted secrets from Secret Manager in a python Cloud Function?

I'm trying to reference a series of APIs and would like peace of mind about key security, so I am storing keys in Secret Manager. However, the documentation doesn't specify the best method of connecting to a mounted path within a Cloud Function.
Suppose my secret is named key6 and has a mount path of /api/secret/key6 - how would I call this in Python?
I attempted this method: https://cloud.google.com/secret-manager/docs/creating-and-accessing-secrets#secretmanager-create-secret-python
However, given that this didn't use the mounted path, I wanted to see if there was a better implementation.
Reading the secret is done via standard file operations in Python. So if the path is /api/secret/key6, then you could do something like:
secret_location = '/api/secret/key6'
with open(secret_location) as f:
    YOUR_SECRET = f.readlines()[0]
Just ensure that you have given the service account running your Cloud Function the necessary permissions to access the secrets.
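The same file-read pattern can be packaged as a small helper and exercised locally (here a temporary file stands in for the mounted path; in the Cloud Function the path would be /api/secret/key6):

```python
import os
import tempfile

def read_secret(path):
    # Mounted secrets are plain files; strip a trailing newline, if any.
    with open(path) as f:
        return f.read().strip()

# Simulate the Secret Manager mount with a local temporary file.
tmp_dir = tempfile.mkdtemp()
secret_path = os.path.join(tmp_dir, "key6")
with open(secret_path, "w") as f:
    f.write("s3cr3t-value\n")

print(read_secret(secret_path))  # prints "s3cr3t-value"
```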

terraform storing sensitive data in state file

I am using variables with sensitive = true; even so, the state file stores the id and password. Is there any way to avoid it?
variable "rs_master_pass" {
  type      = string
  sensitive = true
}
In the state file:
"master_password": "password"
Even if I take it out of the state manually, it comes back on each apply.
There is no "easy" way to avoid that. You simply must not hard-code the values in your TF files. Setting sensitive = true does not prevent the secrets from appearing in plain text, as you noticed.
The general ways for properly handling secrets in TF are:
use specialized, external vaults, such as HashiCorp Vault, AWS Parameter Store, or AWS Secrets Manager. Their values have to be set separately, so that the secrets are again not available in the TF state file.
use local-exec to set up the secrets outside of TF. Whatever you do in local-exec does not get stored in the TF state file. This is often done to change dummy secrets that may be required in your TF code (e.g. an RDS password) to the actual values, outside of TF's knowledge.
if the above solutions are not available, then you have to protect your state file (which is good practice anyway). This is often done by storing it remotely in S3 under strict access policies.
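As a rough sketch of the local-exec option (assumes the AWS CLI is installed and configured; all names are placeholders). Reading the value from a local file keeps it out of both the TF code and the rendered command:

```hcl
resource "aws_secretsmanager_secret" "db" {
  name = "db-credentials"
}

# Push the real value outside of Terraform's resource model; the value
# itself is read from a local file and never enters the TF state.
resource "null_resource" "set_secret_value" {
  provisioner "local-exec" {
    command = "aws secretsmanager put-secret-value --secret-id ${aws_secretsmanager_secret.db.id} --secret-string file://db-credentials.json"
  }
}
```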

Concatenate AWS Secrets in aws-cdk for ECS container

How do you go about making a Postgres URI connection string from a Credentials.fromGeneratedSecret() call without writing the secrets out using toString()?
I think I read somewhere about making a Lambda that does that, but that seems kind of overkill.
const dbCreds = Credentials.fromGeneratedSecret("postgres")
const username = dbCreds.username
const password = dbCreds.password
const uri = `postgresql://${username}:${password}@somerdurl/mydb?schema=public`
Pretty sure I can't do the above. However, my Hasura and API ECS containers need connection strings like the above, so I figure this is probably a solved problem?
If you want to import a secret that already exists in Secrets Manager, you can just look up the secret by name or ARN. Take a look at the documentation on how to get a value from AWS Secrets Manager.
Once you have your secret in the code, it is easy to pass it on as an environment variable to your application. With CDK it is even possible to pass secrets from Secrets Manager or AWS Systems Manager Parameter Store directly to the CDK construct. One such example (as pointed out in the documentation):
taskDefinition.addContainer('container', {
  image: ecs.ContainerImage.fromRegistry("amazon/amazon-ecs-sample"),
  memoryLimitMiB: 1024,
  environment: { // clear text, not for sensitive data
    STAGE: 'prod',
  },
  environmentFiles: [ // list of environment files hosted either on local disk or S3
    ecs.EnvironmentFile.fromAsset('./demo-env-file.env'),
    ecs.EnvironmentFile.fromBucket(s3Bucket, 'assets/demo-env-file.env'),
  ],
  secrets: { // Retrieved from AWS Secrets Manager or AWS Systems Manager Parameter Store at container start-up.
    SECRET: ecs.Secret.fromSecretsManager(secret),
    DB_PASSWORD: ecs.Secret.fromSecretsManager(dbSecret, 'password'), // Reference a specific JSON field (requires platform version 1.4.0 or later for Fargate tasks)
    PARAMETER: ecs.Secret.fromSsmParameter(parameter),
  }
});
Overall, in this case you would not have to do any parsing or printing of the actual secret within the CDK. You can handle all of that processing within your application using properly set environment variables.
However, from your question alone it is not clear what exactly you are trying to do. Still, the provided resources should point you in the correct direction.

How to convert an AWS Secrets Manager string to a map in Terraform (0.11.13)

I have a secret stored in AWS Secrets Manager and am trying to integrate it with Terraform at runtime. We are using Terraform version 0.11.13, and updating to the latest Terraform is on the roadmap.
We all want to use the jsondecode() available in the latest Terraform, but need to get a few things integrated before we upgrade.
We tried to use the helper external data program below, suggested in https://github.com/terraform-providers/terraform-provider-aws/issues/4789.
data "external" "helper" {
  program = ["echo", "${replace(data.aws_secretsmanager_secret_version.map_example.secret_string, "\\\"", "\"")}"]
}
But now we end up getting this error:
data.external.helper: can't find external program "echo"
A Google search didn't help much.
Any help will be much appreciated.
OS: Windows 10
It sounds like you want to use a data source for the aws_secretsmanager_secret.
Resources in terraform create new resources. Data sources in terraform reference the value of existing resources in terraform.
data "aws_secretsmanager_secret" "example" {
  arn = "arn:aws:secretsmanager:us-east-1:123456789012:secret:example-123456"
}
data "aws_secretsmanager_secret_version" "example" {
  secret_id     = data.aws_secretsmanager_secret.example.id
  version_stage = "example"
}
Note: you can also use the secret name
Docs: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/secretsmanager_secret
Then you can use the value from this like so:
output "MySecretJsonAsString" {
  value = data.aws_secretsmanager_secret_version.example.secret_string
}
Per the docs, the secret_string property of this resource is:
The decrypted part of the protected secret information that was originally provided as a string.
You should also be able to pass that value into jsondecode and then access the properties of the json body individually.
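For example, on a jsondecode-capable version of Terraform (the "username" key is just an assumed field of the secret's JSON payload):

```hcl
locals {
  creds = jsondecode(data.aws_secretsmanager_secret_version.example.secret_string)
}

output "db_username" {
  # "username" is an assumed key in the secret's JSON payload
  value     = local.creds["username"]
  sensitive = true
}
```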
However, you asked for a Terraform 0.11.13 solution. If the secret value is defined by Terraform, you can use the terraform_remote_state data source to get the value. This trusts that nothing other than Terraform is updating the secret. But the best answer is to upgrade your Terraform; this could be a useful stopgap until then.
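A sketch of that stopgap with the terraform_remote_state data source, in 0.11 syntax (the bucket, key, and db_password output are placeholders):

```hcl
# Reads outputs from the other configuration's state file.
data "terraform_remote_state" "secrets" {
  backend = "s3"
  config {
    bucket = "my-tf-state-bucket"
    key    = "secrets/terraform.tfstate"
    region = "us-east-1"
  }
}

# 0.11-style interpolation; assumes the other configuration exported
# an output named "db_password".
output "db_password" {
  value     = "${data.terraform_remote_state.secrets.db_password}"
  sensitive = true
}
```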
As a recommendation, you can pin the version of Terraform per module rather than for your whole organization. I do this with docker containers that run specific versions of the terraform binary; a script in the root of every module wraps the terraform commands to bring up the version of Terraform meant for that project. Just a tip.

trouble with AWS SWF using IAM roles

I've noticed on AWS that if I get IAM role credentials (key, secret, token) and set them as the appropriate environment variables in a Python script, I am able to create and use SWF Layer1 objects just fine. However, it looks like the Layer2 objects do not work. For example, with boto and os imported:
test = boto.swf.layer2.ActivityWorker()
test.domain = 'someDomain'
test.task_list = 'someTaskList'
test.poll()
I get an exception that the security token is not valid, and indeed, if I dig through the object, the security token is not set. This even happens with:
test = boto.swf.layer2.ActivityWorker(session_token=os.environ.get('AWS_SECURITY_TOKEN'))
I can fix this by doing:
test._swf.provider.security_token = os.environ.get('AWS_SECURITY_TOKEN')
test.poll()
but this seems pretty hacky and annoying because I have to do it every time I make a new Layer2 object. Has anyone else noticed this? Is this behavior intended for some reason, or am I missing something here?
Manual management of temporary security credentials is not only "pretty hacky" but also less secure. A better alternative is to assign an IAM role to the instances, so they automatically have all the permissions of that role without requiring explicit credentials.
See: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html