AWS has Secrets Manager, which stores secrets, and it has an API to get an individual secret. I want to fetch all the secrets in an account at once. Is there any way we can achieve this?
You can use the ListSecrets method to list the metadata of all secrets; its output excludes the SecretString and SecretBinary fields.
I tried to list the secret names in my Secrets Manager using boto3 in Python, via list_secrets():
secrets = secret_client.list_secrets()
secrets_manager = secrets['SecretList']
for secret in secrets_manager:
    print("{0}".format(secret['Name']))
The complete list was around 20, but the output was only around 5 secrets.
I updated the code as below, and it worked:
secrets = secret_client.list_secrets()
secrets_manager = secrets['SecretList']
while "NextToken" in secrets:
    secrets = secret_client.list_secrets(NextToken=secrets["NextToken"])
    secrets_manager.extend(secrets['SecretList'])
for secret in secrets_manager:
    print("{0}".format(secret['Name']))
So basically, the Secrets Manager list_secrets() call paginates its output, so you need to follow 'NextToken' as described in https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/secretsmanager.html#SecretsManager.Client.list_secrets
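Alternatively, boto3 ships a built-in paginator for this call, which follows NextToken for you. A minimal sketch, assuming default credentials and region:

import boto3

secret_client = boto3.client('secretsmanager')
paginator = secret_client.get_paginator('list_secrets')
# The paginator transparently follows NextToken across pages.
for page in paginator.paginate():
    for secret in page['SecretList']:
        print(secret['Name'])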
Note that, per the ListSecrets documentation, 'the encrypted fields SecretString and SecretBinary are not included in the output'.
If you're trying to fetch all secret values, options include:
1) Scripting list-secrets and get-secret-value together to fetch all secret values. The example below will be slow, since it makes serial requests.
#!/usr/bin/env python3
import json
import subprocess

# List the secrets, then fetch each value serially. Note that list-secrets
# paginates, so this only covers the first page of results.
secrets = json.loads(subprocess.getoutput("aws secretsmanager list-secrets"))
for s in secrets['SecretList']:
    name = s.get('Name')
    data = json.loads(subprocess.getoutput(
        "aws secretsmanager get-secret-value --secret-id {}".format(name)))
    value = data.get('SecretString')
    print("{}: {}".format(name, value))
2) Use a third-party tool such as Summon with its AWS provider, which accepts a secrets.yml file and makes async calls to inject secrets into the environment of whatever command you're calling.
I'm working on an AWS Glue job. Can someone please give me a script for an AWS Glue Spark job that retrieves my secrets from Secrets Manager? Help is appreciated.
It is fairly simple (if you have all of the required IAM permissions) using boto3 and the get_secret_value() function.
import boto3

sm_client = boto3.client('secretsmanager')
secret_value_response = sm_client.get_secret_value(
    SecretId='<your_secret_id>'
)
If the value is a string, you can extract it like this:
value = secret_value_response['SecretString']
If it is a binary value:
secret_value_response['SecretBinary']
Additionally, if the secret has multiple versions, you have to pass VersionId and/or VersionStage, as explained in the get_secret_value documentation.
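Put together for a Glue job, it might look like the sketch below. The secret name and the JSON keys are assumptions; adjust them to your own secret:

import json
import boto3

sm_client = boto3.client('secretsmanager')
response = sm_client.get_secret_value(SecretId='my/database/credentials')  # hypothetical secret name
creds = json.loads(response['SecretString'])  # assumes the secret stores a JSON document
user = creds['username']
password = creds['password']
# Use these in your Glue connection options, e.g. for a JDBC connection.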
I must be missing something in how AWS secrets can be accessed through Terraform. Here is the scenario I am struggling with:
I create an IAM user named "infra_user", create an access key ID and secret access key for the user, and download the values in plain text.
"infra_user" will be used to authenticate via Terraform to provision resources, let's say an S3 bucket and an EC2 instance.
To protect the ID and secret key of "infra_user", I store them in AWS Secrets Manager.
In order to authenticate with "infra_user" in my terraform script, I will need to retrieve the secrets via the following block:
data "aws_secretsmanager_secret" "arn" {
arn = "arn:aws:secretsmanager:us-east-1:123456789012:secret:example-123456"
}
But to even use the data block in my script and retrieve the secrets, wouldn't I need to authenticate to AWS in some other way in my provider block before I declare any resources?
If I create another user, say "tf_user", to just retrieve the secrets where would I store the access key for "tf_user"? How do I avoid this circular authentication loop?
The Terraform AWS provider documentation has a section on Authentication and Configuration and lists an order of precedence for how the provider discovers which credentials to use. You can choose the method that makes the most sense for your use case.
For example, one (insecure) method would be to set the credentials directly in the provider:
provider "aws" {
region = "us-west-2"
access_key = "my-access-key"
secret_key = "my-secret-key"
}
Or, you could set the environment variables in your shell:
export AWS_ACCESS_KEY_ID="my-access-key"
export AWS_SECRET_ACCESS_KEY="my-secret-key"
export AWS_DEFAULT_REGION="us-west-2"
Now your provider block simplifies to:
provider "aws" {}
When you run Terraform commands, the provider will automatically use the credentials in your environment.
Or, as yet another alternative, you can store the credentials in a profile configuration file and choose which profile to use by setting the AWS_PROFILE environment variable.
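For example (the profile name "infra" here is purely illustrative):

# ~/.aws/credentials
[infra]
aws_access_key_id = my-access-key
aws_secret_access_key = my-secret-key

# then, in your shell, select the profile before running Terraform:
export AWS_PROFILE=infra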
Authentication is somewhat more complex and configurable than it seems at first glance, so I'd encourage you to read the documentation.
I need to write Git-revisioned Terraform code to put a secret string into AWS Secrets Manager. Given a secret string in a textfile:
% cat /tmp/plaintext-password
my-super-secret-password
I am able to make an encrypted version of it using a KMS key:
# Prints base64-encoded, encrypted string.
aws kms encrypt --key-id my_kms_uuid --plaintext fileb:///tmp/plaintext-password --output text --query CiphertextBlob
# abcdef...123456789/==
What Terraform code can be written to get that base64 string in AWS Secrets Manager, such that AWS knows it was encrypted with my_kms_uuid? I have tried the following:
resource "aws_secretsmanager_secret" "testing-secrets-secret" {
name = "secret-for-testing"
kms_key_id = "<my_kms_uuid>"
}
resource "aws_secretsmanager_secret_version" "testing-secrets-version" {
secret_string = "abcdef...123456789/=="
}
The problem is I can't figure out a way to tell AWS Secrets Manager that the string is already encrypted by KMS so it doesn't have to encrypt it again. Can this be done?
If your goal is to keep secret values out of the statefile, then you have two choices:
Encrypt the secret outside of Terraform, then store the encrypted value in Secrets Manager.
This will force all consumers of the secret to decrypt it before use. Since KMS ciphertext identifies the CMK used to encrypt it, there's no need for you to separately track the key ID.
There are several drawbacks to this approach. For one thing, you have to do two steps to use any secret: retrieve it and decrypt it. If you use ECS, you can't just provide the name of the secret and let ECS provide the decrypted value to your container.
A bigger drawback is that it can be very easy to forget which CMK is used for which secret, and accidentally delete the CMK (at which point the secret becomes unusable). Related is knowing which permissions to grant to the consumers, especially if you have a lot of CMKs.
Create the secret inside Terraform, and set its value manually.
This keeps the actual value in Secrets Manager, so consumers don't need the separate decryption step.
It is possible to use local-exec to generate the secret within the Terraform configuration: write a script that generates random data and then invokes the AWS CLI to store the value (see the sketch below). However, this technique is more frequently used for things like SSH private keys that are created outside of the Terraform provisioning process.
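A minimal sketch of that local-exec approach; the openssl command and resource names are illustrative assumptions, not a prescribed implementation:

resource "aws_secretsmanager_secret" "generated" {
  name = "generated-secret"
}

resource "null_resource" "store_secret" {
  provisioner "local-exec" {
    # Generate a random value and store it directly, so it never enters the statefile.
    command = "aws secretsmanager put-secret-value --secret-id ${aws_secretsmanager_secret.generated.id} --secret-string \"$(openssl rand -base64 32)\""
  }
}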
Better than either of these solutions is to store your statefile somewhere that it isn't generally accessible. There are a bunch of backends that can do this for you.
When encrypting a secret string with KMS, the key that was used is actually encoded in the resulting ciphertext. This means we can do the following:
Encrypt the secret string with an existing KMS key in AWS:
aws kms encrypt --key-id arn:aws:kms:us-east-1:<account_id>:key/mrk-blahblahblah --plaintext fileb://<(printf 'SOME_SECRET_TEXT') --output text --query CiphertextBlob --region us-east-1
Then use the aws_kms_secrets data source to decrypt it within Terraform (see the combined sketch below).
Then you push the decrypted secret up into AWS Secrets Manager using aws_secretsmanager_secret_version.
The net result is that only the encrypted secret is kept in version control, while the value actually stored in Secrets Manager is the decrypted string.
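Putting those steps together, a sketch of the Terraform side might look like this (the ciphertext payload and names are placeholders; aws_kms_secrets decrypts during plan/apply):

data "aws_kms_secrets" "db" {
  secret {
    name    = "password"
    payload = "abcdef...123456789/==" # base64 ciphertext from `aws kms encrypt`
  }
}

resource "aws_secretsmanager_secret" "testing-secrets-secret" {
  name       = "secret-for-testing"
  kms_key_id = "<my_kms_uuid>"
}

resource "aws_secretsmanager_secret_version" "testing-secrets-version" {
  secret_id     = aws_secretsmanager_secret.testing-secrets-secret.id
  secret_string = data.aws_kms_secrets.db.plaintext["password"]
}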
We deployed our complete application in an AWS environment, and we find AWS Secrets Manager is the right choice to store the secrets for the database and a few other components.
Our ultimate aim is to not store any credentials in config files or the database, which we achieved using AWS Secrets Manager.
But when I connect to AWS Secrets Manager to retrieve a secret value, it expects a "secret-id" field, as shown below. I need to protect this secret-id somewhere so that the application can use it to access the secret value.
aws secretsmanager get-secret-value --secret-id tutorials/MyFirstTutorialSecret
If you want to hide your secret-id, you need another security layer. How about storing those secret-ids somewhere such as AWS DynamoDB?
| id     | secret-id                       |
| abc123 | tutorials/MyFirstTutorialSecret |
Then create a custom script (Bash/Python) that can only be accessed by you and privileged users:
MYSECRETID=$(<retrieve it from DynamoDB using the `id` key>)
aws secretsmanager get-secret-value --secret-id "$MYSECRETID"
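Filled in, that could look like the sketch below; the table name "secret-index" and the attribute names are assumptions matching the table above:

# Look up the secret-id in DynamoDB, then fetch the secret value.
MYSECRETID=$(aws dynamodb get-item \
  --table-name secret-index \
  --key '{"id": {"S": "abc123"}}' \
  --query 'Item."secret-id".S' --output text)
aws secretsmanager get-secret-value --secret-id "$MYSECRETID"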
AWS doesn't permit what you want, as it uses the name of the secret as part of the ARN. However, you can either try some indirection as #leondkr suggests, or use policies to restrict who can even list which secrets exist. That can be done either in IAM or in the secret's resource policy. See here for more information: https://docs.aws.amazon.com/secretsmanager/latest/userguide/auth-and-access.html The privilege you may want to restrict is secretsmanager:ListSecrets.
Here are the IAM actions for that service: https://docs.aws.amazon.com/service-authorization/latest/reference/list_awssecretsmanager.html
I am trying to figure out how to pass in static IAM AWS credentials when using the AWS Data API to interact with an Aurora Serverless db.
I am using the AWS boto3 Python library, and I read data from a table like this (which by default uses the credentials of the default IAM user defined in my ~/.aws/credentials file):
rds_client = boto3.client('rds-data')
rds_client.execute_statement(
    secretArn=self.db_credentials_secrets_store_arn,
    database=self.database_name,
    resourceArn=self.db_cluster_arn,
    sql='SELECT * FROM TestTable;',
    parameters=[])
This works successfully.
But I want to be able to pass in an AWS Access Key and Secret Key as parameters to the execute_statement call, something like:
rds_client.execute_statement(
    accessKey='XXX',
    secretKey='YYY',
    secretArn=self.db_credentials_secrets_store_arn,
    database=self.database_name,
    resourceArn=self.db_cluster_arn,
    sql='SELECT * FROM TestTable;',
    parameters=[])
But that does not work.
Any ideas on how I can achieve this?
Thanks!
In order to accomplish this, you will need to create a new function that takes the access key and the secret key, creates a client with those credentials, and then makes the call.
def execute_statement_with_iam_user(accessKey, secretKey):
    # Note: the Data API lives in the 'rds-data' client, not 'rds'.
    rds_client = boto3.client(
        'rds-data',
        aws_access_key_id=accessKey,
        aws_secret_access_key=secretKey
    )
    rds_client.execute_statement(
        secretArn=self.db_credentials_secrets_store_arn,
        database=self.database_name,
        resourceArn=self.db_cluster_arn,
        sql='SELECT * FROM TestTable;',
        parameters=[])

execute_statement_with_iam_user(accessKey, secretKey)
FYI, AWS does not recommend hard-coding your credentials like this. What you should be doing instead is assuming a role with a temporary session. For that, look into the STS client and creating roles for assumption; a sketch follows.
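A minimal sketch of the assume-role approach (the role ARN and session name below are hypothetical):

import boto3

sts = boto3.client('sts')
assumed = sts.assume_role(
    RoleArn='arn:aws:iam::123456789012:role/rds-data-access',  # hypothetical role
    RoleSessionName='rds-data-session'
)
creds = assumed['Credentials']
# Temporary credentials include a session token and expire automatically.
rds_client = boto3.client(
    'rds-data',
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken']
)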