Conn Configuration for AWS Lambda Python RDS Postgres IAM Authentication - amazon-web-services

AWS recently made it possible to access RDS instances with IAM users and roles. I am confused about how to configure a Python connection, since I would not be using the database's own authentication credentials with psycopg2.
Right now I am connecting like this:
conn = psycopg2.connect("dbname='%s' user='%s' host='%s' password='%s'" % (db_name, db_user, db_host, db_pass))
I have no idea how to use IAM credentials to connect my Lambda function with IAM auth.
Please help.

First, you need to create an IAM policy and a DB user as described here:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.html
Then you need to create an IAM role for your Lambda function and attach the IAM policy created above to it. Your Lambda function will need to execute with this role to be able to generate a temporary DB password for the DB user.
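For orientation, the policy essentially allows rds-db:connect on a specific DB user. Here is a hedged sketch of creating it with boto3; the region, account ID, DB resource ID, user name, and policy name are all placeholders:

import json
import boto3

iam = boto3.client("iam")

# Allows generating auth tokens for one specific DB user on one instance.
# All identifiers below are placeholders.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "rds-db:connect",
        "Resource": "arn:aws:rds-db:eu-west-1:123456789012:dbuser:db-ABCDEFGHIJKL0123456789/iam_db_user"
    }]
}

iam.create_policy(
    PolicyName="rds-iam-connect",
    PolicyDocument=json.dumps(policy_document),
)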
Finally, you can generate a temporary password for your DB user (created above) within your Lambda using a code snippet like this:
from urllib.parse import quote_plus
import boto3

def get_password(rds_hostname, db_user, aws_region=None, url_encoded=True):
    if not aws_region:
        aws_region = boto3.session.Session().region_name
        if not aws_region:
            raise Exception("Error: no aws_region given and the default region is not set!")
    rds_port = 5432
    if ":" in rds_hostname:
        split_hostname = rds_hostname.split(":")
        rds_hostname = split_hostname[0]
        rds_port = int(split_hostname[1])
    rds_client = boto3.client("rds")
    password = rds_client.generate_db_auth_token(Region=aws_region,
                                                 DBHostname=rds_hostname,
                                                 Port=rds_port,
                                                 DBUsername=db_user)
    if url_encoded:
        return quote_plus(password)
    else:
        return password
Do not assign the password to a long-lived variable. Get a new password on every run, since the token has limited validity (15 minutes) and your Lambda container might not be recycled before it expires...
Finally, create the DB connection string for whatever Python package you use (I would suggest some pure Python implementation, such as pg8000) from your RDS hostname, port, username and the temporary password obtained with the function above (<user>:<password>@<hostname>:<port>/<db_name>).
Connecting to the RDS instance might be a bit tricky. If you don't know how to set up VPCs properly, I would suggest you run your Lambda outside of a VPC and connect to the RDS instance over a public IP.
Additionally, you will probably need to enforce SSL connection and possibly include the RDS CA file in your Lambda deployment package. The exact way how to do this depends on what you use to connect (I could only describe how to do this with pymysql and sqlalchemy).
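To illustrate how the pieces fit together, here is a minimal sketch of connecting with pg8000 using the token from get_password() above. The hostname, user, database name and CA bundle path are placeholders, and the exact SSL setup may differ for your driver:

import ssl
import pg8000

db_host = "mydb.abc123xyz.eu-west-1.rds.amazonaws.com"  # placeholder
db_user = "iam_db_user"                                  # placeholder
db_name = "mydb"                                         # placeholder

# IAM authentication requires SSL; ship the RDS CA bundle with your Lambda.
ssl_context = ssl.create_default_context(cafile="rds-ca-bundle.pem")

conn = pg8000.connect(user=db_user,
                      host=db_host,
                      port=5432,
                      database=db_name,
                      # passed as a parameter rather than embedded in a URL,
                      # so the token does not need to be URL-encoded here
                      password=get_password(db_host, db_user, url_encoded=False),
                      ssl_context=ssl_context)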
Each of these steps could be described in a tutorial of its own, but knowing about them should be enough to get you started.
Good luck!

Related

Terraform resets credentials when importing existing RDS db resource

We have a bunch of existing resources on AWS that we want to import under Terraform management. One of these resources is an RDS db. So we wrote something like this:
resource "aws_db_instance" "db" {
engine = "postgres"
username = var.rds_username
password = var.rds_password
# other stuff...
}
variable "rds_username" {
type = string
default = "master_username"
}
variable "rds_password" {
type = string
default = "master_password"
sensitive = true
}
Note these are the existing master credentials. That's important. Then we did this:
terraform import aws_db_instance.db db-identifier
And then tried terraform plan a few times, tweaking the code to fit the existing resource until finally, terraform plan indicated there were no changes to be made (which was the goal).
However, once we ran terraform apply, it reset the master credentials of the DB instance. Worse, other resources that had previously connected to that DB using these exact credentials suddenly can't anymore.
Why is this happening? Is terraform encrypting the password behind the scenes and not telling me? Why can't other services connect to the DB using the same credentials? Is there any way to import an RDS instance without resetting credentials?

AWS Secrets manager accessible from EC2 instance but throws NoCredentialsError when running from the docker container deployed on the same instance

My Python application is deployed in a Docker container on an EC2 instance. Passwords are stored in Secrets Manager. At runtime, the application makes an API call to Secrets Manager to fetch the password and connect. Since we recreated the instance, it started throwing the error below:
botocore.exceptions.NoCredentialsError: Unable to locate credentials
My application code is:
import boto3

session = boto3.session.Session()
client = session.client(service_name='secretsmanager', region_name='us-east-1')
get_secret_value_response = client.get_secret_value(SecretId=secret_name)
If I run:
aws secretsmanager get-secret-value --secret-id abc
It works without any issues since IAM policy is appropriately attached to the EC2 instance.
I spent the last 2 days trying to troubleshoot this but am still stuck with no clarity on why this is breaking. Any tips or guidance would help.
The problem was the HttpTokens setting in the instance metadata options, which defaulted to required on the freshly recreated instance (enforcing IMDSv2). Reverting it to optional allowed boto3 to make an API call for instance metadata and inherit the instance's role.
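For reference, a sketch of that revert with boto3 (the instance ID is a placeholder). Alternatively, keeping HttpTokens as required and raising the hop limit also lets containers reach the metadata service:

import boto3

ec2 = boto3.client("ec2")

# Allow IMDSv1 again so code in the container can fetch role credentials.
# Alternatively, keep HttpTokens="required" and set HttpPutResponseHopLimit=2
# so IMDSv2 responses can cross the extra network hop into the container.
ec2.modify_instance_metadata_options(
    InstanceId="i-0123456789abcdef0",  # placeholder
    HttpTokens="optional",
    HttpEndpoint="enabled",
)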

How to create Aurora Serverless database cluster with secret manager in Terraform

I've been reading this page: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/rds_cluster
The example there is mainly for a provisioned database. I'm new to serverless databases; is there a Terraform example that creates a serverless Aurora database cluster (SQL db), using a secret stored in Secrets Manager?
Many thanks.
I'm guessing you want to randomize the master_password?
You can do something like this:
master_password = random_password.DatabaseMasterPassword.result
The SSM parameter can be created like so:
resource "aws_ssm_parameter" "SSMDatabaseMasterPassword" {
name = "database-master-password"
type = "SecureString"
value = random_password.DatabaseMasterPassword.result
}
The random password can be defined like so:
resource "random_password" "DatabaseMasterPassword" {
length = 24
special = true
override_special = "!#$%^*()-=+_?{}|"
}
A basic example of creating serverless Aurora is:
resource "aws_rds_cluster" "default" {
cluster_identifier = "aurora-cluster-demo"
engine = "aurora-mysql"
engine_mode = "serverless"
database_name = "myauroradb"
enable_http_endpoint = true
master_username = "root"
master_password = "chang333eme321"
backup_retention_period = 1
skip_final_snapshot = true
scaling_configuration {
auto_pause = true
min_capacity = 1
max_capacity = 2
seconds_until_auto_pause = 300
timeout_action = "ForceApplyCapacityChange"
}
}
I'm not sure what you want to do with Secrets Manager. It's not clear from your question, so I'm not providing an example for it.
The accepted answer will just create the Aurora RDS instance with a pre-set password -- but doesn't include Secrets Manager. It's a good idea to use Secrets Manager, so that your database and the applications (Lambdas, EC2, etc) can access the password from Secrets Manager, without having to copy/paste it to multiple locations (such as application configurations).
Additionally, by terraforming the password with random_password, it will be stored in plaintext in your terraform.tfstate file, which might be a concern. To resolve this concern, you'd also need to enable Secrets Manager Automatic Secret Rotation.
Automatic rotation is a somewhat advanced configuration with Terraform. It involves:
* Deploying a Lambda with access to the RDS instance and to the Secret
* Configuring the rotation via the aws_secretsmanager_secret_rotation resource.
AWS provides ready-to-use Lambdas for many common rotation scenarios. The specific Lambda will vary depending on the database engine (MySQL vs. Postgres vs. SQL Server vs. Oracle, etc.), as well as whether you'll be connecting to the database with the same credentials that you're rotating.
For example, when the secret rotates, the process is something like this (a code skeleton follows the list):
1. Secrets Manager invokes the rotation Lambda, passing the name of the secret as a parameter
2. The Lambda uses the details within the secret (DB Host, Port, Username, Password) to connect to RDS
3. The Lambda generates a new password and runs the "update password" command, which can vary based on DB engine
4. The Lambda updates Secrets Manager with the new credentials
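To make that flow concrete, here is a skeleton of a rotation Lambda handler. The SecretId, ClientRequestToken, and Step fields are what Secrets Manager passes on each invocation; the step bodies are left as stubs:

import boto3

def lambda_handler(event, context):
    # Secrets Manager invokes the rotation Lambda once per step.
    secret_id = event["SecretId"]
    token = event["ClientRequestToken"]
    step = event["Step"]

    if step == "createSecret":
        # Generate a new password and stage it as the AWSPENDING version.
        pass
    elif step == "setSecret":
        # Connect to the DB with the current secret and apply the new password.
        pass
    elif step == "testSecret":
        # Verify that the AWSPENDING credentials can actually log in.
        pass
    elif step == "finishSecret":
        # Promote the AWSPENDING version to AWSCURRENT.
        pass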
For all this to work, you'll also need to think about the permissions the Lambda will need -- such as network connectivity to the RDS instance and IAM permissions to read/write secrets.
As mentioned, it's somewhat advanced -- but it results in Secrets Manager being the only persistent location of the password. Once set up, it works quite nicely, and your apps can securely retrieve the password from Secrets Manager (one last tip: it's OK to cache the secret in your app to reduce Secrets Manager calls, but be sure to flush that cache on connection failures so that your apps will handle an automatic rotation).
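A minimal sketch of that caching pattern, assuming a hypothetical open_connection() helper that builds a DB connection from the credentials:

import json
import boto3

_cached_creds = None  # module-level cache, survives warm invocations

def get_db_credentials(secret_id, force_refresh=False):
    global _cached_creds
    if _cached_creds is None or force_refresh:
        client = boto3.client("secretsmanager")
        response = client.get_secret_value(SecretId=secret_id)
        _cached_creds = json.loads(response["SecretString"])
    return _cached_creds

def connect(secret_id):
    creds = get_db_credentials(secret_id)
    try:
        return open_connection(creds)  # hypothetical helper
    except Exception:
        # The password may have rotated since we cached it:
        # flush the cache and retry once with fresh credentials.
        creds = get_db_credentials(secret_id, force_refresh=True)
        return open_connection(creds)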

Passing in IAM credentials when using the Aurora Serverless Data API?

I am trying to figure out how to pass in static IAM AWS credentials when using the AWS Data API to interact with an Aurora Serverless db.
I am using the AWS Python Boto library and I read data from a table like this (which by default uses the credentials of the default IAM user that is defined in my ~/.aws/credentials file):
rds_client = boto3.client('rds-data')
rds_client.execute_statement(
    secretArn=self.db_credentials_secrets_store_arn,
    database=self.database_name,
    resourceArn=self.db_cluster_arn,
    sql='SELECT * FROM TestTable;',
    parameters=[])
This works successfully.
But I want to be able to pass in an AWS Access Key and Secret Key as parameters to the execute_statement call, something like:
rds_client.execute_statement(
    accessKey='XXX',
    secretKey='YYY',
    secretArn=self.db_credentials_secrets_store_arn,
    database=self.database_name,
    resourceArn=self.db_cluster_arn,
    sql='SELECT * FROM TestTable;',
    parameters=[])
But that does not work.
Any ideas on how I can achieve this?
Thanks!
In order to accomplish this, you will need to create a new function that takes the access key and the secret key, creates a client for that user, and then makes the call.
def execute_statement_with_iam_user(accessKey, secretKey):
    # Note: execute_statement is part of the 'rds-data' client, not 'rds'
    rds_client = boto3.client(
        'rds-data',
        aws_access_key_id=accessKey,
        aws_secret_access_key=secretKey
    )
    rds_client.execute_statement(
        secretArn=self.db_credentials_secrets_store_arn,
        database=self.database_name,
        resourceArn=self.db_cluster_arn,
        sql='SELECT * FROM TestTable;',
        parameters=[])

execute_statement_with_iam_user(accessKey, secretKey)
FYI, AWS does not recommend hard-coding your credentials like this. What you should be doing is assuming a role with a temporary session. For this, you would need to look into the STS client and creating roles for assumption.
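As a sketch of that approach, here is how you might assume a role via STS and use its temporary credentials (the role ARN and session name are placeholders):

import boto3

sts = boto3.client("sts")
assumed = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/rds-data-access",  # placeholder
    RoleSessionName="rds-data-session",
)
creds = assumed["Credentials"]

# Temporary credentials include a session token, which must be passed too.
rds_client = boto3.client(
    "rds-data",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)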

Does my application need to ask for a role on ec2 instance to configure the session or leave it empty?

I'm trying to use the aws-sdk-go in my application. It's running on an EC2 instance. Now, the Configuring Credentials section of the docs, https://docs.aws.amazon.com/sdk-for-go/api/, says it will look in:
* Environment Credentials - Set of environment variables that are useful when sub processes are created for specific roles.
* Shared Credentials file (~/.aws/credentials) - This file stores your credentials based on a profile name and is useful for local development.
* EC2 Instance Role Credentials - Use EC2 Instance Role to assign credentials to applications running on an EC2 instance. This removes the need to manage credential files in production.
Wouldn't the best order be the reverse? But my main question is: do I need to ask the instance whether it has a role, and then use that role to set up the credentials? This is where I'm not sure what I need to do and how.
I did try a simple test of creating an empty config, essentially only setting the region, and running it on the instance with the role; it seems to have "worked", but in this case I am not sure if I need to explicitly set the role or not.
awsSDK.Config{
    Region:     awsSDK.String(a.region),
    MaxRetries: awsSDK.Int(maxRetries),
    HTTPClient: http.DefaultClient,
}
I just want to confirm whether this is the proper way of doing it or not. My thinking is that I need to do something like the following:
role = use sdk call to get role on machine
set awsSDK.Config { Credentials: credentials form of role,
...
}
issue service command with returned client.
Any more docs/pointers would be great!
I have never used the Go SDK, but the AWS SDKs I have used automatically fall back to the EC2 instance role if credentials are not found from any other source.
Here's an AWS blog post explaining the approach AWS SDKs follow when fetching credentials: https://aws.amazon.com/blogs/security/a-new-and-standardized-way-to-manage-credentials-in-the-aws-sdks/. In particular, see this:
If you use code like this, the SDKs look for the credentials in this order:

1. In environment variables. (Not the .NET SDK, as noted earlier.)
2. In the central credentials file (~/.aws/credentials or %USERPROFILE%\.aws\credentials).
3. In an existing default, SDK-specific configuration file, if one exists. This would be the case if you had been using the SDK before these changes were made.
4. For the .NET SDK, in the SDK Store, if it exists.
5. If the code is running on an EC2 instance, via an IAM role for Amazon EC2. In that case, the code gets temporary security credentials from the instance metadata service; the credentials have the permissions derived from the role that is associated with the instance.
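(For comparison, boto3 behaves the same way: a client created with no explicit credentials walks this same chain and, on EC2, ends at the instance role.)

import boto3

# No credentials are passed anywhere: boto3 falls through environment
# variables and ~/.aws/credentials, and on EC2 ends up using the
# instance role via the metadata service.
s3 = boto3.client("s3", region_name="us-east-1")
print(s3.list_buckets()["Buckets"])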
In my apps, when I need to connect to AWS resources, I tend to use an access key and secret key that have specific predefined IAM roles. Assuming I have those two, the code I use to create a session is:
awsCredentials := credentials.NewStaticCredentials(awsAccessKeyID, awsSecretAccessKey, "")

awsSession = session.Must(session.NewSession(&aws.Config{
    Credentials: awsCredentials,
    Region:      aws.String(awsRegion),
}))
When I use this, the two keys are usually supplied as environment variables (for example, when I deploy to a Docker container).
A complete example: https://github.com/retgits/flogo-components/blob/master/activity/amazons3/activity.go