AWS RDS connection string - amazon-web-services

I am trying to automate my RDS connection string with Terraform, but the database identifier is randomly generated per region in each AWS account.
Is it possible to know the database identifier beforehand? And if so, is there a way I can automate it?
Below is my current script:
sudo socat TCP-LISTEN:5432,reuseaddr,fork TCP4:env0.**cvhjdfmcu7ed**.us-east-1.rds.amazonaws.com:5432
Currently I'm using the script below to feed variables into Terraform via my user data .tpl file:
sudo nohup socat TCP-LISTEN:${port},reuseaddr,fork TCP4:${name}.${connection}.${aws_region}.rds.amazonaws.com:${port}
Can someone suggest a way to automate the ${connection} variable, so that I can deploy this in any AWS account and region without having to worry about what the identifier will be?

You would use the endpoint attribute of either the aws_db_instance or aws_rds_cluster Terraform resource to access the hostname:port RDS endpoint.
If you are not creating/managing the RDS instance in the same Terraform configuration where you need the endpoint address, use the corresponding Terraform data source instead. The data source looks up the RDS information and makes the endpoint value available to the rest of your Terraform template.
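For example, a minimal sketch — the resource names, engine settings, and the userdata.tpl parameters here are assumptions, not taken from the question:

```hcl
# Case 1: the RDS instance is managed in this same configuration.
resource "aws_db_instance" "env0" {
  identifier          = "env0"
  engine              = "postgres"
  instance_class      = "db.t3.micro"
  allocated_storage   = 20
  username            = "postgres"
  password            = var.db_password
  skip_final_snapshot = true
}

# Case 2: the instance is managed elsewhere - look it up by its
# (known, non-random) identifier; the account/region-specific part
# of the hostname comes back in the result.
data "aws_db_instance" "env0" {
  db_instance_identifier = "env0"
}

# Feed the full host:port endpoint into the user data template,
# instead of assembling it from ${name}/${connection}/${aws_region}:
resource "aws_instance" "proxy" {
  # ... ami, instance_type, etc. ...
  user_data = templatefile("${path.module}/userdata.tpl", {
    endpoint = data.aws_db_instance.env0.endpoint
  })
}
```

The point is that the random part of the hostname never needs to be known in advance: Terraform reads it from the endpoint attribute at apply time.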

Related

How to migrate AWS ECS from one account to another (in a different Region/AZ)?

Docker containers are hosted with AWS ECS inside a VPC in a particular region. How do I migrate them to a different VPC in a different region?
Unfortunately, there isn't a straightforward method to migrate a service from one region to another. To accomplish this, you'll need to ensure that you have a VPC and ECS cluster set up in the target region. Then, you can create the service within that cluster and VPC.
If you're using CloudFormation or Terraform for configuration as code, simply update the region and relevant definitions, then redeploy. Otherwise, you can use the AWS CLI to extract the full definition of your cluster and service and then recreate it in the target region. For more information, see the AWS CLI ECS reference: https://docs.aws.amazon.com/cli/latest/reference/ecs/index.html
Also, make sure that any Docker images stored in a private registry are accessible in the target region. Best of luck with your migration!
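If you go the CLI route, the extraction and recreation can be sketched roughly like this — cluster, service, task definition, and region names are all placeholders, and the exported JSON must be trimmed of read-only fields before re-registering:

```shell
# Export the task definition from the source region:
aws ecs describe-task-definition \
    --task-definition my-task \
    --region us-east-1 \
    --query 'taskDefinition' > my-task.json

# Edit my-task.json to remove read-only fields such as
# taskDefinitionArn, revision, status, requiresAttributes,
# and compatibilities, then recreate it in the target region:
aws ecs register-task-definition \
    --cli-input-json file://my-task.json \
    --region eu-west-1

# Recreate the cluster and service there:
aws ecs create-cluster --cluster-name my-cluster --region eu-west-1
aws ecs create-service \
    --cluster my-cluster \
    --service-name my-service \
    --task-definition my-task \
    --desired-count 2 \
    --region eu-west-1
```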

Manipulate a file before provisioning in terraform

So I have created a terraform script that does the following:
Spins up an EC2 instance
Copies over some files
Runs some remote commands to install software from repos
Creates an Elasticsearch Service domain
Now I need to configure the EC2 instance with the endpoint I get back from the Elasticsearch domain, so the application uses the right endpoint (currently it has some default value).
How can I pass the endpoint value into the file and then copy it over to the EC2 instance? What would be the recommended approach?
Thanks in advance.
Terraform derives the order automatically when you refer to attributes of another resource. You can use the file provisioner to create files on the EC2 instance.
If the EC2 instance doesn't need to be created before the Elasticsearch domain, you can render the file with the template mechanism based on the values of the Elasticsearch resource, then copy it up to the EC2 instance.
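As a sketch of that approach using the templatefile function (the modern successor of the template provider) — the domain name, AMI, template path, and connection details are all illustrative:

```hcl
resource "aws_elasticsearch_domain" "search" {
  domain_name           = "my-domain"
  elasticsearch_version = "7.10"
}

resource "aws_instance" "app" {
  ami           = var.ami_id
  instance_type = "t3.micro"

  # Referencing the domain's endpoint creates the implicit dependency,
  # so the domain exists before this instance is provisioned.
  provisioner "file" {
    content = templatefile("${path.module}/app.conf.tpl", {
      es_endpoint = aws_elasticsearch_domain.search.endpoint
    })
    destination = "/etc/myapp/app.conf"
  }

  connection {
    type        = "ssh"
    host        = self.public_ip
    user        = "ec2-user"
    private_key = file(var.ssh_key_path)
  }
}
```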

Running Python DynamoDB on an EC2 instance

I want to use DynamoDB in an EC2 instance in Python. I have tested it locally, and set up my DynamoDB resource locally by using:
dynamodb = boto3.resource('dynamodb',
                          aws_access_key_id=ACCESS_ID,
                          aws_secret_access_key=ACCESS_KEY,
                          region_name='us-west-2',
                          endpoint_url='http://localhost:8000')
I am wondering if, once it is running on an EC2 instance, the endpoint_url should be changed (to something different than http://localhost:8000), or if I should set up the resource in a completely different way. Thank you!
Firstly, you should avoid putting credentials in your source code. This can lead to security breaches and makes credentials difficult to rotate. Instead:
When running on an Amazon EC2 instance: assign an IAM Role to the instance. The code will automatically find the credentials.
When running on your own system: store credentials in the ~/.aws/credentials file (or run aws configure to create the file).
If you wish to connect to the 'real' DynamoDB service, leave out the endpoint_url parameter. (I assume that you have been using DynamoDB Local, which runs on your own computer; the endpoint_url is only needed to point at it.)
Also, it is a good idea to include a region, such as:
dynamodb = boto3.resource('dynamodb', region_name='ap-southeast-2')

AWS Elastic Beanstalk change RDS Endpoint

How do I change the configured RDS endpoint of an AWS Elastic Beanstalk environment?
E.g. after the RDS database was deleted or should be replaced with a new RDS database.
Update
The topic remains complex and the AWS Elastic Beanstalk (EB) documentation could still do a better job to clarify available options. The question has been about how to change an RDS endpoint, which seems to be read in two different ways:
One could interpret it as asking how to attach an existing, externally managed RDS endpoint to an existing (not new!) EB environment. This is indeed not possible; rather, one would need to handle this scenario from within the app itself, as e.g. outlined in the section Using an Existing Amazon RDS DB Instance with Python within Using Amazon RDS with Python.
Rather, the OP asked how to do that after the RDS database was deleted or should be replaced with a new RDS database, i.e. the RDS endpoint change is implied in the process of creating a new RDS database for an existing EB environment that already had one. This is indeed possible by means of the DBSnapshotIdentifier option value, which denotes "The identifier for the DB snapshot to restore from." Once again the EB docs aren't exactly conclusive about what this means; however, EB uses AWS CloudFormation under the hood, and the corresponding entry for AWS::RDS::DBInstance - DBSnapshotIdentifier provides more details:
By specifying this property, you can create a DB instance from the specified DB snapshot. If the DBSnapshotIdentifier property is an empty string or the AWS::RDS::DBInstance declaration has no DBSnapshotIdentifier property, the database is created as a new database. If the property contains a value (other than an empty string), AWS CloudFormation creates a database from the specified snapshot. If a snapshot with the specified name does not exist, the database creation fails and the stack rolls back.
In other words, the typical result of updating any of the General Option Values from namespace aws:rds:dbinstance for an existing EB environment is the creation of a respectively adjusted RDS instance managed by EB, and thus a new RDS endpoint.
A specific sub scenario is the use of DBSnapshotIdentifier, which yields a new RDS instance managed by EB based on the referenced snapshot and can therefore be used to migrate (rather than attach) an existing externally managed RDS instance, albeit with considerable downtime based on the snapshot size.
Initial Answer
While unfortunately not specifically addressed within Configuring Databases with AWS Elastic Beanstalk, the AWS Elastic Beanstalk settings for an optional Amazon RDS database are handled via Option Values, see namespace aws:rds:dbinstance within General Options.
While the AWS Management Console hides many of those option values behind its UI, you can specify them explicitly when using the API via other means, both when creating an environment and when updating one (which is how you would change any settings of an RDS database instance) - see e.g. the --option-settings parameter of update-environment in the AWS Command Line Interface:
If specified, AWS Elastic Beanstalk updates the configuration set associated with the running environment and sets the specified configuration options to the requested value.
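For instance, switching an environment to a new RDS instance restored from a snapshot could look roughly like this — the environment and snapshot names are placeholders:

```shell
aws elasticbeanstalk update-environment \
    --environment-name my-env \
    --option-settings \
        Namespace=aws:rds:dbinstance,OptionName=DBSnapshotIdentifier,Value=my-db-snapshot
```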
I created a config file under the .ebextensions folder with the following content:
option_settings:
  - namespace: aws:rds:dbinstance
    option_name: DBSnapshotIdentifier
    value: <name-of-snapshot>
Upload and deploy, and it will create a new RDS DB using this snapshot.
Hot-swapping out the data tier within an environment is discouraged because it breaks the integrity of the environment. What you want to do is clone the environment with a restored snapshot of the RDS instance. This gives you an identical environment with a different URL host, and if everything went without a hitch, you can then swap environment URLs to initiate a DNS swap.
After the swap happens and everything is good to go, you can proceed to terminate the old environment.
Follow the steps in the resolution to:
Use an Elastic Beanstalk blue (environment A)/green (environment B) deployment to decouple an RDS DB instance from environment A.
Create a new Elastic Beanstalk environment (environment B) with the necessary information to connect to the RDS DB instance.
See the official AWS Knowledge Center article for a more detailed solution:
https://aws.amazon.com/premiumsupport/knowledge-center/decouple-rds-from-beanstalk/?nc1=h_ls

Tag Nodes With Chef Roles Using cloudformation

So my goal is to launch, say, 100 nodes in the cloud using CloudFormation, and I would like to tag the nodes with Chef roles within my CloudFormation script instead of using knife. I have set up my CloudFormation nodes to automatically register themselves with the Chef server, and I want them to report their role to the Chef server so that it installs the proper cookbooks on each node (depending on the node's roles). I know this is possible with knife, but I want to bury the node role within my CloudFormation script.
How can I do so?
I do this with chef. I usually put a json file in S3 which describes the roles the machine needs to use. I create an IAM user in CloudFormation which can access the S3 bucket. Then, in my user data script, I first grab the file from S3 and then run chef-client -j /path/to/json/file. I do the same thing with the validation key, fwiw, so that the node can register itself.
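The JSON handed to chef-client -j is a standard first-boot attributes file; a minimal sketch (the role name here is illustrative) looks like:

```json
{
  "run_list": ["role[webserver]"]
}
```

chef-client picks up the run_list from this file on its first run, e.g. sudo chef-client -j /etc/chef/first-boot.json, so the role assignment lives entirely in the file your CloudFormation user data pulls from S3.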
HTH
I use Puppet, which is of course slightly different to Chef, but the same theory should apply. I send a JSON object as the user data when launching a new instance (also via CloudFormation), then access this data in Puppet to do dynamic configuration.
Puppet handles a lot of this automatically - e.g. it will automatically set the FACTER_EC2_USER_DATA environment variable for me, so I just need to parse the JSON into variables such as $role and $environment, at which point I can dynamically decide which role the instance should be assigned.
So as long as you can find some way to access the user data within Chef, the same approach should work.