Replacement for EBS Environment Variables in CodeDeploy

I have been using Elastic Beanstalk to deploy my .NET-based Web API service, and I use environment variables to push settings like SecretKey, AccessKey, DBPassword, etc.
Now we are moving from EBS to CodeDeploy, where we do not have the option to push these settings into the web.config file of my application. After exploring a bit, I found that we can make use of Parameter Store in AWS to store the DBPassword and others. However, in order to read from the Parameter Store, we seem to need the SecretKey and AccessKey. So what would be the best way to achieve this in CodeDeploy?
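Note: on an EC2 instance, AWS credentials are normally supplied by the instance profile role, so a CodeDeploy hook can read the Parameter Store without any hard-coded SecretKey/AccessKey. A minimal sketch (bash; on a Windows instance the same call works from PowerShell), assuming a hypothetical parameter /myapp/DBPassword and an instance role allowed to call ssm:GetParameter:
#!/bin/bash
# CodeDeploy hook script: credentials come from the EC2 instance profile,
# so no SecretKey/AccessKey has to be shipped with the application.
DB_PASSWORD=$(aws ssm get-parameter \
  --name "/myapp/DBPassword" \
  --with-decryption \
  --query 'Parameter.Value' \
  --output text)
# The value can then be written into web.config (or an env file) here.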

Related

Is it possible to mount (S3) file into AWS::ECS::TaskDefinition ContainerDefinition using cloudformation?

I have an ECS cluster that is running task definitions with a single container inside each group. I'm trying to add some fancy observability to my application by introducing OpenTelemetry. Following the AWS docs I found https://github.com/aws-observability/aws-otel-collector, which is the AWS version of the OTEL collector. This collector needs a config file (https://github.com/aws-observability/aws-otel-collector/blob/main/config/ecs/ecs-default-config.yaml) that specifies stuff like receivers, exporters, etc. I need to be able to create my own config file with a 3rd-party exporter (I also need to add my secret API key somewhere inside there - maybe it can go to Secrets Manager and get mounted as an env var :shrug:).
I'm wondering if this is doable without having to build my own image with the config baked in somewhere, purely using CloudFormation (what I use to deploy my app) and other Amazon services.
The plan is to add this container beside each app container (inside the task definition) [and yeah, I know this is overkill, but for now simple > perfect].
Building an additional image would require some significant changes to the CI/CD, so if I can go without those it will be awesome.
You can't mount an S3 bucket in ECS. S3 isn't a file system; it is object storage. You would need to either switch to EFS, which can be mounted by ECS, or add something to the startup script of your Docker image to download the file from S3.
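For the second option, a minimal sketch of such a startup script (the bucket, key and target path are made up, and the task role must allow s3:GetObject):
#!/bin/sh
# Hypothetical startup wrapper: pull the collector config from S3 first.
aws s3 cp "s3://my-config-bucket/otel-config.yaml" /etc/otel/config.yaml
# Then start the collector pointing at the downloaded file (the flag follows
# the upstream OTEL collector convention; adjust to the image you actually run).
exec /awscollector --config /etc/otel/config.yaml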
I would recommend checking the docs for AWS ADOT. You will find that it supports the config variable AOT_CONFIG_CONTENT (doc). So you don't need a config file, only a config env variable. That plays very well with the AWS ecosystem, because you can use AWS Systems Manager Parameter Store and/or AWS Secrets Manager, where you can store the otel collector configuration (doc).
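A minimal CloudFormation sketch of that idea, assuming the collector config has been stored in an SSM parameter (the parameter name and execution role are hypothetical, and the execution role needs permission to read the parameter):
TaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    ExecutionRoleArn: !GetAtt TaskExecutionRole.Arn  # must allow ssm:GetParameters
    ContainerDefinitions:
      - Name: aws-otel-collector
        Image: public.ecr.aws/aws-observability/aws-otel-collector:latest
        Secrets:
          # ECS injects the parameter's value as the AOT_CONFIG_CONTENT env
          # var, so no config file has to be baked into the image.
          - Name: AOT_CONFIG_CONTENT
            ValueFrom: /myapp/otel-collector-config  # SSM parameter name (made up)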

How can an AWS service update the corresponding Redis cluster?

Currently I have several Redis clusters for different environments. In my code, I write data to Redis inside my Lambda function. If I deploy this Lambda to my AWS account, how can it update the corresponding Redis cluster, since every environment has its own Redis cluster?
You can have a config file with the Redis cluster names and their hostnames, and in code you can pick a different cluster based on the env provided.
If you are using roles in the AWS account for each environment, then you should also do STS on the required role.
Your resource files are just files. They will be loaded by your application following different strategies, depending on your application's framework and how it has been configured.
Some applications apply the correct configuration at build time, by passing a flag to the build, for example --uat or --prod. If your application is one of those, you can just build the correct version and push it to AWS. It will connect to the correct Redis, given that you put the Redis configuration into the ENV files.
The other option is to use an environment variable, as in the sketch below.
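A minimal sketch of the environment-variable approach in Python (the APP_ENV variable name and the hostnames are made up):
import os
import redis  # redis-py

# Hypothetical mapping from environment name to Redis endpoint; in practice
# this could live in a config file shipped with the code.
REDIS_HOSTS = {
    "dev": "dev-redis.example.internal",
    "uat": "uat-redis.example.internal",
    "prod": "prod-redis.example.internal",
}

def handler(event, context):
    # APP_ENV is set per deployment, e.g. as a Lambda environment variable.
    env = os.environ.get("APP_ENV", "dev")
    client = redis.Redis(host=REDIS_HOSTS[env], port=6379)
    client.set("last_event_id", event.get("id", "unknown"))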

Secret info & EC2 CodeDeploy?

With secrets such as my MongoDB password and Firebase admin password in my NodeJS server code, I am wondering how I should go about deploying this to EC2 (and, in the future, to multiple EC2 instances with CodeDeploy / Auto Scaling).
Is there a common way to go about this, i.e. keeping your credentials secure? You could argue that the security layer is at the instance: make sure that there is no unwanted access to your instance(s) and you should be good. But is this really the way to go?
Given a service that has a secret password in its config file called config.json, create a template config file called config-development.json containing a placeholder:
password=[PASSWORD]
During CodeDeploy, there are scripts, or hooks, that run during the deployment cycle, e.g. BeforeInstall, Install, AfterInstall. During the AfterInstall script execution, get the secret from the Parameter Store via the CLI, store it in a variable, and then replace the [PASSWORD] value in the JSON file using sed or any search-and-replace command line tool.
Rename the resulting file to config.json, and restart the service.
This approach will allow you to keep secrets out of your repo, and use only values from the Parameter Store.
See https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html#reference-appspec-file-structure-hooks-list
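A minimal sketch of such an AfterInstall hook, assuming a hypothetical parameter /myapp/db-password and a systemd service called myservice:
#!/bin/bash
set -euo pipefail

# Fetch the secret via the instance profile - no hard-coded keys needed.
PASSWORD=$(aws ssm get-parameter \
  --name "/myapp/db-password" \
  --with-decryption \
  --query 'Parameter.Value' \
  --output text)

# Replace the placeholder and promote the template to the live config
# (assumes the password contains no characters special to sed).
sed "s/\[PASSWORD\]/${PASSWORD}/" config-development.json > config.json

# Restart the service so it picks up the new config.
systemctl restart myservice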

AWS splitting resources between UAT and PROD

I'm using AWS Elastic Beanstalk to deploy a system. That all works fine. If I want UAT and PROD environments, I can just set up 2 different Elastic Beanstalk apps; this also works fine. Now my question: say my app uses DynamoDB or S3 buckets (something outside of the EB deployment), how do I have different versions of these for UAT and PROD?
Taking DynamoDB: you have one DynamoDB, not one per EB deployment. My code would write to a 'users' table, but how do you stop UAT and PROD from using the same users table given there is only one DynamoDB?
Same with S3 buckets? What you ideally want is a prod.mybucket.xxx and uat.mybucket.xxx.
I'm clearly missing something, can you tell me what? :)
You can use Elastic Beanstalk environment variables (this example is for Java, but it's similar in other languages). Use one to track the environment type (e.g. PARAM1=dev or PARAM1=uat), then name your other resources (buckets / Dynamo tables) with that prefix:
s3 bucket -> prod-myapp-bucket / uat-myapp-bucket
In your code, just grab PARAM1 at bootstrap and bring up your AWS resources that way. This is how Beanstalk lets your application know which database to connect to (in Java it's JDBC_CONNECTION_STRING).
OR
You could use the AWS API to query the actual Elastic Beanstalk environment name to do something similar (depending on what language you're using, it's something like 'DescribeEnvironments').
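A minimal sketch of the first approach, here in Python with boto3 (the bucket and table names are made up):
import os
import boto3

# PARAM1 is set per environment in the Elastic Beanstalk configuration,
# e.g. PARAM1=uat or PARAM1=prod.
env = os.environ.get("PARAM1", "dev")

# Prefix every shared resource with the environment name.
s3 = boto3.resource("s3")
bucket = s3.Bucket(f"{env}-myapp-bucket")      # e.g. uat-myapp-bucket

dynamodb = boto3.resource("dynamodb")
users_table = dynamodb.Table(f"{env}-users")   # e.g. prod-users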

AWS Elastic Beanstalk change RDS Endpoint

How do I change the configured RDS endpoint of an AWS Elastic Beanstalk environment?
E.g. after the RDS database was deleted or should be replaced with a new RDS database.
Update
The topic remains complex, and the AWS Elastic Beanstalk (EB) documentation could still do a better job of clarifying the available options. The question has been about how to change an RDS endpoint, which seems to be read in two different ways:
One could interpret it as being about how to attach an existing, externally managed RDS endpoint to an existing (not new!) EB environment - this is indeed not possible; rather, one would need to resort to handling this scenario from within the app itself, as e.g. outlined in the section Using an Existing Amazon RDS DB Instance with Python within Using Amazon RDS with Python.
Rather, the OP asked about how to do that after the RDS database was deleted or should be replaced with a new RDS database, i.e. the RDS endpoint change is implied in the process of creating a new RDS database for an existing EB environment that already had one - this is indeed possible by means of the DBSnapshotIdentifier option value, which denotes the identifier for the DB snapshot to restore from. Once again the EB docs aren't exactly conclusive about what this means; however, EB is using AWS CloudFormation under the hood, and the resp. entry for AWS::RDS::DBInstance - DBSnapshotIdentifier provides more details:
By specifying this property, you can create a DB instance from the specified DB snapshot. If the DBSnapshotIdentifier property is an empty string or the AWS::RDS::DBInstance declaration has no DBSnapshotIdentifier property, the database is created as a new database. If the property contains a value (other than empty string), AWS CloudFormation creates a database from the specified snapshot. If a snapshot with the specified name does not exist, the database creation fails and the stack rolls back.
In other words, the typical result of updating any of the General Option Values from the namespace aws:rds:dbinstance for an existing EB environment is the creation of a respectively adjusted RDS instance managed by EB, and thus a new RDS endpoint.
A specific sub-scenario is the use of DBSnapshotIdentifier, which yields a new RDS instance managed by EB, based on the referenced snapshot, and can therefore be used to migrate (rather than attach) an existing externally managed RDS instance, albeit with considerable downtime depending on the snapshot size.
Initial Answer
While unfortunately not specifically addressed within Configuring Databases with AWS Elastic Beanstalk, the AWS Elastic Beanstalk settings for an optional Amazon RDS database are handled via Option Values, see namespace aws:rds:dbinstance within General Options.
While the AWS Management Console hides many of those option values behind its UI, you can specify them explicitly when using the API via other means, both when creating an environment and when updating one (which is how you would change any settings of an RDS database instance) - see e.g. the parameter --option-settings for update-environment from the AWS Command Line Interface:
If specified, AWS Elastic Beanstalk updates the configuration set associated with the running environment and sets the specified configuration options to the requested value.
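For example, a sketch of such an update via the CLI (the environment and snapshot names are made up):
aws elasticbeanstalk update-environment \
  --environment-name my-env \
  --option-settings Namespace=aws:rds:dbinstance,OptionName=DBSnapshotIdentifier,Value=my-db-snapshot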
I created a config file under the .ebextensions folder with the following content:
option_settings:
  - namespace: aws:rds:dbinstance
    option_name: DBSnapshotIdentifier
    value: <name-of-snapshot>
Upload and deploy, and it will create a new RDS DB using this snapshot.
Hot-swapping out the data tier within an environment is discouraged because it breaks the integrity of the environment. What you want to do is clone the environment, with a restored snapshot of the RDS instance. This means you'll have an identical environment with a different URL 'host', and if everything went without a hitch, you can swap the environment URLs to initiate a DNS swap.
After the swap happens and everything is good to go, you can proceed to terminate the old environment.
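The swap itself can be done from the console or via the CLI; a sketch, with made-up environment names:
aws elasticbeanstalk swap-environment-cnames \
  --source-environment-name myapp-old \
  --destination-environment-name myapp-new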
Follow the steps in the resolution to:
Use an Elastic Beanstalk blue (environment A)/green (environment B) deployment to decouple an RDS DB instance from environment A.
Create a new Elastic Beanstalk environment (environment B) with the necessary information to connect to the RDS DB instance.
Check out the official answer below for a more detailed solution:
https://aws.amazon.com/premiumsupport/knowledge-center/decouple-rds-from-beanstalk/?nc1=h_ls