how can aws service update the corresponding redis cluster - amazon-web-services

Currently I have several Redis clusters for different environments. In my code, I write data to Redis inside my Lambda function. If I deploy this Lambda to my AWS account, how can it update the corresponding Redis cluster, since every environment has its own Redis cluster?

You can have a config file with the Redis cluster names and their hostnames, and in code you can pick the right cluster based on the environment provided.
If you are using a separate role per environment in the AWS account, then you should also assume the required role via STS.
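A minimal sketch of that idea in Python, assuming a hypothetical config map keyed by environment name plus an optional per-environment role ARN (the hostnames, ARNs, and the ENV variable are illustrative, and redis-py is assumed to be packaged with the Lambda):

```python
import os
import boto3
import redis  # assumes redis-py is bundled with the Lambda package

# Hypothetical per-environment configuration; hostnames and role ARNs are illustrative.
REDIS_CONFIG = {
    "dev":  {"host": "dev-redis.example.internal",  "role_arn": "arn:aws:iam::111111111111:role/dev-app"},
    "prod": {"host": "prod-redis.example.internal", "role_arn": "arn:aws:iam::222222222222:role/prod-app"},
}

def get_redis_client():
    env = os.environ.get("ENV", "dev")  # environment name provided to the Lambda
    cfg = REDIS_CONFIG[env]

    # Optional: assume the environment-specific role for any AWS API calls
    # that must run under that role (the Redis connection itself does not use it).
    sts = boto3.client("sts")
    creds = sts.assume_role(RoleArn=cfg["role_arn"], RoleSessionName="lambda-redis")["Credentials"]

    return redis.Redis(host=cfg["host"], port=6379)
```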

Your resource files are just files. They will be loaded by your application following different strategies, depending on your application's framework and how it has been configured.
Some applications apply the correct configuration at build time, for example via a flag passed to the build such as --uat or --prod. If your application is one of those, you can just build the correct version and push it to AWS. It will connect to the correct Redis, given that you put the Redis configuration into the env files.
The other option is to use environment variables.
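For the environment-variable route, a minimal sketch, assuming the Lambda is deployed with a REDIS_HOST variable set per environment (the variable names are illustrative):

```python
import os
import redis  # assumes redis-py is bundled with the function

# REDIS_HOST/REDIS_PORT are set differently on each environment's Lambda configuration.
r = redis.Redis(host=os.environ["REDIS_HOST"],
                port=int(os.environ.get("REDIS_PORT", "6379")))

def handler(event, context):
    r.set("last_event_id", str(event.get("id", "unknown")))
    return {"status": "ok"}
```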

Related

Is it possible to mount (S3) file into AWS::ECS::TaskDefinition ContainerDefinition using cloudformation?

I have this ECS cluster that is running task definitions with a single container inside each group. I'm trying to add some fancy observability to my application by introducing OpenTelemetry. Following the AWS docs I found https://github.com/aws-observability/aws-otel-collector which is the AWS version of the OTEL collector. This collector needs a config file (https://github.com/aws-observability/aws-otel-collector/blob/main/config/ecs/ecs-default-config.yaml) that specifies stuff like receivers, exporters, etc. I need to be able to create my own config file with a 3rd party exporter (I also need to add my secret API key somewhere inside there - maybe it can go to Secrets Manager and get mounted as an env var :shrug:).
I'm wondering if this is doable without having to build my own image with the config baked somewhere inside, purely using CloudFormation (what I use to deploy my app) and other Amazon services?
The plan is to add this container beside each app container (inside the task definition) [and yeah I know this is overkill but for now simple > perfect].
Building an additional image would require some significant changes to the CI/CD, so if I can go without those it will be awesome.
You can't mount an S3 bucket in ECS. S3 isn't a file system, it is object storage. You would need to either switch to EFS, which can be mounted by ECS, or add something to the startup script of your docker image to download the file from S3.
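If you go the startup-download route, a minimal sketch of the idea in Python (bucket, key, and destination path are illustrative; in practice this could equally be an aws s3 cp line in the image's entrypoint script):

```python
import boto3

# Download the collector config from S3 before starting the main process.
# The task role must allow s3:GetObject on this bucket/key (names are illustrative).
s3 = boto3.client("s3")
s3.download_file("my-config-bucket", "otel/collector-config.yaml",
                 "/etc/otel/collector-config.yaml")
```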
I would recommend checking the docs for AWS ADOT. You will find that it supports the config variable AOT_CONFIG_CONTENT (doc). So you don't need a config file, only a config environment variable. That plays very well with the AWS ecosystem, because you can use AWS Systems Manager Parameter Store and/or AWS Secrets Manager, where you can store the otel collector configuration (doc).
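A minimal sketch of that approach with boto3, assuming the collector config YAML has already been stored as an SSM parameter (the parameter ARN, family, roles, and image tag are illustrative); the task definition passes it to the collector container via the secrets field so it ends up in AOT_CONFIG_CONTENT:

```python
import boto3

ecs = boto3.client("ecs")

# Illustrative ARN; the SSM parameter is assumed to hold the full collector YAML.
OTEL_CONFIG_PARAM_ARN = "arn:aws:ssm:us-east-1:123456789012:parameter/otel/collector-config"

ecs.register_task_definition(
    family="my-app-with-otel",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",
    memory="1024",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    containerDefinitions=[
        # ...your existing app container definition goes here...
        {
            "name": "aws-otel-collector",
            "image": "public.ecr.aws/aws-observability/aws-otel-collector:latest",
            "essential": True,
            # The execution role must be allowed to read this parameter.
            "secrets": [
                {"name": "AOT_CONFIG_CONTENT", "valueFrom": OTEL_CONFIG_PARAM_ARN}
            ],
        },
    ],
)
```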

AWS how to handle programmatic credentials when building a Docker container

I have a .NET Core app in which I'm using services such as S3, RDS and Dynamo. Initially every client instance was initialized using the Access_KEY and Secret_Access_Key directly, so basically these two were stored in a configuration file. Recently we've started a process to automate the AWS infrastructure creation using Terraform; we are trying to migrate from managed containers (Fargate and Amplify) to ECS, and we've also migrated from using plain secrets to using profiles.
On Windows I've installed the AWS CLI to configure a profile, and under my
Users/{myUser}/.aws
the following two files were created: config and credentials.
But I don't exactly know what steps to follow to configure a profile when using Docker on Linux. We're creating a CI/CD pipeline where, after a commit and a successful build of a Docker image, a new container should pop into existence replacing the old one. Should I configure the AWS profile within the Docker container running the app? Should I generate a new set of keys every time a new container is built and replaces the old one? The way this approach sounds, I don't believe this is the way to do it, but I have no idea how to actually do it.
You shouldn't be using profiles when running inside AWS. Profiles are great for running the code locally, but when your code is deployed on ECS it should be utilizing a task IAM role.
You would manage that in Terraform by creating the IAM role, and then assigning the role to the task in the ECS task definition.
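To illustrate the effect in application code (shown here in Python with boto3 for brevity; the .NET SDK's credential chain behaves the same way), a minimal sketch with illustrative bucket and table names:

```python
import boto3

# No explicit keys or profile: inside ECS the SDK's default credential chain
# resolves temporary credentials from the task IAM role automatically.
s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")

def save_report(body: bytes):
    # Illustrative bucket/table names; the task role must grant access to them.
    s3.put_object(Bucket="my-app-reports", Key="latest.json", Body=body)
    dynamodb.Table("reports").put_item(Item={"id": "latest", "size": len(body)})
```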

how to provide environment variables to AWS ECS service?

Trying to create a service in ECS, and unbelievably it looks like it is not possible to specify any environment variables...
Is it possible to do that without updating my task definition and recreate the service? Or a task override option?
This looks a bit cumbersome
The environment variables are provided through the task definition. Thus you have to update the task definition to add or change variables.
You don't have to re-create the service from scratch. You can update your service to use the new revision of your task definition. To update an existing service you can use the update-service AWS CLI call. The CLI also provides --force-new-deployment if you want to force a deployment (but changing the task definition should be enough, and forcing would not be required).
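The same update can be done with boto3; a minimal sketch with illustrative cluster, service, and task definition names:

```python
import boto3

ecs = boto3.client("ecs")

# Point the service at the new task definition revision; ECS rolls out a new deployment.
ecs.update_service(
    cluster="my-cluster",              # illustrative names
    service="my-service",
    taskDefinition="my-task:42",       # new revision containing the changed env vars
    # forceNewDeployment=True,         # only needed when the task definition is unchanged
)
```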
You can't define environment variables at service creation time, as explained in the other answer, but you can define tags; one workaround (sketched after this list) is to:
Create a service with a set of tags and propagateTags: SERVICE
On container startup read the cluster and task ARNs from the task metadata endpoint
Read service tags with the DescribeTasks API (note the include: ['TAGS'] parameter)
Configure the environment using the tag values
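A minimal Python sketch of those steps, assuming the container has the task metadata endpoint available (ECS_CONTAINER_METADATA_URI_V4) and its task role allows ecs:DescribeTasks (the tag-to-env mapping is illustrative):

```python
import os
import json
import urllib.request
import boto3

# Read the cluster and task ARNs from the task metadata endpoint.
metadata_uri = os.environ["ECS_CONTAINER_METADATA_URI_V4"]
with urllib.request.urlopen(f"{metadata_uri}/task") as resp:
    task_meta = json.load(resp)
cluster_arn = task_meta["Cluster"]
task_arn = task_meta["TaskARN"]

# Read the tags propagated from the service (note include=['TAGS']).
ecs = boto3.client("ecs")
described = ecs.describe_tasks(cluster=cluster_arn, tasks=[task_arn], include=["TAGS"])
tags = {t["key"]: t["value"] for t in described["tasks"][0].get("tags", [])}

# Configure the environment from the tag values (the naming convention is illustrative).
os.environ.update({key.upper(): value for key, value in tags.items()})
```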

AWS Fargate container authentication configuration

What would be the appropriate way to configure infrastructure-independent parameters inside a Docker container inside ECS?
Let's say there's an API that needs to be connected with external sources (a DB, for example, that doesn't live inside the AWS infrastructure). How would one configure the container to discover the external sources?
What I've come up with:
Environment variables;
Injecting configuration during Docker image building;
Using AWS System Manager Parameter Store;
Using AWS Secrets Manager;
Hosting the configuration in S3 for example and reading from there.
To me, using the environment variables seems to be the way to go, because one wouldn't want to make an extra query to AWS Systems Manager or Secrets Manager just to get the DB host & port every time the external source is contacted.
Another possibility I thought about was that after the container is started, the required parameters are queried from AWS Systems Manager or Secrets Manager and then stored in some sort of configuration file. But how would one then distinguish between test & production?
Am I missing something obvious here?
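One way to make the startup-query idea distinguish between test and production, as a minimal sketch: pass a single ENV variable to the container and namespace the parameters by environment in Parameter Store (the variable name and parameter paths are illustrative; pagination is omitted for brevity):

```python
import os
import boto3

ssm = boto3.client("ssm")

# A single environment variable distinguishes test from production;
# all other settings live under an environment-specific path in Parameter Store.
env = os.environ.get("ENV", "test")      # e.g. "test" or "production"
prefix = f"/myapp/{env}/"                # illustrative parameter path

resp = ssm.get_parameters_by_path(Path=prefix, WithDecryption=True)
config = {p["Name"][len(prefix):]: p["Value"] for p in resp["Parameters"]}

db_host = config["db_host"]              # e.g. stored as /myapp/production/db_host
db_port = int(config["db_port"])
```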

How to update .env value of container definitions in AWS?

I’m new to AWS ECS deployment. This is my first time.
I have updated the .env in my container definition on my AWS account.
But when I run docker exec e718a29eb0e3 env in my container, I still don't see the updated value.
I even tried
node#db39b382163a:/api$ pm2 restart all
I'm still not seeing it updated.
Do I need to restart something else ?
The native CodePipeline -> ECS integration will only update the container definitions' image attribute, so you cannot use it to manage environment variables. You have a couple of other options:
You can use a Lambda function instead to drive your deployment and do something similar to the above to edit both the image and environment attributes.
If you're using CloudFormation to manage your task definition and service, you can use those templates to manage those fields instead of the native integration.
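A minimal sketch of what such a Lambda-driven update could look like with boto3 (family, cluster, service, and the environment values are illustrative): it registers a new task definition revision with changed environment variables and points the service at it, so newly started containers pick up the new values.

```python
import boto3

ecs = boto3.client("ecs")

# Fetch the current task definition, tweak the environment, and register a new revision.
current = ecs.describe_task_definition(taskDefinition="my-api")["taskDefinition"]

container_defs = current["containerDefinitions"]
container_defs[0]["environment"] = [{"name": "FEATURE_FLAG", "value": "on"}]  # illustrative

new_revision = ecs.register_task_definition(
    family=current["family"],
    containerDefinitions=container_defs,
    # Carry over the fields the old revision used (role ARNs, cpu/memory, etc.) when present.
    **{k: current[k] for k in ("taskRoleArn", "executionRoleArn", "networkMode",
                               "requiresCompatibilities", "cpu", "memory") if k in current},
)["taskDefinition"]["taskDefinitionArn"]

# Roll the service onto the new revision so running containers are replaced.
ecs.update_service(cluster="my-cluster", service="my-api-service", taskDefinition=new_revision)
```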