AWS how to handle programmatic credentials when building a docker container - amazon-web-services

I have a .NET Core app in which I'm using services such as S3, RDS and Dynamo. Initially every client instance was initialized using the Access_Key and Secret_Access_Key directly, so basically these two were stored in a configuration file. Recently we've started a process to automate the AWS infrastructure creation using Terraform; we are trying to migrate from managed containers (Fargate and Amplify) to ECS, and we've also migrated from using plain secrets to using profiles.
In Windows I've installed the AWS CLI to configure a profile, and under my
Users/{myUser}/.aws
the following two files were created: config and credentials.
But I don't know exactly how to configure a profile when using Docker on Linux, or what steps I should follow. We are creating a CI/CD pipeline where, after a commit and a successful build of a Docker image, a new container should pop into existence replacing the old one. Should I configure the AWS profile within the Docker container running the app? Should I generate a new set of keys every time a new container is built and replaces the old one? The way this approach sounds, I don't believe this is the way to do it, but I have no idea how to actually do it.

You shouldn't be using profiles when running inside AWS. Profiles are great for running the code locally, but when your code is deployed on ECS it should be utilizing a task IAM role.
You would manage that in Terraform by creating the IAM role, and then assigning the role to the task in the ECS task definition.
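To make that concrete, here is a minimal sketch of what "no credentials in the container" looks like from the application side. It uses Python/boto3 purely for illustration (the question's app is .NET Core, but the .NET SDK walks the same default credential chain), and the region is an assumption. Once Terraform attaches the role via the task_role_arn argument of the aws_ecs_task_definition resource, the SDK fetches temporary credentials from the container credentials endpoint on its own; nothing is baked into the image or its config files.

    # Minimal sketch: no access keys, no profile, no credentials file in the image.
    # Inside an ECS task with a task role, the SDK resolves temporary credentials
    # from the container credentials endpoint automatically.
    import boto3

    REGION = "eu-west-1"  # assumption: use your region, or rely on the task's AWS_REGION

    # Confirms which role the container is actually running as.
    sts = boto3.client("sts", region_name=REGION)
    print(sts.get_caller_identity()["Arn"])

    # Ordinary service calls simply work through the task role's permissions.
    s3 = boto3.client("s3", region_name=REGION)
    for bucket in s3.list_buckets()["Buckets"]:
        print(bucket["Name"])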

Related

Gitlab & AWS parameter store

We want to save all our AWS accounts' credentials in AWS Parameter Store for better security.
Now the question is:
How can we use the credentials stored in AWS Parameter Store in GitLab for deployment?
In your project you can configure .gitlab-ci.yml to do many things; one of them is to deploy your application, and there are many ways to do that. One of them is to:
Build a Docker image of your project
Push the image to ECR
Create a new ECS task definition with the new version of your docker image
Create a new ECS service with the new version of the task definition
To do so, you effectively need the AWS credentials that you have configured in your GitLab repository.
After that, there are many ways to deploy from GitLab to AWS; it depends on your company and what tools you are using.
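To illustrate the Parameter Store part of the question, here is a small, hedged sketch in Python/boto3; the parameter names, region and the idea of storing per-account keys under /deploy/... are assumptions, not something from the original post. The CI job itself authenticates with whatever credentials GitLab injects as CI/CD variables, and those only need permission to read the parameters; the keys for the target account are then pulled out of Parameter Store and used for the ECR/ECS steps listed above.

    import boto3

    # Bootstrap client: uses the credentials GitLab injects into the job
    # (AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY CI/CD variables).
    bootstrap = boto3.client("ssm", region_name="eu-west-1")  # region is an assumption

    # Parameter names are hypothetical; WithDecryption is needed for SecureString values.
    access_key = bootstrap.get_parameter(
        Name="/deploy/prod/aws-access-key-id", WithDecryption=True
    )["Parameter"]["Value"]
    secret_key = bootstrap.get_parameter(
        Name="/deploy/prod/aws-secret-access-key", WithDecryption=True
    )["Parameter"]["Value"]

    # Session for the target account, built from the stored credentials; use it
    # for the deployment steps (push to ECR, register task definition, update service).
    target = boto3.Session(
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
        region_name="eu-west-1",
    )
    ecs = target.client("ecs")
    print(ecs.list_clusters()["clusterArns"])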

How to deploy from Cloud9 to EC2?

I have my app developed in Cloud9, and I would like to use the terminal with Git commands to deploy my app to an EC2 instance. The objective is to make the app run on the EC2 instance.
Is it necessary to deploy the app to an EC2 instance, or is it already on an EC2 instance and I just have to open the URL of the app?
My Cloud9 environment is in one region and my EC2 instance is in another (I'm mentioning this just in case it changes something in the process).
I already did this with Heroku, but I can't see how it works with an EC2 instance.
Thank you very much!
If you want to deploy your app to an EC2 instance from the Cloud9 console, you have a couple of options. Please note, you only need to use one of these options, not all of them. I would generally recommend option #1 for your case based on your original question.
Use AWS Amplify instead of EC2 (Amplify is Amazon's version of Heroku)
Use the AWS CDK, and specifically, you'll want to look at Instance and either utilize userData or CodeBuild to build your app and deploy to EC2 (a sketch of this option follows after this answer)
Use the AWS client to deploy Cloudformation templates (this is the lower-level version of option #2 and will require more boilerplate)
Connect your Git repository to AWS CodePipeline and run a CI/CD flow to deploy it to EC2 on every commit to the main/master branch (this is fairly complicated)
The most important thing to understand is that Cloud9 is simply an IDE (integrated development environment) that is deployed to an EC2 instance that is managed by AWS (not you). Cloud9 is not a tool for actually deploying code (you'll need to use one of the options I mentioned above for that).
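For option #2, the following is a rough sketch of what the CDK route could look like, written in Python (the CDK also supports TypeScript and other languages). The VPC, instance size, repository URL and start command are placeholders rather than anything from the original question; the app is installed and launched through the instance's userData on first boot.

    from aws_cdk import App, Stack
    from aws_cdk import aws_ec2 as ec2
    from constructs import Construct


    class AppInstanceStack(Stack):
        def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
            super().__init__(scope, construct_id, **kwargs)

            # A small VPC just for the example; an existing VPC could be looked up instead.
            vpc = ec2.Vpc(self, "AppVpc", max_azs=2)

            # userData runs on first boot: install git, clone the repo, start the app.
            # The repository URL and start command are placeholders.
            user_data = ec2.UserData.for_linux()
            user_data.add_commands(
                "yum install -y git",
                "git clone https://example.com/my-org/my-app.git /opt/my-app",
                "cd /opt/my-app && ./start.sh",
            )

            ec2.Instance(
                self,
                "AppInstance",
                vpc=vpc,
                instance_type=ec2.InstanceType("t3.micro"),
                machine_image=ec2.AmazonLinuxImage(
                    generation=ec2.AmazonLinuxGeneration.AMAZON_LINUX_2
                ),
                user_data=user_data,
            )


    app = App()
    AppInstanceStack(app, "Cloud9DeployStack")
    app.synth()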

Container is not able to call S3 in Fargate

I'm not able to synchronize a log folder to S3 from inside a container.
I'm trying to get the following setup:
Docker container with the AWS CLI installed
There are log files and other files generated inside the container
There is a cron job which calls the "aws s3 sync" command through a shell script.
The synchronisation is not working properly and I'm not sure why.
I tried the following, which worked just fine:
provided access key / secret access key inside the Docker container
this worked locally, with plain ECS and with Fargate
but it's not recommended to use access keys
plain ECS without any keys (just the IAM role)
this worked too
I played a little with the configuration and read through the documentation.
The only hints I got are:
Does it have something to do with the network mode "awsvpc" (which Fargate has to use)?
Does it have something to do with the "AWS_CONTAINER_CREDENTIALS_RELATIVE_URI" environment variable?
I found a few hits on the web, but I'm not sure whether it's set or not; I'm not able to look inside the container on Fargate.
An ECS task definition has two parameters related to IAM roles:
executionRoleArn - grants ECS itself permission to start the task, e.g. to pull images from ECR and write logs to CloudWatch.
taskRoleArn - allows the task to make AWS API calls to interact with AWS resources such as S3, etc.
In my case I had a shell script which I called via the entrypoint in the task definition. I had correctly set the task role with access to S3, however it did not work. So, using the information provided here https://forums.aws.amazon.com/thread.jspa?threadID=273767#898645
I added the following as the first line of my shell script:
export AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
Still it did not work. Then I upgraded the AWS CLI in the Docker container to version 2 and it worked. So for me the real problem was that the Docker image had an old CLI version.
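For anyone debugging the same symptom, a small diagnostic like the one below (Python, assumed to run inside the Fargate task) shows whether the container credentials endpoint is reachable and which role is actually being resolved. The 169.254.170.2 address is the documented ECS credentials endpoint, and the relative URI is injected into the container by the ECS agent; the region is an assumption.

    import json
    import os
    import urllib.request

    import boto3

    # The ECS agent injects this variable; the SDKs and AWS CLI v2 use it automatically.
    rel_uri = os.environ.get("AWS_CONTAINER_CREDENTIALS_RELATIVE_URI")
    print("AWS_CONTAINER_CREDENTIALS_RELATIVE_URI:", rel_uri)

    if rel_uri:
        # 169.254.170.2 is the documented ECS credentials endpoint.
        with urllib.request.urlopen(f"http://169.254.170.2{rel_uri}", timeout=2) as resp:
            creds = json.load(resp)
        # Never print the secret itself; the RoleArn is enough to confirm the task role.
        print("Task role from credentials endpoint:", creds.get("RoleArn"))

    # Cross-check what the SDK resolves through its normal credential chain.
    sts = boto3.client("sts", region_name="eu-west-1")  # region is an assumption
    print("Caller identity:", sts.get_caller_identity()["Arn"])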

AWS profile with gitlab-ci

We are using GitLab as our repo and decided to go with GitLab CI. We are using the Serverless Framework to deploy our code on AWS. I want to integrate AWS profiles into GitLab so that it can use the specified profile and deploy into the corresponding AWS account. I have tried hard-coding the variables, but if I have to use a different profile for the deployment I need to change all the gitlab-ci files, as I have more than 100 repos.
Is there any way to configure AWS profiles in GitLab?
Basically my GitLab CI jobs run on Docker, so I created a Docker image with all the prerequisites needed for my deployment. Now my runtime is the same as my local machine with the AWS CLI installed, and I can use my AWS profiles for the deployment in the serverless files.
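As a small sketch of that idea (Python/boto3; the AWS_PROFILE variable name is an assumption), the image can carry the ~/.aws/config and ~/.aws/credentials files with one profile per account, and a single CI/CD variable then selects which profile a given pipeline uses, so the individual gitlab-ci files never need to change. A quick sanity check before deploying could look like this:

    import os

    import boto3

    # AWS_PROFILE is assumed to be set as a GitLab CI/CD variable; the profiles
    # themselves live in the image's ~/.aws/config and ~/.aws/credentials files,
    # which also supply the region for each profile.
    profile = os.environ.get("AWS_PROFILE", "default")
    session = boto3.Session(profile_name=profile)

    account = session.client("sts").get_caller_identity()["Account"]
    print(f"Deploying with profile '{profile}' into account {account}")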

Remote update ec2 instance with docker image

I have a release of my project. I build a Docker image and deploy it on an EC2 instance.
Later, when I have a new release, I would like to update the Docker container on EC2 remotely (without accessing the machine, just by executing some service).
Is there a way to do it without ECS and Elastic Beanstalk?
If it's not possible, can I somehow re-run the cfn-init script?
My Research
https://aws.amazon.com/blogs/aws/new-ec2-run-command-remote-instance-management-at-scale/
You can manage your instances remotely (i.e. make changes without manually SSHing into the instance and typing commands) by using any of the many systems management services out there. AWS offers Systems Manager (SSM), of which the Run Command you linked is a part. AWS also offers the OpsWorks service, which uses Chef. You also have other products like Ansible and SaltStack, and you can optionally integrate the use of those tools with SSM.
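As a hedged example of the Run Command route (Python/boto3; the instance ID, region, image URI and container name are all placeholders), a release step could send a short shell script to the instance that pulls the new image and restarts the container. This assumes the instance runs the SSM agent and has an instance profile that allows Systems Manager:

    import boto3

    ssm = boto3.client("ssm", region_name="eu-west-1")  # region is an assumption

    response = ssm.send_command(
        InstanceIds=["i-0123456789abcdef0"],  # placeholder instance ID
        DocumentName="AWS-RunShellScript",    # built-in SSM document for shell commands
        Parameters={
            "commands": [
                "docker pull 123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-app:latest",
                "docker stop my-app || true",
                "docker rm my-app || true",
                "docker run -d --name my-app -p 80:8080 "
                "123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-app:latest",
            ]
        },
    )
    print("Command ID:", response["Command"]["CommandId"])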