AWS profiles with GitLab CI

We are using GitLab as our repo and decided to go with GitLab CI. We use the Serverless Framework to deploy our code to AWS. I want to integrate AWS profiles into GitLab so that it can pick the specified profile and deploy into the corresponding AWS account. I have tried hard-coding the variables, but if I have to deploy using a different profile I need to change all the gitlab-ci files, and I have more than 100 repos.
Is there any way to configure AWS profiles in GitLab?

My GitLab CI jobs run on Docker, so I created a Docker image with all the prerequisites needed for my deployment. Now my runtime is the same as my local machine with the AWS CLI installed, and I can use my AWS profiles for the deployment in the serverless files.
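To avoid hard-coding keys in a hundred .gitlab-ci.yml files, one common pattern is to keep the access keys as group-level CI/CD variables and build the profile at job start. A minimal sketch, assuming variables named AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_DEFAULT_REGION and a profile called "deploy" (both the image name and the profile name are my own placeholders, not from the post):

deploy:
  image: my-serverless-image   # hypothetical image with the AWS CLI and serverless preinstalled
  before_script:
    # write the named profile from group-level CI/CD variables instead of hard-coding it per repo
    - aws configure set aws_access_key_id "$AWS_ACCESS_KEY_ID" --profile deploy
    - aws configure set aws_secret_access_key "$AWS_SECRET_ACCESS_KEY" --profile deploy
    - aws configure set region "$AWS_DEFAULT_REGION" --profile deploy
  script:
    - serverless deploy --stage prod --aws-profile deploy

Switching accounts then only means changing the group variables, not the 100+ repos.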

Related

GitLab & AWS Parameter Store

We want to store all our AWS account credentials in AWS Parameter Store for better security.
The question now is:
How can we use the credentials stored in AWS Parameter Store in GitLab for deployment?
In your project you can configure .gitlab-ci.yml to do many things, one of which is deploying your application. There are many ways to do that; one of them is to:
Create a Docker image of your project
Push the image to ECR
Create a new ECS task definition revision with the new version of your Docker image
Update the ECS service to use the new task definition revision
To do so, you need the AWS credentials that you have configured in your GitLab repository (see the sketch below).
Beyond that, there are many ways to deploy from GitLab to AWS; it depends on your company and what tools you are using.
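As an illustration of both parts (reading a secret from Parameter Store and pushing a new image to ECR/ECS from GitLab CI), here is a rough .gitlab-ci.yml job. The image, cluster, service, repository and parameter names are placeholders, and it assumes the job image has Docker and the AWS CLI v2 available:

deploy:
  stage: deploy
  image: my-deploy-image   # hypothetical image with docker and the AWS CLI v2
  script:
    # fetch a secret kept in Parameter Store (parameter name is made up)
    - export DB_PASSWORD=$(aws ssm get-parameter --name /myapp/db-password --with-decryption --query 'Parameter.Value' --output text)
    # log the Docker client in to ECR using the credentials configured in GitLab
    - aws ecr get-login-password --region "$AWS_DEFAULT_REGION" | docker login --username AWS --password-stdin "$ECR_REGISTRY"
    # build and push the new image
    - docker build -t "$ECR_REGISTRY/myapp:latest" .
    - docker push "$ECR_REGISTRY/myapp:latest"
    # roll the ECS service; if the task definition pins an immutable tag, register a new revision first
    - aws ecs update-service --cluster myapp-cluster --service myapp-service --force-new-deployment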

AWS: how to handle programmatic credentials when building a Docker container

I have a .NET Core app that uses services such as S3, RDS and DynamoDB. Initially every client instance was initialized using the Access_Key and Secret_Access_Key directly, so these two were stored in a configuration file. Recently we started automating the AWS infrastructure creation using Terraform; we are migrating from managed containers (Fargate and Amplify) to ECS, and we have also migrated from using plain secrets to using profiles.
On Windows I installed the AWS CLI to configure a profile, and under
Users/{myUser}/.aws
the following two files were created: config and credentials.
But I don't know exactly how to configure a profile when using Docker on Linux, in a CI/CD pipeline where, after a commit and a successful build of a Docker image, a new container replaces the old one. Should I configure the AWS profile within the Docker container running the app? Should I generate a new set of keys every time a new container is built to replace the old one? The way this approach sounds, I don't believe this is the way to do it, but I have no idea how to actually do it.
You shouldn't be using profiles when running inside AWS. Profiles are great for running the code locally, but when your code is deployed on ECS it should be using a task IAM role.
You would manage that in Terraform by creating the IAM role, and then assigning the role to the task in the ECS task definition.

Single Docker image push into AWS Elastic Container Registry (ECR) from a VSTS build/release definition

We have a Python Docker image that needs to be built and published (CI/CD) to the AWS container registry.
At the moment AWS does not support running Docker tasks from Docker Hub private repositories, therefore we have to use ECR instead of Docker Hub.
Our CI/CD pipeline uses Docker build and push tasks. Docker authentication is done via a Service Endpoint in the VSTS project.
There are a few steps to follow to set up a VSTS service endpoint for ECR. This requires executing an AWS CLI command (locally or in the cloud) to get a username and password for the Docker client to log in with; it looks like:
aws ecr get-login --no-include-email
The command above outputs a docker login command with a username (AWS) and a password (token).
The issue with this approach is that the access token lasts only 12 hours, so the CI/CD task requires updating the Service Endpoint every 12 hours; otherwise the build fails with an unauthorized token exception.
The other option we have is to run shell commands that execute aws ecr get-login and run the docker build/push commands in the same context. This option requires installing the AWS CLI on the build agent (we are using the public Linux agent).
In addition, the shell commands involve awkward task configuration with environment variables; otherwise we would be exposing the AWS application ID and secret in the build steps.
Could you please advise if you have solved a VSTS CI/CD pipeline using Docker with AWS ECR?
Thanks, Mahi
After a lot of research and trial and error, I found an answer to my own question.
AWS provides an extension to VSTS with build tasks and service endpoints. You need to configure an AWS service endpoint using an account number, application ID, and secret. Then, in your build/release definition:
Build the Docker image using the out-of-the-box Docker build task, or a shell/bash command (for example: docker build -t your:tag .)
Then add another build step to push the image into the AWS registry; for this you can use the AWS extension task (Amazon Elastic Container Registry Push Image). The Amazon Elastic Container Registry Push Image build task will generate a token and log the Docker client in every time you run this build definition. You don't have to worry about updating the username/token every 12 hours; the AWS extension build task does that for you.
You are looking for this
Amazon ECR Docker Credential Helper
AWS documentation
This is where the Amazon ECR Docker Credential Helper makes it easy for developers to use ECR without the need for docker login or logic to refresh tokens, providing transparent access to ECR repositories.
The Credential Helper helps developers in a continuous development environment automate the authentication process to ECR repositories without having to regenerate tokens every 12 hours. In addition, the Credential Helper also provides token caching under the hood, so you don't have to worry about getting throttled or writing additional logic.
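For reference, wiring the helper in is just a matter of putting the docker-credential-ecr-login binary on the agent's PATH and pointing the Docker client at it in ~/.docker/config.json; the registry URL below is a placeholder for your own account and region:

{
  "credHelpers": {
    "<aws_account_id>.dkr.ecr.<region>.amazonaws.com": "ecr-login"
  }
}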

AWS CodeDeploy to deploy WAR files to a WildFly server

Right now I am manually deploying WAR files onto a WildFly server (hosted on an EC2 instance), but I want to automate this and get rid of the manual deployments.
I build the application using Jenkins (from another EC2 instance) and after that I want to deploy to the WildFly server. Since I am also planning to use CodePipeline, can anyone please tell me how to deploy an application on a WildFly server using AWS CodeDeploy?
I am new to CodeDeploy, so I am not that familiar with its usage.
Thank you,
Ajit
Hi Ajit, CodeDeploy is the perfect tool for deploying applications in AWS, but it has some limitations; for example, as the source repository for CodeBuild/CodeDeploy you can only choose one of these three:
AWS CodeCommit
Amazon S3 bucket
GitHub
You can't provide your own custom repository or Bitbucket. Please go through the example below for deploying applications using CodeDeploy. I can't explain all the steps because there are a lot of them; the example explains app deployment using Tomcat, so replace that with your WildFly scripts.
Deploy Applications From S3 using Code Deploy.
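To give a feel for the CodeDeploy side, here is a rough appspec.yml for dropping a WAR into WildFly's hot-deploy directory; the paths, script names and WildFly install location (/opt/wildfly) are assumptions to adjust for your server:

version: 0.0
os: linux
files:
  # copy the WAR built by Jenkins into WildFly's deployment scanner directory (path is an assumption)
  - source: target/myapp.war
    destination: /opt/wildfly/standalone/deployments/
hooks:
  ApplicationStop:
    - location: scripts/stop_wildfly.sh    # hypothetical script, e.g. systemctl stop wildfly
      timeout: 300
  ApplicationStart:
    - location: scripts/start_wildfly.sh   # hypothetical script, e.g. systemctl start wildfly
      timeout: 300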

Automate code deployment from GitLab to an AWS EC2 instance

We're building an application for which we are using a GitLab repository. Manual deployment of code to the test server, which is an Amazon EC2 instance, is tedious. I'm planning to automate the deployment process, so that when we commit code it is reflected on the test instance.
From my knowledge we can use the AWS CodeDeploy service to fetch code from GitHub, but CodeDeploy does not support GitLab repositories. Is there a way to automate the code deployment process to an AWS EC2 instance through GitLab, or is there a shell scripting possibility to achieve this? Kindly educate me.
One way you could achieve this with AWS CodeDeploy is by using the S3 option in conjunction with GitLab CI: http://docs.aws.amazon.com/codepipeline/latest/userguide/getting-started-w.html
Depending on how your project is set up, you may be able to generate a distribution zip (Gradle offers this through the application plugin). You may need to generate your "distribution" file manually if your project does not offer such a capability.
GitLab does not offer a direct S3 integration; however, through the .gitlab-ci.yml you can install the AWS CLI in the container and run the necessary upload commands to put the generated zip file in the S3 bucket, which, per the AWS instructions, triggers the deployment.
Here is an example of what your before_script could look like in the .gitlab-ci.yml file:
before_script:
- apt-get update --quiet --yes
- apt-get install --quiet --yes python python-pip
- pip install -U pip
- pip install awscli
The AWS tutorial on how to use CodeDeploy with S3 is very detailed, so I will not attempt to reproduce its contents here.
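Building on that before_script, a deploy job along the following lines could package the build output, push it to S3 and kick off the CodeDeploy deployment; the bucket, application and deployment-group names here are placeholders, not values from the tutorial:

deploy:
  stage: deploy
  script:
    # package the build output (adjust to however your project produces its distribution)
    - zip -r myapp.zip build/
    # copy the bundle to the S3 bucket CodeDeploy reads from
    - aws s3 cp myapp.zip s3://my-deploy-bucket/myapp.zip
    # trigger the deployment against the EC2 deployment group
    - aws deploy create-deployment --application-name myapp --deployment-group-name myapp-test --s3-location bucket=my-deploy-bucket,key=myapp.zip,bundleType=zip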
As regards the actual deployment commands and actions that you currently perform manually, AWS CodeDeploy can run them through scripts defined in the AppSpec file, attached to the application's event hooks:
http://docs.aws.amazon.com/codedeploy/latest/userguide/writing-app-spec.html
http://docs.aws.amazon.com/codedeploy/latest/userguide/app-spec-ref.html
http://docs.aws.amazon.com/codedeploy/latest/userguide/app-spec-ref-hooks.html
I hope this helps.
This is one of my old posts, but I happened to find an answer for it. Although my question was specifically about CodeDeploy, I would say there is no need for any AWS deployment service when using GitLab.
We don't require CodeDeploy at all, and there is no need to use an external CI server like TeamCity or Jenkins to perform CI for GitLab anymore.
We just need to add a .gitlab-ci.yml file to the root of the branch and write a YAML script in it. GitLab pipelines will then perform the CI/CD automatically.
GitLab CI/CD pipelines work much like a Jenkins server: using the YAML script we can SSH into the EC2 instance and place the files on it.
An example of how to write the .gitlab-ci.yml file to SSH to an EC2 instance is here: https://docs.gitlab.com/ee/ci/yaml/README.html
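As a rough sketch of that approach (not taken from the linked docs), a job like the following copies the build output to the instance over SSH; the SSH_PRIVATE_KEY and EC2_HOST CI/CD variables, the ec2-user account, paths and service name are all assumptions, and the job image must have an SSH client and ssh-agent available:

deploy_to_ec2:
  stage: deploy
  before_script:
    # load the deploy key stored as a GitLab CI/CD variable
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - ssh-keyscan "$EC2_HOST" >> ~/.ssh/known_hosts
  script:
    # copy the files to the test instance and restart the app
    - scp -r build/ "ec2-user@$EC2_HOST:/var/www/myapp/"
    - ssh "ec2-user@$EC2_HOST" "sudo systemctl restart myapp"
  only:
    - master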