How to run aws configure in a travis deploy script? - amazon-web-services

I am trying to get travis-ci to run a custom deploy script that uses awscli to push a deployment up to my staging server.
In my .travis.yml file I have this:
before_deploy:
- 'curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"'
- 'unzip awscli-bundle.zip'
- './awscli-bundle/install -b ~/bin/aws'
- 'export PATH=~/bin:$PATH'
- 'aws configure'
And I have set up the following environment variables:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_DEFAULT_REGION
with their correct values in the travis-ci web interface.
However, when aws configure runs, it stops and waits for user input. How can I tell it to use the environment variables I have already defined?

Darbio's solution works fine, but it doesn't take into account that you may end up pushing your AWS credentials to your repository.
That is a bad thing, especially if Docker is trying to pull a private image from one of your ECR repositories: it would mean storing your AWS production credentials in the .travis.yml file, which is far from ideal.
Fortunately, Travis gives you the possibility to encrypt environment variables, notification settings, and deploy API keys. Install the Travis CLI first:
gem install travis
First of all, run travis login; it will ask you for your GitHub credentials. Once you're logged in, go to your project root folder (where your .travis.yml file is) and encrypt your access key ID and secret access key:
travis encrypt AWS_ACCESS_KEY_ID="HERE_PUT_YOUR_ACCESS_KEY_ID" --add
travis encrypt AWS_SECRET_ACCESS_KEY="HERE_PUT_YOUR_SECRET_ACCESS_KEY" --add
Thanks to the --add option you'll end up with two new (encrypted) environment variables in your configuration file. Now just open your .travis.yml file and you should see something like this:
env:
  global:
  - secure: encrypted_stuff
  - secure: encrypted_stuff
Now you can make travis run a shell script that creates the ~/.aws/credentials file for you.
ecr_credentials.sh
#!/usr/bin/env bash
mkdir -p ~/.aws
cat > ~/.aws/credentials << EOL
[default]
aws_access_key_id = ${AWS_ACCESS_KEY_ID}
aws_secret_access_key = ${AWS_SECRET_ACCESS_KEY}
EOL
Then you just need to run the ecr_credentials.sh script from your .travis.yml file:
before_install:
- ./ecr_credentials.sh
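As an optional sanity check (assuming the AWS CLI is available at that point and a default region is configured, e.g. via AWS_DEFAULT_REGION), you can verify the credentials file is picked up before the real deploy runs:
aws sts get-caller-identity
This prints the account and ARN that the credentials resolve to.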
Done! :-D
Source: Encryption keys on Travis CI

You can set these in a couple of ways.
Firstly, by creating a file at ~/.aws/config (or ~/.aws/credentials).
For example:
[default]
aws_access_key_id=foo
aws_secret_access_key=bar
region=us-west-2
Secondly, you can add environment variables for each of your settings.
For example, create the following environment variables:
AWS_DEFAULT_REGION
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
Thirdly, you can pass region in as a command line argument. For example:
aws eb deploy --region us-west-2
You won't need to run aws configure in any of these cases, as the CLI is already configured.
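If you prefer generating a config file over exporting environment variables, aws configure also has a non-interactive set subcommand; a minimal sketch using the same variables from the question:
aws configure set aws_access_key_id "$AWS_ACCESS_KEY_ID"
aws configure set aws_secret_access_key "$AWS_SECRET_ACCESS_KEY"
aws configure set region "$AWS_DEFAULT_REGION"
This writes ~/.aws/credentials and ~/.aws/config without ever prompting for input.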
There is further AWS documentation on this page.

Following the advice from @Darbio, I came up with this solution:
- stage: deploy
  name: "Deploy to AWS EKS"
  language: minimal
  before_install:
  # Install kubectl
  - curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
  - chmod +x ./kubectl
  - sudo mv ./kubectl /usr/local/bin/kubectl
  # Install AWS CLI
  - if ! [ -x "$(command -v aws)" ]; then curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" ; unzip awscliv2.zip ; sudo ./aws/install ; fi
  # export environment variables for AWS CLI (using Travis environment variables)
  - export AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
  - export AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
  - export AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}
  # Setup kubectl config to use the desired AWS EKS cluster
  - aws eks update-kubeconfig --region ${AWS_DEFAULT_REGION} --name ${AWS_EKS_CLUSTER_NAME}
  deploy:
  - provider: script
    # bash script containing the kubectl commands to setup the cluster
    script: bash k8s-config/deployment.sh
    on:
      branch: master
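The k8s-config/deployment.sh script itself is not shown; purely as a hypothetical sketch (the manifest path and the deployment name are assumptions), it could look something like this:
#!/usr/bin/env bash
# apply the manifests and wait for the rollout to finish
set -euo pipefail
kubectl apply -f k8s-config/manifests/
kubectl rollout status deployment/my-app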
It is also possible to avoid installing the AWS CLI altogether. In that case you need to configure kubectl yourself:
kubectl config set-cluster --server= --certificate-authority=
kubectl config set-credentials --client-certificate= --client-key=
kubectl config set-context myContext --cluster= --namespace= --user=
kubectl config use-context myContext
You can find most of the needed values in ~/.kube/config in your user's home directory, after you have run the aws eks update-kubeconfig command on your local machine.
The exceptions are the client certificate and key: I couldn't figure out where to get those from, and therefore needed to install the AWS CLI in the pipeline as well.

Related

How do you specify AWS credentials when running AWS CLI from a Dockerfile in an AWS SAM pipeline?

I have an app using:
SAM
AWS S3
AWS Lambda based on Docker
AWS SAM pipeline
GitHub Actions
In the Dockerfile I have:
RUN aws s3 cp s3://mylambda/distilBERT distilBERT.tar.gz
Resulting in the error message:
Step 6/8 : RUN aws s3 cp s3://mylambda/distilBERT distilBERT.tar.gz
---> Running in 786873b916db
fatal error: Unable to locate credentials
Error: InferenceFunction failed to build: The command '/bin/sh -c aws s3 cp s3://mylambda/distilBERT distilBERT.tar.gz' returned a non-zero code: 1
I need to find a way to store the credentials in a secure manner. Is it possible with GitHub secrets or something similar?
Thanks
My solution may be a bit longer, but I feel it solves your problem:
It does not expose any secrets
It does not require any manual work
It is easy to change your AWS keys later if required.
Steps:
You can add the environment variables as secrets in GitHub Actions (since you already mentioned GitHub Actions).
In your GitHub CI/CD flow, when you build the Dockerfile, you can create an AWS credentials file:
- name: Configure AWS credentials
  run: |
    echo "
    [default]
    aws_access_key_id = $ACCESS_KEY
    aws_secret_access_key = $SECRET_ACCESS_KEY
    " > credentials
  env:
    ACCESS_KEY: ${{ secrets.AWS_ACCESS_KEY_ID }}
    SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
In your Dockerfile, you can add instructions to COPY this credentials file and move it into place:
COPY credentials credentials
RUN mkdir ~/.aws
RUN mv credentials ~/.aws/credentials
Changing your credentials later just requires updating your GitHub Actions secrets.
By default, Docker does not have access to the .aws folder on the host machine. You could pass the AWS credentials as environment variables to the Docker image:
ENV AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
ENV AWS_SECRET_ACCESS_KEY=...
Keep in mind that hardcoding AWS credentials in a Dockerfile is bad practice. To avoid this, you can pass the environment variables at runtime using docker run -e MYVAR1 or docker run --env MYVAR2=foo. Another solution is to use an .env file for the environment variables.
A more involved solution is to map the host machine's ~/.aws folder into the Docker container as a volume.
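For illustration, a minimal sketch of those three options (the image name myimage and the aws.env file are placeholders):
# pass the credentials at runtime instead of baking them into the image
docker run -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_DEFAULT_REGION myimage
# or keep them in an env file that is not committed to the repository
docker run --env-file ./aws.env myimage
# or mount the host's ~/.aws folder read-only into the container
docker run -v ~/.aws:/root/.aws:ro myimage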

bitbucket pipeline for CodeArtifact

I am using a Bitbucket pipeline to publish artifacts to AWS CodeArtifact. Everything runs perfectly, but the token is only valid for 12 hours, which means I have to update the password every time. Could anyone guide me on how I can automate this process?
EDIT: I finally was able to solve it myself.
pipelines:
  default:
    - step:
        name: test
        image: atlassian/pipelines-awscli
        script:
          - export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID
          - export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY
          - export AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION
          - aws codeartifact get-authorization-token --domain XXXXX --domain-owner XXXXXx --query authorizationToken --output text > pass.txt
          - value=$(<pass.txt)
          - echo $value
          - echo "export value=$value" > set_env.sh
          - printenv >> set_env.sh
        artifacts:
          - set_env.sh
    - step:
        name: maven
        image: maven:3.8.1
        caches:
          - maven
        script: # Modify the commands below to build your repository.
          - source set_env.sh
          - echo $value
          - sed -i 's/passwd12/'"$value"'/g' ./settings.xml
          - cat settings.xml
          - mvn clean deploy -s settings.xml -P snapshot
I didn't realize BitBucket had global account-wide Workspace Variables. Some were already defined for our other repos. I added some to hold the values for AccessKeyId and SecretAccessKey for our npm registry at CodeArtifact.
Prior to npm install, I create a named AWS profile in the pipelines.yml file:
- aws configure --profile codeartifactuser set aws_access_key_id $AWS_ACCESS_KEY_ID_NPM
- aws configure --profile codeartifactuser set aws_secret_access_key $AWS_SECRET_ACCESS_KEY_NPM
Then use that to make the call to authenticate and get a new token:
aws codeartifact login --tool npm --repository <repository> --domain <domain> --namespace @<namespace> --profile codeartifactuser
Now our npm install, etc... works as expected.
I used a named profile in case other parts of the build script expect different credentials. Just seems cleaner.

How to make k8s Pod (generated by Jenkins) use Service account IAM role to access AWS resources

Following this link, I can create a pod whose service account role can access AWS resources, so the pod can access them as well.
Then, inspired by this EKS-Jenkins-Workshop, I changed the workshop a little bit. I want to deploy a Jenkins Pipeline that creates a pod whose service account role can access AWS resources, but the problem is that the CDK code in this pod cannot access AWS resources. (I wrote the CDK code following Your first AWS CDK app: https://docs.aws.amazon.com/cdk/latest/guide/hello_world.html)
This is my Jenkinsfile
pipeline {
  agent {
    kubernetes {
      yaml """
apiVersion: v1
kind: Pod
metadata:
  name: jenkins-agent
  namespace: default
spec:
  serviceAccountName: jenkins
  containers:
  - name: node-yuvein
    image: node
    command:
    - cat
    tty: true
"""
    }
  }
  stages {
    stage('Build') {
      steps {
        container('node-yuvein') {
          dir('hello-cdk'){
            sh "pwd"
            sh 'npm --version'
            sh 'node -v'
            sh 'npm install -g typescript'
            sh 'npm install -g aws-cdk'
            sh 'npm install @aws-cdk/aws-s3'
            sh 'npm run build'
            sh 'cdk deploy'
          }
        }
      }
    }
  }
}
When I run the pipeline, it fails with this error:
User: arn:aws:sts::450261875116:assumed-role/eksctl-eksworkshop-eksctl3-nodegr-NodeInstanceRole-1TCVDYSM1QKSO/i-0a4df3778517df0c6 is not authorized to perform: cloudformation:DescribeStacks on resource: arn:aws:cloudformation:us-west-2:450261875116:stack/HelloCdkStack/*
I am a beginner with K8s, Jenkins and CDK.
Hope someone can help me.
Thanks a lot.
Further Debugging:
In Jenkins Console, I can get serviceAccountName: "jenkins", and the name of my service account in EKS is jenkins.
The pod also gets the correct env vars:
+ echo $AWS_ROLE_ARN
arn:aws:iam::450261875116:role/eksctl-eksworkshop-eksctl3-addon-iamservicea-Role1-YYYFXFS0J4M2
+ echo $AWS_WEB_IDENTITY_TOKEN_FILE
/var/run/secrets/eks.amazonaws.com/serviceaccount/token
The Node.js and npm I installed are the latest versions.
+ npm --version
6.14.8
+ node -v
v14.13.0
+ aws sts get-caller-identity
{
    "UserId": "AROAWRVNS7GWO5C7QJGRF:botocore-session-1601436882",
    "Account": "450261875116",
    "Arn": "arn:aws:sts::450261875116:assumed-role/eksctl-eksworkshop-eksctl3-addon-iamservicea-Role1-YYYFXFS0J4M2/botocore-session-1601436882"
}
When I run this command, it shows my service account role, but I still get the original error.
Jenkins podTemplate has a serviceAccount option:
https://github.com/jenkinsci/kubernetes-plugin#pod-and-container-template-configuration
Create an IAM role mapped to an EKS cluster
Create a ServiceAccount mapped to an IAM role
Pass ServiceAccount name to a podTemplate
Further debugging:
Ensure the pod has the correct service account name.
Check that the pod got the AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE env vars (they are added automatically).
Check that the AWS SDK you use meets the minimum version: https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts-minimum-sdk.html
Run aws sts get-caller-identity to see the role; don't waste time running an actual job (see the command sketch after this list).
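A minimal sketch of those checks (the pod name jenkins-agent is taken from the Jenkinsfile above; the last command assumes the AWS CLI is available inside the container):
# 1. confirm the pod uses the expected service account
kubectl get pod jenkins-agent -o jsonpath='{.spec.serviceAccountName}'
# 2. confirm the IRSA environment variables were injected
kubectl exec jenkins-agent -- env | grep -E 'AWS_ROLE_ARN|AWS_WEB_IDENTITY_TOKEN_FILE'
# 3. confirm which role is actually being assumed
kubectl exec jenkins-agent -- aws sts get-caller-identity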
When working with Jenkins agents (slaves), you need to customize the container images to use AWS CLI v2 instead of AWS CLI v1. I was running into authorization errors like the one in the question: my client was using the cluster node role instead of the assumed web identity role of the service account attached to my Jenkins pods for the agent containers.
Apparently V2 of the AWS CLI includes the web identity token file as part of the default credentials chain whereas V1 does not.
Here's a sample Dockerfile that pulls the latest AWS CLI version so this pattern works.
FROM jenkins/inbound-agent
# run updates as root
USER root
# Create docker group
RUN addgroup docker
# Update & Upgrade OS
RUN apt-get update
RUN apt-get -y upgrade
#install python3
RUN apt-get -y install python3
# add AWS Cli version 2 for web_identity_token files
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
RUN unzip awscliv2.zip
RUN ./aws/install
# Add Maven
RUN apt-get -y install maven --no-install-recommends
# Add docker
RUN curl -sSL https://get.docker.com/ | sh
RUN usermod -aG docker jenkins
# Add docker compose
RUN curl -L "https://github.com/docker/compose/releases/download/1.26.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
RUN chmod +x /usr/local/bin/docker-compose
# Delete cached files we don't need anymore:
RUN apt-get clean
RUN rm -rf /var/lib/apt/lists/*
# close root access
USER jenkins
Further, I had to make sure my service account was created and attached to both the Jenkins master image and the Jenkins agents. This can be accomplished via Manage Jenkins -> Manage Nodes and Clouds -> Configure Clouds -> Pod Template Details.
Be sure to fill in the Namespace and Service Account fields with the appropriate values.

CircleCI deployment to AWS EC2

Can you help me find a useful step-by-step guide or a Gist outlining in detail how to configure CircleCI (using 2.0 syntax) to deploy to AWS EC2?
I understand the basic requirements and the moving pieces, but unsure what to put in the .circleci/config.yml file in the deploy step.
So far I got:
A "Hello World" Node.js app which is building successfully in CircleCI (just without the deploy step)
A running EC2 instance (Ubuntu 16.04)
An IAM user with sufficient permissions added to CircleCI for that particular job
Can you help out with the CircleCI deploy step?
Based on your repository, you could create a script like this: deploy.sh
#!/bin/bash
echo "Start deploy"
cd ~/circleci-aws
git pull
npm i
npm run build
pm2 stop build/server
pm2 start build/server
echo "Deploy end"
And in your .circleci/config.yml you call it like this:
deploy:
  docker:
    - image: circleci/node:chakracore-8.11.1
  steps:
    - restore_cache:
        keys:
          - v1-dependencies-{{ checksum "package.json" }}
    - run:
        name: AWS EC2 deploy
        command: |
          # upload all the code to the machine
          scp -r -o StrictHostKeyChecking=no ./ ubuntu@13.236.1.107:/home/circleci-aws/
          # run the script inside the machine
          ssh -o StrictHostKeyChecking=no ubuntu@13.236.1.107 "./deploy.sh"
But this is quite ugly; consider something like AWS CodeDeploy instead, or ECS if you are using containers.

Configuring bitbucket pipelines with Docker to connect to AWS

I am trying to set up Bitbucket pipelines to deploy to ECS as here: https://confluence.atlassian.com/bitbucket/deploy-to-amazon-ecs-892623902.html
These instructions explain how to push to Docker Hub, but I want to push the image to Amazon's image repository (ECR). I have set AWS_SECRET_ACCESS_KEY and AWS_ACCESS_KEY_ID in my Bitbucket parameters list, and I can run these commands locally with no problems (the keys are defined in ~/.aws/credentials). However, I keep getting the error 'no basic auth credentials'. I am wondering if it is not recognising the variables somehow. The docs here: http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html say that:
The AWS CLI uses a provider chain to look for AWS credentials in a number of different places, including system or user environment variables and local AWS configuration files.
So I am not sure why it isn't working. My Bitbucket Pipelines configuration is as follows (I have not included anything unnecessary):
- export IMAGE_NAME=$AWS_REPO_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/my/repo-name:$BITBUCKET_COMMIT
# build the Docker image (this will use the Dockerfile in the root of the repo)
- docker build -t $IMAGE_NAME .
# authenticate with the AWS repo (this gets and runs the docker login command)
- eval $(aws ecr get-login --region $AWS_DEFAULT_REGION)
# push the new Docker image to the repo
- docker push $IMAGE_NAME
Is there a way of specifying the credentials for aws ecr get-login to use? I even tried this, but it doesn't work:
- mkdir -p ~/.aws
- echo -e "[default]\n" > ~/.aws/credentials
- echo -e "aws_access_key_id = $AWS_ACCESS_KEY_ID\n" >> ~/.aws/credentials
- echo -e "aws_secret_access_key = $AWS_SECRET_ACCESS_KEY\n" >> ~/.aws/credentials
Thanks
I use an alternative method to build and push Docker images to AWS ECR that requires no environment variables:
image: amazon/aws-cli
options:
  docker: true
oidc: true
aws:
  oidc-role: arn:aws:iam::123456789012:role/BitBucket-ECR-Access
pipelines:
  default:
    - step:
        name: Build and push to ECR
        script:
          - aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
          - docker build -t 123456789012.dkr.ecr.us-east-1.amazonaws.com/myimage:0.0.1 .
          - docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myimage:0.0.1
You will need to update the role ARN to match a Role you have created in your AWS IAM console with sufficient permissions.
Try this:
bitbucket-pipeline.yml
pipelines:
  custom:
    example-image-builder:
      - step:
          image: python:3
          script:
            - export CLONE_ROOT=${BITBUCKET_CLONE_DIR}/../example
            - export IMAGE_LOCATION=<ENTER IMAGE LOCATION HERE>
            - export BUILD_CONTEXT=${BITBUCKET_CLONE_DIR}/build/example-image-builder/dockerfile
            - pip install awscli
            - aws s3 cp s3://example-deployment-bucket/deploy-keys/bitbucket-read-key .
            - chmod 0400 bitbucket-read-key
            - ssh-agent bash -c 'ssh-add bitbucket-read-key; git clone --depth 1 git@bitbucket.org:example.git -b master ${CLONE_ROOT}'
            - cp ${CLONE_ROOT}/requirements.txt ${BUILD_CONTEXT}/requirements.txt
            - eval $(aws ecr get-login --region us-east-1 --no-include-email)
            - docker build --no-cache --file=${BUILD_CONTEXT}/dockerfile --build-arg AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID} --build-arg AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY} --tag=${IMAGE_LOCATION} ${BUILD_CONTEXT}
            - docker push ${IMAGE_LOCATION}
options:
  docker: true
dockerfile
FROM python:3
MAINTAINER Me <me@me.me>
COPY requirements.txt requirements.txt
ENV DEBIAN_FRONTEND noninteractive
ARG AWS_ACCESS_KEY_ID
ARG AWS_SECRET_ACCESS_KEY
RUN apt-get update && apt-get -y install stuff
ENTRYPOINT ["/bin/bash"]
I am running out of time, so for now I included more than just the answer to your question. But this would be a good enough template to work from. Ask questions in the comments if there is any line you don't understand and I will edit the answer.
I had the same issue. The error is mainly due to an old version of awscli.
You need to use a Docker image with a more recent awscli.
For my project I use linkmobility/maven-awscli.
You need to set the environment variables in Bitbucket.
Small changes to your pipeline:
image: Docker-Image-With-awscli
eval $(aws ecr get-login --no-include-email --region ${AWS_DEFAULT_REGION} )
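Note that aws ecr get-login was removed in AWS CLI v2, so if your image ships v2 the equivalent login (reusing the same region variable; AWS_REPO_ID is the account-ID variable used in the question) is:
aws ecr get-login-password --region ${AWS_DEFAULT_REGION} | docker login --username AWS --password-stdin ${AWS_REPO_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com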