I am trying to set up a pipeline that builds my React application and deploys it to my AWS S3 bucket. It builds fine, but fails on the deploy step.
My .gitlab-ci.yml is:
image: node:latest
variables:
  AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
  AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
  S3_BUCKET_NAME: $S3_BUCKET_NAME
stages:
  - build
  - deploy
build:
  stage: build
  script:
    - npm install --progress=false
    - npm run build
deploy:
  stage: deploy
  script:
    - aws s3 cp --recursive ./build s3://MYBUCKETNAME
It is failing with the error:
sh: 1: aws: not found
@jellycsc is spot on.
Otherwise, if you want to just use the node image, then you can try something like Thomas Lackemann details (here), which is to use a shell script to install Python, the AWS CLI, and zip, and then use those tools to do the deployment. You'll need AWS credentials stored as environment variables in your GitLab project.
I've successfully used both approaches.
The error is telling you AWS CLI is not installed in the CI environment. You probably need to use GitLab’s AWS Docker image. Please read the Cloud deployment documentation.
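Whichever route you take, only the deploy job needs the CLI. Below is a minimal sketch, assuming you switch just that job to an image that ships the AWS CLI (amazon/aws-cli is used here as an example; GitLab's cloud-deploy images or a pip-installed CLI inside node:latest work the same way), and assuming the build job exports ./build as an artifact:

deploy:
  stage: deploy
  image:
    name: amazon/aws-cli:latest   # example image with the AWS CLI preinstalled
    entrypoint: [""]              # override the image entrypoint so GitLab can run the script
  script:
    # AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY are picked up from the project's CI/CD variables
    - aws s3 cp --recursive ./build s3://$S3_BUCKET_NAME
  dependencies:
    - build                       # assumes the build job declares ./build in its artifacts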
Related
I have an ubuntu EC2 instance where the docker container runs. I need a simple CD architecture that will pull code from GitHub and run docker build... and docker run ... on my EC2 instance after every code push.
I've tried with GitHub actions and I'm able to connect to the EC2 instance but it gets stuck after docker commands.
name: scp files
on: [push]
jobs:
  build:
    name: Build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@master
      - name: Pull changes and run docker
        uses: fifsky/ssh-action@master
        with:
          command: |
            cd test_ec2_deployment
            git pull
            sudo docker build --network host -f Dockerfile -t test .
            sudo docker run -d --env-file=/home/ubuntu/.env -ti test
          host: ${{ secrets.HOST }}
          user: ubuntu
          key: ${{ secrets.SSH_KEY }}
          args: "-tt"
Output:
Step 12/13 : RUN /usr/bin/crontab /etc/cron.d/cron-job
---> Running in 52a5a0174958
Removing intermediate container 52a5a0174958
---> badf6fdaf774
Step 13/13 : CMD printenv > /etc/environment && cron -f
---> Running in 0e9fd12db4f7
Removing intermediate container 0e9fd12db4f7
---> 888a2a9e5910
Successfully built 888a2a9e5910
Successfully tagged test:latest
Also, I've tried to separate the docker commands into a .sh script, but it didn't help. Here is a related issue: https://github.com/fifsky/ssh-action/issues/30.
I wonder if it's possible to implement this CD structure using AWS CodePipeline or any other AWS services. Also, I'm not sure whether it would be too complicated to set up Jenkins for this case.
This is definitely possible using AWS CodePipeline, but it will require a Lambda function since you want to deploy your container to your own EC2 instance (which I think is not necessary unless you have a specific use case). This is how your pipeline would look:
AWS CodePipeline stages:
Source: Connect your GitHub repository. In the background, it will automatically clone the code from your Git repo, zip it, and store it in S3 to be used by the next stage. There are other options as well if you want to do it all yourself. For example:
Using GitHub Actions, you zip the code and store it in an S3 bucket. On the AWS side, you add S3 as the source and provide the bucket and object key, so whenever this object version changes, it will trigger the pipeline.
You can also use GitHub Actions to actually build your Docker image and push it to AWS ECR (container registry) and skip the build stage entirely. So, either do the build on GitHub or on the AWS side; it's up to you.
Build: For this stage (if you decide to build using AWS), you can either use Jenkins or AWS CodeBuild. I have used AWS CodeBuild, so IMO it is a fairly easy and quick solution for the build stage. At this stage, it will take the zip file from the S3 bucket, unzip it, build your Docker container image and push it to AWS ECR (a buildspec sketch for this follows the list below).
Deploy: Since you want to run your Docker container on EC2, there is no straightforward way to do this. However, you can utilize the power of a Lambda function to run your image on your own EC2 instance, but you will have to code that function yourself, which could be tricky. I would highly recommend using AWS ECS to run your container in a more manageable way. You can essentially do everything in an ECS container that you want to do on your EC2 instance.
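To make the Build stage above more concrete, here is a rough buildspec sketch for a CodeBuild project that builds the image and pushes it to ECR. ECR_REGISTRY and IMAGE_NAME are placeholders, and the login command shown is the AWS CLI v2 syntax:

version: 0.2
phases:
  pre_build:
    commands:
      # ECR_REGISTRY (e.g. <account-id>.dkr.ecr.<region>.amazonaws.com) and IMAGE_NAME are placeholders
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $ECR_REGISTRY
  build:
    commands:
      - docker build -t $ECR_REGISTRY/$IMAGE_NAME:latest .
  post_build:
    commands:
      - docker push $ECR_REGISTRY/$IMAGE_NAME:latest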
As @Myz suggested, this can be done using GitHub Actions with AWS ECR and AWS ECS. Below are some articles I was following to solve the issue:
https://docs.github.com/en/actions/deployment/deploying-to-your-cloud-provider/deploying-to-amazon-elastic-container-service
https://kubesimplify.com/cicd-pipeline-github-actions-with-aws-ecs
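The first guide boils down to a workflow roughly like the sketch below. The action versions, region, repository, task definition file, and cluster/service names are assumptions to replace with your own, and the official example also re-renders the task definition with the new image tag, which is omitted here for brevity:

name: Deploy to Amazon ECS
on:
  push:
    branches: [ main ]          # assumption: deploy from main
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1           # placeholder region
      - id: ecr-login
        uses: aws-actions/amazon-ecr-login@v1
      - name: Build and push image to ECR
        run: |
          # my-app is a placeholder ECR repository name
          docker build -t ${{ steps.ecr-login.outputs.registry }}/my-app:${{ github.sha }} .
          docker push ${{ steps.ecr-login.outputs.registry }}/my-app:${{ github.sha }}
      - name: Deploy the new image to ECS
        uses: aws-actions/amazon-ecs-deploy-task-definition@v1
        with:
          task-definition: task-definition.json    # placeholder task definition file
          service: my-service                      # placeholder ECS service
          cluster: my-cluster                      # placeholder ECS cluster
          wait-for-service-stability: true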
I have an app using:
SAM
AWS S3
AWS Lambda based on Docker
AWS SAM pipeline
Github function
In the Dockerfile I have:
RUN aws s3 cp s3://mylambda/distilBERT distilBERT.tar.gz
Resulting in the error message:
Step 6/8 : RUN aws s3 cp s3://mylambda/distilBERT distilBERT.tar.gz
---> Running in 786873b916db
fatal error: Unable to locate credentials
Error: InferenceFunction failed to build: The command '/bin/sh -c aws s3 cp s3://mylambda/distilBERT distilBERT.tar.gz' returned a non-zero code: 1
I need to find a way to store the credentials in a secure manner. Is it possible with GitHub secrets or something similar?
Thanks
My solution may be a bit longer, but I feel it solves your problem, and:
It does not expose any secrets
It does not require any manual work
It is easy to change your AWS keys later if required.
Steps:
You can add the environment variables in GitHub Actions (since you already mentioned GitHub Actions) as secrets.
In your GitHub CI/CD flow, when you build the Dockerfile, you can create an AWS credentials file.
- name: Configure AWS credentials
  run: |
    echo "
    [default]
    aws_access_key_id = $ACCESS_KEY
    aws_secret_access_key = $SECRET_ACCESS_KEY
    " > credentials
  env:
    ACCESS_KEY: ${{ secrets.AWS_ACCESS_KEY_ID }}
    SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
In your Dockerfile, you can add instructions to COPY this credentials file and move it into ~/.aws, where the AWS CLI looks for it:
COPY credentials credentials
RUN mkdir ~/.aws
RUN mv credentials ~/.aws/credentials
Changing your credentials later only requires updating the secrets in your GitHub Actions settings.
Docker by default does not have access to the .aws folder on the host machine. You could either pass the AWS credentials as environment variables to the Docker image:
ENV AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
ENV AWS_SECRET_ACCESS_KEY=...
Keep in mind that hardcoding AWS credentials in a Dockerfile is bad practice. To avoid this, you can pass the environment variables at runtime using the docker run -e MYVAR1 or docker run --env MYVAR2=foo arguments. Another solution would be to use an .env file for the environment variables.
A more involved solution would be to map a volume for the ~/.aws folder from the host machine in the Docker image.
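As a rough illustration of those two runtime options (my-image and the mount path are placeholders):

# Forward the credentials from the host environment at run time (nothing baked into the image)
docker run -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY my-image

# Or mount the host's ~/.aws folder read-only into the container
docker run -v ~/.aws:/root/.aws:ro my-image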
Can you help me find a useful step-by-step guide or a Gist outlining in detail how to configure CircleCI (using 2.0 syntax) to deploy to AWS EC2?
I understand the basic requirements and the moving pieces, but unsure what to put in the .circleci/config.yml file in the deploy step.
So far I got:
A "Hello World" Node.js app which is building successfully in CircleCI (just without the deploy step)
A running EC2 instance (Ubuntu 16.04)
An IAM user with sufficient permissions added to CircleCI for that particular job
Can you help out with the CircleCI deploy step?
Following your repository, you could create a script like this one: deploy.sh
#!/bin/bash
echo "Start deploy"
cd ~/circleci-aws
git pull
npm i
npm run build
pm2 stop build/server
pm2 start build/server
echo "Deploy end"
And in your .circleci/config.yml you do this:
deploy:
  docker:
    - image: circleci/node:chakracore-8.11.1
  steps:
    - restore_cache:
        keys:
          - v1-dependencies-{{ checksum "package.json" }}
    - run:
        name: AWS EC2 deploy
        command: |
          # upload all the code to the machine
          scp -r -o StrictHostKeyChecking=no ./ ubuntu@13.236.1.107:/home/circleci-aws/
          # run the script on the machine
          ssh -o StrictHostKeyChecking=no ubuntu@13.236.1.107 "./deploy.sh"
But this is quite ugly; try something like AWS CodeDeploy instead, or ECS if you want to use containers.
I'm using CircleCI to deploy my project to my AWS S3 bucket.
After many attempts I was finally able to make my config.yml work, and according to the CircleCI interface everything is running successfully.
The problem is that when I access my bucket there's nothing there.
I already tried this:
- run:
    command: "aws s3 sync myAppPath s3://myBucketName"
Could anyone help? I have no errors and everything finishes successfully, but there are no files in my bucket.
Thanks in advance
You have to add credentials.
Add environment variables to the project: https://circleci.com/docs/2.0/env-vars/
And then configure your .circleci/config.yml:
# deploy to aws s3
deploy:
  docker:
    - image: cibuilds/aws:1.15.73
  environment:
    aws_access_key_id: $AWS_ACCESS_KEY_ID
    aws_secret_access_key: $AWS_SECRET_ACCESS_KEY
  steps:
    - attach_workspace:
        at: ./workspace
    - run:
        name: Deploy to S3 if tests pass and branch is develop
        command: aws s3 sync workspace/public s3://your.bucket/ --delete
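The run step's name mentions "if tests pass and branch is develop", but that condition actually lives in the workflows section. Here is a minimal sketch of what it could look like, assuming a build job that runs the tests and persists ./workspace:

workflows:
  version: 2
  build-deploy:
    jobs:
      - build                 # assumed job that runs the tests and persists ./workspace
      - deploy:
          requires:
            - build           # deploy only runs if build succeeded
          filters:
            branches:
              only: develop   # deploy only from the develop branch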
Also, just so you know: to debug the AWS CLI you can use the CircleCI CLI. Once you connect to the CircleCI job by SSH, try in your terminal:
aws s3 sync workspace/public s3://your.bucket/ --debug
I just created a brand new AWS Codestar project.
As far as I can tell, CodeStar is just a dashboard that integrates multiple AWS products.
There is one thing that I don't know how to configure yet, and it is branch deployments.
In my git repository, I have 3 branches: master, develop and staging
In an ideal world, master deploys to production, develop to the development environment, and staging to the QA environment.
I don't know how to configure this pipeline using AWS, and I haven't been able to locate the relevant documentation in their developer portal.
This is my buildspec.yml file just in case it can be configured there:
version: 0.2
phases:
  install:
    commands:
      - echo Installing NPM Packages...
      - npm install
  build:
    commands:
      - aws cloudformation package --template template.yml --s3-bucket $S3_BUCKET --output-template template-export.yml
artifacts:
  type: zip
  files:
    - template-export.yml
This is a project that uses AWS API Gateway to route requests to AWS Lambda functions if that matters.
Sadly, AWS CodePipeline doesn't support passing in the git branch. Last year they only added support for passing the git commit SHA-1 (more can be found here).
I'd suggest you follow the CodePipeline docs here to create 3 pipelines, one for each branch (you can even create a special buildspec_dev.yaml or buildspec_prod.yaml; check out more examples here).
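Each of those pipelines would watch its own branch in the Source stage and use its own CodeBuild project, and each CodeBuild project can point at its own buildspec file and S3 bucket. A rough CloudFormation sketch for the dev project, where the project name, role, bucket and build image are placeholders:

DevBuildProject:
  Type: AWS::CodeBuild::Project
  Properties:
    Name: myapp-dev-build                 # placeholder project name
    ServiceRole: !Ref CodeBuildRole       # placeholder IAM role for CodeBuild
    Artifacts:
      Type: CODEPIPELINE
    Environment:
      Type: LINUX_CONTAINER
      ComputeType: BUILD_GENERAL1_SMALL
      Image: aws/codebuild/standard:5.0
      EnvironmentVariables:
        - Name: S3_BUCKET
          Value: myapp-dev-artifacts      # placeholder; a different bucket per environment
    Source:
      Type: CODEPIPELINE
      BuildSpec: buildspec_dev.yaml       # this pipeline's CodeBuild project reads the dev buildspec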