I'm trying to use Elastic Beanstalk to deploy my Django server.
My problem is that part of my deployment process is to run "npm install" from my package.json and then execute webpack (npx webpack ..... --output main.js).
How can I do that while maintaining an easy deployment process (eb deploy) and without committing main.js to the repository?
To do it, you'll probably need .ebextensions to configure your Elastic Beanstalk environment; see the AWS documentation on .ebextensions.
I recently deployed my Symfony app on Elastic Beanstalk, which needed Yarn to execute webpack.
To do it, I created a .config file in which I wrote the commands to install Yarn, and another .config file to run Yarn at each deployment. All .config files go in the .ebextensions directory at the root of the project.
commands:
  01_install_node:
    command: |
      sudo curl --silent --location https://rpm.nodesource.com/setup_8.x | sudo bash -
      sudo yum -y install nodejs
  02_install_yarn:
    command: |
      sudo wget https://dl.yarnpkg.com/rpm/yarn.repo -O /etc/yum.repos.d/yarn.repo
      sudo yum -y install yarn
You can use the container_commands key to execute commands that affect your application source code. Container commands run after the application and web server have been set up and the application version archive has been extracted.
container_commands:
  02_run_yarn:
    command: |
      yarn install
      yarn run encore production
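Adapting the same idea to the npm/webpack workflow from the original question, a minimal sketch might look like this (the file name and the --config flag are illustrative; the question's full webpack arguments were elided):

container_commands:
  01_npm_build:
    command: |
      npm install
      # --config flag assumed; adjust to your actual webpack invocation
      npx webpack --config webpack.config.js --output main.js

Because the build runs on the instance during deployment, main.js never needs to be committed to the repository.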
I have a Serverless application using LocalStack that I am trying to get fully running via Docker.
I have a docker-compose file that starts localstack for me.
version: '3.1'
services:
  localstack:
    image: localstack/localstack:latest
    environment:
      - AWS_DEFAULT_REGION=us-east-1
      - EDGE_PORT=4566
      - SERVICES=lambda,s3,cloudformation,sts,apigateway,iam,route53,dynamodb
    ports:
      - '4566-4597:4566-4597'
    volumes:
      - "${TEMPDIR:-/tmp/localstack}:/temp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
When I run docker-compose up and then deploy my application to LocalStack using sls deploy, everything works as expected. However, I want Docker to run everything for me: I should be able to run a single Docker command that starts LocalStack and deploys my service to it.
I have added a Dockerfile to my project containing this:
FROM node:16-alpine
RUN apk update
RUN npm install -g serverless; \
    npm install -g serverless-localstack;
EXPOSE 3000
CMD ["sls", "deploy", "--host", "0.0.0.0"]
I then run docker build -t serverless/docker . followed by docker run -p 49160:3000 serverless/docker, but I receive the following error:
This command can only be run in a Serverless service directory. Make sure to reference a valid config file in the current working directory if you're using a custom config file
I guess this is what would happen if I tried to run sls deploy in the wrong folder. I have logged into the Docker container and cannot see the app I want to run there. What am I missing in the Dockerfile that is needed to package it up?
Thanks
Execute the pwd command inside the container while running it. Try
docker run -it serverless/docker pwd
The error shows that sls is not able to find the config file in the current working directory. Either add your config file to the working directory (include this copy step in the Dockerfile) or copy it to a specific location in the container and pass --config in CMD (sls deploy --config ...).
This command can only be run in a Serverless service directory. Make sure to reference a valid config file in the current working directory
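A minimal sketch of the first option (copying the project in), assuming serverless.yml sits at the project root; paths are illustrative:

FROM node:16-alpine
RUN npm install -g serverless serverless-localstack
# Copy the project, including serverless.yml, so that sls deploy
# finds its config in the working directory.
WORKDIR /app
COPY . /app
CMD ["sls", "deploy"]
# Alternative: keep the code elsewhere and point sls at the config:
# CMD ["sls", "deploy", "--config", "/app/serverless.yml"]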
Be sure that you have serverless installed.
Once installed, create a service (a trimmed example of the generated serverless.yml follows below):
% sls create --template aws-nodejs --path myService
cd to the folder containing serverless.yml:
% cd myService
This will deploy the function to AWS Lambda:
% sls deploy
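For reference, the aws-nodejs template generates a serverless.yml along these lines (trimmed; the exact runtime depends on your Serverless version):

service: myService
provider:
  name: aws
  runtime: nodejs12.x
functions:
  hello:
    handler: handler.hello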
How To Deploy a Node App on AWS Elastic Beanstalk, Docker, and GitLab CI
I've created a simple Node application and Dockerized it.
What I'm trying to do is deploy my application using GitLab CI.
This is what I have so far:
image: docker:git

services:
  - docker:dind

stages:
  - build
  - release
  - release-prod

variables:
  CI_REGISTRY: registry.gitlab.com
  CONTAINER_TEST_IMAGE: registry.gitlab.com/testapp/routing:$CI_COMMIT_REF_NAME
  CONTAINER_RELEASE_IMAGE: registry.gitlab.com/testapp/routing:latest

before_script:
  - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY"

build:
  stage: build
  script:
    - docker build -t $CONTAINER_TEST_IMAGE -f Dockerfile.prod .
    - docker push $CONTAINER_TEST_IMAGE

release-image:
  stage: release
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    - docker tag $CONTAINER_TEST_IMAGE $CONTAINER_RELEASE_IMAGE
    - docker push $CONTAINER_RELEASE_IMAGE
  only:
    - master

release-prod:
  stage: release-prod
  script:
  when: manual
I'm stuck on the release-prod stage; I'm just not sure how I can deploy the app to AWS Beanstalk.
The Docker images have already been created and stored in the GitLab registry. All I want to do is instruct AWS Beanstalk to pull the Docker images from the GitLab registry and start the application.
I also have a Dockerrun.aws.json which defines the services.
Your Dockerrun.aws.json file is what Beanstalk uses as the final say in what is deployed.
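For a single-container setup, a hedged sketch of that file might look like this (version 1 format; the S3 bucket holding the registry credentials is hypothetical):

{
  "AWSEBDockerrunVersion": "1",
  "Authentication": {
    "Bucket": "my-creds-bucket",
    "Key": "dockercfg"
  },
  "Image": {
    "Name": "registry.gitlab.com/testapp/routing:latest",
    "Update": "true"
  },
  "Ports": [
    { "ContainerPort": "3000" }
  ]
}

The Authentication block is how Beanstalk logs in to a private registry such as GitLab's: it points to a dockercfg-style credentials file stored in S3.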
The option I found to work for us was to make a custom Docker image with the EB CLI installed, so we can run eb deploy ... from the gitlab-ci.yml file.
This requires AWS permissions for the runner to be able to access the AWS service, so a user or an IAM role comes into play; but it would no matter how it's set up.
Store the AWS user keys in the GitLab project's CI/CD settings (ideally it's set up to use an IAM role instead, but user/keys will work; temporary credentials might be the best thing for this, but I'm not familiar with how to get them).
We use a custom EC2 instance as our runner to run the pipeline, so I'm not sure about shared runners; we had a concern about passing AWS user creds to a shared runner pipeline.
build stage:
build and push the Docker image to our ECR repository (or wherever fits your use case)
deploy stage:
use a custom image stored in GitLab that has the EB CLI preinstalled, then run eb deploy env-name
This is the Dockerfile we use for our PHP project. Some of the installs aren't necessary for your case... This could also be improved by adding a USER and pinning package versions, but it will create a Docker image that has the EB CLI installed.
FROM node:12

RUN apt-get update && apt-get -y --allow-unauthenticated install apt-transport-https ca-certificates curl gnupg2 software-properties-common ruby-full \
    && add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"

RUN apt-get update && apt-get -y --allow-unauthenticated install docker-ce \
    && apt-get -y install build-essential zlib1g-dev libssl-dev libncurses-dev libffi-dev libsqlite3-dev libreadline-dev libbz2-dev python-pip python3-pip

RUN git clone https://github.com/aws/aws-elastic-beanstalk-cli-setup.git \
    && ./aws-elastic-beanstalk-cli-setup/scripts/bundled_installer

RUN python3 --version && apt-get update && apt-get -y install python3-pip \
    && pip3 install awscli boto3 botocore && pip3 install boto3 botocore --upgrade
Example gitlab-ci.yml setup
release-prod:
  image: registry.gitlab.com/your-acct/project/custom-image
  stage: release-prod
  script:
    - service docker start
    - echo 'export PATH="/root/.ebcli-virtual-env/executables:$PATH"' >> ~/.bash_profile && source ~/.bash_profile
    - echo 'export PATH=/root/.pyenv/versions/3.7.2/bin:$PATH' >> /root/.bash_profile && source /root/.bash_profile
    - eb deploy your-environment
  when: manual
You could also bake the echo commands into the custom GitLab image, as sketched below, so that all you need to run is eb deploy ...
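A hedged sketch of that Dockerfile addition, using the paths from the job above:

# Bake the PATH exports into the custom image so each job
# no longer has to export them itself.
ENV PATH="/root/.ebcli-virtual-env/executables:/root/.pyenv/versions/3.7.2/bin:${PATH}"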
Hope this helps a little
Although there are a couple of different ways to achieve this, I finally found a proper solution for my use case. I have documented it here: https://medium.com/voices-of-plusdental/gitlab-ci-deployment-for-php-applications-to-aws-elastic-beanstalk-automated-qa-test-environments-253ca4932d5b. Using eb deploy was the easiest and shortest version, and it also allows me to customize the instances in any way I want.
I am using AWS CDK (with Python) for a containerized application that runs on Fargate. I would like to run cdk deploy in a GitLab CI process and pass the git tag as an environment variable that replaces the container running in Fargate. I am currently doing something similar with CloudFormation (aws cloudformation update-stack ...). Is anyone else doing CI/CD with AWS CDK in this way? Is there a better way to do it?
Also, what should I use for my base image for this job? I was thinking that I can either start with a Python container and install Node or vice versa. Or maybe there is a prebuilt container somewhere that I haven't been able to find yet.
Here is a start that seems to be working well:
CDK:
  image: python:3.8
  stage: deploy
  before_script:
    - apt-get -qq update && apt-get -y install nodejs npm
    - node -v
    - npm i -g aws-cdk
    - cd awscdk
    - pip3 install -r requirements.txt
  script:
    - cdk diff
    - cdk deploy --require-approval never
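For the git-tag part of the question, one hedged approach (variable and image names are illustrative) is to have the CI job pass the tag through an environment variable and read it in the CDK app:

# Hypothetical sketch: the CI job sets IMAGE_TAG (e.g. from GitLab's
# CI_COMMIT_TAG) and the CDK app uses it to pick the Fargate image.
import os

from aws_cdk import aws_ecs as ecs

image_tag = os.environ.get("IMAGE_TAG", "latest")
container_image = ecs.ContainerImage.from_registry(
    f"registry.gitlab.com/your-group/your-app:{image_tag}"
)

Each pipeline run then redeploys the Fargate service with whatever tag the CI job exported.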
Edit 2020-05-04:
CDK can build docker images during cdk deploy, but it needs access to docker. If you don't need docker, the above CI job definition should be fine. Here's the current CI job I'm using:
cdk deploy:
  image: docker:19.03.1
  services:
    - docker:19.03.5-dind
  stage: deploy
  only:
    - master
  before_script:
    - apk add --no-cache python3
    - python3 -V
    - pip3 -V
    - apk add nodejs-current npm
    - node -v
    - npm i -g aws-cdk
    - cd awscdk
    - pip3 install -r requirements.txt
  script:
    - cdk bootstrap aws://$AWS_ACCOUNT_ID/$AWS_DEFAULT_REGION
    - cdk deploy --require-approval never
The cdk bootstrap is needed because I am using assets in my cdk code:
self.backend_task.add_container(
    "DjangoBackend",
    image=ecs.AssetImage(
        "../backend",
        file="scripts/prod/Dockerfile",
        target="production",
    ),
    logging=ecs.LogDrivers.aws_logs(stream_prefix="Backend"),
    environment=environment_variables,
    command=["/start_prod.sh"],
)
Here's more information on cdk bootstrap: https://github.com/aws/aws-cdk/blob/master/design/cdk-bootstrap.md
You definitely have to use cdk deploy inside the CI/CD pipeline if you have Lambda or ECS assets; otherwise, you could run cdk synth and pass the resulting CloudFormation template to AWS CodeDeploy. That means a lot of your CI/CD time will be spent deploying, which might drain your free-tier build minutes or just mean you pay more (AWS CodeDeploy is free).
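A hedged sketch of that synth-only alternative (stack and file names are illustrative), here deploying the emitted template with the plain AWS CLI rather than from CDK itself:

synth-and-deploy:
  stage: deploy
  script:
    - cdk synth MyStack > template.yaml
    # --capabilities is needed if the stack creates IAM resources
    - aws cloudformation deploy --template-file template.yaml --stack-name my-stack --capabilities CAPABILITY_NAMED_IAM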
I do something similar with Go in CircleCI. I use the Go base image and install Node.js and CDK. I use this base image to build all my Go binaries, the Vue.js frontend, and to compile the CDK TypeScript and deploy it.
FROM golang:1.13
RUN go get -u -d github.com/magefile/mage
WORKDIR $GOPATH/src/github.com/magefile/mage
RUN go run bootstrap.go
RUN curl -sL https://deb.nodesource.com/setup_12.x | bash -
RUN apt-get install -y nodejs
RUN npm i -g aws-cdk@1.36.x
RUN npm i -g typescript
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
RUN apt update && apt install -y yarn
I hope that helps.
Also, what should I use for my base image for this job? I was thinking that I can either start with a Python container and install Node or vice versa. Or maybe there is a prebuilt container somewhere that I haven't been able to find yet.
For anyone looking for how to implement CI/CD with AWS CDK Python in 2022, here's a tested solution:
1. Use python:3.10.8 as the base image in your CI/CD (or any image with Debian 11)
2. Install Node.js 16 from NodeSource: curl -fsSL https://deb.nodesource.com/setup_16.x | bash - && apt-get install -y nodejs
3. Install aws-cdk: npm i -g aws-cdk
You can add the latter two steps as inline scripts in your CI/CD pipeline so you do not need to build your own Docker image.
Here's a full example for Bitbucket Pipelines:
image: python:3.10.8

run-tests: &run-tests
  step:
    name: Run tests
    script:
      # Node 16
      - curl -fsSL https://deb.nodesource.com/setup_16.x | bash - && apt-get install -y nodejs
      - npm i -g aws-cdk
      - pip install -r requirements-dev.txt
      - pytest

pipelines:
  pull-requests:
    "**":
      - <<: *run-tests
  branches:
    master:
      - <<: *run-tests
Note that the above instructions do not install the Docker engine. In Bitbucket Pipelines, Docker can be used simply by adding

services:
  - docker

in the configuration file.
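For instance, a deploy step that needs Docker might look like this (step name and command are illustrative), mirroring the anchor pattern used above:

deploy: &deploy
  step:
    name: CDK deploy
    services:
      - docker
    script:
      - cdk deploy --require-approval never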
If cdk deploy is giving you the error:
/usr/lib/node_modules/aws-cdk/lib/index.js:12422
home = path.join((os.userInfo().homedir ?? os.homedir()).trim(), ".cdk");
then the Node version is out of date. This can be fixed by updating the Docker image, which then also requires installing pip3:
cdk deploy:
  image: docker:20.10.21
  services:
    - docker:20.10.21-dind
  stage: deploy
  only:
    - master
  before_script:
    - apk add --no-cache python3
    - python3 -V
    - apk add py3-pip
    - pip3 -V
I am attempting to upload a Rails app that was recently updated from Rails 5.2 to 6 to AWS Elastic Beanstalk. We had someone else working on this, but with the pandemic he had to step away, and now our site is in limbo and I have not been able to update it. I have searched many different variations of my problem, but no solutions have worked yet.
The app was working on EB with Rails 5.2, and I have the app running in 6.0 locally. When I run eb deploy I get this error:
MacBook-Pro:app $ eb deploy
Starting environment deployment via CodeCommit
--- Waiting for Application Versions to be pre-processed ---
Finished processing application version app-0e294-200420_110159
2020-04-21 00:22:24 INFO Environment update is starting.
2020-04-21 00:23:07 INFO Deploying new version to instance(s).
2020-04-21 00:27:59 ERROR [Instance: i-0e613ac1fe175f3f6] Command failed on instance. Return code: 1 Output: (TRUNCATED)...-- : Writing /var/app/ondeck/public/assets/application-06fe3df6175ba0def3d0e732489f883d0c09de.css.gz
Webpacker requires Node.js ">=10.13.0" and you are using v6.17.1
Please upgrade Node.js https://nodejs.org/en/download/
Exiting!.
Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/11_asset_compilation.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
2020-04-21 00:27:59 INFO Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
2020-04-21 00:28:00 ERROR Unsuccessful command execution on instance id(s) 'i-0e613ac1fe175f3f6'. Aborting the operation.
2020-04-21 00:28:00 ERROR Failed to deploy application.
ERROR: ServiceError - Failed to deploy application.
It was giving me a bundler error before this, which I was able to fix by adding a file to .ebextensions that installs the correct version of bundler. I figured the solution to this would be similar.
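That earlier fix presumably looked something like this hedged sketch (the bundler version is illustrative; pin it to whatever your Gemfile.lock expects):

commands:
  01_install_bundler:
    # pin bundler to the version your Gemfile.lock expects
    command: gem install bundler -v 2.1.4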
This post was close to my problem:
Deploy rails react app with webpacker gem on AWS elastic beanstalk
So I added this file to my .ebextensions based on the selected answer:
01_update_node.config
commands:
  01_download_nodejs:
    command: curl --silent --location https://rpm.nodesource.com/setup_10.x | sudo bash -
  02_install_nodejs:
    command: yum -y install nodejs
However, it did not appear to do anything; I still get the same error. I tried a couple of variations of the file based on a few other blog posts about the issue, but the error remains. Is anyone able to point me in the right direction or offer any insight into the problem? I apologize for not being very familiar with AWS or EB yet, but I will do my best to answer additional questions.
Maybe it is caused by yarn install running later. I tried the following scripts, removed yarn install, set RAILS_SKIP_ASSET_COMPILATION=false, and it worked for me.
commands:
  01_install_yarn:
    command: "sudo wget https://dl.yarnpkg.com/rpm/yarn.repo -O /etc/yum.repos.d/yarn.repo && curl --silent --location https://rpm.nodesource.com/setup_12.x | sudo bash - && sudo yum install yarn -y"
  02_download_nodejs:
    command: curl --silent --location https://rpm.nodesource.com/setup_12.x | sudo bash -
  03_install_nodejs:
    command: yum -y install nodejs
  04_install_packages:
    command: sudo yum install -y yarn
This is how I did it on Amazon Linux 2:
Create this file in .platform/hooks/prebuild/yarn_config.sh:
#!/usr/bin/env bash
curl --silent --location https://rpm.nodesource.com/setup_16.x | sudo bash -
sudo yum -y install nodejs
sudo wget https://dl.yarnpkg.com/rpm/yarn.repo -O /etc/yum.repos.d/yarn.repo
sudo yum -y install yarn
yarn install
Give it the right permission: chmod +x .platform/hooks/prebuild/yarn_config.sh
And the error is gone, while your assets still compile (unlike with the accepted answer).
I have an aws code pipeline which currently successfully deploys code to my EC2 instances.
I have a Docker image that has the necessary setup to run my code (Dockerfile provided below). When I run docker run -t it just loads up an interactive shell in my container but then hangs on any command (e.g. ls).
Any advice?
FROM continuumio/anaconda2
RUN apt-get install git
ENV PYTHONPATH /app/phdcode/panaxeaA1
# setting up venv
RUN conda create --name panaxea -y
RUN /bin/bash -c "source activate panaxea"
# Installing necessary packages
RUN conda install -c guyer pysparse
RUN conda install -c conda-forge pympler
RUN pip install pysparse
RUN git clone https://github.com/usnistgov/fipy.git
RUN cd fipy && python setup.py install
RUN cd ~
WORKDIR /app
COPY . /app
RUN cd panaxeaA1/models/alpha04c/launchers
RUN echo "launching..."
CMD python launcher_260818_aws.py
docker run -t simply starts a Docker container with a pseudo-TTY connection to the container's stdin. However, just running this command does not establish an interactive shell to the container, which you need in order to run commands within it.
You need to also append the -i command line flag along with the shell you wish to use. For example, docker run -it IMAGE_NAME bash will launch a container from the image you provide using bash as your interactive shell. You can then run Bash commands as you normally would.
If you are looking for a simple way to run containers on EC2 instances in AWS, I highly recommend AWS EC2 Container Service (ECS) as an option. It is a very simple service for running containers that abstracts and manages much of the server level work involved in running containers.