Docker & Amazon Beanstalk - Deploy an Angular application

I am trying to deploy a dist folder that is generated with versioning by Gulp using a Dockerfile and with Amazon EB.
This fails when I run eb deploy with:
COPY dist /var/www/html dist: no such file or directory. Check snapshot logs for details. Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/03build.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
Is this because the dist directory is not under source control? If so, what is the best way to transfer the dist directory up to EB while still using my Dockerfile to deploy and configure the application?
Below is my Dockerfile:
FROM nimmis/apache-php5
COPY dist /var/www/html
WORKDIR /var/www/html
EXPOSE 80

If you really want the dist files in your Docker image, then install Gulp and run the command to generate the dist folder within the Dockerfile.
See the RUN instruction for Dockerfiles.
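A rough sketch of that approach, assuming the project's gulpfile exposes a build task that writes dist/ (the Node installation steps and the task name are assumptions, not from the original question):

FROM nimmis/apache-php5
# assumption: install Node.js and the Gulp CLI so the image can build dist itself
RUN apt-get update && apt-get install -y nodejs npm && npm install -g gulp-cli
COPY . /usr/src/app
WORKDIR /usr/src/app
# assumption: "gulp build" is the task that generates dist/
RUN npm install && gulp build && cp -r dist/. /var/www/html/
WORKDIR /var/www/html
EXPOSE 80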

I think my understanding of eb deploy was the issue. The answer is to zip the dist directory along with the other files using a bash script, creating my own artifact to reference in the config.yml file:
e.g.
dist (application files)
config (php.ini and 000-default.conf)
Dockerfile
Then add the artifact to the config.yml:
deploy:
  artifact: dist.zip
I was then able to write a bash script that creates a version number label and deploys to Beanstalk:
eb deploy --staged --label {version_number}
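A minimal sketch of such a script, assuming config.yml already points at artifact: dist.zip (the timestamp-based label scheme is an assumption):

#!/usr/bin/env bash
set -euo pipefail
# assumption: timestamp-based version labels
VERSION="v$(date +%Y%m%d%H%M%S)"
# bundle the generated dist folder, the EB config files, and the Dockerfile
zip -r dist.zip dist config Dockerfile
eb deploy --staged --label "$VERSION"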

How to CI/CD deploy static Dockerized React build files to S3

I currently have a React application with an AWS CodePipeline set up that does the following:
1. Detect changes in the GitHub repository
2. Build the "build" files (with CodeBuild) using the buildspec.yaml file
3. Push the "build" files to an S3 bucket
The S3 bucket is configured to serve the static files to my domain.
This setup is great because it's cheap: I don't need an EC2 server always up and running to serve these static files, which would be completely unnecessary.
Recently, however, I've Dockerized this application, which is fantastic when I'm working on it from different machines.
Now that it's Dockerized, it seems like a better idea to have a Docker container build the "build" files and push them to the S3 bucket, ensuring that the files built on my machine are identical to the ones pushed to the S3 bucket.
Ideally I would like to have this all be automated when I push to the repo like it currently is.
I've seen a lot of tutorials about automating the creation of Docker images, pushing them to AWS ECR, and then using ECS (Fargate) to run the container. However, to me this is just the same thing as running my app on an EC2 server... why would I want to do all this and then have a container continuously running on a server? Now it would just be an ECS server...
So what I am asking is: how can I create an automated CI/CD pipeline that builds the static files using a Docker container and then pushes them to S3, as I currently have it?
Here is the current CodeBuild buildspec.yaml file for reference:
version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 12
    commands:
      # install yarn
      - npm install yarn
      # install dependencies
      - yarn
      # so that build commands work
      - yarn add eslint-config-react-app
  build:
    commands:
      # run build script
      - yarn build
artifacts:
  # include all files required to run the application
  # we include only the static build files
  files:
    - '**/*'
  base-directory: 'build'
I figured this out. It is possible to do this without modifying the Source or Deploy stages of the CodePipeline. You do not need EC2, ECR, ECS, or Fargate.
You will modify the CodeBuild section of the pipeline to use a buildspec.yml file like this:
version: 0.2
phases:
  install:
    runtime-versions:
      docker: 19
    commands:
      # log in to docker account to prevent rate limiting
      - docker login -u $DOCKER_USERNAME -p $DOCKER_PASSWORD
      # build the Docker image for the application
      - docker build -t my-react-app:latest -f Dockerfile.prod .
  build:
    commands:
      # run container from built image (builds production files)
      - docker run my-react-app:latest
      # set container id to variable
      - CONTAINER=$(docker ps -alq)
      # copy build files from container to host
      - docker cp $CONTAINER:/app/build/ $CODEBUILD_SRC_DIR/build
artifacts:
  # include all files required to run the application
  # we include only the static build files
  files:
    - "**/*"
  base-directory: "build"
There are some additional details; I've written a blog post about it here:
https://ncoughlin.com/posts/aws-codepipeline-dockerized-react-s3/

Docker with Serverless - files not getting packaged to container

I have a Serverless application using LocalStack, which I am trying to get fully running via Docker.
I have a docker-compose file that starts localstack for me.
version: '3.1'
services:
  localstack:
    image: localstack/localstack:latest
    environment:
      - AWS_DEFAULT_REGION=us-east-1
      - EDGE_PORT=4566
      - SERVICES=lambda,s3,cloudformation,sts,apigateway,iam,route53,dynamodb
    ports:
      - '4566-4597:4566-4597'
    volumes:
      - "${TEMPDIR:-/tmp/localstack}:/temp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
When I run docker-compose up and then deploy my application to LocalStack using sls deploy, everything works as expected. However, I want Docker to run everything for me: a single Docker command should start LocalStack and deploy my service to it.
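(For context, wiring sls deploy to LocalStack is typically done with the serverless-localstack plugin in serverless.yml; a minimal sketch, with the stage name as an assumption:)

plugins:
  - serverless-localstack
custom:
  localstack:
    stages:
      - local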
I have added a Dockerfile to my project with the following contents:
FROM node:16-alpine
RUN apk update
RUN npm install -g serverless; \
npm install -g serverless-localstack;
EXPOSE 3000
CMD ["sls","deploy", "--host", "0.0.0.0" ]
I then run docker build -t serverless/docker . followed by docker run -p 49160:3000 serverless/docker, but am receiving the following error:
This command can only be run in a Serverless service directory. Make sure to reference a valid config file in the current working directory if you're using a custom config file
I guess this is what would happen if I tried to run sls deploy in the wrong folder. So I have logged into the Docker container and cannot see the app that I want to run there. What am I missing in the Dockerfile that is needed to package it up?
Thanks
Execute the pwd command inside the container while running it. Try:
docker run -it serverless/docker pwd
The error shows that sls is not able to find the config file in the current working directory. Either add your config file to the current working directory (COPY it into the image in your Dockerfile) or copy it to a specific location in the container and pass --config in CMD (sls deploy --config).
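A minimal sketch of the first option, assuming the service root (the folder containing serverless.yml) is the Docker build context:

FROM node:16-alpine
RUN apk update
RUN npm install -g serverless; \
    npm install -g serverless-localstack;
WORKDIR /usr/src/app
# assumption: the build context is the service root, so serverless.yml lands in the workdir
COPY . .
EXPOSE 3000
CMD ["sls", "deploy"]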
This command can only be run in a Serverless service directory. Make sure to reference a valid config file in the current working directory.
Be sure that you have Serverless installed.
Once installed, create a service:
% sls create --template aws-nodejs --path myService
cd to the folder with the serverless.yml file:
% cd myService
This will deploy the function to AWS Lambda:
% sls deploy
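The generated service contains a serverless.yml along these lines (a sketch; the template's exact runtime version varies by Serverless release):

service: myService
provider:
  name: aws
  runtime: nodejs14.x
functions:
  hello:
    handler: handler.hello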

Is docker-compose.yml not supported in AWS Elastic Beanstalk?

In my root directory, I have my docker-compose.yml.
$ ls
returns:
build cmd docker-compose.yml exp go.mod go.sum LICENSE media pkg README.md
In the same directory, I ran:
$ eb init -p docker infogrid
$ eb create infogridEnv
However, this gave me an error:
Instance deployment: Both 'Dockerfile' and 'Dockerrun.aws.json' are missing in your source bundle. Include at least one of them. The deployment failed.
The fact that it does not even mention docker-compose.yml as a missing file makes me think it does not support Docker Compose. This contradicts the main documentation, which explicitly shows an example with docker-compose.yml.
It may be that you are using the older "Amazon AMI" platform. Your environment should be the new "Docker running on 64bit Amazon Linux 2" platform; only then do you get docker-compose.yml support.
Source: https://docs.amazonaws.cn/en_us/elasticbeanstalk/latest/dg/docker-multicontainer-migration.html
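To check and switch platforms with the EB CLI, something like the following should work (the exact platform string is an assumption; list the available ones first):

# list available platforms and pick the Amazon Linux 2 Docker one
eb platform list
# re-initialize the application on that platform
eb init -p "Docker running on 64bit Amazon Linux 2" infogrid
eb create infogridEnv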

Amazon Beanstalk Docker: Failed to build Docker image aws_beanstalk/staging-app, not a directory

I want to run my Java application in Amazon Beanstalk within Docker. I zip the Dockerfile, my app, and a bash script into an archive and upload it to Beanstalk, but during the build I get this error:
Step 2 : COPY run /opt
time="2017-02-07T16:42:40Z" level="info" msg="stat /var/lib/docker/devicemapper/mnt/823f97180373b7f268e72b3a5daf0f965feb2c7aa9d3537cf845a36e2dfac80a/rootfs/opt/run: not a directory"
Failed to build Docker image aws_beanstalk/staging-app: ="info" msg="stat /var/lib/docker/devicemapper/mnt/823f97180373b7f268e72b3a5daf0f965feb2c7aa9d3537cf845a36e2dfac80a/rootfs/opt/run: not a directory" .
On my local computer docker build and run works fine.
My Dockerfile:
FROM ubuntu:14.04
MAINTAINER Dev
COPY run /opt
COPY app.war /opt
EXPOSE 8081
CMD ["/opt/run"]
Thanks for the help

AWS CodePipeline successful, but not correctly deployed to Elastic Beanstalk

The (successful) deployment of my WAR file to Elastic Beanstalk gives me a 404 Not Found when I invoke the application URL. I can see an application.war file within /var/lib/tomcat8/webapps/ROOT/ instead of the META-INF and WEB-INF folders, which are in there when I deploy manually.
When I pull the WAR file from S3 and deploy it to Elastic Beanstalk manually, it works like a charm. Note: this is the same WAR file as generated by CodeBuild in my pipeline. Even better, if I secure copy (scp) the file to my local computer and upload it to Elastic Beanstalk, it works as well.
It seems that everything works up to the deployment; a working WAR file is even delivered to Elastic Beanstalk.
Going through eb-activity.log I can see it recognizes the WAR file and deploys it from a temporary directory to /var/lib/tomcat8/webapps/ROOT, but it isn't unpacked and the container/webserver isn't restarted.
How can I correctly deploy the WAR file with CodePipeline?
It seems that almost three years later AWS CodePipeline is still not "WAR file deployment friendly". As pointed out in the comment by @Azeq, the standard Elastic Beanstalk deployment procedure won't unzip the WAR file, so nothing really gets deployed. CodePipeline reports success because the copy of the files completes without errors, but Tomcat won't unzip the WAR file.
The solution is to provide your artifact in exploded form (already unzipped). To do so, modify the post-build phase and the artifact definition of your CodeBuild buildspec.yml:
version: 0.2
phases:
  install:
    runtime-versions:
      java: openjdk8
  pre_build:
    commands:
      - echo CODEBUILD_RESOLVED_SOURCE_VERSION $CODEBUILD_RESOLVED_SOURCE_VERSION
  build:
    commands:
      - mvn compile
  post_build:
    commands:
      - mvn package
      - mkdir artifact                    # create folder to extract war file content
      - unzip target/my.war -d artifact/  # unzip to that folder
artifacts:
  files:
    - artifact/**/*                       # reference all those files as the artifact
  name: artifact
cache:
  paths:
    - '/root/.m2/**/*'
Note the mkdir and unzip commands in the post build phase, and how the files definition in the artifacts section is written. As per CodeBuild documentation, **/* means all files recursively.
I tried to replicate the issue you are facing. I think that when creating the WAR file, you are putting the folder which contains the META-INF and WEB-INF folders at the root of the WAR output file.
Instead, you should put all the files within that folder at the root of the WAR file, without the extra folder level.
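For illustration, a sketch of the difference using the JDK's jar tool (the target/app path is an assumption about the build layout):

# wrong: packs an extra top-level folder into the war
cd target && jar -cvf app.war app/
# right: META-INF/ and WEB-INF/ sit directly at the war's root
cd target/app && jar -cvf ../app.war .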
I struggled with this for a while as well. Finally, I was able to resolve it by having the buildspec.yml extract the WEB-INF directory from the built WAR file in the post_build section.
Since AWS places a zip wrapper around your artifact, it adds another folder level around what Elastic Beanstalk actually needs:
version: 0.2
phases:
  install:
    runtime-versions:
      java: corretto11
  build:
    commands:
      - mvn compile
  post_build:
    commands:
      - mvn package
      - mkdir artifact
      - unzip target/demo-0.0.1-SNAPSHOT.war -d artifact/
      - mv artifact/WEB-INF WEB-INF
artifacts:
  files:
    - WEB-INF/**/*
  name: artifact