Why are .htaccess and a2enmod not working on Elastic Beanstalk?

I have created an Elastic Beanstalk app and environment: a PHP web server on Amazon Linux. It has to host a ReactJS application built using npm run build.
I have also created an AWS pipeline that takes code from CodeCommit, builds it, and deploys it. Since it's a web server, my app also needs an .htaccess file, which I have tried to handle in the buildspec.yml file as cp .htaccess.default build/.htaccess, but .htaccess does not work.
What I think the problem is: Apache's rewrite module is off, and I am trying to turn it on through the buildspec.yml file. I tried putting the commands a2enmod rewrite and service httpd reload in different sections of buildspec.yml, but no luck so far. The build crashes.
Here is my buildspec.yml.
version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 10
    commands:
      - apt install a2enmod # <---- Build crashes, exit code 127
      - a2enmod rewrite # <---- Build crashes
      - service httpd reload
  pre_build:
    commands:
      - npm install
  build:
    commands:
      - npm run build
      - cp .htaccess.build build/.htaccess
artifacts:
  files:
    - '**/*'
  base-directory: build/
cache:
  paths:
    - 'node_modules/**/*'
How can I get .htaccess to work on this environment?

There is nothing wrong with your buildspec syntax. But as a good practice, public assets like .htaccess should live inside the public folder of your ReactJS app; npm run build copies everything in public/ into build/, so the file ends up in the deployed bundle on its own. There is no need for cp .htaccess.build build/.htaccess or for enabling Apache's rewrite module (apt and a2enmod are Debian/Ubuntu tools that don't exist on Amazon Linux, which is why those commands exit with code 127, command not found).
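For reference, a typical public/.htaccess for a single-page React app just routes unknown paths back to index.html. The rules below are an assumption on my part (the question doesn't show the file's contents), so adapt them to your routing:

Options -MultiViews
RewriteEngine On
# serve index.html for any path that is not an existing file
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^ index.html [QSA,L]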

Related

Beanstalk's nginx not picking the .conf file on my application source bundle

I have a Spring Boot application deployed on Beanstalk (Amazon Linux 2). I need to increase client_max_body_size because some of the form data I'm posting contains images, and I'm getting the 413 Request Entity Too Large nginx error.
I followed AWS's documentation on how to change this property.
My project structure now includes .platform/nginx/conf.d/proxy.conf:
And the content of the file is:
client_max_body_size 50M;
After deploying I keep getting the same error (with images > 1MB total).
No file has been created in /etc/nginx/conf.d on the instance.
Is this because of how my buildspec packages my application?
version: 0.2
phases:
  install:
    runtime-versions:
      java: corretto17
  pre_build:
    commands:
      - echo Nothing to do in the pre_build phase...
  build:
    commands:
      - echo Build started on `date`
      - mvn package -Dmaven.test.skip
  post_build:
    commands:
      - echo Build completed on `date`
artifacts:
  files:
    - .platform/nginx/conf.d/proxy.conf
    - target/myApp-0.0.1-SNAPSHOT.jar
    - appspec.yml
  discard-paths: yes
I also tried adding the configuration file to the artifacts.files section of my buildspec.yml.
I also tried to create the file and its content from the files section of the buildspec.
I feel like I've tried everything; is there anything I may be missing?
For now, my workaround:
I edited the file manually:
cd /etc/nginx/
sudo nano nginx.conf
and restarted nginx. That worked, but I want to avoid this manual configuration and have it come from the application source, as a good practice.
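For clarity, the manual edit amounted to adding the same directive from proxy.conf to /etc/nginx/nginx.conf, roughly like this (a sketch of the relevant part only; the exact placement inside the file is an assumption):

http {
    # existing directives left unchanged
    client_max_body_size 50M;
}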
The problem was in my buildspec.
discard-paths: yes
was putting all the files at the root of the bundle. I needed this so that the jar would be at the root, but it was putting proxy.conf at the root as well.
Setting that property to no (or removing it) made it work, but I then needed a way to move the jar from /target/ to the root, so I did it with a post_build command:
version: 0.2
phases:
  install:
    runtime-versions:
      java: corretto17
  pre_build:
    commands:
      - echo Nothing to do in the pre_build phase...
  build:
    commands:
      - echo Build started on `date`
      - mvn package -Dmaven.test.skip
  post_build:
    commands:
      - mv target/myApp-0.0.1-SNAPSHOT.jar myApp-0.0.1-SNAPSHOT.jar # <------ HERE
      - echo Build completed on `date`
artifacts:
  files:
    - .platform/nginx/conf.d/proxy.conf
    - myApp-0.0.1-SNAPSHOT.jar
    - appspec.yml

How to CI/CD deploy static Dockerized React build files to S3

I currently have a React application with an AWS CodePipeline set up that does the following:
Detect changes in GitHub repository
Build the "build" files (with CodeBuild) using buildspec.yaml file
Push "build" files to S3 bucket
The S3 bucket is configured to serve the static files to my domain.
This setup is great because it's cheap: I don't need an EC2 server always up and running to serve these static files, which would be completely unnecessary.
Recently, however, I've Dockerized this application, which is fantastic when I'm working on it from different machines.
However now that it's Dockerized it seems like it would be a better idea to have a docker container build the "build" files and push them to the S3 bucket, to ensure that the files being built on my machine are identical to the ones being pushed to the S3 Bucket.
Ideally I would like to have this all be automated when I push to the repo like it currently is.
I've seen a lot of tutorials about automating the creation of docker images, pushing them to AWS ECR, and then using ECS (Fargate) to run the container. However, to me this is just the same thing as running my app on an EC2 server... why would I want to do all this and then have a container continuously running on a server? Now it would just be an ECS server...
So what I am asking is, how can I create an automated CI/CD pipeline that builds the static files using a docker container, and then pushes them to S3, as I currently have it?
Here is the current CodeBuild buildspec.yaml file for reference:
version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 12
    commands:
      # install yarn
      - npm install yarn
      # install dependencies
      - yarn
      # so that build commands work
      - yarn add eslint-config-react-app
  build:
    commands:
      # run build script
      - yarn build
artifacts:
  # include all files required to run application
  # we include only the static build files
  files:
    - '**/*'
  base-directory: 'build'
I figured this out. It is possible to do this without modifying the Source or Deploy sections of the CodePipeline. You do not need EC2, ECR, ECS, or Fargate.
You will modify the CodeBuild section of the pipeline to use a buildspec.yml file like this:
version: 0.2
phases:
  install:
    runtime-versions:
      docker: 19
    commands:
      # log in to docker account to prevent rate limiting
      - docker login -u $DOCKER_USERNAME -p $DOCKER_PASSWORD
      # build the Docker image for the application
      - docker build -t my-react-app:latest -f Dockerfile.prod .
  build:
    commands:
      # run container from built image (builds production files)
      - docker run my-react-app:latest
      # set container id to variable
      - CONTAINER=$(docker ps -alq)
      # copy build files from container to host
      - docker cp $CONTAINER:/app/build/ $CODEBUILD_SRC_DIR/build
artifacts:
  # include all files required to run application
  # we include only the static build files
  files:
    - "**/*"
  base-directory: "build"
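For completeness, a Dockerfile.prod along these lines would fit the buildspec above. The base image and file layout are assumptions on my part (the original post doesn't show the Dockerfile); the key point is that running the container executes the production build and leaves the output in /app/build for docker cp to pick up:

FROM node:12-alpine
WORKDIR /app
# install dependencies first so Docker can cache this layer
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
# copy the rest of the source
COPY . .
# running the container performs the production build into /app/build
CMD ["yarn", "build"]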
There are some additional details; I've written a blog post about it here:
https://ncoughlin.com/posts/aws-codepipeline-dockerized-react-s3/

AWS amplify deployment fails - You need to enable JavaScript to run this app

I have a React app that I'm trying to deploy automatically using AWS Amplify. I connected the repo, and the build and deployment seem to be successful.
But opening the URL shows You need to enable JavaScript to run this app. in the console.
Building and serving the app locally using
$ npm run build
$ serve -s build
works fine.
I saw in the issue here that this might be about setting the "proxy" in package.json, but I'm not sure which port AWS Amplify uses, and adding the line from the answer there (using localhost:5000) doesn't work either.
Any ideas?
EDIT:
amplify.yml:
version: 1
frontend:
  phases:
    preBuild:
      commands:
        - npm ci
    build:
      commands:
        - npm run build
  artifacts:
    baseDirectory: build
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*

How to configure AWS Codebuild with Webpack

I have created an AWS CodePipeline that runs in four stages: 1) source code from GitHub, 2) deploy the backend to Elastic Beanstalk, 3) build the frontend code with CodeBuild (using the buildspec file below), and 4) deploy the results of webpack to S3.
Everything works as expected so far except for the results of stage 3. CodeBuild seemingly sets the artifacts as the source files and not the results of the webpack build. When I look in the bucket and folder for the deployed code, I expect to see a series of js asset files and a manifest.json. Instead, I see the project files. Not quite sure what I'm configuring wrong here.
buildspec.yml
version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 12
    commands:
      - echo Installing dependencies...
      - yarn
  build:
    commands:
      - echo Building project...
      - yarn build
  post_build:
    commands:
      - echo build completed on `date`
artifacts:
  files:
    - '**/*'
cache:
  paths:
    - '/root/.npm/**/*'
    - '/node_modules/'
webpack-build configuration
webpack-deploy configuration
After a few hours of troubleshooting, I was finally able to figure out what was going on.
Running yarn build on the project bundles everything into a /dist folder. The artifacts line, however, indicates that the files that should be uploaded to S3 are all of the project files. So the fix was as simple as updating **/* to dist/**/*.
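In buildspec terms, the corrected artifacts section would look roughly like this (a sketch based on the description above; adjust the path if your webpack output folder differs):

artifacts:
  files:
    - 'dist/**/*'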

AWS CodePipeline successful, but not correctly deployed to Elastic Beanstalk

The (successful) deployment of my WAR file to Elastic Beanstalk gives me a 404 Not Found when I invoke the application URL. I can see an application.war file within /var/lib/tomcat8/webapps/ROOT/ instead of the META-INF and WEB-INF folders, which are there when I deploy manually.
When I pull the WAR file from S3 and deploy it to Elastic Beanstalk manually it works like a charm. Note: this is the same WAR file as generated by CodeBuild in my pipeline. Even better, if I secure copy (scp) the file to my local computer and upload it to Elastic Beanstalk it works as well.
It seems that everything works until the deployment, a working WAR file is even deployed to Elastic Beanstalk.
Going through eb-activity.log I can see it recognizes the WAR file and deploys it from a temporary directory to /var/lib/tomcat8/webapps/ROOT, but it isn't unpacked and the container/webserver isn't restarted.
How can I correctly deploy the WAR file with CodePipeline?
It seems that almost three years later AWS CodePipeline is still not "WAR file deployment friendly". As pointed out in the comment by @Azeq, the standard Elastic Beanstalk deployment procedure won't unzip the WAR file, so nothing is really deployed. CodePipeline reports success because the copy of the files is made without errors, but Tomcat won't unzip the WAR file.
The solution is to provide your artifact in exploded form (already unzipped). To do so, modify the post_build phase and the artifact definition of your CodeBuild buildspec.yml:
version: 0.2
phases:
  install:
    runtime-versions:
      java: openjdk8
  pre_build:
    commands:
      - echo CODEBUILD_RESOLVED_SOURCE_VERSION $CODEBUILD_RESOLVED_SOURCE_VERSION
  build:
    commands:
      - mvn compile
  post_build:
    commands:
      - mvn package
      - mkdir artifact # <-- create folder to extract war file content
      - unzip target/my.war -d artifact/ # <-- unzip to that folder
artifacts:
  files:
    - artifact/**/* # <-- reference all those files as the artifact
  name: artifact
cache:
  paths:
    - '/root/.m2/**/*'
Note the mkdir and unzip commands in the post_build phase, and how the files definition in the artifacts section is written. As per the CodeBuild documentation, **/* means all files recursively.
I tried to replicate the issue you are facing. I think that when creating the WAR file, you are putting the folder that contains the META-INF and WEB-INF folders at the root of the WAR output file.
Instead, you should put all the files (within that folder) into the WAR file without the root-level folder.
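A quick way to check (using the target/my.war name from the buildspec above as an example) is to list the archive's contents and confirm that META-INF/ and WEB-INF/ appear at the top level rather than under an extra folder:

unzip -l target/my.war | head
# expect entries like WEB-INF/... and META-INF/...,
# not an extra top-level folder such as myapp/WEB-INF/...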
I struggled with this for a while as well. Finally, I was able to resolve it by having buildspec.yml extract the WEB-INF directory from the built WAR file in the post_build section.
Since AWS places a zip wrapper around your artifact, it adds another folder level around what Elastic Beanstalk actually needs:
version: 0.2
phases:
  install:
    runtime-versions:
      java: corretto11
  build:
    commands:
      - mvn compile
  post_build:
    commands:
      - mvn package
      - mkdir artifact
      - unzip target/demo-0.0.1-SNAPSHOT.war -d artifact/
      - mv artifact/WEB-INF WEB-INF
artifacts:
  files:
    - WEB-INF/**/*
  name: artifact