AWS - 404 Not Found - NGINX

So I have my Web Application deployed onto AWS. All my services and clients are deployed using Fargate, so I build a Docker Image and push that up.
Dockerfile
# build environment
FROM node:9.6.1 as builder
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
ENV PATH /usr/src/app/node_modules/.bin:$PATH
COPY package.json /usr/src/app/package.json
RUN npm install --silent
RUN npm install react-scripts@1.1.1 -g --silent
COPY . /usr/src/app
RUN npm run build
# production environment
FROM nginx:1.13.9-alpine
COPY --from=builder /usr/src/app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
I also created DNS records on AWS using Route 53 that point to my Load Balancer, which is connected to my client application. These have an SSL certificate attached.
Example DNS
*.test.co.uk
test.co.uk
When I navigate to the login page of the application, everything works fine and I can navigate around the application. If I try to navigate straight to /dashboard, I receive a 404 Not Found NGINX error.
Example
www.test.co.uk (WORKS FINE)
www.test.co.uk/dashboard (404 NOT FOUND)
Is there something I need to configure, either in the Dockerfile or in AWS, that will allow users to navigate directly to any path?
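Assuming this is a single-page React app served by nginx, the likely cause is that the stock nginx config only serves files that physically exist in /usr/share/nginx/html, so a deep link such as /dashboard has nothing to match and nginx returns 404. A minimal sketch of an nginx config that falls back to index.html so the client-side router can handle the path (the file name nginx.conf is an assumption):
# nginx.conf - fall back to index.html for client-side routes
server {
    listen 80;
    root /usr/share/nginx/html;
    index index.html;

    location / {
        # serve the requested file if it exists, otherwise hand the path to the SPA
        try_files $uri $uri/ /index.html;
    }
}
Copy it over the default site config in the production stage, e.g. COPY nginx.conf /etc/nginx/conf.d/default.conf, and rebuild the image.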

Related

Flask application on Docker image appears to run fine when run from Docker Desktop; unable to deploy on EC2: "Essential container in task exited"

I am currently trying to deploy a Docker image. The Docker image is of a Flask application; when I run the image via Docker Desktop, the service works fine. However, after creating an EC2 instance on Amazon and running the image as a task, I get the error "Stopped reason: Essential container in task exited".
I am unsure how to troubleshoot or what steps to take. Any advice would be appreciated!
Edit:
I noticed that my Docker image on my computer is 155 MB while the one on AWS is 67 MB. Does AWS do any compression? I will try pushing my image again.
Edit2:
Reading through some other questions, it appears that it is normal for the sizes to differ, as Docker Desktop shows the uncompressed version.
I decided to run the AWS task image on my Docker Desktop. While it does run and the console shows everything is fine, I am unable to access the links provided.
* Serving Flask app 'main' (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: on
* Running on all addresses (0.0.0.0)
WARNING: This is a development server. Do not use it in a production deployment.
* Running on http://127.0.0.1:5000
* Running on http://172.17.0.2:5000 (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger PIN: XXXXXX
In my Dockerfile I have already made sure to EXPOSE 5000. I am unsure why, after running the same image from Amazon on my local machine, I am unable to connect to it.
FROM alpine:latest
ENV PATH /usr/local/bin:$PATH
RUN apk add --no-cache python3
RUN apk add py3-pip && pip3 install --upgrade pip
WORKDIR /backend
RUN pip3 install wheel
RUN pip3 install --extra-index-url https://alpine-wheels.github.io/index numpy
COPY . /backend
RUN pip3 install -r requirements.txt
EXPOSE 5000
ENTRYPOINT ["python3"]
CMD ["main.py"]
Edit3:
I believe I "found" the problem, but I am unsure how to fix it. When I was building the Docker image, I would run it from VS Code with docker run -it -d -p 5000:5000 flaskapp, where the flags -d and -p 5000:5000 mean running it in detached mode and publishing (forwarding) port 5000. When I run the image that way inside VS Code, I am able to access the application on my local machine.
However, after creating the image and running it by pressing Start inside Docker Desktop, I am unable to access it on my local machine.
How will I go about running the Docker image this way either via Docker Desktop or Amazon EC2?
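For what it's worth, the -p 5000:5000 part is what actually makes the container reachable; EXPOSE on its own does not publish anything, so launching the image without a port mapping leaves nothing listening on the host. On ECS the equivalent is a port mapping in the task definition's container definition; a minimal sketch, with everything except the ports being placeholders:
"portMappings": [
    {
        "containerPort": 5000,
        "hostPort": 5000,
        "protocol": "tcp"
    }
]
With that mapping (and the instance's security group allowing inbound traffic on the host port), the app should be reachable on the EC2 host's public address.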

React app not loading in AWS ECS Cluster?

I have the following Dockerfile:
# Multi-stage
# 1) Node image for building frontend assets
# 2) nginx stage to serve frontend assets
# Name the node stage "builder"
FROM node:10 AS builder
# Set working directory
WORKDIR /app
# Copy all files from current directory to working dir in image
COPY . .
# install node modules and build assets
RUN yarn install && yarn build
# nginx state for serving content
FROM nginx:alpine
# Set working directory to nginx asset directory
WORKDIR /usr/share/nginx/html
# Remove default nginx static assets
RUN rm -rf ./*
# Copy static assets from builder stage
COPY --from=builder /app/build .
# Containers run nginx with global directives and daemon off
ENTRYPOINT ["nginx", "-g", "daemon off;"]
This works locally with Docker.
I am using a t2.micro EC2 instance with an ECS cluster. The deployment of the service was successful and the task is running for the container image. When I go to the EC2 instance's public address, the browser reports that the site took too long to respond. Let me know if you need any other details.
Thoughts?
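A "took too long to respond" timeout (rather than "connection refused") usually means nothing on that port is reachable from the internet, most often because the instance's security group has no inbound rule for it. A hedged sketch of opening port 80 with the AWS CLI (the group ID is a placeholder):
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 80 \
    --cidr 0.0.0.0/0
It is also worth confirming that the task definition maps the container's port 80 to a host port that matches the rule.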
I decided to go the route suggested by @David Maze and store the site in an S3 bucket instead of using ECS for the website.
I followed this guide and it is working:
https://serverless-stack.com/chapters/deploying-a-react-app-to-aws.html
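For reference, the core of that approach is building the app and syncing the static output to a bucket configured for website hosting; roughly (the bucket name is a placeholder, and the bucket also needs a policy allowing public reads):
yarn build
aws s3 website s3://my-react-bucket --index-document index.html --error-document index.html
aws s3 sync build/ s3://my-react-bucket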

Container app unable to connect from web browser

I wrote a containerized web app and built it with Docker. After it appears to be running, the app is not accessible by typing the link into the browser; it is only available through the local access link. The same container app always shows "404 No Such Service" when pushed to AWS.
Here is the Dockerfile:
FROM python:3.8-alpine
EXPOSE 2328
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY main.py /app
COPY Templates /app/templates
COPY blockSchedule.txt /app
CMD [ "python", "./main.py", "production" ]
It would be helpful if you included the commands that you ran and the output that you observed.
I suspect that you're not publishing the container's port (possibly 2328) on the host.
If there is a Python server and it's running on 2328 in the container, you can use the following command to publish (forward) the port to the host:
docker run \
--interactive --tty --rm \
--publish=2328:2328 \
your-container-image:tag
NOTE: replace your-container-image:tag with your container's image name and tag.
NOTE: the --publish flag has the syntax [HOST-PORT]:[CONTAINER-PORT]. The container port is generally fixed by the port the app listens on, but you can use whichever host port is available.
Using the command above, you should be able to, e.g., browse the container from the host at localhost:2328.
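Separately, even with the port published, the Flask app inside the container has to listen on all interfaces rather than only the loopback address, or traffic forwarded into the container never reaches it. A minimal sketch of what main.py would need, assuming Flask's built-in server is used:
# main.py - bind to all interfaces so the published port works from outside the container
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "ok"

if __name__ == "__main__":
    # 0.0.0.0 rather than the default 127.0.0.1; the port matches EXPOSE 2328
    app.run(host="0.0.0.0", port=2328)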

Docker with Serverless- files not getting packaged to container

I have a Serverless application using LocalStack that I am trying to get fully running via Docker.
I have a docker-compose file that starts localstack for me.
version: '3.1'
services:
  localstack:
    image: localstack/localstack:latest
    environment:
      - AWS_DEFAULT_REGION=us-east-1
      - EDGE_PORT=4566
      - SERVICES=lambda,s3,cloudformation,sts,apigateway,iam,route53,dynamodb
    ports:
      - '4566-4597:4566-4597'
    volumes:
      - "${TEMPDIR:-/tmp/localstack}:/temp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
When I run docker-compose up and then deploy my application to LocalStack using sls deploy, everything works as expected. However, I want Docker to run everything for me, so that a single Docker command starts LocalStack and deploys my service to it.
I have added a Dockerfile to my project with the following:
FROM node:16-alpine
RUN apk update
RUN npm install -g serverless; \
npm install -g serverless-localstack;
EXPOSE 3000
CMD ["sls","deploy", "--host", "0.0.0.0" ]
I then run docker build -t serverless/docker . followed by docker run -p 49160:3000 serverless/docker, but am receiving the following error:
This command can only be run in a Serverless service directory. Make sure to reference a valid config file in the current working directory if you're using a custom config file
I guess this is what would happen if I tried to run sls deploy in the wrong folder. So I have logged into the Docker container and cannot see the app that I want to run there. What am I missing in the Dockerfile that is needed to package it up?
Thanks
Execute the pwd command inside the container while it is running. Try:
docker run -it serverless/docker pwd
The error shows that sls is not able to find the config file in the current working directory. Either add your config file to the working directory (include the copy step in the Dockerfile), or copy it to a specific location in the container and pass --config in the CMD (sls deploy --config).
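Concretely, the Dockerfile above installs the Serverless tooling but never copies the project into the image, so serverless.yml is absent when sls deploy runs. A minimal sketch of the missing steps, assuming the project root is copied to /app (the path is an assumption):
FROM node:16-alpine
RUN apk update
RUN npm install -g serverless serverless-localstack
# copy the project, including serverless.yml, into the image so sls can find it
WORKDIR /app
COPY . .
EXPOSE 3000
CMD ["sls", "deploy"]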
This command can only be run in a Serverless service directory. Make sure to reference a valid config file in the current working directory.
Be sure that you have Serverless installed.
Once installed, create a service:
% sls create --template aws-nodejs --path myService
cd into the folder that contains the serverless.yml file:
% cd myService
This will deploy the function to AWS Lambda:
% sls deploy

Amazon Elastic Beanstalk Docker: Failed to build Docker image aws_beanstalk/staging-app, not a directory

I want to run my Java application on Amazon Elastic Beanstalk within Docker. I zip the Dockerfile, my app, and a bash script into an archive and upload it to Beanstalk, but during the build I get this error:
Step 2 : COPY run /opt
time="2017-02-07T16:42:40Z" level="info" msg="stat /var/lib/docker/devicemapper/mnt/823f97180373b7f268e72b3a5daf0f965feb2c7aa9d3537cf845a36e2dfac80a/rootfs/opt/run: not a directory"
Failed to build Docker image aws_beanstalk/staging-app: ="info" msg="stat /var/lib/docker/devicemapper/mnt/823f97180373b7f268e72b3a5daf0f965feb2c7aa9d3537cf845a36e2dfac80a/rootfs/opt/run: not a directory" .
On my local computer, docker build and docker run work fine.
My Dockerfile:
FROM ubuntu:14.04
MAINTAINER Dev
COPY run /opt
COPY app.war /opt
EXPOSE 8081
CMD ["/opt/run"]
Thanks for the help.
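One thing worth checking: Elastic Beanstalk expects the Dockerfile, and everything it COPYs (here run and app.war), at the top level of the uploaded zip rather than nested inside a folder, since the archive root becomes the build context. Listing the archive makes that easy to verify (the archive name is a placeholder):
unzip -l app.zip
# Dockerfile, run and app.war should all appear at the root of the listing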