getting "memory error" during aws elastic beanstalk docker deployment - amazon-web-services

I have a machine learning model with a Flask API, dockerized, and when I try to deploy it I get a vague error; checking the expanded logs shows it was a memory error while installing TensorFlow.
As an aside, I have also included the model.h5 file inside the Docker image. Should I not be doing this? Otherwise it is just the .py and config files (no venv in the directory).
This is my Dockerfile (it works locally, by the way, but Beanstalk has a memory limit that is getting hit and I'm not sure why):
FROM python:latest
COPY . /app
WORKDIR /app
RUN pip install --no-cache-dir -r requirements.txt
EXPOSE 5000
ENTRYPOINT ["python"]
CMD ["app.py", "--host=0.0.0.0"]

Related

Flask application in a Docker image appears to run fine when run from Docker Desktop; unable to deploy on EC2: "Essential container in task exited"

I am currently trying to deploy a Docker image. The image contains a Flask application; when I run it via Docker Desktop, the service works fine. However, after creating an EC2 instance on Amazon and running the image as a task, I get the error "Stopped reason: Essential container in task exited".
I am unsure how to troubleshoot or what steps to take. Please advise!
Edit:
I noticed that my Docker image on my computer is 155 MB while the one on AWS is 67 MB. Does AWS do any compression? I will try pushing my image again.
Edit2:
Reading through some other questions, it appears that it is normal for the sizes to differ, as Docker Desktop shows the uncompressed size.
I decided to run the AWS task image on my Docker Desktop; while it does run and the console shows everything is fine, I am unable to access the links provided.
* Serving Flask app 'main' (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: on
* Running on all addresses (0.0.0.0)
WARNING: This is a development server. Do not use it in a production deployment.
* Running on http://127.0.0.1:5000
* Running on http://172.17.0.2:5000 (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger PIN: XXXXXX
In my Dockerfile I have already made sure to EXPOSE 5000. I am unsure why, after running the same image from Amazon on my local machine, I am unable to connect to it.
FROM alpine:latest
ENV PATH /usr/local/bin:$PATH
RUN apk add --no-cache python3
RUN apk add py3-pip && pip3 install --upgrade pip
WORKDIR /backend
RUN pip3 install wheel
RUN pip3 install --extra-index-url https://alpine-wheels.github.io/index numpy
COPY . /backend
RUN pip3 install -r requirements.txt
EXPOSE 5000
ENTRYPOINT ["python3"]
CMD ["main.py"]
Edit3:
I believe I "found" the problem but I am unsure how to run it. When I was building the Docker file, inside VSCode I would run it via doing docker run -it -d -p 5000:5000 flaskapp , where the tags -d and -p 5000:5000 means running it in demo mode and port forwarding the 5000 port. When I run the image that way inside VSCode, I am able to access the application on my local machine.
However, after creating the image and running it via pressping Start inside Docker Desktop, I am unable to access it on my local machine.
How will I go about running the Docker image this way either via Docker Desktop or Amazon EC2?
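For reference, EXPOSE only documents the port; the host-to-container mapping has to be requested at run time. A minimal sketch of the equivalent command line (the image name flaskapp is taken from the question):
docker run -d -p 5000:5000 flaskapp
In Docker Desktop the same mapping can usually be set in the optional settings dialog when starting a container, and on ECS/EC2 the task definition needs an equivalent container-to-host port mapping plus a security group rule allowing inbound traffic on that port.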

Getting "Error processing tar file(exit status 1): open /myenv/include/python3.6m/Python-ast.h: no such file or directory" while docker-compose build

So I am pretty new to Docker and Django. Unfortunately, I am getting an error while running the command below on my Linux machine, which I am connected to from my Windows machine using PuTTY:
docker-compose build
I am getting an error:
Error processing tar file(exit status 1): open /myenv/include/python3.6m/Python-ast.h: no such file or directory
'myenv' is the virtual environment I have created inside my folder.
I am getting a container started on port 9000. The app doesn't have anything yet, just a simple project, so I only expect to see the 'congratulations' screen. I don't know where I am going wrong. My final goal is to open the Docker URL in my Windows browser and see the screen served by the Docker container.
This is my docker-compose.yml file:
version: '3'
services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:9000
    ports:
      - 202.179.92.106:8000:9000
The IP 202.179.92.106 is my public IP. I did the above binding so as to access the Docker container from my Windows machine. I would also appreciate input on whether this port binding is correct.
Below is my Dockerfile:
FROM python:3.6.9
RUN mkdir djangotest
WORKDIR djangotest
ADD . /djangotest
RUN pip install -r requirements.txt
Please help me out peeps!
If you have a virtual environment in your normal development tree, you can't copy it into a Docker image. You can exclude this from the build sequence by mentioning it in a .dockerignore file:
# .dockerignore
myenv
Within the Dockerfile, the RUN pip install line will install your application's dependencies into the Docker image, so you should have a complete self-contained image.
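Once the .dockerignore is in place, rebuilding without the cache is a quick way to confirm the fix (the service name web comes from the compose file above):
docker-compose build --no-cache web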

Cloud Run needs NGINX or not?

I am using Cloud Run for my blog and a work site and I really love it.
I have deployed Python APIs and Vue/Nuxt apps by containerizing them according to the Google tutorials.
One thing I don't understand is why there is no need to have NGINX in front.
# Use the official lightweight Node.js 12 image.
# https://hub.docker.com/_/node
FROM node:12-slim
# Create and change to the app directory.
WORKDIR /usr/src/app
# Copy application dependency manifests to the container image.
# A wildcard is used to ensure both package.json AND package-lock.json are copied.
# Copying this separately prevents re-running npm install on every code change.
COPY package*.json ./
# Install production dependencies.
RUN npm install --only=production
# Copy local code to the container image.
COPY . ./
# Build the application.
RUN npm run build
# Run the web service on container startup.
CMD [ "npm", "start" ]
# Use the official lightweight Python image.
# https://hub.docker.com/_/python
FROM python:3.7-slim
# Copy local code to the container image.
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . ./
# Install production dependencies.
RUN apt-get update && apt-get install -y \
libpq-dev \
gcc
RUN pip install -r requirements.txt
# Run the web service on container startup. Here we use the gunicorn
# webserver with four worker processes, matching the command below.
# For environments with more CPU cores, increase the number of workers
# to be equal to the cores available.
CMD exec gunicorn -b :$PORT --workers=4 main:server
All this works without me ever running NGINX.
But I have read a lot of articles where people bundle NGINX in their container, so I would like some clarity. Are there any downsides to what I am doing?
One considerable advantage of using NGINX or a static file server is the size of the container image. When serving SPAs (without SSR), all you need is to get the bundled files to the client. There's no need to bundle build dependencies or runtime that's needed to compile the application.
Your first image copies the whole source tree with dependencies into the image, while all you need (if not running SSR) are the compiled files. NGINX can act as the "static site server" that serves only your build output and is a lightweight solution.
Regarding Python, unless you can bundle it somehow, it looks fine to use it without NGINX.
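To make the size point concrete, here is a minimal sketch of a multi-stage build that ships only the compiled bundle behind NGINX; the dist output path is an assumption that depends on the framework, and on Cloud Run the server would additionally have to listen on $PORT (8080 by default) rather than NGINX's default port 80:
# Build stage: dev dependencies are needed here to run the bundler.
FROM node:12-slim AS build
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . ./
RUN npm run build
# Final stage: only the built static files travel into the image.
FROM nginx:alpine
COPY --from=build /usr/src/app/dist /usr/share/nginx/html
The final image contains neither Node.js nor node_modules, which is where most of the size saving comes from.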

How do I let my docker volume sync to my filesystem and allow writing from docker build

I'm building a Django app using Docker. The issue I am having is that my local filesystem is not synced to the Docker environment, so making local changes has no effect until I rebuild.
I added a volume
- ".:/app:rw"
which syncs to my local filesystem, but the bundles that get built via webpack during the image build don't show up (because they aren't in my local filesystem).
My Dockerfile has this:
... setup stuff...
ENV NODE_PATH=$NVM_DIR/versions/node/v$NODE_VERSION/lib/node_modules \
PATH=$NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH
ENV PATH=/node_modules/.bin:$PATH
COPY package*.json /
RUN (cd / && npm install && rm -rf /tmp/*)
...pip install stuff...
COPY . /app
WORKDIR /app
RUN npm run build
RUN DJANGO_MODE=build python manage.py collectstatic --noinput
So I want to sync to my local filesystem so I can make changes and have them show up immediately, and also have my bundles and static assets present. The way I've been developing so far is to just comment out the .:/app:rw line in my docker-compose.yml, which allows all the assets and bundles to be present.
The solution that ended up working for me was to assign an anonymous volume to each directory I did not want synced to my local environment.
volumes:
- ".:/app/:rw"
- "/app/project_folder/static_source/bundles/"
- "/app/project_folder/bundle_tracker/"
- "/app/project_folder/static_source/static/"
Arguably there's probably a better way to do this, but this solution does work. The Dockerfile build compiles the webpack bundles and collectstatic does its job, both within the container, and the last three entries above keep my local machine from overwriting them. The downside is that I still have to figure out a better solution for live recompilation of SCSS or JavaScript, but that's a job for another day.
You can mount a local folder into your Docker container. Just use the --mount option of the docker run command. In the following example, the target directory under the current directory will be available in the container at /app.
docker run -d \
  -it \
  --name devtest \
  --mount type=bind,source="$(pwd)"/target,target=/app \
  nginx:latest
Reference: https://docs.docker.com/storage/bind-mounts/
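Since the question is built around docker-compose, the same bind mount can also be written in the compose file's long syntax (requires compose file format 3.2 or later); a sketch where the service name web and the ./target path are illustrative:
services:
  web:
    volumes:
      - type: bind
        source: ./target
        target: /app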

Amazon Elastic Beanstalk Docker: Failed to build Docker image aws_beanstalk/staging-app, not a directory

I want to run my Java application in Amazon Elastic Beanstalk within Docker. I zip the Dockerfile, my app, and a bash script into an archive and upload it to Beanstalk, but during the build I get this error:
Step 2 : COPY run /opt
time="2017-02-07T16:42:40Z" level="info" msg="stat /var/lib/docker/devicemapper/mnt/823f97180373b7f268e72b3a5daf0f965feb2c7aa9d3537cf845a36e2dfac80a/rootfs/opt/run: not a directory"
Failed to build Docker image aws_beanstalk/staging-app: ="info" msg="stat /var/lib/docker/devicemapper/mnt/823f97180373b7f268e72b3a5daf0f965feb2c7aa9d3537cf845a36e2dfac80a/rootfs/opt/run: not a directory" .
On my local computer, docker build and docker run work fine.
My Dockerfile:
FROM ubuntu:14.04
MAINTAINER Dev
COPY run /opt
COPY app.war /opt
EXPOSE 8081
CMD ["/opt/run"]
Thanks for the help.
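For reference, an Elastic Beanstalk single-container source bundle is usually zipped from inside the application folder, so that the Dockerfile, run, and app.war end up at the root of the archive rather than inside a subdirectory. A minimal sketch using the filenames from the question (the folder and archive names are illustrative):
cd my-app/
zip ../app-bundle.zip Dockerfile run app.war
Listing the archive with unzip -l before uploading helps confirm that the paths the COPY instructions expect are actually present at the top level.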