xmlrpc.py Connection refused error while using supervisor in docker - django

Hello, I'm writing a Dockerfile and a docker-compose file based on Ubuntu 20.04, and I'm trying to install supervisor inside the image.
Dockerfile:
...
FROM ubuntu:20.04
WORKDIR /src/app
ENV BACKENDENVIRONMENT 0
COPY gsigner .
COPY docker_requirements.txt ./
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update
RUN apt install -y python-is-python3
RUN apt-get install -y python3.9
RUN apt-get install python3-pip -y
RUN apt-get install gcc musl-dev python3-dev libffi-dev openssl libssl-dev cargo -y
RUN apt install -y postgresql postgresql-contrib
RUN apt-get update && apt-get install -y postgresql-server-dev-all gcc python3-dev musl-dev
RUN pip install --upgrade pip setuptools wheel \
&& pip install -r docker_requirements.txt
RUN mkdir /run/gsigner
RUN apt-get install -y supervisor
COPY backend_supervisord.conf /etc/supervisor/conf.d/
Dockerfile updated.
docker-compose.yml:
version: "3.9"
services:
  gsigner:
    build: .
    command: bash -c "python manage.py migrate && supervisorctl reread && supervisorctl reload && supervisorctl start daphne"
    ports:
      - 8000:8000
    volumes:
      - static:/var/static/gsigner/
      - media:/var/media/gsigner/
      - gsigner:/src/app/
      - log:/var/log/gsigner/
volumes:
  static:
  media:
  log:
  gsigner:
docker-compose.yml updated.
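Note that the command above only runs supervisorctl and never starts supervisord, so there may be nothing for supervisorctl to connect to. A hedged sketch of a command that runs supervisord itself as the container's foreground process instead (the -n flag keeps it in the foreground; the conf path assumes the Ubuntu package default, which includes conf.d/, and is not taken from the question):
command: bash -c "python manage.py migrate && supervisord -n -c /etc/supervisor/supervisord.conf"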
daphne is the program name in my supervisor conf file.
My supervisor conf file:
[supervisord]
[supervisorctl]
[program:daphne]
command=daphne gsigner.asgi:application
directory=/src/app/gsigner/
user=root
autostart=true
autorestart=true
I really can't figure out what is happening here.
This is the error message:
error:error: <class 'ConnectionRefusedError'>, [Errno 111] Connection refused: file: /usr/lib/python3/dist-packages/supervisor/xmlrpc.py line: 560
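For reference, supervisorctl talks to supervisord over an XML-RPC socket, and [Errno 111] Connection refused usually means nothing is listening there, either because supervisord is not running or because no server socket is configured. A minimal sketch of a conf that defines a unix socket for supervisorctl (the socket path is an assumption, not taken from the question):
[unix_http_server]
file=/var/run/supervisor.sock

[supervisord]
nodaemon=true

[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[supervisorctl]
serverurl=unix:///var/run/supervisor.sock

[program:daphne]
command=daphne gsigner.asgi:application
directory=/src/app/gsigner/
user=root
autostart=true
autorestart=true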

Related

Django Docker ElasticBeanstalk fails

I'm new to Docker and EB deployment. I want to deploy Django with Docker on Elastic Beanstalk; here's what I've done so far.
I created a Dockerfile:
# Pull base image
FROM python:3.9.16-slim-buster
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apt-get update &&\
apt-get install -y binutils libproj-dev gdal-bin python-gdal python3-gdal libpq-dev python-dev libcurl4-openssl-dev libssl-dev gcc
# install dependencies
COPY . /code
WORKDIR /code/
RUN pip install -r requirements.txt
# set work directory
WORKDIR /code/app
then in docker-compose.yml
version: '3.7'
services:
  web:
    build: .
    command: python /code/hike/manage.py runserver 0.0.0.0:8000
    ports:
      - 8000:8000
    volumes:
      - .:/code
It runs locally, but the deployment fails, and when I check the logs they say:
pg_config is required to build psycopg2 from source.
It's as if it's not using the Dockerfile. I read somewhere that I should set up Dockerrun.aws.json, but I have no idea what to write in it!
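For what it's worth, a minimal single-container Dockerrun.aws.json is mostly just a version marker and the port to expose; this sketch mirrors the port from the compose file above, and everything else is an assumption to adapt:
{
  "AWSEBDockerrunVersion": "1",
  "Ports": [
    {
      "ContainerPort": 8000
    }
  ]
}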

Import boto3 module error in aws batch job

I was trying to run a batch job from an image in AWS and I'm getting the error below:
ModuleNotFoundError: No module named 'boto3'
But boto3 is installed via the Dockerfile.
Dockerfile
FROM ubuntu:20.04
ENV SPARK_VERSION 2.4.8
ENV HADOOP_VERSION 3.0.0
RUN apt update
RUN apt install openjdk-8-jdk -y
RUN apt install scala -y
RUN apt install wget tar -y
#RUN wget https://apache.mirror.digitalpacific.com.au/spark/spark-$SPARK_VERSION/spark-$SPARK_VERSION-bin-hadoop$HADOOP_VERSION.tgz
RUN wget http://archive.apache.org/dist/hadoop/common/hadoop-$HADOOP_VERSION/hadoop-$HADOOP_VERSION.tar.gz
RUN wget https://downloads.apache.org/spark/spark-$SPARK_VERSION/spark-$SPARK_VERSION-bin-without-hadoop.tgz
RUN tar xfz hadoop-$HADOOP_VERSION.tar.gz
RUN mv hadoop-$HADOOP_VERSION /opt/hadoop
RUN tar xvf spark-$SPARK_VERSION-bin-without-hadoop.tgz
RUN mv spark-$SPARK_VERSION-bin-without-hadoop /opt/spark
RUN apt install software-properties-common -y
RUN add-apt-repository ppa:deadsnakes/ppa
RUN apt update && \
apt install python3.7 -y
ENV SPARK_HOME /opt/spark
ENV HADOOP_HOME /opt/hadoop
ENV HADOOP_CONF_DIR $HADOOP_HOME/etc/hadoop
ENV PATH $PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin:${HADOOP_HOME}/bin
ENV PYSPARK_PYTHON /usr/bin/python3.7
RUN export SPARK_HOME=/opt/spark
RUN export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin:${HADOOP_HOME}/bin
RUN export PYSPARK_PYTHON=/usr/bin/python3.7
RUN export SPARK_DIST_CLASSPATH=$(hadoop classpath)
RUN update-alternatives --install /usr/bin/python python /usr/bin/python3.7 1
RUN update-alternatives --set python /usr/bin/python3.7
RUN apt-get install python3-distutils -y
RUN apt-get install python3-apt -y
RUN apt install python3-pip -y
RUN pip3 install --upgrade pip
COPY ./pipeline_union/requirements.txt requirements.txt
#RUN python -m pip install -r requirements.txt
RUN pip3 install -r requirements.txt
#RUN wget https://repo1.maven.org/maven2/com/amazonaws/aws-java-sdk/1.10.6/aws-java-sdk-1.10.6.jar -P $SPARK_HOME/jars/
RUN wget https://repo1.maven.org/maven2/com/amazonaws/aws-java-sdk-bundle/1.11.874/aws-java-sdk-bundle-1.11.874.jar -P $SPARK_HOME/jars/
RUN wget https://repo1.maven.org/maven2/org/apache/hadoop/hadoop-aws/3.0.0/hadoop-aws-3.0.0.jar -P $SPARK_HOME/jars/
RUN wget https://repo1.maven.org/maven2/net/java/dev/jets3t/jets3t/0.9.4/jets3t-0.9.4.jar -P $SPARK_HOME/jars/
#RUN wget https://repo1.maven.org/maven2/com/amazonaws/aws-java-sdk-s3/1.10.6/aws-java-sdk-s3-1.10.6.jar -P ${HADOOP_HOME}/share/hadoop/tools/lib/
#RUN wget https://repo1.maven.org/maven2/com/amazonaws/aws-java-sdk-s3/1.10.6/aws-java-sdk-s3-1.10.6.jar -P ${SPARK_HOME}/jars/
# COPY datalake/spark-on-spot/src/jars $SPARK_HOME/jars
# COPY datalake/spark-on-spot/src/pipeline_union ./
# COPY datalake/spark-on-spot/src/pipeline_union/spark.conf spark.conf
COPY ./jars $SPARK_HOME/jars
COPY ./pipeline_union ./
COPY ./pipeline_union/spark.conf spark.conf
#RUN ls /usr/lib/jvm
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64
ENV PATH $PATH:$HOME/bin:$JAVA_HOME/bin
RUN export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
RUN hadoop classpath
ENV SPARK_DIST_CLASSPATH=/opt/hadoop/etc/hadoop:/opt/hadoop/share/hadoop/common/lib/*:/opt/hadoop/share/hadoop/common/*:/opt/hadoop/share/hadoop/hdfs:/opt/hadoop/share/hadoop/hdfs/lib/*:/opt/hadoop/share/hadoop/hdfs/*:/opt/hadoop/share/hadoop/yarn/lib/*:/opt/hadoop/share/hadoop/yarn/*:/opt/hadoop/share/hadoop/mapreduce/lib/*:/opt/hadoop/share/hadoop/mapreduce/*:/opt/hadoop/contrib/capacity-scheduler/*.jar:/opt/hadoop/share/hadoop/tools/lib/*
ENTRYPOINT ["spark-submit", "--properties-file", "spark.conf"]
#ENTRYPOINT ["spark-submit", "--packages", "org.apache.hadoop:hadoop-aws:2.8.5"]
#ENTRYPOINT ["spark-submit", "--properties-file", "spark.conf", "--packages", "org.apache.hadoop:hadoop-aws:2.8.5"]
requirements.txt
boto3==1.13.9
botocore
colorama==0.3.9
progressbar2==3.39.3
pyarrow==1.0.1
requests
psycopg2-binary
pytz
I ran another image successfully, with 2 differences:
the pip line in the Dockerfile:
RUN pip install -r requirements.txt
requirements.txt:
requests
boto3
psycopg2-binary
pytz
pandas
pynt
Are there any known issues with:
using pip3 in a Dockerfile instead of pip?
specifying the boto3 version?
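One possible source of this behaviour (an assumption, not a confirmed diagnosis): on Ubuntu 20.04 the apt python3-pip installs packages for the system python3.8, while the job may run under the python3.7 added from the deadsnakes PPA, so boto3 ends up in a different site-packages. A hedged Dockerfile sketch that ties the install to the interpreter the job uses (the versioned get-pip URL is worth verifying):
RUN apt install -y python3.7-distutils \
    && wget -qO- https://bootstrap.pypa.io/pip/3.7/get-pip.py | python3.7 \
    && python3.7 -m pip install -r requirements.txt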

Upgrade mysql2 gem to latest version with rails 4?

I have tried to update the mysql2 gem, but I get this error:
Gem::LoadError: Specified 'mysql2' for database adapter, but the gem is not loaded. Add `gem 'mysql2'` to your Gemfile (and ensure its version is at the minimum required by ActiveRecord).
I have to upgrade the mysql2 gem because I need to migrate data from MySQL to PostgreSQL, and I also need to dump the MySQL database.
If anyone has a suggestion on how to get this done, please help me (mail: santu.essence#gmail.com).
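For context, that LoadError commonly appears with Rails 4 when the installed mysql2 version is newer than the range that release of ActiveRecord accepts, rather than the gem being missing. A hedged Gemfile sketch pinning mysql2 to a 0.4.x release (assuming Rails 4.2; check the exact constraint for your Rails patch level):
# Gemfile
gem 'mysql2', '~> 0.4.10'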
Dockerfile:
FROM ruby:2.7.2
SHELL ["/bin/bash", "-c"]
RUN apt-get update -y --force-yes
RUN apt-get install libtag1-dev -y --force-yes
RUN apt-get install -y curl
RUN curl -sL https://deb.nodesource.com/setup_10.x | bash - \
&& apt-get install nodejs -y
# postgresql-client --- postgresql database adapter
# RUN apt-get install -y wget sudo pgloader
# install ffmpeg
# RUN apt-get install checkinstall
RUN apt-get update \
&& apt-get install -y git build-essential gcc make yasm autoconf automake cmake libtool libmp3lame-dev pkg-config libunwind-dev zlib1g-dev libssl-dev \
&& apt-get update \
&& apt-get clean
RUN apt-get install -y --no-install-recommends libc6-dev libgdiplus wget software-properties-common \
&& wget https://www.ffmpeg.org/releases/ffmpeg-4.0.2.tar.gz \
&& tar -xzf ffmpeg-4.0.2.tar.gz; rm -r ffmpeg-4.0.2.tar.gz \
&& cd ./ffmpeg-4.0.2; ./configure --enable-gpl --enable-libmp3lame --enable-decoder=mjpeg,png --enable-encoder=png --enable-openssl --enable-nonfree \
&& cd ./ffmpeg-4.0.2; make \
&& cd ./ffmpeg-4.0.2; make install \
# ffmpeg installation complete
&& rm -rf /var/lib/apt/lists/*
WORKDIR /usr/src/app
# RUN mkdir -p .bundle && printf '%s\n%s\n' '---' 'BUNDLE_WITHOUT: development:test' > .bundle/config
COPY Gemfile* ./
RUN gem install bundler -v 2.2.28
RUN bundle install
COPY . .
# ENTRYPOINT ./entrypoint-web.sh
docker-compose.yml:
version: '3'
services:
  db:
    image: mysql:5.6.45
    restart: always
    environment:
      MYSQL_DATABASE: youspin_development
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - ./tmp/db:/var/lib/mysql
    ports:
      - 3306:3306
  redis:
    image: redis
    ports:
      - 6379:6379
  # volumes:
  #   ./tmp/db:
  #     external: true
  worker:
    # set the path for Dockerfile
    build: .
    command: bash -c "rm -f /usr/src/app/tmp/pids/resque.pid && rake app:worker-start"
    environment:
      RAILS_ENV: development
      REDIS_URL: redis://redis:6379/0
    volumes:
      - .:/usr/src/app
    depends_on:
      - db
      - redis
  web:
    # set the path for Dockerfile
    build: .
    command: bash -c "rm -f /usr/src/app/tmp/pids/server.pid && rake assets:clobber app:init app:serve && bundle exec rails s -p 3000 -b '0.0.0.0'"
    environment:
      RAILS_ENV: development
      REDIS_URL: redis://redis:6379/0
    volumes:
      - .:/usr/src/app
    ports:
      - 3000:3000
    depends_on:
      - db
      - worker
      - redis
  nginx:
    image: nginx
    volumes:
      - ./config/nginx.dev.conf:/etc/nginx/conf.d/default.conf
      - ./public:/var/www/youspin/public
    ports:
      - 80:80
    environment:
      - NGINX_PORT=80
    links:
      - web

docker-compose up works, but docker run with the same image does not?

docker-compose up works fine (screenshot attached).
Here is the docker-compose file:
version: '3.0'
services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:9090
    ports:
      - 9090:9090
    env_file:
      - .env
Dockerfile
FROM python:3.7-alpine
RUN mkdir -p /app
COPY . /app
COPY .env /app
WORKDIR /app/
RUN apk --update add python3-dev
RUN apk add mariadb-dev mariadb-client
RUN apk --update add python3 py-pip openssl ca-certificates py-openssl wget
RUN apk update && \
apk upgrade --no-cache && \
apk add --no-cache \
gcc g++ make libc-dev linux-headers
RUN pip install --upgrade pip
RUN pip install uwsgi
RUN pip install -r requirements.txt --default-timeout=10000 --no-cache-dir
EXPOSE 9090
docker run testbackend_web:latest
The command above does not work with the same build. Can someone help with this?
(Screenshot: error in the container.)
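For reference, docker run does not read docker-compose.yml, so the env_file, ports, and command defined there have to be passed explicitly; a rough equivalent of the compose service, built only from the files shown above, would be:
docker run --env-file .env -p 9090:9090 testbackend_web:latest \
    python manage.py runserver 0.0.0.0:9090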

Django: Dockerfile error with collectstatic

I am trying to deploy a Django application with Docker and Jenkins.
I get this error:
"msg": "Error building registry.docker.si/... - code: 1 message: The command '/bin/sh -c if [ ! -d ./static ]; then mkdir static; fi && ./manage.py collectstatic --no-input' returned a non-zero code: 1"
}
The Dockerfile is:
FROM python:3.6
RUN apt-get update && apt-get install -y python-dev && apt-get install -y libldap2-dev && apt-get install -y libsasl2-dev && apt-get install -y libssl-dev && apt-get install -y sasl2-bin
ENV PYTHONUNBUFFERED 1
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir --upgrade pip
RUN pip install --no-cache-dir --upgrade setuptools
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
RUN chmod u+rwx manage.py
RUN if [ ! -d ./static ]; then mkdir static; fi && ./manage.py collectstatic --no-input
RUN chown -R 10000:10000 ./
EXPOSE 8080
CMD ["sh", "./run-django.sh"]
My problem is that with the same Dockerfile, other Django projects deploy without any problem...
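For context, collectstatic failing only in one project's image build often means its settings need something at import time (a secret key, database URL, or STATIC_ROOT) that is not available during docker build. A hedged sketch for surfacing the real traceback in the build log; the SECRET_KEY placeholder is hypothetical and depends on what this project's settings actually read:
# run the step verbosely, with a throwaway value for any env var the settings require
RUN SECRET_KEY=build-placeholder ./manage.py collectstatic --no-input --verbosity 2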