Integrating Django Channels with Django

I'm following the Django Channels tutorial in order to integrate it with Django, but at one point I ran into an error I can't solve. In the terminal, Django says:
[Errno 111] Connect call failed ('127.0.0.1', 6379)
I think the problem is related to these lines in the tutorial:
We will use a channel layer that uses Redis as its backing store. To start a Redis server on port 6379, run the following command:
$ docker run -p 6379:6379 -d redis:2.8
I'm working on Ubuntu 17.04 and can't run the command shown above. When I run it, the terminal says:
docker: command not found
The result is still the same after installing docker with sudo apt-get install docker. How can I solve this problem? Is there another way to start a Redis server on the specified port without installing Docker?

From the first page of the tutorial:
This tutorial also uses Docker to install and run Redis. We use Redis as the backing store for the channel layer, which is an optional component of the Channels library that we use in the tutorial. Install Docker from its official website.
So, install Docker on your Ubuntu system, and the docker command will become available. Note that on Ubuntu the apt package for the Docker engine is docker.io; the package named docker is unrelated, which is why sudo apt-get install docker did not help.
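As for the asker's final question: yes, Redis can run without Docker. A minimal sketch using the stock Ubuntu package (assuming the default apt repositories):

```shell
# Install and start Redis natively, no Docker required
sudo apt-get update
sudo apt-get install redis-server

# The packaged server listens on 127.0.0.1:6379 by default,
# which is exactly where the Channels tutorial expects it
redis-cli ping    # a healthy server replies PONG
```

Because the native install also listens on 127.0.0.1:6379, the tutorial's channel-layer configuration works unchanged.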

Related

Flask application in a Docker image runs fine from Docker Desktop; unable to deploy on EC2: "Essential container in task exited"

I am currently trying to deploy a Docker image. The Docker image is of a Flask application; when I run the image via Docker Desktop, the service works fine. However, after creating an EC2 instance on Amazon and running the image as a task, I get the error "Stopped reason: Essential container in task exited".
I am unsure how to troubleshoot or what steps to take. Please advise!
Edit:
I noticed that my Docker image on my computer is 155 MB while the one on AWS is 67 MB. Does AWS do any compression? I will try pushing my image again.
Edit2:
Reading through some other questions, it appears it is normal for the sizes to differ, as Docker Desktop shows the uncompressed version.
I decided to run the AWS task image on my Docker Desktop; while it does run and the console shows everything is fine, I am unable to access the links provided.
* Serving Flask app 'main' (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: on
* Running on all addresses (0.0.0.0)
WARNING: This is a development server. Do not use it in a production deployment.
* Running on http://127.0.0.1:5000
* Running on http://172.17.0.2:5000 (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger PIN: XXXXXX
In my Dockerfile, I have already made sure to EXPOSE 5000. I am unsure why, after running the same image from Amazon on my local machine, I am unable to connect to it.
FROM alpine:latest
ENV PATH /usr/local/bin:$PATH
RUN apk add --no-cache python3
RUN apk add py3-pip && pip3 install --upgrade pip
WORKDIR /backend
RUN pip3 install wheel
RUN pip3 install --extra-index-url https://alpine-wheels.github.io/index numpy
COPY . /backend
RUN pip3 install -r requirements.txt
EXPOSE 5000
ENTRYPOINT ["python3"]
CMD ["main.py"]
Edit3:
I believe I "found" the problem, but I am unsure how to run it. When I was building the Docker image, inside VSCode I would run it via docker run -it -d -p 5000:5000 flaskapp, where the flags -d and -p 5000:5000 mean running it in detached mode and publishing container port 5000 to host port 5000. When I run the image that way inside VSCode, I am able to access the application on my local machine.
However, after creating the image and running it by pressing Start inside Docker Desktop, I am unable to access it on my local machine.
How will I go about running the Docker image this way either via Docker Desktop or Amazon EC2?
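One thing to check on the ECS side: docker run -p 5000:5000 has an equivalent in the task definition, a portMappings entry. A hedged sketch of the relevant fragment (the container name and image are placeholders, not from the question):

```json
{
  "containerDefinitions": [
    {
      "name": "flaskapp",
      "image": "<your-image>",
      "essential": true,
      "portMappings": [
        { "containerPort": 5000, "hostPort": 5000, "protocol": "tcp" }
      ]
    }
  ]
}
```

The EC2 instance's security group must also allow inbound traffic on port 5000; otherwise the container is reachable only from the instance itself.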

Gunicorn not working on an Amazon EC2 server

I am trying to deploy a Django website on an AWS EC2 instance (Ubuntu 18.04), following this tutorial, which is consistent with other online resources. Everything is working fine, but there is some issue with gunicorn: the worker doesn't seem to boot.
I figured there was something wrong with my installation, so I tried uninstalling and reinstalling with different commands:
inside my virtualenv with
pip install gunicorn
and
sudo -H pip install gunicorn
I even physically deleted all the gunicorn files and reinstalled gunicorn, but it's always the same. Where have I gone wrong?
p.s: I had initially done sudo apt-get
From the screenshot you attached, it seems that gunicorn is installed correctly, but perhaps you have not passed a configuration file. The command to run gunicorn with a configuration file is:
gunicorn -c /path/to/gunicorn.conf.py myapp.wsgi:application
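For reference, a minimal gunicorn.conf.py might look like the following; the values are illustrative assumptions, not taken from the question:

```python
# gunicorn.conf.py -- minimal illustrative settings
bind = "0.0.0.0:8000"   # interface and port gunicorn listens on
workers = 3             # common rule of thumb: 2 * CPU cores + 1
timeout = 30            # seconds before an unresponsive worker is killed
errorlog = "-"          # send the error log to stderr so boot failures are visible
```

If the worker still fails to boot, running gunicorn with --log-level debug usually surfaces the underlying import or configuration error.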

sudo: stop: command not found

I'm running a Shiny app on an Amazon Web Services instance using shiny-server. I wanted to stop shiny-server in order to set up password protection, but when I followed a protocol that said to type sudo stop shiny-server, I got this error: sudo: stop: command not found.
I tried to look into it and installed upstart-sysv with sudo apt-get install upstart-sysv, but now my error is stop: Unable to connect to Upstart: Failed to connect to socket /com/ubuntu/upstart: Connection refused.
The AWS instance is Ubuntu 16.04; any help is appreciated.
You should use sudo systemctl stop shiny-server
Most major Linux distros, including Ubuntu 15.04+, now use systemd for management and configuration.
Earlier versions of Ubuntu used upstart (where the command was sudo stop shiny-server).
For more, see shiny-server documentation.
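The other Upstart-era service commands translate the same way under systemd:

```shell
sudo systemctl start shiny-server     # was: sudo start shiny-server
sudo systemctl stop shiny-server      # was: sudo stop shiny-server
sudo systemctl restart shiny-server   # was: sudo restart shiny-server
sudo systemctl status shiny-server    # shows whether the service is running
```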

EB CLI requires the 'docker' Python package

I'm not sure whether this is a Docker, Elastic Beanstalk, or Docker image issue, but my problem is that I'm running the command eb local run to start the local environment alongside Docker.
Expected behavior
The command runs seamlessly
Actual behavior
ERROR: DockerVersionError - Your local host has the 'docker-py' version 1.10.6 Python package installed on it.
When you run a Multicontainer Docker application locally, the EB CLI requires the 'docker' Python package.
To fix this error:
Be sure that no applications on your local host require 'docker-py', and then run this command:
pip uninstall docker-py
The EB CLI will install 'docker' the next time you run it.
$ eb --version : EB CLI 3.12.2 (Python 2.7.1)
$ docker -v : Docker version 17.12.0-ce, build c97c6d6
If you want to launch multi-container Docker applications using eb local run, you need to have uninstalled docker-py and installed docker.
As the error message indicates:
run pip uninstall docker-py (only if no other application on your machine needs it)
then run pip install "docker>=2.6.0,<2.7" immediately after
docker and docker-py cannot coexist. The docker release notes highlight the change in the package name and allude to the breakage the rename caused.
Not to be confused with Docker the engine/client, docker-py/docker is a Python wrapper around the Docker client, which the EB CLI relies on.
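Putting the fix together (this assumes pip points at the same Python interpreter the EB CLI uses):

```shell
pip uninstall -y docker-py          # remove the old package name
pip install "docker>=2.6.0,<2.7"    # install it under the new name
pip show docker                     # confirm the version that was installed
```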

Cassandra - ubuntu 16.04 :: Failed to start cassandra.service: Unit cassandra.service not found

I'm using Ubuntu 16.04 and was trying to install Cassandra, but I'm running into some unknown issues. Could you please help me solve them?
I followed the Apache Cassandra instructions to install Cassandra 3.6.
I installed Python 2.7 on this machine.
When I use the command to see the status:
sudo service cassandra status
I'm getting this error
Cassandra.service
Loaded: not-found (Reason: No such file or directory)
Active: inactive (dead)
I'm pretty sure I was not doing this as root.
I installed Cassandra on Ubuntu 14.04 a year ago without any problem, and it worked well on different projects. Now I'm getting these issues.
If I use the following command
sudo service cassandra start
I'm getting this error.
Failed to start cassandra.service: Unit cassandra.service not found.
Is there a compatibility problem between the Python and Cassandra versions? Please help me solve these issues and suggest the best way forward.
Try reinstalling Cassandra with the following commands:
sudo apt-get purge cassandra
then
sudo apt-get update
sudo apt-get install cassandra
And use the latest version of Cassandra (3.9), because earlier versions have a problem with cqlsh if you have Python 2.7.11+.
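After reinstalling, it is worth verifying both that systemd now knows about the unit and that the node itself is up:

```shell
sudo service cassandra status   # "Loaded" should no longer say "not-found"
nodetool status                 # shows the node as UN (Up/Normal) once started
```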