manage.py: cannot connect to X server - django

I have used PyQt4.QtWebKit to crawl web pages in my Django application. In the production environment that module fails to crawl the page; it throws the error "manage.py: cannot connect to X server".
My Qt class:
import sys
from PyQt4.QtCore import QUrl
from PyQt4.QtGui import QApplication
from PyQt4.QtWebKit import QWebPage

class Render(QWebPage):
    def __init__(self, url):
        self.app = QApplication(sys.argv)
        QWebPage.__init__(self)
        self.loadFinished.connect(self._loadFinished)
        self.mainFrame().load(QUrl(url))
        self.app.exec_()

    def _loadFinished(self, result):
        self.frame = self.mainFrame()
        self.app.quit()
Calling it from the Django shell:
r = Render(url)
When I call this Render class through Django with the Django shell (python manage.py shell), it throws the error above.
Could you please help me with this?

The reason is "Xvfb":
I need to run my Python program in a bash shell under xvfb (the X virtual framebuffer), like so:
ubuntu@localhost$ xvfb-run python webpage_scrapper.py http://www.google.ca/search?q=navaspot
That gives the result.
Now my requirement is to execute this shell command from Python, wait for it to finish, and collect the result so that I can process it.
Could you please suggest an effective way to execute this command from Python?
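One way to do this is with the standard subprocess module; here is a minimal sketch (the script name and URL are the ones above, and the post-processing step is only indicated by a comment):
import subprocess

# Run the scraper under the X virtual framebuffer and wait for it to finish.
cmd = ['xvfb-run', 'python', 'webpage_scrapper.py',
       'http://www.google.ca/search?q=navaspot']
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()  # blocks until the command completes
if proc.returncode != 0:
    raise RuntimeError('xvfb-run failed with exit code %s' % proc.returncode)
# ... process `out` (the scraper's standard output) here ...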

It seems the environment variables for the X display are not set, and that is why you get this error. It can happen when you run the script from an environment that is not bound to an X display (e.g. an SSH session to the server).
Try setting the DISPLAY variable:
DISPLAY=:0.0 python manage.py script
It is also possible to set the DISPLAY environment variable from Python. Set it before any PyQt4 code runs:
import os
os.putenv('DISPLAY', ':0.0')
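Note that os.putenv does not update os.environ; assigning the variable directly (again, before any Qt object is created) is the more common way:
import os
os.environ['DISPLAY'] = ':0.0'  # must be set before the QApplication is created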
It may also not be possible to run PyQt4.QtWebKit at all if your production environment doesn't have an X server running.

Generally on headless machines, the DISPLAY variable is absent or misconfigured. To work on such machines, you can use the following approach (using Ubuntu 14.04 LTS machines as an example):
First install X server:
sudo apt-get install xserver-xorg
Now start the X server (say at :0):
sudo /usr/bin/X :0&
You can use process managers like supervisor to handle the above process.
Now just set the DISPLAY environment variable and make sure it is available to any processes you run that depend on it:
DISPLAY=:0 python manage.py
How you provide the environment variable to your application is up to you.

Related

Django Celery with Redis Issues on Digital Ocean App Platform

After quite a bit of trial and error, and a step-by-step attempt to find solutions, I thought I'd share the problems here and answer them myself based on what I found. There is not much documentation on this anywhere except small bits and pieces, so this will hopefully help others in the future.
Please note that this is specific to Django, Celery, Redis and the Digital Ocean App Platform.
This is mostly about the errors below and their resulting implications:
OSError: [Errno 38] Function not implemented
and
Cannot connect to redis://......
The first error happens when you try to run the Celery command
celery -A your_app worker --beat -l info
or similar on the App Platform. It appears that this is currently not supported on Digital Ocean. The second error occurs as a result of a number of potential mistakes.
PART 1:
While Digital Ocean might remedy this in the future, here is an approach that offers a workaround. The problem is the unsupported execution pool. Google "celery execution pools" if you want to know more about how they work. The default one is prefork, but what you need is either gevent or eventlet. I went with the former for my purposes.
Whichever you pick, you will have to install it, as it doesn't come with Celery by default. In my case that was pip install gevent (and don't forget to add it to your requirements as well).
Once you have that, you can re-run the Celery command, but note that gevent and beat are not supported within a single command (it will result in an error). Instead do the following:
celery -A your_app worker --pool=gevent -l info
and then separately (if you want to run beat that is) in another terminal/console
celery -A your_app beat -l info
In the first command you can also specify the concurrency, like so: --concurrency=100. This is not required but useful. Read up on what it does, as that goes beyond the scope of this solution.
PART 2:
In my specific case I tested the above locally (in development) first to make sure it works. The next issue was getting this into production. I use Redis as the DB/broker.
In my specific setup I have most of my Celery configuration in the the_main_app/celery/__init__.py file, but sometimes people put it directly into the_main_app/celery.py. Whichever you do, make sure that the REDIS_URL is set correctly. For development it usually looks something like the following, where YOUR_VAR_NAME is then set as the broker URL:
YOUR_VAR_NAME = os.environ.get('REDIS_URL', 'redis://localhost:6379')
app = Celery('the_main_app')
app.conf.broker_url = YOUR_VAR_NAME
The remaining settings are all documented on the "Celery first steps with Django" help page but are not relevant to what I am showing here.
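Put together, a minimal sketch of such a module might look like this (the settings-module name and the optional result backend are assumptions based on the standard "first steps with Django" layout):
import os
from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'the_main_app.settings')  # assumed settings module

YOUR_VAR_NAME = os.environ.get('REDIS_URL', 'redis://localhost:6379')

app = Celery('the_main_app')
app.conf.broker_url = YOUR_VAR_NAME
app.conf.result_backend = YOUR_VAR_NAME  # optional: only if you also store task results in Redis
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()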
PART 3:
When you set up your Redis database on the App Platform (which is very simple), you will see the connection details as 'public network' and 'VPC network'.
The Celery documentation says to use the following URL format for production: redis://:password@hostname:port/db_number. This didn't work. If you are not using a YAML file, you can simply copy and paste the entire connection string (select it from the dropdown!) from the Redis DB connection details, then set up an app-level environment variable in your Digital Ocean project named REDIS_URL and paste in that entire string (and also encrypt it!).
The string should look something like this (rediss with two s's!):
rediss://USER:PASS@URL.db.ondigitalocean.com:PORT
You are almost done. The last step is to set up the workers. It was fine for me to run the PART 1 commands as console commands on the App Platform to test them, but eventually I set up a small worker (+ Add Component) for each command and pasted it into the Run Command field.
That is basically the process step by step. Good luck!

Python exec_command throws Unknown command: `ls' [duplicate]

I connect over SSH from a terminal (on Mac) and also run a Paramiko Python script, and for some reason the two sessions seem to behave differently: the PATH environment variable is different in the two cases.
This is the code I run:
import paramiko
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('host', username='myuser',password='mypass')
stdin, stdout, stderr =ssh.exec_command('echo $PATH')
print (stdout.readlines())
Any idea why the environment variables are different?
And how can I fix it?
SSHClient.exec_command by default does not allocate a pseudo terminal for the session. As a consequence, a different set of startup scripts is (or might be) sourced; in particular, for non-interactive sessions .bash_profile is not sourced. And/or different branches in the scripts are taken, based on the absence or presence of the TERM environment variable.
To emulate the default Paramiko behavior with ssh, use the -T switch:
ssh -T myuser@host
See the ssh man:
-T Disable pseudo-tty allocation.
Conversely, to emulate the default ssh behavior with Paramiko, set the get_pty parameter of exec_command to True:
def exec_command(self, command, bufsize=-1, timeout=None, get_pty=False):
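For example, a small sketch using the command from the question:
stdin, stdout, stderr = ssh.exec_command('echo $PATH', get_pty=True)
print(stdout.readlines())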
Though rather than working around the issue by allocating a pseudo terminal in Paramiko, you should preferably fix your startup scripts to set the same PATH for all sessions.
For that see Some Unix commands fail with "<command> not found", when executed using Python Paramiko exec_command.
Working with the Channel object instead of the SSHClient object solved my problem.
chan=ssh.invoke_shell()
chan.send('echo $PATH\n')
print (chan.recv(1024))
For more details, see the documentation

How to "dockerize" Flask application?

I have a Flask application named rest.py and I have dockerized it, but it is not running.
#!flask/bin/python
from flask import Flask, jsonify

app = Flask(__name__)

tasks = [
    {
        'id': 1,
        'title': u'Buy groceries',
        'description': u'Milk, Cheese, Pizza, Fruit, Tylenol',
        'done': False
    }
]

@app.route('/tasks', methods=['GET'])
def get_tasks():
    return jsonify({'tasks': tasks})

if __name__ == '__main__':
    app.run(debug=True)
Dockerfile is as follows
FROM ubuntu
RUN apt-get update -y
RUN apt-get install -y python-dev python-pip
COPY . /rest
WORKDIR /rest
RUN pip install -r Req.txt
ENTRYPOINT ["python"]
CMD ["rest.py"]
I have built it using this command...
$ docker build -t flask-sample-one:latest .
...and when I run container...
$ docker run -d -p 5000:5000 flask-sample-one
returning the following output:
7d1ccd4a4471284127a5f4579427dd106df499e15b868f39fa0ebce84c494a42
What am I doing wrong?
The output you get is the container ID. Check with docker ps whether it keeps running.
Use docker logs [container-id] to figure out what's going on inside.
Some problems I can find in your question:
Change the app.run line to app.run(host='0.0.0.0', debug=True) (see the sketch after this list). From the point of view of the container, its services need to be externally reachable, so they must listen on all interfaces rather than only on the loopback interface, just as you would configure a publicly available server running directly on a host.
Make sure that Flask gets installed. Your Dockerfile needs to contain all the commands required to make the application work starting from a blank Ubuntu installation.
Please do not forget to deactivate debug if you'd ever expose this service on your host. Debug mode in Flask makes it possible for visitors to run arbitrary code if they can trigger an exception (it's a feature, not a bug).
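As a minimal sketch, the end of rest.py would then look like this (the rest of the file stays unchanged):
if __name__ == '__main__':
    # Listen on all interfaces so the service is reachable from outside the container;
    # keep debug=True for development only.
    app.run(host='0.0.0.0', debug=True)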
After that (and after building the image again [1]), try curl http://127.0.0.1:5000/tasks on the host. Let me know if it works; if not, there are other problems in your setup.
[1] You can improve the prototyping workflow with Flask's built-in reloader (enabled in debug mode) if you use a volume mount in your Docker container for the directory that contains your Python files; this lets you change the script on the host, reload in the browser and directly see the result.
I believe you need to strengthen your understanding of Docker in order to grasp how it works; then you will be able to "dockerize" whatever application you like.
Here is an article which can give you some first steps.
An official HOWTO will also help you.
Some observations that might help you:
check if your Req.txt contains flask for installation
before dockerizing, check if your application is working
check your running containers with docker ps and see if your container is running
if it is running, test your application: curl http://127.0.0.1:5000/tasks
One more thing:
your JSON is an OBJECT containing an ARRAY with just one ELEMENT
Is that what you want for your prototype?
Take a look at this doc about the JSON standard.

Django / Docker / Remote Debug using Pydev

My setup is the following one:
- a Django server running in a Docker container with port mapping 8090:8090
- Eclipse with PyDev
I want to be able to set breakpoints in PyDev (click on a line, step through the code).
I found several articles like:
http://www.pydev.org/manual_adv_remote_debugger.html
but it's still not working.
1) Should I update manage.py to "import pydevd"? Which lines do I have to add, and do I have to copy the pysrc directory of the PyDev plugin into the Docker container to be able to do the module import?
2) Is port forwarding needed? The Python instance running inside Docker should have access to the remote debug server on the host machine?
3) I found an article about PyCharm and remote debugging over SSH; isn't it possible to do something similar with PyDev?
4) How do I "link" my local directory and the Docker "directory"?
[EDIT] I found the solution
Copy the eclipse/path_to\pydev\plugins\org.python.pydev\pysrc directory into a place where your docker container can access it.
Edit pysrc/pydevd_file_utils.py and add directory mappings between your host and the Docker container, like:
PATHS_FROM_ECLIPSE_TO_PYTHON = [(r'C:/django', r'/.../lib/django'),
                                (r'C:/workspace/myapp', r'/var/www/myapp')]
You can set several tuples if you have several paths containing Python code.
Edit manage.py and add the following:
sys.path.append('/my_path/to_pysrc_/under_docker/pysrc')
import pydevd
pydevd.settrace(host='172.17.42.1') #IP of your host
In Pydev, in preferences > Pydev > Run/Debug > Port for remote debugger: 5678
In Debug Perspective, click on "Start the Pydev server"
In your Docker container, run: python manage.py runserver 0.0.0.0:8090 --noreload
(replace 8090 with your HTTP port)
In PyDev you will see that the code breaks right after settrace!
Now you can add some breakpoints and use the PyDev debugger. :) Enjoy!
I had a similar issue: a Django project in Docker, connected to from PyCharm 145.1504 & 162.1120 via the Docker interpreter. Running the server works OK, but debugging gets stuck after PyCharm runs
/usr/bin/python2.7 -u /root/.pycharm_helpers/pydev/pydevd.py --multiproc --qt-support --client '0.0.0.0' --port 38324 --file /opt/project/manage.py runserver 0.0.0.0:8000.
I tried to find out why for a few days, then connected PyCharm to Docker over an SSH connection, and everything works fine, both run and debug.
Ok, from what you wrote I will assume you have a Django docker container running on your local machine.
From inside your container (e.g. docker-compose exec <container name> bash to get into it)
pip install pydevd
in your code, set a breakpoint like this:
import pydevd; pydevd.settrace('docker.for.mac.localhost')
If you're not using Docker for Mac, you have to do a bit of work to get the IP of your machine from inside of your container, e.g. see this.
go to Debug Perspective and start the PyDev debug server
start your application or test
... and you should see your views for stack, variables, etc., populate as the code stops at the breakpoint.
In Python 3.7, there is now a built-in breakpoint() function, which you can configure to point to your favorite debugger using an environment variable (the default is pdb):
breakpoint()
It also takes arguments, so you can do:
breakpoint(host='docker.for.mac.localhost')
I found that a bit annoying to type, so I ended up putting a module like this inside an app:
# my_app/pydevd.py
import pydevd

def set_trace():
    pydevd.settrace('docker.for.mac.localhost')
I then set the environment variable for the builtin breakpoint (say in your docker-compose.yml):
PYTHONBREAKPOINT=my_app.pydevd.set_trace

How to run a deploy command on remote host from PyCharm?

I am looking for a way to simplify remote deployment of a django application directly from PyCharm.
Even though deploying the files themselves works just fine with the remote host and upload, I was not able to find a way to run the additional commands on the server side (like manage.py syncdb).
I am looking for a fully automated solution, one that works with a single click (or command).
I don't know much about PyCharm so maybe you could do something from the IDE, but I think you'll probably want to take a look at the fabric project (http://docs.fabfile.org/en/1.0.1/index.html)
It's a python deployment automation tool that's pretty great.
Here is the kind of Fabric script file I use. Note that I make a lot of assumptions (it is my own setup) that completely depend on how you want to set up your project, such as using virtualenv, pip, and South, as well as my own personal preference for how to deploy and where to deploy to.
You'll likely want to rework or simplify it to meet your needs.
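A minimal sketch along those lines (Fabric 1.x API; the host, project path, and virtualenv location are assumptions you would adapt to your own setup):
# fabfile.py
from fabric.api import cd, env, run

env.hosts = ['user@example.com']  # assumed deployment host

def deploy():
    with cd('/srv/myapp'):  # assumed project directory on the server
        run('git pull')
        run('venv/bin/pip install -r requirements.txt')
        run('venv/bin/python manage.py syncdb --noinput')
        run('venv/bin/python manage.py migrate')  # South migrations
        run('touch myapp/wsgi.py')  # or however you reload your app server
Running fab deploy from a terminal (or from a PyCharm external tool) then executes the whole sequence on the server.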
You can use File > Settings > Tools > External Tools to run arbitrary external executables. You can write a small script that connects over SSH and issues a [set of] commands; the configured tool can then be executed from within PyCharm.
For example, in my project based on Tornado, I run the instances using supervisord, which, according to the answer here, cannot restart upon code changes.
I ended up writing a small tool based on Paramiko that connects via SSH and runs supervisorctl restart. The code is below:
import paramiko
from optparse import OptionParser

parser = OptionParser()
parser.add_option("-s",
                  action="store",
                  dest="server",
                  help="server where to execute the command")
parser.add_option("-u",
                  action="store",
                  dest="username")
parser.add_option("-p",
                  action="store",
                  dest="password")
(options, args) = parser.parse_args()

client = paramiko.SSHClient()
client.load_system_host_keys()
client.connect(hostname=options.server, port=22, username=options.username, password=options.password)

command = "supervisorctl reload"
(stdin, stdout, stderr) = client.exec_command(command)
for line in stdout.readlines():
    print line
client.close()
External Tool configuration in Pycharm:
program: <PYTHON_INTERPRETER>
parameters: <PATH_TO_SCRIPT> -s <SERVERNAME> -u <USERNAME> -p <PASSWORD>