NOAUTH Authentication required redis - ruby-on-rails-4

Start redis-server:
redis-server /usr/local/etc/redis.conf
Redis config file (/usr/local/etc/redis.conf):
...
requirepass 'foobared'
...
Rails - application.yml:
...
development:
  redis_password: 'foobared'
...
The Error:
Redis::CommandError - NOAUTH Authentication required.:
...
app/models/user.rb:54:in `last_accessed_at'
...
app/models/user.rb:54:
Line 53 - def last_accessed_at
Line 54 - Rails.cache.read(session_key)
Line 55 - end
and session_key is just an attribute of the User model.
BTW:
± ps -ef | grep redis
501 62491 57789 0 1:45PM ttys001 0:00.37 redis-server 127.0.0.1:6379
501 62572 59388 0 1:54PM ttys002 0:00.00 grep --color=auto --exclude-dir=.bzr --exclude-dir=CVS --exclude-dir=.git --exclude-dir=.hg --exclude-dir=.svn redis

I got the same error. I just commented out the requirepass line:
# requirepass 'foobared'
It worked in my case.
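If you would rather keep the password than remove it, first confirm the server actually accepts it. A quick check from the shell (a sketch, assuming the 'foobared' password from the config above):
# Authenticated ping should succeed
redis-cli -a foobared ping    # PONG
# Without auth, the question's error is reproduced
redis-cli ping                # (error) NOAUTH Authentication required.
Note that Rails.cache only authenticates if its cache store is also configured with the password, e.g. via a URL of the form redis://:foobared@127.0.0.1:6379/0; commenting out requirepass just sidesteps that.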

Related

Pytest not working with Django and Docker - AssertionError: local('/dev/console') is not a file

I'm running a Django application in Docker and everything works fine, but when I try to run the tests they fail with a rather ambiguous error.
running docker-compose run djangoapp coverage run -m pytest
result:
Creating djangoapp_run ... done
================================================= test session starts ==================================================
platform linux -- Python 3.8.5, pytest-6.2.1, py-1.10.0, pluggy-0.13.1
rootdir: /
collected 0 items / 1 error
======================================================== ERRORS ========================================================
____________________________________________ ERROR collecting test session _____________________________________________
usr/local/lib/python3.8/dist-packages/_pytest/runner.py:311: in from_call
result: Optional[TResult] = func()
usr/local/lib/python3.8/dist-packages/_pytest/runner.py:341: in <lambda>
call = CallInfo.from_call(lambda: list(collector.collect()), "collect")
usr/local/lib/python3.8/dist-packages/_pytest/main.py:710: in collect
for x in self._collectfile(path):
usr/local/lib/python3.8/dist-packages/_pytest/main.py:546: in _collectfile
assert (
E AssertionError: local('/dev/console') is not a file (isdir=False, exists=True, islink=False)
=============================================== short test summary info ================================================
ERROR - AssertionError: local('/dev/console') is not a file (isdir=False, exists=True, islink=False)
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
=================================================== 1 error in 0.33s ===================================================
docker-compose:
version: '3'
services:
  djangoapp:
    build: .
    container_name: djangoapp
    ports:
      - '8000:80'
      - '1433:1433'
    volumes:
      - ./djangoapp:/var/www/html/djangoapp
    environment:
      - PYTHONUNBUFFERED=0
pytest collects tests recursively, and the default working directory in the Docker container is /, so pytest tries to collect from the whole filesystem, including device files such as /dev/console. The fix: set the working directory correctly!
...
    environment:
      - PYTHONUNBUFFERED=0
    working_dir: /var/www/html/djangoapp
...
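As a sketch of the same idea without editing the compose file, you can also override the working directory for a single run, or point pytest at the app directory explicitly (both commands assume the paths from the compose file above and a docker-compose version that supports --workdir):
# One-off run with the working directory overridden on the command line
docker-compose run --workdir /var/www/html/djangoapp djangoapp coverage run -m pytest
# Or leave the working directory alone and tell pytest where to collect from
docker-compose run djangoapp coverage run -m pytest /var/www/html/djangoapp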

Upstart Gunicorn isn't working

Hi, I am a South Korean student :)
I am studying staging and production testing using nginx and gunicorn.
First I want to run gunicorn on a socket:
gunicorn --bind unix:/tmp/tddtest.com.socket testlists.wsgi:application
and it shows:
[2016-06-26 05:33:42 +0000] [27861] [INFO] Starting gunicorn 19.6.0
[2016-06-26 05:33:42 +0000] [27861] [INFO] Listening at: unix:/tmp/tddgoat1.amull.net.socket (27861)
[2016-06-26 05:33:42 +0000] [27861] [INFO] Using worker: sync
[2016-06-26 05:33:42 +0000] [27893] [INFO] Booting worker with pid: 27893
Then I run the functional tests from my local repository:
python manage.py test func_test
and it worked:
Creating test database for alias 'default'...
..
----------------------------------------------------------------------
Ran 2 tests in 9.062s
OK
Destroying test database for alias 'default'...
I want gunicorn to start automatically when the server boots, so I decided to use Upstart (on Ubuntu).
In /etc/init/tddtest.com.conf
description "Gunicorn server for tddtest.com"
start on net-device-up
stop on shutdown
respawn
setuid elspeth
chdir /home/elspeth/sites/tddtest.com/source/TDD_Test/testlists/testlists
exec gunicorn --bind unix:/tmp/tdd.com.socket testlists.wsgi:application
(the path of wsgi.py is)
/sites/tddtest.com/source/TDD_Test/testlists/testlists
Then I run:
sudo start tddtest.com
It shows:
tddtest.com start/running, process 27905
so I think it is working. But when I run the functional tests from my local repository:
python manage.py test func_test
it shows:
======================================================================
FAIL: test_can_start_a_list_and_retrieve_it_later (functional_tests.tests.NewVisitorTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/hanminsoo/Documents/TDD_test/TDD_Test/superlists/functional_tests/tests.py", line 38, in test_can_start_a_list_and_retrieve_it_later
self.assertIn('To-Do', self.browser.title)
AssertionError: 'To-Do' not found in 'Error'
----------------------------------------------------------------------
Ran 2 tests in 4.738s
GUNICORN IS NOT WORKING ㅠ_ㅠ
I looked at the process list:
ps aux
but I can't find a gunicorn process:
[...]
ubuntu 24387 0.0 0.1 105636 1700 ? S 02:51 0:00 sshd: ubuntu@pts/0
ubuntu 24391 0.0 0.3 21284 3748 pts/0 Ss 02:51 0:00 -bash
root 24411 0.0 0.1 63244 1800 pts/0 S 02:51 0:00 su - elspeth
elspeth 24412 0.0 0.4 21600 4208 pts/0 S 02:51 0:00 -su
root 26860 0.0 0.0 31088 960 ? Ss 04:45 0:00 nginx: master process /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf
nobody 26863 0.0 0.1 31524 1872 ? S 04:45 0:00 nginx: worker process
elspeth 28005 0.0 0.1 17160 1292 pts/0 R+ 05:55 0:00 ps aux
I can't find the problem...
please somebody help me, thank you :)
Please modify your upstart script as follows:
exec /home/elspeth/.pyenv/versions/3.5.1/envs/sites/bin/gunicorn --bind unix:/tmp/tdd.com.socket testlists.wsgi:application
If that does not work, it could very well be because the /home/elspeth/.pyenv/ folder is inaccessible; please check its permissions. If the permissions are correct and you are still having problems, try this:
script
cd /home/elspeth/sites/tddtest.com/source/TDD_Test/testlists/testlists
/home/elspeth/.pyenv/versions/3.5.1/envs/sites/bin/gunicorn --bind unix:/tmp/tdd.com.socket testlists.wsgi:application
end script
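If the job still dies silently, the job's console output usually contains the real traceback. A quick way to check (assuming Ubuntu's default Upstart job logging, present since 12.04):
# Upstart writes each job's stdout/stderr to /var/log/upstart/<job>.log
sudo tail -n 50 /var/log/upstart/tddtest.com.log
# confirm the job's state after starting it
sudo status tddtest.com
An ImportError or "command not found" in that log points back to the PATH/virtualenv problem the exec line above works around.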

supervisor, gunicorn and django only work when logged in

I'm using this guide to set up an intranet server. Everything goes OK; the server works and I can check that it is working on my network.
But when I log out, I get a 404 error.
The sock file is in the path indicated in gunicorn_start.
(cmi2014)javier@sgc:~/workspace/cmi/cmi$ ls -l run/
total 0
srwxrwxrwx 1 javier javier 0 mar 10 17:31 cmi.sock
I can actually see the workers when I list the processes:
(cmi2014)javier@sgc:~/workspace/cmi/cmi$ ps aux | grep cmi
javier 17354 0.0 0.2 14652 8124 ? S 17:27 0:00 gunicorn: master [cmi]
javier 17365 0.0 0.3 18112 10236 ? S 17:27 0:00 gunicorn: worker [cmi]
javier 17366 0.0 0.3 18120 10240 ? S 17:27 0:00 gunicorn: worker [cmi]
javier 17367 0.0 0.5 36592 17496 ? S 17:27 0:00 gunicorn: worker [cmi]
javier 17787 0.0 0.0 4408 828 pts/0 S+ 17:55 0:00 grep --color=auto cmi
And supervisorctl responds that the process is running:
(cmi2014)javier@sgc:~/workspace/cmi/cmi$ sudo supervisorctl status cmi
[sudo] password for javier:
cmi RUNNING pid 17354, uptime 0:29:21
There is an error in the nginx logs:
(cmi2014)javier@sgc:~/workspace/cmi/cmi$ tail logs/nginx-error.log
2014/03/10 17:38:57 [error] 17299#0: *19 connect() to
unix:/home/javier/workspace/cmi/cmi/run/cmi.sock failed (111: Connection refused) while
connecting to upstream, client: 10.69.0.174, server: , request: "GET / HTTP/1.1",
upstream: "http://unix:/home/javier/workspace/cmi/cmi/run/cmi.sock:/", host:
"10.69.0.68:2014"
Again, the error appears only when I log out or close the session; everything works fine when I run or reload supervisor and stay connected.
By the way, nginx, supervisor and gunicorn all run under my uid.
Thanks in advance.
Edit: Supervisor conf
[program:cmi]
command = /home/javier/entornos/cmi2014/bin/cmi_start
user = javier
stdout_logfile = /home/javier/workspace/cmi/cmi/logs/cmi_supervisor.log
redirect_stderr = true
autostart=true
autorestart=true
Gunicorn start script
#!/bin/bash
NAME="cmi" # Name of the application
DJANGODIR=/home/javier/workspace/cmi/cmi # Django project directory
SOCKFILE=/home/javier/workspace/cmi/cmi/run/cmi.sock # we will communicate using this unix socket
USER=javier # the user to run as
GROUP=javier # the group to run as
NUM_WORKERS=3 # how many worker processes should Gunicorn spawn
DJANGO_SETTINGS_MODULE=cmi.settings # which settings file should Django use
DJANGO_WSGI_MODULE=cmi.wsgi # WSGI module name
echo "Starting $NAME as `whoami`"
# Activate the virtual environment
cd $DJANGODIR
source /home/javier/entornos/cmi2014/bin/activate
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
export CMI_SECRET_KEY='***'
export CMI_DATABASE_HOST='***'
export CMI_DATABASE_NAME='***'
export CMI_DATABASE_USER='***'
export CMI_DATABASE_PASS='***'
export CMI_DATABASE_PORT='3306'
# Create the run directory if it doesn't exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
# Start your Django Unicorn
# Programs meant to be run under supervisor should not daemonize themselves (do not use --daemon)
exec /home/javier/entornos/cmi2014/bin/gunicorn ${DJANGO_WSGI_MODULE}:application --name $NAME --workers $NUM_WORKERS --user=$USER --group=$GROUP --log-level=debug --bind=unix:$SOCKFILE

project wsgi not found when started automatically but not manually

I do not understand why I get an error about not finding my "project.wsgi" module when supervisor tries to start the app automatically (for example, when the server is starting).
2014-02-15 05:13:05 [1011] [INFO] Using worker: sync
2014-02-15 05:13:05 [1016] [INFO] Booting worker with pid: 1016
2014-02-15 05:13:05 [1016] [ERROR] Exception in worker process:
Traceback (most recent call last):
File "/var/local/sites/myproject/venv/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 495, in spawn_worker
worker.init_process()
File "/var/local/sites/myproject/venv/local/lib/python2.7/site-packages/gunicorn/workers/base.py", line 106, in init_process
self.wsgi = self.app.wsgi()
File "/var/local/sites/myproject/venv/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 114, in wsgi
self.callable = self.load()
File "/var/local/sites/myproject/venv/local/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 62, in load
return self.load_wsgiapp()
File "/var/local/sites/myproject/venv/local/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 49, in load_wsgiapp
return util.import_app(self.app_uri)
File "/var/local/sites/myproject/venv/local/lib/python2.7/site-packages/gunicorn/util.py", line 354, in import_app
__import__(module)
ImportError: No module named myproject.wsgi
Whereas I do not get this error and it works fine when I manually do:
sudo supervisorctl start myapp
What is different?
Thanks
UPDATE:
supervisor conf file:
[program:myproject]
command=/var/local/sites/myproject/run/gunicorn_start ; Command to start app
user=myproject ; User to run as
autostart=true
autorestart=true
loglevel=info
redirect_stderr=false
stdout_logfile=/var/local/sites/myproject/logs/supervisor-myproject-stdout.log
stdout_logfile_maxbytes=1MB
stdout_logfile_backups=10
stdout_capture_maxbytes=1MB
stderr_logfile=/var/local/sites/myproject/logs/supervisor-myproject-stderr.log
stderr_logfile_maxbytes=1MB
stderr_logfile_backups=10
stderr_capture_maxbytes=1MB
/var/local/sites/myproject/run/gunicorn_start:
#!/bin/bash
NAME="myproject_app" # Name of the application
USER=myproject # the user to run as
GROUP=myproject # the group to run as
NUM_WORKERS=3 # how many worker processes should Gunicorn spawn
# Logs config
LOG_LEVEL=info
ACCESS_LOGFILE=/var/local/sites/myproject/logs/gunicorn-myproject-access.log
ERROR_LOGFILE=/var/local/sites/myproject/logs/gunicorn-myproject-error.log
echo "Starting $NAME"
exec envdir /var/local/sites/myproject/env_vars /var/local/sites/myproject/venv/bin/gunicorn myproject.wsgi:application \
--name $NAME \
--workers $NUM_WORKERS \
--user=$USER --group=$GROUP \
--log-level=$LOG_LEVEL \
--bind=unix:/tmp/myproject.gunicorn.sock \
--access-logfile=$ACCESS_LOGFILE \
--error-logfile=$ERROR_LOGFILE
I think you should add directory to your supervisor configuration file. This is my template; I use it in every project and it works fine:
[program:PROJECT_NAME]
command=/opt/sites/PROJECT_NAME/env/bin/gunicorn -c /opt/sites/etc/gunicorn/GUNICORN_CONF.conf.py PROJECT_NAME.wsgi:application
directory=/opt/sites/PROJECT_NAME
environment=PATH="/opt/sites/PROJECT_NAME/env/bin"
autostart=true
autorestart=false
redirect_stderr=True
stdout_logfile=/tmp/PROJECT_NAME.stdout
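The directory setting matters because gunicorn imports myproject.wsgi as an ordinary Python module, so the project directory has to be the working directory (or on PYTHONPATH). Without directory=, the child inherits supervisord's own working directory: / at boot, but whatever directory you launched supervisord from when testing by hand, which would explain the manual-vs-automatic difference. A sketch that reproduces both behaviours from a shell (assuming the myproject package sits directly under /var/local/sites/myproject; adjust to your layout):
# Fails: started from /, as at boot, the package is not importable
cd / && /var/local/sites/myproject/venv/bin/python -c "import myproject.wsgi"
# Works: started from the project directory, which directory= arranges
cd /var/local/sites/myproject && /var/local/sites/myproject/venv/bin/python -c "import myproject.wsgi"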
I had the same problem before. I'm using the following lines in my gunicorn_start instead of envdir. I'm running a Django application within a virtualenv located in /env/nafd/, and my Django app is located in /env/nafd/nafd_proj:
..
DJANGODIR=/to/path/app_proj
cd $DJANGODIR
source ../bin/activate
exec ../bin/gunicorn nafd_proj.wsgi:application \
--name $NAME \
--workers $NUM_WORKERS \
--user=$USER --group=$GROUP \
--log-level=$LOG_LEVEL \
--bind=unix:/tmp/myproject.gunicorn.sock \
--access-logfile=$ACCESS_LOGFILE \
--error-logfile=$ERROR_LOGFILE
It's obvious, but worth mentioning: check whether the supervisord daemon is running (service supervisor status).
Here is the setup I have: a Flask app served over WSGI (gunicorn) and controlled via supervisor, and it is working perfectly.
Flask App
root@ilg40:~# ll /etc/tdm/flask/
total 1120
drwx------ 5 root root 4096 Jan 24 19:47 ./
drwx------ 3 root root 4096 Jan 23 00:20 ../
-r-------- 1 root root 1150 Aug 31 17:54 favicon.ico
drw------- 2 root root 4096 Jan 13 22:51 static/
-rw------- 1 root root 883381 Jan 23 20:09 tdm.log
-rwx------ 1 root root 73577 Jan 23 21:37 tdm.py*
-rw------- 1 root root 56445 Jan 23 21:37 tdm.pyc
drw------- 2 root root 4096 Jan 23 20:08 templates/
-rw-r--r-- 1 root root 493 Jan 23 22:42 wsgi.py
-rw-r--r-- 1 root root 720 Jan 23 22:42 wsgi.pyc
srwxrwxrwx 1 root root 0 Jan 24 19:47 wsgi.sock=
Supervisor Config File
root@ilg40:~# cat /etc/supervisor/conf.d/wsgi_flask.conf
[program:wsgi_flask]
command = gunicorn --preload --bind unix:/etc/tdm/flask/wsgi.sock --workers 4 --pythonpath /etc/tdm/flask wsgi
process_name = wsgi_flask
autostart = true
autorestart = true
stdout_logfile = /var/log/wsgi_flask/wsgi_flask.out.log
stderr_logfile = /var/log/wsgi_flask/wsgi_flask.err.log
Update Supervisord About The New Process
root@ilg40:~# supervisorctl update
wsgi_flask: added process group
Checking Process Status
root@ilg40:~# supervisorctl status wsgi_flask
wsgi_flask RUNNING pid 1129, uptime 0:29:12
Note: in the setup above I'm not using a virtualenv. With a virtualenv, I believe you would need to configure the directory variable for the process and also set the PATH for the command (e.g. env PATH="$PATH:/the/app/path" gunicorn ...), since gunicorn, Flask, and so on are only located inside the virtualenv.
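To smoke-test a setup like this without going through nginx, you can talk to the gunicorn socket directly. A sketch using the socket path from the config above (curl has supported --unix-socket since 7.40):
# Send a request straight to the unix socket gunicorn is bound to
curl --unix-socket /etc/tdm/flask/wsgi.sock http://localhost/
# Inspect recent stderr from the process under supervisor
supervisorctl tail wsgi_flask stderr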

Tuning gunicorn (with Django): Optimize for more concurrent connections and faster connections

I use Django 1.5.3 with gunicorn 18.0 and lighttpd. I serve my static and dynamic content like this using lighttpd:
$HTTP["host"] == "www.mydomain.com" {
$HTTP["url"] !~ "^/media/|^/static/|^/apple-touch-icon(.*)$|^/favicon(.*)$|^/robots\.txt$" {
proxy.balance = "hash"
proxy.server = ( "" => ("myserver" =>
( "host" => "127.0.0.1", "port" => 8013 )
))
}
$HTTP["url"] =~ "^/media|^/static|^/apple-touch-icon(.*)$|^/favicon(.*)$|^/robots\.txt$" {
alias.url = (
"/media/admin/" => "/var/www/virtualenvs/mydomain/lib/python2.7/site-packages/django/contrib/admin/static/admin/",
"/media" => "/var/www/mydomain/mydomain/media",
"/static" => "/var/www/mydomain/mydomain/static"
)
}
url.rewrite-once = (
"^/apple-touch-icon(.*)$" => "/media/img/apple-touch-icon$1",
"^/favicon(.*)$" => "/media/img/favicon$1",
"^/robots\.txt$" => "/media/robots.txt"
)
}
I have already tried running gunicorn (via supervisord) in many different ways, but I can't get it optimized beyond about 1100 concurrent connections. In my project I need about 10000-15000 connections.
command = /var/www/virtualenvs/myproject/bin/python /var/www/myproject/manage.py run_gunicorn -b 127.0.0.1:8013 -w 9 -k gevent --preload --settings=myproject.settings
command = /var/www/virtualenvs/myproject/bin/python /var/www/myproject/manage.py run_gunicorn -b 127.0.0.1:8013 -w 10 -k eventlet --worker_connections=1000 --settings=myproject.settings --max-requests=10000
command = /var/www/virtualenvs/myproject/bin/python /var/www/myproject/manage.py run_gunicorn -b 127.0.0.1:8013 -w 20 -k gevent --settings=myproject.settings --max-requests=1000
command = /var/www/virtualenvs/myproject/bin/python /var/www/myproject/manage.py run_gunicorn -b 127.0.0.1:8013 -w 40 --settings=myproject.settings
About 10 other projects live on the same server, but CPU and RAM are fine, so this shouldn't be a problem, right?
I ran a load test and these are the results:
At about 1100 connections my lighttpd error log says something like the following; this is where the load test shows the drop in connections:
2013-10-31 14:06:51: (mod_proxy.c.853) write failed: Connection timed out 110
2013-10-31 14:06:51: (mod_proxy.c.939) proxy-server disabled: 127.0.0.1 8013 83
2013-10-31 14:06:51: (mod_proxy.c.1316) no proxy-handler found for: /
... after about one minute
2013-10-31 14:07:02: (mod_proxy.c.1361) proxy - re-enabled: 127.0.0.1 8013
These also appear every now and then:
2013-10-31 14:06:55: (network_linux_sendfile.c.94) writev failed: Connection timed out 600
2013-10-31 14:06:55: (mod_proxy.c.853) write failed: Connection timed out 110
...
2013-10-31 14:06:57: (mod_proxy.c.828) establishing connection failed: Connection timed out
2013-10-31 14:06:57: (mod_proxy.c.939) proxy-server disabled: 127.0.0.1 8013 45
So how can I tune gunicorn/lighttpd to serve more connections faster? What can I optimize? Do you know any other/better setup?
Thanks a lot in advance for your help!
Update: Some more server info
root@django ~ # top
top - 15:28:38 up 100 days, 9:56, 1 user, load average: 0.11, 0.37, 0.76
Tasks: 352 total, 1 running, 351 sleeping, 0 stopped, 0 zombie
Cpu(s): 33.0%us, 1.6%sy, 0.0%ni, 64.2%id, 0.4%wa, 0.0%hi, 0.7%si, 0.0%st
Mem: 32926156k total, 17815984k used, 15110172k free, 342096k buffers
Swap: 23067560k total, 0k used, 23067560k free, 4868036k cached
root@django ~ # iostat
Linux 2.6.32-5-amd64 (django.myserver.com) 10/31/2013 _x86_64_ (4 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
33.00 0.00 2.36 0.40 0.00 64.24
Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sda 137.76 980.27 2109.21 119567783 257268738
sdb 24.23 983.53 2112.25 119965731 257639874
sdc 24.25 985.79 2110.14 120241256 257382998
md0 0.00 0.00 0.00 400 0
md1 0.00 0.00 0.00 284 6
md2 1051.93 38.93 4203.96 4748629 512773952
root@django ~ # netstat -an | grep :80 | wc -l
7129
Kernel Settings:
echo "10152 65535" > /proc/sys/net/ipv4/ip_local_port_range
sysctl -w fs.file-max=128000
sysctl -w net.ipv4.tcp_keepalive_time=300
sysctl -w net.core.somaxconn=250000
sysctl -w net.ipv4.tcp_max_syn_backlog=2500
sysctl -w net.core.netdev_max_backlog=2500
ulimit -n 10240
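Note that echoing into /proc and sysctl -w only change the running kernel, and ulimit -n only affects the current shell, so these values are lost on reboot. A sketch for persisting them (assuming a standard Debian/Ubuntu layout):
# Persist the sysctl values across reboots (run once, as root)
cat >> /etc/sysctl.conf <<'EOF'
net.ipv4.ip_local_port_range = 10152 65535
fs.file-max = 128000
net.ipv4.tcp_keepalive_time = 300
net.core.somaxconn = 250000
net.ipv4.tcp_max_syn_backlog = 2500
net.core.netdev_max_backlog = 2500
EOF
sysctl -p    # apply immediately
The per-process file-descriptor limit is set in /etc/security/limits.conf rather than via ulimit if it needs to survive logout.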