On a site running Nginx, uWSGI and Django, with many virtualenvs and Django projects, is there any way I can tell which .ini file a running uWSGI process has loaded?
I ran "ps aux | grep uwsgi" and it shows this:
ubuntu 2136 0.0 0.4 108280 32872 pts/1 S+ Mar29 22:58 uwsgi repository.ini
ubuntu 9337 0.0 0.4 111312 34836 pts/1 S+ Jul11 0:05 uwsgi repository.ini
ubuntu 9893 0.0 0.4 111572 34836 pts/1 S+ Jul11 0:07 uwsgi repository.ini
ubuntu 9980 0.0 0.4 300744 37496 pts/6 Sl+ Jul11 0:07 uwsgi repository.ini
ubuntu 12442 0.1 0.4 300752 37520 pts/6 Sl+ Jul11 0:07 uwsgi repository.ini
ubuntu 12663 0.1 0.4 111548 34872 pts/1 S+ Jul11 0:08 uwsgi repository.ini
ubuntu 15462 0.1 0.4 300752 37520 pts/6 Sl+ 00:22 0:05 uwsgi repository.ini
ubuntu 15767 0.1 0.4 111568 34852 pts/1 S+ 00:25 0:09 uwsgi repository.ini
ubuntu 17740 0.1 0.4 300752 37524 pts/6 Sl+ 00:43 0:05 uwsgi repository.ini
ubuntu 18874 0.0 0.4 107356 33944 pts/5 S+ May15 2:02 uwsgi repository.ini
ubuntu 18876 0.0 0.4 110272 33856 pts/5 S+ May15 0:00 uwsgi repository.ini
ubuntu 18877 0.0 0.4 110368 34068 pts/5 S+ May15 0:00 uwsgi repository.ini
ubuntu 20763 0.1 0.4 300744 37504 pts/6 Sl+ 01:12 0:04 uwsgi repository.ini
ubuntu 22143 0.0 0.4 301004 37716 pts/6 Sl+ Jul11 0:10 uwsgi repository.ini
ubuntu 25620 0.0 0.0 13772 1104 pts/0 S+ 01:54 0:00 grep --color=auto uwsgi
ubuntu 25915 0.0 0.4 301132 38492 pts/6 Sl+ Jul11 0:11 uwsgi repository.ini
ubuntu 27713 0.0 0.4 300756 37508 pts/6 Sl+ Jul11 0:10 uwsgi repository.ini
ubuntu 28648 0.0 0.3 92948 29528 pts/4 S+ May15 2:02 uwsgi repository.ini
ubuntu 28650 0.0 0.4 300576 36920 pts/4 Sl+ May15 0:01 uwsgi repository.ini
ubuntu 28651 0.0 0.4 300484 36812 pts/4 Sl+ May15 0:00 uwsgi repository.ini
ubuntu 30146 0.0 0.3 93864 31336 pts/6 S+ May15 12:38 uwsgi repository.ini
ubuntu 30187 0.0 0.4 113104 36372 pts/1 S+ Jul11 0:07 uwsgi repository.ini
ubuntu 30910 0.0 0.4 113088 36492 pts/1 S+ Jul11 0:07 uwsgi repository.ini
ubuntu 32262 0.0 0.4 112852 36404 pts/1 S+ Jul11 0:06 uwsgi repository.ini
ubuntu 32618 0.0 0.4 113100 36756 pts/1 S+ Jul11 0:08 uwsgi repository.ini
but I cannot tell which repository.ini each of these processes is actually running.
If you have run your application following the docs, you can run
ps aux | grep uwsgi and you should see a list of uWSGI instances (if you have multiple) and their corresponding .ini files. You can then look up each .ini file to check which one is running which project.
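Since every line in your ps output shows only the relative name repository.ini, you can disambiguate the instances by checking each worker's working directory, which the relative path resolves against. A minimal sketch, assuming a Linux /proc filesystem (you may need sudo for processes owned by other users); the PIDs at the end are just examples taken from your listing:
# Print each uwsgi PID, its working directory, and its full command line;
# the relative repository.ini resolves against that directory.
for pid in $(pgrep -f 'uwsgi .*\.ini'); do
    echo "$pid: $(readlink /proc/$pid/cwd) -> $(tr '\0' ' ' < /proc/$pid/cmdline)"
done
# pwdx from procps prints the same working directory per PID:
pwdx 2136 9337 18874
The directory each PID reports tells you which project's repository.ini that instance loaded.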
I have created a VM instance which connects to the external IP over HTTP but not over HTTPS.
On checking the logs, I see the following error:
Invalid ssh key entry - expired key: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDJdFE+rHGtkgTx0niNZRTQYb...........
......jH8ycULWLplemekTGdFnwoGhNb google-ssh {"userName":"my_user_name@gmail.com","expireOn":"2022-12-06T17:04:46+0000"}
Does anyone know why this is happening or how to resolve it? I have spent at least 10 hours trying to resolve this issue but have been unsuccessful, as I am not from a technical field.
I tried
creating a new SSH key (I had never done that before) and then updating it in the metadata or through the console terminal, etc.
I also tried adding a new SSH key directly, but that didn't work.
Edit
Ran the following as per the comment:
gcloud compute project-info describe --format flattened
Result below:
commonInstanceMetadata.fingerprint: mgT7F7wYfBw=
commonInstanceMetadata.items[0].key: ssh-keys
commonInstanceMetadata.items[0].value: himanshusomani007:ssh-rsa AAAAB3NzaC1yc2EAAAADAQA..........cULWLplemekTGdFnwoGhNb google-ssh {"userName":"my_user_name@gmail.com","expireOn":"2023-12-04T17:04:46+0000"}
commonInstanceMetadata.kind: compute#metadata
creationTimestamp: 2022-11-17T00:15:29.195-08:00
defaultNetworkTier: PREMIUM
defaultServiceAccount: 1054284009344-compute@developer.gserviceaccount.com
id: 3401782412795466575
kind: compute#project
name: principal-storm-368908
quotas[0].limit: 1000.0
quotas[0].metric: SNAPSHOTS
quotas[0].usage: 0.0
quotas[1].limit: 5.0
quotas[1].metric: NETWORKS
quotas[1].usage: 1.0
quotas[2].limit: 100.0
quotas[2].metric: FIREWALLS
quotas[2].usage: 9.0
quotas[3].limit: 100.0
quotas[3].metric: IMAGES
quotas[3].usage: 0.0
quotas[4].limit: 8.0
quotas[4].metric: STATIC_ADDRESSES
quotas[4].usage: 0.0
quotas[5].limit: 200.0
quotas[5].metric: ROUTES
quotas[5].usage: 1.0
quotas[6].limit: 15.0
quotas[6].metric: FORWARDING_RULES
quotas[6].usage: 0.0
quotas[7].limit: 50.0
quotas[7].metric: TARGET_POOLS
quotas[7].usage: 0.0
quotas[8].limit: 50.0
quotas[8].metric: HEALTH_CHECKS
quotas[8].usage: 0.0
quotas[9].limit: 8.0
quotas[9].metric: IN_USE_ADDRESSES
quotas[9].usage: 0.0
quotas[10].limit: 50.0
quotas[10].metric: TARGET_INSTANCES
quotas[10].usage: 0.0
quotas[11].limit: 10.0
quotas[11].metric: TARGET_HTTP_PROXIES
quotas[11].usage: 0.0
quotas[12].limit: 10.0
quotas[12].metric: URL_MAPS
quotas[12].usage: 0.0
quotas[13].limit: 50.0
quotas[13].metric: BACKEND_SERVICES
quotas[13].usage: 0.0
quotas[14].limit: 100.0
quotas[14].metric: INSTANCE_TEMPLATES
quotas[14].usage: 0.0
quotas[15].limit: 5.0
quotas[15].metric: TARGET_VPN_GATEWAYS
quotas[15].usage: 0.0
quotas[16].limit: 10.0
quotas[16].metric: VPN_TUNNELS
quotas[16].usage: 0.0
quotas[17].limit: 3.0
quotas[17].metric: BACKEND_BUCKETS
quotas[17].usage: 0.0
quotas[18].limit: 10.0
quotas[18].metric: ROUTERS
quotas[18].usage: 0.0
quotas[19].limit: 10.0
quotas[19].metric: TARGET_SSL_PROXIES
quotas[19].usage: 0.0
quotas[20].limit: 10.0
quotas[20].metric: TARGET_HTTPS_PROXIES
quotas[20].usage: 0.0
quotas[21].limit: 10.0
quotas[21].metric: SSL_CERTIFICATES
quotas[21].usage: 0.0
quotas[22].limit: 100.0
quotas[22].metric: SUBNETWORKS
quotas[22].usage: 0.0
quotas[23].limit: 10.0
quotas[23].metric: TARGET_TCP_PROXIES
quotas[23].usage: 0.0
quotas[24].limit: 32.0
quotas[24].metric: CPUS_ALL_REGIONS
quotas[24].usage: 1.0
quotas[25].limit: 10.0
quotas[25].metric: SECURITY_POLICIES
quotas[25].usage: 0.0
quotas[26].limit: 100.0
quotas[26].metric: SECURITY_POLICY_RULES
quotas[26].usage: 0.0
quotas[27].limit: 1000.0
quotas[27].metric: XPN_SERVICE_PROJECTS
quotas[27].usage: 0.0
quotas[28].limit: 20.0
quotas[28].metric: PACKET_MIRRORINGS
quotas[28].usage: 0.0
quotas[29].limit: 100.0
quotas[29].metric: NETWORK_ENDPOINT_GROUPS
quotas[29].usage: 0.0
quotas[30].limit: 6.0
quotas[30].metric: INTERCONNECTS
quotas[30].usage: 0.0
quotas[31].limit: 5000.0
quotas[31].metric: GLOBAL_INTERNAL_ADDRESSES
quotas[31].usage: 0.0
quotas[32].limit: 5.0
quotas[32].metric: VPN_GATEWAYS
quotas[32].usage: 0.0
quotas[33].limit: 100.0
quotas[33].metric: MACHINE_IMAGES
quotas[33].usage: 0.0
quotas[34].limit: 20.0
quotas[34].metric: SECURITY_POLICY_CEVAL_RULES
quotas[34].usage: 0.0
quotas[35].limit: 0.0
quotas[35].metric: GPUS_ALL_REGIONS
quotas[35].usage: 0.0
quotas[36].limit: 5.0
quotas[36].metric: EXTERNAL_VPN_GATEWAYS
quotas[36].usage: 0.0
quotas[37].limit: 1.0
quotas[37].metric: PUBLIC_ADVERTISED_PREFIXES
quotas[37].usage: 0.0
quotas[38].limit: 10.0
quotas[38].metric: PUBLIC_DELEGATED_PREFIXES
quotas[38].usage: 0.0
quotas[39].limit: 128.0
quotas[39].metric: STATIC_BYOIP_ADDRESSES
quotas[39].usage: 0.0
quotas[40].limit: 10.0
quotas[40].metric: NETWORK_FIREWALL_POLICIES
quotas[40].usage: 0.0
quotas[41].limit: 15.0
quotas[41].metric: INTERNAL_TRAFFIC_DIRECTOR_FORWARDING_RULES
quotas[41].usage: 0.0
quotas[42].limit: 15.0
quotas[42].metric: GLOBAL_EXTERNAL_MANAGED_FORWARDING_RULES
quotas[42].usage: 0.0
selfLink: https://www.googleapis.com/compute/v1/projects/principal-storm-368908
vmDnsSetting: ZONAL_ONLY
xpnProjectStatus: UNSPECIFIED_XPN_PROJECT_STATUS
From the error shared in your description, the SSH key has expired.
You need to create a new RSA key:
ssh-keygen
Then copy the new public SSH key into the SSH keys section of the VM instance.
Please copy the public key, not the private one; you can identify the public key by its .pub extension.
Follow the instructions from here.
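For reference, a minimal sketch of the whole round trip, assuming a Linux or macOS workstation; USERNAME, INSTANCE_NAME and EXTERNAL_IP are placeholders for your own values:
# Generate a new RSA key pair; the public half is written to ~/.ssh/gcp-key.pub
ssh-keygen -t rsa -f ~/.ssh/gcp-key -C USERNAME
# The metadata entry must have the form "USERNAME:ssh-rsa AAAA... comment".
# Print it ready to paste into the instance's "SSH keys" section in the console:
echo "USERNAME:$(cat ~/.ssh/gcp-key.pub)"
# After saving the metadata, connect with the matching private key:
ssh -i ~/.ssh/gcp-key USERNAME@EXTERNAL_IP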
I am running the setup that follows (just showing the beat setup for simplicity) for daemonizing my celery and beat workers on Elastic Beanstalk. I am able to daemonize the processes successfully; however, too many processes are being spawned.
Current Output
root 20409 0.7 9.1 473560 92452 ? S 02:59 0:01 /opt/python/run/venv/bin/python3.6 /opt/python/run/venv/bin/celery -A djangoApp worker --loglevel=INFO
root 20412 0.6 7.8 388152 79228 ? S 02:59 0:01 /opt/python/run/venv/bin/python3.6 /opt/python/run/venv/bin/celery -A djangoApp beat --loglevel=INFO
root 20509 0.0 7.1 388748 72412 ? S 02:59 0:00 /opt/python/run/venv/bin/python3.6 /opt/python/run/venv/bin/celery -A djangoApp worker --loglevel=INFO
root 20585 0.6 7.7 387624 78340 ? S 03:00 0:01 /opt/python/run/venv/bin/python3.6 /opt/python/run/venv/bin/celery -A djangoApp beat --loglevel=INFO
root 20679 1.1 9.1 473560 92584 ? S 03:01 0:01 /opt/python/run/venv/bin/python3.6 /opt/python/run/venv/bin/celery -A djangoApp worker --loglevel=INFO
root 20685 0.0 7.1 388768 72460 ? S 03:01 0:00 /opt/python/run/venv/bin/python3.6 /opt/python/run/venv/bin/celery -A djangoApp worker --loglevel=INFO
Desired output, as achieved by running kill -9 $(pgrep celery) after the environment deploys:
root 20794 20.6 7.7 387624 78276 ? S 03:03 0:01 /opt/python/run/venv/bin/python3.6 /opt/python/run/venv/bin/celery -A djangoApp beat --loglevel=INFO
root 20797 24.3 9.1 473560 92564 ? S 03:03 0:01 /opt/python/run/venv/bin/python3.6 /opt/python/run/venv/bin/celery -A djangoApp worker --loglevel=INFO
root 20806 0.0 7.1 388656 72272 ? S 03:03 0:00 /opt/python/run/venv/bin/python3.6 /opt/python/run/venv/bin/celery -A djangoApp worker --loglevel=INFO
celeryBeat.sh
#!/usr/bin/env bash
/opt/python/run/venv/bin/celery -A djangoApp beat --loglevel=INFO
supervisor.conf
[unix_http_server]
file=/opt/python/run/supervisor.sock ; (the path to the socket file)
;chmod=0700 ; socket file mode (default 0700)
;chown=nobody:nogroup ; socket file uid:gid owner
[supervisord]
logfile=/opt/python/log/supervisord.log ; (main log file;default $CWD/supervisord.log)
logfile_maxbytes=10MB ; (max main logfile bytes b4 rotation;default 50MB)
logfile_backups=10 ; (num of main logfile rotation backups;default 10)
loglevel=info ; (log level;default info; others: debug,warn,trace)
pidfile=/opt/python/run/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
minfds=1024 ; (min. avail startup file descriptors;default 1024)
minprocs=200 ; (min. avail process descriptors;default 200)
directory=/opt/python/current/app ; (default is not to cd during start)
;nocleanup=true ; (don't clean up tempfiles at start;default false)
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///opt/python/run/supervisor.sock
[program:httpd]
command=/opt/python/bin/httpdlaunch
numprocs=1
directory=/opt/python/current/app
autostart=true
autorestart=unexpected
startsecs=1 ; number of secs prog must stay running (def. 1)
startretries=3 ; max # of serial start failures (default 3)
exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
killasgroup=false ; SIGKILL the UNIX process group (def false)
redirect_stderr=false
[include]
files: celery.conf
celery.conf
[program:beat:]
; Set full path to celery program if using virtualenv
command=sh /opt/python/etc/celeryBeat.sh
directory=/opt/python/current/app
; user=nobody
numprocs=1
stdout_logfile=/var/log/celery-beat.log
stderr_logfile=/var/log/celery-beat.log
autostart=true
autorestart=true
startsecs=60
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 60
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998
container commands
container_commands:
  01_celery_tasks:
    command: "cat .ebextensions/files/celery_configuration.txt > /opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh && chmod 744 /opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh"
    leader_only: true
  02_celery_tasks_run:
    command: "cat .ebextensions/files/beat_configuration.txt > /opt/python/etc/beat.sh && chmod 744 /opt/python/etc/celeryBeat.sh"
    leader_only: true
  03_celery_tasks_run:
    command: "/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh"
    leader_only: true
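A quick way to see which of these celery processes supervisord actually owns, as opposed to duplicates started outside it, is to query supervisor directly. A diagnostic sketch, assuming supervisorctl is on the PATH and the socket path from the supervisor.conf above:
# Programs supervisord manages, with their PIDs
supervisorctl -s unix:///opt/python/run/supervisor.sock status
# Everything named celery that is actually running; PIDs missing from the
# status output above were spawned outside supervisor
ps aux | grep [c]elery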
Hi, I am a South Korean student :)
I am studying staging and production testing using nginx and gunicorn.
First I want to run gunicorn using a socket:
gunicorn --bind unix:/tmp/tddtest.com.socket testlists.wsgi:application
and it shows
[2016-06-26 05:33:42 +0000] [27861] [INFO] Starting gunicorn 19.6.0
[2016-06-26 05:33:42 +0000] [27861] [INFO] Listening at: unix:/tmp/tddgoat1.amull.net.socket (27861)
[2016-06-26 05:33:42 +0000] [27861] [INFO] Using worker: sync
[2016-06-26 05:33:42 +0000] [27893] [INFO] Booting worker with pid: 27893
and I ran the functional tests from my local repository
python manage.py test func_test
and it worked!
Creating test database for alias 'default'...
..
----------------------------------------------------------------------
Ran 2 tests in 9.062s
OK
Destroying test database for alias 'default'...
and I want gunicorn to start automatically when I boot the server,
so I decided to use Upstart (on Ubuntu).
In /etc/init/tddtest.com.conf:
description "Gunicorn server for tddtest.com"
start on net-device-up
stop on shutdown
respawn
setuid elspeth
chdir /home/elspeth/sites/tddtest.com/source/TDD_Test/testlists/testlists
exec gunicorn --bind unix:/tmp/tdd.com.socket testlists.wsgi:application
(the path of wsgi.py is)
/sites/tddtest.com/source/TDD_Test/testlists/testlists
and I run the command
sudo start tddtest.com
It shows
tddtest.com start/running, process 27905
I think it is working,
but when I run the functional tests from my local repository
python manage.py test func_test
it shows
======================================================================
FAIL: test_can_start_a_list_and_retrieve_it_later (functional_tests.tests.NewVisitorTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/hanminsoo/Documents/TDD_test/TDD_Test/superlists/functional_tests/tests.py", line 38, in test_can_start_a_list_and_retrieve_it_later
self.assertIn('To-Do', self.browser.title)
AssertionError: 'To-Do' not found in 'Error'
----------------------------------------------------------------------
Ran 2 tests in 4.738s
GUNICORN IS NOT WORKING ㅠ_ㅠ
I want to look at the processes:
ps aux
but I can't find a gunicorn process
[...]
ubuntu 24387 0.0 0.1 105636 1700 ? S 02:51 0:00 sshd: ubuntu@pts/0
ubuntu 24391 0.0 0.3 21284 3748 pts/0 Ss 02:51 0:00 -bash
root 24411 0.0 0.1 63244 1800 pts/0 S 02:51 0:00 su - elspeth
elspeth 24412 0.0 0.4 21600 4208 pts/0 S 02:51 0:00 -su
root 26860 0.0 0.0 31088 960 ? Ss 04:45 0:00 nginx: master process /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf
nobody 26863 0.0 0.1 31524 1872 ? S 04:45 0:00 nginx: worker process
elspeth 28005 0.0 0.1 17160 1292 pts/0 R+ 05:55 0:00 ps aux
I can't find the problem...
Please, somebody help me. Thank you :)
Please modify your upstart script as follows:
exec /home/elspeth/.pyenv/versions/3.5.1/envs/sites/bin/gunicorn --bind unix:/tmp/tdd.com.socket testlists.wsgi:application
If that does not work, it could very well be because the /home/elspeth/.pyenv/ folder is inaccessible; please check its permissions. If the permissions are correct and you are still having problems, try this:
script
cd /home/elspeth/sites/tddtest.com/source/TDD_Test/testlists/testlists
/home/elspeth/.pyenv/versions/3.5.1/envs/sites/bin/gunicorn --bind unix:/tmp/tdd.com.socket testlists.wsgi:application
end script
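To double-check the virtualenv path and confirm the job actually comes up after the change, something like the following can help. A quick verification sketch, assuming Upstart on Ubuntu and the socket path from your config:
# The gunicorn binary the exec line should point at
ls -l /home/elspeth/.pyenv/versions/3.5.1/envs/sites/bin/gunicorn
# Restart the job and check that it stays up
sudo restart tddtest.com || sudo start tddtest.com
sudo status tddtest.com
# Upstart writes job output here (if console logging is enabled); useful for startup errors
sudo tail -n 20 /var/log/upstart/tddtest.com.log
# The socket should exist once a worker has booted
ls -l /tmp/tdd.com.socket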
I'm using this guide to set up an intranet server. Everything goes OK: the server works and I can check that it is working on my network.
But when I log out, I get a 404 error.
The sock file is in the path indicated in gunicorn_start.
(cmi2014)javier@sgc:~/workspace/cmi/cmi$ ls -l run/
total 0
srwxrwxrwx 1 javier javier 0 mar 10 17:31 cmi.sock
Actually, I can see the workers when I list the processes.
(cmi2014)javier@sgc:~/workspace/cmi/cmi$ ps aux | grep cmi
javier 17354 0.0 0.2 14652 8124 ? S 17:27 0:00 gunicorn: master [cmi]
javier 17365 0.0 0.3 18112 10236 ? S 17:27 0:00 gunicorn: worker [cmi]
javier 17366 0.0 0.3 18120 10240 ? S 17:27 0:00 gunicorn: worker [cmi]
javier 17367 0.0 0.5 36592 17496 ? S 17:27 0:00 gunicorn: worker [cmi]
javier 17787 0.0 0.0 4408 828 pts/0 S+ 17:55 0:00 grep --color=auto cmi
And supervisorctl responds that the process is running:
(cmi2014)javier@sgc:~/workspace/cmi/cmi$ sudo supervisorctl status cmi
[sudo] password for javier:
cmi RUNNING pid 17354, uptime 0:29:21
There is an error in the nginx logs:
(cmi2014)javier@sgc:~/workspace/cmi/cmi$ tail logs/nginx-error.log
2014/03/10 17:38:57 [error] 17299#0: *19 connect() to
unix:/home/javier/workspace/cmi/cmi/run/cmi.sock failed (111: Connection refused) while
connecting to upstream, client: 10.69.0.174, server: , request: "GET / HTTP/1.1",
upstream: "http://unix:/home/javier/workspace/cmi/cmi/run/cmi.sock:/", host:
"10.69.0.68:2014"
Again, the error appears only when I log out or close the session, but everything works fine when I run or reload supervisor and stay connected.
By the way, nginx, supervisor and gunicorn all run under my uid.
Thanks in advance.
Edit: Supervisor conf
[program:cmi]
command = /home/javier/entornos/cmi2014/bin/cmi_start
user = javier
stdout_logfile = /home/javier/workspace/cmi/cmi/logs/cmi_supervisor.log
redirect_stderr = true
autostart=true
autorestart=true
Gunicorn start script
#!/bin/bash
NAME="cmi" # Name of the application
DJANGODIR=/home/javier/workspace/cmi/cmi # Django project directory
SOCKFILE=/home/javier/workspace/cmi/cmi/run/cmi.sock # we will communicate using this unix socket
USER=javier # the user to run as
GROUP=javier # the group to run as
NUM_WORKERS=3 # how many worker processes should Gunicorn spawn
DJANGO_SETTINGS_MODULE=cmi.settings # which settings file should Django use
DJANGO_WSGI_MODULE=cmi.wsgi # WSGI module name
echo "Starting $NAME as `whoami`"
# Activate the virtual environment
cd $DJANGODIR
source /home/javier/entornos/cmi2014/bin/activate
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
export CMI_SECRET_KEY='***'
export CMI_DATABASE_HOST='***'
export CMI_DATABASE_NAME='***'
export CMI_DATABASE_USER='***'
export CMI_DATABASE_PASS='***'
export CMI_DATABASE_PORT='3306'
# Create the run directory if it doesn't exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
# Start your Django Unicorn
# Programs meant to be run under supervisor should not daemonize themselves (do not use --daemon)
exec /home/javier/entornos/cmi2014/bin/gunicorn ${DJANGO_WSGI_MODULE}:application --name $NAME --workers $NUM_WORKERS --user=$USER --group=$GROUP --log-level=debug --bind=unix:$SOCKFILE
So I set up a WSGI server running Python/Django code, and stuck the following in my httpd.conf file:
WSGIDaemonProcess mysite.com processes=2 threads=15 user=django group=django
However, when I go to the page and hit "refresh" really, really quickly, it seems that I am getting way more than two processes:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
21042 django 20 0 975m 36m 4440 S 98.9 6.2 0:15.63 httpd
1017 root 20 0 67688 2352 740 S 0.3 0.4 0:10.50 sendmail
21041 django 20 0 974m 40m 4412 S 0.3 6.7 0:16.36 httpd
21255 django 20 0 267m 8536 2036 S 0.3 1.4 0:01.02 httpd
21256 django 20 0 267m 8536 2036 S 0.3 1.4 0:00.01 httpd
I thought setting processes=2 would limit it to two processes. Is there something I'm missing?
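One thing that makes this hard to see is that the mod_wsgi daemon processes and Apache's own worker processes all show up as httpd in top, so the count mixes the two. mod_wsgi can tag its daemon processes with a distinct name so they can be counted separately; an illustrative tweak, assuming a mod_wsgi version that supports the display-name option:
WSGIDaemonProcess mysite.com processes=2 threads=15 user=django group=django display-name=%{GROUP}
# After restarting Apache, the two daemon processes appear as "(wsgi:mysite.com)"
# in ps/top, while the remaining httpd entries are ordinary Apache workers:
#   ps aux | grep wsgi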