I am trying to host my Django website using CyberPanel on OpenLiteSpeed.
I have completed development. The website runs properly in debug mode (using the runserver command), and I am now trying to deploy it.
I am using CyberPanel as I wish to deploy my own SMTP server.
I used the following article to install CyberPanel on my DigitalOcean server: https://cyberpanel.net/docs/installing-cyberpanel/
I used the following article to install and configure a Django-based website on the OpenLiteSpeed server: https://cyberpanel.net/blog/2019/01/10/how-to-setup-django-application-on-cyberpanel-openlitespeed/
When I try to open the website unionelectronics.in, I get a 503 Service Unavailable error ("The server is temporarily busy, try again later!").
I have updated the A record in DNS Management on GoDaddy to point to my cloud provider (DigitalOcean).
Error log:
2021-10-05 14:53:37.241089 [INFO] [817] [config:server:vhosts:vhost:Example] config context /docs/.
2021-10-05 14:53:37.241103 [INFO] [817] [config:server:vhosts:vhost:Example] config context /protected/.
2021-10-05 14:53:37.242757 [INFO] [817] [PlainConf] [virtualHostConfig:] start parsing file /usr/local/lsws/conf/vhosts/unionelectronics.in/vhost.conf
2021-10-05 14:53:37.243519 [INFO] [817] [PlainConf] [virtualHostConfig:] Finished parsing file /usr/local/lsws/conf/vhosts/unionelectronics.in/vhost.conf
2021-10-05 14:53:37.243533 [INFO] [817] [PlainConf] [virtualHostConfig:] context [/] add param [envtype 1]
2021-10-05 14:53:37.251116 [NOTICE] [817] [SSL_CTX: 0x124eb20] OCSP Stapling can't be enabled: [OCSP] /etc/letsencrypt/live/unionelectronics.in/fullchain.pem: X509_STORE_CTX_get1_issuer failed!.
2021-10-05 14:53:37.251224 [INFO] [817] [config:server:vhosts:vhost:unionelectronics.in] config context /.
2021-10-05 14:53:37.257606 [INFO] [817] [PlainConf] [virtualHostTemplate:] start parsing file /usr/local/lsws/conf/templates/ccl.conf
2021-10-05 14:53:37.261988 [INFO] [817] [PlainConf] [virtualHostTemplate:] Finished parsing file /usr/local/lsws/conf/templates/ccl.conf
2021-10-05 14:53:37.262027 [INFO] [817] [PlainConf] [context:/] rewrite [] add rules [rewritefile .htaccess]
2021-10-05 14:53:37.262326 [INFO] [817] [PlainConf] [virtualHostTemplate:] start parsing file /usr/local/lsws/conf/templates/rails.conf
2021-10-05 14:53:37.274452 [INFO] [817] [PlainConf] [virtualHostTemplate:] Finished parsing file /usr/local/lsws/conf/templates/rails.conf
2021-10-05 14:53:37.274503 [INFO] [817] [PlainConf] [context:/] rewrite [] add rules [rewritefile .htaccess]
2021-10-05 14:53:37.274660 [NOTICE] [817] [ZConfManager] No VHosts added, do not send!
2021-10-05 14:53:37.275041 [NOTICE] [817] Instance is ready for service. m_fdCmd 29, m_fdAdmin 6.
2021-10-05 14:53:37.278573 [NOTICE] [817] [AutoRestarter] new child process with pid=848 is forked!
2021-10-05 14:53:37.279309 [NOTICE] [848] [*:80] Worker #1 activates SO_REUSEPORT #1 socket, fd: 22
2021-10-05 14:53:37.279352 [NOTICE] [848] [*:443] Worker #1 activates SO_REUSEPORT #1 socket, fd: 23
2021-10-05 14:53:37.279366 [NOTICE] [848] [UDP *:443] Worker #1 activates SO_REUSEPORT #1 socket, fd: 24
2021-10-05 14:53:37.279406 [INFO] [848] [UDP:0.0.0.0:443] initPacketsIn: allocated 100 packets
2021-10-05 14:53:37.279418 [INFO] [848] [UDP:0.0.0.0:7080] initPacketsIn: allocated 100 packets
2021-10-05 14:53:37.279605 [NOTICE] [848] AIO is not supported on this machine!
2021-10-05 14:53:37.279650 [NOTICE] [848] Successfully change current user to nobody
2021-10-05 14:53:37.279659 [NOTICE] [848] Core dump is enabled.
2021-10-05 14:53:37.279705 [INFO] [848] [union7914.655340]: locked pid file [/tmp/lshttpd/union7914.sock.pid].
2021-10-05 14:53:37.279729 [INFO] [848] [union7914.655340] remove unix socket for detached process: /tmp/lshttpd/union7914.sock
2021-10-05 14:53:37.279781 [NOTICE] [848] [LocalWorker::workerExec] VHost:unionelectronics.in suExec check uid 65534 gid 65534 setuidmode 0.
2021-10-05 14:53:37.279787 [NOTICE] [848] [LocalWorker::workerExec] Config[union7914.655340]: suExec uid 1002 gid 1002 cmd /usr/local/lsws/lsphp80/bin/lsphp, final uid 1002 gid 1002, flags: 0.
2021-10-05 14:53:37.299448 [NOTICE] [848] [union7914.655340] add child process pid: 850
2021-10-05 14:53:37.299497 [INFO] [848] [union7914.655340]: unlocked pid file [/tmp/lshttpd/union7914.sock.pid].
2021-10-05 14:53:37.299538 [INFO] [848] [wsgi:unionelectronics.in:/]: locked pid file [/tmp/lshttpd/unionelectronics.in:_.sock.pid].
2021-10-05 14:53:37.299541 [INFO] [848] [wsgi:unionelectronics.in:/] remove unix socket for detached process: /tmp/lshttpd/unionelectronics.in:_.sock
2021-10-05 14:53:37.306283 [NOTICE] [848] [LocalWorker::workerExec] VHost:unionelectronics.in suExec check uid 65534 gid 65534 setuidmode 0.
2021-10-05 14:53:37.306306 [NOTICE] [848] [LocalWorker::workerExec] Config[wsgi:unionelectronics.in:/]: suExec uid 65534 gid 65534 cmd /usr/local/lsws/fcgi-bin/lswsgi -m /home/company_website/companywebsiteproject/companywebsiteproject/wsgi.py, final uid 65534 gid 65534, flags: 0.
2021-10-05 14:53:37.314205 [NOTICE] [848] [wsgi:unionelectronics.in:/] add child process pid: 854
2021-10-05 14:53:37.314305 [INFO] [848] [wsgi:unionelectronics.in:/]: unlocked pid file [/tmp/lshttpd/unionelectronics.in:_.sock.pid].
2021-10-05 14:53:37.314358 [INFO] [848] [lsphp]: locked pid file [/tmp/lshttpd/lsphp.sock.pid].
2021-10-05 14:53:37.314362 [INFO] [848] [lsphp] remove unix socket for detached process: /tmp/lshttpd/lsphp.sock
2021-10-05 14:53:37.314425 [NOTICE] [848] [LocalWorker::workerExec] Config[lsphp]: suExec uid 65534 gid 65534 cmd /usr/local/lsws/lsphp73/bin/lsphp, final uid 65534 gid 65534, flags: 0.
2021-10-05 14:53:37.325014 [NOTICE] [848] [lsphp] add child process pid: 855
2021-10-05 14:53:37.325069 [INFO] [848] [lsphp]: unlocked pid file [/tmp/lshttpd/lsphp.sock.pid].
2021-10-05 14:53:37.325113 [NOTICE] [848] Setup swapping space...
2021-10-05 14:53:37.325224 [NOTICE] [848] LiteSpeed/1.7.14 Open
module versions:
modgzip 1.1
cache 1.62
modinspector 1.1
uploadprogress 1.1
mod_security 1.4
starts successfully!
Update (08-10-2021):
I deleted my old droplet, created a new droplet, and reinstalled the entire system.
Below is the vHost Conf entry that I added.
I have created a virtual environment company_website and created the companywebsiteproject project inside the virtual environment.
docRoot $VH_ROOT/public_html
vhDomain $VH_NAME
vhAliases www.$VH_NAME
adminEmails kpecengineers@gmail.com
enableGzip 1
enableIpGeo 1
index {
useServer 0
indexFiles index.php, index.html
}
errorlog $VH_ROOT/logs/$VH_NAME.error_log {
useServer 0
logLevel ERROR
rollingSize 10M
}
accesslog $VH_ROOT/logs/$VH_NAME.access_log {
useServer 0
logFormat "%h %l %u %t "%r" %>s %b "%{Referer}i" "%{User-Agent}i""
logHeaders 5
rollingSize 10M
keepDays 10
compressArchive 1
}
scripthandler {
add lsapi:union9108 php
}
extprocessor union9108 {
type lsapi
address UDS://tmp/lshttpd/union9108.sock
maxConns 10
env LSAPI_CHILDREN=10
initTimeout 600
retryTimeout 0
persistConn 1
pcKeepAliveTimeout 1
respBuffer 0
autoStart 1
path /usr/local/lsws/lsphp80/bin/lsphp
extUser union9108
extGroup union9108
memSoftLimit 2047M
memHardLimit 2047M
procSoftLimit 400
procHardLimit 500
}
phpIniOverride {
}
rewrite {
enable 1
autoLoadHtaccess 1
}
context / {
type App Server
location /home/unionelectronics.in/public_html/KPECproject/
binPath /usr/local/lsws/fcgi-bin/lswsgi
appType WSGI
startupFile KPECproject/wsgi.py
env PYTHONPATH=/home/unionelectronics.in/public_html/bin/python3:/home/unionelectronics.in/public_html/KPECproject
env LS_PYTHONBIN=/home/unionelectronics.in/public_html/bin/python3
}
When I go to http://Server_IP I now get a 404 error instead of the earlier 503 error.
When I go to http://unionelectronics.in/ I also get a 404 error instead of the earlier 503 error.
Earlier I was getting errors in the error log; now I am not getting any errors.
I believe the issue is caused by this block:
index {
useServer 0
indexFiles index.php, index.html
}
How should I route requests to my home page?
If I remove index.html I get a 404 error; if I keep index.html I get the default CyberPanel page.
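For what it's worth, if OpenLiteSpeed is already handing / to the WSGI application (the App Server context above), a 404 at the root usually just means Django has no URL pattern for ''. Below is a minimal sketch of a root route; the home() view and its placement are placeholders I made up, not names taken from the project above.

# KPECproject/urls.py -- hypothetical sketch; home() is a placeholder view
from django.contrib import admin
from django.http import HttpResponse
from django.urls import path

def home(request):
    # Replace with the project's real home-page view
    return HttpResponse("Home page")

urlpatterns = [
    path('admin/', admin.site.urls),
    path('', home, name='home'),  # serve the site root from Django
]

With a root route in place, the default index.html can be removed from public_html so the static file no longer shadows the application; that it does shadow it is an assumption based on the symptom described above.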
I have a simple Flask app that is started with Gunicorn running 4 workers.
I want to clear and warm up the cache when the server restarts, but when I do this inside the create_app() method it executes 4 times.
import threading

from flask import Flask

def create_app(test_config=None):
    app = Flask(__name__)
    # ... different configuration here
    t = threading.Thread(target=reset_cache, args=(app,))
    t.start()
    return app
[2022-10-28 09:33:33 +0000] [7] [INFO] Booting worker with pid: 7
[2022-10-28 09:33:33 +0000] [8] [INFO] Booting worker with pid: 8
[2022-10-28 09:33:33 +0000] [9] [INFO] Booting worker with pid: 9
[2022-10-28 09:33:33 +0000] [10] [INFO] Booting worker with pid: 10
2022-10-28 09:33:36,908 INFO webapp reset_cache:38 Clearing cache
2022-10-28 09:33:36,908 INFO webapp reset_cache:38 Clearing cache
2022-10-28 09:33:36,908 INFO webapp reset_cache:38 Clearing cache
2022-10-28 09:33:36,909 INFO webapp reset_cache:38 Clearing cache
How can I make it run only once, without using any queues, RQ workers, or Celery?
Signals, a mutex, some special check of the worker id (but it is always dynamic)?
What I tried: I haven't found any solution so far.
I used Redis locks for that.
Here is an example using flask-caching, which I had in the project; you can obtain the client from wherever you keep your Redis client:
import time

from webapp.models import cache  # cache = flask_caching.Cache()

def reset_cache(app):
    with app.app_context():
        client = app.extensions["cache"][cache]._write_client  # redis client
        lock = client.lock("warmup-cache-key")
        locked = lock.acquire(blocking=False, blocking_timeout=1)
        if locked:
            app.logger.info("Clearing cache")
            cache.clear()
            app.logger.info("Warming up cache")
            # function call here with `cache.set(...)`
            app.logger.info("Completed warmup cache")
            # time.sleep(5)  # add some delay if procedure is really fast
            lock.release()
It can easily be extended with threads, loops, or whatever you need to set values in the cache.
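If the requirement is only "run once per Gunicorn restart", another option (a sketch of an alternative, not something from my project) is Gunicorn's on_starting server hook, which runs exactly once in the master process before any worker is forked. The module paths below are hypothetical and assume the warmup logic has been factored out of create_app():

# gunicorn.conf.py -- hedged sketch; module and function names are placeholders
def on_starting(server):
    # Runs once in the Gunicorn master, before workers are forked,
    # so the warmup is not repeated per worker.
    from webapp import create_app           # assumed app factory, as above
    from webapp.cache_utils import warmup   # hypothetical helper holding the cache.set(...) calls

    app = create_app()
    with app.app_context():
        warmup(app)

The Redis lock above remains the more robust choice when several Gunicorn instances or hosts share the same cache.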
My Django project used to work perfectly fine for the last 90 days.
There has been no new code deployment during this time.
I am running supervisor -> gunicorn to serve the application, with nginx in front.
Unfortunately it just stopped serving the login page (the standard framework login).
I wrote a small view that checks if the DB connection is working and it comes up within seconds.
def updown(request):
    from django.shortcuts import HttpResponse
    from django.db import connections
    from django.db.utils import OperationalError

    status = True
    # Check database connection
    if status is True:
        db_conn = connections['default']
        try:
            c = db_conn.cursor()
        except OperationalError:
            status = False
            error = 'No connection to database'
        else:
            status = True
    if status is True:
        message = 'OK'
    elif status is False:
        message = 'NOK' + ' \n' + error
    return HttpResponse(message)
This delivers back an OK.
But the second I am trying to reach /admin or anything else requiring the login, it times out.
wget http://127.0.0.1:8000
--2022-07-20 22:54:58-- http://127.0.0.1:8000/
Connecting to 127.0.0.1:8000... connected.
HTTP request sent, awaiting response... 302 Found
Location: /business/dashboard/ [following]
--2022-07-20 22:54:58-- http://127.0.0.1:8000/business/dashboard/
Connecting to 127.0.0.1:8000... connected.
HTTP request sent, awaiting response... 302 Found
Location: /account/login/?next=/business/dashboard/ [following]
--2022-07-20 22:54:58-- http://127.0.0.1:8000/account/login/?next=/business/dashboard/
Connecting to 127.0.0.1:8000... connected.
HTTP request sent, awaiting response... No data received.
Retrying.
--2022-07-20 22:55:30-- (try: 2) http://127.0.0.1:8000/account/login/?next=/business/dashboard/
Connecting to 127.0.0.1:8000... connected.
HTTP request sent, awaiting response...
Supervisor / Gunicorn Log is not helpful at all:
[2022-07-20 23:06:34 +0200] [980] [INFO] Starting gunicorn 20.1.0
[2022-07-20 23:06:34 +0200] [980] [INFO] Listening at: http://127.0.0.1:8000 (980)
[2022-07-20 23:06:34 +0200] [980] [INFO] Using worker: sync
[2022-07-20 23:06:34 +0200] [986] [INFO] Booting worker with pid: 986
[2022-07-20 23:08:01 +0200] [980] [CRITICAL] WORKER TIMEOUT (pid:986)
[2022-07-20 23:08:02 +0200] [980] [WARNING] Worker with pid 986 was terminated due to signal 9
[2022-07-20 23:08:02 +0200] [1249] [INFO] Booting worker with pid: 1249
[2022-07-20 23:12:26 +0200] [980] [CRITICAL] WORKER TIMEOUT (pid:1249)
[2022-07-20 23:12:27 +0200] [980] [WARNING] Worker with pid 1249 was terminated due to signal 9
[2022-07-20 23:12:27 +0200] [1515] [INFO] Booting worker with pid: 1515
Nginx is just giving:
502 Bad Gateway
I don't see anything in the logs, I don't see any errors when running the Django dev server, and Sentry is not showing anything either. I'm totally lost.
I am running Django 4.0.x and all libraries are updated.
The check-up script for the database only checks the connection. Due to a misconfiguration of the database replication, the DB was accepting connections and reads, but hung on writes.
The login page tries to write a session row to the database, which is what failed in this case.
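Given that diagnosis, a write check would have caught what the connection check above could not. A minimal sketch, assuming the database-backed session engine is in use (the same code path the login page exercises); the view name is a placeholder:

def updown_write(request):
    from django.http import HttpResponse
    # Reads already succeeded in the original check; force a write by
    # saving a session row, which is what the login view does as well.
    try:
        request.session['healthcheck'] = 'ping'
        request.session.save()
    except Exception as exc:
        return HttpResponse('NOK \n' + str(exc), status=503)
    return HttpResponse('OK')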
I'm setting up a cluster of personal machines. I installed CentOS 7 on the server and I'm trying to start the Slurm clients, but when I typed this command:
pdsh -w n[00-09] systemctl start slurmd
I got this error:
n07: Job for slurmd.service failed because the control process exited with error code. See "systemctl status slurmd.service" and "journalctl -xe" for details.
pdsh@localhost: n07: ssh exited with exit code 1
I got that message for all the nodes.
[root@localhost ~]# systemctl status slurmd.service -l
● slurmd.service - Slurm node daemon
Loaded: loaded (/usr/lib/systemd/system/slurmd.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2020-12-22 18:27:30 CST; 27min ago
Process: 1589 ExecStart=/usr/sbin/slurmd $SLURMD_OPTIONS (code=exited, status=203/EXEC)
Dec 22 18:27:30 localhost.localdomain systemd[1]: Starting Slurm node daemon...
Dec 22 18:27:30 localhost.localdomain systemd[1]: slurmd.service: control process exited, code=exited status=203
Dec 22 18:27:30 localhost.localdomain systemd[1]: Failed to start Slurm node daemon.
Dec 22 18:27:30 localhost.localdomain systemd[1]: Unit slurmd.service entered failed state.
Dec 22 18:27:30 localhost.localdomain systemd[1]: slurmd.service failed.
This is the slurm.conf file:
ClusterName=linux
ControlMachine=localhost
#ControlAddr=
#BackupController=
#BackupAddr=
#
SlurmUser=slurm
#SlurmdUser=root
SlurmctldPort=6817
SlurmdPort=6818
AuthType=auth/munge
#JobCredentialPrivateKey=
#JobCredentialPublicCertificate=
StateSaveLocation=/var/spool/slurm/ctld
SlurmdSpoolDir=/var/spool/slurm/d
SwitchType=switch/none
MpiDefault=none
SlurmctldPidFile=/var/run/slurmctld.pid
SlurmdPidFile=/var/run/slurmd.pid
ProctrackType=proctrack/pgid
#PluginDir=
#FirstJobId=
#MaxJobCount=
#PlugStackConfig=
#PropagatePrioProcess=
#PropagateResourceLimits=
#PropagateResourceLimitsExcept=
#Prolog=
#Epilog=
#SrunProlog=
#SrunEpilog=
#TaskProlog=
#TaskEpilog=
#TaskPlugin=
#TrackWCKey=no
#TreeWidth=50
#TmpFS=
#UsePAM=
#
#TIMERS
SlurmctldTimeout=300
SlurmdTimeout=300
InactiveLimit=0
MinJobAge=300
KillWait=30
Waittime=0
#
#SCHEDULING
SchedulerType=sched/backfill
#SchedulerAuth=
#SelectType=select/linear
FastSchedule=1
#PriorityType=priority/multifactor
#PriorityDecayHalfLife=14-0
#PriorityUsageResetPeriod=14-0
#PriorityWeightFairshare=100000
#PriorityWeightAge=1000
#PriorityWeightPartition=10000
#PriorityWeightJobSize=1000
#PriorityMaxAge=1-0
#
#LOGGING
SlurmctldDebug=3
SlurmctldLogFile=/var/log/slurmctld.log
SlurmdDebug=3
SlurmdLogFile=/var/log/slurmd.log
JobCompType=jobcomp/none
#JobCompLoc=
#
#ACCOUNTING
#JobAcctGatherType=jobacct_gather/linux
#JobAcctGatherFrequency=30
#
#AccountingStorageType=accounting_storage/slurmdbd
#AccountingStorageHost=
#AccountingStorageLoc=
#AccountingStoragePass=
#AccountingStorageUser=
#
#COMPUTE NODES
# OpenHPC default configuration
TaskPlugin=task/affinity
PropagateResourceLimitsExcept=MEMLOCK
AccountingStorageType=accounting_storage/filetxt
Epilog=/etc/slurm/slurm.epilog.clean
NodeName=n[00-09] Sockets=1 CoresPerSocket=6 ThreadsPerCore=2 State=UNKNOWN
PartitionName=normal Nodes=n[00-09] Default=YES MaxTime=24:00:00 State=UP
ReturnToService=1
The control machine name was set using hostname -s.
I am facing a worker timeout issue; when this happens, the API request times out.
Please look at the log and settings below.
Here is the log:
[2016-11-26 09:45:02 +0000] [19064] [INFO] Autorestarting worker after current request.
[2016-11-26 09:45:02 +0000] [19064] [INFO] Worker exiting (pid: 19064)
[2016-11-26 09:46:02 +0000] [19008] [CRITICAL] WORKER TIMEOUT (pid:19064)
[2016-11-28 04:12:06 +0000] [19008] [INFO] Handling signal: winch
gunicorn config:
workers = threads = numCPUs() * 2 + 1
backlog = 2048
max_requests = 1200
timeout = 60
preload_app = True
worker_class = "gevent"
debug = True
daemon = False
pidfile = "/tmp/gunicorn.pid"
logfile = "/tmp/gunicorn.log"
loglevel = 'info'
accesslog = '/tmp/gunicorn-access.log'
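For context on the log above: the "Autorestarting worker after current request" line is produced because max_requests is set, and WORKER TIMEOUT fires when a worker fails to notify the master within timeout seconds. A hedged sketch of the related settings (values are illustrative, not a confirmed fix for this case):

# gunicorn config sketch -- values are illustrative only
workers = 5                  # e.g. numCPUs() * 2 + 1, as above
worker_class = "gevent"
max_requests = 1200          # causes "Autorestarting worker after current request"
max_requests_jitter = 120    # stagger restarts so workers don't all recycle at once
timeout = 120                # workers silent longer than this are killed (WORKER TIMEOUT)
graceful_timeout = 30        # time allowed to finish in-flight requests on restart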
I have this in a Django view:
import logging

from django.conf import settings
from rest_framework.decorators import api_view

fmt = getattr(settings, 'LOG_FORMAT', None)
lvl = getattr(settings, 'LOG_LEVEL', logging.INFO)
logging.basicConfig(format=fmt, level=lvl)

@api_view(['GET', 'POST'])
def index(request):
    if request.GET.get("request_id"):
        logging.info("standard CMH request...")
        barf()  # NameError: barf is not defined
    # etc
In my work environment (DEBUG=True) this prints on the console when I access the relevant page over REST from a client app:
mgregory$ foreman start
19:59:11 web.1 | started with pid 37371
19:59:11 web.1 | 2014-10-10 19:59:11 [37371] [INFO] Starting gunicorn 18.0
19:59:11 web.1 | 2014-10-10 19:59:11 [37371] [INFO] Listening at: http://0.0.0.0:5000 (37371)
19:59:11 web.1 | 2014-10-10 19:59:11 [37371] [INFO] Using worker: sync
19:59:11 web.1 | 2014-10-10 19:59:11 [37374] [INFO] Booting worker with pid: 37374
19:59:18 web.1 | standard CMH request...
What is actually happening is that barf() is throwing an exception, because it's not defined. But this isn't appearing in the console log.
How can I get all exceptions to appear in the console log in the DEBUG=True environment?
Supplementary: is there any reason why I wouldn't want the same behaviour in the production environment, with DEBUG=False? My production environment does not email me, and I'd love to have these exceptions in the log.
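One way to do both (a minimal sketch, assuming the standard settings.py; handler and logger names can be adjusted) is to route django.request, which logs unhandled view exceptions such as the barf() NameError, to a console handler. The same configuration also works with DEBUG=False, so production exceptions end up in the Gunicorn/foreman log:

# settings.py -- minimal sketch
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {'class': 'logging.StreamHandler'},
    },
    'loggers': {
        # Unhandled exceptions raised in views are logged here at ERROR level
        'django.request': {
            'handlers': ['console'],
            'level': 'ERROR',
            'propagate': False,
        },
    },
    'root': {'handlers': ['console'], 'level': 'INFO'},
}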