I just updated Sphinx to the latest version on a dedicated server running CentOS 7, but after hours of searching I can't find the problem.
The Sphinx index was built fine, but I can't start the search daemon. I get this message every time:
systemctl status searchd.service
searchd.service - SphinxSearch Search Engine
Loaded: loaded (/usr/lib/systemd/system/searchd.service; disabled; vendor preset: disabled)
Active: failed (Result: timeout) since Sat 2018-03-24 21:14:09 CET; 3min 4s ago
Process: 17865 ExecStartPre=/bin/chown sphinx.sphinx /var/run/sphinx (code=exited, status=0/SUCCESS)
Process: 17863 ExecStartPre=/bin/mkdir -p /var/run/sphinx (code=killed, signal=TERM)
Mar 24 21:14:09 systemd[1]: Starting SphinxSearch Search Engine...
Mar 24 21:14:09 systemd[1]: searchd.service start-pre operation timed out. Terminating.
Mar 24 21:14:09 systemd[1]: Failed to start SphinxSearch Search Engine.
Mar 24 21:14:09 systemd[1]: Unit searchd.service entered failed state.
Mar 24 21:14:09 systemd[1]: searchd.service failed.
I have really no idea where this problem comes from.
In your systemd service file (mine is /usr/lib/systemd/system/searchd.service), comment out these two ExecStartPre lines:
/bin/mkdir -p /var/run/sphinx
/bin/chown sphinx.sphinx /var/run/sphinx
(you can run these commands manually if it hasn't been done yet).
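For example, the one-time manual setup as root would look like this (sphinx:sphinx is the colon form of the sphinx.sphinx owner spec):

mkdir -p /var/run/sphinx
chown sphinx:sphinx /var/run/sphinx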
Then change
Type=forking
to
Type=simple
(with Type=simple, systemd treats the searchd process it launches as the main process and does not wait for it to daemonize). Then run systemctl daemon-reload, and you can start/stop/status the service:
[root@server ~]# cat /usr/lib/systemd/system/searchd.service
[Unit]
Description=SphinxSearch Search Engine
After=network.target remote-fs.target nss-lookup.target
After=syslog.target
[Service]
Type=simple
User=sphinx
Group=sphinx
# Run ExecStartPre with root-permissions
PermissionsStartOnly=true
#ExecStartPre=/bin/mkdir -p /var/run/sphinx
#ExecStartPre=/bin/chown sphinx.sphinx /var/run/sphinx
# Run ExecStart with User=sphinx / Group=sphinx
ExecStart=/usr/bin/searchd --config /etc/sphinx/sphinx.conf
ExecStop=/usr/bin/searchd --config /etc/sphinx/sphinx.conf --stopwait
KillMode=process
KillSignal=SIGTERM
SendSIGKILL=no
LimitNOFILE=infinity
TimeoutStartSec=infinity
PIDFile=/var/run/sphinx/searchd.pid
[Install]
WantedBy=multi-user.target
Alias=sphinx.service
Alias=sphinxsearch.service
[root@server ~]# systemctl start searchd
[root@server ~]# systemctl status searchd
● searchd.service - SphinxSearch Search Engine
Loaded: loaded (/usr/lib/systemd/system/searchd.service; disabled; vendor preset: disabled)
Active: active (running) since Sun 2018-03-25 10:41:24 EDT; 4s ago
Process: 111091 ExecStop=/usr/bin/searchd --config /etc/sphinx/sphinx.conf --stopwait (code=exited, status=1/FAILURE)
Main PID: 112030 (searchd)
CGroup: /system.slice/searchd.service
├─112029 /usr/bin/searchd --config /etc/sphinx/sphinx.conf
└─112030 /usr/bin/searchd --config /etc/sphinx/sphinx.conf
Mar 25 10:41:24 server.domain.com searchd[112026]: Sphinx 2.3.2-id64-beta (4409612)
Mar 25 10:41:24 server.domain.com searchd[112026]: Copyright (c) 2001-2016, Andrew Aksyonoff
Mar 25 10:41:24 server.domain.com searchd[112026]: Copyright (c) 2008-2016, Sphinx Technologies Inc (http://sphinxsearch.com)
Mar 25 10:41:24 server.domain.com searchd[112026]: Sphinx 2.3.2-id64-beta (4409612)
Mar 25 10:41:24 server.domain.com searchd[112026]: Copyright (c) 2001-2016, Andrew Aksyonoff
Mar 25 10:41:24 server.domain.com searchd[112026]: Copyright (c) 2008-2016, Sphinx Technologies Inc (http://sphinxsearch.com)
Mar 25 10:41:24 server.domain.com searchd[112026]: precaching index 'test1'
Mar 25 10:41:24 server.domain.com searchd[112026]: WARNING: index 'test1': prealloc: failed to open /var/lib/sphinx/test1.sph: No such file or directory...T SERVING
Mar 25 10:41:24 server.domain.com searchd[112026]: precaching index 'testrt'
Mar 25 10:41:24 server.domain.com systemd[1]: searchd.service: Supervising process 112030 which is not our child. We'll most likely not notice when it exits.
Hint: Some lines were ellipsized, use -l to show in full.
[root@server ~]# systemctl stop searchd
[root@server ~]# systemctl status searchd
● searchd.service - SphinxSearch Search Engine
Loaded: loaded (/usr/lib/systemd/system/searchd.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Sun 2018-03-25 10:41:36 EDT; 1s ago
Process: 112468 ExecStop=/usr/bin/searchd --config /etc/sphinx/sphinx.conf --stopwait (code=exited, status=1/FAILURE)
Main PID: 112030
Mar 25 10:41:24 server.domain.com searchd[112026]: WARNING: index 'test1': prealloc: failed to open /var/lib/sphinx/test1.sph: No such file or directory...T SERVING
Mar 25 10:41:24 server.domain.com searchd[112026]: precaching index 'testrt'
Mar 25 10:41:24 server.domain.com systemd[1]: searchd.service: Supervising process 112030 which is not our child. We'll most likely not notice when it exits.
Mar 25 10:41:33 server.domain.com systemd[1]: Stopping SphinxSearch Search Engine...
Mar 25 10:41:33 server.domain.com searchd[112468]: [Sun Mar 25 10:41:33.183 2018] [112468] using config file '/etc/sphinx/sphinx.conf'...
Mar 25 10:41:33 server.domain.com searchd[112468]: [Sun Mar 25 10:41:33.183 2018] [112468] stop: successfully sent SIGTERM to pid 112030
Mar 25 10:41:36 server.domain.com systemd[1]: searchd.service: control process exited, code=exited status=1
Mar 25 10:41:36 server.domain.com systemd[1]: Stopped SphinxSearch Search Engine.
Mar 25 10:41:36 server.domain.com systemd[1]: Unit searchd.service entered failed state.
Mar 25 10:41:36 server.domain.com systemd[1]: searchd.service failed.
Hint: Some lines were ellipsized, use -l to show in full.
I had the same problem and finally found the solution that worked for me.
I edited my /etc/systemd/system/sphinx.service to look like this:
[Unit]
Description=SphinxSearch Search Engine
After=network.target remote-fs.target nss-lookup.target
After=syslog.target
[Service]
User=sphinx
Group=sphinx
RuntimeDirectory=sphinxsearch
RuntimeDirectoryMode=0775
# Run ExecStart with User=sphinx / Group=sphinx
ExecStart=/usr/bin/searchd --config /etc/sphinx/sphinx.conf
ExecStop=/usr/bin/searchd --config /etc/sphinx/sphinx.conf --stopwait
KillMode=process
KillSignal=SIGTERM
SendSIGKILL=no
LimitNOFILE=infinity
TimeoutStartSec=infinity
#PIDFile=/var/run/sphinx/searchd.pid
PIDFile=/var/run/sphinxsearch/searchd.pid
[Install]
WantedBy=multi-user.target
Alias=sphinx.service
Alias=sphinxsearch.service
With this unit my searchd survives a reboot. The solution from the previous post had a problem in my case: /var/run is cleared on reboot, so searchd tried to start before its runtime directory existed again. RuntimeDirectory= makes systemd recreate the directory on every start.
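To apply it (a short sketch; sphinx is the unit name from the path above):

systemctl daemon-reload
systemctl enable sphinx
systemctl start sphinx
ls -ld /run/sphinxsearch   # created by RuntimeDirectory= on each start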
The fact is that RHEL (CentOS) 7 does not accept the value "infinity" for the TimeoutStartSec parameter. You must set a numeric value, for example TimeoutStartSec=600.
Related
I'm setting up a cluster of personal machines. I installed CentOS 7 on the server, and I'm trying to start the Slurm clients, but when I type this command:
pdsh -w n[00-09] systemctl start slurmd
I get this error:
n07: Job for slurmd.service failed because the control process exited with error code. See "systemctl status slurmd.service" and "journalctl -xe" for details.
pdsh@localhost: n07: ssh exited with exit code 1
I get that message for all the nodes.
[root@localhost ~]# systemctl status slurmd.service -l
● slurmd.service - Slurm node daemon
Loaded: loaded (/usr/lib/systemd/system/slurmd.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2020-12-22 18:27:30 CST; 27min ago
Process: 1589 ExecStart=/usr/sbin/slurmd $SLURMD_OPTIONS (code=exited, status=203/EXEC)
Dec 22 18:27:30 localhost.localdomain systemd[1]: Starting Slurm node daemon...
Dec 22 18:27:30 localhost.localdomain systemd[1]: slurmd.service: control process exited, code=exited status=203
Dec 22 18:27:30 localhost.localdomain systemd[1]: Failed to start Slurm node daemon.
Dec 22 18:27:30 localhost.localdomain systemd[1]: Unit slurmd.service entered failed state.
Dec 22 18:27:30 localhost.localdomain systemd[1]: slurmd.service failed.
This is the slurm.conf file:
ClusterName=linux
ControlMachine=localhost
#ControlAddr=
#BackupController=
#BackupAddr=
#
SlurmUser=slurm
#SlurmdUser=root
SlurmctldPort=6817
SlurmdPort=6818
AuthType=auth/munge
#JobCredentialPrivateKey=
#JobCredentialPublicCertificate=
StateSaveLocation=/var/spool/slurm/ctld
SlurmdSpoolDir=/var/spool/slurm/d
SwitchType=switch/none
MpiDefault=none
SlurmctldPidFile=/var/run/slurmctld.pid
SlurmdPidFile=/var/run/slurmd.pid
ProctrackType=proctrack/pgid
#PluginDir=
#FirstJobId=
#MaxJobCount=
#PlugStackConfig=
#PropagatePrioProcess=
#PropagateResourceLimits=
#PropagateResourceLimitsExcept=
#Prolog=
#Epilog=
#SrunProlog=
#SrunEpilog=
#TaskProlog=
#TaskEpilog=
#TaskPlugin=
#TrackWCKey=no
#TreeWidth=50
#TmpFS=
#UsePAM=
#
#TIMERS
SlurmctldTimeout=300
SlurmdTimeout=300
InactiveLimit=0
MinJobAge=300
KillWait=30
Waittime=0
#
#SCHEDULING
SchedulerType=sched/backfill
#SchedulerAuth=
#SelectType=select/linear
FastSchedule=1
#PriorityType=priority/multifactor
#PriorityDecayHalfLife=14-0
#PriorityUsageResetPeriod=14-0
#PriorityWeightFairshare=100000
#PriorityWeightAge=1000
#PriorityWeightPartition=10000
#PriorityWeightJobSize=1000
#PriorityMaxAge=1-0
#
#LOGGING
SlurmctldDebug=3
SlurmctldLogFile=/var/log/slurmctld.log
SlurmdDebug=3
SlurmdLogFile=/var/log/slurmd.log
JobCompType=jobcomp/none
#JobCompLoc=
#
#ACCOUNTING
#JobAcctGatherType=jobacct_gather/linux
#JobAcctGatherFrequency=30
#
#AccountingStorageType=accounting_storage/slurmdbd
#AccountingStorageHost=
#AccountingStorageLoc=
#AccountingStoragePass=
#AccountingStorageUser=
#
#COMPUTE NODES
# OpenHPC default configuration
TaskPlugin=task/affinity
PropagateResourceLimitsExcept=MEMLOCK
AccountingStorageType=accounting_storage/filetxt
Epilog=/etc/slurm/slurm.epilog.clean
NodeName=n[00-09] Sockets=1 CoresPerSocket=6 ThreadsPerCore=2 State=UNKNOWN
PartitionName=normal Nodes=n[00-09] Default=YES MaxTime=24:00:00 State=UP
ReturnToService=1
The control machine name was set using hostname -s.
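A note on the log above: status=203/EXEC is systemd's code for "the service binary could not be executed at all" (missing file, wrong path, or no execute permission), so it is worth checking that /usr/sbin/slurmd actually exists on the nodes, for example by reusing the pdsh pattern from above:

pdsh -w n[00-09] 'ls -l /usr/sbin/slurmd'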
I'm trying to manage the Puma server for my Ruby on Rails site with systemd. Puma fails to start, with the following error: PG::ConnectionBad: fe_sendauth: no password supplied. When I start Puma myself in a terminal, with the same start command as in the systemd unit, it runs correctly. Please help.
I use Rails 4.2.11.1 and PostgreSQL 11.2 on Debian 9.12, running on VirtualBox 6.0.
web-site file structure:
/mytarifs/current - symlink to last release
/mytarifs/releases - releases
/mytarifs/shared - shared files like database connections
I start Puma successfully in a terminal with the following command:
root@mt-staging-1:/mytarifs/current# bundle exec puma -C config/puma.production.rb
DATABASE_URL environment variable:
DATABASE_URL=postgresql://login_name:password@localhost:5432/db_tarif
With this database URL I can connect to my DB with psql.
error log:
Mar 07 02:20:39 mt-staging-1 systemd[1]: Started puma for mytarifs (production).
Mar 07 02:20:40 mt-staging-1 puma[12237]: [12237] Puma starting in cluster mode...
Mar 07 02:20:40 mt-staging-1 puma[12237]: [12237] * Version 4.3.3 (ruby 2.3.8-p459), codename: Mysterious Traveller
Mar 07 02:20:40 mt-staging-1 puma[12237]: [12237] * Min threads: 0, max threads: 5
Mar 07 02:20:40 mt-staging-1 puma[12237]: [12237] * Environment: production
Mar 07 02:20:40 mt-staging-1 puma[12237]: [12237] * Process workers: 1
Mar 07 02:20:40 mt-staging-1 puma[12237]: [12237] * Preloading application
Mar 07 02:20:47 mt-staging-1 puma[12237]: The PGconn, PGresult, and PGError constants are deprecated, and will be
Mar 07 02:20:47 mt-staging-1 puma[12237]: removed as of version 1.0.
Mar 07 02:20:47 mt-staging-1 puma[12237]: You should use PG::Connection, PG::Result, and PG::Error instead, respectively.
Mar 07 02:20:47 mt-staging-1 puma[12237]: Called from /mytarifs/releases/20200306184828/vendor/bundle/ruby/2.3.0/gems/activesupport-4.2.11.1/lib/active_support/dependencies.rb:240:in `load_dependency'
/mytarifs/current/config/puma.production.rb
threads Integer(ENV['MIN_THREADS'] || 0), Integer(ENV['MAX_THREADS'] || 5)
workers Integer(ENV['PUMA_WORKERS'] || 1)
preload_app!
bind 'unix:///mytarifs/shared/tmp/sockets/puma.sock'
pidfile '/mytarifs/shared/tmp/pids/puma.production.pid'
state_path '/mytarifs/shared/tmp/pids/puma.state'
rackup DefaultRackup
environment ENV['RACK_ENV'] || 'production'
on_worker_boot do
ActiveSupport.on_load(:active_record) do
ActiveRecord::Base.establish_connection
end
end
/mytarifs/current/config/database.yml
default: &default
adapter: postgresql
encoding: unicode
pool: 125
username: <%= ENV["PG_USERNAME"] %>
password: <%= ENV["PG_PASSWORD"] %>
host: localhost
template: template0
reconnect: true
production:
<<: *default
url: <%= ENV["DATABASE_URL"] %>
/etc/systemd/system/puma.service
[Unit]
Description=puma for mytarifs (production)
After=network.target
[Service]
Type=simple
Environment=RAILS_ENV=production
Environment=PUMA_DEBUG=1
WorkingDirectory=/mytarifs/current
ExecStart=/root/.rbenv/shims/bundle exec puma -e production -C config/puma.production.rb
ExecReload=/bin/kill -TSTP $MAINPID
ExecStop=/bin/kill -TERM $MAINPID
User=root
Group=root
RestartSec=1
Restart=on-failure
SyslogIdentifier=puma
[Install]
WantedBy=multi-user.target
OK, I found the reason for the mistake: environment variables are not available (they come up empty) when systemd executes the unit.
I do not know how to make systemd pick them up from the shell's environment, but systemd can take them from a file with the directive EnvironmentFile=/absolute/path/to/environment/file.
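A minimal sketch, assuming you collect the variables in a file such as /etc/puma.env (a hypothetical path; the values below are placeholders based on the question):

# /etc/puma.env -- hypothetical file
DATABASE_URL=postgresql://login_name:password@localhost:5432/db_tarif
PG_USERNAME=login_name
PG_PASSWORD=password

Then reference it from the [Service] section of /etc/systemd/system/puma.service:

[Service]
EnvironmentFile=/etc/puma.env

Run systemctl daemon-reload and restart the service afterwards.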
My deployment environment is "Django 1.11.13 + Python 3.6.5 (with virtualenv) + uWSGI 2.0 + Nginx 1.12". Here is my project:
(ncms) [ncms@localhost ncms]$ pwd
/home/ncms/ncms
(ncms) [ncms@localhost ncms]$ ll
total 36
drwxrwxr-x 12 ncms ncms 157 May 17 13:48 apps
-rwxrwxr-x 1 ncms ncms 16384 May 14 18:49 celerybeat-schedule
drwxrwxr-x 2 ncms ncms 66 May 17 10:40 db_tools
drwxrwxr-x 4 ncms ncms 41 May 17 10:40 extra_apps
drwxrwxr-x 5 ncms ncms 233 May 17 10:40 libs
drwxrwxr-x 2 ncms ncms 152 May 17 10:40 logfiles
-rwxrwxr-x 1 ncms ncms 855 May 6 22:23 manage.py
drwxrwxr-x 3 ncms ncms 201 May 21 14:19 ncms
-rwxrwxr-x 1 ncms ncms 351 May 15 18:25 ncms.conf
-rwxrwxr-x 1 ncms ncms 2766 May 17 13:43 notes.md
-rwxrwxr-x 1 ncms ncms 518 May 14 15:52 requirements.txt
drwxrwxr-x 3 ncms ncms 23 May 18 16:09 static
drwxrwxr-x 10 ncms ncms 120 May 18 16:05 static_files
drwxrwxr-x 11 ncms ncms 4096 May 17 15:38 templates
My virtualenv path and name:
(ncms) [ncms@localhost ncms]$ pwd
/home/ncms/.virtualenvs/ncms
(ncms) [ncms@localhost ncms]$ ll
total 8
drwxrwxr-x 3 ncms ncms 4096 May 21 13:46 bin
drwxrwxr-x 2 ncms ncms 24 May 15 11:49 include
drwxrwxr-x 3 ncms ncms 23 May 15 11:49 lib
-rw-rw-r-- 1 ncms ncms 61 May 15 11:50 pip-selfcheck.json
Three important files you should know about:
1. /etc/uwsgi/ncms.ini
[uwsgi]
# Django diretory that contains manage.py
chdir = /home/ncms/ncms
module = ncms.wsgi:application
env = DJANGO_SETTINGS_MODULE=ncms.settings
# enable master process manager
master = true
# bind to UNIX socket
socket = /run/uwsgi/ncms.sock
# number of worker processes
processes = 4
# user identifier of uWSGI processes
uid = ncms
# group identifier of uWSGI processes
gid = ncms
#respawn processes after serving 5000 requests
max-requests = 5000
# clear environment on exit
vacuum = true
# the virtualenv you are using (full path)
home = /home/ncms/.virtualenvs/ncms
# set mode and own of created UNIX socket
chown-socket = ncms:nginx
chmod-socket = 660
# place timestamps into log
log-date = true
logto = /var/log/uwsgi.log
no-site = true
2. /etc/systemd/system/uwsgi.service
[Unit]
Description=ncms uWSGI service
[Service]
ExecStartPre=/usr/bin/bash -c 'mkdir -p /run/uwsgi; chown ncms:nginx /run/uwsgi'
ExecStart=/usr/bin/uwsgi --emperor /etc/uwsgi
Restart=always
KillSignal=SIGQUIT
Type=notify
NotifyAccess=all
[Install]
WantedBy=graphical.target
3. /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
keepalive_timeout 65;
include /etc/nginx/mime.types;
default_type application/octet-stream;
include /etc/nginx/conf.d/*.conf;
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
location = /favicon.ico { access_log off; log_not_found off; }
location /static {
root /home/ncms/ncms;
}
location / {
include uwsgi_params;
uwsgi_pass unix:/run/uwsgi/ncms.sock;
}
}
}
After they were configured, I ran:
sudo nginx -t
sudo usermod -a -G ncms nginx
chmod 710 /home/ncms
sudo systemctl daemon-reload
sudo systemctl restart nginx
sudo systemctl restart uwsgi
Then I always got the following error at the Operational MODE: preforking step when I looked at /var/log/uwsgi.log:
Mon May 21 16:38:35 2018 - SIGINT/SIGQUIT received...killing workers...
Mon May 21 16:38:35 2018 - received message 0 from emperor
Mon May 21 16:38:36 2018 - worker 1 buried after 1 seconds
Mon May 21 16:38:36 2018 - worker 2 buried after 1 seconds
Mon May 21 16:38:36 2018 - worker 3 buried after 1 seconds
Mon May 21 16:38:36 2018 - worker 4 buried after 1 seconds
Mon May 21 16:38:36 2018 - goodbye to uWSGI.
Mon May 21 16:38:36 2018 - VACUUM: unix socket /run/uwsgi/ncms.sock removed.
Mon May 21 16:38:38 2018 - *** Starting uWSGI 2.0.17 (64bit) on [Mon May 21 16:38:38 2018] ***
Mon May 21 16:38:38 2018 - compiled with version: 4.8.5 20150623 (Red Hat 4.8.5-16) on 26 April 2018 05:37:29
Mon May 21 16:38:38 2018 - os: Linux-3.10.0-693.21.1.el7.x86_64 #1 SMP Wed Mar 7 19:03:37 UTC 2018
Mon May 21 16:38:38 2018 - nodename: localhost.localdomain
Mon May 21 16:38:38 2018 - machine: x86_64
Mon May 21 16:38:38 2018 - clock source: unix
Mon May 21 16:38:38 2018 - pcre jit disabled
Mon May 21 16:38:38 2018 - detected number of CPU cores: 4
Mon May 21 16:38:38 2018 - current working directory: /etc/uwsgi
Mon May 21 16:38:38 2018 - detected binary path: /usr/bin/uwsgi
Mon May 21 16:38:38 2018 - chdir() to /home/ncms/ncms
Mon May 21 16:38:38 2018 - your processes number limit is 7164
Mon May 21 16:38:38 2018 - your memory page size is 4096 bytes
Mon May 21 16:38:38 2018 - detected max file descriptor number: 1024
Mon May 21 16:38:38 2018 - lock engine: pthread robust mutexes
Mon May 21 16:38:38 2018 - thunder lock: disabled (you can enable it with --thunder-lock)
Mon May 21 16:38:38 2018 - uwsgi socket 0 bound to UNIX address /run/uwsgi/ncms.sock fd 3
Mon May 21 16:38:38 2018 - setgid() to 2014
Mon May 21 16:38:38 2018 - setuid() to 2030
Mon May 21 16:38:38 2018 - Python version: 2.7.5 (default, Aug 4 2017, 00:39:18) [GCC 4.8.5 20150623 (Red Hat 4.8.5-16)]
Mon May 21 16:38:38 2018 - Set PythonHome to /home/ncms/.virtualenvs/ncms
Mon May 21 16:38:38 2018 - *** Python threads support is disabled. You can enable it with --enable-threads ***
Mon May 21 16:38:38 2018 - Python main interpreter initialized at 0x1cea860
Mon May 21 16:38:38 2018 - your server socket listen backlog is limited to 100 connections
Mon May 21 16:38:38 2018 - your mercy for graceful operations on workers is 60 seconds
Mon May 21 16:38:38 2018 - mapped 364600 bytes (356 KB) for 4 cores
Mon May 21 16:38:38 2018 - *** Operational MODE: preforking ***
Traceback (most recent call last):
File "./ncms/__init__.py", line 1, in <module>
from __future__ import absolute_import, unicode_literals
ImportError: No module named __future__
Mon May 21 16:38:38 2018 - unable to load app 0 (mountpoint='') (callable not found or import error)
Mon May 21 16:38:38 2018 - *** no app loaded. going in full dynamic mode ***
Mon May 21 16:38:38 2018 - *** uWSGI is running in multiple interpreter mode ***
Mon May 21 16:38:38 2018 - spawned uWSGI master process (pid: 3456)
Mon May 21 16:38:38 2018 - spawned uWSGI worker 1 (pid: 3458, cores: 1)
Mon May 21 16:38:38 2018 - spawned uWSGI worker 2 (pid: 3459, cores: 1)
Mon May 21 16:38:38 2018 - spawned uWSGI worker 3 (pid: 3460, cores: 1)
Mon May 21 16:38:38 2018 - spawned uWSGI worker 4 (pid: 3462, cores: 1)
When I removed the line "from __future__ import absolute_import, unicode_literals" from my code, it raised the same kind of error:
Mon May 21 16:43:30 2018 - *** Operational MODE: preforking ***
Traceback (most recent call last):
File "./ncms/__init__.py", line 4, in <module>
from .celery import app as celery_app
File "./ncms/celery.py", line 6, in <module>
import os
ImportError: No module named os
Mon May 21 16:43:30 2018 - unable to load app 0 (mountpoint='') (callable not found or import error)
Mon May 21 16:43:30 2018 - *** no app loaded. going in full dynamic mode ***
It looks like it can't import anything...
When I access my website, the /var/log/uwsgi.log displayed:
Mon May 21 16:55:28 2018 - --- no python application found, check your startup logs for errors ---
[pid: 3812|app: -1|req: -1/3] 192.168.10.1 () {46 vars in 854 bytes} [Mon May 21 16:55:28 2018] GET / => generated 21 bytes in 0 msecs (HTTP/1.1 500) 2 headers in 83 bytes (0 switches on core 0)
Mon May 21 16:55:28 2018 - --- no python application found, check your startup logs for errors ---
[pid: 3812|app: -1|req: -1/4] 192.168.10.1 () {48 vars in 855 bytes} [Mon May 21 16:55:28 2018] GET /favicon.ico => generated 21 bytes in 0 msecs (HTTP/1.1 500) 2 headers in 83 bytes (1 switches on core 0)
I've googled a lot and tried many times, changing things here and there, but I just can't find the right answer.
Can anyone help me? Please!
You can first use the uwsgi command line, as root, to start the app:
uwsgi --socket 127.0.0.1:8080 --chdir /home/ncms/ncms/ --wsgi-file ncms/wsgi.py
If that works, then debug the config-file mode.
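For the config-file step, you can point uwsgi directly at the same ini file the emperor loads (a sketch using the path from the question):

uwsgi --ini /etc/uwsgi/ncms.ini

If the app imports fine this way but not under the emperor, compare the Python version in the startup banner (2.7.5 in the log above) with your virtualenv's Python (3.6.5); a uwsgi binary built against one Python cannot load the standard library from a virtualenv built with another, which matches the "ImportError: No module named os" symptom.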
When trying to run Rocket.Chat on CentOS with the following command:
systemctl start rocketchat.service
getting the following error:
Failed to start rocketchat.service: Unit is not loaded properly: Bad message.
See system logs and systemctl status rocketchat.service for details.
and when I run the command systemctl status rocketchat.service I get:
Loaded: error (Reason: Bad message)
Active: inactive (dead)
Jan 18 08:56:12 localhost.relinns.com systemd[1]: [/etc/systemd/system/rocketchat.service:3] Failed to add dependency on mongod.target[Service], ignoring: I... argument
Jan 18 08:56:12 localhost.relinns.com systemd[1]: [/etc/systemd/system/rocketchat.service:4] Unknown lvalue 'Type' in section 'Unit'
Jan 18 08:56:12 localhost.relinns.com systemd[1]: [/etc/systemd/system/rocketchat.service:5] Unknown lvalue 'ExecStart' in section 'Unit'
Jan 18 08:56:12 localhost.relinns.com systemd[1]: [/etc/systemd/system/rocketchat.service:6] Unknown lvalue 'Restart' in section 'Unit'
Jan 18 08:56:12 localhost.relinns.com systemd[1]: [/etc/systemd/system/rocketchat.service:7] Unknown lvalue 'StandardOutput' in section 'Unit'
Jan 18 08:56:12 localhost.relinns.com systemd[1]: [/etc/systemd/system/rocketchat.service:8] Unknown lvalue 'StandardError' in section 'Unit'
Jan 18 08:56:12 localhost.relinns.com systemd[1]: [/etc/systemd/system/rocketchat.service:9] Unknown lvalue 'SyslogIdentifier' in section 'Unit'
Jan 18 08:56:12 localhost.relinns.com systemd[1]: [/etc/systemd/system/rocketchat.service:10] Unknown lvalue 'User' in section 'Unit'
Jan 18 08:56:12 localhost.relinns.com systemd[1]: [/etc/systemd/system/rocketchat.service:11] Unknown lvalue 'Environment' in section 'Unit'
Jan 18 08:56:12 localhost.relinns.com systemd[1]: [/etc/systemd/system/rocketchat.service:13] Invalid section header '[Install] WantedBy=multi-user.target'
Hint: Some lines were ellipsized, use -l to show in full.
Where is the problem?
Failed to add dependency on mongod.target[Service]
I had a similar issue with a different service. The problem is in your systemd unit file: you do not have a line break before the [Service] element, which should start a new section rather than being part of the After= line.
So open your rocketchat.service file (which is probably in /usr/lib/systemd/system) and check that the [Service] element starts on a new line (look at other .service files in that directory for examples of the correct format). It should look something like:
....
After=mongod.target
[Service]
....
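For reference, here is a sketch of what the whole unit might look like once each directive is on its own line. It is reconstructed from the directive names in the error log; the ExecStart path, user, and environment values are assumptions, not taken from your actual file:

[Unit]
Description=Rocket.Chat server
After=mongod.target

[Service]
Type=simple
ExecStart=/usr/bin/node /opt/Rocket.Chat/main.js
Restart=always
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=rocketchat
User=rocketchat
Environment=ROOT_URL=http://localhost:3000 PORT=3000

[Install]
WantedBy=multi-user.target

After fixing the file, run systemctl daemon-reload before starting the service again.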
I'm trying to fetch certain values from a file that I've created with a system command. The file looks as expected, and the regex works up until I reach a "newline". I've tried to get it to grab the other value in multiple ways, but I can't seem to figure it out. Where am I going wrong?
Here is the code
sub servicechoise2 {
my $sys_com = "Servicestatus.txt";
print "type status you would like to see status of: ";
my $service = <>;
chomp $service;
system( "systemctl status $service > $sys_com" );
open( my $fh2, "<", $sys_com );
my @services;
while ( my $line = <$fh2> ) {
if ( $line =~ /([a-z]+.service)\s-.*(running|dead)/s ) {
my %hash2 = (
"servicename" => $1,
"servicestatus" => $2
);
push( @services, \%hash2 );
}
}
return \@services;
}
and here is the file I'm parsing
sshd.service - OpenSSH server daemon Loaded: loaded (/usr/lib/systemd/system/sshd.service; enabled) Active: active (running) since Fri 2015-08-21 18:20:06 CEST; 1h 32min ago Main PID: 1297 (sshd) CGroup: /system.slice/sshd.service
└─1297 /usr/sbin/sshd -D
Aug 21 18:20:06 Thomas-PC systemd[1]: Started OpenSSH server daemon. Aug 21 18:20:07 Thomas-PC sshd[1297]: Server listening on 0.0.0.0 port
22. Aug 21 18:20:07 Thomas-PC sshd[1297]: Server listening on :: port 22.
cups.service - CUPS Printing Service Loaded: loaded (/usr/lib/systemd/system/cups.service; enabled) Active: active (running) since Fri 2015-08-21 18:20:33 CEST; 1h 32min ago Main PID: 3657 (cupsd) CGroup: /system.slice/cups.service
└─3657 /usr/sbin/cupsd -f
Aug 21 18:20:33 Thomas-PC systemd[1]: Started CUPS Printing Service.
ntpd.service - Network Time Service Loaded: loaded (/usr/lib/systemd/system/ntpd.service; disabled) Active: inactive (dead)
named.service - Berkeley Internet Name Domain (DNS) Loaded: loaded (/usr/lib/systemd/system/named.service; enabled) Active: active (running) since Fri 2015-08-21 18:20:10 CEST; 1h 32min ago Process: 2477 ExecStart=/usr/sbin/named -u named $OPTIONS (code=exited, status=0/SUCCESS) Process: 1302 ExecStartPre=/usr/sbin/named-checkconf -z /etc/named.conf (code=exited, status=0/SUCCESS) Main PID: 2502 (named) CGroup: /system.slice/named.service
└─2502 /usr/sbin/named -u named
Aug 21 19:20:11 Thomas-PC named[2502]: error (network unreachable) resolving 'pdns196.ultradns.biz/A/IN': 2001:503:7bbb:ffff:ffff:ffff:ffff:ff7e#53 Aug 21 19:20:11 Thomas-PC named[2502]: error (network unreachable) resolving 'pdns196.ultradns.biz/AAAA/IN': 2001:503:7bbb:ffff:ffff:ffff:ffff:ff7e#53 Aug 21 19:20:11 Thomas-PC named[2502]: error (network unreachable) resolving 'pdns196.ultradns.biz/A/IN': 2001:500:3682::12#53 Aug 21 19:20:11 Thomas-PC named[2502]: error (network unreachable) resolving 'pdns196.ultradns.biz/AAAA/IN': 2001:500:3682::12#53 Aug 21 19:20:11 Thomas-PC named[2502]: error (network unreachable) resolving 'ns2.isc.ultradns.net/A/IN': 2001:502:4612::e8#53 Aug 21 19:20:11 Thomas-PC named[2502]: error (network unreachable) resolving 'pdns196.ultradns.com/AAAA/IN': 2001:502:f3ff::e8#53 Aug 21 19:20:11 Thomas-PC named[2502]: error (network unreachable) resolving 'pdns196.ultradns.com/AAAA/IN': 2610:a1:1016::e8#53 Aug 21 19:20:11 Thomas-PC named[2502]: error (network unreachable) resolving 'pdns196.ultradns.co.uk/AAAA/IN': 2610:a1:1017::e8#53 Aug 21 19:20:11 Thomas-PC named[2502]: error (network unreachable) resolving 'pdns196.ultradns.co.uk/A/IN': 2610:a1:1017::e8#53 Aug 21 19:20:11 Thomas-PC named[2502]: error (network unreachable) resolving 'pdns196.ultradns.biz/A/IN': 2610:a1:1015::e8#53
postfix.service - Postfix Mail Transport Agent Loaded: loaded (/usr/lib/systemd/system/postfix.service; enabled) Active: active (running) since Fri 2015-08-21 18:20:10 CEST; 1h 32min ago Process: 1335 ExecStart=/usr/sbin/postfix start (code=exited, status=0/SUCCESS) Process: 1328 ExecStartPre=/usr/libexec/postfix/chroot-update (code=exited, status=0/SUCCESS) Process: 1298 ExecStartPre=/usr/libexec/postfix/aliasesdb (code=exited, status=0/SUCCESS) Main PID: 2531 (master) CGroup: /system.slice/postfix.service
├─2531 /usr/libexec/postfix/master -w
├─2534 pickup -l -t unix -u
└─2535 qmgr -l -t unix -u
Aug 21 18:20:06 Thomas-PC systemd[1]: Starting Postfix Mail Transport Agent... Aug 21 18:20:09 Thomas-PC postfix/postfix-script[2510]: warning: group or other writable: /etc/postfix/./main.cf Aug 21 18:20:10 Thomas-PC postfix/postfix-script[2529]: starting the Postfix mail system Aug 21 18:20:10 Thomas-PC postfix/master[2531]: daemon started -- version 2.10.1, configuration /etc/postfix Aug 21 18:20:10 Thomas-PC systemd[1]: Started Postfix Mail Transport Agent. Aug 21 18:23:08 Thomas-PC postfix/smtpd[4293]: connect from localhost[127.0.0.1] Aug 21 18:23:08 Thomas-PC postfix/smtpd[4293]: NOQUEUE: reject: RCPT from localhost[127.0.0.1]: 550 5.1.1 <a14thona@localhost>: Recipient address rejected: User unknown in local recipient table; from=<admin@localhost> to=<a14thona@localhost> proto=ESMTP helo=<localhost.localdomain> Aug 21 18:23:08 Thomas-PC postfix/smtpd[4293]: lost connection after RCPT from localhost[127.0.0.1] Aug 21 18:23:08 Thomas-PC postfix/smtpd[4293]: disconnect from localhost[127.0.0.1]
the subroutine returns this array of hashes
[
{ servicename => "sshd.service", servicestatus => "running" },
{ servicename => "cups.service", servicestatus => "running" },
{ servicename => "ntpd.service", servicestatus => "dead" },
{ servicename => "named.service", servicestatus => "running" },
{ servicename => "postfix.service", servicestatus => "running" },
]
I would try to read the whole output into a variable and then process the blocks using split (it seems there are empty lines between blocks), something like:
open( my $fh, "<", $sys_com ) or die "cannot open $sys_com: $!";
my $in;
{ local $/; $in = <$fh>; }    # slurp the whole file
foreach my $line ( split /\n\n/, $in ) {
    if ( $line =~ /([a-z]+\.service)\s-.*(running|dead)/s ) {
        ......
    }
}
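Setting $/ to undef inside the block makes the readline operator return the entire file in one read; splitting on /\n\n/ then yields one block per service, so the /s modifier lets .* match across the remaining newlines inside each block.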