Systemd: Messages are logged at the end of script execution - Django

I have a Django site with a custom management command that syncs data from another system to the database every 5 minutes. The script for the command contains several log messages. When I execute the command manually, everything works fine and each log message is written to stdout/stderr at the moment it happens. No problem here.
To run the command every 5 minutes, I set up a systemd service and timer, and it works as it should, with one minor issue. All messages from the script are logged by systemd at the time the script execution ended, not at the time they actually happened. The script usually runs for about a minute, and log messages are written sporadically as each subtask in the script finishes. In my case, systemd logged all messages as if they happened at the same time, more precisely at the end of execution.
So the log looks something like this; pay attention to the timestamps of the messages:
Jul 16 09:20:01 SmallServer systemd[1]: Started DjngoSite Sync daemon.
Jul 16 09:20:40 SmallServer python[21265]: Task 1 completed
Jul 16 09:20:40 SmallServer python[21265]: Task 2 completed
Jul 16 09:20:40 SmallServer python[21265]: Task 3 completed
Jul 16 09:20:40 SmallServer python[21265]: Task 4 completed
Jul 16 09:20:40 SmallServer python[21265]: Sync ended
But I want it to look like this:
Jul 16 09:20:01 SmallServer systemd[1]: Started DjngoSite Sync daemon.
Jul 16 09:20:11 SmallServer python[21265]: Task 1 completed
Jul 16 09:20:15 SmallServer python[21265]: Task 2 completed
Jul 16 09:20:22 SmallServer python[21265]: Task 3 completed
Jul 16 09:20:39 SmallServer python[21265]: Task 4 completed
Jul 16 09:20:40 SmallServer python[21265]: Sync ended
I cannot figure out whether this is an issue with systemd or with Django. I am writing messages to stdout as shown in the documentation.

Python is buffering what is written to stdout. What helps is to set the environment variable PYTHONUNBUFFERED=1. See:
Environment variable PYTHONUNBUFFERED
Python -u flag
Note that since Python 3.7, the -u flag (and therefore PYTHONUNBUFFERED) also makes the text layer of the stdout and stderr streams unbuffered; before that it affected only the binary layer.
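A minimal sketch of how the variable can be set in the service unit (the unit name, paths, and command name below are illustrative assumptions, not taken from the question):

[Unit]
Description=DjangoSite Sync daemon

[Service]
Type=oneshot
# Disable Python's stdout buffering so each log line reaches the
# journal when it is printed, not when the script exits.
Environment=PYTHONUNBUFFERED=1
ExecStart=/usr/bin/python /srv/site/manage.py sync_data

Passing -u in ExecStart (ExecStart=/usr/bin/python -u ...) or calling print(..., flush=True) inside the management command has the same effect.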

Related

Tomcat Server Loaded error

I tried to install and start Tomcat on my AWS EC2 instance.
However, I failed to run the Tomcat server after installing it.
I followed this article:
https://blog.devops4me.com/aws-tutorial-how-to-install-tomcat-in-aws-ec2-install/
After finishing step 6 and reloading the Tomcat server, I tried to restart the Tomcat service as in step 7. However, my terminal showed the following:
Feb 14 20:17:07 ip-172-31-54-104.ec2.internal systemd[1]: tomcat.service: control process exited, code=exited status=203
Feb 14 20:17:07 ip-172-31-54-104.ec2.internal systemd[1]: Failed to start Tomcat Server.
Feb 14 20:17:07 ip-172-31-54-104.ec2.internal systemd[1]: Unit tomcat.service entered failed state.
Feb 14 20:17:07 ip-172-31-54-104.ec2.internal systemd[1]: tomcat.service failed.
Feb 14 20:17:10 ip-172-31-54-104.ec2.internal systemd[1]: [/etc/systemd/system/tomcat.service:22] Missing '='.
Feb 14 20:17:22 ip-172-31-54-104.ec2.internal systemd[1]: tomcat.service holdoff time over, scheduling restart.
Feb 14 20:17:22 ip-172-31-54-104.ec2.internal systemd[1]: tomcat.service failed to schedule restart job: Unit is not loaded properly: Bad message.
Feb 14 20:17:22 ip-172-31-54-104.ec2.internal systemd[1]: Unit tomcat.service entered failed state.
Feb 14 20:17:22 ip-172-31-54-104.ec2.internal systemd[1]: tomcat.service failed.
I don't really understand how this works. I tried to search for the error on the internet, but I still don't have any clue how to solve this problem. If anyone can give me a hint, that would be great, thank you!
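The log itself points at the likely cause: line 22 of /etc/systemd/system/tomcat.service is missing an '=', so systemd cannot parse the unit at all ("Unit is not loaded properly: Bad message"). Every directive in a unit file must have the form Key=Value. A correctly formed [Service] section looks roughly like the sketch below (the paths are assumptions from a typical Tomcat layout, not from the question):

[Service]
Type=forking
# Every line needs KEY=VALUE; a line like "Environment JAVA_HOME ..."
# without the '=' produces exactly the "Missing '='" error shown above.
Environment=JAVA_HOME=/usr/lib/jvm/jre
Environment=CATALINA_HOME=/opt/tomcat
ExecStart=/opt/tomcat/bin/startup.sh
ExecStop=/opt/tomcat/bin/shutdown.sh
User=tomcat
Group=tomcat

Separately, status=203 means systemd could not execute the ExecStart binary (wrong path or missing execute permission). After editing the unit, run sudo systemctl daemon-reload before restarting.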

How to get Jenkins to be accessible from an AWS EC2 instance

So this is the problem: I have installed OpenJDK 8 for Jenkins. Jenkins is installed and running, given:
● jenkins.service - LSB: Start Jenkins at boot time
Loaded: loaded (/etc/init.d/jenkins; generated)
Active: active (exited) since Thu 2021-10-21 19:22:55 UTC; 20min ago
Docs: man:systemd-sysv-generator(8)
Process: 437 ExecStart=/etc/init.d/jenkins start (code=exited, status=0/SUCCESS)
Oct 21 19:22:52 ip-172-31-30-187 systemd[1]: Starting LSB: Start Jenkins at boot time...
Oct 21 19:22:53 ip-172-31-30-187 jenkins[437]: Correct java version found
Oct 21 19:22:53 ip-172-31-30-187 jenkins[437]: * Starting Jenkins Automation Server jenkins
Oct 21 19:22:54 ip-172-31-30-187 su[619]: (to jenkins) root on none
Oct 21 19:22:54 ip-172-31-30-187 su[619]: pam_unix(su-l:session): session opened for user jenkins by (u>
Oct 21 19:22:54 ip-172-31-30-187 su[619]: pam_unix(su-l:session): session closed for user jenkins
Oct 21 19:22:55 ip-172-31-30-187 jenkins[437]: ...done.
Oct 21 19:22:55 ip-172-31-30-187 systemd[1]: Started LSB: Start Jenkins at boot time.
However, browsing to serverip:8080 brings up nothing.
I used this tutorial: https://www.youtube.com/watch?v=B6K1IF-489M&t=36s
Port 8080 is also added to the security group.
This problem was not solved directly, but creating a fresh EC2 instance and installing Jenkins by following that tutorial did the trick.
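For anyone debugging the same symptom before rebuilding the instance, a few standard checks (assuming a default Jenkins install on port 8080; the log path is the usual Debian/Ubuntu location):

# Is anything actually listening on 8080?
sudo ss -tlnp | grep 8080
# Does Jenkins answer locally? If this works but the browser does not,
# the problem is the security group or firewall, not Jenkins.
curl -I http://localhost:8080
# Jenkins' own log usually explains a failed or half-finished startup.
sudo tail -n 50 /var/log/jenkins/jenkins.log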

Not able to save crash dump using kdump

I have a VPS server on amazon's AWS Lightsail service.
I've been testing kdump using the following two commands (to trigger an automatic kernel crash):
# echo 1 > /proc/sys/kernel/sysrq
# echo c > /proc/sysrq-trigger
The problem is that the system crashed and rebooted, but no dump was saved.
Here are the checks I've done:
[centos@server crash]$ systemctl status kdump
● kdump.service - Crash recovery kernel arming
Loaded: loaded (/usr/lib/systemd/system/kdump.service; enabled; vendor preset: enabled)
Active: active (exited) since Mon 2019-03-18 07:43:34 UTC; 5 days ago
Process: 4119 ExecStart=/usr/bin/kdumpctl start (code=exited, status=0/SUCCESS)
Main PID: 4119 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/kdump.service
Mar 18 07:43:32 ip-.ap-northeast-1.compute.internal systemd[1]: Starting Crash recovery kernel arming...
Mar 18 07:43:34 ip-.ap-northeast-1.compute.internal kdumpctl[4119]: kexec: loaded kdump kernel
Mar 18 07:43:34 ip-.ap-northeast-1.compute.internal kdumpctl[4119]: Starting kdump: [OK]
Mar 18 07:43:34 ip-.ap-northeast-1.compute.internal systemd[1]: Started Crash recovery kernel arming.
[centos@server crash]$ dmesg | grep Reserving
[ 0.000000] Reserving 256MB of memory at 368MB for crashkernel (System RAM: 2047MB)
[centos@server crash]$ cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-3.10.0-957.1.3.el7.x86_64 root=UUID=f41e390f-835b-4223-a9bb-9b45984ddf8d ro console=tty0 crashkernel=256M console=ttyS0,115200
[centos@server crash]$ grep -v ^# /etc/kdump.conf
path /var/crash
core_collector makedumpfile -l --message-level 1 -d 31
default reboot
There's no log of the crash in /var/log/messages indicating any error there might be. So I wonder what I might have missed. Or is an AWS Lightsail VPS not capable of saving a kdump at all?
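Two quick checks that can narrow this down (the kdumpctl subcommand is from CentOS 7's kexec-tools; whether Lightsail's virtualization layer supports kexec-ing into a crash kernel at all is a separate question):

# 1 means a crash kernel is actually loaded and armed, 0 means it is not.
cat /sys/kernel/kexec_crash_loaded
# Reports whether the kdump service considers itself operational.
sudo kdumpctl status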

Failed to start Redis In-Memory Data Store. Ubuntu 18.04

I am trying to install Redis on my AWS server. I have Ubuntu 18.04 installed on it. I am following the steps to install Redis from a DigitalOcean article.
When I run the sudo systemctl status redis command, I get the error below.
screenshot
I tried to edit the /etc/systemd/system/redis.service file and added Type=forking under the [Service] section, but I am still getting the same error.
Can anyone suggest how I can get it fixed?
Thanks in advance.
Based on the same DigitalOcean tutorial, it's actually running fine.
Run this command, sudo systemctl restart redis.service, and we get (showing "failed" on the last line):
● redis.service - Redis In-Memory Data Store
Loaded: loaded (/etc/systemd/system/redis.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Mon 2021-06-28 12:03:11 +03; 1min 0s ago
Process: 20428 ExecStart=/usr/local/bin/redis-server /etc/redis/redis.conf (code=exited, status=
Main PID: 20428 (code=exited, status=203/EXEC)
Jun 28 12:03:11 XYZ systemd[1]: redis.service: Service hold-off time over, scheduling restar
Jun 28 12:03:11 XYZ systemd[1]: redis.service: Scheduled restart job, restart counter is at
Jun 28 12:03:11 XYZ systemd[1]: Stopped Redis In-Memory Data Store.
Jun 28 12:03:11 XYZ systemd[1]: redis.service: Start request repeated too quickly.
Jun 28 12:03:11 XYZ systemd[1]: redis.service: Failed with result 'exit-code'.
Jun 28 12:03:11 XYZ systemd[1]: Failed to start Redis In-Memory Data Store.
But if you run sudo service redis-server status, we get (showing "running" on the 3rd line):
● redis-server.service - Advanced key-value store
Loaded: loaded (/lib/systemd/system/redis-server.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2021-06-28 11:50:13 +03; 19min ago
Docs: http://redis.io/documentation,
man:redis-server(1)
Process: 19278 ExecStop=/bin/kill -s TERM $MAINPID (code=exited, status=0/SUCCESS)
Process: 19371 ExecStart=/usr/bin/redis-server /etc/redis/redis.conf (code=exited, status=0/SUCC
Main PID: 19382 (redis-server)
Tasks: 4 (limit: 4915)
CGroup: /system.slice/redis-server.service
└─19382 /usr/bin/redis-server 127.0.0.1:6379
Jun 28 11:50:13 XYZ systemd[1]: Starting Advanced key-value store...
Jun 28 11:50:13 XYZ systemd[1]: redis-server.service: Can't open PID file /var/run/redis/red
Jun 28 11:50:13 XYZ systemd[1]: Started Advanced key-value store.
After searching for hours, it seems like it's just a difference between systemctl and service and nothing more; the actual Redis server is running fine. Correct me if that's not the case. Here's the link: https://askubuntu.com/questions/903354/difference-between-systemctl-and-service-commands
You can even check whether Redis is working by running redis-cli ping, which should print PONG.
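Worth noting: in the failing unit, ExecStart points at /usr/local/bin/redis-server, while the working redis-server.service uses /usr/bin/redis-server, and status=203/EXEC is systemd's code for failing to execute the binary. A quick way to check which path actually exists:

# Show where the redis-server binary really lives.
command -v redis-server
ls -l /usr/local/bin/redis-server /usr/bin/redis-server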
I also encountered this problem, so I went through it again.
Finally, I found that when I had set permissions on /var/lib/redis, I had entered the wrong command, leaving the redis account with no access to /var/lib/redis. The fix:
sudo chown redis:redis /var/lib/redis
sudo systemctl restart redis
After that, the restart succeeded.
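To verify the fix (and to see the real startup error in the first place), checking ownership and the unit's recent journal lines helps:

# Ownership should show redis:redis after the chown above.
ls -ld /var/lib/redis
# The last journal lines for the unit usually name the exact failure.
sudo journalctl -u redis.service -n 20 --no-pager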

rrdcached fails to stop

I am using RRDtool version rrdtool-1.4.7-1.
In the normal case, if all RRD data has been flushed, the "clean shutdown; all RRDs flushed" message appears in the syslog.
The messages below are shown when a shutdown is performed:
Mar 21 10:36:28 rrdcached[52232]: caught SIGTERM
Mar 21 10:36:28 rrdcached[52232]: starting shutdown
Mar 21 10:36:29 rrdcached[52232]: clean shutdown; all RRDs flushed
Mar 21 10:36:29 rrdcached[52232]: removing journals
Mar 21 10:36:29 rrdcached[52232]: goodbye
But in the problematic case, only the messages below are displayed and rrdcached fails to stop:
Mar 21 10:36:28 rrdcached[52232]: caught SIGTERM
Mar 21 10:36:28 rrdcached[52232]: starting shutdown
I have also checked the top output, and flushing is still in progress on the directory.
Please let me know in which cases the rrdcached daemon can fail to stop.
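One way to see what the daemon is still doing is to query its stats over the control socket (the socket path below is an assumption; use whatever -l/-L address your rrdcached was started with):

# QueueLength and the journal counters show whether a flush is still draining.
echo STATS | socat - UNIX-CONNECT:/var/run/rrdcached.sock
# Flush a specific RRD's pending updates to disk before retrying the shutdown.
rrdtool flushcached --daemon unix:/var/run/rrdcached.sock /path/to/file.rrd

A shutdown that hangs after "starting shutdown" usually means the flush queue is still being written out, so a very large queue or a slow disk can make the stop appear to fail.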