OpenSUSE service: where does the log go? - opensuse

I am trying to set up an application as a service on OpenSUSE LEAP 15.
Googling around I found that one does that (or should I say "one can do that"?) by providing a file <servicename>.service in /usr/lib/systemd/system/ and then one enables that service using YaST.
I provided such a file (copying from a tomcat.service file already on the machine and replacing the misc. entries with values relevant for my application).
The setup using YaST seemed to work OK: the service was listed and I enabled it. But now I have an issue: when I start the application using service <servicename> start the startup fails. Using service <servicename> status I see the last 10 lines of some log, which read:
Jun 08 14:41:04 test-vm ctlscript.sh[31955]: at java.lang.reflect.Method.invoke(Method.java:498)
Jun 08 14:41:04 test-vm ctlscript.sh[31955]: at org.apache.catalina.startup.Bootstrap.stopServer(Bootstrap.java:371)
Jun 08 14:41:04 test-vm ctlscript.sh[31955]: at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:458)
...
This is the tail of some Java stacktrace, so obviously there is some exception while starting up.
But to figure out what is going wrong I would need to see more of that log. So where is this service command logging to? I.e., which logfile does the above output of the service ... status command come from?

Ah - I finally found the solution: one can use journalctl -u <servicename> to see all log entries of a specific service. While that doesn't answer the original question literally, it serves its underlying purpose, namely seeing the full log for a specific service.
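For anyone landing here, a few journalctl invocations I found useful (assuming a systemd-based system such as Leap 15; substitute your actual unit name):

```shell
# Full log of one unit, oldest entries first
journalctl -u myservice

# Jump straight to the end (most recent entries)
journalctl -u myservice -e

# Follow the log live while restarting the service in another terminal
journalctl -u myservice -f

# Only entries since the current boot
journalctl -u myservice -b
```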

Related

Attempting to put a CloudWatch Agent onto an On-Premise server. Issues with cwagent-otel-collector

As said in the title, I am attempting to put a CloudWatch Agent (CW agent) on my On-Premise-Server (OPS).
After running this line of code that I got from the AWS User Guide to start the CW agent:
& $Env:ProgramFiles\Amazon\AmazonCloudWatchAgent\amazon-cloudwatch-agent-ctl.ps1 -m ec2 -a start
I got this error:
****** processing cwagent-otel-collector ******
cwagent-otel-collector will not be started as it has not been configured yet.
****** processing amazon-cloudwatch-agent ******
AmazonCloudWatchAgent has been started
I did not know what this was, so I searched and found that when someone else had this issue, they had not created a config file.
I did create a config file (named config.json by default) using the configuration wizard and I am still having the issue.
I have tried looking into a number of pages on that user guide, but nothing has resolved the issue.
Thank you in advance for any assistance you can provide.
This message is info and not an error.
CloudWatch agent is bundled with the AWS OpenTelemetry collector agent. They're actually two agents. CloudWatch agent and Otel collector have separate configuration files. If you provide a config for one and not the other, it will only start the one that is configured. This is expected behavior.
Thank you for taking the time to answer. I have since resolved the issue (recently).
Everything from the command I was using to the path where the file resided was incorrect.
Starting over and going through all the steps again with background information helped.
The first installation combined with learning everything for the first time produced the issue.
To anyone having this issue: when you hit a wall like this, I recommend starting over. I know it is not what anyone wants to do, but in the end it saved time.
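For reference, the step that is easy to miss is loading the wizard-generated config into the agent with the right mode for an on-premises host. This is the Linux form of the command (a sketch; the asker's PowerShell ctl script accepts the same -a/-m/-c/-s flags, and the config path assumes the wizard's default output location):

```shell
# Load the wizard-generated config and (re)start the CloudWatch agent.
# -m onPremise is the mode for a non-EC2 host (the original attempt used -m ec2).
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
    -a fetch-config -m onPremise \
    -c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json -s
```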

NetworkManager.conf reset to "404: NotFound" every time service is restarted

I encountered a problem while working with the NetworkManager service on my Raspberry Pi 4. I am running it with openSUSE Tumbleweed JeOS, so the default is not NetworkManager but wicked. I started to notice that every time I tried to restart it, the service failed.
Watching through the logs i noticed this:
Oct 31 16:49:37 suse-server NetworkManager[2966]: <warn> [1635698977.7227] config: Default config file invalid: /etc/NetworkManager/NetworkManager.conf: Key file contains line “404: Not Found” which is not a key-value pair, group, or comment
Basically, every time NetworkManager is restarted something overwrites the config file, probably with a predefined template from GitHub (when you open an invalid link on raw.github.com you get "404: Not Found"). I temporarily switched back to wicked, but I need to resolve this problem because I need to run HASS.IO, which unfortunately supports only NetworkManager.
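One way to catch the culprit (a sketch, assuming the Linux audit framework is installed and running) is to put a write watch on the config file and see which process touches it on the next restart:

```shell
# Watch the config file for writes and attribute changes, tagged "nmconf"
sudo auditctl -w /etc/NetworkManager/NetworkManager.conf -p wa -k nmconf

# Trigger the overwrite, then list the processes that wrote the file
sudo systemctl restart NetworkManager
sudo ausearch -k nmconf
```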

Custom metrics (Disk-space-utilization) is missing on the AWS console

I want to create a CloudWatch alarm for disk-space utilization.
I've followed the AWS doc below:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/mon-scripts.html
It is creating the cron on my instance and I've checked my system log as well.
Sep 22 12:20:01 ip-#### CRON[13921]: (ubuntu) CMD
(~/volumeAlarm/aws-scripts-mon/mon-put-instance-data.pl
--disk-space-util --disk-space-avail --disk-space-used --disk-path=/ --from-cron)
Sep 22 12:20:13 ip-#### CRON[13920]: (ubuntu) MAIL (mailed 1 byte of output; but got status 0x004b, #012)
also manually running the command,
./mon-put-instance-data.pl --disk-space-util --disk-space-avail
--disk-space-used --disk-path=/
shows the result,
print() on closed filehandle MDATA at CloudWatchClient.pm line 167.
Successfully reported metrics to CloudWatch. Reference Id:####
But there are no metrics in the AWS console, so I cannot set the alarm.
Please help if you have solved this problem.
The CloudWatch scripts get the instance's metadata and write it to a local file, /var/tmp/aws-mon/instance-id. If that file or folder has incorrect permissions so that the script cannot write to it, the script may throw an error like "print() on closed filehandle MDATA at CloudWatchClient.pm line 167". A possible scenario: the root user executed the mon-get-instance-stats.pl or mon-put-instance-data.pl scripts initially, and the scripts generated the file/folder in place; then a different user executed the CloudWatch scripts again and this error showed up. To fix this, remove the folder /var/tmp/aws-mon/ and re-execute the CloudWatch scripts to re-generate the folder and files.
This is the answer I got from AWS support when I had the same issue; maybe it will help you too. Also check your AWSAccessKey for the EC2 instance as well.
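The support answer above boils down to a couple of commands (a sketch; paths and flags as in the question, run as the same user that the cron job uses):

```shell
# Remove the metadata cache that was created by a different user
sudo rm -rf /var/tmp/aws-mon

# Re-run the script as the cron user; --verify does a dry run and
# --verbose prints what would be sent, which helps confirm the fix
./mon-put-instance-data.pl --disk-space-util --disk-space-avail \
    --disk-space-used --disk-path=/ --verify --verbose
```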

Vora Manager 1.3 log rotation

Is there any log rotation in Vora 1.3? After 2 months of running Vora 1.3 I realized I'm almost out of disk space on my nodes because /var/log/vora-manager is about 46 GB. So I had to stop it, delete the logs, and restart.
But maybe I missed some setting?
Edit 1: The log file is supposed to be stored in /var/log/vora/vora-manager, not the folder I mentioned above, but I still saw a huge log file there. The path /var/log/vora-manager is also mentioned in line 178 of the control.py script that is supposed to start a vora-manager worker.
You are right -- the vora-manager log file is not written into the standard /var/log/vora directory, instead it is written to /var/log/vora-manager. This has been corrected in Vora 1.4.
The logs should be rotating based on the vora_manager_log_max_file_size variable which is also set in Ambari.
Something must be going wrong when Vora tries to rotate the logs. I propose you search your log file for the following line and see if it is followed by some kind of error:
vora.vora-manager-master: [c.740b0d26] : Running['sudo', '-i', '-u',
'root', '/usr/sbin/logrotate', '/etc/logrotate.d/vora-manager-master']
You can also change the verbosity of the logger by setting the vora_manager_log_level config variable in Ambari from INFO to WARNING. Be aware that this will hide the log rotation log messages.
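For reference, /etc/logrotate.d/vora-manager-master is a plain logrotate config. A minimal sketch of what such a file typically looks like (hypothetical values; the size threshold would come from vora_manager_log_max_file_size, and the exact generated contents may differ):

```
/var/log/vora-manager/*.log {
    size 100M
    rotate 5
    compress
    missingok
    notifempty
}
```

If logrotate fails silently, running it by hand with sudo /usr/sbin/logrotate -d /etc/logrotate.d/vora-manager-master (debug mode, no changes made) will usually show why.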

schedule task not working, log file with no errors

I have a simple scheduled task that sends an email for testing.
It's not working at all. Looking at the logs:
scheduler.log
Jul 8, 2016 1:20 PM Information scheduler-1
[test] Executing at Fri Jul 08 13:20:00 PDT 2016
It shows that it has run, and I also think it's not running other tasks.
Looking at the application log I also see no errors.
Is there any other place I should be looking at?
The log you show above indicates whether the scheduler ran as expected. It does not indicate whether the page you ran was successful.
To find out what happened with the page, go to the scheduled task editor and enable "Save output to a file".
Then specify a file name. Depending on the nature of the scheduled task, you may want to publish it to a shared directory, or keep it hidden away.
Make sure to choose "Overwrite" so that you always get the latest result of your scheduled task.