Bitnami Redmine 4.1.1-5 login with default credentials fails - redmine

I've installed a brand new Bitnami Redmine from this image under VirtualBox
bitnami-redmine-4.1.1-5-linux-debian-10-x86_64.ova
It works, I can access the web homepage, but I am unable to log in with the default credentials. I printed my credentials with
cat /home/bitnami/bitnami_credentials
which shows me what I need
Welcome to the Bitnami Redmine Stack
******************************************************************************
The default username and password is 'user' and '*******'.
******************************************************************************
You can also use this password to access the databases and any other component the stack includes.
Please refer to https://docs.bitnami.com/ for more details.
I copy/paste the values into the login fields, but the login always fails. What am I doing wrong?
Additional info
My pre-start.log:
sudo cat /opt/bitnami/var/log/pre-start.log
## 2020-10-28 13:33:50+00:00 ## INFO ## Running /opt/bitnami/var/init/pre-start/010_set_system_user_password...
## 2020-10-28 13:33:50+00:00 ## INFO ## Running /opt/bitnami/var/init/pre-start/020_hostname...
## 2020-10-28 13:33:50+00:00 ## INFO ## Running /opt/bitnami/var/init/pre-start/030_swap_file...
650000+0 records in
650000+0 records out
665600000 bytes (666 MB, 635 MiB) copied, 1.62172 s, 410 MB/s
Setting up swapspace version 1, size = 634.8 MiB (665595904 bytes)
no label, UUID=b4ec7182-58ad-428a-a447-414acfb3154e
## 2020-10-28 13:34:06+00:00 ## INFO ## Running /opt/bitnami/var/init/pre-start/040_stack_etc...
## 2020-10-28 13:34:06+00:00 ## INFO ## Running /opt/bitnami/var/init/pre-start/050_clean_pids...
## 2020-10-28 13:34:07+00:00 ## INFO ## Running /opt/bitnami/var/init/pre-start/060_check_if_demo_machine...
## 2020-10-28 13:34:07+00:00 ## INFO ## Running /opt/bitnami/var/init/pre-start/070_change_boot_log_permissions...
## 2020-10-28 13:34:07+00:00 ## INFO ## Running /opt/bitnami/var/init/pre-start/080_prevent_incoming_connections...
## 2020-10-28 13:34:09+00:00 ## INFO ## 80 has been blocked
## 2020-10-28 13:34:11+00:00 ## INFO ## 443 has been blocked
## 2020-10-28 13:34:11+00:00 ## INFO ## Running /opt/bitnami/var/init/pre-start/090_get_default_passwords...
## 2020-10-28 13:34:11+00:00 ## INFO ## Running /opt/bitnami/var/init/pre-start/100_regenerate_keys...
## 2020-10-28 13:34:11+00:00 ## INFO ## Running /opt/bitnami/var/init/pre-start/110_configure_default_passwords...
## 2020-10-28 13:34:11+00:00 ## INFO ## Running /opt/bitnami/var/init/pre-start/120_reenable_incoming_connections...
## 2020-10-28 13:34:11+00:00 ## INFO ## Running /opt/bitnami/var/init/pre-start/130_resize_fs_partition...
## 2020-10-28 13:34:11+00:00 ## INFO ## Running /opt/bitnami/var/init/pre-start/140_recreate_ssh_host_keys...
## 2020-10-28 13:34:11+00:00 ## INFO ## Running /opt/bitnami/var/init/pre-start/150_welcome_message...
## 2020-10-28 13:34:11+00:00 ## INFO ## Running /opt/bitnami/var/init/pre-start/160_welcome_message_reload...
## 2020-10-28 13:34:11+00:00 ## INFO ## Running /opt/bitnami/var/init/pre-start/170_enable_ufw...
My post-start.log:
## 2020-10-28 13:34:39+00:00 ## INFO ## Running /opt/bitnami/var/init/post-start/010_bitnami_agent_extra...
## 2020-10-28 13:34:39+00:00 ## INFO ## Running /opt/bitnami/var/init/post-start/020_bitnami_agent...
## 2020-10-28 13:34:39+00:00 ## INFO ## Running /opt/bitnami/var/init/post-start/030_update_ip...
## 2020-10-28 13:34:47+00:00 ## INFO ## Running /opt/bitnami/var/init/post-start/040_update_welcome_file...
## 2020-10-28 13:34:47+00:00 ## INFO ## Running /opt/bitnami/var/init/post-start/050_bitnami_credentials_file...
## 2020-10-28 13:34:47+00:00 ## INFO ## Running /opt/bitnami/var/init/post-start/060_start_gonit...
Starting gonit daemon
## 2020-10-28 13:34:47+00:00 ## INFO ## Running /opt/bitnami/var/init/post-start/070_clean_metadata...

I wanted an instant solution to easily create a fully functional ticketing environment, but this Bitnami image couldn't give it to me.
I finally gave up and just downloaded the latest TurnKey Redmine image, which is the same environment plus a bit more. It worked on the first try.
I know it's not the answer to my question, but it is a working solution.
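For anyone who wants to keep the Bitnami image instead: when the generated password is not accepted, one common workaround is to set a known password from the Rails console. Everything below is a sketch; the application path and the 'user' login are assumptions based on Bitnami's usual stack layout, not verified against this exact image.

```shell
# Assumed Bitnami layout; verify the path on your VM before running.
REDMINE_DIR=/opt/bitnami/apps/redmine/htdocs
LOGIN=user
# From that directory, a one-liner like this resets the account's password
# (Redmine's User model exposes find_by_login):
#   cd "$REDMINE_DIR"
#   RAILS_ENV=production bundle exec rails runner \
#     'u = User.find_by_login("user"); u.password = u.password_confirmation = "NewPass123!"; u.save!'
echo "reset sketch targets login '$LOGIN' under $REDMINE_DIR"
```

After a successful save, logging in with the new password should work immediately; no service restart is needed.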

Related

Failed to determine the health of the cluster when initial password setting in ElasticSearch

I tried to install Elasticsearch on an AWS EC2 instance.
I tried to set the initial password of Elasticsearch with the following command and got this error message:
/usr/share/elasticsearch/bin
$ ./elasticsearch-reset-password -u elasticsearch
ERROR: Failed to determine the health of the cluster.
Here is my elasticsearch.yml file
======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: aaaaaaa
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
##node.name: ${HOSTNAME}
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
# Path to log files:
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
#network.bind_host: 0.0.0.0
#network.publish_host: ${HOSTNAME}
network.host: ["0.0.0.0"]
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: - ${HOSTNAME}
discovery.type: single-node
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: - ${HOSTNAME}
#
# For more information, consult the discovery and cluster formation module documentation.
#
# --------------------------------- Readiness ----------------------------------
#
# Enable an unauthenticated TCP readiness endpoint on localhost
#
#readiness.port: 9399
#
# ---------------------------------- Various -----------------------------------
#
# Allow wildcard deletion of indices:
#
#action.destructive_requires_name: false
#----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------
#
# The following settings, TLS certificates, and keys have been automatically
# generated to configure Elasticsearch security features on 22-10-2022 08:48:06
#
# --------------------------------------------------------------------------------
# Enable security features
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
# Create a new cluster with the current node only
# Additional nodes can still join the cluster later
# cluster.initial_master_nodes: ["aaaaaaa"]
# Allow HTTP API connections from anywhere
# Connections are encrypted and require user authentication
http.host: 0.0.0.0
# Allow other nodes to join the cluster from anywhere
# Connections are encrypted and mutually authenticated
#transport.host: 0.0.0.0
#----------------------- END SECURITY AUTO CONFIGURATION -------------------------
This error means that your cluster is not ready yet when you try to change the cluster password.
First, configure your cluster, ensure that it is healthy, and then change the password.
The default username and password for Elasticsearch is "elastic" and "changeme".
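To make the "check health first" step concrete, here is a sketch; the host, port, and certificate flag are assumptions for a default single-node install, and the password is a placeholder. Note that the built-in superuser account is named elastic, not elasticsearch.

```shell
# Assumed defaults for a fresh single-node install; adjust as needed.
ES_HOST="https://localhost:9200"
ES_PASS="your-password-here"   # placeholder, not a real credential
# 1) Ask the node for cluster health; look for "status" : "green" or "yellow".
#    -k skips verification of the auto-generated self-signed certificate.
curl -k -s -u "elastic:$ES_PASS" "$ES_HOST/_cluster/health?pretty" || true
# 2) Only once the node answers, reset the password for the built-in
#    superuser account, which is "elastic":
#    /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic
echo "checked $ES_HOST"
```

If the health request itself times out or is refused, the reset tool will fail the same way, so fix connectivity (network.host, firewall, service status) before retrying.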

TOC in Rhtml malfunctioning

I am knitting my Rmd file into HTML. Since there is a lot of code in my Rmd file, I did the following so that my readers can navigate to a certain section within the HTML file:
---
title: "xxx"
author: "xxx"
date: "xx/xx/xx"
output:
  html_document:
    toc: true
    toc_float: true
---
However, the TOC in the HTML file is not working correctly.
Is there a way to solve this issue? Thanks a lot!
I realized what was causing the problem. Instead of doing:
# A {.tabset}
## a
## b
### b1
### b2
## c
Doing
# A {.tabset}
## a
## b {.tabset}
### b1
### b2
## c
solves the problem. Having smaller headers inside a tabset kind of messes up the TOC.

log kong api logs to syslog

I use the syslog plugin available with the Kong API gateway, and I have the following:
{
  "api_id": "some_id",
  "id": "some_id",
  "created_at": 4544444,
  "enabled": true,
  "name": "syslog",
  "config": {
    "client_errors_severity": "info",
    "server_errors_severity": "info",
    "successful_severity": "info",
    "log_level": "emerg"
  }
}
I am using CentOS 7, and I have the following conf file (/etc/rsyslog.conf):
# rsyslog configuration file
# For more information see /usr/share/doc/rsyslog-*/rsyslog_conf.html
# If you experience problems, see http://www.rsyslog.com/doc/troubleshoot.html
#### MODULES ####
# The imjournal module bellow is now used as a message source instead of imuxsock.
$ModLoad imuxsock # provides support for local system logging (e.g. via logger command)
$ModLoad imjournal # provides access to the systemd journal
#$ModLoad imklog # reads kernel messages (the same are read from journald)
#$ModLoad immark # provides --MARK-- message capability
# Provides UDP syslog reception
$ModLoad imudp
$UDPServerRun 514
# Provides TCP syslog reception
$ModLoad imtcp
$InputTCPServerRun 514
#### GLOBAL DIRECTIVES ####
# Where to place auxiliary files
$WorkDirectory /var/lib/rsyslog
# Use default timestamp format
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
$template precise,"%syslogpriority%,%syslogfacility%,%timegenerated%,%HOSTNAME%,%syslogtag%,%msg%\n"
$ActionFileDefaultTemplate precise
# File syncing capability is disabled by default. This feature is usually not required,
# not useful and an extreme performance hit
#$ActionFileEnableSync on
# Include all config files in /etc/rsyslog.d/
$IncludeConfig /etc/rsyslog.d/*.conf
# Turn off message reception via local log socket;
# local messages are retrieved through imjournal now.
$OmitLocalLogging on
# File to store the position in the journal
$IMJournalStateFile imjournal.state
#### RULES ####
# Log all kernel messages to the console.
# Logging much else clutters up the screen.
#kern.* /dev/console
# Log anything (except mail) of level info or higher.
# Don't log private authentication messages!
*.info;mail.none;authpriv.none;cron.none /var/log/messages
# The authpriv file has restricted access.
authpriv.* /var/log/secure
# Log all the mail messages in one place.
mail.* -/var/log/maillog
# Log cron stuff
cron.* /var/log/cron
# Everybody gets emergency messages
*.emerg :omusrmsg:*
# Save news errors of level crit and higher in a special file.
uucp,news.crit /var/log/spooler
# Save boot messages also to boot.log
local7.* /var/log/boot.log
# ### begin forwarding rule ###
# The statement between the begin ... end define a SINGLE forwarding
# rule. They belong together, do NOT split them. If you create multiple
# forwarding rules, duplicate the whole block!
# Remote Logging (we use TCP for reliable delivery)
#
# An on-disk queue is created for this action. If the remote host is
# down, messages are spooled to disk and sent when it is up again.
#$ActionQueueFileName fwdRule1 # unique name prefix for spool files
#$ActionQueueMaxDiskSpace 1g # 1gb space limit (use as much as possible)
#$ActionQueueSaveOnShutdown on # save messages to disk on shutdown
#$ActionQueueType LinkedList # run asynchronously
#$ActionResumeRetryCount -1 # infinite retries if host is down
# remote host is: name/ip:port, e.g. 192.168.0.1:514, port optional
#*.* ##remote-host:514
# ### end of the forwarding rule ###
The syslog daemon is running on the CentOS 7 machine, and it is configured with a logging-level severity (info) that is the same as or lower than the set config.log_level (emerg).
I am unable to see any logs in
/var/log/messages
The syslog daemon must be configured with a logging-level severity the same as or lower than the set config.log_level for proper logging.
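The numbers behind that sentence: syslog severities are numeric codes where lower means more severe. The sev_num table below is the standard RFC 5424 mapping; will_log is my own illustration of how I read the plugin's filtering (forward only messages at least as severe as config.log_level), not a Kong function.

```shell
# Standard syslog severity codes (RFC 5424): lower number = more severe.
sev_num() {
  case "$1" in
    emerg)   echo 0 ;;  alert)  echo 1 ;;
    crit)    echo 2 ;;  err)    echo 3 ;;
    warning) echo 4 ;;  notice) echo 5 ;;
    info)    echo 6 ;;  debug)  echo 7 ;;
  esac
}
# Illustration only: a message passes the filter when its numeric code is
# less than or equal to the log_level's code (i.e. at least as severe).
will_log() {
  if [ "$(sev_num "$1")" -le "$(sev_num "$2")" ]; then echo yes; else echo no; fi
}
will_log info emerg    # → no  (info=6 is less severe than emerg=0)
will_log emerg emerg   # → yes
```

With the posted config every message is emitted at severity info but log_level is emerg, so nothing is ever forwarded to rsyslog; lowering config.log_level to info should let the messages reach /var/log/messages.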

What would cause Elastic Search to not produce log files?

I have installed Elasticsearch V5.0 on my ubuntu64 virtual machine via the Debian package, using this tutorial from Elastic.
As explained in the tutorial, sudo -i service elasticsearch start won't print any messages here (poor design imo).
I tried adding a STDOUT.log file to the directory, and it is still empty after starting Elasticsearch.
If I run sudo bin/elasticsearch I get this trace:
Exception in thread "main" ElasticsearchParseException[malformed, expected settings to start with 'object', instead was [VALUE_STRING]]
at org.elasticsearch.common.settings.loader.XContentSettingsLoader.load(XContentSettingsLoader.java:70)
at org.elasticsearch.common.settings.loader.XContentSettingsLoader.load(XContentSettingsLoader.java:50)
at org.elasticsearch.common.settings.loader.YamlSettingsLoader.load(YamlSettingsLoader.java:50)
at org.elasticsearch.common.settings.Settings$Builder.loadFromStream(Settings.java:938)
at org.elasticsearch.common.settings.Settings$Builder.loadFromPath(Settings.java:927)
at org.elasticsearch.node.internal.InternalSettingsPreparer.prepareEnvironment(InternalSettingsPreparer.java:102)
at org.elasticsearch.bootstrap.Bootstrap.initialEnvironment(Bootstrap.java:207)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:247)
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:112)
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:103)
at org.elasticsearch.cli.SettingCommand.execute(SettingCommand.java:54)
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:96)
at org.elasticsearch.cli.Command.main(Command.java:62)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:80)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:73)
And without sudo privileges:
Exception in thread "main" SettingsException[Failed to load settings from
/usr/share/elasticsearch/config/elasticsearch.yml]; nested: AccessDeniedException[/usr/share/elasticsearch/config/elasticsearch.yml];
Likely root cause: java.nio.file.AccessDeniedException: /usr/share/elasticsearch/config/elasticsearch.yml
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
at java.nio.file.Files.newByteChannel(Files.java:361)
at java.nio.file.Files.newByteChannel(Files.java:407)
at java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:384)
at java.nio.file.Files.newInputStream(Files.java:152)
at org.elasticsearch.common.settings.Settings$Builder.loadFromPath(Settings.java:927)
at org.elasticsearch.node.internal.InternalSettingsPreparer.prepareEnvironment(InternalSettingsPreparer.java:102)
at org.elasticsearch.bootstrap.Bootstrap.initialEnvironment(Bootstrap.java:207)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:247)
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:112)
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:103)
at org.elasticsearch.cli.SettingCommand.execute(SettingCommand.java:54)
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:96)
at org.elasticsearch.cli.Command.main(Command.java:62)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:80)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:73)
I'm not a fan of posting so much text on Stack Overflow, but here is my configuration, located at /etc/elasticsearch:
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: sdc-test-es-cluster
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html>
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 3
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-discovery.html>
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-gateway.html>
#
# ---------------------------------- Various -----------------------------------
#
# Disable starting multiple nodes on a single system:
#
#node.max_local_storage_nodes: 1
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
The problem is a misconfigured Elasticsearch.
Ensure that ES uses the config file from the correct location. For example, the "without sudo privileges" run shows the wrong location.
Ensure you don't have mistakes in the config.
To isolate the problem, comment out everything in the config, then uncomment your custom settings line by line, trying to start after each change. When you find the line causing the problem, check the documentation.
Also, try to start without the "-d" option; Elasticsearch will output the full stacktrace to the console, and it should tell you more about the misconfigured setting.
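On the first trace specifically: "expected settings to start with 'object', instead was [VALUE_STRING]" almost always comes from a key:value line with no space after the colon, which YAML reads as one plain string rather than a settings object. A quick self-contained grep flags such lines (the sample config file below is made up for illustration):

```shell
# Made-up config sample: first line is broken (no space after ':'),
# second line is fine.
cfg=$(mktemp)
printf 'cluster.name:sdc-test-es-cluster\npath.data: /var/lib/elasticsearch\n' > "$cfg"
# Flag non-comment lines whose first colon is not followed by a space:
grep -nE '^[^#[:space:]][^:]*:[^ ]' "$cfg"
rm -f "$cfg"
```

Running this prints only the broken first line with its line number; the second trace (AccessDeniedException) is a separate issue and just means the user running Elasticsearch cannot read the config file.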

Xymon/Hobbit Xymon & ControlByWeb Temperature Monitor

Looking for help with the pre-made script for the ControlByWeb Temperature Monitor written for the Xymon monitoring server. I have followed the instructions step by step, but the column "cbwtemp" is not being generated. I am currently running Xymon 4.3.12 with this script: http://www.revpol.com/files/xymon_cbw_temp.sh I am not sure whether changes in the newer Xymon server are preventing the script from running properly. I have checked the logs and found no errors pertaining to the script.
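Before suspecting Xymon version changes, it is worth feeding the script's parsing pipeline a canned state.xml response and confirming it still extracts a temperature. The sample XML below is invented; the two grep stages mirror the script's getcurtemp() function.

```shell
# Invented sample of roughly what the module's state.xml returns.
TEMPS='<datavalues><sensor1temp>38.4</sensor1temp><sensor2temp>xx.x</sensor2temp></datavalues>'
sensornum=1
# Same two-stage extraction the script performs in getcurtemp():
# 1) cut out the span between this sensor's open and close tags,
# 2) pull the numeric (or "xx.x") reading out of that span.
curtemp=$(echo "$TEMPS" \
  | grep -Eo "sensor$sensornum.*sensor$sensornum" \
  | grep -Eo '[-]*((x)|([0-9])){1,3}\.((x)|[0-9])')
echo "$curtemp"   # → 38.4
```

If this extracts a value but the live module does not, the problem is the curl fetch (host, credentials, timeout) rather than the parsing; if the column never appears at all, also check that the script's status message actually reaches the Xymon server.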
Here is the script:
#!/bin/bash
#
# NAME
# ----
# - xymon_cbw_temp.sh
#
# DESCRIPTION
# -----------
# - Simple Bash shell script to monitor from 1 to 4 temperature readings
# from a ControlByWeb 4-temp/2-relay module and the X-300 8-temp/3-relay
# 7-day programmable thermostat module and report status and
# temperature readings back to a Xymon server
#
# - The most current version of this script may be
# found at http://www.revpol.com/xymon_cbw_temp_script
#
# - Instructions to integrate the output of this script to be monitored
# and graphed by a Xymon server may also be found at the above URL
#
# - If you find this script useful, I'd love to know. Send me an email!
#
# William A. Arlofski
# Reverse Polarity, LLC
# 860-824-2433 Office
# http://www.revpol.com/
#
# HISTORY
# -------
# - 20100307 - Initial version release
#
# - 20100318 - Minor modifications to also work with the new
# X-300 (8-temp & thermostat) module
# - Increased the curl timeout from 3 to 10 seconds
#
# - 20100319 - Modifications to deal with a situation where a temperature
# sensor stops communicating with the module. Modified 2nd
# grep in getcurtemp() module and added testcomm() function to
# see if the current temp is "xx.x" and flag CURCOLOR and COLOR
# as red if yes
#
# - 20100322 - Added "ADMINMSG" variable to allow for an administrative
# messages to be added to the top of the report
#
# - 20100408 - Minor errors in grammar fixed
#
# - 20100520 - Modification to getcurtemp() function to catch cases where the
# whole number portion of the temperature was a single digit.
#
# - 20101014 - Added a "SCALE" variable for display output. Enter an "F" or a
# "C" to match the scale setting in your temperature module
# - Added "SCALE" variable to the end of all temperature variables
# - Rewrote the parsezone() function to add individual sensor
# information to the report via a new "ZONEMSG" variable. This
# will help end users understand why a particular sensor is in
# yellow or red condition without having to check the "ZONE"
# variable in this script
# - Renamed "HOST" variable to "MODULE" throughout script
# - Modified the default "ADMINMSG" variable to use "MACHINEDOTS"
# and "MODULE" variables as an example
# - Added the "ZONEMSG" to become part of the "MSG" variable that is
# returned to the Xymon server
# - Quoted a few more strings
#
# - 20130502 - Spelling errors, general cleanup (extra spaces etc), note that
# $HOST can be host[:port]
#
###############################################################################
#
# Copyright (C) 2010 William A. Arlofski - waa-at-revpol-dot-com
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License, version 2, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
# or visit http://www.gnu.org/licenses/gpl.txt
#
###############################################################################
#
# Set some variables
# ------------------
# Local System Binaries
# ---------------------
GREP="/bin/grep"
TR="/usr/bin/tr"
CURL="/usr/bin/curl"
# Location/device specific variables
# ----------------------------------
#
# FQDN or IP address of the ControlByWeb
# temperature monitor to pull data from
# and user login and password
# (User should be blank)
#---------------------------------------
MODULE="host.example.com:80"
USER=""
PASS="password"
# Name of Xymon test
# ------------------
COLUMN="cbwtemp"
# Define the temperature zone(s) to monitor
# -----------------------------------------
# Format is as follows:
#
# ZONE="Sensor#:TestType:Location:WarnTmp1:CritTmp1:[WarnTmp2:CritTmp2] \
# [ Sensor#:TestType:Location:WarnTmp:CritTmp:[WarnTmp2:CritTmp2] ...]"
#
# Where the fields are as follows:
# - Sensor : The sensor number from 1 to 4
# - TestType : The type of test: UP, DOWN, RANGE, IGNORE.
# If TestType is RANGE, WarnTmp2 and CritTmp2 are required
# If TestType is IGNORE, then only Sensor:TestType:Location are required
# - Location : The name of the location - Must match your Xymon definitions
# - WarnTmp1 : The value at which to set test COLOR and alert to yellow for UP or DOWN tests
# For RANGE test types, this value is used as if the test were a DOWN test
# - CritTmp1 : The value at which to set test COLOR and alert to red for UP or DOWN tests
# For RANGE test types, this value is used as if the test were a DOWN test
# - WarnTmp2 : Only used for RANGE type tests. The value at which to set test COLOR
# to yellow as temperature increases
# - CritTmp2 : Only used for RANGE type tests. The value at which to set test COLOR
# to red as temperature increases
#
#
# The ControlByWeb temperature monitor
# supports up to four temperature sensors
# and reports the temperature as XX.Y in
# either Celsius or Fahrenheit
# ---------------------------------------
#ZONES="1:UP:ServerRoom:88.0:98.0 \
# 2:IGNORE:Outside \
# 3:RANGE:Office:70.0:65.0:80.0:82.0"
# 4:DOWN:Basement:36.0:32.0"
ZONES="1:RANGE:Cellar:38.0:33.0:60.0:70.0"
# Define the scale - Only used in report text
# -------------------------------------------
SCALE="F"
# Add an administrative message to the top of the report page
# Not necessary, but can be a quick way to know what server
# is polling a ControlByWeb module, where the module is
# physically located, and perhaps some instructional
# information
# -----------------------------------------------------------
ADMINMSG="- ${MACHINEDOTS} is the host polling the ControlByWeb X-300 module ${MODULE}
- The ${MODULE} X-300 ControlByWeb module physically resides in the cellar"
###############################################################################
# --------------------------------------------------
# Nothing should need to be modified below this line
# --------------------------------------------------
###############################################################################
#
# ----------------------------
# Set required Xymon variables
# ----------------------------
COLOR="green"
MSG=""
# ----------------
# Set up functions
# ----------------
#
# Get the four sensor temperature outputs from the
# "state.xml" page from ControlByWeb temperature monitor
# ------------------------------------------------------
getdata() {
TEMPS=`"$CURL" -s -m 10 -u "$USER:$PASS" "$MODULE/state.xml"`
# If the device returns no data, or is offline, or does not respond,
# or if curl fails for any reason, then just fail and exit the script.
# Xymon will alert purple indicating that it has not seen data for this
# test after 20 minutes (default). I suppose we COULD instead force a
# yellow alert for all temps for this device during this condition...
# ---------------------------------------------------------------------
EXIT="$?"
if [ "$EXIT" != "0" ] || [ -z "$TEMPS" ]; then
exit 1
fi
}
# Separate zone components:
# - Skip all temperature values for IGNORE test types
# - Assign WarnTmp2 and CritTmp2 for RANGE test types
# Formatting here is ugly to get reasonable output in the display
# with minimal use of HTML to clutter up email and SMS reports :(
# ---------------------------------------------------------------
parsezone () {
sensornum=`echo "$zone" | cut -d':' -f1`
testtype=`echo "$zone" | cut -d':' -f2 | "$TR" [a-z] [A-Z]`
location=`echo "$zone" | cut -d':' -f3`
ZONEMSG="${ZONEMSG}
- Sensor - #$sensornum
Monitoring - $location
Test Type - $testtype"
case "$testtype" in
UP | DOWN )
warntemp1=`echo "$zone" | cut -d':' -f4` ;
crittemp1=`echo "$zone" | cut -d':' -f5` ;
ZONEMSG="${ZONEMSG}
- Warning Temp - ${warntemp1}${SCALE}, Critical Temp - ${crittemp1}${SCALE}
" ;;
RANGE )
warntemp1=`echo "$zone" | cut -d':' -f4` ;
crittemp1=`echo "$zone" | cut -d':' -f5` ;
warntemp2=`echo "$zone" | cut -d':' -f6` ;
crittemp2=`echo "$zone" | cut -d':' -f7` ;
ZONEMSG="${ZONEMSG}
LOW -- Warning Temp - ${warntemp1}${SCALE}, Critical Temp - ${crittemp1}${SCALE}
HIGH -- Warning Temp - ${warntemp2}${SCALE}, Critical Temp - ${crittemp2}${SCALE}
" ;;
IGNORE )
ZONEMSG="${ZONEMSG}
" ;;
esac
}
# Pull current zone's temperature reading out of xml tags
# -------------------------------------------------------
getcurtemp () {
# Each of the four temperatures is represented
# as a line <sensorXtemp>[-][Y]YY.Z</sensorXtemp>
# where X is the sensor number from 1 to 8,
# Y is the temp in degrees, and Z is the tenths
# We only want the numeric portion including the
# negative (hyphen) symbol between the tags for
# the current sensor we are looping for
# Also need to check for "xx.x" in case a temperature
# sensor is not communicating with the module
# ---------------------------------------------------
curtemp=`echo "$TEMPS" \
| "$GREP" -Eo "sensor$sensornum.*sensor$sensornum" \
| "$GREP" -Eo [-]*\(\(x\)\|\([0-9]\)\)\{1,3\}\.\(\(x\)\|[0-9]\)`
}
# Test to make sure that we have a numeric value and not "xx.x"
# which would mean that this temperature sensor is broken, or
# otherwise not communicating with the module. Flag this test's
# CURCOLOR red and the main test COLOR red as well.
# -------------------------------------------------------------
testcomm () {
if [ "$curtemp" == "xx.x" ]; then
CURCOLOR="red"
# Now, since red is the most critical color, just set
# the main status color for this report to red too
# ---------------------------------------------------
COLOR="red"
return 1
else
return 0
fi
}
# Test for temperature RISING and set the test's CURCOLOR
# Set the main COLOR variable for the Xymon report if necessary
# -------------------------------------------------------------
testup() {
# Is current temp greater than the critical temp?
# -----------------------------------------------
if [ `echo "$curtemp" | "$TR" -d .` -ge `echo "$crittemp1" | "$TR" -d .` ]; then
CURCOLOR="red"
# Now, since red is the most critical color, just set
# the main status color for this report to red too
# ---------------------------------------------------
COLOR="red"
# Is current temp greater than the warning temp?
# ----------------------------------------------
elif [ `echo "$curtemp" | "$TR" -d .` -ge `echo "$warntemp1" | "$TR" -d .` ]; then
CURCOLOR="yellow"
# Set main status color to yellow only if it is not already worse (red)
# ---------------------------------------------------------------------
if [ "$COLOR" != "red" ]; then
COLOR="yellow"
fi
fi
}
# Test for temperature DECREASING and set the test's CURCOLOR
# Set the main COLOR variable for the Xymon report if necessary
# -------------------------------------------------------------
testdown() {
# Is current temp less than the critical temp?
# --------------------------------------------
if [ `echo "$curtemp" | "$TR" -d .` -le `echo "$crittemp1" | "$TR" -d .` ]; then
CURCOLOR="red"
# Now, since red is the most critical color, just set
# the main status color for this report to red too
# ---------------------------------------------------
COLOR="red"
# Is current temp less than the warning temp?
# -------------------------------------------
elif [ `echo "$curtemp" | "$TR" -d .` -le `echo "$warntemp1" | "$TR" -d .` ]; then
CURCOLOR="yellow"
# Set main status color to yellow only if it is not already worse (red)
# ---------------------------------------------------------------------
if [ "$COLOR" != "red" ]; then
COLOR="yellow"
fi
fi
}
# Test for temperature being within the defined RANGE
# and set the test's CURCOLOR
# Set the main COLOR variable for the Xymon report if necessary
# -------------------------------------------------------------
testrange() {
# Is the current temp outside of the high and low critical levels?
# -------------------------------------------------------------------
if [ `echo "$curtemp" | "$TR" -d .` -le `echo "$crittemp1" | "$TR" -d .` ] \
|| [ `echo "$curtemp" | "$TR" -d .` -ge `echo "$crittemp2" | "$TR" -d .` ]; then
CURCOLOR="red"
# Now, since red is the most critical color, just set
# the main status color for this report to red too
# ---------------------------------------------------
COLOR="red"
# Is the current temp outside of the high and low warning levels?
# ------------------------------------------------------------------
elif [ `echo "$curtemp" | "$TR" -d .` -le `echo "$warntemp1" | "$TR" -d .` ] \
|| [ `echo "$curtemp" | "$TR" -d .` -ge `echo "$warntemp2" | "$TR" -d .` ]; then
CURCOLOR="yellow"
# Set main status color to yellow only if it is not already worse (red)
# ---------------------------------------------------------------------
if [ "$COLOR" != "red" ]; then
COLOR="yellow"
fi
fi
}
###########################################################
# -----------
# Main Script
# -----------
# Use curl to pull the data from the module
# -----------------------------------------
getdata
# Loop through the defined zones
# ------------------------------
for zone in $ZONES; do
CURCOLOR="green"
parsezone
getcurtemp
# Make sure that the sensor we are testing is
# actually communicating before we move onto the
# UP, DOWN, RANGE or IGNORE tests.
# ----------------------------------------------
if testcomm ; then
# Determine if this is an UP or DOWN test
# ---------------------------------------
case "$testtype" in
UP )
testup
;;
DOWN )
testdown
;;
RANGE )
testrange
;;
IGNORE )
# Do not test anything. Just append
# the $CURCOLOR (green), $location
# and $curtemp values to the Xymon
# Server report for this "zone"
;;
* )
exit 1
;;
esac
fi
# Build the text of the status message
# that will be sent to the Xymon Server
# -------------------------------------
MSG="${MSG}
&${CURCOLOR} ${location} : ${curtemp}${SCALE}"
done
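Each pass of the loop appends one `&color location : temp` line to the report. Here is a standalone sketch of just that accumulation step, with hypothetical zone data (location names, readings, and the `F` scale suffix are all made up):

```shell
MSG=""
SCALE="F"   # hypothetical scale suffix

# Fake zone data as location:temperature pairs
for pair in "ServerRoom:72.3" "Freezer:xx.x"; do
    location=${pair%%:*}
    curtemp=${pair##*:}
    CURCOLOR="green"
    # Append one "&color location : temp" line per zone,
    # exactly as the main loop does
    MSG="${MSG}
&${CURCOLOR} ${location} : ${curtemp}${SCALE}"
done
echo "$MSG"
```

The leading `&` on each line is Xymon's inline color-icon markup, so each zone gets its own colored dot in the rendered status page.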
# Prepend the administrative message to the report
# ------------------------------------------------
MSG="${ADMINMSG}
${ZONEMSG}
<hr>
${MSG}"
# Send final report to Xymon Server
# ---------------------------------
$BB $BBDISP "status $MACHINE.$COLUMN $COLOR `date`
${MSG}
"
###########################################################
I work for ControlByWeb and looked into this issue a bit for you. I installed Xymon on a test machine and was able to get the referenced script working, though there were a few changes I had to make.
By default, Xymon ignores status messages about any host that is not listed in its hosts.cfg file. The script uses a variable called $MACHINE when it sends the message, and in my case this variable was blank, so the message didn't match any host in hosts.cfg and was silently dropped.
To fix this, I added a line near the top of the script that sets this variable to a hostname that already exists in hosts.cfg:
USER=""
PASS="webrelay"
MACHINE="xymon.example.com"
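For reference, the matching hosts.cfg entry would look something like this (the IP address and path are examples from my setup; the hostname must match $MACHINE exactly):

```
# in the Xymon server's hosts.cfg (location varies by install)
192.168.1.50   xymon.example.com   # cbwtemp
```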
I then saved the script file in xymon/client/ext, gave it 755 permissions, and set its user and group to xymon.
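As shell commands, that setup step looks like the following (the /home/xymon/client prefix is taken from the XYMONCLIENTHOME value used elsewhere in this answer, and the script filename matches the clientlaunch.cfg entry; adjust both to your install):

```shell
# Copy the script into the client's ext directory, make it
# executable, and hand ownership to the xymon user (run as root)
cp xymon_cbw_temp.sh /home/xymon/client/ext/
chmod 755 /home/xymon/client/ext/xymon_cbw_temp.sh
chown xymon:xymon /home/xymon/client/ext/xymon_cbw_temp.sh
```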
In addition, I found that the $BB variable used at the very end depends on another variable that was never initialized. To fix this, I added the following line to the xymon/client/etc/xymonclient.cfg file:
XYMONCLIENTHOME="/home/xymon/client"
Finally, I changed the /xymon/client/etc/clientlaunch.cfg file to include the following:
[cbwtemp]
ENVFILE $XYMONCLIENTHOME/etc/xymonclient.cfg
CMD $XYMONCLIENTHOME/ext/xymon_cbw_temp.sh
LOGFILE $XYMONCLIENTLOGS/xymon_cbw_temp.log
INTERVAL 1m
If everything worked, a new column should appear next to the host you specified within a few minutes.
Let me know if this fixes the issue for you or not.