Xymon/Hobbit & ControlByWeb Temperature Monitor - hobbitmon

Looking for help with the pre-made script for the ControlByWeb Temperature Monitor made for the Xymon monitoring server. I have followed the instructions step by step, but the column "cbwtemp" is not being generated. I am currently running Xymon 4.3.12 with this script: http://www.revpol.com/files/xymon_cbw_temp.sh I am not sure whether changes in the newer Xymon server are preventing the script from running properly. I have checked the logs and found no errors pertaining to the script.
Here is the script:
#!/bin/bash
#
# NAME
# ----
# - xymon_cbw_temp.sh
#
# DESCRIPTION
# -----------
# - Simple Bash shell script to monitor from 1 to 4 temperature readings
# from a ControlByWeb 4-temp/2-relay module and the X-300 8-temp/3-relay
# 7-day programmable thermostat module and report status and
# temperature readings back to a Xymon server
#
# - The most current version of this script may be
# found at http://www.revpol.com/xymon_cbw_temp_script
#
# - Instructions to integrate the output of this script to be monitored
# and graphed by a Xymon server may also be found at the above URL
#
# - If you find this script useful, I'd love to know. Send me an email!
#
# William A. Arlofski
# Reverse Polarity, LLC
# 860-824-2433 Office
# http://www.revpol.com/
#
# HISTORY
# -------
# - 20100307 - Initial version release
#
# - 20100318 - Minor modifications to also work with the new
# X-300 (8-temp & thermostat) module
# - Increased the curl timeout from 3 to 10 seconds
#
# - 20100319 - Modifications to deal with a situation where a temperature
# sensor stops communicating with the module. Modified 2nd
# grep in getcurtemp() module and added testcomm() function to
# see if the current temp is "xx.x" and flag CURCOLOR and COLOR
# as red if yes
#
# - 20100322 - Added "ADMINMSG" variable to allow for an administrative
# messages to be added to the top of the report
#
# - 20100408 - Minor errors in grammar fixed
#
# - 20100520 - Modification to getcurtemp() function to catch cases where the
# whole number portion of the temperature was a single digit.
#
# - 20101014 - Added a "SCALE" variable for display output. Enter an "F" or a
# "C" to match the scale setting in your temperature module
# - Added "SCALE" variable to the end of all temperature variables
# - Rewrote the parsezone() function to add individual sensor
# information to the report via a new "ZONEMSG" variable. This
# will help end users understand why a particular sensor is in
# yellow or red condition without having to check the "ZONE"
# variable in this script
# - Renamed "HOST" variable to "MODULE" throughout script
# - Modified the default "ADMINMSG" variable to use "MACHINEDOTS"
# and "MODULE" variables as an example
# - Added the "ZONEMSG" to become part of the "MSG" variable that is
# returned to the Xymon server
# - Quoted a few more strings
#
# - 20130502 - Spelling errors, general cleanup (extra spaces etc), note that
# $HOST can be host[:port]
#
###############################################################################
#
# Copyright (C) 2010 William A. Arlofski - waa-at-revpol-dot-com
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License, version 2, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
# or visit http://www.gnu.org/licenses/gpl.txt
#
###############################################################################
#
# Set some variables
# ------------------
# Local System Binaries
# ---------------------
GREP="/bin/grep"
TR="/usr/bin/tr"
CURL="/usr/bin/curl"
# Location/device specific variables
# ----------------------------------
#
# FQDN or IP address of the ControlByWeb
# temperature monitor to pull data from
# and user login and password
# (User should be blank)
#---------------------------------------
MODULE="host.example.com:80"
USER=""
PASS="password"
# Name of Xymon test
# ------------------
COLUMN="cbwtemp"
# Define the temperature zone(s) to monitor
# -----------------------------------------
# Format is as follows:
#
# ZONE="Sensor#:TestType:Location:WarnTmp1:CritTmp1:[WarnTmp2:CritTmp2] \
# [ Sensor#:TestType:Location:WarnTmp1:CritTmp1:[WarnTmp2:CritTmp2] ...]"
#
# Where the fields are as follows:
# - Sensor : The sensor number from 1 to 4
# - TestType : The type of test: UP, DOWN, RANGE, IGNORE.
# If TestType is RANGE, WarnTmp2 and CritTmp2 are required
# If TestType is IGNORE, then only Sensor:TestType:Location are required
# - Location : The name of the location - Must match your Xymon definitions
# - WarnTmp1 : The value at which to set test COLOR and alert to yellow for UP or DOWN tests
# For RANGE test types, this value is used as if the test were a DOWN test
# - CritTmp1 : The value at which to set test COLOR and alert to red for UP or DOWN tests
# For RANGE test types, this value is used as if the test were a DOWN test
# - WarnTmp2 : Only used for RANGE type tests. The value at which to set test COLOR
# to yellow as temperature increases
# - CritTmp2 : Only used for RANGE type tests. The value at which to set test COLOR
# to red as temperature increases
#
#
# The ControlByWeb temperature monitor
# supports up to four temperature sensors
# and reports the temperature as XX.Y in
# either Celsius or Fahrenheit
# ---------------------------------------
#ZONES="1:UP:ServerRoom:88.0:98.0 \
# 2:IGNORE:Outside \
# 3:RANGE:Office:70.0:65.0:80.0:82.0"
# 4:DOWN:Basement:36.0:32.0"
ZONES="1:RANGE:Cellar:38.0:33.0:60.0:70.0"
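# (Reading the Cellar example above per the RANGE rules coded below:
# yellow at or below 38.0F or at or above 60.0F; red at or below 33.0F
# or at or above 70.0F)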
# Define the scale - Only used in report text
# -------------------------------------------
SCALE="F"
# Add an administrative message to the top of the report page
# Not necessary, but can be a quick way to know what server
# is polling a ControlByWeb module, where the module is
# physically located, and perhaps some instructional
# information
# -----------------------------------------------------------
ADMINMSG="- ${MACHINEDOTS} is the host polling the ControlByWeb X-300 module ${MODULE}
- The ${MODULE} X-300 ControlByWeb module physically resides in the cellar"
###############################################################################
# --------------------------------------------------
# Nothing should need to be modified below this line
# --------------------------------------------------
###############################################################################
#
# ----------------------------
# Set required Xymon variables
# ----------------------------
COLOR="green"
MSG=""
# ----------------
# Set up functions
# ----------------
#
# Get the four sensor temperature outputs from the
# "state.xml" page from ControlByWeb temperature monitor
# ------------------------------------------------------
getdata() {
TEMPS=`"$CURL" -s -m 10 -u "$USER:$PASS" "$MODULE/state.xml"`
# If the device returns no data, or is offline, or does not respond,
# or if curl fails for any reason, then just fail and exit the script.
# Xymon will alert purple indicating that it has not seen data for this
# test after 20 minutes (default). I suppose we COULD instead force a
# yellow alert for all temps for this device during this condition...
# ---------------------------------------------------------------------
EXIT="$?"
if [ "$EXIT" != "0" ] || [ -z "$TEMPS" ]; then
exit 1
fi
}
# Separate zone components:
# - Skip all temperature values for IGNORE test types
# - Assign WarnTmp2 and CritTmp2 for RANGE test types
# Formatting here is ugly to get reasonable output in the display
# with minimal use of HTML to clutter up email and SMS reports :(
# ---------------------------------------------------------------
parsezone () {
sensornum=`echo "$zone" | cut -d':' -f1`
testtype=`echo "$zone" | cut -d':' -f2 | "$TR" '[a-z]' '[A-Z]'`
location=`echo "$zone" | cut -d':' -f3`
ZONEMSG="${ZONEMSG}
- Sensor - #$sensornum
Monitoring - $location
Test Type - $testtype"
case "$testtype" in
UP | DOWN )
warntemp1=`echo "$zone" | cut -d':' -f4` ;
crittemp1=`echo "$zone" | cut -d':' -f5` ;
ZONEMSG="${ZONEMSG}
- Warning Temp - ${warntemp1}${SCALE}, Critical Temp - ${crittemp1}${SCALE}
" ;;
RANGE )
warntemp1=`echo "$zone" | cut -d':' -f4` ;
crittemp1=`echo "$zone" | cut -d':' -f5` ;
warntemp2=`echo "$zone" | cut -d':' -f6` ;
crittemp2=`echo "$zone" | cut -d':' -f7` ;
ZONEMSG="${ZONEMSG}
LOW -- Warning Temp - ${warntemp1}${SCALE}, Critical Temp - ${crittemp1}${SCALE}
HIGH -- Warning Temp - ${warntemp2}${SCALE}, Critical Temp - ${crittemp2}${SCALE}
" ;;
IGNORE )
ZONEMSG="${ZONEMSG}
" ;;
esac
}
# Pull current zone's temperature reading out of xml tags
# -------------------------------------------------------
getcurtemp () {
# Each of the four temperatures is represented
# as a line <sensorXtemp>[-][Y]YY.Z</sensorXtemp>
# where X is the sensor number from 1 to 8,
# Y is the temp in degrees, and Z is the tenths
# We only want the numeric portion including the
# negative (hyphen) symbol between the tags for
# the current sensor we are looping for
# Also need to check for "xx.x" in case a temperature
# sensor is not communicating with the module
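# For illustration, a (hypothetical) state.xml fragment for sensor 1
# looks like <sensor1temp>72.3</sensor1temp> or, when the sensor is
# not responding, <sensor1temp>xx.x</sensor1temp>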
# ---------------------------------------------------
curtemp=`echo "$TEMPS" \
| "$GREP" -Eo "sensor$sensornum.*sensor$sensornum" \
| "$GREP" -Eo '[-]*((x)|([0-9])){1,3}\.((x)|[0-9])'`
}
# Test to make sure that we have a numeric value and not "xx.x"
# which would mean that this temperature sensor is broken, or
# otherwise not communicating with the module. Flag this test's
# CURCOLOR red and the main test COLOR red as well.
# -------------------------------------------------------------
testcomm () {
if [ "$curtemp" == "xx.x" ]; then
CURCOLOR="red"
# Now, since red is the most critical color, just set
# the main status color for this report to red too
# ---------------------------------------------------
COLOR="red"
return 1
else
return 0
fi
}
# Test for temperature RISING and set the test's CURCOLOR
# Set the main COLOR variable for the Xymon report if necessary
# -------------------------------------------------------------
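# Note: the comparisons below strip the decimal point and compare the
# values as integers, which assumes the reading and the thresholds all
# carry exactly one decimal place (XX.Y), as the module reports and
# the ZONES examples use
# -------------------------------------------------------------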
testup() {
# Is current temp greater than the critical temp?
# -----------------------------------------------
if [ `echo "$curtemp" | "$TR" -d .` -ge `echo "$crittemp1" | "$TR" -d .` ]; then
CURCOLOR="red"
# Now, since red is the most critical color, just set
# the main status color for this report to red too
# ---------------------------------------------------
COLOR="red"
# Is current temp greater than the warning temp?
# ----------------------------------------------
elif [ `echo "$curtemp" | "$TR" -d .` -ge `echo "$warntemp1" | "$TR" -d .` ]; then
CURCOLOR="yellow"
# Set main status color to yellow only if it is not already worse (red)
# ---------------------------------------------------------------------
if [ "$COLOR" != "red" ]; then
COLOR="yellow"
fi
fi
}
# Test for temperature DECREASING and set the test's CURCOLOR
# Set the main COLOR variable for the Xymon report if necessary
# -------------------------------------------------------------
testdown() {
# Is current temp less than the critical temp?
# --------------------------------------------
if [ `echo "$curtemp" | "$TR" -d .` -le `echo "$crittemp1" | "$TR" -d .` ]; then
CURCOLOR="red"
# Now, since red is the most critical color, just set
# the main status color for this report to red too
# ---------------------------------------------------
COLOR="red"
# Is current temp less than the warning temp?
# -------------------------------------------
elif [ `echo "$curtemp" | "$TR" -d .` -le `echo "$warntemp1" | "$TR" -d .` ]; then
CURCOLOR="yellow"
# Set main status color to yellow only if it is not already worse (red)
# ---------------------------------------------------------------------
if [ "$COLOR" != "red" ]; then
COLOR="yellow"
fi
fi
}
# Test for temperature being within the defined RANGE
# and set the test's CURCOLOR
# Set the main COLOR variable for the Xymon report if necessary
# -------------------------------------------------------------
testrange() {
# Is the current temp outside of the high and low critical levels?
# -------------------------------------------------------------------
if [ `echo "$curtemp" | "$TR" -d .` -le `echo "$crittemp1" | "$TR" -d .` ] \
|| [ `echo "$curtemp" | "$TR" -d .` -ge `echo "$crittemp2" | "$TR" -d .` ]; then
CURCOLOR="red"
# Now, since red is the most critical color, just set
# the main status color for this report to red too
# ---------------------------------------------------
COLOR="red"
# Is the current temp outside of the high and low warning levels?
# ------------------------------------------------------------------
elif [ `echo "$curtemp" | "$TR" -d .` -le `echo "$warntemp1" | "$TR" -d .` ] \
|| [ `echo "$curtemp" | "$TR" -d .` -ge `echo "$warntemp2" | "$TR" -d .` ]; then
CURCOLOR="yellow"
# Set main status color to yellow only if it is not already worse (red)
# ---------------------------------------------------------------------
if [ "$COLOR" != "red" ]; then
COLOR="yellow"
fi
fi
}
###########################################################
# -----------
# Main Script
# -----------
# Use curl to pull the data from the module
# -----------------------------------------
getdata
# Loop through the defined zones
# ------------------------------
for zone in $ZONES; do
CURCOLOR="green"
parsezone
getcurtemp
# Make sure that the sensor we are testing is
# actually communicating before we move on to the
# UP, DOWN, RANGE or IGNORE tests.
# ----------------------------------------------
if testcomm ; then
# Determine if this is an UP or DOWN test
# ---------------------------------------
case "$testtype" in
UP )
testup
;;
DOWN )
testdown
;;
RANGE )
testrange
;;
IGNORE )
# Do not test anything. Just append
# the $CURCOLOR (green), $location
# and $curtemp values to the Xymon
# Server report for this "zone"
;;
* )
exit 1
;;
esac
fi
# Build the text of the status message
# that will be sent to the Xymon Server
# -------------------------------------
MSG="${MSG}
&${CURCOLOR} ${location} : ${curtemp}${SCALE}"
done
# Prepend the administrative message to the report
# ------------------------------------------------
MSG="${ADMINMSG}
${ZONEMSG}
<hr>
${MSG}"
# Send final report to Xymon Server
# ---------------------------------
$BB $BBDISP "status $MACHINE.$COLUMN $COLOR `date`
${MSG}
"
###########################################################

I work for ControlByWeb and looked into this issue a bit for you. I installed Xymon on a computer and was able to get the referenced script working. There were a few changes I had to make, though.
By default, Xymon ignores messages sent to it unless they concern a host listed in the hosts.cfg file. The script uses a variable called $MACHINE when it sends the message, and in my case this variable was blank, so the message didn't match a host in hosts.cfg and was ignored.
To fix this, I added a line near the top of the script to set this variable to a hostname that exists in hosts.cfg:
USER=""
PASS="webrelay"
MACHINE="xymon.example.com"
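For reference, a minimal hosts.cfg entry that this $MACHINE value would match might look like the following (the 0.0.0.0 address is just a placeholder; use the host's real IP):
0.0.0.0    xymon.example.com    #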
I then saved the script in xymon/client/ext, gave it 755 permissions, and set the user/group to xymon.
In addition, I found that the $BB variable used at the very end depends on another variable that wasn't initialized. To fix this, I added the following line to the xymon/client/etc/xymonclient.cfg file:
XYMONCLIENTHOME="/home/xymon/client"
Finally, I changed the /xymon/client/etc/clientlaunch.cfg file to include the following:
[cbwtemp]
ENVFILE $XYMONCLIENTHOME/etc/xymonclient.cfg
CMD $XYMONCLIENTHOME/ext/xymon_cbw_temp.sh
LOGFILE $XYMONCLIENTLOGS/xymon_cbw_temp.log
INTERVAL 1m
If everything worked, it should now show a new column next to the host you specified within a few minutes.
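As a quick sanity check, you can also run the script by hand under the Xymon environment: the stock xymoncmd wrapper loads the Xymon environment variables ($BB, $BBDISP, and so on) before running a command. Something like this should work, assuming the paths from the layout above:
/home/xymon/client/bin/xymoncmd /home/xymon/client/ext/xymon_cbw_temp.sh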
Let me know if this fixes the issue for you or not.

Related

Running F-stack DPDK executable - Unsupported Rx multi queue mode 1

I have a C++ program which does lots of stuff, but most importantly it is set up to use F-Stack, which is built on DPDK:
int main(int argc, char * argv[])
{
ff_init(argc, argv);
...
}
And I run the program like this:
sudo ./main --conf /etc/f-stack.conf --proc-type=primary
This is the error message I am receiving:
virtio_dev_configure(): Unsupported Rx multi queue mode 1
Port0 dev_configure = -22
EAL: Error - exiting with code: 1
Cause: init_port_start failed
I have not had this problem before when running this executable on a CentOS 8 AWS instance. I am now running it on a CentOS 8 Alibaba Cloud instance, so there is possibly some difference when running on Alibaba.
The only other thing I can think of is that there might be a configuration problem. However, I copied /etc/f-stack.conf from my AWS instance to Alibaba and updated some IP addresses, nothing else. So nothing significant has changed.
Any idea what's going on here and how to fix it?
Edit: here is my /etc/f-stack.conf file (without IP addresses included):
[dpdk]
# Hexadecimal bitmask of cores to run on.
lcore_mask=1
# Number of memory channels.
channel=4
# Specify base virtual address to map.
#base_virtaddr=0x7f0000000000
# Promiscuous mode of NIC, default: enabled.
promiscuous=1
numa_on=1
# TX checksum offload skip, default: disabled.
# We need this switch enabled in the following cases:
# -> The application wants to enforce a wrong checksum for testing purposes
# -> Some cards advertise the offload capability but do not calculate the checksum.
tx_csum_offoad_skip=0
# TCP segment offload, default: disabled.
tso=0
# HW vlan strip, default: enabled.
vlan_strip=1
# sleep when no pkts incoming
# unit: microseconds
idle_sleep=0
# packet send delay time (0-100) when sending fewer than 32 pkts.
# default 100 us.
# if set to 0, packets are sent immediately.
# if set >100, the delay is capped at 100 us.
# unit: microseconds
pkt_tx_delay=100
# use symmetric Receive-side Scaling(RSS) key, default: disabled.
symmetric_rss=0
# PCI device enable list.
# And driver options
#pci_whitelist=02:00.0
# enabled port list
#
# EBNF grammar:
#
# exp ::= num_list {"," num_list}
# num_list ::= <num> | <range>
# range ::= <num>"-"<num>
# num ::= '0' | '1' | '2' | '3' | '4' | '5' | '6' | '7' | '8' | '9'
#
# examples
# 0-3 ports 0, 1,2,3 are enabled
# 1-3,4,7 ports 1,2,3,4,7 are enabled
#
# If using bonding, configure the bonding port id in port_list
# and do not configure the slave port ids in port_list.
# For example, if port 0 and port 1 are bonded into bonding port 2,
# set `port_list=2` and configure a `[port2]` section
port_list=0
# Number of vdev.
nb_vdev=0
# Number of bond.
nb_bond=0
# Each core writes into its own pcap file, which is opened once and closed when full.
# Supports dumping the first snaplen bytes of each packet.
# If a pcap file grows larger than savelen bytes, it is closed and the next file is used.
[pcap]
enable=0
snaplen=96
savelen=16777216
savepath=.
# Port config section
# Correspond to dpdk.port_list's index: port0, port1...
[port0]
addr=<ADDR>
netmask=<NETMASK>
broadcast=<BROADCAST>
gateway=<GATEWAY>
# IPv6 net addr, Optional parameters.
#addr6=ff::02
#prefix_len=64
#gateway6=ff::01
# Multi virtual IPv4/IPv6 net addr, Optional parameters.
# `vip_ifname`: default `f-stack-x`
# `vip_addr`: Separated by semicolons, MAX number 64;
# Only netmask 255.255.255.255 and broadcast x.x.x.255 are supported now, hard-coded in `ff_veth_setvaddr`.
# `vip_addr6`: Separated by semicolons, MAX number 64.
# `vip_prefix_len`: All addr6 use the same prefix now, default 64.
#vip_ifname=lo0
#vip_addr=192.168.1.3;192.168.1.4;192.168.1.5;192.168.1.6
#vip_addr6=ff::03;ff::04;ff::05;ff::06;ff::07
#vip_prefix_len=64
# lcore list used to handle this port
# the format is same as port_list
#lcore_list=0
# bonding slave port list used to handle this port
# need to config while this port is a bonding port
# the format is same as port_list
#slave_port_list=0,1
# Vdev config section
# Correspond to dpdk.nb_vdev's index: vdev0, vdev1...
# iface : Usually should not be set.
# path : The vuser device path in container. Required.
# queues : The max queues of vuser. Optional, default 1, greater or equal to the number of processes.
# queue_size : Queue size. Optional, default 256.
# mac : The mac address of vuser. Optional, default random; if vhost uses a phy NIC, it should be set to the phy NIC's mac.
# cq : Optional, if queues = 1, default 0; if queues > 1 default 1.
#[vdev0]
##iface=/usr/local/var/run/openvswitch/vhost-user0
#path=/var/run/openvswitch/vhost-user0
#queues=1
#queue_size=256
#mac=00:00:00:00:00:01
#cq=0
# bond config section
# See http://doc.dpdk.org/guides/prog_guide/link_bonding_poll_mode_drv_lib.html
#[bond0]
#mode=4
#slave=0000:0a:00.0,slave=0000:0a:00.1
#primary=0000:0a:00.0
#mac=f0:98:38:xx:xx:xx
## opt argument
#socket_id=0
#xmit_policy=l23
#lsc_poll_period_ms=100
#up_delay=10
#down_delay=50
# Kni config: if enabled and method=reject,
# all packets that do not belong to the following tcp_port and udp_port
# will be passed to the kernel; if method=accept, all packets that belong to
# the following tcp_port and udp_port will be passed to the kernel.
#[kni]
#enable=1
#method=reject
# The format is same as port_list
#tcp_port=80,443
#udp_port=53
# FreeBSD network performance tuning configurations.
# Most native FreeBSD configurations are supported.
[freebsd.boot]
hz=100
# Block out a range of descriptors to avoid overlap
# with the kernel's descriptor space.
# You can increase this value according to your app.
fd_reserve=1024
kern.ipc.maxsockets=262144
net.inet.tcp.syncache.hashsize=4096
net.inet.tcp.syncache.bucketlimit=100
net.inet.tcp.tcbhashsize=65536
kern.ncallout=262144
kern.features.inet6=1
net.inet6.ip6.auto_linklocal=1
net.inet6.ip6.accept_rtadv=2
net.inet6.icmp6.rediraccept=1
net.inet6.ip6.forwarding=0
[freebsd.sysctl]
kern.ipc.somaxconn=32768
kern.ipc.maxsockbuf=16777216
net.link.ether.inet.maxhold=5
net.inet.tcp.fast_finwait2_recycle=1
net.inet.tcp.sendspace=16384
net.inet.tcp.recvspace=8192
#net.inet.tcp.nolocaltimewait=1
net.inet.tcp.cc.algorithm=cubic
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.sendbuf_auto=1
net.inet.tcp.recvbuf_auto=1
net.inet.tcp.sendbuf_inc=16384
net.inet.tcp.recvbuf_inc=524288
net.inet.tcp.sack.enable=1
net.inet.tcp.blackhole=1
net.inet.tcp.msl=2000
net.inet.tcp.delayed_ack=0
net.inet.udp.blackhole=1
net.inet.ip.redirect=0
net.inet.ip.forwarding=0
Edit 2: I added pci_whitelist=[PCIe BDF of NIC] to the config and re-ran the command.
The reason for the error is a check in the virtio PMD, in the function virtio_dev_configure in [dpdk root folder]/drivers/net/virtio/virtio_ethdev.c. It is triggered because F-Stack enables RSS for better flow distribution over its port-queue pairs, which the virtio PMD does not support.
There are two possible ways to fix the problem:
find a configuration parameter in f-stack.conf to disable RSS, or
change the F-Stack port configuration logic not to use RSS (by editing the code).
File: lib/ff_dpdk_if.c
Edit line 627: change port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS; to port_conf.rxmode.mq_mode = ETH_MQ_RX_NONE; and rebuild F-Stack.
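If you prefer a one-liner, something like this sketch should make the same edit from the F-Stack source root (sed replaces every occurrence in the file, so check with grep first that only the intended assignment matches):
grep -n 'ETH_MQ_RX_RSS' lib/ff_dpdk_if.c
sed -i 's/ETH_MQ_RX_RSS/ETH_MQ_RX_NONE/' lib/ff_dpdk_if.c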
Note: if you use a physical NIC, RSS is supported in most cases, so this error will not occur there.

name input/output files in snakemake according to variable (not wildcard) in config.yaml

I am trying to edit and run a Snakemake pipeline. In a nutshell, the pipeline calls a default genome aligner (minimap) and names its output files after it. I am trying to add an aligner variable to config.yaml to specify the aligner I want to call. Also (where I am actually stuck), the output files should carry the name of the aligner specified in config.yaml.
My config.yaml looks like this:
# this config.yaml is passed to Snakefile in pipeline-structural-variation subfolder.
# Snakemake is run from this pipeline-structural-variation folder; it is necessary to
# pass an appropriate path to the input-files (the ../ prefix is sufficient for this demo)
aligner: "ngmlr" # THIS IS THE VARIABLE I AM ADDING TO THIS FILE. VALUES COULD BE minimap or ngmlr
# FASTQ file or folder containing FASTQ files
# check if this has to be gzipped
input_fastq: "/nexusb/Gridion/20190917PGD2staal2/PD170815/PD170815_cat_all.fastq.gz" # original is ../RawData/GM24385_nf7_chr20_af.fastq.gz
# FASTA file containing the reference genome
# note that the original reference sequence contains only the sequence of chr20
reference_fasta: "/nexus/bhinckel/19/ONT_projects/PGD_breakpoint/ref_hg19_local/hg19_chr1-y.fasta" # original is ../ReferenceData/human_g1k_v37_chr20_50M.fasta
# Minimum SV length
min_sv_length: 300000 # original value was 40
# Maximum SV length
max_sv_length: 1000000 # original value was 1000000. Note that the value I used to run the pipeline for the sample PD170677 was 100000000000, which will be coerced to NA in the R script (/home/bhinckel/ont_tutorial_sv/ont_tutorial_sv.R)
# Min read length. Shorter reads will be discarded
min_read_length: 1000
# Min mapping quality. Reads with lower mapping quality will be discarded
min_read_mapping_quality: 20
# Minimum read support required to call a SV (auto for auto-detect)
min_read_support: 'auto'
# Sample name
sample_name: "PD170815" # original value was GM24385.nf7.chr20_af. Note that this can be a list
I am posting below the sections of my Snakefile that generate output files with the extension _minimap2.bam, which I would like to replace by either _minimap2.bam or _ngmlr.bam, depending on the aligner in config.yaml.
# INPUT BAM folder
bam = None
if "bam" in config:
bam = os.path.join(CONFDIR, config["bam"])
# INPUT FASTQ folder
FQ_INPUT_DIRECTORY = []
if not bam:
if not "input_fastq" in config:
print("\"input_fastq\" not specified in config file. Exiting...")
FQ_INPUT_DIRECTORY = os.path.join(CONFDIR, config["input_fastq"])
if not os.path.exists(FQ_INPUT_DIRECTORY):
print("Could not find {}".format(FQ_INPUT_DIRECTORY))
MAPPED_BAM = "{sample}/alignment/{sample}_minimap2.bam" # Original
#MAPPED_BAM = "{sample}/alignment/{sample}_{alignerName}.bam" # this did not work
#MAPPED_BAM = f"{sample}/alignment/{sample}_{config['aligner']}.bam" # this did not work either
else:
MAPPED_BAM = find_file_in_folder(bam, "*.bam", single=True)
...
if config['aligner'] == 'minimap':
rule index_minimap2:
input:
REF = FA_REF
output:
"{sample}/index/minimap2.idx"
threads: config['threads']
conda: "env.yml"
shell:
"minimap2 -t {threads} -ax map-ont --MD -Y {input.REF} -d {output}"
rule map_minimap2:
input:
FQ = FQ_INPUT_DIRECTORY,
IDX = rules.index_minimap2.output,
SETUP = "init"
output:
BAM = "{sample}/alignment/{sample}_minimap2.bam",
BAI = "{sample}/alignment/{sample}_minimap2.bam.bai"
conda: "env.yml"
threads: config["threads"]
shell:
"cat_fastq {input.FQ} | minimap2 -t {threads} -K 500M -ax map-ont --MD -Y {input.IDX} - | samtools sort -@ {threads} -O BAM -o {output.BAM} - && samtools index -@ {threads} {output.BAM}"
else:
print(f"Aligner is {config['aligner']} - skipping indexing step for minimap2")
rule map_ngmlr:
input:
REF = FA_REF,
FQ = FQ_INPUT_DIRECTORY,
SETUP = "init"
output:
BAM = "{sample}/alignment/{sample}_minimap2.bam",
BAI = "{sample}/alignment/{sample}_minimap2.bam.bai"
conda: "env.yml"
threads: config["threads"]
shell:
"cat_fastq {input.FQ} | ngmlr -r {input.REF} -t {threads} -x ont - | samtools sort -@ {threads} -O BAM -o {output.BAM} - && samtools index -@ {threads} {output.BAM}"
I initially tried to create a alignerName parameter, similar to the sample parameter, as shown below:
# Parameter: sample_name
sample = "sv_sample01"
if "sample_name" in config:
sample = config['sample_name']
###############
#
# code below created by me
#
###############
# Parameter: aligner_name
alignerName = "defaultAligner"
if "aligner" in config:
alignerName = config['aligner']
Then I tried to put {alignerName} wherever I have minimap2 in my input/output files (see the commented MAPPED_BAM definitions above), but this throws an error. I guess Snakemake interprets {alignerName} as a wildcard, whereas what I want is simply to pass the variable defined in config['aligner'] into the input/output file names. I also tried an f-string (MAPPED_BAM = f"{sample}/alignment/{sample}_{config['aligner']}.bam"), but that did not work either.
You are close!
The way wildcards work in snakemake is they get interpreted 'last', while f-strings get interpreted first. To not interpret a curly brace in an f-string you can escape it with another curly brace, like so:
print(f"{{keep curly}}")
>>> {keep curly}
So all we need to do is
MAPPED_BAM = f"{{sample}}/alignment/{{sample}}_{config['aligner']}.bam"
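To see what the f-string leaves behind for Snakemake's wildcard engine, here is a quick sketch in plain Python (the config dict is a stand-in for the one Snakemake loads from config.yaml):
config = {"aligner": "ngmlr"}
MAPPED_BAM = f"{{sample}}/alignment/{{sample}}_{config['aligner']}.bam"
print(MAPPED_BAM)  # prints: {sample}/alignment/{sample}_ngmlr.bam
The doubled braces survive as a literal {sample}, which Snakemake then treats as a wildcard as usual.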

fabric: why can't I get local("history") to print out anything?

Here's my fabfile
from fabric.api import local, task
@task
def tracking(suffix=""):
buffer_ = "*" * 40
print (buffer_)
local("whoami")
print (buffer_)
local("env | grep dn")
#this one comes out empty...
print (buffer_)
out = local("history")
print (buffer_)
Everything prints out as expected, except for the history:
****************************************
[localhost] local: whoami
jluc
****************************************
[localhost] local: env | grep dn
dn_cb=/Users/jluc/.berkshelf/cookbooks
dn_cc=/Users/jluc/kds2/chef/chef-repo/cookbooks
dn_khtmldump=/Users/jluc/kds2/out/tests/dump2static
dn_cv=/Users/jluc/kds2/chef/vagrant/ubuntu2
****************************************
[localhost] local: history
****************************************
But nothing wrong with history on the command line...
history | tail -5
613 history
614 fab -f fabfile2.py tracking
615 history | tail -5
616 cls
617 history | tail -5
What gives? Adding shell="/bin/bash" didn't help either.
macOS Sierra
According to the docs:
local is not currently capable of simultaneously printing and capturing output, as run/sudo do. The capture kwarg allows you to switch between printing and capturing as necessary, and defaults to False.
I'd interpret this as meaning that if you want the history command to work, you need to capture the output first. Try changing all your local commands to include both shell="/bin/bash" and capture=True.
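A sketch of the suggested change for the history call in the task above (the explicit print is needed because captured output is no longer echoed automatically):
out = local("history", shell="/bin/bash", capture=True)
print(out)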

What would cause Elastic Search to not produce log files?

I have installed Elasticsearch 5.0 on my Ubuntu 64-bit virtual machine via the Debian package, using this tutorial from Elastic.
As explained in the tutorial, sudo -i service elasticsearch start won't give any messages here (poor design, IMO).
I tried adding a STDOUT.log file to the directory, and it is still empty after starting Elasticsearch.
If I sudo bin/elasticsearch I get this trace:
Exception in thread "main" ElasticsearchParseException[malformed, expected settings to start with 'object', instead was [VALUE_STRING]]
at org.elasticsearch.common.settings.loader.XContentSettingsLoader.load(XContentSettingsLoader.java:70)
at org.elasticsearch.common.settings.loader.XContentSettingsLoader.load(XContentSettingsLoader.java:50)
at org.elasticsearch.common.settings.loader.YamlSettingsLoader.load(YamlSettingsLoader.java:50)
at org.elasticsearch.common.settings.Settings$Builder.loadFromStream(Settings.java:938)
at org.elasticsearch.common.settings.Settings$Builder.loadFromPath(Settings.java:927)
at org.elasticsearch.node.internal.InternalSettingsPreparer.prepareEnvironment(InternalSettingsPreparer.java:102)
at org.elasticsearch.bootstrap.Bootstrap.initialEnvironment(Bootstrap.java:207)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:247)
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:112)
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:103)
at org.elasticsearch.cli.SettingCommand.execute(SettingCommand.java:54)
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:96)
at org.elasticsearch.cli.Command.main(Command.java:62)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:80)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:73)
And without sudo privileges:
Exception in thread "main" SettingsException[Failed to load settings from
/usr/share/elasticsearch/config/elasticsearch.yml]; nested: AccessDeniedException[/usr/share/elasticsearch/config/elasticsearch.yml];
Likely root cause: java.nio.file.AccessDeniedException: /usr/share/elasticsearch/config/elasticsearch.yml
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
at java.nio.file.Files.newByteChannel(Files.java:361)
at java.nio.file.Files.newByteChannel(Files.java:407)
at java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:384)
at java.nio.file.Files.newInputStream(Files.java:152)
at org.elasticsearch.common.settings.Settings$Builder.loadFromPath(Settings.java:927)
at org.elasticsearch.node.internal.InternalSettingsPreparer.prepareEnvironment(InternalSettingsPreparer.java:102)
at org.elasticsearch.bootstrap.Bootstrap.initialEnvironment(Bootstrap.java:207)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:247)
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:112)
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:103)
at org.elasticsearch.cli.SettingCommand.execute(SettingCommand.java:54)
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:96)
at org.elasticsearch.cli.Command.main(Command.java:62)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:80)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:73)
I'm not a fan of posting so much text on Stack Overflow, but here is my configuration, located at /etc/elasticsearch:
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: sdc-test-es-cluster
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html>
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 3
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-discovery.html>
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-gateway.html>
#
# ---------------------------------- Various -----------------------------------
#
# Disable starting multiple nodes on a single system:
#
#node.max_local_storage_nodes: 1
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
The problem is a misconfigured Elasticsearch.
Ensure that ES reads its config file from the correct location. For example, your "without sudo privileges" run shows it looking in the wrong location (/usr/share/elasticsearch/config instead of /etc/elasticsearch).
Ensure you don't have mistakes in the config.
To isolate the problem, comment out everything in the config, then uncomment your custom settings line by line and try to start. When you find the line causing the problem, check the documentation.
Also, try starting without the "-d" option; Elasticsearch will print the full stacktrace to the console, which should say more about the misconfigured setting.
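For example, a foreground run that points ES at the packaged config directory might look like this (paths from the .deb layout; the -E path.conf override is the 5.x-era mechanism, so treat the exact flag as an assumption and check the docs for your version):
sudo -u elasticsearch /usr/share/elasticsearch/bin/elasticsearch -Epath.conf=/etc/elasticsearch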

Need to filter out valid IP addresses using regex

I have a RADIUS client configuration file at /etc/raddb/server, and I want to extract the valid IP addresses while skipping commented lines, so I'm using
grep -o '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}' /etc/raddb/server
127.0.0.1
192.168.0.147
But I want to ignore 127.0.0.1, which is commented out with #. How can I do this?
The /etc/raddb/server file is as follows:
cat /etc/raddb/server
# pam_radius_auth configuration file. Copy to: /etc/raddb/server
#
# For proper security, this file SHOULD have permissions 0600,
# that is readable by root, and NO ONE else. If anyone other than
# root can read this file, then they can spoof responses from the server!
#
# There are 3 fields per line in this file. There may be multiple
# lines. Blank lines or lines beginning with '#' are treated as
# comments, and are ignored. The fields are:
#
# server[:port] secret [timeout]
#
# the port name or number is optional. The default port name is
# "radius", and is looked up from /etc/services The timeout field is
# optional. The default timeout is 3 seconds.
#
# If multiple RADIUS server lines exist, they are tried in order. The
# first server to return success or failure causes the module to return
# success or failure. Only if a server fails to respond is it skipped,
# and the next server in turn is used.
#
# The timeout field controls how many seconds the module waits before
# deciding that the server has failed to respond.
#
# server[:port] shared_secret timeout (s)
#127.0.0.1 secret 1
#other-server other-secret 3
192.168.0.147:1812 testing123 1
#
# having localhost in your radius configuration is a Good Thing.
#
# See the INSTALL file for pam.conf hints.
Try grep -o '^[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}' /etc/raddb/server. The leading ^ anchors the match to the start of the line, so commented lines (which begin with #) are skipped.
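If valid entries may be indented, or the address is not the very first thing on the line, a sketch of an alternative is to drop comment lines first and then extract (same pattern, just piped through grep -v):
grep -v '^[[:space:]]*#' /etc/raddb/server | grep -o '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}'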