I'm having a hard time figuring out what the following command is doing.
I'm trying to monitor different services on Linux using systemctl, and I need JSON output listing all the services on the machine.
The problem is that with this command the status output is "enabled enabled". I only need the first field (STATE), and I can't manage to drop the second one (VENDOR PRESET), mainly because I don't understand the command. I know the sed part is replacing some strings, but with so many escaped characters it isn't readable to me.
echo "{\"data\":[$(systemctl list-unit-files --type=service|grep \.service|grep -v "#"|sed -E -e "s/\.service\s+/\",\"{#STATUS}\":\"/;s/(\s+)?$/\"},/;s/^/{\"{#NAME}\":\"/;$ s/.$//")]}"
Result:
"data": [{
"{#NAME}": "accounts-daemon",
"{#STATUS}": "enabled enabled"
},
{
"{#NAME}": "acpid",
"{#STATUS}": "disabled enabled"
}, {
"{#NAME}": "zabbix-agent",
"{#STATUS}": "enabled enabled"
}
]
}
Expected result:
"data": [{
"{#NAME}": "accounts-daemon",
"{#STATUS}": "enabled"
},
{
"{#NAME}": "acpid",
"{#STATUS}": "disabled"
}, {
"{#NAME}": "zabbix-agent",
"{#STATUS}": "enabled"
}
]
}
Command without "sed": systemctl list-unit-files --type=service
UNIT FILE                STATE     VENDOR PRESET
accounts-daemon.service  enabled   enabled
acpid.service            disabled  enabled
zabbix-agent             static    enabled
The relevant substitution in your command is
s/(\s+)?$/
Replace it with one that deletes everything from the first whitespace separator (\s) onward, that is
s/\s.*$/
The modified command becomes
echo "{\"data\":[$(systemctl list-unit-files --type=service|grep \.service|grep -v "#"|sed -E -e "s/\.service\s+/\",\"{#STATUS}\":\"/;s/\s.*$/\"},/;s/^/{\"{#NAME}\":\"/;$ s/.$//")]}"
I'm trying to read the value from value.txt, which I think works fine in Ansible. The file looks exactly like this:
[ec2-user@ip-192-168-1-45]$ cat value.txt
this-is-the-value
I would like to use that value as the replacement for the "CHANGE_ME" keywords in file.txt:
[ec2-user@ip-192-168-1-45]$ cat file.txt
asdasdh kajsdlkjasdlk CHANGE_ME ajsdlkjasdlkjasd
asdkjhakjsd: CHANGE_ME
jasdlkjadsl{
aksldjlkasd: CHANGE_ME
}
I'm using this Ansible playbook to combine the two steps, but it doesn't seem to replace the "CHANGE_ME" strings when I check file.txt afterwards:
- name: check output
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: cat the file
      shell: "cat value.txt"
      register: cat_value

    - debug: var=cat_value.stdout

    - name: modify file.txt
      replace:
        regexp: "{{ cat_value.stdout }}"
        replace: "CHANGE_ME"
        path: "{{ playbook_dir }}/file.txt"
The OUTPUT goes like this
[ec2-user@ip-192-168-1-45]$ ansible-playbook ansible.yml
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [check output] ******************************************************************************************************************************************************************************
TASK [cat the file] ******************************************************************************************************************************************************************************
changed: [localhost]
TASK [debug] *************************************************************************************************************************************************************************************
ok: [localhost] => {
"cat_value.stdout": "this-is-the-value"
}
TASK [modify file.txt] ***************************************************************************************************************************************************************************
ok: [localhost]
PLAY RECAP ***************************************************************************************************************************************************************************************
localhost : ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
[ec2-user@ip-192-168-1-45]$ cat file.txt
asdasdh kajsdlkjasdlk CHANGE_ME ajsdlkjasdlkjasd
asdkjhakjsd: CHANGE_ME
jasdlkjadsl{
aksldjlkasd: CHANGE_ME
}
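In the replace module, regexp is the pattern to search for and replace is the text that goes in its place, so the two values above appear to be swapped. A minimal sketch of the corrected task, keeping the cat/register step from the playbook above:

    - name: modify file.txt
      replace:
        path: "{{ playbook_dir }}/file.txt"
        regexp: "CHANGE_ME"                 # the literal text to find
        replace: "{{ cat_value.stdout }}"   # the value that should go in its place

As an aside, the shell/cat task could be avoided entirely with something like "{{ lookup('file', playbook_dir + '/value.txt') }}", but that is optional.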
I want to remove the \" and \r\ sequences from a variable's content using Ansible.
I have the following data in a variable named result, which I registered from the output of the previous task:
curl -s -H \"Authorization: JWT eyJ4NWMiOlsiTUlJQytqQ0NBHuHO96csEQ\r\" https://hub.docker.com/v2/repositories/talasecurityinc/?page_size=10000 | jq -r '.results|.[]|.name'
In the above content I want to replace the \ and \r\ with nothing (an empty string).
I have tried the way below, but it doesn't work for me:
- set_fact: final_out="{{result | replace('\', "") | replace('\r\', '')}}"
The expected output is
curl -s -H "Authorization: JWT eyJ4NWMiOlsiTUlJQytqQ0NBHuHO96csEQ" https://hub.docker.com/v2/repositories/talasecurityinc/?page_size=10000 | jq -r '.results|.[]|.name'
An example playbook snippet would be helpful since I am new to Ansible.
Escaping Hell....
I was not able to get it working with the replace filter, probably because I didn't try hard/smart enough. Meanwhile, in your specific case you can achieve the expected result with a single regex_replace filter call, so it was easier (and it worked right away :)).
I used YAML folded blocks (>) with whitespace control (-) to minimize the escaping hassle. If you don't know what that is, have a look at a YAML doc (Learn YAML in Y minutes is my favorite one).
Note that the remaining backslashes in the last result below are added by Ansible to escape the double quotes in the output.
---
- name: Escape chars
  hosts: localhost
  gather_facts: false

  vars:
    test: >-
      curl -s -H \"Authorization: JWT eyJ4NWMiOlsiTUlJQytqQ0NBHuHO96csEQ\r\"
      https://hub.docker.com/v2/repositories/talasecurityinc/?page_size=10000
      | jq -r '.results|.[]|.name'

  tasks:
    - name: Show the untouched var
      debug:
        var: test

    - name: Escape the var as intended
      debug:
        msg: >-
          {{ test | regex_replace('\\r?\\?', '') }}
which results in
PLAY [Escape chars] ********************************************************************
TASK [Show the untouched var] **********************************************************
ok: [localhost] => {
"test": "curl -s -H \\\"Authorization: JWT eyJ4NWMiOlsiTUlJQytqQ0NBHuHO96csEQ\\r\\\" https://hub.docker.com/v2/repositories/talasecurityinc/?page_size=10000 | jq -r '.results|.[]|.name'"
}
TASK [Escape the var as intended] ******************************************************
ok: [localhost] => {
"msg": "curl -s -H \"Authorization: JWT eyJ4NWMiOlsiTUlJQytqQ0NBHuHO96csEQ\" https://hub.docker.com/v2/repositories/talasecurityinc/?page_size=10000 | jq -r '.results|.[]|.name'"
}
PLAY RECAP *****************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0
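If the cleaned value is needed by later tasks rather than just printed, the same filter can feed a set_fact (a sketch reusing the regex above; final_out is the variable name from the question):

    - name: Store the cleaned command for later tasks
      set_fact:
        final_out: >-
          {{ test | regex_replace('\\r?\\?', '') }}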
I am using Ansible to deploy my project and I am trying to check whether a specified package is installed, but I have a problem with the task. Here it is:
- name: Check if python-apt is installed
  command: dpkg -l | grep python-apt
  register: python_apt_installed
  ignore_errors: True
And here is the problem:
$ ansible-playbook -i hosts idempotent.yml
PLAY [lxc-host] ***************************************************************
GATHERING FACTS ***************************************************************
ok: [10.0.3.240]
TASK: [idempotent | Check if python-apt is installed] *************************
failed: [10.0.3.240] => {"changed": true, "cmd": ["dpkg", "-l", "|", "grep", "python-apt"], "delta": "0:00:00.015524", "end": "2014-07-10 14:41:35.207971", "rc": 2, "start": "2014-07-10 14:41:35.192447"}
stderr: dpkg-query: error: package name in specifier '|' is illegal: must start with an alphanumeric character
...ignoring
PLAY RECAP ********************************************************************
10.0.3.240 : ok=2 changed=1 unreachable=0 failed=0
Why is the character '|' illegal?
From the doc:
command - Executes a command on a remote node
The command module takes the command name followed by a list of
space-delimited arguments. The given command will be executed on all
selected nodes. It will not be processed through the shell, so
variables like $HOME and operations like "<", ">", "|", and "&" will
not work (use the shell module if you need these features).
shell - Executes commands in nodes
The shell module takes the command name followed by a list of space-delimited arguments.
It is almost exactly like the command module but runs the command
through a shell (/bin/sh) on the remote node.
Therefore you have to use shell: dpkg -l | grep python-apt.
Read about the command module in the Ansible documentation:
It will not be processed through the shell, so .. operations like "<", ">", "|", and "&" will not work
As it recommends, use the shell module:
- name: Check if python-apt is installed
  shell: dpkg -l | grep python-apt
  register: python_apt_installed
  ignore_errors: True
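The registered result can then drive later tasks; for example, a sketch that installs the package only when the grep found nothing (grep exits non-zero when there is no match, so rc != 0 means the package is missing):

- name: Install python-apt when the check above did not find it
  apt:
    name: python-apt
    state: present
  when: python_apt_installed.rc != 0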
For what it's worth, you can check/confirm the installation in a Debian environment using the apt module:
- name: ensure python-apt is installed
  apt: name=python-apt state=present
I'd like to set up Loggly to run on AWS Elastic Beanstalk, but can't find any information on how to do this. Is there any guide anywhere, or some general guidance on how to start?
This is how I do it for papertrailapp.com (which I prefer over Loggly). In your .ebextensions folder (see more info) you create logs.config, where you specify:
container_commands:
  01-set-correct-hostname:
    command: hostname www.example.com
  02-forward-rsyslog-to-papertrail:
    # https://papertrailapp.com/systems/setup
    command: echo "*.* @logs.papertrailapp.com:55555" >> /etc/rsyslog.conf
  03-enable-remote-logging:
    command: echo -e "\$ModLoad imudp\n\$UDPServerRun 514\n\$ModLoad imtcp\n\$InputTCPServerRun 514\n\$EscapeControlCharactersOnReceive off" >> /etc/rsyslog.conf
  04-restart-syslog:
    command: service rsyslog restart
55555 should be replaced with the UDP port number provided by papertrailapp.com. This config will be applied every time a new instance bootstraps. Then, in your log4j.properties:
log4j.rootLogger=WARN, SYSLOG
log4j.appender.SYSLOG=org.apache.log4j.net.SyslogAppender
log4j.appender.SYSLOG.facility=local1
log4j.appender.SYSLOG.header=true
log4j.appender.SYSLOG.syslogHost=localhost
log4j.appender.SYSLOG.layout=org.apache.log4j.PatternLayout
log4j.appender.SYSLOG.layout.ConversionPattern=[%p] %t %c: %m%n
I'm not sure whether it's an optimal solution. Read more about this mechanism in jcabi-beanstalk-maven-plugin
You can also use the installation script from loggly itself.
The setup below follows the instructions for the legacy setup on https://www.loggly.com/docs/configure-syslog-script/ with minor changes (no confirmation prompts, sudo command replaced since no tty is available)
(edit: updated link, seems to be an outdated solution now in loggly docs)
Place the following script in .ebextensions/loggly.config
Replace TOKEN and ACCOUNT with your own.
#
# Install loggly.com on AWS Elastic Beanstalk
# Tested with node.js environment
# Save this file as .ebextensions/loggly.config
# Deploy per normal scripts or aws.push. To help debug the push, ssh & tail /var/log/cfn-init.log
# See Also /var/log/eb-tools.log
#
commands:
  01_loggly_dl:
    command: wget -q -O /tmp/loggly.py https://www.loggly.com/install/configure-syslog.py
  02_loggly_config:
    command: su --session-command="python /tmp/loggly.py setup --auth TOKEN --account ACCOUNT --yes"
Here is a link to the Loggly support site for using syslogd with Loggly:
http://wiki.loggly.com/loggingconfiguration
or for using the Loggly API with your own app:
http://wiki.loggly.com/apidocumention
Here is an Elastic Beanstalk config for Loggly that I've just started using, thanks to pointers from this thread and the logging SaaS vendors' setup instructions. [Loggly Config Mgmt, Papertrail rsyslog]
Save the file as loggly.config in the .ebextensions directory and make sure to check the YAML formatting conventions (no tabs, etc.). Substitute your Loggly TCP port number, username, password and domain name into the angle brackets as required.
Note that for the AWS Ruby versions of Elastic Beanstalk, there may be differences in the EC2 /etc/rsyslog setup. For example, if /etc/rsyslog.d already exists and there is already an "$IncludeConfig /etc/rsyslog.d/*.conf" directive, then the command "01-forward-rsyslog-to-loggly:" can be removed.
Deploy per normal scripts or aws.push. To help debug the push, ssh & tail /var/log/cfn-init.log
files:
  "/etc/rsyslog.d/90-loggly.conf":
    mode: "000664"
    owner: root
    group: root
    content: |
      # ### begin forwarding rule ###
      # The statement between the begin ... end define a SINGLE forwarding
      # rule. They belong together, do NOT split them. If you create multiple
      # forwarding rules, duplicate the whole block!
      # Remote Logging (we use TCP for reliable delivery)
      #
      # An on-disk queue is created for this action. If the remote host is
      # down, messages are spooled to disk and sent when it is up again.
      $WorkDirectory /var/lib/rsyslog # where to place spool files
      $ActionQueueFileName fwdRule1 # unique name prefix for spool files
      $ActionQueueMaxDiskSpace 1g # 1gb space limit (use as much as possible)
      $ActionQueueSaveOnShutdown on # save messages to disk on shutdown
      $ActionQueueType LinkedList # run asynchronously
      $ActionResumeRetryCount -1 # infinite retries if host is down
      *.* @@logs.loggly.com:<yourportnum> # !!!Loggly supplied port number for each app!!!
      # ### end of the forwarding rule ###
    encoding: plain
  "/tmp/loggly.py":
    mode: "000755"
    owner: root
    group: root
    content: |
      import json
      import sys
      import urllib2
      '''
      Auto-authenticate Syslog TCP inputs.
      Usage: python inputs.py -u user -p pass -s subdomain
      '''
      state = None
      params = {}
      for i in range(len(sys.argv)):
          arg = sys.argv[i]
          if state:
              params[state] = arg
              state = None
          if arg == '--username' or arg == '-u':
              state = 'username'
          if arg == '--password' or arg == '-p':
              state = 'password'
          if arg == '--subdomain' or arg == '-s':
              state = 'subdomain'
      url = 'https://%s.loggly.com/api/inputs' % params['subdomain']
      password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
      password_mgr.add_password(None, url, params['username'], params['password'])
      handler = urllib2.HTTPBasicAuthHandler(password_mgr)
      opener = urllib2.build_opener(handler)
      opener.open(url)
      urllib2.install_opener(opener)
      inputs = json.loads(urllib2.urlopen(url).read())
      for input in inputs:
          if input['service']['name'] == 'syslogtcp':
              url = 'https://%s.loggly.com/api/inputs/%d/adddevice' % \
                  (params['subdomain'], input['id'])
              response = urllib2.urlopen(url, {}).read()
              print response
    encoding: plain

commands:
  01-forward-rsyslog-to-loggly:
    # http://loggly.com/support/sending-data/logging-from/syslog/rsyslog/cd
    command: test "$(grep -s '90-loggly.conf' /etc/rsyslog.conf)" == "" && echo -e "\n# Include the loggly.conf file\n\$IncludeConfig /etc/rsyslog.d/90-loggly.conf" >> /etc/rsyslog.conf
  02-restart-syslog:
    command: service rsyslog restart
  03-inform_loggly:
    command: "python /tmp/loggly.py -u <Yourloginname> -p <Yourpassword> -s <Yourdomainname>"
Typically, /etc/rsyslog.conf will have an "$IncludeConfig /etc/rsyslog.d/*.conf" directive at the end, so you can simply introduce your own configuration file using the "files:" portion of your .ebextensions file. This works whether you are deploying to fresh servers or not.
For a Ruby production.log, you might have something like this in a .ebextensions/01loggly.config file. Note that this also picks up your Beanstalk environment name as a Loggly tag.
# For docs on eb configs, see http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
# This set of commands sets up loggly forwarding
files:
  "/etc/rsyslog.d/myapp-loggly.conf":
    mode: "000664"
    owner: root
    group: root
    content: |
      $template LogglyFormat,"<%pri%>%protocol-version% %timestamp:::date-rfc3339% %HOSTNAME% %app-name% %procid% %msgid% [yourlogglyid@41058 tag=`{ "Ref" : "AWSEBEnvironmentName" }`] %msg%\n"
      *.* @@logs-01.loggly.com:514;LogglyFormat
      # One time config
      $ModLoad imfile
      $InputFilePollInterval 10
      $PrivDropToGroup adm
      $WorkDirectory /var/spool/rsyslog
      # Add a tag for file events
      # For production.log
      $InputFileName /var/app/support/logs/production.log
      $InputFileTag production-log
      $InputFileStateFile stat-production-log #this must be unique for each file being polled
      $InputFileSeverity info
      $InputFilePersistStateInterval 20000
      $InputRunFileMonitor
      # Send to Loggly then discard
      if $programname == 'myapp-production-log' then @@logs-01.loggly.com:514;LogglyFormat
      if $programname == 'myapp-production-log' then ~
    encoding: plain

commands:
  00-make-work-directory:
    command: mkdir -p /var/spool/rsyslog
  01-restart-syslog:
    command: service rsyslog restart
For Tomcat, you might do something like this in a .ebextensions/01logglyg.config file:
# For docs on eb configs, see http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
# This set of commands sets up loggly forwarding
files:
  "/etc/rsyslog.d/mytomcatapp-loggly.conf":
    mode: "000664"
    owner: root
    group: root
    content: |
      $template LogglyFormat,"<%pri%>%protocol-version% %timestamp:::date-rfc3339% %HOSTNAME% %app-name% %procid% %msgid% [yourlogglygidhere@41058 tag=`{ "Ref" : "AWSEBEnvironmentName" }`] %msg%\n"
      *.* @@logs-01.loggly.com:514;LogglyFormat
      # One time config
      $ModLoad imfile
      $InputFilePollInterval 10
      $PrivDropToGroup adm
      $WorkDirectory /var/spool/rsyslog
      # catalina.log
      $InputFileName /var/log/tomcat7/catalina.log
      $InputFileTag catalina-log
      $InputFileStateFile stat-catalina-log
      $InputFileSeverity info
      $InputFilePersistStateInterval 20000
      $InputRunFileMonitor
      if $programname == 'catalina-log' then @@logs-01.loggly.com:514;LogglyFormat
      if $programname == 'catalina-log' then ~
      # catalina.out
      $InputFileName /var/log/tomcat7/catalina.out
      $InputFileTag catalina-out
      $InputFileStateFile stat-catalina-out
      $InputFileSeverity info
      $InputFilePersistStateInterval 20000
      $InputRunFileMonitor
      if $programname == 'catalina-out' then @@logs-01.loggly.com:514;LogglyFormat
      if $programname == 'catalina-out' then ~
      # host-manager.log
      $InputFileName /var/log/tomcat7/host-manager.log
      $InputFileTag host-manager
      $InputFileStateFile stat-host-manager
      $InputFileSeverity info
      $InputFilePersistStateInterval 20000
      $InputRunFileMonitor
      if $programname == 'host-manager' then @@logs-01.loggly.com:514;LogglyFormat
      if $programname == 'host-manager' then ~
      # initd.log
      $InputFileName /var/log/tomcat7/initd.log
      $InputFileTag initd
      $InputFileStateFile stat-initd
      $InputFileSeverity info
      $InputFilePersistStateInterval 20000
      $InputRunFileMonitor
      if $programname == 'initd' then @@logs-01.loggly.com:514;LogglyFormat
      if $programname == 'initd' then ~
      # localhost.log
      $InputFileName /var/log/tomcat7/localhost.log
      $InputFileTag localhost-log
      $InputFileStateFile stat-localhost-log
      $InputFileSeverity info
      $InputFilePersistStateInterval 20000
      $InputRunFileMonitor
      if $programname == 'localhost-log' then @@logs-01.loggly.com:514;LogglyFormat
      if $programname == 'localhost-log' then ~
      # manager.log
      $InputFileName /var/log/tomcat7/manager.log
      $InputFileTag manager
      $InputFileStateFile stat-manager
      $InputFileSeverity info
      $InputFilePersistStateInterval 20000
      $InputRunFileMonitor
      if $programname == 'manager' then @@logs-01.loggly.com:514;LogglyFormat
      if $programname == 'manager' then ~
    encoding: plain

commands:
  00-make-work-directory:
    command: mkdir -p /var/spool/rsyslog
  01-restart-syslog:
    command: service rsyslog restart
This config is working for me, though I haven't yet determined how to get multi-line entries to come through as a single entry in Loggly.
I know this question is fairly old, but I found that the answers didn't really answer the question or just plain didn't work correctly when implemented. I found that this works (file .ebextensions/02loggly.config):
container_commands:
  01-transform-rsyslog.conf:
    command: sed "s/NODE_ENV/$NODE_ENV/g" scripts/22-loggly.conf.temp > scripts/22-loggly.conf
  02-setup-rsyslog.conf:
    command: cp scripts/22-loggly.conf /etc/rsyslog.d/22-loggly.conf
  03-restart:
    command: /sbin/service rsyslog restart
the "01-transform-rsyslog.conf" step is optional; I use that to set a tag by NODE_ENV in the file. "22-loggly.conf.temp" is a modified version of the "22-loggly.conf" file that gets created at "/etc/rsyslog.d/" when you run the linux source setup script (https://www.loggly.com/install/configure-syslog.py). I just installed it on a ec2 instance and copied the file.
Note I had to prepend '/sbin' to my service command because it was failing for me without it. Also, this restarts syslog on every deploy, which should be fine.
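To illustrate what the sed step does: a hypothetical line in scripts/22-loggly.conf.temp might carry a literal NODE_ENV placeholder in the Loggly tag (the token and tag below are made-up placeholders, not from the original file), and the container command swaps it for the value of $NODE_ENV at deploy time:

# hypothetical fragment of scripts/22-loggly.conf.temp
$template LogglyFormat,"<%pri%>%protocol-version% %timestamp:::date-rfc3339% %HOSTNAME% %app-name% %procid% %msgid% [YOUR_TOKEN@41058 tag=\"NODE_ENV\"] %msg%\n"

# after sed "s/NODE_ENV/$NODE_ENV/g" with NODE_ENV=production, the deployed 22-loggly.conf contains
$template LogglyFormat,"<%pri%>%protocol-version% %timestamp:::date-rfc3339% %HOSTNAME% %app-name% %procid% %msgid% [YOUR_TOKEN@41058 tag=\"production\"] %msg%\n"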
Now you just have to make sure your app logs to syslog. For Java it is going to be log4j or similar. For Node.js (which is what I'm using), rconsole works (https://github.com/tblobaum/rconsole).
None of the things I tried seemed to work, and the Loggly documentation is very confusing!
I hope this will help someone; this is how I got it to work.
Paste the following in .ebextensions/loggly.config
files:
  "/etc/rsyslog.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      $ModLoad imfile
      $InputFilePollInterval 10
      $PrivDropToGroup adm
      # Input for FILE.LOG
      $InputFileName /var/app/current/PATH_TO_YOUR_LOG_FILE
      $InputFileTag social_php:
      $InputFileStateFile stat-social_php #this must be unique for each file being polled
      $InputFileSeverity info
      $InputRunFileMonitor
      #Add a tag for events from this file
      $template LogglyFormatsocial_php,"<%pri%>%protocol-version% %timestamp:::date-rfc3339% %HOSTNAME% %app-name% %procid% %msgid% [TOKEN@41058 tag=\"php_log\"] %msg%\n"
      if $programname == 'social_php' then @@logs.loggly.com:37146;LogglyFormatsocial_php
      if $programname == 'social_php' then ~
      *.* @@logs.loggly.com:37146

commands:
  01-restart-syslog:
    command: service rsyslog restart
Replace all instances of social_php with the tag that makes sense for your application.
Replace /var/app/current/PATH_TO_YOUR_LOG_FILE with your log file location
Here is my Loggly configuration for Elastic Beanstalk, for Linux + log4j.
In the .ebextensions configuration file:
container_commands:
  01_configure_sudo_access:
    command: sed -i -- 's/ requiretty/ \!requiretty/g' /etc/sudoers
  02_loggy_configure:
    command: sudo python .ebextensions/scripts/loggly_config.py
  03_restore_sudo_access:
    command: sed -i -- 's/ \!requiretty/ requiretty/g' /etc/sudoers
The Loggly script in Python, for the default AMI:
import os

rsyslog_path = '/etc/rsyslog.conf'
loggly_file_path = '/etc/rsyslog.d/22-loggly.conf'


class LogglyConfig:
    def __init__(self):
        self.__linux_log()
        self.__config_loggly_for_log4j()

    def __linux_log(self):
        # not installed on this machine
        if not os.path.exists(loggly_file_path):
            os.system('rm -f configure-linux.sh')
            os.system('wget https://www.loggly.com/install/configure-linux.sh')
            os.system('sudo bash configure-linux.sh -a DOMAIN -t TOKEN -u USER -p PASSWORD -s')

    def __config_loggly_for_log4j(self):
        f = open(rsyslog_path, 'r')
        file_text = f.read()
        f.close()

        file_text = file_text.replace('#$ModLoad imudp', '$ModLoad imudp')
        file_text = file_text.replace('#$UDPServerRun 514', '$UDPServerRun 514')

        f = open(rsyslog_path, 'w')
        f.write(file_text)
        f.close()

        os.system('service rsyslog restart')


LogglyConfig()
In log4j.properties in your Java project:
log4j.rootLogger=INFO, SYSLOG
log4j.appender.SYSLOG=org.apache.log4j.net.SyslogAppender
log4j.appender.SYSLOG.SyslogHost=localhost
log4j.appender.SYSLOG.Facility=Local3
log4j.appender.SYSLOG.Header=true
log4j.appender.SYSLOG.layout=org.apache.log4j.PatternLayout
log4j.appender.SYSLOG.layout.ConversionPattern=java %d{ISO8601} %p %t %c{1}.%M - %m%n