Is it possible to reassign a variable's value several times inside an if within one script block?
I have a script block where I need to pass variable values to different environments:
script {
    if (env.DEPLOY_ENV == 'staging') {
        echo 'Run LUX-staging build'
        def ENV_SERVER = ['192.168.141.230']
        def UML_SUFFIX = ['stage-or']
        sh 'ansible-playbook nginx_depl.yml --limit 127.0.0.1'

        echo 'Run STAGE ADN deploy'
        def ENV_SERVER = ['192.168.111.30']
        def UML_SUFFIX = ['stage-sg']
        sh 'ansible-playbook nginx_depl.yml --limit 127.0.0.1'

        echo 'Run STAGE SG deploy'
        def ENV_SERVER = ['stage-sg-pbo-api.example.com']
        def UML_SUFFIX = ['stage-ba']
        sh 'ansible-playbook nginx_depl.yml --limit 127.0.0.1'
    }
}
But I receive an error in the Jenkins job on the second variable assignment:
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
WorkflowScript: 80: The current scope already contains a variable of the name ENV_SERVER
@ line 80, column 11.
def ENV_SERVER = ['192.168.111.30']
^
WorkflowScript: 81: The current scope already contains a variable of the name UML_SUFFIX
@ line 81, column 11.
def UML_SUFFIX = ['stage-sg']
^
Or perhaps there is some other way to do multiple assignments inside one if branch of a script block?
Using def declares the variable, and that is only needed the first time.
So removing def from the subsequent assignments should work:
script {
    if (env.DEPLOY_ENV == 'staging') {
        echo 'Run LUX-staging build'
        def ENV_SERVER = ['192.168.141.230']
        def UML_SUFFIX = ['stage-or']
        sh 'ansible-playbook nginx_depl.yml --limit 127.0.0.1'

        echo 'Run STAGE ADN deploy'
        ENV_SERVER = ['192.168.111.30']
        UML_SUFFIX = ['stage-sg']
        sh 'ansible-playbook nginx_depl.yml --limit 127.0.0.1'

        echo 'Run STAGE SG deploy'
        ENV_SERVER = ['stage-sg-pbo-api.example.com']
        UML_SUFFIX = ['stage-ba']
        sh 'ansible-playbook nginx_depl.yml --limit 127.0.0.1'
    }
}
The variables will be scoped to the if block only, so you won't have access to them outside that block.
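If you do need the values after the if block, a variation on the same idea is to declare the variables once, before the if, and only reassign them inside it; a minimal sketch reusing the names from the question:

script {
    // declared once at script-block scope, so they stay visible after the if
    def ENV_SERVER = []
    def UML_SUFFIX = []
    if (env.DEPLOY_ENV == 'staging') {
        ENV_SERVER = ['192.168.141.230']
        UML_SUFFIX = ['stage-or']
        sh 'ansible-playbook nginx_depl.yml --limit 127.0.0.1'
    }
    // still accessible here because the declaration sits outside the if
    echo "Last values used: ${ENV_SERVER} / ${UML_SUFFIX}"
}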
I'm wondering if it's possible to know when the script in user data has finished executing.
data "template_file" "script" {
template = file("${path.module}/installing.sh")
}
data "template_cloudinit_config" "config" {
gzip = false
base64_encode = false
# Main cloud-config configuration file.
part {
filename = "install.sh"
content = "${data.template_file.script.rendered}"
}
}
resource "aws_instance" "web" {
ami = "ami-04e7b4117bb0488e4"
instance_type = "t2.micro"
key_name = "KEY"
vpc_security_group_ids = [aws_default_security_group.default.id]
subnet_id = aws_default_subnet.default_az1.id
associate_public_ip_address = true
iam_instance_profile = "Role_S3"
user_data = data.template_cloudinit_config.config.rendered
tags = {
Name = "Terraform-Ansible"
}
}
The content of the script is shown below.
Terraform tells me it successfully applied the changes, but the script is still running; is there a way I can monitor that?
#!/usr/bin/env bash
exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1
echo BEGIN
sudo apt update
sudo apt upgrade -y
sudo apt install -y unzip
echo END
No, you cannot confirm the user data status from Terraform itself, because Terraform only submits the launch script, which runs after the EC2 instance has launched. With some extra effort in the init script, though, there is a way to check:
How to check User Data status while launching the instance in AWS
If you do something like what is mentioned above to create a marker file once the user data has completed, then you can use the following to check it:
resource "null_resource" "user_data_status_check" {
provisioner "local-exec" {
on_failure = "fail"
interpreter = ["/bin/bash", "-c"]
command = <<EOT
echo -e "\x1B[31m wait for few minute for instance warm up, adjust accordingly \x1B[0m"
# wait 30 sec
sleep 30
ssh -i yourkey.pem instance_ip ConnectTimeout=30 -o 'ConnectionAttempts 5' test -f "/home/user/markerfile.txt" && echo found || echo not found
if [ $? -eq 0 ]; then
echo "user data sucessfully executed"
else
echo "Failed to execute user data"
fi
EOT
}
triggers = {
#remove this once you test it out as it should run only once
always_run ="${timestamp()}"
}
depends_on = ["aws_instance.my_instance"]
}
So this script checks for the marker file on the newly launched server over SSH, with a 30-second connection timeout and a maximum of 5 attempts.
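For the marker-file part itself, a minimal sketch of what to append to the end of your user_data script (the path /home/user/markerfile.txt is just the one the check above looks for; adjust both to match):

#!/usr/bin/env bash
# ... your existing provisioning steps ...
# drop a marker file as the very last step, so an external check
# (like the null_resource above) can tell the script has finished
touch /home/user/markerfile.txt
echo END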
Here are some pointers to remember:
User data shell scripts must start with the Shebang #! characters and the path to the interpreter you want to read the script (commonly /bin/bash).
Scripts entered as user data are run as the root user, so no need to use the sudo command in the init script.
When a user data script is processed, it is copied to and run from /var/lib/cloud/instances/instance-id/. The script is not deleted after it runs and can be found in this directory with the name user-data.txt. So to check whether your shell script made it to the server, look in this directory for that file.
The cloud-init output log file (/var/log/cloud-init-output.log) captures the console output of your user_data shell script. To see how your user_data shell script was executed and what it printed, check this file.
Source: https://www.middlewareinventory.com/blog/terraform-aws-ec2-user_data-example/
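In practice, the two locations from the last pointers above are the quickest way to watch the script by hand after SSHing into the instance:

# the copy of the user_data script that cloud-init actually ran (the instance-id directory varies)
sudo ls /var/lib/cloud/instances/*/
# follow the console output of the user_data script while it runs
sudo tail -f /var/log/cloud-init-output.log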
Well, I use these two ways to confirm it.
First: at the end of the cloud-init config file, this line sends me a notification through WhatsApp (using CallMeBot). So no matter how long the setup takes, I always get notified when it's ready to use. I watch a series or read something in the meantime; no time wasted.
curl -X POST "https://api.callmebot.com/whatsapp.php?phone=12345678910&text=Ec2+transcoder+setup+complete&apikey=12345"
Second: at the end of the cloud-init config, this runs:
echo "for faster/visual confirmation of above execution.."
wget https://www.sample-videos.com/video123/mp4/720/big_buck_bunny_720p_1mb.mp4 -O /home/ubuntu/dpnd_comp.mp4
When I sign in to the instance I can see the file right away.
And I'm loving it. Hope this helps someone. Also, don't forget to tell me your method too.
We need to return, as an option in this Jenkinsfile, only the revision number of our k8s applications, but the command returns the entire output, and none of my regexes and escapes on the command were working. Here is the code:
choiceType: 'PT_SINGLE_SELECT',
description: 'Revision of the application on kubernetes',
name: 'revision',
omitValueField: false,
randomName: 'choice-parameter-5633384460832177',
referencedParameters: 'namespaces,deployment',
script: [
    $class: 'GroovyScript',
    script: [
        classpath: [],
        sandbox: true,
        script: """
            if (namespaces.equals("Select")){
                return["Nothing to do - Select your deployment"]
            } else {
                def revResult = null
                def kubecmd0 = "kubectl rollout history deploy --kubeconfig=${kubefilePrd} -n " + namespaces + " " + deployment + " "
                def kubecmd1 = kubecmd0.execute().in.text.split().toList()
                return kubecmd1
            }
        """
    ]
On the Jenkins job: [screenshot]
Is there any function or magic regex that could solve this?
Problem solved:
[$class: 'CascadeChoiceParameter',
    choiceType: 'PT_SINGLE_SELECT',
    description: 'Revision of the application on kubernetes',
    name: 'revision',
    randomName: 'choice-parameter-5633384460832177',
    referencedParameters: 'namespaces,deployment',
    script: [
        $class: 'GroovyScript',
        script: [
            classpath: [],
            sandbox: true,
            script: """
                if (namespaces.equals("Select")){
                    return["Nothing to do - Select your deployment"]
                } else {
                    def command = "kubectl rollout history deploy --kubeconfig=${kubefilePrd} -n " + namespaces + " " + deployment + "| grep -v REVISION | grep -v deployment | cut -f1 -d' '"
                    def output = ['bash', '-c', command].execute().in.text
                    return output.split().toList()
                }
            """
        ]
Basically, it's necessary to call bash rather than executing the command string with Groovy directly. Works for me. :)
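The reason is that calling execute() on a plain string starts the process directly, so shell syntax such as the pipe and cut is passed to kubectl as literal arguments instead of being interpreted; wrapping the command in ['bash', '-c', ...] hands it to a real shell first. A tiny illustration of the difference (the echo command here is just an example):

// the pipe is NOT interpreted: echo receives "hi", "|", "tr", "a-z", "A-Z" as plain arguments
println "echo hi | tr a-z A-Z".execute().text

// the same string run through a shell: the pipe works and the output is upper-cased
println(['bash', '-c', 'echo hi | tr a-z A-Z'].execute().text)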
I am trying to reference variables that are set in my jenkinsfile in my serverless.yml file.
In the Jenkinsfile I have this:
environment {
    HELLO = 'hello-world'
}
In the serverless.yml file I have this:
custom:
  secret: ${env:HELLO}
When running the Jenkins pipeline I get this error:
A valid environment variable to satisfy the declaration 'env:HELLO' could not be found.
Here is my full Jenkinsfile as requested. The end goal is to use val1 and val2 as env variables, but if I can figure out how to do it with hello-world it is the same thing.
import com.lmig.intl.cloud.jenkins.exception.BuildException

def getJobName() {
    return env.JOB_NAME
}

environment {
    HELLO = 'hello-world'
}

def getEnvironment() {
    def jobName = getJobName().split('/')
    def environment = jobName[1].toLowerCase()
    return environment.toLowerCase()
}

node('linux'){
    stage('Checkout'){
        checkout scm
    }
    stage('Pull Secrets From Vault'){
        withAWS(credentials:'aws-cred'){
            def secret = vaultPullSecrets(app:"sls-auxiliary-service",appenv:"nonprod",runtime:'nonprod',keys:'["saslusername","saslpassword"]')
            def val1 = new groovy.json.JsonSlurper().parseText(secret)[0].SASLUSERNAME
            def val2 = new groovy.json.JsonSlurper().parseText(secret)[1].SASLPASSWORD
            if(val1 != '' && val2 != ''){
                echo "Vault Secret pulled Successfully"
            }else{
                echo "Vault Secret Not Found"
                throw new BuildException("Vault Secret Not Found")
            }
        }
    }
    stage('Deploy') {
        def ENVIRONMENT = getEnvironment().replaceAll("\\_","")
        withAWS(credentials:'aws-cred') {
            sh 'npm i serverless-python-requirements'
            sh 'npm install --save-dev serverless-step-functions'
            sh 'npm install serverless-deployment-bucket --save-dev'
            sh 'npm i serverless-pseudo-parameters'
            sh 'npm i serverless-plugin-resource-tagging'
            sh 'pip3 install --user -r requirements.txt'
            sh "serverless deploy --stage ${ENVIRONMENT}"
        }
    }
}
You can use sed to replace the placeholder ${env:HELLO} with the real value, if you can make the Jenkins job always execute on a Linux agent.
stage('Pull Secrets From Vault'){
    withAWS(credentials:'aws-cred'){
        def secret = vaultPullSecrets(app:"sls-auxiliary-service",appenv:"nonprod",runtime:'nonprod',keys:'["saslusername","saslpassword"]')
        def val1 = new groovy.json.JsonSlurper().parseText(secret)[0].SASLUSERNAME
        sh """
            sed -i 's/\${env:HELLO}/${val1}/' <relative path to>/serverless.yml
        """
    }
}
I did a quick test with the following simple pipeline, and the sed command I gave works well.
node('docker') {
    stage('A') {
        sh '''
            set +x
            echo 'custom:' > serverless.yml
            echo ' secret: ${env:HELLO}' >> serverless.yml
            echo '### Before replace ###'
            cat serverless.yml
        '''
        def val1 = 'hello'
        sh """
            set +x
            sed -i 's/\${env:HELLO}/${val1}/' ./serverless.yml
            echo '### After replace ###'
            cat serverless.yml
        """
    }
}
Output of the job build:
[script-pipeline-practice] Running shell script
+ set +x
### Before replace ###
custom:
secret: ${env:HELLO}
[Pipeline] sh
[script-pipeline-practice] Running shell script
+ set +x
### After replace ###
custom:
secret: hello
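An alternative worth noting: the Serverless Framework resolves ${env:HELLO} from the process environment at deploy time, and the environment { } block is a Declarative Pipeline directive, so in a scripted pipeline like the one above it typically won't export HELLO to shell steps on its own. A minimal sketch of setting the variable around the deploy step instead of rewriting serverless.yml (value taken from the question):

stage('Deploy') {
    def ENVIRONMENT = getEnvironment().replaceAll("\\_","")
    withAWS(credentials:'aws-cred') {
        // withEnv makes HELLO visible to every sh step inside the closure,
        // so serverless can resolve ${env:HELLO} without editing the file
        withEnv(['HELLO=hello-world']) {
            sh "serverless deploy --stage ${ENVIRONMENT}"
        }
    }
}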
This is the error I am receiving:
[*] Exception! Exiting.
Traceback (most recent call last):
File "bhnet.py", line 59, in <module>
client.close()
AttributeError: 'module' object has no attribute 'close'
Below is the code, straight from the book I am following. Can anyone tell me what is going on?
import sys, socket, getopt, threading, subprocess

#def some global variables
listen = False
command = False
upload = False
execute = ""
target = ""
upload_destination = ""
port = 0
client = socket

def usage():
    print "bhp net tool"
    print
    print "usage:bhpnet.py -t target_host -p port"
    print "-l --listen              -listen on [host]:[port] for incoming connections"
    print "-e --execute=file_to_run -execute the given file upon - receiving a connection"
    print "-c --command             -initialize a command shell"
    print "-u --upload=destination  -upon receiving connection upload a -file and write to [destination]"
    print
    print
    print "examples:"
    print "bhpnet.py -t 192.168.0.1 -p 5555 -l -c"
    print "bhpnet.py -t 192.168.0.1 -p 5555 -l -u =c:\\target.exe"
    print "bhpnet.py -t 192.168.0.1 -p 5555 -l -e=\"cat /etc.passwd\""
    print "echo 'python' | ./bhpnet.py -t 192.168.8.135 -p 135"
    sys.exit(0)

def client_sender(buffer):
    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        #connect to our target host
        client.connect((target,port))
        if len(buffer):
            client.send(buffer)
        while True:
            #now wait for data back
            recv_len = 1
            response = ""
            while recv_len:
                data = client.recv(4096)
                recv_len = len(data)
                response += data
                if recv_len < 4096:
                    break
            print response,
            #wait for more input
            buffer = raw_input("")
            buffer += "\n"
            #send it off
            client.send(buffer)
    except:
        print "[*] Exception! Exiting."
        # tear down the connection
        client.close()
        enterkey()

def server_loop():
    global target
    # if no target is defined, we listen on all interfaces
    if not len(target):
        target = "0.0.0.0"
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind((target,port))
    server.listen(5)
    while True:
        client.socket, addr = server.accept()
        #spin off a thread to handle our new client
        client_thread = threading.Thread(target=client_handler, args=(client_socket,))
        client_thread.start()

def run_command(command):
    #trim the newline
    command = command.rstrip()
    #run the command and get the output back
    try:
        output = subprocess.check_output(command, stderr=subprocess.STDOUT, shell=True)
    except:
        output = "Failed to execute command.\r\n"
    #send the output back to the client
    return output

def client_handler(client_socket):
    global upload
    global command
    global execute
    #check for upload
    if len(upload_destination):
        #read in all of our bytes and write to our destination
        file_buffer = ""
        # keep reading data until none is available
        while True:
            data = client_socket.recv(1024)
            if not data:
                break
            else:
                file_buffer += data
        # now we take these bytes and try to write them out
        try:
            file_descriptor = open(upload_destination, "wb")
            file_descriptor.write(file_buffer)
            file_descriptor.close()
            # acknowledge that we wrote the file out
        except:
            client_socket.send("Failed to save file to %s\r\n" % upload_destination)
    #check for command execution
    if len(execute):
        # run the command
        output = run_command(execute)
        client_socket.send(output)
    # now we go into another loop if a command shell was requested
    if command:
        while True:
            #show a simple prompt
            client_socket.send("<BHP:#> ")
            # now we recive until we see a linefeed
            cmd_buffer = ""
            while "\n" not in cmd_buffer:
                cmd_buffer += client_socket.recv(1024)
            # send back the command output
            response = run_command(cmd_buffer)
            #send back the response
            client_socket.send(response)

def main():
    global listen
    global port
    global execute
    global command
    global upload_destination
    global target
    if not len(sys.argv[1]):
        usage()
    #read the commandline options
    try:
        opts, args = getopt.getopt(sys.argv[1:], "hle:t:p:cu:", ["help", "listen", "execute", "target", "port", "command", "upload"])
    except getopt.GetoptError as err:
        print str(err)
        usage()
    for o,a in opts:
        if o in ("-h", "--help"):
            usage()
        elif o in ("-l", "--listen"):
            listen = True
        elif o in ("-e", "--execute"):
            execute = a
        elif o in ("-c", "--commandshell"):
            command = True
        elif o in ("-u", "--upload"):
            upload_destination = a
        elif o in ("-t", "--target"):
            target = a
        elif o in ("-p", "--port"):
            port = int(a)
        else:
            assert False, "unhandled option"
    #are we going to listen or just send data from stdin?
    if not listen and len(target) and port > 0:
        #read in the buffer from the commandline
        #this will block, so send ctrl-D if not sending input
        #to stdin
        buffer = sys.stdin.read()
        #send data off
        client_sender(buffer)
    #are we going to listen and potentially
    #upload things, execute commands, and drop a shell back
    #depending on our command line options above
    if listen:
        server_loop()
main()
#This code is tested and will work for you
import sys
import socket
import getopt
import threading
import subprocess
import pdb

# globals
listen = False
command = False
upload = False
execute = ""
target = ""
upload_destination = ""
port = 0

def usage():
    print "BHP Net Tool"
    print
    print "Usage: bhpnet.py -t target_host -p port"
    print "-l --listen              listen on [host]:[port] for incoming connections"
    print "-e --execute=file_to_run execute the given file upon receiving a connection"
    print "-c --command             initialize a command shell"
    print "-u --upload              upon receiving connection upload a file and write to [destination]"
    print
    print
    print "Examples: "
    print "bhpnet.py -t 192.168.0.1 -p 5555 -l -c"
    print "bhpnet.py -t 192.168.0.1 -p 5555 -l -u=c:\\target.exe"
    print "bhpnet.py -t 192.168.0.1 -p 5555 -l -e='cat /etc/passwd'"
    print "echo 'ABCDEFGHI' | ./bhpnet.py -t 192.168.11.12 -p 135"
    sys.exit(0)

def main():
    global listen
    global port
    global execute
    global command
    global upload_destination
    global target
    if not len(sys.argv[1:]):
        usage()
    try:
        opts, args = getopt.getopt(sys.argv[1:], "hle:t:p:cu:",
                                   ["help", "listen", "execute", "target", "port", "command", "upload"])
    except getopt.GetoptError as err:
        print str(err)
        usage()
    for o,a in opts:
        if o in ("-h", "--help"):
            usage()
        elif o in ("-l", "--listen"):
            listen = True
        elif o in ("-e", "--execute"):
            execute = a
        elif o in ("-c", "--commandshell"):
            command = True
        elif o in ("-u", "--upload"):
            upload_destination = a
        elif o in ("-t", "--target"):
            target = a
        elif o in ("-p", "--port"):
            port = int(a)
        # else:
        #     assert False, "Unhandled Option"
    if not listen and len(target) and port > 0:
        buffer = sys.stdin.read()
        client_sender(buffer)
    if listen:
        server_loop()

def client_sender(buffer):
    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        client.connect((target, port))
        if len(buffer):
            client.send(buffer)
        # wait for data back
        while True:
            recv_len = 1
            response = ""
            while recv_len:
                data = client.recv(4096)
                recv_len = len(data)
                response += data
                if recv_len < 4096:
                    break
            print response,
            # wait for more input
            buffer = raw_input("")
            buffer += "\r\n"
            print "[*] Sending: '%s'" % buffer
            client.send(buffer)
    except Exception as err:
        print "[*] Exception! Exiting. %s" % err
        client.close()

def server_loop():
    global target
    if not len(target):
        target = "0.0.0.0"
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind((target,port))
    server.listen(5)
    while True:
        client_socket,addr = server.accept()
        client_thread = threading.Thread(target=client_handler,
                                         args=(client_socket,))
        client_thread.start()

def run_command(command):
    command = command.rstrip()
    print "[*] Processing command: %s" % command
    try:
        output = subprocess.check_output(command, stderr=subprocess.STDOUT, shell=True)
    except Exception as err:
        output = "Failed to execute command.\r\n"
    return output

def client_handler(client_socket):
    global upload
    global execute
    global command
    if len(upload_destination):
        file_buffer = ""
        while True:
            data = client_socket.recv(1024)
            if not data:
                break
            else:
                file_buffer += data
        try:
            file_descriptor = open(upload_destination, "wb")
            file_descriptor.write(file_buffer)
            file_descriptor.close()
            client_socket.send("Successfully saved file to %s\r\n" % upload_destination)
        except:
            client_socket.send("Failed to save file to %s\r\n" % upload_destination)
    # check for command execution
    if len(execute):
        output = run_command(execute)
        client_socket.send(output)
    # another loop if command shell requested
    if command:
        while True:
            client_socket.send("<BHP:#> ")
            cmd_buffer = ""
            while "\n" not in cmd_buffer:
                cmd_buffer += client_socket.recv(1024)
            print "[*] Recv'd command: %s" % cmd_buffer
            response = run_command(cmd_buffer)
            client_socket.send(response)
main()
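For reference, a quick way to exercise the tool once it runs is the pattern from its own usage text: start a listener with a command shell in one terminal, then connect to it from a second one. The code is Python 2 (print statements, raw_input), so run it with a Python 2 interpreter; host and port below are just examples.

# terminal 1: listen on all interfaces, port 5555, and serve a command shell
python bhpnet.py -l -p 5555 -c

# terminal 2: connect as a client; press Ctrl-D to send stdin, then type commands at the prompt
python bhpnet.py -t 127.0.0.1 -p 5555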
I'd like to set up Loggly to run on AWS Elastic Beanstalk, but can't find any information on how to do this. Is there any guide anywhere, or some general guidance on how to start?
This is how I do it, for papertrailapp.com (which I prefer over Loggly). In your .ebextensions folder (see more info) you create logs.config, where you specify:
container_commands:
  01-set-correct-hostname:
    command: hostname www.example.com
  02-forward-rsyslog-to-papertrail:
    # https://papertrailapp.com/systems/setup
    command: echo "*.* @logs.papertrailapp.com:55555" >> /etc/rsyslog.conf
  03-enable-remote-logging:
    command: echo -e "\$ModLoad imudp\n\$UDPServerRun 514\n\$ModLoad imtcp\n\$InputTCPServerRun 514\n\$EscapeControlCharactersOnReceive off" >> /etc/rsyslog.conf
  04-restart-syslog:
    command: service rsyslog restart
55555 should be replaced with the UDP port number provided by papertrailapp.com. This config is applied every time a new instance bootstraps. Then, in your log4j.properties:
log4j.rootLogger=WARN, SYSLOG
log4j.appender.SYSLOG=org.apache.log4j.net.SyslogAppender
log4j.appender.SYSLOG.facility=local1
log4j.appender.SYSLOG.header=true
log4j.appender.SYSLOG.syslogHost=localhost
log4j.appender.SYSLOG.layout=org.apache.log4j.PatternLayout
log4j.appender.SYSLOG.layout.ConversionPattern=[%p] %t %c: %m%n
I'm not sure whether it's an optimal solution. Read more about this mechanism in jcabi-beanstalk-maven-plugin
You can also use the installation script from Loggly itself.
The setup below follows the instructions for the legacy setup at https://www.loggly.com/docs/configure-syslog-script/ with minor changes (no confirmation prompts, and the sudo command replaced since no tty is available).
(Edit: link updated; this now seems to be an outdated solution in the Loggly docs.)
Place the following script in .ebextensions/loggly.config.
Replace TOKEN and ACCOUNT with your own.
#
# Install loggly.com on AWS Elastic Beanstalk
# Tested with node.js environment
# Save this file as .ebextensions/loggly.config
# Deploy per normal scripts or aws.push. To help debug the push, ssh & tail /var/log/cfn-init.log
# See Also /var/log/eb-tools.log
#
commands:
  01_loggly_dl:
    command: wget -q -O /tmp/loggly.py https://www.loggly.com/install/configure-syslog.py
  02_loggly_config:
    command: su --session-command="python /tmp/loggly.py setup --auth TOKEN --account ACCOUNT --yes"
Here is a link to the Loggly support site for using syslogd with Loggly:
http://wiki.loggly.com/loggingconfiguration
or for using the Loggly API from your own app:
http://wiki.loggly.com/apidocumention
Here is an Elastic Beanstalk config for Loggly that I've just started using, thanks to pointers from this thread and the logging SaaS vendors' setup instructions. [Loggly Config Mgmt, Papertrail rsyslog]
Save the file as loggly.config in the .ebextensions directory and make sure to check the YAML formatting conventions (no tabs, etc). Substitute your Loggly TCP port number, username, password and domain name into the angle brackets as required.
Note that for AWS ruby versions of elasticbeanstalk, there may be differences in the EC2 /etc/rsyslog setup. For example, if /etc/rsyslog.d already exists, and there is already an "$IncludeConfig /etc/rsyslog.d/*.conf" directive, then command "01-forward-rsyslog-to-loggly:" can be removed.
Deploy per normal scripts or aws.push. To help debug the push, ssh & tail /var/log/cfn-init.log
files:
  "/etc/rsyslog.d/90-loggly.conf" :
    mode: "000664"
    owner: root
    group: root
    content: |
      # ### begin forwarding rule ###
      # The statement between the begin ... end define a SINGLE forwarding
      # rule. They belong together, do NOT split them. If you create multiple
      # forwarding rules, duplicate the whole block!
      # Remote Logging (we use TCP for reliable delivery)
      #
      # An on-disk queue is created for this action. If the remote host is
      # down, messages are spooled to disk and sent when it is up again.
      $WorkDirectory /var/lib/rsyslog    # where to place spool files
      $ActionQueueFileName fwdRule1      # unique name prefix for spool files
      $ActionQueueMaxDiskSpace 1g        # 1gb space limit (use as much as possible)
      $ActionQueueSaveOnShutdown on      # save messages to disk on shutdown
      $ActionQueueType LinkedList        # run asynchronously
      $ActionResumeRetryCount -1         # infinite retries if host is down
      *.* @@logs.loggly.com:<yourportnum>  # !!!Loggly supplied port number for each app!!!
      # ### end of the forwarding rule ###
    encoding: plain
  "/tmp/loggly.py" :
    mode: "000755"
    owner: root
    group: root
    content: |
      import json
      import sys
      import urllib2
      '''
      Auto-authenticate Syslog TCP inputs.
      Usage: python inputs.py -u user -p pass -s subdomain
      '''
      state = None
      params = {}
      for i in range(len(sys.argv)):
          arg = sys.argv[i]
          if state:
              params[state] = arg
              state = None
          if arg == '--username' or arg == '-u':
              state = 'username'
          if arg == '--password' or arg == '-p':
              state = 'password'
          if arg == '--subdomain' or arg == '-s':
              state = 'subdomain'
      url = 'https://%s.loggly.com/api/inputs' % params['subdomain']
      password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
      password_mgr.add_password(None, url, params['username'], params['password'])
      handler = urllib2.HTTPBasicAuthHandler(password_mgr)
      opener = urllib2.build_opener(handler)
      opener.open(url)
      urllib2.install_opener(opener)
      inputs = json.loads(urllib2.urlopen(url).read())
      for input in inputs:
          if input['service']['name'] == 'syslogtcp':
              url = 'https://%s.loggly.com/api/inputs/%d/adddevice' % \
                  (params['subdomain'], input['id'])
              response = urllib2.urlopen(url, {}).read()
              print response
    encoding: plain
commands:
  01-forward-rsyslog-to-loggly:
    # http://loggly.com/support/sending-data/logging-from/syslog/rsyslog/cd
    command: test "$(grep -s '90-loggly.conf' /etc/rsyslog.conf)" == "" && echo -e "\n# Include the loggly.conf file\n\$IncludeConfig /etc/rsyslog.d/90-loggly.conf" >> /etc/rsyslog.conf
  02-restart-syslog:
    command: service rsyslog restart
  03-inform_loggly:
    command: "python /tmp/loggly.py -u <Yourloginname> -p <Yourpassword> -s <Yourdomainname>"
Typically, /etc/rsyslog.conf will have a "$IncludeConfig /etc/rsyslog.d/*.conf" directive at the end, so you can simply introduce your own configuration file using the "files:" portion of your .ebextensions file. This works whether you are deploying to fresh servers or not.
For a Ruby production.log, you might have something like this in a .ebextensions/01loggly.config file. Note this also picks up your Beanstalk environment name as a Loggly tag.
# For docs on eb configs, see http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
# This set of commands sets up loggly forwarding
files:
  "/etc/rsyslog.d/myapp-loggly.conf" :
    mode: "000664"
    owner: root
    group: root
    content: |
      $template LogglyFormat,"<%pri%>%protocol-version% %timestamp:::date-rfc3339% %HOSTNAME% %app-name% %procid% %msgid% [yourlogglyid@41058 tag=`{ "Ref" : "AWSEBEnvironmentName" }`] %msg%\n"
      *.* @@logs-01.loggly.com:514;LogglyFormat

      # One time config
      $ModLoad imfile
      $InputFilePollInterval 10
      $PrivDropToGroup adm
      $WorkDirectory /var/spool/rsyslog

      # Add a tag for file events
      # For production.log
      $InputFileName /var/app/support/logs/production.log
      $InputFileTag production-log
      $InputFileStateFile stat-production-log #this must be unique for each file being polled
      $InputFileSeverity info
      $InputFilePersistStateInterval 20000
      $InputRunFileMonitor

      # Send to Loggly then discard
      if $programname == 'myapp-production-log' then @@logs-01.loggly.com:514;LogglyFormat
      if $programname == 'myapp-production-log' then ~
    encoding: plain
commands:
  00-make-work-directory:
    command: mkdir -p /var/spool/rsyslog
  01-restart-syslog:
    command: service rsyslog restart
For Tomcat, you might do something like this in a .ebextensions/01loggly.config file:
# For docs on eb configs, see http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
# This set of commands sets up loggly forwarding
files:
  "/etc/rsyslog.d/mytomcatapp-loggly.conf" :
    mode: "000664"
    owner: root
    group: root
    content: |
      $template LogglyFormat,"<%pri%>%protocol-version% %timestamp:::date-rfc3339% %HOSTNAME% %app-name% %procid% %msgid% [yourlogglygidhere@41058 tag=`{ "Ref" : "AWSEBEnvironmentName" }`] %msg%\n"
      *.* @@logs-01.loggly.com:514;LogglyFormat

      # One time config
      $ModLoad imfile
      $InputFilePollInterval 10
      $PrivDropToGroup adm
      $WorkDirectory /var/spool/rsyslog

      # catalina.log
      $InputFileName /var/log/tomcat7/catalina.log
      $InputFileTag catalina-log
      $InputFileStateFile stat-catalina-log
      $InputFileSeverity info
      $InputFilePersistStateInterval 20000
      $InputRunFileMonitor
      if $programname == 'catalina-log' then @@logs-01.loggly.com:514;LogglyFormat
      if $programname == 'catalina-log' then ~

      # catalina.out
      $InputFileName /var/log/tomcat7/catalina.out
      $InputFileTag catalina-out
      $InputFileStateFile stat-catalina-out
      $InputFileSeverity info
      $InputFilePersistStateInterval 20000
      $InputRunFileMonitor
      if $programname == 'catalina-out' then @@logs-01.loggly.com:514;LogglyFormat
      if $programname == 'catalina-out' then ~

      # host-manager.log
      $InputFileName /var/log/tomcat7/host-manager.log
      $InputFileTag host-manager
      $InputFileStateFile stat-host-manager
      $InputFileSeverity info
      $InputFilePersistStateInterval 20000
      $InputRunFileMonitor
      if $programname == 'host-manager' then @@logs-01.loggly.com:514;LogglyFormat
      if $programname == 'host-manager' then ~

      # initd.log
      $InputFileName /var/log/tomcat7/initd.log
      $InputFileTag initd
      $InputFileStateFile stat-initd
      $InputFileSeverity info
      $InputFilePersistStateInterval 20000
      $InputRunFileMonitor
      if $programname == 'initd' then @@logs-01.loggly.com:514;LogglyFormat
      if $programname == 'initd' then ~

      # localhost.log
      $InputFileName /var/log/tomcat7/localhost.log
      $InputFileTag localhost-log
      $InputFileStateFile stat-localhost-log
      $InputFileSeverity info
      $InputFilePersistStateInterval 20000
      $InputRunFileMonitor
      if $programname == 'localhost-log' then @@logs-01.loggly.com:514;LogglyFormat
      if $programname == 'localhost-log' then ~

      # manager.log
      $InputFileName /var/log/tomcat7/manager.log
      $InputFileTag manager
      $InputFileStateFile stat-manager
      $InputFileSeverity info
      $InputFilePersistStateInterval 20000
      $InputRunFileMonitor
      if $programname == 'manager' then @@logs-01.loggly.com:514;LogglyFormat
      if $programname == 'manager' then ~
    encoding: plain
commands:
  00-make-work-directory:
    command: mkdir -p /var/spool/rsyslog
  01-restart-syslog:
    command: service rsyslog restart
This config is working for me, though I haven't yet determined how to get multi-line entries to come in as a single entry in Loggly.
I know this question is fairly old, but I found that the existing answers either didn't really answer the question or just plain didn't work correctly when implemented. I found that this works (file .ebextensions/02loggly.config):
container_commands:
  01-transform-rsyslog.conf:
    command: sed "s/NODE_ENV/$NODE_ENV/g" scripts/22-loggly.conf.temp > scripts/22-loggly.conf
  02-setup-rsyslog.conf:
    command: cp scripts/22-loggly.conf /etc/rsyslog.d/22-loggly.conf
  03-restart:
    command: /sbin/service rsyslog restart
the "01-transform-rsyslog.conf" step is optional; I use that to set a tag by NODE_ENV in the file. "22-loggly.conf.temp" is a modified version of the "22-loggly.conf" file that gets created at "/etc/rsyslog.d/" when you run the linux source setup script (https://www.loggly.com/install/configure-syslog.py). I just installed it on a ec2 instance and copied the file.
Note I had to prepend '/sbin' to my service command because it was failing for me without it. Also, this restarts syslog on every deploy, which should be fine.
Now you just have to make sure your app logs to syslog. For Java it is going to be log4j or similar. For Node.js (which is what I'm using), rconsole works (https://github.com/tblobaum/rconsole).
None of the things I tried seemed to work, and the Loggly documentation is very confusing!
I hope this helps someone; this is how I got it to work.
Paste the following in .ebextensions/loggly.config
files:
  "/etc/rsyslog.conf" :
    mode: "000644"
    owner: root
    group: root
    content: |
      $ModLoad imfile
      $InputFilePollInterval 10
      $PrivDropToGroup adm

      # Input for FILE.LOG
      $InputFileName /var/app/current/PATH_TO_YOUR_LOG_FILE
      $InputFileTag social_php:
      $InputFileStateFile stat-social_php #this must be unique for each file being polled
      $InputFileSeverity info
      $InputRunFileMonitor

      #Add a tag for events from this file
      $template LogglyFormatsocial_php,"<%pri%>%protocol-version% %timestamp:::date-rfc3339% %HOSTNAME% %app-name% %procid% %msgid% [TOKEN@41058 tag=\"php_log\"] %msg%\n"
      if $programname == 'social_php' then @@logs.loggly.com:37146;LogglyFormatsocial_php
      if $programname == 'social_php' then ~
      *.* @@logs.loggly.com:37146
commands:
  01-restart-syslog:
    command: service rsyslog restart
Replace all instances of social_php with the tag that makes sense for your application.
Replace /var/app/current/PATH_TO_YOUR_LOG_FILE with your log file location
Here is my Loggly configuration on Elastic Beanstalk, for Linux + log4j.
In the .ebextensions config file:
container_commands:
  01_configure_sudo_access:
    command: sed -i -- 's/ requiretty/ \!requiretty/g' /etc/sudoers
  02_loggy_configure:
    command: sudo python .ebextensions/scripts/loggly_config.py
  03_restore_sudo_access:
    command: sed -i -- 's/ \!requiretty/ requiretty/g' /etc/sudoers
The Loggly script, in Python, for the default AMI:
import os

rsyslog_path = '/etc/rsyslog.conf'
loggly_file_path = '/etc/rsyslog.d/22-loggly.conf'

class LogglyConfig:
    def __init__(self):
        self.__linux_log()
        self.__config_loggly_for_log4j()

    def __linux_log(self):
        # not installed on this machine
        if not os.path.exists(loggly_file_path):
            os.system('rm -f configure-linux.sh')
            os.system('wget https://www.loggly.com/install/configure-linux.sh')
            os.system('sudo bash configure-linux.sh -a DOMAIN -t TOKEN -u USER -p PASSWORD -s')

    def __config_loggly_for_log4j(self):
        f = open(rsyslog_path,'r')
        file_text = f.read()
        f.close()
        file_text = file_text.replace('#$ModLoad imudp', '$ModLoad imudp')
        file_text = file_text.replace('#$UDPServerRun 514', '$UDPServerRun 514')
        f = open(rsyslog_path,'w')
        f.write(file_text)
        f.close()
        os.system('service rsyslog restart')

LogglyConfig()
In log4j.properties in your Java project:
log4j.rootLogger=INFO, SYSLOG
log4j.appender.SYSLOG=org.apache.log4j.net.SyslogAppender
log4j.appender.SYSLOG.SyslogHost=localhost
log4j.appender.SYSLOG.Facility=Local3
log4j.appender.SYSLOG.Header=true
log4j.appender.SYSLOG.layout=org.apache.log4j.PatternLayout
log4j.appender.SYSLOG.layout.ConversionPattern=java %d{ISO8601} %p %t %c{1}.%M - %m%n