Step Functions AWS SAM CLI Local Connection Refused Error

Following the steps in the AWS documentation
https://docs.aws.amazon.com/step-functions/latest/dg/sfn-local-lambda.html
using the aws-stepfunctions-local Docker container,
I'm getting a connection refused error at the last step:
2019-05-28 12:37:05.004: arn:aws:states:us-east-1:123456789012:execution:HelloWorld:test :
{
  "Type": "ExecutionFailed",
  "PreviousEventId": 5,
  "ExecutionFailedEventDetails":
  {
    "Error": "Lambda.SdkClientException",
    "Cause": "Unable to execute HTTP request: Connect to 127.0.0.1:3001 [/127.0.0.1] failed: Connection refused (Connection refused)"
  }
}
Any help on how to resolve it would be greatly appreciated.

The Docker container can't talk to services on the host network. To get it to work you need to add '--network host'.
Example:
docker run -p 8083:8083 --network host --env-file aws-stepfunctions-local-credentials.txt amazon/aws-stepfunctions-local
(With --network host the container shares the host's network stack, so 127.0.0.1 inside the container is the host; note this only works on Linux, and the -p mapping becomes redundant.)

Just update LAMBDA_ENDPOINT to http://host.docker.internal:3001 (host.docker.internal resolves to the host from inside the container on Docker Desktop for Mac and Windows).
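As a quick sanity check (my own sketch, not part of the original answer), you can hit sam local's Lambda endpoint from the host with boto3; the function name is a placeholder for whatever sam local start-lambda is serving:
import boto3

# Talk to 'sam local start-lambda' on its default port from the host.
lam = boto3.client(
    'lambda',
    endpoint_url='http://localhost:3001',
    region_name='us-east-1',
    aws_access_key_id='dummy',
    aws_secret_access_key='dummy',
)
resp = lam.invoke(FunctionName='HelloWorldFunction', Payload=b'{}')
print(resp['Payload'].read())
If this succeeds from the host but Step Functions Local still reports connection refused, the problem is container-to-host networking, as described above.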

It took me a few days, so here's what works for me:
On my Mac, LAMBDA_ENDPOINT=http://host.docker.internal:5000/ works (5000 for moto_server, use 3001 otherwise), together with:
docker run -p 8083:8083 --env-file aws-stepfunctions-local-credentials.txt amazon/aws-stepfunctions-local
On ubuntu-latest (GitHub Actions), however, this combination works:
LAMBDA_ENDPOINT=http://127.0.0.1:5000/
docker run -p 8083:8083 --network host --env-file aws-stepfunctions-local-credentials.txt amazon/aws-stepfunctions-local
=> note the additional --network host (it must come before the image name, or Docker passes it to the container as an argument)
To handle both cases inside the same codebase (in Python):
import subprocess

from moto import mock_ssm  # the original snippet wraps a test function with moto's SSM mock

# 'uname -s' prints 'Darwin' on macOS and 'Linux' on Linux
_, output = subprocess.getstatusoutput('uname -s')
if output == 'Darwin':  # on a Mac
    sam_step_function_local_cmd = subprocess.Popen(
        [
            'docker', 'run',
            '-p=8083:8083',
            '--env-file=aws-stepfunctions-local-credentials-mac.txt',
            'amazon/aws-stepfunctions-local:1.7.3',
        ],
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
    )
else:  # on Linux
    sam_step_function_local_cmd = subprocess.Popen(
        [
            'docker', 'run',
            '-p=8083:8083',
            '--env-file=aws-stepfunctions-local-credentials-linux.txt',
            '--network=host',
            'amazon/aws-stepfunctions-local:1.7.3',
        ],
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
    )
try:
    # func, args and kwargs come from the enclosing test helper this snippet lives in
    mock_ssm(func)(*args, **kwargs)
finally:
    sam_step_function_local_cmd.terminate()
aws-stepfunctions-local-credentials-linux.txt:
AWS_DEFAULT_REGION=eu-west-1
LAMBDA_ENDPOINT=http://127.0.0.1:5000/
aws-stepfunctions-local-credentials-mac.txt:
AWS_DEFAULT_REGION=eu-west-1
LAMBDA_ENDPOINT=http://host.docker.internal:5000/
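With either env file, a quick smoke test (a sketch; Step Functions Local accepts dummy credentials) confirms the container is reachable before the real tests run:
import boto3

# Point the SDK at the local container started above and list state machines.
sfn = boto3.client(
    'stepfunctions',
    endpoint_url='http://localhost:8083',
    region_name='eu-west-1',
    aws_access_key_id='dummy',
    aws_secret_access_key='dummy',
)
print(sfn.list_state_machines())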

Related

Argo giving x509: cannot validate certificate for 127.0.0.1 because it doesn't contain any IP SANs error

I've installed Argo on a managed k8s service following the guidelines here.
When I launch the following example task I get an error (if you have argo installed you should be able to copy-paste the code below):
# create a.yml
cat >> a.yml <<EOL
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-  # Name of this Workflow
spec:
  entrypoint: whalesay        # Defines "whalesay" as the "main" template
  templates:
  - name: whalesay            # Defining the "whalesay" template
    container:
      image: docker/whalesay
      command: [cowsay]
      args: ["hello world"]   # This template runs "cowsay" in the "whalesay" image with arguments "hello world"
EOL
# submit a.yml
argo --insecure-skip-tls-verify --insecure-skip-verify -n argo submit a.yml
# monitor
$ argo list
# NAME STATUS AGE DURATION PRIORITY
# hello-world-hxrcp Succeeded 4m 10s 0
argo watch --insecure-skip-tls-verify --insecure-skip-verify -v -n argo hello-world-hxrcp
# DEBU[2021-06-09T19:37:22.125Z] CLI version version="{v3.0.7 2021-05-25T18:57:09Z e79e7ccda747fa4487bf889142c744457c26e9f7 v3.0.7 clean go1.16.3 gc linux/amd64}"
# DEBU[2021-06-09T19:37:22.125Z] Client options opts="(argoServerOpts=(url=127.0.0.1:2746,path=,secure=true,insecureSkipVerify=true,http=true),instanceID=)"
# DEBU[2021-06-09T19:37:22.125Z] curl -H 'Accept: text/event-stream' -H 'Authorization: ******' 'https://127.0.0.1:2746/api/v1/workflow-events/argo?listOptions.fieldSelector=metadata.name%3Dhello-world-hxrcp&listOptions.resourceVersion=0'
# FATA[2021-06-09T19:37:22.536Z] Get "https://127.0.0.1:2746/api/v1/workflow-events/argo?listOptions.fieldSelector=metadata.name%3Dhello-world-hxrcp&listOptions.resourceVersion=0": x509: cannot validate certificate for 127.0.0.1 because it doesn't contain any IP SANs
Why am I seeing this error?
The install process was this:
kubectl create namespace argo
kubectl apply -n argo -f https://raw.githubusercontent.com/argoproj/argo-workflows/stable/manifests/install.yaml
CLI (taken from the latest version here):
# Download the binary
curl -sLO https://github.com/argoproj/argo/releases/download/v3.0.7/argo-linux-amd64.gz
# Unzip
gunzip argo-linux-amd64.gz
# Make binary executable
chmod +x argo-linux-amd64
# Move binary to path
sudo mv ./argo-linux-amd64 /usr/local/bin/argo
# Test installation
argo version
# link with server
# recommended on user panel in interface
cat >> ~/.bashrc <<EOL
export ARGO_SERVER='127.0.0.1:2746'
export ARGO_HTTP1=true
export ARGO_SECURE=true
export ARGO_BASE_HREF=
export ARGO_TOKEN=''
export ARGO_NAMESPACE=argo
export ARGO_INSECURE_SKIP_VERIFY=true
EOL
# check it works:
argo list
Heyo, I ran into this issue when setting up with the Argo Helm chart on kind. The problem is that you have to disable TLS verification for the executor (the thing that executes the workflow) using the ARGO_KUBELET_INSECURE env var. Here are the docs: https://argoproj.github.io/argo-workflows/environment-variables/#executor
Sorry I don't have the exact code change you need for your setup, but I'm sure you can figure that out now that you know what the problem is ;).
Here's what my Helm values.yaml file looks like in case that helps anyone else:
server:
  serviceType: LoadBalancer
  extraArgs:
  - --auth-mode=server
controller:
  containerRuntimeExecutor: k8sapi
executor:
  env:
  - name: ARGO_KUBELET_INSECURE
    value: "true"  # env var values must be quoted strings in Kubernetes manifests

ocserv could not execute script for the incoming connection

connect-script = /app/connect.sh
disconnect-script = /app/disconnect.sh
I have the above configuration in my ocserv.conf in the Docker container, but ocserv fails to execute /app/connect.sh when there is a connection. I can't find the real cause in the following log; has anyone had the same issue?
ocserv[26]: main[test]:xxx.xxx.179.135:57352 user of group 'Route' authenticated (using cookie)
ocserv[29]: main[test]:xxx.xxx.179.135:57352 executing script up /app/connect.sh
ocserv[29]: main[test]:xxx.xxx.179.135:57352 main-user.c:379: Could not execute script /app/connect.sh
ocserv[26]: main[test]:xxx.xxx.179.135:57352 connect-script exit status: 1
ocserv[26]: main[test]:xxx.xxx.179.135:57352 failed authentication attempt for user 'test'
The content of /app/connect.sh:
#!/bin/bash
echo "$(date) [info] User ${USERNAME} Connected - Server: ${IP_REAL_LOCAL} VPN IP: ${IP_REMOTE} Remote IP: ${IP_REAL} Device:${DEVICE}"
Well, I figured it out myself: the Docker container I created doesn't have bash, so one solution is to substitute #!/bin/bash with #!/bin/sh in the script's shebang.
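For reference, the corrected /app/connect.sh differs only in its first line:
#!/bin/sh
echo "$(date) [info] User ${USERNAME} Connected - Server: ${IP_REAL_LOCAL} VPN IP: ${IP_REMOTE} Remote IP: ${IP_REAL} Device:${DEVICE}"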

Automating network interface related configurations on red hat ami-7.5

I have an ENI created, and I need to attach it as a secondary ENI to my EC2 instance dynamically using CloudFormation. As I am using a Red Hat AMI, I have to go ahead and manually configure RHEL, which includes the steps mentioned in the post below.
Manually Configuring secondary Elastic network interface on Red hat ami- 7.5
Can someone please tell me how to automate all of this using CloudFormation? Is there a way to do all of it using user data in a CloudFormation template? Also, I need to make sure that the configuration survives a reboot of my EC2 instance (currently it gets deleted after a reboot).
Though it's not complete automation, you can do the following to make sure the ENI comes up after every reboot of your EC2 instance (only for RHEL instances). If anyone has a better suggestion, kindly share.
vi /etc/systemd/system/create.service
Add the content below:
[Unit]
Description=XYZ
After=network.target
[Service]
ExecStart=/usr/local/bin/my.sh
[Install]
WantedBy=multi-user.target
Change permissions and enable the service
chmod a+x /etc/systemd/system/create.service
systemctl enable /etc/systemd/system/create.service
The shell script below does the ENI configuration on RHEL:
vi /usr/local/bin/my.sh
Add the content below:
#!/bin/bash
my_eth1=`curl http://169.254.169.254/latest/meta-data/network/interfaces/macs/0e:3f:96:77:bb:f8/local-ipv4s/`
echo "this is the value--" $my_eth1 "hoo"
GATEWAY=`ip route | awk '/default/ { print $3 }'`
printf "NETWORKING=yes\nNOZEROCONF=yes\nGATEWAYDEV=eth0\n" >/etc/sysconfig/network
printf "\nBOOTPROTO=dhcp\nDEVICE=eth1\nONBOOT=yes\nTYPE=Ethernet\nUSERCTL=no\n" >/etc/sysconfig/network-scripts/ifcfg-eth1
ifup eth1
ip route add default via $GATEWAY dev eth1 tab 2
ip rule add from $my_eth1/32 tab 2 priority 600
Start the service
systemctl start create.service
You can check whether the script ran fine with:
journalctl -u create.service -b
I still need to figure out the joining of the secondary ENI from Linux, but this is the Python script I wrote to have the instance find the corresponding ENI and attach it to itself. The script works by taking a predefined naming tag for both the ENI and the instance, then joining the two together.
Pre-reqs for setting this up are:
IAM role on the instance to allow access to S3 bucket where script is stored
Install pip and the AWS CLI in the user data section
curl -O https://bootstrap.pypa.io/get-pip.py
python get-pip.py
pip install awscli --upgrade
aws configure set default.region YOUR_REGION_HERE
pip install boto3
sleep 180
Note on the sleep 180 command: I have my ENI swap between instances in an Auto Scaling group. This allows an extra 3 minutes for the other instance to shut down and drop the ENI, so the new one can pick it up. It may or may not be necessary for your use case.
AWS CLI command in user data to download the file onto the instance (example below)
aws s3api get-object --bucket YOURBUCKETNAME --key NAMEOFOBJECT.py /home/ec2-user/NAMEOFOBJECT.py
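If you'd rather not shell out to the AWS CLI for that download, boto3 can do the same fetch (a sketch using the same placeholder bucket and key names):
import boto3

s3 = boto3.client('s3')
s3.download_file('YOURBUCKETNAME', 'NAMEOFOBJECT.py', '/home/ec2-user/NAMEOFOBJECT.py')
The downloaded script itself: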
# coding: utf-8
import boto3
import time

client = boto3.client('ec2')

# Get the ENI ID
eni = client.describe_network_interfaces(
    Filters=[
        {
            'Name': 'tag:Name',
            'Values': ['Put the name of your ENI tag here']
        },
    ]
)
eni_id = eni['NetworkInterfaces'][0]['NetworkInterfaceId']

# Get ENI status
eni_status = eni['NetworkInterfaces'][0]['Status']
print('Current Status: {}\n'.format(eni_status))

# Detach if in use
if eni_status == 'in-use':
    eni_attach_id = eni['NetworkInterfaces'][0]['Attachment']['AttachmentId']
    eni_detach = client.detach_network_interface(
        AttachmentId=eni_attach_id,
        DryRun=False,
        Force=False
    )
    print(eni_detach)

# Wait until ENI is available
print('start\n-----')
while eni_status != 'available':
    print('checking...')
    eni_state = client.describe_network_interfaces(
        Filters=[
            {
                'Name': 'tag:Name',
                'Values': ['Put the name of your ENI tag here']
            },
        ]
    )
    eni_status = eni_state['NetworkInterfaces'][0]['Status']
    print('ENI is currently: ' + eni_status + '\n')
    if eni_status != 'available':
        time.sleep(10)
print('end')

# Get the instance ID
instance = client.describe_instances(
    Filters=[
        {
            'Name': 'tag:Name',
            'Values': ['Put the tag name of your instance here']
        },
        {
            'Name': 'instance-state-name',
            'Values': ['running']
        }
    ]
)
instance_id = instance['Reservations'][0]['Instances'][0]['InstanceId']

# Attach the ENI
response = client.attach_network_interface(
    DeviceIndex=1,
    DryRun=False,
    InstanceId=instance_id,
    NetworkInterfaceId=eni_id
)
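As an aside (this is not in the original script), boto3 also ships a built-in waiter that can replace the manual polling loop above:
# Alternative to the while-loop: let boto3 poll until the ENI is available
waiter = client.get_waiter('network_interface_available')
waiter.wait(NetworkInterfaceIds=[eni_id])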

Saltstack fails when creating minions from a map file on DigitalOcean

I can't create a minion from the map file and have no idea what's happened. A month ago my script was working correctly; right now it fails. I tried to do some research about it but couldn't find anything. Could someone have a look at my DEBUG log? The minion is created on DigitalOcean, but the master server can't connect to it at all.
so I run:
salt-cloud -P -m /etc/salt/cloud.maps.d/production.map -l debug
The master is running on Ubuntu 16.04.1 x64, and so is the minion.
I use the latest SaltStack repository:
echo "deb http://repo.saltstack.com/apt/ubuntu/16.04/amd64/latest xenial main" >> /etc/apt/sources.list.d/saltstack.list
I tested both 2016.3.2 and 2016.3.3; interestingly, the same script was working correctly 4 weeks ago, so I assume something must have changed.
ERROR:
Writing /usr/lib/python2.7/dist-packages/salt-2016.3.3.egg-info
* INFO: Running install_ubuntu_git_post()
disabled
Created symlink from /etc/systemd/system/multi-user.target.wants/salt-minion.service to /lib/systemd/system/salt-minion.service.
* INFO: Running install_ubuntu_check_services()
* INFO: Running install_ubuntu_restart_daemons()
Job for salt-minion.service failed because a configured resource limit was exceeded. See "systemctl status salt-minion.service" and "journalctl -xe" for details.
start: Unable to connect to Upstart: Failed to connect to socket /com/ubuntu/upstart: Connection refused
* ERROR: No init.d support for salt-minion was found
* ERROR: Failed to run install_ubuntu_restart_daemons()!!!
[ERROR ] Failed to deploy 'minion-zk-0'. Error: Command 'ssh -t -t -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -oControlPath=none -oPasswordAuthentication=no -oChallengeResponseAuthentication=no -oPubkeyAuthentication=yes -oIdentitiesOnly=yes -oKbdInteractiveAuthentication=no -i /etc/salt/keys/cloud/do.pem -p 22 root@REMOVED_IP '/tmp/.saltcloud-5d18c002-e817-46d5-9fb2-d3bdb2dfe7fd/deploy.sh -c '"'"'/tmp/.saltcloud-5d18c002-e817-46d5-9fb2-d3bdb2dfe7fd'"'"' -P git v2016.3.3'' failed. Exit code: 1
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/salt/cloud/__init__.py", line 2293, in create_multiprocessing
local_master=parallel_data['local_master']
File "/usr/lib/python2.7/dist-packages/salt/cloud/__init__.py", line 1281, in create
output = self.clouds[func](vm_)
File "/usr/lib/python2.7/dist-packages/salt/cloud/clouds/digital_ocean.py", line 481, in create
ret = __utils__['cloud.bootstrap'](vm_, __opts__)
File "/usr/lib/python2.7/dist-packages/salt/utils/cloud.py", line 527, in bootstrap
deployed = deploy_script(**deploy_kwargs)
File "/usr/lib/python2.7/dist-packages/salt/utils/cloud.py", line 1516, in deploy_script
if root_cmd(deploy_command, tty, sudo, **ssh_kwargs) != 0:
File "/usr/lib/python2.7/dist-packages/salt/utils/cloud.py", line 2167, in root_cmd
retcode = _exec_ssh_cmd(cmd, allow_failure=allow_failure, **kwargs)
File "/usr/lib/python2.7/dist-packages/salt/utils/cloud.py", line 1784, in _exec_ssh_cmd
cmd, proc.exitstatus
SaltCloudSystemExit: Command 'ssh -t -t -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -oControlPath=none -oPasswordAuthentication=no -oChallengeResponseAuthentication=no -oPubkeyAuthentication=yes -oIdentitiesOnly=yes -oKbdInteractiveAuthentication=no -i /etc/salt/keys/cloud/do.pem -p 22 root@REMOVED_ID '/tmp/.saltcloud-5d18c002-e817-46d5-9fb2-d3bdb2dfe7fd/deploy.sh -c '"'"'/tmp/.saltcloud-5d18c002-e817-46d5-9fb2-d3bdb2dfe7fd'"'"' -P git v2016.3.3'' failed. Exit code: 1
[DEBUG ] LazyLoaded nested.output
minion-zk-0:
----------
Error:
Command 'ssh -t -t -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -oControlPath=none -oPasswordAuthentication=no -oChallengeResponseAuthentication=no -oPubkeyAuthentication=yes -oIdentitiesOnly=yes -oKbdInteractiveAuthentication=no -i /etc/salt/keys/cloud/do.pem -p 22 root@REMOVED_IP '/tmp/.saltcloud-5d18c002-e817-46d5-9fb2-d3bdb2dfe7fd/deploy.sh -c '"'"'/tmp/.saltcloud-5d18c002-e817-46d5-9fb2-d3bdb2dfe7fd'"'"' -P git v2016.3.3'' failed. Exit code: 1
root@master-zk:/etc/salt/cloud.maps.d# salt '*' test.ping
minion-zk-0:
Minion did not return. [No response]
root@master-zk:/etc/salt/cloud.maps.d#
The master setting is located in your cloud configuration somewhere in /etc/salt/cloud.profiles.d/, /etc/salt/cloud.providers.d/ or /etc/salt/cloud.d/. Just figure out where, and change the value salt to your master's IP.
I currently do this in my provider settings like this:
hit-vcenter:
  driver: vmware
  user: 'foo'
  password: 'secret'
  url: 'some url'
  protocol: 'https'
  port: 443
  minion:
    master: 10.1.10.1

How to set up Loggly on Elastic Beanstalk?

I'd like to set up Loggly to run on AWS Elastic Beanstalk, but can't find any information on how to do this. Is there any guide anywhere, or some general guidance on how to start?
This is how I do it for papertrailapp.com (which I prefer to Loggly). In your .ebextensions folder you create logs.config, where you specify:
container_commands:
  01-set-correct-hostname:
    command: hostname www.example.com
  02-forward-rsyslog-to-papertrail:
    # https://papertrailapp.com/systems/setup
    command: echo "*.* @logs.papertrailapp.com:55555" >> /etc/rsyslog.conf
  03-enable-remote-logging:
    command: echo -e "\$ModLoad imudp\n\$UDPServerRun 514\n\$ModLoad imtcp\n\$InputTCPServerRun 514\n\$EscapeControlCharactersOnReceive off" >> /etc/rsyslog.conf
  04-restart-syslog:
    command: service rsyslog restart
Replace 55555 with the UDP port number provided by papertrailapp.com. This config will be applied every time a new instance bootstraps. Then, in your log4j.properties:
log4j.rootLogger=WARN, SYSLOG
log4j.appender.SYSLOG=org.apache.log4j.net.SyslogAppender
log4j.appender.SYSLOG.facility=local1
log4j.appender.SYSLOG.header=true
log4j.appender.SYSLOG.syslogHost=localhost
log4j.appender.SYSLOG.layout=org.apache.log4j.PatternLayout
log4j.appender.SYSLOG.layout.ConversionPattern=[%p] %t %c: %m%n
I'm not sure whether it's an optimal solution. Read more about this mechanism in jcabi-beanstalk-maven-plugin
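If your app is Python rather than Java, an equivalent hookup (an illustration, not from the original answer) is the stdlib SysLogHandler pointed at the local rsyslog, which the config above then forwards:
import logging
import logging.handlers

# UDP to the local rsyslogd on 514, matching the imudp input enabled in step 03
handler = logging.handlers.SysLogHandler(address=('localhost', 514))
handler.setFormatter(logging.Formatter('myapp: [%(levelname)s] %(name)s: %(message)s'))
root = logging.getLogger()
root.addHandler(handler)
root.setLevel(logging.WARNING)
root.warning('test message routed via rsyslog')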
You can also use the installation script from Loggly itself.
The setup below follows the instructions for the legacy setup at https://www.loggly.com/docs/configure-syslog-script/ with minor changes (no confirmation prompts, and the sudo command replaced since no tty is available).
(edit: updated link, seems to be an outdated solution now in loggly docs)
Place the following script in .ebextensions/loggly.config
Replace TOKEN and ACCOUNT with your own.
#
# Install loggly.com on AWS Elastic Beanstalk
# Tested with node.js environment
# Save this file as .ebextensions/loggly.config
# Deploy per normal scripts or aws.push. To help debug the push, ssh & tail /var/log/cfn-init.log
# See Also /var/log/eb-tools.log
#
commands:
  01_loggly_dl:
    command: wget -q -O /tmp/loggly.py https://www.loggly.com/install/configure-syslog.py
  02_loggly_config:
    command: su --session-command="python /tmp/loggly.py setup --auth TOKEN --account ACCOUNT --yes"
Here is a link to the Loggly support site for using syslogd with Loggly:
http://wiki.loggly.com/loggingconfiguration
or using the Loggly API with your own app:
http://wiki.loggly.com/apidocumention
Here is an Elastic Beanstalk config for Loggly that I've just started using, thanks to pointers from this thread and the logging SaaS vendors' setup instructions. [Loggly Config Mgmt, Papertrail rsyslog]
Save the file as loggly.config in the .ebextensions directory and make sure to check the YAML formatting conventions (no tabs, etc.). Substitute your Loggly TCP port number, username, password and domain name into the angle brackets as required.
Note that for AWS Ruby versions of Elastic Beanstalk, there may be differences in the EC2 /etc/rsyslog setup. For example, if /etc/rsyslog.d already exists and there is already an "$IncludeConfig /etc/rsyslog.d/*.conf" directive, then the "01-forward-rsyslog-to-loggly:" command can be removed.
Deploy per normal scripts or aws.push. To help debug the push, ssh & tail /var/log/cfn-init.log
files:
  "/etc/rsyslog.d/90-loggly.conf":
    mode: "000664"
    owner: root
    group: root
    content: |
      # ### begin forwarding rule ###
      # The statement between the begin ... end define a SINGLE forwarding
      # rule. They belong together, do NOT split them. If you create multiple
      # forwarding rules, duplicate the whole block!
      # Remote Logging (we use TCP for reliable delivery)
      #
      # An on-disk queue is created for this action. If the remote host is
      # down, messages are spooled to disk and sent when it is up again.
      $WorkDirectory /var/lib/rsyslog # where to place spool files
      $ActionQueueFileName fwdRule1 # unique name prefix for spool files
      $ActionQueueMaxDiskSpace 1g # 1gb space limit (use as much as possible)
      $ActionQueueSaveOnShutdown on # save messages to disk on shutdown
      $ActionQueueType LinkedList # run asynchronously
      $ActionResumeRetryCount -1 # infinite retries if host is down
      *.* @@logs.loggly.com:<yourportnum> # !!!Loggly supplied port number for each app!!!
      # ### end of the forwarding rule ###
    encoding: plain
  "/tmp/loggly.py":
    mode: "000755"
    owner: root
    group: root
    content: |
      import json
      import sys
      import urllib2
      '''
      Auto-authenticate Syslog TCP inputs.
      Usage: python inputs.py -u user -p pass -s subdomain
      '''
      state = None
      params = {}
      for i in range(len(sys.argv)):
          arg = sys.argv[i]
          if state:
              params[state] = arg
              state = None
          if arg == '--username' or arg == '-u':
              state = 'username'
          if arg == '--password' or arg == '-p':
              state = 'password'
          if arg == '--subdomain' or arg == '-s':
              state = 'subdomain'
      url = 'https://%s.loggly.com/api/inputs' % params['subdomain']
      password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
      password_mgr.add_password(None, url, params['username'], params['password'])
      handler = urllib2.HTTPBasicAuthHandler(password_mgr)
      opener = urllib2.build_opener(handler)
      opener.open(url)
      urllib2.install_opener(opener)
      inputs = json.loads(urllib2.urlopen(url).read())
      for input in inputs:
          if input['service']['name'] == 'syslogtcp':
              url = 'https://%s.loggly.com/api/inputs/%d/adddevice' % \
                  (params['subdomain'], input['id'])
              response = urllib2.urlopen(url, '').read()  # empty body makes this a POST
              print response
    encoding: plain
commands:
  01-forward-rsyslog-to-loggly:
    # http://loggly.com/support/sending-data/logging-from/syslog/rsyslog/cd
    command: test "$(grep -s '90-loggly.conf' /etc/rsyslog.conf)" == "" && echo -e "\n# Include the loggly.conf file\n\$IncludeConfig /etc/rsyslog.d/90-loggly.conf" >> /etc/rsyslog.conf
  02-restart-syslog:
    command: service rsyslog restart
  03-inform_loggly:
    command: "python /tmp/loggly.py -u <Yourloginname> -p <Yourpassword> -s <Yourdomainname>"
Typically, /etc/rsyslog.conf will have an "$IncludeConfig /etc/rsyslog.d/*.conf" at the end, so you can simply introduce your own configuration file using the "files:" portion of your .ebextensions file. This works whether you are deploying to fresh servers or not.
For a Ruby production.log, you might have something like this in a .ebextensions/01loggly.config file. Note this picks up your Beanstalk environment name as a Loggly tag, too.
# For docs on eb configs, see http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
# This set of commands sets up loggly forwarding
files:
  "/etc/rsyslog.d/myapp-loggly.conf":
    mode: "000664"
    owner: root
    group: root
    content: |
      $template LogglyFormat,"<%pri%>%protocol-version% %timestamp:::date-rfc3339% %HOSTNAME% %app-name% %procid% %msgid% [yourlogglyid@41058 tag=`{ "Ref" : "AWSEBEnvironmentName" }`] %msg%\n"
      *.* @@logs-01.loggly.com:514;LogglyFormat
      # One time config
      $ModLoad imfile
      $InputFilePollInterval 10
      $PrivDropToGroup adm
      $WorkDirectory /var/spool/rsyslog
      # Add a tag for file events
      # For production.log
      $InputFileName /var/app/support/logs/production.log
      $InputFileTag production-log
      $InputFileStateFile stat-production-log # this must be unique for each file being polled
      $InputFileSeverity info
      $InputFilePersistStateInterval 20000
      $InputRunFileMonitor
      # Send to Loggly then discard
      if $programname == 'myapp-production-log' then @@logs-01.loggly.com:514;LogglyFormat
      if $programname == 'myapp-production-log' then ~
    encoding: plain
commands:
  00-make-work-directory:
    command: mkdir -p /var/spool/rsyslog
  01-restart-syslog:
    command: service rsyslog restart
For Tomcat, you might do something like this in a .ebextensions/01logglyg.config file:
# For docs on eb configs, see http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
# This set of commands sets up loggly forwarding
files:
  "/etc/rsyslog.d/mytomcatapp-loggly.conf":
    mode: "000664"
    owner: root
    group: root
    content: |
      $template LogglyFormat,"<%pri%>%protocol-version% %timestamp:::date-rfc3339% %HOSTNAME% %app-name% %procid% %msgid% [yourlogglygidhere@41058 tag=`{ "Ref" : "AWSEBEnvironmentName" }`] %msg%\n"
      *.* @@logs-01.loggly.com:514;LogglyFormat
      # One time config
      $ModLoad imfile
      $InputFilePollInterval 10
      $PrivDropToGroup adm
      $WorkDirectory /var/spool/rsyslog
      # catalina.log
      $InputFileName /var/log/tomcat7/catalina.log
      $InputFileTag catalina-log
      $InputFileStateFile stat-catalina-log
      $InputFileSeverity info
      $InputFilePersistStateInterval 20000
      $InputRunFileMonitor
      if $programname == 'catalina-log' then @@logs-01.loggly.com:514;LogglyFormat
      if $programname == 'catalina-log' then ~
      # catalina.out
      $InputFileName /var/log/tomcat7/catalina.out
      $InputFileTag catalina-out
      $InputFileStateFile stat-catalina-out
      $InputFileSeverity info
      $InputFilePersistStateInterval 20000
      $InputRunFileMonitor
      if $programname == 'catalina-out' then @@logs-01.loggly.com:514;LogglyFormat
      if $programname == 'catalina-out' then ~
      # host-manager.log
      $InputFileName /var/log/tomcat7/host-manager.log
      $InputFileTag host-manager
      $InputFileStateFile stat-host-manager
      $InputFileSeverity info
      $InputFilePersistStateInterval 20000
      $InputRunFileMonitor
      if $programname == 'host-manager' then @@logs-01.loggly.com:514;LogglyFormat
      if $programname == 'host-manager' then ~
      # initd.log
      $InputFileName /var/log/tomcat7/initd.log
      $InputFileTag initd
      $InputFileStateFile stat-initd
      $InputFileSeverity info
      $InputFilePersistStateInterval 20000
      $InputRunFileMonitor
      if $programname == 'initd' then @@logs-01.loggly.com:514;LogglyFormat
      if $programname == 'initd' then ~
      # localhost.log
      $InputFileName /var/log/tomcat7/localhost.log
      $InputFileTag localhost-log
      $InputFileStateFile stat-localhost-log
      $InputFileSeverity info
      $InputFilePersistStateInterval 20000
      $InputRunFileMonitor
      if $programname == 'localhost-log' then @@logs-01.loggly.com:514;LogglyFormat
      if $programname == 'localhost-log' then ~
      # manager.log
      $InputFileName /var/log/tomcat7/manager.log
      $InputFileTag manager
      $InputFileStateFile stat-manager
      $InputFileSeverity info
      $InputFilePersistStateInterval 20000
      $InputRunFileMonitor
      if $programname == 'manager' then @@logs-01.loggly.com:514;LogglyFormat
      if $programname == 'manager' then ~
    encoding: plain
commands:
  00-make-work-directory:
    command: mkdir -p /var/spool/rsyslog
  01-restart-syslog:
    command: service rsyslog restart
This config is working for me, though I haven't yet determined how to get multi-line entries to come through as a single entry in Loggly.
I know this question is fairly old, but I found that the existing answers either didn't really answer the question or just plain didn't work when implemented. I found that this works (file .ebextensions/02loggly.config):
container_commands:
  01-transform-rsyslog.conf:
    command: sed "s/NODE_ENV/$NODE_ENV/g" scripts/22-loggly.conf.temp > scripts/22-loggly.conf
  02-setup-rsyslog.conf:
    command: cp scripts/22-loggly.conf /etc/rsyslog.d/22-loggly.conf
  03-restart:
    command: /sbin/service rsyslog restart
The "01-transform-rsyslog.conf" step is optional; I use it to set a tag by NODE_ENV in the file. "22-loggly.conf.temp" is a modified version of the "22-loggly.conf" file that gets created at /etc/rsyslog.d/ when you run the Linux source setup script (https://www.loggly.com/install/configure-syslog.py). I just installed it on an EC2 instance and copied the file.
Note I had to prepend /sbin to my service command because it was failing without it. Also, this restarts syslog on every deploy, which should be fine.
Now you just have to make sure your app logs to syslog. For Java that is going to be log4j or similar. For Node.js (which is what I'm using), rconsole works (https://github.com/tblobaum/rconsole).
None of the things I tried seemed to work, and the Loggly documentation is very confusing!
I hope this helps someone; this is how I got it to work.
Paste the following in .ebextensions/loggly.config
files:
  "/etc/rsyslog.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      $ModLoad imfile
      $InputFilePollInterval 10
      $PrivDropToGroup adm
      # Input for FILE.LOG
      $InputFileName /var/app/current/PATH_TO_YOUR_LOG_FILE
      $InputFileTag social_php:
      $InputFileStateFile stat-social_php # this must be unique for each file being polled
      $InputFileSeverity info
      $InputRunFileMonitor
      # Add a tag for events from this file
      $template LogglyFormatsocial_php,"<%pri%>%protocol-version% %timestamp:::date-rfc3339% %HOSTNAME% %app-name% %procid% %msgid% [TOKEN@41058 tag=\"php_log\"] %msg%\n"
      if $programname == 'social_php' then @@logs.loggly.com:37146;LogglyFormatsocial_php
      if $programname == 'social_php' then ~
      *.* @@logs.loggly.com:37146
commands:
  01-restart-syslog:
    command: service rsyslog restart
Replace all instances of social_php with the tag that makes sense for your application.
Replace /var/app/current/PATH_TO_YOUR_LOG_FILE with your log file location
Here is my Loggly configuration on Elastic Beanstalk, for Linux + log4j.
In the .ebextensions file configuration:
container_commands:
  01_configure_sudo_access:
    command: sed -i -- 's/ requiretty/ \!requiretty/g' /etc/sudoers
  02_loggy_configure:
    command: sudo python .ebextensions/scripts/loggly_config.py
  03_restore_sudo_access:
    command: sed -i -- 's/ \!requiretty/ requiretty/g' /etc/sudoers
The Loggly script in Python for the default AMI (loggly_config.py):
import os

rsyslog_path = '/etc/rsyslog.conf'
loggly_file_path = '/etc/rsyslog.d/22-loggly.conf'

class LogglyConfig:
    def __init__(self):
        self.__linux_log()
        self.__config_loggly_for_log4j()

    def __linux_log(self):
        # not installed on this machine
        if not os.path.exists(loggly_file_path):
            os.system('rm -f configure-linux.sh')
            os.system('wget https://www.loggly.com/install/configure-linux.sh')
            os.system('sudo bash configure-linux.sh -a DOMAIN -t TOKEN -u USER -p PASSWORD -s')

    def __config_loggly_for_log4j(self):
        # uncomment the UDP syslog input so log4j can log to localhost:514
        f = open(rsyslog_path, 'r')
        file_text = f.read()
        f.close()
        file_text = file_text.replace('#$ModLoad imudp', '$ModLoad imudp')
        file_text = file_text.replace('#$UDPServerRun 514', '$UDPServerRun 514')
        f = open(rsyslog_path, 'w')
        f.write(file_text)
        f.close()
        os.system('service rsyslog restart')

LogglyConfig()
In log4j.properties in your Java project:
log4j.rootLogger=INFO, SYSLOG
log4j.appender.SYSLOG=org.apache.log4j.net.SyslogAppender
log4j.appender.SYSLOG.SyslogHost=localhost
log4j.appender.SYSLOG.Facility=Local3
log4j.appender.SYSLOG.Header=true
log4j.appender.SYSLOG.layout=org.apache.log4j.PatternLayout
log4j.appender.SYSLOG.layout.ConversionPattern=java %d{ISO8601} %p %t %c{1}.%M - %m%n