Logstash input CloudWatch logs config

I need to pull logs from CloudWatch into Logstash for my application load balancers. I have multiple load balancers that I want to read in. I was wondering if anyone knew the capabilities of the filters field in the config file.
Basically, I am curious whether I can put multiple LoadBalancer IDs in the filters field, or whether I have to have a separate input block for each one:
input {
  cloudwatch {
    namespace => "AWS/ApplicationELB"
    metrics => [my_metrics]
    filters => { "LoadBalancer" => "name1", "LoadBalancer" => "name2" }
    region => "my_region"
  }
}
OR
input {
  cloudwatch {
    namespace => "AWS/ApplicationELB"
    metrics => [my_metrics]
    filters => { "LoadBalancer" => "name1" }
    region => "my_region"
  }
}
input {
  cloudwatch {
    namespace => "AWS/ApplicationELB"
    metrics => [my_metrics]
    filters => { "LoadBalancer" => "name2" }
    region => "my_region"
  }
}
Thanks for the help ahead of time!

What you can possibly do is:
First, create a log group in CloudWatch (follow this link).
Then add this plugin to your Logstash and make use of the log group you created in CloudWatch.
Hope it will help!
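For example, a minimal input sketch of that setup (assuming the community logstash-input-cloudwatch_logs plugin; the log-group name and region below are placeholders):
input {
  cloudwatch_logs {
    # Name of the CloudWatch Logs log group created above (placeholder)
    log_group => ["my-alb-log-group"]
    region => "my_region"
  }
}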

Related

Naming an AWS EC2 Security group

Using the AWS dashboard, and under Security Groups, I see them listed under the following columns:
Name | Security Group ID | Security Group Name | VPC ID | Description | Owner
The AWS PHP SDK v3.xx has a createSecurityGroup method on the Ec2Client that allows the creation of a security group. I am using it, but I can't figure out how to set the "name" value (the first column). The docs do not describe how to do this.
I tried adding a Name parameter (to mimic the CLI), but it did not work.
$Ec2Client = new Aws\Ec2\Ec2Client([
    'version' => 'latest',
    'region'  => 'us-east-1',
    'profile' => 'default'
]);
$SecGroupParams = [
    'Name'        => 'My Security Group',
    'Description' => 'My Security Group',
    'GroupName'   => 'my_security_group',
    'VpcId'       => 'vpc-xxxxxx'
];
$Ec2Client->createSecurityGroup($SecGroupParams);
The group is created, but the name is empty (just like when it's created using the dashboard).
Any idea how to do this?
What you are trying to do with 'Name' => 'My Security Group' will not work, as the Name column is populated from a tag whose key is "Name". So you have to tag your security group; this is done using CreateTags in PHP.
You should set the Name through tags, for example:
$SecGroupParams = [
    'Description' => 'My Security Group',
    'GroupName'   => 'my_security_group',
    'VpcId'       => 'vpc-xxxxxx'
];
$result = $Ec2Client->createSecurityGroup($SecGroupParams);
$Ec2Client->createTags([
    'Resources' => [$result['GroupId']],
    'Tags'      => [['Key' => 'Name', 'Value' => 'My Security Group']]
]);

Push data directly from Filebeats to AWS ES managed service

My issue is that I am trying to stream data from Filebeat to AWS ElasticSearch.
I approached this by providing the AWS endpoint in the beats output entry.
I tried both port 80 and 443 to no avail.
I checked this post, and from it I gather that it is possible to push directly to AWS, but I still cannot figure out how.
It would be really helpful if any of you has been through this and could shed some light!
Thank you!
Turns out it was a problem with permissions.
Make sure that the log files Filebeat is trying to stream have the same permissions as filebeat.yml.
You can simply issue a chmod 777 on both files.
Finally, make sure to append :443 to the AWS ES endpoint.
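For reference, a minimal sketch of the corresponding filebeat.yml output section (the domain endpoint below is a placeholder, not a real one):
output.elasticsearch:
  # AWS ES domain endpoint (placeholder), with :443 appended explicitly
  hosts: ["https://my-domain-abc123.eu-west-1.es.amazonaws.com:443"]
  protocol: "https"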
I was using version 7.10 of Filebeat and Logstash.
A blog post helped me a lot; the steps are as follows:
Open filebeat.yml in any editor of your choice, from
/etc/filebeat/ on Linux or
C:\Program Files\filebeat-7.10.0 on Windows:
filebeat:
  inputs:
    - type: log
      paths:
        - E:/nginx-1.20.1/logs/*.log

filebeat.config.modules:
  enabled: true
  path: ${path.config}/modules.d/*.yml

output:
  logstash:
    hosts: ["localhost:5044"]
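If the YAML indentation is off, Filebeat will refuse to start; running filebeat test config and filebeat test output from the Filebeat directory is a quick way to validate the file and the connection to Logstash before going further.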
Logstash Configuration
input {
  beats {
    port => 5044
    ssl => false
  }
}
filter {
  grok {
    match => [ "message", "%{COMBINEDAPACHELOG}+%{GREEDYDATA:extra_fields}" ]
    overwrite => [ "message" ]
  }
  mutate {
    convert => ["response", "integer"]
    convert => ["bytes", "integer"]
    convert => ["responsetime", "float"]
  }
  geoip {
    source => "clientip"
    target => "geoip"
    add_tag => [ "nginx-geoip" ]
  }
  date {
    match => [ "timestamp", "dd/MMM/YYYY:HH:mm:ss Z" ]
    remove_field => [ "timestamp" ]
  }
  useragent {
    source => "agent"
  }
}
output {
  elasticsearch {
    hosts => ["https://arun-learningsubway-ybalglooophuhyjmik3zmkmiq4.ap-south-1.es.amazonaws.com:443"]
    index => "arun_nginx"
    document_type => "%{[@metadata][type]}"
    user => "myusername"
    password => "mypassword"
    manage_template => false
    template_overwrite => false
    ilm_enabled => false
  }
}
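Once Filebeat and Logstash are both running, one quick way to confirm documents are arriving is to query the domain directly, e.g. curl -u myusername:mypassword "https://<your-endpoint>:443/_cat/indices?v" (endpoint is a placeholder) and look for the arun_nginx index.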

Cloudtrail logs to AWS Elasticsearch

I am attempting to get CloudTrail logs from multiple AWS accounts from S3 into Elasticsearch. Things appeared to work on and off, until now, when everything ground to a halt. The error shown is below:
[2018-10-16T21:33:42,096][ERROR][logstash.outputs.elasticsearch] Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>8}
[2018-10-16T21:33:44,406][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>https://vpc-sec-dummytext.eu-west-1.es.amazonaws.com:443/, :path=>"/"}
[2018-10-16T21:33:44,430][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"https://vpc-sec-dummytext.eu-west-1.es.amazonaws.com:443/"}
[2018-10-16T21:33:51,426][ERROR][logstash.outputs.elasticsearch] Encountered a retryable error. Will Retry with exponential backoff {:code=>413, :url=>"https://vpc-sec-dummytext.eu-west-1.es.amazonaws.com:443/_bulk"}
Here is my Logstash config, as I am using Logstash to do the ingestion:
input {
  s3 {
    bucket => "dummy-s3"
    region => "eu-west-1"
    type => "cloudtrail"
    sincedb_path => "/tmp/logstash/cloudtrail"
    exclude_pattern => "/CloudTrail-Digest/"
    interval => 120
    codec => "json"
  }
}
filter {
  if [type] == "cloudtrail" {
    json {
      source => "message"
    }
    split {
      field => "Records"
      add_tag => "splitted"
    }
    if ("splitted" in [tags]) {
      date {
        match => ["eventTime", "ISO8601"]
        remove_tag => ["splitted"]
        remove_field => ["timestamp"]
      }
    }
    geoip {
      source => "[Records][sourceIPAddress]"
      target => "geoip"
      add_tag => ["cloudtrail-geoip"]
    }
    mutate {
      gsub => [
        "eventSource", "\.amazonaws\.com$", "",
        "apiVersion", "_", "-"
      ]
    }
  }
}
output {
  elasticsearch {
    hosts => ["vpc-sec-dummytext.eu-west-1.es.amazonaws.com:443"]
    ssl => true
    index => "cloudtrail-%{+YYYY.MM.dd}"
    doc_as_upsert => true
    template_overwrite => true
  }
  stdout {
    codec => rubydebug
  }
}
When Logstash is started or restarted on the Ubuntu EC2 instance, logs are ingested for a few minutes and then it stops.
Any help will really be appreciated.

Amazon SNS Service Push Notification delivered or not

I am using AWS SNS to send notifications to different topics, and it is working perfectly.
When I publish a notification, I get an array like:
object(Aws\Result)#84 (1) {
  ["data":"Aws\Result":private]=>
  array(2) {
    ["MessageId"]=>
    string(36) "************-7a29-591f-8765-************"
    ["@metadata"]=>
    array(4) {
      ["statusCode"]=>
      int(200)
      ["effectiveUri"]=>
      string(40) "https://sns.ap-southeast-1.amazonaws.com"
      ["headers"]=>
      array(4) {
        ["x-amzn-requestid"]=>
        string(36) "************-b737-5831-abf4-************"
        ["content-type"]=>
        string(8) "text/xml"
        ["content-length"]=>
        string(3) "294"
        ["date"]=>
        string(29) "Fri, 28 Oct 2016 08:59:05 GMT"
      }
      ["transferStats"]=>
      array(1) {
        ["http"]=>
        array(1) {
          [0]=>
          array(0) {}
        }
      }
    }
  }
}
I am using PHP on the server side.
I need to generate a report for each topic separately: which endpoints (subscribers of the topic) got the notification, which notifications failed, and what the percentage of successful delivery is.
After a lot of research, I found that AWS CloudWatch can do this; I also searched Stack Overflow and found this answer:
How to confirm delivery status when using amazonSNS mobile push?
I also generated some logs in CloudWatch.
By describeLogGroups, I am getting an array like:
[logGroups] => Array
(
    [0] => Array
        (
            [logGroupName] => sns/ap-southeast-1/************/app/GCM/AndroidN
            [creationTime] => ************
            [retentionInDays] => 30
            [metricFilterCount] => 0
            [arn] => arn:aws:logs:ap-southeast-1:************:log-group:sns/ap-southeast-1/************/app/GCM/AndroidN:*
            [storedBytes] => 3133
        )
)
By describeLogStreams, I am getting an array like:
[logStreams] => Array
(
    [0] => Array
        (
            [logStreamName] => 25
            [creationTime] => 1477574852344
            [firstEventTimestamp] => 1477574831966
            [lastEventTimestamp] => 1477574831966
            [lastIngestionTime] => 1477574852374
            [uploadSequenceToken] => ***********************8
            [arn] => arn:aws:logs:ap-southeast-1:*********:log-group:sns/ap-southeast-1/**********/app/GCM/AndroidN:log-stream:25
            [storedBytes] => 627
        )
)
but I am confused about how to access this for a particular topic, because on my website I create topics for groups of users (endpoints), and I want to show, for every topic, how many notifications were sent to or failed for the endpoints, not the topic.
Thanks in anticipation.
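A minimal sketch of reading one of those streams with the same SDK (assuming Aws\CloudWatchLogs\CloudWatchLogsClient, and assuming the SNS delivery-status log format in which each event message is a JSON document carrying a status field) could look like this:
require 'vendor/autoload.php';

use Aws\CloudWatchLogs\CloudWatchLogsClient;

$logsClient = new CloudWatchLogsClient([
    'version' => 'latest',
    'region'  => 'ap-southeast-1',
    'profile' => 'default',
]);

// Log group and stream names as returned by describeLogGroups / describeLogStreams above.
$result = $logsClient->getLogEvents([
    'logGroupName'  => 'sns/ap-southeast-1/************/app/GCM/AndroidN',
    'logStreamName' => '25',
    'startFromHead' => true,
]);

$delivered = 0;
$failed = 0;
foreach ($result['events'] as $event) {
    // Each event message describes one delivery attempt (the "status" field name
    // is assumed from the SNS delivery-status log format).
    $record = json_decode($event['message'], true);
    if (isset($record['status']) && $record['status'] === 'SUCCESS') {
        $delivered++;
    } else {
        $failed++;
    }
}
printf("delivered: %d, failed: %d\n", $delivered, $failed);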

logstash cloudfront codec plugin: Error: Object: #Version: 1.0 is not a legal argument to this wrapper, cause it doesn't respond to "read"

Logstash version 1.5.0.1
I am trying to use the logstash s3 input plugin to download cloudfront logs and the cloudfront codec plugin to filter the stream.
I installed the cloudfront codec with bin/plugin install logstash-codec-cloudfront.
I am getting the following: Error: Object: #Version: 1.0 is not a legal argument to this wrapper, cause it doesn't respond to "read".
Here is the full error message from /var/logs/logstash/logstash.log
{:timestamp=>"2015-08-05T13:35:20.809000-0400", :message=>"A plugin had an unrecoverable error. Will restart this plugin.\n Plugin: <LogStash::Inputs::S3 bucket=>\"[BUCKETNAME]\", prefix=>\"cloudfront/\", region=>\"us-east-1\", type=>\"cloudfront\", secret_access_key=>\"[SECRETKEY]/1\", access_key_id=>\"[KEYID]\", sincedb_path=>\"/opt/logstash_input/s3/cloudfront/sincedb\", backup_to_dir=>\"/opt/logstash_input/s3/cloudfront/backup\", temporary_directory=>\"/var/lib/logstash/logstash\">\n Error: Object: #Version: 1.0\n is not a legal argument to this wrapper, cause it doesn't respond to \"read\".", :level=>:error}
My logstash config file: /etc/logstash/conf.d/cloudfront.conf
input {
  s3 {
    bucket => "[BUCKETNAME]"
    delete => false
    interval => 60 # seconds
    prefix => "cloudfront/"
    region => "us-east-1"
    type => "cloudfront"
    codec => "cloudfront"
    secret_access_key => "[SECRETKEY]"
    access_key_id => "[KEYID]"
    sincedb_path => "/opt/logstash_input/s3/cloudfront/sincedb"
    backup_to_dir => "/opt/logstash_input/s3/cloudfront/backup"
    use_ssl => true
  }
}
I'm successfully using a similar S3 input stream to get my CloudTrail logs into Logstash, based on the answer from a Stack Overflow post.
CloudFront logfile from s3 (I only included the header from the file):
#Version: 1.0
#Fields: date time x-edge-location sc-bytes c-ip cs-method cs(Host) cs-uri-stem sc-status cs(Referer) cs(User-Agent) cs-uri-query cs(Cookie) x-edge-result-type x-edge-request-id x-host-header cs-protocol cs-bytes time-taken x-forwarded-for ssl-protocol ssl-cipher x-edge-response-result-type
The header looks like it is basically the correct format based on lines 26-29 from the cloudfront plugin github repo cloudfront_spec.rb
and the official AWS CloudFront Access Logs docs.
Any ideas? Thanks!
[UPDATE 9/23/2015]
Based on this post I tried using the gzip_lines codec plugin, installed with bin/plugin install logstash-codec-gzip_lines, and parsing the file with a filter; unfortunately I am getting the exact same error. It looks like it is an issue with the first character of the log file being #.
For the record, here is the new attempt, including an updated pattern for parsing the CloudFront logfile due to four new fields:
/etc/logstash/conf.d/cloudfront.conf
input {
  s3 {
    bucket => "[BUCKETNAME]"
    delete => false
    interval => 60 # seconds
    prefix => "cloudfront/"
    region => "us-east-1"
    type => "cloudfront"
    codec => "gzip_lines"
    secret_access_key => "[SECRETKEY]"
    access_key_id => "[KEYID]"
    sincedb_path => "/opt/logstash_input/s3/cloudfront/sincedb"
    backup_to_dir => "/opt/logstash_input/s3/cloudfront/backup"
    use_ssl => true
  }
}
filter {
  grok {
    type => "cloudfront"
    pattern => "%{DATE_EU:date}\t%{TIME:time}\t%{WORD:x_edge_location}\t(?:%{NUMBER:sc_bytes}|-)\t%{IPORHOST:c_ip}\t%{WORD:cs_method}\t%{HOSTNAME:cs_host}\t%{NOTSPACE:cs_uri_stem}\t%{NUMBER:sc_status}\t%{GREEDYDATA:referrer}\t%{GREEDYDATA:User_Agent}\t%{GREEDYDATA:cs_uri_query}\t%{GREEDYDATA:cookies}\t%{WORD:x_edge_result_type}\t%{NOTSPACE:x_edge_request_id}\t%{HOSTNAME:x_host_header}\t%{URIPROTO:cs_protocol}\t%{INT:cs_bytes}\t%{GREEDYDATA:time_taken}\t%{GREEDYDATA:x_forwarded_for}\t%{GREEDYDATA:ssl_protocol}\t%{GREEDYDATA:ssl_cipher}\t%{GREEDYDATA:x_edge_response_result_type}"
  }
  mutate {
    type => "cloudfront"
    add_field => [ "listener_timestamp", "%{date} %{time}" ]
  }
  date {
    type => "cloudfront"
    match => [ "listener_timestamp", "yy-MM-dd HH:mm:ss" ]
  }
}
(This question should probably be marked as a duplicate, but until then I am copying my answer to the same question on Server Fault.)
I had the same issue; changing from
codec => "gzip_lines"
to
codec => "plain"
in the input fixed it for me. It looks like the S3 input automatically uncompresses gzip files: https://github.com/logstash-plugins/logstash-input-s3/blob/master/lib/logstash/inputs/s3.rb#L13
FTR here is the full config that is working for me:
input {
  s3 {
    bucket => "[BUCKET NAME]"
    delete => false
    interval => 60 # seconds
    prefix => "CloudFront/"
    region => "us-east-1"
    type => "cloudfront"
    codec => "plain"
    secret_access_key => "[SECRETKEY]"
    access_key_id => "[KEYID]"
    sincedb_path => "/opt/logstash_input/s3/cloudfront/sincedb"
    backup_to_dir => "/opt/logstash_input/s3/cloudfront/backup"
    use_ssl => true
  }
}
filter {
  if [type] == "cloudfront" {
    if ( ("#Version: 1.0" in [message]) or ("#Fields: date" in [message]) ) {
      drop {}
    }
    grok {
      match => { "message" => "%{DATE_EU:date}\t%{TIME:time}\t%{WORD:x_edge_location}\t(?:%{NUMBER:sc_bytes}|-)\t%{IPORHOST:c_ip}\t%{WORD:cs_method}\t%{HOSTNAME:cs_host}\t%{NOTSPACE:cs_uri_stem}\t%{NUMBER:sc_status}\t%{GREEDYDATA:referrer}\t%{GREEDYDATA:User_Agent}\t%{GREEDYDATA:cs_uri_query}\t%{GREEDYDATA:cookies}\t%{WORD:x_edge_result_type}\t%{NOTSPACE:x_edge_request_id}\t%{HOSTNAME:x_host_header}\t%{URIPROTO:cs_protocol}\t%{INT:cs_bytes}\t%{GREEDYDATA:time_taken}\t%{GREEDYDATA:x_forwarded_for}\t%{GREEDYDATA:ssl_protocol}\t%{GREEDYDATA:ssl_cipher}\t%{GREEDYDATA:x_edge_response_result_type}" }
    }
    mutate {
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "listener_timestamp", "%{date} %{time}" ]
    }
    date {
      match => [ "listener_timestamp", "yy-MM-dd HH:mm:ss" ]
    }
    date {
      locale => "en"
      timezone => "UCT"
      match => [ "listener_timestamp", "yy-MM-dd HH:mm:ss" ]
      target => "@timestamp"
      add_field => { "debug" => "timestampMatched" }
    }
  }
}
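When tweaking patterns like these, it can save a restart cycle to validate the config file first; in the Logstash 1.5 era that was bin/logstash -f /etc/logstash/conf.d/cloudfront.conf --configtest (later versions use -t / --config.test_and_exit).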