My issue is that I am trying to stream data from Filebeat to AWS ElasticSearch.
I approached this by providing the AWS endpoint in the beats output entry.
I tried both port 80 and 443 to no avail.
I checked this post, and from it I gather that it is possible to push directly to AWS, but I still cannot figure out how.
It would be really helpful if any of you has been through this and could shed some light!
Thank you!
Turns out it was a problem with permissions.
Make sure that the logs Filebeat is trying to stream have the same permissions as filebeat.yml.
A quick (if very permissive) fix is to issue a chmod 777 on both files; tighter permissions are safer in production.
Finally, make sure to append :443 to the AWS ES endpoint.
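For reference, a minimal Filebeat output section with the port appended might look like this (a sketch only; the endpoint below is a placeholder for your own AWS ES domain):

```yaml
# Sketch - replace the placeholder host with your AWS ES endpoint, keeping :443
output.elasticsearch:
  hosts: ["https://my-domain.ap-south-1.es.amazonaws.com:443"]
  protocol: "https"
```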
I was using version 7.10 of Filebeat and Logstash.
The blog post below helped me a lot. The steps are as follows:
Open filebeat.yml in any editor of your choice, from
/etc/filebeat/ on Linux or
C:\Program Files\filebeat-7.10.0 on Windows.
filebeat:
  inputs:
    - paths:
        - E:/nginx-1.20.1/logs/*.log
      input_type: log
filebeat.config.modules:
  enabled: true
  path: ${path.config}/modules.d/*.yml
output:
  logstash:
    hosts: ["localhost:5044"]
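As a quick sanity check (assuming Filebeat 7.x), the configuration and the output connection can be verified from the Filebeat host:

```
# Validate the configuration file
filebeat test config -c /etc/filebeat/filebeat.yml
# Verify that Filebeat can reach the configured Logstash output
filebeat test output
```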
Logstash Configuration
input {
  beats {
    port => 5044
    ssl => false
  }
}
filter {
  grok {
    match => [ "message", "%{COMBINEDAPACHELOG}+%{GREEDYDATA:extra_fields}" ]
    overwrite => [ "message" ]
  }
  mutate {
    convert => ["response", "integer"]
    convert => ["bytes", "integer"]
    convert => ["responsetime", "float"]
  }
  geoip {
    source => "clientip"
    target => "geoip"
    add_tag => [ "nginx-geoip" ]
  }
  date {
    match => [ "timestamp", "dd/MMM/YYYY:HH:mm:ss Z" ]
    remove_field => [ "timestamp" ]
  }
  useragent {
    source => "agent"
  }
}
output {
  elasticsearch {
    hosts => ["https://arun-learningsubway-ybalglooophuhyjmik3zmkmiq4.ap-south-1.es.amazonaws.com:443"]
    index => "arun_nginx"
    document_type => "%{[@metadata][type]}"
    user => "myusername"
    password => "mypassword"
    manage_template => false
    template_overwrite => false
    ilm_enabled => false
  }
}
First off, a warning: I'm junior level with little experience using CentOS.
I'm running a Puppet environment with a few different machines; some example modules I'm running are consul and puppet-dns. For the Ubuntu machines, I have used netplan to configure my DNS clients.
Dns Server machine
include dns::server
# Forwarders
dns::server::options { '/etc/bind/named.conf.options':
  dnssec_enable     => false,
  dnssec_validation => no,
  forwarders        => [ 'IP1' ],
}
dns::zone { 'consul':
  zone_type       => forward,
  forward_policy  => only,
  allow_forwarder => [ '127.0.0.1 port 8600' ],
}
DNS Client setup
/^(Debian|Ubuntu)$/: {
  class { 'netplan':
    config_file => '/etc/netplan/50-cloud-init.yaml',
    ethernets   => {
      'ens3' => {
        'dhcp4' => true,
        'nameservers' => {
          'search'    => ['node.consul'],
          'addresses' => [ "$dir_ip" ],
        }
      }
    },
    netplan_apply => true,
  }
}
In order to replicate this on CentOS 7, I came across ifcfg files
(/etc/sysconfig/network-scripts/ifcfg-ens3); however, I am not sure how to replicate the result from above within one of these files. Does anyone have experience with this?
After some reading, I decided to edit /etc/resolv.conf with the help of the Puppet module saz-resolv_conf:
class { 'resolv_conf':
  nameservers => ["$dir_ip"],
  searchpath  => ['node.consul'],
}
I was a bit skeptical about this at first since the file had automated items from OpenStack, however, everything is working as expected.
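For illustration, the module writes out an /etc/resolv.conf along these lines (a sketch; the nameserver address is a placeholder standing in for $dir_ip):

```
# /etc/resolv.conf as generated by saz-resolv_conf (values are placeholders)
search node.consul
nameserver 10.0.0.2
```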
I have seen the posts on Stack Overflow with issues similar to mine, but none of them helped me resolve the issue. This is why I am creating a new post.
When I first set AWS SNS up, I tested sending SMS using the online console, it worked!
Then I wrote PHP code to send a sample message. I got some errors but was able to resolve them easily. However, even when I got a success response, the message was not received at all.
I thought it was a sending-limit issue, so I contacted AWS and had my limit increased to 20 USD/month. I updated the preferences, but still the same result: I get a success response, and the dashboard shows that a message was sent successfully (although it takes time to update the number of messages sent), but the message is never received.
Here is my code for reference:
<?php
require './aws/aws-autoloader.php';

use Aws\Sns\SnsClient;
use Aws\Exception\AwsException;

$sdk = new SnsClient([
    'region' => 'us-east-1',
    'version' => 'latest',
    'credentials' => [
        'key' => 'XXXXXXXXXXXXXXXXXXX',
        'secret' => 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
    ]
]);

try {
    $result = $sdk->publish([
        'Message' => 'Reminder - You are scheduled for a session on 2020-05-20 at 4:30 PM',
        'MessageStructure' => 'String',
        'PhoneNumber' => '+1XXX789XXXX',
        'MessageAttributes' => [
            'AWS.SNS.SMS.SenderID' => [
                'DataType' => 'String',
                'StringValue' => 'MyName'
            ],
            'AWS.SNS.SMS.SMSType' => [
                'DataType' => 'String',
                'StringValue' => 'Transactional'
            ]
        ]
    ]);
    var_dump($result);
    echo "\n";
} catch (AwsException $e) {
    // output error message if fails
    var_dump($e->getMessage());
}
And here is the result object:
object(Aws\Result)#119 (2) {
  ["data":"Aws\Result":private]=>
  array(2) {
    ["MessageId"]=>
    string(36) "8cf11950-cdb0-5503-9b69-4e6e9b61eaba"
    ["@metadata"]=>
    array(4) {
      ["statusCode"]=>
      int(200)
      ["effectiveUri"]=>
      string(35) "https://sns.us-east-1.amazonaws.com"
      ["headers"]=>
      array(4) {
        ["x-amzn-requestid"]=>
        string(36) "0488b803-8776-57bc-b9a9-ef3dd1a71805"
        ["content-type"]=>
        string(8) "text/xml"
        ["content-length"]=>
        string(3) "294"
        ["date"]=>
        string(29) "Tue, 19 May 2020 21:50:09 GMT"
      }
      ["transferStats"]=>
      array(1) {
        ["http"]=>
        array(1) {
          [0]=>
          array(0) {
          }
        }
      }
    }
  }
  ["monitoringEvents":"Aws\Result":private]=>
  array(0) {
  }
}
I am out of ideas. Not sure how to resolve this issue. Any help would be appreciated.
Thanks,
I believe that, by its nature, SMS can fail from time to time.
From my own observation, AWS employs regional SMS providers that help them deliver the SMS to your carriers.
Sometimes the SMS provider fails to send; sometimes messages are rejected by the receiving carrier.
At the moment, we have no way to be sure whether a given message failed (and react to that situation). If the SMS is critical to your business, I suggest using another SMS service in conjunction with AWS.
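One thing worth ruling out first (a sketch, reusing the $sdk client from the question; the phone number is a placeholder) is whether the destination number has opted out of SMS, which also produces a successful publish with no delivery:

```php
<?php
// Sketch: calls the SNS CheckIfPhoneNumberIsOptedOut API through the same
// SnsClient instance ($sdk) shown in the question
$result = $sdk->checkIfPhoneNumberIsOptedOut([
    'phoneNumber' => '+1XXX789XXXX',
]);
if ($result['isOptedOut']) {
    echo "This number has opted out of receiving SMS\n";
}
```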
I am attempting to get CloudTrail logs from multiple AWS accounts from S3 into Elasticsearch. Things appeared to be working on and off, until now, when everything ground to a halt. The error shown is below:
[2018-10-16T21:33:42,096][ERROR][logstash.outputs.elasticsearch] Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>8}
[2018-10-16T21:33:44,406][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>https://vpc-sec-dummytext.eu-west-1.es.amazonaws.com:443/, :path=>"/"}
[2018-10-16T21:33:44,430][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"https://vpc-sec-dummytext.eu-west-1.es.amazonaws.com:443/"}
[2018-10-16T21:33:51,426][ERROR][logstash.outputs.elasticsearch] Encountered a retryable error. Will Retry with exponential backoff {:code=>413, :url=>"https://vpc-sec-dummytext.eu-west-1.es.amazonaws.com:443/_bulk"}
Also, here is my Logstash config, as I am using Logstash to do the ingestion:
```
input {
  s3 {
    bucket => "dummy-s3"
    region => "eu-west-1"
    type => "cloudtrail"
    sincedb_path => "/tmp/logstash/cloudtrail"
    exclude_pattern => "/CloudTrail-Digest/"
    interval => 120
    codec => "json"
  }
}
filter {
  if [type] == "cloudtrail" {
    json {
      source => "message"
    }
    split {
      field => "Records"
      add_tag => "splitted"
    }
    if ("splitted" in [tags]) {
      date {
        match => ["eventTime", "ISO8601"]
        remove_tag => ["splitted"]
        remove_field => ["timestamp"]
      }
    }
    geoip {
      source => "[Records][sourceIPAddress]"
      target => "geoip"
      add_tag => ["cloudtrail-geoip"]
    }
    mutate {
      gsub => [
        "eventSource", "\.amazonaws\.com$", "",
        "apiVersion", "_", "-"
      ]
    }
  }
}
output {
  elasticsearch {
    hosts => ["vpc-sec-dummytext.eu-west-1.es.amazonaws.com:443"]
    ssl => true
    index => "cloudtrail-%{+YYYY.MM.dd}"
    doc_as_upsert => true
    template_overwrite => true
  }
  stdout {
    codec => rubydebug
  }
}
```
When Logstash is started or restarted on the Ubuntu EC2 instance, logs are ingested for a few minutes and then it stops.
Any help will really be appreciated.
I need to pull in logs from CloudWatch to Logstash for my application load balancers. I have multiple load balancers that I want to read in. I was wondering if anyone knew the capabilities of the filters field in the config file.
Basically, I am curious whether I can put multiple LoadBalancer IDs in the filters field, or whether I have to have a separate input block for each one:
input {
  cloudwatch {
    namespace => "AWS/ApplicationELB"
    metrics => [my_metrics]
    filters => {"LoadBalancer" => "name1", "LoadBalancer" => "name2"}
    region => "my_region"
  }
}
OR
input {
  cloudwatch {
    namespace => "AWS/ApplicationELB"
    metrics => [my_metrics]
    filters => {"LoadBalancer" => "name1"}
    region => "my_region"
  }
}
input {
  cloudwatch {
    namespace => "AWS/ApplicationELB"
    metrics => [my_metrics]
    filters => {"LoadBalancer" => "name2"}
    region => "my_region"
  }
}
Thanks for the help ahead of time!
What you can possibly do is:
First, create a log group in CloudWatch (follow this link).
Then add this plugin to your Logstash and make use of the log group you created in CloudWatch.
Hope it helps!
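A minimal input along those lines might look like this (a sketch, assuming the plugin in question is logstash-input-cloudwatch_logs and using a hypothetical log group name):

```
input {
  cloudwatch_logs {
    # Placeholder log group; substitute the one created in CloudWatch
    log_group => [ "my-alb-log-group" ]
    region => "my_region"
  }
}
```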
Logstash version 1.5.0.1
I am trying to use the logstash s3 input plugin to download cloudfront logs and the cloudfront codec plugin to filter the stream.
I installed the cloudfront codec with bin/plugin install logstash-codec-cloudfront.
I am getting the following: Error: Object: #Version: 1.0 is not a legal argument to this wrapper, cause it doesn't respond to "read".
Here is the full error message from /var/logs/logstash/logstash.log
{:timestamp=>"2015-08-05T13:35:20.809000-0400", :message=>"A plugin had an unrecoverable error. Will restart this plugin.\n Plugin: <LogStash::Inputs::S3 bucket=>\"[BUCKETNAME]\", prefix=>\"cloudfront/\", region=>\"us-east-1\", type=>\"cloudfront\", secret_access_key=>\"[SECRETKEY]/1\", access_key_id=>\"[KEYID]\", sincedb_path=>\"/opt/logstash_input/s3/cloudfront/sincedb\", backup_to_dir=>\"/opt/logstash_input/s3/cloudfront/backup\", temporary_directory=>\"/var/lib/logstash/logstash\">\n Error: Object: #Version: 1.0\n is not a legal argument to this wrapper, cause it doesn't respond to \"read\".", :level=>:error}
My logstash config file: /etc/logstash/conf.d/cloudfront.conf
input {
  s3 {
    bucket => "[BUCKETNAME]"
    delete => false
    interval => 60 # seconds
    prefix => "cloudfront/"
    region => "us-east-1"
    type => "cloudfront"
    codec => "cloudfront"
    secret_access_key => "[SECRETKEY]"
    access_key_id => "[KEYID]"
    sincedb_path => "/opt/logstash_input/s3/cloudfront/sincedb"
    backup_to_dir => "/opt/logstash_input/s3/cloudfront/backup"
    use_ssl => true
  }
}
I'm successfully using a similar s3 input stream to get my CloudTrail logs into Logstash, based on the answer from a Stack Overflow post.
CloudFront logfile from s3 (I only included the header from the file):
#Version: 1.0
#Fields: date time x-edge-location sc-bytes c-ip cs-method cs(Host) cs-uri-stem sc-status cs(Referer) cs(User-Agent) cs-uri-query cs(Cookie) x-edge-result-type x-edge-request-id x-host-header cs-protocol cs-bytes time-taken x-forwarded-for ssl-protocol ssl-cipher x-edge-response-result-type
The header looks like it is basically the correct format, based on lines 26-29 of cloudfront_spec.rb in the CloudFront plugin's GitHub repo and the official AWS CloudFront Access Logs docs.
Any ideas? Thanks!
[UPDATE 9/23/2015]
Based on this post, I tried using the gzip_lines codec plugin (installed with bin/plugin install logstash-codec-gzip_lines) and parsing the file with a filter; unfortunately I am getting the exact same error. It looks like it is an issue with the first character of the log file being #.
For the record, here is the new attempt, including an updated pattern for parsing the cloudfront logfile due to four new fields:
/etc/logstash/conf.d/cloudfront.conf
input {
  s3 {
    bucket => "[BUCKETNAME]"
    delete => false
    interval => 60 # seconds
    prefix => "cloudfront/"
    region => "us-east-1"
    type => "cloudfront"
    codec => "gzip_lines"
    secret_access_key => "[SECRETKEY]"
    access_key_id => "[KEYID]"
    sincedb_path => "/opt/logstash_input/s3/cloudfront/sincedb"
    backup_to_dir => "/opt/logstash_input/s3/cloudfront/backup"
    use_ssl => true
  }
}
filter {
  grok {
    type => "cloudfront"
    pattern => "%{DATE_EU:date}\t%{TIME:time}\t%{WORD:x_edge_location}\t(?:%{NUMBER:sc_bytes}|-)\t%{IPORHOST:c_ip}\t%{WORD:cs_method}\t%{HOSTNAME:cs_host}\t%{NOTSPACE:cs_uri_stem}\t%{NUMBER:sc_status}\t%{GREEDYDATA:referrer}\t%{GREEDYDATA:User_Agent}\t%{GREEDYDATA:cs_uri_stem}\t%{GREEDYDATA:cookies}\t%{WORD:x_edge_result_type}\t%{NOTSPACE:x_edge_request_id}\t%{HOSTNAME:x_host_header}\t%{URIPROTO:cs_protocol}\t%{INT:cs_bytes}\t%{GREEDYDATA:time_taken}\t%{GREEDYDATA:x_forwarded_for}\t%{GREEDYDATA:ssl_protocol}\t%{GREEDYDATA:ssl_cipher}\t%{GREEDYDATA:x_edge_response_result_type}"
  }
  mutate {
    type => "cloudfront"
    add_field => [ "listener_timestamp", "%{date} %{time}" ]
  }
  date {
    type => "cloudfront"
    match => [ "listener_timestamp", "yy-MM-dd HH:mm:ss" ]
  }
}
(This question should probably be marked as a duplicate, but until then I'm copying my answer to the same question on Server Fault.)
I had the same issue; changing from
codec => "gzip_lines"
to
codec => "plain"
in the input fixed it for me. It looks like the S3 input automatically uncompresses gzip files: https://github.com/logstash-plugins/logstash-input-s3/blob/master/lib/logstash/inputs/s3.rb#L13
FTR here is the full config that is working for me:
input {
  s3 {
    bucket => "[BUCKET NAME]"
    delete => false
    interval => 60 # seconds
    prefix => "CloudFront/"
    region => "us-east-1"
    type => "cloudfront"
    codec => "plain"
    secret_access_key => "[SECRETKEY]"
    access_key_id => "[KEYID]"
    sincedb_path => "/opt/logstash_input/s3/cloudfront/sincedb"
    backup_to_dir => "/opt/logstash_input/s3/cloudfront/backup"
    use_ssl => true
  }
}
filter {
  if [type] == "cloudfront" {
    if ( ("#Version: 1.0" in [message]) or ("#Fields: date" in [message]) ) {
      drop {}
    }
    grok {
      match => { "message" => "%{DATE_EU:date}\t%{TIME:time}\t%{WORD:x_edge_location}\t(?:%{NUMBER:sc_bytes}|-)\t%{IPORHOST:c_ip}\t%{WORD:cs_method}\t%{HOSTNAME:cs_host}\t%{NOTSPACE:cs_uri_stem}\t%{NUMBER:sc_status}\t%{GREEDYDATA:referrer}\t%{GREEDYDATA:User_Agent}\t%{GREEDYDATA:cs_uri_stem}\t%{GREEDYDATA:cookies}\t%{WORD:x_edge_result_type}\t%{NOTSPACE:x_edge_request_id}\t%{HOSTNAME:x_host_header}\t%{URIPROTO:cs_protocol}\t%{INT:cs_bytes}\t%{GREEDYDATA:time_taken}\t%{GREEDYDATA:x_forwarded_for}\t%{GREEDYDATA:ssl_protocol}\t%{GREEDYDATA:ssl_cipher}\t%{GREEDYDATA:x_edge_response_result_type}" }
    }
    mutate {
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "listener_timestamp", "%{date} %{time}" ]
    }
    date {
      match => [ "listener_timestamp", "yy-MM-dd HH:mm:ss" ]
    }
    date {
      locale => "en"
      timezone => "UCT"
      match => [ "listener_timestamp", "yy-MM-dd HH:mm:ss" ]
      target => "@timestamp"
      add_field => { "debug" => "timestampMatched" }
    }
  }
}