I am collecting logs from an NXLog server and sending them to my Logstash server (ELK stack on an AWS machine). It was sending the logs to ES perfectly, but then it just stopped, with the following errors:
{:timestamp=>"2015-10-13T06:41:25.526000+0000", :message=>"Got error to send bulk of actions: localhost:9200 failed to respond", :level=>:error}
{:timestamp=>"2015-10-13T06:41:25.531000+0000", :message=>"Failed to flush outgoing items",
{:timestamp=>"2015-10-13T06:41:26.538000+0000", :message=>"Got error to send bulk of actions: Connection refused", :level=>:error}
Is this a security group issue, or something else?
Moreover, my Logstash output config looks like:
output {
  stdout { }
  elasticsearch { host => "localhost" protocol => "http" port => "9200" }
}
I found the answer to this: I am running Logstash 1.5, which has a bug :(
https://github.com/elastic/logstash/issues/2894
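A possible way out (a sketch, assuming a standard tarball install and that a fixed version of the plugin has been published) is to update the elasticsearch output plugin in place rather than reinstalling Logstash:

# Run from the Logstash install directory
bin/plugin update logstash-output-elasticsearch

Then restart Logstash so the updated plugin is picked up.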
I am using the AWS ECS service, and I have 5 running tasks on a cluster that were launched with the awsvpc network mode.
The problem is that a task is supposed to send a request to Twilio for the SMS code, but the request to Twilio times out.
const twilioClient = require('twilio')(accountSid, authToken)

try {
  // Send the one-time code as an SMS via Twilio's REST API
  await twilioClient.messages.create({
    body: `${code}`,
    from: phoneNumber,
    to: userInput.phone
  })
} catch (err) {
  console.log('Twilio Error: ', err)
  return false
}
The output below is the error I have logged on CloudWatch:
Twilio Error:  { Error: ETIMEDOUT
    at Timeout._onTimeout (/srv/node_modules/request/request.js:849:19)
    at ontimeout (timers.js:436:11)
    at tryOnTimeout (timers.js:300:5)
    at listOnTimeout (timers.js:263:5)
    at Timer.processTimers (timers.js:223:10) code: 'ETIMEDOUT', connect: true }
The strange part is that the same code works when the task runs with the default network mode on AWS ECS.
I am using the EC2 launch type, not Fargate.
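One more data point I can gather: a check of whether the task has any outbound internet access at all (a sketch; this assumes shell access to the EC2 container instance and that docker exec works against the running container):

# On the EC2 container instance hosting the task:
docker ps                                  # find the task's container ID
docker exec -it <container_id> sh -c 'curl -sv --max-time 10 https://api.twilio.com'

If that also times out, the issue is probably not Twilio-specific: with awsvpc mode the task gets its own elastic network interface without a public IP, so outbound traffic only works if the subnet routes through a NAT gateway (or otherwise provides internet access).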
Looking forward to any pointers on this.
Cheers.
I'm trying to send logs from my C++ application to Logstash using the log4cplus library.
I have read the log4cplus documentation and used the configuration below to set up a SocketAppender.
log4cplus.rootLogger=INFO, SA
log4cplus.appender.SA=log4cplus::SocketAppender
log4cplus.appender.SA.port=5044
log4cplus.appender.SA.host=127.0.0.1
log4cplus.appender.SA.serverName=MyServer
log4cplus.appender.SA.layout=log4cplus::PatternLayout
log4cplus.appender.SA.layout.ConversionPattern=%m%n
In the code, I initialize the logger from that file and send a test message:
#include <log4cplus/configurator.h>
#include <log4cplus/logger.h>
#include <log4cplus/loggingmacros.h>
using namespace log4cplus;

PropertyConfigurator config(configFile);
config.configure();
std::string msg = "test msg";
Logger root = Logger::getRoot();
LOG4CPLUS_INFO(root, msg);
But I was not getting the expected message on the Logstash server; instead I got what looks like garbage data, as shown below.
{
      "@version" => "1",
          "host" => "localhost",
       "message" => "\u0000\u0000\u0000q\u0003\u0001\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0004root\u0000\u0000N\u0000\u0000\u0000\u0000\u0000\u0000\u0000\btest msg\u0000\u0000\u0000\u000F140382836238144Z{\u0014N\u0000\u0004\u0003\xC0\u0000\u0000\u0000\u0013../src/property.cpp\u0000\u0000\u00002\u0000\u0000\u0000\u0015int main(int, char**)",
    "@timestamp" => 2018-02-07T14:59:26.284Z,
          "port" => 47148
}
I have read the documentation of log4cplus and tried several configuration changes, and nothing worked. I could send a log line to the Logstash server using the netcat command, so at least I'm sure that my Logstash configuration is correct.
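For reference, the netcat test looked roughly like this (a sketch; the exact message text is illustrative):

echo "test msg" | nc 127.0.0.1 5044

I have configured Logstash with the conf file below.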
input {
  tcp {
    port => 5044
  }
}
filter {
}
output {
  stdout {
    codec => rubydebug
  }
}
Can anyone tell me what I'm doing wrong with log4cplus? Is it possible to use log4cplus to send logs to a Logstash server?
log4cplus::SocketAppender uses a binary wire format that is specific to log4cplus; it is designed to talk to log4cplus's own logging server, not to a plain TCP listener, which is why Logstash's tcp input shows it as garbage. If you want to log into Logstash, you will have to create your own appender that writes plain text (or JSON) to the socket.
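A common alternative (a sketch, not from the original answer): have log4cplus write plain text to a file with a RollingFileAppender instead of a SocketAppender, and let Logstash tail that file with its file input. The path below is hypothetical:

bin/logstash -e 'input { file { path => "/var/log/myapp/app.log" start_position => "beginning" } } output { stdout { codec => rubydebug } }'

Each line of the file then arrives as a normal message event, with no custom appender code needed.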
I want to set up an ELK stack. I have installed Elasticsearch and Kibana on one machine, and Logstash on another. Below is my Logstash file, named logstash.conf, located at /etc/logstash/conf.d, with the following configuration:
input {
  stdin {}
  file {
    type => syslog
    path => "/u01/workspace/data/tenodata/logs/teno.log"
    start_position => end
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => ["172.3.9.5:9200"]
  }
}
but somehow it is not able to connect to Elasticsearch.
Can someone help me with this? Also, what is the location of the Elasticsearch log?
The path.logs property in the elasticsearch.yml file (found in the config folder of wherever you installed ES, possibly /etc/elasticsearch) points to where you can find the logs (for me it's /var/log/elasticsearch).
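For example (a sketch; paths assume a package install and the default cluster name "elasticsearch"):

grep '^path.logs' /etc/elasticsearch/elasticsearch.yml
tail -n 50 /var/log/elasticsearch/elasticsearch.log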
I would check connectivity from the Logstash machine first: run curl 172.3.9.5:9200 and see if it spits back anything. If it just hangs, look at your security groups to make sure traffic is allowed.
If that's good, make sure that Elasticsearch is set to listen for connections from the outside world. By default it only binds to localhost; you can edit the network.host property in elasticsearch.yml and set it to 0.0.0.0 to see if that works. (Just FYI, if you're using the ec2-discovery plugin you would set it to _ec2_.)
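Putting that together, a minimal sketch (paths assume a package install; adjust the restart command to your init system):

# On the Elasticsearch machine, in /etc/elasticsearch/elasticsearch.yml:
#   network.host: 0.0.0.0
sudo service elasticsearch restart
# Then, from the Logstash machine:
curl 172.3.9.5:9200   # should return the JSON banner with the cluster name and version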
I'm working on an Elasticsearch project where I want to get data from Amazon S3. For this, I'm using Logstash. To configure the output,
output {
  elasticsearch {
    host => 'host_'
    cluster => 'cluster_name'
  }
}
is the usual approach.
But I'm using the Amazon Elasticsearch Service, which only gives me an endpoint and a Domain ARN. How should I specify the host name in this case?
In the simplest case where your ES cluster on AWS is open to the world, you can have a simple elasticsearch output config like this:
For Logstash 2.0:
output {
  elasticsearch {
    hosts => 'search-xxxxxxxxxxxx.us-west-2.es.amazonaws.com:80'
  }
}
Don't forget the port number at the end, and make sure to use the hosts setting (not host).
For Logstash 1.5.x:
output {
  elasticsearch {
    host => 'search-xxxxxxxxxxxx.us-west-2.es.amazonaws.com'
    port => 80
    protocol => 'http'
  }
}
Here the port number is a separate setting named port, and make sure to use the host setting (not hosts), i.e. the opposite of 2.0.
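Either way, it's worth confirming the endpoint answers over plain HTTP before wiring up Logstash (a sketch; the hostname is the same placeholder as above):

curl -s 'http://search-xxxxxxxxxxxx.us-west-2.es.amazonaws.com:80'

If the domain's access policy allows your IP, this returns the JSON banner with the cluster name and version; if it hangs or returns a 403, fix the access policy first.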
I'm using Logstash 1.4.1 with Elasticsearch 1.1.1 (installed as an EC2 cluster) and the Elasticsearch AWS plugin 2.1.1.
To check whether Logstash is properly talking to Elasticsearch, I use:
bin/logstash -e 'input { stdin { } } output { elasticsearch { host => <ES_cluster_IP> } }'
and I get:
log4j, [2014-06-10T18:30:17.622] WARN: org.elasticsearch.discovery: [logstash-ip-xxxxxxxx-20308-2010] waited for 30s and no initial state was set by the discovery
Exception in thread ">output" org.elasticsearch.discovery.MasterNotDiscoveredException: waited for [30s]
at org.elasticsearch.action.support.master.TransportMasterNodeOperationAction$3.onTimeout(org/elasticsearch/action/support/master/TransportMasterNodeOperationAction.java:180)
at org.elasticsearch.cluster.service.InternalClusterService$NotifyTimeout.run(org/elasticsearch/cluster/service/InternalClusterService.java:492)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java/util/concurrent/ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java/util/concurrent/ThreadPoolExecutor.java:615)
at java.lang.Thread.run(java/lang/Thread.java:744)
But when I use:
bin/logstash -e 'input { stdin { } } output { elasticsearch_http { host => <ES_cluster_IP> } }'
it works fine, with the warning below:
Using milestone 2 output plugin 'elasticsearch_http'. This plugin should be stable, but if you see strange behavior, please let us know! For more information on plugin milestones, see http://logstash.net/docs/1.4.1/plugin-milestones {:level=>:warn}
I don't understand why I can't use elasticsearch instead of elasticsearch_http when the versions are compatible.
I'd take care to set the protocol option to one of "http", "transport", or "node". The documentation on this is contradictory: on the one hand it states that the option is optional and has no default, while at the end it says the default differs depending on the runtime:
The ‘node’ protocol will connect to the cluster as a normal
Elasticsearch node (but will not store data). This allows you to use
things like multicast discovery. If you use the node protocol, you
must permit bidirectional communication on the port 9300 (or whichever
port you have configured).
The ‘transport’ protocol will connect to the host you specify and will
not show up as a ‘node’ in the Elasticsearch cluster. This is useful
in situations where you cannot permit connections outbound from the
Elasticsearch cluster to this Logstash server.
The ‘http’ protocol will use the Elasticsearch REST/HTTP interface to
talk to elasticsearch.
All protocols will use bulk requests when talking to Elasticsearch.
The default protocol setting under java/jruby is “node”. The default
protocol on non-java rubies is “http”
The problem here is that the protocol setting has a pretty significant impact on how you connect to Elasticsearch and how it will operate, yet it's not clear what happens when you don't set protocol. Better to pick one and set it explicitly:
http://logstash.net/docs/1.4.1/outputs/elasticsearch#protocol
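For example, pinning the protocol on the command from the question (a sketch; <ES_cluster_IP> is still a placeholder):

bin/logstash -e 'input { stdin { } } output { elasticsearch { host => "<ES_cluster_IP>" protocol => "http" } }'

With protocol => "http", Logstash talks to port 9200 over REST instead of trying to join the cluster on 9300.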
The Logstash elasticsearch plugin page mentions:
VERSION NOTE: Your Elasticsearch cluster must be running Elasticsearch 1.1.1. If you use
any other version of Elasticsearch, you should set protocol => http in this plugin.
So it is not a version incompatibility.
Elasticsearch uses port 9300 for node-to-node and transport-client communication, which is what the node and transport protocols rely on. So it is probably that your Logstash can't reach your Elasticsearch cluster on that port. Please check your server configuration to see whether the firewall has blocked port 9300.
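A quick reachability check from the Logstash host (a sketch; assumes netcat is installed):

nc -zv <ES_cluster_IP> 9300

If the connection is refused or times out, open port 9300 in the firewall / security group, or switch the output to protocol => "http", which uses port 9200 instead.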
Set the protocol in the Logstash elasticsearch output:
output {
  elasticsearch { host => "localhost" protocol => "http" port => "9200" }
  stdout { codec => rubydebug }
}