Configure output as elasticsearch in logstash - amazon-web-services

I'm working on an Elasticsearch project where I want to get data from Amazon S3. For this, I'm using Logstash. To configure it,
output {
  elasticsearch {
    host => 'host_'
    cluster => 'cluster_name'
  }
}
is the usual approach.
But,I'm using Amazon elasticsearch service. It has only end-point and Domain ARN. How should I specify host name in this case?

In the simplest case where your ES cluster on AWS is open to the world, you can have a simple elasticsearch output config like this:
For Logstash 2.0:
output {
  elasticsearch {
    hosts => 'search-xxxxxxxxxxxx.us-west-2.es.amazonaws.com:80'
  }
}
Don't forget the port number at the end.
Make sure to use the hosts setting (not host).
For Logstash 1.5.x:
output {
  elasticsearch {
    host => 'search-xxxxxxxxxxxx.us-west-2.es.amazonaws.com'
    port => 80
    protocol => 'http'
  }
}
The port number is a separate setting named port.
Make sure to use the host setting (not hosts), i.e. the opposite of 2.0.
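As a quick sanity check (a sketch using the placeholder endpoint from the examples above), you can confirm the endpoint is reachable from the Logstash machine before wiring it in; an open cluster answers with a small JSON document describing itself:
curl http://search-xxxxxxxxxxxx.us-west-2.es.amazonaws.com:80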

Related

An Issue with an AWS EC2 instance WebSocket connection failed: Error in connection establishment: net::ERR_CONNECTION_TIMED_OUT

When I ran the chat app from localhost, connected to a MySQL database and coded in PHP using WebSockets, it worked fine.
Also, when I tried to run it from the PuTTY terminal, logged in with my SSH credentials, it displayed "Server Started" with port 8080:
ubuntu#ec3-193-123-96:/home/admin/web/ec3-193-123-96.eu-central-1.compute.amazonaws.com/public_html/application/libraries/server$ php websocket_server.php
PHP Fatal error: Uncaught React\Socket\ConnectionException: Could not bind to tcp://0.0.0.0:8080: Address already in use in /home/admin/web/ec3-193-123-96.eu-central-1.compute.amazonaws.com/public_html/application/libraries/vendor/react/socket/src/Server.php:29
Stack trace:
#0 /home/admin/web/ec3-193-123-96.eu-central-1.compute.amazonaws.com/public_html/application/libraries/vendor/cboden/ratchet/src/Ratchet/Server/IoServer.php(70): React\Socket\Server->listen(8080, '0.0.0.0')
#1 /home/admin/web/ec3-193-123-96.eu-central-1.compute.amazonaws.com/public_html/application/libraries/server/websocket_server.php(121): Ratchet\Server\IoServer::factory(Object(Ratchet\Http\HttpServer), 8080)
#2 {main}
thrown in /home/admin/web/ec3-193-123-96.eu-central-1.compute.amazonaws.com/public_html/application/libraries/vendor/react/socket/src/Server.php on line 29
ubuntu#ec3-193-123-96:/home/admin/web/ec3-193-123-96.eu-central-1.compute.amazonaws.com/public_html/application/libraries/server$
So I changed port 8080 to port 8282, and it was successful:
ubuntu#ec3-193-123-96:/home/admin/web/ec3-193-123-96.eu-central-1.compute.amazonaws.com/public_html/application/libraries/server$ php websocket_server.php
Keeping the shell script running, open a couple of web browser windows, and open a Javascript console or a page with the following Javascript:
var conn = new WebSocket('ws://0.0.0.0:8282');
conn.onopen = function(e) {
    console.log("Connection established!");
};
conn.onmessage = function(e) {
    console.log(e.data);
};
From the browser console results:
WebSocket connection to 'ws://5.160.195.94:8282/' failed: Error in
connection establishment: net::ERR_CONNECTION_TIMED_OUT
websocket_server.php
<?php
use Ratchet\Server\IoServer;
use Ratchet\Http\HttpServer;
use Ratchet\WebSocket\WsServer;
use MyApp\Chat;
require dirname(__DIR__) . '/vendor/autoload.php';
$server = IoServer::factory(
    new HttpServer(
        new WsServer(
            new Chat()
        )
    ),
    8282
);
$server->run();
I even tried assigning the public IP and the private IP, but to no avail; it produced the same result.
These are the composer files generated after executing $ composer require cboden/ratchet and adding the src folder.
composer.json(On AmazonWebServer)
{
    "autoload": {
        "psr-4": {
            "MyApp\\": "src"
        }
    },
    "require": {
        "cboden/ratchet": "^0.4.1"
    }
}
composer.json(On localhost)
{
    "autoload": {
        "psr-4": {
            "MyApp\\": "src"
        }
    },
    "require": {
        "cboden/ratchet": "^0.4.3"
    }
}
How am I supposed to resolve this when connecting over the WebSocket, especially from the hosted server with a domain name such as
http://ec3-193-123-96.eu-central-1.compute.amazonaws.com/
var conn = new WebSocket('ws://localhost:8282');
From the Security Group: Inbound tab and Outbound tab rules (screenshots not included).
When it comes to a connectivity issue with an EC2 instance, there are a few things you need to check to find the root cause.
First, SSH into the EC2 instance the application is running on and make sure you can reach the service from within the instance itself (see the sketch below). If that works, it is a network-related issue that needs solving.
If step 1 was successful, you have identified it as a network issue. To solve it, check the following:
Check that an Internet Gateway is created and attached to your VPC.
Next, check that your subnet's route table has its default route pointing to the Internet Gateway. Check this link to complete this and the step above.
Check your subnet's Network ACL rules to make sure the ports are not blocked.
Finally, check your instance's Security Group, as you have shown.
If you need access via the EC2 DNS name, you will need to provision your EC2 instance in a public subnet and assign an Elastic IP.
If the issue still exists, check whether the EC2 status checks pass, or try provisioning a new instance.
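A rough sketch of the step-1 check, assuming the WebSocket server is running on port 8282 as above (exact commands and output vary by distro):
sudo netstat -tlnp | grep 8282     # confirm a PHP process is actually listening on 8282
curl -i http://127.0.0.1:8282/     # Ratchet should answer the local HTTP request if the server is up
If the local check succeeds but the browser still times out, the problem is in one of the network layers listed above.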

What all configuration is required for logstash to work

I want to set up the ELK stack. I have installed Elasticsearch and Kibana on one machine and Logstash on another. Below is my Logstash file, named logstash.conf, located at /etc/logstash/conf.d, with the following configuration:
input {
  stdin {}
  file {
    type => "syslog"
    path => "/u01/workspace/data/tenodata/logs/teno.log"
    start_position => "end"
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => ["172.3.9.5:9200"]
  }
}
but somehow it is not able to connect to Elasticsearch.
Can someone help me with this? Also, what is the location of the Elasticsearch log?
The path.logs property in the elasticsearch.yml file (found in the config folder of wherever you installed ES, possibly /etc/elasticsearch) points to where you can find the logs (for me it's /var/log/elasticsearch).
I would check connectivity from the machine: maybe run curl 172.3.9.5:9200 and see if it spits back anything. If it just hangs, look at your security groups to make sure traffic is allowed.
If that's good, make sure that Elasticsearch is set to listen for connections from the outside world. By default it is set to bind only to localhost; you can edit the network.host property in elasticsearch.yml and set it to 0.0.0.0 to see if that works. (Just FYI, if you're using the ec2-discovery plugin you would set it to _ec2_.)
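For reference, a minimal sketch of the relevant elasticsearch.yml lines (the 0.0.0.0 binding and paths are illustrative; narrow the bind address for anything beyond testing):
# /etc/elasticsearch/elasticsearch.yml
network.host: 0.0.0.0               # bind on all interfaces so the remote Logstash host can connect
# network.host: _ec2_               # use this instead when relying on the ec2 discovery plugin
path.logs: /var/log/elasticsearch   # where the Elasticsearch logs end up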

Logstash Unable to push logs to ES

I am collecting logs from an NXLog server and sending them to my Logstash server (ELK stack on an AWS machine). It was sending the logs to ES perfectly, but then it just stopped, with the following errors:
{:timestamp=>"2015-10-13T06:41:25.526000+0000", :message=>"Got error to send bulk of actions: localhost:9200 failed to respond", :level=>:error}
{:timestamp=>"2015-10-13T06:41:25.531000+0000", :message=>"Failed to flush outgoing items",
{:timestamp=>"2015-10-13T06:41:26.538000+0000", :message=>"Got error to send bulk of actions: Connection refused", :level=>:error}
Is this a security group issue, or something else?
Moreover, my Logstash output file looks like:
output {
  stdout { }
  elasticsearch { host => "localhost" protocol => "http" port => "9200" }
}
I found the answer to this: I am running version 1.5, which has a bug :(
https://github.com/elastic/logstash/issues/2894

aws opsworks multiple nodejs apps?

I have one OpsWorks Node.js stack on which I have set up multiple Node.js apps. The problem is that every app's server.js script listens on port 80 for the Amazon life check, but that port can be used by only one of them.
I don't know how to solve this. I have read the Amazon documentation but could not find a solution. I read that I could change the deploy recipe variables to move this life check to a different port, but it didn't work. Any help?
I battled with this issue for a while and eventually found a very simple solution.
The port is set in the deploy cookbook's attributes...
https://github.com/aws/opsworks-cookbooks/blob/release-chef-11.10/deploy/attributes/deploy.rb
by the line...
default[:deploy][application][:nodejs][:port] = deploy[:ssl_support] ? 443 : 80
you can override this using the stack's custom json, such as:
{
  "deploy" : {
    "app_name_1": {
      "nodejs": {
        "port": 80
      }
    },
    "app_name_2": {
      "nodejs": {
        "port": 3000
      }
    }
  },
  "mongodb" : {
    ...
  }
}
Now the monitrc files at /etc/monit.d/node_web_app-.monitrc should reflect their respective ports, and monit should keep them alive!
My solution was to implement a life-check Node service that listens on port 80. When the Amazon life check request hits that service, it responds and executes its own logic to check the health of all the other services. It works great.
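A minimal sketch of such a service, assuming the other apps were moved to ports such as 3000 via the custom JSON above (the file name healthcheck.js, the port list, and the probe logic are illustrative, not the author's actual code):
// healthcheck.js - answers the Amazon life check on port 80 and probes the real apps
const http = require('http');

const APP_PORTS = [3000]; // hypothetical: the ports the actual Node.js apps listen on

http.createServer((req, res) => {
  let pending = APP_PORTS.length;
  let healthy = true;
  const finish = () => {
    if (--pending === 0) {
      // Report 200 only if every app answered; 503 lets the life check see a failure.
      res.writeHead(healthy ? 200 : 503, { 'Content-Type': 'text/plain' });
      res.end(healthy ? 'OK' : 'UNHEALTHY');
    }
  };
  if (pending === 0) {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('OK');
    return;
  }
  APP_PORTS.forEach((port) => {
    http.get({ host: '127.0.0.1', port: port, path: '/' }, (appRes) => {
      appRes.resume(); // drain the response body; reaching here means the app is up
      finish();
    }).on('error', () => {
      healthy = false; // connection refused or similar marks the stack unhealthy
      finish();
    });
  });
}).listen(80); // port 80 is what the Amazon life check hits (needs root or a port capability)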

Version incompatibility issue with Logstash and Elasticsearch?

I'm using Logstash 1.4.1 with Elasticsearch (installed as EC2 cluster) 1.1.1 and Elasticsearch AWS plugin 2.1.1.
To check whether Logstash is properly talking to Elasticsearch, I use:
bin/logstash -e 'input { stdin { } } output { elasticsearch { host => <ES_cluster_IP> } }'
and I get -
log4j, [2014-06-10T18:30:17.622] WARN: org.elasticsearch.discovery: [logstash-ip-xxxxxxxx-20308-2010] waited for 30s and no initial state was set by the discovery
Exception in thread ">output" org.elasticsearch.discovery.MasterNotDiscoveredException: waited for [30s]
at org.elasticsearch.action.support.master.TransportMasterNodeOperationAction$3.onTimeout(org/elasticsearch/action/support/master/TransportMasterNodeOperationAction.java:180)
at org.elasticsearch.cluster.service.InternalClusterService$NotifyTimeout.run(org/elasticsearch/cluster/service/InternalClusterService.java:492)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java/util/concurrent/ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java/util/concurrent/ThreadPoolExecutor.java:615)
at java.lang.Thread.run(java/lang/Thread.java:744)
But when I use -
bin/logstash -e 'input { stdin { } } output { elasticsearch_http { host => <ES_cluster_IP> } }'
it works fine, with the following warning:
Using milestone 2 output plugin 'elasticsearch_http'. This plugin should be stable, but if you see strange behavior, please let us know! For more information on plugin milestones, see http://logstash.net/docs/1.4.1/plugin-milestones {:level=>:warn}
I don't understand why I can't use elasticsearch instead of elasticsearch_http, even when the versions are compatible.
I'd take care to set the protocol option to one of "http", "transport", or "node". The documentation on this is contradictory: on the one hand it states that the option is optional and there is no default, while at the end it says the default differs depending on the runtime:
The ‘node’ protocol will connect to the cluster as a normal
Elasticsearch node (but will not store data). This allows you to use
things like multicast discovery. If you use the node protocol, you
must permit bidirectional communication on the port 9300 (or whichever
port you have configured).
The ‘transport’ protocol will connect to the host you specify and will
not show up as a ‘node’ in the Elasticsearch cluster. This is useful
in situations where you cannot permit connections outbound from the
Elasticsearch cluster to this Logstash server.
The ‘http’ protocol will use the Elasticsearch REST/HTTP interface to
talk to elasticsearch.
All protocols will use bulk requests when talking to Elasticsearch.
The default protocol setting under java/jruby is “node”. The default
protocol on non-java rubies is “http”
The problem here is that the protocol setting has some pretty significant impact on how you connect to Elasticsearch and how it will operate, yet it's not clear what it will do when you don't set protocol. Better to pick one and set it -
http://logstash.net/docs/1.4.1/outputs/elasticsearch#protocol
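For example, adapting the command line from the question to force the HTTP protocol (the quoting and port are a sketch, not taken from the original):
bin/logstash -e 'input { stdin { } } output { elasticsearch { host => "<ES_cluster_IP>" protocol => "http" port => 9200 } }'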
The Logstash elasticsearch plugin page mentions:
VERSION NOTE: Your Elasticsearch cluster must be running Elasticsearch 1.1.1. If you use any other version of Elasticsearch, you should set protocol => http in this plugin.
So it is not a version incompatibility.
Elasticsearch uses port 9300 for multicast discovery and for communication with other clients. So it is likely that your Logstash can't talk to your Elasticsearch cluster. Please check your server configuration to see whether the firewall has blocked port 9300.
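A quick way to verify this from the Logstash machine, as a rough sketch (using the same placeholder as in the question):
nc -zv <ES_cluster_IP> 9300            # transport port used by the node and transport protocols
curl -s http://<ES_cluster_IP>:9200    # REST port used by the http protocol; should return cluster info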
Set the protocol in the Logstash elasticsearch output:
output {
  elasticsearch { host => "localhost" protocol => "http" port => "9200" }
  stdout { codec => rubydebug }
}