I want to set up an ELK stack: I have installed Elasticsearch and Kibana on one machine and Logstash on another. Below is my Logstash file, named logstash.conf and located at /etc/logstash/conf.d, with the following configuration:
input {
  stdin {}
  file {
    type => "syslog"
    path => "/u01/workspace/data/tenodata/logs/teno.log"
    start_position => "end"
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => ["172.3.9.5:9200"]
  }
}
But somehow it is not able to connect to Elasticsearch.
Can someone help me with this? Also, what is the location of the Elasticsearch log?
The path.logs property in the elasticsearch.yml file (found in the config folder of wherever you installed ES, possibly /etc/elasticsearch) points to where you can find the logs (for me it's /var/log/elasticsearch).
I would check connectivity from the Logstash machine: run curl 172.3.9.5:9200 and see if it spits back anything. If it just hangs, look at your security groups to make sure traffic is allowed.
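If the node is reachable, curl should print a small JSON banner, roughly like this (the values below are illustrative, not from your cluster):

curl 172.3.9.5:9200
{
  "name" : "node-1",
  "cluster_name" : "elasticsearch",
  "version" : { "number" : "..." },
  "tagline" : "You Know, for Search"
}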
If that's good, make sure that Elasticsearch is set to listen for connections from the outside world. By default it binds only to localhost; you can edit the network.host property in elasticsearch.yml and set it to 0.0.0.0 to see if that works. (Just FYI: if you're using the ec2-discovery plugin, you would set it to _ec2_.)
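The relevant elasticsearch.yml lines would look like this (the log path is the one from my install; adjust both values to your setup):

# elasticsearch.yml
network.host: 0.0.0.0          # or _ec2_ when using the ec2-discovery plugin
path.logs: /var/log/elasticsearch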
When I ran the chat app (coded in PHP, connected to a MySQL database via WebSocket) from localhost, it was successful.
Also, when I ran it from a PuTTY terminal logged in with my SSH credentials, it displayed "Server Started" with port 8080.
ubuntu@ec3-193-123-96:/home/admin/web/ec3-193-123-96.eu-central-1.compute.amazonaws.com/public_html/application/libraries/server$ php websocket_server.php
PHP Fatal error: Uncaught React\Socket\ConnectionException: Could not bind to tcp://0.0.0.0:8080: Address already in use in /home/admin/web/ec3-193-123-96.eu-central-1.compute.amazonaws.com/public_html/application/libraries/vendor/react/socket/src/Server.php:29
Stack trace:
#0 /home/admin/web/ec3-193-123-96.eu-central-1.compute.amazonaws.com/public_html/application/libraries/vendor/cboden/ratchet/src/Ratchet/Server/IoServer.php(70): React\Socket\Server->listen(8080, '0.0.0.0')
#1 /home/admin/web/ec3-193-123-96.eu-central-1.compute.amazonaws.com/public_html/application/libraries/server/websocket_server.php(121): Ratchet\Server\IoServer::factory(Object(Ratchet\Http\HttpServer), 8080)
#2 {main}
thrown in /home/admin/web/ec3-193-123-96.eu-central-1.compute.amazonaws.com/public_html/application/libraries/vendor/react/socket/src/Server.php on line 29
ubuntu@ec3-193-123-96:/home/admin/web/ec3-193-123-96.eu-central-1.compute.amazonaws.com/public_html/application/libraries/server$
So I changed port 8080 to port 8282, and it was successful:
ubuntu@ec3-193-123-96:/home/admin/web/ec3-193-123-96.eu-central-1.compute.amazonaws.com/public_html/application/libraries/server$ php websocket_server.php
Keeping the shell script running, open a couple of web browser windows and open a JavaScript console, or a page with the following JavaScript:
var conn = new WebSocket('ws://0.0.0.0:8282');
conn.onopen = function(e) {
    console.log("Connection established!");
};
conn.onmessage = function(e) {
    console.log(e.data);
};
The browser console showed:
WebSocket connection to 'ws://5.160.195.94:8282/' failed: Error in connection establishment: net::ERR_CONNECTION_TIMED_OUT
websocket_server.php
<?php
use Ratchet\Server\IoServer;
use Ratchet\Http\HttpServer;
use Ratchet\WebSocket\WsServer;
use MyApp\Chat;

require dirname(__DIR__) . '/vendor/autoload.php';

$server = IoServer::factory(
    new HttpServer(
        new WsServer(
            new Chat()
        )
    ),
    8282
);
$server->run();
I even tried assigning the public IP and the private IP, but with no luck; it gave the same old result.
These are the composer files generated after running $ composer require cboden/ratchet and adding the src folder.
composer.json (on the Amazon web server):
{
    "autoload": {
        "psr-4": {
            "MyApp\\": "src"
        }
    },
    "require": {
        "cboden/ratchet": "^0.4.1"
    }
}
composer.json (on localhost):
{
    "autoload": {
        "psr-4": {
            "MyApp\\": "src"
        }
    },
    "require": {
        "cboden/ratchet": "^0.4.3"
    }
}
How am I supposed to resolve this when connecting over WebSocket, especially from the hosted server with a domain name such as
http://ec3-193-123-96.eu-central-1.compute.amazonaws.com/
var conn = new WebSocket('ws://localhost:8282');
From the Security Group: the rules under the Inbound and Outbound tabs (screenshots omitted).
When it comes to a connectivity issue with an EC2 instance, there are a few things you need to check to find the root cause.
SSH into the EC2 instance that the application is running on and make sure you can access the application from within the instance (see the sketch after this list). If it works, then it's a network-related issue that we need to solve.
If step 1 was successful, you have now identified a network issue. To solve it, check the following:
Check if an Internet Gateway is created and attached to your VPC.
Next, check if your subnet's route table has its default route pointing to the Internet Gateway.
Check your subnet's Network ACL rules to see whether the ports are blocked.
Finally, you would want to check your instance's Security Group, as you have shown.
If you need access via the EC2 DNS name, you will need to provision your EC2 instance in a public subnet and assign an Elastic IP.
If the issue still exists, check whether the EC2 status checks pass, or try provisioning a new instance.
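For the first step, a quick pair of checks from inside the instance looks like this (a sketch; the port is the one from this thread, and the tools assume a standard Linux AMI):

# Is the server actually listening?
sudo netstat -tlnp | grep 8282
# Can you reach it locally?
curl -v http://127.0.0.1:8282/

If both work locally but the connection times out remotely, the problem is in the network path (Security Group, NACL, routing). Also note that a browser should never be pointed at ws://0.0.0.0:8282; 0.0.0.0 only makes sense as a server-side bind address. From outside, the client should use the public DNS name, e.g. ws://ec3-193-123-96.eu-central-1.compute.amazonaws.com:8282.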
I'm trying to send logs from my C++ application to Logstash using the log4cplus library.
I have read the log4cplus documentation and used the configuration below for the SocketAppender:
log4cplus.rootLogger=INFO, SA
log4cplus.appender.SA=log4cplus::SocketAppender
log4cplus.appender.SA.port=5044
log4cplus.appender.SA.host=127.0.0.1
log4cplus.appender.SA.serverName=MyServer
log4cplus.appender.SA.layout=log4cplus::PatternLayout
log4cplus.appender.SA.layout.ConversionPattern=%m%n
In the code, I have initialized the logger and tried to send a message to it:
#include <log4cplus/configurator.h>
#include <log4cplus/logger.h>
#include <log4cplus/loggingmacros.h>
using namespace log4cplus;

PropertyConfigurator config(configFile);
config.configure();
std::string msg = "test msg";
Logger root = Logger::getRoot();
LOG4CPLUS_INFO(root, msg);
But I was not getting the expected message on the Logstash server. I was getting some garbage data, as shown below:
{
"#version" => "1",
"host" => "localhost",
"message" => "\u0000\u0000\u0000q\u0003\u0001\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0004root\u0000\u0000N
\u0000\u0000\u0000\u0000\u0000\u0000\u0000\btest
msg\u0000\u0000\u0000\u000F140382836238144Z{\u0014N\u0000\u0004\u0003\xC0\u0000\u0000\u0000\u0013../src/property.cpp\u0000\u0000\u00002\u0000\u0000\u0000\u0015int
main(int, char**)",
"#timestamp" => 2018-02-07T14:59:26.284Z,
"port" => 47148 }
I have read the log4cplus documentation and tried several configuration changes, and nothing worked. I could send logs to the Logstash server using the netcat command, so at least I'm sure my Logstash configuration is correct. I have configured Logstash with the conf file below:
input {
  tcp {
    port => 5044
  }
}
filter {
}
output {
  stdout {
    codec => rubydebug
  }
}
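For reference, the netcat check mentioned above can be as simple as this (assuming the tcp input above, run from the same host):

echo "test msg" | nc 127.0.0.1 5044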
Can anyone tell me what I'm doing wrong with log4cplus? Is it possible to use log4cplus for sending logs to a Logstash server?
log4cplus::SocketAppender writes a binary serialization format that is specific to log4cplus (that is the "garbage" you see in the message field); it is meant for log4cplus's own logging server, not for plain TCP readers. If you want to log into Logstash, you will have to create your own appender that sends plain text.
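A minimal sketch of such an appender, assuming log4cplus 2.x and POSIX sockets (the class name is hypothetical, and connection error handling is omitted). It renders each event with a %m%n pattern layout and writes it as one newline-terminated line, which is what the tcp input above expects:

#include <log4cplus/appender.h>
#include <log4cplus/layout.h>
#include <log4cplus/spi/loggingevent.h>
#include <log4cplus/streams.h>
#include <log4cplus/tstring.h>
#include <cstdint>
#include <string>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

class PlainTcpAppender : public log4cplus::Appender {
public:
    PlainTcpAppender(const std::string & host, int port) {
        // Render events as "<message>\n", matching the original ConversionPattern.
        layout.reset(new log4cplus::PatternLayout(LOG4CPLUS_TEXT("%m%n")));
        fd = ::socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr {};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(static_cast<std::uint16_t>(port));
        ::inet_pton(AF_INET, host.c_str(), &addr.sin_addr);
        ::connect(fd, reinterpret_cast<sockaddr *>(&addr), sizeof(addr));
    }

    ~PlainTcpAppender() { destructorImpl(); }

    void close() override {
        ::close(fd);
        closed = true;
    }

protected:
    void append(const log4cplus::spi::InternalLoggingEvent & event) override {
        // One event per line: Logstash's tcp input splits incoming data on newlines.
        log4cplus::tostringstream oss;
        layout->formatAndAppend(oss, event);
        std::string line = LOG4CPLUS_TSTRING_TO_STRING(oss.str());
        ::send(fd, line.data(), line.size(), 0);
    }

private:
    int fd = -1;
};

You would then attach it programmatically, e.g. Logger::getRoot().addAppender(log4cplus::SharedAppenderPtr(new PlainTcpAppender("127.0.0.1", 5044))). Making it configurable from a properties file additionally requires registering an appender factory, which is out of scope here.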
I'm working on an Elasticsearch project where I want to get data from Amazon S3. For this, I'm using Logstash. To configure it,
output {
  elasticsearch {
    host => 'host_'
    cluster => 'cluster_name'
  }
}
is the usual approach.
But I'm using the Amazon Elasticsearch Service, which has only an endpoint and a Domain ARN. How should I specify the host name in this case?
In the simplest case, where your ES cluster on AWS is open to the world, you can use a simple elasticsearch output config.
For Logstash 2.0:
output {
  elasticsearch {
    hosts => 'search-xxxxxxxxxxxx.us-west-2.es.amazonaws.com:80'
  }
}
Don't forget the port number at the end, and make sure to use the hosts setting (not host).
For Logstash 1.5.x:
output {
  elasticsearch {
    host => 'search-xxxxxxxxxxxx.us-west-2.es.amazonaws.com'
    port => 80
    protocol => 'http'
  }
}
Here the port number is a separate setting named port, and make sure to use the host setting (not hosts), i.e. the opposite of 2.0.
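In either version, you can sanity-check the endpoint from the Logstash machine first (the host below is the same placeholder as above):

curl http://search-xxxxxxxxxxxx.us-west-2.es.amazonaws.com:80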
I'm using Logstash 1.4.1 with Elasticsearch 1.1.1 (installed as an EC2 cluster) and the Elasticsearch AWS plugin 2.1.1.
To check whether Logstash is properly talking to Elasticsearch, I use -
bin/logstash -e 'input { stdin { } } output { elasticsearch { host => <ES_cluster_IP> } }'
and I get -
log4j, [2014-06-10T18:30:17.622] WARN: org.elasticsearch.discovery: [logstash-ip-xxxxxxxx-20308-2010] waited for 30s and no initial state was set by the discovery
Exception in thread ">output" org.elasticsearch.discovery.MasterNotDiscoveredException: waited for [30s]
at org.elasticsearch.action.support.master.TransportMasterNodeOperationAction$3.onTimeout(org/elasticsearch/action/support/master/TransportMasterNodeOperationAction.java:180)
at org.elasticsearch.cluster.service.InternalClusterService$NotifyTimeout.run(org/elasticsearch/cluster/service/InternalClusterService.java:492)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java/util/concurrent/ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java/util/concurrent/ThreadPoolExecutor.java:615)
at java.lang.Thread.run(java/lang/Thread.java:744)
But when I use -
bin/logstash -e 'input { stdin { } } output { elasticsearch_http { host => <ES_cluster_IP> } }'
it works fine, with the warning below -
Using milestone 2 output plugin 'elasticsearch_http'. This plugin should be stable, but if you see strange behavior, please let us know! For more information on plugin milestones, see http://logstash.net/docs/1.4.1/plugin-milestones {:level=>:warn}
I don't understand why I can't use elasticsearch instead of elasticsearch_http, even when the versions are compatible.
I'd take care to set the protocol option to one of "http", "transport", or "node". The documentation on this is contradictory: on the one hand it states that the option is optional and has no default, while at the end it says the default differs depending on the code base:
The ‘node’ protocol will connect to the cluster as a normal Elasticsearch node (but will not store data). This allows you to use things like multicast discovery. If you use the node protocol, you must permit bidirectional communication on port 9300 (or whichever port you have configured).
The ‘transport’ protocol will connect to the host you specify and will not show up as a ‘node’ in the Elasticsearch cluster. This is useful in situations where you cannot permit connections outbound from the Elasticsearch cluster to this Logstash server.
The ‘http’ protocol will use the Elasticsearch REST/HTTP interface to talk to elasticsearch.
All protocols will use bulk requests when talking to Elasticsearch. The default protocol setting under java/jruby is “node”. The default protocol on non-java rubies is “http”.
The problem here is that the protocol setting has some pretty significant impact on how you connect to Elasticsearch and how it will operate, yet it's not clear what it will do when you don't set protocol. Better to pick one and set it -
http://logstash.net/docs/1.4.1/outputs/elasticsearch#protocol
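For example, pinning the protocol explicitly in the same smoke test as above (keeping the question's <ES_cluster_IP> placeholder):

bin/logstash -e 'input { stdin { } } output { elasticsearch { host => "<ES_cluster_IP>" protocol => "http" } }'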
The Logstash elasticsearch plugin page mentions:
VERSION NOTE: Your Elasticsearch cluster must be running Elasticsearch 1.1.1. If you use any other version of Elasticsearch, you should set protocol => http in this plugin.
So it is not a version incompatibility.
Elasticsearch uses port 9300 for multicast and for communicating with other clients. So probably your Logstash can't talk to your Elasticsearch cluster. Please check your server configuration to see whether the firewall has blocked port 9300.
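A quick way to test that from the Logstash host (using the same placeholder as above):

nc -vz <ES_cluster_IP> 9300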
Or set the protocol in the Logstash elasticsearch output (this goes in the Logstash config, not in elasticsearch.yml):
output {
  elasticsearch {
    host => "localhost"
    protocol => "http"
    port => "9200"
  }
  stdout { codec => rubydebug }
}
I am trying to configure Yarn 2.2.0 with Whirr on Amazon EC2; however, I am having some problems. I have modified the Whirr services to support Yarn 2.2.0, and as a result I am able to start jobs and run them successfully. However, I am facing an issue in tracking job progress.
mapreduce.Job (Job.java:monitorAndPrintJob(1317)) - Running job: job_1397996350238_0001
2014-04-20 21:57:24,544 INFO [main] mapred.ClientServiceDelegate (ClientServiceDelegate.java:getProxy(270)) - Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
java.io.IOException: Job status not available
at org.apache.hadoop.mapreduce.Job.updateStatus(Job.java:322)
at org.apache.hadoop.mapreduce.Job.isComplete(Job.java:599)
at org.apache.hadoop.mapreduce.Job.monitorAndPrintJob(Job.java:1327)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1289)
at com.zetaris.hadoop.seek.preprocess.PreProcessorDriver.executeJobs(PreProcessorDriver.java:112)
at com.zetaris.hadoop.seek.JobToJobMatchingDriver.executePreProcessJob(JobToJobMatchingDriver.java:143)
at com.zetaris.hadoop.seek.JobToJobMatchingDriver.executeJobs(JobToJobMatchingDriver.java:78)
at com.zetaris.hadoop.seek.JobToJobMatchingDriver.executeJobs(JobToJobMatchingDriver.java:43)
at com.zetaris.hadoop.seek.JobToJobMatchingDriver.main(JobToJobMatchingDriver.java:56)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
I tried debugging; the problem is with the ApplicationMaster. It has a hostname and an RPC port, where the hostname is the internal hostname, which can only be resolved from within the Amazon network. Ideally it should have been a public Amazon DNS name, but I couldn't set that yet. I tried setting parameters like
yarn.nodemanager.hostname
yarn.nodemanager.address
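for example, in yarn-site.xml (the host value here is hypothetical):

<property>
  <name>yarn.nodemanager.hostname</name>
  <value>ec2-xx-xx-xx-xx.compute-1.amazonaws.com</value>
</property>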
But I couldn't find any change in the ApplicationMaster's hostname or port; they are still the private Amazon internal hostnames. Am I missing anything? Or should I change /etc/hosts on all NodeManager nodes so that the NodeManagers start with the public address?
But that would be overkill, right? Or is there any way I can configure the ApplicationMaster to take the public IP, so that I can remotely track the progress?
I am doing all this because I need to submit jobs remotely, and I am not willing to compromise on this feature. Is there anyone out there who can guide me?
I was successful in configuring the history server, and I am able to access it from the remote client. I used this configuration to do it:
mapreduce.jobhistory.webapp.address
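For example, in mapred-site.xml (the bind address is an assumption; 19888 is the conventional history-server web port):

<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>0.0.0.0:19888</value>
</property>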
When I debugged, I found the following (the snippet is from ClientServiceDelegate):
MRClientProtocol MRClientProxy = null;
try {
    MRClientProxy = getProxy();
    return methodOb.invoke(MRClientProxy, args);
} catch (InvocationTargetException e) {
    // Will not throw out YarnException anymore
    LOG.debug("Failed to contact AM/History for job " + jobId +
        " retrying..", e.getTargetException());
    // Force reconnection by setting the proxy to null.
    realProxy = null;
The proxy is failing to connect because of the private address.
I was able to avoid the issue rather than solve it. The problem is with the resolution of the IP outside the cloud environment.
Initially I tried updating the whirr-yarn source to use the public IP for configuration rather than the private IP, but there were still issues, so I gave up on that.
What I finally did was start the job from within the cloud environment itself, rather than from a host outside the cloud infrastructure. Hopefully somebody finds a better way.
I had the same problem and solved it by adding the following lines to mapred-site.xml. It moves your staging directory from the default tmp directory to your home directory, where you have permission.
<property>
    <name>yarn.app.mapreduce.am.staging-dir</name>
    <value>/user</value>
</property>
In addition to this, you need to create a history directory on hdfs:
hdfs dfs -mkdir -p /user/history
hdfs dfs -chmod -R 1777 /user/history
hdfs dfs -chown mapred:hadoop /user/history
I found this link quite useful for configuring a Hadoop cluster.
// Point the job client at the history server explicitly (host and paths as in the original):
conf.set("mapreduce.jobhistory.address", "hadoop3.hwdomain:10020");
conf.set("mapreduce.jobhistory.intermediate-done-dir", "/mr-history/tmp");
conf.set("mapreduce.jobhistory.done-dir", "/mr-history/done");