Logstash, EC2 and elasticsearch - amazon-web-services

I have two Elasticsearch nodes set up in EC2 and am trying to use Logstash with them. I get this error when I run Logstash:
log4j, [2014-02-24T10:45:32.722] WARN: org.elasticsearch.discovery.zen.ping.unicast: [Ishihara, Shirow] failed to send ping to [[#zen_unicast_1#][inet[/10.110.65.91:9300]]]
org.elasticsearch.transport.RemoteTransportException: Failed to deserialize exception response from stream
Caused by: org.elasticsearch.transport.TransportSerializationException: Failed to deserialize exception response from stream
at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:169)
at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:123)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
That's a snippet of it.
Here is the conf file I am using with logstash:
input {
  redis {
    host => "10.110.65.91"
    # these settings should match the output of the agent
    data_type => "list"
    key => "logstash"
    # We use the 'json' codec here because we expect to read
    # json events from redis.
    codec => json
  }
}
output {
  stdout { debug => true debug_format => "json" }
  elasticsearch {
    host => "10.110.65.91"
    cluster => searchbuild
  }
}
I'm running Logstash on .91 (I have a second terminal window open). Am I missing something?

I had to change "elasticsearch" to "elasticsearch_http".
Fixed.
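For reference, a minimal sketch of the corrected output block, assuming only the plugin name changes and the default HTTP port 9200 (the host comes from the config above; everything else is left at the plugin defaults):
output {
  stdout { debug => true debug_format => "json" }
  elasticsearch_http {
    # host from the original config; port defaults to 9200
    host => "10.110.65.91"
    # the node-protocol 'cluster' setting is not used by the HTTP output
  }
}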

Related

AWS Glue - Kafka Connection using SASL/SCRAM

I am trying to create an AWS Glue Streaming job that reads from Kafka (MSK) clusters using SASL/SCRAM client authentication for the connection, per
https://aws.amazon.com/about-aws/whats-new/2022/05/aws-glue-supports-sasl-authentication-apache-kafka/
The connection configuration has the following properties (plus adequate subnet and security groups):
"ConnectionProperties": {
"KAFKA_SASL_SCRAM_PASSWORD": "apassword",
"KAFKA_BOOTSTRAP_SERVERS": "theserver:9096",
"KAFKA_SASL_MECHANISM": "SCRAM-SHA-512",
"KAFKA_SASL_SCRAM_USERNAME": "auser",
"KAFKA_SSL_ENABLED": "false"
}
And the actual API method call is:
df = glue_context.create_data_frame.from_options(
    connection_type="kafka",
    connection_options={
        "connectionName": "kafka-glue-connector",
        "security.protocol": "SASL_SSL",
        "classification": "json",
        "startingOffsets": "latest",
        "topicName": "atopic",
        "inferSchema": "true",
        "typeOfData": "kafka",
        "numRetries": 1,
    }
)
When running, the logs show that the client is attempting to connect to the brokers using Kerberos, and it runs into:
22/10/19 18:45:54 INFO ConsumerConfig: ConsumerConfig values:
sasl.mechanism = GSSAPI
security.protocol = SASL_SSL
security.providers = null
send.buffer.bytes = 131072
...
org.apache.kafka.common.errors.SaslAuthenticationException: Failed to configure SaslClientAuthenticator
Caused by: org.apache.kafka.common.KafkaException: Principal could not be determined from Subject, this may be a transient failure due to Kerberos re-login
How can I authenticate the AWS Glue job using SASL/SCRAM? What properties do I need to set in the connection and in the method call?
Thank you

Logstash AWS solving code 403 trying to reconnect

I'm trying to push documents from my local machine to an Elasticsearch server in AWS, but when I do so I get a 403 error and Logstash keeps trying to establish a connection with the server, like so:
[2021-05-09T11:09:52,707][TRACE][logstash.inputs.file ][main] Registering file input {:path=>["~/home/ubuntu/json_try/json_try.json"]}
[2021-05-09T11:09:52,737][DEBUG][logstash.javapipeline ][main] Shutdown waiting for worker thread {:pipeline_id=>"main", :thread=>"#<Thread:0x5033269f run>"}
[2021-05-09T11:09:53,441][DEBUG][logstash.outputs.amazonelasticsearch][main] Waiting for connectivity to Elasticsearch cluster. Retrying in 4s
[2021-05-09T11:09:56,403][INFO ][logstash.outputs.amazonelasticsearch][main] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>https://my-dom.co:8001/scans, :path=>"/"}
[2021-05-09T11:09:56,461][WARN ][logstash.outputs.amazonelasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://my-dom.co:8001/scans", :error_type=>LogStash::Outputs::AmazonElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '403' contacting Elasticsearch at URL 'https://my-dom.co:8001/scans/'"}
[2021-05-09T11:09:56,849][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2021-05-09T11:09:56,853][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2021-05-09T11:09:57,444][DEBUG][logstash.outputs.amazonelasticsearch][main] Waiting for connectivity to Elasticsearch cluster. Retrying in 8s
.
.
.
I'm using the following logstash conf file:
input {
  file {
    type => "json"
    path => "~/home/ubuntu/json_try/json_try.json"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
output {
  amazon_es {
    hosts => ["https://my-dom.co/scans"]
    port => 8001
    ssl => true
    region => "us-east-1b"
    index => "snapshot-%{+YYYY.MM.dd}"
  }
}
Also I've exported AWS keys for the SSL to work. Is there anything I'm missing in order for the connection to succeed?
I was able to solve this by using elasticsearch as my output plugin instead of amazon_es.
This requires the cloud_id of the target AWS node, the cloud_auth for it, and also the target index in Elasticsearch for the data to be sent to. So the conf file will look something like this:
input {
  file {
    type => "json"
    path => "~/home/ubuntu/json_try/json_try.json"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
output {
  elasticsearch {
    cloud_id => "node_name:node_hash"
    cloud_auth => "auth_hash"
    index => "snapshot-%{+YYYY.MM.dd}"
  }
}

What is the grok pattern for this jenkins log?

Can you please help me with the grok pattern for this Jenkins sample data or log? The log is only a single line.
hudson.slaves.CommandLauncher launch\nSEVERE: Unable to launch the agent for dot-dewsttlas403-ci\njava.io.IOException: Failed to create a temporary file in /opt_shared/iit_slave/jenkins_slave/workspace\n\tat hudson.util.AtomicFileWriter.<init>(AtomicFileWriter.java:144)\n\tat hudson.util.AtomicFileWriter.<init>(AtomicFileWriter.java:109)\n\tat hudson.util.AtomicFileWriter.<init>(AtomicFileWriter.java:84)\n\tat hudson.util.AtomicFileWriter.<init>(AtomicFileWriter.java:74)\n\tat hudson.util.TextFile.write(TextFile.java:116)\n\tat jenkins.branch.WorkspaceLocatorImpl$WriteAtomic.invoke(WorkspaceLocatorImpl.java:264)\n\tat jenkins.branch.WorkspaceLocatorImpl$WriteAtomic.invoke(WorkspaceLocatorImpl.java:256)\n\tat hudson.FilePath$FileCallableWrapper.call(FilePath.java:3042)\n\tat hudson.remoting.UserRequest.perform(UserRequest.java:212)\n\tat hudson.remoting.UserRequest.perform(UserRequest.java:54)\n\tat hudson.remoting.Request$2.run(Request.java:369)\n\tat hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n\tat java.lang.Thread.run(Thread.java:745)\n\tSuppressed: hudson.remoting.Channel$CallSiteStackTrace: Remote call to dot-dewsttlas403-ci\n\t\tat hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1743)\n\t\tat hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357)\n\t\tat hudson.remoting.Channel.call(Channel.java:957)\n\t\tat hudson.FilePath.act(FilePath.java:1069)\n\t\tat hudson.FilePath.act(FilePath.java:1058)\n\t\tat jenkins.branch.WorkspaceLocatorImpl.save(WorkspaceLocatorImpl.java:254)\n\t\tat jenkins.branch.WorkspaceLocatorImpl.access$500(WorkspaceLocatorImpl.java:80)\n\t\tat jenkins.branch.WorkspaceLocatorImpl$Collector.onOnline(WorkspaceLocatorImpl.java:561)\n\t\tat hudson.slaves.SlaveComputer.setChannel(SlaveComputer.java:697)\n\t\tat hudson.slaves.SlaveComputer.setChannel(SlaveComputer.java:432)\n\t\tat hudson.slaves.CommandLauncher.launch(CommandLauncher.java:154)\n\t\tat hudson.slaves.SlaveComputer$1.call(SlaveComputer.java:294)\n\t\tat jenkins.util.ContextResettingExecutorService$2.call(ContextResettingExecutorService.java:46)\n\t\tat jenkins.security.ImpersonatingExecutorService$2.call(ImpersonatingExecutorService.java:71)\n\t\tat java.util.concurrent.FutureTask.run(Unknown Source)\n\t\tat java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)\n\t\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)\n\t\tat java.lang.Thread.run(Unknown Source)\nCaused by: java.io.IOException: No space left on device\n\tat java.io.UnixFileSystem.createFileExclusively(Native Method)\n\tat java.io.File.createTempFile(File.java:2024)\n\tat hudson.util.AtomicFileWriter.<init>(AtomicFileWriter.java:142)\n\t... 15 more\n
I am only interested in extracting the following from the above log.
agent status, agent name
Expected Result:
agent status: Unable
agent name: dot-dewsttlas403-ci
SEVERE: %{DATA:agent_status} to launch the agent for %{DATA:agent_name}\\n
This should give you the result you are interested in, but it would only work if the structure of the message is the same.
Configuration used:
input { stdin {} }
filter {
  grok {
    match => {
      "message" => "SEVERE: %{DATA:agent_status} to launch the agent for %{DATA:agent_name}\\n"
    }
  }
}
output { stdout { codec => json } }
Result:
{
  "host": "MY_COMPUTER",
  "agent_status": "Unable",
  "message": "hudson.slaves.CommandLauncher launch\\nSEVERE: Unable to launch the agent for dot-dewsttlas403-ci\\njava.io.IOException: Failed to create a temporary file in /opt_shared/iit_slave/jenkins_slave/workspace\\n\\tat \r",
  "agent_name": "dot-dewsttlas403-ci",
  "@timestamp": "2020-01-29T16:54:27.256Z",
  "@version": "1"
}
Also to help you next time you're working with logstash-grok:
An online tester for patterns:
http://grokconstructor.appspot.com/do/match
The basic grok patterns: https://github.com/logstash-plugins/logstash-patterns-core/blob/master/patterns/grok-patterns

Logstash Output Amazon ES Error

I'm using Logstash 2.3.4 and Amazon Elasticsearch Service (2.3).
My config:
input {
  jdbc {
    # MySQL JDBC connection string to our database
    jdbc_connection_string => "jdbc:mysql://awsmigration.XXXXXXXX.ap-southeast-1.rds.amazonaws.com:3306/table_receipt?zeroDateTimeBehavior=convertToNull&autoReconnect=true&useSSL=false"
    # The user we wish to execute our statement as
    jdbc_user => "XXXXXXXX"
    jdbc_password => "XXXXXXXX"
    # The path to our downloaded jdbc driver
    jdbc_driver_library => "/opt/logstash/drivers/mysql-connector-java-5.1.39/mysql-connector-java-5.1.39-bin.jar"
    # The name of the driver class for MySQL
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    # our query
    statement => "SELECT * from Receipt"
    jdbc_paging_enabled => true
    jdbc_page_size => 200
  }
}
output {
  #stdout { codec => json_lines }
  amazon_es {
    hosts => ["search-XXXXXXXX.ap-southeast-1.es.amazonaws.com"]
    region => "ap-southeast-1"
    index => "slurp_receipt"
    document_type => "Receipt"
    document_id => "%{uid}"
  }
}
After running the command
bin/logstash agent -f db.conf
I got this error:
Attempted to send a bulk request to Elasticsearch configured at '["https://search-XXXXXXXX.ap-southeast-1.es.amazonaws.com:443"]', but an error occurred and it failed! Are you sure you can reach elasticsearch from this machine using the configuration provided? {:client_config=>{:hosts=>["https://search-slurp-wjgudsrlz66esh6hyrijaagamu.ap-southeast-1.es.amazonaws.com:443"], :region=>"ap-southeast-1", :aws_access_key_id=>nil, :aws_secret_access_key=>nil, :transport_options=>{:request=>{:open_timeout=>0, :timeout=>60}, :proxy=>nil}, :transport_class=>Elasticsearch::Transport::Transport::HTTP::AWS, :logger=>nil, :tracer=>nil, :reload_connections=>false, :retry_on_failure=>false, :reload_on_failure=>false, :randomize_hosts=>false, :http=>{:scheme=>"https", :user=>nil, :password=>nil, :port=>443}}, :error_message=>"undefined method `credentials' for nil:NilClass", :error_class=>"NoMethodError", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-core-2.1.36/lib/aws-sdk-core/signers/v4.rb:24:in `initialize'", "/opt/logstash/vendor/local_gems/b0f0ff24/logstash-output-amazon_es-1.0-java/lib/logstash/outputs/amazon_es/aws_v4_signer_impl.rb:36:in `signer'", "/opt/logstash/vendor/local_gems/b0f0ff24/logstash-output-amazon_es-1.0-java/lib/logstash/outputs/amazon_es/aws_v4_signer_impl.rb:48:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/faraday-0.9.2/lib/faraday/rack_builder.rb:139:in `build_response'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/faraday-0.9.2/lib/faraday/connection.rb:377:in `run_request'", "/opt/logstash/vendor/local_gems/b0f0ff24/logstash-output-amazon_es-1.0-java/lib/logstash/outputs/amazon_es/aws_transport.rb:49:in `perform_request'", "org/jruby/RubyProc.java:281:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/transport/base.rb:257:in `perform_request'", "/opt/logstash/vendor/local_gems/b0f0ff24/logstash-output-amazon_es-1.0-java/lib/logstash/outputs/amazon_es/aws_transport.rb:45:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/client.rb:128:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-api-1.0.18/lib/elasticsearch/api/actions/bulk.rb:90:in `bulk'", "/opt/logstash/vendor/local_gems/b0f0ff24/logstash-output-amazon_es-1.0-java/lib/logstash/outputs/amazon_es/http_client.rb:53:in `bulk'", "/opt/logstash/vendor/local_gems/b0f0ff24/logstash-output-amazon_es-1.0-java/lib/logstash/outputs/amazon_es.rb:321:in `submit'", "org/jruby/ext/thread/Mutex.java:149:in `synchronize'", "/opt/logstash/vendor/local_gems/b0f0ff24/logstash-output-amazon_es-1.0-java/lib/logstash/outputs/amazon_es.rb:318:in `submit'", "/opt/logstash/vendor/local_gems/b0f0ff24/logstash-output-amazon_es-1.0-java/lib/logstash/outputs/amazon_es.rb:351:in `flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/buffer.rb:219:in `buffer_flush'", "org/jruby/RubyHash.java:1342:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/buffer.rb:216:in `buffer_flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/buffer.rb:159:in `buffer_receive'", "/opt/logstash/vendor/local_gems/b0f0ff24/logstash-output-amazon_es-1.0-java/lib/logstash/outputs/amazon_es.rb:311:in `receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/outputs/base.rb:83:in `multi_receive'", "org/jruby/RubyArray.java:1613:in `each'", 
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/outputs/base.rb:83:in `multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/output_delegator.rb:130:in `worker_multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/output_delegator.rb:114:in `multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/pipeline.rb:301:in `output_batch'", "org/jruby/RubyHash.java:1342:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/pipeline.rb:301:in `output_batch'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/pipeline.rb:232:in `worker_loop'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/pipeline.rb:201:in `start_workers'"], :level=>:error}
May I know how to solve this problem?
Thank you
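The backtrace above shows :aws_access_key_id=>nil and :aws_secret_access_key=>nil reaching the AWS V4 signer, so no credentials were resolved when signing the bulk request. Below is a minimal sketch of the same output block with credentials supplied explicitly, assuming the installed logstash-output-amazon_es accepts the aws_access_key_id / aws_secret_access_key settings that appear in its client_config (the values are placeholders):
output {
  amazon_es {
    hosts => ["search-XXXXXXXX.ap-southeast-1.es.amazonaws.com"]
    region => "ap-southeast-1"
    # assumption: explicit credentials; the plugin may also pick them up from the
    # environment or an instance profile if they are available there
    aws_access_key_id => "YOUR_ACCESS_KEY_ID"
    aws_secret_access_key => "YOUR_SECRET_ACCESS_KEY"
    index => "slurp_receipt"
    document_type => "Receipt"
    document_id => "%{uid}"
  }
}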

Configuring Blackfire on a base virtual box using Chef

I'm trying to give Blackfire.io (by Sensiolabs) a try to profile an existing PHP application running on a Vagrant machine (with PHP 5.3) on Mac.
I'm using Chef to provision my machine with Blackfire, but when running "vagrant provision" I get the following error:
default: STDERR: The server ID parameter is not set. Please run
blackfire-agent -register to configure it.
...which I already did.
This is my Vagrantfile:
is_windows = (RbConfig::CONFIG['host_os'] =~ /mswin|mingw|cygwin/)
Vagrant.configure("2") do |config|
  ..
  config.vm.box = "covex/ubuntu1204-x64"
  config.omnibus.chef_version = :latest
  config.vm.provision "chef_solo" do |chef|
    chef.json = {
      :blackfire => {
        :'server-id' => "d4860b49-be67-404b-9fa1-b..",
        :'server-token' => "c412751f30d6c724033d8408e.."
      }
    }
    chef.add_recipe "blackfire"
  end
end
I followed the installation steps on https://blackfire.io/getting-started, except for the Probe paragraph.
Is my Vagrantfile configured incorrectly, so that it can't read the server ID and token? Is "brew install blackfire-php53" needed for this? If so, is there a way to configure this through my Vagrantfile?
Guessing you are using https://supermarket.chef.io/cookbooks/blackfire
You missed the agent node in the config tree:
{
  "blackfire" => {
    "agent" => {
      "server-id" => "your server-id",
      "server-token" => "your server-token",
    }
  }
}
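Applied to the Vagrantfile from the question, the chef.json block would then look something like this (a sketch, assuming the same blackfire cookbook; the truncated IDs are the placeholders from the question):
chef.json = {
  :blackfire => {
    # nest the settings under the 'agent' node expected by the cookbook
    :agent => {
      :'server-id' => "d4860b49-be67-404b-9fa1-b..",
      :'server-token' => "c412751f30d6c724033d8408e.."
    }
  }
}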