Unable to send data to AWS Elasticsearch instance using Logstash

I am trying to send data to an AWS Elasticsearch endpoint using Logstash installed on my local machine.
The Logstash conf file looks like this:
input {
  file {
    path => "/path/log.txt"
  }
}
output {
  amazon_es {
    hosts => ["https://search-abclostashtrial-5jdfc43oqql7qsrhfgbvwewku.us-east-2.es.amazonaws.com"]
    action => "index"
    region => "us-east-2"
    index => "trial"
    ssl => true
  }
}
The Elasticsearch access policy looks like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:us-east-2:0415721453395:domain/abclostashtrial/*"
    }
  ]
}
I am using the logstash-output-amazon_es plugin, and I run Logstash like this:
sudo bin/logstash -f /path/logstash/abc.conf
And I get the following error log.
[ERROR] 2019-04-30 20:05:52.900 [Converge PipelineAction::Create<main>] agent - Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create<main>, action_result: false", :backtrace=>nil}
[INFO ] 2019-04-30 20:05:53.165 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
[INFO ] 2019-04-30 20:05:58.037 [LogStash::Runner] runner - Logstash shut down.
What am I missing here?

One option to start with is to create an access key that has rights to write to Elasticsearch and configure it in the output. Example:
amazon_es {
  hosts => ["vpc-xxxxxxxxx-es-yyyyyy4pywmwigwi47em.us-east-1.es.amazonaws.com"]
  region => "us-east-1"
  aws_access_key_id => 'AKIxxxxxxxxxxx'
  aws_secret_access_key => '11xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
  index => "production-logindex-%{+YYYY.MM.dd}"
}
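If you go this route, the IAM user that owns the access key also needs permission to write to the domain. A minimal sketch of such an identity policy, reusing the domain from the question (ACCOUNT_ID is a placeholder), could look like:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "es:ESHttpGet",
        "es:ESHttpPut",
        "es:ESHttpPost"
      ],
      "Resource": "arn:aws:es:us-east-2:ACCOUNT_ID:domain/abclostashtrial/*"
    }
  ]
}
The exact set of es:ESHttp* actions you need depends on what Logstash does; bulk indexing goes over POST/PUT.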

Related

AWS / Ansible - grab an S3 item cross-account

I have an EC2 instance in us-east-1. The instance is trying to grab an item from an S3 bucket that is in us-west-1, but still within the same account. The IAM role attached to the instance (via its instance profile) has the following permissions:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:ListBucket",
        "s3:GetObject",
        "kms:Decrypt"
      ],
      "Resource": [
        "arn:aws:s3:::cross-region-bucket/",
        "arn:aws:s3:::cross-region-bucket/*"
      ],
      "Effect": "Allow"
    }
  ]
}
I've also tried attaching admin access, but still no luck. I am using Ansible to grab an S3 object from this cross-region bucket:
- name: Download object from cross-region s3 bucket
  aws_s3:
    bucket: "cross-region-bucket"
    object: "object.txt"
    dest: "/local/user/object.txt"
    mode: get
    region: "us-west-1"
This seems to work just fine when the bucket is in the same region, but now that I am trying this cross-region S3 get, I am getting the following error:
fatal: [X.X.X.X]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"X.X.X.X\". Make sure this host can be reached over ssh: ", "unreachable": true}
If I set the region in the aws_s3 task to 'us-east-1' (where the instance is), then I get the following error:
fatal: [X.X.X.X]: FAILED! => {"boto3_version": "1.18.19", "botocore_version": "1.21.19", "changed": false, "msg": "Failed while looking up bucket (during bucket_check) cross-region-bucket.: Connect timeout on endpoint URL: \"https://cross-region-bucket.s3.us-west-1.amazonaws.com/\""}
Not sure what is blocking me from accessing the cross-region bucket at this point. Any suggestions?
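One way to separate a networking problem from a permissions problem (a sketch, assuming the AWS CLI is installed on the instance) is to run the same download by hand from the instance and see where it fails:
aws s3api get-object --bucket cross-region-bucket --key object.txt /tmp/object.txt --region us-west-1
A connect timeout on the regional endpoint points at routing or egress (an S3 gateway VPC endpoint, for example, only covers buckets in its own region), whereas an AccessDenied error points back at IAM or the bucket policy.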

AWS ElasticSearch Logstash 403 Forbidden Access

MySQL and Elasticsearch are hosted on AWS, and Logstash is running on an EC2 instance. I am not using a VPC. I can connect to MySQL locally or from my EC2 instance.
Modifying the question a bit: here is the new Logstash file on my EC2 instance. My EC2 instance is not SSL certified; is that a problem?
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://aws.xxxxx.us-east-1.rds.amazonaws.com:3306/stuffed?user=admin&password=pword"
    jdbc_user => "admin"
    jdbc_password => "pword"
    schedule => "* * * * *"
    jdbc_validate_connection => true
    jdbc_driver_library => "mysql-connector-java-8.0.19.jar"
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    statement => "SELECT * from foo"
    type => "foo"
    tags => ["foo"]
  }
  jdbc {
    jdbc_connection_string => "jdbc:mysql://aws.xxxxx.us-east-1.rds.amazonaws.com:3306/stuffed?user=admin&password=pword"
    jdbc_user => "admin"
    jdbc_password => "pword"
    schedule => "* * * * *"
    jdbc_validate_connection => true
    jdbc_driver_library => "mysql-connector-java-8.0.19.jar"
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    statement => "SELECT * from cat"
    type => "cat"
    tags => ["cat"]
  }
  jdbc {
    jdbc_connection_string => "jdbc:mysql://aws.xxxxx.us-east-1.rds.amazonaws.com:3306/stuffed?user=admin&password=pword"
    jdbc_user => "admin"
    jdbc_password => "pword"
    schedule => "* * * * *"
    jdbc_validate_connection => true
    jdbc_driver_library => "mysql-connector-java-8.0.19.jar"
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    statement => "SELECT * from rest"
    type => "rest"
    tags => ["rest"]
  }
}
output {
  stdout { codec => json_lines }
  if "foo" in [tags] {
    amazon_es {
      hosts => ["https://es1.stuffed-es-mysql.us-east-1.es.amazonaws.com"]
      index => "foo"
      region => "us-east-1"
      aws_access_key_id => "id"
      aws_secret_access_key => "key"
      document_type => "foo-%{+YYYY.MM.dd}"
    }
  }
  if "cat" in [tags] {
    amazon_es {
      hosts => ["https://es1.stuffed-es-mysql.us-east-1.es.amazonaws.com"]
      index => "cat"
      region => "us-east-1"
      aws_access_key_id => "id"
      aws_secret_access_key => "key"
      document_type => "cat-%{+YYYY.MM.dd}"
    }
  }
  if "rest" in [tags] {
    amazon_es {
      hosts => ["https://es1.stuffed-es-mysql.us-east-1.es.amazonaws.com"]
      index => "rest"
      region => "us-east-1"
      aws_access_key_id => "id"
      aws_secret_access_key => "key"
      document_type => "rest-%{+YYYY.MM.dd}"
    }
  }
}
Now the issue I'm getting is a 403 forbidden error.
I did create a user in AWS with the AmazonESFullAccess (AWS managed) policy. I'm not sure what else to do anymore. I am not using a VPC (I was trying to avoid that), so I want to stick with public access.
I tried creating a new Elasticsearch Service domain based on this guide: https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-gsg.html
but I get an error: error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [https://user:xxxxxx#mysql-abcdefghijkl.us-east-1.es.amazonaws.com:9200/]
and the Logstash output config for this instance is:
elasticsearch {
  hosts => ["https://es-mysql.us-east-1.es.amazonaws.com/"]
  index => "category"
  user => "user"
  password => "password"
  document_type => "cat-%{+YYYY.MM.dd}"
}
Obviously this isn't the preferred method, but I'm really just trying to set up a dev/personal environment.
Also, I am able to log in to Kibana with this instance.
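As an aside on the HostUnreachableError above: the :9200 in the logged URL is the elasticsearch output's default port, whereas an Amazon ES endpoint serves on 443. A hedged sketch of pointing the output at the HTTPS port explicitly (not necessarily the final fix) would be:
elasticsearch {
  hosts => ["https://es-mysql.us-east-1.es.amazonaws.com:443"]
  index => "category"
  user => "user"
  password => "password"
}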
ACCESS POLICY
For the first Elasticsearch Service domain:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111111111:user/root"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:us-east-1:11111111:domain/stuffed-es-mysql/*"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:us-east-1:166216189490:domain/stuffed-es-mysql/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "11.11.11.111",
            "111.111.1.111",
            "111.111.1.1",
            "1.11.111.111"
          ]
        }
      }
    }
  ]
}
For the second Elasticsearch Service domain I created:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:us-east-1:xxxxxxxxx:domain/es-mysql/*"
    }
  ]
}
So I resolved the two issues.
Problem one: I was still having a problem with my Logstash input connection to RDS (which wasn't the problem posted, but I thought I'd share anyway):
Problem:
The RDS MySQL instance was the wrong version. It was set to the value recommended by AWS (5.7) and I needed the latest 8.0.*. This was causing a problem with JDBC not working correctly.
Solution:
Updated the RDS MySQL instance to 8.0.17. Now my Logstash was able to read the input from the MySQL RDS instance.
Problem two: the output of my Logstash was not working.
Problem:
Receiving a 403 Forbidden error.
Solution:
Removed the HTTPS requirement when setting up the ES service. This most likely worked for me because my EC2 instance is not SSL certified.
The solution to problem two was questioned earlier in my post. Being new to Logstash on EC2 and to the Elasticsearch Service (AWS), I wasn't going with my instincts, but I removed the HTTPS requirement when setting up the ES service. Not the greatest idea, but this is a dev environment, and it fixed the issue. Why? Because my EC2 instance is not SSL certified, which makes sense: a common cause of a 403 error is a source sending plain requests to a destination that requires SSL.
I think this is a common, overlooked problem for people who want to run something on a budget. Most people want to jump straight to your access policy, IP policy or security group, and that's understandable.
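For reference, the HTTPS requirement mentioned above can also be toggled without the console; a rough sketch with the AWS CLI, using the second domain name from the question (check aws es update-elasticsearch-domain-config help for the exact option shape on your CLI version):
aws es update-elasticsearch-domain-config --domain-name es-mysql --domain-endpoint-options EnforceHTTPS=false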

Why can only the root user upload to an S3 bucket when running a Java program from an EC2 instance?

We have a problem in our application uploading data to the S3 bucket crystal-dyn. The upload works fine with the same program on CentOS 6 instances but fails on RHEL 7 instances.
We have EC2 instances with the attached role crystal-role. In turn, this role has the inline policy crystal-policy:
{
  "RoleName": "crystal-role",
  "PolicyDocument": {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Action": [
          "ec2:DescribeInstances"
        ],
        "Resource": "*",
        "Effect": "Allow",
        "Sid": "AllowDescribeInstances"
      },
      {
        "Action": [
          "s3:ListBucket",
          "s3:GetBucketLocation"
        ],
        "Resource": [
          "arn:aws:s3:::crystal-dyn"
        ],
        "Effect": "Allow",
        "Sid": "AllowSeeLogBucket"
      },
      {
        "Action": [
          "s3:PutObject"
        ],
        "Resource": [
          "arn:aws:s3:::crystal-dyn/*"
        ],
        "Effect": "Allow",
        "Sid": "AllowPutLogs"
      },
      {
        "Action": [
          "kms:Encrypt",
          "kms:GenerateDataKey"
        ],
        "Resource": [
          "arn:aws:kms:us-east-1:566:key/a15912a107bb",
          "arn:aws:kms:us-east-1:566201213358:key/158d81e9467a"
        ],
        "Effect": "Allow",
        "Sid": "AllowEncrypt"
      },
      {
        "Action": "sts:AssumeRole",
        "Resource": "arn:aws:iam::389203956472:role/allow-cross-account-exec-api-qa2",
        "Effect": "Allow",
        "Sid": "AllowApiAccess"
      }
    ]
  },
  "PolicyName": "crystal-policy"
}
This policy should allow Java applications running on the instances to upload data to the S3 bucket.
However, this does not happen, so we created a simple Java program to test uploading. Note that I create the AmazonS3 client in three different ways. I run it from the command line:
java -cp ".:lib/*" org.examples.UploadObject
In the lib I have the following jars:
aws-java-sdk-1.10.10.jar aws-java-sdk-s3-1.11.339.jar httpclient-4.5.5.jar jackson-annotations-2.9.5.jar jackson-databind-2.9.5.jar
aws-java-sdk-core-1.11.423.jar commons-logging-1.1.3.jar httpcore-4.4.9.jar jackson-core-2.9.5.jar joda-time-2.9.9.jar
The Java code (relevant imports included for completeness):
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
String clientRegion = "us-east-1";
String bucketName = "crystal-dyn";
String stringObjKeyName = "stringToUploadTest";
try {
    AmazonS3 s3Client = AmazonS3ClientBuilder.standard().withCredentials(DefaultAWSCredentialsProviderChain.getInstance()).withRegion(clientRegion).build();
    System.out.println("s3Client=" + s3Client);
    s3Client.putObject(bucketName, stringObjKeyName, "Uploaded String Object");
    System.out.println("Uploading String is done");
}
catch (AmazonServiceException e) {
    e.printStackTrace();
}
catch (SdkClientException e) {
    e.printStackTrace();
}
try {
    System.out.println("Uploading to S3 bucket=" + bucketName + " string=" + stringObjKeyName + " Building with No Creds");
    AmazonS3 s3Client = AmazonS3ClientBuilder.standard().withRegion(clientRegion).build();
    System.out.println("s3Client=" + s3Client);
    s3Client.putObject(bucketName, stringObjKeyName, "Uploaded String Object");
    System.out.println("Uploading String is done");
}
catch (AmazonServiceException e) {
    e.printStackTrace();
}
catch (SdkClientException e) {
    e.printStackTrace();
}
try {
    System.out.println("Uploading to S3 bucket=" + bucketName + " string=" + stringObjKeyName + " Building with No Creds and No region");
    AmazonS3 s3Client = AmazonS3ClientBuilder.standard().build();
    System.out.println("s3Client=" + s3Client);
    s3Client.putObject(bucketName, stringObjKeyName, "Uploaded String Object");
    System.out.println("Uploading String is done");
}
catch (AmazonServiceException e) {
    e.printStackTrace();
}
catch (SdkClientException e) {
    e.printStackTrace();
}
This program works successfully only when it runs as the root user on the EC2 Linux instance, for all 3 ways of creating the AmazonS3 client. For all other users we got the output with the exception stacks shown below. Note that no user, including root, has a .aws directory with credentials.
s3Client=com.amazonaws.services.s3.AmazonS3Client#682b2fa
com.amazonaws.SdkClientException: Unable to load AWS credentials from
any provider in the chain: [EnvironmentVariableCredentialsProvider:
Unable to load AWS credentials from environment variables (AWS_ACCESS_KEY_ID (or AWS_ACCESS_KEY) and AWS_SECRET_KEY (or AWS_SECRET_ACCESS_KEY)), SystemPropertiesCredentialsProvider: Unable to load AWS credentials from Java system properties (aws.accessKeyId and aws.secretKey), com.amazonaws.auth.profile.ProfileCredentialsProvider#20d525: profile file cannot be null, com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper#3f56875e: Unable to load credentials from service endpoint]
at com.amazonaws.auth.AWSCredentialsProviderChain.getCredentials(AWSCredentialsProviderChain.java:136)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.getCredentialsFromContext(AmazonHttpClient.java:1186)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.runBeforeRequestHandlers(AmazonHttpClient.java:776)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:726)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:719)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:701)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:669)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:651)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:515)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4365)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4312)
at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1755)
at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:3448)
at org.examples.UploadObject.main(UploadObject.java:20)
Uploading to S3 bucket=dynarch-ac-logs-malachite-dyn
string=stringToUploadTest Building with No Creds
s3Client=com.amazonaws.services.s3.AmazonS3Client#740773a3
com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: AF2735F42CCB60D0; S3 Extended Request ID: 7VyScO6XOs00oB/g0k8bqG3X3Ib01n4uT1xg8/2U72TCOKg8YKNIVgQrjjnF6XzUAfoB24wcYZY=), S3 Extended Request ID: 7VyScO6XOs00oB/g0k8bqG3X3Ib01n4uT1xg8/2U72TCOKg8YKNIVgQrjjnF6XzUAfoB24wcYZY=
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1660)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1324)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1074)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:745)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:719)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:701)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:669)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:651)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:515)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4365)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4312)
at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1755)
at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:3448)
at org.examples.UploadObject.main(UploadObject.java:33)
Uploading to S3 bucket=dynarch-ac-logs-malachite-dyn
string=stringToUploadTest Building with No Creds and No region
com.amazonaws.SdkClientException: Unable to find a region via the region provider chain. Must provide an explicit region in the builder or setup environment to supply a region.
at com.amazonaws.client.builder.AwsClientBuilder.setRegion(AwsClientBuilder.java:436)
at com.amazonaws.client.builder.AwsClientBuilder.configureMutableProperties(AwsClientBuilder.java:402)
at com.amazonaws.client.builder.AwsSyncClientBuilder.build(AwsSyncClientBuilder.java:46)
at org.examples.UploadObject.main(UploadObject.java:44)
Instead of removing the rule, can you try the following and see if it resolves the issue? (I would like to test it myself if I had your sample project.) Usually I use the steps below to set up containers that interact with S3 without root permission, using IAM roles:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-agent-install.html
sudo sh -c "echo 'net.ipv4.conf.all.route_localnet = 1' >> /etc/sysctl.conf"
sudo sysctl -p /etc/sysctl.conf
sudo iptables -t nat -A PREROUTING -p tcp -d 169.254.170.2 --dport 80 -j DNAT --to-destination 127.0.0.1:51679
sudo iptables -t nat -A OUTPUT -d 169.254.170.2 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 51679
sudo sh -c 'iptables-save > /etc/iptables/rules.v4'
//OR
sudo sh -c 'iptables-save > /etc/sysconfig/iptables'
It turned out that the problem was an iptables rule that allows access to the instance metadata only for root:
iptables -L | grep root
DROP all -- anywhere instance-data.ec2.internal ! owner UID match root
Once this rule is removed, uploading works for all users.
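For anyone hitting the same thing, a rough sketch of finding and dropping such a rule (assuming it sits in the OUTPUT chain, and <rule-number> is a placeholder for whatever the listing shows), then re-checking the metadata endpoint as a non-root user:
sudo iptables -L OUTPUT --line-numbers    # locate the DROP rule with "! owner UID match root"
sudo iptables -D OUTPUT <rule-number>     # delete it by its line number
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/    # should now list the role for any user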

Logstash-cloudwatch-input plugin is not sending data to Elasticsearch

I have an EC2 instance set up in a private VPC network with the IAM role shown below:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CloudWatchAccess",
      "Effect": "Allow",
      "Action": [
        "cloudwatch:*"
      ],
      "Resource": "*"
    },
    {
      "Sid": "EC2",
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances"
      ],
      "Resource": "*"
    }
  ]
}
And my logstash configuration file has:
input {
  stdin {}
  cloudwatch {
    namespace => "AWS/EC2"
    metrics => [ 'CPUUtilization' ]
    filters => { "tag:TAG" => "VALUE" }
    region => "us-east-1"
    proxy_uri => "http://proxy.company.com:port/"
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => ["IP:PORT"]
  }
}
I start Logstash using:
/path/to/logstash -f /path/to/logstash.conf
The command runs and I can see data from CloudWatch in debug mode:
{:timestamp=>"2016-04-13T17:26:40.685000-0400", :message=>"DPs: {:datapoints=>[{:timestamp=>2016-04-13 21:24:00 UTC, :sample_count=>2.0, :unit=>\"Percent\", :minimum=>0.17, :maximum=>0.25, :sum=>0.42000000000000004, :average=>0.21000000000000002}, {:timestamp=>2016-04-13 21:22:00 UTC, :sample_count=>2.0, :unit=>\"Percent\", :minimum=>0.08, :maximum=>0.17, :sum=>0.25, :average=>0.125}], :label=>\"CPUUtilization\", :response_metadata=>{:request_id=
but Logstash doesn't push anything to Elasticsearch.
Does anyone have any idea what might be the issue, or know how I can debug this?
To solve this issue you can use version 1.1.2 of the plugin, or update your cloudwatch.rb and logstash-input-cloudwatch.gemspec as in this pull request:
https://github.com/logstash-plugins/logstash-input-cloudwatch/pull/3/files
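If you go the pinned-version route, installing a specific plugin version typically looks like this (on older Logstash releases the executable may be bin/plugin rather than bin/logstash-plugin):
bin/logstash-plugin install --version 1.1.2 logstash-input-cloudwatch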

S3 putObject not working access denied

AWS S3 is working on my localhost and on my live website, but my development server (which is EXACTLY the same configuration) is throwing the following error: http://xxx.xx.xxx.xxx/latest/meta-data/iam/security-credentials/ resulted in a 404 Not Found response: Error retrieving credentials from the instance profile metadata server.
Localhost URL is http://localhost/example
Live URL is http://www.example.com
Development URL is http://dev.example.com
Why would this work on localhost and live but not my development server?
Here is my sample code:
$bucket = 'example';
$s3Client = new S3Client([
    'region' => 'us-west-2',
    'version' => '2006-03-01',
    'key' => 'xxxxxxxxxxxxxxxxxxxxxxxx',
    'secret' => 'xxxxxxxxxxxxxxxxxxxxxxxx',
]);
$uniqueFileName = uniqid().'.txt';
$s3Client->putObject([
    'Bucket' => $bucket,
    'Key' => 'dev/'.$uniqueFileName,
    'Body' => 'this is the body!'
]);
Here is the policy:
{
  "Version": "2012-10-17",
  "Id": "Policyxxxxxxxxx",
  "Statement": [
    {
      "Sid": "Stmtxxxxxxxxx",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example/*"
    }
  ]
}
The values returned by http://169.254.169.254/latest/meta-data/iam/security-credentials/ are associated with the Role assigned to the EC2 instance when the instance was first launched.
Since you are receiving a 404 Not Found response, it is likely that your Development server does not have a Role assigned. You can check in the EC2 management console -- just click on the instance, then look at the details pane and find the Role value.
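A quick check from the development server itself is to call the metadata path referenced above; it returns the name of the attached role, or a 404 if no role is attached:
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/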
If you wish to launch a new server that is "fully" identical, use the Launch More Like This command in the Actions menu. It will also copy the Role setting.