MySQL and Elasticsearch are hosted on AWS, and Logstash is running on an EC2 instance. I am not using a VPC. I can connect to MySQL both locally and from my EC2 instance.
Modifying the question a bit. Here is the new Logstash config file on my EC2 instance. My EC2 instance does not have an SSL certificate; is that a problem?
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://aws.xxxxx.us-east-1.rds.amazonaws.com:3306/stuffed?user=admin&password=pword"
    jdbc_user => "admin"
    jdbc_password => "pword"
    schedule => "* * * * *"
    jdbc_validate_connection => true
    jdbc_driver_library => "mysql-connector-java-8.0.19.jar"
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    statement => "SELECT * from foo"
    type => "foo"
    tags => ["foo"]
  }
  jdbc {
    jdbc_connection_string => "jdbc:mysql://aws.xxxxx.us-east-1.rds.amazonaws.com:3306/stuffed?user=admin&password=pword"
    jdbc_user => "admin"
    jdbc_password => "pword"
    schedule => "* * * * *"
    jdbc_validate_connection => true
    jdbc_driver_library => "mysql-connector-java-8.0.19.jar"
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    statement => "SELECT * from cat"
    type => "cat"
    tags => ["cat"]
  }
  jdbc {
    jdbc_connection_string => "jdbc:mysql://aws.xxxxx.us-east-1.rds.amazonaws.com:3306/stuffed?user=admin&password=pword"
    jdbc_user => "admin"
    jdbc_password => "pword"
    schedule => "* * * * *"
    jdbc_validate_connection => true
    jdbc_driver_library => "mysql-connector-java-8.0.19.jar"
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    statement => "SELECT * from rest"
    type => "rest"
    tags => ["rest"]
  }
}
output {
  stdout { codec => json_lines }
  if "foo" in [tags] {
    amazon_es {
      hosts => ["https://es1.stuffed-es-mysql.us-east-1.es.amazonaws.com"]
      index => "foo"
      region => "us-east-1"
      aws_access_key_id => "id"
      aws_secret_access_key => "key"
      document_type => "foo-%{+YYYY.MM.dd}"
    }
  }
  if "cat" in [tags] {
    amazon_es {
      hosts => ["https://es1.stuffed-es-mysql.us-east-1.es.amazonaws.com"]
      index => "cat"
      region => "us-east-1"
      aws_access_key_id => "id"
      aws_secret_access_key => "key"
      document_type => "cat-%{+YYYY.MM.dd}"
    }
  }
  if "rest" in [tags] {
    amazon_es {
      hosts => ["https://es1.stuffed-es-mysql.us-east-1.es.amazonaws.com"]
      index => "rest"
      region => "us-east-1"
      aws_access_key_id => "id"
      aws_secret_access_key => "key"
      document_type => "rest-%{+YYYY.MM.dd}"
    }
  }
}
Now the issue I'm getting is a 403 Forbidden error.
I did create a user in AWS with the AmazonESFullAccess (AWS managed policy) permission. I'm not sure what else to do anymore. I am not using a VPC; I was trying to avoid that, so I want to stick with public access.
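For reference, a quick check from the EC2 instance can separate a policy problem from a network problem (the endpoint below is just the placeholder one from the config above): an unsigned request that comes back as a 403 with a "not authorized" message points at the access policy or request signing, while a timeout points at networking.
curl -v "https://es1.stuffed-es-mysql.us-east-1.es.amazonaws.com/_cluster/health"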
I tried creating a new Elasticsearch Service domain based on this guide: https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-gsg.html
but I get the error error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [https://user:xxxxxx#mysql-abcdefghijkl.us-east-1.es.amazonaws.com:9200/]
and the Logstash output for this instance is:
elasticsearch {
  hosts => ["https://es-mysql.us-east-1.es.amazonaws.com/"]
  index => "category"
  user => "user"
  password => "password"
  document_type => "cat-%{+YYYY.MM.dd}"
}
Obviously this isn't the preferred method, but I'm really just trying to set up a dev/personal environment.
Also, I am able to log in to Kibana on this instance.
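For reference, the Unreachable error above shows Logstash falling back to port 9200, while the AWS-hosted endpoint serves HTTPS on port 443; a hedged sketch of this output with the port and SSL pinned explicitly:
elasticsearch {
  hosts => ["https://es-mysql.us-east-1.es.amazonaws.com:443"]
  ssl => true
  index => "category"
  user => "user"
  password => "password"
}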
ACCESS POLICY
For the first Elasticsearch Service:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111111111:user/root"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:us-east-1:11111111:domain/stuffed-es-mysql/*"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:us-east-1:166216189490:domain/stuffed-es-mysql/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "11.11.11.111",
            "111.111.1.111",
            "111.111.1.1",
            "1.11.111.111"
          ]
        }
      }
    }
  ]
}
For the second Elasticsearch Service I created:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:us-east-1:xxxxxxxxx:domain/es-mysql/*"
    }
  ]
}
So I resolved the two issues.
Problem one: I was still having a problem with my Logstash input connection to my RDS instance (which wasn't the problem posted, but I thought I'd share anyway):
Problem:
The RDS MySQL instance was the wrong version. It was set to the value recommended by AWS (5.7) and I needed the latest 8.0.*. This was causing a problem with JDBC not working correctly.
Solution:
Update the RDS MySQL instance to 8.0.17. Now Logstash was able to read the input from the MySQL RDS instance.
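For reference, a quick way to confirm which engine version the jdbc input actually connects to (the hostname and user below are the placeholders from the config above) is to query it from the EC2 instance:
mysql -h aws.xxxxx.us-east-1.rds.amazonaws.com -u admin -p -e "SELECT VERSION();"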
Problem two: the output of my Logstash was not working.
Problem:
Receiving a 403 Forbidden error.
Solution:
Removed the HTTPS requirement when setting up the ES service. This most likely worked for me because my EC2 instance does not have an SSL certificate.
The solution to problem two was questioned earlier in my post. Being new to setting up Logstash on an EC2 instance and to the Elasticsearch Service (AWS), I wasn't going with my instincts. But I removed the HTTPS requirement when setting up the ES service. Not the greatest idea, but this is a dev environment, and it fixed the issue. Why? Because my EC2 instance does not have an SSL certificate, and a common cause of a 403 error is a source without SSL sending requests to a destination that requires it.
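For reference, once the domain no longer requires HTTPS, the only change in the output section is the endpoint scheme; a minimal sketch, assuming the amazon_es plugin accepts an http:// host the same way the config above passes an https:// one:
amazon_es {
  hosts => ["http://es1.stuffed-es-mysql.us-east-1.es.amazonaws.com"]
  index => "foo"
  region => "us-east-1"
  aws_access_key_id => "id"
  aws_secret_access_key => "key"
  document_type => "foo-%{+YYYY.MM.dd}"
}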
I think this is a common problem for people who want to run something on a budget, and it's easily overlooked. Most people want to jump straight to your access policy, IP policy, or security group, and that's understandable.
I basically need users to only be allowed to CRUD their own data in my app. Their data is stored in DynamoDB, and I'm using Cognito with Amplify for a React Native project.
I've searched through a lot of answers, and most of them mention using the identityId as the primary/hash key for the item in the DynamoDB table. So I implemented it like so.
import { API, Auth } from "aws-amplify";

const createItemDB = async () => {
  API.post("nutritionAPI", "/items", {
    body: {
      userID: `${(await Auth.currentUserCredentials()).identityId}`,
      dateID: "february23",
    },
    headers: {
      Authorization: `Bearer ${(await Auth.currentSession())
        .getIdToken()
        .getJwtToken()}`,
    },
  })
    .then((result) => {
      // console.log(result)
    })
    .catch((err) => {
      console.log(err);
    });
};
However, I'm definitely missing an important part of this integration and I can't find an answer. What do I do next? As of now, if User2 knew the userID (primary key) and the dateID (sort key) of User1, they could delete that item in the table with the following code.
const deleteRow = async (primaryKey, sortKey) => {
  API.del("nutritionAPI", "/items/object/" + primaryKey + "/" + sortKey, {})
    .then((result) => console.log(result))
    .catch((err) => console.log(err));
};
I've blindly followed some guide on this topic and created a Federated Identities pool and linked it with my user pool, but I'm not sure how it plays a part yet. Any help is appreciated. Thanks.
I'm just putting hunterhacker's comment as an answer, because it really helped me.
Basically, IAM policy conditions support DynamoDB fine-grained access control, as explained here.
Therefore, you just need to make sure that the IAM role associated with the Cognito users has this policy condition:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAccessToOnlyItemsMatchingUserID",
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:BatchGetItem",
        "dynamodb:Query",
        "dynamodb:PutItem",
        "dynamodb:UpdateItem",
        "dynamodb:DeleteItem",
        "dynamodb:BatchWriteItem"
      ],
      "Resource": [
        "arn:aws:dynamodb:us-west-2:123456789012:table/xyz"
      ],
      "Condition": {
        "ForAllValues:StringEquals": {
          "dynamodb:LeadingKeys": [
            "${www.amazon.com:user_id}"  <==== This part here!
          ]
        }
      }
    }
  ]
}
The article even explains methods of limiting access to specific attributes/columns, but that is not needed in this case.
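One caveat: the ${www.amazon.com:user_id} variable in that example is the Login with Amazon variable from the article. Since this question authenticates through Cognito and keys items by identityId, the condition would presumably use the Cognito identity variable instead; a hedged variant of just that part:
"Condition": {
  "ForAllValues:StringEquals": {
    "dynamodb:LeadingKeys": [
      "${cognito-identity.amazonaws.com:sub}"
    ]
  }
}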
I have an EC2 instance in us-east-1. The instance is trying to grab an item from an S3 bucket that is in us-west-1, but still within the same account. The instance profile / IAM role attached to the instance has the following permissions:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:ListBucket",
        "s3:GetObject",
        "kms:Decrypt"
      ],
      "Resource": [
        "arn:aws:s3:::cross-region-bucket/",
        "arn:aws:s3:::cross-region-bucket/*"
      ],
      "Effect": "Allow"
    }
  ]
}
I've also tried attaching admin access, but still no luck. I am using Ansible to grab an S3 object from this cross-region bucket:
- name: Download object from cross-region s3 bucket
  aws_s3:
    bucket: "cross-region-bucket"
    object: "object.txt"
    dest: "/local/user/object.txt"
    mode: get
    region: "us-west-1"
This works just fine when the bucket is in the same region, but now that I am trying this cross-region S3 get, I am getting the following error:
fatal: [X.X.X.X]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"X.X.X.X\". Make sure this host can be reached over ssh: ", "unreachable": true}
If I set the region in the aws_s3 task to 'us-east-1' (where the instance is), then I get the following error:
fatal: [X.X.X.X]: FAILED! => {"boto3_version": "1.18.19", "botocore_version": "1.21.19", "changed": false, "msg": "Failed while looking up bucket (during bucket_check) cross-region-bucket.: Connect timeout on endpoint URL: \"https://cross-region-bucket.s3.us-west-1.amazonaws.com/\""}
Not sure what is blocking me from accessing the cross-region bucket at this point. Any suggestions?
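One way to narrow this down (a hedged check; the bucket, key, and path are the placeholders used above) is to run the same get directly with the AWS CLI on the instance, which separates an IAM failure from the connect timeout shown above:
aws s3api head-object --bucket cross-region-bucket --key object.txt --region us-west-1
aws s3 cp s3://cross-region-bucket/object.txt /local/user/object.txt --region us-west-1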
I am trying to send data to an AWS Elasticsearch endpoint using Logstash installed on my local machine.
The Logstash conf file looks like this:
input {
  file {
    path => "/path/log.txt"
  }
}
output {
  amazon_es {
    hosts => ["https://search-abclostashtrial-5jdfc43oqql7qsrhfgbvwewku.us-east-2.es.amazonaws.com"]
    action => "index"
    region => "us-east-2"
    index => "trial"
    ssl => true
  }
}
The Elasticsearch access policy looks like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:us-east-2:0415721453395:domain/abclostashtrial/*"
    }
  ]
}
I am using the logstash-output-amazon_es plugin and run Logstash like this:
sudo bin/logstash -f /path/logstash/abc.conf
And I get the following error log:
[ERROR] 2019-04-30 20:05:52.900 [Converge PipelineAction::Create<main>] agent - Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create<main>, action_result: false", :backtrace=>nil}
[INFO ] 2019-04-30 20:05:53.165 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
[INFO ] 2019-04-30 20:05:58.037 [LogStash::Runner] runner - Logstash shut down.
What am I missing here?
One option to start with is to create an access key that has rights to write to Elasticsearch, and configure that in the output. Example:
amazon_es {
  hosts => ["vpc-xxxxxxxxx-es-yyyyyy4pywmwigwi47em.us-east-1.es.amazonaws.com"]
  region => "us-east-1"
  aws_access_key_id => 'AKIxxxxxxxxxxx'
  aws_secret_access_key => '11xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
  index => "production-logindex-%{+YYYY.MM.dd}"
}
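The access key also needs IAM permissions on the domain itself; a minimal sketch of such a policy for the IAM user that owns the key (the account ID and domain name below are placeholders, not taken from the answer):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "es:ESHttpGet",
        "es:ESHttpPost",
        "es:ESHttpPut"
      ],
      "Resource": "arn:aws:es:us-east-1:123456789012:domain/my-domain/*"
    }
  ]
}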
AWS S3 is working on my localhost and on my live website, but my development server (which is EXACTLY the same configuration) is throwing the following error: http://xxx.xx.xxx.xxx/latest/meta-data/iam/security-credentials/ resulted in a "404 Not Found" response: Error retrieving credentials from the instance profile metadata server.
Localhost URL is http://localhost/example
Live URL is http://www.example.com
Development URL is http://dev.example.com
Why would this work on localhost and live but not my development server?
Here is my sample code:
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$bucket = 'example';
$s3Client = new S3Client([
    'region'  => 'us-west-2',
    'version' => '2006-03-01',
    'key'     => 'xxxxxxxxxxxxxxxxxxxxxxxx',
    'secret'  => 'xxxxxxxxxxxxxxxxxxxxxxxx',
]);
$uniqueFileName = uniqid() . '.txt';
$s3Client->putObject([
    'Bucket' => $bucket,
    'Key'    => 'dev/' . $uniqueFileName,
    'Body'   => 'this is the body!',
]);
Here is the policy:
{
  "Version": "2012-10-17",
  "Id": "Policyxxxxxxxxx",
  "Statement": [
    {
      "Sid": "Stmtxxxxxxxxx",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example/*"
    }
  ]
}
The values returned by http://169.254.169.254/latest/meta-data/iam/security-credentials/ are associated with the Role assigned to the EC2 instance when the instance was first launched.
Since you are receiving a 404 Not Found response, it is likely that your Development server does not have a Role assigned. You can check in the EC2 management console -- just click on the instance, then look at the details pane and find the Role value.
If you wish to launch a new server that is "fully" identical, use the Launch More Like This command in the Actions menu. It will also copy the Role setting.
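If the development server really has no Role attached, it can also be checked and attached from the CLI; a hedged sketch (the instance ID and profile name below are placeholders):
# Show which instance profile, if any, is associated with the instance.
aws ec2 describe-iam-instance-profile-associations --filters Name=instance-id,Values=i-0123456789abcdef0
# Attach an existing instance profile to the running instance.
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 --iam-instance-profile Name=my-dev-role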
I am trying to use aws change-resource-record-sets to add an alias. The idea is to allow access to a Cloudfront distribution via a URL on our domain (e.g. mydomainname.mycompany.co.uk rather than mydomainname.cloudfront.net, where mydomainname = something like d4dzc6m38sq0mk).
After working through various other JSON errors, which I solved, I am still getting a problem:
A client error (InvalidChangeBatch) occurred: RRSet with DNS name
mydomainname.cloudfront.net. is not permitted in zone mycompany.co.uk.
What have I got wrong?
JSON:
{
  "Comment": "Recordset for mydomainname",
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "mydomainname",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z2FDTNDATAQYW2",
          "DNSName": "mydomainname.cloudfront.net.",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
EDITED to clarify the HostedZoneID.
You need to pass the complete name in the Name parameter. For your example, you need to pass this:
"Name" : "mydomainname.cloudfront.net."
If "The idea is to allow access to a Cloudfront distribution via URL on our domain..." then try a CNAME instead of an alias...
aws route53 change-resource-record-sets --hosted-zone-id Z3A********TC8 --change-batch file://~/tmp/awsroute53recordset.json
awsroute53recordset.json
{
  "Comment": "Allow access to a Cloudfront distribution via URL on our domain",
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "cdn.mycompany.co.uk",
        "Type": "CNAME",
        "TTL": 3600,
        "ResourceRecords": [
          {
            "Value": "d4dzc6m38sq0mk.cloudfront.net"
          }
        ]
      }
    }
  ]
}
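Once the change propagates, a quick way to verify the record (hedged; it reuses the example names above):
dig +short cdn.mycompany.co.uk CNAME
aws route53 list-resource-record-sets --hosted-zone-id Z3A********TC8 --query "ResourceRecordSets[?Type=='CNAME']"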
You have to add the 'Change' => node:
'Comment' => 'Created Programmatically',
'Changes' => [
    'Change' => [
        'Action' => 'CREATE',
        'ResourceRecordSet' => [
            'Name' => $domainName.'.',
            'Type' => 'A',
            'AliasTarget' => [
                'HostedZoneId' => '*ZoneID*',
                'DNSName' => '*DNSName*',
                'EvaluateTargetHealth' => false
            ]
        ]
    ]
],
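For context, a hedged sketch of where that batch goes with the AWS SDK for PHP; the client construction and hosted zone ID below are assumptions for illustration, and the batch itself is the one shown above:
require 'vendor/autoload.php';

use Aws\Route53\Route53Client;

// Client options here are assumptions for illustration only.
$r53 = new Route53Client([
    'region'  => 'us-east-1',
    'version' => '2013-04-01',
]);

$r53->changeResourceRecordSets([
    'HostedZoneId' => '*YourHostedZoneId*',  // placeholder: the zone that holds the record
    'ChangeBatch'  => [
        // ... the 'Comment' / 'Changes' array shown above ...
    ],
]);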