Unable to connect to S3 while creating Elasticsearch snapshot repository

I am trying to register a repository on AWS S3 to store Elasticsearch snapshots.
I am following the guide and ran the very first command listed in the doc.
But I am getting an Access Denied error while executing that command.
The role that is being used to perform operations on S3 is the AmazonEKSNodeRole.
I have assigned the appropriate permissions to the role to perform operations on the S3 bucket.
Also, here is another doc which suggests using Kibana for Elasticsearch versions above 7.2, but I am doing the same via cURL requests.
Below is the trust policy of the role through which I am making the request to register the repository in the S3 bucket.
Also, below are the screenshots of the permissions of the trusting and trusted accounts respectively -
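For context, the first command in such guides is the repository registration request. A minimal sketch is below, assuming the repository-s3 plugin is installed, the cluster is reachable at localhost:9200, and my-snapshot-bucket is a placeholder bucket name:

# Register an S3 snapshot repository (placeholder repository and bucket names)
curl -X PUT "localhost:9200/_snapshot/my_s3_repository" -H 'Content-Type: application/json' -d'
{
  "type": "s3",
  "settings": {
    "bucket": "my-snapshot-bucket"
  }
}'

An Access Denied at this step usually means the credentials Elasticsearch resolved (here, the instance profile carrying AmazonEKSNodeRole) lack s3:ListBucket on the bucket or s3:GetObject/s3:PutObject on its objects.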

Related

AWS DataSync: Unable to connect to S3 endpoint

I am trying to sync two S3 buckets in different accounts. I have successfully configured the locations and created a task. However, when I run the task I get an Unable to connect to S3 endpoint error. Can anyone help?
This could be related to the DataSync IAM role's policy not having permission on the target S3 bucket.
Verify your policy and trust relationship using the documentation below:
https://docs.aws.amazon.com/datasync/latest/userguide/using-identity-based-policies.html
Also turn on CloudWatch logs (as shown in the image) and view the detailed log in CloudWatch. If it is permission-related, add the missing policy to the DataSync role.
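As a quick check, you can inspect the role's trust relationship and policies from the CLI. A sketch below, where MyDataSyncRole is a placeholder for your actual role name:

# Confirm datasync.amazonaws.com is the trusted principal
aws iam get-role --role-name MyDataSyncRole --query 'Role.AssumeRolePolicyDocument'
# List the managed and inline policies that should grant S3 access
aws iam list-attached-role-policies --role-name MyDataSyncRole
aws iam list-role-policies --role-name MyDataSyncRole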

How to collect logs from EC2 instance and store it in S3 bucket?

Step 1: Created an Amazon S3 Bucket
Step 2: Created an IAM User with Full Access to Amazon S3 and CloudWatch Logs
Step 3: Granted Permissions on an Amazon S3 Bucket
What should I do next?
A few things.
You're probably better off using an IAM instance profile. That way, your credentials are not static IAM user credentials.
If you only want to copy the logs to S3, I'd suggest setting up a scheduled job that uses the AWS CLI to copy the directory with your logs to S3, along the lines of the sketch below.
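A minimal cron sketch, assuming your logs live under /var/log/myapp and my-log-bucket is a placeholder bucket; the instance profile supplies the credentials:

# Crontab entry: sync the log directory to S3 once an hour
0 * * * * aws s3 sync /var/log/myapp s3://my-log-bucket/$(hostname)/ --only-show-errors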
Alternatively, I'd suggest you install and configure the CloudWatch agent on your instance. From there, you can copy logs to S3 using the methodology outlined here: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/S3ExportTasks.html
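With the agent shipping logs to CloudWatch, the export described in that page boils down to a single call. The names, timestamps (milliseconds since epoch), and bucket below are placeholders, and the bucket needs a policy allowing the CloudWatch Logs service to write to it:

# Export a time window of a CloudWatch log group to S3
aws logs create-export-task \
  --task-name export-myapp-logs \
  --log-group-name /ec2/myapp \
  --from 1609459200000 \
  --to 1612137600000 \
  --destination my-log-bucket \
  --destination-prefix exported-logs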

Spark credential chain ordering - S3 Exception Forbidden

I'm running Spark 2.4 on an EC2 instance. I am assuming an IAM role and setting the key/secret key/token in the sparkSession.sparkContext.hadoopConfiguration, along with the credentials provider as "org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider".
When I try to read a dataset from s3 (using s3a, which is also set in the hadoop config), I get an error that says
com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: 7376FE009AD36330, AWS Error Code: null, AWS Error Message: Forbidden
read command:
val myData = sparkSession.read.parquet("s3a://myBucket/myKey")
I've repeatedly checked the S3 path and it's correct. My assumed IAM role has the right privileges on the S3 bucket. The only thing I can figure at this point is that Spark has some sort of hidden credential chain ordering, and even though I have set the credentials in the Hadoop config, it is still grabbing credentials from somewhere else (my instance profile?). But I have no way to diagnose that.
Any help is appreciated. Happy to provide any more details.
spark-submit will pick up your AWS environment variables and set them as the fs.s3a access + secret + session keys, overwriting any you've already set.
If you only want to use the IAM credentials, just set fs.s3a.aws.credentials.provider to com.amazonaws.auth.InstanceProfileCredentialsProvider; it'll be the only one used. A sketch is below.
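A minimal sketch, assuming the job is launched with spark-submit; com.example.MyApp and myapp.jar are placeholders, and the spark.hadoop. prefix is how spark-submit forwards settings into the Hadoop configuration:

# Clear env-var credentials so only the instance profile is consulted
unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
spark-submit \
  --class com.example.MyApp \
  --conf spark.hadoop.fs.s3a.aws.credentials.provider=com.amazonaws.auth.InstanceProfileCredentialsProvider \
  myapp.jar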
Further Reading: Troubleshooting S3A

Trouble integrating EMR with S3

I am having trouble integrating EMR with S3, i.e. implementing EMRFS.
EMR Version: emr-5.4.0
When I run hdfs dfs -ls s3://pathto/bucket/ I get the following error:
ls: com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: XXXX),
S3 Extended Request ID: XXXXX
Please guide me: what is this, and what am I missing?
I have done the following steps:
Created a KMS key for EMR
Added EMR_EC2_DefaultRole as a key user in the newly created KMS key
Created an S3 server-side encryption security configuration for EMR
Created a new inline policy for role/EMR_EC2_DefaultRole and EMR_DefaultRole for S3 bucket access
Created an EMR cluster manually with the new EMR security configuration and the following configuration classification:
"fs.s3.enableServerSideEncryption": "true",
"fs.s3.serverSideEncryption.kms.keyId":"KEYID"
EMR, by default, will use instance profile credentials (EMR_EC2_DefaultRole) to access your S3 bucket. The error means this role does not have the necessary permissions to access the S3 bucket.
You will need to verify the IAM role policy of that role to allow the necessary S3 actions on both the bucket and its objects (like s3:List*). Also check whether you have any explicit Denys, etc. An example policy follows the link below.
http://docs.aws.amazon.com/AmazonS3/latest/dev/using-with-s3-actions.html
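As a sketch, a minimal inline policy granting those actions might look like the following; my-bucket is a placeholder, and since your objects are KMS-encrypted the role will also need kms:Decrypt/kms:GenerateDataKey on the key:

# Attach a minimal S3 policy to the instance profile role (placeholder bucket name)
aws iam put-role-policy --role-name EMR_EC2_DefaultRole --policy-name S3BucketAccess --policy-document '{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow", "Action": ["s3:ListBucket"], "Resource": "arn:aws:s3:::my-bucket"},
    {"Effect": "Allow", "Action": ["s3:GetObject", "s3:PutObject"], "Resource": "arn:aws:s3:::my-bucket/*"}
  ]
}'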
The access could also be denied because of a bucket policy set on the S3 bucket you are trying to access.
http://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html
https://aws.amazon.com/blogs/security/iam-policies-and-bucket-policies-and-acls-oh-my-controlling-access-to-s3-resources/
Your EMR cluster could be using a VPC endpoint for S3 rather than the Internet/NAT to reach S3. In that case, you'll need to verify the VPC endpoint policies as well.
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints-s3.html#vpc-endpoints-policies-s3
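Both checks can be done from the CLI; a sketch, with the bucket name and region as placeholders:

# Inspect the bucket policy for explicit Denys
aws s3api get-bucket-policy --bucket my-bucket
# Inspect the S3 VPC endpoint policy (adjust the region in the service name)
aws ec2 describe-vpc-endpoints --filters Name=service-name,Values=com.amazonaws.us-east-1.s3 --query 'VpcEndpoints[].PolicyDocument'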

What's wrong with my AWS CLI configuration?

I'm attempting to use the AWS CLI tool to upload a file to Amazon Glacier. I installed awscli using pip:
sudo pip install awscli
I created a new AWS IAM group example with AmazonGlacierFullAccess permissions.
I created a new AWS IAM user example and added the user to the example group. I also created a new access key for the user.
I created a new AWS Glacier vault example and edited the policy document to allow the example user to perform glacier:* actions with no conditions.
I then ran aws configure and added the "Access Key ID" and "Secret Access Key" as well as the default region.
When I run:
aws glacier list-vaults
I get the error:
aws: error: argument --account-id is required
If I add the account ID:
aws --account-id=[example user account ID] glacier list-vaults
I get this error:
A client error (UnrecognizedClientException) occurred when calling the ListVaults operation: No account found for the given parameters
I figured I might have gotten something in the group assignment wrong, so I added the AdministratorAccess policy directly to the example user. Now I can run commands such as aws s3 ls, but I still cannot aws glacier list-vaults without getting the aws: error: argument --account-id is required error.
Have I missed something in my AWS configuration? How can I further troubleshoot this issue?
It looks like for AWS Glacier you need the account ID; see List Vaults (GET vaults).
You can get your account ID (12 digits) from the Support page, top right on your AWS dashboard.
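Two CLI shortcuts are worth knowing here, sketched below: aws sts get-caller-identity returns the account ID for the configured credentials, and the Glacier commands accept a single hyphen as the account ID, meaning "the account that owns these credentials":

# Print the 12-digit account ID for the current credentials
aws sts get-caller-identity --query Account --output text
# Use '-' to let Glacier resolve the account from the credentials
aws glacier list-vaults --account-id -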