How do I get permissions attached to AWS policies - amazon-web-services

Is there any way to get the AWS permissions attached to a policy?
Currently, I am getting a list of attached policies through boto3 "list_policies."
My goal is to get the permissions attached to each policy.

Using boto3 you can get access to the policy document, which you can parse to get the permissions.
So, iterate over the list_policies response and call get_policy_version with the Arn and DefaultVersionId of each policy to get its policy document.
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/iam.html#IAM.Client.list_policies
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/iam.html#IAM.Client.get_policy_version
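A minimal sketch of that approach, assuming customer-managed policies (Scope="Local") and default boto3 credentials; adjust the scope and the handling of the statements to your needs:

import boto3

iam = boto3.client("iam")

# Iterate over customer-managed policies; use Scope="All" to include AWS-managed ones.
paginator = iam.get_paginator("list_policies")
for page in paginator.paginate(Scope="Local"):
    for policy in page["Policies"]:
        # The default version holds the policy document, i.e. the actual permissions.
        version = iam.get_policy_version(
            PolicyArn=policy["Arn"],
            VersionId=policy["DefaultVersionId"],
        )
        document = version["PolicyVersion"]["Document"]
        statements = document.get("Statement", [])
        if isinstance(statements, dict):  # a single statement can come back as a dict
            statements = [statements]
        for statement in statements:
            print(policy["PolicyName"], statement.get("Effect"), statement.get("Action"))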

Related

AWS DataSync: Unable to connect to S3 endpoint

I am trying to sync two S3 buckets in different accounts. I have successfully configured the locations and created a task. However, when I run the task I get an "Unable to connect to S3 endpoint" error. Can anyone help?
This could be related to the DataSync IAM role's policy not having permission to the target S3 bucket.
Verify your policy and trust relationship using the documentation below:
https://docs.aws.amazon.com/datasync/latest/userguide/using-identity-based-policies.html
Also turn on CloudWatch logging for the task and view the detailed log in CloudWatch. If it is permission related, add the missing policy to the DataSync role.
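If you want to check the role from code rather than the console, here is a rough boto3 sketch along those lines (the role name MyDataSyncRole is just a placeholder for whatever role your DataSync S3 location uses):

import boto3

iam = boto3.client("iam")
role_name = "MyDataSyncRole"  # placeholder: the role attached to your DataSync S3 location

# The trust relationship should allow datasync.amazonaws.com to assume the role.
role = iam.get_role(RoleName=role_name)
print(role["Role"]["AssumeRolePolicyDocument"])

# List managed and inline policies to confirm the role has access to the target bucket.
print(iam.list_attached_role_policies(RoleName=role_name)["AttachedPolicies"])
print(iam.list_role_policies(RoleName=role_name)["PolicyNames"])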

Error 403 while creating EMR cluster using my reducer and mapper?

I am trying to use my bucket to provide the arguments for EMR to create a cluster, but it is giving me "All access to this object has been disabled (Service: Amazon S3; Status Code: 403; Error Code: AllAccessDisabled)".
I have used my Reducer and Mapper Python files, and my bucket's permissions are public too.
Is there something wrong with my mapper and reducer files, or am I missing a trick here?
Make sure you've assigned your EMR cluster an IAM role that has adequate S3 access permissions. IAM enables you to grant permissions to users, groups, or resources (like your EMR cluster, in this instance) so that they can access other services or resources in AWS (like S3, which is currently giving you an access denied error).
To do this through EMRFS:
1. Navigate to the EMR console.
2. Click Security configurations (in the left menu).
3. Scroll down to IAM roles for EMRFS.
4. Enable Use IAM roles for EMRFS requests to Amazon S3.
5. Add a role mapping.
6. Select the desired IAM role (e.g. Admin).
7. Select whatever basis for access you prefer (User, group, or S3 bucket name prefix).
More on this is available in the docs here: https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-iam-roles.html
https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-emrfs-iam-roles.html
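If you prefer to script it rather than click through the console, here is a hedged boto3 sketch of the same EMRFS role mapping (the role ARN, bucket prefix, and configuration name are placeholders):

import json
import boto3

emr = boto3.client("emr")

# Placeholder role ARN and bucket prefix; the structure follows the EMR security
# configuration schema for EMRFS role mappings.
security_configuration = {
    "AuthorizationConfiguration": {
        "EmrFsConfiguration": {
            "RoleMappings": [
                {
                    "Role": "arn:aws:iam::123456789012:role/EMRFS_S3_Access_Role",
                    "IdentifierType": "Prefix",
                    "Identifiers": ["s3://my-emr-bucket/"],
                }
            ]
        }
    }
}

emr.create_security_configuration(
    Name="emrfs-role-mapping",
    SecurityConfiguration=json.dumps(security_configuration),
)

# Reference the configuration by name when creating the cluster,
# e.g. via the SecurityConfiguration parameter of run_job_flow.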

Spark S3 access without configuring keys and with only an IAM role

I have an HDP cluster on AWS, and I also have an S3 bucket in another account; my Hadoop version is Hadoop 3.1.1.3.0.1.0-187.
Now I want to read from that S3 bucket (which is in the different account), process the data, then write the result to my own S3 bucket (in the same account as the cluster).
But as the HDP guide here tells, I can configure keys for only one account, either my account or the other account.
In my case I want to configure keys for two accounts, so how do I do that?
Due to security reasons, the other account cannot change its bucket policy to add the IAM role which is created in my account. Hence I tried to access it like below:
Configured the keys of the other account
Added the IAM role (which has an access policy for my bucket) of my account
But I still got the below error when I tried to write to my account's S3 bucket from Spark:
com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 400, AWS Service: Amazon S3
What you need is to use the EC2 instance profile role. It is an IAM role that is attached to your instance: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html
You first create a role with permissions that allow S3 access. Then you attach that role to your HDP cluster (an EC2 Auto Scaling group and EMR can both achieve that). No IAM access key configuration is needed on your side, although AWS still does that for you in the background. This is the S3 "outbound" access part.
The second step is to set up the bucket policy to allow cross-account access: https://docs.aws.amazon.com/AmazonS3/latest/dev/example-walkthroughs-managing-access-example2.html
You will need to do this for each bucket in your different accounts. This is basically the "inbound" S3 access permission part.
You will encounter a 400 if any part of your access (i.e., your instance profile role's permissions, the S3 bucket ACL, the bucket policy, the public access block settings, etc.) is denied in the permission chain. There are many more layers on the "inbound" side. So to get things working, if you are not an IAM expert, try starting with a very open policy (use the '*' wildcard) and then narrow things down.
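As a rough illustration of the "inbound" part, a bucket policy like the one below (account ID, role name, and bucket name are placeholders) would grant your instance profile role read access to a bucket in the other account; it has to be applied with credentials from the bucket-owning account:

import json
import boto3

s3 = boto3.client("s3")  # credentials of the bucket-owning account

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            # Placeholder: the instance profile role attached to your HDP nodes.
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/hdp-instance-profile-role"},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::external-bucket",
                "arn:aws:s3:::external-bucket/*",
            ],
        }
    ],
}

s3.put_bucket_policy(Bucket="external-bucket", Policy=json.dumps(bucket_policy))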
If I've understood right
you want your EC2 VMs to access an S3 bucket to which the IAM role doesn't have access
you have a set of AWS login details for the external S3 bucket (login and password)
HDP3 has a default auth chain of, in order:
1. per-bucket secrets: fs.s3a.bucket.NAME.access.key, fs.s3a.bucket.NAME.secret.key
2. config-wide secrets: fs.s3a.access.key, fs.s3a.secret.key
3. env vars AWS_ACCESS_KEY and AWS_SECRET_KEY
4. the IAM role (it does an HTTP GET to the 169.something server, which serves up a new set of IAM role credentials at least once an hour)
What you need to try here is to set up per-bucket secrets for only the external source (either in a JCEKS file on all nodes in core-site.xml, or in the spark defaults). For example, if the external bucket was s3a://external, you'd have:
spark.hadoop.fs.s3a.bucket.external.access.key AKAISOMETHING
spark.hadoop.fs.s3a.bucket.external.secret.key SECRETSOMETHING
HDP3/Hadoop 3 can handle more than one secret in the same JCEKS file without problems (HADOOP-14507). Older versions let you put username:secret in the URI, but that's such a security troublespot (everything logs those URIs as they aren't viewed as sensitive) that the feature has been cut from Hadoop now. Stick to the JCEKS file with a per-bucket secret, falling back to the IAM role for your own data.
Note you can fiddle with the authentication list for ordering and behaviour: if you use the TemporaryAWSCredentialsProvider then it'll support session keys as well, which is often handy.
<property>
  <name>fs.s3a.aws.credentials.provider</name>
  <value>
    org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider,
    org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider,
    com.amazonaws.auth.EnvironmentVariableCredentialsProvider,
    org.apache.hadoop.fs.s3a.auth.IAMInstanceCredentialsProvider
  </value>
</property>
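To make that concrete, here is a PySpark sketch of the same idea (bucket names and key values are placeholders): per-bucket keys are set only for the external bucket, and everything else falls back to the IAM role.

from pyspark.sql import SparkSession

# Placeholder bucket names and keys; the per-bucket options mirror the
# spark.hadoop.fs.s3a.bucket.* settings shown above.
spark = (
    SparkSession.builder
    .appName("cross-account-s3")
    .config("spark.hadoop.fs.s3a.bucket.external.access.key", "AKAISOMETHING")
    .config("spark.hadoop.fs.s3a.bucket.external.secret.key", "SECRETSOMETHING")
    .getOrCreate()
)

# Read from the external account's bucket (uses the per-bucket keys)...
df = spark.read.csv("s3a://external/input/")

# ...and write to your own bucket (falls back to the IAM role / instance profile).
df.write.parquet("s3a://my-own-bucket/output/")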

IAM policy on S3 bucket

I always get confused between the two, but I wanted to add an IAM policy on an S3 bucket.
Basically, I have created an output bucket for Amazon Transcribe transcriptions, but it seems I need to add an IAM role to allow the transcription job to write to the bucket. I think if I can attach AmazonTranscribeFullAccess to the S3 bucket it will work, but I am unable to attach this policy. Could you please advise how I can add this policy to the new bucket?
There are a few concepts you will want to dig deeper into to understand the difference between IAM policies and S3 bucket policies. A detailed guide is: https://docs.aws.amazon.com/AmazonS3/latest/dev/how-s3-evaluates-access-control.html
You can attach IAM policies to Users, Groups and Roles, and you can attach bucket policies to S3 buckets.
Try adding S3 access to the user/role that you are using to run the transcribe job.
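For example, here is a hedged boto3 sketch that adds an inline S3 write policy to the role running the transcription job (the role name and bucket name are placeholders):

import json
import boto3

iam = boto3.client("iam")

role_name = "my-transcribe-job-role"    # placeholder: the role the job runs under
bucket = "my-transcribe-output-bucket"  # placeholder: your output bucket

write_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:GetBucketLocation"],
            "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
        }
    ],
}

# Note the policy is attached to the role, not to the bucket.
iam.put_role_policy(
    RoleName=role_name,
    PolicyName="transcribe-output-bucket-write",
    PolicyDocument=json.dumps(write_policy),
)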

Checking AWS S3 write permissions in Boto3

In Boto3, I can write to a bucket with put_object. This could fail due to path-wise permissions issues. I want to check if I have write permissions without actually writing data to a particular S3 path.
You can Test IAM Policies with the IAM Policy Simulator, which is also accessible via API.
Boto3 has simulate_principal_policy():
Simulate how a set of IAM policies attached to an IAM entity works with a list of API operations and AWS resources to determine the policies' effective permissions. The entity can be an IAM user, group, or role. If you specify a user, then the simulation also includes all of the policies that are attached to groups that the user belongs to.
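A minimal sketch of checking s3:PutObject on a given prefix this way (the role ARN and bucket path are placeholders):

import boto3

iam = boto3.client("iam")

response = iam.simulate_principal_policy(
    PolicySourceArn="arn:aws:iam::123456789012:role/my-app-role",  # placeholder principal
    ActionNames=["s3:PutObject"],
    ResourceArns=["arn:aws:s3:::my-bucket/some/prefix/*"],  # placeholder path
)

for result in response["EvaluationResults"]:
    # EvalDecision is "allowed", "explicitDeny", or "implicitDeny"
    print(result["EvalActionName"], result["EvalDecision"])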