AWS Config insufficient delivery policy error - amazon-web-services

I am currently trying to enable AWS Config notifications on multiple accounts. I have enabled monitoring on each individual account with its own S3 bucket and SNS topic, but it would make more sense to have one centralized bucket and topic. I am trying to implement this with little success.
I have created an S3 bucket and target ARN, but when I try to apply the changes I get an "insufficient delivery policy" error.
Note I am doing this through the AWS console and not with code.

To do this, you need two pieces:
1. The Identity and Access Management (IAM) Role being used must have permissions to deliver data to the common S3 bucket and SNS Topic. You'll need to go to the IAM Management Console, edit the role being used by Config in each account, and update the S3 bucket/SNS topic names in the "s3:PutObject", "s3:GetBucketAcl" and "sns:Publish" statements.
2. You also need to allow the S3 bucket and SNS Topic to receive data from this new role (a sketch of these resource policies follows below). See:
a. http://docs.aws.amazon.com/awscloudtrail/latest/userguide/aggregating_logs_accounts_bucket_policy.html
b. http://docs.aws.amazon.com/sns/latest/dg/AccessPolicyLanguage_UseCases_Sns.html
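If it helps to see what those resource policies might look like, here is a minimal sketch using boto3. The bucket name, topic ARN, account IDs and role names are placeholders, and the exact statements should be checked against the documentation linked above for your setup.

```python
# Sketch only: allow the Config delivery roles from each member account to
# write to a central bucket and publish to a central topic. All names/ARNs
# below are placeholders -- substitute your own.
import json
import boto3

CENTRAL_BUCKET = "central-config-bucket"                                 # placeholder
TOPIC_ARN = "arn:aws:sns:us-east-1:111111111111:central-config-topic"    # placeholder
CONFIG_ROLE_ARNS = [                                                     # Config role in each account
    "arn:aws:iam::222222222222:role/config-role",
    "arn:aws:iam::333333333333:role/config-role",
]

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Config checks the bucket ACL before delivering
            "Effect": "Allow",
            "Principal": {"AWS": CONFIG_ROLE_ARNS},
            "Action": "s3:GetBucketAcl",
            "Resource": f"arn:aws:s3:::{CENTRAL_BUCKET}",
        },
        {   # ...and then writes configuration history/snapshot files
            "Effect": "Allow",
            "Principal": {"AWS": CONFIG_ROLE_ARNS},
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{CENTRAL_BUCKET}/AWSLogs/*/Config/*",
            "Condition": {"StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}},
        },
    ],
}

topic_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # let the member-account Config roles publish notifications
            "Effect": "Allow",
            "Principal": {"AWS": CONFIG_ROLE_ARNS},
            "Action": "SNS:Publish",
            "Resource": TOPIC_ARN,
        }
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=CENTRAL_BUCKET, Policy=json.dumps(bucket_policy))
boto3.client("sns").set_topic_attributes(
    TopicArn=TOPIC_ARN, AttributeName="Policy", AttributeValue=json.dumps(topic_policy)
)
```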

Related

Amplify S3 Documentation "Using Amazon S3" - How to update Auth_Role policies?

I'm trying to understand the AWS Amplify documentation section "Using Amazon S3". It says:
If you set up your Cognito resources manually, the roles will need to be given permission to access the S3 bucket.
There are two roles created by Cognito: an Auth_Role that grants signed-in-user-level bucket access and an Unauth_Role that allows unauthenticated access to resources. Attach the corresponding policies to each role for proper S3 access. Replace {enter bucket name} with the correct S3 bucket.
And then the docs provide JSON examples of Policies for Auth_Role and Unauth_Role. What's confusing me is that when I go into my Roles in my IAM console, I have the following:
amplify--dev-153155-authRole (contains AppSync resources)
amplify--dev-153155-authRole-idp (log group resources)
amplify--dev-153155-unauthRole (empty)
None of these contains anything like the JSON examples. The "...authRole" policy contains actions/resources concerning AppSync, but nothing to do with S3. Likewise for the other two. I expected to find permissions allowing my Amplify app to get/store S3 items; otherwise, how is it currently able to do that?
So my questions are:
How do I create and attach the policies provided in the above documentation? Do I simply paste the JSON into new policies in my IAM console, and attach them to the Auth_Role?
Where are the default permissions stored? I have set up an Amplify app and added S3 with amplify add storage. I can connect to the S3 bucket to add and retrieve files, so presumably there must be existing policies. But my Auth_Role contains no policies that reference S3?
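For the first question, here is a minimal sketch of what creating and attaching such a policy could look like outside the console, using boto3. The role name is copied from the question, the policy body is a trimmed stand-in for the docs' JSON (not a copy of it), and the bucket placeholder is the docs' own {enter bucket name}.

```python
# Sketch only: attach an inline S3 policy to the Amplify auth role.
# The policy below is a simplified placeholder, not the exact Amplify policy.
import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
            "Resource": "arn:aws:s3:::{enter bucket name}/public/*",  # docs' placeholder
        }
    ],
}

iam.put_role_policy(
    RoleName="amplify--dev-153155-authRole",   # the auth role from the question
    PolicyName="AmplifyS3Access",              # arbitrary policy name
    PolicyDocument=json.dumps(policy),
)
```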

AWS DataSync: Unable to connect to S3 endpoint

I am trying to sync two S3 buckets in different accounts. I have successfully configured the locations and created a task. However, when I run the task I get an "Unable to connect to S3 endpoint" error. Can anyone help?
This could be related to the DataSync IAM role's policy not having permission to access the target S3 bucket.
Verify your policy and trust relationship using the documentation below:
https://docs.aws.amazon.com/datasync/latest/userguide/using-identity-based-policies.html
Also turn on CloudWatch logs for the task and view the detailed log in CloudWatch. If it is permission related, add the missing policy to the DataSync role.
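If the destination bucket is in the other account, a cross-account bucket policy is also needed. Here is a minimal sketch, assuming a hypothetical DataSync role ARN and bucket name; the action list follows the DataSync user guide linked above.

```python
# Sketch only: in the destination account, grant the DataSync IAM role from
# the source account access to the target bucket. All names/ARNs are placeholders.
import json
import boto3

DEST_BUCKET = "destination-bucket"                                       # placeholder
DATASYNC_ROLE_ARN = "arn:aws:iam::111111111111:role/datasync-s3-role"    # placeholder

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # bucket-level permissions DataSync needs
            "Effect": "Allow",
            "Principal": {"AWS": DATASYNC_ROLE_ARN},
            "Action": [
                "s3:GetBucketLocation",
                "s3:ListBucket",
                "s3:ListBucketMultipartUploads",
            ],
            "Resource": f"arn:aws:s3:::{DEST_BUCKET}",
        },
        {   # object-level permissions for the transfer itself
            "Effect": "Allow",
            "Principal": {"AWS": DATASYNC_ROLE_ARN},
            "Action": [
                "s3:AbortMultipartUpload",
                "s3:DeleteObject",
                "s3:GetObject",
                "s3:ListMultipartUploadParts",
                "s3:PutObject",
            ],
            "Resource": f"arn:aws:s3:::{DEST_BUCKET}/*",
        },
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=DEST_BUCKET, Policy=json.dumps(policy))
```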

I want my lambda code to directly upload files into an Amazon S3 bucket of a different account

So I have a Lambda function that triggers an Amazon SageMaker processing job, and this job currently writes a few files to my Amazon S3 bucket. I have set my output_uri = 's3://outputbucket-in-my-acc/'. Now I want the same files to be uploaded directly to a different AWS account and not to mine. How do I achieve this? I want no trace of the files to be stored in my account.
I found a similar solution here, but it copies the files into the other account while the originals remain in the source account:
AWS Lambda put data to cross account s3 bucket
Your Lambda function (Account A) needs to assume a role in the other account (Account B) that has permission to write to the S3 location. For that, you need to establish trust between the accounts with a cross-account role.
Afterwards you assume the role in Account B from your Lambda function's code and execute the S3 command.
Find an example with boto3 here: https://aws.amazon.com/premiumsupport/knowledge-center/lambda-function-assume-iam-role/
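A minimal sketch of that flow, assuming a hypothetical role ARN and bucket name in Account B:

```python
# Sketch only: assume a cross-account role in Account B, then write directly
# to its bucket with the temporary credentials. ARN and bucket are placeholders.
import boto3

sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::222222222222:role/cross-account-s3-writer",  # role in Account B
    RoleSessionName="lambda-cross-account-upload",
)["Credentials"]

# Build an S3 client that acts as the Account B role
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
s3.put_object(Bucket="bucket-in-account-b", Key="output/result.json", Body=b"...")
```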
The SageMaker APIs for job creation typically (always?) include a RoleARN which will be the IAM role that SageMaker assumes to do work on your behalf. That IAM role must have the necessary permissions so that Amazon SageMaker can successfully complete its task (e.g. have PutObject permission to the relevant S3 bucket) and must have the necessary trust policy allowing the SageMaker service (sagemaker.amazonaws.com) to assume the role.
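A minimal sketch of such a trust policy, applied with boto3 to a hypothetical role name; the service-principal form is the standard one for SageMaker execution roles.

```python
# Sketch only: set the trust policy so sagemaker.amazonaws.com can assume the role.
# The role name is a placeholder.
import json
import boto3

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "sagemaker.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

boto3.client("iam").update_assume_role_policy(
    RoleName="sagemaker-processing-role",        # placeholder
    PolicyDocument=json.dumps(trust_policy),
)
```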

Creating IAM Role on S3/Lambda

Everywhere I look, an IAM Role is created on an EC2 instance and given permissions like S3FullAccess.
Is it possible to create an IAM Role on S3 instead of EC2, and attach that Role to an S3 bucket?
I created an IAM Role with S3FullAccess, but I am not able to attach it to the existing bucket or create a new bucket with this Role. Please help.
IAM (Identity and Access Management) Roles are a way of assigning permissions to applications, services, EC2 instances, etc.
Examples:
When a Role is assigned to an EC2 instance, credentials are passed to software running on the instance so that it can call AWS services.
When a Role is assigned to an Amazon Redshift cluster, it can use the permissions within the Role to access data stored in Amazon S3 buckets.
When a Role is assigned to an AWS Lambda function, it gives the function permission to call other AWS services such as S3, DynamoDB or Kinesis.
In all these cases, something is using the credentials to call AWS APIs.
Amazon S3 never requires credentials to call an AWS API. While it can call other services for Event Notifications, the permissions are actually put on the receiving service rather than S3 as the requesting service.
Thus, there is never any need to attach a Role to an Amazon S3 bucket.
Roles do not apply to S3 as they do with EC2.
Assuming @Sunil is asking whether we can restrict access to data in S3:
In that case, we can either set S3 ACLs on the buckets or the objects in them, or set S3 bucket policies.
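A minimal sketch of the bucket-policy option, with a hypothetical bucket name and principal:

```python
# Sketch only: restrict read access on a bucket to a single IAM principal.
# Bucket name and user ARN are placeholders.
import json
import boto3

BUCKET = "example-bucket"
ALLOWED_PRINCIPAL = "arn:aws:iam::111111111111:user/data-reader"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": ALLOWED_PRINCIPAL},
            "Action": ["s3:GetObject"],
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        }
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```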

Cannot set S3 as api gateway AWS service

I'm trying to set up an Amazon API Gateway proxy connected to an S3 bucket, to simply proxy each file/object from the bucket to the API Gateway endpoint. (I need this because some files need to be served through other HTTP verbs, and S3 does not allow the POST method.)
The thing is that I cannot select 'S3' as the AWS service.
Can someone provide me some guidance?
To allow the API to invoke the required Amazon S3 actions, you must have appropriate IAM policies attached to an IAM role. The documentation describes how to verify and, if necessary, create the required IAM role and policies.
For your API to view or list Amazon S3 buckets and objects, you can use the IAM-provided AmazonS3ReadOnlyAccess policy in the IAM role.
Please read the documentation here for the full setup.
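A minimal sketch of such a role created with boto3: trusted by API Gateway (apigateway.amazonaws.com) and given the managed AmazonS3ReadOnlyAccess policy. The role name is a placeholder.

```python
# Sketch only: create a role API Gateway can assume and attach the managed
# read-only S3 policy mentioned above.
import json
import boto3

iam = boto3.client("iam")

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "apigateway.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

iam.create_role(
    RoleName="apigateway-s3-proxy-role",                      # placeholder
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
iam.attach_role_policy(
    RoleName="apigateway-s3-proxy-role",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)
```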
It should be under the name Simple Storage Service (S3).