I am trying to create Firehose streams that can receive data from different regions in Account A, through AWS Lambda, and output into a Redshift table in Account B. To do this I created an IAM role in Account A with the following trust policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "firehose.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
I gave it the following permissions:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Action": [
"s3:AbortMultipartUpload",
"s3:GetBucketLocation",
"s3:GetObject",
"s3:ListBucket",
"s3:ListBucketMultipartUploads",
"s3:PutObject"
],
"Resource": [
"arn:aws:s3:::b-bucket/*",
"arn:aws:s3:::b-bucket"
]
},
{
"Effect": "Allow",
"Action": [
"firehose:*"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"redshift:*"
],
"Resource": "*"
}
]
}
On Account B I created a role with this trust policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "firehose.amazonaws.com"
},
"Action": "sts:AssumeRole",
"Condition": {
"StringEquals": {
"sts:ExternalId": "11111111111"
}
}
}
]
}
I gave that role the following access:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": [
"arn:aws:s3:::b-bucket",
"arn:aws:s3:::b-bucket/*",
"arn:aws:s3:::b-account-logs",
"arn:aws:s3:::b-account-logs/*"
]
},
{
"Effect": "Allow",
"Action": [
"firehose:*"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": "redshift:*",
"Resource": "arn:aws:redshift:us-east-1:cluster:account-b-cluster*"
}
]
}
I also edited the access policy on the S3 buckets to give access to my Account A role:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::11111111111:role/AccountAXAccountBPolicy"
},
"Action": "s3:*",
"Resource": ["arn:aws:s3:::b-bucket","arn:aws:s3:::b-bucket/*"]
}
]
}
However, none of this works. When I try to create the stream in Account A, it does not list the buckets in Account B or the Redshift cluster. Is there any way to make this work?
John's answer is semi-correct. I would recommend that the account owner of the Redshift cluster create the Firehose stream, because creating it through the CLI requires you to supply the cluster user name and password. Having the cluster owner create the stream and share IAM role permissions on the stream is safer security-wise and easier to manage when credentials change. Additionally, you cannot create a stream that accesses a database outside its region, so have the delivery application write to the correct stream in the correct region.
Read on below to see how to create the cross-account stream.
In my case both accounts are accessible to me, so to minimize the number of changes and simplify monitoring I created the stream on the Account A side.
The above permissions are right; however, you cannot create a Firehose stream from Account A to Account B through the AWS Console. You need to do it through the AWS CLI:
aws firehose create-delivery-stream --delivery-stream-name testFirehoseStreamToRedshift \
--redshift-destination-configuration 'RoleARN="arn:aws:iam::11111111111:role/AccountAXAccountBRole",ClusterJDBCURL="jdbc:redshift://<cluster-url>:<cluster-port>/<database-name>",CopyCommand={DataTableName="<schema_name>.x_test",DataTableColumns="ID1,STRING_DATA1",CopyOptions="csv"},Username="<Cluster_User_name>",Password="<Cluster_Password>",S3Configuration={RoleARN="arn:aws:iam::11111111111:role/AccountAXAccountBRole",BucketARN="arn:aws:s3:::b-bucket",Prefix="test/",CompressionFormat="UNCOMPRESSED"}'
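Once the stream is created, you can confirm it has reached the ACTIVE state before sending data; for example (same stream name as above, run with your Account A credentials):
aws firehose describe-delivery-stream --delivery-stream-name testFirehoseStreamToRedshift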
You can test this by creating a test table on the other AWS Account:
create table test_schema.x_test
(
ID1 INT8 NOT NULL,
STRING_DATA1 VARCHAR(10) NOT NULL
)
distkey(ID1)
sortkey(ID1,STRING_DATA1);
You can send test data like this:
aws firehose put-record --delivery-stream-name testFirehoseStreamToRedshift --record '{"Data":"1,\"ABCDEFGHIJ\""}'
This, together with the permissions configuration above, should give you cross-account access.
Documentation:
Create Stream - http://docs.aws.amazon.com/cli/latest/reference/firehose/create-delivery-stream.html
Put Record - http://docs.aws.amazon.com/cli/latest/reference/firehose/put-record.html
No.
Amazon Kinesis Firehose will only output to Amazon S3 buckets and Amazon Redshift clusters in the same region.
However, anything can send information to Kinesis Firehose by simply calling the appropriate endpoint. So, you could have applications in any AWS Account and in any Region (or anywhere on the Internet) send data to the Firehose and then have it stored in a bucket or cluster in the same region as the Firehose.
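For example, a producer running in any account or region can write to a Firehose stream in us-east-1 just by targeting that region explicitly (the stream name below is a placeholder):
aws firehose put-record --region us-east-1 --delivery-stream-name my-us-east-1-stream --record '{"Data":"hello from another region"}'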
I am trying to export logs from one of my CloudWatch log groups into Amazon S3, using AWS console.
I followed the guide from AWS documentation but with little success. My organization does not allow me to manage IAM roles/policies, however I was able to find out that my role is allowed all log-related operations (logs:* on all resources within the account).
Currently, I am stuck on the following error message:
Could not create export task. PutObject call on the given bucket failed. Please check if CloudWatch Logs has been granted permission to perform this operation.
My bucket policy is set in the following way:
{
"Statement": [
...
{
"Sid": "Cloudwatch Log Export 1",
"Effect": "Allow",
"Principal": {
"Service": "logs.eu-central-1.amazonaws.com"
},
"Action": "s3:GetBucketAcl",
"Resource": "arn:aws:s3:::my-bucket"
},
{
"Sid": "Cloudwatch Log Export 2",
"Effect": "Allow",
"Principal": {
"Service": "logs.eu-central-1.amazonaws.com"
},
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::my-bucket/*"
}
]
}
Prior to editing the bucket policy, my error message had been
Could not create export task. GetBucketAcl call on the given bucket failed. Please check if CloudWatch Logs has been granted permission to perform this operation.
but editing the bucket policy fixed that. I would expect allowing PutObject to do the same, but this has not been the case.
Thank you for your help.
Ensure that when exporting the data you configure the following correctly:
S3 bucket prefix - optional. This is the object prefix (folder) under which the exported logs will be stored.
While creating the policy for PutObject, you must make sure the object prefix is captured in the resource. See the diff for the PutObject statement Resource:
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "s3:GetBucketAcl",
"Effect": "Allow",
"Resource": "arn:aws:s3:::my-exported-logs",
"Principal": { "Service": "logs.us-east-2.amazonaws.com" }
},
{
"Action": "s3:PutObject" ,
"Effect": "Allow",
- "Resource": "arn:aws:s3:::my-exported-logs/*",
+ "Resource": "arn:aws:s3:::my-exported-logs/**_where_i_want_to_store_my_logs_***",
"Condition": { "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" } },
"Principal": { "Service": "logs.us-east-2.amazonaws.com" }
}
]
}
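Then, when creating the export task, pass the same prefix so the exported objects land under the path that the policy allows; a sketch with the CLI (the log group name and the millisecond timestamps are placeholders):
aws logs create-export-task --log-group-name my-log-group --from 1609459200000 --to 1612137600000 --destination my-exported-logs --destination-prefix where_i_want_to_store_my_logs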
Please check this guide: Export log data to Amazon S3 using the AWS CLI.
The policy looks like the one in the document that you shared, but slightly different.
Assuming that you are doing this in the same account and the same region, please check that you are using the right region (in this example it is us-east-2):
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "s3:GetBucketAcl",
"Effect": "Allow",
"Resource": "arn:aws:s3:::my-exported-logs",
"Principal": { "Service": "logs.us-east-2.amazonaws.com" }
},
{
"Action": "s3:PutObject" ,
"Effect": "Allow",
"Resource": "arn:aws:s3:::my-exported-logs/*",
"Condition": { "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" } },
"Principal": { "Service": "logs.us-east-2.amazonaws.com" }
}
]
}
I think that bucket-owner-full-control is not the problem here; the most likely issue is the region.
Anyway, take a look at the other two examples in that guide in case you are working across accounts or using a role instead of a user.
This solved my issue, which was the same one you mention.
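To double-check which region the bucket actually lives in (it has to match the logs.<region>.amazonaws.com principal and the log group's region), you can run:
aws s3api get-bucket-location --bucket my-exported-logs
# prints the LocationConstraint, e.g. "eu-central-1"; a null value means us-east-1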
One thing to check is your encryption settings. According to https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/S3ExportTasksConsole.html
Exporting log data to Amazon S3 buckets that are encrypted by AWS KMS is not supported.
Amazon S3-managed keys (SSE-S3) bucket encryption might solve your problem. If you use SSE-KMS, CloudWatch Logs can't access your encryption key in order to properly encrypt the objects as they are put into the bucket.
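You can quickly confirm what default encryption the bucket uses (bucket name below is a placeholder):
aws s3api get-bucket-encryption --bucket my-exported-logs
# SSEAlgorithm "AES256" means SSE-S3; "aws:kms" means SSE-KMS, which the export does not support per the doc above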
I had the same situation and what worked for me was to add the bucket itself (not only the objects) as a resource in the Allow PutObject Sid, like:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowLogsExportGetBucketAcl",
"Effect": "Allow",
"Principal": {
"Service": "logs.eu-west-1.amazonaws.com"
},
"Action": "s3:GetBucketAcl",
"Resource": "arn:aws:s3:::my-bucket"
},
{
"Sid": "AllowLogsExportPutObject",
"Effect": "Allow",
"Principal": {
"Service": "logs.eu-west-1.amazonaws.com"
},
"Action": "s3:PutObject",
"Resource": [
"my-bucket",
"my-bucket/*"
]
}
]
}
I also believe that all the other answers are relevant, especially the point about specifying the export time range in milliseconds.
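On the milliseconds point: the --from and --to values of create-export-task are epoch timestamps in milliseconds, not seconds. A small sketch for exporting the last 24 hours (log group, bucket and prefix are placeholders):
TO=$(( $(date +%s) * 1000 ))
FROM=$(( TO - 24*60*60*1000 ))
aws logs create-export-task --log-group-name my-log-group --from $FROM --to $TO --destination my-bucket --destination-prefix my-prefix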
My AWS account has 2 different users: admin and s3_readonly.
I am the main admin and have 1 cluster in Redshift (cluster1).
Now I am trying to schedule a query that just calls those procedures every hour (CALL <procedure_name>).
For this task, I have followed the official documentation from AWS (Scheduling a query on the Amazon Redshift console - Amazon Redshift) and, to be exact, the steps in this post (Scheduling SQL queries on your Amazon Redshift data warehouse | AWS Big Data Blog).
So I created a new IAM role, RedshiftScheduler, using the Redshift - Customizable option, and attached AmazonRedshiftDataFullAccess to it. Then I edited the trust relationship and added:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "redshift.amazonaws.com"
},
"Action": "sts:AssumeRole"
},
{
"Sid": "S2",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<ACCOUNT_ID>:user/admin"
},
"Action": "sts:AssumeRole"
},
{
"Sid": "S1",
"Effect": "Allow",
"Principal": {
"Service": "events.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
I then went back to my AWS user (admin) and attached a new policy granting AssumeRole permission on that role:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "S3",
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Resource": "arn:aws:iam::<ACCOUNT_ID>:role/RedshiftScheduler"
}
]
}
Now, I logged in to the Redshift cluster via the AWS console and used temporary credentials to connect to cluster1 as user dbuser. However, when I try to schedule the query it throws an error:
To view the schedule history of this schedule, add sts:AssumeRole for IAM role arn:aws:iam::<ACCOUNT_ID>:role/RedshiftScheduler to your IAM role. You also need to add your IAM user ARN to the role’s trust policy.
You need to add your IAM user ARN to the role's trust policy, like this:
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<account #>:user/<admin username"
},
"Action": "sts:AssumeRole",
"Condition": {}
}
placing it after this existing statement:
{
"Sid": "S1",
"Effect": "Allow",
"Principal": {
"Service": "events.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
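With both the trust policy and the user policy in place, you can verify that the admin user is actually allowed to assume the role before scheduling again, for example with the CLI using the admin user's credentials:
aws sts assume-role --role-arn arn:aws:iam::<ACCOUNT_ID>:role/RedshiftScheduler --role-session-name schedule-test
# success returns temporary credentials; an AccessDenied error means the trust policy or the user's policy still needs fixing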
I want to use Glue Crawler to crawl data from an S3 bucket. This S3 bucket is in another AWS account. Let's call it Account A. My Glue Crawler is in Account B.
I have created a Role in Account B and called it AWSGlueServiceRole-Reporting
I have attached the following policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "BucketAccess",
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:GetBucketLocation"
],
"Resource": [
"arn:aws:s3:::AccountAbucketname"
]
},
{
"Sid": "ObjectAccess",
"Effect": "Allow",
"Action": "s3:GetObject",
"Resource": [
"arn:aws:s3:::AccountABucketName/Foldername/*"
]
}
]
}
I also attached the AWSGlueServiceRole managed policy.
In Account A that has the S3 bucket, I've attached the following bucket policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::AccountB:role/AWSGlueServiceRoleReporting”
},
"Action": [
"s3:ListBucket",
"s3:GetBucketLocation"
],
"Resource": "arn:aws:s3:::AccountABucketName"
},
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::AccountB:role/AWSGlueServiceRoleReporting”
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::AccountABucketName/FolderName/*"
}
]
}
I'm able to run the Glue Crawler in Account B on this S3 bucket and it created the Glue tables. But when I try to query them in Athena, I get Access Denied.
Can anybody help me figure out how to query them in Athena?
When Amazon Athena queries run, they use the permissions of the user that is running the query.
Therefore, you will need to modify the Bucket Policy on the bucket in Account A to permit access by whoever is running the query in Amazon Athena:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::AccountB:role/AWSGlueServiceRoleReporting",
"arn:aws:iam::AccountB:user/username"
]
},
"Action": [
"s3:ListBucket",
"s3:GetBucketLocation"
],
"Resource": "arn:aws:s3:::AccountABucketName"
},
{
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::AccountB:role/AWSGlueServiceRoleReporting",
"arn:aws:iam::AccountB:user/username"
]
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::AccountABucketName/FolderName/*"
}
]
}
The user will also need sufficient S3 permissions (on their IAM User) to access that S3 bucket, for example s3:ListBucket and s3:GetObject on the bucket. They likely already have this, but it is worth mentioning.
This is different from AWS Glue, which uses an IAM Role. Athena does not accept an IAM Role for running queries.
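Once the bucket policy includes the IAM User, that user can query the Glue tables as usual; for example from the CLI (database, table, workgroup and results location below are placeholders):
aws athena start-query-execution --query-string "SELECT * FROM reporting_db.my_table LIMIT 10" --query-execution-context Database=reporting_db --work-group primary --result-configuration OutputLocation=s3://my-athena-results/
# the query runs with this caller's permissions, not with the Glue crawler's role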
I'm trying to upload an image from a .NET web service to an Amazon S3 bucket.
By using this public policy on the bucket I can do that:
{
"Id": "Policyxxxxxxxx",
"Version": "yyyy-MM-dd",
"Statement": [
{
"Sid": "xxxxxxxxxx",
"Action": [
"s3:GetObject",
"s3:PutObject"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::(bucketName)/*",
"Principal": "*"
}
]
}
But when I try to give access only to my user/credentials like this:
{
"Id": "Policyxxxxxxxx",
"Version": "yyyy-MM-dd",
"Statement": [
{
"Sid": "xxxxxxxxxx",
"Action": [
"s3:GetObject",
"s3:PutObject"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::(bucketName)/*",
"Principal": {
"AWS": [
"arn:aws:iam::(accountID):user/(userName)"
]
}
}
]
}
i get "Accces Denied".
So what im doing wrong with the policy?
If you wish to grant access to an Amazon S3 bucket to a particular IAM User, you should put the policy on the IAM User itself rather than using a bucket policy.
For example, see: Create a single IAM user to access only specific S3 bucket
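For example, instead of the bucket policy you could attach an inline policy to the user that grants just GetObject/PutObject on that bucket (user name, policy name and bucket name below are placeholders):
aws iam put-user-policy --user-name uploadUser --policy-name S3UploadAccess --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":["s3:GetObject","s3:PutObject"],"Resource":"arn:aws:s3:::my-upload-bucket/*"}]}'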
I am trying to use the credentials provider to access an Amazon S3 bucket from my IoT device. I implemented all the steps in this blog post: https://aws.amazon.com/blogs/security/how-to-eliminate-the-need-for-hardcoded-aws-credentials-in-devices-by-using-the-aws-iot-credentials-provider/ ; however, when I use the credentials provided by the service to access S3, I get 'AmazonS3Exception: The AWS Access Key Id you provided does not exist in our records.' (Java SDK)
My role has the following access policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetObject"
],
"Resource": "arn:aws:s3:::*/*"
}
]
}
and this trust relationship:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "credentials.iot.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
I used the credentials provider endpoint from here:
aws iot describe-endpoint --endpoint-type iot:CredentialProvider
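For completeness, the temporary credentials are then fetched from that endpoint with the device certificate, roughly like this (endpoint, certificate paths, role alias and thing name below are placeholders):
curl --cert device-cert.pem --key device-private-key.pem --cacert AmazonRootCA1.pem -H "x-amzn-iot-thingname: my-iot-thing" https://<credentials-provider-endpoint>/role-aliases/s3-access-role-alias/credentials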
The device certificate and keys work fine to access the MQTT message broker.
Edit: the system time and the server time differ by 1 hour, hence the token looks as if it is expired when I get it (the "expiration" field in the token is the same as the current system time). This should not make any difference, should it? Is there a way to directly use the role, instead of an alias, to test this assumption?
This is how I access S3 in Java:
final AmazonS3 s3 = AmazonS3ClientBuilder.standard()
.withCredentials(
new AWSStaticCredentialsProvider(
new BasicAWSCredentials(
securityToken.getCredentials().getAccessKeyId(),
securityToken.getCredentials().getSecretAccessKey()
)
)
).withRegion(Regions.US_EAST_1)
.build();
final ObjectMetadata object = s3.getObject(new GetObjectRequest(
"iot-raspberry-test", "updateKioskJob.json"
), new File("/downloads/downloaded.json"));
This is the policy attached to the certificate of my thing:
{
"Version": "2012-10-17",
"Statement": {
"Effect": "Allow",
"Action": "iot:AssumeRoleWithCertificate",
"Resource": "arn:aws:iot:us-east-1:myaccountid:rolealias/s3-access-role-alias"
}
}
What could I be missing?
Thanks in advance!
The first policy is not complete:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetObject"
],
"Resource": "arn:aws:s3:::*/*"
}
]
}
Load it in the IAM policy simulator and you can see that it won't work as intended. S3 needs listing access on the bucket itself, not just GetObject on the objects.
See the following example:
{
"Version": "2012-10-17",
"Statement":[
{
"Effect":"Allow",
"Action":[
"s3:ListBucket"
],
"Resource":"arn:aws:s3:::bucket-name"
},
{
"Effect":"Allow",
"Action":[
"s3:GetObject"
],
"Resource":"arn:aws:s3:::bucket-name/*"
}
]
}
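If you want to check a policy like this before deploying it, the policy simulator is also available from the CLI, assuming the policy JSON is saved locally (file name, bucket and key below are placeholders):
aws iam simulate-custom-policy --policy-input-list file://s3-access-policy.json --action-names s3:ListBucket --resource-arns arn:aws:s3:::bucket-name
# repeat with --action-names s3:GetObject --resource-arns arn:aws:s3:::bucket-name/your-object-key to cover object reads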