AWS IoT credentials provider credentials not working for S3 - amazon-web-services

I am trying to use the credentials provider to access an AWS S3 bucket from my IoT device. I implemented all the steps in this blog post: https://aws.amazon.com/blogs/security/how-to-eliminate-the-need-for-hardcoded-aws-credentials-in-devices-by-using-the-aws-iot-credentials-provider/ However, when I use the credentials returned by the service to access S3, I get 'AmazonS3Exception: The AWS Access Key Id you provided does not exist in our records.' (Java SDK)
My role has the following access policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::*/*"
    }
  ]
}
and this trust relationship:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "credentials.iot.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
I used the credentials provider endpoint from here:
aws iot describe-endpoint --endpoint-type iot:CredentialProvider
The device certificate and keys work fine to access the MQTT message broker.
Edit: the system time and server time differ by 1 hour, so the token looks as if it is already expired when I receive it (the "expiration" field in the token is the same time as the current system time). This should not make any difference, should it? Is there a way to use the role directly, instead of an alias, to test this assumption?
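For debugging, the credentials provider can also be called directly over mutual TLS, outside the SDK (a sketch based on the linked blog post; the endpoint subdomain, file names, and thing name below are placeholders):

```shell
# Fetch temporary credentials from the AWS IoT credentials provider,
# authenticating with the device certificate and private key.
# The response body contains accessKeyId, secretAccessKey, sessionToken,
# and expiration, which makes the clock-skew theory easy to inspect.
curl --cert device.pem.crt --key private.pem.key \
  -H "x-amzn-iot-thingname: my-thing-name" \
  "https://<credentials-endpoint>.credentials.iot.us-east-1.amazonaws.com/role-aliases/s3-access-role-alias/credentials"
```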
This is how I access S3 in Java:
final AmazonS3 s3 = AmazonS3ClientBuilder.standard()
        .withCredentials(
                new AWSStaticCredentialsProvider(
                        new BasicAWSCredentials(
                                securityToken.getCredentials().getAccessKeyId(),
                                securityToken.getCredentials().getSecretAccessKey()
                        )
                )
        )
        .withRegion(Regions.US_EAST_1)
        .build();
final ObjectMetadata object = s3.getObject(
        new GetObjectRequest("iot-raspberry-test", "updateKioskJob.json"),
        new File("/downloads/downloaded.json")
);
This is the policy attached to the certificate of my thing:
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": "iot:AssumeRoleWithCertificate",
    "Resource": "arn:aws:iot:us-east-1:myaccountid:rolealias/s3-access-role-alias"
  }
}
What could I be missing?
Thanks in advance!

The first policy is not complete:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::*/*"
    }
  ]
}
Load it in the IAM policy simulator and you can see that it won't work: S3 also needs ListBucket permission on the bucket itself, in addition to GetObject on the objects.
See the following example:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": "arn:aws:s3:::bucket-name"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::bucket-name/*"
    }
  ]
}
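One more thing worth checking in the question's Java code: the credentials provider returns temporary credentials, and all three parts must be passed to the SDK, including the session token. Supplying only the access key and secret key through BasicAWSCredentials produces exactly the "Access Key Id does not exist" error. A sketch using the v1 SDK's session-credentials class (the accessor names on securityToken follow the question's code and are assumptions):

```java
// Temporary credentials require BasicSessionCredentials, which carries
// the session token alongside the access key and secret key.
final AmazonS3 s3 = AmazonS3ClientBuilder.standard()
        .withCredentials(new AWSStaticCredentialsProvider(
                new BasicSessionCredentials(
                        securityToken.getCredentials().getAccessKeyId(),
                        securityToken.getCredentials().getSecretAccessKey(),
                        securityToken.getCredentials().getSessionToken())))
        .withRegion(Regions.US_EAST_1)
        .build();
```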

Related

Amazon SP-API Golang SDK Permission Denied

I'm trying to use the SP-API programmatically from Golang.
I tried https://github.com/amazinsellers/amazon-sp-api-sdk-go and also the official https://github.com/amzapi/selling-partner-api-sdk, and I always get permission denied on the sample code of both.
I followed this guide for all the IAM setup on my AWS account, but it still doesn't work:
https://spapi.cyou/en/guides/SellingPartnerApiDeveloperGuide.html#terminology
Here is my response JSON:
{
  "AssumeRoleResponse": {
    "-xmlns": "https://sts.amazonaws.com/doc/2011-06-15/",
    "AssumeRoleResult": {
      "AssumedRoleUser": {
        "AssumedRoleId": "AR***:SPAPISession",
        "Arn": "arn:aws:sts::123***:assumed-role/MyStsRoleName/SPAPISession"
      },
      "Credentials": {
        "SessionToken": "FwoGZXIvYXdzEPb//////////wEaDJyejpfUNUYyux***=",
        "Expiration": "2022-03-14T13:41:14Z",
        "AccessKeyId": "A***",
        "SecretAccessKey": "Dr***"
      }
    },
    "ResponseMetadata": {
      "RequestId": "b000db8e-e0f0-4150-b2fe-808d8212d599"
    }
  }
}
and the API call then returns:
{"code":"Unauthorized","details":"","message":"Access to requested resource is denied."}
My dev app on Seller Central is in Draft status. I passed it the role I created, as follows:
arn:aws:iam::123***:role/MyStsRoleName
and in my code:
SPClientID = "amzn1.application-oa2-client.123***"
SPClientSecret = "26***"
SPRefreshToken = "Atzr|**"
SPAccessKeyID = "AKI***"
SPSecretKey = "Xre***"
SPRegion = "eu"
SPRoleArn = "arn:aws:iam::123***:role/MySTSRoleName"
Could I get some help? I've been blocked on this part for 2 days and have found nothing that fixes it. I tried a lot of things in the IAM params; nothing works.
Here is the inline policy on my IAM user:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::123***:role/MySTSRoleName"
    }
  ]
}
The policy arn:aws:iam::123***:policy/SellingPartnerAPI is:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "execute-api:*",
      "Resource": "*"
    }
  ]
}
Edit:
My role's trusted entities:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123***:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {}
    }
  ]
}
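One thing worth checking: with a trust policy like the above, the role trusts the whole account root, so the calling IAM user also needs its own sts:AssumeRole permission (which the inline policy above grants). To narrow the trust, the role can instead trust the IAM user directly; a sketch, where the user name is a placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123***:user/MySpApiUser"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```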
Thanks a lot

Deploy Lambda with code source from another account's S3 bucket

I store my Lambda zip files in an S3 bucket in Account A. In Account B I have my Lambda. I am trying to have my Lambda use the zip file in Account A's bucket but I keep getting:
Your access has been denied by S3, please make sure your request credentials have permission to GetObject for bucket/code.zip. S3 Error Code: AccessDenied. S3 Error Message: Access Denied
I have followed guides I have found online but I am still facing issues.
Here is my current config:
Account A's S3 Bucket Policy:
{
  "Version": "2012-10-17",
  "Id": "ExamplePolicy",
  "Statement": [
    {
      "Sid": "ExampleStmt",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::AccountBID:role/MyLambdaRole"
      },
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::bucket",
        "arn:aws:s3:::bucket/*"
      ]
    }
  ]
}
Account B's Lambda Execution Role Policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::bucket/*",
        "arn:aws:s3:::bucket"
      ]
    }
  ]
}
The principal in your bucket policy is the role that AWS Lambda uses during execution, which is not the principal used when deploying your function. You could simply allow the entire Account B principal in the bucket policy and then use IAM policies in Account B to control access to the bucket that way.
A bucket policy allowing an entire account looks like this:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "ProductAccountAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::XXXX-account-number:root"
        ]
      },
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Resource": [
        "arn:aws:s3:::bucket",
        "arn:aws:s3:::bucket/*"
      ]
    }
  ]
}
This means the IAM policies in Account B depend on how you do your deployment: whatever credentials are used for the deployment need S3 permissions for that bucket.
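For example, an identity policy like the following sketch, attached in Account B to whichever user or role runs the deployment (the bucket name is a placeholder), would grant the needed read access:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::bucket",
        "arn:aws:s3:::bucket/*"
      ]
    }
  ]
}
```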

What's the right way to write an Amazon S3 bucket policy?

I'm trying to upload an image from a .NET web service to an Amazon S3 bucket.
With this public policy on the bucket I can do that:
{
  "Id": "Policyxxxxxxxx",
  "Version": "yyyy-MM-dd",
  "Statement": [
    {
      "Sid": "xxxxxxxxxx",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::(bucketName)/*",
      "Principal": "*"
    }
  ]
}
But when I try to give access only to my user/credentials like this:
{
  "Id": "Policyxxxxxxxx",
  "Version": "yyyy-MM-dd",
  "Statement": [
    {
      "Sid": "xxxxxxxxxx",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::(bucketName)/*",
      "Principal": {
        "AWS": [
          "arn:aws:iam::(accountID):user/(userName)"
        ]
      }
    }
  ]
}
I get "Access Denied".
So what am I doing wrong with the policy?
If you wish to grant access to an Amazon S3 bucket to a particular IAM User, you should put the policy on the IAM User itself rather than using a bucket policy.
For example, see: Create a single IAM user to access only specific S3 bucket
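A minimal user policy for this case might look like the following sketch (keeping the question's (bucketName) placeholder); attached to the IAM user, it allows reading and writing objects without any bucket policy at all:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::(bucketName)/*"
    }
  ]
}
```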

AWS Firehose cross region/account policy

I am trying to create Firehose streams that can receive data from different regions in Account A, through AWS Lambda, and output into a Redshift table in Account B. To do this I created an IAM role in Account A:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "firehose.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
I gave it the following permissions:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:ListBucketMultipartUploads",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::b-bucket/*",
        "arn:aws:s3:::b-bucket"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "firehose:*"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "redshift:*"
      ],
      "Resource": "*"
    }
  ]
}
On Account B I created a role with this trust policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "firehose.amazonaws.com"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "11111111111"
        }
      }
    }
  ]
}
I gave that role the following access:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::b-bucket",
        "arn:aws:s3:::b-bucket/*",
        "arn:aws:s3:::b-account-logs",
        "arn:aws:s3:::b-account-logs/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "firehose:*"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "redshift:*",
      "Resource": "arn:aws:redshift:us-east-1:cluster:account-b-cluster*"
    }
  ]
}
I also edited the access policy on the S3 buckets to give access to my Account A role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::11111111111:role/AccountAXAccountBPolicy"
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::b-bucket",
        "arn:aws:s3:::b-bucket/*"
      ]
    }
  ]
}
However, none of this works. When I try to create the stream in Account A, it does not list the buckets in Account B, nor the Redshift cluster. Is there any way to make this work?
John's answer is semi-correct. I would recommend that the account owning the Redshift cluster create the Firehose stream, since creating it through the CLI requires you to supply the cluster user name and password. Having the cluster owner create the stream and share IAM role permissions on it is safer, both for security and in case of a credential change. Additionally, you cannot create a stream that accesses a database outside its region, so have the delivery application write to the stream in the correct region.
Read on below to see how to create the cross-account stream.
In my case both accounts are accessible to me and to lower the amount of changes and ease of monitoring I created the stream on Account A side.
The above permissions are right; however, you cannot create a Firehose stream from Account A to Account B through the AWS Console. You need to do it through the AWS CLI:
aws firehose create-delivery-stream \
  --delivery-stream-name testFirehoseStreamToRedshift \
  --redshift-destination-configuration 'RoleARN="arn:aws:iam::11111111111:role/AccountAXAccountBRole",
    ClusterJDBCURL="jdbc:redshift://<cluster-url>:<cluster-port>/<>",
    CopyCommand={DataTableName="<schema_name>.x_test",DataTableColumns="ID1,STRING_DATA1",CopyOptions="csv"},
    Username="<Cluster_User_name>",Password="<Cluster_Password>",
    S3Configuration={RoleARN="arn:aws:iam::11111111111:role/AccountAXAccountBRole",BucketARN="arn:aws:s3:::b-bucket",Prefix="test/",CompressionFormat="UNCOMPRESSED"}'
You can test this by creating a test table on the other AWS Account:
create table test_schema.x_test
(
ID1 INT8 NOT NULL,
STRING_DATA1 VARCHAR(10) NOT NULL
)
distkey(ID1)
sortkey(ID1,STRING_DATA1);
You can send test data like this:
aws firehose put-record --delivery-stream-name testFirehoseStreamToRedshift --record '{"Data":"1,\"ABCDEFGHIJ\""}'
This with the permissions configuration above should create the cross account access for you.
Documentation:
Create Stream - http://docs.aws.amazon.com/cli/latest/reference/firehose/create-delivery-stream.html
Put Record - http://docs.aws.amazon.com/cli/latest/reference/firehose/put-record.html
No.
Amazon Kinesis Firehose will only output to Amazon S3 buckets and Amazon Redshift clusters in the same region.
However, anything can send information to Kinesis Firehose by simply calling the appropriate endpoint. So, you could have applications in any AWS Account and in any Region (or anywhere on the Internet) send data to the Firehose and then have it stored in a bucket or cluster in the same region as the Firehose.
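For instance, a producer running anywhere can target the stream's home region explicitly (a sketch reusing the stream name from the answer above):

```shell
# Send a record to a Firehose stream that lives in us-east-1,
# regardless of where the producer itself runs.
aws firehose put-record \
  --region us-east-1 \
  --delivery-stream-name testFirehoseStreamToRedshift \
  --record '{"Data":"1,\"ABCDEFGHIJ\""}'
```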

Giving an IAM User full access

Should an IAM user, say one called User1, be given full access like so:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "*",
      "Resource": "*"
    }
  ]
}
Could it also be used to make Amazon API Gateway calls? Is this a security risk, or should I create another user just to access the Amazon API Gateway?
You should never give an IAM user full privileges. So many things could go wrong, and yes it may very well be a security risk.
If you need to manage (create, configure, or deploy) your API in API Gateway with this IAM user, you can give the user this policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "apigateway:*"
      ],
      "Resource": "arn:aws:apigateway:*::/*"
    }
  ]
}
Or, if you only need to invoke the API, you can use this policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "execute-api:Invoke"
      ],
      "Resource": "arn:aws:execute-api:*:*:*"
    }
  ]
}