AWS IoT Rule - Update multiple columns in DynamoDB - amazon-web-services

I have been able to use the AWS documentation to insert an MQTT message into a single column in a table. I would like to update (not insert) multiple columns in the table. I used the DynamoDBv2 action in my IoT rule and changed the IAM role to allow UpdateItem, but nothing is happening.
Is there a way to see where/when these errors are occurring?
Should I create a Lambda function to handle this instead? Is there an example of this?
Thanks.

First of all, keep in mind that the DynamoDBv2 action uses PutItem internally, so you cannot update only selected attributes; the whole item will be overwritten.
If you want to update an item, you need to implement a Lambda function and perform the update yourself.
For the IAM role, your trust relationship should contain:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "iot.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
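A minimal sketch of such a Lambda, assuming the IoT rule forwards the MQTT payload as the event and the table uses a deviceId partition key (the table and key names are assumptions, not from the question):

```python
def build_update(payload):
    """Build an UpdateExpression plus name/value maps from an MQTT payload dict."""
    expr = "SET " + ", ".join(f"#k{i} = :v{i}" for i in range(len(payload)))
    names = {f"#k{i}": key for i, key in enumerate(payload)}
    values = {f":v{i}": val for i, val in enumerate(payload.values())}
    return expr, names, values

def lambda_handler(event, context):
    # boto3 is imported lazily so build_update stays usable without the AWS SDK.
    import boto3
    table = boto3.resource("dynamodb").Table("SensorData")  # assumed table name
    key = {"deviceId": event.pop("deviceId")}               # assumed partition key
    expr, names, values = build_update(event)
    # UpdateItem only touches the attributes in the expression, unlike PutItem.
    table.update_item(
        Key=key,
        UpdateExpression=expr,
        ExpressionAttributeNames=names,
        ExpressionAttributeValues=values,
    )
```

The placeholder expression names (#k0, :v0, ...) sidestep collisions with DynamoDB reserved words such as status or timestamp.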

You can use the DynamoDBv2 rule action to update multiple attributes (or columns, if you will) in your DynamoDB table. The role you provide to the IoT rule needs to allow dynamodb:PutItem on the table in question. The role, of course, needs to have the IoT service in its trust policy (also known as the assume role policy document in some places).
To help troubleshoot any issues, turn on IoT logging and set the level to Debug. Then you can view any errors in Amazon CloudWatch Logs.
https://docs.aws.amazon.com/iot/latest/developerguide/iot-rule-actions.html
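A sketch of enabling that logging from the CLI, assuming you have already created a logging role that can write to CloudWatch Logs (the role ARN below is a placeholder):

```shell
# Enable account-level AWS IoT logging at DEBUG level.
aws iot set-v2-logging-options \
    --role-arn arn:aws:iam::123456789012:role/IoTLoggingRole \
    --default-log-level DEBUG

# Rule action errors then appear in the AWSIotLogsV2 log group.
aws logs filter-log-events --log-group-name AWSIotLogsV2 --filter-pattern ERROR
```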

Related

How to protect AWS tagged resources via SCP?

I have a sensitive set of resources (Lambda, S3 bucket, IAM...) I'd like to protect in case someone tries to erase a bucket policy, delete a function, or do any harm to these resources. All of them are tagged as <<MY_KEY>>:<<MY_VALUE>>. The thing is that I'd like to do it at the organization level, since I have more than one AWS account. I'm using this policy in an SCP.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyActionsOnTaggedResources",
      "Effect": "Deny",
      "Action": [
        "s3:PutBucketPolicy",
        "s3:PutBucketTagging",
        "s3:DeleteBucketPolicy",
        "s3:PutAccessPointPolicyForObjectLambda",
        "s3:PutBucketPublicAccessBlock",
        "s3:DeleteAccessPointPolicyForObjectLambda",
        "s3:PutMultiRegionAccessPointPolicy",
        "s3:PutBucketAcl",
        "s3:DeleteAccessPointPolicy",
        "s3:PutAccessPointPolicy",
        "s3:BypassGovernanceRetention",
        "lambda:DeleteFunction",
        "lambda:DeleteCodeSigningConfig",
        "lambda:DeleteFunctionCodeSigningConfig",
        "lambda:AddLayerVersionPermission",
        "lambda:RemoveLayerVersionPermission",
        "lambda:EnableReplication",
        "lambda:AddPermission",
        "lambda:DisableReplication",
        "lambda:DeleteLayerVersion",
        "lambda:DeleteFunctionEventInvokeConfig",
        "lambda:PublishVersion",
        "lambda:CreateAlias",
        "lambda:RemovePermission",
        "iam:DeleteRole",
        "iam:DeleteInstanceProfile",
        "iam:DeletePolicy",
        "iam:DeleteRolePolicy",
        "iam:DeleteUserPolicy",
        "iam:DeleteGroupPolicy",
        "iam:UpdateAssumeRolePolicy",
        "iam:PutRolePermissionsBoundary",
        "iam:AttachRolePolicy",
        "iam:PutRolePolicy",
        "iam:DeleteRolePermissionsBoundary",
        "iam:CreatePolicy",
        "iam:DetachRolePolicy",
        "iam:CreatePolicyVersion",
        "iam:DeletePolicyVersion"
      ],
      "Resource": [
        "*"
      ],
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/<<MY_KEY>>": "<<MY_VALUE>>"
        },
        "StringNotEquals": {
          "aws:PrincipalArn": [
            "arn:aws:iam::*:role/<<MY_ROLE>>"
          ]
        }
      }
    }
  ]
}
For the sake of testing, whenever I use a role that is not my role, I am still able to modify the resources. Where is my mistake?
It turns out that @John Rotenstein is right: S3 API calls do not support ResourceTag as a condition.
Since this was an urgent demand at work, I ended up opening a support case with AWS, and they replied:
I understand you are trying to restrict actions on an S3 bucket using the ResourceTag condition key.
Unfortunately, you cannot currently use the aws:ResourceTag condition key to control access to an S3 bucket; please refer to the following documentation [1]. There you can see that the only resource type that currently supports the aws:ResourceTag condition key is "storagelensconfiguration". There is an existing feature request with the S3 service team to add support for the aws:ResourceTag condition key, which I have +1'd on your behalf. I am unable to provide an ETA for when the feature might be released, since I have no visibility into the service team's processes. However, all new feature announcements are made available on our What's New with AWS page [2].
When it comes to controlling access to S3 with tags, we do have examples in the following AWS documentation [3], which uses tags applied to specific objects to control access. It makes use of the condition keys s3:ExistingObjectTag/<tag-key>, s3:RequestObjectTagKeys, and s3:RequestObjectTag/<tag-key> to control access to certain S3 actions; however, it requires the individual objects to be tagged and will not work with tags at the bucket level. I would suggest reading through the linked documentation [3] to see if the solution described there meets your organization's needs.
I hope you find the above information helpful, please let me know if you have any additional questions.
[1] Actions, resources, and condition keys for Amazon S3 - https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html
[2] What's New with AWS? - https://aws.amazon.com/new/
[3] Tagging and access control policies - https://docs.aws.amazon.com/AmazonS3/latest/userguide/tagging-and-policies.html
Can you modify that StringNotEquals to StringNotLike and try again? As you are using a wildcard (*) in the condition, StringNotEquals won't work. The rest of the policy looks sound.
String condition operators
I also recommend using Access Analyzer to validate policies. It will catch similar errors when building policies. See Access Analyzer.
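A sketch of the corrected condition block per that suggestion: StringNotLike handles the wildcard in the principal ARN, while StringEquals keeps the tag match (and, per the support reply, the tag condition still will not take effect for the S3 bucket actions):

```json
"Condition": {
  "StringEquals": {
    "aws:ResourceTag/<<MY_KEY>>": "<<MY_VALUE>>"
  },
  "StringNotLike": {
    "aws:PrincipalArn": [
      "arn:aws:iam::*:role/<<MY_ROLE>>"
    ]
  }
}
```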

Is there a complete list of AWS resource actions?

Is there a list of AWS resource actions anywhere? For example, if I look at one of the AWS policies for SQS read-only access, I see a list of actions. But I can't find the FULL list of actions for this resource, despite searching for what seems like forever. Some of the API reference pages refer to the necessary action permission (like for create queue), but not all. I had a custom policy and found out I needed the GetQueueUrl action. So, in summary, I just want to know if there is ANYWHERE that AWS lists out all the actions for each service?
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "sqs:GetQueueAttributes",
        "sqs:GetQueueUrl",
        "sqs:ListDeadLetterSourceQueues",
        "sqs:ListQueues"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
I think this should work for you:
Go to IAM.
Under Policies -> Create policy.
Choose a service -> under Actions -> expand all.
You can see all the actions associated with that service through the console.
You can also use https://docs.amazonaws.cn/en_us/service-authorization/latest/reference/reference_policies_actions-resources-contextkeys.html; select your service and you will see the actions defined for that service in tabular form.
See Actions, resources, and condition keys for Amazon SQS. It contains the full list of Actions.
For all AWS services, start with Actions, resources, and condition keys for AWS services.

AWS GraphQL Appsync - unable to assume role

I'm running a tech stack of react -> graphQL -> appsync -> lambda -> go
When I run my GraphQL query from the client, I receive this error back:
Unable to assume role arn:aws:iam::<SOMENUMBER>:role/service-role/MyRoleForMyLambda.
In fact, this was all running fine until I accidentally changed the function ARN and roles on my data source to other ones. I changed them back, but now AppSync seems unable to find the role and function ARN. I tried creating a completely new data source, but I have the same issue. Often the function ARN and/or roles don't appear in the dropdown and I enter them manually. Sometimes it lets me save without errors; other times, when attempting to save the data source, I get the helpful error message "Error". Sometimes, after saving, when I go to look at them again, the function ARN field is blank unless I click on the 'not in drop down' link.
I don't think the problem is with my role itself, as it appears that AppSync can't even assume the role to start with. I've read about trust policies as a solution, but I don't know where to put them.
Any help much appreciated.
In the IAM console, you need to add the AppSync service as a trusted entity to the role you are trying to assume.
Click Edit trust relationship and enter the following:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "appsync.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
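A minimal sketch of applying that trust policy programmatically with boto3, in case you prefer scripting over the console (the role name below is a placeholder):

```python
import json

# The same trust policy as above, as a Python structure.
APPSYNC_TRUST_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "appsync.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

def allow_appsync_to_assume(role_name):
    """Replace the role's trust policy so AppSync can assume it."""
    import boto3  # deferred so the policy document can be inspected without the SDK
    iam = boto3.client("iam")
    iam.update_assume_role_policy(
        RoleName=role_name,
        PolicyDocument=json.dumps(APPSYNC_TRUST_POLICY),
    )
```

Note that update_assume_role_policy replaces the whole trust policy; if other services already trust the role, merge their statements in rather than overwriting them.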

How to run DynamoDB web service?

I have been reading the AWS documentation for a while but still cannot run the DynamoDB web service. I already have a working codebase; I just need to run the web service and get access keys and an endpoint URL. The only button Amazon shows is Create table, which I do not need, as I create tables from code.
First, the endpoints to access DynamoDB are regional and don't depend on the table name; you can find them here: https://docs.aws.amazon.com/general/latest/gr/rande.html
Second, if you create tables and access them from the same application (running on AWS), you should use AWS roles and make sure you give the right permissions to the IAM policy associated with the role. If you create the tables from one specific service and access them from different services, then you need to make sure every service has the right role and, again, the IAM policy associated with the role. Finally, if you access them as different users, you need to make sure those users have an associated IAM policy that gives them access.
If you don't want to create a large number of policies, and you don't want to modify them whenever you create new resources, you can use a prefix for your table names, such as app_a_..., and in the policies grant access to the right subset of resources using the same prefix, for example:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "accessToAppAtables",
      "Effect": "Allow",
      "Action": "dynamodb:*",
      "Resource": "arn:aws:dynamodb:<REGION>:<ACCOUNT_ID>:table/app_a_*"
    }
  ]
}
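A minimal sketch of the prefix convention on the application side (the prefix, table, and key names are illustrative, not from the original post):

```python
# Every table this app creates shares one prefix, so the single policy
# resource pattern arn:...:table/app_a_* covers all of them.
APP_PREFIX = "app_a_"

def table_name(base):
    """Return the fully prefixed table name used in code and in the IAM policy."""
    return APP_PREFIX + base

def create_table(base, key_attr):
    """Create a prefixed table; the regional endpoint comes from the client's region."""
    import boto3  # deferred: only needed when actually talking to AWS
    client = boto3.client("dynamodb")
    client.create_table(
        TableName=table_name(base),
        AttributeDefinitions=[{"AttributeName": key_attr, "AttributeType": "S"}],
        KeySchema=[{"AttributeName": key_attr, "KeyType": "HASH"}],
        BillingMode="PAY_PER_REQUEST",
    )
```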
Refer to this document.

IAM Policy using Condition ec2:ResourceTag not working

I have n EC2 instances, and I wish to limit EC2 actions to instances with the same key/value tag (i.e., platform=dev).
I'm looking at doing this using an IAM policy attached to the group their default IAM user is in.
Policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:*",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "ec2:ResourceTag/tag:platform": "dev"
        }
      }
    }
  ]
}
I set this up as per the online AWS docs: Example Policies for Working With the AWS CLI or an AWS SDK
I check it in the Policy Simulator and it works as expected (pass in a dev and it's allowed, otherwise denied).
Then, on one of the servers tagged platform=dev, I run aws ec2 describe-instances and get the response:
An error occurred (UnauthorizedOperation) when calling the DescribeInstances operation: You are not authorized to perform this operation.
But if I remove the Condition, it works. I don't understand what I'm doing wrong. Any help would be gratefully received!
The problem is that not every API action and resource will accept the ec2:ResourceTag condition; in particular, the EC2 Describe* actions (including DescribeInstances) do not support resource-level permissions or conditions, which is why the call succeeds only once the condition is removed.
I think you're probably granting overly broad permissions (Action: ec2:*), so figure out which actions your instances actually need, then decide how to restrict them.
The list of actions, resources, and condition keys can be found at Supported Resource-Level Permissions for Amazon EC2 API Actions.
I have run into this issue before; it had something to do with combining wildcards and conditions. What solved it for us was being more explicit about the action (e.g., ["ec2:DescribeInstances"]) and about the resource as well (arn:aws:ec2:region:accountid:instance/*).
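Putting the answers together, a sketch of a policy that separates the Describe* calls (which cannot be conditioned on tags) from tag-scoped instance actions; the specific actions listed are illustrative, so adjust them to what your instances actually need:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DescribeActionsTakeNoCondition",
      "Effect": "Allow",
      "Action": "ec2:Describe*",
      "Resource": "*"
    },
    {
      "Sid": "TagScopedInstanceActions",
      "Effect": "Allow",
      "Action": [
        "ec2:StartInstances",
        "ec2:StopInstances",
        "ec2:RebootInstances"
      ],
      "Resource": "arn:aws:ec2:*:*:instance/*",
      "Condition": {
        "StringEquals": {
          "ec2:ResourceTag/platform": "dev"
        }
      }
    }
  ]
}
```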