On AWS we've implemented functionality where an AWS Lambda function pushes messages to an SQS queue.
However, during this implementation I had to manually grant the Lambda function permission to add messages to a particular queue, and this approach with manual clicks is not great for a prod deployment.
Any suggestions on how to automate the process of granting permissions between AWS services (mainly Lambda and SQS) and create a "good" deployment package for a prod environment?
Each Lambda function has an attached execution role, whose permissions you can manage in the IAM dashboard. If you give the Lambda function's role permission to push to an SQS queue, you're good to go. For example, attach a policy like this to the role (note that an identity-based policy attached to a role has no Principal element; see http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/SQSExamples.html):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Queue1SendMessage",
      "Effect": "Allow",
      "Action": "sqs:SendMessage",
      "Resource": "arn:aws:sqs:us-east-1:444455556666:queue1"
    }
  ]
}
You can use a wildcard to grant permission to multiple queues, like:
"Resource": "arn:aws:sqs:us-east-1:444455556666:production-*"
to grant sqs:SendMessage permission on all queues whose names start with production-.
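To avoid the manual console clicks entirely, you can script the grant. Here is a minimal boto3 sketch; the role name and policy name are placeholders you'd replace with your Lambda function's actual execution role and whatever policy name you prefer:

import json
import boto3

iam = boto3.client('iam')

# Hypothetical role name; use your Lambda function's execution role.
role_name = 'my-lambda-execution-role'

send_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sqs:SendMessage",
            "Resource": "arn:aws:sqs:us-east-1:444455556666:production-*"
        }
    ]
}

# Attach the permissions as an inline policy so the grant is reproducible,
# not clicked together in the console.
iam.put_role_policy(
    RoleName=role_name,
    PolicyName='AllowSendToProductionQueues',
    PolicyDocument=json.dumps(send_policy)
)

The same policy document can also live in a CloudFormation/SAM or Terraform template if you prefer a declarative deployment package.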
Related
I have an Ubuntu EC2 instance with the CloudWatch agent running. The agent is able to push the logs to CloudWatch as expected, but I am unable to export the logs to S3.
The instance policy has SSMManagedInstanceCore and CloudWatchAgentServerPolicy attached, as described in the documentation.
At this point, I am not sure what policy needs to be assigned.
I also added a log policy to write to the S3 bucket.
All this is being done in Terraform.
Can someone help me solve this, please?
Thanks.
You can add an inline policy to your instance role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::<your-bucket-name>/*"
    }
  ]
}
Depending on the bucket setup, other permissions may be required, e.g. for KMS encryption.
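A quick way to confirm the instance role can actually write is to attempt an upload from the instance itself. A minimal boto3 sketch, with the bucket and key names as placeholders:

import boto3

# Credentials come from the instance role; no keys are needed on the instance.
s3 = boto3.client('s3')
s3.put_object(
    Bucket='<your-bucket-name>',
    Key='test/permissions-check.txt',
    Body=b'hello'
)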
UPDATE
If you want to automatically export your logs from CloudWatch Logs to S3, you have to set up a subscription filter with Amazon Kinesis Data Firehose. This is fully independent of your instance role and the instance itself.
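For reference, a minimal boto3 sketch of attaching such a subscription filter; the log group, delivery stream ARN, and role ARN below are placeholders, and the role must be one that CloudWatch Logs can assume to write to Firehose:

import boto3

logs = boto3.client('logs')

logs.put_subscription_filter(
    logGroupName='/my/log/group',        # placeholder
    filterName='to-firehose',
    filterPattern='',                    # empty pattern forwards everything
    destinationArn='arn:aws:firehose:us-east-1:111122223333:deliverystream/logs-to-s3',  # placeholder
    roleArn='arn:aws:iam::111122223333:role/CWLtoFirehoseRole'  # placeholder
)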
I want to make an IAM role for my Django app. How can I do this from both the AWS side and the Django side? Also, I have heard that this is best practice, but I don't really understand why it is important. Could someone explain? Thanks!
Update for Marcin:
session = boto3.Session(
    aws_access_key_id=my_key,
    aws_secret_access_key=my_secret
)
s3 = session.resource('s3')
Update 2 for Marcin:
client = boto3.client(
    'ses',
    region_name='us-west-2',
    aws_access_key_id=my_key,
    aws_secret_access_key=my_secret
)
client.send_raw_email(RawMessage=raw_message)
The default instance role that EB uses is aws-elasticbeanstalk-ec2-role. One way to customize it is by adding inline policies to it in the IAM console.
Since you require S3, SES and SNS, you can add permissions for them in the inline policy. It's not clear which actions you require (read-only for S3? publish-message only for SNS?), or whether you have specific resources in mind (e.g. only one given bucket or a single SNS topic), so you can start by adding full access to the services. But please note that giving full access is bad practice and does not follow the grant-least-privilege rule.
Nevertheless, an example of an inline policy with full access to S3, SES and SNS is:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "sns:*",
        "ses:*",
        "s3:*"
      ],
      "Resource": "*"
    }
  ]
}
The following should be enough:
s3 = boto3.resource('s3')
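The same applies to the SES client from the updates above. A sketch under the same assumption (an instance role with SES permissions attached; the addresses are placeholders):

import boto3

# With an instance role attached, boto3 resolves credentials automatically.
client = boto3.client('ses', region_name='us-west-2')

# Minimal raw MIME message; addresses are placeholders.
raw_message = (
    "From: sender@example.com\n"
    "To: recipient@example.com\n"
    "Subject: test\n\n"
    "Hello from SES without hard-coded keys."
)
client.send_raw_email(RawMessage={'Data': raw_message})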
In an attempt to further tighten the security of our solution, we are now looking at the SNS topics and SQS queues we use. All our components live in the same AWS account.
For starters we want to restrict the access to the SQS queues based on IP. So only requests coming from our NAT Gateway IP will be allowed. We don't allow anonymous access to our SQS queues.
But there seems to be no way to achieve this, as the creator of the SQS queues - the AWS account - has access by default. So you can't create an effective permission for another user under the same AWS account ID: a newly created user, user2, falls under the same AWS account ID, with the same set of permissions.
Am I correct in my understanding that all users in the same AWS account have access by default to all created SQS queues, as long as their IAM policy permits it? And is my assumption right that the same behavior applies to SNS topics?
Below is the policy I would like to implement. Besides this policy I have no other policies active for this SQS queue, but it is not honoring the source IP condition: I can still connect from everywhere when I use a correct AWS access key/secret combination. Only when I set the AWS principal to * - everyone - does the policy seem effective.
{
  "Version": "2012-10-17",
  "Id": "arn:aws:sqs:eu-west-1:4564645646464564:madcowtestqueue/SQSDefaultPolicy",
  "Statement": [
    {
      "Sid": "Sid1589365989662",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::4564645646464564:user/user2"
      },
      "Action": [
        "SQS:DeleteMessage",
        "SQS:SendMessage",
        "SQS:ReceiveMessage"
      ],
      "Resource": "arn:aws:sqs:eu-west-1:143631359317:madcowtestqueue",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "1.1.1.1"
        }
      }
    }
  ]
}
Reference:
Using identity-based policies with Amazon SQS - Amazon Simple Queue Service
Using identity-based policies with Amazon SNS - Amazon Simple Notification Service
Amazon SQS
Amazon SQS supports queue policies, which can be used in addition to IAM policies to grant access to a queue.
For example, a policy can be added that permits anonymous access to a queue, which is useful for external applications to send messages to the queue.
Interestingly, these policies can also be used to control access to the queue by IP address.
To test this, I did the following:
Created an Amazon SQS queue
Used an Amazon EC2 instance to send a message to the queue -- Successful
Added the following policy to the SQS queue:
{
  "Version": "2012-10-17",
  "Id": "Queue1_Policy_UUID",
  "Statement": [
    {
      "Sid": "Queue1_AnonymousAccess_AllActions_IPLimit_Deny",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "SQS:SendMessage",
      "Resource": "arn:aws:sqs:ap-southeast-2:xxx:queue",
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": "54.1.2.3/32"
        }
      }
    }
  ]
}
The IP address is that of my Amazon EC2 instance.
I then tried sending a message to the queue again from the EC2 instance -- Successful
I then ran the identical command from my own computer -- Not successful
Therefore, it would appear that the SQS policy can override the permissions granted via IAM.
(Be careful... I added a policy that Denied sqs:* on the queue, and I wasn't able to edit the policy or delete the queue! I had to use the root account to delete it.)
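If you'd rather apply such a queue policy from code than via the console, here's a minimal boto3 sketch (the queue URL is a placeholder; the policy is the deny document shown above):

import json
import boto3

sqs = boto3.client('sqs')

# The deny-outside-IP policy shown above, as a Python dict.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Queue1_AnonymousAccess_AllActions_IPLimit_Deny",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "SQS:SendMessage",
            "Resource": "arn:aws:sqs:ap-southeast-2:xxx:queue",
            "Condition": {
                "NotIpAddress": {"aws:SourceIp": "54.1.2.3/32"}
            }
        }
    ]
}

sqs.set_queue_attributes(
    QueueUrl='https://sqs.ap-southeast-2.amazonaws.com/xxx/queue',  # placeholder
    Attributes={'Policy': json.dumps(policy)}
)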
Amazon SNS
I managed to achieve the same result with Amazon SNS using this access policy:
{
  "Version": "2008-10-17",
  "Id": "__default_policy_ID",
  "Statement": [
    {
      "Sid": "__default_statement_ID",
      "Effect": "Deny",
      "Principal": {
        "AWS": "*"
      },
      "Action": "SNS:Publish",
      "Resource": "arn:aws:sns:ap-southeast-2:xxx:topic",
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": "54.1.2.3/32"
        }
      }
    }
  ]
}
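The SNS policy can be applied programmatically in the same way; a minimal sketch (the topic ARN is a placeholder; the policy is the document above):

import json
import boto3

sns = boto3.client('sns')

# The deny-outside-IP policy shown above, as a Python dict.
policy = {
    "Version": "2008-10-17",
    "Statement": [
        {
            "Sid": "__default_statement_ID",
            "Effect": "Deny",
            "Principal": {"AWS": "*"},
            "Action": "SNS:Publish",
            "Resource": "arn:aws:sns:ap-southeast-2:xxx:topic",
            "Condition": {"NotIpAddress": {"aws:SourceIp": "54.1.2.3/32"}}
        }
    ]
}

sns.set_topic_attributes(
    TopicArn='arn:aws:sns:ap-southeast-2:xxx:topic',  # placeholder
    AttributeName='Policy',
    AttributeValue=json.dumps(policy)
)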
I'm trying to lock down a user to a specific VPC in AWS, following How to Help Lock Down a User's Amazon EC2 Capabilities to a Single VPC | AWS Security Blog.
It is mentioned that we need to create an IAM role named VPCLockDown of type AWS Service,
and add the services for which the role needs access, like EC2, Lambda, etc.
I was trying to create this role programmatically using boto3.
I checked the create_role documentation for creating a role using boto3.
However, it doesn't mention anything about specifying the type of role or the services the role should have access to.
Is there any way to specify these items while creating the IAM role using boto3?
Edit1:
I tried creating a service-linked role as per Sudarshan Rampuria's answer, like:
response = iam.create_service_linked_role(
    AWSServiceName='ec2.amazonaws.com',
)
But getting the following error:
An error occurred (AccessDenied) when calling the
CreateServiceLinkedRole operation: Cannot find Service Linked Role
template for ec2.amazonaws.com
You can use the create_service_linked_role() function in boto3 to link a role to a service.
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/iam.html#IAM.Client.create_service_linked_role
Here is a policy that allows a specific IAM User to launch an instance (RunInstances), but only in a given VPC:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "EC2RunInstancesVPC",
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": "arn:aws:ec2:ap-southeast-2:111111111111:subnet/*",
      "Condition": {
        "StringEquals": {
          "ec2:vpc": "arn:aws:ec2:ap-southeast-2:111111111111:vpc/vpc-abcd1234" <--- Change this
        }
      }
    },
    {
      "Sid": "RemainingRunInstancePermissions",
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:ap-southeast-2:111111111111:instance/*",
        "arn:aws:ec2:ap-southeast-2:111111111111:volume/*",
        "arn:aws:ec2:ap-southeast-2::image/*",
        "arn:aws:ec2:ap-southeast-2::snapshot/*",
        "arn:aws:ec2:ap-southeast-2:111111111111:network-interface/*",
        "arn:aws:ec2:ap-southeast-2:111111111111:key-pair/*",
        "arn:aws:ec2:ap-southeast-2:111111111111:security-group/*"
      ]
    }
  ]
}
You might need to change the Region. (I tested it in the Sydney region.)
For anyone trying to do this for Lambda: you get the similar error mentioned by the question author under "Edit1". Lambda doesn't have a service-linked role. You can see from the AWS Lambda documentation that create-role is used for creating the Lambda execution role.
You can also see here that only Lambda@Edge has a service-linked role.
One just needs to use boto3 create_role with a policy document:
response = iam_client.create_role(
    RoleName="some-role-name",
    AssumeRolePolicyDocument='{"Version": "2012-10-17","Statement": [{ "Effect": "Allow", "Principal": {"Service": "lambda.amazonaws.com"}, "Action": "sts:AssumeRole"}]}',
    Description='Lambda role'
)
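The role created above can only be assumed by Lambda; it still needs permissions of its own. As a sketch, you can attach AWS's managed basic execution policy (which grants CloudWatch Logs write access) after creation:

import boto3

iam_client = boto3.client('iam')

# AWSLambdaBasicExecutionRole is the AWS-managed policy for Lambda logging.
iam_client.attach_role_policy(
    RoleName="some-role-name",
    PolicyArn="arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
)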
I am trying to create an IAM policy to be applied to an SQS queue. The policy should restrict access to the queue to a single Cognito federated identity.
I found this reference from Amazon on how to achieve this, but I am having trouble applying the policy to the SQS queue.
Here is the policy I am trying to apply.
{
  "Version": "2012-10-17",
  "Id": "arn:aws:sqs:us-west-2:604080725100:Test2.fifo/SQSDefaultPolicy",
  "Statement": [
    {
      "Sid": "Sid1528133390193",
      "Effect": "Allow",
      "Principal": {
        "Federated": "cognito-identity.amazonaws.com"
      },
      "Action": "SQS:*",
      "Condition": {
        "StringEquals": {
          "cognito-identity.amazonaws.com:aud": "us-east-1:ff1b33f4-7f66-47a5-b7ff-9696b0e1fb52",
          "cognito-identity.amazonaws.com:sub": ["us-east-1:4a6d7e43-4522-41fb-9248-b5b79933b8e9"]
        }
      }
    }
  ]
}
The online UI for editing the policy shows this in the review screen:
Allow None
All SQS Actions (SQS:*)
StringEquals
cognito-identity.amazonaws.com:aud: "us-east-1:ff1b33f4-7f66-47a5-b7ff-9696b0e1fb52"
cognito-identity.amazonaws.com:sub: "us-east-1:4a6d7e43-4522-41fb-9248-b5b79933b8e9"
Once I press Apply, the following error is given:
Failed to save changes to the policy document. Reason: com.amazonaws.services.sqs.model.AmazonSQSException: We encountered an internal error. Please try again.
I am not sure what is wrong with the policy. I am looking for any help fixing the policy, or for a different policy that achieves limiting the SQS queue to a single Cognito identity.
Unfortunately, SQS only supports a subset of the condition keys, and the Cognito user ID is not one of them. I have read an article that solved this problem by creating random queue names that are readable by all users but practically unguessable.
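A rough sketch of that workaround, assuming queues are created server-side and each queue URL is handed out only to the owning identity (the naming scheme is hypothetical):

import uuid
import boto3

sqs = boto3.client('sqs')

# Embed a random token so the queue name cannot be guessed; share the URL
# only with the single Cognito identity that should use the queue.
queue_name = 'user-' + uuid.uuid4().hex
response = sqs.create_queue(QueueName=queue_name)
print(response['QueueUrl'])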