How to make an IAM Role for a Django Application?

I want to make an IAM role for my Django app. How can I do this from both the AWS side and the Django side? Also, I've heard this is best practice, but I don't really understand why it's important. Could someone explain? Thanks!
Update for Marcin:
import boto3

session = boto3.Session(
    aws_access_key_id=my_key,
    aws_secret_access_key=my_secret
)
s3 = session.resource('s3')
Update 2 for Marcin:
import boto3

client = boto3.client(
    'ses',
    region_name='us-west-2',
    aws_access_key_id=my_key,
    aws_secret_access_key=my_secret
)
client.send_raw_email(RawMessage=raw_message)

The default instance role that EB uses is aws-elasticbeanstalk-ec2-role. One way to customize it is by adding inline policies to it in the IAM console.
Since you require S3, SES and SNS, you can grant permissions to them in the inline policy. It's not clear which actions you require (read-only for S3? only publishing messages for SNS?) or whether you have specific resources in mind (e.g. only one given bucket or a single SNS topic), so you can start by adding full access to the services. But please note that granting full access is bad practice and does not follow the grant-least-privilege rule.
Nevertheless, an example of an inline policy with full access to S3, SES and SNS is:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "sns:*",
                "ses:*",
                "s3:*"
            ],
            "Resource": "*"
        }
    ]
}
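If you'd rather script this than click through the console, the same inline policy can be attached with boto3 (the inline policy name here is a placeholder; the role name is the EB default mentioned above):

import json
import boto3

iam = boto3.client('iam')

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": ["sns:*", "ses:*", "s3:*"],
            "Resource": "*"
        }
    ]
}

# Attach the policy above inline on the EB instance role.
iam.put_role_policy(
    RoleName='aws-elasticbeanstalk-ec2-role',
    PolicyName='django-app-services',  # placeholder name
    PolicyDocument=json.dumps(policy)
)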
Once the instance role has these permissions, you don't need to pass explicit credentials at all; boto3 picks them up from the instance automatically. The following should be enough:
s3 = boto3.resource('s3')
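The same applies to the SES client from your second snippet; a sketch of what it looks like with the instance role in place (same region as your original code):

import boto3

# No keys needed: boto3 reads temporary credentials from the instance role.
client = boto3.client('ses', region_name='us-west-2')
client.send_raw_email(RawMessage=raw_message)  # raw_message as in your snippet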

Related

AWS - Give readonly permissions for all services

Is there a way in AWS to give read-only permissions to all services via a central policy? Currently, I am forced to do this per service, as for IAM below:
{
    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Allow",
        "Action": [
            "iam:Get*",
            "iam:List*",
            "iam:Generate*"
        ],
        "Resource": "*"
    }
}
Having to do this for each and every service is error-prone and tedious. How can we define a policy that gives read-only access to all services?
Thanks
You can use the AWS managed policy named ReadOnlyAccess:
The ReadOnlyAccess AWS managed policy provides read-only access to all AWS services and resources. When a service launches a new feature, AWS adds read-only permissions for new operations and resources.
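For example, a minimal boto3 sketch that attaches it to an existing user (the user name is a placeholder):

import boto3

iam = boto3.client('iam')

# ReadOnlyAccess is AWS managed, so its ARN lives in the shared aws namespace.
iam.attach_user_policy(
    UserName='some-user',  # placeholder
    PolicyArn='arn:aws:iam::aws:policy/ReadOnlyAccess'
)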

How to get AWS Glue crawler to assume a role in another AWS account to get data from that account's S3 bucket?

There are some CSV data files I need to get from S3 buckets belonging to a series of AWS accounts owned by a third party. The owner of the other accounts has created a role in each account which grants me access to those files, and I can use the AWS web console (logged in to my own account) to switch to each role and get the files. One at a time, I switch to the role for each account and get its files, then move on to the next account, and so on.
I'd like to automate this process.
It looks like AWS Glue can do this, but I'm having trouble with the permissions.
What I need it to do is create permissions so that an AWS Glue crawler can switch to the right role (belonging to each of the other AWS accounts) and get the data files from the S3 bucket of those accounts.
Is this possible and if so how can I set it up? (e.g. what IAM roles/permissions are needed?) I'd prefer to limit changes to my own account if possible rather than having to ask the other account owner to make changes on their side.
If it's not possible with Glue, is there some other easy way to do it with a different AWS service?
Thanks!
(I've had a series of tries but I keep getting it wrong - my attempts are so far from being right that there's no point in me posting the details here).
Yes, you can automate your scenario with Glue by following these steps:
Create an IAM role in your AWS account. This role's name must start with AWSGlueServiceRole but you can append whatever you want. Add a trust relationship for Glue, such as:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "glue.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
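If you prefer to script the role creation, a sketch with boto3 (the role name matches the one referenced by the bucket policies below):

import json
import boto3

iam = boto3.client('iam')

trust = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "glue.amazonaws.com"},
            "Action": "sts:AssumeRole"
        }
    ]
}

# Create the role with the Glue trust relationship above.
iam.create_role(
    RoleName='AWSGlueServiceRoleDefault',
    AssumeRolePolicyDocument=json.dumps(trust)
)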
Attach two IAM policies to your IAM role: the AWS managed policy named AWSGlueServiceRole, and a custom policy that provides the access needed to all of the target cross-account S3 buckets, such as:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "BucketAccess",
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
            "Resource": [
                "arn:aws:s3:::examplebucket1",
                "arn:aws:s3:::examplebucket2",
                "arn:aws:s3:::examplebucket3"
            ]
        },
        {
            "Sid": "ObjectAccess",
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": [
                "arn:aws:s3:::examplebucket1/*",
                "arn:aws:s3:::examplebucket2/*",
                "arn:aws:s3:::examplebucket3/*"
            ]
        }
    ]
}
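If you are scripting these attachments, a sketch with boto3 (the inline policy name is a placeholder; the custom policy is abbreviated here to one bucket):

import json
import boto3

iam = boto3.client('iam')

# The AWS managed service policy for Glue.
iam.attach_role_policy(
    RoleName='AWSGlueServiceRoleDefault',
    PolicyArn='arn:aws:iam::aws:policy/service-role/AWSGlueServiceRole'
)

# The custom cross-account S3 policy from above, abbreviated to one bucket.
s3_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "BucketAccess",
            "Effect": "Allow",
            "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
            "Resource": ["arn:aws:s3:::examplebucket1"]
        },
        {
            "Sid": "ObjectAccess",
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": ["arn:aws:s3:::examplebucket1/*"]
        }
    ]
}
iam.put_role_policy(
    RoleName='AWSGlueServiceRoleDefault',
    PolicyName='cross-account-s3-read',  # placeholder name
    PolicyDocument=json.dumps(s3_policy)
)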
Add a bucket policy to each target bucket that allows your IAM role the same S3 access that you granted it in your account, such as:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "BucketAccess",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::your_account_number:role/AWSGlueServiceRoleDefault"
            },
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
            "Resource": "arn:aws:s3:::examplebucket1"
        },
        {
            "Sid": "ObjectAccess",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::your_account_number:role/AWSGlueServiceRoleDefault"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::examplebucket1/*"
        }
    ]
}
Finally, create Glue crawlers and jobs in your account (in the same regions as the target cross account S3 buckets) that will ETL the data from the cross account S3 buckets to your account.
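For that last step, a minimal boto3 sketch of the crawler creation (crawler and database names are placeholders):

import boto3

glue = boto3.client('glue')

# The crawler runs as the role created above and scans one of the
# cross-account buckets permitted by the bucket policies.
glue.create_crawler(
    Name='cross-account-csv-crawler',   # placeholder
    Role='AWSGlueServiceRoleDefault',
    DatabaseName='cross_account_data',  # placeholder
    Targets={'S3Targets': [{'Path': 's3://examplebucket1/'}]}
)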
Using the AWS CLI, you can create named profiles for each of the roles you want to switch to, then refer to them from the CLI. You can then chain these calls, referencing the named profile for each role, and include them in a script to automate the process.
From Switching to an IAM Role (AWS Command Line Interface)
A role specifies a set of permissions that you can use to access AWS resources that you need. In that sense, it is similar to a user in AWS Identity and Access Management (IAM). When you sign in as a user, you get a specific set of permissions. However, you don't sign in to a role, but once signed in as a user you can switch to a role. This temporarily sets aside your original user permissions and instead gives you the permissions assigned to the role. The role can be in your own account or any other AWS account. For more information about roles, their benefits, and how to create and configure them, see IAM Roles, and Creating IAM Roles.
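For example, assuming a [default] profile with your own credentials already exists, one profile per third-party role in ~/.aws/config looks like this (account number and role name are placeholders):

[profile data-account-1]
role_arn = arn:aws:iam::111111111111:role/ThirdPartyDataRole
source_profile = default

Then aws s3 cp s3://examplebucket1/data.csv . --profile data-account-1 makes the CLI assume that role for the call, and a script can loop over one such profile per account.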
You can achieve this with AWS Lambda and CloudWatch Rules.
You can create a lambda function that has a role attached to it; let's call this role Role A. Depending on the number of accounts, you can either create one function per account and one rule in CloudWatch to trigger all the functions, or create one function for all the accounts (be cautious of the limitations of AWS Lambda).
Creating Role A
Create an IAM role (Role A) with the following policy, allowing it to assume the roles given to you by the other accounts containing the data:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1509358389000",
            "Effect": "Allow",
            "Action": [
                "sts:AssumeRole"
            ],
            "Resource": [
                "arn:aws:iam::ACCOUNT_1:role/DATA_ROLE_1",
                "arn:aws:iam::ACCOUNT_2:role/DATA_ROLE_2"
            ]
        }
    ]
}
The Resource list should contain all the IAM role ARNs from the accounts containing the data (the two entries above are placeholders); if you have one function for each account, you can opt to have separate roles with one ARN each.
Also, you will need to make sure that the roles in the other accounts trust your account in their trust relationship policy documents, otherwise the AssumeRole call will be denied.
Attach Role A to the lambda functions you will be running. You can use Serverless for development.
Now your lambda function has Role A attached to it, and Role A has sts:AssumeRole permissions over the roles created in the other accounts.
Assuming that you have created one function per account: in your lambda's code you will have to first use STS to switch to the role of the other account, obtain temporary credentials, and pass these to the S3 client before fetching the required data.
If you have created one function for all the accounts, you can keep the role ARNs in an array and iterate over it; again, when doing this, be aware of the limits of AWS Lambda.
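A minimal sketch of that STS step with boto3 (role ARN, session name, bucket and key are placeholders passed in by the caller):

import boto3

def fetch_from_account(role_arn, bucket, key):
    # Exchange Role A's credentials for temporary credentials in the data account.
    sts = boto3.client('sts')
    resp = sts.assume_role(RoleArn=role_arn, RoleSessionName='data-fetch')
    creds = resp['Credentials']

    # Build an S3 client that acts as the assumed role in the other account.
    s3 = boto3.client(
        's3',
        aws_access_key_id=creds['AccessKeyId'],
        aws_secret_access_key=creds['SecretAccessKey'],
        aws_session_token=creds['SessionToken'],
    )
    return s3.get_object(Bucket=bucket, Key=key)['Body'].read()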

How to lockdown S3 bucket to specific users and IAM role(s)

In our environment, all IAM user accounts are assigned a customer-managed policy that grants read-only access to a lot of AWS services. Here's what I want to do:
Migrate a SQL Server 2012 Express database from on-prem to an RDS instance
Limit access to the S3 bucket containing the database files
Here are the requirements according to AWS:
An S3 bucket to store the .bak database file
A role with access to the bucket
SQLSERVER_BACKUP_RESTORE option attached to RDS instance
So far, I've done the following:
Created a bucket under the name "test-bucket" (and uploaded the .bak file here)
Created a role under the name "rds-s3-role"
Created a policy under the name "rds-s3-policy" with these settings:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
            "Resource": "arn:aws:s3:::test-bucket/"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObjectMetaData",
                "s3:GetObject",
                "s3:PutObject",
                "s3:ListMultipartUploadParts",
                "s3:AbortMultipartUpload"
            ],
            "Resource": "arn:aws:s3:::test-bucket/*"
        }
    ]
}
Assigned the policy to the role
Gave the AssumeRole permissions to the RDS service to assume the role created above
Created a new option group in RDS with the SQLSERVER_BACKUP_RESTORE option and linked it to my RDS instance
With no restrictions on my S3 bucket, I can perform the restore just fine; however, I can't find a solid way of restricting access to the bucket without hindering the RDS service from doing the restore.
In terms of my attempts to restrict access to the S3 bucket, I found a few posts online recommending using an explicit Deny statement to deny access to all types of principals and grant access based on some conditional statements.
Here's the contents of my bucket policy:
{
    "Version": "2012-10-17",
    "Id": "Policy1486769843194",
    "Statement": [
        {
            "Sid": "Stmt1486769841856",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::test-bucket",
                "arn:aws:s3:::test-bucket/*"
            ],
            "Condition": {
                "StringNotLike": {
                    "aws:userid": [
                        "<root_id>",
                        "<user1_userid>",
                        "<user2_userid>",
                        "<user3_userid>",
                        "<role_roleid>:*"
                    ]
                }
            }
        }
    ]
}
I can confirm the bucket policy does restrict access to only the IAM users that I specified, but I am not sure how it treats IAM roles. I used the :* syntax above per a document I found on the aws forums where the author stated the ":*" is a catch-all for every principal that assumes the specified role.
The only thing I'm having a problem with is, with this bucket policy in place, when I attempt to do the database restore, I get an access denied error. Has anyone ever done something like this? I've been going at it all day and haven't been able to find a working solution.
The following, admittedly, is guesswork... but reading between the lines of the somewhat difficult to navigate IAM documentation and elsewhere, and taking into account the way I originally interpreted it (incorrectly), I suspect that you are using the role's name rather than the role's ID in the policy.
Role IDs look similar to AWSAccessKeyIds except that they begin with AROA....
For the given role, find RoleId in the output from this:
$ aws iam get-role --role-name ROLE-NAME
https://aws.amazon.com/blogs/security/how-to-restrict-amazon-s3-bucket-access-to-a-specific-iam-role/
Use caution when creating a broad Deny policy. You can end up denying s3:PutBucketPolicy to yourself, which leaves you in a situation where your policy prevents you from changing the policy... in which case, your only recourse is presumably to persuade AWS support to remove the bucket policy. A safer configuration would be to use this to deny only the object-level permissions.
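For illustration, a sketch of such a safer, object-level-only Deny applied with boto3 (the bucket name matches the question; the user and role IDs are placeholders, and note it is the role's ID, not its name, that goes in aws:userid):

import json
import boto3

# Deny object-level actions to everyone except the listed principals;
# bucket-level actions such as s3:PutBucketPolicy stay unaffected.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
            "Resource": "arn:aws:s3:::test-bucket/*",
            "Condition": {
                "StringNotLike": {
                    "aws:userid": [
                        "AIDAEXAMPLEUSERID",   # an IAM user's unique ID (placeholder)
                        "AROAEXAMPLEROLEID:*"  # the role's ID; :* catches all its sessions
                    ]
                }
            }
        }
    ]
}

s3 = boto3.client('s3')
s3.put_bucket_policy(Bucket='test-bucket', Policy=json.dumps(policy))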

AWS security group rules deployment (lambda->SQS)

On AWS we've implemented functionality where an AWS Lambda function pushes messages to an AWS queue.
However, during this implementation I had to manually grant permissions to the Lambda to add messages to the particular queue, and this approach with manual clicks is not so good for prod deployment.
Any suggestions on how to automate the process of adding permissions between AWS services (mainly Lambda and SQS) and create a "good" deployment package for a prod env?
Each Lambda function has an attached role, whose permissions you can specify in the IAM dashboard. If you give the Lambda function's role permission to push to an SQS queue, you're good to go. For example, attach a policy like this to the role (adapted from http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/SQSExamples.html; note that a policy attached to a role is an identity policy and must not contain a Principal element):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Queue1_SendMessage",
            "Effect": "Allow",
            "Action": "sqs:SendMessage",
            "Resource": "arn:aws:sqs:us-east-1:444455556666:queue1"
        }
    ]
}
You can use wildcards to give permission to multiple queues, like:
"Resource": "arn:aws:sqs:us-east-1:444455556666:production-*"
to give SendMessage permission to all queues whose names start with production-.
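To take the manual clicks out entirely, the same grant can be scripted as part of your deployment; a sketch with boto3 (role and policy names are placeholders):

import json
import boto3

iam = boto3.client('iam')

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sqs:SendMessage",
            "Resource": "arn:aws:sqs:us-east-1:444455556666:production-*"
        }
    ]
}

# Attach the SendMessage grant to the Lambda's execution role as an inline policy.
iam.put_role_policy(
    RoleName='my-lambda-role',       # placeholder
    PolicyName='sqs-send-message',   # placeholder
    PolicyDocument=json.dumps(policy)
)

Tools like CloudFormation or the Serverless Framework can express the same statement declaratively in the deployment package instead.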

Auth0 and S3 Buckets

I'm using Auth0 and the AWS SDK to access some buckets on S3. Is there any way to restrict access to S3 buckets without the use of bucket policies? Maybe using metadata provided by Auth0.
Thanks for all!
Maybe you're looking for something like this: https://github.com/auth0/auth0-s3-sample, which restricts access for users to their own resources.
IAM Policy for user buckets (restricted to these users only)
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowEverythingOnSpecificUserPath",
            "Effect": "Allow",
            "Action": ["*"],
            "Resource": [
                "arn:aws:s3:::YOUR_BUCKET/dropboxclone/${saml:sub}",
                "arn:aws:s3:::YOUR_BUCKET/dropboxclone/${saml:sub}/*"
            ]
        },
        {
            "Sid": "AllowListBucketIfSpecificPrefixIsIncludedInRequest",
            "Action": ["s3:ListBucket"],
            "Effect": "Allow",
            "Resource": ["arn:aws:s3:::YOUR_BUCKET"],
            "Condition": {
                "StringEquals": { "s3:prefix": ["dropboxclone/${saml:sub}"] }
            }
        }
    ]
}
But if the users have shareable group folders it may become more tricky; I'm looking into it myself at the moment.
Check out this PDF: https://www.pingidentity.com/content/dam/pic/downloads/resources/ebooks/en/amazon-web-services-ebook.pdf?id=b6322a80-f285-11e3-ac10-0800200c9a66 (on page 11, LEVERAGE OPENID CONNECT FOR AWS APIS, the use case is similar).
So, an option could be to do the following:
[USER] -> [Auth0] -> [AWS (Federation/SAML)] -> [exchange temporary AWS credentials] -> [use temp. credentials to access S3]
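A sketch of the credential-exchange step in that chain with boto3, assuming saml_assertion holds the base64-encoded SAML response obtained via Auth0 (both ARNs are placeholders):

import boto3

# Exchange the SAML assertion from Auth0 for temporary AWS credentials.
sts = boto3.client('sts')
resp = sts.assume_role_with_saml(
    RoleArn='arn:aws:iam::111122223333:role/auth0-s3-role',        # placeholder
    PrincipalArn='arn:aws:iam::111122223333:saml-provider/auth0',  # placeholder
    SAMLAssertion=saml_assertion,  # assumed to come from the Auth0 login flow
)
creds = resp['Credentials']

# Use the temporary credentials to call S3 as the federated user.
s3 = boto3.client(
    's3',
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken'],
)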
Hope it helps, and even though it's quite a long time since you asked this question, other users may benefit. If you've found a better solution, please share it.
To restrict access to S3 buckets you must use Amazon IAM.
When using Auth0, you're basically exchanging your Auth0 token for an Amazon token. Then, with that Amazon token, you're calling S3. That means that in order to restrict access to certain parts of S3, you're going to have to change the permissions carried by Amazon's token, which means you'll need to play with IAM.
Makes sense?
Cheers!