When building Lambdas, for example using CloudFormation, it is easy to start out allowing a little too much, such as * on resources, and to end up hardening/tightening your security later. Is it somehow possible to view which permissions are actually in use, and in that way figure out the minimal set of permissions that is needed?
This is a popular request. One option is to leverage Netflix's Aardvark and RepoKid. Another is to ensure that CloudTrail Logs are enabled and then find a way to query them (for example using Athena).
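If you go the CloudTrail + Athena route, a query like the following can surface which API calls a given role actually makes. This is only a hedged sketch: the cloudtrail_logs table, database, result bucket, and role name are placeholders (the table has to be created first, per the AWS docs).

```python
import time
import boto3

athena = boto3.client("athena")

# Count API calls made by a given role, grouped by service and action.
# Table, database, bucket, and role names below are placeholders.
QUERY = """
SELECT eventsource, eventname, count(*) AS calls
FROM cloudtrail_logs
WHERE useridentity.arn LIKE '%my-lambda-role%'
GROUP BY eventsource, eventname
ORDER BY calls DESC
"""

query_id = athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": "default"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)["QueryExecutionId"]

# Poll until the query finishes, then print the actions that were used.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

for row in athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"][1:]:
    print([col.get("VarCharValue") for col in row["Data"]])
```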
Have you tried:
AWS Policy Simulator
I have not seen anything exactly like what you described, but I believe this tool would ultimately give you what you need, and along the way it will make you more and more familiar with IAM policies.
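You can also drive the simulator from code. A minimal boto3 sketch (the role ARN and action names are hypothetical):

```python
import boto3

iam = boto3.client("iam")

# Ask the simulator whether the policies attached to a principal would
# allow each action. The ARN and action names below are placeholders.
response = iam.simulate_principal_policy(
    PolicySourceArn="arn:aws:iam::111111111111:role/my-lambda-role",
    ActionNames=["s3:GetObject", "dynamodb:PutItem", "sqs:SendMessage"],
)
for result in response["EvaluationResults"]:
    print(result["EvalActionName"], "->", result["EvalDecision"])
```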
I need to create a new User Pool with exactly the same settings as an existing one, and I am wondering what the best way to do it is, or whether there is a standard way that I am not aware of (ideally faster than using the AWS console).
My guess is to use the AWS CLI:
Get user pool details: describe-user-pool
Then create a new one with the same details: create-user-pool
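Something like this rough boto3 sketch of the same idea (the pool ID and name are placeholders; describe returns read-only fields that create rejects, so expect some trial and error):

```python
import boto3

cognito = boto3.client("cognito-idp")

source = cognito.describe_user_pool(UserPoolId="us-east-1_XXXXXXXXX")["UserPool"]

# describe-user-pool returns read-only fields that create-user-pool
# does not accept, so strip them before re-submitting the rest.
for key in ("Id", "Arn", "Name", "Status", "CreationDate", "LastModifiedDate",
            "EstimatedNumberOfUsers", "SchemaAttributes", "Domain", "CustomDomain",
            "SmsConfigurationFailure", "EmailConfigurationFailure"):
    source.pop(key, None)

new_pool = cognito.create_user_pool(PoolName="copy-of-my-pool", **source)
print(new_pool["UserPool"]["Id"])
```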
Any thoughts?
You should first import the resource into CloudFormation, then copy the template and deploy it as a new stack. This will give you better control over the desired configuration of the resources. Ensure you set the DeletionPolicy to Retain; losing a user pool is no fun.
It seems there is still no support for importing Cognito user pools into CloudFormation. My recommendation remains that you should maintain your infrastructure as code, particularly if you wish to replicate it across environments. Accomplishing that here is a little more convoluted: you will have to iterate on your CFN template until the configuration matches. Or, if you are up for it, Terraform has tooling to help you import resources.
So! To answer my own question after some time gaining related experience:
The best way to go is what #AndrewGillis said:
Keep your infrastructure as code.
My preference is Terraform.
I'm trying to solve a problem with AWS IAM policies.
I need to allow certain users to only delete/modify resources that are tagged with their particular username (this part I've solved), while also being able to create any new AWS resource.
The part I haven't solved is the need to be able to create resources without the ability to modify any existing resources (unless they have the right tag).
Is there an existing AWS policy example that allows a user to create any resource (without granting delete/modify)? Is there a way to allow this without having to list every single AWS offering and continuously update the list for new offerings?
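For reference, the tag-scoped part I have working looks roughly like this (a sketch using EC2 as an example; it assumes resources get an owner tag equal to the IAM username at creation):

```python
import json

# Sketch only: ec2:ResourceTag is EC2's tag condition key; many other
# services support the global aws:ResourceTag key instead.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "ModifyOnlyOwnResources",
        "Effect": "Allow",
        "Action": ["ec2:TerminateInstances", "ec2:StopInstances",
                   "ec2:ModifyInstanceAttribute"],
        "Resource": "*",
        "Condition": {
            "StringEquals": {"ec2:ResourceTag/owner": "${aws:username}"}
        }
    }]
}
print(json.dumps(policy, indent=2))
```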
AdministratorAccess will grant all rights to create all services.
See https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_job-functions.html#jf_administrator
I managed to solve this problem with a rather ugly solution, but as far as I can tell it's the only solution.
I found a list of all aws actions: https://github.com/rvedotrc/aws-iam-reference
I then filtered out potentially troubling actions, such as anything with Delete or Terminate in the action name. I used vim/grep for this.
After that, I broke the list up into multiple aws_iam_group_policy statements. Each policy was attached to a corresponding group, and the target users were then added to each of those groups.
Unfortunately, this is pretty ugly and required 5 different groups and policies, but it's the solution I arrived at.
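If anyone wants to reproduce it, the vim/grep filtering step can also be scripted. A rough Python equivalent (the input file name is hypothetical and the size limit is approximate):

```python
# Filter an action list (one action per line, e.g. from
# rvedotrc/aws-iam-reference) and chunk it for multiple policies.
DESTRUCTIVE = ("Delete", "Terminate")

with open("all_actions.txt") as f:
    actions = [line.strip() for line in f if line.strip()]

safe = [a for a in actions if not any(word in a for word in DESTRUCTIVE)]

# Inline policy documents have a size cap, so split the list into chunks
# that each fit comfortably inside one policy.
chunks, current, size = [], [], 0
for action in safe:
    if size + len(action) > 5000:  # stay well under the limit
        chunks.append(current)
        current, size = [], 0
    current.append(action)
    size += len(action) + 4  # account for quotes, comma, whitespace
if current:
    chunks.append(current)

print(len(chunks), "policies needed")
```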
Is there a quick way to find out which regions have any resources in my account? I'm specifically using the AWS .NET SDK, but the answer likely applies to other AWS SDKs and the CLI, since they all seem to be just wrappers around the REST API. I could obviously run all the List* methods across all regions, but I'm thinking there must be a more optimal way to decide whether to query a given region at all. Maybe something in billing, but it would also need to be relatively up to date, within the last 5 minutes or so. Any ideas?
There is no single way to list all resources in an AWS account or in multiple regions.
Some people say that Resource Groups are a good way to list resources, but I don't think they include "everything" in an account.
AWS Config does an excellent job of keeping track of resources and their history, but it is also limited in the types of resources it tracks.
My favourite way to list resources is to use nccgroup/aws-inventory: Discover resources created in an AWS account. It's a simple HTML/JavaScript file that makes all the 'List' calls for you and shows them in a nicely formatted list.
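For a quick programmatic sweep, the Resource Groups Tagging API can at least tell you which regions have something in them. A hedged boto3 sketch; as noted above, it only sees resource types that support tagging, so it is not exhaustive:

```python
import boto3

# List regions, then ask the tagging API in each one whether it knows
# about any resources. Untagged/unsupported resource types are missed.
regions = [r["RegionName"] for r in boto3.client("ec2").describe_regions()["Regions"]]

for region in regions:
    tagging = boto3.client("resourcegroupstaggingapi", region_name=region)
    page = tagging.get_resources(ResourcesPerPage=50)
    if page["ResourceTagMappingList"]:
        print(region, "has at least", len(page["ResourceTagMappingList"]), "resources")
```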
There are similar questions around, but none seem to quite answer me directly (or I'm too new to AWS to connect the dots myself). Apologies if this was easily searchable; I've been trying for many days now.
I want to create a policy that I can assign to IAM users for my Glacier setup that will allow any IAM user the right to create a vault, and then grant them most rights for the vaults that they've created (basically all but delete).
The use case/scenario is this: I have multiple Synology NASes spread across multiple sites. I presently have them all backing up to the Glacier account, each using its own IAM credentials. So far so good.
The problem arises when they need to do a restore (or even just list vaults): they see all vaults in the account. I do not want them to see other NASes' vaults/backups, as it can be confusing and is irrelevant to that site.
So far I'm simply doing all Glacier ops myself but this will not scale for us. (We intend to add about 25 more NASes/sites, presently running about 8-10)
My assumption is that I should be able to do this somehow with a condition statement and some variant of vaults/${userid}, but I'm not quite finding/getting it.
I can't affect anything at vault creation (like adding a tag), because it's the Synology Glacier app creating the vault, so there is no way to modify that.
I've seen some solutions, for EC2 for example, that use post-hoc tagging. I'd prefer not to go that route if I can avoid it, as it involves other services we don't use and I know little to nothing about (CloudStream(?), CloudWatch(?) and Lambda, I think).
I've also thought of multiple linked accounts; if that's the only way, then I will do it, but since there is no ability to move vaults to a new account (meaning I'd have to start over for these 8), it's a less attractive option.
It seems like a policy for this should be easy enough. I'm hoping it is, and that it's just a few clicks beyond my current policy-writing skills.
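For concreteness, this is the shape of what I've been trying (an untested sketch; it would only work if each NAS's vault names start with its IAM username, and the account ID is a placeholder):

```python
import json

# Untested sketch: scope most vault operations to vaults whose name starts
# with the calling IAM user's name. Note that glacier:ListVaults is
# account-level and cannot be filtered down to individual vaults this way.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "UseOnlyOwnVaults",
        "Effect": "Allow",
        "Action": [
            "glacier:CreateVault",
            "glacier:DescribeVault",
            "glacier:UploadArchive",
            "glacier:InitiateMultipartUpload",
            "glacier:UploadMultipartPart",
            "glacier:CompleteMultipartUpload",
            "glacier:InitiateJob",
            "glacier:DescribeJob",
            "glacier:GetJobOutput",
            "glacier:ListJobs"
        ],
        "Resource": "arn:aws:glacier:*:111111111111:vaults/${aws:username}*"
    }]
}
print(json.dumps(policy, indent=2))
```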
I'm working on a client-side SDK for my product (based on AWS). The workflow is as follows:
The user of the SDK somehow uploads data to some S3 bucket.
The user somehow saves a command on some queue in SQS.
One of the workers on EC2 polls the queue, executes the operation, and sends a notification via SNS. This point seems clear.
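For concreteness, the worker side I have in mind is roughly this (a boto3 sketch; the queue URL and topic ARN are placeholders):

```python
import boto3

sqs = boto3.client("sqs")
sns = boto3.client("sns")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111111111111/commands"  # placeholder
TOPIC_ARN = "arn:aws:sns:us-east-1:111111111111:job-done"                # placeholder

while True:
    # Long-poll the queue for one command at a time.
    messages = sqs.receive_message(
        QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=20
    ).get("Messages", [])
    for msg in messages:
        # ... execute the operation described by msg["Body"] here ...
        sns.publish(TopicArn=TOPIC_ARN, Message="done: " + msg["MessageId"])
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```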
As you might have noticed, there are quite a few unclear points about access management here. Is there any common practice for providing access to AWS services (S3 and SQS in this case) to third-party users of such an SDK?
Options which I see at the moment:
We create an IAM user for users of the SDK, with access to certain S3 resources and write permission for SQS.
We create an additional server/layer between AWS and the SDK, which writes messages to SQS on behalf of users and also provides one-time, short-lived links for the SDK to write data directly to S3.
The first one seems OK; however, I'm worried that I'm missing some obvious issues here. The second one seems to have a problem with scalability: if this layer goes down, the whole system won't work.
P.S.
I tried my best to explain the situation; however, I'm afraid the question might still lack some context. If you want more clarification, don't hesitate to write a comment.
I recommend you look closely at Temporary Security Credentials in order to limit customer access to only what they need, when they need it.
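A sketch of what that can look like: your backend assumes a role and attaches a session policy scoping the credentials to a single customer's S3 prefix before handing them to the SDK. The role, bucket, and prefix names below are hypothetical.

```python
import json
import boto3

sts = boto3.client("sts")

# Session policy: intersected with the role's permissions, so the vended
# credentials can only write under this one customer's prefix.
session_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::my-upload-bucket/customer-42/*"
    }]
}

creds = sts.assume_role(
    RoleArn="arn:aws:iam::111111111111:role/sdk-uploader",  # placeholder
    RoleSessionName="customer-42",
    Policy=json.dumps(session_policy),
    DurationSeconds=900,  # short-lived: 15 minutes
)["Credentials"]
# Hand AccessKeyId / SecretAccessKey / SessionToken to the SDK client.
```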
Keep in mind with any solution to this kind of problem, it depends on your scale, your customers, and what you are ok exposing to your customers.
With your first option, letting the customer directly use IAM or temporary credentials exposes the fact that AWS is under the hood (since they can easily see requests leaving their system). It also opens the potential for them to make their own AWS requests using those credentials, beyond what your code can validate and control.
Your second option is better, since it addresses this: by making your server the only point of contact for AWS, you can perform input validation etc. before sending customer-provided data to AWS. It also lets you replace the implementation easily without affecting customers. On availability/scalability concerns, that's what EC2 (and similar services) are for.
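For the "short-lived link" part of option two, S3 presigned URLs do exactly that, and the middle layer never has to proxy the actual bytes. A sketch (the bucket and key are hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# The server generates a time-limited upload URL for one specific object;
# the client then does a plain HTTP PUT to it with no AWS credentials.
url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "my-upload-bucket", "Key": "customer-42/data.bin"},
    ExpiresIn=300,  # five minutes
)
print(url)
```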
Again, all of this depends on your scale and your customers. For a toy application with a very small set of customers, simpler may be better for the purposes of getting something working sooner (rather than building and paying for a whole lot of infrastructure for something that may never be used).