Versioning AWS policies

Currently I log into IAM and edit policies by hand for my S3 bucket. When I change something in the editor, I have no idea what the policy was before unless I cancel out of the editor, then go back and view it, so there's no way to tell exactly what I've changed. That makes editing painful, especially since I sometimes change something, test the change, and then have no trivial way to roll back to where I started.
Another problem created by the lack of version control is that there's no log of why or when a particular permission was modified. For example, I would really like to know that the reason we need the ListBucket permission on our bucket is that it was required to get file uploads to work. You know, the kind of thing you might put in a git commit message.
Now that you understand and care deeply about my motivations, I would like to know how best to get my policies into git. To the extent possible, I'd like the only way to change the permissions to be through code that is written by me, with the presumption being that any time you make a change, you commit to the repository. This is not perfect security of course, but it does provide an accounting of what changed when, and gives us a single place where we make changes.
Here's my proposal:
Create an IAM user called policy_editor
Revoke policy editing privileges from all users
Give policy_editor policy editing privileges
Do not give policy_editor a password (so the only way to change policies is with API credentials)
My questions are:
Is this possible? (Ideally even the root user wouldn't have permission to edit policies, so that wouldn't happen by accident)
Is this a good idea?
Is there a better solution?
Is there a tool that does this already?
Thanks!

Is this possible?
Yes, the API is flexible enough to do that. Writing automation around IAM pays off in spades.
By "root user", do you mean the AWS access keys directly on the account? Step 1 is to delete those creds (directly on the account) and only use IAM users for everything.
http://docs.aws.amazon.com/IAM/latest/UserGuide/IAMBestPractices.html
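To give you an idea of how little code this takes, here is a minimal sketch (my own illustration, not an existing tool) that pulls every customer-managed policy into local JSON files you can commit to git. The boto3 calls are real; the output directory name is made up:

    # Sketch: dump all customer-managed IAM policies into JSON files for git.
    # "policies" is an arbitrary directory name, not a convention.
    import json
    import pathlib
    import boto3

    iam = boto3.client("iam")
    out = pathlib.Path("policies")
    out.mkdir(exist_ok=True)

    # Scope="Local" restricts this to your own policies, not AWS managed ones.
    for page in iam.get_paginator("list_policies").paginate(Scope="Local"):
        for pol in page["Policies"]:
            version = iam.get_policy_version(
                PolicyArn=pol["Arn"], VersionId=pol["DefaultVersionId"])
            doc = version["PolicyVersion"]["Document"]
            (out / (pol["PolicyName"] + ".json")).write_text(
                json.dumps(doc, indent=2, sort_keys=True) + "\n")

Run something like that from a cron job or CI step and commit the diff, and you get exactly the history and commit messages you asked for.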
Is this a good idea?
Yes, automation is good.
Is there a better solution?
Well, here are some related ideas:
Use CloudTrail to log all IAM changes.
If you disable your IAM-changing privs, create a second user (with MFA enabled) for emergencies.
For some "dangerous" commands, use automation instead (e.g. give people a web form where they can delete a bucket, but have your code verify it's OK to delete first).
Avoid adding privs directly to people. Always use groups to organize permissions. Don't be afraid to spend some time figuring out what logical permission groups would be. For example, you could have a "debugging production" group.
Don't get too fine-grained (at least not at first). There is a trade-off between security and bureaucracy here. If people have to ping you for every little permission, they will start requesting privs "just in case".
Use conditionals: you can say "you can delete any bucket that doesn't have 'production' in the name", or "you can terminate instances, but it requires MFA" (see the sketch after this list).
Review your policies regularly. People move around between teams, so people often end up with permissions they don't need. If your groups are well-named, you can make the managers review the permissions needed for their underlings.
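To make the conditionals concrete, here is a sketch of the "terminate instances, but it requires MFA" example, expressed as a policy created through the API (the policy name is a placeholder):

    # Sketch of the conditional from the list above: allow terminating
    # instances only when the caller authenticated with MFA.
    import json
    import boto3

    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "ec2:TerminateInstances",
            "Resource": "*",
            # Only satisfied when the request was made with MFA present.
            "Condition": {"Bool": {"aws:MultiFactorAuthPresent": "true"}},
        }],
    }

    boto3.client("iam").create_policy(
        PolicyName="terminate-with-mfa",  # placeholder name
        PolicyDocument=json.dumps(policy))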
Is there a tool that does this already?
Not that I know of. It's pretty easy via API calls, so someone is going to write it.
(This guy started a project: https://github.com/percolate/iamer )

Related

Can someone help me understand how IAM Identity Center works (specific scenario)?

I have multiple AWS accounts, for example, app-dev, app-prod, app-it, etc. There's also the management account, app-root.
There are also multiple groups already present, for example, developers, developers-prod, administrators, etc.
I also have multiple permission sets, for example, DeveloperAccess, AWSAdministratorAccess, DeveloperITAccess, etc. Opening, for example, the DeveloperAccess permission set, I can see an inline policy that includes things like s3, dynamo, rds, etc. Going to the "Accounts" tab, I see all my development accounts (app-dev, app-prod, app-stg), but I also see the app-root account.
Now, I don't want my developers to have access to the management account. I tried removing the management account from there, but that changed nothing on their end. I guess I just don't understand the underlying connection between accounts, groups and permission sets; the documentation wasn't clear enough for me. What I'm trying to do is make sure my management account isn't accessible to anyone; basically, I want a special group/permission set for it, and I also want to remove app-root from every other group/permission set. I don't know how to do that.

How to send parameters to "Open in Cloud Shell" URL?

I want to create a button that will open GCP Cloud Shell and run code that creates some resources in the account.
I am trying to use an "Open in Cloud Shell" (https://cloud.google.com/shell/docs/open-in-cloud-shell) URL with my git repo added to it, but the problem is that my code should get different arguments on every run. Is there a way to send arguments with this URL? Or is there another solution for running code with arguments in GCP Cloud Shell via a URL?
This is NOT a direct answer to your original question, but it might be useful toward an overall answer. If you don't find it helpful, simply let me know and I'll delete it.
From your clarification in the comments, what I now sense is that you want to create GCP resources that the user can work with. For example, a Pub/Sub topic. We'll use that as an illustration. The first thing I want to do is disabuse us of the notion that there is anything "special" about a resource and the identity used to create it, other than that the identity must have authority to create it. For example, if user "john" creates a topic, that doesn't mean that the topic is "owned" by john. A GCP resource "just exists" after it is created. In order for a user to "use" a resource, the resource must authorize the set of users allowed to work with it. This is where GCP IAM comes into play. Separate your goal into two parts:
Upon request, a new GCP topic is created
Once the GCP topic is created, you grant permissions on the topic to be worked with by named identities (users/groups)
Don't think "The user who creates the topic is immediately the one who can work with it".
For example, you may wish to grant your users the ability to subscribe to a topic but may not want those users to be able to "manipulate" topics such as creation/update/delete.
I am assuming that the solution you are working on is for end users rather than internal developers?
Off the top of my head, I'm tempted to suggest that you review the following very short video:
How to authenticate calls to your Google Cloud Run service
This is just a teaser, but it does give us a clue. It alludes to the notion that a request from a user authenticated to Google can be received by a Cloud Run instance, and Cloud Run can then know who that user is. With that in mind, in the code of your Cloud Run service you can make a "yes/no" decision as to whether to proceed. If yes, then Cloud Run (which is indeed running as a single identity, and we won't change that) creates the topic and then assigns subscription (or publication, or other) permissions on the topic to the identity that came in with the request.
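Putting those pieces together, here is a rough sketch of what that Cloud Run handler could look like. Treat it as an illustration under assumptions: the project ID, route, and domain check are placeholders, and a real service should also verify the token's audience.

    # Sketch: Cloud Run endpoint that creates a Pub/Sub topic on behalf of an
    # authenticated caller, then grants that caller subscriber access.
    from flask import Flask, request, abort
    from google.oauth2 import id_token
    from google.auth.transport import requests as google_requests
    from google.cloud import pubsub_v1

    app = Flask(__name__)
    publisher = pubsub_v1.PublisherClient()
    PROJECT_ID = "my-project"  # placeholder

    @app.route("/topics/<topic_id>", methods=["POST"])
    def create_topic(topic_id):
        # Who is the caller? Verify the Google ID token they sent.
        auth = request.headers.get("Authorization", "")
        if not auth.startswith("Bearer "):
            abort(401)
        claims = id_token.verify_oauth2_token(
            auth.removeprefix("Bearer "), google_requests.Request())
        caller = claims["email"]  # assumes the token carries an email claim

        # The "yes/no" decision lives in your code (placeholder rule here).
        if not caller.endswith("@example.com"):
            abort(403)

        # Cloud Run's own service account creates the topic...
        topic_path = publisher.topic_path(PROJECT_ID, topic_id)
        publisher.create_topic(request={"name": topic_path})

        # ...and then authorizes the caller to use it.
        policy = publisher.get_iam_policy(request={"resource": topic_path})
        policy.bindings.add(role="roles/pubsub.subscriber",
                            members=["user:" + caller])
        publisher.set_iam_policy(
            request={"resource": topic_path, "policy": policy})
        return {"topic": topic_path, "granted_to": caller}, 201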

How exactly iam-user-unused-credentials-check works?

I've recently implemented some compliance in a company but one rule messed up my mind.
It is about iam-user-unused-credentials-check which, at least according to the docs, should go non-compliant if an IAM user has used neither password nor access key for the configured amount of time.
Well, I do have a user who used his login credentials to access the web console, and it's still marked as non-compliant. I manually ran a re-evaluation a couple of minutes after that, but the resource is still non-compliant :/.
Should I give it more time and re-evaluate again in a few hours? Or did I misunderstand what this rule does, or how it does it?
OK, I understand what happened.
With this particular rule, AWS Config does not rely on the configuration changes it records, but on AWS CloudTrail, to see activity. That's one thing.
The other was the delay before that trail gets picked up (heh). After a while, the resource simply went compliant.
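If you want to reproduce this from code rather than the console, here is a sketch (my own, not from the original answer) that triggers a re-evaluation and then lists what the rule still flags; the rule name is whatever you configured:

    # Sketch: re-run the rule and poll its per-resource compliance with boto3.
    import boto3

    config = boto3.client("config")
    RULE = "iam-user-unused-credentials-check"  # your configured rule name

    # Equivalent to the console's "Re-evaluate" button.
    config.start_config_rules_evaluation(ConfigRuleNames=[RULE])

    # Later, list which IAM users the rule still flags. Remember the lag
    # described above: Config reads activity from CloudTrail, so results
    # can trail reality by a while.
    details = config.get_compliance_details_by_config_rule(
        ConfigRuleName=RULE, ComplianceTypes=["NON_COMPLIANT"])
    for result in details["EvaluationResults"]:
        qualifier = result["EvaluationResultIdentifier"]["EvaluationResultQualifier"]
        print(qualifier["ResourceId"], result["ComplianceType"])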

Google Cloud Cloud/Key activity logging

I have just recently started to work with Google Cloud and I am trying to wrap my head around some of its inner workings, mainly the audit logging part.
What I want to do is get the log activity from when my keys are used for anything, and also from when someone actually logs into the Google Cloud Console (it could be the Key Vault or the Key Ring, too).
I have been using PowerShell to extract these logs with gcloud logging read, and this is where I start to doubt whether I am looking in the right place. I will explain:
I have created new keys and I see this action in the Activity panel, and I can already extract it through gcloud logging read 'resource.type="cloudkms_cryptokey"' (there could be a typo in that command, since I am writing it from the top of my head, sorry for that!).
Although I have this information, I am rather curious whether this is the correct course of action here. I saw the CreateCryptoKey and SetIamPolicy methods in my logs, alright, but am I going to see all actions related to these keys? Reading the GCloud docs, I feel as though I am only getting some of the actions.
As I have said, I am trying to work my way around the GCloud Documentation, but it is such an overwhelming amount of information that I am not really getting the proper answer I am looking for, this is why I thought about resorting to this community.
So, to summarize: am I getting all the information related to my keys the way I am doing it right now? And what about the people who have access to the Google Cloud Console page: is there a way to find out who accessed it and which part (the Crypto Keys page or the Crypto Vault page, for example)? That's something I have not understood from the docs either, sadly. Perhaps someone could show me the proper page to reference for what I am looking for? The Cloud Audit Logging page doesn't feel totally clear to me on this front (and I assume I could be at fault here; these past weeks have been harsh!)
Thanks for anyone that takes some time to answer my question!
Admin activities such as creating a key or setting IAM policy are logged by default.
Data access activities such as listing Cloud KMS resources (key rings, keys, etc.), or performing cryptographic operations (encryption, decryption, etc.) are not logged by default. You can enable data access logging, via the steps at https://cloud.google.com/kms/docs/logging. I'm not sure if that is the topic you are referring to, or https://cloud.google.com/logging/docs/audit/.
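As a concrete example of reading those admin-activity entries programmatically, here is a sketch using the Cloud Logging client library (the project ID is a placeholder, and the payload field access assumes the usual audit-log entry shape):

    # Sketch: list recent Cloud KMS admin-activity audit entries.
    from google.cloud import logging

    client = logging.Client(project="my-project")  # placeholder project
    # Admin Activity audit entries for Cloud KMS keys.
    FILTER = (
        'resource.type="cloudkms_cryptokey" '
        'logName:"cloudaudit.googleapis.com%2Factivity"'
    )
    for entry in client.list_entries(filter_=FILTER,
                                     order_by=logging.DESCENDING):
        # For audit logs, the payload carries the invoked method, e.g.
        # CreateCryptoKey or SetIamPolicy.
        print(entry.timestamp, entry.payload.get("methodName"))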

"Failed to fetch a list of secrets" on AWS Secrets Manager console?

Has anyone noticed some unpredictable failures in AWS Secrets Manager when trying to retrieve secret values? I'm using my own encryption key, and I've found that I frequently get a "Failed to fetch a list of secrets" error in the AWS console after encrypting a secret. This seems to happen if I change the encryption key after an initial encryption, but it has happened without that as well.
I also think I've seen a case where the encryption key changed from a custom key to default without any action from me.
I've also seen an issue where two stacks set up nearly identically have an inconsistency where one can read an encryption key when calling Secrets Manager but one cannot. It looks like an IAM issue, but I haven't found any difference between the two stacks and their IAM settings. I only mention this in case it gives some clue to the issue above.
I am seeing the same thing after changing the encryption key. I don't understand why this is happening. I will open a ticket with AWS and report back.
OK, after talking to AWS Support, the issue seems to be a bug. If you disabled your old encryption key (or marked it for deletion), then you will experience this issue.
To fix this you will need to cancel the deletion of your old encryption key, AND change its status to "Enabled". After this you will be able to retrieve your secrets using your new encryption key.
Unfortunately, this is the current workaround until AWS has a permanent solution.
Hope this helps.
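For anyone who wants to apply the same workaround through the API, this sketch does both steps with boto3 (the key ID is a placeholder for your old CMK):

    # Sketch: cancel the pending deletion on the old key and re-enable it.
    import boto3

    kms = boto3.client("kms")
    OLD_KEY_ID = "1234abcd-12ab-34cd-56ef-1234567890ab"  # placeholder key ID

    # Cancelling deletion leaves the key in the "Disabled" state...
    kms.cancel_key_deletion(KeyId=OLD_KEY_ID)
    # ...so it still needs an explicit enable, matching the fix above.
    kms.enable_key(KeyId=OLD_KEY_ID)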
There is not enough data here to provide a reliable answer. However, since you mention stacks and IAM users, I suspect you may be seeing a propagation issue.
Most AWS services, and IAM in particular, are eventually consistent. If you create a user or add permissions to a user, it can take some time for those user permissions to propagate. Usually this happens in seconds, but can sometimes take minutes. Since these are distributed systems, you could hit a node that has your recent permission updates and then hit a node that does not. A good clue is if this all clears up five or ten minutes after you have created everything.
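One way to tell propagation lag apart from a real misconfiguration is to make the "wait and retry" test repeatable. A small sketch of that idea (my illustration, not part of the answer):

    # Sketch: retry list_secrets for up to ten minutes. If it eventually
    # succeeds, you were most likely seeing IAM propagation delay.
    import time
    import boto3
    from botocore.exceptions import ClientError

    sm = boto3.client("secretsmanager")
    deadline = time.time() + 600  # ten minutes

    while True:
        try:
            sm.list_secrets()
            print("Succeeded; the permissions have propagated.")
            break
        except ClientError as err:
            if time.time() > deadline:
                raise  # still failing: probably not a propagation issue
            print("Still failing (%s); retrying in 30s..." % err)
            time.sleep(30)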