"Failed to fetch a list of secrets" on AWS Secrets Manager console? - amazon-web-services

Has anyone noticed some unpredictable failures on AWS Secrets Manager when trying to retrieve secret values? I'm using my own encryption key, and I've found that I frequently get a "Failed to fetch a list of secrets" error on the AWS console after encrypting a secret. This seems to happen if I change the encryption key after an initial encryption, but it has happened without that as well.
I also think I've seen a case where the encryption key changed from a custom key to default without any action from me.
I've also seen an issue where two stacks set up nearly identically have an inconsistency where one can read an encryption key when calling Secrets Manager but one cannot. It looks like an IAM issue, but I haven't found any difference between the two stacks and their IAM settings. I only mention this in case it gives some clue to the issue above.

I am seeing the same thing as well after I changed the encryption key. I don't understand why this is happening. I will open a ticket with AWS and report back.
OK, after talking to AWS Support, the issue seems to be a bug. If you disabled your old encryption key (or marked it for deletion), then you will experience this issue.
To fix this you will need to cancel the deletion of your old encryption key, AND change its status to "Enabled". After this you will be able to retrieve your secrets using your new encryption key.
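For anyone who prefers scripting it, here is a minimal sketch of that workaround using boto3 (the key ID below is a placeholder for your old key):

    import boto3

    kms = boto3.client("kms")
    OLD_KEY_ID = "1234abcd-12ab-34cd-56ef-1234567890ab"  # placeholder: your old key's ID

    # Cancel the pending deletion of the old encryption key...
    kms.cancel_key_deletion(KeyId=OLD_KEY_ID)

    # ...and flip it back to "Enabled" so Secrets Manager can decrypt again.
    kms.enable_key(KeyId=OLD_KEY_ID)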
Unfortunately, this is the current workaround until AWS has a permanent solution.
Hope this helps.

There is not enough data here to provide a reliable answer. However, since you mention stacks and IAM users, I suspect you may be seeing a propagation issue.
Most AWS services, and IAM in particular, are eventually consistent. If you create a user or add permissions to a user, it can take some time for those user permissions to propagate. Usually this happens in seconds, but can sometimes take minutes. Since these are distributed systems, you could hit a node that has your recent permission updates and then hit a node that does not. A good clue is if this all clears up five or ten minutes after you have created everything.
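If you want to rule propagation in or out, one crude sketch is to retry the call with a delay and see whether the failures clear up on their own (the secret name is hypothetical; boto3 assumed):

    import time
    import boto3
    from botocore.exceptions import ClientError

    secrets = boto3.client("secretsmanager")

    def get_secret_with_retry(secret_id, attempts=10, delay=30):
        """Retry around possible IAM/KMS propagation delays (~5 minutes total)."""
        for _ in range(attempts):
            try:
                return secrets.get_secret_value(SecretId=secret_id)["SecretString"]
            except ClientError as err:
                if err.response["Error"]["Code"] != "AccessDeniedException":
                    raise  # something other than permissions; don't mask it
                time.sleep(delay)
        raise TimeoutError(f"{secret_id} still failing after {attempts} attempts")

    # Example: get_secret_with_retry("my-app/db-password")

If the call succeeds a few minutes in, propagation was the culprit; if it never succeeds, you are looking at a real permission or key problem.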

Related

How to store sensitive data in AWS

So I'm new to the whole cloud computing infrastructure and I'm trying to grasp the best practices, and today this came to my mind: how do I store sensitive data in AWS? What services do I need to utilize, and what architecture should I build for it? I wrote a scenario down to further explain my question.
Let's say I have user registration, and I need every user to input a secret key that I use to access some kind of 3rd-party service on their behalf (let's assume that it's the only way to access that service). How do I store it in my database, let's say RDS for example, so that other IAM users who can access the database only ever see an encrypted secret key, not the plain text?
I searched online and found some saying KMS, some saying Secrets Manager, some saying backend encryption and some saying frontend encryption. Which way should I go?
Whoever decides to answer this question, thanks in advance, but please elaborate as much as you can, because I'm still trying to get the concepts and trying to leverage the "Cloud" capabilities as much as possible.
Two common approaches would be to encrypt the secret key at either a) the application level, or b) the database level. To encrypt the key inside your application, you would use a reliable encryption algorithm such as AES-256 (note that SHA-256 and SHA-512 are one-way hashes, not encryption, so they would not work here since you need to decrypt the key later to call the 3rd-party service). The key would be encrypted and inaccessible even before you write it out to your database as binary content. To encrypt at the database level, there are a number of options, depending on your particular database. If your RDBMS supports encrypted columns, then, from your application, you may simply write out the secret key to its column. The database would then automatically handle encrypting on the way in, and decrypting the secret key on the way out, when you go to read it.
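As a rough illustration of the application-level option on AWS, you could hand the encryption to KMS before the value ever reaches RDS. This is only a sketch; the key ARN is a placeholder and error handling is omitted:

    import boto3

    kms = boto3.client("kms")
    KEY_ID = "arn:aws:kms:us-east-1:111122223333:key/your-key-id"  # placeholder CMK

    def encrypt_user_secret(plaintext: str) -> bytes:
        """Encrypt the user's 3rd-party key; store the returned blob in RDS as binary."""
        return kms.encrypt(KeyId=KEY_ID, Plaintext=plaintext.encode())["CiphertextBlob"]

    def decrypt_user_secret(blob: bytes) -> str:
        """Decrypt the blob read back from RDS when you need to call the 3rd party."""
        return kms.decrypt(CiphertextBlob=blob)["Plaintext"].decode()

The nice property of this pattern is that database users only ever see ciphertext, and decryption is gated by KMS permissions on the key rather than by database access.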

How exactly does iam-user-unused-credentials-check work?

I've recently implemented some compliance rules at a company, but one rule has me confused.
It is about iam-user-unused-credentials-check, which, at least according to the docs, should become non-compliant if an IAM user has used neither the password nor an access key for the configured amount of time.
Well, I have a user who used his login credentials to access the web console, and he is still
marked as non-compliant. I manually triggered a re-evaluation a couple of minutes after that, but the resource is still non-compliant :/.
Should I give it more time and re-evaluate again in a few hours? Or did I misunderstand what this rule does or how it does it?
Ok, I understand what happened.
With this particular rule, AWS Config does not rely on the configuration changes it records, but on AWS CloudTrail, to see activity. That's one part of it.
The other part was the delay that has to pass before the trail gets picked up (heh). After a while, the resource simply went compliant.
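If you want to inspect the same signals the rule evaluates, here is a quick sketch with boto3 (the user name is a placeholder):

    import boto3

    iam = boto3.client("iam")
    USER = "some-user"  # placeholder

    # Console password: last-used timestamp (missing if never used or no password set).
    print("password last used:", iam.get_user(UserName=USER)["User"].get("PasswordLastUsed"))

    # Access keys: last-used timestamp for each key.
    for key in iam.list_access_keys(UserName=USER)["AccessKeyMetadata"]:
        info = iam.get_access_key_last_used(AccessKeyId=key["AccessKeyId"])
        print(key["AccessKeyId"], info["AccessKeyLastUsed"].get("LastUsedDate"))

If those timestamps are fresh but the rule still reports non-compliant, you are most likely just waiting on the evaluation delay described above.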

Google Cloud Cloud/Key activity logging

I have just recently started to work with Google Cloud and I am trying to wrap my head around some of its inner workings, mainly the audit logging part.
What I want to do is get the log activity from when my keys are used for anything, and also from when someone actually logs into the Google Cloud Console (it could be the Key Vault or the Key Ring, too).
I have been using PowerShell to extract these logs with gcloud read logging, and this is where I start to doubt whether I am looking in the right place. I will explain:
I have created new keys and I see this action in the Activity Panel, and I can already extract it through gcloud read logging resource.type=cloudkms_cryptokey (there could be a typo in the command, since I am writing it off the top of my head, sorry for that!).
Although I have this information, I am rather curious whether this is the correct course of action here. I saw the CreateCryptoKey and SetIamPolicy methods in my logs, alright, but am I going to see all actions related to these keys? Reading the GCloud docs, I get the feeling that I am only seeing some of the actions.
As I have said, I am trying to work my way through the GCloud documentation, but it is such an overwhelming amount of information that I am not really getting the proper answer I am looking for, which is why I thought about turning to this community.
So, to summarize: am I getting all the information related to my keys the way I am doing it right now? And what about the people who have access to the Google Cloud Console page: is there a way to find out who accessed it and which part (the Crypto Keys page or the Crypto Vault page, for example)? That's something I have not understood from the docs either, sadly. Perhaps someone could point me to the proper page for what I am looking for, because the Cloud Audit Logging page doesn't feel totally clear to me on this front (and I assume I could be at fault here, these past weeks have been harsh!).
Thanks to anyone who takes some time to answer my question!
Admin activities such as creating a key or setting IAM policy are logged by default.
Data access activities such as listing Cloud KMS resources (key rings, keys, etc.), or performing cryptographic operations (encryption, decryption, etc.) are not logged by default. You can enable data access logging, via the steps at https://cloud.google.com/kms/docs/logging. I'm not sure if that is the topic you are referring to, or https://cloud.google.com/logging/docs/audit/.
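For what it's worth, here is a small sketch of pulling the Admin Activity entries with the google-cloud-logging Python client instead of gcloud (the project ID and the exact filter string are my assumptions):

    from google.cloud import logging

    client = logging.Client(project="my-project")  # hypothetical project ID

    # Admin Activity audit log entries for Cloud KMS keys (these are on by default).
    # Data Access entries (encrypt, decrypt, list, ...) only show up after you
    # enable Data Access logging for Cloud KMS as described in the docs above.
    kms_filter = (
        'resource.type="cloudkms_cryptokey" '
        'AND logName:"cloudaudit.googleapis.com%2Factivity"'
    )

    for entry in client.list_entries(filter_=kms_filter):
        print(entry.timestamp, entry.payload)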

How to back up virtual MFA for AWS

To secure an AWS account, it is good to have a virtual MFA device, such as Google Authenticator.
Usually, you can just take a picture of the QR code and use it on as many devices as you want (as suggested here: https://webapps.stackexchange.com/a/66666/188445; sorry, I couldn't comment on that answer, I don't have the reputation).
However, AWS asks for two codes to confirm, which makes me think it is device-specific. Is there any way to set up AWS MFA on two devices, or to use a backup if I lose my phone?
First, I'll be that guy and say: don't back up your MFA key. If you lose your device, just jump through the steps of resetting it by contacting support.
While it doesn't necessarily defeat the purpose of increasing the security, and while it's also probably not likely that someone will attempt to steal your key, I don't think you're doing yourself any favors, security-wise.
But that's not what you're asking about.
When you say "on AWS it asks two codes to confirm, that makes me think it is device specific," I'm not sure I follow. Yes, it's device specific, in that you need the specific device that either scanned the QR code, or entered the key in, in order to auth via MFA.
But just because there are two fields, it doesn't mean that there are two different QR codes or MFA keys you need - you just need the one they show you.
After you set up your authenticator, you enter the first code you see into the first field, then wait for that to cycle out, then enter the next one into the second field. Asking for two codes just ensures that your authenticator is working correctly. It's not any different than other services that use an authenticator as MFA - some only ask for the first code that appears, some ask for two. (Personally I think two is better.)
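To make the "one key, two codes" point concrete, here is a tiny sketch with the pyotp library (the seed is made up): the two codes AWS asks for are just consecutive 30-second windows of the same seed, not two different keys.

    import time
    import pyotp

    seed = "JBSWY3DPEHPK3PXP"        # made-up base32 seed; this is what the QR code encodes
    totp = pyotp.TOTP(seed)

    now = time.time()
    first_code = totp.at(now)        # code for the current 30-second window
    second_code = totp.at(now + 30)  # the next code, which AWS waits for
    print(first_code, second_code)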

Versioning AWS policies

Currently I log into IAM and edit policies by hand for my S3 bucket. When I change something in the editor, I have no idea what the policy was before unless I exit the editor by canceling and then go back and view it. So there's no way to tell exactly what I've changed. So editing is kind of painful, especially considering that I sometimes find myself changing something and then testing the change, with no trivial way to roll back to where I started.
Another problem created by the lack of version control is there's no log of why or when a particular permission was modified. For example, I would really like to know that the reason we need the ListBucket permission on our bucket is because that was required to get file uploads to work. You know, the kind of thing you might put in a git commit message.
Now that you understand and care deeply about my motivations, I would like to know how best to get my policies into git. To the extent possible, I'd like the only way to change the permissions to be through code that is written by me, with the presumption being that any time you make a change, you commit to the repository. This is not perfect security of course, but it does provide an accounting of what changed when, and gives us a single place where we make changes.
Here's my proposal:
Create an IAM user called policy_editor
Revoke policy editing privileges from all users
Give policy_editor policy editing privileges
Do not give policy_editor a password (thus you have to use API credentials to change policies)
My questions are:
Is this possible? (Ideally even the root user wouldn't have permission to edit policies, so that wouldn't happen by accident)
Is this a good idea?
Is there a better solution?
Is there a tool that does this already?
Thanks!
Is this possible?
Yes, the API is flexible enough to do that. Writing automation around IAM pays off in spades.
By "root user", do you mean the AWS access keys directly on the account? Step 1 is to delete those creds (directly on the account) and only use IAM users for everything.
http://docs.aws.amazon.com/IAM/latest/UserGuide/IAMBestPractices.html
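As a rough sketch of what that automation could look like with boto3, you could dump every customer-managed policy's default version to JSON files and keep that directory in git (the output path and scoping are my assumptions):

    import json
    import pathlib
    import boto3

    iam = boto3.client("iam")
    out = pathlib.Path("policies")  # directory tracked in git
    out.mkdir(exist_ok=True)

    # Export each customer-managed policy's default version as pretty-printed JSON.
    for page in iam.get_paginator("list_policies").paginate(Scope="Local"):
        for policy in page["Policies"]:
            version = iam.get_policy_version(
                PolicyArn=policy["Arn"], VersionId=policy["DefaultVersionId"]
            )
            doc = version["PolicyVersion"]["Document"]
            (out / f"{policy['PolicyName']}.json").write_text(json.dumps(doc, indent=2))

Run something like this on a schedule or in CI, and the diff in git gives you exactly the "what changed and when" history you are describing.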
Is this a good idea?
Yes, automation is good.
Is there a better solution?
Well, here are some related ideas:
Use CloudTrail to log all IAM changes.
If you disable your IAM-changing privs, create a second user (with MFA enabled) for emergencies.
For some "dangerous" commands, use automation instead. (i.e. give them a web form where they can delete a bucket, but your code verifies it's OK to delete beforehand.)
Avoid adding privs directly to people. Always use groups to organize permissions. Don't be afraid to spend some time figuring out what logical permission groups would be. For example, you could have a "debugging production" group.
Don't get too fine-grained (at least not at first). There is a trade-off between security and bureaucracy here. If people have to ping you for every little permission, they will start requesting privs "just in case".
Use conditionals: you can say "you can delete any bucket that doesn't have 'production' in the name", or "you can terminate instances, but it requires MFA" (a sketch of the latter follows after this list).
Review your policies regularly. People move around between teams, so people often end up with permissions they don't need. If your groups are well-named, you can make the managers review the permissions needed for their underlings.
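Here is a hedged sketch of the MFA-gated variant mentioned above, attached to a group with boto3 (the group and policy names are placeholders):

    import json
    import boto3

    iam = boto3.client("iam")

    # Allow terminating instances only when the caller authenticated with MFA.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "ec2:TerminateInstances",
            "Resource": "*",
            "Condition": {"Bool": {"aws:MultiFactorAuthPresent": "true"}},
        }],
    }

    iam.put_group_policy(
        GroupName="ops",                      # placeholder group
        PolicyName="terminate-requires-mfa",  # placeholder policy name
        PolicyDocument=json.dumps(policy),
    )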
Is there a tool that does this already?
Not that I know of. It's pretty easy via API calls, so someone is going to write it.
(This guy started a project: https://github.com/percolate/iamer )