AWS IAM consistency issue

I'm using HashiCorp Vault to create users with the AWS Secrets Engine.
I have an issue using the AWS credentials I get, probably because it takes time for all the AWS servers to be updated with the newly created user, as stated here.
I'm using HashiCorp Vault to create AWS users at runtime and use the credentials I get immediately. In practice, there can be a delay of up to a few seconds before I can actually use them. Besides implementing some retry mechanism, I wonder if there is a real solution to this issue, or at least a more elegant one.

As AWS IAM promises only eventual consistency, we cannot do much better than delay and hope for the best. The bad part is that we don't know how long we should sleep until the new keys reach all endpoints.
This is a problem with the behavior of IAM, not really a Vault issue. There is a kind of workaround along these lines:
1. Make a new temporary user, generate keys for it, and hand the keys over to the Vault requester.
2. Use a non-temporary user, make a new key pair for it, etc.
I didn't test it, but as an idea to try I guess it's OK.
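As for the retry mechanism the question mentions, here is a minimal sketch in Python with boto3 (the function name, timeout, and polling interval are my own choices, not something from Vault or AWS): it polls STS with the freshly issued keys until IAM propagation makes them usable.

```python
import time

import boto3
from botocore.exceptions import ClientError

def wait_for_credentials(access_key, secret_key, timeout=30, interval=2):
    """Block until newly issued IAM keys are accepted by STS, or raise."""
    sts = boto3.client(
        "sts",
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
    )
    deadline = time.time() + timeout
    while True:
        try:
            # GetCallerIdentity succeeds only once the new user/keys have
            # propagated to the endpoint we are talking to.
            return sts.get_caller_identity()
        except ClientError:
            if time.time() >= deadline:
                raise
            time.sleep(interval)
```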

HashiCorp released a change in how they handle dynamically created IAM users, and the Vault provider now accounts for this delay: https://github.com/terraform-providers/terraform-provider-vault/blob/master/CHANGELOG.md#260-november-08-2019. Since this update I rarely run into issues, but they still occur once in a while.

Related

What is the best way to duplicate an existing Cognito user pool

I need to recreate a new user pool with exactly the same settings as another one, and I am wondering what is the best way to do it, or if there is a standard way that I am not aware of (maybe a faster way than using the AWS console).
My guess is to use the AWS CLI, roughly like the sketch below:
1. Get the user pool details: describe-user-pool
2. Then create a new one with the same details: create-user-pool
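For illustration, a minimal boto3 sketch of that idea (the pool ID is a placeholder, and the exact set of fields you can carry over depends on your pool; some returned fields, such as Id, Arn, and the schema, are read-only or need special handling):

```python
import boto3

cognito = boto3.client("cognito-idp")

# Read the existing pool's configuration.
source = cognito.describe_user_pool(UserPoolId="us-east-1_EXAMPLE")["UserPool"]

# Keep only fields that CreateUserPool also accepts; read-only fields
# (Id, Arn, CreationDate, ...) must be dropped before re-creating.
reusable_keys = {
    "Policies", "LambdaConfig", "AutoVerifiedAttributes", "AliasAttributes",
    "UsernameAttributes", "SmsVerificationMessage", "EmailVerificationMessage",
    "EmailVerificationSubject", "VerificationMessageTemplate",
    "SmsAuthenticationMessage", "MfaConfiguration", "DeviceConfiguration",
    "EmailConfiguration", "SmsConfiguration", "UserPoolTags",
    "AdminCreateUserConfig", "UserPoolAddOns",
}
settings = {k: v for k, v in source.items() if k in reusable_keys}

copy = cognito.create_user_pool(PoolName="copy-of-" + source["Name"], **settings)
print(copy["UserPool"]["Id"])
```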
Any thoughts?
You should first import the resource into CloudFormation, then copy the template and deploy it as a new stack. This will give you better control over the desired configuration of the resources. Ensure you set the DeletionPolicy to Retain; losing a user pool is no fun.
It seems there is still no support for importing Cognito user pools into CloudFormation. My recommendation remains that you should be maintaining your infrastructure as code, particularly if you wish to replicate it across environments. How you accomplish it is a little more convoluted, but you should just iterate on your CFN template until the configuration matches. Or, if you are up for it, Terraform has tooling to help you import resources.
So! To answer my own question after some time gaining related experience.
The best way to go is what @AndrewGillis said:
Keep your infrastructure as code.
My preference is Terraform.

When using AWS .Net SDK, what should be the lifecycle of client objects?

I have an application that queries some of my AWS accounts every few hours. Is it safe (from a memory and number-of-connections perspective) to create a new client object for every request? As we need to sync almost all of the resource types for almost all of the regions, we end up with hundreds of clients (number of regions multiplied by resource types) per service run.
In general, creating AWS clients is pretty cheap, and it is fine to create them and quickly dispose of them. The one area I would be careful about when it comes to performance is when the SDK has to resolve credentials, like assuming IAM roles to get credentials. It sounds like in your case you are iterating through a bunch of accounts, so I'm guessing you are explicitly setting credentials, and that will be okay.
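The question is about the .NET SDK, but the trade-off is the same in any SDK; here is a rough Python/boto3 illustration of the point (role ARN and regions are placeholders): constructing clients per call is cheap, while credential resolution (the STS call) is the part worth doing once and reusing.

```python
import boto3

# Resolve credentials once per account (the relatively expensive part) ...
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/ReadOnly",  # placeholder
    RoleSessionName="inventory-sync",
)["Credentials"]

# ... then freely create short-lived clients per region/service with them.
for region in ("us-east-1", "eu-west-1"):
    ec2 = boto3.client(
        "ec2",
        region_name=region,
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    print(region, len(ec2.describe_instances()["Reservations"]))
```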

Glacier policy for IAM to have full access to only vaults they've created?

There are similar questions around, but none seem to quite answer me directly (or I'm too new with AWS to connect the dots myself). Apologies if this was easily searchable; I've been trying for many days now.
I want to create a policy that I can assign to IAM users for my Glacier account that will allow any IAM user to create a vault and then grant them most rights for the vaults they've created (basically all but delete).
The use case/scenario is this: I have multiple Synology NASes spread across multiple sites. I presently have them all backing up to the Glacier account, each using its own IAM credentials. So far so good.
The problem arises when they need to do a restore (or even just list vaults): they see all vaults in the account. I do not want them to see other NASes' vaults/backups, as it can be confusing and is irrelevant to that site.
So far I'm simply doing all Glacier ops myself, but this will not scale for us. (We intend to add about 25 more NASes/sites; we're presently running about 8-10.)
My assumption is that I should be able to do this somehow with a Condition statement and some variant of vaults/${userid}, but I'm not quite finding/getting it.
I can't affect anything at vault creation (like adding a tag) because it's the Synology Glacier app creating the vault, so there's no way to modify that.
I've seen some solutions for, e.g., EC2 that use post-hoc tagging. I'd prefer not to go that route if I can avoid it, as it involves other services we don't use and I know little to nothing about (CloudStream(?), CloudWatch(?), and Lambda, I think).
I've also thought of multiple linked accounts; if that's the only way, then I will, but with no ability to move vaults to a new account (meaning I'd have to start over for these 8), it's a less attractive option.
Seems like a policy for this should be easy enough. Hoping it is and it's just a few clicks beyond my current policy-writing skills.
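No answer was recorded for this one, but for illustration, here is the kind of policy the question is reaching for, as a boto3 sketch. It relies on an assumption the question may not be able to satisfy: that each NAS's vault names can be made to start with its IAM user name (via the ${aws:username} policy variable), since the vaults can't be tagged at creation here. The account ID, user name, and vault-name convention are all placeholders.

```python
import json

import boto3

iam = boto3.client("iam")

# Hypothetical policy: each IAM user may create and use only vaults whose
# names start with that user's own name (e.g. user "nas-site1" -> "nas-site1-*").
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "UseOwnVaults",
            "Effect": "Allow",
            "Action": [
                "glacier:CreateVault",
                "glacier:DescribeVault",
                "glacier:UploadArchive",
                "glacier:InitiateMultipartUpload",
                "glacier:UploadMultipartPart",
                "glacier:CompleteMultipartUpload",
                "glacier:AbortMultipartUpload",
                "glacier:InitiateJob",
                "glacier:GetJobOutput",
                "glacier:ListJobs",
                # deliberately no glacier:Delete* actions
            ],
            "Resource": "arn:aws:glacier:*:123456789012:vaults/${aws:username}-*",
        },
        {
            "Sid": "ListAllVaults",
            "Effect": "Allow",
            "Action": "glacier:ListVaults",
            "Resource": "*",
        },
    ],
}

iam.put_user_policy(
    UserName="nas-site1",  # placeholder IAM user
    PolicyName="glacier-own-vaults",
    PolicyDocument=json.dumps(policy),
)
```

One caveat: glacier:ListVaults cannot be restricted to specific vaults, so each site would still see the other vault names in a listing; the scoping above only controls which vaults they can actually operate on.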

Access management for AWS-based client-side SDK

I'm working on a client-side SDK for my product (based on AWS). The workflow is as follows:
The user of the SDK somehow uploads data to some S3 bucket.
The user somehow saves a command on some queue in SQS.
One of the workers on EC2 polls the queue, executes the operation, and sends a notification via SNS. This point seems to be clear.
As you might have noticed, there are quite a few unclear points about access management here. Is there any common practice for providing access to AWS services (S3 and SQS in this case) to third-party users of such an SDK?
Options which I see at the moment:
We create an IAM user for users of the SDK which has access to some S3 resources and write permission for SQS.
We create an additional server/layer between AWS and the SDK which writes messages to SQS on behalf of the users, as well as provides one-time, short-lived links for the SDK to write data directly to S3.
The first one seems OK; however, I'm worried that I'm missing some obvious issues here. The second one seems to have a problem with scalability: if this layer goes down, the whole system won't work.
P.S.
I tried my best to explain the situation; however, I'm afraid the question might still lack some context. If you want more clarification, don't hesitate to write a comment.
I recommend you look closely at Temporary Security Credentials in order to limit customer access to only what they need, when they need it.
Keep in mind, with any solution to this kind of problem, it depends on your scale, your customers, and what you are OK exposing to your customers.
With your first option, letting the customer directly use IAM or temporary credentials exposes the fact that AWS is under the hood (since they can easily see requests leaving their system). It also has the potential for them to make their own AWS requests using those credentials, beyond what your code can validate and control.
Your second option is better since it addresses this: by making your server the only point of contact with AWS, it allows you to perform input validation, etc., before sending customer-provided data to AWS. It also lets you replace the implementation easily without affecting customers. As for availability/scalability concerns, that's what EC2 (and similar services) are for.
Again, all of this depends on your scale and your customers. For a toy application where you have a very small set of customers, simpler may be better for the purposes of getting something working sooner (rather than building and paying for a whole lot of infrastructure for something that may not be used).
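To make the temporary-credentials suggestion concrete, here is a hedged Python/boto3 sketch of what an intermediate layer might hand out (the role ARN, bucket, queue, and the 15-minute duration are all assumptions): the backend assumes a role with an inline session policy so the customer's credentials can only touch their own S3 prefix and your command queue.

```python
import json

import boto3

sts = boto3.client("sts")

def issue_customer_credentials(customer_id: str) -> dict:
    """Return short-lived credentials scoped to one customer's resources."""
    session_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "s3:PutObject",
                "Resource": f"arn:aws:s3:::example-uploads/{customer_id}/*",
            },
            {
                "Effect": "Allow",
                "Action": "sqs:SendMessage",
                "Resource": "arn:aws:sqs:us-east-1:123456789012:example-commands",
            },
        ],
    }
    resp = sts.assume_role(
        RoleArn="arn:aws:iam::123456789012:role/sdk-customer",  # placeholder
        RoleSessionName=f"sdk-{customer_id}",
        Policy=json.dumps(session_policy),  # intersected with the role's own policy
        DurationSeconds=900,
    )
    # Contains AccessKeyId, SecretAccessKey, SessionToken, Expiration.
    return resp["Credentials"]
```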

All my AWS datapipelines have stopped working with Validation error

I use AWS Data Pipeline to automatically back up DynamoDB tables to S3 on a weekly basis.
All of my data pipelines stopped working two weeks ago.
After some investigation, I see that EMR fails with "validation error" and "Terminated with errors: No active keys found for user account". As a result, all the jobs time out.
Any ideas what this means?
I ruled out changes to the list of instance types that are allowed to be used with EMR.
I also tried to read the EMR logs, but it looks like it doesn't even get to the point of creating logs (or I am looking for them in the wrong place).
The AWS account that was used to launch EMR has keys (access key and secret key). Could you check whether those keys were deleted? You need to log in to the AWS console and check that keys exist for your account.
If not, recreate the keys and use them in the code that launches EMR.
Basically, @Sandesh Deshmane answered my question correctly.
For future reference and clarity, I'll explain the situation here too:
What happened was that originally I used the root account and the console to create the pipelines. Later I decided to follow best practices and removed my root account keys.
A few days later (my pipelines are scheduled to run weekly), when they all failed, I did not make the connection and thought of other problems.
I think one good way to avoid this (if you want to use the console) is to log in to the console with an IAM account and create the pipelines.
Or you can use the command-line tools to create them with IAM credentials.
The real solution now (I think it was not available when the console was first introduced) is to assign the correct IAM role on the first page when you are creating your pipeline in the console. In the "Security/Access" section, change it from default to custom and select the correct roles there.
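For completeness, here is a sketch of the same idea via the API instead of the console, in Python with boto3 (the pipeline name, log bucket, and role names are placeholders; the point is only that the Default object carries explicit role and resourceRole fields rather than whatever the console session defaults to):

```python
import boto3

dp = boto3.client("datapipeline")

# Create the pipeline shell, then push a definition whose Default object
# pins the pipeline role and the resource (EC2/EMR) role explicitly.
pipeline = dp.create_pipeline(name="weekly-dynamodb-backup",
                              uniqueId="weekly-dynamodb-backup")

dp.put_pipeline_definition(
    pipelineId=pipeline["pipelineId"],
    pipelineObjects=[
        {
            "id": "Default",
            "name": "Default",
            "fields": [
                {"key": "role", "stringValue": "DataPipelineDefaultRole"},
                {"key": "resourceRole", "stringValue": "DataPipelineDefaultResourceRole"},
                {"key": "scheduleType", "stringValue": "cron"},
                {"key": "pipelineLogUri", "stringValue": "s3://example-bucket/logs/"},
            ],
        },
        # ... plus the schedule, EMR resource, and backup activity objects ...
    ],
)
```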