Right now we write scripts that run through the CLI to automate tasks or fetch things from AWS.
But we use an AWS access key, secret access key, and session token for this.
These keys and tokens are only valid for 1 hour, so if we use them an hour later the script will fail.
It is also not practical to fetch the temporary credentials, update the script, and rerun it by hand every time.
So what is the best possible solution in this situation? What should I do so that I can get the updated credentials and run the script using them automatically? Or is there any other alternative so that we can still run scripts from our local machines using Boto with AWS credentials?
Any help is appreciated.
Bhavesh
I'm assuming that your script runs outside of AWS; otherwise you would simply configure your compute (EC2, Lambda, etc.) to automatically assume an IAM role.
If you have persistent IAM User credentials that allow you to assume the relevant role, then use those (a sketch follows).
If you don't, then take a look at the new IAM Roles Anywhere feature.
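As a minimal sketch of the first option, assuming your persistent IAM User keys live under a default profile and a hypothetical profile named "deploy" is configured to assume the role, boto3 assumes the role on demand and refreshes the temporary credentials automatically, so the script never fails after an hour:

    import boto3

    # Assumes ~/.aws/config contains something like this (hypothetical ARN):
    #   [profile deploy]
    #   role_arn = arn:aws:iam::123456789012:role/ScriptRole
    #   source_profile = default
    # boto3 assumes the role on first use and refreshes the temporary
    # credentials automatically, so long-running scripts keep working.
    session = boto3.Session(profile_name="deploy")
    s3 = session.client("s3")
    for bucket in s3.list_buckets()["Buckets"]:
        print(bucket["Name"])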
When deploying to AWS from a gitlab-ci.yml file, you usually use aws-cli commands as scripts. At my current workplace, before I can use the aws-cli normally, I have to log in via aws-azure-cli and authenticate via 2FA; then my workstation is given a secret key that expires after 8 hours.
GitLab has CI/CD variables where I would usually put the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, but I can't create an IAM role to get these. So I can't use aws-cli commands in the script, which means I can't deploy.
Is there any way to authenticate GitLab other than this? I can reach out to our cloud services team, but that will take a week.
You can configure OpenID Connect to retrieve temporary credentials from AWS without needing to store secrets.
In my view it's actually a best practice, too, to use OpenID roles instead of storing actual credentials.
1. Add the identity provider for GitLab in AWS.
2. Configure the role and its trust policy.
3. Retrieve temporary credentials (sketched below).
Follow this guide: https://docs.gitlab.com/ee/ci/cloud_services/aws/ or a more detailed version: https://oblcc.com/blog/configure-openid-connect-for-gitlab-and-aws/
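As a hedged sketch of step 3, assuming a hypothetical role ARN and that your GitLab version injects the OIDC token as CI_JOB_JWT_V2 (newer releases use configurable ID tokens instead), the exchange looks like this in boto3; this call needs no pre-existing AWS credentials:

    import os
    import boto3

    # Hypothetical role ARN; the token variable name depends on your
    # GitLab version (CI_JOB_JWT_V2 here, ID tokens in newer releases).
    sts = boto3.client("sts")
    resp = sts.assume_role_with_web_identity(
        RoleArn="arn:aws:iam::123456789012:role/GitLabDeploy",
        RoleSessionName="gitlab-ci",
        WebIdentityToken=os.environ["CI_JOB_JWT_V2"],
    )
    creds = resp["Credentials"]
    # creds now holds a temporary AccessKeyId, SecretAccessKey, and
    # SessionToken to export for the aws-cli commands in the job.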
I've been having trouble with a deployment that uses a serverless component, so I've been trying to debug it. Stepping through the code, I actually thought I'd be able to step into the component itself and see what was going on.
But to my surprise, I couldn't actually debug it, because the component doesn't exist on my computer. Apparently the serverless CLI sends a request to a server, and the request seems to include everything serverless needs to build and deploy the actual service— which includes my AWS credentials...
Is this a well-known thing? Is there a way to force serverless to build and deploy locally? This really caught me by surprise, and to be honest I'm not very happy about it.
I haven't used their platform (I thought the CLI only executed locally, and sending credentials off-machine seems very risky), but you can make this more secure as follows:
First, set up an IAM role which can only perform the deploy actions for your app. Then create a profile which assumes this role when you work on your serverless app and use the CLI; a sketch follows below.
Secondly, you can avoid long-term CLI credentials (IAM users) by using the AWS SSO functionality, which generates CLI credentials valid for an hour; with the AWS CLI you can log in from the command line, I believe. This means your CLI credentials will live for at most 1 hour.
If the requests are always coming from the same IP you could also put that in an IAM policy, but I wouldn't imagine there is any guarantee that their IP will always be the same.
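As a minimal sketch of the first suggestion (all names hypothetical), you can even attach an inline session policy when assuming the deploy role, so the temporary credentials can never exceed the listed deploy actions no matter what the role itself allows:

    import json
    import boto3

    # Hypothetical role ARN and actions; adjust to your app's deploy needs.
    session_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:PutObject", "cloudformation:*"],
            "Resource": "*",
        }],
    }
    sts = boto3.client("sts")
    # The inline session policy caps the temporary credentials at the
    # intersection of the role's permissions and this policy.
    creds = sts.assume_role(
        RoleArn="arn:aws:iam::123456789012:role/ServerlessDeploy",
        RoleSessionName="serverless-deploy",
        Policy=json.dumps(session_policy),
    )["Credentials"]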
According to much advice, we should not configure an IAM User but use an IAM Role instead, to avoid someone managing to grab the user credentials from the .aws folder.
Let's say I don't have any EC2 instances. Am I still able to perform S3 operations via the AWS CLI, say aws s3 ls?
MacBook-Air:~ user$ aws s3 ls
Unable to locate credentials. You can configure credentials by running "aws configure".
You are correct that, when running applications on Amazon EC2 instances or as AWS Lambda functions, an IAM role should be assigned that will provide credentials via the EC2 metadata service.
If you are not running on EC2/Lambda, then the normal practice is to use IAM User credentials that have been created specifically for your application, with least possible privilege assigned.
You should never store the IAM User credentials in an application -- there have been many cases of people accidentally saving such credentials into GitHub, where bad actors grab them and gain access to the account.
You could store the credentials in a configuration file (eg via aws configure) and keep that file outside your codebase. However, there are still risks associated with storing the credentials in a file.
A safer option is to provide the credentials via environment variables, since they can be defined through a login profile and will never be included in the application code.
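As a small sketch, assuming AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are exported in your shell login profile, both the CLI and boto3 pick them up automatically and no credential ever appears in code:

    import boto3

    # boto3 reads AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY (and an
    # optional AWS_SESSION_TOKEN) from the environment automatically;
    # nothing credential-related lives in the application itself.
    s3 = boto3.client("s3")
    print([b["Name"] for b in s3.list_buckets()["Buckets"]])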
I don't think you can use service roles on your personal machine.
You can, however, use multi-factor authentication with the AWS CLI; a minimal sketch follows.
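As a hedged sketch of the MFA suggestion, assuming a hypothetical MFA device ARN, you can exchange a one-time code for temporary session credentials:

    import boto3

    # Hypothetical MFA device ARN; TokenCode is the current 6-digit
    # code from your device.
    sts = boto3.client("sts")
    resp = sts.get_session_token(
        SerialNumber="arn:aws:iam::123456789012:mfa/my-user",
        TokenCode="123456",
    )
    creds = resp["Credentials"]  # temporary key, secret, and session token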
You can use credentials on any machine, not just EC2.
Follow the steps as described by the documentation for your OS.
http://docs.aws.amazon.com/cli/latest/userguide/installing.html
Thus far I have been accessing my AWS resources using an Access Key Id and Secret Access Key. But every time I end my session I have to re-enter these keys manually by typing aws configure.
Is there an automated way, perhaps with an SSH private key on the local host?
Generally speaking, when you use aws configure and enter your credentials, those credentials are saved in the .aws/credentials file on your machine (exactly where depends on the OS). You shouldn't have to run aws configure again unless your credentials change.
Once that is done, one time, every further execution of an AWS CLI command should just use those stored credentials; you should never have to enter them more than once.
AWS provides a feature to handle access and secret keys in the form of profiles.
Say you have multiple accounts, or multiple regions.
You can set those up as profiles with the help of aws configure --profile <profilename>
Then, when performing operations in one particular account and region:
export AWS_PROFILE=<profilename>
(Older CLI versions read AWS_DEFAULT_PROFILE instead.) By doing this it is easy to work with multiple environments; a short sketch follows.
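A minimal sketch of working across profiles from boto3 (profile names hypothetical, created earlier with aws configure --profile):

    import boto3

    # Each profile maps to a different account/region combination.
    for profile in ["dev", "prod"]:
        session = boto3.Session(profile_name=profile)
        identity = session.client("sts").get_caller_identity()
        print(profile, identity["Account"])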
My code is running on an EC2 machine. I use some AWS services inside the code, so I'd like to fail at start-up if those services are unavailable.
For example, I need to be able to write a file to an S3 bucket. This happens only after my code has been running for several minutes, so it's painful to discover after a 5-minute delay that the IAM role wasn't configured correctly.
Is there a way to figure out if I have PutObject permission on a specific S3 bucket+prefix? I don't want to write dummy data to figure it out.
You can programmatically test permissions with the SimulatePrincipalPolicy API:
Simulate how a set of IAM policies attached to an IAM entity works with a list of API actions and AWS resources to determine the policies' effective permissions.
Check out the blog post below that introduces the API. From that post:
AWS Identity and Access Management (IAM) has added two new APIs that enable you to automate validation and auditing of permissions for your IAM users, groups, and roles. Using these two APIs, you can call the IAM policy simulator using the AWS CLI or any of the AWS SDKs. Use the new iam:SimulatePrincipalPolicy API to programmatically test your existing IAM policies, which allows you to verify that your policies have the intended effect and to identify which specific statement in a policy grants or denies access to a particular resource or action.
Source:
Introducing New APIs to Help Test Your Access Control Policies
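A minimal boto3 sketch of that API (role ARN, bucket, and prefix hypothetical); note the caller itself needs iam:SimulatePrincipalPolicy permission:

    import boto3

    iam = boto3.client("iam")
    # Simulate whether the role may PutObject under the given prefix,
    # without touching the bucket at all.
    resp = iam.simulate_principal_policy(
        PolicySourceArn="arn:aws:iam::123456789012:role/MyEc2Role",
        ActionNames=["s3:PutObject"],
        ResourceArns=["arn:aws:s3:::my-bucket/my-prefix/*"],
    )
    for result in resp["EvaluationResults"]:
        # EvalDecision is "allowed", "explicitDeny", or "implicitDeny".
        print(result["EvalActionName"], result["EvalDecision"])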
Have you tried the AWS IAM Policy Simulator? You can use it interactively, but it also has some API capabilities that you may be able to use to accomplish what you want.
http://docs.aws.amazon.com/IAM/latest/APIReference/API_SimulateCustomPolicy.html
Option 1: Upload an actual file when your app starts to see if it succeeds.
Option 2: Use dry runs.
Many AWS commands allow for "dry runs". This would let you execute your command at the start without actually doing anything.
The AWS CLI for S3 appears to support dry runs using the --dryrun option:
http://docs.aws.amazon.com/cli/latest/reference/s3/cp.html
The Amazon EC2 docs for "Dry Run" says the following:
Checks whether you have the required permissions for the action, without actually making the request. If you have the required permissions, the request returns DryRunOperation; otherwise, it returns UnauthorizedOperation.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/APIReference/CommonParameters.html
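As a sketch of that dry-run pattern in boto3 (note that most S3 data-plane calls have no DryRun parameter, so this applies to EC2-style APIs rather than PutObject itself):

    import boto3
    from botocore.exceptions import ClientError

    ec2 = boto3.client("ec2")
    try:
        # DryRun checks permissions without executing the call.
        ec2.describe_instances(DryRun=True)
    except ClientError as e:
        code = e.response["Error"]["Code"]
        if code == "DryRunOperation":
            print("permission granted")
        elif code == "UnauthorizedOperation":
            print("permission denied")
        else:
            raise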