PyCharm / IntelliJ IDEA run configuration: assume AWS role with MFA

In PyCharm I want to create a run/debug configuration for a project that must have access to AWS resources. But first the AWS user must assume the role that grants the permissions, and assuming that role requires MFA.
Currently I first run the CLI assume-role command, then copy-paste the temporary role credentials into environment variables in the run/debug configuration. But the duration of the assumed role is too short, so this process has to be repeated over and over, which is not very convenient.
So, what is the best way to configure PyCharm / IntelliJ IDEA in this case?

The best solution I found is:
1. Run the AWS CLI assume-role command in a terminal (see the assume-role description). After this command executes, environment variables with the temporary role credentials are created: AWS_ROLE_NAME, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN.
2. Get the values of these variables (for example with the export command).
3. Set these variables as user environment variables in the PyCharm / IDEA run/debug configuration.
The application will then run with the desired role permissions.
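Alternatively, a small helper script can fetch and print the temporary credentials for pasting into the run/debug configuration. A minimal boto3 sketch; the role ARN, MFA device ARN and session name below are hypothetical placeholders:

import boto3

# Hypothetical ARNs; replace with your own role and MFA device.
ROLE_ARN = "arn:aws:iam::123456789012:role/my-app-role"
MFA_SERIAL = "arn:aws:iam::123456789012:mfa/my-user"

sts = boto3.client("sts")
resp = sts.assume_role(
    RoleArn=ROLE_ARN,
    RoleSessionName="pycharm-session",
    SerialNumber=MFA_SERIAL,
    TokenCode=input("MFA code: "),
    DurationSeconds=3600,  # limited by the role's configured maximum session duration
)
creds = resp["Credentials"]

# Paste these into the run/debug configuration's environment variables.
print("AWS_ACCESS_KEY_ID=" + creds["AccessKeyId"])
print("AWS_SECRET_ACCESS_KEY=" + creds["SecretAccessKey"])
print("AWS_SESSION_TOKEN=" + creds["SessionToken"])

Raising the role's maximum session duration (configurable up to 12 hours on the role itself) also reduces how often the credentials need to be refreshed.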

Related

How can I specify Account ID in CDK when I'm authorized with temp credentials to the AWS account?

I have seen related questions for this issue, but my problem in particular is that I am assuming a role to retrieve temporary credentials. That works just fine; however, when running CDK Bootstrap, I receive: Need to perform AWS calls for account <account_id>, but no credentials have been configured.
I have tried CDK Deploy as well and I receive: Unable to resolve AWS account to use. It must be either configured when you define your CDK Stack, or through the environment.
How can I specify the AWS account environment in CDK when using temporary credentials or assuming a role?
You need to define the account you are attempting to bootstrap to.
For bootstrap you can pass it as part of the command: cdk bootstrap aws://ACCOUNT-NUMBER-1/REGION-1 aws://ACCOUNT-NUMBER-2/REGION-2 ... Do note that a given account/region only ever has to be bootstrapped ONCE in its lifetime, no matter how many CDK stacks are going to be deployed there.
AND your credentials need to be part of the default profile. If you are assuming credentials through some sort of enterprise script, please check with them that they store the credentials as part of the default profile. If not, run at least aws configure in your terminal and get your temporary assumed credentials in there.
For cdk deploy you need to make sure that you have the env defined in your app.py:
import aws_cdk as cdk  # aws-cdk-lib (CDK v2); MyFirstStack is defined elsewhere in the app
app = cdk.App()
env_EU = cdk.Environment(account="8373873873", region="eu-west-1")
env_USA = cdk.Environment(account="2383838383", region="us-west-2")
MyFirstStack(app, "first-stack-us", env=env_USA)
MyFirstStack(app, "first-stack-eu", env=env_EU)
app.synth()
I would however recommend against hardcoding your account numbers ;)
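For example, one way to avoid hardcoding is to read the target environment from the CDK_DEFAULT_ACCOUNT and CDK_DEFAULT_REGION environment variables, which the CDK CLI populates from the active credentials. A sketch, assuming CDK v2 (aws-cdk-lib):

import os
import aws_cdk as cdk

# Resolve the target environment from the CLI's current credentials/profile
# instead of hardcoding account numbers in source.
env = cdk.Environment(
    account=os.environ.get("CDK_DEFAULT_ACCOUNT"),
    region=os.environ.get("CDK_DEFAULT_REGION"),
)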
The profile name in the credentials file and the config file should be the same, like:
~/.aws/credentials:
[cdk]
aws_access_key_id = xxxxxxxxxxx
aws_secret_access_key = xxxxxxxxxxx
~/.aws/config (note that named profiles in the config file take the profile prefix):
[profile cdk]
region = us-east-1
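A quick way to confirm the profile resolves correctly before running cdk bootstrap --profile cdk is a boto3 sketch like the following (boto3 reads the same shared credentials and config files; the profile name "cdk" matches the files above):

import boto3

# Should print the region and account that the "cdk" profile resolves to.
session = boto3.Session(profile_name="cdk")
print(session.region_name)
print(session.client("sts").get_caller_identity()["Account"])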

Fargate tasks not calling AWS services without aws configure

I am unable to call AWS services (Secrets Manager and SNS) from Fargate tasks.
I want these services to be invoked from inside the Docker image, which is hosted on ECR. When I run the pipeline everything loads and runs correctly, except that when the script inside the Docker container is invoked it throws an error. The script makes a call to either Secrets Manager or SNS. The error thrown is:
Unable to locate credentials. You can configure credentials by running "aws configure".
If I do aws configure then the error goes away and everything works smoothly. But I do not want to store the AWS credentials anywhere.
When I open the task definition I can see two roles: pipeline-task and ecsTaskExecutionRole.
Although I have given full administrator rights to both of these roles, the pipeline still throws the error. Is there any place I am missing where I can assign roles/policies? I want to completely avoid using aws configure.
If the script with the issue is not a PID 1 process (the process used to stop and start the container), it will not automatically read the Task Role (pipeline-task-role). From your description, this sounds like the case.
Add this to your Dockerfile:
RUN echo 'export $(strings /proc/1/environ | grep AWS_CONTAINER_CREDENTIALS_RELATIVE_URI)' >> /root/.profile
The AWS SDK from the script should know where to pick up the credentials from after that.
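One way to check whether this is the problem is to look, from the affected script itself, for the credentials endpoint variable that ECS injects into the container environment when a task role is attached. A minimal sketch:

import os

# ECS injects this variable when a task role is attached; if it is missing from
# the script's own environment, the SDK cannot reach the task role credentials.
print(os.environ.get("AWS_CONTAINER_CREDENTIALS_RELATIVE_URI"))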
I don't know if my problem was the same as yours, but I also experienced this kind of problem: I had set the task role, yet the container didn't get the right permissions. After spending a few days on it, I discovered that if you set any of the AWS environment variables
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_DEFAULT_REGION
in the task definition, the credential provider chain will stop at the "static credentials" step, so the SDK your script is using will look for the remaining credentials in the ~/.aws/credentials file, and as it can't find them there it throws Unable to locate credentials.
If you want to know more about the credential provider chain, you can read about it at https://docs.aws.amazon.com/sdkref/latest/guide/standardized-credentials.html
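A small diagnostic along those lines, as a boto3 sketch (the provider names in the comment are what botocore typically reports):

import boto3

session = boto3.Session()
creds = session.get_credentials()
# 'container-role' means the task role was picked up from the container
# credentials endpoint; 'env' means static environment variables won the chain.
print(creds.method if creds else "no credentials found")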

AWS user with admin policy denied from CLI commands

SETUP
I created a new AWS user via the AWS web console and selected both console and programmatic/CLI access.
I have added the AdministratorAccess policy directly to it.
I have not enabled MFA for this user.
I have verified that my credentials file within the .aws directory contains the proper values for aws_access_key_id and aws_secret_access_key.
I have verified that my config file within the .aws directory does not contain any lines that would overwrite data for the profile.
I am verifying that I am using the correct profile info with aws configure list.
THE ISSUE
Executing aws ec2 describe-regions returns:
An error occurred (UnauthorizedOperation) when calling the DescribeRegions operation: You are not authorized to perform this operation.
The error is pretty straightforward, but I'm not sure what else I can do to authorize this user. I had a coworker follow the same steps and the CLI worked as expected for him.
I researched the steps from this S.O. post but am still scratching my head.
Your AWS CLI is getting credentials from somewhere else. See Configuration Settings and Precedence:
http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#config-settings-and-precedence
Make sure it is not getting the credentials from environment variables or from other locations. The AWS CLI looks for credentials and configuration settings in the following order:
1. Command line options – region, output format and profile can be specified as command options to override default settings.
2. Environment variables – AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, etc.
3. The AWS credentials file – ~/.aws/credentials on Linux, OS X, or Unix, or C:\Users\USERNAME\.aws\credentials on Windows. Can contain multiple named profiles in addition to a default profile.
4. The CLI configuration file – ~/.aws/config on Linux, OS X, or Unix, or C:\Users\USERNAME\.aws\config on Windows. Can contain a default profile, named profiles, and CLI-specific configuration parameters for each.
5. Instance profile credentials – these credentials can be used on EC2 instances with an assigned instance role.
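One quick way to see which identity the resolved credentials actually belong to is to call STS GetCallerIdentity; a boto3 sketch (the CLI equivalent is aws sts get-caller-identity):

import boto3

# Prints the ARN of the user/role the credentials resolve to, which should
# match the newly created admin user if the right profile is being used.
print(boto3.client("sts").get_caller_identity()["Arn"])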

Can I assume-role over an AWS EC2 instance profile with Terraform?

I am running into a situation where I am trying to run a Terraform (v0.11.7) script within Jenkins, within Kubernetes (k8s), within an EC2 instance.
The k8s worker is running on an EC2 instance with a particular AWS instance profile.
The Terraform script is configured via various environment variables, credentials and config files so that it can assume a specific role for its purposes.
The setup works fine on my MacBook, but sadly, in Jenkins/k8s/EC2, the EC2 instance profile prevails and the Terraform script fails because it requires its specific assumed role to complete its operations.
It actually fails on the terraform plan step, with TF_LOG output showing that the role is derived from the instance profile.
Has anyone run into this situation, and does anyone have any related guidance?
It looks like the special sauce for my particular use case is the Terraform AWS provider's skip_metadata_api_check flag, which apparently inhibits Terraform's behaviour of assuming the role of the EC2 instance profile and allows it to fall back to its normal assume-role mechanisms in my specific setup.
My specific setup involved the provider's profile argument in conjunction with the named profile feature, using AWS shared credentials and config files.
Another option for assume-role in Terraform is the provider's assume_role argument, but I preferred the more abstracted profile option.
I have also found that using the AWS_PROFILE environment variable in conjunction with the less well documented AWS_SDK_LOAD_CONFIG environment variable of the AWS Go SDK (used by Terraform) also works as an alternative, and allows omitting the profile argument altogether, which may be even more appealing to some.

What credentials does Boto3 use when running in AWS CodeBuild?

So I've written a set of deployment scripts that run in CodeBuild and use Boto3 to deploy some dockerised apps to ECS. The problem I'm having is when I want to deploy to our separate production account.
If I'm running the CodeBuild project from the dev account but want to create resources in the production account, it's my understanding that I should set up a role in the target account, allow the CodeBuild role to assume it, and then call:
import boto3

sts_client = boto3.client("sts")
response = sts_client.assume_role(
    RoleArn=arn_of_a_role_I_set_up,
    RoleSessionName=some_name,
)
This returns an access key, secret key, and session token. This works and returns what I'd expect.
Then what I want to do is just assign those values to these environment variables:
AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN
This is because, according to the documentation here: http://boto3.readthedocs.io/en/latest/guide/configuration.html, Boto3 should defer to those environment variables if you don't explicitly set credentials in the client or session methods.
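Presumably something along these lines (a sketch of what my scripts attempt; note that credentials already resolved by an existing session or client are not affected, only sessions and clients created afterwards pick up the new values):

import os

creds = response["Credentials"]  # from the assume_role call above
os.environ["AWS_ACCESS_KEY_ID"] = creds["AccessKeyId"]
os.environ["AWS_SECRET_ACCESS_KEY"] = creds["SecretAccessKey"]
os.environ["AWS_SESSION_TOKEN"] = creds["SessionToken"]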
However, when I do this the resources still get created in the same dev account.
Also, if I call printenv in the first part of my buildspec.yml before my scripts attempt to set the environment variables, those AWS key/secret/token variables aren't present at all.
So when it's running in CodeBuild, where is Boto3 getting its credentials from?
Is the solution just going to be to pass in a key/secret/token to every boto3.client() call to be perfectly sure?
The credentials in the CodeBuild environment are from the service role associated with your CodeBuild project. Boto and botocore will use the "ContainerProvider" automatically to grab those credentials in the CodeBuild environment.
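If relying on environment variables proves fragile, a more explicit alternative is to build a dedicated boto3 Session from the assume_role response and create all production-account clients from it. A sketch; the role ARN, session name and ECS client are placeholders for whatever your deployment scripts need:

import boto3

sts = boto3.client("sts")  # uses the CodeBuild service role credentials
creds = sts.assume_role(
    RoleArn="arn:aws:iam::<prod-account-id>:role/deploy-role",  # placeholder ARN
    RoleSessionName="codebuild-prod-deploy",
)["Credentials"]

# Every client created from this session calls the production account.
prod = boto3.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
ecs = prod.client("ecs")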