Switch profile/account/role in AWS Lambda - amazon-web-services

I am trying to connect to an AWS AppSync API, located in a custom profile/account/role under the root/default account, from an AWS Lambda function in the root/default account.
The Python code below works fine locally, because I have configured "custom_profile" in my local .aws/config file:
session = boto3.Session(profile_name='custom_profile')
client = session.client('appsync', region_name='<region>')
But is there any way to make this code run in the AWS Lambda function in the root account? How can AWS Lambda understand what "custom_profile" is? Where and how can I map "custom_profile" to the respective role ARN?
I saw a probable solution to this problem at this link, but I have not tried it.
Has anyone faced a similar issue and knows of an easier solution than the one in the link?

The link that you've referenced is the way to go. The permissions an AWS Lambda function has are defined in the execution role attached to that function, and those can include permission to assume a role in another account.
You can then use the Security Token Service (STS for short) and execute the AssumeRole action. This returns temporary credentials that you can use to authenticate your calls to the other account.
You will also have to configure the account you're executing the Lambda function in as a trusted entity on the role you want to assume in the second account.
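For example, here is a minimal boto3 sketch of that flow; the role ARN and session name are placeholders you would replace with your own:

import boto3

# Assume the role that lives in the other account (placeholder ARN and session name).
sts = boto3.client('sts')
assumed = sts.assume_role(
    RoleArn='arn:aws:iam::123456789012:role/custom-profile-role',
    RoleSessionName='lambda-cross-account'
)
creds = assumed['Credentials']

# Use the temporary credentials to call AppSync in the other account.
client = boto3.client(
    'appsync',
    region_name='<region>',
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken']
)

The Lambda execution role needs sts:AssumeRole on that role ARN, and the role's trust policy must allow the Lambda's account (or role) as a principal.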

Related

Access CloudWatch Evidently Cross-Account?

I have a CloudWatch Evidently project in one account and a Lambda in another account that wants to call EvaluateFeature on the cross-account Evidently project. Is this possible?
I've been using this client https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/evidently/EvidentlyClient.html
and even though I give it the project ARN of the target account's project (in the project parameter of EvaluateFeatureRequest), it seems to re-route the EvaluateFeature request into the account the Lambda resides in. Not sure if anyone has done something similar before.
I have a role in the Evidently account that can be assumed by the Lambda service principal, and the sts:AssumeRole permission is also part of the Lambda's role policy. I just can't get EvaluateFeature to direct itself to the correct ARN/project.
Thanks.

Cannot access AWS Lambda console with the error saying 'You do not have sufficient permission. Access denied.'

Previously, I had been able to deploy my Lambda functions without any problems on my own AWS account. Now I need to deploy them to a different AWS account, where my IAM user has the AdministratorAccess permission.
I've set up a role and policies for invoking Lambdas the same way I did for my own account. Before I deployed my code with Terraform, I checked the AWS Lambda console page, and this error popped up.
Any idea why I still don't have enough permissions to access Lambda even with the AdministratorAccess policy attached to my user? Do I still need to add more policies to my user in order to access Lambda?
I have faced the same issue. You need to contact AWS to unlock your access, as your account has been locked due to potentially dangerous activity.
I recommend you enable MFA and use an IAM user to log in to the AWS console instead of the root user.
AdministratorAccess is definitely enough to view the Lambda console.
Do you have CLI access set up for this user? You could try running the list-functions CLI command to confirm that your user is set up as expected, as this uses the same API call that the web console is performing for you.
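If the CLI isn't handy, a rough boto3 equivalent of that check might look like the sketch below; the region is a placeholder, and it makes the same ListFunctions call the console does:

import boto3

# An AccessDenied error here points at a permissions/account problem rather than a console glitch.
lambda_client = boto3.client('lambda', region_name='us-east-1')
print(lambda_client.list_functions()['Functions'])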
I faced the same issue. After checking this post I checked my mailbox: AWS had asked me to verify my account by sending a picture of a utility bill and address information. I did it and everything is back to normal now.

Cross Account AWS Lambda with IAM credentials

I have a cross-account IAM question about running in a Lambda function. (I know people may use STS AssumeRole, but this package really isn't worth protecting, and I don't want to go through linking the accounts.)
Account "A":
S3 package "s3://foo/foo"
IAM credentials "pkg_creds" for bucket "s3://foo"
Account "B":
Lambda function "gogo" runs.
The Lambda function attempts to use boto3 and the pkg_creds credentials to download the package "s3://foo/foo", but it fails with this error:
The provided token is malformed or otherwise invalid.
Lambda's filesystem is read-only (apart from /tmp), but I believe boto3 will not write credentials to ~/.aws if I'm using boto3.client (not a session). However, I also set AWS_CONFIG_FILE to /tmp just in case. It still fails. I suspect what I'm proposing isn't possible because Lambda has immutable AWS credentials whose scope can't be changed, even with credentials explicitly given to boto3.
Let me know your thoughts. I may try to do the job with Fargate, but a Lambda function is easier to maintain and deploy.
Thanks in advance!
Lambda isn't using a ~/.aws config file at all; it supplies credentials through environment variables by default. There are many ways to configure AWS credentials in boto3. You should be able to create a new boto3 client in your Lambda function with explicit AWS credentials like so:
client = boto3.client(
    's3',
    aws_access_key_id=ACCOUNT_A_ACCESS_KEY,
    aws_secret_access_key=ACCOUNT_A_SECRET_KEY
)
Then pass ACCOUNT_A_ACCESS_KEY and ACCOUNT_A_SECRET_KEY to the function as environment variables.
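As a minimal sketch, assuming those two environment variable names and the bucket/key from the question, the handler could look something like this:

import os
import boto3

def lambda_handler(event, context):
    # Explicit credentials for account A, read from the function's environment variables.
    s3 = boto3.client(
        's3',
        aws_access_key_id=os.environ['ACCOUNT_A_ACCESS_KEY'],
        aws_secret_access_key=os.environ['ACCOUNT_A_SECRET_KEY']
    )
    # /tmp is the only writable path in Lambda.
    s3.download_file('foo', 'foo', '/tmp/foo')
    return 'downloaded'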
User error. I can verify that boto3 in a Lambda function can use credentials outside of its own scope.
After troubleshooting more, the issue was that I was picking up the session token environment variable, which is set on the Lambda function but not on my EC2 instance, so I was always passing the Lambda session token, which seems to override the explicit key and secret.

How to use IAM role with AWS Java SDK

My use case is as follows:
I need to push some data into an AWS SQS queue using the Java SDK, with the help of an IAM role (not a credential provider implementation).
Is there any way to do that?
Thanks for help in advance.
It's been a while, but this is no longer the case: it is now possible to use AssumeRole with the Java SDK as an IAM user. You can configure credentials in your .aws/credentials file as follows:
[useraccount]
aws_access_key_id=<key>
aws_secret_access_key=<secret>
[somerole]
role_arn=<the ARN of the role you want to assume>
source_profile=useraccount
Then, when you launch, set an environment variable: AWS_PROFILE=somerole
The SDK will use the credentials defined in useraccount to call assumeRole with the role_arn you provided. You'll of course need to be sure that the user with those credentials has the permissions to assume that role.
Note that if you're not including the full Java SDK in your project (i.e. you're including just the libraries for the services you need), you also need to include the aws-java-sdk-sts library in your classpath for this to work.
It is also possible to do all of this programmatically using STSAssumeRoleSessionCredentialsProvider, but this would require you to directly configure all of the services, so it might not be as convenient as the profile approach, which should just work for all services.
You can use role-based authentication only on EC2 instances, ECS containers, and Lambda functions. It is not possible to use it locally or on on-premises servers.
DefaultAWSCredentialsProviderChain will automatically pick up the EC2 instance role if it can't find credentials via any of the other methods. You can also create a custom AWSCredentialsProviderChain object that contains only an instance of InstanceProfileCredentialsProvider, like this:
AWSCredentialsProviderChain myCustomChain = new AWSCredentialsProviderChain(new InstanceProfileCredentialsProvider());
For more info: https://docs.aws.amazon.com/java-sdk/latest/developer-guide/java-dg-roles.html

How can I query my IAM capabilities?

My code is running on an EC2 machine. I use some AWS services inside the code, so I'd like to fail on start-up if those services are unavailable.
For example, I need to be able to write a file to an S3 bucket. This happens after my code's been running for several minutes, so it's painful to discover that the IAM role wasn't configured correctly only after a 5 minute delay.
Is there a way to figure out if I have PutObject permission on a specific S3 bucket+prefix? I don't want to write dummy data to figure it out.
You can programmatically test permissions with the SimulatePrincipalPolicy API:
Simulate how a set of IAM policies attached to an IAM entity works with a list of API actions and AWS resources to determine the policies' effective permissions.
Check out the blog post below that introduces the API. From that post:
AWS Identity and Access Management (IAM) has added two new APIs that enable you to automate validation and auditing of permissions for your IAM users, groups, and roles. Using these two APIs, you can call the IAM policy simulator using the AWS CLI or any of the AWS SDKs. Use the new iam:SimulatePrincipalPolicy API to programmatically test your existing IAM policies, which allows you to verify that your policies have the intended effect and to identify which specific statement in a policy grants or denies access to a particular resource or action.
Source:
Introducing New APIs to Help Test Your Access Control Policies
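A minimal boto3 sketch of that check, with placeholder role and bucket ARNs, could look like this (the caller needs iam:SimulatePrincipalPolicy permission):

import boto3

iam = boto3.client('iam')
result = iam.simulate_principal_policy(
    PolicySourceArn='arn:aws:iam::123456789012:role/my-ec2-role',
    ActionNames=['s3:PutObject'],
    ResourceArns=['arn:aws:s3:::my-bucket/my-prefix/*']
)
for evaluation in result['EvaluationResults']:
    # EvalDecision is 'allowed', 'explicitDeny', or 'implicitDeny'.
    if evaluation['EvalDecision'] != 'allowed':
        raise RuntimeError('Missing permission: ' + evaluation['EvalActionName'])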
Have you tried the AWS IAM Policy Simulator? You can use it interactively, but it also has some API capabilities that you may be able to use to accomplish what you want.
http://docs.aws.amazon.com/IAM/latest/APIReference/API_SimulateCustomPolicy.html
Option 1: Upload an actual file when your app starts to see if it succeeds.
Option 2: Use dry runs.
Many AWS commands allow for "dry runs". This would let you execute your command at the start without actually doing anything.
The AWS CLI for S3 appears to support dry runs using the --dryrun option:
http://docs.aws.amazon.com/cli/latest/reference/s3/cp.html
The Amazon EC2 docs for "Dry Run" says the following:
Checks whether you have the required permissions for the action, without actually making the request. If you have the required permissions, the request returns DryRunOperation; otherwise, it returns UnauthorizedOperation.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/APIReference/CommonParameters.html
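As an illustration of the dry-run pattern with boto3 (this works for EC2 actions that accept DryRun; the instance ID below is a placeholder, and S3's PutObject itself does not take a DryRun parameter):

import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client('ec2')
try:
    ec2.start_instances(InstanceIds=['i-0123456789abcdef0'], DryRun=True)
except ClientError as e:
    if e.response['Error']['Code'] == 'DryRunOperation':
        print('Permission check passed; nothing was actually started')
    elif e.response['Error']['Code'] == 'UnauthorizedOperation':
        print('Missing ec2:StartInstances permission')
    else:
        raise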