Can one develop locally with AWS SAM using a role?

Using sam local start-api, it doesn't appear to correctly take on my exported $AWS_PROFILE from ~/.aws/credentials.
Nor does it seem to take on the role defined in the template.yml.
I have two questions. First, how do I confirm my PHP project has assumed a role? It's not clear from reading https://aws.amazon.com/sdk-for-php/
Secondly, is it even possible for local development to assume my $AWS_PROFILE? Or am I supposed to hard-code AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in my app?
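One way to answer the first question is to call the STS GetCallerIdentity API from inside the running code: it reports exactly which identity the credential chain resolved to. A minimal sketch using Python's boto3 for illustration (the AWS SDK for PHP exposes the equivalent getCallerIdentity call on its STS client):
import boto3

# Prints the account ID, user ID, and ARN of whatever identity the
# current credential chain resolves to. If a role was assumed, the
# ARN will contain "assumed-role/" followed by the role name.
print(boto3.client("sts").get_caller_identity())
For the second question, newer versions of the SAM CLI also accept a --profile option on sam local start-api, which passes that profile's credentials into the local Lambda container.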

Related

How can I specify Account ID in CDK when I'm authorized with temp credentials to the AWS account?

I have seen related questions for this issue.
But my problem in particular is that I am assuming a role to retrieve temporary credentials. That works just fine; however, when running cdk bootstrap, I receive: Need to perform AWS calls for account <account_id>, but no credentials have been configured.
I have tried cdk deploy as well, and I receive: Unable to resolve AWS account to use. It must be either configured when you define your CDK Stack, or through the environment
How can I specify the AWS account environment in CDK when using temp credentials or assuming a role?
You need to define the account you are attempting to bootstrap to.
For bootstrap you can pass it as part of the command: cdk bootstrap aws://ACCOUNT-NUMBER-1/REGION-1 aws://ACCOUNT-NUMBER-2/REGION-2 ... Do note that a given account/region only ever has to be bootstrapped ONCE in its lifetime, no matter how many CDK stacks are going to be deployed there.
AND your credentials need to be part of the default profile. If you are assuming credentials through some sort of enterprise script, please check with them that it stores them as part of the default profile. If not, run at least aws configure in your bash terminal and get your temporary assumed credentials in there.
For cdk deploy, you need to make sure that in your app.py you have the env defined:
import aws_cdk as cdk  # CDK v2

app = cdk.App()
env_EU = cdk.Environment(account="8373873873", region="eu-west-1")
env_USA = cdk.Environment(account="2383838383", region="us-west-2")
MyFirstStack(app, "first-stack-us", env=env_USA)  # MyFirstStack: your own stack class
MyFirstStack(app, "first-stack-eu", env=env_EU)
app.synth()
I would, however, recommend against hardcoding your account numbers ;)
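One way to avoid the hardcoding (a sketch, assuming you deploy through the CDK CLI, which populates the CDK_DEFAULT_ACCOUNT and CDK_DEFAULT_REGION environment variables from the active credentials):
import os
import aws_cdk as cdk

# Account and region are resolved from whatever profile/credentials the
# CDK CLI is running with, so nothing account-specific lives in the repo.
env = cdk.Environment(
    account=os.environ["CDK_DEFAULT_ACCOUNT"],
    region=os.environ["CDK_DEFAULT_REGION"],
)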
The profile name in the credentials file and the config file should be the same, like:
~/.aws/credentials:
[cdk]
aws_access_key_id = xxxxxxxxxxx
aws_secret_access_key = xxxxxxxxxxx
~/.aws/config:
[profile cdk]
region = us-east-1
(Note that in ~/.aws/config, named profiles take the "profile" prefix, and the region value is not quoted.)

How boto3 is configured on AWS lambda

As I understand it, the boto3 module has to be configured (to specify aws_access_key_id and aws_secret_access_key) before I can use it to access any AWS service.
According to the documentation, the three ways of configuring it are:
1. A Config object that's created and passed as the config parameter when creating a client
2. Environment variables
3. The ~/.aws/config file
However, in the examples I have read, there is no need to configure anything when writing code directly on AWS Lambda. Moreover, there are no environment variables and I could not find the config file. How is boto3 configured on AWS Lambda?
there are no environment variables
Yes, there are. They are listed here. Each function has access to many env variables, including:
AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN – The access keys obtained from the function's execution role.
So boto3 takes its credentials from these env variables, which are populated from the execution role that your function assumes.
When you create an AWS Lambda function, you select an IAM Role that the function will use.
Your code within the function will automatically be supplied with the credentials associated with that IAM Role. There is no need to provide any credentials. (Think of it as the same way that software running on an Amazon EC2 instance receives credentials from an IAM Role.)
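A minimal sketch of what that looks like in practice (the S3 call is illustrative and assumes the execution role grants s3:ListAllMyBuckets):
import os
import boto3

def handler(event, context):
    # The Lambda runtime injects these from the execution role;
    # boto3 picks them up without any explicit configuration.
    print("key id starts with:", os.environ["AWS_ACCESS_KEY_ID"][:4])
    s3 = boto3.client("s3")  # no credentials passed anywhere
    return [b["Name"] for b in s3.list_buckets()["Buckets"]]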

What credentials does Boto3 use when running in AWS CodeBuild?

So I've written a set of deployment scripts that run in CodeBuild and use Boto3 to deploy some dockerised apps to ECS. The problem I'm having is when I want to deploy to our separate production account.
If I'm running the CodeBuild project from the dev account but want to create resources in the production account, it's my understanding that I should set up a role in the target account, allow the codebuild role to assume it, then call:
sts_client.assume_role(
    RoleArn=arn_of_a_role_I_set_up,
    RoleSessionName=some_name
)
This returns an access key, secret key, and session token. This works and returns what I'd expect.
Then what I want to do is just assign those values to these environment variables:
AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN
This is because, according to the documentation here: http://boto3.readthedocs.io/en/latest/guide/configuration.html, Boto3 should fall back to those environment variables if you don't explicitly set credentials in the client or session methods.
However, when I do this the resources still get created in the same dev account.
Also, if I call printenv in the first part of my buildspec.yml, before my scripts attempt to set the environment variables, those AWS key/secret/token variables aren't present at all.
So when it's running in CodeBuild, where is Boto3 getting its credentials from?
Is the solution just going to be to pass in a key/secret/token to every boto3.client() call to be perfectly sure?
The credentials in the CodeBuild environment come from the service role associated with your CodeBuild project. Boto3 and botocore automatically use the "ContainerProvider" to grab those credentials in the CodeBuild environment.
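To deploy into the production account, one approach is to skip the environment variables entirely and hand the assumed-role credentials straight to the client. A sketch (the role ARN and session name are placeholders):
import boto3

sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111111111111:role/prod-deploy",  # placeholder ARN
    RoleSessionName="codebuild-deploy",
)["Credentials"]

# Bind a client to the assumed-role credentials explicitly, instead of
# relying on env variables that an existing session may have already read.
ecs = boto3.client(
    "ecs",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)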

How can I use environmental variables on AWS Lambda?

I'm writing an application which I want to run as an AWS Lambda function but which should also adhere to the Twelve-Factor App guidelines, in particular Part III (Config), which requires the use of environment variables for configuration.
However, I cannot find a way to set environment variables for AWS Lambda instances. Can anyone point me in the right direction?
If it isn't possible to use environment variables, can you please recommend a way to use environment variables for local development and have them transformed into a valid configuration system that the application code can access in AWS?
Thanks.
As of November 18, 2016, AWS Lambda supports environment variables.
Environment variables can be specified both using AWS console and AWS CLI. This is how you would create a Lambda with an LD_LIBRARY_PATH environment variable using AWS CLI:
aws lambda create-function \
    --region us-east-1 \
    --function-name myTestFunction \
    --zip-file fileb://path/package.zip \
    --role role-arn \
    --environment Variables={LD_LIBRARY_PATH=/usr/bin/test/lib64} \
    --handler index.handler \
    --runtime nodejs4.3 \
    --profile default
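For an existing function, the same setting can also be applied programmatically. A sketch using boto3 (the function name is the one from the example above):
import boto3

lam = boto3.client("lambda")
# Replaces the function's environment variables with the given map.
lam.update_function_configuration(
    FunctionName="myTestFunction",
    Environment={"Variables": {"LD_LIBRARY_PATH": "/usr/bin/test/lib64"}},
)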
Perhaps the 'custom environment variables' feature of node-lambda would address your concerns:
https://www.npmjs.com/package/node-lambda
https://github.com/motdotla/node-lambda
"AWS Lambda doesn't let you set environment variables for your function, but in many cases you will need to configure your function with secure values that you don't want to check into version control, for example a DB connection string or encryption key. Use the sample deploy.env file in combination with the --configFile flag to set values which will be prepended to your compiled Lambda function as process.env environment variables before it gets uploaded to S3."
There is no way to configure env variables for Lambda execution, since each invocation is disjoint and no state information is stored. However, there are ways to achieve what you want.
AWS credentials: you can avoid storing these in env variables. Instead, grant the privileges to your LambdaExec role. In fact, AWS recommends using roles instead of AWS credentials.
Database details: one suggestion is to store them in a well-known file in a private bucket, as sketched below. Lambda can download that file when it is invoked and read the contents, which can contain database details and other information. Since the bucket is private, others cannot access the file. The LambdaExec role needs IAM privileges to access the private bucket.
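A sketch of that private-bucket approach in Python (the bucket name and key are hypothetical):
import json
import boto3

s3 = boto3.client("s3")

# Fetched once per container, outside the handler, so warm invocations
# reuse the cached config instead of hitting S3 on every call.
_config = json.loads(
    s3.get_object(Bucket="my-private-config-bucket", Key="config.json")
    ["Body"].read()
)

def handler(event, context):
    db_host = _config["DB_HOST"]  # assumed key in the config file
    ...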
AWS just added support for configuration of Lambda functions via environment parameters.
Take a look here
We also had this requirement for our Lambda function, and we "solved" it by generating an env file on our CI platform (in our case CircleCI). This file gets included in the archive that gets deployed to Lambda.
Now in your code you can include this file and use the variables.
The script that I use to generate a JSON file from CircleCI environment variables is:
cat > dist/env.json <<EOL
{
  "CLIENT_ID": "$CLIENT_ID",
  "CLIENT_SECRET": "$CLIENT_SECRET",
  "SLACK_VERIFICATION_TOKEN": "$SLACK_VERIFICATION_TOKEN",
  "BRANCH": "$CIRCLE_BRANCH"
}
EOL
I like this approach because this way you don't have to include environment specific variables in your repository.
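For reference, loading such a file from a Python handler might look like this (a sketch; the original answer targets Node, and env.json is assumed to be packaged next to the handler module):
import json
import os

# Read the CI-generated env.json and expose its values through os.environ,
# without overriding anything the runtime already set.
_path = os.path.join(os.path.dirname(__file__), "env.json")
with open(_path) as f:
    for key, value in json.load(f).items():
        os.environ.setdefault(key, value)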
I know it has been a while, but I didn't see a solution that works from the AWS Lambda console.
STEPS:
1. In your AWS Lambda Function Code, look for "Environment variables", and click on "Edit";
2. For the "Key", type "LD_LIBRARY_PATH";
3. For the "Value", type "/opt/python/lib".
Step 3 assumes that you are using Python as your runtime environment, and also that your uploaded Layer has its "lib" folder in the following structure:
python/lib
This solution works for the error:
/lib/x86_64-linux-gnu/libz.so.1: version 'ZLIB_1.2.9' not found
assuming the correct library file is put in the "lib" folder and the environment variable is set as above.
PS: If you are unsure about the path in step 3, just look for the error in your console, and you will be able to see where your layer's "lib" folder is at runtime.

Packer amazon-ebs : AuthFailure

For some reason Packer fails to authenticate to AWS; using the plain aws client works, though, and my environment variables are correctly set:
AWS_ROLE_SESSION_NAME=...
AWS_SESSION_TOKEN=...
AWS_SECRET_ACCESS_KEY=...
AWS_ROLE=...
AWS_ACCESS_KEY_ID=...
AWS_CLI=...
AWS_ACCOUNT=...
AWS_SECURITY_TOKEN=...
I am authenticating using aws-saml, and Packer gives me the following:
Error querying AMI: AWS was not able to validate the provided access credentials (AuthFailure)
The problem lies in the way Packer authenticates with AWS.
Packer is written in Go and uses goamz for authentication. When creating a config using aws-saml, a couple of files are generated in ~/.aws: config and credentials.
It turns out this credentials file takes precedence over the environment variables, so if these credentials are incorrect and you rely on your environment variables, you will get the same error.
Since aws-saml needs aws_access_key_id and aws_secret_access_key to be defined, deleting the credentials file would not suffice in this case.
We had to copy these values into ~/.aws/config and delete the credentials file; then Packer was happy to use our environment variables.
A ticket has been raised on GitHub for goamz so that the AWS CLI and Packer can have the same authentication behavior; feel free to vote it up if you have this issue too: https://github.com/mitchellh/goamz/issues/171