My AWS Lambda function is written in Java. It reads data from DynamoDB using static credentials, like this:
new BasicAWSCredentials(ACCESSKEY, SECRETKEY)
However, when I define my services in AWS CloudFormation, I cannot find any way to change this access key and secret key. What is the best way to manage these credentials, given that they are specific to each account and embedded in the Java code?
Although you could pass credentials at runtime through the context or a downloaded file, you should not use explicit hard-coded credentials: they are difficult to protect and rotate securely.
It is easier and safer to use roles, as described in the Lambda permissions model: http://docs.aws.amazon.com/lambda/latest/dg/intro-permission-model.html
Use explicit credentials only outside AWS (on your dev machine, for example), and even then do not hard-code them; use environment variables or CLI profiles.
Related
I have a cross-account IAM question about running in a Lambda function. (I know people may suggest STS AssumeRole, but this package really isn't worth protecting, and I don't want to go through linking the accounts.)
Account “A”.
S3 – package “s3://foo/foo”
IAM credentials “pkg_creds” for bucket "s3://foo"
Account “B”
Lambda function “gogo” runs
In this Lambda function, it attempts to use boto3 and the pkg_creds to download package “s3://foo/foo”, but it fails with this error:
**The provided token is malformed or otherwise invalid.**
Lambda's filesystem is read-only, but I believe boto3 will not write credentials to ~/.aws if I'm using boto3.client (not a session). I also set AWS_CONFIG_FILE to /tmp just in case; it still fails. I suspect what I'm proposing isn't possible because Lambda has immutable AWS credentials whose scope you can't change, even with credentials explicitly given to boto3.
Let me know your thoughts. I may try to do the job with Fargate, but a Lambda function is easier to maintain and deploy.
Thanks in advance!
Lambda isn't using a ~/.aws config file at all, it is using environment variables by default. There are many ways to configure AWS credentials in boto3. You should be able to create a new boto3 client in your Lambda function with explicit AWS credentials like so:
client = boto3.client(
    's3',
    aws_access_key_id=ACCOUNT_A_ACCESS_KEY,
    aws_secret_access_key=ACCOUNT_A_SECRET_KEY
)
And pass ACCOUNT_A_ACCESS_KEY and ACCOUNT_A_SECRET_KEY as environment variables to the function.
User error. I can verify that boto3 in a Lambda function can use credentials outside of its scope.
After troubleshooting further: the issue was that I was reading the environment variable SESSION, which is set on the Lambda function but not on my EC2 instance, so I was always passing the Lambda session token, which seems to override the explicit key and secret.
I'm writing a Lambda function in Python 3.8. The function connects to DynamoDB using boto3:
db = boto3.resource('dynamodb', region_name='foo', aws_access_key_id='foo', aws_secret_access_key='foo')
That is what I have while developing and testing the function on my local machine. When I deploy to Lambda, I can simply remove the credentials, and my function will connect to DynamoDB as long as the proper IAM roles and policies are in place. For example, this code works fine when deployed to Lambda:
db = boto3.resource('dynamodb', region_name='foo')
The question is: how can I manage this when pushing code to Lambda? I am using AWS SAM to deploy. Right now, once I'm done developing a function, I manually remove the aws_access_key_id='foo' and aws_secret_access_key='foo' parts and then deploy with SAM.
There must be a better way to do this. Could I handle it in my IDE instead? I'm using PyCharm. Would that be a better approach? If not, what else?
You should never put credentials in the code like that.
When running the code locally, use the AWS CLI's aws configure command to store local credentials in the ~/.aws/credentials file. The AWS SDKs automatically look in that file to obtain credentials.
In SAM you can invoke your function locally using sam local invoke or sam local start-lambda.
Both of them take the --profile parameter:
The AWS credentials profile to use.
This ensures that your local Lambda environment executes with the correct credentials without hard-coding them. You can then test your code without the modifications that would otherwise be needed to strip out a hard-coded key ID and secret key.
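For reference, aws configure writes INI-style profiles that --profile selects from; a small stdlib sketch of that layout (the keys below are placeholders, not real credentials):

```python
import configparser

# Shape of the file `aws configure` writes to ~/.aws/credentials.
sample = """\
[default]
aws_access_key_id = AKIADEFAULTEXAMPLE
aws_secret_access_key = defaultsecret

[dev]
aws_access_key_id = AKIADEVEXAMPLE
aws_secret_access_key = devsecret
"""

config = configparser.ConfigParser()
config.read_string(sample)
profiles = config.sections()  # the profile names --profile can select
```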
You can use environment variables.
Environment variables can be configured in PyCharm as well as in AWS Lambda and AWS SAM.
As stated in the Lambda best practices: "Use environment variables to pass operational parameters to your function. For example, if you are writing to an Amazon S3 bucket, instead of hard-coding the bucket name you are writing to, configure the bucket name as an environment variable."
You can also use an environment variable to specify which environment is being used, which can then be used to explicitly determine whether credentials are necessary.
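One way to sketch that idea (AWS_EXECUTION_ENV is set automatically inside the Lambda runtime; the LOCAL_AWS_KEY/LOCAL_AWS_SECRET variable names are made up for this example):

```python
import os


def credential_kwargs(env=None):
    """Extra kwargs for boto3: empty inside Lambda (the execution role
    supplies credentials), explicit keys when running locally."""
    env = os.environ if env is None else env
    if "AWS_EXECUTION_ENV" in env:  # present inside the Lambda runtime
        return {}
    return {
        "aws_access_key_id": env["LOCAL_AWS_KEY"],
        "aws_secret_access_key": env["LOCAL_AWS_SECRET"],
    }

# db = boto3.resource("dynamodb", region_name="us-east-1", **credential_kwargs())
```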
The code below creates AWS credentials with an explicitly supplied access key and secret key.
AWSCredentials credentials = new BasicAWSCredentials(
"<AWS accesskey>",
"<AWS secretkey>"
);
The issue with this approach is that I cannot use it in production.
What is the equivalent Java code that obtains credentials automatically from the AWS credential provider chain and creates a credentials object or an EC2 client?
There are several alternatives for storing and recovering your credentials in production. Check the official AWS documentation on 'Using the Default Credential Provider Chain'. Basically, you can use any of these:
Environment variables.
Java system properties.
The default credential profiles file.
Amazon ECS container credentials.
Instance profile credentials.
Depending on which one you choose, the code differs; the link above has guidelines for each of them.
My use case is as follows:
I need to push some data into an AWS SQS queue using the Java SDK with the help of an IAM role (not a credential provider implementation).
Is there any way to do that?
Thanks for help in advance.
It's been a while, but this is no longer the case: it is now possible to use an assumed role with the Java SDK as a user. You can configure credentials in your .aws/credentials file as follows:
[useraccount]
aws_access_key_id=<key>
aws_secret_access_key=<secret>
[somerole]
role_arn=<the ARN of the role you want to assume>
source_profile=useraccount
Then, when you launch, set an environment variable: AWS_PROFILE=somerole
The SDK will use the credentials defined in useraccount to call assumeRole with the role_arn you provided. You'll of course need to be sure that the user with those credentials has the permissions to assume that role.
Note that if you're not including the full Java SDK in your project (i.e. you're including just the libraries for the services you need), you also need to include the aws-java-sdk-sts library in your classpath for this to work.
It is also possible to do all of this programmatically using STSAssumeRoleSessionCredentialsProvider, but this would require you to directly configure all of the services so it might not be as convenient as the profile approach which should just work for all services.
You can use role-based authentication only on EC2 instances, ECS containers, and Lambda functions. It is not possible to use roles locally or on on-premises servers.
DefaultAWSCredentialsProviderChain automatically picks up the EC2 instance role if it can't find credentials via any of the other methods. You can also create a custom AWSCredentialsProviderChain containing only an InstanceProfileCredentialsProvider, like this:
AWSCredentialsProviderChain myCustomChain = new AWSCredentialsProviderChain(InstanceProfileCredentialsProvider.getInstance());
For more info: https://docs.aws.amazon.com/java-sdk/latest/developer-guide/java-dg-roles.html
I have a lambda function configured through the API Gateway that is supposed to hit an external API via Node (ex: Twilio). I don't want to store the credentials for the functions right in the lambda function though. Is there a better place to set them?
The functionality to do this was probably added to Lambda after this question was posted.
The AWS documentation recommends using environment variables to store sensitive information. They are encrypted by default using the AWS-managed key (aws/lambda) when you create a Lambda function in the AWS Lambda console.
This leverages AWS KMS and allows you to either use the AWS-managed key or select your own KMS key (by selecting Enable encryption helpers); you need to have created that key in advance.
From AWS Doc 1:
"When you create or update Lambda functions that use environment variables, AWS Lambda encrypts them using the AWS Key Management Service. When your Lambda function is invoked, those values are decrypted and made available to the Lambda code.
The first time you create or update Lambda functions that use environment variables in a region, a default service key is created for you automatically within AWS KMS. This key is used to encrypt environment variables. However, should you wish to use encryption helpers and use KMS to encrypt environment variables after your Lambda function is created, then you must create your own AWS KMS key and choose it instead of the default key. The default key will give errors when chosen."
The default key certainly does 'give errors when chosen' - which makes me wonder why they put it into the dropdown at all.
Sources:
AWS Doc 1: Introduction: Building Lambda Functions » Environment Variables
AWS Doc 2: Create a Lambda Function Using Environment Variables To Store Sensitive Information
While I haven't done it myself yet, you should be able to leverage AWS KMS to encrypt/decrypt API keys from within the function, granting the Lambda role access to the KMS keys.
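A minimal sketch of that idea, assuming the value was encrypted with KMS and stored base64-encoded in an environment variable (in a real function you would pass kms_client=boto3.client("kms")):

```python
import base64
import os


def decrypt_env(var_name, kms_client):
    """Decrypt a base64-encoded, KMS-encrypted environment variable and
    return the plaintext string."""
    blob = base64.b64decode(os.environ[var_name])
    response = kms_client.decrypt(CiphertextBlob=blob)
    return response["Plaintext"].decode("utf-8")
```

The function's execution role needs kms:Decrypt on the key that was used to encrypt the variable.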
Any storage or database service on AWS can solve your problem here. The question is: what are you already using in your current AWS Lambda function? Based on that, consider the following:
If you need it fast and cost is not an issue, use Amazon DynamoDB
If you need it fast and mind the cost, use Amazon ElastiCache (Redis or Memcache)
If you are already using some relational database, use Amazon RDS
If you are not using anything and don't need it fast, use Amazon S3
In any case, you need to create a security policy (either an IAM role or an S3 bucket policy) to allow exclusive access between Lambda and your choice of storage/database.
Note: Amazon VPC support for AWS Lambda is around the corner, so whichever solution you choose, make sure it's in the same VPC as your Lambda function (learn more at https://connect.awswebcasts.com/vpclambdafeb2016/event/event_info.html).
I assume you're not referring to AWS credentials, but rather the external API credentials?
I don't know that it's a great place, but I have found posts on the AWS forums where people put credentials on S3.
It's not your specific use-case, but check out this forum thread.
https://forums.aws.amazon.com/thread.jspa?messageID=686261
If you put the credentials on S3, just make sure that you secure it properly. Consider making it available only to a specific IAM role that is only assigned to that Lambda function.
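A hedged sketch of that pattern (the bucket and key names are placeholders; in Lambda you would pass s3_client=boto3.client("s3")):

```python
import json


def load_api_credentials(s3_client, bucket, key):
    """Fetch a small JSON credentials file from S3. Scope the function's
    role to s3:GetObject on this one object only."""
    body = s3_client.get_object(Bucket=bucket, Key=key)["Body"].read()
    return json.loads(body)
```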
As of 2022, we have AWS Secrets Manager for storing sensitive data such as database credentials, API tokens, and auth keys.