Amazon Lambda - Alias specific environment variables - amazon-web-services

I am using AWS Lambda and use the Alias feature to point to the multiple code promotion stages we have (e.g. dev, qa, prod). I have set up the aliases with the same names as the stages. Most of these functions get triggered from S3 or SNS, which have a different instance for each stage.
How can I set up an alias-based environment variable so the function can get stage-specific info? The env vars set up in the base function (typically dev) get carried over to all aliases, which does not work for deployment.
I know how to use stage variables in API Gateway, but the current use case is not via the gateway.

I don't believe there is a way to achieve what you are trying to do. You would need to publish three versions of your Lambda function, each with the correct environment variables, and point each of your aliases to the correct version of the function.
You could also use the description fields to describe the versions before you point the aliases to them, to make the changes more understandable.
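As a rough sketch of that workflow with boto3 (the function name, alias names, and variable values below are placeholders, and the aliases are assumed to already exist):
import boto3

# Hypothetical function and per-stage settings; adjust to your setup.
FUNCTION_NAME = "my-function"
STAGE_VARS = {
    "dev":  {"DB_HOST": "dev.db.example.com"},
    "qa":   {"DB_HOST": "qa.db.example.com"},
    "prod": {"DB_HOST": "prod.db.example.com"},
}

client = boto3.client("lambda")

for alias, variables in STAGE_VARS.items():
    # Set the environment variables on $LATEST, then freeze them into a version.
    client.update_function_configuration(
        FunctionName=FUNCTION_NAME,
        Environment={"Variables": variables},
    )
    # Wait until the configuration update has been applied before publishing.
    client.get_waiter("function_updated").wait(FunctionName=FUNCTION_NAME)
    version = client.publish_version(FunctionName=FUNCTION_NAME)["Version"]
    # Point the stage alias at the newly published version.
    client.update_alias(
        FunctionName=FUNCTION_NAME,
        Name=alias,
        FunctionVersion=version,
    )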

I also find it interesting this isn't part of the plan for aliases; however, you do have the context available in your code - Context.InvokedFunctionArn
I think the MINDSET is that you may call, for example, an S3 bucket and use a prefix of TEST, DEV or PROD (based on the context's InvokedFunctionArn you know which alias invoked you). Given this context, and security based on the ARN, you can use a bucket policy / IAM to restrict your TEST ARN so it can only reach TEST S3 prefix files. That solves security between environments.
NOTE: I disagree with this model and think environment variables should be in the aliases and, if not specified in the alias, fall back to what is in the version.
Although this works, the complexity around extra conditions on prefixes, etc. is something that will often be misconfigured - having a separate bucket seems much safer and matches the Serverless Application Model documentation better.
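As a rough illustration of that pattern (the bucket name and prefix convention here are made up), a handler might derive its prefix from the alias like this:
import boto3

s3 = boto3.client("s3")
BUCKET = "my-shared-bucket"  # hypothetical shared bucket

def handler(event, context):
    # The alias (if any) is the last segment of the invoked ARN, e.g.
    # arn:aws:lambda:us-east-1:123456789012:function:my-fn:PROD
    alias = context.invoked_function_arn.split(":")[-1]
    prefix = f"{alias}/"  # e.g. "DEV/", "TEST/", "PROD/"

    # An IAM/bucket policy would restrict each alias ARN to its own prefix.
    response = s3.list_objects_v2(Bucket=BUCKET, Prefix=prefix)
    return [obj["Key"] for obj in response.get("Contents", [])]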

EDIT: I'll leave this answer here as it may help some people, but note that I found AspNetCoreStartupMode.FirstRequest caused longer cold starts by a few seconds.
I added some code to the LambdaEntryPoint to get the alias at startup, which means you can use it to load environment-specific config:
public class LambdaEntryPoint : Amazon.Lambda.AspNetCoreServer.APIGatewayProxyFunction
{
    private ILambdaContext LambdaContext;

    public LambdaEntryPoint()
        : base(AspNetCoreStartupMode.FirstRequest)
    {
    }

    [LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]
    public override async Task<APIGatewayProxyResponse> FunctionHandlerAsync(APIGatewayProxyRequest request, ILambdaContext lambdaContext)
    {
        // Capture the context so the alias is available when Init runs on the first request.
        LambdaContext = lambdaContext;
        return await base.FunctionHandlerAsync(request, lambdaContext);
    }

    protected override void Init(IWebHostBuilder builder)
    {
        // The alias (if any) is the last segment of the invoked function ARN.
        var alias = LambdaContext?.InvokedFunctionArn.Substring(LambdaContext.InvokedFunctionArn.LastIndexOf(":") + 1);

        // Do stuff based on the environment
        builder
            .ConfigureAppConfiguration((hostingContext, config) =>
            {
                config.AddJsonFile($"appsettings.{alias}.json", optional: true, reloadOnChange: true);
            })
            .UseStartup<Startup>();
    }
}
I've added a gist here: https://gist.github.com/secretorange/710375bc62bbc1f32e05822f55d4e8d3

The Lambda context has invoked_function_arn – the Amazon Resource Name (ARN) that's used to invoke the function, which indicates whether the invoker specified a version number or alias.
You can then use the alias to look up the variables in the Systems Manager Parameter Store, instead of using environment variables.
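A minimal sketch of that approach in Python (the /my-app/<alias>/... parameter naming convention is just an assumption):
import boto3

ssm = boto3.client("ssm")

def handler(event, context):
    # Alias (or version/function name) is the last segment of the invoked ARN.
    alias = context.invoked_function_arn.split(":")[-1]

    # Hypothetical parameter naming convention: /my-app/<alias>/db-host
    param = ssm.get_parameter(
        Name=f"/my-app/{alias}/db-host",
        WithDecryption=True,
    )
    db_host = param["Parameter"]["Value"]
    return {"db_host": db_host}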

Related

Access current environment name for a NextJS app running on amplify

I have added a few tables to DynamoDB using the amplify add storage command.
But the tables have a suffix that is the environment name (dev, prod, etc.).
How can I access the environment name in my NextJS backend so I can suffix the DynamoDB table names in my code?
Or is there another way to achieve what I want?
Amplify automatically creates DynamoDB tables (and also AppSync queries, etc.) to match your current Amplify environment. When you create a new environment (e.g. 'prod'), Amplify will automatically create duplicate tables that behave the same as your 'dev' tables. I'm guessing that in your case, you won't need to access environment variables.
If you are using AppSync/GraphQL to make calls, then you can use Amplify's built in dynamic env features here: https://docs.amplify.aws/cli-legacy/graphql-transformer/function/#usage
For example, you could set up a custom Lambda function to update your DynamoDB. You could then set up an AppSync call to that Lambda in your schema.graphql file.
There are some cases where you may need to access your environment variables. You can either set them up manually in .env.local, or, possibly easier, run a query in your NextJS JavaScript to determine the current domain:
const origin =
  typeof window !== "undefined" && window.location.origin
    ? window.location.origin
    : "";

console.log(origin); // "https://dev.<>.amplifyapp.com"
A better solution would be to follow this Amplify documentation, except I've tried it and it doesn't work.
This is what I get in the left nav panel. I've explored each item and found no sign of the described Environment Variables section:
It describes accessing/updating env vars here, but apparently you can only find/use this feature if you've connected your Amplify app to GitHub first. (It would have been nice if the docs had clarified this!)

Terraform `name` vs `self_link` in GCP

In GCP, when using Terraform, I see that I can use the name attribute as well as self_link. So I am wondering if there are cases where I must use one of them.
For example:
resource "google_compute_ssl_policy" "custom_ssl_policy" {
name = "my-ssl-policy"
profile = "MODERN"
min_tls_version = "TLS_1_1"
}
This object can then be referred to as:
ssl_policy = google_compute_ssl_policy.custom_ssl_policy.name
and
ssl_policy = google_compute_ssl_policy.custom_ssl_policy.self_link
I know that object.name returns the resource's name, and object.self_link returns the GCP resource's URI.
I have tried with several objects, and it works with both attributes, so I want to know whether this is trivial or there are situations where I should use one of them.
Here is the definition from the official documentation:
Nearly every GCP resource will have a name field. They are used as a short way to identify resources, and a resource's display name in the Cloud Console will be the one defined in the name field.
When linking resources in a Terraform config though, you'll primarily want to use a different field, the self_link of a resource. Like name, nearly every resource has a self_link. They look like:
https://www.googleapis.com/compute/v1/projects/foo/zones/us-central1-c/instances/terraform-instance
A resource's self_link is a unique reference to that resource. When linking two resources in Terraform, you can use Terraform interpolation to avoid typing out the self link!
Reference: https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/getting_started
For example, I can deploy two Cloud Functions with the same name in the same project but in different regions. In that case, if you had to reference both resources in Terraform code, you would be better off using the self_link, since it is a unique URI.

How to get bucket name from Bucket object in AWS CDK for python

I've created an S3 bucket for hosting my website. For that I've used the below code from the AWS CDK for Python docs:
self.bucket = s3.Bucket(
    self,
    "my-bucket-name",
    bucket_name="my-bucket-name",
    removal_policy=core.RemovalPolicy.DESTROY,
    website_index_document="index.html",
    public_read_access=True
)
For a reason, I want to send this bucket object as an argument to another object and get the bucket name from the argument. So, I've tried
self.bucket.bucket_name
self.bucket.bucket_arn
Neither seems to work; instead, the object returns ${Token[TOKEN.189]}. Could anyone guide me through this?
If the bucket name is hard-coded like in the example you pasted above, you can always externalize it to the CDK context file. As you've seen, when you access the bucket name from the Bucket construct, it creates a reference to it. That is so that, if you need it in another resource, CloudFormation will depend on the value from the Bucket resource by using its Ref/GetAtt capabilities, which guarantees that the bucket actually exists before it is used downstream.
If you don't care about that and just want the actual bucket name in the CDK app code, then put the value in the cdk context JSON file and use node.try_get_context to retrieve it wherever you need it.
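A small sketch of that approach, assuming a hypothetical context key my_bucket_name in cdk.json:
# cdk.json (excerpt):
# {
#   "context": {
#     "my_bucket_name": "my-bucket-name"
#   }
# }

from aws_cdk import core
from aws_cdk import aws_s3 as s3

class WebsiteStack(core.Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)

        # Plain string from the context file, not a CloudFormation token.
        bucket_name = self.node.try_get_context("my_bucket_name")

        self.bucket = s3.Bucket(
            self,
            "my-bucket-name",
            bucket_name=bucket_name,
            website_index_document="index.html",
            public_read_access=True,
        )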
There is a handy method called fromBucketName you can use if the bucket wasn't defined in your current app:
const bucket = aws_s3.Bucket.fromBucketName(this, 'bucketLabel', 'nameYouGaveBucket')
Otherwise, I believe you are looking for bucket.bucketName (TypeScript) or bucket.bucket_name (Python).
See the TypeScript docs and Python docs. This is also available in the CDK wrappers in other languages.
Note that there are similar methods for all sorts of CDK constructs, so you should refer often to the API docs, as there is a lot like this you can find easily there.
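For reference, the Python equivalents might look roughly like this (construct IDs and the bucket name are placeholders):
from aws_cdk import core
from aws_cdk import aws_s3 as s3

class MyStack(core.Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)

        # Bucket defined in this app: bucket_name resolves at deploy time,
        # so during synth it shows up as a token like ${Token[...]}.
        bucket = s3.Bucket(self, "MyBucket")
        core.CfnOutput(self, "BucketNameOut", value=bucket.bucket_name)

        # Bucket defined elsewhere, referenced by its known name:
        existing = s3.Bucket.from_bucket_name(self, "bucketLabel", "nameYouGaveBucket")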

Region isn't specified and can't be deduced from endpoint

I am getting issues since I implemented a custom domain on my AWS API and generated the Android SDK. Now when I make authenticated calls to my API, the SDK shows an error as follows:
Region isn't specified and can't be deduced from endpoint
What shall I do to remove this issue? I am sure it's due to the custom domain implementation, because if I remove the custom domain mapping and then generate the SDK, all calls work again.
Since you use a custom domain, the region isn't part of the endpoint, so you have to provide the region to the ApiClientFactory explicitly.
Something like:
ApiClientFactory f = new ApiClientFactory()
        .credentialsProvider(credentialsProvider)
        .region("us-east-1") // or whatever region you have :)
        .endpoint("https://myendpoint");

Can I parameterize AWS lambda functions differently for staging and release resources?

I have a Lambda function invoked by S3 put events, which in turn needs to process the objects and write to a database on RDS. I want to test things out in my staging stack, which means I have a separate bucket, different database endpoint on RDS, and separate IAM roles.
I know how to configure the lambda function's event source and IAM stuff manually (in the Console), and I've read about lambda aliases and versions, but I don't see any support for providing operational parameters (like the name of the destination database) on a per-alias basis. So when I make a change to the function, right now it looks like I need a separate copy of the function for staging and production, and I would have to keep them in sync manually. All of the logic in the code would be the same, and while I get the source bucket and key as a parameter to the function when it's invoked, I don't currently have a way to pass in the destination stuff.
For the destination DB information, I could have a switch statement in the function body that checks the originating S3 bucket and makes a decision, but I hate making every function have to keep that mapping internally. That wouldn't work for the DB credentials or IAM policies, though.
I suppose I could automate all or most of this with the SDK. Has anyone set something like this up for a continuous integration-style deployment with Lambda, or is there a simpler way to do it that I've missed?
I found a workaround using Lambda function aliases. Given the context object, I can get the invoked_function_arn property, which has the alias (if any) at the end.
arn_string = context.invoked_function_arn
alias = arn_string.split(':')[-1]
Then I just use the alias as an index into a dict in my config.py module, and I'm good to go.
config[alias].host
config[alias].database
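For illustration, such a config.py might look roughly like this (the hostnames, database names, and Config helper are placeholders, not the author's actual module):
# config.py -- hypothetical per-alias settings, keyed by Lambda alias name
from collections import namedtuple

Config = namedtuple("Config", ["host", "database"])

config = {
    "staging": Config(host="staging-db.example.com", database="myapp_staging"),
    "prod":    Config(host="prod-db.example.com",    database="myapp_prod"),
}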
One thing I'm not crazy about is that I have to invoke my function from an alias every time, and now I can't use aliases for any other purpose without affecting this scheme. It would be nice to have explicit support for user parameters in the context object.