Make AWS IoT rule trigger Lambda function using specific alias

I'm making updates to a Lambda function that is triggered by an AWS IoT rule. In order to be able to quickly revert, use weighted aliases, etc., I would like to update this rule to point to a specific production alias of the function so that I can make updates to $LATEST and test them out. If everything goes well, I would make a new version and make the production alias point to that.
But when I try to update the action to trigger the alias instead of $LATEST, the console only offers a choice of published versions or $LATEST.
Any ideas? Google yields nothing.

I had to add the trigger from the other side: add the trigger to the production alias, then delete the link to default/$LATEST under the Lambda function itself.

As far as I know:
You can define an alias target via the CLI. I don't think it is possible via the web interface.
Have a look at the example payload below.
{
  "topicRulePayload": {
    "sql": "SELECT * FROM 'some/topic'",
    "ruleDisabled": false,
    "awsIotSqlVersion": "2016-03-23",
    "actions": [
      {
        "lambda": {
          "functionArn": "arn:aws:lambda:us-east-1:123456789012:function:${topic()}"
        }
      }
    ]
  }
}
You can append an alias to the function ARN: "functionArn": "arn:aws:lambda:us-east-1:123456789012:function:${topic()}:MY_ALIAS"
But you have to set this up via the CLI.
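For instance, here is a rough sketch with the AWS CLI (the rule name, topic, function name, and alias are placeholders, not from the original question); note that AWS IoT also needs permission to invoke the alias-qualified ARN:

# Create the rule, pointing the action at an alias-qualified function ARN
aws iot create-topic-rule \
  --rule-name my_rule \
  --topic-rule-payload '{
    "sql": "SELECT * FROM '\''some/topic'\''",
    "ruleDisabled": false,
    "awsIotSqlVersion": "2016-03-23",
    "actions": [
      {
        "lambda": {
          "functionArn": "arn:aws:lambda:us-east-1:123456789012:function:my-function:PROD"
        }
      }
    ]
  }'

# Grant AWS IoT permission to invoke that specific alias
aws lambda add-permission \
  --function-name my-function \
  --qualifier PROD \
  --statement-id iot-rule-invoke \
  --action lambda:InvokeFunction \
  --principal iot.amazonaws.com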

Related

AWS Eventbridge: scheduling a CodeBuild job with environment variable overrides

When I launch an AWS CodeBuild project from the web interface, I can choose "Start Build" to start the build project with its normal configuration. Alternatively I can choose "Start build with overrides", which lets me specify, amongst others, custom environment variables for the build job.
From AWS EventBridge (Events -> Rules -> Create rule), I can create a scheduled event to trigger the CodeBuild job, and this works. How, though, do I specify environment variable overrides for a scheduled CodeBuild job in EventBridge?
I presume it's possible somehow by using "additional settings" -> "Configure target input", which allows specification and templating of event JSON. I'm not sure, though, how to work out, beyond blind trial and error, what this JSON should look like (to override environment variables, in my case). In other words, where do I find the JSON spec for events sent to CodeBuild?
There are a number of similar questions here, e.g. AWS EventBridge scheduled events with custom details? and AWS Cloudwatch (EventBridge) Event Rule for AWS Batch with Environment Variables, but I can't find the specifics for CodeBuild jobs. I've tried the CDK docs at e.g. https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_events_targets.CodeBuildProjectProps.html, but am little wiser. I've also tried capturing the events output by EventBridge, to see what the event WITHOUT overrides looks like, but have not managed to. Submitting the below (and a few variations, e.g. as "detail") as an "input constant" triggers the job, but the environment variables do not take effect:
{
  "ContainerOverrides": {
    "Environment": [{
      "Name": "SOME_VAR",
      "Value": "override value"
    }]
  }
}
There is also the CodeBuild API reference at https://docs.aws.amazon.com/codebuild/latest/APIReference/API_StartBuild.html#API_StartBuild_RequestSyntax. EDIT: this seems to be the correct reference (as per my answer below).
The rule target's event input template should match the structure of the CodeBuild API StartBuild action input. In the StartBuild action, environment variable overrides have the key "environmentVariablesOverride", whose value is an array of EnvironmentVariable objects.
Here is a sample target input transformer with one constant env var and another whose value is taken from the event payload's detail-type:
Input path:
{ "detail-type": "$.detail-type" }
Input template:
{
  "environmentVariablesOverride": [
    { "name": "MY_VAR", "type": "PLAINTEXT", "value": "foo" },
    { "name": "MY_DYNAMIC_VAR", "type": "PLAINTEXT", "value": <detail-type> }
  ]
}
I got this to work using an "input constant" like this:
{
  "environmentVariablesOverride": [{
    "name": "SOME_VAR",
    "type": "PLAINTEXT",
    "value": "override value"
  }]
}
In other words, you can ignore the fields in the sample events in EventBridge, and the overrides do not need to be specified in a "detail" field.
I used the CodeBuild "StartBuild" API docs at https://docs.aws.amazon.com/codebuild/latest/APIReference/API_StartBuild.html#API_StartBuild_RequestSyntax to find this format. I would presume (but have not tested) that other fields shown there would work similarly (and that the API references for other services would work similarly when using EventBridge: can anyone confirm?).
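As a concrete sketch (not from the original answer; the rule name, project ARN, and role ARN are placeholders), the same constant input can be attached to the rule target via the CLI:

aws events put-targets \
  --rule my-scheduled-rule \
  --targets '[{
    "Id": "codebuild-target",
    "Arn": "arn:aws:codebuild:us-east-1:123456789012:project/my-project",
    "RoleArn": "arn:aws:iam::123456789012:role/my-eventbridge-role",
    "Input": "{\"environmentVariablesOverride\": [{\"name\": \"SOME_VAR\", \"type\": \"PLAINTEXT\", \"value\": \"override value\"}]}"
  }]'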

AWS .NET Core 3.1 Mock Lambda Test Tool - How to set up 2 or more functions for local testing

So I have a very simple aws-lambda-tools-defaults.json in my project:
{
  "profile": "default",
  "region": "us-east-2",
  "configuration": "Release",
  "framework": "netcoreapp3.1",
  "function-runtime": "dotnetcore3.1",
  "function-memory-size": 256,
  "function-timeout": 30,
  "function-handler": "LaCarte.RestaurantAdmin.EventHandlers::LaCarte.RestaurantAdmin.EventHandlers.Function::FunctionHandler"
}
It works; I can test my Lambda code locally, which is great. But I want to be able to test multiple Lambdas, not just one. Does anyone know how to change the JSON so that I can run multiple Lambdas in the mock tool?
Thanks in advance,
Simply remove the function-handler attribute from your aws-lambda-tools-defaults.json file and add a template attribute referencing your serverless.template (the AWS CloudFormation template used to deploy your Lambda functions to your AWS cloud environment):
{
  ...
  "template": "serverless.template"
  ...
}
Then you can test your Lambda functions locally, for example with the AWS .NET Mock Lambda Test Tool. The Function dropdown list will change from listing the Lambda function name you specified in function-handler to the list of Lambda functions declared in your serverless.template file, and then you can test them all locally! :)
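For illustration, a minimal serverless.template sketch declaring two functions (the resource names and handlers below are placeholders, not from the original answer), both of which then show up in the dropdown:

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Transform": "AWS::Serverless-2016-10-31",
  "Resources": {
    "FirstFunction": {
      "Type": "AWS::Serverless::Function",
      "Properties": {
        "Handler": "MyAssembly::MyNamespace.FirstFunction::FunctionHandler",
        "Runtime": "dotnetcore3.1",
        "CodeUri": "",
        "MemorySize": 256,
        "Timeout": 30
      }
    },
    "SecondFunction": {
      "Type": "AWS::Serverless::Function",
      "Properties": {
        "Handler": "MyAssembly::MyNamespace.SecondFunction::FunctionHandler",
        "Runtime": "dotnetcore3.1",
        "CodeUri": "",
        "MemorySize": 256,
        "Timeout": 30
      }
    }
  }
}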
You can find more info in this discussion.
Answering after a long time, but it might help someone else. To deploy and test multiple Lambdas from Visual Studio, you have to use a serverless.template. Check the AWS SAM documentation.
You can start with this one - https://docs.aws.amazon.com/toolkit-for-visual-studio/latest/user-guide/lambda-build-test-severless-app.html

Can google cloud functions listen to multiple event triggers?

I am currently trying to deploy a Google Cloud Function using the REST API, in order to listen to a Google Cloud Storage bucket for changes/deletions.
However, I noticed that I can only specify one EventTrigger:
{
  "name": string,
  "description": string,
  "status": enum (CloudFunctionStatus),
  "entryPoint": string,
  "runtime": string,
  ...
  "sourceUploadUrl": string
  // End of list of possible types for union field source_code.

  // Union field trigger can be only one of the following:
  "httpsTrigger": {
    object (HttpsTrigger)
  },
  "eventTrigger": {
    object (EventTrigger)
  }
  // End of list of possible types for union field trigger.
}
My options for what to listen to are the following:
google.storage.object.finalize
google.storage.object.delete
google.storage.object.archive
google.storage.object.metadataUpdate
What if I want to listen to multiple triggers, such as google.storage.object.finalize and google.storage.object.delete, at the same time? Do I need to deploy separate cloud functions for each one? That seems quite inconvenient. Any suggestions or advice would be appreciated.
Yes, you need to deploy multiple functions. Each function can have exactly one trigger.
You could deploy multiple functions that share the same source code, or use one trigger whose function calls another function that aggregates the different kinds of events.
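For example, a sketch of deploying the same source twice with gcloud (the function, bucket, runtime, and entry-point names are placeholders):

gcloud functions deploy on-finalize \
  --runtime python39 \
  --trigger-resource my-bucket \
  --trigger-event google.storage.object.finalize \
  --entry-point handle_event

gcloud functions deploy on-delete \
  --runtime python39 \
  --trigger-resource my-bucket \
  --trigger-event google.storage.object.delete \
  --entry-point handle_event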
It can actually be achieved; we do so at my company.
The trick is: deploy your Cloud Function / Cloud Run service as an authenticated-only HTTP trigger.
Then you get a URL (which can only be reached via an authenticated HTTP request).
Then you can both:
Invoke it directly (Postman / whatever) if you pass a Bearer token (e.g. obtained via gcloud auth print-identity-token)
Trigger it from Pub/Sub with a push subscription authenticated against that endpoint.
Note that this requires you to grant your Pub/Sub subscriber permission on that resource, but it works! And it gives your function/service both sync and async behaviour.
Paul's answer is correct if you want to handle only a subset of the available events, or if you want to plug a function directly onto storage events.
However, if you want to catch all events, or choose within your function which event types to handle, you can 'cheat'.
Indeed, you can publish bucket notifications to Pub/Sub and plug a function onto the Pub/Sub events.
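A sketch of that setup (bucket, topic, runtime, and function names are placeholders); omitting the -e flag sends all event types to the topic:

# Route all bucket events to one Pub/Sub topic
gsutil notification create -f json -t my-topic gs://my-bucket

# Trigger a single function from that topic
gcloud functions deploy handle-all-events \
  --runtime python39 \
  --trigger-topic my-topic \
  --entry-point handle_event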

Versioning using base path mappings in AWS API Gateway

I have a pretty straightforward stack: API Gateway sitting in front of a Lambda. Currently my paths look something like:
/dogs, /dogs/{id}, etc.
All I want to do is add a version to the base path (i.e. api.dogs.com/v1/dogs). I tried doing this by creating a custom domain name with a base path mapping of v1 pointing to my stage in API Gateway.
This routes just fine through API Gateway but has issues once it hits the routing logic in my Lambda. My Lambda is expecting /dogs, but with the base path mapping the path is actually /v1/dogs.
What's a good way to approach this? I want to get away from having to deal with versions directly in my code (lambda) if possible.
In the event object your Lambda function receives, you should find all the needed information both with and without the version prefix:
event = {
  "resource": "/hi",
  "path": "/v1/hi",
  "requestContext": {
    "resourcePath": "/hi",
    "path": "/v1/hi",
    ....
  },
  ....
}
Adjusting your router logic to read the version-independent attribute should fix your problem and remove the need to care about versioning in your code.
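For instance, a minimal C# sketch (the route and responses are made-up examples) that routes on the stage-independent Resource field rather than the raw Path:

using Amazon.Lambda.APIGatewayEvents;
using Amazon.Lambda.Core;

public class Router
{
    public APIGatewayProxyResponse Handler(APIGatewayProxyRequest request, ILambdaContext context)
    {
        // request.Resource is "/hi" regardless of the base path mapping;
        // request.Path would be "/v1/hi" once the mapping is in place.
        var route = request.Resource;
        if (route == "/hi")
            return new APIGatewayProxyResponse { StatusCode = 200, Body = "hello" };
        return new APIGatewayProxyResponse { StatusCode = 404, Body = "not found" };
    }
}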

Amazon Lambda - Alias-specific environment variables

I am using AWS Lambda and use the alias feature to point to our multiple code promotion stages (e.g. dev, qa, prod, etc.). I have set up the aliases with the same names as the stages. Most of these functions get triggered from S3 or SNS, which have a different instance for each stage.
How can I set up alias-based environment variables so the function can get stage-specific info? The env vars set up in the base function (typically dev) get carried over to all aliases, which does not work for deployment.
I know how to use stage variables in API Gateway, but the current use is not via the gateway.
I don't believe there is a way to achieve what you are trying to do. You would need to publish three versions of your Lambda function, each with the correct environment variables, and point each of your aliases to the correct version of the function.
You could use the description fields to help describe the versions before you point the aliases at them, to make the changes more understandable.
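A sketch of that workflow with the CLI (the function name, alias, variables, and version number are placeholders):

# Bake the stage-specific variables into the next version
aws lambda update-function-configuration \
  --function-name my-function \
  --environment 'Variables={BUCKET=qa-bucket}'

aws lambda publish-version --function-name my-function

# Point the stage alias at the published version
aws lambda update-alias \
  --function-name my-function \
  --name qa \
  --function-version 2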
I also find it interesting that this isn't part of the design for aliases; however, you do have the context available in your code: Context.InvokedFunctionArn.
I think the mindset is that you may call, for example, an S3 bucket with a prefix of TEST or DEV or PROD (from the InvokedFunctionArn in the context you know which alias is running). Given this context, and security based on the ARN, you can use a bucket policy / IAM to restrict your TEST ARN so it can only reach files under the TEST S3 prefix. That solves security between environments.
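For example, a sketch of an IAM policy statement for the TEST role (the bucket name and actions are placeholders):

{
  "Effect": "Allow",
  "Action": ["s3:GetObject", "s3:PutObject"],
  "Resource": "arn:aws:s3:::my-bucket/TEST/*"
}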
NOTE: I disagree with this model and think environment variables should be on the aliases and, if not specified on the alias, fall back to what is in the version.
Although this works, the complexity around extra conditions on prefixes, etc. is something that will often be misconfigured. Having a separate bucket seems much safer and matches the Serverless Application Model documentation better.
EDIT: I'll leave this answer here as it may help some people, but note that I found AspNetCoreStartupMode.FirstRequest added a few seconds to cold starts.
I added some code to the LambdaEntryPoint to get the alias at startup, which means you can use it to load environment-specific config:
public class LambdaEntryPoint : Amazon.Lambda.AspNetCoreServer.APIGatewayProxyFunction
{
    private ILambdaContext LambdaContext;

    public LambdaEntryPoint()
        : base(AspNetCoreStartupMode.FirstRequest)
    {
    }

    [LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]
    public override async Task<APIGatewayProxyResponse> FunctionHandlerAsync(APIGatewayProxyRequest request, ILambdaContext lambdaContext)
    {
        LambdaContext = lambdaContext;
        return await base.FunctionHandlerAsync(request, lambdaContext);
    }

    protected override void Init(IWebHostBuilder builder)
    {
        // The alias is the last segment of the invoked function ARN, e.g.
        // arn:aws:lambda:us-east-1:123456789012:function:my-func:prod -> "prod"
        var alias = LambdaContext?.InvokedFunctionArn.Substring(LambdaContext.InvokedFunctionArn.LastIndexOf(":") + 1);

        // Do stuff based on the environment
        builder
            .ConfigureAppConfiguration((hostingContext, config) =>
            {
                config.AddJsonFile($"appsettings.{alias}.json", optional: true, reloadOnChange: true);
            })
            .UseStartup<Startup>();
    }
}
I've added a gist here: https://gist.github.com/secretorange/710375bc62bbc1f32e05822f55d4e8d3
The Lambda context has invoked_function_arn: the Amazon Resource Name (ARN) used to invoke the function, which indicates whether the invoker specified a version number or alias.
You can then use the alias to look up the variables via Systems Manager Parameter Store instead of environment variables.
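As a sketch of the naming convention this enables (parameter names and values are placeholders): store one parameter tree per alias, and have the function read from the tree matching the alias parsed from invoked_function_arn:

aws ssm put-parameter --name /my-function/qa/DB_HOST --value qa-db.example.com --type String
aws ssm put-parameter --name /my-function/prod/DB_HOST --value prod-db.example.com --type String

# At runtime the function resolves, e.g. for the prod alias:
aws ssm get-parameter --name /my-function/prod/DB_HOST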