AWS .NET Core 3.1 Mock Lambda Test Tool - How to set up 2 or more functions for local testing

So I have a very simple aws-lambda-tools-defaults.json in my project:
{
  "profile": "default",
  "region": "us-east-2",
  "configuration": "Release",
  "framework": "netcoreapp3.1",
  "function-runtime": "dotnetcore3.1",
  "function-memory-size": 256,
  "function-timeout": 30,
  "function-handler": "LaCarte.RestaurantAdmin.EventHandlers::LaCarte.RestaurantAdmin.EventHandlers.Function::FunctionHandler"
}
It works, I can test my lambda code locally which is great. But I want to be able to test multiple lambdas, not just one. Does anyone else know how to change the JSON so that I can run multiple lambdas in the mock tool?
Thanks in advance,

Simply remove the function-handler attribute from your aws-lambda-tools-defaults.json file and add a template attribute referencing your serverless.template (the AWS CloudFormation template used to deploy your lambda functions to your AWS cloud environment):
{
  ...
  "template": "serverless.template"
  ...
}
Then you can test your lambda functions locally, for example with the AWS .NET Mock Lambda Test Tool. The Function dropdown list will now change from listing the single lambda function name you specified in function-handler to listing all the lambda functions declared in your serverless.template file, and you can test them all locally! :)
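For illustration, a minimal serverless.template along these lines (a hedged sketch; the second handler name is an assumption, purely to show two functions in the dropdown):

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Transform": "AWS::Serverless-2016-10-31",
  "Resources": {
    "RestaurantAdminFunction": {
      "Type": "AWS::Serverless::Function",
      "Properties": {
        "Handler": "LaCarte.RestaurantAdmin.EventHandlers::LaCarte.RestaurantAdmin.EventHandlers.Function::FunctionHandler",
        "Runtime": "dotnetcore3.1",
        "MemorySize": 256,
        "Timeout": 30
      }
    },
    "SecondFunction": {
      "Type": "AWS::Serverless::Function",
      "Properties": {
        "Handler": "LaCarte.RestaurantAdmin.EventHandlers::LaCarte.RestaurantAdmin.EventHandlers.SecondFunction::FunctionHandler",
        "Runtime": "dotnetcore3.1",
        "MemorySize": 256,
        "Timeout": 30
      }
    }
  }
}

Both resources then show up in the test tool's Function dropdown.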
You can find more info in this discussion.

Answering after a long time, but it might help someone else. To deploy and test multiple lambdas from Visual Studio, you have to use a serverless.template. Check the AWS SAM documentation.
You can start with this one - https://docs.aws.amazon.com/toolkit-for-visual-studio/latest/user-guide/lambda-build-test-severless-app.html

Related

Make AWS IoT rule trigger Lambda function using specific alias

I'm making updates to a Lambda function that is triggered by an AWS IoT rule. In order to be able to quickly revert, use weighted aliases, etc., I would like to update this rule to point to a specific production alias of the function so that I can make updates to $LATEST and test them out. If everything goes well, I would make a new version and make the production alias point to that.
But when I try to update the action to trigger the alias instead of $LATEST, I'm only given choices of versions or $LATEST:
Any ideas? Google yields nothing.
I had to add the trigger from the other side: add the trigger to the production alias, and then delete the link to default/$LATEST under the Lambda function itself.
As far as I know, you can define the alias target via the CLI; I don't think it is possible via the web interface.
Have a look here.
{
  "topicRulePayload": {
    "sql": "SELECT * FROM 'some/topic'",
    "ruleDisabled": false,
    "awsIotSqlVersion": "2016-03-23",
    "actions": [
      {
        "lambda": {
          "functionArn": "arn:aws:lambda:us-east-1:123456789012:function:${topic()}"
        }
      }
    ]
  }
}
You can type: "functionArn": "arn:aws:lambda:us-east-1:123456789012:function:${topic():MY_ALIAS}"
But you have to set this up via the CLI.
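As a sketch of how that could look with the AWS CLI (rule name, function ARN, and alias are placeholders; payload.json holds the topicRulePayload object shown above):

# Create the rule; the payload's functionArn ends with the alias, e.g.
#   "functionArn": "arn:aws:lambda:us-east-1:123456789012:function:myFunction:MY_ALIAS"
aws iot create-topic-rule \
    --rule-name my_rule \
    --topic-rule-payload file://payload.json

# The alias also needs permission to be invoked by AWS IoT
aws lambda add-permission \
    --function-name arn:aws:lambda:us-east-1:123456789012:function:myFunction:MY_ALIAS \
    --statement-id iot-invoke \
    --action lambda:InvokeFunction \
    --principal iot.amazonaws.com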

AWS Eventbridge: scheduling a CodeBuild job with environment variable overrides

When I launch an AWS CodeBuild project from the web interface, I can choose "Start Build" to start the build project with its normal configuration. Alternatively I can choose "Start build with overrides", which lets me specify, amongst others, custom environment variables for the build job.
From AWS EventBridge (Events -> Rules -> Create rule), I can create a scheduled event to trigger the CodeBuild job, and this works. How, though, in EventBridge do I specify environment variable overrides for a scheduled CodeBuild job?
I presume it's possible somehow by using "additional settings" -> "Configure target input", which allows specification and templating of event JSON. I'm not sure though how to work out, beyond blind trial and error, what this JSON should look like (to override environment variables in my case). In other words, where do I find the JSON spec for events sent to CodeBuild?
There are a number of similar questions here: e.g. AWS EventBridge scheduled events with custom details? and AWS Cloudwatch (EventBridge) Event Rule for AWS Batch with Environment Variables, but I can't find the specifics for CodeBuild jobs. I've tried the CDK docs at e.g. https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_events_targets.CodeBuildProjectProps.html, but am little the wiser. I've also tried capturing the events output by EventBridge, to see what the event WITHOUT overrides looks like, but have not managed to. Submitting the below (and a few variations: e.g. as "detail") as an "input constant" triggers the job, but the environment variables do not take effect:
{
  "ContainerOverrides": {
    "Environment": [{
      "Name": "SOME_VAR",
      "Value": "override value"
    }]
  }
}
There is also the CodeBuild API reference at https://docs.aws.amazon.com/codebuild/latest/APIReference/API_StartBuild.html#API_StartBuild_RequestSyntax. EDIT: this seems to be the correct reference (as per my answer below).
The rule target's event input template should match the structure of the CodeBuild API StartBuild action input. In the StartBuild action, environment variable overrides have a key of "environmentVariablesOverride" and value of an array of EnvironmentVariable objects.
Here is a sample target input transformer with one constant env var and another whose value is taken from the event payload's detail-type:
Input path:
{ "detail-type": "$.detail-type" }
Input template:
{"environmentVariablesOverride": [
{"name":"MY_VAR","type":"PLAINTEXT","value":"foo"},
{"name":"MY_DYNAMIC_VAR","type":"PLAINTEXT","value":<detail-type>}]
}
I got this to work using an "input constant" like this:
{
  "environmentVariablesOverride": [{
    "name": "SOME_VAR",
    "type": "PLAINTEXT",
    "value": "override value"
  }]
}
In other words, you can ignore the fields in the sample events in EventBridge, and the overrides do not need to be specified in a "detail" field.
I used the CodeBuild "StartBuild" API docs at https://docs.aws.amazon.com/codebuild/latest/APIReference/API_StartBuild.html#API_StartBuild_RequestSyntax to find this format. I would presume (but have not tested) that the other fields shown there would work similarly (and that the API references for other services would work similarly when using EventBridge: can anyone confirm?).
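For completeness, a hedged sketch of attaching such a target with the AWS CLI (rule name, project ARN, and role ARN are invented for illustration):

aws events put-targets \
    --rule my-scheduled-build \
    --targets '[{
      "Id": "codebuild-target",
      "Arn": "arn:aws:codebuild:us-east-1:123456789012:project/my-project",
      "RoleArn": "arn:aws:iam::123456789012:role/my-eventbridge-codebuild-role",
      "Input": "{\"environmentVariablesOverride\":[{\"name\":\"SOME_VAR\",\"type\":\"PLAINTEXT\",\"value\":\"override value\"}]}"
    }]'

The Input field carries the same JSON as the "input constant" above.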

Use CDK deploy time token values in a launch template user-data script

I recently started porting part of my infrastructure to AWS CDK. Previously, I did some experiments with CloudFormation templates directly.
I am currently facing the problem that I want to encode some values (namely the product version) in a user-data script of an EC2 launch template, and these values should only be loaded at deployment time. With CloudFormation this was quite simple: I was just building my JSON file with functions like Fn::Base64 and Fn::Join. E.g. it looked like this (simplified):
"MyLaunchTemplate": {
"Type": "AWS::EC2::LaunchTemplate",
"Properties": {
"LaunchTemplateData": {
"ImageId": "ami-xxx",
"UserData": {
"Fn::Base64": {
"Fn::Join": [
"#!/bin/bash -xe",
{"Fn::Sub": "echo \"${SomeParameter}\""},
]
}
}
}
}
}
}
This way I am able to define the parameter SomeParameter at launch of the CloudFormation template.
With CDK we can access values from the AWS Parameter Store either at deploy time or at synthesis time. If we use them at deploy time, we only get a token, otherwise we get the actual value.
So far I have managed to read a value at synthesis time and directly encode the user-data script as base64, like this:
product_version = ssm.StringParameter.value_from_lookup(
    self, f'/Prod/MyProduct/Deploy/Version')
launch_template = ec2.CfnLaunchTemplate(self, 'My-LT', launch_template_data={
    'imageId': my_ami,
    'userData': base64.b64encode(
        f'echo {product_version}'.encode('utf-8')).decode('utf-8'),
})
With this code, however, the version gets read during synthesis time and will be hardcoded into the user-data script.
In order to be able to use dynamic values that are only resolved at deploy time (value_for_string_parameter), I would somehow need to tell CDK to write a CloudFormation template similar to what I have done manually before (using Fn::Base64 only in CloudFormation, not in Python). However, I did not find a way to do this.
If I read a value that is only to be resolved at deploy time like follows, how can I use it in the UserData field of a launch template?
latest_string_token = ssm.StringParameter.value_for_string_parameter(
    self, "my-plain-parameter-name", 1)
It is possible using the CloudFormation intrinsic functions, which are available in the class aws_cdk.core.Fn in Python.
These can be used when creating a launch template in EC2 to combine strings and tokens, e.g. like this:
import aws_cdk.core as cdk

# loads a value to be resolved at deployment time
product_version = ssm.StringParameter.value_for_string_parameter(
    self, '/Prod/MyProduct/Deploy/Version')

launch_template = ec2.CfnLaunchTemplate(self, 'My-LT', launch_template_data={
    'imageId': my_ami,
    'userData': cdk.Fn.base64(cdk.Fn.join('\n', [
        '#!/usr/bin/env bash',
        cdk.Fn.join('=', ['MY_PRODUCT_VERSION', product_version]),
        'git checkout $MY_PRODUCT_VERSION',
    ])),
})
This example could result in the following user-data script in the launch template if the parameter store contains version 1.2.3:
#!/usr/bin/env bash
MY_PRODUCT_VERSION=1.2.3
git checkout $MY_PRODUCT_VERSION
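As an aside, a sketch of the same idea with the higher-level ec2.LaunchTemplate and ec2.UserData constructs, which emit the Fn::Base64/Fn::Join wrapping for you when the string contains deploy-time tokens (the construct IDs and the AMI lookup are my assumptions, not part of the answer above):

import aws_cdk.aws_ec2 as ec2
import aws_cdk.aws_ssm as ssm

# Token resolved at deploy time, as above
product_version = ssm.StringParameter.value_for_string_parameter(
    self, '/Prod/MyProduct/Deploy/Version')

# The token survives into the rendered template; CDK base64-encodes for us
user_data = ec2.UserData.for_linux()
user_data.add_commands(
    f'MY_PRODUCT_VERSION={product_version}',
    'git checkout $MY_PRODUCT_VERSION',
)

launch_template = ec2.LaunchTemplate(
    self, 'My-LT',
    machine_image=ec2.MachineImage.latest_amazon_linux(),
    user_data=user_data,
)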

Is it possible to deploy a background Function "myBgFunctionInProjectB" in "project-b" that is triggered by my topic "my-topic-project-a" from "project-a"?

It's possible to create a topic "my-topic-project-a" in project "project-a" so that it can be publicly visible (this is done by setting the role "pub/sub subscriber" to "allUsers" on it).
Then from project "project-b" I can create a subscription to "my-topic-project-a" and read the events from "my-topic-project-a". This is done using the following gcloud commands:
(these commands are executed on project "project-b")
gcloud pubsub subscriptions create subscription-to-my-topic-project-a --topic projects/project-a/topics/my-topic-project-a
gcloud pubsub subscriptions pull subscription-to-my-topic-project-a --auto-ack
So ok this is possible when creating a subscription in "project-b" linked to "my-topic-project-a" in "project-a".
In my use case I would like to be able to deploy a background function "myBgFunctionInProjectB" in "project-b" that is triggered by my topic "my-topic-project-a" from "project-a".
But... this doesn't seem to be possible, since the gcloud CLI is not happy when you provide the full topic name while deploying the cloud function:
gcloud beta functions deploy myBgFunctionInProjectB --runtime nodejs8 --trigger-topic projects/project-a/topics/my-topic-project-a --trigger-event google.pubsub.topic.publish
ERROR: (gcloud.beta.functions.deploy) argument --trigger-topic: Invalid value 'projects/project-a/topics/my-topic-project-a': Topic must contain only Latin letters (lower- or upper-case), digits and the characters - + . _ ~ %. It must start with a letter and be from 3 to 255 characters long.
Is there a way to achieve that, or is this actually not possible?
Thanks
So, it seems that it is not actually possible to do this. I verified it in 2 different ways:
If you try to create a function through the API explorer, you will need to fill in the location where you want to run it, for example projects/PROJECT_FOR_FUNCTION/locations/PREFERRED-LOCATION, and then provide a request body like this one:
{
  "eventTrigger": {
    "resource": "projects/PROJECT_FOR_TOPIC/topics/YOUR_TOPIC",
    "eventType": "google.pubsub.topic.publish"
  },
  "name": "projects/PROJECT_FOR_FUNCTION/locations/PREFERRED-LOCATION/functions/NAME_FOR_FUNCTION"
}
This will result in a 400 error code, with a message saying:
{
  "field": "event_trigger.resource",
  "description": "Topic must be in the same project as function."
}
It will also say that the source code is missing but, nonetheless, the API already shows that this is not possible.
There is an already open issue in the Public Issue Tracker for this very same issue. Bear in mind that there is no ETA for it.
I also tried to do this from gcloud, as you did, and obviously got the same result. I then tried to remove the projects/project-a/topics/ bit from my command, but that creates a new topic in the same project where you create the function, so it's not what you want.
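One workaround that is often suggested (my addition, not verified in the original answer): deploy the function in project-b with an HTTP trigger instead, and point a push subscription in project-a at its URL:

# Deploy an HTTP-triggered function in project-b (instead of a topic trigger)
gcloud functions deploy myBgFunctionInProjectB \
    --project project-b --runtime nodejs8 --trigger-http

# In project-a, push messages from the topic to that function's endpoint
# (REGION is a placeholder for the function's region)
gcloud pubsub subscriptions create push-to-project-b \
    --project project-a \
    --topic my-topic-project-a \
    --push-endpoint https://REGION-project-b.cloudfunctions.net/myBgFunctionInProjectB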

How to create re-usable blocks in CloudFormation

Scenario:
I have a serverless/cloudformation script that re-deploys the same code with different configurations to AWS as lambdas and exposes each lambda via API Gateway.
So far the only way I've been able to do this is via copious amounts of copy and paste within the same script... but it's starting to drive me up the wall. As I'm a complete newbie to AWS, and navigating the AWS docs and the internet has yielded pretty bad results in answering this, I'm trying my luck here.
Within a cloudformation script:
"Resources":{
"LambdaResourceNG":{
"Type":"AWS::Serverless::Function",
"Properties":{
"Handler":"some-handlername::foo::bar",
"Runtime":"dotnetcore2.0",
"Environment":{
"Variables":{
"PictureOptions__OriginalPictureSuffix":{
"Fn::Join":[
"",
[
"_",
"ng",
"_",
{
"Fn::FindInMap":[
"Environments",
{
"Ref":"EnvironmentValue"
},
"PictureOptionsOriginalPictureSuffix"
]
}
]
]
},
},
"Events":{
"Bar":{
"Type":"Api",
"Properties":{
"Path":"/ng/bar",
"Method":"POST"
}
},
"Foo":{
"Type":"Api",
"Properties":{
"Path":"/ng/foo",
"Method":"POST"
}
}
}
}
},
}
Question:
In the script block above, the resource is called LambdaResourceNG. If I wanted to have another resource, LambdaResourceKE, with all appropriate sections changed to KE, how would I make a "function" that I could re-use within this, erm... language?
I've already found out how to use maps to swap out variables based on some env value... but how would one go about creating reusable blocks of code/config?
If the existing CloudFormation nested stacks feature doesn't suffice and you need real programmability, then the final CloudFormation template can be the output of a higher-level process.
There are tools available to create templates, e.g. the AWS Cloud Development Kit, Troposphere and cfndsl.
Another option would be to drive the creation of the final template from a CLI. It doesn't have to be particularly sophisticated, just something that includes a template engine (like jinja2 or handlebars). Then you can program the inclusion of reusable template fragments, dynamically inject values into those fragments, iterate over loops as necessary, and emit a final CloudFormation template (or a main template and set of nested templates).
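For instance, a minimal sketch of the template-engine approach in Python with jinja2 (the fragment and the country codes are invented to mirror the question's NG/KE example):

from jinja2 import Template

# A reusable fragment; the country code is a template variable
FRAGMENT = Template('''
  "LambdaResource{{ cc_upper }}": {
    "Type": "AWS::Serverless::Function",
    "Properties": {
      "Handler": "some-handlername::foo::bar",
      "Runtime": "dotnetcore2.0",
      "Events": {
        "Bar": {"Type": "Api", "Properties": {"Path": "/{{ cc }}/bar", "Method": "POST"}},
        "Foo": {"Type": "Api", "Properties": {"Path": "/{{ cc }}/foo", "Method": "POST"}}
      }
    }
  }''')

# Render one resource block per country and splice them into a template
blocks = [FRAGMENT.render(cc=cc, cc_upper=cc.upper()) for cc in ('ng', 'ke')]
print('{\n"Resources": {\n' + ',\n'.join(blocks) + '\n}\n}')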
You can nest a CloudFormation Stack within another using the AWS::CloudFormation::Stack resource type. Nested stacks cannot exist without their parent, deleting the parent stack will delete all nested stacks. Note that the TemplateURL must point to S3, and that is where the aws cloudformation package CLI command helps by uploading a local file there and replacing the URL in the template.
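A hedged sketch of what that looks like in a parent template (the bucket name and the CountryCode parameter are placeholders):

"Resources": {
  "LambdaApiNG": {
    "Type": "AWS::CloudFormation::Stack",
    "Properties": {
      "TemplateURL": "https://s3.amazonaws.com/my-bucket/lambda-api.template",
      "Parameters": { "CountryCode": "ng" }
    }
  },
  "LambdaApiKE": {
    "Type": "AWS::CloudFormation::Stack",
    "Properties": {
      "TemplateURL": "https://s3.amazonaws.com/my-bucket/lambda-api.template",
      "Parameters": { "CountryCode": "ke" }
    }
  }
}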
Cross-stack references also help in modularizing templates. For example, a "database network" stack can export the subnet IDs and other values for any future database stack to use. Note that modularization goes further than merging text; it means declaring and managing the resources' lifecycle relationships correctly.
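For example (names invented), the exporting stack declares:

"Outputs": {
  "DbSubnetIds": {
    "Value": { "Fn::Join": [",", [{ "Ref": "SubnetA" }, { "Ref": "SubnetB" }]] },
    "Export": { "Name": "database-network-SubnetIds" }
  }
}

and any later stack can consume the value with { "Fn::ImportValue": "database-network-SubnetIds" }.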
Stacks can even be composed further and across different regions and accounts using StackSets. This may be quite helpful when managing applications provisioned per tenant or sub-organization. This is frequently the case in "self-service IT" that can be achieved using CloudFormation with other services like AWS Service Catalog and AWS Marketplace.
Nested stacks are clumsy in that you don't necessarily want an entire stack just for a single resource. CloudFormation Modules would solve this problem nicely (reference). You can even package multiple resources in a single module.
You can create reusable modules with pre-packaged properties, which:
Reduce boilerplate configuration
Enforce company-wide standards
Modules are deployed to the CloudFormation Registry, where they can be versioned and used by anyone in your company. You can use Parameters in the module to pass in properties just like you would for a standard AWS resource. You can then use the custom module like this:
Resources:
  LambdaResourceNG:
    Type: YourCompany::LambdaApi::FooBarApi
    Properties:
      ApiName: ng
  LambdaResourceKE:
    Type: YourCompany::LambdaApi::FooBarApi
    Properties:
      ApiName: ke
To create a reusable template in CloudFormation, there are a couple of things to keep in mind:
Use nested stacks: with nested stacks you can create a small stack for each AWS service (e.g. VPC, LoadBalancer) and re-use it in other projects.
Use Parameters: use parameters as much as possible.
Use Conditions: CloudFormation lets you add conditions, so you can use the same template to perform multiple tasks (see the sketch below).
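As an illustration of the Conditions point, a minimal sketch (parameter, condition and resource names are invented):

"Parameters": {
  "EnvironmentValue": { "Type": "String", "AllowedValues": ["dev", "prod"] }
},
"Conditions": {
  "IsProd": { "Fn::Equals": [{ "Ref": "EnvironmentValue" }, "prod"] }
},
"Resources": {
  "ProdOnlyBucket": {
    "Type": "AWS::S3::Bucket",
    "Condition": "IsProd"
  }
}

The same template then creates ProdOnlyBucket only when EnvironmentValue is "prod".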