Amazon SWF Lambda Functions error - not available in region

I'm implementing a workflow with Amazon SWF, and one of my activities comes in the form of a Lambda function.
Both SWF and Lambda run in the London region (eu-west-2), and each works fine on its own. However, after my decider polls for the task and schedules the Lambda function, it fails with the cause "LAMBDA_SERVICE_NOT_AVAILABLE_IN_REGION".
I haven't explicitly specified a region in code; I assumed it would be the same one I run the SWF web console in.
Here's the relevant code in my decider:
val attrs = ScheduleLambdaFunctionDecisionAttributes()
    .withId("S3ControlWorkflowFunction")
    .withName("S3ControlWorkflowFunction")
decisions.add(
    Decision()
        .withDecisionType(DecisionType.ScheduleLambdaFunction)
        .withScheduleLambdaFunctionDecisionAttributes(attrs)
)
My activity worker doesn't do anything at all for the Lambda function, but it shouldn't have to, right?
I've registered the workflow with my IAM role here:
wf.registerWorkflowType(RegisterWorkflowTypeRequest()
    .withDomain(DOMAIN)
    .withName(WORKFLOW)
    .withVersion(WORKFLOW_VERSION)
    .withDefaultChildPolicy(ChildPolicy.TERMINATE)
    .withDefaultTaskList(TaskList().withName(TASKLIST))
    .withDefaultTaskStartToCloseTimeout("30")
    .withDefaultLambdaRole(iamARN.id))

Found the fix.
It turns out that calling Lambda functions from SWF simply isn't supported in region eu-west-2, as well as a few others, yet I can't find any reference to this anywhere in the documentation. Found this forum post which gave me the hint. Migrating all the work I'd done over to eu-west-1 solved the issue. Poor show from Amazon here.
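For the record, if you do want to pin the region explicitly rather than rely on the default provider chain, here's a minimal sketch (assuming the AWS SDK for Java v1 that the snippets above appear to use):

import com.amazonaws.regions.Regions
import com.amazonaws.services.simpleworkflow.AmazonSimpleWorkflowClientBuilder

// Build the SWF client against eu-west-1 explicitly instead of relying
// on the default region provider chain.
val swf = AmazonSimpleWorkflowClientBuilder.standard()
    .withRegion(Regions.EU_WEST_1)
    .build()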

Related

AWS - Added layers are not listed or available in Lambdas

I have a Cloud9 instance with a virtual Python environment, which is going to be published as a layer. Until today it was possible to zip the env libs and publish them to make them available in Lambda functions (the standard workflow).
The command (which, as usual, executed without errors):
aws lambda publish-layer-version --layer-name MyLayerName --zip-file fileb://python.zip --compatible-runtimes python3.9
But...
aws lambda list-layers
returns an empty array. The layer is also not selectable as a custom layer in my Lambdas.
Does anybody have an explanation for this issue?
Best,
Felix
I found the problem: the layer was larger than 250 MB.
I only found this out after I tried to import it as a ZIP via the web interface, where an error message came up. There is no error message on the command line...
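If you want to catch this before publishing, here's a quick sketch of a local size check (the directory name is a placeholder; 250 MB is the documented unzipped limit for a function plus all its layers):

import java.io.File

// Sum the unzipped size of the layer contents and compare it against the
// 250 MB limit Lambda enforces for a function plus all of its layers.
fun main() {
    val layerDir = File("python") // placeholder: your layer's root directory
    val totalBytes = layerDir.walkTopDown().filter { it.isFile }.sumOf { it.length() }
    println("Unzipped layer size: ${totalBytes / (1024 * 1024)} MB")
    if (totalBytes > 250L * 1024 * 1024) {
        // Over the limit: publishing may appear to succeed while the layer
        // never shows up as usable (as observed in this thread).
        println("Layer exceeds the 250 MB unzipped limit.")
    }
}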

How to integrate AWS X-Ray with other AWS services such as SQS and S3

In my app, I was able to trace all the Lambda, API Gateway and DynamoDB requests through AWS X-Ray.
I am doing the same as the answer in this question:
Adding XRAY Tracing to non-rest functions e.g., SQS, Cognito Triggers etc
However, what about the case of S3, SQS, or other services/non-REST triggers?
I saw some old code that doesn't even use aws-sdk; the dependencies are imported directly, like:
import {S3Client, S3Address, RoleService, SQSService} from '#sws/aws-bridge';
So, in these cases, how do I integrate/activate AWS X-Ray?
Thank you very much in advance!
Cheers,
Marcelo
At the moment, Lambda doesn't support continuing traces from triggers other than REST APIs or direct invocation:
The upstream service can be an instrumented web application or another Lambda function. Your service can invoke the function directly with an instrumented AWS SDK client, or by calling an API Gateway API with an instrumented HTTP client.
https://docs.aws.amazon.com/xray/latest/devguide/xray-services-lambda.html
In every other case it will create its own new trace ID and use that instead.
You can work around this yourself by creating a new X-Ray segment inside the Lambda function, using the incoming trace ID from the event. This results in two segments for your Lambda invocation: one that Lambda itself creates, and one that you create to extend the existing trace. Whether that's acceptable for your use case is something you'll have to decide for yourself! A rough JVM sketch follows the language-specific links below.
If you're working with Python you can do it with aws-xray-lambda-segment-shim.
If you're working with NodeJS you can follow this guide on dev.to.
If you're working with .NET there are some examples on this GitHub issue.
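If you're on the JVM, the same idea looks roughly like this (a sketch only, assuming aws-xray-sdk-java and an SQS trigger whose messages carry the upstream trace in the AWSTraceHeader system attribute; the segment name is arbitrary):

import com.amazonaws.services.lambda.runtime.events.SQSEvent
import com.amazonaws.xray.AWSXRay
import com.amazonaws.xray.entities.TraceHeader

// Continue the upstream trace by hand: parse the trace header that SQS
// attaches to the message and open a second segment tied to that trace.
fun handleMessage(message: SQSEvent.SQSMessage) {
    val headerString = message.attributes["AWSTraceHeader"]
        ?: return // no upstream trace to continue
    val header = TraceHeader.fromString(headerString)

    AWSXRay.getGlobalRecorder()
        .beginSegment("sqs-consumer", header.rootTraceId, header.parentId)
    try {
        // ... actual message processing goes here ...
    } finally {
        AWSXRay.getGlobalRecorder().endSegment()
    }
}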

CodePipeline to build a branch on pull request

I am trying to make a CodePipeline which will build my branch when I make a pull request to the master branch in AWS. I have many developers working in my organisation, and all the developers work on their own branch. I am not very familiar with creating Lambda functions. Hoping for a solution.
You can dynamically create pipelines every time a new pull request is created. Look for the CodeCommit triggers (in the old CodePipeline UI); you need Lambda for this.
Basically it works like this: copy the existing pipeline and update the source branch.
It is not the best, but AFAIK the only way to do what you want.
I was there and would not recommend it for the following reasons:
I hit the limit of 20 in my region: "Maximum number of pipelines with change detection set to periodically checking for source changes" - but you definitely want this feature (https://docs.aws.amazon.com/codepipeline/latest/userguide/limits.html).
The branch-deleted trigger does not work correctly, so you cannot delete the created pipeline when the branch has been merged into master.
I would recommend using GitHub.com if you need a workflow like the one you described. Sorry for this.
I have recently implemented an approach that uses CodeBuild GitHub webhook support to run initial unit tests and build, and then publish the source repository and built artefacts as a zipped archive to S3.
You can then use the S3 archive as a source in CodePipeline, where you can then transition your PR artefacts and code through Integration testing, Staging deployments etc...
This is quite a powerful pattern, although one trap is that if a lot of pull requests are created at the same time, CodePipeline executions can be superseded, given that only one execution can proceed through a given stage at a time (this is actually a really important property, especially if your integration tests run against shared resources and you don't want multiple instances of your application running data setup/teardown tasks simultaneously). To overcome this, I publish an S3 notification to an SQS FIFO queue when CodeBuild publishes the S3 artifact, and then poll the queue, copying each artifact to a different S3 location that triggers CodePipeline, but only if there are currently no executions waiting to execute after the first CodePipeline source stage. A sketch of that gating check follows.
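A minimal sketch of the gating step, assuming the pipeline, bucket and key names shown are placeholders and that "busy" means any stage past the source stage is still in progress:

import com.amazonaws.services.codepipeline.AWSCodePipelineClientBuilder
import com.amazonaws.services.codepipeline.model.GetPipelineStateRequest
import com.amazonaws.services.s3.AmazonS3ClientBuilder

// Promote the next queued PR artifact into the pipeline's source location
// only when nothing is in flight past the source stage.
fun promoteIfIdle(artifactKey: String) {
    val codePipeline = AWSCodePipelineClientBuilder.defaultClient()
    val state = codePipeline.getPipelineState(
        GetPipelineStateRequest().withName("pr-pipeline") // placeholder name
    )

    // Treat the pipeline as busy if any stage after the source stage is
    // still running.
    val busy = state.stageStates.drop(1).any {
        it.latestExecution?.status == "InProgress"
    }
    if (busy) return // leave the message on the queue and try again later

    val s3 = AmazonS3ClientBuilder.defaultClient()
    s3.copyObject("staging-bucket", artifactKey, "trigger-bucket", "source.zip")
}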
We can very well have dynamic branching support with the following approach.
One of the limitations of AWS CodePipeline is that we have to specify branch names while creating the pipeline. We can, however, overcome this issue using the architecture shown below.
[flow diagram]
Create a Lambda function that takes the GitHub webhook data as input and, using boto3, updates the pipeline (fetch the pipeline definition and update it); put an API Gateway in front of the Lambda function to receive the call as a REST request; and finally create a webhook on the GitHub repository pointing at that endpoint.
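The heart of that Lambda is a get-and-update of the pipeline definition. The answer describes boto3; here is the equivalent call sketched with the AWS SDK for Java in Kotlin, to match the other snippets in this thread (the pipeline name, stage name and "Branch" configuration key are assumptions for a version-1 GitHub source action):

import com.amazonaws.services.codepipeline.AWSCodePipelineClientBuilder
import com.amazonaws.services.codepipeline.model.GetPipelineRequest
import com.amazonaws.services.codepipeline.model.UpdatePipelineRequest

// Re-point a template pipeline's source action at the branch named in the
// webhook payload, then save it back.
fun retargetPipeline(branch: String) {
    val client = AWSCodePipelineClientBuilder.defaultClient()
    val pipeline = client.getPipeline(
        GetPipelineRequest().withName("template-pipeline") // placeholder name
    ).pipeline

    pipeline.stages
        .first { it.name == "Source" } // assumed stage name
        .actions
        .forEach { it.configuration["Branch"] = branch }

    client.updatePipeline(UpdatePipelineRequest().withPipeline(pipeline))
}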
External links:
https://aws.amazon.com/quickstart/architecture/git-to-s3-using-webhooks/
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/codepipeline.html
Related thread: Dynamically change branches on AWS CodePipeline

Choosing active SES ReceiptRuleSet in CloudFormation / Troposphere

I am creating a ReceiptRuleSet with troposphere like this:
ReceiptRuleSet(
    title="SesRuleset",
    RuleSetName="ses-ruleset"
)
However, when I upload the stack with the generated CloudFormation template, the RuleSet appears as inactive in SES.
Does anyone know if there is a way to set the created RuleSet as active without having to interact with the online console or the CLI?
troposphere maintainer here. I don't actually know a ton about SES, but have you included the ReceiptRuleSet in a ReceiptRule? My guess is that if a RuleSet is not used by a Rule, it's probably inactive, since I can't see anything in either CloudFormation or the API that would indicate you can set it to "active".
Unfortunately, this doesn't seem to be supported by CloudFormation. I found the following blog post leveraging a Lambda that makes an API call to activate the RuleSet after creation: https://binx.io/blog/2019/11/25/how-to-set-the-active-receipt-rule-set-in-ses-using-cloudformation/
This seemed one moving piece too many for me, so I'm currently activating the RuleSet through the console.
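For reference, the Lambda in that blog post boils down to a single SES call; here is a sketch with the AWS SDK for Java (the rule set name matches the troposphere snippet above):

import com.amazonaws.services.simpleemail.AmazonSimpleEmailServiceClientBuilder
import com.amazonaws.services.simpleemail.model.SetActiveReceiptRuleSetRequest

// Activate the rule set after the stack creates it; SES allows only one
// active receipt rule set per region.
fun activateRuleSet() {
    val ses = AmazonSimpleEmailServiceClientBuilder.defaultClient()
    ses.setActiveReceiptRuleSet(
        SetActiveReceiptRuleSetRequest().withRuleSetName("ses-ruleset")
    )
}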

AWS SWF Sample Code issue

I am new to Amazon Simple Workflow Service and am following the AWS docs to understand SWF.
As per the documentation, once you execute the GreeterMain class after executing the GreeterWorker class, you should see an active workflow execution in the AWS console. However, that's not the case for me. On executing the GreeterMain class, the application prints out Hello World, but I do not see any active workflows in the "My Workflow Executions" section of the AWS console. I am not getting any errors either.
On executing the GreeterWorker class, I can see "Workflow Types" and "Activities Types" section populated with appropriate workflows and activities.
Am I doing something wrong? Can someone please help out.
Thanks.
Ahh... found it. As per the doc, you create a class named "GreeterMain" in two different packages: one package is the basic code path, and the second uses AWS SWF. When executing, Eclipse was referring to the basic code path and not invoking AWS SWF.