I have previously worked with Lambda orchestration using AWS Step Functions, and it has been working very well. Setting the result_path of each Lambda passes arguments along to the subsequent Lambda.
However, I now need to run a Fargate task and then pass arguments from that Fargate task along to subsequent Lambdas. I have created a Python script that acts as the entrypoint in the container definition. In a Lambda function, handler(event, context) acts as the entrypoint, and by returning {"return_object": "hello_world"} it's easy to pass an argument along to the next state of the state machine.
In my case, though, I have a task definition with a container definition created from this Dockerfile:
FROM python:3.7-slim
COPY my_script.py /python/my_script.py
RUN ln -s /python/my_script.py /usr/bin/my_script && \
    chmod +x /python/my_script.py
ENTRYPOINT ["my_script"]
Hence, I am able to invoke the state machine and it will execute my_script as intended. But how do I get the output from this python script and pass it along to another state in the state machine?
I have found some documentation on how to pass along inputs, but no example of passing along outputs.
To get output from an ECS/Fargate task, I think you have to use the task token ("Wait for Callback") integration instead of "Run Job (.sync)", which is what's usually recommended for Fargate tasks. You can pass the token to the container as an environment variable via a container override ("TASK_TOKEN": "$$.Task.Token"). Then, inside your image, you need some logic like this:
import json
import os
import boto3

# Report the script's result back to Step Functions using the task token
client = boto3.client('stepfunctions')
client.send_task_success(
    taskToken=os.environ["TASK_TOKEN"],  # injected via the container override
    output=json.dumps({"return_object": "hello_world"})  # output must be a JSON string
)
to pass it back.
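For reference, here is a rough sketch of what the corresponding ECS task state could look like in the state machine definition (the cluster, task definition, container and state names are placeholders I made up, and required details such as the network configuration are omitted). The .waitForTaskToken suffix makes Step Functions pause until send_task_success is called, and the container override injects the token as the TASK_TOKEN environment variable:

import json

# Hypothetical ECS task state for the state machine definition (names are placeholders)
run_fargate_task_state = {
    "Type": "Task",
    "Resource": "arn:aws:states:::ecs:runTask.waitForTaskToken",
    "Parameters": {
        "LaunchType": "FARGATE",
        "Cluster": "my-cluster",
        "TaskDefinition": "my-task-definition",
        "Overrides": {
            "ContainerOverrides": [{
                "Name": "my-container",
                "Environment": [
                    {"Name": "TASK_TOKEN", "Value.$": "$$.Task.Token"}
                ]
            }]
        }
    },
    # Whatever you pass to send_task_success(output=...) becomes this state's result
    "ResultPath": "$.fargate_output",
    "Next": "NextLambdaState"
}

print(json.dumps(run_fargate_task_state, indent=2))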
I'm looking to add a CMD override for an AWS Lambda via either Terraform or Docker. The Lambda has package type Image since it is being run from a container image. Trying to add the handler through Terraform gives:
Message: "Please don't provide Handler or Runtime or Layer when the intended function PackageType is Image."
Is there a way to set _HANDLER to something so it can be used within the Dockerfile:
CMD "${_HANDLER}"
i.e. with $_HANDLER = $LAMBDA_HANDLER, so that
CMD ${_HANDLER} then points to the Lambda handler?
The image_config block on the aws_lambda_function resource allows you to override the command and entrypoint of the container. For example:
image_config {
command = ["app.other_handler"]
}
From what I can tell from the docs, the Google Cloud Life Sciences API (v2beta) allows you to define a "pipeline" with multiple commands in it, but these run sequentially.
Am I correct in thinking there is no way to have some commands run in parallel, and for a group of commands to be dependent on others (that is, to not start running until their predecessors have finished)?
You are correct that you cannot run commands in parallel, or in such a way that the process is dependent upon the completion of some other process.
When you run commands using the commands[] flag, this is exactly the same as passing the CMD parameter to a Docker container (because that is exactly what you are doing): the commands[] flag overrides the CMD arguments passed to the Docker container at runtime. If the container is using an Entrypoint, then the commands[] flag will override the Entrypoint's argument values for the container.
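For illustration, here is a rough sketch of what the request body for projects.locations.pipelines.run might contain (the project, bucket, image and commands are made-up placeholders). Each action only starts after the previous one has finished, and its commands[] replaces the image's CMD:

# Hypothetical request body for projects.locations.pipelines.run (v2beta).
# The two actions below run one after the other, never in parallel.
run_pipeline_body = {
    "pipeline": {
        "actions": [
            {
                "imageUri": "gcr.io/my-project/my-tool:latest",  # placeholder image
                "commands": ["process", "--input", "gs://my-bucket/input.txt"],
            },
            {
                # This action only starts once the first one has finished
                "imageUri": "gcr.io/my-project/my-tool:latest",
                "commands": ["summarize", "--output", "gs://my-bucket/summary.txt"],
            },
        ],
        "resources": {"regions": ["us-central1"]},  # placeholder resources block
    }
}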
You can review the official documentation here:
Method: projects.locations.pipelines.run
gcloud command-line tool examples
gcloud beta lifesciences
Is anyone aware of a method to execute post-deploy functionality? Following is a sample of a simple CDK app:
app = core.App()
Stack(app, ...)
app.synth()
What I am looking for is a way to apply some logic after the template is deployed. The problem is that the app completes before the cdk tool starts deploying the template.
thanks
Edit: CDK now has https://github.com/cdklabs/cdk-triggers, which allows calling Lambda functions before/after resource/stack creation
You can't do that from CDK at the moment. See https://github.com/awslabs/aws-cdk/issues/2849. Maybe add your +1 there, let them know you'd like to see this feature.
What you can do is wrap cdk deploy in a shell script that will run whatever you need after the CDK is done. Something like:
#!/bin/sh
cdk deploy "$@"
success=$?
if [ $success != 0 ]; then
    exit $success
fi
run_post_deploy_with_arguments.sh "$@"
This will run deploy with the given arguments, then call a shell script, passing it the same arguments, if the deployment was successful. This is a very crude example.
Instead of wrapping the cdk deploy command in a bash script, I find it more convenient to add pre- and post-deployment scripts in a cdk_hooks.sh file and call them before and after the CDK deployment command via the cdk.json file. This way you can keep using the cdk deploy command without calling custom scripts manually.
cdk.json
{
  "app": "sh cdk_hooks.sh pre && npx ts-node bin/stacks.ts && sh cdk_hooks.sh post",
  "context": {
    "@aws-cdk/core:enableStackNameDuplicates": "true",
    "aws-cdk:enableDiffNoFail": "true"
  }
}
and cdk_hooks.sh
#!/bin/bash
PHASE=$1

case "$PHASE" in
  pre)
    # Do something
    ;;
  post)
    # Do something
    ;;
  *)
    echo "Please provide a valid cdk_hooks phase"
    exit 64
esac
You can use a CustomResource to run some code in a Lambda (which, unfortunately, you will also need to deploy). The Lambda will get the custom resource's event (create, update, delete), so you will be able to handle different scenarios. For instance, say you want to seed some table after deploy; this way you will also be able to clean up the data on destroy.
Here is a pretty good post about it.
Personally I couldn't find a more elegant way to do this.
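To illustrate, here is a minimal sketch of what the Lambda handler behind such a custom resource could look like (the seed/clean helpers are hypothetical, and this assumes the CDK provider framework takes care of sending the CloudFormation response for you):

# Hypothetical handler for the Lambda backing the custom resource
def handler(event, context):
    request_type = event["RequestType"]  # "Create", "Update" or "Delete"
    if request_type == "Create":
        seed_table()    # e.g. seed a table right after the first deploy
    elif request_type == "Delete":
        clean_table()   # clean up the seeded data when the stack is destroyed
    # "Update" could re-run the seeding or be a no-op, depending on your needs
    return {"PhysicalResourceId": "post-deploy-hook"}

def seed_table():
    ...  # hypothetical helper

def clean_table():
    ...  # hypothetical helper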
Short answer: you can't. I've been waiting for this feature as well.
What you can do is wrap your deployment in a custom script that performs all your other logic, which also makes sense given that what you want to do is probably not strictly a "deploy thing" but more like "configure this and that now that the deploy is finished".
Another solution would be to rely on CodeBuild to perform your deploys and define there all your steps and which custom scripts to run after a deploy (I personally use this solution, with a specific stack to deploy this particular CodeBuild project).
I am trying to run a Play Framework application on the AWS EC2 Container Service (ECS). I am using sbt-ecr to build and upload the image.
Now I would like to pass different command line parameters to Play, for instance -Dconfig=production.conf.
Usually when I run it locally my command looks like this:
docker run -p 80:9000 myimage -Dconfig.resource=production.conf
The port settings can be configured separately in AWS. How can I set Play's command-line parameters for containers on ECS?
Apparently my problem was of a completely different nature and didn't have anything at all to do with the entrypoint or cmd arguments.
The task didn't start because the log group that was configured for the container didn't exist.
Here is how to pass parameters to an image on ECS just like on the command line or using the docker CMD instruction. Just put them in the "Command" field in the "Environment" section of the container configuration like so:
-Dconfig.resource=production.conf,-Dhttps.port=9443
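For reference, here is a rough sketch of how the same thing could be done by registering the task definition with boto3 instead of the console (the family name, image and memory values are placeholders). The comma-separated Command field above ends up as this command list:

import boto3

ecs = boto3.client('ecs')
ecs.register_task_definition(
    family='play-app',  # placeholder family name
    containerDefinitions=[{
        'name': 'play-app',
        'image': 'myimage:latest',  # placeholder image reference
        'memory': 1024,
        'portMappings': [{'containerPort': 9000, 'hostPort': 80}],
        # Equivalent of: docker run myimage -Dconfig.resource=production.conf
        'command': ['-Dconfig.resource=production.conf', '-Dhttps.port=9443'],
    }],
)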
In my Fabric scripts I have the following problem. I have a main task called autodeploy. Within this task I have some tasks that I want to run only once, locally; all remote tasks should run on each of the hosts in the host list.
env.roledefs = {
    'testing': ['t-server-01', 't-server-02'],
    'staging': ['s-server-01', 's-server-02'],
    'live': ['l-server-01', 'l-server-02']
}
def localtask1():
# download artifact
def localtask2():
# cleanup locally
def remotetask():
# deploy artifact to all hosts
def autodeploy():
localtask1() # run this task only once, locally
remotetask() # run this task on all hosts
localtask2() # run this task only once
The call looks like the following; I want to pass the role as an argument:
fab -R testing autodeploy
Use the execute function inside the wrapper function autodeploy, and specify a host list for the remote task.
The other two you can call with execute as well, like the remote task, or directly. Use the local function inside them and you'll be fine, and you won't need to have SSH running on localhost.
The docs are here for how best to use the new execute function.
EDIT
Since you mention a different use case in the comments, I'll mock up how you'd do that, from bits already given in the documentation, adding the parameter-passing part.
code:
# copy the code above
# redefine this one
from fabric.api import execute

def autodeploy(role_from_arg):
    localtask1()
    # 'roles' is the keyword argument execute() uses to build the host list
    execute(remotetask, roles=[role_from_arg])
    localtask2()

# call it like: fab autodeploy:testing
Use the runs_once decorator.
from fabric.api import local, runs_once

@runs_once
def localtask1():
    local('command')
You can abuse the hosts decorator to force a single task to run once only, by specifying "localhost" as the host.
Example:
import fabric.decorators

@fabric.decorators.hosts("localhost")
def localtask1():
    # download artifact