I am running localstack inside of a docker container with this docker-compose.yml file.
version: '2.1'
services:
  localstack:
    image: localstack/localstack
    ports:
      - "4567-4597:4567-4597"
      - "${PORT_WEB_UI-8080}:${PORT_WEB_UI-8080}"
    environment:
      - SERVICES=${SERVICES- }
      - DEBUG=1
      - DATA_DIR=${DATA_DIR- }
      - PORT_WEB_UI=${PORT_WEB_UI- }
      - LAMBDA_EXECUTOR=docker
      - KINESIS_ERROR_PROBABILITY=${KINESIS_ERROR_PROBABILITY- }
      - DOCKER_HOST=unix:///var/run/docker.sock
    volumes:
      - "${TMPDIR:-/tmp/localstack}:/tmp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
To start localstack I run TMPDIR=/private$TMPDIR docker-compose up.
I have created two lambdas. When I run aws lambda list-functions --endpoint-url http://localhost:4574 --region=us-east-1 this is the output.
{
"Functions": [
{
"TracingConfig": {
"Mode": "PassThrough"
},
"Version": "$LATEST",
"CodeSha256": "qmDiumefhM0UutYv32By67cj24P/NuHIhKHgouPkDBs=",
"FunctionName": "handler",
"LastModified": "2019-08-08T17:56:58.277+0000",
"RevisionId": "ffea379b-4913-420b-9444-f1e5d51b5908",
"CodeSize": 5640253,
"FunctionArn": "arn:aws:lambda:us-east-1:000000000000:function:handler",
"Environment": {
"Variables": {
"DB_NAME": "somedbname",
"IS_PRODUCTION": "FALSE",
"SERVER": "xxx.xxx.xx.xxx",
"DB_PASS": "somepass",
"DB_USER": "someuser",
"PORT": "someport"
}
},
"Handler": "handler",
"Role": "r1",
"Timeout": 3,
"Runtime": "go1.x",
"Description": ""
},
{
"TracingConfig": {
"Mode": "PassThrough"
},
"Version": "$LATEST",
"CodeSha256": "wbT8YzTsYW4sIOAXLtjprrveq5NBMVUaa2srNvwLxM8=",
"FunctionName": "paymentenginerouter",
"LastModified": "2019-08-08T18:00:28.923+0000",
"RevisionId": "bd79cb2e-6531-4987-bdfc-25a5d87e93f4",
"CodeSize": 6602279,
"FunctionArn": "arn:aws:lambda:us-east-1:000000000000:function:paymentenginerouter",
"Environment": {
"Variables": {
"DB_QUERY_LAMBDA": "handler",
"AWS_REGION": "us-east-1"
}
},
"Handler": "handler",
"Role": "r1",
"Timeout": 3,
"Runtime": "go1.x",
"Description": ""
}
]
}
Inside the paymentenginerouter code I am attempting to call the handler lambda via:
lambdaParams := &invoke.InvokeInput{
    FunctionName:   aws.String(os.Getenv("DB_QUERY_LAMBDA")),
    InvocationType: aws.String("RequestResponse"),
    LogType:        aws.String("Tail"),
    Payload:        payload,
}

result, err := svc.Invoke(lambdaParams)
if err != nil {
    resp.StatusCode = 500
    log.Fatal("Error while invoking lambda:\n", err.Error())
}
Where invoke is an import: invoke "github.com/aws/aws-sdk-go/service/lambda"
When I run the paymentenginerouter lambda via:
aws lambda invoke --function paymentenginerouter --payload '{ "body": "{\"id\":\"12\",\"internalZoneCode\":\"xxxxxx\",\"vehicleId\":\"xxxxx\",\"vehicleVrn\":\"vehicleVrn\",\"vehicleVrnState\":\"vehicleVrnState\",\"durationInMinutes\":\"120\",\"verification\":{\"Token\":null,\"lpn\":null,\"ZoneCode\":null,\"IsExtension\":null,\"ParkingActionId\":null},\"selectedBillingMethodId\":\"xxxxxx\",\"startTimeLocal\":\"2019-07-29T11:36:47\",\"stopTimeLocal\":\"2019-07-29T13:36:47\",\"vehicleVin\":null,\"orderId\":\"1\",\"parkingActionType\":\"OnDemand\",\"digitalPayPaymentInfo\":{\"Provider\":\"<string>\",\"ChasePayData\":{\"ConsumerIP\":\"xxxx\",\"DigitalSessionID\":\"xxxx\",\"TransactionReferenceKey\":\"xxxx\"}}}"
}' --endpoint-url=http://localhost:4574 --region=us-east-1 out --debug
I receive this error:
localstack_1 | 2019/08/08 20:02:28 Error while invoking lambda:
localstack_1 | UnrecognizedClientException: The security token included in the request is invalid.
localstack_1 | status code: 403, request id: bd4e3c15-47ae-44a2-ad6a-376d78d8fd92
Note
I can run the handler lambda without error by calling it directly through the cli:
aws lambda invoke --function handler --payload '{"body": "SELECT TOKEN, NAME, CREATED, ENABLED, TIMESTAMP FROM dbo.PAYMENT_TOKEN WHERE BILLING_METHOD_ID='xxxxxxx'"}' --endpoint-url=http://localhost:4574 --region=us-east-1 out --debug
I thought the AWS credentials were set up according to the environment variables in localstack, but I could be mistaken. Any idea how to get past this problem?
I am quite new to AWS lambdas and an absolute noob when it comes to localstack, so please ask for more details if you need them. It's possible I am missing a critical piece of information in my description.
The error you are receiving, The security token included in the request is invalid., means that your lambda is trying to call out to real AWS with invalid credentials rather than going to Localstack.
When a lambda running inside Localstack has to call out to other AWS services that are also hosted in Localstack, you need to ensure that the endpoints it uses are redirected to Localstack.
To do this there is a documented feature in localstack found here:
https://github.com/localstack/localstack#configurations
LOCALSTACK_HOSTNAME: Name of the host where LocalStack services are available.
This is needed in order to access the services from within your Lambda functions
(e.g., to store an item to DynamoDB or S3 from Lambda). The variable
LOCALSTACK_HOSTNAME is available for both, local Lambda execution
(LAMBDA_EXECUTOR=local) and execution inside separate Docker containers
(LAMBDA_EXECUTOR=docker).
Within your lambda code, ensure you use this environment variable to build the endpoint hostname (prefixed with http:// and suffixed with the port number of the target service in localstack, e.g. :4569 for DynamoDB or :4574 for Lambda). This will ensure that the calls go to the right place.
Example Go snippet that would be added to your lambda where you make a call to DynamoDB:
awsConfig.WithEndpoint("http://" + os.Getenv("LOCALSTACK_HOSTNAME") + ":4569")
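Applied to your case, here is a rough Go sketch (not your exact code) of how the Lambda client in paymentenginerouter could be built so that svc.Invoke goes to Localstack's Lambda endpoint on :4574, the port used in your setup, instead of real AWS:
package main

import (
    "os"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    invoke "github.com/aws/aws-sdk-go/service/lambda"
)

// newLambdaClient returns a Lambda client that talks to Localstack when
// LOCALSTACK_HOSTNAME is set (i.e. when the code runs inside Localstack)
// and to real AWS otherwise.
func newLambdaClient() *invoke.Lambda {
    cfg := &aws.Config{Region: aws.String("us-east-1")}
    if host := os.Getenv("LOCALSTACK_HOSTNAME"); host != "" {
        cfg.Endpoint = aws.String("http://" + host + ":4574")
    }
    return invoke.New(session.Must(session.NewSession()), cfg)
}
Create your svc with this helper (svc := newLambdaClient()) and keep the Invoke call as it is.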
Related
I have an ECS cluster that will be created by my CDK stack. Before my ECS service stack deployment I have to run a Fargate task to generate the build files and configs for my application. I want to run a standalone task inside an existing ECS cluster.
There are two questions here. I will try to answer both.
First: how to run the Fargate task via CDK.
You need to create a Rule which runs your ECS task on a schedule (or on some other event):
import { Rule, Schedule } from '@aws-cdk/aws-events';
import { EcsTask } from '@aws-cdk/aws-events-targets';

new Rule(this, 'ScheduleRule', {
  // e.g. Schedule.rate(Duration.hours(1)) or Schedule.expression('cron(...)')
  schedule: schedule,
  targets: [
    new EcsTask({
      cluster,
      taskDefinition: task,
    }),
  ],
});
Second: how to use an existing cluster.
You can look up your cluster by its attributes:
import { Cluster } from '@aws-cdk/aws-ecs';

let cluster = Cluster.fromClusterAttributes(this, 'cluster_id', {
  clusterName: "CLUSTER_NAME",
  securityGroups: [],
  vpc: iVpc,
});
Update:
You can also trigger your task from a custom event pattern:
new Rule(this, 'EventPatternRule', {
  eventPattern: {
    source: ['aws.codepipeline'],
    detailType: ['CodePipeline Pipeline Execution State Change'],
    account: ['123456789012'],
    region: ['us-east-1'],
    resources: [
      'arn:aws:codepipeline:us-east-1:123456789012:pipeline:myPipeline',
    ],
    detail: {
      pipeline: ['myPipeline'],
      state: ['STARTED'],
    },
  },
  targets: [
    new EcsTask({
      cluster,
      taskDefinition: task,
    }),
  ],
});
Please see this doc for an explanation of Event Patterns.
I was able to setup AutoScaling events as rules in EventBridge to trigger SSM Commands, but I've noticed that with my chosen Target Value the event is passed to all my active EC2 Instances. My Target key is a tag shared by those instances, so my mistake makes sense now.
I'm pretty new to EventBridge, so I was wondering if there's a way to actually target the instance that triggered the AutoScaling event (as in extracting the "InstanceId" that's present in the event data and use that as my new Target Value). I saw the Input Transformer, but I think that just transforms the event data to pass to the target.
Thanks!
EDIT - help with JS code for Lambda + SSM RunCommand
I realize I can achieve this by setting EventBridge to invoke a Lambda function instead of the SSM RunCommand directly. Can anyone help with the JavaScript code to call a shell command on the EC2 instance specified in the event data (event.detail.EC2InstanceId)? I can't seem to find a relevant and up-to-date base template online, and I'm not familiar enough with JS or Lambda. Any help is greatly appreciated! Thanks
Sample of event data, as per the AWS docs:
{
"version": "0",
"id": "12345678-1234-1234-1234-123456789012",
"detail-type": "EC2 Instance Launch Successful",
"source": "aws.autoscaling",
"account": "123456789012",
"time": "yyyy-mm-ddThh:mm:ssZ",
"region": "us-west-2",
"resources": [
"auto-scaling-group-arn",
"instance-arn"
],
"detail": {
"StatusCode": "InProgress",
"Description": "Launching a new EC2 instance: i-12345678",
"AutoScalingGroupName": "my-auto-scaling-group",
"ActivityId": "87654321-4321-4321-4321-210987654321",
"Details": {
"Availability Zone": "us-west-2b",
"Subnet ID": "subnet-12345678"
},
"RequestId": "12345678-1234-1234-1234-123456789012",
"StatusMessage": "",
"EndTime": "yyyy-mm-ddThh:mm:ssZ",
"EC2InstanceId": "i-1234567890abcdef0",
"StartTime": "yyyy-mm-ddThh:mm:ssZ",
"Cause": "description-text"
}
}
Edit 2 - my Lambda code so far
'use strict'
const ssm = new (require('aws-sdk/clients/ssm'))()
exports.handler = async (event) => {
  const instanceId = event.detail.EC2InstanceId
  var params = {
    DocumentName: "AWS-RunShellScript",
    InstanceIds: [ instanceId ],
    TimeoutSeconds: 30,
    Parameters: {
      commands: ["/path/to/my/ec2/script.sh"],
      workingDirectory: [],
      executionTimeout: ["15"]
    }
  };
  const data = await ssm.sendCommand(params).promise()
  const response = {
    statusCode: 200,
    body: "Run Command success",
  };
  return response;
}
Yes, but through Lambda
EventBridge -> Lambda (using SSM api) -> EC2
Thank you @Sándor Bakos for helping me out!! My JavaScript ended up not working for some reason, so I ended up just using part of the Python code linked in the comments.
1. add ssm:SendCommand permission:
After I let Lambda create a basic role during function creation, I added an inline policy to allow Systems Manager's SendCommand. This needs access to your documents/*, instances/* and managed-instances/*.
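For reference, an inline policy along these lines should cover it (a sketch; tighten the Resource ARNs to your own region, account and instances as needed):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ssm:SendCommand",
      "Resource": [
        "arn:aws:ssm:*:*:document/*",
        "arn:aws:ec2:*:*:instance/*",
        "arn:aws:ssm:*:*:managed-instance/*"
      ]
    }
  ]
}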
2. code - python 3.9
import boto3
import botocore

def lambda_handler(event=None, context=None):
    try:
        client = boto3.client('ssm')
        instance_id = event['detail']['EC2InstanceId']
        command = '/path/to/my/script.sh'

        # Run the shell script on the instance that triggered the event
        client.send_command(
            InstanceIds=[instance_id],
            DocumentName='AWS-RunShellScript',
            Parameters={
                'commands': [command],
                'executionTimeout': ['60']
            }
        )
    except botocore.exceptions.ClientError as error:
        raise error
You can do this without using Lambda, as I just did, by using EventBridge's input transformers.
I specified a new automation document that called the document I was trying to use (AWS-ApplyAnsiblePlaybooks).
My document takes the InstanceId as a parameter, which is passed to it by the input transformer from EventBridge. I initially had to pass the event into Lambda just to see how to parse the JSON event object for the desired instance ID - this ended up being
$.detail.EC2InstanceId
(it was coming from an AutoScaling group).
I then passed it into a template that was used for the runbook
{"InstanceId":[<instance>]}
This template was read in my runbook as a parameter.
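For reference, this is roughly how those two pieces fit together on the EventBridge target (field names as used by the PutTargets API's InputTransformer; in the console they appear as Input path and Input template):
"InputTransformer": {
  "InputPathsMap": { "instance": "$.detail.EC2InstanceId" },
  "InputTemplate": "{\"InstanceId\":[<instance>]}"
}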
These are the SSM playbook inputs I used to run the AWS-ApplyAnsiblePlaybooks document; I just mapped each parameter to the corresponding parameter in the nested playbook:
"inputs": {
  "InstanceIds": ["{{ InstanceId }}"],
  "DocumentName": "AWS-ApplyAnsiblePlaybooks",
  "Parameters": {
    "SourceType": "S3",
    "SourceInfo": {"path": "https://testansiblebucketab.s3.amazonaws.com/"},
    "InstallDependencies": "True",
    "PlaybookFile": "ansible-test.yml",
    "ExtraVariables": "SSM=True",
    "Check": "False",
    "Verbose": "-v",
    "TimeoutSeconds": "3600"
  }
}
See the tutorial below for reference; it uses a document that was already set up to receive the variable:
https://docs.aws.amazon.com/systems-manager/latest/userguide/automation-tutorial-eventbridge-input-transformers.html
This is the full automation playbook I used; most of the parameters are defaults from the nested playbook:
{
"description": "Runs Ansible Playbook on Launch Success Instances",
"schemaVersion": "0.3",
"assumeRole": "<Place your automation role ARN here>",
"parameters": {
"InstanceId": {
"type": "String",
"description": "(Required) The ID of the Amazon EC2 instance."
}
},
"mainSteps": [
{
"name": "RunAnsiblePlaybook",
"action": "aws:runCommand",
"inputs": {
"InstanceIds": ["{{ InstanceId }}"],
"DocumentName": "AWS-ApplyAnsiblePlaybooks",
"Parameters": {
"SourceType": "S3",
"SourceInfo": {"path": "https://testansiblebucketab.s3.amazonaws.com/"},
"InstallDependencies": "True",
"PlaybookFile": "ansible-test.yml",
"ExtraVariables": "SSM=True",
"Check": "False",
"Verbose": "-v",
"TimeoutSeconds": "3600"
}
}
}
]
}
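If you create this automation document from the CLI rather than the console, registering it looks something like this (the document name and file name here are just examples):
aws ssm create-document \
    --name "RunAnsibleOnLaunchSuccess" \
    --document-type "Automation" \
    --document-format JSON \
    --content file://automation-playbook.json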
A Composer cluster went down because its airflow-worker pods needed a Docker image that was not accessible.
Now access to the Docker image has been restored, but the airflow-scheduler pod has disappeared.
I tried updating the Composer Environment by setting a new Environment Variable, which failed with the following error:
UPDATE operation on this environment failed X minutes ago with the following error message: (404)
Reason: Not Found
HTTP response headers: HTTPHeaderDict({
"Date": "recently",
"Audit-Id": "my-own-audit-id",
"Content-Length": "236",
"Content-Type": "application/json",
"Cache-Control": "no-cache, private"
})
HTTP response body: {
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "deployments.apps \"airflow-scheduler\" not found",
"reason": "NotFound",
"details": {
"name": "airflow-scheduler",
"group": "apps",
"kind": "deployments"
},
"code": 404
}
Error in Composer Agent
How can I launch an airflow-scheduler pod on my Composer cluster?
What is the .yaml configuration file I need to apply?
I tried launching the scheduler from inside another pod with airflow scheduler, and while it effectively starts a scheduler, it's not a Kubernetes pod and will not integrate well with the managed airflow cluster.
To restart the airflow-scheduler, run the following:
# Fetch the old deployment, and pipe it into the replace command.
COMPOSER_WORKSPACE=$(kubectl get namespace | egrep -i 'composer|airflow' | awk '{ print $1 }')
kubectl get deployment airflow-scheduler --output yaml \
--namespace=${COMPOSER_WORKSPACE}| kubectl replace --force -f -
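Once the replace has gone through, you can confirm the scheduler pod is back with:
kubectl get pods --namespace=${COMPOSER_WORKSPACE} | grep airflow-scheduler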
I'm trying to create a CodePipeline to deploy an application to EC2 instances using Blue/Green Deployment.
My Deployment Group looks like this:
aws deploy update-deployment-group \
--application-name MySampleAppDeploy \
--deployment-config-name CodeDeployDefault.AllAtOnce \
--service-role-arn arn:aws:iam::1111111111:role/CodeDeployRole \
--ec2-tag-filters Key=Stage,Type=KEY_AND_VALUE,Value=Blue \
--deployment-style deploymentType=BLUE_GREEN,deploymentOption=WITH_TRAFFIC_CONTROL \
--load-balancer-info targetGroupInfoList=[{name="sample-app-alb-targets"}] \
--blue-green-deployment-configuration file://configs/blue-green-deploy-config.json \
--current-deployment-group-name MySampleAppDeployGroup
blue-green-deploy-config.json
{
"terminateBlueInstancesOnDeploymentSuccess": {
"action": "KEEP_ALIVE",
"terminationWaitTimeInMinutes": 1
},
"deploymentReadyOption": {
"actionOnTimeout": "STOP_DEPLOYMENT",
"waitTimeInMinutes": 1
},
"greenFleetProvisioningOption": {
"action": "DISCOVER_EXISTING"
}
}
I'm able to create a blue/green deployment manually using this command, and it works:
aws deploy create-deployment \
--application-name MySampleAppDeploy \
--deployment-config-name CodeDeployDefault.AllAtOnce \
--deployment-group-name MySampleAppDeployGroup \
# I can specify the Target Instances here
--target-instances file://configs/blue-green-target-instances.json \
--s3-location XXX
blue-green-target-instances.json
{
"tagFilters": [
{
"Key": "Stage",
"Value": "Green",
"Type": "KEY_AND_VALUE"
}
]
}
Now, In my CodePipeline Deploy Stage, I have this:
{
"name": "Deploy",
"actions": [
{
"inputArtifacts": [
{
"name": "app"
}
],
"name": "Deploy",
"actionTypeId": {
"category": "Deploy",
"owner": "AWS",
"version": "1",
"provider": "CodeDeploy"
},
"outputArtifacts": [],
"configuration": {
"ApplicationName": "MySampleAppDeploy",
"DeploymentGroupName": "MySampleAppDeployGroup"
/* How can I specify Target Instances here? */
},
"runOrder": 1
}
]
}
All EC2 instances are tagged correctly and everything works as expected when using CodeDeploy via the command line, so I must be missing something about how AWS CodePipeline works in this case.
Thanks
You didn't mention which error you get when you invoke the pipeline. Are you getting this error:
"The deployment failed because no instances were found in your green fleet"
Assuming that is the case: since you are using manual tagging in your CodeDeploy configuration, a Blue/Green deployment with manual tags is not going to work here, because CodeDeploy expects a tagSet to find the "Green" instances and there is no way to provide this information via CodePipeline.
To work around this, use the 'Copy AutoScaling' option for implementing Blue/Green deployments in CodeDeploy via CodePipeline. See Step 10 here [1].
Another workaround is to create a Lambda function that is invoked as an action in your CodePipeline. That Lambda function can trigger the CodeDeploy deployment, specifying the target instances (for example, the green tag filters or AutoScaling group). It will then need to poll the CodeDeploy API at frequent intervals for the status of the deployment and, once the deployment has completed, signal success or failure back to CodePipeline.
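As a rough sketch of that second workaround, written in Go only for illustration (the application and deployment group names come from your question, the S3 revision values are placeholders, and the status polling is only indicated in a comment):
package main

import (
    "context"

    "github.com/aws/aws-lambda-go/lambda"
    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/codedeploy"
    "github.com/aws/aws-sdk-go/service/codepipeline"
)

// Minimal shape of the event CodePipeline sends to a Lambda action.
type pipelineEvent struct {
    Job struct {
        ID string `json:"id"`
    } `json:"CodePipeline.job"`
}

func handler(ctx context.Context, event pipelineEvent) error {
    sess := session.Must(session.NewSession())
    cd := codedeploy.New(sess)
    cp := codepipeline.New(sess)

    // Start the blue/green deployment against the "Green" tagged instances.
    _, err := cd.CreateDeployment(&codedeploy.CreateDeploymentInput{
        ApplicationName:     aws.String("MySampleAppDeploy"),
        DeploymentGroupName: aws.String("MySampleAppDeployGroup"),
        Revision: &codedeploy.RevisionLocation{ // placeholder revision
            RevisionType: aws.String("S3"),
            S3Location: &codedeploy.S3Location{
                Bucket:     aws.String("my-artifact-bucket"),
                Key:        aws.String("app.zip"),
                BundleType: aws.String("zip"),
            },
        },
        TargetInstances: &codedeploy.TargetInstances{
            TagFilters: []*codedeploy.EC2TagFilter{
                {Key: aws.String("Stage"), Value: aws.String("Green"), Type: aws.String("KEY_AND_VALUE")},
            },
        },
    })
    if err != nil {
        _, _ = cp.PutJobFailureResult(&codepipeline.PutJobFailureResultInput{
            JobId: aws.String(event.Job.ID),
            FailureDetails: &codepipeline.FailureDetails{
                Type:    aws.String("JobFailed"),
                Message: aws.String(err.Error()),
            },
        })
        return err
    }

    // A real implementation would poll cd.GetDeployment here and only report
    // success once the deployment has actually finished.
    _, err = cp.PutJobSuccessResult(&codepipeline.PutJobSuccessResultInput{
        JobId: aws.String(event.Job.ID),
    })
    return err
}

func main() { lambda.Start(handler) }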
Here is an example which walks through how to invoke an AWS lambda function in a pipeline in CodePipeline [2].
Ref:
[1] https://docs.aws.amazon.com/codedeploy/latest/userguide/applications-create-blue-green.html
[2] https://docs.aws.amazon.com/codepipeline/latest/userguide/actions-invoke-lambda-function.html
Trying to create a CloudFormation template to configure WAF with a geo location condition. Couldn't find the right template yet. Any pointers would be appreciated.
http://docs.aws.amazon.com/waf/latest/developerguide/web-acl-geo-conditions.html
Unfortunately, the actual answer (as of this writing, July 2018) is that you cannot create geo match sets directly in CloudFormation. You can create them via the CLI or SDK, then reference them in the DataId field of a WAFRule's Predicates property.
Creating a GeoMatchSet with one constraint via CLI:
aws waf-regional get-change-token
aws waf-regional create-geo-match-set --name my-geo-set --change-token <token>
aws waf-regional get-change-token
aws waf-regional update-geo-match-set --change-token <new_token> --geo-match-set-id <id> --updates '[ { "Action": "INSERT", "GeoMatchConstraint": { "Type": "Country", "Value": "US" } } ]'
Now reference that GeoMatchSet id in the CloudFormation:
"WebAclGeoRule": {
"Type": "AWS::WAFRegional::Rule",
"Properties": {
...
"Predicates": [
{
"DataId": "00000000-1111-2222-3333-123412341234" // id from create-geo-match-set
"Negated": false,
"Type": "GeoMatch"
}
]
}
}
There is no documentation for it, but it is possible to create the Geo Match in serverless/cloudformation.
Used the following in serverless:
Resources:
  Geos:
    Type: "AWS::WAFRegional::GeoMatchSet"
    Properties:
      Name: geo
      GeoMatchConstraints:
        - Type: "Country"
          Value: "IE"
Which translated to the following in cloudformation:
"Geos": {
"Type": "AWS::WAFRegional::GeoMatchSet",
"Properties": {
"Name": "geo",
"GeoMatchConstraints": [
{
"Type": "Country",
"Value": "IE"
}
]
}
}
That can then be referenced when creating a rule:
(serverless):
Resources:
  MyRule:
    Type: "AWS::WAFRegional::Rule"
    Properties:
      Name: waf
      Predicates:
        - DataId:
            Ref: "Geos"
          Negated: false
          Type: "GeoMatch"
(cloudformation):
"MyRule": {
"Type": "AWS::WAFRegional::Rule",
"Properties": {
"Name": "waf",
"Predicates": [
{
"DataId": {
"Ref": "Geos"
},
"Negated": false,
"Type": "GeoMatch"
}
]
}
}
I'm afraid that your question is too vague to solicit a helpful response. The CloudFormation User Guide (pdf) defines many different WAF / CloudFront / R53 resources that provide various forms of geo match / geo blocking capability. The link you provided covers a subset of Web Access Control Lists (Web ACL); see AWS::WAF::WebACL on page 2540.
I suggest you have a look and, if you are still stuck, describe what it is you are actually trying to achieve.
Note that the term you used: "geo location condition" doesn't directly relate to an AWS capability that I'm aware of.
Finally, if you are referring to https://aws.amazon.com/about-aws/whats-new/2017/10/aws-waf-now-supports-geographic-match/, then the latest Cloudformation User Guide doesn't seem to have been updated yet to reflect this.