How to start airflow-scheduler pod for Google Cloud Composer? - google-cloud-platform

A Composer cluster went down because its airflow-worker pods needed a Docker image that was not accessible.
Access to the Docker image has now been restored, but the airflow-scheduler pod has disappeared.
I tried updating the Composer environment by setting a new environment variable, which failed with the following error:
UPDATE operation on this environment failed X minutes ago with the following error message: (404)
Reason: Not Found
HTTP response headers: HTTPHeaderDict({
"Date": "recently",
"Audit-Id": "my-own-audit-id",
"Content-Length": "236",
"Content-Type": "application/json",
"Cache-Control": "no-cache, private"
})
HTTP response body: {
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "deployments.apps \"airflow-scheduler\" not found",
"reason": "NotFound",
"details": {
"name": "airflow-scheduler",
"group": "apps",
"kind": "deployments"
},
"code": 404
}
Error in Composer Agent
How can I launch an airflow-scheduler pod on my Composer cluster?
What is the .yaml configuration file I need to apply?
I tried launching the scheduler from inside another pod with the airflow scheduler command; while that does start a scheduler process, it is not a Kubernetes pod and will not integrate well with the managed Airflow cluster.

To restart the airflow-scheduler, run the following:
# Fetch the old deployment, and pipe it into the replace command.
COMPOSER_WORKSPACE=$(kubectl get namespace | egrep -i 'composer|airflow' | awk '{ print $1 }')
kubectl get deployment airflow-scheduler --output yaml \
--namespace=${COMPOSER_WORKSPACE} | kubectl replace --force -f -
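To confirm the scheduler came back, you can then check the deployment and its pod (a quick sanity check, reusing the same namespace variable):
# Both the deployment and a running airflow-scheduler pod should show up again.
kubectl get deployment airflow-scheduler --namespace=${COMPOSER_WORKSPACE}
kubectl get pods --namespace=${COMPOSER_WORKSPACE} | grep airflow-scheduler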

Related

How do I use HashiCorp Vault in Cloud Foundry?

I have a Node.js web service that I push to Cloud Foundry (PCF), and I store some credentials in Vault. When a user hits my web service endpoint with credentials, I fetch the stored credentials from Vault, compare them against the credentials from the request, and allow the request to be processed if they match; otherwise I reject it.
To provision Vault in PCF I use the following command:
cf create-service hashicorp-vault shared foo-vault
Then I create a service key using this command:
cf create-service-key foo-vault foo-vault-key
Then I bind the service to the app like this:
cf bind-service foo-ws foo-vault
I restage the web service with this command:
cf restage foo-ws
and when I print the environment variables, I get these values:
{
"hashicorp-vault": [{
"credentials": {
"address": "http://somehost:433/",
"auth": {
"accessor": "kMr3iCSlekSN2d1vpPjbjzUk",
"token": "some token"
},
"backends": {
"generic": [
"cf/7f1a12a9-4a52-4151-bc96-874380d30182/secret",
"cf/c4073566-baee-48ae-88e9-7c7c7e0118eb/secret"
],
"transit": [
"cf/7f1a12a9-4a52-4151-bc96-874380d30182/transit",
"cf/c4073566-baee-48ae-88e9-7c7c7e0118eb/transit"
]
},
"backends_shared": {
"organization": "cf/8d4b992f-cca3-4876-94e0-e49170eafb67/secret",
"space": "cf/bdace353-e813-4efb-8122-58b9bd98e3ab/secret"
}
},
"label": "hashicorp-vault",
"name": "my-vault",
"plan": "shared",
"provider": null,
"syslog_drain_url": null,
"tags": [],
"volume_mounts": []
}]
}
So my question is: is there a way to define the backends, token, and address?
Thanks in advance for your help.
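For reference, the bound credentials block shown above is what the standard CF CLI prints for the app's environment, e.g. with:
cf env foo-ws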

UnrecognizedClientException when invoking a lambda from another lambda using localstack

I am running localstack inside of a docker container with this docker-compose.yml file.
version: '2.1'
services:
  localstack:
    image: localstack/localstack
    ports:
      - "4567-4597:4567-4597"
      - "${PORT_WEB_UI-8080}:${PORT_WEB_UI-8080}"
    environment:
      - SERVICES=${SERVICES- }
      - DEBUG=1
      - DATA_DIR=${DATA_DIR- }
      - PORT_WEB_UI=${PORT_WEB_UI- }
      - LAMBDA_EXECUTOR=docker
      - KINESIS_ERROR_PROBABILITY=${KINESIS_ERROR_PROBABILITY- }
      - DOCKER_HOST=unix:///var/run/docker.sock
    volumes:
      - "${TMPDIR:-/tmp/localstack}:/tmp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
To start localstack I run TMPDIR=/private$TMPDIR docker-compose up.
I have created two lambdas. When I run aws lambda list-functions --endpoint-url http://localhost:4574 --region=us-east-1 this is the output.
{
"Functions": [
{
"TracingConfig": {
"Mode": "PassThrough"
},
"Version": "$LATEST",
"CodeSha256": "qmDiumefhM0UutYv32By67cj24P/NuHIhKHgouPkDBs=",
"FunctionName": "handler",
"LastModified": "2019-08-08T17:56:58.277+0000",
"RevisionId": "ffea379b-4913-420b-9444-f1e5d51b5908",
"CodeSize": 5640253,
"FunctionArn": "arn:aws:lambda:us-east-1:000000000000:function:handler",
"Environment": {
"Variables": {
"DB_NAME": "somedbname",
"IS_PRODUCTION": "FALSE",
"SERVER": "xxx.xxx.xx.xxx",
"DB_PASS": "somepass",
"DB_USER": "someuser",
"PORT": "someport"
}
},
"Handler": "handler",
"Role": "r1",
"Timeout": 3,
"Runtime": "go1.x",
"Description": ""
},
{
"TracingConfig": {
"Mode": "PassThrough"
},
"Version": "$LATEST",
"CodeSha256": "wbT8YzTsYW4sIOAXLtjprrveq5NBMVUaa2srNvwLxM8=",
"FunctionName": "paymentenginerouter",
"LastModified": "2019-08-08T18:00:28.923+0000",
"RevisionId": "bd79cb2e-6531-4987-bdfc-25a5d87e93f4",
"CodeSize": 6602279,
"FunctionArn": "arn:aws:lambda:us-east-1:000000000000:function:paymentenginerouter",
"Environment": {
"Variables": {
"DB_QUERY_LAMBDA": "handler",
"AWS_REGION": "us-east-1"
}
},
"Handler": "handler",
"Role": "r1",
"Timeout": 3,
"Runtime": "go1.x",
"Description": ""
}
]
}
Inside the paymentenginerouter code I am attempting to call the handler lambda via:
lambdaParams := &invoke.InvokeInput{
    FunctionName:   aws.String(os.Getenv("DB_QUERY_LAMBDA")),
    InvocationType: aws.String("RequestResponse"),
    LogType:        aws.String("Tail"),
    Payload:        payload,
}
result, err := svc.Invoke(lambdaParams)
if err != nil {
    resp.StatusCode = 500
    log.Fatal("Error while invoking lambda:\n", err.Error())
}
Here invoke is an import alias for the Lambda service package: invoke "github.com/aws/aws-sdk-go/service/lambda"
When I run the paymentenginerouter lambda via:
aws lambda invoke --function paymentenginerouter --payload '{ "body": "{\"id\":\"12\",\"internalZoneCode\":\"xxxxxx\",\"vehicleId\":\"xxxxx\",\"vehicleVrn\":\"vehicleVrn\",\"vehicleVrnState\":\"vehicleVrnState\",\"durationInMinutes\":\"120\",\"verification\":{\"Token\":null,\"lpn\":null,\"ZoneCode\":null,\"IsExtension\":null,\"ParkingActionId\":null},\"selectedBillingMethodId\":\"xxxxxx\",\"startTimeLocal\":\"2019-07-29T11:36:47\",\"stopTimeLocal\":\"2019-07-29T13:36:47\",\"vehicleVin\":null,\"orderId\":\"1\",\"parkingActionType\":\"OnDemand\",\"digitalPayPaymentInfo\":{\"Provider\":\"<string>\",\"ChasePayData\":{\"ConsumerIP\":\"xxxx\",\"DigitalSessionID\":\"xxxx\",\"TransactionReferenceKey\":\"xxxx\"}}}"
}' --endpoint-url=http://localhost:4574 --region=us-east-1 out --debug
I receive this error:
localstack_1 | 2019/08/08 20:02:28 Error while invoking lambda:
localstack_1 | UnrecognizedClientException: The security token included in the request is invalid.
localstack_1 | status code: 403, request id: bd4e3c15-47ae-44a2-ad6a-376d78d8fd92
Note
I can run the handler lambda without error by calling it directly through the CLI:
aws lambda invoke --function handler --payload '{"body": "SELECT TOKEN, NAME, CREATED, ENABLED, TIMESTAMP FROM dbo.PAYMENT_TOKEN WHERE BILLING_METHOD_ID='xxxxxxx'"}' --endpoint-url=http://localhost:4574 --region=us-east-1 out --debug
I thought the AWS credentials were set up according to the environment variables in localstack, but I could be mistaken. Any idea how to get past this problem?
I am quite new to AWS lambdas and an absolute noob when it comes to localstack so please ask for more details if you need them. It's possible I am missing a critical piece of information in my description.
The error you are receiving, "The security token included in the request is invalid.", means that your lambda is trying to call out to the real AWS with invalid credentials, rather than going to localstack.
When running a lambda inside localstack whose code has to call out to other AWS services hosted in localstack, you need to ensure that any endpoints it uses are redirected to localstack.
To do this there is a documented feature in localstack found here:
https://github.com/localstack/localstack#configurations
LOCALSTACK_HOSTNAME: Name of the host where LocalStack services are available.
This is needed in order to access the services from within your Lambda functions
(e.g., to store an item to DynamoDB or S3 from Lambda). The variable
LOCALSTACK_HOSTNAME is available for both, local Lambda execution
(LAMBDA_EXECUTOR=local) and execution inside separate Docker containers
(LAMBDA_EXECUTOR=docker).
Within your lambda code, ensure you use this environment variable to set the hostname (preceded with http:// and suffixed with the port number of that service in localstack, e.g. :4569 for dynamodb). This will ensure that the calls go to the right place.
Example Go snippet that would be added to your lambda where it makes a call to DynamoDB:
awsConfig.WithEndpoint("http://" + os.Getenv("LOCALSTACK_HOSTNAME") + ":4569")

Retrieving the Id of an Elastic Beanstalk (EB) environment in the terminal?

How do I get the Id of my Elastic Beanstalk (EB) environment in the terminal?
This command returns an object in the terminal with some properties for the environment:
aws elasticbeanstalk describe-environments --environment-names my-env
Is it possible to get only the EnvironmentId from that object in the terminal?
{
"Environments": [
{
"ApplicationName": "xxxx-xxxx-xxxx-xxxxx",
"EnvironmentName": "my-env",
"VersionLabel": "Initial Version",
"Status": "Ready",
"EnvironmentArn": "arn:aws:elasticbeanstalk:eu-central-1:xxxxxxx:environment/xxxx-xxxxx-xxxx-xxxx/my-env",
"EnvironmentLinks": [],
"PlatformArn": "arn:aws:elasticbeanstalk:eu-central-1::platform/Multi-container Docker running on 64bit Amazon Linux/2.11.0",
"EndpointURL": "awseb-e-2-xxxxx-xxxxxx-xxxxx.eu-central-1.elb.amazonaws.com",
"SolutionStackName": "64bit Amazon Linux 2018.03 v2.11.0 running Multi-container Docker 18.03.1-ce (Generic)",
"EnvironmentId": "e-1234567",
"CNAME": "my-env.elasticbeanstalk.com",
"AbortableOperationInProgress": false,
"Tier": {
"Version": "1.0",
"Type": "Standard",
"Name": "WebServer"
},
"Health": "Green",
"DateUpdated": "2018-07-12T06:10:17.056Z",
"DateCreated": "2018-07-11T20:03:26.970Z"
}
]
}
In this case, the result I'm expecting to appear in my terminal is e-1234567.
If you want to use the AWS CLI for this, you can filter the output of aws elasticbeanstalk describe-environments --environment-names my-env with a tool such as grep. One possible (by no means optimal or concise) solution:
aws elasticbeanstalk describe-environments --environment-names my-env | grep EnvironmentId | grep -Eo "e-[A-Za-z0-9_]+"
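Alternatively, the CLI can do the filtering itself with a JMESPath --query expression and text output, which avoids grep entirely (assuming a reasonably recent AWS CLI):
# Prints just the EnvironmentId, e.g. e-1234567
aws elasticbeanstalk describe-environments --environment-names my-env --query "Environments[0].EnvironmentId" --output text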
The better solution is to use an AWS SDK such as boto3 (the Python AWS SDK).
import boto3
elasticbeanstalk = boto3.client(
'elasticbeanstalk',
region_name='us-west-2'
)
response = elasticbeanstalk.describe_environments(
EnvironmentNames=['my-env']
)
if response['Environments']:
    print(response['Environments'][0]['EnvironmentId'])
AWS SDKs are also available in other popular languages such as Go, Java, Ruby, JavaScript, and PHP.

AWS CLI Update_Stack can't pass parameter value containing a /

I've been banging my head all morning trying to create a PowerShell script that will ultimately update an AWS stack. Everything is great right up to the point where I have to pass parameters to the CloudFormation template.
One of the parameter values (ParameterKey=ZipFilePath) contains a /, but the script fails, complaining that it expected a = but found a /. I've tried escaping the slash, but then the API complains that it found a backslash instead of an equals sign. Where am I going wrong?
... <snip creating a zip file> ...
$filename = ("TotalCommApi-" + $DateTime + ".zip")
aws s3 cp $filename ("s3://S3BucketName/TotalCommApi/" + $filename)
aws cloudformation update-stack --stack-name TotalCommApi-Dev --template-url https://s3-region.amazonaws.com/S3bucketName/TotalCommApi/TotalCommApiCFTemplate.json --parameters ParameterKey=S3BucketName,ParameterValue=S3BucketNameValue,UsePreviousValue=false ParameterKey=ZipFilePath,ParameterValue=("TotalCommApi/" + $filename) ,UsePreviousValue=false
cd C:\Projects\TotalCommApi\TotalComm_API
And here is the pertinent section from the CloudFormation Template:
"Description": "An AWS Serverless Application that uses the ASP.NET Core framework running in Amazon Lambda.",
"Parameters": {
"ZipFilePath": {
"Type": "String",
"Description": "Path to the zip file containing the Lambda Functions code to be published."
},
"S3BucketName": {
"Type": "String",
"Description": "Name of the S3 bucket where the ZipFile resides."
}
},
"AWSTemplateFormatVersion": "2010-09-09",
"Outputs": {},
"Conditions": {},
"Resources": {
"ProxyFunction": {
"Type": "AWS::Lambda::Function",
"Properties": {
"Code": {
"S3Bucket": {"Ref": "S3BucketName" },
"S3Key": { "Ref": "ZipFilePath" }
},
And this is the error message generated by PowerShell ISE
[image removed]
Update: I am using Windows 7, which comes with PowerShell 2. I upgraded to PowerShell 4. Then my script yielded this error:
On the recommendation of a consulting firm, I uninstalled the CLI that I had installed via MSI, upgraded Python to 3.6.2, and re-installed the CLI via pip. I still get the same error. I echoed the command to the screen and this is what I see:
upload: .\TotalCommApi-201806110722.zip to s3://S3bucketName/TotalCommApi/TotalCommApi-201806110722.zip
aws
cloudformation
update-stack
--stack-name
TotalCommApi-Dev
--template-url
https://s3-us-west-2.amazonaws.com/s3BucketName/TotalCommApi/TotalCommApiCFTemplate.json
--parameters
ParameterKey=S3BucketName
UsePreviousValue=true
ParameterKey=ZipFilePath
ParameterValue=TotalCommApi/TotalCommApi-201806110722.zip
Sorry for the delay getting back to you on this - the good news is that I might have a hint about what your issue is.
ParameterKey=ZipFilePath,ParameterValue=("TotalCommApi/" + $filename) ,UsePreviousValue=false
I was driving myself mad trying to reproduce this issue. Why? Because I assumed that the space after ("TotalCommApi/" + $filename) was an artifact from copying, not the actual value that you were using. When I added the space in:
aws cloudformation update-stack --stack-name test --template-url https://s3.amazonaws.com/test-bucket-06-09/test.template --parameters ParameterKey=S3BucketName,ParameterValue=$bucketname,UsePreviousValue=false ParameterKey=ZipFilePath,ParameterValue=testfolder/$filename ,UsePreviousValue=false
Error parsing parameter '--parameters': Expected: '=', received: ','
This isn't exactly your error message (, instead of /), but I think it's probably a similar issue in your case - check to make sure the values that are being used in your command don't have extra spaces somewhere.
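As a sketch of the fix with the names from your script (the $zipFilePath helper variable is just for illustration): build the value first, then pass each --parameters entry with no stray spaces, quoting each entry so PowerShell hands it to the CLI intact:
# Build the ZipFilePath value up front, then keep each parameter entry as one quoted token.
$zipFilePath = "TotalCommApi/" + $filename
aws cloudformation update-stack --stack-name TotalCommApi-Dev --template-url https://s3-region.amazonaws.com/S3bucketName/TotalCommApi/TotalCommApiCFTemplate.json --parameters "ParameterKey=S3BucketName,ParameterValue=S3BucketNameValue,UsePreviousValue=false" "ParameterKey=ZipFilePath,ParameterValue=$zipFilePath,UsePreviousValue=false"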

Deploy image to AWS Elastic Beanstalk from private Docker repo

I'm trying to pull a Docker image from a private repo and deploy it on AWS Elastic Beanstalk with the help of a Dockerrun.aws.json packed in a zip. Its content is:
{
  "AWSEBDockerrunVersion": "1",
  "Authentication": {
    "Bucket": "my-bucket",
    "Key": "docker/.dockercfg"
  },
  "Image": {
    "Name": "namespace/repo:tag",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "8080"
    }
  ]
}
Where "my-bucket" is my bucket's name on s3, which uses the same location as my BS environment. Configuration that's set in key is the result of
$ docker login
invoked in the docker2boot app's terminal. The resulting file is then copied to the "docker" folder in "my-bucket". The image definitely exists.
After that I upload the .zip with the Dockerrun file to EB, and on deploy I get:
Activity execution failed, because: WARNING: Invalid auth configuration file
What am I missing?
Thanks in advance
Docker has updated the configuration file path from ~/.dockercfg to ~/.docker/config.json, and took that opportunity to make a breaking change to the configuration file format.
AWS, however, still expects the former format, the one used in ~/.dockercfg (see the file name in their documentation):
{
  "https://index.docker.io/v1/": {
    "auth": "__auth__",
    "email": "__email__"
  }
}
Which is incompatible with the new format used in ~/.docker/config.json:
{
  "auths": {
    "https://index.docker.io/v1/": {
      "auth": "__auth__",
      "email": "__email__"
    }
  }
}
They are pretty similar though. So if your version of Docker generates the new format, just strip the auths line and its corresponding curly brace and you are good to go.
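A quick way to do that conversion, assuming jq is installed and Docker Hub is the only registry entry in the file:
# Keep only the object under "auths", which matches the old .dockercfg layout
jq '.auths' ~/.docker/config.json > .dockercfg
Then upload the resulting .dockercfg to the S3 location referenced in Dockerrun.aws.json (docker/.dockercfg in my-bucket).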