AWS ECS Environment variables aren't set

Although I've configured environment variables in my ECS task definition, for some reason they are not set in the running container. What am I missing? Why are the values empty?
I have the following AWS::ECS::TaskDefinition:
AirflowWebTaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    Family: !Join ['', [!Ref 'AWS::StackName', -dl-airflow-web]]
    ContainerDefinitions:
      - Name: dl-airflow-web
        Cpu: '10'
        Essential: 'true'
        Image: companyname-docker-snapshot-local.jfrog.io/docker-airflow:1.0
        Command: ['webserver']
        Memory: '1024'
        LogConfiguration:
          LogDriver: awslogs
          Options:
            awslogs-group: !Ref 'AirflowCloudwatchLogsGroup'
            awslogs-region: !Ref 'AWS::Region'
            awslogs-stream-prefix: dl-airflow-web
        PortMappings:
          - ContainerPort: 8080
        Environment:
          - Name: LOAD_EX
            Value: n
          - Name: EXECUTOR
            Value: Celery
          - Name: MYQL_HOST
            Value: !Ref 'RDSDNSName'
          - Name: MYSQL_PORT
            Value: !Ref 'RDSPort'
          - Name: MYSQL_DB
            Value: !Ref 'AirflowDBName'
          - Name: USERNAME
            Value: !Ref 'AirflowDBUser'
          - Name: PASSWORD
            Value: !Ref 'AirflowDBPassword'
I am using a Docker image which is a fork of https://github.com/puckel/docker-airflow. The image's entrypoint inspects environment variables as follows:
#!/usr/bin/env bash
AIRFLOW_HOME="/usr/local/airflow"
CMD="airflow"
TRY_LOOP="20"
: ${MYSQL_HOST:="default-mysql"}
: ${MYSQL_PORT:="3306"}
where the $MYSQL_* variables fall back to a default if they have not been set in the docker run command.
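(This is the standard shell default-assignment idiom. A minimal standalone sketch of its behaviour, saved for example as demo.sh, a hypothetical name:)
#!/usr/bin/env bash
# If MYSQL_HOST is unset or empty, assign it "default-mysql"; otherwise keep the existing value.
: ${MYSQL_HOST:="default-mysql"}
echo "MYSQL_HOST is $MYSQL_HOST"

# MYSQL_HOST=mysql ./demo.sh   ->  MYSQL_HOST is mysql
# ./demo.sh                    ->  MYSQL_HOST is default-mysql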
When I run the container image from docker-compose using the configuration below, it works and the environment variables are all set:
webserver:
  image: companyname-docker-snapshot-local.jfrog.io/docker-airflow:1.0
  environment:
    - LOAD_EX=n
    - EXECUTOR=Celery
    - MYSQL_HOST=mysql
    - MYSQL_PORT=3306
    - USERNAME=dev-user
    - PASSWORD=dev-secret-pw
    - SQS_HOST=sqs
    - SQS_PORT=9324
    - AWS_DYNAMODB_ENDPOINT=http://dynamodb:8000
  ports:
    - "8090:8080"
  command: webserver
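(A quick way to double-check this is to print the environment inside the running compose service; webserver is the service name from the file above:)
# Lists all MYSQL_* variables as seen by processes inside the container.
docker-compose exec webserver env | grep MYSQL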
And the following command in my entrypoint.sh:
echo "$(date) - Checking for MYSQL (host: $MYSQL_HOST, port: $MYSQL_PORT) connectivity"
Logs this output:
Fri Jun 2 12:55:26 UTC 2017 - Checking for MYSQL (host: mysql, port: 3306) connectivity
But inspecting my CloudWatch logs shows this output with the default values:
Fri Jun 2 14:15:03 UTC 2017 - Checking for MYSQL (host: default-mysql, port: 3306) connectivity
But I can SSH into the EC2 host, run docker inspect [container_id], and verify that the environment variables are set:
"Config": {
    "Hostname": "...",
    "Domainname": "",
    "User": "airflow",
    "AttachStdin": false,
    "AttachStdout": false,
    "AttachStderr": false,
    "ExposedPorts": {
        "5555/tcp": {},
        "8080/tcp": {},
        "8793/tcp": {}
    },
    "Tty": false,
    "OpenStdin": false,
    "StdinOnce": false,
    "Env": [
        "MYSQL_PORT=3306",
        "PASSWORD=rds-secret-pw",
        "USERNAME=rds-user",
        "EXECUTOR=Celery",
        "LOAD_EX=n",
        "MYQL_HOST=rds-cluster-name.cluster-id.aws-region.rds.amazonaws.com",
        "MYSQL_DB=db-name",
        "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
        "DEBIAN_FRONTEND=noninteractive",
        "TERM=linux",
        "LANGUAGE=en_US.UTF-8",
        "LANG=en_US.UTF-8",
        "LC_ALL=en_US.UTF-8",
        "LC_CTYPE=en_US.UTF-8",
        "LC_MESSAGES=en_US.UTF-8"
    ],
    "Cmd": [
        "webserver"
    ],
    "Image": "companyname-docker-snapshot-local.jfrog.io/docker-airflow:1.0",
    "Volumes": null,
    "WorkingDir": "/usr/local/airflow",
    "Entrypoint": [
        "/entrypoint.sh"
    ],
    "OnBuild": null,
    "Labels": {
        "com.amazonaws.ecs.cluster": "...",
        "com.amazonaws.ecs.container-name": "...",
        "com.amazonaws.ecs.task-arn": "...",
        "com.amazonaws.ecs.task-definition-family": "...",
        "com.amazonaws.ecs.task-definition-version": "16"
    }
},
And if I run:
$ docker exec [container-id] echo $MYSQL_HOST
The output is empty
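(Note that this docker exec test can be misleading: $MYSQL_HOST in the command above is expanded by the shell on the host, before docker exec runs, so an empty result does not prove the variable is unset inside the container. To have it expanded inside the container, defer the expansion with single quotes:)
docker exec [container-id] sh -c 'echo $MYSQL_HOST'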

Your task definition defines the environment variable MYQL_HOST (note the missing S). You got the name right in the docker-compose file; fix the typo in the CloudFormation template and it should be fine.
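For reference, the corrected entry in the task definition's Environment list would be:
- Name: MYSQL_HOST    # was MYQL_HOST; must match the name entrypoint.sh reads
  Value: !Ref 'RDSDNSName'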

Related

Serverless Error: TypeError: Cannot read property 'options' of undefined

I'm trying to upgrade my serverless app from 1.51.0 to 2.7.0. While deploying the app I'm getting the below error:
[08:39:38] 'dev:sls-deploy' errored after 9.85 s
[08:39:38] TypeError: Cannot read property 'options' of undefined
at module.exports (/home/jenkins/workspace/TMC-Broker/DEV/node_modules/serverless/lib/utils/telemetry/generatePayload.js:236:66)
at PluginManager.run (/home/jenkins/workspace/TMC-Broker/DEV/node_modules/serverless/lib/classes/PluginManager.js:685:9)
08:39:38.428520 durable_task_monitor.go:63: exit status 1
At first I thought it might be caused by the plugins, so I updated them, but that still did not resolve the error.
Here is my serverless.yml:
service: my-service
plugins:
  - serverless-webpack
  - serverless-step-functions
  - serverless-es-logs
  - serverless-domain-manager
  - serverless-plugin-ifelse
  - serverless-prune-plugin
  - serverless-offline
provider:
  name: aws
  runtime: nodejs12.x
  timeout: 30
  stage: dev
  region: us-west-2
  lambdaHashingVersion: 20201221
  endpointType: PRIVATE
  role: lambdaExecutionRole
  apiGateway:
    resourcePolicy:
      - Effect: Allow
        Principal: "*"
        Action: execute-api:Invoke
        Resource: "*"
      - Effect: Deny
        Principal: "*"
        Action: execute-api:Invoke
        Resource: "*"
        Condition:
          StringNotEquals:
            aws:SourceVpce:
              - "vpce-************"
  environment:
    APP_SERVICE: ${self:service}
    APP_ENV: ${self:custom.stage}
    APP_REGION: ${self:custom.region}
    BUCKET_NAME: ${self:service}-${self:custom.stage}-#{aws:accountId}-onboard-s3
    LOG_REQUEST_ID: "x-request-id"
custom:
  prune:
    automatic: true
    includeLayers: true
    number: 5
  serverlessIfElse:
    - If: '"${self:custom.stage}" == "uat"'
      Exclude:
        - functions.abc-handler
    - If: '"${self:custom.stage}" == "prod"'
      Exclude:
        - functions.abc-handler
  region: ${self:provider.region}
  stage: ${opt:stage, self:provider.stage}
  prefix: ${self:service}-${self:custom.stage}
  webpack:
    webpackConfig: ./webpack.config.js
    includeModules: true
functions:
  ms4-handler:
    handler: src/apifunctions/my.handler
    events:
      - http:
          path: /hello
          method: ANY
      - http:
          path: /hello/{proxy+}
          method: ANY
resources:
  Resources:
    onboardingBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: ${self:service}-${self:custom.stage}-#{aws:accountId}-onboard-s3
        LifecycleConfiguration:
          Rules:
            - Id: expirationRule
              Status: "Enabled"
              ExpirationInDays: 10
Jenkins Deployment Steps:
#!groovy
import groovy.json.JsonSlurperClassic

pipeline {
    agent { label 'ecs-tf12' }
    stages {
        stage('Serverless Deploy') {
            agent {
                docker {
                    label "ecs"
                    image "node:10.15"
                    args "-u 0:0"
                }
            }
            steps {
                script {
                    sh 'node --version'
                    sh 'npm --version'
                    sh 'npm config set registry http://registry.npmjs.org'
                    sh 'npm install -g serverless@2.7.0'
                    sh 'npm list -g serverless'
                    sh 'npm install -g typescript@3.9.10'
                    sh 'npm install'
                    sh "npx gulp install-terraform-linux"
                    sh 'cp -v serverless-private.yml serverless.yml'
                    sh "sls create_domain --stage ${params.env}"
                    sh "npx gulp ${params.env}:sls-deploy"
                    sh 'cp -v serverless-public.yml serverless.yml'
                    sh "sls create_domain --stage ${params.env}"
                    sh "npx gulp ${params.env}:sls-deploy"
                }
            }
        }
    }
}
My package.json:
{
  "name": "my-app",
  "version": "1.0.0",
  "description": "Serverless Service",
  "scripts": {
    "build": "tslint --project tsconfig.json **/*.ts && serverless package",
    "deploy": "tslint --project tsconfig.json **/*.ts && serverless deploy",
    "offline": "tslint --project tsconfig.json **/*.ts && serverless offline"
  },
  "dependencies": {
    "ajv": "^6.10.2",
    "axios": "^0.27.2",
    "body-parser": "^1.19.0",
    "express": "^4.17.1",
    "https-proxy-agent": "^4.0.0",
    "joi": "^17.4.0",
    "json-stream-stringify": "^2.0.4",
    "launchdarkly-node-server-sdk": "^6.4.3",
    "lodash": "^4.17.21",
    "serverless-domain-manager": "^5.1.1",
    "serverless-http": "^2.7.0",
    "serverless-step-functions": "^2.23.0",
    "source-map-support": "^0.5.16",
    "uuid": "^3.3.3",
    "xml-js": "^1.6.11"
  },
  "devDependencies": {
    "@hewmen/serverless-plugin-typescript": "^1.1.17",
    "@types/aws-lambda": "8.10.39",
    "@types/body-parser": "^1.17.1",
    "@types/express": "^4.17.2",
    "@types/lodash": "^4.14.149",
    "@types/node": "^13.1.6",
    "@types/uuid": "^3.4.6",
    "aws-sdk": "^2.1204.0",
    "execa": "^4.0.0",
    "gulp": "^4.0.2",
    "serverless": "^2.7.0",
    "serverless-es-logs": "^3.4.2",
    "serverless-offline": "^8.0.0",
    "serverless-plugin-ifelse": "^1.0.7",
    "serverless-plugin-typescript": "^1.2.0",
    "serverless-prune-plugin": "^2.0.1",
    "serverless-webpack": "^5.5.0",
    "ts-loader": "^6.2.1",
    "tslint": "^5.20.1",
    "tslint-config-prettier": "^1.18.0",
    "typescript": "^3.9.10",
    "typescript-tslint-plugin": "^0.5.5",
    "webpack": "^4.41.5",
    "webpack-node-externals": "^1.7.2"
  },
  "author": "The serverless webpack authors (https://github.com/elastic-coders/serverless-webpack)",
  "license": "MIT"
}
I'm not able to figure out the reason for this or how to solve it. Any ideas?
I found a similar question, but it doesn't resolve my issue, as I'm already using 2.7.0.

How do I check packets coming to an AWS load balancer and Istio gateway with Packetbeat?

I am configuring Kubernetes on AWS EC2.
I use Elasticsearch's Packetbeat to get the geo data of the clients accessing the service.
Istio is used as the service mesh of Kubernetes, and a CLB is used as the load balancer.
I want to know the client IP accessing the service and the domain address the client accesses.
My packetbeat.yml:
setup.dashboards.enabled: true
setup.template.enabled: true
setup.template.settings:
  index.number_of_shards: 2
packetbeat.interfaces.device: eth0
packetbeat.interfaces.snaplen: 1514
packetbeat.interfaces.auto_promisc_mode: true
packetbeat.interfaces.with_vlans: true
packetbeat.protocols:
  - type: dhcpv4
    ports: [67, 68]
  - type: dns
    ports: [53]
    include_authorities: true
    include_additionals: true
  - type: http
    ports: [80, 5601, 8081, 8002, 5000, 8000, 8080, 9200]
    send_request: true
    send_response: true
    send_header: ["User-Agent", "Cookie", "Set-Cookie"]
    real_ip_header: "X-Forwarded-For"
  - type: mysql
    ports: [3306, 3307]
  - type: memcache
    ports: [11211]
  - type: redis
    ports: [6379]
  - type: pgsql
    ports: [5432]
  - type: thrift
    ports: [9090]
  - type: mongodb
    ports: [27017]
  - type: cassandra
    ports: [9042]
  - type: tls
    ports: [443, 993, 995, 5223, 8443, 8883, 9243, 15021, 15443, 32440]
    send_request: true
    send_response: true
    send_all_headers: true
    include_body_for: ["text/html", "application/json"]
packetbeat.procs.enabled: true
packetbeat.flows:
  timeout: 30s
  period: 10s
  fields: ["server.domain"]
processors:
  - include_fields:
      fields:
        - source.ip
        - server.domain
  - add_docker_metadata:
  - add_host_metadata:
  - add_cloud_metadata:
  - add_kubernetes_metadata:
      host: ${HOSTNAME}
      indexers:
        - ip_port:
      matchers:
        - field_format:
            format: '%{[ip]}:%{[port]}'
            # with version 7 of Packetbeat use the following line instead of the one above.
            #format: '%{[destination.ip]}:%{[destination.port]}'
output.elasticsearch:
  hosts: ${ELASTICSEARCH_ADDRESS}
  username: ${ELASTICSEARCH_USERNAME}
  password: ${ELASTICSEARCH_PASSWORD}
  pipeline: geoip-info
setup.kibana:
  host: 'https://myhost:443'
My CLB listener has the proxy protocol enabled. But Packetbeat doesn't bring me the data I want.
Searching for a TLS log:
"client": {
    "port": 1196,
    "ip": "10.0.0.83"
},
"network": {
    "community_id": "1:+ARNMwsOGxkBkrmWfCVawtA1GKo=",
    "protocol": "tls",
    "transport": "tcp",
    "type": "ipv4",
    "direction": "egress"
},
"destination": {
    "port": 8443,
    "ip": "10.0.1.77",
    "domain": "my host domain"
},
Searching for flow.final: true:
"event": {
    "duration": 1051434189423,
    "kind": "event",
    "start": "2022-10-28T05:25:14.171Z",
    "action": "network_flow",
    "end": "2022-10-28T05:42:45.605Z",
    "category": [
        "network_traffic",
        "network"
    ],
    "type": [
        "connection"
    ],
    "dataset": "flow"
},
"source": {
    "geo": {
        "continent_name": "Asia",
        "region_iso_code": "KR",
        "city_name": "Has-si",
        "country_iso_code": "KR",
        "country_name": "South Korea",
        "region_name": "Gg",
        "location": {
            "lon": 126.8168,
            "lat": 37.2072
        }
    },
    "port": 50305,
    "bytes": 24174,
    "ip": "my real ip address",
    "packets": 166
},
I can find each piece if I search separately, but there is nothing joining the two.
I would like to see a single log combining the two results above: the domain the client accesses plus the real client IP.
Please help me.

localstack/docker: can't get ES running on default port 4571

So I ran sudo docker-compose up with the following .yaml file:
version: "3"
services:
  localstack:
    image: localstack/localstack:latest
    ports:
      - "4563-4599:4563-4599"
      - "8080:8080"
    environment:
      - DOCKER_HOST=unix:///var/run/docker.sock
      - SERVICES=s3,es,ssm
      - DEFAULT_REGION=us-east-1
      - DATA_DIR=.localstack
      - AWS_ENDPOINT=http://localstack:4566
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /tmp/localstack:/tmp/localstack
    networks:
      - my_localstack_network
networks:
  my_localstack_network:
Then I created an ES domain:
aws es create-elasticsearch-domain --domain-name MyEsDomain --endpoint-url=http://localhost:4566
and got the following output:
{
    "DomainStatus": {
        "DomainId": "000000000000/MyEsDomain",
        "DomainName": "MyEsDomain",
        "ARN": "arn:aws:es:us-east-1:000000000000:domain/MyEsDomain",
        "Created": true,
        "Deleted": false,
        "Endpoint": "MyEsDomain.us-east-1.es.localhost.localstack.cloud:4566",
        "Processing": true,
        "UpgradeProcessing": false,
        "ElasticsearchVersion": "7.10",
        "ElasticsearchClusterConfig": {
            "InstanceType": "m3.medium.elasticsearch",
            "InstanceCount": 1,
            "DedicatedMasterEnabled": true,
            "ZoneAwarenessEnabled": false,
            "DedicatedMasterType": "m3.medium.elasticsearch",
            "DedicatedMasterCount": 1,
            "WarmEnabled": false
        },
        ...
When I try to hit the ES server through port 4571, I get an "empty reply":
curl localhost:4571
curl: (52) Empty reply from server
I also tried to hit port 4566, and got back {"status": "running"}.
It looks like Elasticsearch never started on my machine.
localstack removed port 4571 as of version 0.14.0; see https://github.com/localstack/localstack/releases/tag/v0.14.0.
Try using the localstack/localstack-full image: localstack/localstack is the light version, which does not include Elasticsearch.
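A minimal sketch of that image swap in the compose file above; only the image line changes, and the tag here is illustrative:
services:
  localstack:
    image: localstack/localstack-full:latest   # full image; bundles Elasticsearch
    # ...rest of the service definition unchanged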

Dynamically set the EC2 Instance Type per Environment using ebextensions

I want to create EC2 instances of type t3.medium on all environments and m5.large on production.
I'm using .ebextensions (YAML) like so:
Option 1:
Mappings:
  EnvironmentMap:
    "production":
      TheType: "m5.large"
      SecurityGroup: "foo"
      ...
    "staging":
      TheType: "t3.medium"
      SecurityGroup: "bar"
      ...
option_settings:
  aws:autoscaling:launchconfiguration:
    IamInstanceProfile: "aws-elasticbeanstalk-ec2-role"
    InstanceType: !FindInMap
      - EnvironmentMap
      - !Ref 'AWSEBEnvironmentName'
      - TheType
    SecurityGroups:
      - {"Fn::FindInMap": ["EnvironmentMap", {"Ref": "AWSEBEnvironmentName"}, "SecurityGroup"]}
Option 2:
InstanceType: {"Fn::FindInMap": ["EnvironmentMap", {"Ref": "AWSEBEnvironmentName"}, "EC2InstanceType"]}
Option 3:
InstanceType:
  - {"Fn::FindInMap": ["EnvironmentMap", {"Ref": "AWSEBEnvironmentName"}, "EC2InstanceType"]}
Results
Option 1 fails with invalid YAML (but I took this from this AWS example).
Options 2 and 3 fail with the same problem: the FindInMap function is not "called":
Invalid option value: '{"Fn::FindInMap":["EnvironmentMap","EC2InstanceType"]},{"Ref":"AWSEBEnvironmentName"}' (Namespace: 'aws:autoscaling:launchconfiguration', OptionName: 'InstanceType'): Value is not one of the allowed values: [c1.medium, c1.xlarge, c3.2xlarge, ....
It tries to interpret the whole function as a string.
For the SecurityGroups property it works; for InstanceType it does not.
I can't do it dynamically, and I can't find how to achieve this in the AWS docs, on SO, or anywhere else. I would assume this is simple stuff. What am I missing?
EDIT:
Option 4: using conditionals
Conditions:
  IsProduction: !Equals [ !Ref AWSEBEnvironmentName, production ]
option_settings:
  aws:autoscaling:launchconfiguration:
    InstanceType: !If [ IsProduction, m5.large, t3.medium ]
    SecurityGroups:
      - {"Fn::FindInMap": ["EnvironmentMap", {"Ref": "AWSEBEnvironmentName"}, "SecurityGroup"]}
Error: YAML exception: Invalid Yaml: could not determine a constructor for the tag !Equals in...
But this comes straight from the documentation on conditions and if.
EDIT 2:
I eventually found out that the option InstanceType is obsolete and we should use:
aws:ec2:instances:
  InstanceTypes: "t3.medium"
But alas, this does not solve the problem either, because I cannot use the replacement functions (Fn::FindInMap) here as well.
The reason why FindInMap does not work in option_settings is the fact that only four intrinsic functions are allowed there (from docs):
Ref
Fn::GetAtt
Fn::Join
Fn::GetOptionSetting
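(For illustration, one of the allowed four, Fn::GetOptionSetting, is typically used like this in an .ebextensions config file; the file path, namespace, and option name below are just examples, not from the original question:)
files:
  "/etc/app/db.conf":
    mode: "000644"
    content: |
      db_host=`{"Fn::GetOptionSetting": {"Namespace": "aws:elasticbeanstalk:application:environment", "OptionName": "DB_HOST", "DefaultValue": "localhost"}}`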
I'm not convinced that SecurityGroups worked. I think your script failed before the FindInMap in SecurityGroups got a chance to be evaluated.
However, I tried to find a way using Resources. The closest I got was with the following config file:
Mappings:
  EnvironmentMap:
    production:
      TheType: "t3.medium"
    staging:
      TheType: "t2.small"
Resources:
  AWSEBAutoScalingLaunchConfiguration:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      InstanceType:
        ? "Fn::FindInMap"
        :
          - EnvironmentMap
          -
            Ref: "AWSEBEnvironmentName"
          - TheType
Although this is a step closer, it ultimately fails as well. The reason is that when EB merges our Resources config file with its own template, it produces the following:
"InstanceType": {
    "Ref": "InstanceType", # <--- this should NOT be here :-(
    "Fn::FindInMap": [
        "EnvironmentMap",
        {
            "Ref": "AWSEBEnvironmentName"
        },
        "TheType"
    ]
},
instead of
"InstanceType": {
    "Fn::FindInMap": [
        "EnvironmentMap",
        {
            "Ref": "AWSEBEnvironmentName"
        },
        "TheType"
    ]
},
And this happens because the original InstanceType (before the merge) is:
"InstanceType": {"Ref": "InstanceType"},
Therefore, instead of replacing the default InstanceType with the custom one provided in our config file, EB just merges the two.

AWS Cloudformation template for a codepipeline with jenkins build stage

I need to write a CFT for a pipeline with Jenkins integration for build/test. I found this documentation for setting up the ActionTypeId for the Jenkins stage, but it does not specify how to set the server URL of the Jenkins server, and it is also not clear to me where to give the Jenkins provider name. Is it in the ActionTypeId or in the configuration properties?
I could not find any example for this use case on the internet either.
Please provide a proper example of setting up a Jenkins action provider for AWS CodePipeline using an AWS CloudFormation template.
The following is a section from the sample CFT I wrote based on the doc above:
"stages": [
    {
        "name": "Jenkins",
        "actions": [
            ...
            {
                "name": "Jenkins Build",
                "actionTypeId": {
                    "category": "Build",
                    "owner": "Custom",
                    "provider": "Jenkins",
                    "version": "1"
                },
                "runOrder": 2,
                "configuration": {
                    ???
                },
                ...
            }
        ]
    },
    ...
]
The piece of information that was missing for me was that I needed to create a custom action to use Jenkins as the action provider for my pipeline.
First I added the custom action type as below:
JenkinsCustomActionType:
  Type: AWS::CodePipeline::CustomActionType
  Properties:
    Category: Build
    Provider: !Ref JenkinsProviderName
    Version: 1
    ConfigurationProperties:
      - Description: "The name of the build project must be provided when this action is added to the pipeline."
        Key: true
        Name: ProjectName
        Queryable: false
        Required: true
        Secret: false
        Type: String
    InputArtifactDetails:
      MaximumCount: 5
      MinimumCount: 0
    OutputArtifactDetails:
      MaximumCount: 5
      MinimumCount: 0
    Settings:
      EntityUrlTemplate: !Join ['', [!Ref JenkinsServerURL, "/job/{Config:ProjectName}/"]]
      ExecutionUrlTemplate: !Join ['', [!Ref JenkinsServerURL, "/job/{Config:ProjectName}/{ExternalExecutionId}/"]]
    Tags:
      - Key: Name
        Value: custom-jenkins-action-type
The Jenkins server URL is given in the Settings of the custom action, and the Jenkins provider name is given for Provider. These were the two things I had missed initially.
Then configure the pipeline stage as follows:
DevPipeline:
  Type: AWS::CodePipeline::Pipeline
  DependsOn: JenkinsCustomActionType
  Properties:
    Name: Dev-CodePipeline
    RoleArn:
      Fn::GetAtt: [ CodePipelineRole, Arn ]
    Stages:
      ...
      - Name: DevBuildVerificationTest
        Actions:
          - Name: JenkinsDevBVT
            ActionTypeId:
              Category: Build
              Owner: Custom
              Version: 1
              Provider: !Ref JenkinsProviderName
            Configuration:
              # JenkinsDevBVTProjectName - Jenkins job name defined as a parameter in the CFT
              ProjectName: !Ref JenkinsDevBVTProjectName
            RunOrder: 4
The custom action type has to be created before the pipeline, hence DependsOn: JenkinsCustomActionType.
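For completeness, the snippets above reference a few template parameters. A minimal sketch of how they might be declared follows; the parameter names are taken from the snippets, but the defaults are illustrative, not from the original template:
Parameters:
  JenkinsProviderName:
    Type: String
    Default: MyJenkinsProvider       # provider name referenced by the custom action and the pipeline stage
  JenkinsServerURL:
    Type: String
    Default: https://jenkins.example.com   # base URL used in the EntityUrlTemplate/ExecutionUrlTemplate
  JenkinsDevBVTProjectName:
    Type: String
    Default: dev-bvt-job             # Jenkins job name passed as the ProjectName configuration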