GitHub Actions - pass secret variables to the render ECS task definition action

In order to deploy a new task to ECS I'm using the amazon-ecs-render-task-definition GitHub action.
This action receives a task-definition.json as a parameter. This JSON contains secrets that I don't want to push. Is there a way to inject some parameter into this JSON? Maybe from AWS Secrets Manager?
For example - task-definition.json
{
  "containerDefinitions": [
    {
      "name": "wordpress",
      "links": [
        "mysql"
      ],
      "image": "wordpress",
      "essential": true,
      "portMappings": [
        {
          "containerPort": 80,
          "hostPort": 80
        }
      ],
      "memory": 500,
      "cpu": 10
    },
    {
      "environment": [
        {
          "name": "MYSQL_ROOT_PASSWORD",
          "value": "password" // IT'S A SECRET!
        }
      ],
      "name": "mysql",
      "image": "mysql",
      "cpu": 10,
      "memory": 500,
      "essential": true
    }
  ],
  "family": "hello_world"
}

Apparently there is a built-in solution for using AWS Secrets Manager (or SSM Parameter Store) secrets:
"secrets": [
{
"name": "DATABASE_PASSWORD",
"valueFrom": "arn:aws:ssm:us-east-1:awsExampleAccountID:parameter/awsExampleParameter"
}
]
https://aws.amazon.com/premiumsupport/knowledge-center/ecs-data-security-container-task/
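Note that for either the Secrets Manager or the Parameter Store route, the task execution role has to be allowed to read the referenced value, otherwise the task fails to start. A rough sketch (role name and policy name are placeholders, not values from the question):
# Hedged sketch: grant the task execution role read access to the secret/parameter.
# Role name, policy name and ARN below are placeholders - adjust to your setup.
aws iam put-role-policy \
  --role-name ecsTaskExecutionRole \
  --policy-name allow-read-task-secrets \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": ["ssm:GetParameters", "secretsmanager:GetSecretValue"],
      "Resource": ["arn:aws:ssm:us-east-1:awsExampleAccountID:parameter/awsExampleParameter"]
    }]
  }'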

Another solution is to use sed to insert your secrets, so your workflow becomes something like:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Add secrets to Task Definition
        run: |
          sed -i "s/<jwt_secret>/$JWT_SECRET/g" task.json
          sed -i "s/<mongo_password>/$MONGO_PASSWORD/g" task.json
        env:
          JWT_SECRET: ${{secrets.JWT_SECRET}}
          MONGO_PASSWORD: ${{secrets.MONGO_PASSWORD}}
Then you edit your task.json to include the placeholders that sed will use for the replacement:
{
  "ipcMode": null,
  "executionRoleArn": null,
  "containerDefinitions": [
    {
      ...
      "environment": [
        {
          "name": "JWT_SECRET",
          "value": "<jwt_secret>"
        },
        {
          "name": "MONGO_PASSWORD",
          "value": "<mongo_password>"
        }
      ]
      ...
    }
  ]
}

All repos have a place to store their secrets, see creating and using encrypted secrets. As for editing the .json, the preinstalled jq looks like an obvious choice here, or maybe PowerShell if you're more familiar with that (just remember to tweak -Depth).
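For example, a jq-based step could overwrite a value in place instead of relying on sed placeholders; a rough sketch, assuming the same task.json layout and secret names as the sed example above:
# Hedged sketch: set the MONGO_PASSWORD environment variable's value from a
# GitHub secret passed in via the step's env. The container/variable names
# mirror the examples above and are assumptions, not part of any action's API.
jq --arg pw "$MONGO_PASSWORD" \
  '(.containerDefinitions[].environment[]? | select(.name == "MONGO_PASSWORD") | .value) = $pw' \
  task.json > task.tmp.json && mv task.tmp.json task.json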

Related

AWS service can't start task, but starting task manually works

Until now I had a backend running single tasks. I now want to switch to services starting my tasks. For two of the tasks I need direct access, so I tried using ServiceConnect.
When I run this task standalone it starts. When I start a service without ServiceConnect with the same task inside, it also starts. When I enable ServiceConnect I get this error message inside the 'Deployments and events' tab in the service:
service (...) was unable to place a task because no container instance met all of its requirements.
The closest matching container-instance (...) is missing an attribute required by your task.
For more information, see the Troubleshooting section of the Amazon ECS Developer Guide.
When I check the attributes of all free containers with:
ecs-cli check-attributes --task-def some-task-definition --container-instances ... --cluster some-cluster
I just get:
Container Instance     Missing Attributes
heyvie-backend-dev     None
My task definition looks like that:
{
  "family": "some-task-definition",
  "taskRoleArn": "arn:aws:iam::...:role/ecsTaskExecutionRole",
  "executionRoleArn": "arn:aws:iam::...:role/ecsTaskExecutionRole",
  "networkMode": "awsvpc",
  "cpu": "1024",
  "memory": "982",
  "containerDefinitions": [
    {
      "name": "...",
      "image": "...",
      "essential": true,
      "healthCheck": {
        "command": ["..."],
        "startPeriod": 20,
        "retries": 3
      },
      "portMappings": [
        {
          "name": "somePortName",
          "containerPort": 4321
        }
      ],
      "mountPoints": [
        {
          "sourceVolume": "...",
          "containerPath": "..."
        }
      ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "...",
          "awslogs-region": "eu-...",
          "awslogs-stream-prefix": "..."
        }
      }
    }
  ],
  "volumes": [
    {
      "name": "...",
      "efsVolumeConfiguration": {
        "fileSystemId": "...",
        "rootDirectory": "/",
        "transitEncryption": "ENABLED"
      }
    }
  ],
  "requiresCompatibilities": ["EC2"]
}
My service definition looks like that:
{
  "cluster": "some-cluster",
  "serviceName": "...",
  "taskDefinition": "some-task-definition",
  "desiredCount": 1,
  "launchType": "EC2",
  "deploymentConfiguration": {
    "maximumPercent": 100,
    "minimumHealthyPercent": 0
  },
  "placementConstraints": [
    {
      "type": "distinctInstance"
    }
  ],
  "networkConfiguration": {
    "awsvpcConfiguration": {
      "subnets": [
        ...
      ],
      "securityGroups": ["..."],
      "assignPublicIp": "DISABLED"
    }
  },
  "serviceConnectConfiguration": {
    "enabled": true,
    "namespace": "someNamespace",
    "services": [
      {
        "portName": "somePortName",
        "clientAliases": [
          {
            "port": 4321
          }
        ]
      }
    ]
  },
  "schedulingStrategy": "REPLICA",
  "enableECSManagedTags": true,
  "propagateTags": "SERVICE"
}
I also added this to the user data of my launch template:
#!/bin/bash
cat <<'EOF' >> /etc/ecs/ecs.config
ECS_ENABLE_TASK_IAM_ROLE=true
ECS_CLUSTER=some-cluster
EOF
Did anyone experience something similar or know what could cause this issue?
I used ServiceDiscovery instead; I think it's the easiest way to deal with the dynamic IP address of a task in a service (the IP address changes on every restart, and that's probably what you're trying to avoid?).
With ServiceDiscovery you create a new DNS record, and instead of ip-address:port you can just use serviceNameOfNamespace.namespace to connect to a task. ServiceDiscovery worked without any problem on an existing task.
Hope that helps. I don't really know if there are any benefits to ServiceConnect other than higher connection counts and retry functionality, so if anybody knows more about the differences between the two, I'm happy to learn.
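For reference, wiring up ServiceDiscovery from the CLI looks roughly like this (namespace, VPC, service names and IDs are all placeholders, not taken from your definitions):
# Hedged sketch: create a private DNS namespace, a discovery service inside it,
# and register the ECS service against it. All names/IDs below are placeholders.
aws servicediscovery create-private-dns-namespace \
  --name someNamespace.local --vpc vpc-0123456789abcdef0

aws servicediscovery create-service \
  --name heyvie-backend \
  --namespace-id ns-EXAMPLE \
  --dns-config 'DnsRecords=[{Type=A,TTL=60}]'

aws ecs create-service \
  --cluster some-cluster \
  --service-name heyvie-backend-dev \
  --task-definition some-task-definition \
  --service-registries 'registryArn=arn:aws:servicediscovery:eu-west-1:123456789012:service/srv-EXAMPLE' \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-EXAMPLE],securityGroups=[sg-EXAMPLE]}'

# Tasks are then reachable at heyvie-backend.someNamespace.local instead of an IP:port.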

How to Create AWS Task Definition JSON from Existing task definition?

I have an existing task definition 'my-task-definition' whose data I can get using 'aws ecs describe-task-definition --task-definition my-task-definition' (I put the output of that into my_file.json). But my understanding is that this output is not valid input for 'aws ecs register-task-definition --cli-input-json file://<path_to_json_file>/my_file.json'. What additional piece(s) of data do I have to add to that file (or remove from it)? The file (with the ARNs changed) is below:
{
  "taskDefinition": {
    "taskDefinitionArn": "arn:aws:ecs:us-west-1:112233445566:task-definition/my-task-definition:64",
    "containerDefinitions": [
      {
        "name": "my-container",
        "image": "123456789023.dkr.ecr.us-west-1.amazonaws.com/monolith-repo:latest",
        "cpu": 0,
        "memory": 1600,
        "portMappings": [
          {
            "containerPort": 8080,
            "hostPort": 0,
            "protocol": "tcp"
          }
        ],
        "essential": true,
        "environment": [
          {
            "name": "SERVER_FLAVOR",
            "value": "JOB"
          }
        ],
        "mountPoints": [],
        "volumesFrom": [],
        "linuxParameters": {
          "initProcessEnabled": true
        },
        "logConfiguration": {
          "logDriver": "awslogs",
          "options": {
            "awslogs-group": "/ecs/my-task-definition",
            "awslogs-region": "us-west-1",
            "awslogs-stream-prefix": "ecs"
          }
        }
      }
    ],
    "family": "my-task-definition",
    "taskRoleArn": "arn:aws:iam::111222333444:role/my_role",
    "networkMode": "bridge",
    "revision": 64,
    "volumes": [],
    "status": "ACTIVE",
    "requiresAttributes": [
      {
        "name": "com.amazonaws.ecs.capability.logging-driver.awslogs"
      },
      {
        "name": "com.amazonaws.ecs.capability.ecr-auth"
      },
      {
        "name": "com.amazonaws.ecs.capability.docker-remote-api.1.19"
      },
      {
        "name": "com.amazonaws.ecs.capability.task-iam-role"
      },
      {
        "name": "com.amazonaws.ecs.capability.docker-remote-api.1.25"
      }
    ],
    "placementConstraints": [],
    "compatibilities": [
      "EXTERNAL",
      "EC2"
    ],
    "requiresCompatibilities": [
      "EC2"
    ]
  }
}
You are getting an error because the output from the aws ecs describe-task-definition command has additional fields that are not recognized by the aws ecs register-task-definition command.
There is no built-in solution for easily updating an existing task definition using the AWS CLI. However, it is possible to script a solution using a tool like jq.
One possible solution is something like this:
TASK_DEFINITION=$(aws ecs describe-task-definition --task-definition "$TASK_FAMILY" --region "us-east-1")
NEW_TASK_DEFINITION=$(echo "$TASK_DEFINITION" | jq --arg IMAGE "$NEW_IMAGE" '.taskDefinition | .containerDefinitions[0].image = $IMAGE | del(.taskDefinitionArn) | del(.revision) | del(.status) | del(.requiresAttributes) | del(.compatibilities)')
aws ecs register-task-definition --region "us-east-1" --cli-input-json "$NEW_TASK_DEFINITION"
These commands update the Docker image in an existing task definition and delete the extra fields so that you can register a new task definition revision.
There is an open GitHub issue tracking this: https://github.com/aws/aws-cli/issues/3064
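As a follow-up, once the new revision is registered you usually point the service at it; something like this works (cluster and service names are placeholders, not from the question). Note that newer API versions may also return registeredAt and registeredBy fields that need the same del() treatment as above.
# Hedged sketch: roll the service onto the newest revision of the family.
# Cluster and service names below are placeholders.
aws ecs update-service \
  --region "us-east-1" \
  --cluster "my-cluster" \
  --service "my-service" \
  --task-definition "$TASK_FAMILY"   # a bare family name resolves to the latest ACTIVE revision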

AWS CodePipeline Fails: "Exception while trying to read the task definition artifact filef rom: SourceArtifact"

I have an AWS CodePipeline setup that is meant to pull from CodeCommit, use CodeBuild, and then do a Blue/Green deployment via CodeDeploy.
I believe it to be configured correctly (will discuss specifics below), but every time I get to the "Deploy" stage, I get the error message:
Invalid action configuration: Exception while trying to read the task definition artifact file from: SourceArtifact
I've looked through other SO answers, and I've checked the following:
SourceArtifact is well under 3MB in size.
The files taskdef.json and appspec.yml are both inside the SourceArtifact (these are the names as configured in my CodePipeline definition) which is generated in the first stage of the CodePipeline.
The artifact is able to be decrypted via KMS key as the CodePipeline is configured to make use of one (since SourceArtifact comes from a different account) and the CodeBuild step is able to successfully complete (it creates a Docker image and saves to ECR).
I can see no syntax errors of any kind in taskdef.json or appspec.yml as they're essentially copies from working versions of those files from different projects. The placeholder names remain the same.
Checking CloudTrail and checking list-action-executions (via CLI) don't show any additional error information.
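For reference, the artifact contents can also be inspected directly from the pipeline's artifact store with something like this (bucket name and object key are placeholders):
# Hedged sketch: download the SourceArtifact zip from the pipeline's S3 artifact
# store and list its contents. Bucket and key below are placeholders.
aws s3 cp s3://my-codepipeline-artifact-bucket/path/to/SourceArtifact.zip artifact.zip
unzip -l artifact.zip    # taskdef.json and appspec.yml should be listed at the top level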
Here's the "Deploy" stage config (as entered via a Terraform script):
stage {
  name = "Deploy"
  action {
    name            = "Deploy"
    category        = "Deploy"
    owner           = "AWS"
    provider        = "CodeDeployToECS"
    version         = "1"
    input_artifacts = ["SourceArtifact", var.charting_artifact_name]
    configuration = {
      ApplicationName                = aws_codedeploy_app.charting_codedeploy.name
      DeploymentGroupName            = aws_codedeploy_app.charting_codedeploy.name
      TaskDefinitionTemplateArtifact = "SourceArtifact"
      AppSpecTemplateArtifact        = "SourceArtifact"
      TaskDefinitionTemplatePath     = "taskdef.json"
      AppSpecTemplatePath            = "appspec.yml"
      Image1ArtifactName             = var.charting_artifact_name
      Image1ContainerName            = "IMAGE1_NAME"
    }
  }
}
taskdef.json (account numbers redacted):
{
  "executionRoleArn": "arn:aws:iam::<ACCOUNT_NUM>:role/fargate-iam-role",
  "containerDefinitions": [
    {
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/sentiment-logs",
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "sentiment-charting"
        }
      },
      "portMappings": [
        {
          "hostPort": 80,
          "protocol": "tcp",
          "containerPort": 80
        }
      ],
      "cpu": 0,
      "environment": [],
      "mountPoints": [],
      "volumesFrom": [],
      "image": "<IMAGE1_NAME>",
      "name": "sentiment-charting"
    }
  ],
  "placementConstraints": [],
  "memory": "4096",
  "compatibilities": [
    "EC2",
    "FARGATE"
  ],
  "taskDefinitionArn": "arn:aws:ecs:us-east-1:<ACCOUNT_NUM>:task-definition/sentiment-charting-taskdef:4",
  "family": "sentiment-charting-taskdef",
  "requiresAttributes": [
    {
      "name": "com.amazonaws.ecs.capability.logging-driver.awslogs"
    },
    {
      "name": "ecs.capability.execution-role-awslogs"
    },
    {
      "name": "com.amazonaws.ecs.capability.ecr-auth"
    },
    {
      "name": "com.amazonaws.ecs.capability.docker-remote-api.1.19"
    },
    {
      "name": "ecs.capability.execution-role-ecr-pull"
    },
    {
      "name": "com.amazonaws.ecs.capability.docker-remote-api.1.18"
    },
    {
      "name": "ecs.capability.task-eni"
    }
  ],
  "requiresCompatibilities": [
    "FARGATE"
  ],
  "networkMode": "awsvpc",
  "cpu": "2048",
  "revision": 4,
  "status": "ACTIVE",
  "volumes": []
}
appspec.yml:
version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        TaskDefinition: "<TASK_DEFINITION>"
        LoadBalancerInfo:
          ContainerName: "sentiment-charting"
          ContainerPort: 80
        PlatformVersion: "LATEST"
I'm at a bit of a loss as to how best to continue troubleshooting without spinning my wheels. Any help would be greatly appreciated.
TIA

AWS CodePipeline with ECS Blue/Green deployment fails with internal error | take 2

Firstly, I am aware that there is a similar question (AWS CodePipeline with ECS Blue/Green deployment fails with internal error); however, the person who answered it didn't provide sufficient detail.
As per this answer (https://superuser.com/questions/1388058/getting-internal-error-in-aws-code-pipeline-while-deploying-to-ecs), I have gone through the AWS guide (https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#constraints) to ensure that all the "required" fields are in my taskdef.json (below).
As for my pipeline (build) buildSpec ...
      - printf '{"ImageURI":"%s"}' $ECR_REPO_URI:demo > imageDetail.json
      - echo Build completed on `date`
artifacts:
  files:
    - imageDetail.json
The pipeline build stage setup is simple: I set BuildArtifact as the output, so I can reference imageDetail.json from the pipeline deploy stage.
As for my pipeline (deploy) AppSpec ...
version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        TaskDefinition: <TASK_DEFINITION>
        LoadBalancerInfo:
          ContainerName: "pipeline_demo"
          ContainerPort: 80
        PlatformVersion: "LATEST"
The pipeline deploy stage setup is as follows:
Input artifacts: BuildArtifact, SourceArtifact; then:
Amazon ECS Task Definition: SourceArtifact "taskdef.json"
AWS CodeDeploy AppSpec file: SourceArtifact "taskdef.json"
Dynamically update task definition image: BuildArtifact
Placeholder text in the task definition: IMAGE1_NAME
(some of which was sourced from this guide: https://medium.com/#shashank070/in-my-previous-blog-i-have-explained-how-to-do-initial-checks-like-code-review-code-build-cddcc21afd9f)
... and the taskdef:
{
  "family": "devops-platform-ecs-task-def",
  "type": "AWS::ECS::TaskDefinition",
  "properties": {
    "containerDefinitions": [
      {
        "name": "pipeline_demo",
        "image": "<IMAGE1_NAME>",
        "cpu": "1024",
        "memory": "1024",
        "essential": true,
        "portMappings": [
          {
            "hostPort": 0,
            "protocol": "tcp",
            "containerPort": 80
          }
        ]
      }
    ],
    "ExecutionRoleArn": "arn:aws:iam::xxxxxx:role/devops_codepipeline",
    "NetworkMode": "null",
    "PlacementConstraints": [
      "type": "memberOf",
      "expression": ""
    ],
    "ProxyConfiguration": {
      "type": "APPMESH",
      "containerName": "",
      "properties": [
        {
          "name": "",
          "value": ""
        }
      ]
    },
    "RequiresCompatibilities": [
      "EC2"
    ],
    "Tags": [
      {
        "key": "",
        "value": ""
      }
    ],
    "TaskRoleArn": "",
    "Volumes": [
      {
        "name": "",
        "host": {
          "sourcePath": ""
        },
        "dockerVolumeConfiguration": {
          "scope": "task",
          "autoprovision": true,
          "driver": "",
          "driverOpts": {
            "KeyName": ""
          },
          "labels": {
            "KeyName": ""
          }
        }
      }
    ]
  }
}
Nonetheless, I still get the error ...
Any help would be much appreciated!
Your task definition is not valid. From a quick look I can see the following invalid property:
"type": "AWS::ECS::TaskDefinition",
Please review the sample task definitions here [1]. Also, I would recommend removing any extraneous sections from the taskdef so they do not interfere in any way.
[1] https://docs.aws.amazon.com/AmazonECS/latest/developerguide/example_task_definitions.html
I actually found that including another policy, AWSCodeDeployRoleForECS, solved the issue. It is referenced in the AWS guide for the CLI (we've been administering via the console thus far, hence I spotted it by chance alone): Create a Service Role (CLI). After including the policy, the pipeline progressed past the listed issue, but encountered another one, "Invalid Configuration Action; Container list cannot be empty." - which your answer may solve? Nonetheless, at least I have a more meaningful error to work with.
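For anyone else hitting this, attaching that managed policy via the CLI is roughly the following (the role name is a placeholder for whatever service role your CodeDeploy deployment group uses):
# Hedged sketch: attach the AWS managed AWSCodeDeployRoleForECS policy to the
# CodeDeploy service role. The role name below is a placeholder.
aws iam attach-role-policy \
  --role-name myCodeDeployECSServiceRole \
  --policy-arn arn:aws:iam::aws:policy/AWSCodeDeployRoleForECS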

"Invalid configuration for registry" error when executing "eb local run"

I think this is a very easy to fix problem, but I just can't seem to solve it! I've spent a good amount of time looking for any leads on Google/SO but couldn't find a solution.
When executing eb local run, I'm getting this error:
Invalid configuration for registry
$ eb local run
ERROR: InvalidConfigFile :: Invalid configuration for registry 12345678.dkr.ecr.eu-west-1.amazonaws.com
The image lines in my Dockerrun.aws.json are as follows:
{
  "AWSEBDockerrunVersion": 2,
  "volumes": [
    {
      "name": "frontend",
      "host": {
        "sourcePath": "/var/app/current/frontend"
      }
    },
    {
      "name": "backend",
      "host": {
        "sourcePath": "/var/app/current/backend"
      }
    },
    {
      "name": "nginx-proxy-conf",
      "host": {
        "sourcePath": "/var/app/current/config/nginx"
      }
    },
    {
      "name": "nginx-proxy-content",
      "host": {
        "sourcePath": "/var/app/current/content/"
      }
    },
    {
      "name": "nginx-proxy-ssl",
      "host": {
        "sourcePath": "/var/app/current/config/ssl"
      }
    }
  ],
  "containerDefinitions": [
    {
      "name": "backend",
      "image": "123456.dkr.ecr.eu-west-1.amazonaws.com/backend:latest",
      "Update": "true",
      "essential": true,
      "memory": 512,
      "mountPoints": [
        {
          "containerPath": "/app/backend",
          "sourceVolume": "backend"
        }
      ],
      "portMappings": [
        {
          "containerPort": 4000,
          "hostPort": 4000
        }
      ],
      "environment": [
        {
          "name": "PORT",
          "value": "4000"
        },
        {
          "name": "MIX_ENV",
          "value": "dev"
        },
        {
          "name": "PG_PASSWORD",
          "value": "xxsaxaax"
        },
        {
          "name": "PG_USERNAME",
          "value": "
        },
        {
          "name": "PG_HOST",
          "value": "123456.dsadsau89das.eu-west-1.rds.amazonaws.com"
        },
        {
          "name": "FE_URL",
          "value": "http://develop1.com"
        }
      ]
    },
    {
      "name": "frontend",
      "image": "123456.dkr.ecr.eu-west-1.amazonaws.com/frontend:latest",
      "Update": "true",
      "essential": true,
      "memory": 512,
      "links": [
        "backend"
      ],
      "command": [
        "npm",
        "run",
        "production"
      ],
      "mountPoints": [
        {
          "containerPath": "/app/frontend",
          "sourceVolume": "frontend"
        }
      ],
      "portMappings": [
        {
          "containerPort": 3000,
          "hostPort": 3000
        }
      ],
      "environment": [
        {
          "name": "REDIS_HOST",
          "value": "www.eample.com"
        }
      ]
    },
    {
      "name": "nginx-proxy",
      "image": "nginx",
      "essential": true,
      "memory": 128,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 3000
        }
      ],
      "links": [
        "backend",
        "frontend"
      ],
      "mountPoints": [
        {
          "sourceVolume": "nginx-proxy-content",
          "containerPath": "/var/www/html"
        },
        {
          "sourceVolume": "awseb-logs-nginx-proxy",
          "containerPath": "/var/log/nginx"
        },
        {
          "sourceVolume": "nginx-proxy-conf",
          "containerPath": "/etc/nginx/conf.d",
          "readOnly": true
        },
        {
          "sourceVolume": "nginx-proxy-ssl",
          "containerPath": "/etc/nginx/ssl",
          "readOnly": true
        }
      ]
    }
  ],
  "family": ""
}
It seems that you have a broken docker-registry auth config file. In your home directory, this file, ~/.docker/config.json, should look something like:
{
  "auths": {
    "https://1234567890.dkr.ecr.us-east-1.amazonaws.com": {
      "auth": "xxxxxx"
    }
  }
}
That is generated with the command docker login (related to aws ecr get-login)
Check that. I say this because you are hitting this exception:
for registry, entry in six.iteritems(entries):
    if not isinstance(entry, dict):
        # (...)
        if raise_on_error:
            raise errors.InvalidConfigFile(
                'Invalid configuration for registry {0}'.format(registry)
            )
        return {}
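If that file is the culprit, regenerating the auth entry for the registry usually fixes it; roughly (registry ID and region are taken from the error message, everything else is assumed):
# Hedged sketch: refresh the Docker credentials for the ECR registry.
# With AWS CLI v2, pipe a fresh token straight into docker login:
aws ecr get-login-password --region eu-west-1 \
  | docker login --username AWS --password-stdin 12345678.dkr.ecr.eu-west-1.amazonaws.com
# With the older AWS CLI v1, get-login prints a ready-to-run docker login command:
# eval $(aws ecr get-login --no-include-email --region eu-west-1)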
This is due to outdated dependencies in the current version of the awsebcli tool. They pinned "docker-py (>=1.1.0,<=1.7.2)", which does not support the newer credential helper formats. The latest version of docker-py is the first one to properly support the latest credential helper format, and until the AWS EB CLI developers update docker-py to 2.4.0 (https://github.com/docker/docker-py/releases/tag/2.4.0) this will remain broken.
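A commonly suggested workaround while that pin is in place is to take the credential helper out of the equation for this registry so the old docker-py can parse the file again; a sketch, assuming a credsStore entry in ~/.docker/config.json is what trips it up:
# Hedged workaround sketch: drop the credential-helper entry and log in again so a
# plain base64 "auth" value is written under "auths", which old docker-py understands.
cp ~/.docker/config.json ~/.docker/config.json.bak
jq 'del(.credsStore)' ~/.docker/config.json.bak > ~/.docker/config.json
eval $(aws ecr get-login --no-include-email --region eu-west-1)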
The first thing is that it's not valid JSON; the PG_USERNAME field does not have the closing quote.
{
  "name": "PG_USERNAME",
  "value": "
},
Should be
{
  "name": "PG_USERNAME",
  "value": ""
},
The next thing to check is whether your Beanstalk instance profile has access to the ECR registry.
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/iam-instanceprofile.html
Specifies the Docker base image on an existing Docker repository from which you're building a Docker container. Specify the value of the Name key in the format <organization>/<image name> for images on Docker Hub, or <site>/<organization name>/<image name> for other sites.
When you specify an image in the Dockerrun.aws.json file, each instance in your Elastic Beanstalk environment will run docker pull on that image and run it. Optionally include the Update key. The default value is "true" and instructs Elastic Beanstalk to check the repository, pull any updates to the image, and overwrite any cached images.
Do not specify the Image key in the Dockerrun.aws.json file when using a Dockerfile. Elastic Beanstalk will always build and use the image described in the Dockerfile when one is present.
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_image.html
Test to make sure you can access your ECR repository outside of Elastic Beanstalk as well.
$ docker pull aws_account_id.dkr.ecr.us-west-2.amazonaws.com/amazonlinux:latest
latest: Pulling from amazonlinux
8e3fa21c4cc4: Pull complete
Digest: sha256:59895a93ba4345e238926c0f4f4a3969b1ec5aa0a291a182816a4630c62df769
Status: Downloaded newer image for aws_account_id.dkr.ecr.us-west-2.amazonaws.com/amazonlinux:latest
http://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-pull-ecr-image.html