I have successfully pushed my 3 Docker images to ECR.
Configured an ECS cluster.
Created 3 task definitions for those 3 images, stored in their respective ECR repositories.
Now, I want to run a public Redis image on the same cluster as a separate task. I tried creating a task definition for it using the following image URL: public.ecr.aws/ubuntu/redis:latest
But as soon as I run it as a new task I get the following error:
Essential container in task exited
Is there any specific reason for this error, or am I doing something wrong?
Ok, so the Redis image needs you to either set a password (I recommend this as well) or explicitly allow clients to connect without a password.
To configure a password or to disable password authentication, you need to set environment variables on the container. You can read about this in the image's documentation under the heading Configuration.
Luckily, this is easy in ECS: you just specify the environment variable in the task definition. So either:
{
"family": "",
"containerDefinitions": [
{
"name": "",
"image": "",
...
"environment": [
{
"name": "ALLOW_EMPTY_PASSWORD",
"value": "yes"
}
],
...
}
],
...
}
or for a password:
{
"family": "",
"containerDefinitions": [
{
"name": "",
"image": "",
...
"environment": [
{
"name": "REDIS_PASSWORD",
"value": "your_password"
}
],
...
}
],
...
}
For more granular configuration, you should read the documentation of the Redis Docker image linked above.
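If you want to confirm that this is why the task exited, the stopped task's exit code and stop reason are visible through the AWS CLI. A quick sketch (the cluster name and task ARN below are placeholders):
# Find recently stopped tasks in your cluster (replace my-cluster with your cluster name)
aws ecs list-tasks --cluster my-cluster --desired-status STOPPED
# Show each container's exit code and stop reason for one of those tasks
aws ecs describe-tasks --cluster my-cluster --tasks <task-arn> \
  --query 'tasks[].containers[].{name:name,exitCode:exitCode,reason:reason}'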
Related
I have a basic node app that I've wrapped in a Dockerfile
FROM node:lts-alpine3.15
WORKDIR /app
COPY package.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD [ "npm", "run", "serve" ]
I push that to Gitlab's container registry. I'm trying to deploy it from there to AWS, but running into problems on the ECS side. In ECS I have:
a cluster (frontend)
a service (frontend)
both of which are configured in terraform
resource "aws_ecs_cluster" "frontend" {
name = "frontend"
setting {
name = "containerInsights"
value = "enabled"
}
}
resource "aws_ecs_service" "frontend" {
name = "frontend"
cluster = aws_ecs_cluster.frontend.id
deployment_controller {
type = "EXTERNAL"
}
tags = {
Name = "WebAppFrontend"
}
}
The web app is in a different repository from the Terraform infrastructure. In my .gitlab-ci.yml I'm trying to register a new task definition for the web app using a JSON file.
When there are changes to the web app I want to perform a rolling update so that both the new version and the old version are running, but I can't even get one version deployed to ECS. My .gitlab-ci.yml is:
deploy_ecs:
stage: deploy_ecs
script:
- aws ecs register-task-definition --cli-input-json file://task_definition.json
task_definition.json is:
{
"family": "frontend",
"containerDefinitions": [
{
"name": "frontend",
"image": "registry.gitlab.com/myproject/application/myimage:latest",
"memory": 300,
"portMappings": [
{
"containerPort": 8080,
"hostPort": 80
}
],
"essential": true,
"environment": [
{
"name": "Frontend",
"value": "dev"
}
]
}
]
}
Attempting to create a service from the console, I get this error:
The selected task definition is not compatible with the selected compute strategy.
Manually, on the EC2 instance backing the ECS cluster, I can run
docker run -d -p 80:8080 myimage
which does run the app. Am I able to:
Deploy the task definition file as above and run the service in my cluster
Deploy in a way so that there will be both versions in a rolling update to avoid any downtime
Do both of the above from my .gitlab-ci.yml
The EC2 instance is confirmed to be running the ecs-agent and I can see the container instance registered correctly, so I know ECS is running.
I used the console and the service was created successfully, with the following task definition:
{
"requiresCompatibilities": [
"EC2"
],
"family": "frontend",
"containerDefinitions": [
{
"name": "frontend",
"image": "registry.gitlab.com/myproject/application/myimage:latest",
"memory": 300,
"portMappings": [
{
"containerPort": 8080,
"hostPort": 80
}
],
"essential": true,
"environment": [
{
"name": "Frontend",
"value": "dev"
}
]
}
]
}
The task eventually failed with access denied, but everything else worked. You also need to add the "ecsTaskExecutionRole" for the task to function.
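To do the same from .gitlab-ci.yml and get a rolling update, something along these lines should work. This is a sketch that assumes the service uses the default ECS rolling deployment controller (not the EXTERNAL controller from the Terraform above); the cluster and service names match the examples here:
# Register a new revision of the task definition and capture its ARN
TASK_DEF_ARN=$(aws ecs register-task-definition \
  --cli-input-json file://task_definition.json \
  --query 'taskDefinition.taskDefinitionArn' --output text)
# Point the service at the new revision; ECS then replaces tasks as a rolling update
aws ecs update-service \
  --cluster frontend \
  --service frontend \
  --task-definition "$TASK_DEF_ARN"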
I have a single-instance Elastic Beanstalk environment which runs a Docker image hosted as a private image on Docker Hub. This works fine. I am trying to create a new multi-container environment which runs the exact same image (plus one other, not included in my code example here). In the multi-container environment, I cannot get Elastic Beanstalk to launch my Docker image; I get the following error:
ECS task stopped due to: Task failed to start. (img1_name: img2_name: CannotPullContainerError: Error response from daemon: pull access denied for user/repo, repository does not exist or may require 'docker login': denied: requested access to the resource is denied)
Here is the dockerrun for my single-instance environment:
{
"AWSEBDockerrunVersion": "1",
"Authentication": {
"Bucket": "my_bucket",
"Key": ".dockercfg"
},
"Image": {
"Name": "user/repo:tag",
"Update": "true"
},
"Ports": [
{
"ContainerPort": 5000,
"HostPort": 443
}
],
"Logging": "/var/log/nginx"
}
And here is the .dockercfg file:
{
"auths": {
"https://index.docker.io/v1/": {
"auth": "my_token"
}
}
}
Again, the above works fine.
My multi-container Dockerrun file is as follows:
{
"AWSEBDockerrunVersion": "2",
"authentication": {
"bucket": "my_bucket",
"key": ".dockercfg"
},
"containerDefinitions": [
{
"name": "img_name",
"image": "user/repo:tag",
"essential": true,
"memoryReservation": 128,
"portMappings": [
{
"hostPort": 80,
"containerPort": 5000
}
]
}
],
"Logging": "/var/log/nginx"
}
I have SSH-ed into my Elastic Beanstalk instance and run the following to check that it is able to access the .dockercfg from my S3 bucket:
aws s3api get-object --bucket mybucket --key dockercfg dockercfg
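For what it's worth, the auth value is just base64 of username:password, so I've also regenerated it locally to compare against what's in the file (the credentials below are placeholders):
# The "auth" field should be base64("username:password"); regenerate it to compare
echo -n 'my_dockerhub_user:my_dockerhub_password' | base64
# Or decode the value currently in the file to verify it
echo 'my_token' | base64 --decode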
I have also tried various different formats for the .dockercfg file including...
{
"https://index.docker.io/v1/": {
"auth": "zq212MzEXAMPLE7o6T25Dk0i",
"email": "email#example.com"
}
}
{
"auths": {
"https://index.docker.io/v1/": {
"auth": "zq212MzEXAMPLE7o6T25Dk0i"
}
}
}
I'm tearing my hair out over this. I've found a few similar threads here and on the AWS forums, but nothing seems to resolve my issue. Any help is greatly appreciated.
The following CloudFormation script creates a task definition but does not seem to create the container definition correctly. Can anyone tell me why?
{
"AWSTemplateFormatVersion": "2010-09-09",
"Description": "Test stack for troubleshooting task creation",
"Parameters": {
"TaskFamily": {
"Description": "The task family to associate the task definition with.",
"Type": "String",
"Default": "Dm-Testing"
}
},
"Resources": {
"TaskDefinition": {
"Type": "AWS::ECS::TaskDefinition",
"Properties": {
"Family": {
"Ref": "TaskFamily"
},
"RequiresCompatibilities": [
"EC2"
],
"ContainerDefinitions": [
{
"Name": "sample-app",
"Image": "nginx",
"Memory": 200,
"Cpu": 10,
"Essential": true,
"Environment": [
{
"Name": "SOME_ENV_VARIABLE",
"Value": "SOME_VALUE"
}
]
}
]
}
}
}
}
When I view the created task definition, there is no container listed in the builder view of the task definition in AWS.
The information is listed, however, under the JSON tab of the task definition (the screenshot of that tab showed only a subset of the info, not all of it).
The result of this is that, when the task is run in a cluster, it does run the image, but runs it without the environment variables applied. In addition, CloudFormation does not report any errors when creating this stack, or when running the created task.
Finally, this CloudFormation script is a cut-down example of the 'real' script, which has started exhibiting this same issue. That script has been working fine for around a year now, and, as far as I can see, there have been no changes to the script between it working and breaking.
I would greatly appreciate any thoughts or suggestions on this because my face is beginning to hurt from smashing it against this particular wall.
Turns out this was a bug in CloudFormation that only occurred when creating a task definition using a script through the AWS console. Amazon has now resolved this.
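Independent of what the console shows, you can also check what actually got registered by describing the task definition with the AWS CLI (the family name below is the template default from above):
# Show the container definitions (including environment) that were actually registered
aws ecs describe-task-definition --task-definition Dm-Testing \
  --query 'taskDefinition.containerDefinitions'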
I'm deploying an ASP.NET Core Web API app as a Docker image to AWS ECS, so I use a task definition file for that.
It turns out the app only works if I specify the environment variable VIRTUAL_HOST with the public DNS of my EC2 instance (as highlighted here: http://docs.servicestack.net/deploy-netcore-docker-aws-ecs); see taskdef.json below:
{
"family": "...",
"networkMode": "bridge",
"containerDefinitions": [
{
"image": "...",
"name": "...",
"cpu": 128,
"memory": 256,
"essential": true,
"portMappings": [
{
"containerPort": 80,
"hostPort": 0,
"protocol": "http"
}
],
"environment": [
{
"name": "VIRTUAL_HOST",
"value": "ec2-xx-xxx-xxx-xxx.compute-1.amazonaws.com"
}
]
}
]
}
Once the app is deployed to AWS ECS, I hit the endpoints - e.g. http://ec2-xx-xxx-xxx-xxx.compute-1.amazonaws.com/v1/ping
With the actual public DNS of my EC2 instance in VIRTUAL_HOST, everything works fine.
Without the env variable, I get "503 Service Temporarily Unavailable" from nginx/1.13.0.
And if I set VIRTUAL_HOST to an empty string, I get a "502 Bad Gateway" from nginx/1.13.0.
Now, I'd like to avoid specifying virtual host in the taskdef file - is that possible? Is my problem ASP.NET Core related or nginx related?
Amazon ECS has a secret management system using Amazon S3. You have to create a secret in your ECS interface, and then you will be able to reference it in your configuration as an environment variable.
{
"family": "...",
"networkMode": "bridge",
"containerDefinitions": [
{
"image": "...",
"name": "...",
"cpu": 128,
"memory": 256,
"essential": true,
"portMappings": [
{
"containerPort": 80,
"hostPort": 0,
"protocol": "http"
}
],
"environment": [
{
"name": "VIRTUAL_HOST",
"value": "SECRET_S3_VIRTUAL_HOST"
}
]
}
]
}
Store secrets on Amazon S3, and use AWS Identity and Access Management (IAM) roles to grant access to those stored secrets in ECS.
Full blog post
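The pattern from that post is essentially an entrypoint wrapper that pulls the value out of S3 when the container starts. A minimal sketch, assuming a bucket and key you create yourself (both names below are placeholders, and the AWS CLI must be available in the image):
#!/bin/sh
# entrypoint.sh (sketch): fetch the secret from S3 at startup, export it,
# then hand off to the real command. Bucket and key names are placeholders.
export VIRTUAL_HOST="$(aws s3 cp s3://my-secrets-bucket/virtual_host -)"
exec "$@"
The instance or task role then needs s3:GetObject on that object, which is the IAM piece mentioned above.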
You could also make your own Nginx Docker image, which will already contain the environment variable:
FROM nginx
LABEL maintainer="YOUR_EMAIL"
ENV VIRTUAL_HOST="ec2-xx-xxx-xxxxxx.compute1.amazonaws.com"
And you would just have to build it, ship it privately and then use it for your configuration.
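Building and pushing it to a private registry would then look something like this (the repository name, account ID and region below are placeholders; any private registry works the same way):
# Build the customised nginx image
docker build -t my-nginx-virtualhost .
# Log in to ECR and push (account ID, region and repository name are placeholders)
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker tag my-nginx-virtualhost 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-nginx-virtualhost:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-nginx-virtualhost:latest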
The AWS documentation describes authenticating to GitHub using your browser, while you're logged into GitHub as a valid user with permission to the repository you want to deploy from:
http://docs.aws.amazon.com/codedeploy/latest/userguide/github-integ.html#github-integ-behaviors-auth
Is there any way to set up CodeDeploy without linking my user and without needing a browser? I'd love to do this using webhooks on each repository and AWS API calls, but I'll make a GitHub 'service user' if I have to.
More examples:
http://blogs.aws.amazon.com/application-management/post/Tx33XKAKURCCW83/Automatically-Deploy-from-GitHub-Using-AWS-CodeDeploy
I'd rather use webhooks on my repo, or set them up myself, than permit AWS access to every repository on my GitHub account.
There does not appear to be an alternative to doing the OAuth flow in your browser at this point. If you're concerned about opening your whole GitHub account up to Amazon, creating a service user is probably the best approach; unfortunately, it seems this user still needs administrative access to your repos to set up the integration.
After more research I realized my first answer is wrong: you can use the AWS CLI to create a CodePipeline using a GitHub OAuth token. Then you can plug in your CodeDeploy deployment from there. Here's an example configuration:
{
"pipeline": {
"roleArn": "arn:aws:iam::99999999:role/AWS-CodePipeline-Service",
"stages": [
{
"name": "Source",
"actions": [
{
"inputArtifacts": [],
"name": "Source",
"actionTypeId": {
"category": "Source",
"owner": "ThirdParty",
"version": "1",
"provider": "GitHub"
},
"outputArtifacts": [
{
"name": "MyApp"
}
],
"configuration": {
"Owner": "myusername",
"Repo": "myrepo",
"Branch": "master",
"OAuthToken": "**************"
},
"runOrder": 1
}
]
},
{
"name": "Beta",
"actions": [
{
"inputArtifacts": [
{
"name": "MyApp"
}
],
"name": "CodePipelineDemoFleet",
"actionTypeId": {
"category": "Deploy",
"owner": "AWS",
"version": "1",
"provider": "CodeDeploy"
},
"outputArtifacts": [],
"configuration": {
"ApplicationName": "CodePipelineDemoApplication",
"DeploymentGroupName": "CodePipelineDemoFleet"
},
"runOrder": 1
}
]
}
],
"artifactStore": {
"type": "S3",
"location": "codepipeline-us-east-1-99999999"
},
"name": "MySecondPipeline",
"version": 1
}
}
You can create the pipeline using the command:
aws codepipeline create-pipeline --cli-input-json file://input.json
Make sure that the GitHub OAuth token has the admin:repo_hook and repo scopes.
Reference: http://docs.aws.amazon.com/cli/latest/reference/codepipeline/create-pipeline.html
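Once the pipeline exists, you can confirm the GitHub source stage is picking up commits with the CLI (the pipeline name matches the example configuration above):
# Check the status of each stage in the new pipeline
aws codepipeline get-pipeline-state --name MySecondPipeline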
The CodeDeploy and GitHub integration works based on GitHub OAuth. So to use it, you will have to trust the CodeDeploy GitHub application using your GitHub account. Currently this integration will only work in your browser with a valid GitHub account, because the CodeDeploy application will always redirect back to the CodeDeploy console to verify and finish the OAuth authentication process.
You can do it using these bash commands:
FROM LOCAL TO REMOTE
rsync --delete -azvv -e "ssh -i /path/to/pem" /path/to/local/code/* ubuntu#66.66.66.66:/path/to/remote/code
FROM REMOTE TO LOCAL
rsync --delete -azvv -e "ssh -i /path/to/pem" ubuntu#66.66.66.66:/path/to/remote/code/* /path/to/local/code
rsync compares the files and only updates the ones that need to be updated.
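If you want to see what would be transferred before actually syncing, adding -n (dry run) to the same command works (paths and host are placeholders, as above):
# Dry run: list what would be copied or deleted without changing anything
rsync --delete -azvvn -e "ssh -i /path/to/pem" /path/to/local/code/* ubuntu@66.66.66.66:/path/to/remote/code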