I am using application version label jenkins-mt-002.zip, which defines a Dockerfile as well as a Dockerrun.aws.json file. The JSON file contains only the volume mapping:
{
  "AWSEBDockerrunVersion": "1",
  "Volumes": [
    {
      "HostDirectory": "/efs_mount/master/live",
      "ContainerDirectory": "/root/.jenkins"
    }
  ]
}
I am trying to map an EFS mount located at /efs_mount on the host system to /root/.jenkins inside the Docker container. I thought I had set it up correctly, but apparently I'm doing something wrong. Could someone take a look and let me know what I'm doing wrong?
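One way to narrow this down is to check what Docker actually mounted. A diagnostic sketch, assuming you can SSH into the Elastic Beanstalk EC2 instance (the container ID placeholder is whatever docker ps reports for the Jenkins container):

# List running containers, then print the mounts Docker applied to the Jenkins container
docker ps
docker inspect -f '{{ json .Mounts }}' <container-id>

If the EFS path does not appear in the output, the volume mapping never made it into the container.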
That is what I get when I follow the instructions in the official Docker tutorial here: tutorial link
I uploaded my Dockerrun.aws.json file and followed all the other instructions.
The logs show nothing, even when I click Request.
If anyone has a clue as to what I need to do, I'd appreciate it; for instance, why would not having a default VPC even matter here? I have only used my AWS account to set up Linux EC2 instances for a Deep Learning nanodegree at Udacity in the past (I briefly tried to set up a VPC just for practice, but I am sure I deleted/terminated everything when I found out that is not included in the free tier).
The author of the official tutorial forgot to mention that you have to add the tag to the image name in the Dockerrun.aws.json file, as below (in gedit or another editor), where :firsttry is the tag:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "hockeymonkey96/catnip:firsttry",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "5000"
    }
  ],
  "Logging": "/var/log/nginx"
}
It works.
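For context, Elastic Beanstalk can only pull that tag if it exists in the registry. A minimal sketch of how the tag would have been created and pushed (hockeymonkey96/catnip is the image from the tutorial above; substitute your own repository and tag):

# Tag the locally built image and push it to Docker Hub
docker tag catnip hockeymonkey96/catnip:firsttry
docker push hockeymonkey96/catnip:firsttry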
In the task definition on ECS, I have provided an environment variable as follows:
Key as HOST_NAME and the value as something.cloud.com
On my local machine I use this docker run command and I'm able to pass in my env variables, but through the task definition the variables are not being passed to the container.
The docker run command below works on local, but how do I set it up in the task definition in AWS ECS?
docker run -e HOST_NAME=something.cloud.com sid:latest
You should call it name and not key; see the example below:
{
  "name": "nginx",
  "image": "",
  "portMappings": [
    {
      "containerPort": 80,
      "hostPort": 80
    }
  ],
  "environment": [
    {
      "name": "HOST_NAME",
      "value": "something.cloud.com"
    }
  ]
}
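To apply a change like this, the container definition has to be registered as a new task definition revision and the service pointed at it. A hedged sketch using the AWS CLI (taskdef.json, my-cluster, and my-service are placeholders):

# Register a new revision from a local JSON file
aws ecs register-task-definition --cli-input-json file://taskdef.json
# Update the service; referencing the family name picks the latest ACTIVE revision
aws ecs update-service --cluster my-cluster --service my-service --task-definition nginx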
If you used the new Docker Compose integration with ECS, then you will need to update the stack.
It is smart enough to update only the parts that changed. In my case, the task definition was not picking up new environment variables set in a .env file and mounted into the Docker container.
Run the same command you used to create the stack; this time round it'll update it (only the parts that changed):
docker --context your-ecs-context compose -f your.docker-compose.yml up
For more: https://docs.docker.com/engine/context/ecs-integration/#rolling-update
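For reference, a minimal sketch of the kind of Compose file this refers to, with the variables coming from a .env file (the service name is a placeholder, reusing the image from the question above):

services:
  web:
    image: sid:latest
    env_file:
      - .env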
You can set the hostname variable in the task definition JSON file:
hostname
Type: string
Required: no
The hostname to use for your container. This parameter maps to Hostname in the Create a container section of the Docker Remote API and the --hostname option to docker run.
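A minimal sketch of where that parameter sits in a container definition (a hedged illustration reusing the names from the question; only hostname is the point here):

{
  "containerDefinitions": [
    {
      "name": "sid",
      "image": "sid:latest",
      "hostname": "something.cloud.com"
    }
  ]
}

Note that hostname sets the container's hostname; if you need the value as an environment variable inside the container, use the environment array shown earlier instead.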
Composition
Jenkins server on EC2 instance, uses EFS
Docker image for above Jenkins server
Need
Write templates to a directory on EFS each time ECS starts the task that builds the Jenkins server
Where is the appropriate place to put a step to do the write?
Tried
If I do it in the Dockerfile, it writes to the Docker image, but the changes never propagate to EFS, so the templates are not available as projects on the Jenkins server.
I've tried putting the write command in jenkins.sh, but I can't figure out how that is run, and in any case it doesn't place the templates where I need them.
The original question included:
Write templates to directory on EFS each time ECS starts the task
In addition to #luke-peterson's answer, you can use a shell script as the entry point in your Dockerfile, in order to copy files between the mounted EFS folder and the container.
Instead of ENTRYPOINT, use the following directive in your Dockerfile:
CMD ["sh", "/app/startup.sh"]
And inside startup.sh you can copy files freely and run the app (a .NET Core app in my example):
#!/bin/sh
# Copy static files from the image into the mounted volume, then start the app
cp -R /app/wwwroot/. /var/jenkins-home
dotnet /app/app.dll
Of course, you can also do it programmatically inside the app itself.
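Put together, the relevant part of the Dockerfile might look like this (a hedged sketch assuming the /app paths from the snippet above):

# Copy the startup script into the image and run it when the container starts
COPY startup.sh /app/startup.sh
WORKDIR /app
CMD ["sh", "/app/startup.sh"]

Because CMD runs at container start rather than at image build, the copy happens after the EFS volume has been mounted, which is why the files land on EFS instead of only inside the image.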
You need to start the task with a volume, then mount that volume into the container. This way you have persistent storage across multiple Jenkins start/stop cycles.
Your task definition would look something like the one below (I've removed the non-relevant parts). The important components are mountPoints and volumes. Note that this is not the same as volumesFrom, as you aren't mounting volumes from another container, but rather running them in a single task.
This also assumes you're running Jenkins in the default JENKINS_HOME directory as well as having mounted your EFS drive to /mnt/efs/jenkins-home on the EC2 instance.
{
  "requiresAttributes": ...
  "taskDefinitionArn": ... your ARN ...,
  "containerDefinitions": [
    {
      "portMappings": ...
      .... more config here .....
      "mountPoints": [
        {
          "containerPath": "/var/jenkins_home",
          "sourceVolume": "jenkins-home"
        }
      ]
    }
  ],
  "volumes": [
    {
      "host": {
        "sourcePath": "/mnt/efs/jenkins-home"
      },
      "name": "jenkins-home"
    }
  ],
  "family": "jenkins"
}
(Screenshot: the task definition within ECS.)
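The answer above assumes the EFS drive is already mounted at /mnt/efs/jenkins-home on the EC2 instance. A hedged sketch of a typical mount (the filesystem ID fs-12345678 and the region are placeholders; see the AWS EFS docs for the exact options):

# Mount the EFS filesystem on the container instance, then create the Jenkins directory
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1 fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs
sudo mkdir -p /mnt/efs/jenkins-home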
What I'm trying to achieve: I have a Docker container which contains a CMS, and that CMS has a folder named 'assets'. I need the assets folder to be available to other containers, and also for the data to be safe from deletion when containers/images are removed.
How I've attempted to solve it: I have read all about mounting volumes in multi-container environments, looked at a bunch of examples, and came up with the following Dockerrun.aws.json file:
{
  "AWSEBDockerrunVersion": 2,
  "volumes": [
    {
      "name": "assets",
      "host": {
        "sourcePath": "/var/app/current/cms"
      }
    }
  ],
  "containerDefinitions": [
    {
      //...
      "mountPoints": [
        {
          "sourceVolume": "assets",
          "containerPath": "/var/www/assets",
          "readOnly": false
        }
      ]
    }
  ]
}
I can upload this via Beanstalk and everything builds and all boxes are green. However, if I log in to the EC2 instance and ls /var/app/current, the directory is empty. I was expecting to see /var/app/current/cms/assets sitting there...
I think I'm missing a core concept or flag in my build file; any direction or better way of achieving what I'm trying to do would be appreciated.
Take a look at this link and try it out:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_v2config.html
I believe this is similar to what you are asking for.
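For the 'available to other containers' part, a hedged sketch of what the containerDefinitions section might look like with a second container mounting the same volume (the container names, images, and memory values are placeholders; the volumes section stays as in your file):

"containerDefinitions": [
  {
    "name": "cms",
    "image": "my-cms-image",
    "essential": true,
    "memory": 256,
    "mountPoints": [
      {
        "sourceVolume": "assets",
        "containerPath": "/var/www/assets",
        "readOnly": false
      }
    ]
  },
  {
    "name": "worker",
    "image": "my-worker-image",
    "essential": false,
    "memory": 128,
    "mountPoints": [
      {
        "sourceVolume": "assets",
        "containerPath": "/var/www/assets",
        "readOnly": true
      }
    ]
  }
]

Both containers see the same host directory, so writes from the CMS are visible to the worker, and the data survives container removal because it lives on the host path.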
I created the following simple Dockerrun file per the instructions here using a public container and it's successfully running a single instance.
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "sbeam/influxdb",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "8086"
    }
  ],
  "Volumes": [
    {
      "HostDirectory": "/data",
      "ContainerDirectory": "/data"
    }
  ]
}
However, I want the /data directory to be mounted within the EC2 instance as a certain EBS volume. I've found answers (here and here) that indicate a .ebextensions directory is needed, but since I am not uploading a .zip image for the container, how is this possible? Is it necessary to download the Docker container, add the .ebextensions directory, zip, and re-upload?
You will have to follow option 3: "Create a .zip file containing your application files, any application file dependencies, the Dockerfile, and the Dockerrun.aws.json file." as explained in your first link.
In your case the .zip file may contain only the Dockerrun.aws.json file and the .ebextensions folder.
In order to attach the EBS volume to your instance, you can check this article: http://blogs.aws.amazon.com/application-management/post/Tx224DU59IG3OR9/Customize-Ephemeral-and-EBS-Volumes-in-Elastic-Beanstalk-Environments. It will give you guidance on the content of the .config file that goes inside the .ebextensions folder.
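For illustration, a hedged sketch of what the bundle might contain. The BlockDeviceMappings option below asks Elastic Beanstalk to attach a new 100 GB EBS volume at /dev/sdb (the device name and size are assumptions; the linked article covers the exact syntax, and mounting the device to /data would additionally need a commands section to format and mount it):

# .ebextensions/ebs.config
option_settings:
  - namespace: aws:autoscaling:launchconfiguration
    option_name: BlockDeviceMappings
    value: /dev/sdb=:100

The .zip can then be built with zip -r app.zip Dockerrun.aws.json .ebextensions/ and uploaded as a new application version.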