How do I deploy a named data volume, with contents, to nodes in a swarm? Here is what I want to do, as described in the Docker documentation:
“Consider a situation where your image starts a lightweight web server. You could use that image as a base image, copy in your website’s HTML files, and package that into another image. Each time your website changed, you’d need to update the new image and redeploy all of the containers serving your website. A better solution is to store the website in a named volume which is attached to each of your web server containers when they start. To update the website, you just update the named volume.”
(source: https://docs.docker.com/engine/reference/commandline/service_create/#add-bind-mounts-or-volumes)
I'd like to use the better solution. But the description doesn't say how the named volume is deployed to host machines running the web servers, and I can't get a clear read on this from the documentation. I'm using Docker-for-AWS to set up a swarm where each node is running on a different EC2 instance. If the containers are supposed to mount the volume locally, then how is it deployed to each node of the swarm? If it is mounted from a manager node as a network filesystem visible to the nodes, how is this specified in the docker-compose yaml file? And how does the revised volume get deployed from the development machine to the swarm manager? Can this be done through a deploy directive in a docker-compose yaml file? Can it be done in Docker Cloud?
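To make that concrete, below is the kind of thing I am hoping is possible. This is only a sketch, assuming the Cloudstor volume plugin that ships with Docker-for-AWS; the volume name, service name, and image are placeholders:

# Create a shared (EFS-backed) named volume visible to every node in the swarm
docker volume create -d "cloudstor:aws" --opt backing=shared website_html
# Attach that volume to every replica of the web service
docker service create --name web --replicas 3 \
  --mount type=volume,volume-driver=cloudstor:aws,source=website_html,destination=/usr/share/nginx/html \
  nginx:alpine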
Thanks
I am transferring an existing PHP application to Elastic Beanstalk and have a newbie question. My application has a data folder that grows and changes over time and can become quite large; currently the folder is a subfolder in the root directory of the application. In the traditional development model I just upload the changed PHP files and carry on using the same data folder. How can I do this in Elastic Beanstalk?
I don't want to have to download and upload the data folder every time I deploy a new version of the application. What is the best practice for this in AWS Elastic Beanstalk?
TIA
Peter
This is a question of continuous deployment.
Elastic Beanstalk supports CD from AWS CodePipeline: https://aws.amazon.com/getting-started/tutorials/continuous-deployment-pipeline/
To address "grows and changes over time and can grow quite large currently the folder is a subfolder in the root directory of the application", you can use CodeCommit to version your code using Git. If you version the data folder with your application, the deployment will include it.
If the data is something you can offload to an object store (S3) or a database (RDS/DynamoDB/etc), it would be a better practice to do so.
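If you do offload the folder to S3, a minimal sketch of the sync step could look like this (the bucket name and paths are made up):

# Sync the local data folder to a hypothetical S3 bucket
aws s3 sync ./data s3://my-app-data/data

The application then reads and writes objects in S3 instead of the local folder, so the data no longer has to travel with each deployment.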
As per the AWS documentation here, Elastic Beanstalk applications run on EC2 instances that have no persistent local storage; as a result, your EB application should be as stateless as possible and should use one of the persistent storage offerings provided by AWS.
A common strategy for persistent storage is to use Amazon EFS (Elastic File System). As noted in the documentation for using EFS with Elastic Beanstalk here:
Your application can treat a mounted Amazon EFS volume like local storage, so you don't have to change your application code to scale up to multiple instances.
Your EFS drive is essentially a mounted network drive. Files stored in EFS will be accessible across any instances that have the file system mounted, and will persist beyond instance termination and/or scaling events.
You can learn more about EFS here, and using EFS with Elastic Beanstalk here.
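As a rough illustration, mounting an EFS file system by hand from an instance looks something like the following; the file system ID, region, and mount point are placeholders, and Elastic Beanstalk can also do the mount for you via its configuration files:

sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
  fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs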
I need to develop a Spring Boot microservice and deploy it in Docker. I have developed a sample microservice. While learning Docker and container deployment I found plenty of documentation on installing Docker, building images, and running the application as a container, but I still have some doubts about the deployment procedure:
If I need to deploy 4 Spring Boot microservices in Docker, do I need to create a separate image for each one, or can I use the same Dockerfile for all my Spring Boot microservices?
I am using a PostgreSQL database. Can I include that connection in the Docker image, or do I need to manage it separately?
If you have four different Spring Boot applications, I suggest creating four different Dockerfiles, and building four different images from those files. Basically put one Dockerfile in each Spring application folder.
You can build the PostgreSQL credentials (hostname, username and password) into the application by writing them in the code. This is easiest.
If you use AWS and ECS (Elastic Container Service) or EC2 to run your Docker containers, you could store the credentials in the EC2 Parameter Store and have your application fetch them at startup; however, this takes a bit more AWS knowledge, and you have to use the AWS SDK to fetch the credentials from the application. Here is a StackOverflow question about exactly this: Accessing AWS parameter store values with custom KMS key
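As a rough sketch of that setup: one image per service, with the credentials injected when the container starts instead of fetched via the SDK inside the application (that is a variation on the above). The service names, parameter name, and database host below are made up:

# Build one image per microservice, each from its own Dockerfile
docker build -t service-a ./service-a
docker build -t service-b ./service-b
# Fetch the DB password from the Parameter Store (hypothetical parameter name)
DB_PASS=$(aws ssm get-parameter --name /myapp/db/password --with-decryption \
  --query Parameter.Value --output text)
# Hand the connection details to Spring Boot through environment variables
docker run -d \
  -e SPRING_DATASOURCE_URL=jdbc:postgresql://db.example.com:5432/mydb \
  -e SPRING_DATASOURCE_USERNAME=myuser \
  -e SPRING_DATASOURCE_PASSWORD="$DB_PASS" \
  service-a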
There can be one single image for all your microservices, but that is not a good design and is not suggested. Always try to decouple things from one another. In your case, create separate images (separate Dockerfiles) for each microservice.
The same goes for your second question: create a separate image (its own Dockerfile) for your database as well. For the credentials, you can follow Jonatan's suggestion.
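A minimal sketch of running PostgreSQL as its own container next to the services (the network name, credentials, and version tag are arbitrary):

# Put the database and the services on the same user-defined network
docker network create appnet
docker run -d --name postgres --network appnet \
  -e POSTGRES_DB=mydb -e POSTGRES_USER=myuser -e POSTGRES_PASSWORD=secret \
  postgres:9.6
# Each microservice container on the same network can then reach the database
# at jdbc:postgresql://postgres:5432/mydb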
I have run through the Docker 'Get Started' tutorial (https://docs.docker.com/get-started/part6/) and have also followed all the instructions with my own application and AWS. I used the wrong image in my service definition in my docker-compose.yml file. I have corrected the docker-compose.yml file and have tried to run docker stack deploy, but I get the following output and then nothing happens on the swarm. Is there something I can do to get the swarm to use the correct image, or do I need to start from scratch?
[myapp-swarm] ~/PycharmProjects/myapp $ docker stack deploy -c docker-compose.yml myapp
Updating service myservice_web (id: somerandomidstring)
image my_user/myprivaterepo:myapptag could not be accessed on a registry to record
its digest. Each node will access my_user/myprivaterepo:myapptag independently,
possibly leading to different nodes running different versions of the image.
When updating services that need credentials to pull the image, you need to pass --with-registry-auth. Images pulled for a service take a different path than a regular docker pull, because the actual pull is performed on each node in the swarm where an instance is deployed. To pull an image, the swarm cluster needs to have the credentials stored, so that they can be passed on to each node that performs the pull.
Can you confirm if passing --with-registry-auth makes the problem go away?
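In other words, something like the following (the stack and compose file names are taken from the output above):

docker login   # authenticate against the private registry on the machine you deploy from
docker stack deploy --with-registry-auth -c docker-compose.yml myapp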
I know this has been partially answered in a bunch of places, but the answers are all over the map, dated, and not well explained. I'm looking for the best practice as of February 2016.
The setup:
A PHP-based RESTful application service that lives on an EC2 instance. The EC2 instance uses S3 for uploaded user data (image files) and RDS MySQL for its DB (these two points aren't particularly important).
We develop in PHPStorm, and our source control is GitHub. When we deploy, we just use PHPStorm's built-in SFTP deployment to upload files directly to the EC2 instance (we have one instance for our Staging environment, and another for our Production environment). I deploy to Staging very often. Could be 20 times a day. I just click on a file in PHPStorm and say 'deploy to Staging', which does the SFTP transfer. Or, I might just click on the entire project and click 'deploy to Staging' - certain folders and files are excluded from the upload, which is part of PHPStorm's deployment configuration.
Recently, I put our EC2 instance behind a Load Balancer. I did this so that I can take advantage of Amazon's free SSL offering via the Certificate Manager, which does not support individual EC2 instances.
So, right now, there's a Load Balancer with only a single EC2 instance behind it. I maintain an Elastic IP pointing to the EC2 instance so that I can access it directly (see my current deployment method above).
Question:
I have not yet had the guts to create additional (clone) EC2 instances behind my Load Balancer, because I'm not sure how I should be deploying to them. A few ideas came to mind, but they're all pretty hacky.
Given the scenario above, what is currently the smoothest and best way to A) quickly deploy a codebase to a set of EC2 instances behind a Load Balancer, and B) actually 'clone' my current EC2 instance to create additional instances.
I haven't been able to really paint a clear picture of the above in my head yet, despite the fact that I've gone over a few (highly technical) suggestions.
Thanks!
You need to treat your EC2 instance as 100% dispensable, meaning that it can be terminated at any time and you should not care. A replacement EC2 instance would start and take over the work.
There are 3 ways to accomplish this:
Method 1: Each deployment creates a new AMI image.
When you deploy your app, you deploy it to a worker EC2 instance whose sole purpose is the "setup" of your app. Once the new version is deployed, you create a fresh AMI image from that EC2 instance and update your Auto Scaling launch configuration with the new AMI image. The old EC2 instances are terminated and replaced with instances running the new code.
New EC2 instances have the recent code already on them so they're ready to be added to the load balancer.
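A rough sketch of method 1 with the AWS CLI (the instance ID, AMI ID, names, and instance type are placeholders):

# Bake a new AMI from the worker instance that already has the new code on it
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "myapp-v42"
# Point Auto Scaling at the new AMI via a new launch configuration
aws autoscaling create-launch-configuration --launch-configuration-name myapp-v42 \
  --image-id ami-0abc1234 --instance-type t2.micro
aws autoscaling update-auto-scaling-group --auto-scaling-group-name myapp-asg \
  --launch-configuration-name myapp-v42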
Method 2: Each deployment is done to off-instance storage (like Amazon S3).
The EC2 instances will download the recent code from Amazon S3 and install it on boot.
So to put the new code in action, you terminate the old instances and new ones are launched to replace them which start using the new code.
This could be done in a rolling-update fashion, or as a blue/green deployment.
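For method 2, the user data on that "basic" AMI could be as simple as the following sketch (the bucket and paths are made up):

#!/bin/sh
# On first boot, pull the current release from S3 into the web root
aws s3 sync s3://my-deploy-bucket/releases/current /var/www/html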
Method 3: Similar to method 2, but this time the instances have some smarts and can be signaled to download and install the code.
This way, you don't need to terminate instances: the existing instances are told to update from S3 and they do so on their own.
Some tools that may help include:
Chef
Ansible
CloudFormation
Update:
Methods 2 & 3 both start with a "basic" AMI which is configured to pull the webpage assets from S3. This AMI is not changed from version-to-version of your website.
For example, the AMI can have Apache and PHP already installed and on boot it pulls the .php website assets from S3 and puts them in /var/www/html.
CloudFormation works well for this. In addition, for method 3, you can use cfn-hup to wait for update signals. When signaled, it'll pull updated assets from S3.
Another possibility is using Elastic Beanstalk which could be used to manage all of this for you.
Update:
To have your AMI image pull from Git, try the following:
Set up an EC2 instance with everything installed that you need for your web app.
Install Git and set up a local repo ready for a git pull.
Shut down the instance and create an AMI from it.
When you deploy, you do the following:
Git push to GitHub
Launch a new EC2 instance, based on your AMI image.
As part of the User Data (specified during the EC2 instance launch), specify something like the following:
#!/bin/sh
# Pull the latest code into the local repo that was baked into the AMI
cd /git/app
git pull
# Copy the files from the repo to the web folder
cp -R /git/app/. /var/www/html/
# Install PHP dependencies in the web folder
composer install --working-dir=/var/www/html
When done like this, that user data acts as a script which will run on first boot.
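For instance, launching a replacement instance with that script as its user data could look like this (the AMI ID and instance type are placeholders):

aws ec2 run-instances --image-id ami-0abc1234 --instance-type t2.micro \
  --user-data file://deploy.sh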
I am trying to set up a new Spring Boot + Docker (microservices) based project. The deployment is targeted at AWS. Every service has a Dockerfile associated with it. I am thinking of using the Amazon container service for deployment, but as far as I can see it only pulls images from Docker Hub. I don't want ECS to pull from Docker Hub; rather, I want it to build the images from the Dockerfiles and then take over deploying those containers. Is this possible? If yes, how?
This is not possible yet with the Amazon EC2 Container Service (ECS) alone - while ECS meanwhile supports private registries (see also the introductory blog post), it doesn't yet offer an image build service (as usual, AWS is expected to add such notable additional features over time, see e.g. the Feature Request: ECS container dream service for more on this).
However, it can already be achieved with AWS Elastic Beanstalk's built in initial support for Single Container Docker Configurations:
Docker uses a Dockerfile to create a Docker image that contains your source bundle. [...] Dockerfile is a plain text file that contains instructions that Elastic Beanstalk uses to build a customized Docker image on each Amazon EC2 instance in your Elastic Beanstalk environment. Create a Dockerfile when you do not already have an existing image hosted in a repository. [emphasis mine]
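As a sketch, a single-container deployment with the EB CLI looks roughly like this, assuming your source bundle contains the Dockerfile (the application and environment names are made up):

eb init my-docker-app -p docker -r us-east-1   # register the application with the Docker platform
eb create my-docker-env                        # EB builds the image from the Dockerfile and launches the environment
eb deploy                                      # redeploy after changes to the source bundle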
In an ironic twist, Elastic Beanstalk has now added Multicontainer Docker Environments based on ECS, but this highly desired, more versatile Docker deployment option in turn doesn't offer the ability to build images:
Building custom images during deployment with a Dockerfile is not supported by the multicontainer Docker platform on Elastic Beanstalk. Build your images and deploy them to an online repository before creating an Elastic Beanstalk environment. [emphasis mine]
As mentioned above, I would expect this to be added to ECS in a not too distant future due to AWS' well known agility (see e.g. the most recent ECS updates), but they usually don't commit to roadmap details, so it is hard to estimate how long we need to wait on this one.
Meanwhile, Amazon has introduced the EC2 Container Registry: https://aws.amazon.com/ecr/
It is a private Docker registry for those who do not want to use Docker Hub, and it is nicely integrated with the ECS service.
However, it does not build your Docker images, so it does not solve the entire problem.
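For reference, a typical build-and-push flow against ECR looks like this with a recent AWS CLI (the account ID, region, and repository name are placeholders):

aws ecr create-repository --repository-name myservice
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker build -t myservice .
docker tag myservice:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/myservice:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myservice:latest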
I use a Bamboo server for building images (the source is in Git repositories in Bitbucket). Bamboo pushes the images to Amazon's container registry.
I am hoping that Bitbucket Pipelines will make the process smoother, with less configuration of build servers. From the videos I have seen, all your build configuration sits right in your repository. It is still in a closed beta, so I guess we will have to wait a bit longer to see what it ends up being.