I was looking at using Elastic Beanstalk with multicontainer support, but it seems that AWS has scheduled this platform/functionality for retirement.
Here is the documentation for the supported Elastic Beanstalk platforms: https://docs.aws.amazon.com/elasticbeanstalk/latest/platforms/platforms-supported.html
I used the Docker platform (64bit Amazon Linux 2 v3.4.4 running Docker), and that version does not support Dockerrun.aws.json version 2 with multi-container support.
So then I came across this documentation: https://docs.aws.amazon.com/elasticbeanstalk/latest/platforms/platforms-retiring.html#platforms-retiring.mcdocker
Multicontainer support is marked as a retiring platform, and I was wondering why, basically.
Is a replacement version coming soon, or will Elastic Beanstalk simply stop supporting multiple containers?
Thanks a lot!
The Multicontainer Docker (Amazon Linux AMI) platform was based on ECS to support multi-container Docker. But since the regular Docker environment now supports docker-compose, you can run multiple containers without ECS:
Docker Compose features. This platform will allow you to leverage the features provided by the Docker Compose tool to define and run multiple containers. You can include the docker-compose.yml file to deploy to Elastic Beanstalk.
Docker Compose makes it much easier to run multiple containers on EB, so the ECS-based platform seems redundant.
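As a sketch of what this looks like on the Docker (Amazon Linux 2) platform: a docker-compose.yml at the root of the application bundle is enough to run multiple containers. The service names, images, and ports below are illustrative placeholders, not anything prescribed by Elastic Beanstalk:

```yaml
version: "3.8"
services:
  web:                    # your application container
    build: .              # built from the Dockerfile in the bundle
    ports:
      - "80:5000"         # EB routes HTTP traffic via the host port
  cache:                  # a second container, no ECS task definition needed
    image: redis:alpine
```

Deploying the bundle then starts both containers on the instance, much like docker-compose up does locally.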
I believe this service is intended for containerized web applications going forward: https://aws.amazon.com/apprunner/
Related
I am using Arch Linux on my development machine. I am trying to use a free tier AMI for EC2 in AWS.
I have found Amazon Linux 2 as one of the AMIs.
I didn't find an Arch Linux AMI in the free tier.
I know that by using Docker I can still use Arch Linux and keep the environment the same.
The reason I want to use Arch is that I am familiar with its package management, which is crucial for working comfortably on any particular Linux distribution.
So will using Docker affect AWS performance, and is Docker worth using at all?
Or should I get used to the Amazon Linux distribution?
If you like Arch Linux, use the Arch Linux Docker image.
The Docker overhead is very small.
Using Docker will also make it easy to port your setup to any location: other cloud, desktop, other OS.
Docker is a perfectly good way to go. Also consider that, in many regions, you can use AWS Fargate. It lets you run Docker containers (scaling them up and down, etc.) without having to manage servers (EC2 instances).
I have an application I want to deploy as a Docker container using Amazon's Elastic Beanstalk platform. However, the application is only certified against Docker v17, while Beanstalk uses Docker v18.
Is there a way of configuring Beanstalk to use a specific version of Docker? I cannot find any options to do this when I create my application within the AWS console (I have signed up for a free account so maybe such an option is for paid versions).
I am trying to get AWS X-Ray working in a multi-container Beanstalk app as described in the docs. I found a community-built X-Ray container which I can run alongside my app: pottava/xray:2.0. According to docker stats and docker ps this container is running and receiving/sending network traffic (the traces are sent via UDP to the container). But there is no tracing data showing up in the AWS console.
I have not enabled X-Ray via a .ebextensions/ config file as suggested here. Trying this failed the deployment to Beanstalk. In fact, the multi-container environment is not listed as a supported platform. So while plenty of docs mention using X-Ray on Beanstalk, I am not sure if there is a way to configure this on my multi-Docker environment.
Can X-Ray be configured in multi-container Docker Beanstalk? If yes, how?
What's the best way to troubleshoot the collection & delivery of traces?
The community-built Docker container to which you've linked should work as desired in AWS Elastic Beanstalk.
Have you added the necessary AWSXrayWriteOnlyAccess managed policy to your Elastic Beanstalk instance profile?
To further troubleshoot, please find the AWS X-Ray daemon logs from within the daemon's Docker container. The log will report any attempted calls to the PutTraceSegments API, as well as any errors which may result. In the linked Docker container, this file is located at /var/log/xray-daemon.log.
Can X-Ray be configured in multi-container Docker Beanstalk? If yes, how?
Yes, but it's not as simple as enabling the X-Ray daemon via .ebextensions as described in Running the X-Ray daemon on AWS Elastic Beanstalk. That won't work on Docker platforms (without significant networking hacks). According to that article,
Elastic Beanstalk does not provide the X-Ray daemon on the Multicontainer Docker (Amazon ECS) platform. Also, it's worth noting that neither Docker platform is listed under Supported Platforms in the article, Configuring AWS X-Ray debugging.
For the Docker platform (Amazon Linux 2), you can use docker-compose to run the X-Ray daemon in a container alongside your application. Here is a simple example of the docker-compose.yml that I use in a simple API app:
version: "3.9"
services:
  api: # my app instrumented with the AWS X-Ray SDK
    build:
      context: .
      dockerfile: Dockerfile-awseb
    ports:
      - "80:3000"
    environment:
      - AWS_XRAY_DAEMON_ADDRESS=xray:2000
    env_file: .env
  xray:
    image: "amazon/aws-xray-daemon"
For the Multicontainer platform, the Scorekeep example in the article, Instrumenting Amazon ECS applications, shows a more elaborate example of instrumenting in a multicontainer Docker environment in Elastic Beanstalk.
What's the best way to troubleshoot the collection & delivery of traces?
Some high-level tips...
Enable pre-requisite permissions as described in the X-Ray Getting Started article (viz., AWSXrayFullAccess).
Check the X-Ray daemon logs (which are configured differently in the Scorekeep multicontainer example than in the X-Ray daemon on Elastic Beanstalk).
Use the X-Ray Analytics console to confirm whether it's receiving traces.
When using a supported platform, you might find additional guidance in Configuring AWS X-Ray debugging.
I have a local Docker stack running Node.js, MongoDB and Nginx.
It runs perfectly using docker-compose up --build.
Now it's time to deploy my application to a production environment.
I have considered EC2 Container Service and EC2, but can you recommend an easier approach? The learning curve is steep!
For MongoDB -
Use AWS quick start MongoDB
http://docs.aws.amazon.com/quickstart/latest/mongodb/overview.html
http://docs.aws.amazon.com/quickstart/latest/mongodb/architecture.html
For the rest of the Docker stack, i.e. Node.js & Nginx -
Use the AWS ElasticBeanstalk Multi Container Deployment
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_ecs.html
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_v2config.html
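On the Multicontainer platform, the deployment is described by a Dockerrun.aws.json (version 2) file at the root of the application bundle. A minimal sketch for a Node.js app behind Nginx might look like the following; the image names, memory sizes, and ports are illustrative assumptions, not values from the docs linked above:

```json
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "nginx-proxy",
      "image": "nginx:latest",
      "essential": true,
      "memory": 128,
      "portMappings": [
        { "hostPort": 80, "containerPort": 80 }
      ],
      "links": ["node-app"]
    },
    {
      "name": "node-app",
      "image": "node:alpine",
      "essential": true,
      "memory": 256
    }
  ]
}
```

The links entry lets the Nginx container reach the Node.js container by name, similar to links in docker-compose.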
Elastic Beanstalk supports Docker, as documented here. Elastic Beanstalk manages the EC2 resources for you, which should make things a bit easier.
You can install Kontena on AWS and use it to deploy your application to a production environment (other cloud providers are also supported, of course). The transition from Docker Compose is very smooth, since kontena.yml uses syntax and keys similar to docker-compose.yml.
With Kontena you get a private image registry, a load balancer and secrets management built in, which are very useful when running containers in production.
I just "Dockerized" my infrastructure into containers. The environment is basically one nginx-php-fpm container, which contains Nginx configured with PHP-FPM. This container connects to multiple data containers, which contain the application files for the specific component.
I've seen multiple talks on deploying a single container to Beanstalk, but I'm not sure how I would deploy an environment like this. Locally the environment works: my nginx-php-fpm container uses the --volumes-from flag to mount volumes from a data container.
How would I create the same environment on Beanstalk? I can't find an option to mount volumes from another container. Also, is there a good platform that handles Docker orchestration yet?
AWS allows you to use multicontainer Docker.
You can use docker-compose to help you create your nginx-php-fpm environment.
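The --volumes-from flag itself is not expressible in a compose file, but a named volume shared by both services achieves the same effect. A minimal, hypothetical docker-compose.yml sketch follows; the image names and paths are placeholders for your own images:

```yaml
version: "3.8"
services:
  app-data:
    image: my-app-data            # hypothetical data image holding the app files
    volumes:
      - app-files:/var/www/html   # exposes the files through the named volume
  nginx-php-fpm:
    image: my-nginx-php-fpm       # hypothetical nginx + php-fpm image
    ports:
      - "80:80"
    volumes:
      - app-files:/var/www/html   # the same files, visible to the web container
volumes:
  app-files:                      # named volume replacing --volumes-from
```

Both containers see the same files through the app-files volume, which is the compose-era replacement for the data-container pattern.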