Reg. Docker Image of microservices in AWS Lambda

I developed a Spring Boot microservices application in which each microservice is packaged into a separate Docker container. The databases for these services are also in separate Docker containers. Currently, all of these are hosted and running in AWS ECS. If I need to migrate to Lambda, can I reuse the same Docker containers as such? (Of course, I will add the AWS serverless dependency to all the pom.xml files and repackage.) Kindly let me know if I can run the modified Docker images as such in Lambda.
Thank You

I don't think you can share the same Docker image between your ECS task and Lambda, because they differ in a few aspects, and some of them are very specific to Lambda: how you write the handler as well as how you package it (see the handler sketch after the links below).
New for AWS Lambda – Container Image Support
Your wording is also a bit confusing. First you ask:
can I reuse the same docker containers as such?
and then you say
can run the modified docker images as such in Lambda?
Example task definitions
For Spring Boot applications specifically, you can take a look here:
Running APIs Written in Java on AWS Lambda
Java AWS Lambda Container Image Support (Complete Guide)
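To make the difference concrete: on ECS the image's entry point starts the embedded web server, while on Lambda the entry point is a handler that translates invocation events into requests your Spring Boot application can serve. A minimal sketch of such a handler, assuming the aws-serverless-java-container-springboot2 dependency and a hypothetical Application class annotated with @SpringBootApplication:

```java
import com.amazonaws.serverless.exceptions.ContainerInitializationException;
import com.amazonaws.serverless.proxy.model.AwsProxyRequest;
import com.amazonaws.serverless.proxy.model.AwsProxyResponse;
import com.amazonaws.serverless.proxy.spring.SpringBootLambdaContainerHandler;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestStreamHandler;

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class StreamLambdaHandler implements RequestStreamHandler {

    private static final SpringBootLambdaContainerHandler<AwsProxyRequest, AwsProxyResponse> handler;

    static {
        try {
            // Boots the Spring context once per Lambda execution environment (cold start),
            // then reuses it across invocations.
            handler = SpringBootLambdaContainerHandler.getAwsProxyHandler(Application.class);
        } catch (ContainerInitializationException e) {
            throw new RuntimeException("Could not initialize Spring Boot application", e);
        }
    }

    @Override
    public void handleRequest(InputStream input, OutputStream output, Context context) throws IOException {
        // Translate the API Gateway event into an HTTP request for the Spring app and stream back the response.
        handler.proxyStream(input, output, context);
    }
}
```

When you build a Lambda container image, the image's CMD points at a handler class like this one (served by the Lambda Java runtime interface client) rather than at something like `java -jar app.jar`, which is the main reason the ECS image can't be reused unchanged.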

Related

build and push docker image to AWS ECR using lambda

Is it possible to automate building a docker image from code committed into github (no tests involved) and then push it to AWS ECR using a lambda function?
You cannot do it with Lambda alone, as Lambda is not really a suitable execution environment for the Docker daemon (which is necessary to build the images). However, you can use Lambda + SNS to trigger an endpoint that points to a service you developed, hosted on EC2, which runs the docker build command after a git clone (you can use something like Python's Fabric, fabfile.org, or any framework that lets you execute server commands).
You can certainly extend this idea, for example by bringing the EC2 build machine up from an AMI that automates all of this, and so on.
The big point here is that you don't really have control over what's provisioned in Lambda, so you need EC2.
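To make the "Lambda + SNS" piece concrete, here is a rough sketch (in Java, to match the rest of this page) of a Lambda that just forwards a repository URL to an SNS topic the EC2 build box subscribes to. The BUILD_TOPIC_ARN variable and the event shape are assumptions for illustration; the heavy lifting (git clone, docker build, push) still happens on EC2:

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.sns.AmazonSNS;
import com.amazonaws.services.sns.AmazonSNSClientBuilder;

import java.util.Map;

public class BuildTriggerHandler implements RequestHandler<Map<String, String>, String> {

    private static final AmazonSNS SNS = AmazonSNSClientBuilder.defaultClient();
    private static final String TOPIC_ARN = System.getenv("BUILD_TOPIC_ARN"); // hypothetical env var

    @Override
    public String handleRequest(Map<String, String> event, Context context) {
        // The EC2 build service subscribed to this topic is expected to git clone the URL,
        // run `docker build`, and push the resulting image to ECR.
        String repoUrl = event.get("repository"); // assumed event shape for illustration
        return SNS.publish(TOPIC_ARN, repoUrl).getMessageId();
    }
}
```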

AWS Docker Container for Local Development

I'm using AWS DynamoDB, Lambda, ElasticSearch, and ElastiCache (Redis). I want to bring all these services offline for local development. I wonder, is there a Docker container for all these services?
Perhaps! There's a set of Docker containers that claims to provide local implementations of popular AWS services: localstack.
Edit: For Lambda-specific things there's also Docker Lambda!
I've never actually used these Docker containers, but have wanted to. (My development needs lean towards commodity services instead of vendor-specific ones: MongoDB instead of DynamoDB, and sure, we might use ElastiCache to run our Redis cluster, but that just means in local development we can use Redis directly. Having said that, that's not everyone's cup of tea, and it may not be possible for some things.)
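If you do try localstack, your application code barely changes: you point the AWS SDK client at the local endpoint instead of the real service. A sketch, assuming localstack's default edge port 4566 (older versions exposed one port per service, so adjust to your container):

```java
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder.EndpointConfiguration;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;

public class LocalDynamoClient {

    public static AmazonDynamoDB build() {
        return AmazonDynamoDBClientBuilder.standard()
                // Route requests to the local container rather than the real AWS endpoint.
                .withEndpointConfiguration(new EndpointConfiguration("http://localhost:4566", "us-east-1"))
                // localstack does not validate credentials, but the SDK still requires some.
                .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials("test", "test")))
                .build();
    }
}
```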
We use Docker for most AWS services for local development, except for AWS Lambda.
We use the service containers as below:
MySQL for RDS MySQL
Redis for ElastiCache
ElasticSearch for AWS ElasticSearch
fake-s3 for S3
ActiveMQ for mocking SQS and SNS topics (the implementation for SNS topics is a bit ugly, but it is abstracted away in one place behind some if-else statements; see the sketch after this list)
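A minimal sketch of what that single abstraction point can look like (the interface name and the flag are hypothetical, and the ActiveMQ branch is stubbed out to keep it short):

```java
import com.amazonaws.services.sns.AmazonSNS;
import com.amazonaws.services.sns.AmazonSNSClientBuilder;

// One interface for publishing; callers never know which transport is behind it.
public interface TopicPublisher {

    void publish(String topicArn, String message);

    static TopicPublisher create() {
        // The if/else lives in exactly this one place.
        if (Boolean.parseBoolean(System.getenv("USE_LOCAL_ACTIVEMQ"))) {
            return (topicArn, message) -> {
                // Send to the local ActiveMQ container instead, e.g. via a JMS topic
                // (omitted here to keep the sketch short).
            };
        }
        AmazonSNS sns = AmazonSNSClientBuilder.defaultClient();
        return sns::publish;
    }
}
```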
Most of our services use docker-compose to start the dependent containers. We've included these containers on our build server too, to run our integration tests.
In addition, most of the containers we are using needed some modifications to the original Dockerfile, so we had to push our changes to our own Docker repository, which we maintain using ECS.
For Lambda, we do not use a Docker container; instead we start our own HTTP server locally to test and invoke the Lambda function.
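That local HTTP server does not need to be fancy; something like the JDK's built-in com.sun.net.httpserver wrapped around the handler is enough. A sketch, assuming a hypothetical MyHandler that implements RequestStreamHandler:

```java
import com.amazonaws.services.lambda.runtime.RequestStreamHandler;
import com.sun.net.httpserver.HttpServer;

import java.io.ByteArrayOutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class LocalLambdaServer {

    public static void main(String[] args) throws Exception {
        RequestStreamHandler handler = new MyHandler(); // hypothetical handler under test

        HttpServer server = HttpServer.create(new InetSocketAddress(3000), 0);
        server.createContext("/invoke", exchange -> {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            int status = 200;
            try {
                // The raw HTTP body becomes the Lambda event payload.
                // Context is null here; pass a stub if the handler actually uses it.
                handler.handleRequest(exchange.getRequestBody(), out, null);
            } catch (Exception e) {
                status = 500;
                out.reset();
                out.write(e.toString().getBytes(StandardCharsets.UTF_8));
            }
            exchange.sendResponseHeaders(status, out.size());
            exchange.getResponseBody().write(out.toByteArray());
            exchange.close();
        });
        server.start();
        System.out.println("Lambda test endpoint on http://localhost:3000/invoke");
    }
}
```

You can then POST a sample event payload to the endpoint (with curl or from an integration test) to exercise the function without deploying it.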
We've been using this setup for over a year without any issues. You may also want to refer to this blog post from IFTTT to get some more ideas around DNS resolution and how to make this setup better.

Docker consumer on AWS while using RabbitMQ

I have most of my app designed and deployed. We have gone for RabbitMQ over SQS for several reasons.
I currently have my consumer running in a docker container and would like to deploy this on AWS. I am fairly familiar with Elastic Beanstalk since our web-tier is running there, but it seems all workers deployed this way have to use SQS?
The other option I am aware of is to use ECS for the docker component, but I do not want to make the image publicly available and don't have access to a private repository.
Is there some basic functionality I am missing, a document describing how to deploy to ECS using a Dockerfile and source code locally, or a way of deploying to EB using a Dockerfile without being locked into SQS?
Edit
I have found the note in the EB docs which says that Dockerfiles are not supported when building a multi-container environment and that repositories currently have to be used for that purpose, so EB is out for me.

Deployment methods for docker based micro services architecture on AWS

I am working on a project using a microservices architecture.
Each service lives in its own docker container and has a separate git repository in order to ensure loose coupling.
It is my understanding that AWS recently announced support for Multi-Container Docker environments in ElasticBeanstalk. This is great for development because I can launch all services with a single command and test everything locally on my laptop. Just like Docker Compose.
However, it seems I only have the option to deploy all services at once, which I am afraid defeats the initial purpose of having a microservices architecture.
I would like to be able to deploy/version each service independently to AWS. What would be the best way to achieve that while keeping infrastructure management to a minimum?
We are currently using Amazon ECS to accomplish exactly what you are trying to achieve. You can define your Docker container in a task definition and then create an ECS service, which will handle the number of instances, scaling, etc.
One thing to note: Amazon uses the word container a lot in the documentation, and they may be referring to the EC2 "container instance" that hosts your Docker containers in the cluster rather than to your Docker containers themselves.
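If you want to keep infrastructure management to a minimum, the per-service deployment step can also be scripted against the ECS API. A rough sketch with the AWS SDK for Java; the cluster name, service name, image URI, and ports are made-up values for illustration:

```java
import com.amazonaws.services.ecs.AmazonECS;
import com.amazonaws.services.ecs.AmazonECSClientBuilder;
import com.amazonaws.services.ecs.model.ContainerDefinition;
import com.amazonaws.services.ecs.model.CreateServiceRequest;
import com.amazonaws.services.ecs.model.PortMapping;
import com.amazonaws.services.ecs.model.RegisterTaskDefinitionRequest;
import com.amazonaws.services.ecs.model.RegisterTaskDefinitionResult;

public class DeployOrderService {

    public static void main(String[] args) {
        AmazonECS ecs = AmazonECSClientBuilder.defaultClient();

        // One task definition per microservice; registering it again creates a new revision.
        RegisterTaskDefinitionResult task = ecs.registerTaskDefinition(new RegisterTaskDefinitionRequest()
                .withFamily("order-service")
                .withContainerDefinitions(new ContainerDefinition()
                        .withName("order-service")
                        .withImage("123456789012.dkr.ecr.us-east-1.amazonaws.com/order-service:1.4.2")
                        .withMemory(512)
                        .withPortMappings(new PortMapping().withContainerPort(8080))));

        // The service keeps the desired number of tasks running for this one microservice.
        ecs.createService(new CreateServiceRequest()
                .withCluster("production")
                .withServiceName("order-service")
                .withTaskDefinition(task.getTaskDefinition().getTaskDefinitionArn())
                .withDesiredCount(2));
    }
}
```

Registering the task definition again with a new image tag creates a new revision, and an update of the service (not shown) rolls that revision out without touching the other microservices.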

How to configure Amazon container service without docker hub integration

I am trying to set up a new Spring Boot + Docker (microservices) based project. The deployment is targeted at AWS. Every service has a Dockerfile associated with it. I am thinking of using Amazon Container Service for deployment, but as far as I can see it only pulls images from Docker Hub. I don't want ECS to pull from Docker Hub; rather, I want it to build the images from the Dockerfiles and then take over deploying those containers. Is that possible? If yes, how?
This is not possible yet with the Amazon EC2 Container Service (ECS) alone - while ECS meanwhile supports private registries (see also the introductory blog post), it doesn't yet offer an image build service (as usual, AWS is expected to add such notable additional features over time, see e.g. the Feature Request: ECS container dream service for more on this).
However, it can already be achieved with AWS Elastic Beanstalk's built-in initial support for Single Container Docker Configurations:
Docker uses a Dockerfile to create a Docker image that contains your source bundle. [...] Dockerfile is a plain text file that contains instructions that Elastic Beanstalk uses to build a customized Docker image on each Amazon EC2 instance in your Elastic Beanstalk environment. Create a Dockerfile when you do not already have an existing image hosted in a repository. [emphasis mine]
In an ironic twist, Elastic Beanstalk has now added Multicontainer Docker Environments based on ECS, but this highly desired, more versatile Docker deployment option doesn't offer the ability to build images either:
Building custom images during deployment with a Dockerfile is not supported by the multicontainer Docker platform on Elastic Beanstalk. Build your images and deploy them to an online repository before creating an Elastic Beanstalk environment. [emphasis mine]
As mentioned above, I would expect this to be added to ECS in a not too distant future due to AWS' well known agility (see e.g. the most recent ECS updates), but they usually don't commit to roadmap details, so it is hard to estimate how long we need to wait on this one.
Meanwhile, Amazon has introduced the EC2 Container Registry: https://aws.amazon.com/ecr/
It is a private Docker repository, if you do not like Docker Hub, and it is nicely integrated with the ECS service.
However, it does not build your Docker images, so it does not solve the entire problem.
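One practical wrinkle when wiring a build server up to ECR: pushes need a temporary login token rather than a fixed password. A purely illustrative sketch of fetching one with the AWS SDK for Java (the ecr module):

```java
import com.amazonaws.services.ecr.AmazonECR;
import com.amazonaws.services.ecr.AmazonECRClientBuilder;
import com.amazonaws.services.ecr.model.AuthorizationData;
import com.amazonaws.services.ecr.model.GetAuthorizationTokenRequest;

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class EcrLogin {

    public static void main(String[] args) {
        AmazonECR ecr = AmazonECRClientBuilder.defaultClient();

        AuthorizationData auth = ecr.getAuthorizationToken(new GetAuthorizationTokenRequest())
                .getAuthorizationData().get(0);

        // The token is base64("AWS:<password>"); split it for use with `docker login`.
        String decoded = new String(Base64.getDecoder().decode(auth.getAuthorizationToken()),
                StandardCharsets.UTF_8);
        String password = decoded.split(":", 2)[1];

        System.out.println("docker login -u AWS -p " + password + " " + auth.getProxyEndpoint());
    }
}
```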
I use a Bamboo server for building images (the source is in Git repositories in Bitbucket). Bamboo pushes the images to Amazon's container registry.
I am hoping Bitbucket Pipelines will make the process smoother, with less configuration of build servers. From the videos I have seen, all your build configuration sits right in your repository. It is still in closed beta, so I guess we will have to wait a bit longer to see what it ends up being.