How to create one docker container per web service request? - web-services

I have a fairly heavy batch process (a Python script called "run_simulation.py") over which I have very little control. It can be launched by a single user through a web API, but it reads and writes from disk, so it wouldn't handle parallel requests.
Now, I'd like to have one Docker container instantiated per request so that all requests can be handled in parallel. What would be the way to do this? Is this even doable with Docker? What would be the module responsible for instantiating the container and passing the HTTP request to it?

Generally you don’t do this. There are two good reasons for that: if you unconditionally launch a container per request it becomes very easy to swamp your system with these background jobs to the point where none can progress; and the setup that would allow you to launch more Docker containers would also give you unlimited root-level access to the host, which you don’t want in a process that accepts network requests.
A better approach is to set up a job queue system. RabbitMQ is popular and open-source, but by no means the only option. When you receive a request that needs background work, you add a job to the queue and return immediately. Meanwhile, you have some number of worker processes which accept jobs from the queue and do the work.
This gives you several benefits. You control how much work can be done in parallel (by controlling the number of worker containers). If you need more capacity, you can set up a second server (or more), and its workers can all connect back to the same queue server without requiring a complex multi-host container setup. If your workers crash (or get OOM-killed), their jobs will be returned to the queue and can be picked up and retried by other workers. And if you decide Docker doesn't work for you, or that you need a different orchestrator (Nomad, Kubernetes), you can run this exact same setup without making any code changes, just by changing the deployment configuration.
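If it helps to make that concrete, a worker in this setup can be little more than a small consumer loop. The sketch below assumes RabbitMQ with the pika client, a queue named "simulations", and a JSON message carrying an "input_file" field; all of those names are placeholders for whatever your web API actually enqueues.

    # Minimal worker sketch: take one job at a time from a "simulations" queue
    # and run the batch script for it. Queue name, message format and script
    # arguments are placeholders.
    import json
    import subprocess

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="simulations", durable=True)
    channel.basic_qos(prefetch_count=1)  # one job per worker at a time

    def handle_job(ch, method, properties, body):
        params = json.loads(body)
        # Running each worker in its own container keeps run_simulation.py's
        # disk reads/writes from colliding between parallel jobs.
        subprocess.run(["python", "run_simulation.py", params["input_file"]], check=True)
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue="simulations", on_message_callback=handle_job)
    channel.start_consuming()

The web endpoint then only has to publish a small message onto the same queue and return immediately, and you scale by running more (or fewer) copies of this worker container.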

Related

Threading vs Containers in Orchestration

I am working on redesigning an existing Spring Boot application which is trying to automate a series of pieces of work that the company currently has done manually by its staff. It has a main application that works almost as an orchestration application, in the sense that it is the service that calls other applications to get the overall piece of work done. It invokes 7 sub-systems; 3 of these need to be invoked in some form of order and complete before the other 4 are invoked, but those 4 can be invoked asynchronously.
All of these sub-systems have now been moved to Spring microservices, and the application I'm working on must invoke them (some in order and some asynchronously). It is possible that my application will be called more than once at the same time, so I need to consider that multiple containers may be needed for each sub-system. I've implemented OpenFeign to invoke each of the microservices.
They also plan, in the not so distant future, to move this to AWS ECS/Fargate; for the time being, however, it is going to run in Linux VMs, with the containers created on the same private network for communication. I'm wondering if I should remove ThreadPoolTaskExecutor completely and just invoke a new container for each simultaneous request to my application. However, I've read that threads within a process are still faster and have less overhead than creating a process in a container, and considering there aren't going to be many containers invoked simultaneously, I'm perplexed as to the best approach.
Any advice would be appreciated.
Unless each request increases the application's memory consumption by at least an additional 1 GB (in which case the base memory of a new pod would be small by comparison), it is overkill to spin up a new pod for each request: that is roughly 200 MB of additional memory per request, and I don't see a benefit or a need for it.

How to make load balancing broker more fully asynchronous?

After reading through the ZMQ manual's section on the load balancing broker, I thought it would be great to implement in my own code. So I did, adding some additional touches to make it more responsive. One performance enhancement I wanted to add was the ability to dispatch multiple long-running jobs concurrently. I think I'm right about this (I could be wrong, though), so consider the following with respect to just the lbbroker code that's in the manual:
Two workers (clients) simultaneously request work, each with long-running jobs given to them (by a manager, or managers). In the current code, it's good that it's not round-robining the work to different recipients; it's selecting FCFS. But there's also a problem, in that a reply is first needed from the first worker who gets through before work can be dispensed to the second worker.
Basically, I want to dole work out as fast as there are workers ready to receive it, FCFS style and concurrently as well. At the same time, I don't want to lose the model I have where manager A gets through to worker B, and worker B's reply gets back to manager A. Keeping this, which is facilitated by the request-reply pattern, while at the same time allowing worker B to receive the only manager's second job while A may still be processing its job, is very much desired.
How can I most easily go about achieving this? Preferably by modifying my current lbbroker implementation, which isn't too different from lbbroker in the manual.
Thanks in advance.
As it turns out, my difficulties stemmed from an insufficiently specific understanding of the load balancing broker example; the broker does not have REP sockets, so it is not forced to wait for a reply between each work request/worker request. So the asynchrony issue does not exist at all.
Basically, a ROUTER socket carries an identity frame with each message, and by forwarding that identity along in a consistent manner you can avoid the issue entirely; the router is free to connect other manager/worker pairs while N workers run concurrently.
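For reference, here is a rough pyzmq sketch of that idea, keeping the frame layout of the manual's lbbroker example (identity, empty delimiter, payload) and REQ-socket workers that announce themselves with a READY message; the ports and frame handling are illustrative, not a drop-in replacement for your broker.

    # Sketch of an asynchronous load-balancing broker: free workers get jobs
    # FCFS, and each reply is routed back to the manager whose identity frame
    # was carried along with the request.
    import zmq

    context = zmq.Context.instance()
    frontend = context.socket(zmq.ROUTER)   # managers (clients) connect here
    backend = context.socket(zmq.ROUTER)    # workers connect here
    frontend.bind("tcp://*:5555")
    backend.bind("tcp://*:5556")

    workers = []                            # identities of workers ready for work
    poller = zmq.Poller()
    poller.register(backend, zmq.POLLIN)

    while True:
        if workers:
            poller.register(frontend, zmq.POLLIN)
        sockets = dict(poller.poll())

        if backend in sockets:
            # Either [worker_id, b"", b"READY"] or [worker_id, b"", client_id, b"", reply]
            frames = backend.recv_multipart()
            workers.append(frames[0])       # the worker is free again either way
            if frames[2] != b"READY":
                frontend.send_multipart([frames[2], b"", frames[4]])

        if frontend in sockets and workers:
            # [client_id, b"", request] -> hand it to the first free worker
            client_id, _, request = frontend.recv_multipart()
            backend.send_multipart([workers.pop(0), b"", client_id, b"", request])
            if not workers:
                poller.unregister(frontend)

Because the broker never blocks waiting on any single worker's reply, a second request is dispatched as soon as another worker has reported READY.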

Not sure if I should use celery

I have never used Celery before and I'm also a Django newbie, so I'm not sure whether I should use Celery in my project.
Brief description of my project:
There is an API for sending (via SSH) jobs to scientific computation clusters. The API is an abstraction to the different scientific job queue vendors out there. http://saga-project.github.io/saga-python/
My project is basically about building a web GUI for this API with Django.
So my concern is that, if I use Celery, I would have a queue on the local web server and another one in each of the remote clusters. I'm afraid this might complicate the implementation needlessly.
The API is still in development and some of the features aren't fully finished. There is a function for checking the state of the remote job execution (running, finished, etc.), but the callback support for state changes is not ready. This is where I think Celery might be appropriate: I would have one or several periodic task(s) monitoring the job states.
Any advice on how to proceed, please? No Celery at all? Celery for everything? Celery just for the job states?
I use Celery for a similar purpose and it works well. Basically I have one node running Celery workers that manage the entire cluster. These workers generate input data for the cluster nodes, assign tasks, and process the results for reporting or for generating dependent tasks.
Each cluster node runs a very small Python server which takes the db id of its assigned job. It then calls into the main (HTTP) server to request the data it needs and finally posts the data back when complete. In my case, the individual nodes don't need to message each other, and the run time of each task is very long (hours). This makes the delays introduced by central management and polling insignificant.
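For concreteness, the per-node helper described above can be as small as the following sketch; the URLs, payload shapes and run_job() are placeholders rather than my actual code.

    # Sketch of the small per-node server's job handler: given the db id of its
    # assigned job, fetch the input from the central HTTP server, run the long
    # computation, and post the result back.
    import requests

    CENTRAL = "http://central-server/api"   # placeholder for the main (HTTP) server

    def run_job(job_input):
        # Placeholder for the node's actual hours-long computation.
        return {"status": "done"}

    def handle_assignment(job_id):
        job_input = requests.get(f"{CENTRAL}/jobs/{job_id}/input", timeout=30).json()
        result = run_job(job_input)
        requests.post(f"{CENTRAL}/jobs/{job_id}/result", json=result, timeout=30)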
It would be possible to run a celery worker on each node taking tasks directly from the message queue. That approach is appealing. However, I have complex dependencies that are easier to work out from a centralized control. Also, I sometimes need to segment the cluster and centralized control makes this possible to do on the fly.
Celery isn't good at managing priorities or recovering lost tasks (more reasons for central control).
Thanks for calling my attention to SAGA. I'm looking at it now to see if it's useful to me.
Celery is useful for executing tasks that are too expensive to run in the handler of an HTTP request (i.e. a Django view). Consider making an HTTP request from a Django view to some remote web server and think about the latencies, possible timeouts, time for data transfer, and so on. It also makes sense to queue computation-intensive, long-running tasks for background execution with Celery.
We can only guess what the web GUI for the API should do. However, Celery fits very well for queuing requests to scientific computation clusters. It also lets you track the state of background tasks and their results.
I do not understand your concern about having many queues on different servers. You can have Django, the Celery broker (which implements the queues for tasks) and the worker processes (which consume the queues and execute Celery tasks) all on the same server.
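To make the "Celery just for the job states" option concrete, a periodic poll from Celery beat could look roughly like the sketch below; the Job model, check_remote_state() and the broker URL are hypothetical stand-ins for your Django models and the saga-python state check.

    # tasks.py -- sketch of polling remote job states with a periodic Celery task.
    from celery import Celery

    app = Celery("tasks", broker="amqp://localhost")

    # Ask Celery beat to run the poll every 60 seconds.
    app.conf.beat_schedule = {
        "poll-job-states": {"task": "tasks.poll_job_states", "schedule": 60.0},
    }

    def check_remote_state(job):
        # Placeholder for the saga-python call returning "RUNNING", "DONE", etc.
        ...

    @app.task
    def poll_job_states():
        from myproject.models import Job            # hypothetical Django model
        for job in Job.objects.exclude(state__in=["DONE", "FAILED"]):
            new_state = check_remote_state(job)
            if new_state != job.state:
                job.state = new_state
                job.save()

A worker plus the beat scheduler (e.g. celery -A tasks worker -B) is then the only extra process the web server needs.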

Django and Celery Confusion

After reading a lot of blog posts, I decided to switch from crontab to Celery for my medium-scale Django project. There are a few things I don't understand:
1- I'm planning to start a micro EC2 instance dedicated to RabbitMQ. Would this be sufficient for small-to-medium task loads (such as dispatching periodic e-mails to Amazon SES)?
2- Where does the computation of tasks happen: on the Django server or the RabbitMQ server (assuming RabbitMQ is on a separate server)?
3- When I need to grow my system and have 2 or more application servers behind a load balancer, do these Celery machines need to connect to the same RabbitMQ vhost? Assume the application servers are carbon copies, the tasks are the same, and everything is in sync at the database level.
I don't know the answer to this question, but you can definitely configure it to be suitable (e.g. use -c1 for a single-process worker to avoid using much memory, or the eventlet/gevent pools); see also the --autoscale option. The choice of broker transport also matters here; the ones that don't poll are more CPU efficient (RabbitMQ/Redis/Beanstalk).
Computing happens on the workers, the broker is only responsible for accepting, routing and delivering messages (and persisting messages to disk when necessary).
To add additional workers, these should indeed connect to the same virtual host. You would only use separate virtual hosts if you wanted applications to have separate message buses.
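In practice (for point 3) that just means both application servers configure their Celery apps with the same broker URL and vhost; the host, credentials and vhost below are placeholders.

    # Same broker URL on every application server, so all workers consume from
    # one shared set of queues.
    from celery import Celery

    app = Celery("myproject", broker="amqp://user:password@rabbitmq-host:5672/myvhost")

Each machine can then start its workers with something like celery -A myproject worker -c 1, matching the single-process suggestion above.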

How to serve CPU intensive webservice requests in the cloud?

Background: I'm running a webservice in which each request involves a fair amount of computations (up to 10 seconds on a quadcore machine).
Each request can be broken down to about 150 independent (and equally small) subtasks.
What I'm after: I'm looking for a hosting service that allows me to serve these kinds of requests efficiently in a scalable manner.
What I've considered: I've looked into Google App Engine and Rackspace.
It seems to me as if GAE is intended for simple requests requiring little resources to process. The problem with something like Rackspace is that I can't tell in advance how many vCPUs I may need (and even if I knew how big future spikes would be, I don't want to sit with, say, 40 servers idling the rest of the time).
Questions:
Would it be possible to use GAE in the following way:
For each request, split it up into 150 subtasks
Process all subtasks independently by doing 150 concurrent HTTP requests to the same webapp (but through a different method)
Collect the "subresults" and combine them into a response to the original request.
Is there any possibility that Map Reduce for GAE could be of any help?
Is there any other service better suited for this task?
Yes, this is possible. The usual way would be to use the Task Queue, possibly via the DeferredTask helper class.
Note that normal web requests (to frontend instances) are limited to 30s, so doing this synchronously is not guaranteed to succeed. Also note that instances are artificially limited to 10 parallel requests (if multithreading is enabled).
Yes, this is a job for map reduce. But note that map reduce is async - you give it tasks to do and it will be done sometime in the future.
Given the processing you need, you might want to look at GAE backends (they are long-running, support multithreading, and come in different sizes). If you need even more processing power, then you might want to look at Compute Engine.
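As a sketch of the Task Queue / DeferredTask approach mentioned above: the front-end request only fans the work out, and each subtask runs in its own task-queue task. process_subtask(), compute() and how the 150 partial results are aggregated (e.g. in the datastore) are placeholders, not a complete pipeline.

    # Fan a request out into independent subtasks on the App Engine task queue
    # via the deferred library, instead of 150 synchronous HTTP calls.
    from google.appengine.ext import deferred

    def compute(subtask):
        # Placeholder for the small, independent computation.
        ...

    def store_partial_result(request_id, index, result):
        # Placeholder: persist the partial result (e.g. a datastore write)
        # so it can be collected later.
        ...

    def fan_out(request_id, subtasks):
        for index, subtask in enumerate(subtasks):
            deferred.defer(process_subtask, request_id, index, subtask)

    def process_subtask(request_id, index, subtask):
        result = compute(subtask)
        store_partial_result(request_id, index, result)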
Unless all of these 150 subtasks are read-only activities, trying to run them all in a single thread is just not safe. Web requests are unreliable - people can cancel, hit refresh if it takes too long, close windows in the middle, or just time out due to network issues. The background HTTP requests, likewise, can have a whole mess of problems. The standard solution is to have your front-end code simply build a list of things that need to be done, so it can get back to the user quickly, and have a back-end 'worker' process handle the (potentially unreliable) subtasks. Depending on what your application is doing, you might bounce the user to a "working" screen (like searching for airfare) where they can safely wait for the results to come up, or it might just be stuffed away as a "pending" job (like ordering something from Amazon).
There are countless ways to handle this basic workflow. If you stick with Google App Engine, they have a "task queue" as part of the platform, providing a simple mechanism for creating and dispatching background tasks. If you go with Rackspace, their cloud offering is less of a unified platform, so you'll have to either roll your own queue or find one to plug into your setup.