Clojure background process - clojure

Let's say I'm making a crawler/scraper in Clojure, and I want it to run periodically (at predefined times of day).
I want to define my jobs with Quartz / Quartzite (at least that seems to be the most robust solution).
Now, to create a daemon process with Clojure, I tried the lein-daemon plugin, but that seems to be a pretty risky endeavour, since the plugin appears more than a bit buggy (or I am making some heavy mistakes).
What is the best way for me to create this service?
I want it to be able to restart itself upon system reboot, but I want to use Clojure (Quartzite) for my jobs (loading them from a database, etc.).
What is a robust, but Clojure-y, way to create a long-running daemon process?
EDIT:
The deployment environment will be something like a single VPS or a dedicated server.
There may be a dozen jobs loading their parameters from some data store, running anywhere from 1 to 8 times a day (or maybe more).

The correct process depends a lot on your environment. I work on deployment systems for complex web/mobile infrastructure with many long-running Clojure processes. For this we use Pallet to create instances with the code checked out and configured, and then we have a function that generates init scripts to start the services at boot. This process is appropriate for environments where you need a repeatable build on a cloud provider, so it may be too heavy for your situation.
If you are looking for simple recurring jobs, you may want to look into Immutant, which is an application server for Clojure with good support for recurring jobs.

Related

Do schedulers slow down your web application?

I'm building a jewellery e-commerce store, and a feature I'm building is incorporating current market gold prices into the final product price. I intend to update the gold price every 3 days by making calls to an API using a scheduler, so the full process is automated and requires minimal interaction with the system.
My concern is this: will a scheduler, with one task executed every 72 hours, slow down my server (using a client-server model) and affect its performance?
The server-side application is built using Django, Django REST framework, and PostgreSQL.
The scheduler I have in mind is Advanced Python Scheduler.
As far as I can see from the Advanced Python Scheduler docs, it does not provide a separate process to run the scheduled tasks; that is left up to you to figure out.
From the docs, they recommend a BackgroundScheduler, which runs in a separate thread.
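For illustration, a minimal sketch of that approach, assuming a hypothetical update_gold_price job (this is not code from the question, just the documented BackgroundScheduler usage):

    from apscheduler.schedulers.background import BackgroundScheduler

    def update_gold_price():
        # Placeholder body: fetch the latest price from the API and store it.
        print("updating gold price...")

    scheduler = BackgroundScheduler()
    scheduler.add_job(update_gold_price, "interval", days=3)
    scheduler.start()  # the job runs in a thread inside the same process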
Now there are multiple issues which could arise:
If you're running multiple Django instances (using gunicorn or uwsgi), the APScheduler will run in each of those processes. This is a non-trivial problem to solve unless APScheduler has considered it (you will have to check the docs).
BackgroundScheduler runs in a thread, but Python is limited by the GIL. So if your background tasks are CPU-intensive, your Django process will get slower at processing incoming requests.
Regardless of thread or not, if your background job is CPU intensive + lasts a long time, it can affect your server performance.
APScheduler seems like a much lower-level library, and my recommendation would be to use something simpler:
Simply use system cron jobs to run every 3 days: create a Django management command, and use cron to execute it (see the sketch after this answer).
Use Django-supported libraries like Celery, rq/rq-scheduler/django-rq, or django-background-tasks.
I think it would be wise to take a look at https://github.com/arteria/django-background-tasks as it is the simplest of all with the least amount of setup required. Once you get a bit familiar with this you can weigh the pros & cons on what is appropriate for your use case.
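For a feel of how little setup django-background-tasks needs, here is a rough sketch (the task name and schedule are made up); the library's process_tasks management command picks the queued tasks up:

    from background_task import background

    @background(schedule=60)  # run roughly 60 seconds after being queued
    def update_gold_price():
        # Placeholder body: fetch the price and store it.
        print("updating gold price...")

    # Calling update_gold_price() only queues a task row; a worker started
    # with "python manage.py process_tasks" actually executes it.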
Once again, your server performance depends on what your background task is doing and how long it lasts.
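To make the cron option from above concrete, here is a rough sketch of a Django management command; the command name, file path, and API endpoint are made up:

    # myapp/management/commands/update_gold_price.py  (hypothetical location)
    import requests
    from django.core.management.base import BaseCommand

    class Command(BaseCommand):
        help = "Fetch the current gold price so product prices stay up to date."

        def handle(self, *args, **options):
            # Placeholder endpoint; swap in the real pricing API.
            resp = requests.get("https://example.com/api/gold-price", timeout=10)
            resp.raise_for_status()
            price = resp.json()["price"]
            # Persisting the value to a model is left out of this sketch.
            self.stdout.write(self.style.SUCCESS(f"Current gold price: {price}"))

A crontab entry along the lines of 0 2 */3 * * /path/to/venv/bin/python /path/to/manage.py update_gold_price would then run it roughly every third day.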

Use Azure Batch for non-parallel work? Is there a better option?

I have a scenario where our Azure App Service needs to run a job every night. The job cannot scale to multiple machines: it involves downloading a large data file and doing special processing on it (which only takes a couple of minutes). Special software will need to be installed as well. A lot of memory will be needed on the machine for the computation, therefore I was thinking of one of the Ev-series machines. For these reasons, I cannot run the job as a web job on the Azure App Service, and I need to delegate it elsewhere.
Anyway, I have experience with Azure Batch, so at first I was thinking of Azure Batch. But I am not sure this makes sense for my scenario because the work cannot scale to multiple machines. Does it make sense to have a pool with a single node, and a single VM on the node? When I need to do the work, an Azure web job enqueues the job, and the pool automatically sizes from 0 to 1?
Are there better options out there? I was looking to see if there are any .NET libraries to spin up a single VM, start executing work on it, and then disable the VM when done, but I couldn't find anything.
Thanks!
For Azure Batch, the scenario of a single VM in a single pool is valid. However, Azure Container Instances or Azure Functions would appear to be a better fit, if you can provision the appropriate VM sizes for your workload.
As you suggested, you can combine Azure Functions/Web Jobs to enqueue the work to an Azure Batch job. If you have autoscaling or an autopool set on the Azure Batch pool or job, respectively, then the work will be processed and the compute resources will be deallocated afterwards (assuming you have the correct settings in place).

django deployment with java and c++

I have created a Django app that contains C++ for some of the views as well as a Java library. How would I deploy this app? What kind of hosting service allows for multiple languages? I have looked at EC2, GAE, and several platforms (like Heroku) but I can't seem to find a definitive solution.
I have never deployed anything to the web so a simple explanation would be much appreciated.
PaaS stuff is probably not your best bet. If you want the scalability and associated buzzwords (99.9999999999% availability because the servers are hosted in a parallel dimension without electrical storms, power outages, hurricanes, earthquakes, or nuclear holocausts) that come with hosting your application on a huge web company's platform, check out IaaS (Infrastructure as a Service) systems like Google's Compute Engine or AWS. With these you just get a virtual server (or servers) running your Linux distro of choice, and you can install and run whatever you please on them without being constrained to a specific platform like App Engine or Heroku (where you basically have to write your app specifically to run on that platform). If you plan on consuming a ton of bandwidth/resources from the get-go, you will almost certainly get a better deal using a dedicated server (or servers) from a small company.
Interested in what specifically you are executing C++ for in a Django view. Image/video processing?
Well. Deployment is not really something where a simple explanation helps much.
First I would check what the requirements on the operating system are (compilers, dependencies, …). That may narrow down the options quickly.
I guess that with a setup containing C++ and Java artifacts, the usual PaaS offerings (GAE, Heroku, …) will not be sufficient, because they define the stack. And a mixture of Python/C++/Java is rather uncommon, I'd say.
Choosing an IaaS offering (EC2, …) may be an option. There you can run your whole self-defined stack and have the possibility of easier scaling.
Hosting the application on your own server(s) is also always possible. Check your data protection regulations to find out if it's not even a requirement.
There are a lot of ways to get the Django application to run. The Django documentation has some information about deployment. If you have certain special requirements, uwsgi may be a good application server.
You may also want a web server in front of the application. Possibilities range from using uwsgi's built-in http server or using e.g. Nginx with uwsgi.
All in all, every component of the whole "deployment" has hundreds of bells and whistles, and it's not easy to give advice without knowing the specific requirements and properties of the system itself. You'll also probably need a database you have to deploy.
But before deploying it to the web, it's also important to have a solid build process to assemble all the parts, and not only on the development machine. With three languages involved, this should be the first step to solve. If it deploys easily and automagically in a development environment, moving it to a server is easier.

Update a table after x minutes/hours

I have these tables
exam(id, start_date, deadline, duration)
exam_answer(id, exam_id, answer, time_started, status)
exam_answer.status possible values are: 0 = not yet started, 1 = started, 2 = submitted
Is there a way to update exam_answer.status once now - exam_answer.time_started is greater than exam.duration? Or once it is already past the deadline?
I'll also mention this in case it helps: I'm building this for a Django project.
Django applications, like any other WSGI/web application, are only meant to handle request-response flows. If there aren't any requests, there is no activity and such changes will not happen.
You can write a custom management command that's executed periodically by a cron job, but you run the risk of displaying incorrect data between runs. Alternatively, you have elegant means at your disposal to compute the statuses before any related views start their processing, but this might potentially be a wasteful use of resources.
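For example, a sketch of the "compute the status before the view uses it" idea; the model and field names mirror the tables in the question, but are assumptions about the actual Django models (duration is assumed to be a DurationField):

    from django.utils import timezone

    NOT_STARTED, STARTED, SUBMITTED = 0, 1, 2

    def effective_status(answer):
        """Return the status an exam_answer row should have right now."""
        if answer.status == STARTED:
            now = timezone.now()
            out_of_time = now - answer.time_started > answer.exam.duration
            past_deadline = now > answer.exam.deadline
            if out_of_time or past_deadline:
                return SUBMITTED
        return answer.status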
Your best bet might be to integrate a task scheduler with your application, such as Celery. Do not be discouraged because Celery seemingly runs in a concurrent multiprocess environment across several machines; the service can be configured to run in a single thread, and it provides a clean interface for scheduling tasks that have to run at some exact point in the future.
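As a rough sketch of that approach (the task, app, and model names are hypothetical), you could schedule a Celery task for the exact cutoff at the moment the student starts the exam:

    from celery import shared_task

    @shared_task
    def close_exam_answer(answer_id):
        from myapp.models import ExamAnswer  # hypothetical app/model
        # Only flip rows that are still "started" (1) to "submitted" (2).
        ExamAnswer.objects.filter(pk=answer_id, status=1).update(status=2)

    # When the exam is started:
    # close_exam_answer.apply_async(
    #     args=[answer.pk],
    #     eta=answer.time_started + answer.exam.duration,
    # )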

AWS instances patch updates for multiple instances

I have about 40 Linux instances running in AWS across multiple regions under VPC. Now I need to patch the Linux kernel and run updates for Apache, PHP, and MySQL. It's quite hard to do this by logging in to each server. How can I automate this process or easily run updates on all the servers at once?
For this situation you might be more or less forced to do this by hand, but in the future a design makeover would be in order to make your life easier when dealing with stuff like this. I would recommend that you look into Puppet or Chef, as they enable you to script your infrastructure; when you have updates/changes that need to occur, you apply them to the systems in question or just rebuild the system over again.
For this scenario, if you were to use Chef, you would just update your scripts and tell Chef to update all of your systems.
Granted, I know that this bit of information doesn't help your current predicament, but it's a recommendation for future environments to alleviate issues like these.
Check out AWS Systems Manager (SSM).
This is a nice walkthrough of running patches with SSM. SSM allows you to:
Choose maintenance windows
Whitelist or blacklist patches
Determine a patch acceptance level (e.g. High vs Critical)
Delay patch acceptance by a specified period (want to wait a day in case a patch turns out to be bad?)
Apply patches to specifically tagged instances (you could run multiple patch groups if you like)
It's a bit of trouble to set up though.
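If you would rather drive it from code than from the console, a rough boto3 sketch (the region, tag value, and use of the stock AWS-RunPatchBaseline document are assumptions about your setup) looks something like this:

    import boto3

    ssm = boto3.client("ssm", region_name="us-east-1")  # repeat per region

    # Run a patch install on every instance tagged with Patch Group = webservers.
    response = ssm.send_command(
        Targets=[{"Key": "tag:Patch Group", "Values": ["webservers"]}],
        DocumentName="AWS-RunPatchBaseline",
        Parameters={"Operation": ["Install"]},
    )
    print(response["Command"]["CommandId"])  # poll this id to track progress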
Option 2
Depending on your needs you might simply want to install and configure yum-cron (on Ubuntu you'd use unattended-upgrades). I've been doing that for years and I have literally never seen a security patch cause a breaking regression. I'd far rather have every host automatically patched and deal with the fallout if it ever breaks than have unpatched hosts. One note is that I disable automatic reboots, so kernel updates don't take effect till I reboot.
Option 3
If you just want to do something on all hosts "fire and forget" style, you could explore dsh (distributed ssh; you can probably guess everything you need to know about it ;-) ). I've used that with some success. I've also used Ansible and Chef (both require a bit of setup and a learning curve).
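If you do go the plain-ssh route without dsh, even a small Python loop gets you most of the way there; the host list and the update command are placeholders:

    import subprocess

    hosts = ["web-1.example.com", "web-2.example.com"]  # your ~40 instances

    for host in hosts:
        # Fire and forget: keep going even if one host fails.
        subprocess.run(["ssh", host, "sudo yum -y update"], check=False)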