How to make mpirun run only from PBS (restriction)

As an administrator of an HPC cluster, I want to prevent users from running mpirun directly from the terminal, while still letting them use the mpirun command from a PBS script. Is there any way to do that?
Thanks...

Different admins take different approaches on this:
Use PAM. Configuring PAM makes it so that only certain users are allowed to ssh to a host.
Use something like the reaver script from pbstools. This script can be configured to either kill processes that can't possibly belong to active jobs or simply notify you (a bare-bones sketch of the idea follows after this list).
Other admins just manually catch users who do it (by noticing that the load is too high on a node, jobs are running slower, etc.) and then punish them severely so they don't do it again.
There are probably other methods different people use, but these are the ones that I come across most frequently.
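For illustration, here is a bare-bones sketch of the reaver idea in Python. This is not the actual pbstools script; the psutil library is an assumption, and how you determine which users have active jobs on the node is site-specific, so it is stubbed out here:

import psutil

SYSTEM_USERS = {"root", "daemon"}  # accounts that are always allowed

def users_with_active_jobs():
    # Stub: on a real compute node you might parse qstat output or the
    # pbs_mom job files to learn whose processes belong on this node.
    return {"alice", "bob"}

def sweep(kill=False):
    allowed = users_with_active_jobs() | SYSTEM_USERS
    for proc in psutil.process_iter(["pid", "name", "username"]):
        if proc.info["username"] not in allowed:
            # Report-only by default, like reaver's notify mode.
            print("stray process:", proc.info)
            if kill:
                proc.kill()

sweep(kill=False)

Run periodically from cron on each compute node, this reports (or kills) processes owned by users who have no active job there.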

Related

Show management commands in admin site

Maybe I am just naïve in my expectations, but I cannot find any simple configuration or app that would allow running a Django project's management commands from the admin interface.
Surely allowing commands to be executed remotely without having to shell into the machine is a pretty common thing?
Do you always have to implement it yourself? If so, how do you add to the existing admin site without replacing it entirely?
Management commands are fundamentally different things than admin views. Management commands take arguments on the command line, can ask for user input interactively, and so on. They can also run in situations where the web app can't run successfully (for example, running migrations). They can be long, expensive tasks, or things that you want to run outside of normal user/admin interaction, such as scheduled tasks from a cron job.
In a way, management commands operate at a lower level than the admin app.
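That said, if you really want to expose a specific, quick, non-interactive command in the admin, a minimal sketch (assuming Django's call_command, django.contrib.sessions installed, and a hypothetical Report model in yourapp) could hang a custom view off an existing ModelAdmin instead of replacing the admin site:

from django.contrib import admin
from django.core.management import call_command
from django.http import HttpResponse
from django.urls import path

from yourapp.models import Report  # hypothetical model

@admin.register(Report)
class ReportAdmin(admin.ModelAdmin):
    def get_urls(self):
        custom = [
            path('run-clearsessions/',
                 self.admin_site.admin_view(self.run_clearsessions)),
        ]
        return custom + super().get_urls()

    def run_clearsessions(self, request):
        # call_command runs in-process: a long-running or interactive
        # command would block this request, which is exactly the
        # mismatch described above.
        call_command('clearsessions')
        return HttpResponse('clearsessions finished')

This only makes sense for short, non-interactive commands, for the reasons given above.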

Jenkins start build from different users

I need to launch several builds on Jenkins.
But I also need to run them as different users, for example test1 and test2.
Is there any plugin for this, or some config file?
I need this because test1 has special permissions for some directories,
and test2 has special permissions for some other directories.
Or can I change the Jenkins config somehow so that a build behaves as if it were launched by the test1 user, with all of test1's permissions?
I am not aware of any plugin to change the user that Jenkins runs under. However, depending on the OS, you have different options. Under Windows, you can use the runas command, and under Linux there are su and sudo. ssh might be an option too, even if you connect to the box that you are on right now. You will have to see to what extent you need to use free-style projects instead of specialized project types.
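As a sketch of the su/sudo route (everything here is an assumption: Linux, a passwordless sudoers entry for the jenkins user, and hypothetical test scripts), a single free-style build step could invoke something like:

import subprocess

# Hypothetical mapping of users to the test suites they may access.
TESTS = {
    'test1': ['/opt/tests/run_dir_a_tests.sh'],
    'test2': ['/opt/tests/run_dir_b_tests.sh'],
}

for user, cmd in TESTS.items():
    # sudo -u <user> runs the command with that user's permissions;
    # check=True makes the build step fail if any suite fails.
    subprocess.run(['sudo', '-u', user, '--'] + cmd, check=True)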
However, do you really need to have the permissions set the way they are, or can you grant test1 the special permissions that test2 has? This obviously does not work if testing the permissions is the main purpose of the test.
Alternatively, you can use the node functionality that Jenkins provides. On one and the same server you can run not only the main Jenkins instance, but also nodes (or slaves). These slaves can run under different users. This gives you the overhead that nodes come with, but it adds a very simple and clean way to switch users. And nobody prevents you from having more than one slave running on the same server, or from having your slaves run on different servers.

What are the advantages of celerybeat over cron?

I see many people preferring celerybeat over cron jobs for periodic tasks. The celerybeat documentation shows how to use it, but not why (or when) I should prefer it over cron jobs.
http://docs.celeryproject.org/en/latest/userguide/periodic-tasks.html#introduction
I have used both and have come to the conclusion that beat gives you better control than cron.
You can wire it up so that your control is via the Django admin instead of sshing in and changing the crontab. There is also an implicit portability when using beat, meaning you can move it from machine to machine by way of configuration instead of a login.
Of course, there are disadvantages as well, but they are few. We used to use pid files to control the singleton aspect of a job, but now we use a generic database semaphore table (other people have used memcache, but I just don't feel comfortable with that).
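To make the "control via configuration" point concrete, a minimal beat schedule (a sketch assuming Celery 4+, a Redis broker, and a hypothetical proj.tasks.send_daily_report task) looks like this:

from celery import Celery
from celery.schedules import crontab

app = Celery('proj', broker='redis://localhost:6379/0')

# The equivalent of a crontab line, but versioned and deployed with
# the project instead of edited on each machine.
app.conf.beat_schedule = {
    'send-daily-report': {
        'task': 'proj.tasks.send_daily_report',
        'schedule': crontab(hour=7, minute=30),
    },
}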

How to protect your software from being disabled

We have this client application running on Windows. The core of it consists of two NT services. The users, mostly travelling laptop users, have admin rights. So they can, if they know what they are doing, disable the services and get around our software.
What is the "standard" approach to solving this issue?
Any thoughts? I have a "hidden" application that runs at startup and checks the client status. If the services are disabled, it re-enables them, schedules itself to run again in an hour, and does the same thing, continuously... If I can hide this application well enough, that should work... Not the prettiest approach...
Other ideas?
Thanks
Reza
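For reference, the watchdog described above could be as simple as this sketch (assuming Windows, Python, and a hypothetical service name 'AcmeCoreSvc'; it shells out to the built-in sc tool):

import subprocess
import time

SERVICE = 'AcmeCoreSvc'  # hypothetical service name

def is_running(name):
    # 'sc query' prints a STATE line containing RUNNING for a live service.
    out = subprocess.run(['sc', 'query', name],
                         capture_output=True, text=True).stdout
    return 'RUNNING' in out

while True:
    if not is_running(SERVICE):
        # If the service was set to Disabled, re-enable it first.
        subprocess.run(['sc', 'config', SERVICE, 'start=', 'auto'])
        subprocess.run(['sc', 'start', SERVICE])
    time.sleep(3600)  # re-check every hour, as described above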
Let them.
Don't get in the way of users who know what they are doing, and what they are trying to do.
Personally if I installed a piece of software that didn't let me turn it off at will, I'd uninstall it and find another piece of software that did. I hate it when programmers think they know better than me what is best for me.
EDIT:
I have reformatted my hard drive to get rid of such applications (rootkits, for example).
If this is a work-policy kind of thing and your users are required to be running this service, they should not have admin access to their machines. Admin users can do anything to the box.
(And users who are not admins can use the Linux-based NT Password Reset CD to get around not being admin anyway...)
What is "standard" approach to solving this issue?
The standard approach is NOT to do things behind the users back.
If your service should be on then warn the user when they turn it off.
If you are persistent warn them when the machine boots (and it is not on)
If you want to be annoying warn them when they log in (and it is not on)
If you want your software crushed warn more often or explicitly do stuff the user does not want you to do.
Now if you are the IT department of your company.
Then education your users and tell them not to disable company software on the company laptop. Doing so should result in disciplinary action. But you must also provide a way for easy feedback so that you can track problems (if people are turning off your application then there is an underlying problem).
The best approach is to flood every single place from which an application can be started with your "hidden" application. Even if your users find some places, they will miss others. You need to restore all places regularly (every five minutes, for example, so as not to give users enough time to clean their computer). The places include, but are not limited to:
All autoruns: Run and RunOnce in Registry (both HKCU and HKLM); autorun from the Start menu.
Winlogon scripts.
Task scheduler.
Explorer extensions: shell extensions, toolbars etc.
Replace the command in HKCR\exefile\shell\open\command so that it first starts your application and then executes the original command. You can do this with .bat and .cmd files, etc.
A lot of other places. You can use Sysinternals Autoruns to get a list of the most common ones (be sure to check Options > Include empty locations).
When you add your application to these autorun locations, use cryptic system-like names such as "svchost.exe". Put your application into system folders. Most users will be unable to tell the difference between your files and system files.
You can try replacing the executable files of MS Word and other common applications with your own. When one is run, check that your main application is running, then run the original application (copy the originals before replacing them). Be sure to extract the icons from the applications you replace and use them.
You can use multiple applications/services: if one is stopped, another one notices and starts it again, so they protect each other.
With most standard services, you can configure much of what you have described through the service recovery settings and by disabling the stop option.
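As a sketch (hypothetical service name, using Windows' built-in sc tool), those recovery settings can be scripted. Note that recovery actions fire when the service fails, not when a user deliberately stops it, which is why disabling the stop option is a separate matter:

import subprocess

# Restart the service automatically after each failure and reset the
# failure counter daily. This covers crashes, not a deliberate stop.
subprocess.run([
    'sc', 'failure', 'AcmeCoreSvc',
    'reset=', '86400',
    'actions=', 'restart/60000/restart/60000/restart/60000',
], check=True)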
So what makes you want stricter control over your service?
For example, you're making a (security?) 'service' that you want to be considered as important as Windows allowing the user to access a desktop or run a remote procedure.
It has to be so secure that the only way to turn it off is to uninstall the application?
If you were to stop this service, you would want winlogon to reset and return to the login page, or the whole PC to reboot.
See corporate desktop management tools (like Novell ZENworks).

How do you deploy cron jobs to production?

How do people deploy/version-control cron jobs to production? I'm more curious about the conventions/standards people use than about any particular solution, but I happen to be using git for revision control, and the cron job is running a Python/Django script.
If you are using Fabric for deployment, you could add a function that edits your crontab.
from fabric.api import run

def add_cronjob():
    # Dump the remote crontab, append an entry, and load it back.
    run('crontab -l > /tmp/crondump')
    run('echo "@daily /path/to/dostuff.sh 2> /dev/null" >> /tmp/crondump')
    run('crontab /tmp/crondump')
This would append a job to your crontab (disclaimer: totally untested and not very idempotent).
Save the crontab to a tempfile.
Append a line to the tmpfile.
Write the crontab back.
This is probably not exactly what you want to do, but along those lines you could think about checking the crontab into git and overwriting it on the server with every deploy (if there's a dedicated user for your project).
Using Fabric, I prefer to keep a pristine version of my crontab locally; that way I know exactly what is on production and can easily edit entries in addition to adding them.
The fabric script I use looks something like this (some code redacted e.g. taking care of backups):
from fabric.api import put, sudo

def deploy_crontab():
    # Upload the locally kept crontab and install it on the server.
    put('crontab', '/tmp/crontab')
    sudo('crontab < /tmp/crontab')
You can also take a look at:
http://django-fab-deploy.readthedocs.org/en/0.7.5/_modules/fab_deploy/crontab.html#crontab_update
The django-fab-deploy module has a number of convenient scripts, including crontab_set and crontab_update.
You can probably use something like CFEngine/Chef for deployment (it can deploy everything, including cron jobs).
However, if you are asking this question, it could be that you have many production servers, each running a large number of scheduled jobs.
If this is the case, you probably want a tool that can not only deploy jobs, but also track success/failure, allow you to easily look at logs from the last run and run statistics, and allow you to easily change the schedule for many jobs and servers at once (due to planned maintenance, etc.).
I use a commercial tool called "UC4". I don't really recommend it, so I hope you can find a better program that can solve the same problem. I'm just saying that administration of jobs doesn't end when you deploy them.
There are really three options for manually deploying a crontab if you cannot connect your system to a configuration management system like cfengine/puppet.
You could simply use crontab -u user -e, but you run the risk of someone making an error in their copy/paste.
You could also copy the file into the cron directory, but there is no syntax checking for the file, and on Linux you must run touch /var/spool/cron in order for crond to pick up the changes.
Note: everyone will forget the touch command at some point.
In my experience, the following is my favorite manual way of deploying a crontab:
diff /var/spool/cron/<user> /var/tmp/<user>.new
crontab -u <user> /var/tmp/<user>.new
I think the method mentioned above is the best because you don't run the risk of copy/paste errors, which helps you maintain consistency with your version-controlled file. It performs syntax checking of the cron tasks inside the file, and you won't need to run the touch command as you would if you were to simply copy the file.
Having your project under version control, including your crontab.txt, is what I prefer. Then, with Fabric, it is as simple as this:
from fabric.api import task, run

@task
def crontab():
    run('crontab deployment/crontab.txt')
This will install the contents of deployment/crontab.txt to the crontab of the user you connect to the server as. If you don't have your complete project on the server, you'd want to put the crontab file first.
If you're using Django, take a look at the jobs system from django-command-extensions.
The benefits are that you can keep your jobs inside your project structure, with version control, write everything in Python and configure crontab only once.
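A daily job in that system is just a class in your app. Here is a sketch (the package is distributed as django-extensions nowadays, and the file would live at, say, yourapp/jobs/daily/cleanup.py):

from django_extensions.management.jobs import DailyJob

class Job(DailyJob):
    help = 'Hypothetical daily cleanup job'

    def execute(self):
        # Invoked by: python manage.py runjobs daily
        # so the crontab needs only one line per schedule granularity.
        pass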
I use Buildout to manage my Django projects. With Buildout, I use z3c.recipe.usercrontab to install cron jobs in deploy or update.
You said:
I'm more curious about conventions/standards people use than any particular solution
But, to be fair, the particular solution will depend on your environment, and there is no universal elegant silver bullet. Given that you happen to be using Python/Django, I recommend Celery. It is an asynchronous task queue for Python which integrates nicely with Django. And, on top of the features it provides as an asynchronous task queue, it also has specific features for periodic tasks.
I have personally used the django-celery-beat integration; it integrates perfectly with Django settings and behaves correctly in distributed environments. If your periodic tasks are related to Django stuff, I strongly recommend taking a look at Celery. I started using it only for certain asynchronous mailing and ended up using it for a lot of asynchronous tasks plus periodic sanity checks and other web application maintenance stuff.
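As a sketch of what that integration looks like (assuming django_celery_beat is installed and migrated; the task path is hypothetical), periodic tasks live in the database and are editable from the Django admin:

from django_celery_beat.models import CrontabSchedule, PeriodicTask

schedule, _ = CrontabSchedule.objects.get_or_create(
    minute='30', hour='7',
    day_of_week='*', day_of_month='*', month_of_year='*',
)
PeriodicTask.objects.get_or_create(
    crontab=schedule,
    name='Daily sanity check',               # unique display name
    task='proj.tasks.daily_sanity_check',    # hypothetical task path
)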