Automated testing of privileged operations - unit-testing

How do you unit/integration test code that requires a different privilege level than exists in your continuous integration environment?
In my non-root, CCRB-driven build environment, I've got some utility functions that assume privileges that don't hold in my automated build environment: either root privileges or special accounts and groups. (For example, one function changes UID/GID and supplementary groups to a specified account, changes root and current working directory, and divorces from any controlling terminal.)
We could run the tests by hand, of course, but then we might forget to run them.
How have others tackled this issue?

I would try to factor out the security management code behind a mockable interface, so that in unit tests I can provide fake privileges however I want.
This way it would be possible to test both that barring the required privileges the function fails, and that with the privileges granted it does what it is supposed to do.
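As a rough sketch of that idea (in Python, since the question doesn't say which language is involved; PrivilegeOps and drop_to_account are names invented here for illustration), the privileged calls sit behind a thin object that the real code uses directly and the tests replace with a mock:

    import os
    from unittest import mock

    class PrivilegeOps:
        """Thin wrapper around the privileged calls; trivial to fake in tests."""
        def setgroups(self, groups): os.setgroups(groups)
        def setgid(self, gid): os.setgid(gid)
        def setuid(self, uid): os.setuid(uid)
        def chroot(self, path): os.chroot(path)
        def chdir(self, path): os.chdir(path)
        def setsid(self): os.setsid()

    def drop_to_account(uid, gid, groups, root, ops=PrivilegeOps()):
        """Switch to the given account, chroot, and detach from the controlling terminal."""
        ops.chroot(root)        # needs root, so do it before dropping privileges
        ops.chdir("/")
        ops.setgroups(groups)   # supplementary groups while we can still change them
        ops.setgid(gid)
        ops.setuid(uid)         # give up root last
        ops.setsid()            # divorce from any controlling terminal

    # In the unprivileged CI build, the same logic runs against a mock:
    def test_drop_to_account_switches_identity():
        ops = mock.Mock(spec=PrivilegeOps)
        drop_to_account(uid=1001, gid=1001, groups=[1001], root="/srv/jail", ops=ops)
        ops.chroot.assert_called_once_with("/srv/jail")
        ops.setuid.assert_called_once_with(1001)

A companion test can make the mock's setuid raise PermissionError to cover the "barring the required privileges" case.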
Without more concrete details it is hard to say more.

Detect System reboot and start an App

We have an exe which checks the contents of a folder and then kicks off a Windows service to do some processing on the files in that folder.
So we made this exe part of the system startup programs, so it runs every time the system reboots/starts.
Now the user is very annoyed because he gets a UAC pop-up every time he restarts, but we need admin rights for this exe as it kicks off a Windows service. So I researched and found a couple of solutions for this problem:
This and This
But I couldn't decide which is better and has fewer security implications.
Another potential solution would be for the exe itself to detect system startup and, only if there is any content in the target folder, prompt for UAC and kick off the Windows service; otherwise, don't run the exe at all. I am not sure how to do this in C++. Any pointers would be helpful. If there is a better solution, that's always welcome.
You probably want to use Task Scheduler here.
Just create a task as part of the install process, with "When the computer starts" as the trigger, and set the "Run with highest privileges" security option.
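For illustration only (the task name and exe path are placeholders, this assumes it runs from the already-elevated installer, and it's shown as a Python sketch rather than C++ purely for brevity), registering such a task with schtasks could look like:

    import subprocess

    # Create a task that runs the exe at boot, elevated, with no UAC prompt at run time.
    subprocess.run([
        "schtasks", "/Create",
        "/TN", "FolderWatcher",                        # placeholder task name
        "/TR", r"C:\Program Files\MyApp\watcher.exe",  # placeholder path to your exe
        "/SC", "ONSTART",                              # trigger: "When the computer starts"
        "/RL", "HIGHEST",                              # "Run with highest privileges"
        "/F",                                          # overwrite the task if it already exists
    ], check=True)

The same settings can also be made through the Task Scheduler UI or its COM API from C++.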
The problem is that you're mixing up the system and user sessions.
If the processing of those files is done on behalf of a user, it probably should not be done by a service. What if two users wanted their files processed? What security context should the service use for that? And obviously you shouldn't need Administrator rights to process some user files.
If the service is performing some system-level task, it shouldn't depend on a user, and in fact running at startup suggests you want this mode (user applets start at login, not after reboot). The main problem in your design therefore seems to be that you try to run an app (with a UI) at the wrong moment, and it requires far too many permissions (causing UAC). Redesign the service so that it does all the tasks which require admin permissions, and when installing the service set it to start automatically. This still requires UAC at installation, but that is when UAC is expected.
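As a small sketch of that last step (the service name and binary path are placeholders, and Python is used only to keep the examples in one language), the elevated installer could register the service to start automatically:

    import subprocess

    # Note: sc.exe requires the space after "binPath=" and "start=".
    subprocess.run([
        "sc", "create", "FolderService",                          # placeholder service name
        "binPath=", r"C:\Program Files\MyApp\FolderService.exe",  # placeholder path
        "start=", "auto",                                         # start at boot, no user logon needed
    ], check=True)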

Rails app, Continuous Integration/Deployment Environments

When developing, my team obviously uses development as our environment.
When we run automated tests, we use testing.
We also have staging and production environments, respectively used for our testers to check out features and the final "live" product.
We're trying to set up an internal CI server to run our automated tests against and to eventually assist with automated deployments.
Since the CI server is really just running automated tests, some think it should run in the testing environment. However, for the CI server to actually be useful, my thought is that it needs to run in production mode against as close a mirror of the actual production environment as possible (without touching the production DB, obviously).
Is there an accepted environment that a CI server should be executed under? The production environment (with a different DB) seems the only logical answer to me, but I may be missing something...
Running tests on the PROD environment, as you said,
seems the only logical answer
but that is not quite true. There is a risk that your tests could seriously damage the actual environment/application to the point where you'd be facing a recovery effort. After all, the dark side of testing is to show that your software has more than minor bugs and is not working as expected.
I can think of at least these "why not test in production" considerations:
When the product is launched, the customer relies on it, expecting that your software works (having already been tested). Your live environment should be doing its job, not carrying the load of tests. If the product misbehaves (or does not perform), the technical team has to be sent in to cover the damage, fix the gaps and make it run hassle-free. That not only affects the product cost but can delay the project deadlines in a major way, with a knock-on effect on the vendor's profits and the next few projects.
When the development team completes product development at their end, they have to provide this test environment to the testing team before loading the newly developed product onto that environment for testing.
To me, no matter that you
also have staging and production environments
it is essential to use the test one accordingly. Furthermore, the testing environment should be configured as close to production as it gets. Also, one person could be trying to test while another person breaks the very thing he has been testing; without the two being separate there is no way to do proper testing.
Just to give a full answer: your STAGE environment can play different roles depending on the company.
One is that it can be the QA/STAGE environment that holds an exact copy of production, used for both QA and system testing (testing the system when a lot of updates/changes or an upgrade is about to go into production).
UPDATE:
That was my point too: the QA environment should be a mirror of PROD. A possible solution to your issue with caching/pre-loading files onto staging/production is to create pre-/post-step .bat (let's assume) files.
In our current test project we use this approach. In the pre-steps we set up the files needed for test execution (such as removing files from previous runs and downloading the latest copies/artifacts). In the post-steps we set up the reporting files we need. The advantage is that your files are collected and synced before every execution.
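A minimal sketch of such a pre-step (the directory name and artifact URL are placeholders, and Python stands in for the .bat file just to keep the examples consistent):

    import shutil
    import urllib.request
    from pathlib import Path

    workspace = Path("test-data")                                   # files the test run needs
    artifact_url = "http://ci.example.local/artifacts/latest.zip"   # placeholder URL

    # Remove files left over from previous runs so every execution starts clean.
    if workspace.exists():
        shutil.rmtree(workspace)
    workspace.mkdir(parents=True)

    # Download the latest copies/artifacts needed for this execution.
    urllib.request.urlretrieve(artifact_url, str(workspace / "latest.zip"))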
About the
not on the same physical hardware
in my case we maintain a dedicated remote test server. The advantages are clear; the only thing you need to consider is that it will require maintenance (administration).

Jenkins start build from different users

I need to launch several builds on Jenkins, but I also need to run them as different users, for example test1 and test2.
Is there a plugin for this, or some config file?
I need it because test1 has special permissions for some directories
and test2 has special permissions for some other directories.
Or can I change the Jenkins config somehow so that a build runs as the test1 user, with all of test1's permissions?
I am not aware of any plugin that changes the user Jenkins runs under. However, depending on the OS you have different options: under Windows you can use the runas command, and under Linux there are su and sudo. ssh might be an option too, even if you just connect back to the box you are already on. You will have to see to what extent this forces you to use free-style projects instead of the specialized project types.
However, do you really need the permissions set the way they are, or could you grant test1 the special permissions that test2 has? This obviously does not work if testing the permissions is the main purpose of the test.
Alternatively, you can use the node functionality that Jenkins provides. On one and the same server you can run not only the main Jenkins instance but also nodes (or slaves), and these slaves can run under different users. This brings the overhead that nodes come with, but it is a very simple and clean way to switch users. Nothing prevents you from having more than one slave running on the same server, or from running your slaves on different servers.
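If you go the su/sudo route in a free-style job, a build step can stay very small. A sketch (assuming the account Jenkins runs under has passwordless sudo to test1 and test2, and run_tests.sh is a placeholder for your real test entry point):

    import subprocess

    def run_as(user, command):
        """Run a build command under another local account via sudo."""
        return subprocess.run(["sudo", "-u", user, "--"] + command, check=True)

    # Each invocation exercises the directories only that account can reach.
    run_as("test1", ["./run_tests.sh"])
    run_as("test2", ["./run_tests.sh"])

In practice the two sudo lines can just as well go straight into an "Execute shell" build step.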

Git/Django: Granular code permissioning/availability

We're thinking of bringing in a couple of specialists for short-term projects. I'm trying to figure out how to allow them to effectively develop against our code base without releasing the whole code base to them.
Each project has well defined areas they need access to; primarily our main models, together with specific pieces of our app.
We've started to do a better job of breaking up the project into multiple apps within a single django project, but they all still live together in a single git repository. If you check out the repository you get everything.
What are successful strategies for arranging code and repositories such that third parties can access core models and selected functionality without having access to everything?
Note that since this is a somewhat rare need, I'd strongly prefer a setup that doesn't inconvenience our core developers - their lives should be minimally impacted by the setup.
You might try git-submodule as a way of developing each app as its own git repository while still letting developers grab the root and all apps with one "git clone". It's not totally painless though since when you do this any changes to a submodule will need to be committed there and then again in the root repository to reference the new submodule commit. This is probably inescapable, since if you want anyone beside a core developer to be able to commit to an individual app then the app's commits must be independent.

What do you use for Staging / Deployment Artifact Servers?

I am thinking about writing my own release storage server, but before I do I'd like to know what other people use, so that I can integrate an existing tool rather than create my own.
So what do you use to store your builds for internal access?
I'm looking for a web app that allows me to upload artifacts and then reference them by various tags so I can group them together by component or release vehicle. I also want access controls per build by readiness or promotion.
I define staging as placing built artifacts on a server for communities of users to access. The artifacts are usually zip files containing either applications or libraries + documentation. The user communities are developers, QA, and service delivery/operations. Basically, the creators, the checkers and external-users.
We release artifacts individually and as groups in a release vehicle (e.g., release 1.1 contains foo 1.0.1 and bar 1.0.7). Depending on the artifact, we may want to restrict access. Operations shouldn't be able to access pre-released builds and we may want to track who downloads a limited availability release.
So, I'm hoping to find a tool that does most of what I want with a good extensible design so I can add in what I don't have.
Anyone know of a good tool for managing builds post-build?
Examples might be:
QuickBuild / Luntbuild
TeamForge
BuildForge
Jira & Confluence as a set
Sonatype Nexus
home grown
SVN repository using branching to promote builds from dev -> QA -> GA
Peter,
Since you're not getting many answers, I'll let you know about AnthillPro, whose developer, Urbancode, I work for.
OK, disclaimers out of the way: AnthillPro is designed to serve exactly the broad audience you're discussing - dev, checkers, and operations. Compared to the tools you list, AnthillPro is something like BuildForge (a key competitor of ours) or QuickBuild with a tightly integrated artifact repository (like Nexus). So the builds are run, and you can view the results of your builds - and the build artifacts - in a nice web UI. Users with the correct permissions can run a secondary process, like a deployment or a test, against prior builds - and against the artifacts from the selected build.
The goal is to manage the entire build lifecycle from creation, through various testing tools and deployment environments, out through release to production. It's not a big nasty suite; instead we integrate with tools like Subversion and Jira to make sure every release has a manifest of source and problem-ticket changes.
Your release packages would map well to AnthillPro's built in dependency system. We often see customers create virtual projects that take little or no source code, but instead either relate or package components into a release bundle.
Where AnthillPro may fall short for you is that, generally, we would allow operations to see pre-release builds. However, you could add rules that would immediately fail or block an attempted release by operations of any build not marked as "pre-release". AnthillPro's system of statuses allows the team to flag a build with custom markers like "In QA" or "Approved for Release". Combined with rules about running workflows, that should give you the control you need. If some projects are particularly sensitive, you'd just use the role-based security to lock those away.
Hope that gives you something to look into.
-- Eric
My options are
build automation systems like AntHill, QuickBuild, TeamForge, BuildForge
file server
source control server
Maven repository manager (Nexus, Archiva)
My goals are
group builds by multiple criteria (artifact type, release vehicle, stage/phase)
promote builds from dev -> QA -> released
provide access control for dev builds, qa ready builds, production ready builds
I'm going to focus on either source control as a file server (using SVN) or a Maven repository manager as a file server (using Nexus). The rationale is as follows:
minimize effort
minimize cost
use something I can easily extend when needed (because I'm certain my requirements will shift)
Maven use is growing and will eventually be the dominant build technology here.
Thanks for the information.