I've been working on a personal project in C# that allows the user to execute scripts written by other users and restrict the permissions of that script. I can use .NET Code Access Security mechanisms to sandbox user scripts and make sure that they have only those permissions that the user wants to give them.
Broadly speaking, my security requirements are:
the user should be able to restrict the untrusted script's access to only certain parts of the filesystem, including forbidding all filesystem access
the user should be able to restrict the untrusted script's network connections to only certain IP addresses or hostnames, including forbidding all network connections
it is okay if the user script manages to hang or terminate the host application, but the user script must not be able to circumvent the permission restrictions (i.e. denial of service is okay, breach isn't)
I'm contemplating trying to do something similar in C++, as a sort of a personal exercise. Obviously, things are more complicated when running native code directly, even if the user scripts are written in a scripting language like Lua.
The first approach I can think of is to insert my own hooks into the scripting environment's standard library functions. For example, if the scripting language is Lua, instead of exposing io.open directly, I would expose a wrapper that checks the arguments against the script's permissions before passing them to the original implementation.
My concern with this approach is that it drastically increases the amount of my own code that is in charge of security, and therefore the surface for security vulnerabilities that I wrote myself. In other words, with .NET CAS I can trust that Microsoft did its job well in the sandboxing code, as opposed to having to trust my own sandboxing code.
Are there any alternatives I'm unaware of?
This isn't really a technical question, but say you've developed an app for commercial use. If you get questions about the security of your app from a person who isn't technically well-versed, saying that you've taken standard security measures (encryption of passwords, protection of routes, secure database connections, etc.) won't mean much to people who don't understand those terms. With that in mind, is there any way to show or prove more generally that your app is secure? For example, is there a certification, say from AWS, that will show clients your app can be trusted?
To give a security-aware client assurance that your software is reasonably secure, you should be able to present the secure development lifecycle that was in place during development and resulted in secure software. That is really the only way to gain that assurance.
A secure SDLC includes elements like developer security awareness and education, so developers know about and can avoid security issues. It includes feature reviews, security architecture and code reviews during development, static scanning (SAST), dynamic scanning (DAST) or, more recently, IAST, and penetration testing. In the case of SaaS, it also includes secure operations, configuration management, log management, and DevSecOps.
You simply cannot get this level of assurance afterwards.
You can have some elements of it, though: you can run a static scan, buy a penetration test, show how you deal with security issues, and so on. In many cases that's actually good enough, but be aware that really secure software is more than this.
I'm not actually whitelisted to use Google Cloud Functions yet, but I have this question:
Do HTTP functions have access to my Google Cloud context?
I want to run untrusted JavaScript code, so I want to use a function as a sandbox where the user can run simple JavaScript snippets.
If I understand your request correctly, you are looking to have Cloud HTTP Functions evaluate user-provided Javascript code on the server side.
By your description, the only real ways the function could evaluate the user's code would be essentially using eval or new Function(). To confirm the risks, I created a cloud function that simply passes the POST request body to eval. Without any dependencies, I could issue HTTP requests on behalf of the cloud function, which could be quite bad.
Given that most useful cloud functions would have @google-cloud as a dependency, the user could gain access to that context. I was able to require @google-cloud and get all the information accessible to that object (application credentials, application information, etc.). Having such information available to a malicious user is considerably worse than the first test. In addition, Cloud Functions are authenticated by default, presumably with default application credentials, thus gaining all the abilities of the gcloud client library.
In the end, the safest way to run user-provided code on the server would be within a container. This essentially locks the user's code into a Linux box whose resources and networking capabilities are entirely governed by you. On the Google Cloud Platform, your best means of accomplishing this would likely be using App Engine as a front end to handle user requests and Compute Engine VMs to create and run containers for user code. It's more complex, but doesn't risk destroying your Google Cloud Platform project.
I am writing a C++ application with a PostgreSQL 9.2 database backend. It is accounting software: a multi-user application with privilege-separation features.
I need help in implementing the user account system. The privileges for users need not be mutually exclusive. Should I implement it at the application level, or at the database level?
The company is not very large at present. Assume about 15-20 offices with an average of 10 program users per office.
Can I make use of roles in Postgres to implement this? Will it become too tedious or unmanageable, or are there flaws in such an approach?
If I go the application route, how do I store the set of privileges a user has? Will a binary string suffice? If there are additional privileges later, how can I incorporate them? What do I need to do to ensure there are no security issues? In such an approach, I am assuming the application connects with the privileges required for the most privileged user.
Some combination of the two methods? Or something entirely different?
All suggestions and arguments are welcome.
Never perform authorization in a client application that runs in an uncontrolled environment, and every device a user has physical access to is an uncontrolled environment. That is security through obscurity: a user can simply attach a debugger, extract the database credentials from the client program's memory, and then use psql to do anything.
Use roles.
When I was developing a C++/PostgreSQL desktop application, I chose to disallow all users from modifying any tables directly and created an API using PL/pgSQL functions with the VOLATILE SECURITY DEFINER options. But I don't think it was the best approach, as it is unnatural and error-prone to use, for example:
select add_person(?,?,?,?,?,?,?,?,?,?,?,?);
I think a better way would be to allow modifications to the tables a user needs to modify and, where needed, enforce authorization using BEFORE triggers that throw an error when current_user does not belong to the proper role.
But remember to use the SET search_path = ... option in all functions that have anything to do with security.
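A minimal sketch of that trigger approach, with hypothetical table and role names (untested, and note the search_path setting mentioned above):

```sql
-- Hypothetical: only members of role "accountant" may change "invoices".
CREATE FUNCTION check_invoice_privs() RETURNS trigger AS $$
BEGIN
    IF NOT pg_has_role(current_user, 'accountant', 'member') THEN
        RAISE EXCEPTION 'user % may not modify invoices', current_user;
    END IF;
    IF TG_OP = 'DELETE' THEN
        RETURN OLD;  -- NEW is null in a DELETE trigger
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql SET search_path = public;

CREATE TRIGGER invoices_authz
    BEFORE INSERT OR UPDATE OR DELETE ON invoices
    FOR EACH ROW EXECUTE PROCEDURE check_invoice_privs();
```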
If you want to authorize read-only access to some tables, it gets even more complicated. Either you disable the SELECT privilege on those tables and create an API of SECURITY DEFINER functions for accessing all data, which would be a monster-sized API, extremely ugly and extremely fragile. Or you disable the SELECT privilege and create views over them using CREATE VIEW ... WITH (security_barrier). Also not pretty.
I'm interested in what strategies can be used for handling user authentication and authorization in a cross-platform distributed system. I would like to be able to mark actions in the system as belonging to a certain user and also to only allow certain users to do certain actions.
Background:
The system is currently used under Windows only, where actions initiated by a user are tagged only as coming from that user's particular machine. An action basically involves using the CPU for some calculations and returning the result.
There is currently no authorization whatsoever and no authentication of users (or between computers). The system runs as a service with low privileges (under the NETWORK SERVICE account). Data is not sensitive, and all users on the local network may use the system to their hearts' content.
The system can be deployed in both homogeneous Windows domain setups as well as workgroups without a domain controller or with a mix of a domain together with a bunch of worker computers not belonging to the domain.
Problem
In order to add functionality to the system, say collecting usage statistics per user or displaying who is making use of a computer, it is necessary to keep track of individual users. In addition, if certain actions should only be allowed for certain users (for instance, changing some settings of the system), some form of authorization is also required.
I have a good understanding of the Windows way of doing this, and adding this type of functionality in a homogeneous Windows domain setup would be straightforward using the built-in authentication and authorization functionality in Windows. This also eliminates the need to create special accounts valid only for this particular system: once a user has logged in normally, all authentication and authorization can be done without requiring any extra user interaction.
But what if the system should be able to run on Mac OS X? Or Linux? What if it is supposed to run in a mixed environment, with some users in a Windows domain, others on OS X, and some worker machines running Linux? (The actual system can be ported to all of these platforms and handles cross-platform communication and so on.) I have limited knowledge of how authentication and authorization are handled on these platforms, and no knowledge of how this can be achieved when interacting between platforms.
Are there any good strategies to use for cross-platform user authentication and authorization in a distributed system like this? Note the dual use of "cross-platform" here - both as in being able to compile the program for different platforms and as in being able to interact between platforms.
I've tagged the question as C++, since this is the language the system is written in, but I am willing to accept anything that can be interacted with using C++. Any ideas (including wild architectural changes) are welcome!
Update:
An example of what would be interesting to be able to achieve:
User A is logged in to machine 1, which is a Windows machine.
User A opens the administrative interface for the system, selects machine 2 (a Linux system) and adjusts a setting. The system verifies that user A in fact has sufficient privileges to be allowed to do this.
Since user A already verified their identity at login, it would be desirable to let them change this setting without having to provide additional credentials.
You could use claims-based authentication with SAML tokens, which works cross-platform.
On the Windows side there is a library for this: Windows Identity Foundation.
See: http://msdn.microsoft.com/en-us/security/aa570351
This question has been asked a few times on SO from what I found:
When should a web service not be used?
Web Service or DLL?
The answers helped, but each was aimed at a particular scenario. I wanted a more general take on this.
When should a Web Service be considered over a Shared Library (DLL) and vice versa?
Library Advantages:
Native code = higher performance
Simplest thing that could possibly work
No risk of centralized service going down and impacting all consumers
Service Advantages:
Everyone gets upgrades immediately and transparently (unless a versioned API is offered)
Consumers cannot decompile the code
Can scale service hardware separately
Technology agnostic. With a shared library, consumers must utilize a compatible technology.
More secure. The UI tier can call the service which sits behind a firewall instead of directly accessing the DB.
My thought on this:
A web service was designed for machine interop and to reach an audience easily by using HTTP as the means of transport.

A strong point is that by publishing the service you open its use to an audience that is potentially vast (over the web, or at least throughout the entire company) and/or largely outside your control, influence, or communication channel, and you don't mind this or it is desired. Using the service is easy: clients simply need an internet connection to consume it. A library cannot be shared quite so easily (though it can be done). The usage of the service is largely open: you make it available to whoever feels they could use it, however they want to use it.

However, a web service is in general slower and depends on a network connection. It is generally harder to test than a code library, and it may be harder to maintain; much of that depends on your maintenance and coding practices.

I would consider a web service if several of the above features are desired, or at least one of them is considered paramount and the downsides are acceptable or a necessary evil.
What about a Shared Library?
What if you are far more in control of your environment, or want to be? You know who will be using the code (the interface isn't a problem to maintain), and you don't have to worry about interop. You are in a situation where you can easily achieve sharing without a lot of hoops to jump through.
Examples in my mind of when to use:
You have many applications in your control, all hosted on the same server or two, that will use the code. Library.
You have many applications, but hosted across a dozen or so servers. A web service may be the better choice.
You are not sure who will use your code or how, but know it is of good value to many. Web service.
You are writing something used only by a limited set of applications, perhaps some helper functions. Library.
You are writing something highly specialized, not suited for consumption by many, such as an API for your line-of-business application that no one else will ever use. Library.
All things being equal, it is easier to start with a shared library and turn it into a web service than vice versa.
There are many more but these are some of my thoughts on it...
Based on multiple sources...
Common Shared Library
Should provide a set of well-known operations that perform common tasks (e.g., String parsing, numerical manipulations, builders)
Should encapsulate common reusable code
Have minimal dependencies on other libraries
Provide stable interfaces
Services
Should provide reusable application-components
Provide common business services (e.g., rate-of-return calculations, performance reports, or transaction history services)
May be used to connect existing software from disparate systems or exchange data between applications
Here are 5 options and reasons to use them.
Service
has persistent state
you need to release updates often
solves a major business problem and owns the data related to it
need security: users can't see your code or access your storage
need an agnostic interface like REST (you can easily auto-generate shallow REST clients for client languages)
need to scale separately
Library
you simply need a collection of reusable code
needs to run on the client side
can't tolerate any downtime
can't tolerate even a few milliseconds of latency
simplest solution that could possibly work
need to ship code to the data (high throughput or map-reduce)
First provide library. Then service if need arises.
an agile approach: you start with the simplest solution, then expand
needs might evolve and become more like the "Service" cases
Library that starts a local service.
many apps on the host need to connect to it and send data to it
Neither
you can't seriously justify even the library case
business value is questionable
Ideally, to get both sets of advantages, I'd need a portable library with agnostic interface glue, automatically updated, and either obfuscated (hard to decompile) or run in a secure in-house environment. Combining a web service with a library could make that viable.