As per the title, I'm looking to identify which DB users have passwords and which don't.
I can see that the password itself lives in the inaccessible pg_shadow, but pg_user does not distinguish users with passwords from those without.
I really need this to help with auditing, password rotation processes, and migration between environments.
Any pointers appreciated ....
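For illustration, this is roughly the check I'd like to be able to run (a psycopg2 sketch; it assumes superuser access to pg_shadow, which is precisely what I'd prefer to avoid requiring):

    # Sketch only: lists which roles have a password set. It needs superuser
    # access to pg_shadow, which is exactly the catch I'm trying to get around.
    import psycopg2

    conn = psycopg2.connect("dbname=postgres user=postgres")  # hypothetical DSN
    with conn, conn.cursor() as cur:
        cur.execute(
            "SELECT usename, passwd IS NOT NULL AS has_password "
            "FROM pg_shadow ORDER BY usename"
        )
        for usename, has_password in cur.fetchall():
            print(usename, "has a password" if has_password else "no password")
    conn.close()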
I hope you are all safe and not too crazy yet :)
Here is my use case:
Company A signs up, creates an account, and can add 5 additional users/employees who can log in and see all of that company's info and nothing else. Company B signs up and can add 15 additional users/employees because they're on a higher-tier plan, and likewise they can't see the other company's data. Since this is not highly sensitive data, I'll just use a low level of isolation, i.e. "SELECT * FROM 'table' WHERE tenant_uuid = 1".
This is my first time looking at the whole multi-tenant thing, and I sort of get it, but I'm struggling to see how I can have multiple users tied to the one account.
The only thing I can think of is an abstract user that has an FK to the company. The company would be created with the owner user attached to it (a custom model manager would accomplish this), and then the owner can go in and add their employees.
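For reference, a minimal sketch of the layout I have in mind (model and field names are placeholders, not a finished design):

    # Sketch only: a custom user with an FK to its company/tenant.
    # Requires AUTH_USER_MODEL = "myapp.User" in settings.
    from django.contrib.auth.models import AbstractUser
    from django.db import models


    class Company(models.Model):
        name = models.CharField(max_length=255)
        max_extra_users = models.PositiveIntegerField(default=5)  # tier limit


    class User(AbstractUser):
        company = models.ForeignKey(
            Company,
            on_delete=models.CASCADE,
            related_name="users",
            null=True,  # lets the owner be created together with the company
        )
        is_owner = models.BooleanField(default=False)


    class CompanyQuerySet(models.QuerySet):
        def for_company(self, company):
            # The "low level isolation" filter from above, kept in one place.
            return self.filter(company=company)


    class Invoice(models.Model):
        """Example tenant-owned model; every such model carries the company FK."""
        company = models.ForeignKey(Company, on_delete=models.CASCADE)
        total = models.DecimalField(max_digits=10, decimal_places=2)

        objects = CompanyQuerySet.as_manager()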
Is this my solution, or has anyone seen or used something different? Any help in the matter would be greatly appreciated.
I need some guidance about the best way to store a lot of information for a single user using AWS.
The problem is that every user, after signing up for my website, needs to pick abilities from a bank of about 40 abilities (properties that any user can choose), and I need to find a good way to store them per user.
I am currently using Cognito for the user table and DynamoDB to store user information.
Theoretically, I could just have a column in my DynamoDB table for every ability, with '1' if the user chose it and '0' if not. But that would lead to about 40 extra columns, and I wanted to know if there is a better way of handling this.
Thank you for your time!
You're using a NoSQL database (Dynamo) but thinking relational (columns for everything). Why not have a table that has, for example, an optional column for each ability? It's conceptually similar to the relational approach, but the columns don't have to exist for every user and, if an ability is added, there isn't a table upgrade issue. Something like:
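(Below is a quick boto3 sketch with made-up table and attribute names, just to show the shape of the item.)

    # Sketch: one item per user, with an attribute present only for the
    # abilities that user actually picked. Table/attribute names are made up.
    import boto3

    dynamodb = boto3.resource("dynamodb")
    users = dynamodb.Table("users")

    # This user picked two abilities; the other ~38 simply aren't stored.
    users.put_item(
        Item={
            "user_id": "cognito-sub-1234",  # e.g. the Cognito sub
            "ability_night_vision": True,
            "ability_telepathy": True,
        }
    )

    # Reading the user back tells you which abilities are set.
    item = users.get_item(Key={"user_id": "cognito-sub-1234"})["Item"]
    chosen = [k for k in item if k.startswith("ability_")]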
That would allow you to read a user and determine which abilities they have. Yes, you'd still have to loop through all possible abilities for a user but not every user needs to have an explicitly set ability.
I'm putting together a proposal for the development of a web application.
The app is to be launched in multiple countries, and some of the client's partners and (allegedly; I'm no lawyer) some of the countries involved have rules about where personal data can be stored. The upshot is that there is a hard requirement that particular data about certain countries' users is stored on servers in that country. (It sounds like they're OK with me caching data in any country, though -- so I intend to have a Redis in-memory store in the main data centre.) Some of the data (credit card details, for example) will additionally be encrypted, but this seems to make no difference to them in terms of where it can be stored.
With the current set of requirements, users from one country won't actually ever interact with users from another country, so one obvious option is to run different instances of the application in each country, entirely self-contained. This is simpler from an architectural point of view, but harder to manage, and would have overall higher server costs. It might get complicated if for example the client wants reports on all users across all countries, or eventually they want to merge the databases, and users' primary keys have to change. Not impossible, but it'd likely be a pain.
Probably better would be to have a central database with all the information the client deems acceptable to host in a single spot (somewhere in North America), and then satellite databases in each country holding the information the client needs kept "at home".
So the main database would have the main users table, consisting of only a PK and a country code, and would have lots of other tables. Each local database would have a "user details" table, with a foreign key (to the main users table on the main database) and a bunch of other columns of personally identifiable information, as well as username, email address, password, etc.
The client may then push to have other data stored in the satellite locations, some of which may be one-to-many with a user or many-to-many with a user.
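To make the shape of this concrete, here's a rough sketch of what I'm picturing in Django terms (database aliases, hosts, and fields are purely illustrative):

    # settings.py (sketch): the central DB plus one alias per country.
    DATABASES = {
        "default": {                      # central DB, hosted in North America
            "ENGINE": "django.db.backends.postgresql",
            "NAME": "central",
        },
        "satellite_de": {                 # example per-country DB for German users
            "ENGINE": "django.db.backends.postgresql",
            "NAME": "satellite_de",
            "HOST": "db.de.example.com",  # hypothetical in-country host
        },
    }

    # models.py (sketch)
    from django.db import models


    class GlobalUser(models.Model):
        """Central table: essentially just a PK and a country code."""
        country_code = models.CharField(max_length=2)


    class UserDetails(models.Model):
        """Satellite table: the PII that has to stay in-country."""
        # Django won't enforce an FK across databases, so store the id directly.
        global_user_id = models.BigIntegerField(db_index=True)
        username = models.CharField(max_length=150, unique=True)
        email = models.EmailField()
        password = models.CharField(max_length=128)  # hashed with make_password()

    # Touching the PII means naming the satellite explicitly, e.g.:
    #   UserDetails.objects.using("satellite_de").get(global_user_id=user.pk)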
My questions:
How can this be handled with Django? Can it, or should I look at other frameworks?
Can the built-in User model be extended to look in all the satellite databases for the matching user on login, and, once logged in, to retrieve the user's data from those databases without too much trouble?
Are there any guidelines you can give me to make sure code stays simple and things stay efficient?
Will this be significantly easier if the satellite database only has one-to-one data with the main User table? I imagine that having one-to-many or many-to-many data in those satellite databases would be a major pain (or at least inefficient), or am I wrong?
To answer your questions:
1. This looks like something you could do in Django (I like Django, so I may be biased here) - maybe the following will convince you (or not).
2. A microservice approach? Multiple instances of the "user" microservice, each with its own database (I hear you about the costs, but maybe?).
3. You can do plenty with Django authentication backends (including writing your own) - there is a "remote" auth backend you could use as an example; see the sketch after this list. Also read about stateless authentication (JWT).
4. Look at points 1 and 2.
5. Consider not using the built-in Django user model if it doesn't suit you.
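Regarding point 3, here is a rough sketch of a custom backend that checks credentials against a satellite database and then returns the central user. It assumes the hypothetical GlobalUser/UserDetails layout sketched in the question, that passwords were hashed with Django's make_password, and illustrative database aliases; note that whatever authenticate() returns has to be the model you configure as AUTH_USER_MODEL.

    # Sketch of a custom authentication backend for the split-database layout.
    from django.contrib.auth.backends import BaseBackend
    from django.contrib.auth.hashers import check_password

    from myapp.models import GlobalUser, UserDetails  # hypothetical models

    SATELLITE_ALIASES = ("satellite_de", "satellite_fr")  # illustrative


    class SatelliteBackend(BaseBackend):
        def authenticate(self, request, username=None, password=None):
            # Username/password live in the satellite DBs, so search them.
            for alias in SATELLITE_ALIASES:
                details = (
                    UserDetails.objects.using(alias)
                    .filter(username=username)
                    .first()
                )
                if details and check_password(password, details.password):
                    # The satellite row stores the central user's id. Whatever is
                    # returned here must be the configured AUTH_USER_MODEL for
                    # Django's login machinery to accept it.
                    return GlobalUser.objects.filter(pk=details.global_user_id).first()
            return None

        def get_user(self, user_id):
            return GlobalUser.objects.filter(pk=user_id).first()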
I'm working on a Qt/C++ open source project that uses MySQL databases. One class will be used during initial configuration (first run), where the user will be able to select a database. Is there a way to provide a list of all databases on a host without logging in and executing a SHOW DATABASES; statement? I want to get a list of all databases on the host, not just those owned by a particular user. The only way I know of to do this is to execute SHOW DATABASES; as root on a specific host, but I don't want to require the user to have root access except in certain situations where it is absolutely necessary.
The idea is to have a dialog that lets the user select the default database they want to use during subsequent sessions and provide the user/pass that goes with it. Bonus points if I can get the owner of each database too (for instance, have the program display that database foo is owned by johndoe while database foo2 is owned by janesmith). Once the user has made a choice, the dialog will then write this info to that user's program configuration file, which gets read on normal startup.
Can this be done or will I have to find some workaround such as making the user provide a login/password first and showing a list of databases owned by that account? That would be relatively cumbersome but easy.
You can't execute MySQL queries without logging in. That said, it is possible to create a user with very minimal privileges.
You can create a user with just enough privileges to show the list of databases and run the query as that user; then, once the user has logged in, change the connection string.
There is a SHOW DATABASES privilege which allows just that: http://dev.mysql.com/doc/refman/5.0/en/privileges-provided.html#priv_show-databases
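A sketch of that flow, in Python for brevity (the same sequence applies from Qt's QSqlDatabase); the account name, password, and host here are made up:

    # One-time admin setup (made-up names), giving the account *only* the
    # ability to list database names:
    #
    #   CREATE USER 'db_lister'@'%' IDENTIFIED BY 'list-only-secret';
    #   GRANT SHOW DATABASES ON *.* TO 'db_lister'@'%';
    #
    # The application connects as that account just to populate the dialog,
    # then reconnects with whatever credentials the user actually enters.
    import mysql.connector

    listing_conn = mysql.connector.connect(
        host="db.example.com", user="db_lister", password="list-only-secret"
    )
    cur = listing_conn.cursor()
    cur.execute("SHOW DATABASES")
    databases = [name for (name,) in cur.fetchall()]
    cur.close()
    listing_conn.close()

    # ...user picks a database in the dialog and supplies their own login...
    chosen_db, chosen_user, chosen_password = "foo", "johndoe", "their-password"
    user_conn = mysql.connector.connect(
        host="db.example.com",
        user=chosen_user,
        password=chosen_password,
        database=chosen_db,
    )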
Normally you define a user for the application with read-only privileges; after fetching the information you need, you present it to the user and then ask for their credentials. I'm oversimplifying and not going over the specifics of how this is done.
I run a REST service on AppEngine (which may not be relevant). Each REST request is accompanied by a user id and password, and at the beginning of each request I hash the password to see if it matches my records before proceeding.
This is working well, theoretically, but in practice I get bursts of requests from users - 4 or 5 a second. It's taking BCrypt 500ms to hash the password for each request! What a waste!
Clearly, I don't want to optimize the BCrypt time down. Are there standard practices for caching hashes? Would memcache be a safe place to store a table of recently hashed passwords and their hashes? I guess at that point I might as well store the users' plain-text passwords in Memcache. I'd love to do a 3ms memcache lookup instead of a 500ms hash, but security is paramount. Would it make more sense to implement some sort of session abstraction?
Thanks for any advice!
Edit for extra context: this is a gradebook application that stores sensitive student data (grades). Teachers and students log in from everywhere, including over wifi, etc. Every successful request is sent over https.
The usual rule of thumb with REST APIs is to keep them fully stateless; however, as with goto, there is a time and a place, depending on your requirements. If you're not averse to the idea of having the client store a temporary session key which you only need to regenerate occasionally, then you might try that out.
When a client makes a request, check whether they're sending a session key variable along with the user ID and password. If they are not, generate one based on the current server time and their password, then pass it back to the client. Store it in your own database along with its creation time. Then, when a client makes a request that includes a session key, you can verify it by directly comparing it to the session key stored in your database without requiring a hash. As long as you invalidate the key every few hours, it shouldn't be much of a security concern. Then again, if you're currently sending the password and user ID in the clear then you already have security issues.
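A rough sketch of that flow in Python. I've used a random token from secrets rather than deriving the key from the time and password (same shape, a bit safer), and a plain dict stands in for whatever table or datastore kind you keep the keys in:

    # Sketch: issue a session key after one full bcrypt check, then verify
    # later requests with a cheap lookup + comparison instead of re-hashing.
    import secrets
    import time

    import bcrypt  # the same library doing the expensive hash today

    SESSION_TTL = 3 * 60 * 60      # invalidate keys after a few hours
    sessions = {}                  # token -> (user_id, created_at); stand-in store


    def login(user_id, password, stored_bcrypt_hash):
        """Full ~500 ms bcrypt check; only needed when no valid key is presented."""
        if not bcrypt.checkpw(password.encode(), stored_bcrypt_hash):
            return None
        token = secrets.token_urlsafe(32)
        sessions[token] = (user_id, time.time())
        return token               # client sends this with subsequent requests


    def check_session(user_id, token):
        """Cheap per-request check: direct comparison, no hashing."""
        entry = sessions.get(token)
        if not entry:
            return False
        session_user, created_at = entry
        if session_user != user_id or time.time() - created_at > SESSION_TTL:
            sessions.pop(token, None)   # expired or mismatched: force a re-login
            return False
        return True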
Your best bet given your current approach is to keep a mapping of attempted passwords to their bcrypted form in memcache. If you're concerned for some reason about storing a plaintext password in memcache, then use an md5 or sha1 hash of the attempted password as a key instead.
The extra step isn't really necessary. Items stored in memcache don't leak to other apps.
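A sketch of that caching idea, assuming the legacy App Engine memcache API and the bcrypt package; I've keyed the entry on the user id plus a SHA-256 of the attempt rather than md5/sha1, but it's the same idea:

    # Sketch: remember that "this exact attempt matched this bcrypt hash" so a
    # burst of identical requests skips the ~500 ms bcrypt run after the first.
    import hashlib

    import bcrypt
    from google.appengine.api import memcache  # legacy App Engine runtime

    CACHE_TTL = 15 * 60  # keep cached verifications short-lived


    def password_ok(user_id, attempted_password, stored_bcrypt_hash):
        digest = hashlib.sha256(attempted_password.encode()).hexdigest()
        key = "pwcache:%s:%s" % (user_id, digest)

        if memcache.get(key) == stored_bcrypt_hash:
            return True                                            # fast path, ~ms
        if bcrypt.checkpw(attempted_password.encode(), stored_bcrypt_hash):
            memcache.set(key, stored_bcrypt_hash, time=CACHE_TTL)  # slow path
            return True
        return False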