Where does Sitecore store the security restrictions for an item for each user role?

I need to know whether a user with a specific role has access to a specific item, using only the Sitecore database tables and not the Sitecore API. So my question is: in which table and in which column is this stored?

Security is stored against individual items in the __Security field. This is a shared field, so it lives in the SharedFields table. The security information itself is a pipe-delimited list. NOTE: going directly to the schema is not recommended, as it may change at Sitecore's discretion.
The SQL below will get the security for all items in the database; update the WHERE clause as required to get the security for just the items you are interested in.
SELECT Id, ItemId, FieldId, Value, Created, Updated
FROM SharedFields
WHERE FieldId = '{DEC8D2D5-E3CF-48B6-A653-8E69E2716641}' /* Guid is the ID of the __Security field */
Result:
8AA88E96-2110-4BE1-A554-BAE9C60536FF 418B3B60-61E2-4E6C-B98F-061C88239087 DEC8D2D5-E3CF-48B6-A653-8E69E2716641 au|sitecore\agency|pd|-item:write|-item:admin|!*|+item:read|-item:delete|-item:create|-item:rename|pe|-item:write|-item:admin|!*|+item:read|-item:delete|-item:create|-item:rename| 2011-03-07 11:48:14.563 2011-03-07 11:48:14.563
06A6DB6C-6DEF-40E0-8CF8-8E179888DBB8 F1AF5582-B6A2-4435-8307-2837C1644EFB DEC8D2D5-E3CF-48B6-A653-8E69E2716641 au|sitecore\agency|pd|-item:write|-item:admin|!*|+item:read|-item:delete|-item:create|-item:rename|pe|-item:write|-item:admin|!*|+item:read|-item:delete|-item:create|-item:rename| 2011-03-07 11:48:14.270 2011-03-07 11:48:14.270
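If you do need to interpret that value outside the API, here is a rough Python sketch that splits it into per-account rights. The token meanings assumed here (au = a user entry, ar = a role entry, pe = rights on the item itself, pd = rights on its descendants) are read off the sample rows above rather than from official documentation, and the format may change at Sitecore's discretion:

def parse_security(value):
    tokens = [t for t in value.split("|") if t]
    entries, current, scope = [], None, None
    for token in tokens:
        if token in ("au", "ar"):                   # start of a user / role entry
            current = {"type": token, "account": None, "pe": [], "pd": []}
            entries.append(current)
            scope = None
        elif current is not None and current["account"] is None:
            current["account"] = token              # e.g. sitecore\agency
        elif token in ("pe", "pd"):
            scope = token                           # rights that follow apply to this scope
        elif current is not None and scope is not None:
            current[scope].append(token)            # e.g. +item:read, -item:write, !*
    return entries

sample = (r"au|sitecore\agency|pd|-item:write|-item:admin|!*|+item:read|"
          r"-item:delete|-item:create|-item:rename|pe|-item:write|-item:admin|"
          r"!*|+item:read|-item:delete|-item:create|-item:rename|")
for entry in parse_security(sample):
    print(entry["type"], entry["account"], "item rights:", entry["pe"])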

The SQL schema is not set up the way you might think. The rights are stored in a Sitecore item field, not in a dedicated column in the table. In SQL it will just be part of the XML data for the content item. You could parse that, but I don't recommend going directly to SQL. Can you explain why you must do this using SQL?

Security is associated with each individual item and is stored in the __Security field.
This field is shared and is in the SharedFields table.
Each value is separated by a pipe.
The information related to user roles is stored in the Users table, with the role ID and role name.

Related

Update only one column in GCP datastore table

I want to update only one column in a GCP Datastore table.
For example: the table has columns id, name, descriptions, price, data.
I receive data to update only descriptions. I want to update only the descriptions column without reading the other data (I want to avoid a read before the write).
Is it possible to update only one column in Datastore without reading the data first?
If not, what other database in GCP allows this?
Cloud Datastore is a document database that stores entities; there are no fixed columns or schema. Instead, each entity can have a different set of properties, which are similar to columns in a traditional relational database. Check this document for more information.
You cannot update specific properties of an entity. As the documentation says, you have to update the entire entity. To update a specific property, you need to retrieve the entire entity, modify the desired property, and then write the entire entity back to the database.
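A minimal sketch of that read-modify-write cycle with the google-cloud-datastore Python client; the kind name, key id and property name are assumptions for illustration:

from google.cloud import datastore

client = datastore.Client()

with client.transaction():
    key = client.key("Product", 12345)               # assumed kind and id
    entity = client.get(key)                         # read the full entity
    entity["descriptions"] = "Updated description"   # change only one property
    client.put(entity)                               # write the entire entity back

Running the read and the write inside a transaction keeps a concurrent update from being overwritten between the get and the put.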

Setting foreign key and primary key in GCP Firestore

I am new to GCP and NoSQL.
Is it possible to have a primary key and a foreign key in GCP Firestore?
Example: I have two tables, STUDENT and DEPARTMENT.
The tables look like below:
Department table
dept-id (primary key)
deptname
Student table
dept-id (foreign key)
student-id
student name
Can anybody please help in designing this in GCP Firestore?
To a database, a key is the same as any UUID/random ID and can be shared and used between users, teams, admins, and businesses of all kinds; what matters is how that data is associated. Since Firestore is a NoSQL database, there are no direct relational references, so one key cannot be tied to another without secondary lookups.
In the same way you would identify a user profile by an ID, you can create an empty document with a random ID to serve as the ID of a team, or in this case the department. You can also use string combinations if you have a team and a sub-team: as long as you have access to the team/department ID at the point of the database request, you can use a regex to match against the string.
Example: request.resource.data.name.matches('/^' + departmentID)
To make a foreign key work with Security Rules or within the client, you must fetch by the key that contains the data; the key should be the name of the document in question to streamline the request, because you cannot perform queries or loop through data within Security Rules.
For a great read on this subject, I highly suggest this article:
https://medium.com/firebase-developers/a-list-of-firebase-firestore-security-rules-for-your-project-fe46cfaf8b2a
But my suggestion is to use a key that represents the department directly rather than spending additional resources on creating and managing a foreign key.
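For illustration, a minimal sketch of the "department ID as a plain field" approach with the google-cloud-firestore Python client; collection names, field names and values are assumptions:

from google.cloud import firestore

db = firestore.Client()

# Department document; its auto-generated id plays the role of the primary key.
dept_ref = db.collection("departments").document()
dept_ref.set({"deptname": "Physics"})

# Student document stores the department id as an ordinary field (the "foreign key").
db.collection("students").document("student-001").set({
    "name": "Alice",
    "dept_id": dept_ref.id,
})

# The "join" happens in the application: query students by department id.
for doc in db.collection("students").where("dept_id", "==", dept_ref.id).stream():
    print(doc.id, doc.to_dict())

Nothing enforces that dept_id points at an existing department; as noted below, that integrity is maintained by your code, not by Firestore.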
Firestore does not support referential integrity.
That means you can use any names for fields (subject to rules and conventions), but the semantics and any additional functionality have to be maintained by you rather than by the system.

How to give a user permission to access only one column in a Microsoft List?

I am new to Microsoft Lists and am trying to implement a library management system. I have prepared a list to show the book details using the 'From Excel' option. I need to restrict permissions based on the user role (admin, client).
For example, if a user wants to request a book, there should be a column the user can edit to send a request for the desired book, so that an admin is notified of the request and can take action.
Similarly, in the list I created, I need to give the user permission to edit only one column. The rest of the columns should be view-only.
Note: From my research, I found we can set permissions such as view, or view and edit, and stop sharing the list based on the roles Members, Owners, and Visitors.
Could anyone please guide me on this?
Regards,
Vadivel
@Karthi,
It's not possible to configure column-level permissions; the smallest permission scope is item-level. There is no column-level or view-level permission.
Here are two possible workarounds:
Make the target column read-only, then develop another interface for the administrator to manage the data. For example, through the SharePoint REST API we can turn the column back to editable, post the update, and then immediately turn it back to read-only (a rough sketch is shown after this list).
Check Set List Column Read Only in SharePoint using PowerShell
How to update read only field
Hide the target column and create a calculated column whose value is equal to the target column. The user will only see the calculated columns; any updates to the target column will be reflected in the calculated columns.
Check Make SharePoint Columns read-only without coding
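A rough Python sketch of the first workaround over the SharePoint REST API. The site URL, list name, column name and the way you obtain the access token are assumptions, and you should verify in your tenant that ReadOnlyField can be updated this way:

import requests

site = "https://contoso.sharepoint.com/sites/library"        # assumed site
field_url = f"{site}/_api/web/lists/getbytitle('Books')/fields/getbytitle('Request')"
headers = {
    "Authorization": "Bearer <access token>",                # from your auth flow
    "Accept": "application/json;odata=verbose",
    "Content-Type": "application/json;odata=verbose",
    "X-HTTP-Method": "MERGE",
    "IF-MATCH": "*",
}

def set_read_only(read_only: bool) -> None:
    # MERGE request updating the ReadOnlyField property of the column.
    body = {"__metadata": {"type": "SP.Field"}, "ReadOnlyField": read_only}
    requests.post(field_url, json=body, headers=headers).raise_for_status()

set_read_only(False)   # unlock the column for the admin-side update
# ... post the item update through the regular list item endpoint here ...
set_read_only(True)    # lock it again immediately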

Using RLS with Analysis Service Live Connection in a PBIE "App Owns Data" scenario

I'm fairly new to Power BI and I'm trying to work out whether it's the right tool for my case.
I would like to use Power BI Embedded in a web application for our customers (where they're logged in), who do not have any Power BI account/licence.
The databases on which the reports are based are on-premises, so we would use an Analysis Services live connection to access them.
Each customer should have his own report.
Is it possible to use RLS in that case?
Does that mean we've to create a role for each of them?
What username should be given in the EffectiveIdentity? Is it 'free text' that is used by PBI to get the username in the DAX?
If each customer will have his own report, then why do you need RLS at all? Just make each report show what that user is supposed to see. Or do you want a single report (or set of reports) that is shared between the users, where each should see only their own data? I will assume it is the latter.
I will start with the last question - the effective identity is not "free text". It must be a valid user name with rights to access the data, as specified in the documentation:
The effective identity that is provided for the username property must be a Windows user with permissions on the Analysis Services server.
Then you can define RLS in your Analysis Services model by adding a "user security" table in which you specify which rows should be visible to each user. Define relationships between this security table and the other tables in the model, and let RLS filter the data in the security table. The relationships with the rest of the model will cascade the filtering, so only the relevant rows will be visible to the user. See Implement row-level security in an Analysis Services tabular model for an example.
So the answer to your second question is no, you don't need a separate role for each user, because the filtering is based on the username and works the same way for every user.
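For the "App Owns Data" flow, a minimal sketch of requesting an embed token with an effective identity via the Power BI REST API from Python; the workspace/report/dataset ids, the AAD token acquisition and the username value are assumptions:

import requests

group_id, report_id, dataset_id = "<workspace id>", "<report id>", "<dataset id>"
aad_token = "<access token for the service principal or master user>"

url = (f"https://api.powerbi.com/v1.0/myorg/groups/{group_id}"
       f"/reports/{report_id}/GenerateToken")
body = {
    "accessLevel": "View",
    "identities": [{
        # Must be a Windows account the Analysis Services server recognises;
        # this is the value your RLS filtering in the model keys off.
        "username": "DOMAIN\\customer01",
        "datasets": [dataset_id],
    }],
}
resp = requests.post(url, json=body, headers={"Authorization": f"Bearer {aad_token}"})
resp.raise_for_status()
embed_token = resp.json()["token"]     # handed to the client-side embed call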

DynamoDB table/index schema design for querying multi-valued attributes

I'm building a DynamoDB app that will eventually serve a large number (millions) of users. Currently the app's item schema is simple:
{
userId: "08074c7e0c0a4453b3c723685021d0b6", // partition key
email: "foo#foo.com",
... other attributes ...
}
When a new user signs up, or if a user wants to find another user by email address, we'll need to look up users by email instead of by userId. With the current schema that's easy: just use a global secondary index with email as the Partition Key.
But we want to enable multiple email addresses per user, and the DynamoDB Query operation doesn't support a List-typed KeyConditionExpression. So I'm weighing several options to avoid an expensive Scan operation every time a user signs up or wants to find another user by email address.
Below is what I'm planning to change to enable additional emails per user. Is this a good approach? Is there a better option?
Add a sort key column (e.g. itemTypeAndIndex) to allow multiple items per userId.
{
userId: "08074c7e0c0a4453b3c723685021d0b6", // partition key
itemTypeAndIndex: "main", // sort key
email: "foo#foo.com",
... other attributes ...
}
If the user adds a second, third, etc. email, then add a new item for each email, like this:
{
userId: "08074c7e0c0a4453b3c723685021d0b6", // partition key
itemTypeAndIndex: "Email-2", // sort key
email: "bar#bar.com"
// no more attributes
}
The same global secondary index (with email as the Partition Key) can still be used to find both primary and non-primary email addresses.
If a user wants to change their primary email address, we'd swap the email values in the "primary" and "non-primary" items. (Now that DynamoDB supports transactions, doing this will be safer than before!)
If we need to delete a user, we'd have to delete all the items for that userId. If we need to merge two users then we'd have to merge all items for that userId.
The same approach (new items with same userId but different sort keys) could be used for other 1-user-has-many-values data that needs to be Query-able
Is this a good way to do it? Is there a better way?
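For illustration, a minimal boto3 sketch of the layout described above; the table name, GSI name and attribute values are assumptions:

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("Users")

# One "main" item per user, plus one extra item per additional email address,
# all sharing the same partition key (userId).
table.put_item(Item={
    "userId": "08074c7e0c0a4453b3c723685021d0b6",
    "itemTypeAndIndex": "main",
    "email": "foo@example.com",
})
table.put_item(Item={
    "userId": "08074c7e0c0a4453b3c723685021d0b6",
    "itemTypeAndIndex": "Email-2",
    "email": "bar@example.com",
})

# Look a user up by any of their email addresses through the email GSI.
resp = table.query(
    IndexName="email-index",
    KeyConditionExpression=Key("email").eq("bar@example.com"),
)
print(resp["Items"])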
Justin, for searching on attributes I would strongly advise against using DynamoDB. I am not saying you can't achieve this; however, I see a few problems that will eventually come your way if you go down this route.
Using a sort key per email id will result in duplicate records for the same user, i.e. if a user has registered 5 emails, that implies 5 records in your table with the same schema and attributes except for the email-id attribute.
What if a new use case comes along in the future where you also want to search for a user by some other attribute (for example a cell phone number, assuming a user may have more than one cell phone number)?
DynamoDB has a hard limit on the number of secondary indexes you can create for a table, i.e. 5.
Thus, as search criteria keep growing, this solution will easily become a bottleneck for your system. As a result, your system may not scale well.
To the best of my knowledge, I can suggest a few options that you may choose from, based on your requirements/budget, to address this problem using a combination of databases.
Option 1. DynamoDB as a primary store and AWS Elasticsearch as secondary storage [Preferred]
Store the user records in a DynamoDB table (let's call it UserTable) as and when a user registers.
Enable DynamoDB Streams on the UserTable table.
Build an AWS Lambda function that reads from the table's stream and persists the records in AWS Elasticsearch (a sketch of such a function follows this list).
Now in your application, use DynamoDB for fetching user records by id. For all other search criteria (such as searching by email id, phone number, zip code, location, etc.) fetch the records from AWS Elasticsearch. AWS Elasticsearch indexes all the attributes of your record by default, so you can search on any field with millisecond latency.
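A minimal sketch of that Lambda handler; the Elasticsearch endpoint, index name and authentication are assumptions (a production version would sign requests, e.g. with requests-aws4auth):

import json
import requests
from boto3.dynamodb.types import TypeDeserializer

ES_URL = "https://my-es-domain.example.com/users/_doc"   # assumed ES endpoint/index
deserializer = TypeDeserializer()

def handler(event, context):
    # Each stream record carries the DynamoDB-typed image of the changed item.
    for record in event["Records"]:
        if record["eventName"] in ("INSERT", "MODIFY"):
            image = record["dynamodb"]["NewImage"]
            doc = {k: deserializer.deserialize(v) for k, v in image.items()}
            requests.put(
                f"{ES_URL}/{doc['userId']}",
                data=json.dumps(doc, default=str),
                headers={"Content-Type": "application/json"},
            )
        elif record["eventName"] == "REMOVE":
            user_id = deserializer.deserialize(record["dynamodb"]["Keys"]["userId"])
            requests.delete(f"{ES_URL}/{user_id}")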
Option 2. Use AWS Aurora [Less preferred solution]
If your application has a relational use case where the data are related, you may consider this option. Just to call it out, Aurora is a SQL database.
Since this is relational storage, you can organize the records in multiple tables and join them based on the primary keys of those tables.
I suggest the 1st option because:
DynamoDB will provide durable, highly available, low-latency primary storage for your application.
AWS Elasticsearch will act as secondary storage, which is also durable, scalable and low latency.
With AWS Elasticsearch, you can run any search query over your data. You can also do analytics on the data. A Kibana UI is provided out of the box, which you can use to plot analytical data on a dashboard (how user growth is trending, how many users belong to a specific location, user distribution by city/state/country, etc.).
With DynamoDB Streams and AWS Lambda, you will be syncing these two databases in near real time [within a few milliseconds].
Your application will be scalable, and the search feature can be further enhanced to filter on multi-level attributes. [One such example: search for all users who belong to a given city.]
Having said that, now I will leave this up to you to decide. 😊