In Toad I can query tables, but I can't see them in Toad's Schema Browser. What can I do to fix it?
Does the schema you're connected to contain any tables or views at all? The fact that you can write queries might mean that you're selecting from synonyms (private or public) that point to objects you don't own.
Run the following query; it'll show you what you have at your disposal in your own schema.
select object_name, object_type from user_objects order by object_type;
Then modify USER_OBJECTS to ALL_OBJECTS so that you'll see what you have access to (it'll show the owner as well):
select owner, object_name, object_type from all_objects order by owner, object_type;
That should help you find out why you don't see anything in the Schema Browser.
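If those queries come back empty, you can also check which synonyms you can see; a quick sketch using the standard data dictionary views:
select owner, synonym_name, table_owner, table_name
from all_synonyms
where owner in (user, 'PUBLIC')
order by synonym_name;
That tells you which objects your queries are actually resolving to, and who owns them.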
I have multiple databases which have the same exact schema (different data on each one, database per customer of a SaaS platform).
I would like to create a Dashboard (with charts and datasets) that is populated according to the permissions of the logged-in user.
This means the dashboard will query the data from a specified source database, instead of a pre-defined one.
The premise is basically to decouple a chart/dataset from a database and allow it to be parametrised.
This is a case that is not really supported by Superset, but there's one workaround that I can think of that might work: you can define a DB_CONNECTION_MUTATOR in your superset_config.py that routes to a different database depending on the user.
In your superset_config.py, add this function:
def DB_CONNECTION_MUTATOR(uri, params, username, security_manager, source):
    # Look up the user who is running the query
    user = security_manager.find_user(username=username)
    # Route users from examplea.com to their own database host
    if uri.database == "db_name" and user and user.email.endswith("@examplea.com"):
        uri.host = "host-for-examplea.com"
    return uri, params
In the function above we're changing the host of the SQLAlchemy URL to host-for-examplea.com if the user has an email ending in @examplea.com.
To make it work, create a default database (which we called db_name in this example), and create all the charts/dashboards based on it. Then, users should be redirected to specific databases by the DB_CONNECTION_MUTATOR.
One serious problem that might happen is with caching, though. You should make sure that all caches are disabled to prevent users from seeing data from other databases.
I'm kind of new to Power BI and I'm trying to find out if it's the right tool for my case.
I would like to use Power BI Embedded in a web application for our customers (who are logged in to our application) who do not have any Power BI account/licence.
The databases on which the reports are based are on-premises, so we would use an Analysis Services live connection to access them.
Each customer should have his own report.
Is it possible to use RLS in that case?
Does that mean we have to create a role for each of them?
What username should be given in the EffectiveIdentity? Is it 'free text' that is used by PBI to get the username in the DAX?
If each customer will have his own report, then why do you need RLS at all? Just make each report show what its user is supposed to see. Or do you want to have a single report (or set of reports) that is shared between the users, where each user should see only their own data? I will assume it is the latter.
I will start with the last question - the effective identity is not "free text". It must be a valid user name with rights to access the data, as specified in the documentation:
The effective identity that is provided for the username property must be a Windows user with permissions on the Analysis Services server.
Then you can define RLS in your Analysis Services model by adding a "users security" table, where you specify which rows should be visible to each user. Define relationships between this users security table and the other tables in the model, and then let RLS filter the data in the security table. The relationships with the rest of the model will apply cascading filtering to the data, so only the relevant rows will be visible to the user. See Implement row-level security in an Analysis Services tabular model for an example.
So the answer to your second question is no, you don't need a separate role for each user, because the filtering is based on the username and works the same way for every user.
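As a rough sketch of that approach (the table and column names here are purely illustrative), the users security table on the source side could look like the one below; the role's DAX filter in the tabular model then only has to compare the UserName column with USERNAME().
create table UserSecurity (
    UserName   nvarchar(100) not null, -- the Windows account passed as EffectiveIdentity
    CustomerId int           not null  -- relates to the Customers table in the model
);
insert into UserSecurity (UserName, CustomerId)
values (N'DOMAIN\alice', 1),
       (N'DOMAIN\bob', 2);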
I'm using ColdFusion to connect to a Redshift database and I'm trying to understand how to test and assure myself of how connections work in relation to TEMP tables in Redshift.
In my CFADMIN, for the datasource, I have unchecked Maintain connections across client requests. I would then assume each user of my website has their own "connection" to the DB? Is that correct?
Per the Redshift docs about temp tables:
TEMP: Keyword that creates a temporary table that is visible only within the current session. The table is automatically dropped at the end of the session in which it is created. The temporary table can have the same name as a permanent table. The temporary table is created in a separate, session-specific schema. (You cannot specify a name for this schema.) This temporary schema becomes the first schema in the search path, so the temporary table will take precedence over the permanent table unless you qualify the table name with the schema name to access the permanent table.
Am I to understand that, if #1 is true and each user has their own connection to the database (and thereby their own session), then per #2 any temp tables that are created exist only in that session, even though the "user" is the same because it's a connection from my server using the same credentials?
3. If my assumptions in #1 and #2 are correct, then if I have ColdFusion code that runs queries like so:
drop table if exists tablea;
create temp table tablea (...);
insert into tablea
select * from realtable inner join ...;
drop table tablea;
If multiple users are using that same function, they should never run into conflicts where one table gets dropped while another request is trying to use it, correct?
How do I test that this is the case? Besides throwing it into production and waiting for an error, how can I know? I tried running a few windows side by side in different browsers and didn't notice an issue, but I don't know how to verify that the temp tables truly are different between clients (as they should be). I imagine I could query some metadata, but what metadata about the table would tell me that?
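As for the metadata: one way to check (a sketch, assuming Redshift's pg_table_def view and pg_backend_pid() function behave like their PostgreSQL counterparts) is to create the temp table from two separate browser sessions and compare the session-specific schema each one landed in:
-- run in each session after creating the temp table;
-- the schema name (e.g. pg_temp_3 vs pg_temp_7) differs per session
select schemaname, tablename
from pg_table_def
where tablename = 'tablea';
-- the backend process id also differs for each connection
select pg_backend_pid();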
I have a similar situation, but with redbrick database software. I handle it by creating unique table names. The general idea is:
Create a table name something like this:
<cfset tablename = TableText & randrange(1, 100000)>
Try to create a table with that name. If that fails, try again with a different name.
If you fail 3 times, stop trying and mail the cfcatch information to someone.
I have all this code in a custom tag.
Edit starts here
Based on the comments, here is some more information about my situation. In CFAdmin, for the datasource being discussed, the Maintain Connections box is checked.
I put this code on a ColdFusion page:
<cfquery datasource="dw">
create temporary table dan (f1 int)
</cfquery>
I ran the page and then refreshed it. The page executed successfully the first time. When refreshed, I got this error.
Error Executing Database Query.
** ERROR ** (7501) Name defined by CREATE TEMPORARY TABLE already exists.
That's why I use unique table names. I don't cache the queries though. Ironically, my most frequent motivation for using temporary tables is that there are situations where they make things run faster than the permanent tables.
I have an Apex application that is quite large. The need has come up to store detailed usage logs of this application. The information on APEX_WORKSPACE_ACTIVITY_LOG is not enough, because I need to know what queries each user runs on each page.
My first thought was to get the actual Oracle query logs (V$SQL and such), but they provide no information on the user (as far as the database is concerned, all queries are made by APEX_PUBLIC_USER). I have some information about the user on V$ACTIVE_SESSION_HISTORY, but that's incomplete because it stores samples of active sessions and their SQL queries at 1-second intervals, so I miss too many queries.
So now I'm off to implementing application-level logging. The "right" way to do this would be to go through all the pages in my application and create a logging process to store the relevant information for each one (username and some page items). But I wonder if there might be something simpler that does the trick.
If I understand correctly, "application processes" are run by every page in the application. So if I can get an application process to iterate over the list of page items, I can store them all in the database and be done with it. Something like
for item in page_items {
log(username, item_name, item, date)
}
Can this be done? Or maybe the information I need is on the database already and I don't see it?
You can query the metadata tables to get all items for a specific page and then use that to get their values.
select item_name, v(item_name) item_value
from apex_application_page_items
where application_id = :APP_ID
and page_id = :APP_PAGE_ID;
That will capture all items on the page. Don't forget that if you use items on Page 0 (Global Page) you may want to query that page too.
Additionally, you may want to capture application level items too.
select item_name, v(item_name) item_value
from apex_application_items
where application_id = :APP_ID;
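Putting it together, an application process that runs on every page could then log everything in a single statement. A sketch, assuming a hypothetical APP_USAGE_LOG table:
insert into app_usage_log (username, page_id, item_name, item_value, logged_at)
select :APP_USER, :APP_PAGE_ID, item_name, v(item_name), sysdate
from apex_application_page_items
where application_id = :APP_ID
and page_id = :APP_PAGE_ID;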
I need to know if a user with a specific role has access to a specific item by using only the Sitecore database tables, without using the Sitecore API. So my question is: in which table and in which column is this stored?
Security is stored against individual items in the __Security field. This is a shared field and as such will be in the SharedFields table. The security information is actually a pipe-delimited list. NOTE: Going directly to the schema is not recommended, as it may change at Sitecore's discretion.
The SQL below will get the security for all items in the database; update the where clause as required to get the security for the items you are interested in.
SELECT Id, ItemId, FieldId, Value, Created, Updated
FROM SharedFields
WHERE FieldId = '{DEC8D2D5-E3CF-48B6-A653-8E69E2716641}' /* Guid is the ID of the __Security field */
Result:
8AA88E96-2110-4BE1-A554-BAE9C60536FF 418B3B60-61E2-4E6C-B98F-061C88239087 DEC8D2D5-E3CF-48B6-A653-8E69E2716641 au|sitecore\agency|pd|-item:write|-item:admin|!*|+item:read|-item:delete|-item:create|-item:rename|pe|-item:write|-item:admin|!*|+item:read|-item:delete|-item:create|-item:rename| 2011-03-07 11:48:14.563 2011-03-07 11:48:14.563
06A6DB6C-6DEF-40E0-8CF8-8E179888DBB8 F1AF5582-B6A2-4435-8307-2837C1644EFB DEC8D2D5-E3CF-48B6-A653-8E69E2716641 au|sitecore\agency|pd|-item:write|-item:admin|!*|+item:read|-item:delete|-item:create|-item:rename|pe|-item:write|-item:admin|!*|+item:read|-item:delete|-item:create|-item:rename| 2011-03-07 11:48:14.270 2011-03-07 11:48:14.270
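If you also want to see which items those rows belong to, you can join to the Items table (again a sketch against the standard schema, which may change at Sitecore's discretion):
SELECT i.Name, i.ID, sf.Value AS SecurityValue
FROM SharedFields sf
INNER JOIN Items i ON i.ID = sf.ItemId
WHERE sf.FieldId = '{DEC8D2D5-E3CF-48B6-A653-8E69E2716641}' /* __Security field */
ORDER BY i.Name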
The SQL schema is not set up the way you may think. The rights are stored in a Sitecore item field, not in a specific column of a table. In SQL it will just be part of the field data stored for the content item. You could parse that, but I don't recommend going directly to SQL. Can you explain why you must do this using SQL?
Security is associated with each individual item and is stored in the __Security field.
This field is shared, so it lives in the SharedFields table.
Each value is separated by a pipe.
The information related to user roles is stored in the users and roles tables, with the role ID and role name.
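Role membership itself is normally handled by the ASP.NET membership provider, so in a default setup the user-to-role mapping can be read from the aspnet_* tables in the Core database; a sketch, assuming the standard provider tables:
SELECT u.UserName, r.RoleName
FROM aspnet_UsersInRoles ur
INNER JOIN aspnet_Users u ON u.UserId = ur.UserId
INNER JOIN aspnet_Roles r ON r.RoleId = ur.RoleId
ORDER BY u.UserName, r.RoleName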