We have exported our GCP billing information to BigQuery. When querying over it I've noticed that there seems to be a one-to-many relationship between service and SKU; in other words, every instance of a given SKU seems to belong to the same service.
If this is indeed the case, I can build some nice drill-down reports whereby the report user drills from service to SKU. Before I build such a report, though, I want to confirm that this is actually so.
Can multiple instances of a SKU map to different services?
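To double-check, I can count the distinct services seen per SKU directly in BigQuery (a rough sketch; the table name is a placeholder for the actual billing export table, and the field names assume the standard export schema's service and sku records):

SELECT
  sku.id AS sku_id,
  COUNT(DISTINCT service.id) AS distinct_services
FROM `my-project.my_dataset.gcp_billing_export_v1_XXXXXX`  -- placeholder for your export table
GROUP BY sku_id
HAVING COUNT(DISTINCT service.id) > 1

If this returns no rows, every SKU maps to exactly one service.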
Ah, it seems the answer is "no, there isn't a one-to-many relationship between service and SKU".
That's a shame.
I'm kind of new to Power BI and I'm trying to work out whether it's the right tool for my case.
I would like to use Power BI Embedded in a web application for our customers (where they're logged in), who do not have any Power BI account/licence.
The databases on which the reports are based are on-premises, so we would use an Analysis Services live connection to access them.
Each customer should have his own report.
Is it possible to use RLS in that case?
Does that mean we've to create a role for each of them?
What username should be given in the EffectiveIdentity? Is it 'free text' that Power BI uses as the username in DAX?
If each customer will have his own report, then why do you need RLS at all? Just make each report show what its user is supposed to see. Or do you want a single report (or set of reports) that is shared between the users, where each user should see only their own data? I will assume it is the latter.
I will start with the last question: the effective identity is not "free text". It must be a valid user name with rights to access the data, as specified in the documentation:
The effective identity that is provided for the username property must be a Windows user with permissions on the Analysis Services server.
Then you can define RLS in your Analysis Services model by adding a "users security" table, where you specify which rows should be visible to each user. Define relationships between this users security table and the other tables in the model, and then let RLS filter the data in the security table. The relationships with the rest of the model will apply cascading filtering, so only the relevant rows will be visible to the user. See Implement row-level security in an Analysis Services tabular model for an example.
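As a rough sketch (the table and column names here are only illustrative), the users security table in the source database could look something like this:

CREATE TABLE UserSecurity (
    UserName   NVARCHAR(255) NOT NULL,  -- login that the effective identity is matched against
    CustomerId INT           NOT NULL   -- key of the rows this user is allowed to see
);

A single role in the tabular model would then have a DAX row filter on this table comparing UserName to USERNAME(), and the relationship to, say, the Customer table propagates that filter to the rest of the model.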
So the answer to your second question is no, you don't need a separate role for each user, because the filtering is based on the username and the same role filters the same way for every user.
I have searched a lot about this question, but there are no concrete answers.
I have an AWS Redshift DB with around 6-7 schemas and 10-12 tables in each.
Dashboards are built both within a single schema and across schemas.
Here's the use case:
I have some users who need to see only dashboards related to "schema 1" but not "schema 2".
I have other users who look at dashboards connected to both "schema 1" and "schema 2", but I'm not able to find any way to set this up.
I have seen a thread saying that it's possible to give access at the schema level, but it doesn't mention how:
https://github.com/apache/incubator-superset/issues/5483#issuecomment-494227986
As per the Superset documentation, you cannot grant access at the schema level, but you can grant access at the data source level. Alternatively, you can create custom data sources and define the roles you need against them.
Refer: https://superset.incubator.apache.org/security.html#managing-gamma-per-data-source-access
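One workaround in that spirit (only a sketch, with made-up names) is to define a virtual dataset per schema in SQL Lab and grant each role access to just those datasets, e.g. a dataset saved as schema1_sales for a role that should only see schema 1:

-- saved as the virtual dataset "schema1_sales"; only roles granted this datasource can use it
SELECT *
FROM schema1.sales;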
I have created a dataflow in the Power BI service. My client's requirement is that data should be taken from the dataflow according to user roles. There is a user table where the roles are already defined. My question is: without relationships between the tables, how am I supposed to filter the data from all the tables? Is it possible at all? Or how can I create relationships between tables in a dataflow? Or is there any alternative way to take data from the dataflow according to roles? Please help. Thanks in advance.
If your data supports it, for example with some sort of mapping between the user and the data they are allowed to see, you will need to use row-level security to restrict what the end users see in the report. You make the relationship between your dataflow tables and the mapping table in Power BI, not in the dataflow itself.
If you mean restricting access to the data in the dataflow itself based on role, for example so that when a user creates a report it only loads what they are allowed to see, then that functionality is not supported.
Hope that helps
I am building a service-oriented system, with multiple services and applications.
Currently I am not sure how to handle DB references between resources owned by different services and databases.
For example, I have a users service, where I can define all users and their roles.
Next, I have a products service, where I can define my products, their prices, and other information.
I also have an invoicing service, which is used to create invoices. This service will use information from the previous two services: it will link products and users to an invoice. Now I am not sure what the best approach for this is.
Do I just save the product ID and user ID that it got from the other two services, without any referential integrity?
If I do this, then I will have a problem when generating reports, because at generation time I will need to send a lot of requests to the products service to get the names and prices of the products on an invoice. The same goes for users.
Do I create some products table in my invoicing application, and store the name and price of the product at the moment of invoice creation?
If I go with this approach, then whenever the price or name of a product changes, won't I have inconsistent data across my applications?
Is there some well-known pattern for this kind of problem? In other words, what is the best solution?
Cross-service references in the database are a common data-integrity challenge between multiple web services, especially when we are talking about real-time access.
There are two approaches for your case:
1- Database replication across your servers
I suppose that each application is hosted on a separate server, so I will name your servers Users_Server, Products_Server and Invoices_Server.
In your example, your invoice web service needs to grab data from the users and products servers; in this case you can create a replica of your users database and products database on your Invoices_Server.
This way you can run your join queries on the same server and get data from multiple databases.
Query example:
-- the users database is replicated onto the invoices server, so the join runs locally
SELECT *
FROM UsersDB.User u
JOIN InvoicesDB.Invoice i ON u.Id = i.ClientId
2- Main database replication
As a first step, you replicate all your databases onto one main server, which we can call Base_Server and which basically contains all the databases from all your services.
Then you can build an internal web service for your application that provides the needed data in just one call; this answers your question about generating reports.
In other words, you will make one call to the main base service instead of making two or three calls to your separate services.
Note: as backend developers we use this organization as a best practice when building a large bundle-based application: we create a base bundle and then create service bundles which rely on the base bundle.
If your services are already live, we may need more details about the technology and the types of database you are using in order to give a more accurate solution.
Just because you are using SOA doesn't mean you abandon database integrity. Continue to use referential integrity where your database design requires it.
At the service level, you can have each service be responsible for returning identity information for the entities which it owns. This identity information may or may not be the actual primary key from the database, but it will be used by the clients of the service as though it were the actual primary key.
When a client wants to create an invoice, it will call the User service and receive a User entity, which will contain a User Identifier. It will call the Product service and receive a set of products, each with a product identifier. It will then call the Invoice service to create an invoice, passing the user identifier and the product identifiers. This will likely return an invoice identifier.
You can (and probably should) enforce integrity by making productId and userId foreign keys in your invoice table. Then your DB makes sure the referenced entities exist. Reports should join tables rather than query the services for each item. I am assuming a central DB shared across the system.
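A minimal sketch of that idea (names are illustrative, and it assumes the Users and Product tables live in the same central database):

-- invoices reference users and products by key, so the DB guarantees they exist
CREATE TABLE Invoice (
    Id       INT PRIMARY KEY,
    ClientId INT NOT NULL REFERENCES Users(Id)
);

CREATE TABLE InvoiceProduct (
    InvoiceId INT NOT NULL REFERENCES Invoice(Id),
    ProductId INT NOT NULL REFERENCES Product(Id)
);

-- reports join the tables instead of calling the other services per item
SELECT i.Id, u.Name AS ClientName, p.Name AS ProductName
FROM Invoice i
JOIN Users u ON u.Id = i.ClientId
JOIN InvoiceProduct ip ON ip.InvoiceId = i.Id
JOIN Product p ON p.Id = ip.ProductId;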
I'm trying to add a new product to my seller account using the SubmitFeed function (feed type _POST_PRODUCT_DATA_). Every time I try it, a different error comes up. I want to confirm that I understand the underlying concepts correctly.
ISBN/UPC/EAN are standard global identifiers used to identify a commodity uniquely.
ASINs are standard Amazon identifiers used to identify a commodity on Amazon uniquely.
SKU is my personal unique identifier.
So, if I want to sell a product that already exists on Amazon, I can specify its ASIN/UPC/EAN/ISBN. What is the benefit of providing DescriptionData, as it won't affect the description already shown on the product listing page on Amazon?
I can add a new product (one that doesn't exist on Amazon) by specifying my local SKU and omitting any ASIN/UPC/EAN/ISBN. Are there any specific rules for mandatory fields/data to be specified when adding a product in specific categories/product types?
Your understanding of the underlying concepts seems accurate.
There is no direct benefit to providing tons of details for an already existing product, unless Amazon for some reason switches to "your" data from whoever else's. There are rumors of that happening, but I haven't had such a case myself.
For the most part, you cannot add a new product without specifying an ASIN/UPC/EAN/ISBN. There are exceptions to this, but they are few and far between. It is in Amazon's interest that identical products end up on the same page. If the description or other information on that page is wrong, contact Amazon. Apart from what the XSDs define as mandatory, there are category-specific mandatory fields. The easiest way to find them is by creating a product manually in Seller Central.