Resource Class - Azure SQL DW

Just one basic question:
In Azure SQL Data Warehouse, is there a way to find the default resource class and the memory and concurrency slots allocated to a given SQL login account?
Can this be retrieved with T-SQL code?

In Azure SQL Data Warehouse, resource classes are implemented through database roles.
To find the database roles of a user, you can use this query:
SELECT DP1.name AS DatabaseRoleName,
       ISNULL(DP2.name, 'No members') AS DatabaseUserName
FROM sys.database_role_members AS DRM
RIGHT OUTER JOIN sys.database_principals AS DP1
    ON DRM.role_principal_id = DP1.principal_id
LEFT OUTER JOIN sys.database_principals AS DP2
    ON DRM.member_principal_id = DP2.principal_id
WHERE DP1.type = 'R'
ORDER BY DP1.name;
As far as I know, there is no DMV or predefined stored procedure that shows the maximum memory size and concurrency slots per resource class.
Having said that, on Gen1 you can use prc_workload_management_by_DWU to find the information you're looking for. For Gen2 you could write your own mapping stored procedure based on the documentation.
If you want to see real-time resource consumption, take a look at sys.dm_pdw_exec_requests and join it with sys.dm_pdw_exec_sessions on session_id to see which user is running each query.
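As a sketch, that join might look like the following; the column choice is illustrative, so check the DMV documentation for the columns available on your generation:

```sql
-- Who is running what, and under which resource class
-- (sys.dm_pdw_exec_requests exposes resource_class directly).
SELECT s.login_name,
       r.request_id,
       r.status,
       r.resource_class,
       r.total_elapsed_time,
       r.command
FROM sys.dm_pdw_exec_requests AS r
JOIN sys.dm_pdw_exec_sessions AS s
    ON r.session_id = s.session_id
WHERE r.status NOT IN ('Completed', 'Failed', 'Cancelled');
```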

The article Pio refers to (Workload management with resource classes) mentions that smallrc is the default resource class. This applies to all logins. On Gen1, smallrc always gets 1 concurrency slot. On Gen2, smallrc is a dynamic resource class that adds concurrency slots as the instance is scaled. See Memory and concurrency limits for further details on how concurrency slots are allocated to smallrc and the rest of the resource classes.

Related

AWS RDS - replicate data in one table to another

No. I am not talking about read replicas.
The scenario I am thinking of is this. Let's say you have an RDS table called user_profile. You want to record a history of the changes to each user profile in another table, say user_profile_history. Is it possible in RDS to copy changes in real time from the main user_profile table to its history table whenever the main table is updated?
The end result would be that user_profile contains only the latest user data, while all past snapshots of each profile live in the history table.
Both tables are in the same RDS database.
I have done my due diligence and a bit of research, but all I could find were read replicas and cross-region replication; neither covers this scenario. Yes, you could say we can just implement the logic in the app itself, but what if we want to "pass the burden" to the RDS DB?
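For what it's worth, if the RDS engine is MySQL or MariaDB, an AFTER UPDATE trigger inside the database itself is one way to sketch this; the table columns below are hypothetical:

```sql
-- Hypothetical schema: user_profile(user_id, name, email).
-- The trigger archives the pre-update row into the history table.
CREATE TRIGGER trg_user_profile_history
AFTER UPDATE ON user_profile
FOR EACH ROW
INSERT INTO user_profile_history (user_id, name, email, archived_at)
VALUES (OLD.user_id, OLD.name, OLD.email, NOW());
```

This keeps the burden on the database: the application only ever updates user_profile, and the history table fills itself.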

BigQuery service account restricted to a dataset

Is it possible to create a BigQuery service account limited to only one dataset? When I go through the service account generation process, it appears to give access to an entire project and does not show options to limit it to a specific dataset.
The short answer is yes, but you do not assign the privileges at the project level; you need to modify the dataset itself.
Check the documentation here:
https://cloud.google.com/bigquery/docs/dataset-access-controls
It outlines the process with a few different methods.
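As one sketch of the dataset-level approach, BigQuery's GRANT DDL can assign a role on a single dataset to a service account; the project, dataset and account names below are placeholders:

```sql
-- Grant read access on one dataset only; the service account gets
-- no project-wide access from this statement.
GRANT `roles/bigquery.dataViewer`
ON SCHEMA `my-project.my_dataset`
TO "serviceAccount:sa-reader@my-project.iam.gserviceaccount.com";
```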

GCP - Is there a way to get bill line items at Instance level

GCP provides a mechanism to export billing data to BigQuery. This is really helpful, but it lacks cost line items at the instance level (or at least I could not figure out a way to get them). We can get cost aggregates at the SKU, project and service level, but more granularity is needed. This is very much possible with Azure and AWS.
Following are the columns I see in the exported BigQuery billing table:
billing_account_id, invoice.month, cost_type, service.id, service.description, sku.id, sku.description, usage_start_time, usage_end_time, project.id, project.name, project.ancestry_numbers, project.labels.key, project.labels.value, labels.key, labels.value, system_labels.key, system_labels.value, location.location, location.country, location.region, location.zone, cost, currency, currency_conversion_rate, usage.amount, usage.unit, usage.amount_in_pricing_units, usage.pricing_unit, credits.name, credits.amount, export_time
Is there a workaround to fetch cost aggregates at the instance level?
Example: if I have subscribed to two Compute Engine instances of a specific SKU, is there a mechanism to get cost aggregates for each instance separately?
At the moment it's not possible to filter your reports at the instance level; SKU is the most granular filter.
One approach you can use to identify your instances and get a better understanding of your data is labels. As described here:
A label is a key-value pair that helps you organize your Google Cloud instances. You can attach a label to each resource, then filter the resources based on their labels. Information about labels is forwarded to the billing system, so you can break down your billing charges by label.
In this document, which explains the billing data table's schema, you can see that the labels attached to your resources will be present in your exported data.
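Assuming each instance carries a label such as instance-name, a query over the export table along these lines would break cost down per instance (the table name and label key are placeholders):

```sql
-- Unnest the repeated labels field and aggregate cost per label value.
SELECT l.value AS instance_name,
       SUM(cost) AS total_cost
FROM `my-project.billing.gcp_billing_export_v1_XXXXXX`,
     UNNEST(labels) AS l
WHERE l.key = 'instance-name'
GROUP BY instance_name
ORDER BY total_cost DESC;
```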

WSO2 CEP - Single Event Table for Multiple Execution Plans

I have been exploring WSO2 CEP for the last couple of days.
I am considering a scenario where a single lookup table could be used in multiple execution plans. As far as I know, the only way to store all the data is an event table.
My questions are:
Can I load an event table once (perhaps from one execution plan) and share it with other execution plans?
If the answer to Q1 is no, then multiple copies of the same data will be stored in different execution plans, right? Is there any way to reduce this space usage?
If an event table is not the correct solution, what are the other options?
Thanks in Advance,
-Obaid
Event tables would work in your scenario. However, you might need to use an RDBMS event table or a Hazelcast event table instead of an in-memory event table. With these, you can share a single table's data across multiple execution plans.
If you want your data to be preserved even after server shutdown, you should use RDBMS event tables (these also let you access the table data through the respective DB browsers, e.g., the H2 console, MySQL Workbench, etc.). If you just want to share a single event table across multiple execution plans at runtime, a Hazelcast event table is enough.

Cross-service references in DB

I am building a service-oriented system with multiple services and applications.
Currently I am not sure how to handle DB references between resources from multiple services and databases.
For example, I have a users service, where I can define all users and their roles.
Next, I have a products service, where I can define my products, their prices and other information.
I also have an invoicing service, which is used to create invoices. This service will use information from the previous two services: it will link products and users to an invoice. Now I am not sure what the best approach for this is.
Do I just save the product ID and user ID obtained from the other two services, without any referential integrity?
If I do this, I will have a problem when generating reports, because at generation time I will need to send a lot of requests to the products service to get the names and prices of the products on an invoice. The same goes for users.
Or do I create a products table in my invoicing application and store the name and price of each product at the moment of invoice creation?
If I go with this approach, then when the price or name of a product changes, I will have inconsistent data across my applications.
Is there some well-known pattern for this kind of problem; that is, what is the best solution?
Cross-service references in the DB are a common data-integrity challenge between multiple web services, especially when real-time access is required.
There are two approaches for your case:
1- Database replication across your servers
I assume each application is hosted on a separate server, so I'll call your servers Users_server, Products_server and Invoices_server.
In your example, the invoicing web service needs to grab data from the Users and Products servers. In this case you can create replicas of your Users database and Products database on Invoices_server.
This way you can run your join queries on one server and combine data from multiple databases.
Query example :
SELECT *
FROM UsersDB.User u
JOIN InvoicesDB.Invoice i ON u.Id = i.ClientId
2- Main database replication
First, replicate all your databases onto one main server, which we can call Base_server; it contains all the databases from all your services.
Then you can build an internal web service for your application that provides the needed data in just one call. This answers your question about generating reports.
In other words, you make one call to the main Base service instead of making two or three calls to your separate services.
Note: as backend developers, we use this organization as a best practice when building a large bundle-based application: we create a base bundle and then create service bundles which rely on it.
If your services are already live, we may need more details about the technologies and database types you are using in order to give you a more accurate solution.
Just because you are using SOA doesn't mean you abandon database integrity. Continue to use referential integrity where your database design requires it.
At the service level, you can have each service be responsible for returning identity information for the entities which it owns. This identity information may or may not be the actual primary key from the database, but it will be used by the clients of the service as though it were the actual primary key.
When a client wants to create an invoice, it will call the User service and receive a User entity, which will contain a User Identifier. It will call the Product service and receive a set of products, each with a product identifier. It will then call the Invoice service to create an invoice, passing the user identifier and the product identifiers. This will likely return an invoice identifier.
You can (and probably should) enforce integrity by making the productId and userId foreign keys in your invoice table; then the DB ensures the referenced entities exist. Reports should join tables rather than query services for each item. This assumes a central DB shared across the system.
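A minimal sketch of those constraints, with hypothetical table and column names:

```sql
-- Invoices reference users; invoice lines reference products.
-- The database rejects any invoice pointing at a missing row.
ALTER TABLE invoice
    ADD CONSTRAINT fk_invoice_user
    FOREIGN KEY (user_id) REFERENCES app_user (id);

ALTER TABLE invoice_line
    ADD CONSTRAINT fk_invoice_line_product
    FOREIGN KEY (product_id) REFERENCES product (id);
```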