Is it possible to trigger on an SQL row update using Logic Apps? - sql-update

How to trigger an email upon an SQL table row update?
I want to trigger an email as soon as a user updates the table.
The "When an item is modified (V2)" trigger does not fire when I update a table row.

Yes, it is possible. Per the Microsoft documentation, please make sure that
you have a ROWVERSION or an IDENTITY column.
E.g., make sure that you have a column with the data type rowversion (formerly called timestamp).
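If your table lacks such a column, here is a minimal sketch of adding one, assuming pyodbc, an Azure SQL connection string, and a hypothetical dbo.Books table (all names are placeholders):

# Sketch: add a rowversion column so "When an item is modified (V2)" can
# detect row updates. Connection string and table name are hypothetical.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=myserver.database.windows.net;Database=mydb;"
    "Uid=myuser;Pwd=mypassword;Encrypt=yes;"
)
cursor = conn.cursor()
# SQL Server updates a rowversion column automatically on every row
# modification, which is what the Logic Apps trigger polls against.
cursor.execute("ALTER TABLE dbo.Books ADD RowVer rowversion;")
conn.commit()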
For more information, please refer to this Microsoft blog: Poweruser

Related

How to fetch the latest schema change in BigQuery and restore deleted column within 7 days

Right now I fetch the columns and data types of BQ tables via the command below:
SELECT COLUMN_NAME, DATA_TYPE
FROM `Dataset`.INFORMATION_SCHEMA.COLUMN_FIELD_PATHS
WHERE table_name="User"
But if I drop a column using the command ALTER TABLE User DROP COLUMN blabla,
the column blabla is not actually deleted within 7 days (the TTL), based on the official documentation.
After running the command, the column is still there in the schema as well as in Dataset.INFORMATION_SCHEMA.COLUMN_FIELD_PATHS.
It is just that I cannot insert data into the column or view it in the GCP console. This inconsistency really causes an issue
if I want to write a bash script that monitors schema changes and performs operations based on them.
I need more visibility into BigQuery table schemas. The least I need is for
Dataset.INFORMATION_SCHEMA.COLUMN_FIELD_PATHS to store a flag column that indicates deleted or TTL: 7 days.
My questions are:
How can I fetch the correct schema in Spanner that reflects the recently deleted column?
If the column is not actually deleted, is there any way to easily restore it?
If you want to find the recently deleted column, you can try searching through Cloud Logging. I'm not sure what tools Spanner supports, but if you want to use Bash you can use gcloud to fetch the logs, though it will be difficult to parse the output and extract the information you want.
The command below fetches the logs for google.cloud.bigquery.v2.JobService.InsertJob, since an ALTER TABLE is considered an InsertJob, and filters them on the actual query text where it says drop. The regex I used is not strict (for the sake of the example); I suggest making it stricter.
gcloud logging read 'protoPayload.methodName="google.cloud.bigquery.v2.JobService.InsertJob" AND protoPayload.metadata.jobChange.job.jobConfig.queryConfig.query=~"Alter table .*drop.*"'
Sample output from the command above (the column PADDING was dropped, based on the logged query).
If you have options other than Bash, I suggest creating a BigQuery sink for your logs; you can then run queries there to extract this information. You can also use client libraries (Python, NodeJS, etc.) to either query the sink or query Cloud Logging directly.
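For example, a minimal Python sketch of the same log query, assuming the google-cloud-logging client library and default application credentials (the loose regex from the gcloud command above is reused as-is):

from google.cloud import logging

client = logging.Client()
# Same filter as the gcloud command above: ALTER TABLE ... DROP statements
# recorded as InsertJob audit-log entries.
log_filter = (
    'protoPayload.methodName="google.cloud.bigquery.v2.JobService.InsertJob" '
    'AND protoPayload.metadata.jobChange.job.jobConfig.queryConfig.query=~'
    '"Alter table .*drop.*"'
)
for entry in client.list_entries(filter_=log_filter):
    # Navigate the audit-log payload down to the offending DDL statement.
    payload = entry.to_api_repr().get("protoPayload", {})
    query = (
        payload.get("metadata", {})
        .get("jobChange", {})
        .get("job", {})
        .get("jobConfig", {})
        .get("queryConfig", {})
        .get("query", "")
    )
    print(entry.timestamp, query)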
As per this SO answer, you can use the time travel feature of BigQuery to query the deleted column. The answer also explains BigQuery's behavior of retaining a deleted column for 7 days, and a workaround to delete the column instantly. See the linked answer for the actual query used to retrieve the deleted column and for the workaround on deleting a column.
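A sketch of that time-travel query with the google-cloud-bigquery client (the dataset and table names are hypothetical; adjust the interval to any point within the 7-day window):

from google.cloud import bigquery

client = bigquery.Client()
sql = """
    SELECT *
    FROM `mydataset.User`
    FOR SYSTEM_TIME AS OF TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 HOUR)
"""
# The snapshot from one hour ago still contains the dropped column, so its
# data can be read here (or written out to a new table) to restore it.
for row in client.query(sql).result():
    print(dict(row))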

How to use row level security in Superset UI

I am using the newest version of Superset, and it has the row-level security option in the UI. Can anyone help me with a little walkthrough of how I can implement and use it in the UI? There is hardly any documentation.
Row-level security essentially works like a WHERE clause. Let's assume that we build a dashboard using a table called tbl_org that looks like:
manager_name    department    agent
Jim             Sales         Agent 1
Jim             Sales         Agent 2
Jack            HR            Agent 3
Jack            HR            Agent 4
Say we need to show Jim only the rows/records where he is the manager when he logs in to the dashboard, and the same for Jack. This is when RLS is useful.
The Superset UI provides three fields that need to be filled in.
Table: the table on which we want to apply RLS. In this case it would be tbl_org.
Roles: the role or roles to which you want this rule to apply. Let's say we use the Gamma role.
Clause: the SQL condition. The condition provided here gets applied to the WHERE clause of the query that is executed to fetch data for the dashboard. So, for example, the condition manager_name = 'Jim' results in the query: SELECT * FROM tbl_org WHERE manager_name = 'Jim'
If you want to dynamically filter the table based on the user who logs in, you can use a Jinja template:
manager_name = '{{current_username()}}'
For this to work, the usernames created in Superset need to match the manager_name column in tbl_org.
For manager_name = '{{current_username()}}' to make sense,
you have to add "ENABLE_TEMPLATE_PROCESSING": True in config.py.
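For example, a sketch of the relevant lines in superset_config.py (in recent Superset versions the flag lives in the FEATURE_FLAGS dict; check the docs for your version):

# superset_config.py (or config.py, depending on your deployment)
FEATURE_FLAGS = {
    # Required for Jinja expressions such as {{ current_username() }}
    # to be evaluated in RLS clauses.
    "ENABLE_TEMPLATE_PROCESSING": True,
}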
Row Level Security (RLS) allows an admin to force a WHERE predicate into the SQL statement that is sent to the database on the user's behalf.
This can be used to limit the query results to rows that explicitly meet (or do not meet) specific criteria, and as such cause the list of rows returned to the user to be filtered. The criteria can be applied based on the target table(s) and user role(s).

How to provide permission to the user to access only one column in the created Microsoft Lists?

I am new to Microsoft Lists and am trying to implement a library management system. I have prepared a list to show the book details using the 'From Excel' option. I need to restrict permissions based on the user role (admin, client).
For example, if a user needs to request a book, there might be a column the user can edit to send a request for the desired book, so that an admin will get notified of the request and take action.
Similarly, for the list I created, I need to give the user permission to edit only one column. The rest of the columns should be view-only.
Note: From my searching, I found we can set permissions such as view, or view and edit, and stop sharing the list, based on the roles Members, Owners, and Visitors.
Could anyone please guide me on this?
Regards,
Vadivel
#Karthi,
It's not possible to configure column permissions; the lowest level available is item-level permission. There is no column-level or view-level permission.
Here are two possible solutions:
Make the target column read-only, then develop another interface for the administrator to manage the data. For example, through the SharePoint REST API, we can turn the column back to editable, post updates, and then immediately turn it back to read-only (see the sketch after the links below).
Check Set List Column Read Only in SharePoint using PowerShell
How to update read only field
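As a rough illustration of the first solution, here is a sketch of flipping a column's ReadOnlyField flag via the SharePoint REST API, assuming the requests library, a valid bearer token, and hypothetical site, list, and column names:

import requests

site = "https://contoso.sharepoint.com/sites/library"
url = f"{site}/_api/web/lists/getbytitle('Books')/fields/getbytitle('Status')"
headers = {
    "Authorization": "Bearer <access_token>",
    "Accept": "application/json;odata=verbose",
    "Content-Type": "application/json;odata=verbose",
    "X-HTTP-Method": "MERGE",  # update the existing field in place
    "IF-MATCH": "*",
}

def set_read_only(read_only: bool) -> None:
    # Toggle the column between read-only and editable; the admin interface
    # would call set_read_only(False), post its updates, then set_read_only(True).
    body = {"__metadata": {"type": "SP.Field"}, "ReadOnlyField": read_only}
    requests.post(url, headers=headers, json=body).raise_for_status()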
Hide the target column and create a calculated column whose value equals the target column. The user will only see the calculated column, and any updates to the target column will be reflected in it.
Check Make SharePoint Columns read-only without coding

BigQuery Cannot Modify Partitioned Table Schema

Per the BigQuery documentation, I am attempting to modify a table's schema by adding a field. The table in question is a partition slice (partitioned by day), and I am planning on performing the action on every slice.
Per the documentation (https://cloud.google.com/bigquery/docs/managing-partitioned-tables), I should be able to add a field to a partitioned table like any other table. However, whenever I attempt to add a field to a partitioned table, I am met with this error:
Could not edit table schema.: Cannot change partitioned/clustered table to non partitioned/clustered table.
I am not able to find any good information on what this error means or what I'm doing wrong. I have successfully added a field to a non-partitioned table. Does the community have any good ideas to help me troubleshoot?
I understand that you are using the update_table method to update the schema in Python; correct me if I'm wrong. You have to do it with the patch API; you can try that API to get a better view of how to do it.
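For reference, a sketch of how adding a field usually looks with the Python client's update_table, which issues a PATCH under the hood so only the schema is modified (the table id is hypothetical):

from google.cloud import bigquery

client = bigquery.Client()
table = client.get_table("myproject.mydataset.mytable")

# Append the new field to the *existing* schema rather than sending a whole
# new table definition; patching only the schema leaves the partitioning
# spec untouched.
schema = list(table.schema)
schema.append(bigquery.SchemaField("new_field", "STRING", mode="NULLABLE"))
table.schema = schema

client.update_table(table, ["schema"])  # PATCH: only the schema field is sent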

Unable to update a Power BI table schema through the API with or without ApiaryIO

I am using Power BI API.
I've got a dataset with some tables and rows.
From the Power BI API console, I don't have any issues retrieving datasets or tables.
However, the PUT verb on a table resource to update its schema always returns a 504 - Proxy request timed out.
It's the first time I've used Apiary IO, so it might be its problem rather than the Power BI update, but that leads me to some questions:
Is there any workaround to test Power BI with, for example, Fiddler? I can type the URL and body, but I will need an Authorization header with the OAuth2 token, if I'm not mistaken. How can I get that? ApiaryIO seems to hide it.
As per the Update Schema documentation, the URL for the resource is https://api.powerbi.com/v1.0/myorg/datasets/{myDatasetId}/tables/{myTableName}
and the verb is PUT. What, then, is the meaning of the "name": "???" parameter that goes in the JSON body? Is it the table's name or something else? I am assuming it's the table name, but that seems redundant, as I am already addressing the resource {myTableName} in the URL.
And my last related question: how do I rename a specific table's column without modifying its data? This is what I'm trying to achieve by updating the schema, but I don't understand how Power BI knows which column I am trying to rename.
Thank you!
Sorry that you're having trouble. You can get a token in two ways. The right way is to create an app in AAD (here's how). The wrong way ;) is to open the PowerBI.com service in a browser, open Fiddler, then press F5 to reload; you should be able to see the access token in various requests. If you register an app, you can plug your app's information into one of the samples at https://powerbi.microsoft.com/developers (see the client app or web app sample).
The name you provide for the table is the friendly, human-readable name that appears in the UI when you're building a report. Without it, the system would be unusable by humans :).
Let me get back to you on #3.
Calling PUT table will attempt to upgrade the table without losing any data (unless you removed columns). If it can't, it will return a conflict error. If you still want to update the table schema, you would have to delete the rows and call PUT table again. There is currently no direct way to rename a column; PUT table would treat it as a delete-and-add for that column. You would lose the data in that column, but not the whole table.
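For completeness, a sketch of the PUT table call using Python's requests (dataset id, table name, and columns are hypothetical; the bearer token is obtained from AAD as described above):

import requests

dataset_id = "<myDatasetId>"
table_name = "Sales"
url = (
    "https://api.powerbi.com/v1.0/myorg/datasets/"
    f"{dataset_id}/tables/{table_name}"
)
body = {
    "name": "Sales",  # the friendly name shown in the report-building UI
    "columns": [
        {"name": "OrderId", "dataType": "Int64"},
        {"name": "Amount", "dataType": "Double"},
    ],
}
resp = requests.put(
    url,
    headers={"Authorization": "Bearer <access_token>"},
    json=body,
)
resp.raise_for_status()  # a conflict here means the upgrade would lose data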