Power Automate: how to catch which column was updated in the Dataverse connector row

I'm starting from a "When a row is added, modified or deleted" trigger, then passing into a Switch control that checks whether the row was added, modified or deleted.
I'm then using a send-an-email action to notify myself when a row is added, modified or deleted. In the case where a row is modified, I have to include in the mail which fields of that row were modified.
I can't find whether this check is possible (inspect the row and compare it with the pre-modified version) or how to do it.
This is the embryonic flow:

As requested, I'll try to be more detailed.
Please note that this is a POWER AUTOMATE FLOW, so there is almost no code.
The CRUD trigger takes 3 arguments:
- Change type (when an item is Added, Modified or Deleted)
- The table name (the Dataverse table name)
- The scope (Business Unit)
So I need to know if there is something (for example, in the output of this trigger) such as a variable or another connector that tells me which column changed and caused the trigger to fire.
It's a question about the output of, or possible connectors related to, the Dataverse CRUD trigger, so there is NO CODE involved and no further downstream flow specification is needed to understand my request.

A solution is to create a new field that keeps a copy of the original field's value, and use trigger conditions to make your flow run only when those two fields don't match, meaning that the original field has been updated and its value has actually changed. A sketch of such a condition follows.
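A minimal sketch of such a trigger condition (added under the trigger's Settings > Trigger Conditions), assuming a hypothetical tracked column new_status and a hypothetical companion column new_status_previous that the flow writes back after each run:

@not(equals(triggerOutputs()?['body/new_status'], triggerOutputs()?['body/new_status_previous']))

With this condition in place the flow only fires when the two values differ; a final step in the flow should then copy new_status into new_status_previous so the condition stays false until the next real change.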

Related

How does a Web Source operation's URL Pattern get its argument when it's assigned a parameter?

When you AutoREST-enable a table through Object Browser > click table > REST tab, you get a RESTful URI. If you then build a Web Source using that RESTful URI, you get five operations: a GET with a dot as its URL pattern, and GET, PUT, POST and DELETE operations whose URL pattern has the value :deptno. When you build a report with form on that Web Source, you will find that all database operations work well: you can insert, update and delete through the form, and you can run the report to get all rows in the table. I need to know how the process works in the background. How does the automatic row processing know which operation and handler to use? I know that an Interactive Report, for example, looks for the operation that has "Fetch Rows" as its database operation. So I assumed that the form's automatic row processing looks up the Web Source operation whose database operation relates to the process being executed. For example (correct me if I'm wrong), clicking the CREATE button denotes that an insert will happen, so it searches for the Web Source operation with the database operation "Insert Row" and then finds the handler whose HTTP method attribute has the value "POST". The same goes for UPDATE and DELETE. I want to know if I am getting this right, and I need to know how the URL pattern gets its argument for :deptno.
Your understanding of the form region picking the Web Source operation is correct. Within the form region, the name of the clicked button (:REQUEST) actually determines the DML operation (CREATE = Insert, SAVE = Update, DELETE = Delete).
A :deptno URL parameter must also be created within the Parameters section of the REST data source. Once that is in place, you'll see that the form region node in the Page Designer tree has a Parameters node; there you can map the Web Source Module parameter to a page item, an application item or something else.
As already mentioned, the primary key values are special in a Web Source Module. In your case, the :deptno placeholder (as part of the URL) corresponds to the DEPTNO data profile column.
For the DML handlers (PUT, POST, DELETE) you don't need to define these as Web Source Module parameters, but the URL placeholders must match the column names in the data profile. This is by design - Web Source Modules are implemented to work this way.
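As an illustration only (assembled from the operations described in the question, not taken from a live app), the lookup that the form region performs could be summarized like this:

CREATE (:REQUEST)  ->  database operation "Insert Row"  ->  POST handler, URL pattern :deptno
SAVE               ->  database operation "Update Row"  ->  PUT handler, URL pattern :deptno
DELETE             ->  database operation "Delete Row"  ->  DELETE handler, URL pattern :deptno
form fetch         ->  database operation "Fetch Row"   ->  GET handler, URL pattern :deptno

In each case the :deptno placeholder is filled from the DEPTNO data profile column of the row being processed, which is why no explicit parameter mapping is needed for the DML handlers.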

Increment Number OnInsert()

I am trying to increment a number field whenever a new row is added to my table. First I created a variable lastItem, declared as a Record with its Subtype set to my table. Then I put the following code on the OnInsert() trigger:
lastItem.FINDLAST;
ItemNo := lastItem.ItemNo + 10;
The above code seems not to work on the OnInsert() trigger, but it works for one row when I enter it on the ItemNo - OnValidate() trigger.
Any ideas how to get an increasing number on every new row in my table?
Are you sure this is Dynamics CRM? The code is Dynamics NAV C/AL code, and you are talking about the Item table? In that case, let NAV give you the next number from the No. Series properly.
You can use the same approach in any other table: related pattern
You should stay away from doing direct SQL updates and adding triggers to the DB when using Dynamics CRM, as that is not supported.
The appropriate way would be to use a plug-in which reads the last value and then does the increment. You would register this to run when a new record is created in the system.
You can find some example source code on this CodePlex project: CRM 2011 Autonumbering Solution
You should use the AutoIncrement property of the field. That way the field is incremented by one on every new row.

Copy Records in Oracle Apex

I need to copy selected row values and store them as new records.
I am using Oracle Apex 4.2 and a Tabular Form.
I want to use checkboxes to select rows and a Copy button; when I select multiple rows and then click the Copy button, all the selected row values should be copied as new rows and saved.
Can anyone help?
Copying Records Through an APEX Tabular Form Input
Cloning existing records from a single table through an Oracle APEX Tabular Form works with little interference with the default design that the APEX wizard sets up for page region content.
1. Build a table with an independent primary key.
2. Include two auxiliary columns, COPY_REQUEST and COPIED_FROM, for running copy operations. Specific form elements on the tabular form will map to these columns.
3. Build an Oracle stored procedure that reads which records need to be copied. This procedure will be invoked each time the SUBMIT button is pressed.
4. (Optional) Suppress step 3 in the event that there is nothing to process (i.e., no records are marked for copying).
The Working Table for Receiving Input: COPY_ME
TIP: You will have an easier time if you use the standard table creation wizard. Designate CUSTOMER_ID as the primary key and have APEX create its standard auto-incrementing functionality on top (sequence plus trigger setup).
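If you script the table instead of using the wizard, a minimal DDL sketch could look like the following (column names are inferred from the procedure below; the wizard would also generate the sequence and trigger behind the primary key):

create table copy_me (
    customer_id    number not null primary key,  -- populated by the wizard's sequence/trigger
    customer_name  varchar2(100),
    city           varchar2(100),
    country        varchar2(100),
    copy_request   char(1),                      -- 'Y' flags a row for cloning
    copied_from    number                        -- customer_id of the row this one was cloned from
);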
Here's the sample data I used, though it doesn't matter; you can put in your own values and still verify easily what happened.
The Heavy Lifting: The Stored Procedure for Cloning Records in COPY_ME
This procedure works on one or more records at a time, identified by a special flag value in the COPY_REQUEST column. After the task is done, the procedure cleans up by resetting the request flag.
create or replace procedure proc_copy_me_request is
    c_request_code CONSTANT char(1) := 'Y';
    -- rows flagged for copying; locked because they are updated below
    cursor copy_cursor is
        SELECT cme.CUSTOMER_ID, cme.CUSTOMER_NAME, cme.CITY, cme.COUNTRY,
               cme.COPY_REQUEST
          FROM copy_me cme
         WHERE cme.COPY_REQUEST = c_request_code
           FOR UPDATE OF cme.COPY_REQUEST;
BEGIN
    FOR i IN copy_cursor LOOP
        -- clone the row, recording which record it was copied from
        INSERT INTO copy_me (customer_name, city, country, copied_from)
        VALUES (i.customer_name, i.city, i.country, i.customer_id);
        -- clear the request flag on the source row
        UPDATE copy_me
           SET copy_request = null
         WHERE CURRENT OF copy_cursor;
    END LOOP;
    COMMIT;
END proc_copy_me_request;
There is also the COPIED_FROM column, which can be hidden; it tracks where the record was originally copied from.
Note that the cursor uses the FOR UPDATE OF and WHERE CURRENT OF notation. This is important because the procedure changes the records referenced by the cursor.
APEX Page Setup Instructions
Set up a standard Form page and choose the Tabular Form style. Follow the setup instructions, taking care to map the correct primary key and the PK sequence object created with the table in the previous steps.
This is what your page setup will look like after these steps are completed:
EDIT The COPY_REQUEST Form Value:
Under the Column Attributes section, change the Display As option to "Simple Checkbox".
Under the List of Values section, put a single value in the LOV Definition: Y (case sensitive either way; just be consistent).
EDIT The COPIED_FROM Form Value:
Under the Column Attributes section, change the Display As option to "Display as Text (Saves State)". This is just to prevent users from stepping on this read-only field. You could also suppress it if it isn't important to show.
CREATE a New Process: Execute Copy Procedure
This is at the bottom of the same configuration page; there are very few things to change or add:
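As a sketch (names assumed, matching the procedure above), the process is a PL/SQL page process that runs On Submit with a one-line source:

begin
    proc_copy_me_request;
end;

To implement the optional suppression from step 4, give the process a condition of type "Exists (SQL query returns at least one row)" such as:

select 1 from copy_me where copy_request = 'Y'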
Demonstration: Screenshot of COPY_ME Tabular Form Page in Action
The first screenshot below is before the page is tidied up and the checkbox control is put into place.
Plug in some test data and give it a try. The Page Process created in the step above conditionally invokes the stored procedure, which processes all pending copy requests at once when the SUBMIT button is pressed.
COMMENTS: If you spend enough time tinkering with the built-in wizards in Oracle APEX, there are opportunities to learn new design patterns and process flows compatible with the tool. Adapting your approach can reduce the amount of additional work and frustration.

Changing Length of Siebel Column

Suppose we have an existing Siebel column, and this column also has a corresponding mapped EIM column. If I change the length of this Siebel base table's column from varchar(100) to varchar(200) by running an ALTER query from the backend, how will it impact the EIM process? Will the import process be successful?
Regards,
Robin
If you are interested in knowing conceptually, here are the implications that I can foresee.
a) A table column added using ALTER TABLE is virtually useless, as the application won't be able to use it because its definition is missing from the Siebel Repository.
b) If you change the length of an existing column, the application will still use the length defined in the Siebel Repository.
c) The EIM process will ignore your new column length, as it loads the data dictionary before running the job.
d) And finally, during code migration you will have to run the ALTER TABLE every time, since the DDLSync process cannot handle your scenario.
I would advise you not to alter the length of an existing vanilla table column; instead, extend the database table by adding a new column. As the other poster mentioned, you should do this using Siebel Tools. You will then also need to add a reference to this new field in the EIM components (again using Siebel Tools).
This is a best practice. If your client ever had a Siebel code review done by Oracle, you would be told to do what I described above, not what you were considering.
Changing the column length using the ALTER TABLE command will only change it in the database layer, which has no effect from a Siebel standpoint. The EIM tables will still be valid, as they use the column length defined in the repository and applied through Tools. If you don't change it in Tools and apply the table, I don't think the changes will work.
I would not recommend that you do this. In this case, probably nothing will go wrong: EIM columns will load data that is up to 100 characters long, while from the GUI you could insert up to 200 characters. But something unexpected can still go wrong; we would need to know your application better to answer this question.

How do you handle "Sync Framework does not automatically handle the deletion of rows that no longer satisfy a filter condition"

http://msdn.microsoft.com/en-us/library/dd918848.aspx
"It is important to understand that a scope is the combination of tables and filters. For example, you could define a filtered scope named sales-WA that contains only the sales data for the state of Washington from the customer_sales table. If you define another filter on the same table, such as sales-OR, this is a different scope. If you define filters, be aware that Sync Framework does not automatically handle the deletion of rows that no longer satisfy a filter condition. For example, if a user or application updates a value in a column that is used for filtering, a row moves from one scope to another. The row is sent to the new scope that the row now belongs to, but the row is not deleted from the old scope. Your application must handle this situation."
I am just wondering if someone can shed some light on how to handle "Sync Framework does not automatically handle the deletion of rows that no longer satisfy a filter condition"?
Many thanks.
The sync providers will (as part of the provisioning step) automatically create tombstone tables and triggers to track row deletions. When rows are not deleted, but updated in such a way as to fall out of the scope, the automatically generated schema won't log these as deletions; it will log them as updates. So, to extend the Microsoft example, assume your application syncs only Washington data to Washington sales reps. Some sales that were originally entered as Washington sales are corrected and moved to Oregon. The Sync Framework won't know that it should remove these now-Oregon records from the Washington reps' local databases.
You have a couple of options to solve this:
Modify the provisioning tools to generate triggers that handle the situation, instead of the default triggers that don't. Look into extending SqlSyncScopeProvisioning to accomplish this. If done correctly, this is probably the most scalable/extensible solution.
Modify your application to detect the attempt to move a row out of a scope, and have the application delete the row and re-insert it instead of just updating it (probably in a stored procedure; see the sketch after this list). If you already use stored procedures to handle updates, this might be a good option.
Add a background service or process that goes through, looks for records that don't match the scope, and deletes them. This may end up being the easiest solution, especially if your application is already deployed.
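For the second option, here is a minimal T-SQL sketch of the delete-and-re-insert idea, loosely following the Microsoft example; the customer_sales table and its columns are hypothetical, and the point is only that the provisioned tracking triggers see a delete (old scope) and an insert (new scope) rather than a plain update:

-- move a sale to a new state by deleting and re-inserting the row
create procedure dbo.usp_move_sale_to_state
    @sale_id   int,
    @new_state char(2)
as
begin
    set nocount on;
    begin transaction;

    -- capture the row before removing it
    declare @customer_id int, @amount money;
    select @customer_id = customer_id, @amount = amount
      from dbo.customer_sales
     where sale_id = @sale_id;

    -- logged by the tracking triggers as a deletion (old scope)
    delete from dbo.customer_sales where sale_id = @sale_id;

    -- logged by the tracking triggers as an insert (new scope)
    insert into dbo.customer_sales (sale_id, customer_id, amount, [state])
    values (@sale_id, @customer_id, @amount, @new_state);

    commit transaction;
end

Whether a delete followed by a re-insert with the same key round-trips cleanly depends on how your scopes are provisioned, so treat this as a starting point rather than a drop-in solution.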