Get next BACHNUMB for eConnect taPopRcptHdrInsert

How do I get the next BACHNUMB for the Dynamics GP eConnect proc taPopRcptHdrInsert?

There is no "next batch number" concept like there is for the next document number; you have to provide a batch id yourself. If that batch already exists in GP, all is fine; if not, eConnect will create a new one.
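For example, a trimmed eConnect XML fragment for taPopRcptHdrInsert might look like the sketch below; the element names follow the procedure's parameter names, the values are placeholders, and the remaining required header fields are omitted, so check it against your eConnect schema:
<POPReceivingsType>
  <taPopRcptHdrInsert>
    <POPRCTNM>RCT00001</POPRCTNM>      <!-- receipt number (placeholder) -->
    <POPTYPE>1</POPTYPE>               <!-- receipt type, e.g. shipment -->
    <VENDORID>TESTVENDOR01</VENDORID>  <!-- placeholder vendor -->
    <BACHNUMB>RECEIVINGS</BACHNUMB>    <!-- any batch id you choose; GP creates it if it does not exist -->
    <!-- remaining required fields omitted -->
  </taPopRcptHdrInsert>
</POPReceivingsType>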


Dynamic query (current date) via web services in Power BI

In my project we consume the company's data via a REST web service. Today the query is not dynamic: the start date and end date parameters are passed as fixed strings.
My goal is for the end date to update dynamically. I've already created a query that returns the current date, but I can't put it into the parameter without generating an error in the query.
When I put the column value into the parameter, the query returns an error.
I'm pretty sure I'm getting the syntax wrong. Any help is appreciated. Note that the date format the API call requires is DD/MM/YYYY.
Can you try using
PutYourOtherTableNameHere[Hoje_Coluna]{0}
instead of
[Hoje_Coluna]
?
To see if that will work, put this in right before your query, then click on the step and see what it returns.
x = PutYourOtherTableNameHere[Hoje_Coluna]{0},
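Separately, if the goal is just to build the request with today's date in the DD/MM/YYYY format the API expects, a minimal M sketch could look like the following; the URL and query-string parameter names are invented placeholders, only the Date.ToText formatting is the point:
let
    // today's date rendered as dd/MM/yyyy, as the API requires
    EndDate = Date.ToText(Date.From(DateTime.LocalNow()), "dd/MM/yyyy"),
    // fixed start date placeholder
    StartDate = "01/01/2023",
    // hypothetical endpoint and parameter names, replace with the real ones
    Url = "https://example.com/api/data?dataInicial=" & StartDate & "&dataFinal=" & EndDate,
    Source = Json.Document(Web.Contents(Url))
in
    Source
The EndDate expression can also live in its own query and be referenced wherever the parameter is needed.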

Ajax call returned server error ORA-01403: no data found for APEX Interactive Grid

I am trying to save data into my table using an Interactive Grid with the help of custom PL/SQL. I am running into an "ORA-01403: no data found" error while inserting data and I can't figure out why.
This is the custom PL/SQL process I run. I appreciate your help.
DECLARE
   em_id NUMBER;
BEGIN
   CASE :apex$row_status
      WHEN 'C' THEN
         SELECT NVL (MAX (emergency_id), 0) + 1
           INTO em_id
           FROM emp_emergency_contact;

         INSERT INTO emp_emergency_contact
                     (emergency_id, emp_id, emergency_name, emergency_relation)
              VALUES (em_id, :emp_id, :emergency_name, :emergency_relation);
      WHEN 'U' THEN
         UPDATE emp_emergency_contact
            SET emergency_name = :emergency_name,
                emergency_relation = :emergency_relation
          WHERE emergency_id = :emergency_id;
      WHEN 'D' THEN
         DELETE emp_emergency_contact
          WHERE emergency_id = :emergency_id;
   END CASE;
END;
So far I have not come across any documented way to use custom PL/SQL logic for processing submitted rows of an APEX 5.1 Interactive Grid via an AJAX call.
You are getting the "no data found" error because the return is expected to be in a certain JSON format.
The example you have provided is not too complex and can be done using the standard "Interactive Grid - Automatic Row Processing (DML)" process, which is an AJAX approach. If the AJAX call is not important, then you can create your own PL/SQL process with custom logic. An example of this is demonstrated in the "Sample Interactive Grids" packaged application; check out the Advanced > Custom Server Processing page in that application for more information.
I agree with Scott, you should be using a sequence or identity column for ids.
Not entirely sure. A SELECT ... INTO can raise a NO_DATA_FOUND exception, but yours shouldn't.
That being said, you shouldn't have MAX(id)+1 anywhere in your code; it is a bug waiting to happen under concurrent inserts. Use a sequence or identity column instead.
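A minimal sketch of that approach (the sequence name is made up):
-- created once, outside the APEX process; pick a START WITH above the current MAX(emergency_id)
CREATE SEQUENCE emp_emergency_contact_seq START WITH 1000;
Then use emp_emergency_contact_seq.NEXTVAL in the INSERT instead of the SELECT MAX(...) + 1 block.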
I have gotten this error many times, so the first thing I do is look at any columns in my grid SQL that are not part of the "Save"; they come from a join and are for data only.
I just got it again, and it was a heading sort column that I had set to a column type of "Number". I changed it to "Display Only" and the "Save" now works.
Note that I had already set the "Source" of the column to "Query Only", which is also needed.
It is a bummer that the Ajax error message doesn't at least give the column name that caused the error.
Hope this helps someone.
BillC
Add a RETURNING INTO clause after the insert. IG expects a primary key to be returned to query the inserted row.
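A minimal sketch of the 'C' branch with both suggestions applied (reusing the hypothetical sequence from the earlier answer):
WHEN 'C' THEN
   INSERT INTO emp_emergency_contact
               (emergency_id, emp_id, emergency_name, emergency_relation)
        VALUES (emp_emergency_contact_seq.NEXTVAL, :emp_id, :emergency_name, :emergency_relation)
     RETURNING emergency_id INTO :emergency_id;  -- IG uses this value to re-query the inserted row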

Increment Number OnInsert()

I am trying to increment a number field whenever a new row is added to my table. First I created a variable lastItem specified as a Record with its Subtype set to my table. Then I put the following code on the OnInsert() trigger:
lastItem.FINDLAST;
ItemNo := lastItem.ItemNo + 10;
The code above does not seem to work on the OnInsert() trigger, but it works for a single row when I put it on the ItemNo - OnValidate() trigger.
Any ideas how to get an increasing number for every new row in my table?
Are you sure this is Dynamics CRM? The code is Dynamics NAV C/AL code, and you seem to be talking about the Item table. In that case, let NAV give you the next number from the No. Series properly.
You can use the same approach in any other table: see the related pattern.
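For the Item table the standard pattern looks roughly like the sketch below, assuming the usual InvtSetup (Record "Inventory Setup") and NoSeriesMgt (Codeunit NoSeriesManagement) variables; for a custom table you would point at a No. Series field in your own setup table instead:
// OnInsert() trigger, standard No. Series pattern
IF "No." = '' THEN BEGIN
  InvtSetup.GET;
  InvtSetup.TESTFIELD("Item Nos.");
  NoSeriesMgt.InitSeries(InvtSetup."Item Nos.",xRec."No. Series",0D,"No.","No. Series");
END;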
You should stay away from doing direct SQL updates and adding triggers to the DB when using Dynamics CRM as it's not supported.
The appropriate way would be to use a plug-in which reads the last value and then does the increment. You would register this to run when a new record is created in the system.
You can find some example source code on this CodePlex project: CRM 2011 Autonumbering Solution
You should use the AutoIncrement property of the field. That way the field is incremented by one for every new row.

Trigger Informatica workflow based on the status column in an Oracle table

I want to implement the scenario below without using a PL/SQL procedure or trigger.
I have a table called emp_details with columns (empno, ename, salary, emp_status, flag, date1).
If someone updates the columns to emp_status = 'ABC' and flag = 'y', Informatica workflow WF1, which runs continuously, should detect the records with emp_status = 'ABC'.
If it finds such records, it should query all of them and invoke WF2.
WF1 will pass the values ename, salary and date1 to WF2 (WF2 will insert the records into the table emp_details2).
How can I do this using an Informatica approach instead of PL/SQL or a trigger?
If you want to achieve this in real time, write the output of WF1 to a message queue and have WF2 subscribe to that queue.
If you have a batch process in place, produce an output file from WF1 and use that file in WF2. You can easily set up this dependency using job schedulers.
I don't understand why you need two workflows in the first place. Why not accomplish the emp_details2 updates with the very same workflow that is looking for the changes?
Anyway, this can be done using an indicator file:
WF1, running continuously, should create a file if any changes have been found.
WF2 should also run continuously, with an Event Wait set to wait for the indicator file specified above. Once the file is found, it should use the Assignment Task to rename/delete the file, then fetch the desired data from the source and populate the emp_details2 table.
If you need it this way, you can pass the data through the indicator file.
You can do this in a single workflow. Create a dummy session which checks for the flag in the table; after it, divide the flow into two branches based on the link conditions below:
Flow one: link condition Session.Status = SUCCEEDED and SOURCE_SUCCESS_ROWS (count) >= 1, then run your actual session which loads the data.
Flow two: link condition Session.Status = SUCCEEDED and SOURCE_SUCCESS_ROWS = 0; connect this to a Control task and mark the workflow as complete.
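Written as link conditions, assuming the dummy session is named s_check_flag (a made-up name) and using the pre-defined SrcSuccessRows task variable, the two expressions would be roughly:
$s_check_flag.Status = SUCCEEDED AND $s_check_flag.SrcSuccessRows >= 1
$s_check_flag.Status = SUCCEEDED AND $s_check_flag.SrcSuccessRows = 0
The first condition links to the loading session, the second one to the Control task.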
Make sure you schedule the workflow at the Informatica level to run continuously.
Cheers

How to get FBA Fee and commission using Amazon MWS

I am extracting order details from Amazon and storing them in a database. I am getting all the data except the FBA fee and commission of an order.
Can anyone please guide me on how to get the FBA fee and commission?
The commission is part of the settlement reports you'll receive every fortnight. I'm not using FBA, but I would assume FBA fees are included there as well where applicable. Two of these reports are automatically created whenever Amazon is preparing a payout. You can get a list of these reports (they seem to be stored forever) using the GetReportList() call. Their report types are _GET_FLAT_FILE_PAYMENT_SETTLEMENT_DATA_ and _GET_V2_SETTLEMENT_REPORT_DATA_FLAT_FILE_. The two reports cover the same settlement in different formats.
Edit: More details on how to do this:
1. Call GetReportList using the following parameters:
   'Acknowledged' = 'false'
   'ReportTypeList.Type.1' = '_GET_FLAT_FILE_PAYMENT_SETTLEMENT_DATA_'
   'ReportTypeList.Type.2' = '_GET_V2_SETTLEMENT_REPORT_DATA_FLAT_FILE_'
   Please note: you might want to pick just one of the two report types. Also, Acknowledged=false is not actually needed, but I recommend acknowledging the reports you have already processed so you'll only get a list of new reports to work on; see step 5 below.
2. You'll get a list of reports back (a GetReportListResult). You'll need each report's ReportId for the next step.
3. Call GetReport using the ReportId from step 2.
4. Parse the response. It is a CSV file ("flat file" in Amazon terminology) with all your orders from the two weeks prior to the report's generation.
5. Upon successful processing, call UpdateReportAcknowledgements with ReportIdList.Id.1 = the ReportId from step 2 to acknowledge the report. This ensures that the next call to GetReportList (step 1) does not return the same data again.
6. You should get an UpdateReportAcknowledgementsResult back once Amazon has set that flag.
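As a rough C# sketch of steps 1 and 5, assuming the official MarketplaceWebService client (the same one used in the snippet below) and reusing the amznAccess helper and a service client object from that context; class and property names may differ slightly between client versions:
// Step 1: list unacknowledged settlement reports
GetReportListRequest listRequest = new GetReportListRequest();
listRequest.Merchant = amznAccess.merchantId();
listRequest.Acknowledged = false;
listRequest.ReportTypeList = new TypeList();
listRequest.ReportTypeList.Type.Add("_GET_V2_SETTLEMENT_REPORT_DATA_FLAT_FILE_");
GetReportListResponse listResponse = service.GetReportList(listRequest);

foreach (ReportInfo report in listResponse.GetReportListResult.ReportInfo)
{
    // Steps 3 and 4 would go here: call GetReport with report.ReportId and parse the flat file

    // Step 5: acknowledge the report so the next GetReportList call does not return it again
    UpdateReportAcknowledgementsRequest ackRequest = new UpdateReportAcknowledgementsRequest();
    ackRequest.Merchant = amznAccess.merchantId();
    ackRequest.ReportIdList = new IdList();
    ackRequest.ReportIdList.Id.Add(report.ReportId);
    ackRequest.Acknowledged = true;
    service.UpdateReportAcknowledgements(ackRequest);
}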
There is a newer report type, _GET_FBA_ESTIMATED_FBA_FEES_TXT_DATA_:
RequestReportRequest request = new RequestReportRequest();
request.MarketplaceIdList = new IdList();
request.Merchant = amznAccess.merchantId();
request.MarketplaceIdList.Id.Add(amznAccess.marketplaceId());
request.ReportType = "_GET_FBA_ESTIMATED_FBA_FEES_TXT_DATA_";
request.StartDate = DateTime.UtcNow.AddDays(-30);  // limit the report's data range
Don't forget to set the request start date (e.g. the last 30 days), as in the last line above.