Override method not being called from warehouse mobile application in AX

I have been working on a requirement for the advanced warehouse mobile application in AX. The requirement is to do something when an item is scanned. In order to do this, I have registered an override method for leave on the item text box when it is built. The build method is below:
//This method is updated in WHSWorkExecuteForm
protected void createTextBox(
    container _textBox,
    boolean   _password = false)
{
    FormBuildStringControl stringControl;

    stringControl = controlGroup.addControl(FormControlType::String, this.elementName(_textBox));

    if (this.elementHasError(_textBox))
    {
        stringControl.colorScheme(FormColorScheme::RGB);
        stringControl.backgroundColor(WHSWorkExecuteForm::errorBackgroundColor());
    }

    stringControl.text(this.elementData(_textBox));
    stringControl.label(this.elementLabel(_textBox));
    stringControl.passwordStyle(_password);
    stringControl.enabled(this.elementEnabled(_textBox));

    // Below code is added to register the override method
    if (this.elementName(_textBox) == #ItemId)
    {
        stringControl.registerOverrideMethod(
            methodStr(FormStringControl, Leave),
            methodStr(WHSWorkExecuteForm, DynamicButtonControl_modified),
            this);
    }
}
This method is called when I run the warehouse app from the AX AOT (i.e. the action menu item WHSWorkExecute), but it is not working from the browser. I have run an incremental CIL compile as well, but no change.
Any ideas? Do I need to make changes in DisplayIEOS.aspx as well?

The web browser part of the Warehouse Mobile Device Portal is driven by XML files that are exchanged between the AOS and the IIS website. You can read more about that in Warehouse Mobile Device Portal Architecture.
The WHSWorkExecute form in the AOT of the Dynamics AX desktop client is basically a quick-and-dirty "emulator" of the web client. It enables you to test changes to the WHSWorkExecute framework logic that drives the mobile device functionality without having to set up the components that enable the web client. But changing this form at run time with FormBuild classes, as in your code, will have no effect on the web client, because it has no effect on the XML data sent to the website.
Instead, you should use the methods provided by the WHSWorkExecute framework to add controls. See Creating Custom Solutions with the Warehouse Mobile Device Portal; it has a section on the buildControl method of the framework.
How to handle a modified event of a control depends on what you want to do. The second link briefly describes how you could implement some client-side-only logic.
If you need to execute logic on the AOS, you will have to modify one of the specialized build methods or create your own; the second link also has some guidance on this. Registering override methods for FormControl objects will not work because, again, it does not change the XML data sent to the web client.

Related

Access application items from other applications only through fetch_app_item?

I have several applications in an Oracle APEX 19.2 workspace that use shared authentication. In order to access end-user metadata, I want to use an application item defined as global in the master application. It seems to be configured correctly: in a slave application, I can see the correct session value in the session window (Session State, View: Application Items).
But the usual replacement syntaxes do not work; I cannot access the value with any of these methods:
:VARIABLE
&VARIABLE.
apex_util.get_session_state('variable')
The only method that works is apex_util.fetch_app_item('variable', [application id]). This is cumbersome, as I would like to work with application aliases, and I would need to translate the alias using the view apex_applications.
Is this working as intended, or did I do something wrong?
Have you created the same application item in the slave application as well? You will also have to set it to Scope = Global. This will expose the value in the current application.

How to call on Web Service API and route data into Azure SQL Database?

Having configured an Azure SQL Database, I would like to feed some tables with data from an HTTP REST GET call.
I have tried Microsoft Flow (whose HTTP Request action is utterly botched) and I am now exploring Azure Data Factory, to no avail.
The only way I can currently think of is provisioning an Azure VM and installing Postman with Newman. But then I would still need to create a web service interface to the Azure SQL Database.
Does Microsoft offer no HTTP call service to hook up to an Azure SQL Database?
I had the same situation a couple of weeks ago and ended up building the API call management using Azure Functions. It is no problem to use the Azure SDKs to upload the result to e.g. blob storage or Data Lake, and you can add whatever assembly you need to perform the HTTP call.
From there you can easily pull it into an Azure SQL DB with Data Factory.
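For illustration, here is a minimal sketch of what such a function could look like - a timer-triggered C# function using HttpClient and the WindowsAzure.Storage SDK. The API URL, container name, blob name, and app setting are placeholders, not anything from the question:
using System;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static class FetchApiData
{
    private static readonly HttpClient http = new HttpClient();

    // Runs every hour, pulls the REST endpoint, and lands the raw JSON in blob
    // storage, ready for Data Factory to copy into Azure SQL DB.
    [FunctionName("FetchApiData")]
    public static async Task Run([TimerTrigger("0 0 * * * *")] TimerInfo timer, TraceWriter log)
    {
        // placeholder endpoint - substitute the real REST GET call here
        string json = await http.GetStringAsync("https://api.example.com/data");

        CloudStorageAccount account = CloudStorageAccount.Parse(
            Environment.GetEnvironmentVariable("StorageConnectionString"));
        CloudBlockBlob blob = account.CreateCloudBlobClient()
            .GetContainerReference("api-landing")
            .GetBlockBlobReference($"data-{DateTime.UtcNow:yyyyMMddHHmm}.json");

        await blob.UploadTextAsync(json);
        log.Info($"Uploaded {json.Length} characters to {blob.Name}");
    }
}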
I would suggest you write yourself an Azure Data Factory custom activity to achieve this. I've done this for a recent project.
Add a C# class library to your ADF solution and create a class that inherits from IDotNetActivity. Then, in the Execute method (which returns an IDictionary<string, string>), make the HTTP web request to get the data. Land the downloaded file in blob storage first, then have a downstream activity to load the data into SQL DB.
public class GetLogEntries : IDotNetActivity
{
    public IDictionary<string, string> Execute(
        IEnumerable<LinkedService> linkedServices,
        IEnumerable<Dataset> datasets,
        Activity activity,
        IActivityLogger logger)
    {
        // etc... (build the HttpWebRequest for the REST endpoint here)
        HttpWebResponse myHttpWebResponse = (HttpWebResponse)httpWebRequest.GetResponse();
        // ... write the response stream to the output blob ...
        return new Dictionary<string, string>();
    }
}
You can use the ADF linked services to authenticate against the storage account and define which container and file name you want as the output, etc.
This is an example I used for Data Lake, but there is an almost identical class for blob storage.
Dataset outputDataset = datasets.Single(dataset => dataset.Name == activity.Outputs.Single().Name);

AzureDataLakeStoreLinkedService outputLinkedService = linkedServices.First(
    linkedService => linkedService.Name == outputDataset.Properties.LinkedServiceName)
    .Properties.TypeProperties as AzureDataLakeStoreLinkedService;
Don't bother with an input for the activity.
You will need an Azure Batch Service as well to handle the compute for the compiled classes. Check out my blog post on doing this.
https://www.purplefrogsystems.com/paul/2016/11/creating-azure-data-factory-custom-activities/
Hope this helps.

Designing RESTful API for Invoking process methods

I would like to know how to design a RESTful web service for process methods. For example, I want to make a REST API for ProcessPayroll for a given employee id. Since ProcessPayroll is a time-consuming job, I don't need any response from the method call; I just want to invoke the ProcessPayroll method asynchronously and return. I can't use ProcessPayroll in the URL since it is not a resource but a verb. So I thought I could go with the approach below:
Request 1
http://www.example.com/payroll/v1.0/payroll_processor POST
body
{
    "employee" : "123"
}
Request 2
http://www.example.com/payroll/v1.0/payroll_processor?employee=123 GET
Which one of the above approaches is correct? Are there any RESTful API design guidelines for making a RESTful service for process methods and functions?
Which one of the above approaches is correct?
Of the two, POST is closest.
The problem with using GET /mumble is that the specification of the GET method restricts its use to operations that are "safe", which is to say operations that don't change the resource in any way. In other words, GET promises that a resource can be pre-fetched by the user agent and the caches along the way, just in case it is needed.
Are there any RESTful API design guidelines for making a RESTful service for process methods and functions?
Jim Webber has a bunch of articles and talks that discuss this sort of thing. Start with How to GET a cup of coffee.
But the rough plot is that your REST API acts as an integration component between the process and the consumer. The protocol is implemented as the manipulation of one or more resources.
So you have some known bookmark that tells you how to submit a payroll request (think web form), and when you submit that request (typically POST, sometimes PUT; the details are not immediately important), the resource that handles it, as a side effect, (1) starts an instance of ProcessPayroll from the data in your message, (2) maps that instance to a new resource in its namespace, and (3) redirects you to the resource that tracks your payroll instance.
In a simple web API, you just keep refreshing your copy of this new resource to get updates. In a REST API, that resource will return a hypermedia representation describing what actions are available.
As Webber says, HTTP is a document transport application. Your web API handles document requests, and as a side effect of that handling interacts with your domain application protocol. In other words, a lot of the resources are just messages....
We came up with a similar solution in my project, so don't blame me if my opinion is wrong - I just want to share our experience.
As for the resource itself, I'd suggest something like:
http://www.example.com/payroll/v1.0/payrollRequest POST
As the job is supposed to run in the background, the API call should return the Accepted (202) HTTP code. That tells the user that the operation will take a long time. You should also return a unique payrollRequestId identifier (a Guid, for example) to allow users to get the posted resource later on by calling:
http://www.example.com/payroll/v1.0/payrollRequest/{payrollRequestId} GET
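To make the shape concrete, here is a minimal ASP.NET Web API sketch of that pattern; the controller and model names are illustrative, not something prescribed by the answer:
using System;
using System.Net;
using System.Web.Http;

public class PayrollRequest
{
    public string Employee { get; set; }
}

public class PayrollRequestController : ApiController
{
    // POST /payroll/v1.0/payrollRequest
    // Queues the long-running job and immediately returns 202 Accepted with the id.
    [HttpPost]
    public IHttpActionResult Post(PayrollRequest request)
    {
        Guid payrollRequestId = Guid.NewGuid();
        // ... hand the job off to a queue / background worker here ...
        return Content(HttpStatusCode.Accepted, new { payrollRequestId });
    }

    // GET /payroll/v1.0/payrollRequest/{payrollRequestId}
    // Lets the caller poll the state of the job.
    [HttpGet]
    public IHttpActionResult Get(Guid payrollRequestId)
    {
        // ... look the job up; once finished, include a link to the result ...
        return Ok(new { payrollRequestId, status = "Processing" });
    }
}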
Hope this helps
You decide between POST, PUT, and GET on the basis of what the API does:
If your REST API creates a new row in the DB (i.e. a new resource), then you should go for POST. In your case, if your payroll process method creates a resource, choose POST.
If your REST API both creates and updates resources - i.e. if your payroll method processes and updates existing data and also creates new data - go for PUT.
If your REST API just reads data, go for GET. But as I understand from your question, your payroll method does not just return data, so GET is not the best fit for your case.
As I see it, your payroll method does both things:
Processes the data, i.e. updates existing data, and
Creates new data, i.e. creates new rows in the DB.
NOTE - One more thing: PUT is idempotent and POST is not. Follow the link PUT vs POST in REST.
So you should go for the PUT method.

How to change client schema during provisioning?

I'm rushing (never a good thing) to get Sync Framework up and running for an "offline support" deadline on my project. We have a SQL Express 2008 instance on our server and will deploy SQL CE to the clients. Clients will only sync with the server, no peer-to-peer.
So far I have the following working:
Server schema setup
Scope created and tested
Server provisioned
Client provisioned w/ table creation
I've been very impressed with the relative simplicity of all of this. Then I realized the following:
Schema created through client provisioning to SQL CE does not set up default values for uniqueidentifier types.
FK constraints are not created on the client.
Here is the code that is being used to create the client schema (pulled from an example I found somewhere online):
static void Provision()
{
    SqlConnection serverConn = new SqlConnection(
        "Data Source=xxxxx, xxxx; Database=xxxxxx; " +
        "Integrated Security=False; Password=xxxxxx; User ID=xxxxx;");

    // create a connection to the SyncCompactDB database
    SqlCeConnection clientConn = new SqlCeConnection(
        @"Data Source='C:\SyncSQLServerAndSQLCompact\xxxxx.sdf'");

    // get the description of the scope from the SyncDB server database
    DbSyncScopeDescription scopeDesc = SqlSyncDescriptionBuilder.GetDescriptionForScope(
        ScopeNames.Main, serverConn);

    // create the CE provisioning object based on the scope
    SqlCeSyncScopeProvisioning clientProvision = new SqlCeSyncScopeProvisioning(clientConn, scopeDesc);
    clientProvision.SetCreateTableDefault(DbSyncCreationOption.CreateOrUseExisting);

    // start the provisioning process
    clientProvision.Apply();
}
When Sync Framework creates the schema on the client, I need to make the additional changes listed earlier (default values, constraints, etc.).
This is where I'm getting confused (and frustrated):
I came across a code example that shows a SqlCeClientSyncProvider with a CreatingSchema event. This code example actually shows setting the RowGuid property on a column, which is EXACTLY what I need to do. However, what is a SqlCeClientSyncProvider?! This whole time (4 days now) I've been working with SqlCeSyncProvider in my sync code. So there is both a SqlCeSyncProvider and a SqlCeClientSyncProvider?
The documentation on MSDN is not very good at explaining either of these.
I'm further confused about whether I should make schema changes at provision time or at sync time.
How would you all suggest that I make schema changes to the client CE schema during provisioning?
SqlCeSyncProvider and SqlCeClientSyncProvider are different.
The latter is what is commonly referred to as the offline provider and this is the provider used by the Local Database Cache project item in Visual Studio. This provider works with the DbServerSyncProvider and SyncAgent and is used in hub-spoke topologies.
The one you're using is referred to as a collaboration provider or peer-to-peer provider (which also works in a hub-spoke scenario). SqlCeSyncProvider works with SqlSyncProvider and SyncOrchestrator and has no corresponding Visual Studio tooling support.
Both providers require provisioning the participating databases.
The two types of providers provision the sync objects required to track and apply changes differently. The SchemaCreated event applies to the offline provider only. It gets fired the first time a sync is initiated and the framework detects that the client database has not been provisioned (it creates the user tables and the corresponding sync framework objects).
The scope provisioning used by the other provider doesn't apply constraints other than the PK, so you will have to do a post-provisioning step to apply the defaults and constraints yourself, outside of the framework.
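As a rough illustration of such a post-provisioning step (the table, column, and constraint names below are made up, and clientConn is the SqlCeConnection used for provisioning), plain DDL against the CE database does the job:
// Run after clientProvision.Apply(); the names below are illustrative only.
using (SqlCeCommand cmd = clientConn.CreateCommand())
{
    // add a default value that provisioning skipped
    cmd.CommandText = "ALTER TABLE Orders ADD CONSTRAINT DF_Orders_RowId DEFAULT NEWID() FOR RowId";
    cmd.ExecuteNonQuery();

    // re-create an FK constraint from the server schema
    cmd.CommandText = "ALTER TABLE OrderLines ADD CONSTRAINT FK_OrderLines_Orders " +
                      "FOREIGN KEY (OrderId) REFERENCES Orders (RowId)";
    cmd.ExecuteNonQuery();
}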
While researching solutions without using SyncAgent, I found that the following also works (in addition to my commented solution above):
Provision the client and let the framework create the client [user] schema. Now you have your tables.
Deprovision - this removes the restrictions on editing the tables/columns
Make your changes (in my case, setting up Is RowGuid on PK columns and adding FK constraints) - this actually required me to drop and re-add a column, as you can't change the "Is RowGuid" property on an existing column
Provision again using DbSyncCreationOption.CreateOrUseExisting
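A rough sketch of steps 2 and 4, assuming the SqlCeSyncScopeDeprovisioning class from Sync Framework 2.1 and reusing clientConn and scopeDesc from the provisioning code above:
// step 2: deprovision to lift the sync framework's restrictions on the schema
SqlCeSyncScopeDeprovisioning deprovisioning = new SqlCeSyncScopeDeprovisioning(clientConn);
deprovisioning.DeprovisionScope(ScopeNames.Main);

// step 3 happens here: drop/re-add the RowGuid column, add FK constraints, etc.

// step 4: provision again, reusing the tables that now have the right schema
SqlCeSyncScopeProvisioning clientProvision = new SqlCeSyncScopeProvisioning(clientConn, scopeDesc);
clientProvision.SetCreateTableDefault(DbSyncCreationOption.CreateOrUseExisting);
clientProvision.Apply();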

Kohana 3.1 Web Services Bootstrapping Based on Environment and Stored Like A Session

We are building an n-tier style application in Kohana 3.1 which distributes JSONP-powered widgets to our partners based on a partner_id.
Each partner needs to be able to call a widget and specify an environment parameter (test or production) with the initial call, which will be used to select the appropriate database.
We need our bootstrap to watch for the $_REQUEST['environment'] variable and then maintain the state of that variable whenever the partner makes a call to the widget service.
The problem is that all requests in the application use Bootstrap.php, but many of the requests are internal - i.e. they do not come with a partner_id or environment variable. We tried to use sessions to store these, but as these are server-to-server GET/POST calls, it does not seem possible to store and recall the session id in a cookie on the server (this is browser-less GET).
Does anyone have any suggestions? We realise we could pass the environment variable with every single call, internal or external, but this does not seem very robust.
We have a config file which stores partner settings (indexed by partner_id), such as the width and height of the widget and we thought about storing the partner's environment in here, but not all calls to the server would be made by a partner, so we would still need another way to trigger the environment for other calls and select the correct DB.
We also thought of storing a flat file for the partner which maintains the last requested environment, but again, as we have many internal requests after the initial one, we don't always have a knowledge (i.e. we don't usually care) which partner_id is used in the initial call.
Hope this makes sense...!
The solution would be to call the models and methods that are needed to 'do stuff' from a single controller, keeping the partner_id only in the controller and sending the requested data back once all of the 'do stuff' methods have been run, as per the MVC model.
i.e., request from partner -> route -> controller -> calls models etc -> passes back to controller -> returns view to partner
That allows the partner_id to be kept by the controller and only passed to whatever models require it to 'do stuff', keeping within the MVC framework.
If you've not kept within the confines of MVC, then things will obviously get more complex and you'll need to store the variable somewhere.