How to read a file from WSO2 ESB without using a poll interval - readfile

I am able to read the file from the specified READFILEURI path when I specify a poll interval.
My current requirement is to read the file from the given path only when the proxy service is triggered. It should not poll the file automatically. Only when I click on "try this service" and send the request should the file be read from the read file path and processed.
To test this I removed the "transport.vfs.pollInterval" attribute from my proxy configuration and tested it, but the file is not read from the path once I trigger the proxy request.
Could someone help me with how to achieve this?

I guess it should work if you use the file connector.
https://docs.wso2.com/display/ESBCONNECTORS/Working+with+the+File+Connector
You should be able to create a simple proxy that gets triggered via HTTP/HTTPS and then uses the above-mentioned file connector's read operation to read the file.
Unfortunately I cannot give you an exact example, because we're still on an older ESB version where this connector isn't available.
Hope that helps.
Regards
Martin

Related

Configure Boost Log V2 Text IPC message queue backend via ini file

From the recent documentation, it seems that Boost Log V2 has been extended with a text IPC message queue backend:
https://www.boost.org/doc/libs/1_80_0/libs/log/doc/html/log/detailed/sink_backends.html
but I haven't found any description of how to configure it via a .ini file:
https://www.boost.org/doc/libs/1_80_0/libs/log/doc/html/log/detailed/utilities.html#log.detailed.utilities.setup.settings_file
Can anybody tell me where these settings are documented?
Regards
There is no built-in factory for creating text IPC message queue sinks from settings, mainly because the queue setup protocol, including which process should create the queue and with what parameters, is application logic and not a configurable setting.
You can register a custom sink factory as described here (you don't need a custom sink, as you can use the sink backend provided by Boost.Log). The sink factory should process the settings, then create and configure the sink and the associated IPC queue accordingly.

WSO2 SAP endpoint property file - searching for wrong file name and path

I have "export" proxy service which aims to "obtain" sequence. The sequence points to ${server}/services/sapBapi or "sapBapi" proxy service. The "sapBapi" proxy service points to gov:endpoints/sapbapiendpoint.xml endpoint, where is the concrete address: bapi:/abc
I have abc.dest and abc.server property files with SAP endpoint parameters on path
$WSO2_HOME/repository/conf/sap according to official documentation here
When I want to use "export" proxy service and send data, I will find this in logs:
DEBUG - Started sending message to uri=bapi:/abc/services/sapBapi/services/export {org.wso2.carbon.transports.sap.SAPTransportSender}
WARN - JCo configuration file for the destination : abc/services/sapBapi/services/export does not exist - Please specify the JCo configuration in $WSO2_HOME/conf/sap/abc/services/sapBapi/services/export.dest or abc/services/sapBapi/services/export.dest {org.wso2.carbon.transports.sap.CarbonDestinationDataProvider}
ERROR - Error while sending request to the EPRbapi:/abc/services/sapBapi/services/export {org.wso2.carbon.transports.sap.SAPTransportSender}
com.sap.conn.jco.JCoException: (106) JCO_ERROR_RESOURCE: Destination abc/services/sapBapi/services/export does not exist
When I put an export.dest file at the path $WSO2_HOME/repository/conf/sap/abc/services/sapBapi/services/, it works perfectly.
My questions:
Why is it using the proxy service name ("export") for the .dest property file in the described case?
Why is it searching for the .dest property file on the path $WSO2_HOME/conf/sap/abc/services/sapBapi/services/ instead of $WSO2_HOME/repository/conf/sap/?
WSO2 version: 6.5.0
I don't know WSO2 Enterprise Integrator, but evidently an instance of the class org.wso2.carbon.transports.sap.CarbonDestinationDataProvider is the DestinationDataProvider registered with the JCo runtime. This is the instance that alone decides where to obtain the logon parameters for a JCoDestination from, based on the destination name string it gets from the JCoDestinationManager.
From your example error message, this destination name string seems to be "abc/services/sapBapi/services/export" in this case, for which CarbonDestinationDataProvider then searches for a property file named abc/services/sapBapi/services/export.dest.
I hope this info will help you to adapt your code/configuration to fit your expectations.
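For illustration only: I don't have the WSO2 source, so the class below and its base directory are made up, but it uses the standard SAP JCo 3 API and shows why whatever string the transport hands over as the destination name ends up verbatim in the .dest file name.

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Properties;

import com.sap.conn.jco.JCoDestination;
import com.sap.conn.jco.JCoDestinationManager;
import com.sap.conn.jco.JCoException;
import com.sap.conn.jco.ext.DestinationDataEventListener;
import com.sap.conn.jco.ext.DestinationDataProvider;
import com.sap.conn.jco.ext.Environment;

/**
 * Hypothetical file-based DestinationDataProvider, sketched to illustrate the
 * lookup mechanism; this is NOT the WSO2 implementation.
 */
public class FileDestinationDataProvider implements DestinationDataProvider {

    private final Path baseDir;

    public FileDestinationDataProvider(Path baseDir) {
        this.baseDir = baseDir;
    }

    @Override
    public Properties getDestinationProperties(String destinationName) {
        // The destination name string is used verbatim as the file name, so
        // "abc/services/sapBapi/services/export" resolves to
        // "<baseDir>/abc/services/sapBapi/services/export.dest".
        Path destFile = baseDir.resolve(destinationName + ".dest");
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream(destFile.toFile())) {
            props.load(in);
        } catch (IOException e) {
            throw new RuntimeException("JCo configuration file " + destFile + " does not exist", e);
        }
        return props;
    }

    @Override
    public void setDestinationDataEventListener(DestinationDataEventListener listener) {
        // Not needed for this static, file-based example.
    }

    @Override
    public boolean supportsEvents() {
        return false;
    }

    public static void main(String[] args) throws JCoException {
        // Register the provider once per JVM, then look up a destination by name.
        Environment.registerDestinationDataProvider(
                new FileDestinationDataProvider(Paths.get("/opt/wso2/repository/conf/sap")));
        JCoDestination dest = JCoDestinationManager.getDestination("abc");
        dest.ping();
    }
}
```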

Camel route POSTs to service that takes 20+ minutes to respond

I have an Apache Camel (version 2.15.3) route that is configured as follows (using a mix of XML and Java DSL):
Read a file from one of several folders on an FTP site.
Set a header to indicate which folder it was read from.
Do some processing and auditing.
Synchronously POST to an external REST service (JAX-RS 1.1, GlassFish, Java EE 6).
The REST service takes a long time to do its job, 20+ minutes.
Receive the reply.
Do some more processing and auditing.
Write the response to one of several folders on an FTP site.
Use the header set at the start to know which folder to write to.
This is all configured in a single path of chained routes.
The problem is that the connection to the external REST service will time out while the service is still processing. The infrastructure is a bit complex (edge servers, load balancers, GlassFish), and regardless, I don't think increasing the timeout is the right solution.
How can I implement this route such that I avoid timeouts while still meeting all my requirements to (1) write the response to the appropriate FTP folder, (2) audit the transaction, and (3) meet other transaction/context-specific requirements?
I'm relatively new to Camel and REST, so maybe this is easy, but I don't know what Camel and REST tools and techniques to use.
(Questions and suggestions for improvement are welcome.)
Isn't it possible to break the two main steps apart and have two asynchronous operations?
I would do as follows.
Step 1:
Read a file from one of several folders on an FTP site.
Set a header to indicate which folder it was read from.
Save the header, the file name, and other relevant information in a cache. There is a Camel component called camel-cache that is relatively easy to set up, and you can store key-value pairs or other objects in it.
Do some processing and auditing, then asynchronously POST to the external REST service (JAX-RS 1.1, GlassFish, Java EE 6). Note that we are posting asynchronously here.
Step 2:
Receive the reply.
Look up the reply identifier (i.e. the file name or some other identifier) in the cache to match the reply, and fetch the stored header.
Do some more processing and auditing.
Write the response to one of several folders on the FTP site.
This way you don't need to wait, and processing can take 20 minutes or longer. Just set your cache values to not expire for, say, 24 hours.
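A minimal Java DSL sketch of the two steps, under some loud assumptions: the REST service is assumed to deliver its result to a callback endpoint (including the original file name), a plain in-memory map stands in for camel-cache to keep the example self-contained, and all URIs and header names are made up.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.camel.Exchange;
import org.apache.camel.builder.RouteBuilder;

/**
 * Sketch of the two asynchronous steps. Endpoint URIs, header names and the
 * callback contract are illustrative assumptions, not the original setup.
 */
public class AsyncRestRoutes extends RouteBuilder {

    // Remembers the source folder per file name until the reply arrives.
    private final Map<String, String> context = new ConcurrentHashMap<>();

    @Override
    public void configure() {
        // Step 1: pick up the file, remember where it came from, fire off the POST.
        from("ftp://user@ftphost/inbox?recursive=true&delete=true")
            .setHeader("sourceFolder", header(Exchange.FILE_PARENT))
            .process(e -> context.put(
                    e.getIn().getHeader(Exchange.FILE_NAME, String.class),
                    e.getIn().getHeader("sourceFolder", String.class)))
            // processing and auditing would go here
            .wireTap("direct:postToRest");   // fire-and-forget, no waiting for the reply

        from("direct:postToRest")
            .setHeader(Exchange.HTTP_METHOD, constant("POST"))
            .to("http4://resthost:8080/longRunningService");

        // Step 2: the REST service (assumed to support a callback) posts the
        // result back here, including the original file name as a header.
        from("jetty:http://0.0.0.0:9090/replies")
            .process(e -> {
                String fileName = e.getIn().getHeader("fileName", String.class);
                e.getIn().setHeader("sourceFolder", context.remove(fileName));
            })
            // more processing and auditing would go here
            .recipientList(simple("ftp://user@ftphost/outbox/${header.sourceFolder}"));
    }
}
```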
This is a typical asynchronous use case. Can the REST service give you a token id or some other unique id immediately after you call it?
Then you could have a batch job or another Camel route that picks up this id from a database/cache and calls the REST service again after 20 minutes.
This is the best solution I can think of, if the REST service can support it.
You are right, waiting for 20 minutes on a synchronous call is a crazy idea. Also, what is the estimated size of the file/payload which you are planning to post to the REST service?
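If the service can hand back such a token, the follow-up might look roughly like the sketch below. The token store (a hypothetical pending_request table), the status resource, and all URIs and header names are assumptions for illustration, not anything the REST service is known to provide.

```java
import org.apache.camel.Exchange;
import org.apache.camel.builder.RouteBuilder;

/**
 * Hypothetical polling route for the token-based variant: tokens returned by
 * the REST service are assumed to be stored in a "pending_request" table, and
 * the service is assumed to expose a status resource per token.
 */
public class PollPendingRequestsRoute extends RouteBuilder {

    @Override
    public void configure() {
        // Check every five minutes for requests that were submitted earlier.
        from("timer:pollPending?period=300000")
            .to("sql:select token from pending_request where done = false?dataSource=#myDataSource")
            .split(body())
                .setHeader("token", simple("${body[token]}"))
                .setHeader(Exchange.HTTP_METHOD, constant("GET"))
                // Ask the service whether the long-running job for this token is finished.
                .recipientList(simple(
                    "http4://resthost:8080/longRunningService/status/${header.token}?throwExceptionOnFailure=false"))
                .choice()
                    .when(header(Exchange.HTTP_RESPONSE_CODE).isEqualTo(200))
                        // Result is ready: hand over to auditing and the FTP upload.
                        .to("direct:handleReply")
                    .otherwise()
                        .log("Token ${header.token} not finished yet")
                .end();
    }
}
```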

wso2am Error while sending stream to wso2das

While trying to follow the instructions from the wso2am (1.10.0) manual regarding working with statistics with the wso2das (3.0.1) server, I have encountered a problem.
If I let the wso2am server define the stream when making the first call to the API, the wso2das server refuses to post statistics to the WSO2_STATS_DB.
If, on the other hand, I import the analytics.car file in wso2das (as stated here), I get an exception (AsyncDataPublisher Stream definition already exist) because the org.wso2.apimgt.statistics.request stream defined in the latest Analytics.car is different from the one being sent from wso2am.
I pinpointed the problem to the definition of Eventstream_request_1.0 in the files
org.wso2.apimgt.statistics.request_1.0.0.json,
throttledOutORG_WSO2_APIMGT_STATISTICS_REQUEST.xml,
where the definition of the throttledOut option is missing.
Is there a way to solve this issue?
Thank you.
I think your DAS is in some kind of a corrupted state. Can you first delete the car application (/repository/deployment/server/carbonapps), then log in to DAS, go to Manage > Event > Streams, and delete any existing streams? Then try again to deploy the car app in the /repository/deployment/server/carbonapps location.
If everything goes well, you should see two scripts in the Manage > Batch Analytics > Scripts section. Try to execute each script and see if there is any error. If not, then you can point the API Manager to DAS.

BPS process data fails to be deployed in DAS using KPISample

Using wso2bps-3.5.1, wso2das-3.0.1
Hi,
I've followed the instructions for deploying and testing the KPISample process that comes with BPS.
I'm able to include the extension bundle, deploy the project .zip file, and execute a couple of process calls without any errors.
But I'm not getting any stream definitions deployed in the DAS. According to the instructions, that is expected to happen automatically when sending data from BPS.
As I said, there are no errors in the log files on either DAS or BPS.
What am I missing here?
I had to create the stream and the receiver in DAS first. This is not mentioned in the tutorial.