Execute Python script in/from WSO2

I am quite new to WSO2, so maybe the question is trivial, but the platform is so big that I do not know where to start.
I would like to:
1. Read some data from some sensors.
2. Get this data into a Python script.
3. Perform calculations in the script.
4. Send data back to the sensors.
I guess I can do steps 1 and 4 with Stream Processor through HTTP requests (at least I can read the sensors and show the data in the SP editor console), but:
How do I collect the data and send it as input to the Python script? Can I achieve this with Stream Processor?
Can I execute Python in/from WSO2, or should it run apart from WSO2? If so, which component of WSO2 should I use?

I guess I can do steps 1 and 4 with stream processor
Can I achieve this with Stream Processor?
Why do you want to use WSO2 SP? The Stream Processor is intended to collect data and produce analytics, not really to invoke other services (it is possible, but not always feasible).
To process and pass data between systems, you may have a look at WSO2 EI (Enterprise Integrator; I mean the ESB functionality).
send it as input to the python script
The most straightforward way would be to expose your Python functions as services; see for example https://medium.com/@umerfarooq_26378/web-services-in-python-ef81a9067aaf
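For illustration, a minimal sketch of that approach with Flask (the route, port, and calculation are placeholders, not anything prescribed by WSO2):

# Hypothetical sketch: expose a Python calculation function over HTTP.
# pip install flask
from flask import Flask, jsonify, request

app = Flask(__name__)

def run_calculation(readings):
    # Stand-in for the real sensor calculation.
    return {"average": sum(readings) / len(readings)}

@app.route("/calculate", methods=["POST"])
def calculate():
    payload = request.get_json()  # e.g. {"readings": [20.1, 19.8, 20.4]}
    return jsonify(run_calculation(payload["readings"]))

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)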
An example mediation (flow) would be: read data from the sensors, send it to the ESB for processing, have the ESB call the backend Python service, modify the Python response so that it is consumable by WSO2 SP, and send it to SP for analytics.
This is just an example; we don't really know what you want to achieve or what you really need.
In theory you can invoke an external service directly from the SP, but it may have limited capability and service options compared to the EI.

Related

How can WSO2 collect data from multiple devices at multiple endpoints with the same payload?

I have 100 devices that perform simple calculations.
The only way to extract data from those devices is by their REST API.
I want to schedule a task every minute to collect any new data from those 100 devices.
Each device has its own API endpoint, and the payloads to collect the data are identical for every device. To be able to invoke the REST API I need to provide a valid token, which can be acquired by calling the authentication function (/auth/token) of each REST API endpoint with a specific username and password.
They all run the same version, so the exact same logic is needed to collect the data. I found out that WSO2 ESB can be used to collect the data.
What I've done so far:
I created an Enterprise Integration Connector for the devices.
I created a new Integration project in Integration Studio.
I used the connector and scheduled a task that runs the sequence logic to test data collection from one device.
Now I need to scale from collecting from 1 device to 100 devices at the same time.
How can I collect from all devices at once using the same logic with WSO2 ESB?
It seems that you have followed the correct approach. Yes, there is a significant change between the EI 6 series and the EI 7 series: the EI 6 series has the ESB, BPS, MB, and Analytics profiles in the same server, while the EI 7 series only has the ESB server. For your use case you need the ESB, so you can use either the EI 6.6.0 server or the EI 7.1.0 server.
If you only need to invoke a REST API, there is no need to use a connector; the scheduled task and the sequence would be sufficient. To implement the logic for 100 devices we need more information:
Do you have different API endpoints for each device?
Do you need different payloads to get information from the different APIs?
It depends on what you want as output: all of the collected data together, or a single result per device. As the source of your device endpoints you can use local entries, or embed them in the scheduler task as a message. Maybe you should also look at the split-aggregate pattern; a sketch of the per-device logic follows below.
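For illustration only, here is roughly what the per-device logic amounts to outside the ESB, sketched in Python (the URLs, field names, and credentials are hypothetical; inside the ESB this would be mediation XML with a split-aggregate flow):

# Hypothetical sketch: authenticate against each device, then pull its data.
# pip install requests
import concurrent.futures
import requests

# Hypothetical device list; in the ESB these could live in local entries.
DEVICES = ["https://device-%d.example.local" % i for i in range(1, 101)]

def collect(base_url):
    # Acquire a token from the device's auth endpoint.
    auth = requests.post(base_url + "/auth/token",
                         json={"username": "user", "password": "secret"},
                         timeout=10)
    token = auth.json()["token"]
    # Identical payload/logic for every device, as described above.
    data = requests.get(base_url + "/data",
                        headers={"Authorization": "Bearer " + token},
                        timeout=10)
    return base_url, data.json()

# Fan out over all 100 devices and aggregate the results.
with concurrent.futures.ThreadPoolExecutor(max_workers=20) as pool:
    for url, payload in pool.map(collect, DEVICES):
        print(url, payload)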

Writing to a specific already running process instead of creating a new instance of a process

I have written some code that calls an executable and passes arguments to it via the cmd line.
For example it will call
my_process.exe -A my_argument
What I want is for my program my_process to always be running, looking for user input, so that instead of creating a new instance of the process I can write my data/arguments to the existing process.
I understand that how I pass the parameters will change from the initial process start (argc, argv) to using stdin.
I also know that I will have to change how I am calling the process, but I am unsure of what I need to look into to get this done.
EDIT: so what I am trying to accomplish by doing all of this is below:
Website >> Web Service API >> Hardware API >> PLC
The website is on Server A; the web service and Hardware API are on Server B.
OS is Windows 10 Pro 64bit
PLC is a programmable logic controller
The website sends a POST to my web service. The web service calls the Hardware API, which in turn writes or reads data to/from the PLC.
This works when doing a single POST, but when doing multiple POSTs, if the connection from the Hardware API to the PLC is still open, it will fault.
The connection between the Hardware API and the PLC is like a COM port, not like a socket (which is misleading based on the programming manual).
So what I was trying to do was keep my Web API the same but create another process that takes all the results from the Web API, puts them in a FIFO, and then pops them off to the Hardware API (which I will always have running with a persistent connection to the PLC).
So really the Hardware API would always be running as a single process that gets data passed to it. The queue service would always be running, and the Web API would pass its results over to it.
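To make the idea concrete, a rough sketch of that queue service in Python (the real implementation would be native code; the port, wire format, and PLC call here are all placeholders):

# Hypothetical sketch: one long-lived process owns the PLC connection and
# drains a FIFO of requests, so concurrent POSTs never collide on the PLC.
import queue
import socketserver
import threading

requests_fifo = queue.Queue()

class Handler(socketserver.StreamRequestHandler):
    def handle(self):
        # The Web API connects and writes one command per line.
        for line in self.rfile:
            requests_fifo.put(line.strip())

def plc_worker():
    # The single, always-running owner of the PLC connection;
    # commands are popped off the FIFO one at a time.
    while True:
        command = requests_fifo.get()
        print("writing to PLC:", command)  # stand-in for the Hardware API call

threading.Thread(target=plc_worker, daemon=True).start()
with socketserver.ThreadingTCPServer(("127.0.0.1", 9000), Handler) as server:
    server.serve_forever()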
I have looked into the below:
https://www.boost.org/doc/libs/1_37_0/doc/html/interprocess.html
child/parent process/fork/file descriptors/dup/dup2
Any thoughts or advice is greatly appreciated.

Kafka Python producer integration with Django web app

I have a question about how we can integrate a Kafka producer with a front-end web app that gets data every minute or second. Can the web app pass the JSON object to a running producer each time one is created, or do we need to initiate the Kafka client each time we get a JSON object?
You would probably want to open a new producer for every session, not open and close one for each and every request. And this would be done on the backend, not the frontend.
But a web server containing a Kafka client is no different underneath the HTTP layer from a regular console app: you accept an incoming request, deserialize it, optionally parse it, serialize it again for Kafka output, then optionally render something back to the user.
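For example, a rough sketch of that shape with Django and kafka-python (the broker address, topic name, and view are made up for illustration):

# Hypothetical sketch: reuse one long-lived producer across requests
# instead of opening and closing a Kafka client per request.
# pip install kafka-python
import json
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt
from kafka import KafkaProducer

# Created once at module load and reused by every request.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

@csrf_exempt
def ingest(request):
    payload = json.loads(request.body)         # deserialize the incoming request
    producer.send("sensor-events", payload)    # serialize again for Kafka output
    return JsonResponse({"status": "queued"})  # render something back to the user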
If you're really asking "is Kafka with HTTP requests possible?", regardless of the language and platform, then sure: the Confluent REST Proxy operates similarly, only written in Java.
As far as web-app tracking goes, I would suggest looking into Divolte Collector.

Can BAM and CEP monitor requests from clients, like Zipkin?

I am wondering if I can use BAM and CEP to monitor requests from clients, and even find the bottleneck of the service.
I found Zipkin, a project that can do this, but my application is built on WSO2 and I don't want to bring in other projects from scratch.
Yes, you can use BAM/CEP for this. If you need real-time monitoring you can use CEP, and you can use BAM for batch processing. From BAM 2.4.0 onwards, CEP features have been added inside BAM as well, so you can use BAM to do real-time analytics.
What type of services are involved in your scenario? Depending on this, you can use an already existing data publisher, or write a new data publisher for BAM/CEP to publish your request details. For example, if you have a chain of Axis2 web service calls for a client request and you want to monitor where the bottleneck is or where the most time is consumed, you can use the existing service statistics publisher feature and monitor the average time taken to process the message, which will help you see where the actual delay is introduced. BAM also allows you to create your own dashboards, so you can customize the visualization.
BAM 2.4.0 also introduced a notifications feature: you can define a threshold value and configure BAM to send a notification when that threshold is crossed.

Using LoadRunner to Test Server Processes

We currently use LoadRunner for performance testing our web apps, but we also have some server side processes we need to test.
Background:
We call these processes our "engines". One engine receives messages by polling an IBM WebSphere MQ queue for messages. It takes a message off the queue, processes it, and puts the result on an outbound queue. We currently test this engine via a TCL script that reads a file containing the messages, puts the messages on the inbound queue, then polls the outbound queue for the results.
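(For reference, that put/poll harness amounts to roughly the following, sketched here in Python with pymqi for illustration; the queue manager, channel, host, and queue names are hypothetical.)

# Hypothetical sketch of the put/poll test harness described above.
# pip install pymqi (requires an IBM MQ client installation)
import pymqi

qmgr = pymqi.connect("QM1", "APP.SVRCONN", "mqhost(1414)")

# Put each message from the input file on the inbound queue.
inbound = pymqi.Queue(qmgr, "ENGINE.INBOUND")
for message in open("messages.txt"):
    inbound.put(message.strip().encode())
inbound.close()

# Poll the outbound queue for results, waiting up to 5 seconds per get.
gmo = pymqi.GMO()
gmo.Options = pymqi.CMQC.MQGMO_WAIT | pymqi.CMQC.MQGMO_FAIL_IF_QUIESCING
gmo.WaitInterval = 5000  # milliseconds
outbound = pymqi.Queue(qmgr, "ENGINE.OUTBOUND")
try:
    while True:
        print(outbound.get(None, pymqi.MD(), gmo))
except pymqi.MQMIError as e:
    # MQRC_NO_MSG_AVAILABLE means no more results within the wait interval.
    if e.reason != pymqi.CMQC.MQRC_NO_MSG_AVAILABLE:
        raise
finally:
    outbound.close()
    qmgr.disconnect()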
The other engine receives messages via a web service. The web service writes the message to a table in our database. The engine polls the database table for new messages, takes a message, processes it, and puts the result back into the database. We currently test this engine via a VBScript script that reads a file containing the messages, sends each message to the web service, then keeps querying the web service for the result until it's ready.
Question:
We'd like to do away with the TCL and VBScript scripts and standardize on LoadRunner so that we have one tool to manage all our performance tests.
I know LoadRunner supports a Web Services protocol "out of the box", but I'm not sure how to use it. Does anyone know of any examples of how to use LoadRunner to test a web service?
Does LoadRunner have a protocol for MQ? Is it possible to use a LoadRunner Vuser to drive load (put messages) into an MQ queue? Would we need to purchase something from HP or some other vendor to do this?
Thanks :)
There is an add-in for LoadRunner in the included software to interface with MQ Series and put the messages directly on the queue. Web services are fully supported as well, and VBScript is supported too; perhaps use QTPro for the script and a GUI Vuser in LoadRunner?
Colin.
For #1, as an alternative to a Web Services script, you could try recording a Windows Sockets script. I've used LoadRunner to record winsock scripts to test some (Java) APIs. What I did was write a really simple Java API client and then execute it from a Windows batch file. The batch file would then be referenced as the executable when recording an LR script in VUGen.
I'm not sure if VUGen can load a VBScript file for recording, but you might try. Otherwise, try wrapping your VBScript in a batch file that VUGen can run.
When VUGen records a winsock script, it's basically monitoring the network communication for the process you're recording. After you're done recording, it generates a dump of the network data in a "data.ws" worksheet that you can view and edit in VUGen. You can parameterize this data worksheet for your load tests.
One can code SOA requests and parse responses within LoadRunner.
See wilsonmar.com/1lrscript.htm.
But bear in mind that TCL and VBScript scripts developed for functional testing have a different architecture and scope than LoadRunner scripts. QTP and WinRunner take over the application;
LoadRunner scripts focus on the exchange of data across the wire. In the case of headless SOA XML, this architectural distinction doesn't matter.
However, it may be easier for you to maintain VBScript from the GUI side, because creating SOA scripts in LoadRunner requires a deeper understanding of message formats than most MQ developers have.
You really have three paths for pushing and popping messages off an MQ queue using LoadRunner:
(1) MQTester. This is a native MQ protocol add-in for use with LoadRunner.
(2) Winsock. Winsock development is best described as tediously similar to picking fly scat out of ground pepper: tedious, but in the end very rewarding. Out of the box, no additional add-ins are required except (possibly) license updates.
(3) JMS using a Java virtual user; see http://en.wikipedia.org/wiki/Java_Message_Service . You wind up with a small Java program in LoadRunner's Java template virtual user. You will have to deal with all of the Java black-magic aspects associated with LoadRunner, but once you nail down the combination of release and installation details, you can use virtually the same code to post to just about any JMS provider (not just MQ) with some connection factory settings changed.
You should be able to do JMS with the web services virtual user as well, but I have not tested that configuration. Look at the JMS section of the run-time settings.