WSO2 CEP: export event streams, event receivers, etc.

I am new to WSO2 CEP and would like to ask if anybody knows how I can export the event streams, event receivers, etc. that I have created in the WSO2 CEP Management Console, in order to have a backup.
Thanks in advance!

CEP deployable artifacts that are created from the Management Console are stored in the file system. You can find them in the <CEP_HOME>/repository/deployment/server folder:
Event Streams are in the eventstreams directory.
Event Receivers are in the eventreceivers directory.
Event Publishers are in the eventpublishers directory.
Execution Plans are in the executionplans directory.
Then just back up the above directories (see the sketch below).
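For illustration, here is a minimal backup sketch in Python, assuming the default <CEP_HOME>/repository/deployment/server layout; the CEP home and destination paths are placeholders, not values from the question.

```python
# Minimal backup sketch (assumption: default CEP deployment layout).
# Copies the artifact directories out of <CEP_HOME>/repository/deployment/server
# into a timestamped backup folder.
import shutil
from datetime import datetime
from pathlib import Path

CEP_HOME = Path("/opt/wso2cep")          # placeholder, adjust to your installation
SERVER_DIR = CEP_HOME / "repository" / "deployment" / "server"
BACKUP_ROOT = Path("/backups/cep")       # placeholder destination

ARTIFACT_DIRS = ["eventstreams", "eventreceivers", "eventpublishers", "executionplans"]

def backup_artifacts():
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    target = BACKUP_ROOT / stamp
    for name in ARTIFACT_DIRS:
        src = SERVER_DIR / name
        if src.is_dir():
            # copytree copies the directory contents as-is into the backup folder
            shutil.copytree(src, target / name)
    return target

if __name__ == "__main__":
    print("Backed up to", backup_artifacts())
```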

When you create artifacts from the Management Console, they are persisted in the file system under CEP_HOME/repository/deployment/server. Under this directory you can find event streams, receivers, execution plans, etc., so you can back up this folder to make a backup of your current artifacts.
Regards

Related

Informatica - Trigger next workflow upon completion of the first workflow

I am working in Informatica to automatically run Workflow B upon completion of Workflow A. I researched how to do this and the best approach I found uses PMCMD, but I cannot find the pmcmd.exe file in the installation folder of my Informatica PowerCenter. I am using version 8.1.1 and don't know if pmcmd is available in this version. Kindly advise on alternative solutions. Thank you in advance.
It's possible with the pmcmd utility, but there's another option. You can use an Event Wait task in Workflow B, right after the Start task, and make it wait for a flat file, e.g. workflowA.done. Then add a Command task as the last task in Workflow A to run a touch workflowA.done command. Use the appropriate path for your case (it might be $PMTargetFileDir, for example).
Start both workflows at the same time; Workflow B will process its tasks after the control file gets created.
pmcmd.exe is available in the Informatica installation folder on the Informatica server.
On my system it was in the following path:
/infa/server/bin
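If you do go the pmcmd route, here is a rough sketch of invoking it from a script. The service, domain, credential, and folder names are placeholders, and the flag set should be verified against the pmcmd documentation for your PowerCenter version.

```python
# Rough sketch: start Workflow B via pmcmd from a script.
# All names below are placeholders; verify the flags against your version's
# pmcmd documentation before relying on this.
import subprocess

PMCMD = "/infa/server/bin/pmcmd"  # path mentioned above; adjust as needed

subprocess.run(
    [
        PMCMD, "startworkflow",
        "-sv", "IntegrationService",   # Integration Service name (placeholder)
        "-d", "Domain_example",        # domain name (placeholder)
        "-u", "repo_user",             # repository user (placeholder)
        "-p", "repo_password",         # repository password (placeholder)
        "-f", "MyFolder",              # repository folder (placeholder)
        "Workflow_B",
    ],
    check=True,
)
```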
Usually this is controlled by an external, independent scheduler.

Update wowza StreamPublisher schedule via REST API (or alternative)

Just getting started with Wowza Streaming Engine.
Objective:
Set up a streaming server that live-streams existing video (from S3) on a pre-defined schedule (think of a TV channel that streams linearly; you're unable to seek through it).
Create a separate admin app that manages that schedule and updates the streaming app accordingly.
Accomplish this with as little custom Java as possible.
Questions:
Is it possible to fetch / update streamingschedule.smil with the Wowza Streaming Engine REST API?
There are methods to retrieve and update specific SMIL files via the REST API, but they only seem to apply to files created through the manager. After all, streamingschedule.smil needs to be created by hand.
Alternatively, is it possible to reference a streamingschedule.smil that exists on an S3 bucket? (In a similar way footage can be linked from S3 buckets with the use of the MediaCache module)
A comment here (search for '3a') seems to indicate it's possible, but there's a lot of noise in that thread.
What I've done:
Set up Wowza Streaming Engine 4.4.1 on EC2
Enabled REST API documentation
Created a separate S3 bucket and filled it with pre-recorded footage
Enabled MediaCache on the server which points to the above S3 bucket
Created a customised VOD edge application, with AppType set to Live and StreamType set to live in order to be able to point to the above (as suggested here)
Created a StreamPublisher module with a streamingschedule.smil file
The above all works, and I have a working schedule with linearly streaming content pulled from an S3 bucket. I just need a way to manipulate that schedule easily without manually editing the file over SSH.
So close! TIA
To answer your questions:
No. However, you can update it by creating an HTTP provider and having it handle the modifications to that schedule. Should you want more flexibility here, you can even extend the scheduler module to not require that file at all.
Yes. You would have to modify the ServerListenerStreamPublisher solution to accomplish it. Currently it looks only at the local filesystem to read the streamingschedule.smil file.
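If you stay with the file-based schedule in the meantime, one option is to script edits to streamingschedule.smil and then restart or reload the application. Below is a rough sketch assuming the stock StreamPublisher SMIL layout (stream/playlist elements); the file path, element names, and attribute names are assumptions and should be checked against your actual schedule file.

```python
# Rough sketch: append a playlist entry to streamingschedule.smil.
# Assumes the stock StreamPublisher layout (<smil><body><stream/><playlist/>...);
# verify element and attribute names against your own schedule file.
import xml.etree.ElementTree as ET

SCHEDULE = "/usr/local/WowzaStreamingEngine/content/streamingschedule.smil"  # placeholder path

def add_playlist(name, stream, src, start_time):
    tree = ET.parse(SCHEDULE)
    body = tree.getroot().find("body")

    playlist = ET.SubElement(body, "playlist", {
        "name": name,
        "playOnStream": stream,
        "repeat": "false",
        "scheduled": start_time,           # e.g. "2016-06-01 20:00:00"
    })
    ET.SubElement(playlist, "video", {
        "src": src,                        # e.g. "mp4:mediacache/s3/footage/episode1.mp4"
        "start": "0",
        "length": "-1",
    })
    tree.write(SCHEDULE)

add_playlist("evening-show", "Stream1",
             "mp4:mediacache/s3/footage/episode1.mp4", "2016-06-01 20:00:00")
```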
Thanks,
Matt

Best way to automate a process to be run from command line (via AWS)

I am working on a web application to provide software as a web-based service using AWS, but I'm stuck on the implementation.
I will be using a Content Management System (probably Joomla) to manage user logins and front-end tasks such as receiving file uploads as the input. The program that provides the service needs to be run from the command line. However, I am not sure of the best way to automate this process (starting the program once the input file has been received). It is an intensive program that will take at least an hour per run, and runs should happen sequentially if there is more than one input at a time, so there needs to be a queue where each element records the file path of the input file, the file path of the output folder, and ideally the email address to notify when the job is done.
I have looked into AWS Data Pipeline, Simple Workflow Service, Simple Queue Service, and Simple Notification Service, but I'm still not sure exactly how these could be used to trigger the start of the process once the input file is uploaded.
Any help would be greatly appreciated!
There are a number of ways to architect this type of process; here is one approach that would work (a minimal sketch of the queue handling follows below):
On upload, put the file into an S3 bucket so that it can be accessed by any instance later.
Within the upload process, send a message to an SQS queue that includes the bucket/key of the uploaded file and the email address of the user who uploaded it.
Either with Lambda, or with a cron process on a purpose-built instance, check the SQS queue and process each request.
At the end of the processing phase, send the email notification to the user when the job is complete.
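Here is a minimal sketch of the queue side of this with boto3; the queue URL, region, bucket, and email handling are placeholders, and error handling plus the actual processing and SES/SNS setup are omitted.

```python
# Minimal sketch of the SQS producer/consumer pattern described above.
# Queue URL, region, and notification details are placeholders.
import json
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/jobs"  # placeholder

def enqueue_job(bucket, key, email):
    """Called by the upload handler after the file lands in S3."""
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"bucket": bucket, "key": key, "email": email}),
    )

def poll_and_process():
    """Run from cron on the worker instance (or adapt the body as a Lambda handler)."""
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        job = json.loads(msg["Body"])
        run_processing(job["bucket"], job["key"])   # the hour-long command-line program
        notify(job["email"])                        # e.g. via SES or SNS
        # Delete only after successful processing so failed jobs reappear in the queue
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])

def run_processing(bucket, key):
    # Download the input from S3 and invoke the command-line program here (placeholder).
    pass

def notify(email):
    # Send the completion email here (placeholder).
    pass
```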
You can absolutely use Data Pipeline to automate this process.
Take a look at managed preconditions and the following samples:
http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-concepts-preconditions.html
https://github.com/awslabs/data-pipeline-samples/tree/master/samples

WSO2 CEP STREAMS

I created input and output streams in my WSO2 CEP (v3.1.0), along with an event formatter and event builder. I need to find out where these streams are stored in the WSO2 CEP directory structure, because I can't find anything beyond the event builder and formatter (wso2cep-3.1.0\repository\deployment\server).
Does anyone know where I can find these stream files?
Kacu
I managed to load streams via XML (only during startup) by modifying the stream-definitions.xml file in the wso2cep-3.1.0/repository/conf/data-bridge folder.
You can take a look at this page in the documentation for more details; just keep in mind that the location given in the documentation doesn't match what I found on the server.
In CEP 3.1.0, event streams are stored in the registry (which ships with CEP), not in the filesystem. Streams can be found under the governance section of the registry (see the streamdefinitions subdirectory).
Regards,
Mohan

Automated deployment of DSS datasource configuration

We have a "mavenized" project with several containers (wso2esb, wso2dss, tomcat) and many components to deploy to them.
We are trying to find a way to deploy the datasource configuration for all our DSS services, but I notice it is stored in its own DB (H2).
Do you know if there is any way to declare something like an XML file in order to create the datasources in DSS in an automated way?
I checked the documentation but did not find anything useful for automated deployment (meaning without using the admin pages).
Yes, you can use the Carbon data source configuration file, datasources.properties, to provide this information. This file should be located in $SERVER_ROOT/repository/conf.
A sample of this configuration file can be found in the BPS sources.
After the data sources are defined this way, you can use them from data services by selecting the "carbon data source" data source type.
You can easily deploy artifacts with the hot deployment functionality in WSO2 servers by simply copying them to a specific directory in the server.
For the Data Services Server, you can copy the .dbs files (in your case with the help of Maven) to the $WSO2DSS_HOME/repository/deployment/server/dataservices directory (a minimal copy sketch follows after this list). Similarly, for BPELs it is $WSO2BPS_HOME/repository/deployment/server/bpel.
For CAR files created with Carbon Studio, it is $WSO2CARBON_HOME/repository/deployment/server/carbonapps.
For ESB configurations, it is $WSO2ESB_HOME/repository/deployment/server/synapse-configs.
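If you'd rather not wire the copy step into Maven straight away, the same hot deployment can be scripted. A minimal sketch, where both paths are placeholders for your build output and DSS installation:

```python
# Minimal sketch: copy .dbs artifacts into the DSS hot deployment directory.
# Both paths are placeholders; adjust to your build output and DSS installation.
import shutil
from pathlib import Path

BUILD_OUTPUT = Path("target/dataservices")  # where the build places the .dbs files (placeholder)
DSS_DEPLOY = Path("/opt/wso2dss/repository/deployment/server/dataservices")  # placeholder

for dbs in BUILD_OUTPUT.glob("*.dbs"):
    shutil.copy2(dbs, DSS_DEPLOY / dbs.name)   # hot deployment picks the file up automatically
    print("Deployed", dbs.name)
```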