Registering a NiFi template in Kylo with input options

I am trying to create a template in NiFi similar to the data ingest template provided by Kylo. Basically, I want to allow the user to select the input data source, which can be either a database or a file. If the user selects the file, the database processor should be disabled automatically.
I have created a template in NiFi and imported it into Kylo, but while creating a feed it does not show the feed input option.
How can I do this?

While registering the template, in the "Input Properties" section you have to select which properties should be shown in the feed creation UI for user input, i.e. enable "Allow user input?" for them.

I think the best approach here would be to use a RouteOnAttribute processor as the step after the data source/data type is chosen.
This way you don't have to overthink it.
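For illustration, a minimal sketch of that routing, assuming a hypothetical feed property source.type that the user sets during feed creation: each dynamic property you add to RouteOnAttribute becomes a relationship, with a NiFi Expression Language condition as its value.

    Routing Strategy : Route to Property name

    database : ${source.type:equals('database')}
    file     : ${source.type:equals('file')}

Connect the database branch to the "database" relationship and the file branch to the "file" relationship; only the branch matching the user's choice receives flow files.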

I have been working with Kylo for around 3 months now and surely know a thing or two about it.
When starting a feed, Kylo asks you which source you want to start the feed from, no matter whether you have a single processor or multiple processors that can act as data producers or fetchers. Once you select one and start the feed, the remaining source processors are disabled automatically by Kylo in the resulting deployment of the feed.

Related

Can Alerts be customized?

Is there a way to edit the information included in different alerts? For instance, if an alert doesn't include the Job # and the customer needs the Job # in all alerts in order to connect the dots, I would like to know whether there is a way to change what information is available.
Thank you!
Some of the alerts can be altered via configs. They have placeholders that can be changed (though only certain placeholders are available).
If you hover over the internal note it will show you the available placeholders for that template.
Search for the template with Ctrl+F and you should see the placeholders that appear.

WSO2 Siddhi RDBMS Store Extension - how to set batchEnable to false

I'm using Siddhi to create an app which also interacts with a PostgreSQL DB. Although I'm not sure, I believe there is a bug with making multiple updates on the same PG table within a single event (i.e. upon receiving an event, update a record in the table, and create another one again in the same table); it seems the batch updates are causing some problems. So, I just want to give it a try after disabling batchUpdate (it is enabled by default). I just don't know how to configure this using the siddhi-sdk (via the IntelliJ plugin). There are two related tickets:
https://github.com/wso2-extensions/siddhi-store-rdbms/issues/43
https://github.com/wso2/product-sp/issues/472
Until these are documented, I'd like a quick answer on how to set these fields.
Best regards...
When batchEnable is set to true, the insert/update operations are performed on batches of events instead of on each and every single event. Simply put, this was introduced to improve performance.
The default value of this parameter is currently set to "true".
However, batchEnable is configured through a system parameter called "{{RDBMS-Name}}.batchEnable", which has to be set in the WSO2 Stream Processor's deployment.yaml.
If you want to override this property in Product-SP, please find the steps below.
Open the deployment.yaml file located in {Product-SP-Home}/conf/editor/ and insert the following lines in the file:
siddhi:
  extensions:
    extension:
      name: store
      namespace: rdbms
      properties:
        PostgreSQL.batchEnable: false
But currently there is no way to overwrite those system configurations at the Siddhi app level. Since you are using the SDK, what you can do is change the default value of the above parameter to "false".
Please find the steps below to do it.
Find the siddhi-store-rdbms-4.x.xx.jar file in the Siddhi SDK. It is located in {siddhi-sdk-home}/lib/.
Open the jar file using an archive manager and open the rdbms-table-config.xml file located inside it with a text editor.
Set false in the <batchEnable>true</batchEnable> element under the <database name="PostgreSQL"> tag and save it.
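After the edit, the relevant entry should look roughly like this (other elements of the PostgreSQL section omitted):

    <database name="PostgreSQL">
        ...
        <batchEnable>false</batchEnable>
        ...
    </database>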
Thanks Raveen. With a simple dash (-) before "extension" I was able to set the config:
siddhi:
  extensions:
    - extension:
        name: store
        namespace: rdbms
        properties:
          PostgreSQL.batchEnable: false

Not able to delete multiple campaigns using Postman from Eloqua

I have been trying to delete multiple campaigns from Eloqua at a time using Postman, but I am not able to. I don't see a reference in the documentation either: http://docs.oracle.com/cloud/latest/marketingcs_gs/OMCAB/index.html#Developers/RESTAPI/REST-API.htm%3FTocPath%3D%2520Application%2520API%7C_____0.
Please let me know if deleting multiple campaigns is possible.
It is not possible.
The link you provided mentions that it's outdated and gives a redirect link: http://docs.oracle.com/cloud/latest/marketingcs_gs/OMCAC/rest-endpoints.html
Have a look at all the DELETE methods over there, and you will see that there is no provision for sending more than one id at a time.
Edit: You say you are using Postman. It is possible to perform repetitive tasks (like deleting multiple campaigns) with different parameters each time by using Collections.
Edit 2:
Create an environment.
Type your URL with the id as a variable, e.g.: xyz.com/delete/{{id}}
Then send all the id values as a JSON or CSV file. Postman provides a sample JSON; you would simply have to provide your ids inside an array, e.g.:
[
    {"id": 1},
    {"id": 2},
    {"id": 3}
]
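The variable then goes into the request itself. For example, assuming the standard Eloqua REST 2.0 campaign endpoint (the base URL depends on your Eloqua pod), the single request in the collection would be:

    DELETE https://secure.p01.eloqua.com/API/REST/2.0/assets/campaign/{{id}}

Running the collection with the data file above then issues one DELETE per id in the array.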

In Sitecore, how to view the content of the submit queue file?

In the Sitecore site, in Data/Submit Queue, there is a file without an extension that represents the content of the Submit Queue.
If you try viewing it as a text file, it shows some content, but there are some strange characters in the mix.
So, has someone made an application to view this file? Is it supposed to be in a specific format that should be opened with an application able to read that format?
Extra info: Sitecore 8.0; no, there is nothing about it in the Control Panel or in /sitecore/admin.
Mark is right, the submit queue isn't meant for users to view. A couple of months ago, I wrote a post on this exact subject.
https://citizensitecore.com/2016/07/01/xdb-session-info-and-mongodb-availability/
From Akinori Taira, a member of the xDB product team:
In the event that the collections database is unavailable, there is a special 'Submit Queue' mechanism that flushes captured data to the local hard drive (the 'Data\Submit Queue' folder by default). When the collections database comes back online, a background worker process submits the data from the 'Submit Queue' on disk.
No, you're not meant to open the Submit Queue or do anything with it.
It is used by xDB (in your case) to submit data when the xDB cannot be reached. It is in a format related to MongoDB in some way, but I've never seen any formal documentation for it.
References:
http://sitecoreart.martinrayenglish.com/2015/04/sitecore-xdb-cloud-edition-what-you.html
Sitecore 8.1: Purpose of Submit Queue and MediaIndexing folders under $(dataFolder)
This file contains the analytics data that was not flushed to the Mongo database.
In case the xDB collection server is unavailable, Sitecore must handle the situation correctly. A special 'Submit Queue' mechanism was introduced that flushes captured data to the local server hard drive (the 'Data\Submit Queue' folder by default) when xDB is not available.
When xDB is up again, a background worker submits the data saved on disk, so no data is lost.
As a quick suggestion, I recommend you check whether your MongoDB server is available to your Sitecore instance. Once it becomes available, all data from the file should be flushed to the xDB.
The submit queue file stores serialized values as follows: the first value is the number of entities, the second value is the position of the next entity to be submitted to xDB, and the following values contain the serialized analytics data.
The submit queue is processed by this class: Sitecore.Analytics.Data.DataAccess.SubmitQueue.FileSubmitQueue
If you want to debug it to see how it is processed, decompile the class, create your own class from it, and replace it in Sitecore.Analytics.Tracking.config:
<submitQueue>
    <queue type="Sitecore.Analytics.Data.DataAccess.SubmitQueue.FileSubmitQueue, Sitecore.Analytics" singleInstance="true" />
</submitQueue>
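For instance, after decompiling and adapting the class, the patched entry might look like this (MyCompany.Analytics.DebugSubmitQueue is a hypothetical name for your own implementation):

    <submitQueue>
        <queue type="MyCompany.Analytics.DebugSubmitQueue, MyCompany.Analytics" singleInstance="true" />
    </submitQueue>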

WOWZA LiveAutoRecord

I am tired of one problem, so please help me clear things up.
Please read the following three points and help me out.
(1)
I have simply followed this: https://www.wowza.com/docs/how-to-start-and-stop-live-stream-recordings-programmatically-livestreamrecordautorecord-example#documentation
I have attached my Application.xml. Now when I publish a live stream named "test1" via FMLE, it gets recorded on the server, but when I run a different instance of FMLE on a different PC and publish a live stream named "test2", it does not get recorded; I think it goes into the previously recorded file "test1" (meaning no separate file is recorded, although two files, test1 and test2, should be recorded).
Why is this happening?
Is com.wowza.wms.plugin.livestreamrecord.module.ModuleAutoRecordAdvancedExample meant for single-stream recording? That is, if I publish streams A, B, C, D, will it record them in one single file? (Probably the output file will be A.mp4, as A was the first published stream?)
(2) What is the module at https://www.wowza.com/docs/how-to-start-and-stop-live-stream-recordings-programmatically-imediastreamactionnotify3#comments for?
I have implemented this code in Eclipse, successfully put the jar in the lib folder, and configured everything. Again I am not able to record different streams under their corresponding names. If I publish stream1 and stream2, the desired output should be two different files (in the content folder), but again I see a single file being recorded.
(3) Can I use ModuleLiveStreamRecord.java? This was in an older version of Wowza, but I have properly imported the required jars and tested it.
My requirement is very simple: as soon as users start publishing, Wowza should start recording. If 10 users are publishing live, 10 files should be generated.
Don't make things more difficult than necessary (assuming you have Wowza 4.x; if you still have 3.x, then I highly recommend upgrading for free):
Open the Engine Manager (http://your.server.com:8088)
Go to "Applications" from the top menu
Select your application from the left menu (e.g. "live")
In the setup window for this application, click the blue Edit button
Enable "Record all incoming streams"
Click "Save"
Click the orange "Restart now" button at the top
Done
Every stream that is published via this application will now be recorded automatically. The default folder for recordings is the /content folder in your Wowza installation. You can change this on the same page under "Streaming File Directory" (make sure it's a directory on your local system, unless you understand really well how Wowza works).
The filename is always the stream name + ".mp4"; if you start a new recording while the file already exists, the old file is renamed first.
Want to control recording manually? Start publishing first, then select "Incoming streams" from the left menu and use the big red dot button behind a stream name to start recording.
If your server produces different behavior with regard to the file (re)naming or recording, you may need to review your Wowza setup.
I appreciate your response, KBoek.
I sorted out the issue, but real debugging was needed since I was writing a custom module. I had to write a custom module for live auto-recording because I wanted HTTP authentication and custom names for the live recordings.
Thanks again
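For anyone taking the same custom-module route, here is a minimal sketch, assuming the Wowza 4.x StreamRecorder API from the articles linked above (the package and class names are made up for illustration; HTTP authentication and custom file naming would be layered on top of this). It starts one recording per published stream, which matches the "10 publishers, 10 files" requirement:

    package com.example.wms.module; // hypothetical package name

    import com.wowza.wms.amf.AMFPacket;
    import com.wowza.wms.application.IApplicationInstance;
    import com.wowza.wms.livestreamrecord.manager.IStreamRecorderConstants;
    import com.wowza.wms.livestreamrecord.manager.StreamRecorderParameters;
    import com.wowza.wms.media.model.MediaCodecInfoAudio;
    import com.wowza.wms.media.model.MediaCodecInfoVideo;
    import com.wowza.wms.module.ModuleBase;
    import com.wowza.wms.stream.IMediaStream;
    import com.wowza.wms.stream.IMediaStreamActionNotify3;

    public class ModuleAutoRecordEachStream extends ModuleBase
    {
        private IApplicationInstance appInstance;

        public void onAppStart(IApplicationInstance appInstance)
        {
            this.appInstance = appInstance;
        }

        // Called for every new stream: attach a listener so we see publish/unpublish events.
        public void onStreamCreate(IMediaStream stream)
        {
            stream.addClientListener(new StreamListener());
        }

        private class StreamListener implements IMediaStreamActionNotify3
        {
            public void onPublish(IMediaStream stream, String streamName, boolean isRecord, boolean isAppend)
            {
                // Start a separate recording for this stream, named after the stream itself.
                StreamRecorderParameters params = new StreamRecorderParameters(appInstance);
                params.fileFormat = IStreamRecorderConstants.FORMAT_MP4;
                params.segmentationType = IStreamRecorderConstants.SEGMENT_NONE;
                params.versioningOption = IStreamRecorderConstants.VERSION_FILE; // rename an existing file instead of appending
                appInstance.getVHost().getLiveStreamRecordManager().startRecording(appInstance, streamName, params);
            }

            public void onUnPublish(IMediaStream stream, String streamName, boolean isRecord, boolean isAppend)
            {
                // Stop and finalize the recording when the publisher disconnects.
                appInstance.getVHost().getLiveStreamRecordManager().stopRecording(appInstance, streamName);
            }

            // The remaining callbacks of IMediaStreamActionNotify3 are not needed here.
            public void onPlay(IMediaStream stream, String streamName, double playStart, double playLen, int playReset) {}
            public void onPause(IMediaStream stream, boolean isPause, double location) {}
            public void onSeek(IMediaStream stream, double location) {}
            public void onStop(IMediaStream stream) {}
            public void onMetaData(IMediaStream stream, AMFPacket metaDataPacket) {}
            public void onPauseRaw(IMediaStream stream, boolean isPause, double location) {}
            public void onCodecInfoVideo(IMediaStream stream, MediaCodecInfoVideo codecInfoVideo) {}
            public void onCodecInfoAudio(IMediaStream stream, MediaCodecInfoAudio codecInfoAudio) {}
        }
    }

Register the module in the application's Application.xml Modules list, as with any other Wowza module.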