I am working on a use case where a mapping has to be done from CSV to CSV, using WSO2 EI 6.3.0.
I have a requirement to add pipe delimiters to the output file. The input file I get from the database has pipe delimiters, but when the message goes through the Data Mapper they do not make it to the final output, and it inserts ',' on its own.
Is there a way to get pipe delimiters in the message in WSO2 EI?
Can someone please help....
Thanks
Depending on the OS you're running and how things are configured, there may be Regional Settings that override the default delimiter. For instance, on Windows 10: hit Start -> type "Region and Language settings" -> click "Additional Date, Time, & Regional Settings" -> click "Change Date, Time, or Number Formats" -> "Additional Settings..." -> find the "List separator" value, change it to a pipe, restart your runtime, and then test.
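(For reference, that UI setting is stored in the registry as the sList value under HKEY_CURRENT_USER\Control Panel\International, so if you script your machine setup you can change it with, for example:
reg add "HKCU\Control Panel\International" /v sList /d "|" /f
and then restart the runtime as above.)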
I need to configure live streaming using the OBS Studio software. Live streaming has already been set up on AWS using Elemental MediaLive (auto wizard), but I am unable to figure out where to find the server address and stream key that are required to configure OBS Studio.
Can someone please guide me to where I can find the above information in the AWS console?
Thanks
That is the "stream name" in the RTMP ingest point URL. The format would be something like
rtmp://domain:1935/live/streamname
so the stream key in OBS is streamname.
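For example (the address is illustrative), if your MediaLive ingest endpoint is rtmp://203.0.113.10:1935/live/streamname, you would enter in OBS:
Server: rtmp://203.0.113.10:1935/live
Stream key: streamname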
For details, please refer to pages 7 and 8 of this PDF: https://d2908q01vomqb2.cloudfront.net/fb644351560d8296fe6da332236b1f8d61b2828a/2020/04/14/Connecting-OBS-Studio-to-AWS-Media-Services-in-the-Cloud-v2.pdf
Have a look here:
https://aws.amazon.com/de/blogs/media/connecting-obs-studio-to-aws-media-services-in-the-cloud/
Especially this document:
https://d2908q01vomqb2.cloudfront.net/fb644351560d8296fe6da332236b1f8d61b2828a/2020/04/14/Connecting-OBS-Studio-to-AWS-Media-Services-in-the-Cloud-v2.pdf
Where it says:
STEP C: CONFIGURE THE APPLIANCE
Launch OBS Studio on the source system. Choose Settings to open the settings window. Choose Stream to access the streaming settings.
Complete the fields:
For Stream Type, choose Custom Streaming Server.
For URL, copy one of the endpoint URLs from the input you created in Step B. Remove the /<stream_name> at the end of the URL.
For Stream key, type the stream name.
Leave the Use authentication box unchecked.
Choose Apply to save your changes.
What OBS refers to as "stream key" is the App Instance.
Regards,
I have created a simple mapping to convert a file from one format to another. The PowerCenter 9.5.1 client is running on a Windows machine. I am NOT able to set the full "Source filename" correctly in the Workflow Session.
When I set it to "c:\temp\people.csv", somehow the backslashes get converted to forward slashes, resulting in a file reading error: READER_1_1_1 FR_3000 Error opening file [c:/temp/people.csv].
I tried the URL format "file:\c:\temp\people.csv", but it did not work either. "c:/temp/people.csv" does not work, either.
Please note, I tried using both "Source file directory" and "Source filename", but the slash conversion still takes place, resulting in the error.
Any suggestions? Is there any setting to keep paths as-is, Windows-style? TIA
It may not be a slash issue. It seems you're referring to a local path, where your Informatica client is installed. Please note, mappings get executed on the Informatica Integration Service, so you need to point to a path accessible by that server.
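For example, if the Integration Service runs on a Unix host, stage the file there and set (paths are illustrative):
Source file directory: /data/input/
Source filename: people.csv
If the Integration Service itself runs on Windows, a plain c:\temp\people.csv (no extra quotes) would be resolved on that server's disk, not on your client machine.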
I have set up logging of Pentaho jobs and transformations to a database.
This works fine provided I define every job and every transformation in its individual log settings dialogue.
I see that I can configure the kettle properties file to hold these values.
However, I can't get this to be inherited automatically in a transformation when it is called by a job. I assume that if the values are defined in the properties file, they should just be inherited and work.
Any ideas on what I am missing?
Thanks
(MS Windows environment with MS SQL Server; we don't have Pentaho Enterprise.)
You can do it by adding the entries below to the "kettle.properties" file.
# Kettle logging properties
KETTLE_TRANS_LOG_DB=
KETTLE_TRANS_LOG_SCHEMA=
KETTLE_TRANS_LOG_TABLE=etl_trans_log
KETTLE_JOB_LOG_DB=
KETTLE_JOB_LOG_SCHEMA=
KETTLE_JOB_LOG_TABLE=etl_job_log
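For example, if you have a database connection named "log_db" and a schema "dbo" (both names are illustrative), the filled-in entries could be:
KETTLE_TRANS_LOG_DB=log_db
KETTLE_TRANS_LOG_SCHEMA=dbo
KETTLE_TRANS_LOG_TABLE=etl_trans_log
KETTLE_JOB_LOG_DB=log_db
KETTLE_JOB_LOG_SCHEMA=dbo
KETTLE_JOB_LOG_TABLE=etl_job_log
Note that the connection named in KETTLE_TRANS_LOG_DB / KETTLE_JOB_LOG_DB must exist in (or be shared with) every job and transformation that should log to it.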
OK, so I have found that, provided I set the properties file on the machine and then set each transformation (by right-clicking and setting each log to have the connection), when I call the job it all logs correctly.
So you need the database connection in all transformations, and you need to set it as the default in the logging tab.
I think this is right, anyway, unless someone else has a shortcut.
I am trying to create a template in NiFi, just like the data ingest template provided by Kylo.
Basically, I want to allow the user to select the input data source; it can be a database or a file. If the user selects file, then the database processor should automatically get disabled.
I have created a template in NiFi and imported it into Kylo, but while creating a feed it does not show the feed input option.
How can I do this?
While registering the template, in the "Input Properties" section, you have to select which properties should be shown in the feed creation UI for user input, i.e., enable "Allow user input?".
I think the best approach here would be to use a RouteOnAttribute processor as the step after the data source / data type is chosen.
This way you don't have to overthink it.
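For example (the property names and the source.type attribute are illustrative, assuming an upstream step sets that attribute), RouteOnAttribute could carry two dynamic properties using NiFi Expression Language:
file = ${source.type:equals('file')}
database = ${source.type:equals('database')}
Each dynamic property becomes a relationship, so you can wire the "file" relationship to the file-handling path and the "database" relationship to the database path, and only one branch receives the flowfile.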
I have been working with Kylo for around 3 months now and surely know a thing or two about it.
When starting a feed, Kylo asks you which source you want to start the feed from, no matter whether you have a single processor or multiple processors that can act as data producers or fetchers. Once you select one and start the feed, the rest of the source processors get disabled automatically by Kylo in the resulting deployment of the feed.
I am tired of fighting one problem, so please make things clear to me.
Please read the following three points and help me out.
(1)
I have simply followed this guide: https://www.wowza.com/docs/how-to-start-and-stop-live-stream-recordings-programmatically-livestreamrecordautorecord-example#documentation
I have attached my Application.xml. Now, when I publish a live stream named "test1" via FMLE, it gets recorded on the server, but when I run a different instance of FMLE on a different PC and publish a live stream named "test2", it does not get recorded; I think it goes into the previously recorded file "test1" (meaning no separate file is recorded, although there should be two recorded files, test1 and test2).
Why is this happening?
Is com.wowza.wms.plugin.livestreamrecord.module.ModuleAutoRecordAdvancedExample for single-stream recording? Meaning, if I publish streams A, B, C, and D, will it record them all in one single file? (Probably the output file will be A.mp4, as A was the first published stream?)
(2) What is the module at https://www.wowza.com/docs/how-to-start-and-stop-live-stream-recordings-programmatically-imediastreamactionnotify3#comments for?
I have implemented this code in Eclipse, successfully put the JAR in the lib folder, and configured everything. Yet again I am not able to record different streams with their corresponding names. Meaning, if I publish stream1 and stream2, the desired output should be two different files (in the content folder), but again I see one single file being recorded.
(3) Can I use ModuleLiveStreamRecord.java? This was in an older version of Wowza, but I have properly imported the required JAR and tested it.
My requirement is very simple:
As soon as users start publishing, Wowza should start live recording. If 10 users are publishing live, 10 files should be generated.
Don't make things more difficult than necessary (assuming you have Wowza 4.x; if you still have 3.x, then I highly recommend upgrading, which is free):
Open the Engine Manager (http://your.server.com:8088)
Go to "Applications" from the top menu
Select your application from the left menu (e.g. "live")
In the setup window for this application, click the blue Edit button
Enable "Record all incoming streams"
Click "Save"
Click the orange "Restart now" button at the top
Done
Every stream that is published via this application will now automatically be recorded. The default folder for recordings is the /content folder in your Wowza installation. You can change this on the same page under "Streaming File Directory" (make sure it's a directory on your local system, unless you understand really well how Wowza works).
The filename is always the streamname + ".mp4", but when you start a new recording while the file already exists, the old file will be renamed first.
Want to control recording manually? Start publishing first, then select "Incoming streams" from the left menu and use the big red dot button behind a stream name to start recording.
If your server produces any different behavior with regards to the file (re)naming or recording, then you may need to review your Wowza setup.
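If you do want the programmatic approach from the modules you linked instead, a per-stream recording module for Wowza 4.x might look roughly like the sketch below. It is based on the StreamRecorder API that those articles describe (one recorder per published stream, so test1 and test2 land in separate files); the package name is hypothetical, and treat this as an illustration to adapt, not a tested drop-in module.

package com.mycompany.wowza; // hypothetical package name

import com.wowza.wms.amf.AMFPacket;
import com.wowza.wms.application.IApplicationInstance;
import com.wowza.wms.livestreamrecord.manager.IStreamRecorderConstants;
import com.wowza.wms.livestreamrecord.manager.StreamRecorderParameters;
import com.wowza.wms.media.model.MediaCodecInfoAudio;
import com.wowza.wms.media.model.MediaCodecInfoVideo;
import com.wowza.wms.module.ModuleBase;
import com.wowza.wms.stream.IMediaStream;
import com.wowza.wms.stream.IMediaStreamActionNotify3;

public class ModuleRecordEachStream extends ModuleBase {

    // Wowza calls this once for every stream created on the application instance.
    public void onStreamCreate(IMediaStream stream) {
        stream.addClientListener(new StreamListener());
    }

    private class StreamListener implements IMediaStreamActionNotify3 {

        public void onPublish(IMediaStream stream, String streamName, boolean isRecord, boolean isAppend) {
            if (stream.isTranscodeResult())
                return; // don't record transcoder renditions
            IApplicationInstance appInstance = stream.getStreams().getAppInstance();
            StreamRecorderParameters params = new StreamRecorderParameters(appInstance);
            params.fileFormat = IStreamRecorderConstants.FORMAT_MP4;
            params.segmentationType = IStreamRecorderConstants.SEGMENT_NONE;
            // One recorder per stream, keyed by the stream itself, so each
            // publisher gets its own file (streamName.mp4 by default).
            appInstance.getVHost().getLiveStreamRecordManager().startRecording(stream, params);
            getLogger().info("Started recording " + streamName);
        }

        public void onUnPublish(IMediaStream stream, String streamName, boolean isRecord, boolean isAppend) {
            IApplicationInstance appInstance = stream.getStreams().getAppInstance();
            appInstance.getVHost().getLiveStreamRecordManager().stopRecording(stream);
            getLogger().info("Stopped recording " + streamName);
        }

        // The remaining IMediaStreamActionNotify3 callbacks are not needed here.
        public void onPlay(IMediaStream stream, String streamName, double playStart, double playLen, int playReset) {}
        public void onPause(IMediaStream stream, boolean isPause, double location) {}
        public void onSeek(IMediaStream stream, double location) {}
        public void onStop(IMediaStream stream) {}
        public void onMetaData(IMediaStream stream, AMFPacket metaDataPacket) {}
        public void onPauseRaw(IMediaStream stream, boolean isPause, double location) {}
        public void onCodecInfoVideo(IMediaStream stream, MediaCodecInfoVideo codecInfoVideo) {}
        public void onCodecInfoAudio(IMediaStream stream, MediaCodecInfoAudio codecInfoAudio) {}
    }
}

Compile it against the Wowza server JARs, put the JAR in [install-dir]/lib, and register the module in your application's Application.xml, as in the guides you followed.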
I appreciate your response, KBoek.
I sorted out the issue, but real debugging was needed since I was writing a custom module. I had to write a custom module for live auto-recording because I wanted HTTP authentication and then a custom name for the live recording.
thanks again