WSO2 Stream Processor unable to save file to workspace

I installed WSO2 Stream Processor on an Ubuntu server.
Then I started the editor and accessed the Stream Processor Studio from another PC through a browser.
I am unable to save a file to the workspace ("Error in reading the file location undefined").
What's wrong? Did I make a mistake?

Support for accessing a remote filesystem was blocked in v4.1.0, since Product SP was using a custom file importer. This has been fixed by integrating with the OS-native file browser in #565. The fix is available with v4.2.0-rc1.
Product SP 4.2.0 will be released by the end of this month.

Related

Java EE Webservice - How to save JSON without a database

I have a REST service with a simple GET and POST method in Java EE. The POST method saves the received JSON to a file using Gson and a FileWriter. On my local system the file is saved in C:\Users...\Documents\Glassfish Domains\Domain\config. The GET method reads this file and returns the JSON.
When I test this on my local system using Postman, everything works fine, but when I deploy the project on an Ubuntu Server VM with GlassFish installed, I am able to connect but I get an HTTP 500 Internal Server Error. I managed to find out that the errors are thrown when the FileReader/FileWriter does its work. I suppose that access to this directory is restricted on a real GlassFish instance.
So my question is whether there is a file path where I am allowed to write a file and read it afterwards. The file has to stay there (at least while the application runs) and has to be the same for every request (a scheduler writes some data into the file every 24 hours). If anyone has a simple alternative for saving the JSON in Java EE without an extra database instance, that would be helpful, too :)
If you have access to the server, you can create a directory owned by the GlassFish server user. Configure this path in a property file in your application and then use that property for reading and writing the file. This way you can configure different directory paths in different environments, as sketched below.
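A minimal sketch of that suggestion, not the poster's actual code: it assumes a properties file named storage.properties on the classpath with a made-up key data.dir, a made-up file name payload.json, and Gson for serialization (which the question already uses). Adjust names and paths to your own setup.

import com.google.gson.Gson;
import java.io.IOException;
import java.io.InputStream;
import java.io.Reader;
import java.io.Writer;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Properties;

public class JsonFileStore {

    private final Path dataFile;
    private final Gson gson = new Gson();

    public JsonFileStore() throws IOException {
        // Load the configured directory; "data.dir" is a hypothetical property name.
        Properties props = new Properties();
        try (InputStream in = JsonFileStore.class.getResourceAsStream("/storage.properties")) {
            props.load(in);
        }
        Path dir = Paths.get(props.getProperty("data.dir", "/var/lib/myapp"));
        Files.createDirectories(dir);          // create the directory if it is missing
        this.dataFile = dir.resolve("payload.json");
    }

    // Called from the POST method: serialize the payload to the configured file.
    public void save(Object payload) throws IOException {
        try (Writer out = Files.newBufferedWriter(dataFile)) {
            gson.toJson(payload, out);
        }
    }

    // Called from the GET method: read the file back into an object of the given type.
    public <T> T load(Class<T> type) throws IOException {
        try (Reader in = Files.newBufferedReader(dataFile)) {
            return gson.fromJson(in, type);
        }
    }
}

The property file then needs only one entry per environment, e.g. data.dir pointing at a directory the GlassFish user on the Ubuntu server is allowed to write to.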

WSO2 AM 1.8 integration with WSO2 BAM 2.5

I am trying to integrate WSO2 BAM 2.5 with WSO2 AM 1.8 by referring to this document: https://docs.wso2.com/display/AM150/Monitoring+Using+WSO2+BAM. But I am getting an error in the BAM console. I configured it on Windows 7. Any help to fix it is appreciated, thanks in advance.
Error
[2015-01-13 14:34:03,045] WARN {org.apache.hadoop.security.ShellBasedUnixGroupsMapping} - got exception trying to get groups for user Jamshi
org.apache.hadoop.util.Shell$ExitCodeException: id: Jamshi: no such user
at org.apache.hadoop.util.Shell.runCommand(Shell.java:255)
at org.apache.hadoop.util.Shell.run(Shell.java:182)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:375)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:461)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:444)
at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:68)
at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:45)
at org.apache.hadoop.security.Groups.getGroups(Groups.java:79)
at org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1030)
at org.apache.hadoop.hive.ql.security.HadoopDefaultAuthenticator.setConf(HadoopDefaultAuthenticator.java:62)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
at org.apache.hadoop.hive.ql.metadata.HiveUtils.getAuthenticator(HiveUtils.java:222)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:270)
at org.apache.hadoop.hive.service.HiveServer$HiveServerHandler.<init>(HiveServer.java:136)
at org.apache.hadoop.hive.jdbc.HiveConnection.<init>(HiveConnection.java:63)
at org.apache.hadoop.hive.jdbc.HiveDriver.connect(HiveDriver.java:104)
at java.sql.DriverManager.getConnection(DriverManager.java:579)
at java.sql.DriverManager.getConnection(DriverManager.java:221)
at org.wso2.carbon.analytics.hive.impl.HiveExecutorServiceImpl$ScriptCallable.getConnection(HiveExecutorServiceImpl.java:577)
at org.wso2.carbon.analytics.hive.impl.HiveExecutorServiceImpl$ScriptCallable.call(HiveExecutorServiceImpl.java:224)
at org.wso2.carbon.analytics.hive.impl.HiveExecutorServiceImpl$ScriptCallable.call(HiveExecutorServiceImpl.java:192)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)
When installing BAM on Windows, please make sure to follow the corresponding Windows-specific doc: https://docs.wso2.com/display/BAM241/Installing+on+Windows
According to the doc, install Cygwin into the C drive, make sure there are no spaces in the file path, and add "C:\cygwin64\bin" to the Windows PATH. Also make sure the user running BAM has rights to run Cygwin commands (by default nothing needs to be done on Windows, unless you have set some special permissions).
Please note that we do not recommend running BAM on Windows in a production environment due to compatibility issues with Cassandra and Hadoop.
If installing cygwin64 doesn't resolve the issue, please attach the conf files of both BAM and APIM (/repository/conf).
Also, the integration instructions that you are following are for API Manager 1.5.0 (see AM150 in the path). This is the 1.8.0 version: https://docs.wso2.com/display/AM180/Publishing+API+Runtime+Statistics
Alternatively, you can use WSO2 API Cloud (SaaS version of API Manager) in which everything is preconfigured and works out of the box.
In this case the problem is the maximum Windows path length.
When BAM runs am_stats_script to generate statistics, it has to copy some libraries to a tmp folder. To do that it creates many nested folders, and at some point the path length limit is hit.
I found this while analyzing the log output and spotting a "No such file" exception.
The simple solution is to move to a Linux platform (as Jamsheer did). On Linux there is no such problem.

How to transfer and execute a .exe (py2exe) file on remote Windows machines in a network using Python

I have a set of Python files which take information from the Windows registry, and I have converted these files into a .exe using py2exe. Now I need to scan the network and detect the active endpoints in it. After detecting the endpoints/machines in the network, I need to send this .exe to the machines and make it execute. How can I do this in Python? I already have code for transferring the files; my problem is how to execute the .exe file on a remote Windows machine (I have the credentials of the machine) and get the results back to the server.
I need to accomplish this task using Python.
Any help or suggestions please.
Thanks in advance.

How can I 'touch' a remote file over FTP?

I have some software which periodically uploads data to a remote location. At the remote location (maintained by a customer or a supplier, so somewhat out of my control), there's a script running which detects changes to the file size or the last-modified timestamp of a file and, if it's changed, passes the updated information on to other systems.
If I want to trigger the remote server to pass that information on without deleting and then re-uploading the contents, is there a way I can just 'touch' the file to change its last-modified date?
I'm using the EnterpriseDT FTP Pro (.NET) module to do the uploading from a C++/CLI application on a Windows platform.
I just tried resuming a transfer and transferring 0 bytes, which appears to do exactly what I need (see the sketch below). I'll leave this Q/A up in case anyone else is interested in doing the same.
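Purely as an illustration of that resume-then-store-zero-bytes idea, here is a rough sketch using Apache Commons Net's FTPClient in Java rather than the EnterpriseDT .NET library the poster used: look up the current file size, set a restart offset equal to it, and store an empty stream so nothing is overwritten. Whether the server actually updates the modification timestamp for a zero-byte store is server-dependent, so treat this as an assumption to verify against your remote end.

import java.io.ByteArrayInputStream;
import java.io.IOException;

import org.apache.commons.net.ftp.FTP;
import org.apache.commons.net.ftp.FTPClient;
import org.apache.commons.net.ftp.FTPFile;

public class FtpTouch {

    // "Touch" a remote file by resuming an upload at its current size and sending 0 bytes.
    public static void touch(String host, String user, String pass, String remotePath)
            throws IOException {
        FTPClient ftp = new FTPClient();
        try {
            ftp.connect(host);
            if (!ftp.login(user, pass)) {
                throw new IOException("FTP login failed");
            }
            ftp.enterLocalPassiveMode();
            ftp.setFileType(FTP.BINARY_FILE_TYPE);

            // Find the current size of the remote file.
            FTPFile[] listing = ftp.listFiles(remotePath);
            if (listing.length == 0) {
                throw new IOException("Remote file not found: " + remotePath);
            }
            long size = listing[0].getSize();

            // Resume at the end of the file and transfer nothing.
            ftp.setRestartOffset(size);
            if (!ftp.storeFile(remotePath, new ByteArrayInputStream(new byte[0]))) {
                throw new IOException("Zero-byte store failed: " + ftp.getReplyString());
            }
        } finally {
            if (ftp.isConnected()) {
                ftp.logout();
                ftp.disconnect();
            }
        }
    }
}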

PC File server variable performance when importing from Access MDB

I've got some code that runs in Enterprise Guide (SAS Enterprise build, Windows locally, Unix server), which imports a large table via a local install of PC Files Server. It runs fine for me, but is slow to the point of uselessness for the system tester.
When I use his SAS identity on my Windows PC, the code works; but when I use my SAS identity on his machine it doesn't, so it appears to be a problem with the local machine. We have the same version of EG (same hot fixes installed), connecting to the same server (with the same roles), running the same code in the same project, connecting to the same Access database.
Even a suggestion of what to test next would be greatly appreciated!
libname ACCESS_DB pcfiles path="&db_path"
server=&_CLIENTMACHINE
port=9621;
data permanent.&output_table (keep=[lots of vars]);
format [lots of vars];
length [lots of vars];
set ACCESS_DB.&source_table (rename=([some awkward vars]));
if [var]=[value];
[build some new vars, nothing scary];
;
run;
Addenda: The PC Files Server is running on the same machine where the EG project is being run in both cases - we both have the same version installed. &db_path is the location of the Access database, on a network file store both users can access (in fact other, smaller tables can be retrieved by both users in a sensible amount of time). This server is administered by IT and is not a server we as the business can get software installed on.
The resolution of your problem will require more details and is best solved by a dialog with SAS Tech Support. The "online ticket" form is here, or you can call them by phone.
For example, is the PCFILES server running locally on both your machine and your tester's machine? If yes, is the file referenced by &db_path on a network file server, and does your tester have similar access (meaning both of you can reach it the same way)? Have you considered installing the PC Files Server on your file server rather than on your local PC? Too many questions, I think, for a forum like this. But I could be wrong (it's happened before); perhaps others will have a great answer.