How to resolve Pentaho Data Integration error - Kettle

I am using a Kettle transformation in PDI and wrote queries against MongoDB. With my old MongoDB version 3.4 the Kettle transformation worked well; after recently updating MongoDB to version 3.6, it throws this error:
- ERROR (version 8.0.0.0-28, build 8.0.0.0-28 from 2017-11-05 07.27.50 by buildguy) : org.pentaho.di.core.exception.KettleException:
2018/09/12 13:40:49 - MongoDB Input.0 - com.mongodb.MongoCommandException: Command failed with error 9: 'The 'cursor' option is required, except for aggregate with the explain argument' on server localhost:27017. The full response is { "ok" : 0.0, "errmsg" : "The 'cursor' option is required, except for aggregate with the explain argument", "code" : 9, "codeName" : "FailedToParse" }
2018/09/12 13:40:49 - MongoDB Input.0 - Command failed with error 9: 'The 'cursor' option is required, except for aggregate with the explain argument' on server localhost:27017. The full response is { "ok" : 0.0, "errmsg" : "The 'cursor' option is required, except for aggregate with the explain argument", "code" : 9, "codeName" : "FailedToParse" }
2018/09/12 13:40:49 - MongoDB Input.0 -
2018/09/12 13:40:49 - MongoDB Input.0 - at org.pentaho.di.trans.steps.mongodbinput.MongoDbInput.processRow(MongoDbInput.java:137)
2018/09/12 13:40:49 - MongoDB Input.0 - at org.pentaho.di.trans.step.RunThread.run(RunThread.java:62)
Can anyone please suggest a fix?
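For background: starting with MongoDB 3.6 the server rejects any aggregate command that lacks a cursor document (except with explain), which is exactly what the errmsg above says, and the driver shipped with this PDI release evidently still sends the old command shape. A minimal sketch of the difference using pymongo, with placeholder database/collection names (mydb and mycoll are assumptions, not names from the transformation):

import pymongo

client = pymongo.MongoClient()  # assumes mongod on localhost:27017
db = client.mydb                # placeholder database name

# Rejected by MongoDB 3.6+ with error 9, as in the log above:
#   db.command({'aggregate': 'mycoll', 'pipeline': []})

# Accepted: since 3.6 the cursor document is required, even if empty
result = db.command({'aggregate': 'mycoll', 'pipeline': [], 'cursor': {}})
print(result['cursor']['firstBatch'])

The usual remedy is to move to a PDI/MongoDB-plugin build whose driver sends the cursor option, or to keep the server at a version the bundled driver supports; the exact version to pick depends on your installation, so treat this as a direction rather than a definitive fix.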

Related

WSO2AM with WSO2DAS - null apiPublisher for API_DESTINATION_SUMMARY

Connecting wso2am-2.0.0 and wso2am-analytics-2.0.0 on a PGSQL (9.5) database (sharing a common WSO2AM_STATS_DB database), we receive the following exception:
TID: [-1] [] ERROR {org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter} - Error in executing task: Error while saving data to the table API_DESTINATION_SUMMARY : Job aborted due to stage failure: Task 0 in stage 54296.0 failed 1 times, most recent failure: Lost task 0.0 in stage 54296.0 (TID 50425, localhost): java.sql.BatchUpdateException: Batch entry 0 INSERT INTO API_DESTINATION_SUMMARY (api, version, apiPublisher, context, destination, total_request_count, hostName, year, month, day, time) VALUES ('test01', 'v1.0.0', NULL, '/test/v1.0.0', 'http://demo6009762.mockable.io', 1, 'wso2apimgr3', 2017, 1, 26, '2017-01-26 15:59') ON CONFLICT (api,version,apiPublisher,context,destination,hostName,year,month,day) DO UPDATE SET total_request_count=EXCLUDED.total_request_count, time=EXCLUDED.time was aborted: ERROR: null value in column "apipublisher" violates not-null constraint
full exception is here.
According to the logs, the direct cause is that the apipublisher field is null, which should not happen.
So now I have a few questions:
How do I prevent that? How do I configure the apipublisher value? And how do I get rid of the invalid data?
Thank you for any hints.
There is a reported issue for this. You can apply the fix mentioned in the JIRA ticket.
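For the last question, a quick way to check whether any invalid rows actually reached the summary table, assuming direct access to the shared WSO2AM_STATS_DB PostgreSQL database (table and column names are taken from the log above):

-- Look for summary rows written with a missing publisher
SELECT api, version, context, hostname, year, month, day
FROM API_DESTINATION_SUMMARY
WHERE apipublisher IS NULL;

Since the not-null constraint rejected the batch, such rows should not exist here; the invalid records most likely originate upstream in the analytics event data.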

Error posting API to API Manager (v1.10.0) from GREG (v5.1.0)

I have followed the instructions laid out in the documentation for GREG (5.1.0) to configure GREG to publish APIs to an external API Manager (1.10.0). I followed the configuration updates (modified the LifeCycle in the configuration.xml) and promoted a REST API. Unfortunately, when I promote the API to Production I don't receive any feedback in the GREG Publisher UI, the API is not imported into API Manager, and I receive the following errors in the GREG logs:
Note: I've scrubbed these logs for potentially sensitive information.
[2016-02-01 15:33:34,432] ERROR {org.wso2.carbon.governance.registry.extensions.executors.apistore.RestServiceToAPIExecutor} - overview_name : xxxxxxxxxxxx
[2016-02-01 15:33:34,451] ERROR {org.wso2.carbon.governance.registry.extensions.executors.apistore.RestServiceToAPIExecutor} - interface_transports : https
[2016-02-01 15:33:34,454] ERROR {org.wso2.carbon.governance.registry.extensions.executors.apistore.RestServiceToAPIExecutor} - uritemplate_httpVerb : get
[2016-02-01 15:33:34,454] ERROR {org.wso2.carbon.governance.registry.extensions.executors.apistore.RestServiceToAPIExecutor} - interface_swagger : /_system/governance/apimgt/applicationdata/api-docs/1.0.0/xxxxxxxxx
[2016-02-01 15:33:34,454] ERROR {org.wso2.carbon.governance.registry.extensions.executors.apistore.RestServiceToAPIExecutor} - contacts_entry : Technical Owner: xxxxxxxxx
[2016-02-01 15:33:34,455] ERROR {org.wso2.carbon.governance.registry.extensions.executors.apistore.RestServiceToAPIExecutor} - overview_endpointURL : http://xxxxxxxx/
[2016-02-01 15:33:34,456] ERROR {org.wso2.carbon.governance.registry.extensions.executors.apistore.RestServiceToAPIExecutor} - uritemplate_authType : None
[2016-02-01 15:33:34,457] ERROR {org.wso2.carbon.governance.registry.extensions.executors.apistore.RestServiceToAPIExecutor} - overview_description : xxxxxxxxx
[2016-02-01 15:33:34,457] ERROR {org.wso2.carbon.governance.registry.extensions.executors.apistore.RestServiceToAPIExecutor} - overview_context : /xxxxxxxx/
[2016-02-01 15:33:34,457] ERROR {org.wso2.carbon.governance.registry.extensions.executors.apistore.RestServiceToAPIExecutor} - overview_version : 0.0.4
[2016-02-01 15:33:34,460] ERROR {org.wso2.carbon.governance.registry.extensions.executors.apistore.RestServiceToAPIExecutor} - uritemplate_urlPattern : /xxxxxx/{id}.xml
[2016-02-01 15:33:34,460] ERROR {org.wso2.carbon.governance.registry.extensions.executors.apistore.RestServiceToAPIExecutor} - overview_provider : admin
[2016-02-01 15:33:34,460] ERROR {org.wso2.carbon.governance.registry.extensions.executors.apistore.RestServiceToAPIExecutor} - endpoints_entry : Prod:https://xxxxxxxxxxxxxxx/xxxxxx/
[2016-02-01 15:33:34,460] ERROR {org.wso2.carbon.governance.registry.extensions.executors.apistore.RestServiceToAPIExecutor} - security_authenticationType : None
I have created a gist with the logs for both API Manager and GREG.

Mongo aggregation doesn't work in pymongo

I am trying to make a query with aggregate in Django; it works fine on my local machine but not on the server.
On both machines, pymongo is installed in a Python virtual environment:
pip freeze | grep mongo
pymongo==2.5.2
I can also fetch the inserted data on both machines with the find() method:
conn.firmalar.searchlogger.find()
But the aggregate method works on my local machine and not on the server, even though everything installed is identical. I get this error when I attempt to run it on the server:
import pymongo
conn = pymongo.Connection()
search = conn.firmalar.searchlogger.aggregate([{"$group": {"_id": "$what", "count": {"$sum": 1}}}])
OperationFailure at /admin/weblog/
command SON([('aggregate', u'searchlogger'), ('pipeline', [{'$group': {'count': {'$sum': 1}, '_id': '$what'}}])]) failed: no such cmd: aggregate
/home/cem/env/firmalar/local/lib/python2.7/site-packages/pymongo/collection.pyc in aggregate(self, pipeline)
1059 self.secondary_acceptable_latency_ms),
1060 slave_okay=self.slave_okay,
-> 1061 _use_master=use_master)
1062
1063 # TODO key and condition ought to be optional, but deprecation
/home/cem/env/firmalar/local/lib/python2.7/site-packages/pymongo/helpers.pyc in _check_command_response(response, reset, msg, allowable_errors)
145 if code in (11000, 11001, 12582):
146 raise DuplicateKeyError(errmsg, code)
--> 147 raise OperationFailure(msg % errmsg, code)
148
149
It's not about the driver - it's about MongoDB itself. aggregate() was introduced in MongoDB 2.2: docs.
Most likely you are running an older version of MongoDB on the server. Check your MongoDB version and upgrade if needed. Also check that your Python code is connecting to a MongoDB instance of version >= 2.2.
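A quick way to verify the server's version from the same environment, using the pymongo 2.x Connection API already shown in the question:

import pymongo

conn = pymongo.Connection()            # same legacy 2.x API as above
# server_info() runs the buildInfo command on the connected mongod
print conn.server_info()['version']    # aggregate() requires >= 2.2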
Also see:
MongoDB Java driver : no such cmd: aggregate
Aggregate failed: no such cmd: aggregate

Pig REPLACE gives an error

Let's assume that my file is named 'data' and looks like this:
2343234 {23.8375,-2.339921102} {(343.34333,-2.0000022)} 5-23-2013-11-am
I need to convert the 2nd field into a pair of coordinate numbers. So I wrote the following code and called it basic.pig:
A = LOAD 'data' AS (f1:int, f2:chararray, f3:chararray, f4:chararray);
B = foreach A generate STRSPLIT(f2,',').$0 as f5, STRSPLIT(f2,',').$1 as f6;
C = foreach B generate REPLACE(f5,'{',' ') as f7, REPLACE(f6,'}',' ') as f8;
and then used (float) to convert the strings to floats. But the REPLACE command fails to work and I get the following error:
-bash-3.2$ pig -x local basic.pig
2013-06-24 16:38:45,030 [main] INFO org.apache.pig.Main - Apache Pig version 0.11.1 (r1459641) compiled Mar 22 2013, 02:13:53
2013-06-24 16:38:45,031 [main] INFO org.apache.pig.Main - Logging error messages to: /home/--/p/--test/pig_1372117125028.log
2013-06-24 16:38:45,321 [main] INFO org.apache.pig.impl.util.Utils - Default bootup file /home/isl/pmahboubi/.pigbootup not found
2013-06-24 16:38:45,425 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: file:///
2013-06-24 16:38:46,069 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1000: Error during parsing. Lexical error at line 7, column 0. Encountered: <EOF> after : ""
Details at logfile: /home/--/p/--test/pig_1372117125028.log
And these are the details from pig_1372117125028.log:
Pig Stack Trace
---------------
ERROR 1000: Error during parsing. Lexical error at line 7, column 0. Encountered: <EOF> after : ""
org.apache.pig.tools.pigscript.parser.TokenMgrError: Lexical error at line 7, column 0. Encountered: <EOF> after : ""
at org.apache.pig.tools.pigscript.parser.PigScriptParserTokenManager.getNextToken(PigScriptParserTokenManager.java:3266)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.jj_ntk(PigScriptParser.java:1134)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:104)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:194)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:170)
at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:84)
at org.apache.pig.Main.run(Main.java:604)
at org.apache.pig.Main.main(Main.java:157)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:197)
================================================================================
I've got data like this:
2724 1919 2012-11-18T23:57:56.000Z {(33.80981975),(-118.105289)}
2703 6401 2012-11-18T23:57:56.000Z {(55.83525609),(-4.07733138)}
1200 4015 2012-11-18T23:57:56.000Z {(41.49609152),(13.8411998)}
7104 9227 2012-11-18T23:57:56.000Z {(-24.95351118),(-53.46538723)}
and I can do this:
A = LOAD 'my_tsv_data' USING PigStorage('\t') AS (id1:int, id2:int, date:chararray, loc:chararray);
B = FOREACH A GENERATE REPLACE(loc,'\\{|\\}|\\(|\\)','');
C = LIMIT B 10;
DUMP C;
This error
ERROR 1000: Error during parsing. Lexical error at line 7, column 0. Encountered: <EOF> after : ""
came to me because I had used different types of quotation marks: I started with ' and ended with ´ or `, and it took quite a while to find what went wrong. So it had nothing to do with line 7 (my script was not that long, and I shortened the data to four lines, which naturally did not help), nothing to do with column 0, nothing to do with the EOF of the data, and hardly anything to do with " marks, which I didn't use. So it is quite a misleading error message.
I found the cause by using grunt, the Pig command shell.
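For reference, here is the original script with the quoting made consistent (straight single quotes only), the braces escaped, since REPLACE's second argument is a Java regex in which an unescaped { is an error, and the (float) casts applied. A sketch, untested against the original data:

A = LOAD 'data' AS (f1:int, f2:chararray, f3:chararray, f4:chararray);
B = FOREACH A GENERATE STRSPLIT(f2,',').$0 AS f5, STRSPLIT(f2,',').$1 AS f6;
-- REPLACE takes a regex, so literal braces must be escaped
C = FOREACH B GENERATE (float)REPLACE(f5,'\\{','') AS f7, (float)REPLACE(f6,'\\}','') AS f8;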

Sismo - PHP Fatal error: Class 'Sismo\GrowlNotifier' not found

Trying to set up Sismo, running php sismo.php results in:
PHP Fatal error: Class 'Sismo\GrowlNotifier' not found
My config is set up in the default ~/.sismo/config.php location, and the line is:
$notifier = new Sismo\GrowlNotifier('pa$$word');
The correct FQCN is Sismo\Notifier\GrowlNotifier.
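So the config line becomes (same placeholder password as above):

// ~/.sismo/config.php - reference the class by its full namespace
$notifier = new Sismo\Notifier\GrowlNotifier('pa$$word');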