[FATAL] Data Transformation Engine initialization failed - Informatica

I get this error when I try to start a transformation:
[FATAL] Data Transformation Engine initialization failed:
<Status>
<severity>5</severity>
<Event error_code="309118" severity="Failure">
<description>Attempt to write [109144] to event file failed. stopping execution.-for more information see file:///opt/app/Informatica/powercenter_961_HF3/DataTransformation/CMReports/Init/Events.cme</description>
<log_file>opt/app/Informatica/powercenter_961_HF3/DataTransformation/CMReports/Init/Events.cme</log_file> <time>N/A</time>
</Event>
</Status>

Found out that it was because /opt/app/informatica had taken up all of the disk space. Freed some space and the transformation worked again.
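For anyone hitting the same failure, a quick way to confirm that it really is disk exhaustion is to check the usable space on the volume that holds the Data Transformation install before restarting the session. A minimal Java sketch (the path is simply the one from the error above; adjust it for your install):

import java.io.File;

public class CheckDtDiskSpace {
    public static void main(String[] args) {
        // Path taken from the error message above; adjust for your install.
        File dtDir = new File("/opt/app/Informatica/powercenter_961_HF3/DataTransformation");
        // Usable space on the volume holding the Data Transformation files.
        double freeGb = dtDir.getUsableSpace() / 1e9;
        System.out.printf("Usable space under %s: %.1f GB%n", dtDir, freeGb);
    }
}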

Related

Can't mine the genesis block of a PoS + masternode coin

I am studying blockchain and I am trying to mine the genesis block of a cryptocurrency source.
The source I have is a PoS + masternode codebase. Of course there is PoW in it to mine the first blocks.
So I generated the genesis hash and merkle root. The daemon boots up and everything works. But the moment I use the "setgenerate true" or "getblocktemplate" commands nothing happens. The genesis block can't be mined.
The "getblocktemplate" returns "Out of memory (code -7)"
Debug.log shows:
2019-01-21 16:23:42 ERROR: CheckTransaction() : txout.nValue negative
2019-01-21 16:23:42 ERROR: CheckBlock() : CheckTransaction failed
2019-01-21 16:23:42 CreateNewBlock() : TestBlockValidity failed
2019-01-21 16:23:42 CreateNewBlock: Failed to detect masternode to pay
2019-01-21 16:23:42 CreateNewBlock(): total size 1000
I disabled the masternode enforcement sporks.
Has anyone experienced something like this, or can anyone help me with it?
The genesis block doesn't actually require mining. You can create it as whatever you want as long as it follows the serialisation of your protocol. Genesis blocks tend to follow slightly different rules to normal blocks and so often do not pass validation under normal circumstances.
Here is how we handle the genesis block in our code-base. It has slightly different rules to how we handle other blocks.
All a block needs is a previous block to point backwards to, so as long as you have some previous hash, new blocks should be able to be formed on top of your genesis block.
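To make that concrete, here is a minimal sketch of how validation code typically special-cases a hard-coded genesis block. The type and method names here are hypothetical, not taken from any particular coin's code-base:

// Hypothetical sketch: all names are illustrative, not from a real codebase.
boolean checkBlock(Block block, ChainParams params, BlockIndex index) {
    // The genesis block is accepted by identity: it is hard-coded in the
    // chain parameters and skips the normal PoW / transaction checks.
    if (block.hash().equals(params.genesisHash())) {
        return true;
    }
    // Every later block only needs a known previous block to point back to,
    // plus the usual validation rules.
    if (!index.contains(block.prevHash())) {
        return false;
    }
    return checkProofOfWork(block, params) && checkTransactions(block);
}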
I suggest you try the Bitshares or Steem code and see how the mining goes. You can use the TEST mode in either one to start creating / mining blocks from the genesis block.

Streaming MutationGroups into Spanner

I'm trying to stream MutationGroups into Spanner with SpannerIO.
The goal is to write new MutationGroups every 10 seconds, as we will use Spanner to query near-real-time KPIs.
When I don't use any windows, I get the following error:
Exception in thread "main" java.lang.IllegalStateException: GroupByKey cannot be applied to non-bounded PCollection in the GlobalWindow without a trigger. Use a Window.into or Window.triggering transform prior to GroupByKey.
at org.apache.beam.sdk.transforms.GroupByKey.applicableTo(GroupByKey.java:173)
at org.apache.beam.sdk.transforms.GroupByKey.expand(GroupByKey.java:204)
at org.apache.beam.sdk.transforms.GroupByKey.expand(GroupByKey.java:120)
at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:537)
at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:472)
at org.apache.beam.sdk.values.PCollection.apply(PCollection.java:286)
at org.apache.beam.sdk.transforms.Combine$PerKey.expand(Combine.java:1585)
at org.apache.beam.sdk.transforms.Combine$PerKey.expand(Combine.java:1470)
at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:537)
at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:491)
at org.apache.beam.sdk.values.PCollection.apply(PCollection.java:299)
at org.apache.beam.sdk.io.gcp.spanner.SpannerIO$WriteGrouped.expand(SpannerIO.java:868)
at org.apache.beam.sdk.io.gcp.spanner.SpannerIO$WriteGrouped.expand(SpannerIO.java:823)
at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:537)
at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:472)
at org.apache.beam.sdk.values.PCollection.apply(PCollection.java:286)
at quantum.base.transform.entity.spanner.SpannerProtoWrite.expand(SpannerProtoWrite.java:52)
at quantum.base.transform.entity.spanner.SpannerProtoWrite.expand(SpannerProtoWrite.java:20)
at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:537)
at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:491)
at org.apache.beam.sdk.values.PCollection.apply(PCollection.java:299)
at quantum.entitybuilder.pipeline.EntityBuilderPipeline$Write$SpannerWrite.expand(EntityBuilderPipeline.java:388)
at quantum.entitybuilder.pipeline.EntityBuilderPipeline$Write$SpannerWrite.expand(EntityBuilderPipeline.java:372)
at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:537)
at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:491)
at org.apache.beam.sdk.values.PCollection.apply(PCollection.java:299)
at quantum.entitybuilder.pipeline.EntityBuilderPipeline.main(EntityBuilderPipeline.java:122)
:entityBuilder FAILED
Because of the error above I assume the input collection needs to be windowed and triggered, as SpannerIO uses a GroupByKey (this is also what I need for my use case):
...
.apply("1-minute windows", Window.<MutationGroup>into(FixedWindows.of(Duration.standardMinutes(1)))
.triggering(Repeatedly.forever(AfterProcessingTime
.pastFirstElementInPane()
.plusDelayOf(Duration.standardSeconds(10))
).orFinally(AfterWatermark.pastEndOfWindow()))
.discardingFiredPanes()
.withAllowedLateness(Duration.ZERO))
.apply(SpannerIO.write()
.withProjectId(entityConfig.getSpannerProject())
.withInstanceId(entityConfig.getSpannerInstance())
.withDatabaseId(entityConfig.getSpannerDb())
.grouped());
When I do this, I get the following exceptions during runtime:
java.lang.IllegalArgumentException: Attempted to get side input window for GlobalWindow from non-global WindowFn
org.apache.beam.sdk.transforms.windowing.PartitioningWindowFn$1.getSideInputWindow(PartitioningWindowFn.java:49)
com.google.cloud.dataflow.worker.StreamingModeExecutionContext$StepContext.issueSideInputFetch(StreamingModeExecutionContext.java:631)
com.google.cloud.dataflow.worker.StreamingModeExecutionContext$UserStepContext.issueSideInputFetch(StreamingModeExecutionContext.java:683)
com.google.cloud.dataflow.worker.StreamingSideInputFetcher.storeIfBlocked(StreamingSideInputFetcher.java:182)
com.google.cloud.dataflow.worker.StreamingSideInputDoFnRunner.processElement(StreamingSideInputDoFnRunner.java:71)
com.google.cloud.dataflow.worker.SimpleParDoFn.processElement(SimpleParDoFn.java:323)
com.google.cloud.dataflow.worker.util.common.worker.ParDoOperation.process(ParDoOperation.java:43)
com.google.cloud.dataflow.worker.util.common.worker.OutputReceiver.process(OutputReceiver.java:48)
com.google.cloud.dataflow.worker.SimpleParDoFn$1.output(SimpleParDoFn.java:271)
org.apache.beam.runners.core.SimpleDoFnRunner.outputWindowedValue(SimpleDoFnRunner.java:219)
org.apache.beam.runners.core.SimpleDoFnRunner.access$700(SimpleDoFnRunner.java:69)
org.apache.beam.runners.core.SimpleDoFnRunner$DoFnProcessContext.output(SimpleDoFnRunner.java:517)
org.apache.beam.runners.core.SimpleDoFnRunner$DoFnProcessContext.output(SimpleDoFnRunner.java:505)
org.apache.beam.sdk.values.ValueWithRecordId$StripIdsDoFn.processElement(ValueWithRecordId.java:145)
After investigating further it appears to be due to the .apply(Wait.on(input)) in SpannerIO: It has a global side input which does not seem to work with my fixed windows, as the docs of Wait.java state:
If signal is globally windowed, main input must also be. This typically would be useful only in a batch pipeline, because the global window of an infinite PCollection never closes, so the wait signal will never be ready.
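For context, Wait.on gates its main input on another collection that acts as a signal. A minimal sketch of the pattern (the collection names below are illustrative placeholders, not SpannerIO's actual internals):

import org.apache.beam.sdk.io.gcp.spanner.MutationGroup;
import org.apache.beam.sdk.transforms.Wait;
import org.apache.beam.sdk.values.PCollection;

// Illustrative fragment: 'mutations' and 'schemaSignal' are placeholders.
// Elements of 'mutations' are only emitted downstream once the corresponding
// window of 'schemaSignal' is done. If 'schemaSignal' sits in the GlobalWindow
// of an unbounded pipeline, that window never closes, so the main input is
// held back indefinitely.
PCollection<MutationGroup> gated =
    mutations.apply("WaitForSignal", Wait.on(schemaSignal));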
As a temporary workaround I tried the following two things.
First, I added a GlobalWindow with triggers instead of fixed windows:
.apply("globalwindow", Window.<MutationGroup>into(new GlobalWindows())
.triggering(Repeatedly.forever(AfterProcessingTime
.pastFirstElementInPane()
.plusDelayOf(Duration.standardSeconds(10))
).orFinally(AfterWatermark.pastEndOfWindow()))
.discardingFiredPanes()
.withAllowedLateness(Duration.ZERO))
This results in writes to Spanner only when I drain my pipeline. I have the impression the Wait.on() signal is only triggered when the global window closes, and doesn't work with triggers.
Second, I disabled the .apply(Wait.on(input)) in SpannerIO:
This results in the pipeline getting stuck on the view creation, which is described in this SO post: SpannerIO Dataflow 2.3.0 stuck in CreateDataflowView.
When I check the worker logs for clues, I do get the following warnings:
logger: "org.apache.beam.sdk.coders.SerializableCoder"
message: "Can't verify serialized elements of type SpannerSchema have well defined equals method. This may produce incorrect results on some PipelineRunner
logger: "org.apache.beam.sdk.coders.SerializableCoder"
message: "Can't verify serialized elements of type BoundedSource have well defined equals method. This may produce incorrect results on some PipelineRunner"
Note that everything works with the DirectRunner and that I'm trying to use the DataflowRunner.
Does anyone have any other suggestions for things I can try to get this running? I can hardly imagine that I'm the only one trying to stream MutationGroups into Spanner.
Thanks in advance!
Currently, the SpannerIO connector is not supported with Beam streaming. Please follow this pull request, which adds streaming support for the SpannerIO connector.
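For reference, the same write does work in a bounded (batch) pipeline, because the global window of a bounded collection does close. A minimal sketch reusing the options from the question (boundedMutationGroups is a placeholder for a bounded PCollection<MutationGroup>):

// Batch sketch: with a bounded input, the GroupByKey and Wait.on inside
// SpannerIO's WriteGrouped operate in the global window, which does close
// for bounded collections, so the write completes normally.
boundedMutationGroups.apply("WriteToSpanner",
    SpannerIO.write()
        .withProjectId(entityConfig.getSpannerProject())
        .withInstanceId(entityConfig.getSpannerInstance())
        .withDatabaseId(entityConfig.getSpannerDb())
        .grouped());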

Getting Hadoop Pipes to work on OS X with file access

My apologies if this question is trivial, but I haven't been able to solve it properly despite spending a few hours on Google…
I have compiled and installed Hadoop 2.5.2 from source on OS X 10.8. I think all went well with this, even though getting the native libraries compiled was a bit of a pain.
To verify my installation, I tried the examples that ship with Hadoop, like this:
$ hadoop jar /Users/per/source/hadoop-2.5.2-src/hadoop-dist/target/hadoop-2.5.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.2.jar pi 2 5
which gives me back
Job Finished in 1.774 seconds
Estimated value of Pi is 3.60000000000000000000
So that seems to indicate that Hadoop is at least working, I think.
Since my end goal is to get this working with file I/O from a C++ program using Pipes, I also tried the following as an intermediate step:
$ hdfs dfs -put etc/hadoop input
$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.2.jar grep input output 'dfs[a-z.]+'
Which also seems to work (as in producing the correct output).
Then, finally, I tried the wordcount.cpp example, and this is not working, although I fail to understand how to fix it. The error message is quite clear:
2015-10-21 13:38:33,539 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user per
2015-10-21 13:38:33,678 INFO org.apache.hadoop.security.JniBasedUnixGroupsMapping: Error getting groups for per: getgrouplist: error looking up group. 5 (Input/output error)
So obviously there is something I don't get with the file permissions, but what? My hdfs-site.xml file looks like this, where I have tried to turn off permissions checks altogether.
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.web.ugi</name>
    <value>per</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>
But I am confused, since I at least seem to be able to use grep on files in HDFS (per the built-in example above).
Any feedback would be greatly appreciated. I am running all of this as myself, so I haven't created a separate user just for Hadoop, since I am only running it locally on my laptop for now.
Edit: Update with some additional output from my terminal window, following the discussion below:
<snip>
15/10/21 14:47:41 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
15/10/21 14:47:41 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
15/10/21 14:47:41 INFO mapred.LocalJobRunner: map task executor complete.
15/10/21 14:47:41 WARN mapred.LocalJobRunner: job_local676909674_0001
java.lang.Exception: java.lang.NullPointerException
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
Caused by: java.lang.NullPointerException
at org.apache.hadoop.mapred.pipes.Application.<init>(Application.java:104)
at org.apache.hadoop.mapred.pipes.PipesMapRunner.run(PipesMapRunner.java:69)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
<snip>
I can add more if that is relevant.

Why does this SOAP-ENV:Server message occur on the second test case?

I am developing a BPEL module which interacts with a service on localhost.
When I run the first test case, I receive the correct output. However, when I create a second test case, it fails and outputs this error message:
<SOAP-ENV:Fault>
<faultcode>SOAP-ENV:Server</faultcode>
<faultstring>BPCOR-6135: A fault was not handled in the process scope; Fault Name is {http://docs.oasis-open.org/wsbpel/2.0/process/executable}selectionFailure; Fault Data is &lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt;&lt;jbi:message xmlns:sxeh=&quot;http://www.sun.com/wsbpel/2.0/process/executable/SUNExtension/ErrorHandling&quot; type=&quot;sxeh:faultMessage&quot; version=&quot;1.0&quot; xmlns:jbi=&quot;http://java.sun.com/xml/ns/jbi/wsdl-11-wrapper&quot;&gt;&lt;jbi:part&gt;BPCOR-6174: Selection Failure occurred in BPEL({http://enterprise.netbeans.org/bpel/BpelModuleHope2/fucking_bpel}fucking_bpel) at line 49&lt;/jbi:part&gt;&lt;/jbi:message&gt;. Sending errors for the pending requests in the process scope before terminating the process instance</faultstring>
<faultactor>sun-bpel-engine</faultactor>
<detail>
<detailText>BPCOR-6135: A fault was not handled in the process scope; Fault Name is {http://docs.oasis-open.org/wsbpel/2.0/process/executable}selectionFailure; Fault Data is &lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt;&lt;jbi:message xmlns:sxeh=&quot;http://www.sun.com/wsbpel/2.0/process/executable/SUNExtension/ErrorHandling&quot; type=&quot;sxeh:faultMessage&quot; version=&quot;1.0&quot; xmlns:jbi=&quot;http://java.sun.com/xml/ns/jbi/wsdl-11-wrapper&quot;&gt;&lt;jbi:part&gt;BPCOR-6174: Selection Failure occurred in BPEL({http://enterprise.netbeans.org/bpel/BpelModuleHope2/fucking_bpel}fucking_bpel) at line 49&lt;/jbi:part&gt;&lt;/jbi:message&gt;. Sending errors for the pending requests in the process scope before terminating the process instance
Caused by: BPCOR-6174: Selection Failure occurred in BPEL({http://enterprise.netbeans.org/bpel/BpelModuleHope2/fucking_bpel}fucking_bpel) at line 49
BPCOR-6129: Line Number is 47
BPCOR-6130: Activity Name is Assign2</detailText>
</detail>
</SOAP-ENV:Fault>
However, if I restart the Tomcat server and re-run a test, it works fine. But the second test case fails.
Do you think it is a problem with the Java implementation of the service?
Thank you
To your last question: yes, it is probably something in the Java implementation. There is no way for us to help you unless you provide your request and some example code that would give us an opportunity to reproduce or discern the issue.

QuickBooks QBXML API Error: Failed to get an Interface ptr: Source: .\src\TimeActFilter.cpp

My clients stumbled upon an issue when attempting to make a straightforward TimeTracking query using two tools: OpenSync and QODBC. Both present the same error in the qbsdklog.txt files. Have any QBXML developers experienced this or devised a remedy? Intuit general support has told them the fault lies with the third-party developers; however, I'm not sure how they came to that conclusion.
QODBC:
20130913.095701 I 2588 QBSDKProcessRequest Application named 'FLEXquarters QODBC' starting requests (process 6920).
20130913.095701 I 2588 SpecVersion Current version of qbXML in use: 10.0
20130913.095701 I 2588 QBSDKMsgSetHandler QUERY: Time Tracking
20130913.095701 I 2588 TimeTrackingStorage::DoQuery Setting iterator chunk size to 00001000
20130913.100310 E 2588 TimeTrackingStorage::DoQuery Failed to get an Interface ptr: Source: .\src\TimeActFilter.cpp line #86 HRESULT=0x80004005
20130913.100310 I 2588 QBSDKMsgSetHandler Request 1 failed.
20130913.100310 I 2588 MsgSetHandler Finished.
OpenSync:
20130913.142336 I 2588 QBSDKProcessRequest Application named 'OpenSync' starting requests (process 700).
20130913.142336 I 2588 SpecVersion Current version of qbXML in use: 10.0
20130913.142336 I 2588 QBSDKMsgSetHandler QUERY: Time Tracking
20130913.142336 I 2588 TimeTrackingStorage::DoQuery Setting iterator chunk size to 2147483647
20130913.142337 E 2588 TimeTrackingStorage::DoQuery Failed to get an Interface ptr: Source: .\src\TimeActFilter.cpp line #86 HRESULT=0x80004005
20130913.142337 I 2588 QBSDKMsgSetHandler Request failed.
20130913.142337 I 2588 MsgSetHandler Finished.
Any information is appreciated.
Update: Here is the XML request from OpenSync:
<Trace>
  <OUTGOING>
    <?xml version="1.0" encoding="ISO-8859-1"?>
    <QBXML>
      <QBXMLMsgsRq onError = "continueOnError">
        <TimeTrackingQueryRq>
          <ModifiedDateRangeFilter>
            <FromModifiedDate>2013-09-12T22:22:59</FromModifiedDate>
            <ToModifiedDate>2013-09-12T22:50:17</ToModifiedDate>
          </ModifiedDateRangeFilter>
        </TimeTrackingQueryRq>
      </QBXMLMsgsRq>
    </QBXML>
  </OUTGOING>
  <RETURNS>
    <QBXML>
      <QBXMLMsgsRs>
        <TimeTrackingQueryRs statusCode="1" statusSeverity="Info" statusMessage="A query request did not find a matching object in QuickBooks" />
      </QBXMLMsgsRs>
    </QBXML>
  </RETURNS>
</Trace>