MultiDelimiterSerDe setup in Amazon Hive

I'm trying to use a multi-character delimiter in a table insert for a Hive job on EMR on Amazon AWS, as explained in this link (the delimiter for the file is "|"):
https://cwiki.apache.org/confluence/display/Hive/MultiDelimitSerDe
However, I ended up having to use...
ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.MultiDelimitSerDe'
Instead of the documented...
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.MultiDelimitSerDe'
in order for it not to give me this error:
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Cannot validate serde: org.apache.hadoop.hive.serde2.MultiDelimitSerDe
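For reference, the usage documented on the wiki looks roughly like this (my sketch with a placeholder table and columns; the multi-character delimiter goes in the field.delim property):
CREATE TABLE test_multi (
  id STRING,
  val STRING
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.MultiDelimitSerDe'
WITH SERDEPROPERTIES ('field.delim'='"|"')
STORED AS TEXTFILE;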
OK. So when I don't get that error (by adding the .contrib), I get this error instead, caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.ClassNotFoundException: Class org.apache.hadoop.hive.contrib.serde2.MultiDelimitSerDe not found
Status: Failed
Vertex failed, vertexName=Map 1, vertexId=vertex_1548264520414_0027_1_00, diagnostics=[Task failed, taskId=task_1548264520414_0027_1_00_000021, diagnostics=[TaskAttempt 0 failed, info=[Error: Error while running task ( failure ) : attempt_1548264520414_0027_1_00_000021_0:java.lang.RuntimeException: java.lang.RuntimeException: Map operator initialization failed
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:211)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:168)
at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:370)
at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1840)
at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: Map operator initialization failed
at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:354)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:184)
... 14 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.ClassNotFoundException: Class org.apache.hadoop.hive.contrib.serde2.MultiDelimitSerDe not found
at org.apache.hadoop.hive.ql.exec.MapOperator.getConvertedOI(MapOperator.java:328)
at org.apache.hadoop.hive.ql.exec.MapOperator.setChildren(MapOperator.java:420)
at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:286)
... 15 more
So I've been reading that you have to add the .jar file.
https://community.hortonworks.com/questions/82189/hive-cannot-see-jar.html
And so I've tried all kinds of things to get this to work. It says that it is adding it to the class path.
hive> add jar /usr/lib/hive/lib/hive-contrib-2.3.3-amzn-1.jar;
Added [/usr/lib/hive/lib/hive-contrib-2.3.3-amzn-1.jar] to class path
Added resources: [/usr/lib/hive/lib/hive-contrib-2.3.3-amzn-1.jar]
hive> add jar /usr/lib/hive/lib/hive-contrib.jar;
Added [/usr/lib/hive/lib/hive-contrib.jar] to class path
Added resources: [/usr/lib/hive/lib/hive-contrib.jar]
hive> exit;
So I'm not sure what to do. It's acting as if the .jar file for hive-contrib isn't in the class path despite me adding it. I've also tried running...
export HADOOP_USER_CLASSPATH_FIRST=true
which is found here...
How to include jars in Hive (Amazon Hadoop env)
And that doesn't fix it either.
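For completeness, two other approaches I've seen suggested (both assumptions on my part, untested on this cluster) are pointing hive.aux.jars.path at the jar when launching the CLI, or dropping the jar into Hive's auxlib directory so it is on the classpath at startup:
# Untested suggestions, not verified on this EMR release:
# 1) Register the contrib jar as an aux jar for the session.
hive --hiveconf hive.aux.jars.path=file:///usr/lib/hive/lib/hive-contrib.jar
# 2) Copy it into the auxlib directory (picked up when the Hive services restart).
sudo cp /usr/lib/hive/lib/hive-contrib.jar /usr/lib/hive/auxlib/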
How can I use a multi-delimiter SerDe for a Hive job on AWS?
Thank you.

I could not get MultiDelimitSerDe to work. Instead, I was lucky in that the delimiter had quotation marks on either side of the pipe, so it looks like "|". This turns the values between the quotes into strings, so the additional pipes in those column values don't act as delimiters.
"Test | Test2 "|" Test3 | Test 4 | Test 5 "|" Test 6 "
You can see an explanation in the link below. The part that talks about it is in the comments, not the article.
https://www.ericlin.me/2015/07/how-to-create-a-hive-multi-character-delimitered-table/
If I didn't have those quotation marks around the delimiter, I'm not sure how I would have been able to work with a multi-character delimiter, especially if I had quotation marks in any of my fields; but after checking, out of the billions of rows, there is not a single quote.
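For anyone in a similar situation, one way to exploit that quoting (a sketch only, with placeholder table, columns, and S3 path; I did not benchmark this) is OpenCSVSerde, which ships with Hive and understands a separator plus a quote character:
-- Hypothetical example: | is the separator and " is the quote character,
-- so pipes inside quoted values are not treated as delimiters.
CREATE EXTERNAL TABLE my_table (
  col1 STRING,
  col2 STRING,
  col3 STRING
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
  'separatorChar' = '|',
  'quoteChar' = '"'
)
LOCATION 's3://my-bucket/my-path/';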

Related

How to parse Apache Catalina.log

I am trying to find a way to parse a Catalina.log and I am really struggling.
Here is a piece of the log:
May 12, 2017 2:14:38 PM org.apache.coyote.AbstractProtocol init
SEVERE: Failed to initialize end point associated with ProtocolHandler ["http-apr-10.1.31.104-443"]
java.lang.Exception: Connector attribute SSLCertificateFile must be defined when using SSL with APR
at org.apache.tomcat.util.net.AprEndpoint.bind(AprEndpoint.java:490)
at org.apache.tomcat.util.net.AbstractEndpoint.init(AbstractEndpoint.java:649)
at org.apache.coyote.AbstractProtocol.init(AbstractProtocol.java:434)
at org.apache.catalina.connector.Connector.initInternal(Connector.java:978)
at org.apache.catalina.util.LifecycleBase.init(LifecycleBase.java:102)
at org.apache.catalina.core.StandardService.initInternal(StandardService.java:559)
at org.apache.catalina.util.LifecycleBase.init(LifecycleBase.java:102)
at org.apache.catalina.core.StandardServer.initInternal(StandardServer.java:821)
at org.apache.catalina.util.LifecycleBase.init(LifecycleBase.java:102)
at org.apache.catalina.startup.Catalina.load(Catalina.java:638)
at org.apache.catalina.startup.Catalina.load(Catalina.java:663)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.apache.catalina.startup.Bootstrap.load(Bootstrap.java:253)
at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:427)
I want to get:
Date = May 12, 2017 2:14:38 PM
class = org.apache.coyote.AbstractProtocol init
Error level = SEVERE
Error Msg = Failed to initialize end point associated with ProtocolHandler ["http-apr-10.1.31.104-443"]
Error Msg Body = java.lang.Exception: Connector attribute SSLCertificateFile must be defined when using SSL with APR
at org.apache.tomcat.util.net.AprEndpoint.bind(AprEndpoint.java:490)....
I don't even know where to start :)
Any ideas are very welcome.
I have prepared for you the following regex:
((Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)\s+\d{1,2},\s+\d{4}\s+\d{1,2}:\d{1,2}:\d{1,2}\s(AM|PM))\s(.+)(\r)?\n(FATAL|SEVERE|ERROR|WARN(ING)?|INFO|CONFIG|DEBUG):\s(.+)(\r)?\n(.+)(\r)?\n(?=\s+at.+java:\d+\))
You can use the following back references to capture your groups:
DATE -> $1
CLASS -> $4
ERROR_LEVEL -> $6
ERROR_MSG -> $8
ERROR_BODY -> $10
The regex will only match strings that meet the following conditions:
starts by a date in the format specified in your post
after the date, the first line is composed of the class name
the 2nd line is composed of the error level and the error msg
the 3rd line is your error msg body
followed by a Java stack trace of n lines, each starting with \s+at and ending with java:\d+)
The regex works in the following way:
((Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)\s+\d{1,2},\s+\d{4}\s+\d{1,2}:\d{1,2}:\d{1,2}\s(AM|PM))
This part will fetch the date in the format of your post:
a 3-char month followed by space(s), then 1 or 2 digits, ',', then the year in 4 digits,
then space(s), then the time (colon-separated), followed by a space, then AM or PM
\s(.+)(\r)?\n
this part of the regex will allow you to get the rest of your first line corresponding to your class
(FATAL|SEVERE|ERROR|WARN(ING)?|INFO|CONFIG|DEBUG):\s(.+)(\r)?\n(.+)(\r)?\n
This part will allow you to get the error level (from this list), followed by a colon, and the following 2 lines corresponding to your error msg/body.
(?=\s+at.+java:\d+\))
This last part is a lookahead that enforces that your error is followed by a Java stack trace.
You might need to adapt some parts of the regex a bit (like the number of lines of the error body or error message) or the stack trace conditions, but I think this is a great starting point for your case.
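If it helps, here is a minimal Java sketch (my own, untested against your full log; the sample entry is the one from your post) wiring the regex above into java.util.regex:
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CatalinaLogParser {
    // Same regex as above; groups 1/4/6/8/10 hold date, class, level, msg, body.
    private static final Pattern ENTRY = Pattern.compile(
        "((Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)\\s+\\d{1,2},\\s+\\d{4}\\s+"
        + "\\d{1,2}:\\d{1,2}:\\d{1,2}\\s(AM|PM))\\s(.+)(\\r)?\\n"
        + "(FATAL|SEVERE|ERROR|WARN(ING)?|INFO|CONFIG|DEBUG):\\s(.+)(\\r)?\\n"
        + "(.+)(\\r)?\\n(?=\\s+at.+java:\\d+\\))");

    public static void main(String[] args) {
        // Sample entry from the question; the stack trace line is included
        // so the trailing lookahead succeeds.
        String log = "May 12, 2017 2:14:38 PM org.apache.coyote.AbstractProtocol init\n"
            + "SEVERE: Failed to initialize end point associated with ProtocolHandler [\"http-apr-10.1.31.104-443\"]\n"
            + "java.lang.Exception: Connector attribute SSLCertificateFile must be defined when using SSL with APR\n"
            + "\tat org.apache.tomcat.util.net.AprEndpoint.bind(AprEndpoint.java:490)\n";
        Matcher m = ENTRY.matcher(log);
        while (m.find()) {
            System.out.println("Date        = " + m.group(1));
            System.out.println("Class       = " + m.group(4));
            System.out.println("Error level = " + m.group(6));
            System.out.println("Error msg   = " + m.group(8));
            System.out.println("Error body  = " + m.group(10));
        }
    }
}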
CHEERS!!!

Start token not found error while using JsonSerDe

I am trying to import JSON data from S3 and, after making some queries, export the output in JSON format to S3 again. However, I get the "org.apache.hadoop.hive.serde2.SerDeException: java.io.IOException: Start token not found where expected" error at the Hive step on the EMR cluster. In order to understand what the problem is, I simplified the Hive script and the JSON data, but it keeps giving the same error. How can I solve this problem?
Cluster configuration:
Release: emr-5.3.1
Hive version: 2.1.1
Hadoop distribution: Amazon 2.7.3
Service Role: EMR_DefaultRole
MasterInstanceType: m4.large
The content of the simplified JSON data:
[{"MyID":"FOO123","MyField":"FOO"},{"MyID":"BAR123","MyField":"BAR"}]
Hive script:
DROP TABLE IF EXISTS SOURCE;
DROP TABLE IF EXISTS DESTINATION;
CREATE EXTERNAL TABLE SOURCE(MyID STRING, MyField STRING)
ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'
LOCATION 's3://myPath/subPath/';
CREATE EXTERNAL TABLE DESTINATION(MyID STRING, MyField STRING)
ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'
LOCATION 's3://anotherPath/subPath/';
INSERT OVERWRITE TABLE DESTINATION SELECT MyID, MyField FROM SOURCE;
And here is the stack trace:
Vertex failed, vertexName=Map 4, vertexId=vertex_1278452616863_0001_1_00, diagnostics=[Task failed, taskId=task_1278452616863, diagnostics=[TaskAttempt 0 failed, info=[Error: Error while running task ( failure ) : attempt_1278452616863:java.lang.RuntimeException: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing writable [{"MyID":"FOO123","MyField":"FOO"},{"MyID":"BAR123","MyField":"BAR"}]
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:211)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:168)
at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:370)
at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing writable [{"MyID":"FOO123","MyField":"FOO"},{"MyID":"BAR123","MyField":"BAR"}]
at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:95)
at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:70)
at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:383)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:185)
... 14 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing writable [{"MyID":"FOO123","MyField":"FOO"},{"MyID":"BAR123","MyField":"BAR"}]
at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:497)
at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:86)
... 17 more
Caused by: org.apache.hadoop.hive.serde2.SerDeException: java.io.IOException: Start token not found where expected
at org.apache.hive.hcatalog.data.JsonSerDe.deserialize(JsonSerDe.java:183)
at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.readRow(MapOperator.java:128)
at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.access$200(MapOperator.java:92)
at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:488)
... 18 more
Caused by: java.io.IOException: Start token not found where expected
at org.apache.hive.hcatalog.data.JsonSerDe.deserialize(JsonSerDe.java:169)
... 21 more
Thanks.
JSON should start with { and not with an array ([).
I tried this approach and updated my JSON file to the structure:
{"MyID":"FOO123","MyField":"FOO"},
{"MyID":"BAR123","MyField":"BAR"}
but after doing so, I noticed only the first object was being inserted into the table.
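For what it's worth, this JsonSerDe reads each input line as one record, so the layout that usually works (worth verifying against your data) also drops the trailing comma, one object per line:
{"MyID":"FOO123","MyField":"FOO"}
{"MyID":"BAR123","MyField":"BAR"}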

hive table query Error in configuring object

I'm loading a log file in S3 into Hive running on EMR but I'm getting all NULLs back when viewing the data...
I created the table like this:
create external table coglogs (
HostID string,
ProcessID string,
Time string,
TimeZoneOffset string,
SessionID string,
RequestID string,
SubRequestID string,
StepID string,
Thread string,
Component string,
BuildNumber string,
Level string,
Logger string,
Operation string,
ObjectType string,
ObjectPath string,
Status string,
Message string,
LogData string
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
"input.regex" = "([\d+]\S+[\d+])\t(\d+)\t([\d+]\S+[\d+] [\d+]\S+[\d+])\t(-[\d+])\t([a-zA-Z0-9_\S]*)\t([a-zA-Z0-9_\S]*)\t([a-zA-Z0-9_\S]*)\t([a-zA-Z0-9_\S]*)\t([a-zA-Z0-9_\S]*)\t([a-zA-Z_\S]*)\t([0-9]*)\t([0-9]*)\t([a-zA-Z_\S]*)\t([a-zA-Z_\S]*)\t([a-zA-Z_\S ]*)\t([a-zA-Z_\S ]*)\t([a-zA-Z_\S ]*)\t([a-zA-Z_\S ]*)\t([a-zA-Z_\S ]*)",
"output.format.string" = "%1$s %2$s %3$s %4$s %5$s %6$s %7$s %8$s %9$s %10$s %11$s %12$s %13$s %14$s %15$s %16$s %17$s %18$s %19$s"
)
location 's3n://infinilog/cogserver_hive.log';
and then load the data:
load data inpath 's3n://infinilog/cogserver.log' into table coglogs;
I checked the regex on rubular.com and it seems to be correct, so why am I not getting any data back in the Hive table?
here is an example line in the log file:
101.196.242.160:9300 11329 2016-01-27 18:35:14.132 -5 http-9300-297 caf 2047 2 Audit.dispatcher.caf Request Warning secure error - found userCapabilities
101.196.242.160:9300 11329 2016-01-27 18:35:14.195 -5 F3820773ADD59754DEAFAFDFA0D2C3F2CDDF715483446C99B16BFA8040D0DDA4 9ss8Csyswh28w8sGC4hhvvMy2hw9d48Cd42y92y8 http-9300-299 CM 6102 3 Audit.Other.cms.CM QUERY REPORT /Public Folders/1A Reporting/PT/Reports/Summary Dashboard Success
EDIT:
So there is data in the table (I see a lot of rows with all column values as "None" though), but when I run something like this:
select count(*)
from coglogs
where status = 'Failure'
or any query, I get this back:
Diagnostic Messages for this Task:
Error: java.lang.RuntimeException: Error in configuring object
at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:112)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:78)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:450)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:344)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:172)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:166)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
... 9 more
Caused by: java.lang.RuntimeException: Error in configuring object
at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:112)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:78)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
at org.apache.hadoop.mapred.MapRunner.configure(MapRunner.java:42)
... 14 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
... 17 more
Caused by: java.lang.RuntimeException: Map operator initialization failed
at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.configure(ExecMapper.java:153)
... 22 more
Caused by: java.lang.NoSuchFieldError: stringTypeInfo
at org.apache.hadoop.hive.contrib.serde2.RegexSerDe.initialize(RegexSerDe.java:115)
at org.apache.hadoop.hive.serde2.SerDeUtils.initializeSerDe(SerDeUtils.java:521)
at org.apache.hadoop.hive.ql.plan.PartitionDesc.getDeserializer(PartitionDesc.java:138)
at org.apache.hadoop.hive.ql.exec.MapOperator.getConvertedOI(MapOperator.java:297)
at org.apache.hadoop.hive.ql.exec.MapOperator.setChildren(MapOperator.java:333)
at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.configure(ExecMapper.java:122)
... 22 more
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1 Reduce: 1 HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
In Hive you have to use "\\" instead of "\" in the regex: the value of input.regex is parsed as a Java string first, so every backslash must be doubled.
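As an illustration of the doubling (a two-column sketch with a placeholder location, not the full 19-column pattern; note it uses the built-in org.apache.hadoop.hive.serde2.RegexSerDe rather than the contrib class, since the NoSuchFieldError above comes from the contrib serde, and the built-in one takes no output.format.string):
-- Hypothetical example: \S, \t and \d are written as \\S, \\t, \\d.
CREATE EXTERNAL TABLE coglogs_demo (
  HostID STRING,
  ProcessID STRING
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
  "input.regex" = "(\\S+)\\t(\\d+).*"
)
LOCATION 's3n://infinilog/';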

symmetricds SYSADMIN send-script error

The command for sending a SQL script to a node or node group works fine, but the issue is with parsing the file itself.
Here is the log from the target node:
2014-08-27 16:51:12,130 ERROR [station-001] [DataLoaderService] [station-001-pull-1] Failed to load batch 000-31 because: In file: inline evaluation of: ``DROP TABLE ofep.PRODUCT_RESTRICTIONS;'' Encountered "ofep" at line 1, column 12.
java.lang.RuntimeException: In file: inline evaluation of: ``DROP TABLE ofep.PRODUCT_RESTRICTIONS;'' Encountered "ofep" at line 1, column 12.
at org.jumpmind.symmetric.io.data.writer.DatabaseWriter.script(DatabaseWriter.java:919)
at org.jumpmind.symmetric.io.data.writer.DatabaseWriter.write(DatabaseWriter.java:196)
at org.jumpmind.symmetric.io.data.writer.DatabaseWriter.write(DatabaseWriter.java:167)
at org.jumpmind.symmetric.io.data.writer.NestedDataWriter.write(NestedDataWriter.java:64)
at org.jumpmind.symmetric.model.ProcessInfoDataWriter.write(ProcessInfoDataWriter.java:65)
at org.jumpmind.symmetric.io.data.writer.NestedDataWriter.write(NestedDataWriter.java:64)
at org.jumpmind.symmetric.io.data.writer.TransformWriter.write(TransformWriter.java:217)
at org.jumpmind.symmetric.io.data.DataProcessor.forEachDataInTable(DataProcessor.java:194)
at org.jumpmind.symmetric.io.data.DataProcessor.forEachTableInBatch(DataProcessor.java:164)
at org.jumpmind.symmetric.io.data.DataProcessor.process(DataProcessor.java:114)
at org.jumpmind.symmetric.service.impl.DataLoaderService$LoadIntoDatabaseOnArrivalListener.end(DataLoaderService.java:779)
at org.jumpmind.symmetric.io.data.writer.StagingDataWriter.notifyEndBatch(StagingDataWriter.java:75)
at org.jumpmind.symmetric.io.data.writer.AbstractProtocolDataWriter.end(AbstractProtocolDataWriter.java:220)
at org.jumpmind.symmetric.io.data.DataProcessor.process(DataProcessor.java:124)
at org.jumpmind.symmetric.service.impl.DataLoaderService.loadDataFromTransport(DataLoaderService.java:407)
at org.jumpmind.symmetric.service.impl.DataLoaderService.loadDataFromPull(DataLoaderService.java:265)
at org.jumpmind.symmetric.service.impl.PullService.execute(PullService.java:129)
at org.jumpmind.symmetric.service.impl.NodeCommunicationService$2.run(NodeCommunicationService.java:307)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: In file: inline evaluation of: ``DROP TABLE ofep.PRODUCT_RESTRICTIONS;'' Encountered "ofep" at line 1, column 12.
at bsh.Parser.generateParseException(Parser.java:6068)
at bsh.Parser.jj_consume_token(Parser.java:5939)
at bsh.Parser.BlockStatement(Parser.java:2780)
at bsh.Parser.Line(Parser.java:147)
at bsh.Interpreter.Line(Interpreter.java:1000)
at bsh.Interpreter.eval(Interpreter.java:635)
at bsh.Interpreter.eval(Interpreter.java:739)
at bsh.Interpreter.eval(Interpreter.java:728)
at org.jumpmind.symmetric.io.data.writer.DatabaseWriter.script(DatabaseWriter.java:916)
... 20 more
2014-08-27 16:51:12,470 ERROR [station-001] [DataLoaderService] [station-001-pull-1] Failed while parsing batch
java.lang.RuntimeException: In file: inline evaluation of: ``DROP TABLE ofep.PRODUCT_RESTRICTIONS;'' Encountered "ofep" at line 1, column 12.
at org.jumpmind.symmetric.io.data.writer.DatabaseWriter.script(DatabaseWriter.java:919)
at org.jumpmind.symmetric.io.data.writer.DatabaseWriter.write(DatabaseWriter.java:196)
at org.jumpmind.symmetric.io.data.writer.DatabaseWriter.write(DatabaseWriter.java:167)
at org.jumpmind.symmetric.io.data.writer.NestedDataWriter.write(NestedDataWriter.java:64)
at org.jumpmind.symmetric.model.ProcessInfoDataWriter.write(ProcessInfoDataWriter.java:65)
at org.jumpmind.symmetric.io.data.writer.NestedDataWriter.write(NestedDataWriter.java:64)
at org.jumpmind.symmetric.io.data.writer.TransformWriter.write(TransformWriter.java:217)
at org.jumpmind.symmetric.io.data.DataProcessor.forEachDataInTable(DataProcessor.java:194)
at org.jumpmind.symmetric.io.data.DataProcessor.forEachTableInBatch(DataProcessor.java:164)
at org.jumpmind.symmetric.io.data.DataProcessor.process(DataProcessor.java:114)
at org.jumpmind.symmetric.service.impl.DataLoaderService$LoadIntoDatabaseOnArrivalListener.end(DataLoaderService.java:779)
at org.jumpmind.symmetric.io.data.writer.StagingDataWriter.notifyEndBatch(StagingDataWriter.java:75)
at org.jumpmind.symmetric.io.data.writer.AbstractProtocolDataWriter.end(AbstractProtocolDataWriter.java:220)
at org.jumpmind.symmetric.io.data.DataProcessor.process(DataProcessor.java:124)
at org.jumpmind.symmetric.service.impl.DataLoaderService.loadDataFromTransport(DataLoaderService.java:407)
at org.jumpmind.symmetric.service.impl.DataLoaderService.loadDataFromPull(DataLoaderService.java:265)
at org.jumpmind.symmetric.service.impl.PullService.execute(PullService.java:129)
at org.jumpmind.symmetric.service.impl.NodeCommunicationService$2.run(NodeCommunicationService.java:307)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: In file: inline evaluation of: ``DROP TABLE ofep.PRODUCT_RESTRICTIONS;'' Encountered "ofep" at line 1, column 12.
at bsh.Parser.generateParseException(Parser.java:6068)
at bsh.Parser.jj_consume_token(Parser.java:5939)
at bsh.Parser.BlockStatement(Parser.java:2780)
at bsh.Parser.Line(Parser.java:147)
at bsh.Interpreter.Line(Interpreter.java:1000)
at bsh.Interpreter.eval(Interpreter.java:635)
at bsh.Interpreter.eval(Interpreter.java:739)
at bsh.Interpreter.eval(Interpreter.java:728)
at org.jumpmind.symmetric.io.data.writer.DatabaseWriter.script(DatabaseWriter.java:916)
... 20 more
The script contains only one statement “DROP TABLE ofep.PRODUCT_RESTRICTIONS;”
Could you please help me?
Thanks,
Ayman
Symadmin has three different send subcommands...
send-sql Send SQL statement to node
send-schema Send schema change to node
send-script Send script to node
You used send-script, which is for sending BSH (BeanShell) scripts; that's why bsh.Parser in your stack trace chokes on the SQL.
What you want to use is send-sql.

Trigger Syntax on HsqlDB : expected ;

I use HSQLDB for my unit tests. My production environment uses an Oracle 11g DB.
When I run my startup script, shown below:
<jdbc:embedded-database id="dataSource" type="HSQL">
</jdbc:embedded-database>
<jdbc:initialize-database data-source="dataSource" ignore-failures="DROPS">
<jdbc:script location="classpath:/sql/init-cct-schema.sql" separator=";" />
<jdbc:script location="classpath:/sql/init-cct-insert.sql" separator=";" />
</jdbc:initialize-database>
My trigger follows quite closely the trigger example in the HSQLDB docs.
I saw this post:
But its solution doesn't work for me, or I don't understand it.
I have always this error:
java.lang.IllegalStateException: Failed to load ApplicationContext
at org.springframework.test.context.TestContext.getApplicationContext(TestContext.java:157)
at org.springframework.test.context.support.DependencyInjectionTestExecutionListener.injectDependencies(DependencyInjectionTestExecutionListener.java:109)
at
...
... 38 more
Caused by: org.springframework.jdbc.datasource.init.ScriptStatementFailedException: Failed to execute SQL script statement at line 9 of resource class path resource [sql/init-cct-schema.sql]: CREATE TRIGGER TI_TYPE_MVT BEFORE INSERT ON TYPE_MVT REFERENCING NEW AS newrow FOR EACH ROW BEGIN ATOMIC IF newrow.TYPE_MVT_PK is null THEN SET newrow.TYPE_MVT_PK = SQ_TYPE_MVT.nextval
at org.springframework.jdbc.datasource.init.ResourceDatabasePopulator.executeSqlScript(ResourceDatabasePopulator.java:199)
at org.springframework.jdbc.datasource.init.ResourceDatabasePopulator.populate(ResourceDatabasePopulator.java:132)
at org.springframework.jdbc.datasource.init.CompositeDatabasePopulator.populate(CompositeDatabasePopulator.java:55)
at org.springframework.jdbc.datasource.init.DatabasePopulatorUtils.execute(DatabasePopulatorUtils.java:45)
... 41 more
Caused by: java.sql.SQLSyntaxErrorException: unexpected end of statement: required: ;
at org.hsqldb.jdbc.Util.sqlException(Unknown Source)
at org.hsqldb.jdbc.Util.sqlException(Unknown Source)
at org.hsqldb.jdbc.JDBCStatement.fetchResult(Unknown Source)
at org.hsqldb.jdbc.JDBCStatement.execute(Unknown Source)
at org.springframework.jdbc.datasource.init.ResourceDatabasePopulator.executeSqlScript(ResourceDatabasePopulator.java:184)
... 44 more
Caused by: org.hsqldb.HsqlException: unexpected end of statement: required: ;
at org.hsqldb.error.Error.parseError(Unknown Source)
Here is my trigger:
SET DATABASE SQL SYNTAX ORA TRUE;
CREATE TRIGGER TI_TYPE_MVT BEFORE INSERT ON TYPE_MVT
REFERENCING NEW AS newrow FOR EACH ROW
BEGIN ATOMIC
IF newrow.TYPE_MVT_PK is null THEN
SET newrow.TYPE_MVT_PK = SQ_TYPE_MVT.nextval;
END IF;
END;
I tried without the final ';', and it continues to fail.
Here is my dependency on HSQLDB:
<dependency>
<groupId>org.hsqldb</groupId>
<artifactId>hsqldb</artifactId>
<version>2.2.8</version>
</dependency>
Any ideas?
The solution in the linked post (HSQL Create Procedure Syntax doesn't seem to match the documentation) is in this line of configuration:
<jdbc:script location="file:Artifacts/Hsql Version Scripts/install/install.sql" separator="/;"/>
By default, the separator used by the Spring script runner is the semicolon. This means that when the first semicolon inside the trigger definition is reached, the incomplete definition is sent to HSQLDB (which produces the error). The configuration line above changes the separator to the two characters "/;". With that in place, you need to end each statement in your script with this separator, and leave the semicolons inside the trigger definition body as they are.
Terminate each of the SQL statements (insert, create, select etc) within the script
(some_script.sql) with /;
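Applied to the trigger from the question, the script would look something like this (a sketch under that separator assumption; the semicolons inside the body stay, and /; closes each statement):
SET DATABASE SQL SYNTAX ORA TRUE
/;
CREATE TRIGGER TI_TYPE_MVT BEFORE INSERT ON TYPE_MVT
REFERENCING NEW AS newrow FOR EACH ROW
BEGIN ATOMIC
IF newrow.TYPE_MVT_PK IS NULL THEN
SET newrow.TYPE_MVT_PK = SQ_TYPE_MVT.nextval;
END IF;
END
/;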
Whilst configuring in Spring - add the following:
return new EmbeddedDatabaseBuilder()
.setType(EmbeddedDatabaseType.HSQL) // setType takes the EmbeddedDatabaseType enum, not a String
.setName("DBNAME")
.addScript("some_script.sql")
.setSeparator("/;")
.build();